WorldWideScience

Sample records for modeling techniques results

  1. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Gao, X; Sorooshian, S

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporate bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
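
    For illustration only (this sketch is not part of the DMIP record), two of the combination schemes named above can be written in a few lines: the Simple Multi-model Average with equal weights, and a Weighted Average Method sketched as a least-squares fit against observations. The streamflow arrays and observations are made-up placeholders.

    ```python
    # Minimal sketch of two multi-model combination schemes (illustrative only).
    # `sims` holds hypothetical streamflow predictions from three models;
    # `obs` is the matching observed series used to fit the WAM weights.
    import numpy as np

    sims = np.array([
        [1.0, 2.0, 3.5, 2.2],   # model A
        [1.2, 1.8, 3.0, 2.5],   # model B
        [0.9, 2.4, 3.8, 2.0],   # model C
    ])
    obs = np.array([1.1, 2.1, 3.4, 2.3])

    # Simple Multi-model Average (SMA): equal weights, no bias correction.
    sma = sims.mean(axis=0)

    # Weighted Average Method (WAM), sketched as a least-squares fit: weights
    # minimise squared error against observations over a training period.
    weights, *_ = np.linalg.lstsq(sims.T, obs, rcond=None)
    wam = weights @ sims

    print("SMA:", sma)
    print("WAM:", wam, "weights:", weights)
    ```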

  2. Multi-Model Combination Techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N; Duan, Q; Gao, X; Sorooshian, S

    2006-05-08

    This paper examines several multi-model combination techniques: the Simple Multimodel Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporate bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.

  3. Numerical modelling of radon-222 entry into houses: An outline of techniques and results

    DEFF Research Database (Denmark)

    Andersen, C.E.

    2001-01-01

    Numerical modelling is a powerful tool for studies of soil gas and radon-222 entry into houses. It is the purpose of this paper to review some main techniques and results. In the past, modelling has focused on Darcy flow of soil gas (driven by indoor–outdoor pressure differences) and combined diffusive and advective transport of radon. Models of different complexity have been used. The simpler ones are finite-difference models with one or two spatial dimensions. The more complex models allow for full three-dimensional and time dependency. Advanced features include: soil heterogeneity, anisotropy, fractures, moisture, non-uniform soil temperature, non-Darcy flow of gas, and flow caused by changes in the atmospheric pressure. Numerical models can be used to estimate the importance of specific factors for radon entry. Models are also helpful when results obtained in special laboratory or test structure...
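
    For orientation only (an illustrative form, not quoted from the paper), the model class described above couples Darcy flow of soil gas with advective-diffusive radon transport; the symbols below are generic placeholders.

    ```latex
    % Generic radon transport in soil air (illustrative steady-state form):
    %   c         : radon activity concentration in soil air
    %   D_e       : effective diffusion coefficient
    %   \mathbf{q} : Darcy flux of soil gas, \varepsilon : air-filled porosity
    %   \lambda   : radon-222 decay constant, G : generation rate from radium
    \nabla \cdot \left( D_e \nabla c \right)
      - \nabla \cdot \left( \frac{\mathbf{q}}{\varepsilon}\, c \right)
      - \lambda c + G = 0,
    \qquad
    \mathbf{q} = -\frac{k}{\mu} \nabla p \quad \text{(Darcy flow)}
    ```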

  4. Fusing Observations and Model Results for Creation of Enhanced Ozone Spatial Fields: Comparison of Three Techniques

    Science.gov (United States)

    This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati...
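
    A minimal sketch of the simplest of the three approaches (ours, not the paper's code): the model/observation bias at monitor sites is treated as error free, interpolated over the grid, and subtracted from the model surface. Station coordinates, ozone values and the model field are invented, and SciPy's RBFInterpolator stands in for whichever spatial interpolator the study used.

    ```python
    # Illustrative fusion of observations and model output via bias interpolation.
    # Stations, ozone values and the model grid are hypothetical.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    stations = np.array([[0.2, 0.3], [0.7, 0.8], [0.5, 0.1]])   # x, y
    obs = np.array([48.0, 55.0, 60.0])                          # observed ozone (ppb)
    model_at_stations = np.array([52.0, 50.0, 57.0])            # model values at stations

    bias = model_at_stations - obs                              # model minus observation

    # Spread the station biases over a regular grid and correct the model field.
    gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
    grid_pts = np.column_stack([gx.ravel(), gy.ravel()])
    bias_field = RBFInterpolator(stations, bias)(grid_pts).reshape(gx.shape)

    model_field = np.full(gx.shape, 53.0)                       # hypothetical model surface
    fused_field = model_field - bias_field                      # bias treated as error free
    print(fused_field.mean())
    ```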

  5. A surgical rat model of sleeve gastrectomy with staple technique: long-term weight loss results.

    Science.gov (United States)

    Patrikakos, Panagiotis; Toutouzas, Konstantinos G; Perrea, Despoina; Menenakos, Evangelos; Pantopoulou, Alkistis; Thomopoulos, Theodore; Papadopoulos, Stefanos; Bramis, John I

    2009-11-01

    Sleeve gastrectomy (SG) is one of the surgical procedures applied for treating morbid obesity, consisting of removing the gastric fundus and transforming the stomach into a narrow gastric tube. The aim of this experimental study is to create a functional model of SG and to present the long-term weight loss results. Twenty adult Wistar rats were fed with a high fat diet for 12 weeks before being divided randomly into two groups of ten rats each. One group underwent SG performed with the use of staples, and the other group underwent a sham operation (control group). The animals' weight was evaluated weekly for 15 weeks after the operation. All animals survived throughout the experiment. After the operation both groups started to lose weight with maximum weight loss on the seventh postoperative day (POD) for the sham-operated group and on the 15th POD for the SG group. Thereafter, both groups started to regain weight but with different rates. By the fourth postoperative week (POW), the average weight of the sham group did not differ statistically significantly compared to the preoperative weight, while after the eighth POW, rats' average weight was statistically significantly increased compared to the preoperative value. On the other hand, average weight of the SG group was lower postoperatively until the end of the study compared to the preoperative average weight. We have created a surgical rat model of experimental SG, enabling the further study of biochemical and hormonal parameters.

  6. The Bullet Cluster revisited: New results from new constraints and improved strong lensing modeling technique

    CERN Document Server

    Paraficz, D; Richard, J; Morandi, A; Limousin, M; Jullo, E

    2012-01-01

    We present a new detailed parametric strong lensing mass reconstruction of the Bullet Cluster (1E 0657-56) at z=0.296, based on new WFC3 and ACS HST imaging and VLT/FORS2 spectroscopy. The strong lensing constraints have undergone deep revision: there are 14 (6 new and 8 previously known) multiply imaged systems, of which 3 have spectroscopically confirmed redshifts (including 2 newly measured). The reconstructed mass distribution includes explicitly for the first time the combination of 3 mass components: i) the intra-cluster gas mass derived from X-ray observation, ii) the cluster galaxies modeled by their Fundamental Plane (elliptical) and Tully-Fisher (spiral) scaling relations and iii) dark matter. The best model has an average rms value of 0.158" between the predicted and measured image positions for the 14 multiple images considered. The derived mass model confirms the spatial offset between the X-ray gas and dark matter peaks. The galaxy halo to total mass fraction is found to be f_s=11+/-5% for a total m...

  7. Robotic pyeloplasty: technique and results.

    Science.gov (United States)

    Peschel, Reinhard; Neururer, Richard; Bartsch, Georg; Gettman, Matthew T

    2004-11-01

    The da Vinci robotic system can be used to perform dismembered and nondismembered pyeloplasty techniques effectively. Robotics not only seems to improve dexterity and surgical precision but also provides an ergonomic surgical environment for a surgeon performing complex reconstructive procedures such as pyeloplasty. Although performance-enhancing features of the da Vinci robot seem to decrease the difficulty of intracorporeal suturing, a learning curve also exists for telerobotic procedures. This learning curve may decrease as experience with telerobotics increases and as advances in technology are introduced. Presently, the interaction between the primary and assistant surgeon seems crucial to the success of the procedure. Although the early clinical experience with robotic pyeloplasty is favorable, continuing clinical evaluation and careful follow-up are required to determine if the procedure is as efficacious in the long run as open pyeloplasty and laparoscopic pyeloplasty.

  8. Communication Analysis modelling techniques

    CERN Document Server

    España, Sergio; Pastor, Óscar; Ruiz, Marcela

    2012-01-01

    This report describes and illustrates several modelling techniques proposed by Communication Analysis, namely the Communicative Event Diagram, Message Structures and Event Specification Templates. The Communicative Event Diagram is a business process modelling technique that adopts a communicational perspective by focusing on communicative interactions when describing the organizational work practice, instead of focusing on physical activities; at this abstraction level, we refer to business activities as communicative events. Message Structures is a technique based on structured text that allows specifying the messages associated with communicative events. Event Specification Templates are a means to organise the requirements concerning a communicative event. This report can be useful to analysts and business process modellers in general, since, according to our industrial experience, it is possible to apply many Communication Analysis concepts, guidelines and criteria to other business process modelling notation...

  9. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network. Switching a linear, series network ... are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one. It allows switching a series element that contains a shunt branch, as for example a transmission line. Both methods require that harmonic measurements performed at two ends of the disconnected element are precisely synchronized.

  10. Data flow modeling techniques

    Science.gov (United States)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
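
    As a toy illustration (ours, not from the paper) of the data flow idea described above, a node fires as soon as all of its input tokens are available, so parallelism follows from data availability rather than from a control sequence. The graph, node functions and token values are invented.

    ```python
    # Toy data flow simulation: each node fires when all its inputs carry tokens.
    # Graph, node functions and input values are hypothetical.
    graph = {                      # node -> (function, list of input edges)
        "add": (lambda a, b: a + b, ["x", "y"]),
        "mul": (lambda a, b: a * b, ["add", "z"]),
    }
    tokens = {"x": 2, "y": 3, "z": 4}          # initial tokens on source edges

    fired = True
    while fired:
        fired = False
        for node, (fn, inputs) in graph.items():
            if node not in tokens and all(i in tokens for i in inputs):
                tokens[node] = fn(*(tokens[i] for i in inputs))   # node fires
                fired = True

    print(tokens["mul"])   # (2 + 3) * 4 = 20
    ```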

  11. Prediction of Heavy Metal Removal by Different Liner Materials from Landfill Leachate: Modeling of Experimental Results Using Artificial Intelligence Technique

    Science.gov (United States)

    Turan, Nurdan Gamze; Gümüşel, Emine Beril; Ozgonenel, Okan

    2013-01-01

    An intensive study has been made to see the performance of the different liner materials with bentonite on the removal efficiency of Cu(II) and Zn(II) from industrial leachate. An artificial neural network (ANN) was used to display the significant levels of the analyzed liner materials on the removal efficiency. The statistical analysis proves that the effect of natural zeolite was significant by a cubic spline model with a 99.93% removal efficiency. Optimization of liner materials was achieved by minimizing bentonite mixtures, which were costly, and maximizing Cu(II) and Zn(II) removal efficiency. The removal efficiencies were calculated as 45.07% and 48.19% for Cu(II) and Zn(II), respectively, when only bentonite was used as liner material. However, 60% of natural zeolite with 40% of bentonite combination was found to be the best for Cu(II) removal (95%), and 80% of vermiculite and pumice with 20% of bentonite combination was found to be the best for Zn(II) removal (61.24% and 65.09%). Similarly, 60% of natural zeolite with 40% of bentonite combination was found to be the best for Zn(II) removal (89.19%), and 80% of vermiculite and pumice with 20% of bentonite combination was found to be the best for Zn(II) removal (82.76% and 74.89%). PMID:23844384
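
    Purely as a hedged sketch of the kind of ANN regression described (not the authors' network or data), a small multilayer perceptron can map liner-mixture fractions to a removal efficiency; the feature layout, training values and scikit-learn settings below are assumptions.

    ```python
    # Illustrative ANN regression for removal efficiency vs. liner mixture.
    # The feature columns (zeolite, vermiculite/pumice, bentonite fractions) and
    # the target efficiencies below are invented for this sketch.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    X = np.array([[0.0, 0.0, 1.0],
                  [0.6, 0.0, 0.4],
                  [0.0, 0.8, 0.2],
                  [0.3, 0.3, 0.4]])
    y = np.array([45.1, 95.0, 63.0, 78.0])        # Cu(II) removal efficiency (%)

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X, y)
    print(model.predict([[0.5, 0.1, 0.4]]))       # predicted efficiency for a new mix
    ```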

  12. Investigation of condenser deficiencies utilizing state-of-the-art test instrumentation and modeling techniques and results from post-modification testing

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.J.; Hardy, C.D. [Heat Exchanger Systems, Inc., Boston, MA (United States)

    1996-05-01

    Higher than design condenser pressure is a major contributor to poor station heat rates and the need to shed load during the summer months. State-of-the-art test instrumentation, when properly utilized, can usually reveal the cause of condenser malperformance. Recent developments in condenser modeling utilizing computational fluid dynamics (CFD) techniques allow the utility to test various condenser modifications in order to ascertain which modification will result in the greatest decrease in condenser pressure. This paper describes the condenser test instrumentation recently installed in a large mid-western fossil station to diagnose the cause of high condenser pressure. The paper also presents analyzed data results identifying both the cause and magnitude of the performance deficiency. Results of CFD modeling are presented demonstrating which corrective modifications appear the most promising.

  13. Laminotomy in adults: technique and results.

    Science.gov (United States)

    Ruggeri, Andrea; Pichierri, Angelo; Marotta, Nicola; Tarantino, Roberto; Delfini, Roberto

    2012-02-01

    The objective of this study was to describe step by step our surgical technique of laminotomy and analyze our series with regard to spinal deformities (risk and predisposing factors), postoperative pain and rate of postoperative contusions. Data regarding patients who underwent our technique of laminotomy (N = 40, mean follow-up: 52 months) between 2002 and 2006 were retrospectively evaluated. The technique used is illustrated in depth. Chronic pain was present in 30% with a mean score of 3/10 cm (Graphic Rating Scale). Postoperative kyphoses occurred in three patients, all below 35 years of age and with laminotomies which involved C2 and/or C7. None of these deformities required further surgical treatment because they were self-limiting or asymptomatic at a mean follow-up of 52 months. Based on the results, our technique proved to be safe and effective in terms of late deformities, blood loss, early and chronic postoperative pain and protection from postoperative accidents over the surgical site.

  14. Model building techniques for analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald; Cordova, Theresa Elena; Henry, Ronald C.; Brooks, Sean; Martin, Wilbur D.

    2009-09-01

    The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM currently is a time-consuming effort; the turnaround time for results of a design needs to be decreased to have an impact on the overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that come down to the method by which features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of the creation of the ASM from the DSM.

  15. Using super-resolution technique to elucidate the effects of imaging resolution on transport properties resulting from pore-scale modelling simulations

    Science.gov (United States)

    Karsanina, Marina; Gerke, Kirill; Khirevich, Siarhei; Sizonenko, Timofey; Korost, Dmitry

    2017-04-01

    Permeability is one of the fundamental properties of porous media and is required for large-scale Darcian fluid flow and mass transport models. Whilst permeability can be directly measured at a range of scales, there are increasing opportunities to evaluate permeability from pore-scale simulations. It is well known that the single phase flow properties of digital rocks will depend on the resolution of the 3D pore image. Such studies are usually performed by coarsening X-ray microtomography scans. Recently we have proposed a novel approach to fuse multi-scale porous media images using stochastic reconstruction techniques based on directional correlation functions. Here we apply this slightly modified approach to create 3D pore images of different spatial resolution, i.e. a stochastic super-resolution method. Contrary to coarsening techniques, this approach preserves porosity values and allows the incorporation of fine-scale data coming from such imaging techniques as SEM or FIB-SEM. We compute the absolute permeability of the same porous media samples under different resolutions using lattice-Boltzmann and finite difference methods to model Stokes flow, in order to elucidate the effects of image resolution on the resulting permeability values and to compare the stochastic super-resolution technique against the conventional coarsening image processing technique. References: 1) Karsanina, M.V., Gerke, K.M., Skvortsova, E.B. and Mallants, D. (2015) Universal spatial correlation functions for describing and reconstructing soil microstructure. PLoS ONE 10(5), e0126515. 2) Gerke, K. M., & Karsanina, M. V. (2015). Improving stochastic reconstructions by weighting correlation functions in an objective function. EPL (Europhysics Letters), 111(5), 56002. 3) Gerke, K. M., Karsanina, M. V., Vasilyev, R. V., & Mallants, D. (2014). Improving pattern reconstruction using directional correlation functions. EPL (Europhysics Letters), 106(6), 66002. 4) Gerke, K.M., Karsanina, M. V, Mallants, D., 2015. Universal
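
    As a side note (our sketch, not the authors' code), the directional two-point correlation functions cited above can be illustrated for a binary pore image in a few lines; the image here is random noise purely to keep the snippet self-contained.

    ```python
    # Sketch: directional two-point probability S2(r) of a binary (pore = 1) image.
    # S2(r) is the probability that two points separated by lag r along one axis
    # both fall in the pore phase; the test image is random and purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    img = (rng.random((128, 128)) < 0.35).astype(float)   # porosity ~ 0.35

    def s2_along_x(image, max_r):
        """P(both points in pore phase) for lags r = 0..max_r-1 along x."""
        out = []
        for r in range(max_r):
            a = image[:, : image.shape[1] - r]
            b = image[:, r:]
            out.append(float(np.mean(a * b)))
        return out

    s2 = s2_along_x(img, 20)
    print(s2[0], s2[5])    # s2[0] equals the porosity
    ```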

  16. Technique and results of cartilage shield tympanoplasty

    Directory of Open Access Journals (Sweden)

    Sohil I Vadiya

    2014-01-01

    Aim: Use of cartilage for repair of the tympanic membrane is recommended by many otologists. The current study aims at evaluating the results of cartilage shield tympanoplasty in terms of graft take-up and hearing outcomes. Material and Methods: In the current study, cartilage shield tympanoplasty (CST) was used in ears with high-risk perforations of the tympanic membrane. A total of 40 ears were selected, where type I CST was done in 30 ears and type III CST was done in 10 ears. Results: An average air-bone gap (ABG) of 37.08 dB was present preoperatively and an average ABG of 19.15 dB was observed at 6 months after the surgery, with a mean hearing gain of 17.28 dB. A graft take-up rate of 97.5% was observed. The technique is modified to make it easier and to minimize chances of lateralization of the graft. Conclusion: The hearing results of this technique are comparable to other methods of tympanic membrane repair.

  17. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  18. Saturation of superstorms and finite compressibility of the magnetosphere: Results of the magnetogram inversion technique and global PPMLR-MHD model

    Science.gov (United States)

    Mishin, V. V.; Mishin, V. M.; Karavaev, Yu.; Han, J. P.; Wang, C.

    2016-07-01

    We report on novel features of the saturation process of the polar cap magnetic flux and Poynting flux into the magnetosphere from the solar wind during three superstorms. In addition to the well-known effect of the interplanetary electric (Esw) and southward magnetic (interplanetary magnetic field (IMF) Bz) fields, we found that the saturation depends also on the solar wind ram pressure Pd. By means of the magnetogram inversion technique and a global MHD numerical model Piecewise Parabolic Method with a Lagrangian Remap, we explore the dependence of the magnetopause standoff distance on ram pressure and the southward IMF. Unlike earlier studies, in the considered superstorms both Pd and Bz achieve extreme values. As a result, we show that the compression rate of the dayside magnetosphere decreases with increasing Pd and the southward Bz, approaching very small values for extreme Pd ≥ 15 nPa and Bz ≤ -40 nT. This dependence suggests that finite compressibility of the magnetosphere controls saturation of superstorms.

  19. Amygdalohippocampotomy: surgical technique and clinical results.

    Science.gov (United States)

    Gonçalves-Ferreira, Antonio; Campos, Alexandre Rainha; Herculano-Carvalho, Manuel; Pimentel, Jose; Bentes, Carla; Peralta, Ana Rita; Morgado, Carlos

    2013-05-01

    The removal of mesial temporal structures, namely amygdalohippocampectomy, is the most efficient surgical procedure for the treatment of epilepsy. However, disconnection of the epileptogenic zones, as in temporal lobotomy or, for different purposes, hemispherotomy, have shown equivalent results with less morbidity. Thus, authors of the present study began performing selective amygdalohippocampotomy in cases of refractory mesial temporal lobe epilepsy (TLE) to treat mesial temporal lobe sclerosis (MTLS). The authors conducted a retrospective analysis of all cases of amygdalohippocampotomy collected in a database between November 2007 and March 2011. Since 2007, 21 patients (14 males and 7 females), ages 20-58 years (mean 41 years), all with TLE due to MTLS, were treated with selective ablation of the lateral amygdala plus perihippocampal disconnection (anterior one-half to two-thirds in dominant hemisphere), the left side in 11 cases and the right in 10. In 20 patients the follow-up was 2 or more years (range 24-44 months, average 32 months). Clinical outcome for epilepsy 2 years after surgery (20 patients) was good/very good in 19 patients (95%) with an Engel Class I (15 patients [75%]) or II outcome (4 patients [20%]) and bad in 1 patient (5%) with an Engel Class IV outcome (extratemporal focus and later reoperation). Surgical morbidity included hemiparesis (capsular hypertensive hemorrhage 24 hours after surgery, 1 patient), verbal memory worsening (2 patients), and quadrantanopia (permanent in 2 patients, transient in 1). Late psychiatric depression developed in 3 cases. Operative time was reduced by about 30 minutes (15%) on average with this technique. Amygdalohippocampotomy is as effective as amygdalohippocampectomy to treat MTLS and is a potentially safer, time-saving procedure.

  20. Laparoscopic antireflux surgery--technique and results.

    Science.gov (United States)

    Fingerhut, A; Etienne, J C; Millat, B; Comandella, M G

    1997-09-01

    Although gastroesophageal reflux disease (GERD) can be effectively treated by proton-pump inhibitors, surgery is still the only means of definitive cure of the disease. After the introduction of laparoscopic surgery, there has been a clear trend toward surgical repair of the incompetent cardia. The indications for surgical treatment are: endoscopically proven esophagitis, persistent or recurrent complaints under medical treatment, esophageal stricture and/or pH-metrically proven acid reflux as well as reflux-induced coughing (chronic aspiration). Although the laparoscopic antireflux operation is a technically demanding procedure, it can be performed with similar results as compared to conventional surgery. The operative technique is reported in detail. From January 1992 to March 1997, 146 consecutive patients with GERD were operated on laparoscopically. The overall conversion rate was 8.2% (n = 12). 133 patients were operated on according to the Nissen procedure including hiatoplasty. The Toupet operation was performed in only one case. 84 men and 42 women had a mean age of 49 years (20-76). The median duration of symptoms was 48 months (1-600). All but five patients had medical treatment for at least 2 years. Pneumatic balloon dilatation of an esophageal stricture was necessary preoperatively in two patients. The median operation time was 210 minutes (70-660). Conversion to open surgery because of intraoperative complications was necessary in 6 patients. Postoperative complications occurred in 14 patients, all of them being successfully treated conservatively. No patient died. 121 patients (90.3%) had follow-up examinations for at least 6 months. Retreatment was necessary in 5 cases: 1x slipped Nissen (laparoscopic repair), 1x intrathoracic hernia (conventional reoperation), 2x dysphagia > 4 months postoperatively (endoscopic balloon dilatation) and 1x recurrent ulcer (conventional operation). With a correct indication, laparoscopic Nissen repair for GERD is a suitable

  1. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages was attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state-of-the-art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  3. Transjugular liver biopsy: indications, technique and results.

    Science.gov (United States)

    Dohan, A; Guerrache, Y; Boudiaf, M; Gavini, J-P; Kaci, R; Soyer, P

    2014-01-01

    Transjugular liver biopsy is a safe, effective and well-tolerated technique to obtain liver tissue specimens in patients with diffuse liver disease associated with severe coagulopathies or massive ascites. Transjugular liver biopsy is almost always feasible. The use of ultrasonographic guidance for percutaneous puncture of the right internal jugular vein is recommended to decrease the incidence of local cervical minor complications. Semiautomated biopsy devices are very effective in obtaining optimal tissue samples for a precise and definite histological diagnosis with a very low rate of complication. The relative limitations of transjugular liver biopsy are the cost, the radiation dose given to the patient, the increased procedure time by comparison with the more common percutaneous liver biopsy, and the need of a well-trained interventional radiologist.

  4. The retreatment: Indications, technique and results

    Energy Technology Data Exchange (ETDEWEB)

    Islak, Civan, E-mail: cislak@istanbul.edu.tr [Istanbul University, Cerrahpasa Medical Faculty, Department of Radiology, Division of Neuroradiology, Kocamustafapasa, Istanbul 34098 (Turkey)

    2013-10-01

    The durability of endovascular treatment of intracranial aneurysms has always been an issue and a very strong point of criticism. Although studies on long-term results have made it clear that endovascular treatment is safe and effective, they nonetheless showed that retreatment after endovascular treatment is nearly 5–10 times more frequent than after surgical clipping. Risk factors predisposing to a high probability of retreatment are aneurysms of a dissecting nature, incomplete coiling, a sac size larger than 10 mm and localization at bifurcations such as the basilar tip. The indications for retreatment after endovascular treatment are not yet clear, although certain morphologic criteria can be used. Retreatment appears not to negate the initial advantage of endovascular treatment over surgical treatment and can be performed with very low morbidity and mortality.

  5. Mandibular distraction in neonates: indications, technique, results

    Directory of Open Access Journals (Sweden)

    Sesenna Enrico

    2012-02-01

    Background: The Pierre Robin Sequence features were first described by Robin in 1923 and include micrognathia, glossoptosis and respiratory distress, with an incidence estimated as 1:8,500 to 1:20,000 newborns. Upper airway obstruction and feeding difficulties are the main concerns related to the pathology. Mandibular distraction should be considered a treatment option when other treatments prove inadequate. Patients and methods: Ten patients between the ages of 1 month and 2 years with severe micrognathia and airway obstruction were treated with Mandibular Distraction Osteogenesis (MDO). All patients underwent fibroscopic examination of the upper airway and radiographic imaging and/or computed tomography scans to detect malformations and to confirm that the obstruction was caused by posterior tongue displacement. All patients were evaluated by a multidisciplinary team. Indications for surgery included frequent apneic episodes with severe desaturation (70%). Gavage therapy was employed in all patients since oral feeding was not possible. The two tracheotomy patients were 5 months and 2 years old respectively, and the distraction procedure was performed to remove the tracheotomy tube. All patients were treated with bilateral mandibular distraction: two cases with an external multivector distraction device, six cases with an internal non-resorbable device and two cases with an internal resorbable device. In one case, the patient with Goldenhar's Syndrome, the procedure was repeated. Results: The resolution of symptoms was obtained in all patients, and, when present, tracheotomy was removed without complications. Of the two patients with pre-existing tracheotomies, in the younger patient (5 months old) the tracheotomy was removed 7 days postoperatively. In the Goldenhar's syndrome case (2 years old) a Montgomery device was necessary for 6 months due to the presence of tracheotomy-induced tracheomalacia. Patients were discharged when the

  6. Recurrent coarctation: interventional techniques and results.

    Science.gov (United States)

    Saxena, Anita

    2015-04-01

    Coarctation of the aorta (CoA) accounts for 5% to 8% of all congenital heart defects. With all forms of interventions for native CoA, repeat intervention may be required due to restenosis and/or aneurysm formation. Restenosis rates vary from 5% to 24% and are higher in infants and children and in those with arch hypoplasia. Although repeat surgery can be done for recurrent CoA, guidelines from a number of professional societies have recommended balloon angioplasty with or without stenting as the preferred intervention for patients with isolated recoarctation. For infants and young children with recurrent coarctation, balloon angioplasty has been shown to be safe and effective with low incidence of complications. However, the rates of restenosis and reinterventions are high with balloon angioplasty alone. Endovascular stent placement is indicated, either electively in adults or as a bailout procedure in those who develop a complication such as dissection or intimal tear after balloon angioplasty. Conventionally bare metal stents are used; these can be dilated later if required. Covered stents, introduced more recently, are best reserved for those who have aneurysm at the site of previous repair or who develop a complication such as aortic wall perforation or tear. Stents produce complete abolition of gradients across the coarct segment in a majority of cases with good opening of the lumen on angiography. The long-term results are better than that of balloon angioplasty alone, with very low rates of restenosis. However, endovascular stenting is a technically demanding procedure and can be associated with serious complications rarely.

  7. Development of a Technique and Method of Testing Aircraft Models with Turboprop Engine Simulators in a Small-scale Wind Tunnel - Results of Tests

    Directory of Open Access Journals (Sweden)

    A. V. Petrov

    2004-01-01

    This report presents the results of experimental investigations into the interaction between the propellers (Ps) and the airframe of a twin-engine, twin-boom light transport aircraft with a Π-shaped tail. An analysis was performed of the forces and moments acting on the aircraft with rotating Ps. The main features of the methodology for wind tunnel testing of an aircraft model with running Ps in TsAGI's T-102 wind tunnel are outlined. The effect of 6-blade Ps slipstreams on the longitudinal and lateral aerodynamic characteristics as well as the effectiveness of the control surfaces was studied on the aircraft model in cruise and takeoff/landing configurations. The tests were conducted at flow velocities of V∞ = 20 to 50 m/s in the ranges of angles of attack α = -6 to 20 deg, sideslip angles of β = -16 to 16 deg and blade loading coefficient of B = 0 to 2.8. For the aircraft of unusual layout studied, an increase in blowing intensity is shown to result in decreasing longitudinal static stability and significant asymmetry of the directional stability characteristics associated with the interaction between the Ps slipstreams of the same (left-hand) rotation and the empennage.

  8. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modelling techniques. This article reports research on the differences between business process modelling techniques. For each technique, the definition and the structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on 2 criteria: notation and how it works when implemented in Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and serve as the basis for evaluating further modelling techniques.

  9. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
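
    A schematic sketch of the scoring step described above (ours, with invented numbers): surrogate 'truth' generated by the simulation program stands in for utility bills and post-retrofit savings, and a calibration technique is scored on the three figures of merit listed.

    ```python
    # Sketch of scoring a calibration technique against simulation-generated
    # surrogate data; all arrays below are invented placeholders.
    import numpy as np

    true_bills    = np.array([820., 760., 640., 500.])   # surrogate monthly kWh
    true_savings  = 1450.0                                # surrogate retrofit savings
    true_params   = np.array([0.35, 12.0])                # "true" input parameters

    calib_bills   = np.array([805., 775., 655., 490.])    # calibrated-model output
    calib_savings = 1320.0                                 # its savings prediction
    calib_params  = np.array([0.41, 11.2])                 # its recovered parameters

    savings_error = abs(calib_savings - true_savings) / true_savings              # merit 1
    param_closure = np.linalg.norm(calib_params - true_params)                    # merit 2
    cv_rmse = np.sqrt(np.mean((calib_bills - true_bills) ** 2)) / true_bills.mean()  # merit 3

    print(f"savings error {savings_error:.1%}, CV(RMSE) {cv_rmse:.1%}, "
          f"parameter closure {param_closure:.2f}")
    ```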

  10. A Biomechanical Modeling Guided CBCT Estimation Technique.

    Science.gov (United States)

    Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing

    2017-02-01

    Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.

  11. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  12. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. • SPONSOR: ASD/ACCC

  13. Modeling Techniques: Theory and Practice

    OpenAIRE

    Odd A. Asbjørnsen

    1985-01-01

    A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the pro...

  14. EM techniques for archaeological laboratory experiments: preliminary results

    Science.gov (United States)

    Capozzoli, Luigi; De Martino, Gregory; Giampaolo, Valeria; Raffaele, Luongo; Perciante, Felice; Rizzo, Enzo

    2015-04-01

    ...model. The integration of electric and electromagnetic data allowed us to overcome the limits of each technique, especially in terms of resolution and depth; behaviour in humid/saturated conditions was investigated, and the effectiveness of three-dimensional acquisitions was studied to better explore archaeological sites and reduce the uncertainties related to the interpretation of geophysical analysis. The complexity of the relationship between archaeological features in the subsoil and their geophysical response requires efforts in the interpretation of the resulting data. References: Campana, S. and Piro, S. (2009): Seeing the unseen - Geophysics and landscape archaeology. CRC Press, London, 2. No. of pages: 376. ISBN: 978-0-415-44721-8. Conyers, L. and Goodman, D. (1997): Ground-Penetrating Radar: An Introduction for Archaeologists. Walnut Creek, Calif.: AltaMira Press. Davis, J.L. and Annan, A.P. (1989): Ground-penetrating radar for high-resolution mapping of soil and rock stratigraphy. Geophysical Prospecting, 37, 531-551.

  15. Endoscopic thoracic sympathectomy for hyperhidrosis: Technique and results

    Directory of Open Access Journals (Sweden)

    Cina C

    2007-01-01

    Outline: We review the clinical features of hyperhidrosis and the range of treatments used for this condition. We describe in detail the technique of endoscopic sympathectomy. We summarize studies that have reported results of endoscopic sympathectomy. We present new data highlighting the difference in quality of life between patients with hyperhidrosis and controls.

  16. Seismic techniques of enhanced oil recovery: experimental and field results

    Energy Technology Data Exchange (ETDEWEB)

    Kuznetsov, O.L.; Simkin, E.M.; Chilingar, G.V.; Gorfunkel, M.V.; Robertson, J.O. Jr.

    2002-09-15

    Application of secondary and tertiary oil recovery techniques during late field development stages usually yields poor results. The reasons are principally due to the low efficiency of these technologies, probably because the gravity and capillary forces are not properly considered. Improved efficiency for hydrocarbon recovery produced by seismic vibration is discussed. (author)

  17. Endoscopic thoracic sympathectomy for hyperhidrosis: Technique and results

    OpenAIRE

    Cina C; Cina M; Clase C

    2007-01-01

    Outline: We review the clinical features of hyperhidrosis and the range of treatments used for this condition. We describe in detail the technique of endoscopic sympathectomy. We summarize studies that have reported results of endoscopic sympathectomy. We present new data highlighting the difference in quality of life between patients with hyperhidrosis and controls.

  18. Endoscopic thoracic sympathectomy for hyperhidrosis: Technique and results

    Science.gov (United States)

    Cinà, C S; Cinà, M M; Clase, C M

    2007-01-01

    Outline: We review the clinical features of hyperhidrosis and the range of treatments used for this condition. We describe in detail the technique of endoscopic sympathectomy. We summarize studies that have reported results of endoscopic sympathectomy. We present new data highlighting the difference in quality of life between patients with hyperhidrosis and controls. PMID:19789674

  19. Modeling Techniques: Theory and Practice

    Directory of Open Access Journals (Sweden)

    Odd A. Asbjørnsen

    1985-07-01

    A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables. This allows residence time distribution function parameters to be estimated with the reaction in situ, but without any correlation between the estimated residence time distribution parameters and the estimated reaction kinetic parameters. A general word of warning is given to the choice of wrong mathematical structure of models.

  20. The influence of sampling technique on ACT Plus results.

    Science.gov (United States)

    Brouwer, Monique E; Miraziz, Ramen; Agbulos, Grace; Steel, Rona; Hales, Peter; Klineberg, Peter

    2012-12-01

    The manufacturer of the ACT Plus Automated Coagulation Timer, Medtronic Inc., recommends that test cartridges be prewarmed and the activating reagent resuspended (tapped/mixed) before patient testing. In a busy clinical environment, these recommendations may be overlooked or disregarded. In this study, the impact of sampling technique on ACT Plus test results was investigated. In Series 1, two test cartridges were split into four individual chambers. Two ACT Plus machines were used, allowing for three separate comparisons to be made. The sample results from test Chambers 2 (cold/tapped), 3 (warmed/not tapped), and 4 (cold/not tapped) were compared individually against the result from test Chamber 1, the recommended technique (warm/tapped). In Series 2, the manufacturer's recommendations were tested using a single double cartridge (warm/tapped). Results were interpreted using the Bland-Altman method of analysis. The prewarming and tapping of cartridges before use independently influenced the agreement of results when compared with cartridges that were not prewarmed and tapped. Each factor (temperature and mixing) when excluded was found to affect the standard deviation and decrease the agreement of results. By following the manufacturer's recommendations to standardize the sampling technique, ACT Plus test results are more accurate.
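
    For reference (not the study's analysis code), the Bland-Altman comparison mentioned above reduces to a bias and limits of agreement computed from paired differences; the ACT values below are hypothetical.

    ```python
    # Bland-Altman agreement sketch for paired ACT results (hypothetical values, s).
    import numpy as np

    recommended = np.array([128., 141., 135., 150., 122.])   # warm/tapped chamber
    variant     = np.array([131., 149., 130., 158., 127.])   # e.g. cold/not tapped

    diff = variant - recommended
    mean_pair = (variant + recommended) / 2.0

    bias = diff.mean()
    loa  = 1.96 * diff.std(ddof=1)          # half-width of 95% limits of agreement

    print(f"bias {bias:.1f} s, limits of agreement {bias - loa:.1f} to {bias + loa:.1f} s")
    # Plotting `diff` against `mean_pair` with these horizontal lines gives the
    # usual Bland-Altman presentation.
    ```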

  1. Results of NDE Technique Evaluation of Clad Hydrides

    Energy Technology Data Exchange (ETDEWEB)

    Kunerth, Dennis C. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-09-01

    This report fulfills the M4 milestone, M4FT-14IN0805023, Results of NDE Technique Evaluation of Clad Hydrides, under Work Package Number FT-14IN080502. During service, zirconium alloy fuel cladding will degrade via corrosion/oxidation. Hydrogen, a byproduct of the oxidation process, will be absorbed into the cladding and eventually form hydrides due to low hydrogen solubility limits. The hydride phase is detrimental to the mechanical properties of the cladding and therefore it is important to be able to detect and characterize the presence of this constituent within the cladding. Presently, hydrides are evaluated using destructive examination. If nondestructive evaluation techniques can be used to detect and characterize the hydrides, the potential exists to significantly increase test sample coverage while reducing evaluation time and cost. To demonstrate the viability of this approach, an initial evaluation of eddy current and ultrasonic techniques was performed to demonstrate the basic ability of these techniques to detect hydrides or their effects on the microstructure. Conventional continuous wave eddy current techniques were applied to zirconium based cladding test samples thermally processed with hydrogen gas to promote the absorption of hydrogen and subsequent formation of hydrides. The results of the evaluation demonstrate that eddy current inspection approaches have the potential to detect both the physical damage induced by hydrides, e.g. blisters and cracking, as well as the combined effects of absorbed hydrogen and hydride precipitates on the electrical properties of the zirconium alloy. Similarly, measurements of ultrasonic wave velocities indicate changes in the elastic properties resulting from the combined effects of absorbed hydrogen and hydride precipitates as well as changes in geometry in regions of severe degradation. However, for both approaches, the signal responses intended to make the desired measurement incorporate a number of contributing

  2. Performance results of HESP physical model

    Science.gov (United States)

    Chanumolu, Anantha; Thirupathi, Sivarani; Jones, Damien; Giridhar, Sunetra; Grobler, Deon; Jakobsson, Robert

    2017-02-01

    As a continuation of the published work on a model-based calibration technique with HESP (Hanle Echelle Spectrograph) as a case study, in this paper we present the performance results of the technique. We also describe how the open parameters were chosen in the model for optimization, the accuracy of the glass data and the handling of discrepancies. It is observed through simulations that discrepancies in the glass data can be identified but not quantified. Having accurate glass data is therefore important, and it is possible to obtain this from the glass manufacturers. The model's performance in various aspects is presented using the ThAr calibration frames from HESP during its pre-shipment tests. The accuracy of model predictions and their wavelength calibration in comparison with conventional empirical fitting, the behaviour of open parameters in optimization, the model's ability to track instrumental drifts in the spectrum and the double-fibre performance are discussed. It is observed that the optimized model is able to predict to a high accuracy the drifts in the spectrum caused by environmental fluctuations. It is also observed that the pattern in the spectral drifts across the 2D spectrum, which varies from image to image, is predictable with the optimized model. We also discuss the possible science cases where the model can contribute.

  3. Incorporation of RAM techniques into simulation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.C. Jr.; Haire, M.J.; Schryver, J.C.

    1995-07-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model that represents the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have impact on design requirements. Design changes are evaluated through "what if" questions, sensitivity studies, and battle scenario changes.
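
    A toy illustration (ours, not the Army model) of folding failure and repair data into a task-based simulation: each mission task draws a failure with some probability and, if one occurs, a repair delay is added before the next task. Task names, durations, failure probabilities and repair times are invented.

    ```python
    # Toy task-network simulation with RAM behaviour folded in; task durations,
    # failure probabilities and repair times are invented for this sketch.
    import random

    tasks = [                           # (name, duration_h, p_failure, repair_h)
        ("upload",               0.5, 0.02, 1.0),
        ("travel_to_AFAS",       1.0, 0.05, 2.0),
        ("resupply",             0.7, 0.03, 1.5),
        ("return_to_logistics",  1.0, 0.05, 2.0),
    ]

    def run_mission(rng):
        t, failures = 0.0, 0
        for name, dur, p_fail, repair in tasks:
            t += dur
            if rng.random() < p_fail:   # failure occurs during this task
                failures += 1
                t += repair             # repair before continuing the mission
        return t, failures

    rng = random.Random(1)
    results = [run_mission(rng) for _ in range(10_000)]
    mean_time = sum(r[0] for r in results) / len(results)
    failure_free_fraction = sum(1 for r in results if r[1] == 0) / len(results)
    print(mean_time, failure_free_fraction)
    ```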

  4. Model checking timed automata : techniques and applications

    NARCIS (Netherlands)

    Hendriks, Martijn.

    2006-01-01

    Model checking is a technique to automatically analyse systems that have been modeled in a formal language. The timed automaton framework is such a formal language. It is suitable to model many realistic problems in which time plays a central role. Examples are distributed algorithms, protocols, emb

  5. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  6. Using Visualization Techniques in Multilayer Traffic Modeling

    Science.gov (United States)

    Bragg, Arnold

    We describe visualization techniques for multilayer traffic modeling - i.e., traffic models that span several protocol layers, and traffic models of protocols that cross layers. Multilayer traffic modeling is challenging, as one must deal with disparate traffic sources; control loops; the effects of network elements such as IP routers; cross-layer protocols; asymmetries in bandwidth, session lengths, and application behaviors; and an enormous number of complex interactions among the various factors. We illustrate by using visualization techniques to identify relationships, transformations, and scaling; to smooth simulation and measurement data; to examine boundary cases, subtle effects and interactions, and outliers; to fit models; and to compare models with others that have fewer parameters. Our experience suggests that visualization techniques can provide practitioners with extraordinary insight about complex multilayer traffic effects and interactions that are common in emerging next-generation networks.

  7. Intramedullary nailing of the proximal humerus: evolution, technique, and results.

    Science.gov (United States)

    Dilisio, Matthew F; Nowinski, Robert J; Hatzidakis, Armodios M; Fehringer, Edward V

    2016-05-01

    Proximal humerus fractures are the third most common fracture in the elderly. Although most fractures can be treated conservatively with acceptable outcomes, certain fracture patterns are at high risk for progression to humeral malunions, nonunions, stiffness, and post-traumatic arthrosis. The goal of antegrade humeral nailing of proximal humerus fractures is to provide stability to a reduced fracture that allows early motion to optimize patient outcomes. Certain technical pearls are pivotal in managing these difficult fractures with nails; these include rotator cuff management, respect of the soft tissues, anatomic tuberosity position, blood supply maintenance, knowledge of the deforming forces on the proximal humerus, fracture reduction, and rehabilitation strategies. Modern proximal humeral nail designs and techniques assist the surgeon in adhering to these principles and have demonstrated promising outcomes. Humeral nail designs have undergone significant innovation during the past 40 years and now can provide stable fixation in the humeral shaft distally as well as improved stability in the head and tuberosity fragments, which were the common site of fixation failure with earlier generation implants. Compared with other fixation strategies, such as locking plate fixation, no compelling evidence exists to suggest one technique over another. The purpose of this review is to describe the history, results, new designs, and techniques that make modern intramedullary nailing of proximal humerus fractures a viable treatment option.

  8. On-shell Techniques and Universal Results in Quantum Gravity

    CERN Document Server

    Bjerrum-Bohr, N E J; Vanhove, Pierre

    2013-01-01

    We compute the leading post-Newtonian and quantum corrections to the Coulomb and Newtonian potentials using the full modern arsenal of on-shell techniques; we employ spinor-helicity variables everywhere, use the Kawai-Lewellen-Tye (KLT) relations to derive gravity amplitudes from gauge theory and use unitarity methods to extract the terms needed at one-loop order. We stress that our results are universal and thus will hold in any quantum theory of gravity with the same low-energy degrees of freedom as we are considering. Previous results for the corrections to the same potentials, derived historically using Feynman graphs, are verified explicitly, but our approach presents a huge simplification, since starting points for the computations are compact and tedious index contractions and various complicated integral reductions are eliminated from the onset, streamlining the derivations. We also analyze the spin dependence of the results using the KLT factorization, and show how the spinless correction in the fram...
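
    For readers who want the concrete form of the result being referred to, the corrected long-distance potential usually quoted in this line of work is reproduced below as a sketch; the coefficients are quoted from the published literature from memory and should be checked against the paper itself.

```latex
V(r) \;\simeq\; -\,\frac{G\,m_1 m_2}{r}
\left[\, 1
\;+\; 3\,\frac{G\,(m_1+m_2)}{r\,c^{2}}
\;+\; \frac{41}{10\pi}\,\frac{G\hbar}{r^{2}c^{3}} \,\right]
```

    The second term is the classical post-Newtonian correction, while the third is the genuinely quantum one-loop correction that is universal for any low-energy theory with the same degrees of freedom.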

  9. Model assisted qualification of NDE techniques

    Science.gov (United States)

    Ballisat, Alexander; Wilcox, Paul; Smith, Robert; Hallam, David

    2017-02-01

    The costly and time-consuming nature of empirical trials typically performed for NDE technique qualification is a major barrier to the introduction of NDE techniques into service. The use of computational models has been proposed as a method by which the process of qualification can be accelerated. However, given the number of possible parameters present in an inspection, the number of combinations of parameter values scales to a power law and running simulations at all of these points rapidly becomes infeasible. Given that many NDE inspections result in a single-valued scalar quantity, such as a phase or amplitude, using suitable sampling and interpolation methods significantly reduces the number of simulations that have to be performed. This paper presents initial results of applying Latin Hypercube Designs and Multivariate Adaptive Regression Splines to the inspection of a fastener hole using an oblique ultrasonic shear wave inspection. It is demonstrated that an accurate mapping of the response of the inspection for the variations considered can be achieved by sampling only a small percentage of the parameter space of variations and that the required percentage decreases as the number of parameters and the number of possible sample points increases. It is then shown how the outcome of this process can be used to assess the reliability of the inspection through commonly used metrics such as probability of detection, thereby providing an alternative methodology to the current practice of performing empirical probability of detection trials.
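
    The sketch below illustrates the general workflow described above, under stated assumptions: the `simulate` function is a toy scalar response standing in for the ultrasonic simulation, and a gradient-boosted regressor stands in for the MARS surrogate used in the paper (MARS itself lives in third-party packages such as py-earth).

```python
# Minimal sketch: Latin hypercube sampling of an inspection parameter space,
# a surrogate model fitted to the sampled responses, and a crude POD estimate.
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import GradientBoostingRegressor

lower = np.array([0.5, 30.0, 0.0])    # assumed: crack length [mm], probe angle [deg], offset [mm]
upper = np.array([3.0, 50.0, 2.0])

def simulate(x):
    """Toy scalar amplitude response standing in for the full NDE simulation."""
    length, angle, offset = x
    return length * np.cos(np.radians(angle - 40)) * np.exp(-offset)

sampler = qmc.LatinHypercube(d=3, seed=1)
X = qmc.scale(sampler.random(n=80), lower, upper)      # 80 design points
y = np.array([simulate(x) for x in X])

surrogate = GradientBoostingRegressor(random_state=1).fit(X, y)

# Dense probing of the parameter space via the cheap surrogate.
Xp = qmc.scale(qmc.LatinHypercube(d=3, seed=2).random(n=5000), lower, upper)
amp = surrogate.predict(Xp)

threshold = 0.8                                        # assumed detection threshold
pod = (amp > threshold).mean()
print(f"estimated probability of detection over the parameter space: {pod:.2f}")
```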

  10. Results of arthrospine assisted percutaneous technique for lumbar discectomy

    Directory of Open Access Journals (Sweden)

    Mohinder Kaushal

    2016-01-01

    Background: Available minimally invasive arthro/endoscopic techniques are not compatible with the 30° arthroscope that orthopedic surgeons use in knee and shoulder arthroscopy. The minimally invasive "arthrospine assisted percutaneous technique for lumbar discectomy" is an attempt to allow standard, familiar microsurgical discectomy and decompression to be performed using the 30° arthroscope used in knee and shoulder arthroscopy together with conventional microdiscectomy instruments. Materials and Methods: 150 patients suffering from lumbar disc herniations were operated between January 2004 and December 2012 with the indigenously designed arthrospine system and were evaluated retrospectively. In the lumbar discectomy group, there were 85 males and 65 females aged between 18 and 72 years (mean, 38.4 years). The delay between onset of symptoms and surgery ranged from 3 months to 7 years. Levels operated upon included L1-L2 (n = 3), L2-L3 (n = 2), L3-L4 (n = 8), L4-L5 (n = 90), and L5-S1 (n = 47). Ninety patients had radiculopathy on the right side and 60 on the left side. There were 22 central, 88 paracentral, 12 contained, 3 extraforaminal, and 25 sequestrated herniations. The standard protocol of preoperative blood tests, x-ray of the LS spine, preoperative MRI and preanesthetic evaluation was followed in all cases. The technique comprised localization of the symptomatic level followed by percutaneous dilatation and insertion of the newly devised arthrospine system device over a dilator through a 15 mm skin and fascial incision. Arthro/endoscopic discectomy was then carried out with the 30° arthroscope and conventional disc surgery instruments. Results: Based on modified Macnab's criteria, of the 150 patients operated for lumbar discectomy, 136 (90%) had excellent to good, 12 (8%) had fair, and 2 (1.3%) had poor results. The complications observed were discitis in 3 patients (2%), dural tear in 4 patients (2.6%), and nerve root injury in 2 patients (1.3%). About 90% patients

  11. Hydroponic cultivation techniques: good results with Eg system

    Energy Technology Data Exchange (ETDEWEB)

    Mimiola, G.; Sigliuzzo, C. (Tecnagro, Valenzano (Italy))

    1988-12-01

    This report describes results obtained at the Tecnagro agronomic institute (Valenzano, Italy), where research is being carried out on the use of the Eg hydroponic system developed in Israel. The research program examined the following: composition of nutritive solutions for ornamental plants and vegetables, methods of application of nutritive substances, and breeding densities for ornamental plants and vegetables. Successful nutritive formulas were obtained which resulted, in the case of ornamental plants, in increases in plant height (from 30 to 50%) and foliage area (50%), as well as shortened growth cycles. For vegetables, shortened growth cycles were achieved along with a greater and more consistent production. From an economic point of view, tomatoes proved to be the best choice of vegetable for cultivation with the Eg technique.

  12. Research Techniques Made Simple: Skin Carcinogenesis Models: Xenotransplantation Techniques.

    Science.gov (United States)

    Mollo, Maria Rosaria; Antonini, Dario; Cirillo, Luisa; Missero, Caterina

    2016-02-01

    Xenotransplantation is a widely used technique to test the tumorigenic potential of human cells in vivo using immunodeficient mice. Here we describe basic technologies and recent advances in xenotransplantation applied to study squamous cell carcinomas (SCCs) of the skin. SCC cells isolated from tumors can either be cultured to generate a cell line or injected directly into mice. Several immunodeficient mouse models are available for selection based on the experimental design and the type of tumorigenicity assay. Subcutaneous injection is the most widely used technique for xenotransplantation because it involves a simple procedure allowing the use of a large number of cells, although it may not mimic the original tumor environment. SCC cell injections at the epidermal-to-dermal junction or grafting of organotypic cultures containing human stroma have also been used to more closely resemble the tumor environment. Mixing of SCC cells with cancer-associated fibroblasts can allow the study of their interaction and reciprocal influence, which can be followed in real time by intradermal ear injection using conventional fluorescent microscopy. In this article, we will review recent advances in xenotransplantation technologies applied to study behavior of SCC cells and their interaction with the tumor environment in vivo.

  13. Computer Package of Metasubject Results Valuation Techniques in Elementary School

    Directory of Open Access Journals (Sweden)

    Ulanovskaya I.M.,

    2014-08-01

    The new Federal state educational standards define requirements for metasubject results of primary schooling. For their assessment, a diagnostic package and test methods were developed at the Psychological Institute of the Russian Academy of Education and the Moscow State University of Psychology and Education. A computer version of this package is provided. It includes the techniques "Permutations" (author A.Z. Zak), "Calendar" (authors G.A. Zuckerman and O.L. Obukhova), "Quests of Mathematics" (authors S.F. Gorbov, O.V. Savelyeva, N.L. Tabachnikova), and "Preparation of the text" (author Z.N. Novlyanskaya). The computer package helps to evaluate the main metasubject results related to the development of thinking and mastery of the tools of learning activity. Two additional methods ("Children's tasks" by G.A. Zuckerman and "Tips" by E.V. Chudinova) are focused on the diagnosis of the formation of the ability to learn. Using computer tools allows schools to quickly diagnose the results of primary school education, to identify its strengths and weaknesses, and to prepare correction programs.

  14. Comparative Analysis of Vehicle Make and Model Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Faiza Ayub Syed

    2014-03-01

    Vehicle Make and Model Recognition (VMMR) has emerged as a significant element of vision-based systems because of its application in access control systems, traffic control and monitoring systems, security systems and surveillance systems, etc. So far a number of techniques have been developed for vehicle recognition. Each technique follows a different methodology and classification approach. The evaluation results highlight the recognition technique with the highest accuracy level. In this paper we have pointed out the working of various vehicle make and model recognition techniques and compared these techniques on the basis of methodology, principles, classification approach, classifier and level of recognition. After comparing these factors we concluded that Locally Normalized Harris Corner Strengths (LHNS) performs best as compared to other techniques. LHNS uses Bayes and K-NN classification approaches for vehicle classification. It extracts information from the frontal view of vehicles for vehicle make and model recognition.
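
    The exact LHNS feature construction is not reproduced here; the hypothetical sketch below only illustrates the general idea of pairing Harris corner strengths with a k-NN classifier, assuming pre-cropped frontal-view images and their make/model labels are already available.

```python
# Minimal sketch: Harris corner strengths as a crude frontal-view descriptor,
# classified with k-NN. This illustrates the general idea only, not the LHNS
# pipeline from the paper; `images` and `labels` are assumed inputs.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def harris_descriptor(image_bgr, size=(128, 96), top_k=256):
    """Return the top-k normalized Harris corner strengths as a fixed-length vector."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size).astype(np.float32)
    response = cv2.cornerHarris(gray, 2, 3, 0.04)   # blockSize=2, ksize=3, k=0.04
    strengths = np.sort(response.ravel())[::-1][:top_k]
    norm = np.linalg.norm(strengths)
    return strengths / norm if norm > 0 else strengths

def train_vmmr(images, labels, n_neighbors=3):
    X = np.array([harris_descriptor(img) for img in images])
    return KNeighborsClassifier(n_neighbors=n_neighbors).fit(X, labels)

# usage (hypothetical data):
# clf = train_vmmr(train_images, train_labels)
# predicted = clf.predict([harris_descriptor(test_image)])
```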

  15. Laparoscopic Sacral Uteropexy with Cravat Technique--Experience and Results

    Directory of Open Access Journals (Sweden)

    Murat Api

    2014-08-01

    Objective: The aim of the present study was to evaluate the safety and efficacy of a "Cravat" technique for the management of uterine prolapse in patients who wish to preserve the uterus, involving suspension of the uterus from the sacral promontory using polypropylene mesh. Materials and Methods: A prospective observational study between January 2011 and September 2013 was conducted. Prior to surgery, prolapse assessment was undertaken with the Baden-Walker halfway system to grade the degree of prolapse at all sites. Patients with severe uterine prolapse (stage II-IV) who wished to preserve the uterus were operated on with the Cravat technique. All patients were evaluated at 2 weeks and 6 weeks after surgery and followed for 6 months. Outcomes were evaluated objectively by vaginal examination using the Baden-Walker halfway classification, and subjectively by classifying patients as 'very satisfied', 'satisfied' and 'not satisfied' at the 6th month postoperatively. Results: Sacral uteropexy was successfully performed by laparoscopy in 32/33 patients (one needed to be converted to laparotomy). Nine patients also had a concurrent procedure such as anterior colporrhaphy, posterior colporrhaphy or a transobturator tape. Postoperative recovery was uneventful, with subjective and objective cure rates of 96.9% and 93.9%, respectively, at six months. One recurrence of total prolapse needed to be reoperated and two patients with sacrouteropexy still remained at stage 2 prolapse. There have been no cases of graft exposure, rejection or infection with a median follow-up of 23.9 months. Conclusions: Laparoscopic sacral uteropexy with the "Cravat" technique was found to be a safe and simple procedure.

  16. Cystoscopic-assisted partial cystectomy: description of technique and results

    Directory of Open Access Journals (Sweden)

    Gofrit ON

    2014-10-01

    Background: Partial cystectomy provides oncological results comparable with those of radical cystectomy in selected patients with invasive bladder cancer, without the morbidity associated with radical cystectomy and urinary diversion. We describe a novel technique of partial cystectomy that allows accurate identification of tumor margins while minimizing damage to the rest of the bladder. Methods: During the study period, 30 patients underwent partial cystectomy for invasive high-grade cancer. In 19 patients, the traditional method of tumor identification was used, ie, identifying the tumor by palpation and cystotomy. In eleven patients, after mobilization of the bladder, flexible cystoscopy was done and the light of the cystoscope was pointed toward one edge of the planned resected ellipse around the tumor, thus avoiding cystotomy. Results: Patients who underwent partial cystectomy using the novel method were similar in all characteristics to patients operated on using the traditional technique except for tumor diameter, which was significantly larger in patients operated on using the novel method (4.3±1.5 cm versus 3.11±1.18 cm, P=0.032). Complications were rare in both types of surgery. The 5-year local recurrence-free survival was marginally superior with the novel method (0.8 versus 0.426, P=0.088). Overall, disease-specific and disease-free survival rates were similar. Conclusion: The use of a flexible cystoscope during partial cystectomy is a simple, low-cost maneuver that assists in planning the bladder incision and minimizes injury to the remaining bladder by avoiding the midline cystotomy. Initial oncological results show a trend toward a lower rate of local

  17. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

  18. Magnetic resonance imaging urodynamics: technique development and preliminary results

    Directory of Open Access Journals (Sweden)

    Gustavo Borghesi

    2006-06-01

    OBJECTIVES: In this preliminary study we report the development of a video urodynamic technique using magnetic resonance imaging (MRI). MATERIALS AND METHODS: We studied 6 women with genuine stress urinary incontinence, diagnosed by history and physical examination. Urodynamic examination was performed on multichannel equipment with the patient in the supine position. Coughing and Valsalva maneuvers were performed at volumes of 150, 250 and 350 mL. Simultaneously, MRI was carried out using a 1.5 T GE Signa CV/i high-speed scanner with real-time fluoroscopic imaging capabilities. Fluoroscopic imaging was accomplished in the corresponding planes with T2-weighted single shot fast spin echo sequences at a speed of about 1 frame per second. Both studies were recorded and synchronized, resulting in a single video urodynamic examination. RESULTS: Dynamic MRI with cine-loop reconstruction of 1 image per second demonstrated the movement of all compartments of the relaxed pelvis during straining, with concomitant registration of abdominal and intravesical pressures. In 5 patients, urinary leakage was demonstrated during straining and the Valsalva leak point pressure (VLPP) was determined as the baseline bladder pressure subtracted from the vesical pressure at leak. Mean VLPP was 72.6 cm H2O (ranging from 43 to 122 cm H2O). CONCLUSIONS: The concept of MRI video urodynamics is feasible. From a clinical perspective, practical aspects represent a barrier to daily use and it should be recommended for research purposes.

  19. Transcorporeal cervical foraminotomy: description of technique and results

    Directory of Open Access Journals (Sweden)

    Guilherme Pereira Corrêa Meyer

    2014-09-01

    OBJECTIVE: Retrospective analysis of 216 patients undergoing foraminal decompression via a transcorporeal approach, with a review of the surgical technique. METHOD: 216 patients with a minimum follow-up of 2 years and an average of 41.8 months were included in the study. The clinical records of these patients were reviewed for complications, NDI (neck disability index) and VAS (visual analogue scale). Pre- and post-operative radiographs were used to evaluate the disc height. RESULTS: At the end of follow-up patients had significant clinical improvement, with reductions of 88.3% in the NDI and of 86.5% and 68.3% in the VAS for the neck and upper limb, respectively (p<0.05). A reduction of 8.8% in the disc height was observed, without other associated complications (p<0.05). CONCLUSION: Radicular decompression through a transcorporeal approach is an alternative that provides good clinical results without the need for fusion and with few complications.

  20. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO-2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness within grid cells, were found; these can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the identification of the causes and effects of flooding.
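
    As a much simpler illustration of the hydraulic building blocks such models consume (this is not HEC-RAS or FLO-2D), the sketch below routes a 1D kinematic wave along a wide rectangular channel using Manning's equation; all channel parameters are illustrative assumptions.

```python
# Toy sketch (not HEC-RAS or FLO-2D): explicit 1D kinematic-wave routing on a
# wide rectangular channel, with discharge given by Manning's equation.
import numpy as np

def manning_q(depth, width, slope, n):
    """Discharge [m^3/s] for a wide rectangular channel (hydraulic radius ~ depth)."""
    area = depth * width
    return (1.0 / n) * area * depth ** (2.0 / 3.0) * np.sqrt(slope)

def route(depth0, inflow_depth, width=50.0, slope=1e-3, n=0.035,
          dx=500.0, dt=30.0, steps=2000):
    """March the kinematic wave dA/dt + dQ/dx = 0 forward in time (upwind scheme)."""
    h = depth0.copy()
    q_in = manning_q(inflow_depth, width, slope, n)
    for _ in range(steps):
        q = manning_q(h, width, slope, n)
        dqdx = np.diff(q, prepend=q_in) / dx
        h = np.maximum(h - dt * dqdx / width, 0.0)   # dA = width * dh
    return h

# usage with an assumed initial state: 1 m of water everywhere, 2 m held at the inlet
depths = route(np.full(40, 1.0), inflow_depth=2.0)
print(depths.round(2))
```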

  1. “ROCAMBOLE-LIKE” BICEPS TENODESIS: TECHNIQUE AND RESULTS

    Science.gov (United States)

    Godinho, Glaydson Gomes; Mesquita, Fabrício Augusto Silva; França, Flávio de Oliveira; Freitas, José Márcio Alves

    2015-01-01

    Objective: To present a new technique for bicipital tenodesis and its results, accomplished partially via arthroscopy and grounded in concepts of the normal and pathological anatomy of the tendon of the biceps long head. It is based on the predisposition of this tendon towards becoming attached to the intertubercular sulcus after rupture or tenotomy (auto-tenodesis). Methods: Evaluations were conducted on 63 patients (63 shoulders), aged from 32 to 77 years (average 55), consisting of 32 females (51%) and 31 males (49%). Thirty-five of the patients (55.6%) were over 60 years of age and 28 patients (44.4%) were under 60 years of age. Eighteen were sports participants (28.6%). Fourteen had injuries associated with the subscapularis (22.2%). The average follow-up was 43 months (ranging from 12 to 74 months). The right shoulder accounted for 48 cases (76.2%), of which one was a left-handed individual and 47 were right-handed. The left shoulder accounted for 15 (23%) of the patients, of whom two were left-handed and 13 were right-handed. There were no bilateral occurrences. The statistical analyses were done using SPSS version 18. Pearson's chi-square test and continuity corrections were used to investigate the statistical significance of associations between variables. Associations were taken to be statistically significant when p was less than 0.05. Results: Residual Popeye deformity was perceived by seven patients (11.1%); it was only observed by the examiner in 15 cases (23.8%); and neither the patient nor the examiner observed it in 41 cases (65%). There were no statistically valid influences from age, participation in contact or throwing sports, subscapularis tendon-associated injury or Popeye deformity. Fifty-eight patients (92.06%) were satisfied, two patients were dissatisfied (3.17%) and three patients were indifferent (4.76%). Conclusion: The technique presented high patient satisfaction rates (92.06%) and residual deformity was perceived by 11.1% of the

  2. Field results of antifouling techniques for optical instruments

    Science.gov (United States)

    Strahle, W.J.; Hotchkiss, F.S.; Martini, M.A.

    1998-01-01

    An antifouling technique is developed for the protection of optical instruments from biofouling; it leaches a bromide compound into a sample chamber and pumps new water into the chamber prior to measurement. The primary advantage of using bromide is that it is less toxic than metal-based antifoulants. The drawback of the bromide technique is also discussed.

  3. Modeling Techniques for IN/Internet Interworking

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper focuses on the authors' contributions to ITU-T to develop the network modeling for the support of IN/Internet interworking. Following an introduction to benchmark interworking services, the paper describes the consensus enhanced DFP architecture, which was reached based on the IETF reference model and the authors' proposal. Then the proposed information flows for the benchmark services are presented, with new or updated flows identified. Finally, a brief description is given of implementation techniques.

  4. The SIGN nail for knee fusion: technique and clinical results

    Directory of Open Access Journals (Sweden)

    Anderson Duane Ray

    2016-01-01

    Purpose: Evaluate the efficacy of using the SIGN nail for instrumented knee fusion. Methods: Six consecutive patients (seven knees, three males) with an average age of 30.5 years (range, 18–50 years) underwent knee arthrodesis with the SIGN nail (mean follow-up 10.7 months; range, 8–14 months). Diagnoses included tuberculosis (two knees), congenital knee dislocation (two knees, one patient), bacterial septic arthritis (one knee), malunited spontaneous fusion (one knee), and severe gout with a 90° flexion contracture (one knee). The nail was inserted through an anteromedial entry point on the femur and full weightbearing was permitted immediately. Results: All knees had clinical and radiographic evidence of fusion at final follow-up and none required further surgery. Four of six patients ambulated without an assistive device, and all patients reported improved overall physical function. There were no post-operative complications. Conclusion: The technique described utilizing the SIGN nail is both safe and effective for knee arthrodesis and useful for austere environments with limited fluoroscopy and implant options.

  5. Endoscopic incisional therapy for benign esophageal strictures: Technique and results.

    Science.gov (United States)

    Samanta, Jayanta; Dhaka, Narendra; Sinha, Saroj Kant; Kochhar, Rakesh

    2015-12-25

    Benign esophageal strictures refractory to the conventional balloon or bougie dilatation may be subjected to various adjunctive modes of therapy, one of them being endoscopic incisional therapy (EIT). A proper delineation of the stricture anatomy is a prerequisite. A host of electrocautery and mechanical devices may be used, the most common being the use of needle knife, either standard or insulated tip. The technique entails radial incision and cutting off of the stenotic rim. Adjunctive therapies, to prevent re-stenosis, such as balloon dilatation, oral or intralesional steroids or argon plasma coagulation can be used. The common strictures where EIT has been successfully used are Schatzki's rings (SR) and anastomotic strictures (AS). Short segment strictures (< 1 cm) have been found to have the best outcome. When compared with routine balloon dilatation, EIT has equivalent results in treatment naïve cases but better long term outcome in refractory cases. Anecdotal reports of its use in other types of strictures have been noted. Post procedure complications of EIT are mild and comparable to dilatation therapy. As of the current evidence, incisional therapy can be used for management of refractory AS and SR with relatively short stenosis (< 1 cm) with good safety profile and acceptable long term patency.

  6. Dust tracking techniques applied to the STARDUST facility: First results

    Energy Technology Data Exchange (ETDEWEB)

    Malizia, A., E-mail: malizia@ing.uniroma2.it [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Camplani, M. [Grupo de Tratamiento de Imágenes, E.T.S.I de Telecomunicación, Universidad Politécnica de Madrid (Spain); Gelfusa, M. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Lupelli, I. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); EURATOM/CCFE Association, Culham Science Centre, Abingdon (United Kingdom); Richetta, M.; Antonelli, L.; Conetta, F.; Scarpellini, D.; Carestia, M.; Peluso, E.; Bellecci, C. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Salgado, L. [Grupo de Tratamiento de Imágenes, E.T.S.I de Telecomunicación, Universidad Politécnica de Madrid (Spain); Video Processing and Understanding Laboratory, Universidad Autónoma de Madrid (Spain); Gaudio, P. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy)

    2014-10-15

    Highlights: •Use of an experimental facility, STARDUST, to analyze the dust resuspension problem inside the tokamak in case of a loss of vacuum accident. •PIV technique implementation to track the dust during a LOVA reproduction inside STARDUST. •Data imaging techniques to analyze the dust velocity field: first results and data discussion. -- Abstract: An important issue related to future nuclear fusion reactors fueled with deuterium and tritium is the creation of large amounts of dust due to several mechanisms (disruptions, ELMs and VDEs). The dust size expected in nuclear fusion experiments (such as ITER) is in the order of microns (between 0.1 and 1000 μm). Almost the total amount of this dust remains in the vacuum vessel (VV). This radiological dust can re-suspend in case of LOVA (loss of vacuum accident) and these phenomena can cause explosions and serious damage to the health of the operators and to the integrity of the device. The authors have developed a facility, STARDUST, in order to reproduce thermo fluid-dynamic conditions comparable to those expected inside the VV of the next generation of experiments, such as ITER, in case of LOVA. The dust used inside the STARDUST facility presents particle sizes and physical characteristics comparable with those created inside the VV of nuclear fusion experiments. In this facility an experimental campaign has been conducted with the purpose of tracking the dust re-suspended at low pressurization rates (comparable to those expected in case of LOVA in ITER and suggested by the General Safety and Security Report ITER-GSSR) using a fast camera with a frame rate from 1000 to 10,000 images per second. The velocity fields of the mobilized dust are derived from the imaging of a two-dimensional slice of the flow illuminated by an optically adapted laser beam. The aim of this work is to demonstrate the possibility of dust tracking by means of image processing with the objective of determining the velocity field values
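
    The following is a bare-bones illustration of the window cross-correlation at the heart of PIV-style dust tracking; frame pairs are assumed to be grayscale numpy arrays, and none of the STARDUST-specific calibration, masking or laser-sheet geometry is included.

```python
# Bare-bones PIV-style sketch: split two consecutive frames into interrogation
# windows and estimate the displacement in each window from the peak of the
# 2D cross-correlation. Frames are assumed to be grayscale numpy arrays.
import numpy as np
from scipy.signal import fftconvolve

def window_displacement(win_a, win_b):
    """Displacement (dy, dx) of win_b relative to win_a via cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak[0] - (win_a.shape[0] - 1), peak[1] - (win_a.shape[1] - 1)

def velocity_field(frame_a, frame_b, win=32, dt=1e-3, scale=1.0):
    """Coarse velocity field [scale * pixels / s] on a grid of interrogation windows."""
    ny, nx = frame_a.shape[0] // win, frame_a.shape[1] // win
    v = np.zeros((ny, nx, 2))
    for i in range(ny):
        for j in range(nx):
            sl = np.s_[i * win:(i + 1) * win, j * win:(j + 1) * win]
            dy, dx = window_displacement(frame_a[sl], frame_b[sl])
            v[i, j] = (dy * scale / dt, dx * scale / dt)
    return v

# usage (hypothetical frames):
# v = velocity_field(frame0.astype(float), frame1.astype(float), win=32, dt=1e-4)
```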

  7. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results include the first Ω(lg n) query time lower bound for linear space static data structures. We also present a new technique for proving lower bounds for range reporting problems in the pointer machine and the I/O-model; with this technique, we tighten the gap between the known upper bound and lower bound for the most fundamental range reporting problem, orthogonal range reporting.

  8. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
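
    The AARJ algorithm itself is not reproduced here. As a much simpler stand-in, the sketch below scores four hypothetical wavelength-dependence models with BIC-based approximate posterior probabilities and forms a model-averaged prediction, which conveys the same idea of propagating model uncertainty without the reversible-jump machinery.

```python
# Simplified stand-in for RJMCMC-based model averaging: approximate posterior
# model probabilities from BIC and average the predictions. The four candidate
# forms for the aerosol cross-section vs. wavelength are illustrative only.
import numpy as np

def fit_poly(loglam, logsigma, degree):
    """Least-squares polynomial in log-log space; returns (coeffs, rss)."""
    coeffs = np.polyfit(loglam, logsigma, degree)
    resid = logsigma - np.polyval(coeffs, loglam)
    return coeffs, float(resid @ resid)

def model_average(lam, sigma, degrees=(0, 1, 2, 3)):
    x, y, n = np.log(lam), np.log(sigma), len(lam)
    fits, bics = [], []
    for d in degrees:                       # candidate models of increasing complexity
        coeffs, rss = fit_poly(x, y, d)
        bics.append(n * np.log(rss / n) + (d + 1) * np.log(n))
        fits.append(coeffs)
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))  # approximate posterior model weights
    w /= w.sum()
    def predict(lam_new):
        preds = np.array([np.exp(np.polyval(c, np.log(lam_new))) for c in fits])
        return w @ preds
    return w, predict

# usage with synthetic data roughly following a power law:
rng = np.random.default_rng(0)
lam = np.linspace(300e-9, 700e-9, 25)
sigma = 1e-30 * (lam / 500e-9) ** -1.3 * np.exp(rng.normal(0, 0.05, lam.size))
weights, predict = model_average(lam, sigma)
print("model weights:", weights.round(3))
```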

  9. New results on sporadic ionization observed with the API technique

    Science.gov (United States)

    Bakhmetieva, N. V.; Belikovich, V. V.; Kagan, L. M.

    We present new results of our studies of sporadic E layers (Es) by means of the artificial periodic irregularities (API) technique. Artificial periodic irregularities were generated in antinodes of the standing electromagnetic wave formed due to interference of HF radio waves transmitted vertically and reflected from the ionosphere. The API are horizontally aligned with a vertical scale of one-half of the wavelength λ of the transmitted wave; for more details on the API method and its applications see Belikovich et al., Ionospheric Research by Means of Artificial Periodic Irregularities, Katlenburg-Lindau, Germany, 2002, Copernicus GmbH, ISBN 3-936586-03-9, 160 pp. Recently we have presented and experimentally realized a method to determine the sporadic E-layer ion composition, the molecular masses of the predominant metallic ions and the total ion densities on the basis of measurements of the amplitude and the decay time of the API signals. To study the structure of sporadic ionization layers in the E region, as well as the possibility and effectiveness of Es modification by high-power radio wave transmissions, we designed and carried out another experiment at the SURA facility (56.1° N, 44.1° E) in August 10-15, 2004. The ionosphere modification was done by O-mode waves using two SURA transmitters at the frequency 4.3 MHz with an effective radiated power (ERP) of about 60 MW and a transmitting schedule of 1 min on, 2 min off (so-called additional heating). The third transmitter was used for API formation and

  10. Microendoscopic lumbar discectomy: Technique and results of 188 cases

    Directory of Open Access Journals (Sweden)

    Arvind G Kulkarni

    2014-01-01

    Background: Discectomy performed open or with an operating microscope remains the standard surgical management. Tubular retractor systems are being increasingly used. Potential benefits include less muscle and local damage, better cosmesis, decreased pain and operative time, and faster recovery after surgery. We have evaluated the outcome of microendoscopic discectomy (MED) utilizing tubular retractors in terms of safety and efficacy of the technique. Materials and Methods: 188 consecutive patients who underwent surgery for herniated disc using the tubular retractors between April 2007 and April 2012 are reported. All patients had a preoperative MRI (Magnetic Resonance Imaging) and were operated by a single surgeon with the METRx system (Medtronic, Sofamor-Danek, Memphis, TN) using 18 and 16 mm ports. All patients were mobilized as soon as pain subsided and discharged within 24-48 hours post surgery. The results were evaluated using VAS (Visual Analog Scale, 0-5) for back and leg pain and ODI (Oswestry Disability Index). Patients were followed up at intervals of 1 week, 6 weeks, 3 months, 6 months, 12 months and 2 years. Results: The mean age of patients was 46 years (range 16-78 years) and the sex ratio was 1.5 males to 1 female. The mean follow-up was 22 months (range 8-69 months). The mean VAS score for leg pain improved from 4.14 to 0.76 (P < 0.05) and the mean VAS score for back pain improved from 4.1 to 0.9 (P < 0.05). The mean ODI changed from 59.5 to 22.6 (P < 0.05). The mean operative time per level was about 50 minutes (range 20-90 minutes). Dural punctures occurred in 11 (5%) cases. Average blood loss was 30 ml (range 10-500 ml). A wrong level was identified and later corrected in a case of revision discectomy. Four patients with residual disc herniation had revision MED and three patients with recurrent disc herniation later underwent fusion. One patient had a wound infection which needed debridement. Conclusion: MED for herniated discs

  11. Examining the Results of Podcast Relaxation Techniques in Higher Education

    Science.gov (United States)

    Ricks, Jenny; Naquin, Millie; Vest, Amy; Hurtt, Dee; Cole, Diane

    2011-01-01

    College students face many stressors such as academic course work, finances, living away from home, and planning for the future. Knowledge of stress management techniques can assist students in coping with such stressors, especially when disseminated through convenient technologies which are increasingly common in their personal and academic…

  12. Field Assessment Techniques for Bank Erosion Modeling

    Science.gov (United States)

    1990-11-22

    Field Assessment Techniques for Bank Erosion Modeling: First Interim Report, prepared for the US Army European Research Office. Includes sedimentation analysis sheets and guidelines for the use of sedimentation analysis sheets in the field, prepared for the US Army Engineer Waterways Experiment Station.

  13. Advanced interaction techniques for medical models

    OpenAIRE

    Monclús, Eva

    2014-01-01

    Advances in medical visualization allow the analysis of anatomical structures using 3D models reconstructed from a stack of intensity-based images acquired through different techniques, Computerized Tomography (CT) being one of the most common. A general medical volume graphics application usually includes an exploration task, which is sometimes preceded by an analysis process in which the anatomical structures of interest are first identified. ...

  14. Level of detail technique for plant models

    Institute of Scientific and Technical Information of China (English)

    Xiaopeng ZHANG; Qingqiong DENG; Marc JAEGER

    2006-01-01

    Realistic modelling and interactive rendering of forestry and landscape is a challenge in computer graphics and virtual reality. Recent developments in plant growth modelling and simulation lead to plant models faithful to botanical structure and development, representing not only the complex architecture of a real plant but also its functioning in interaction with its environment. The complex geometry and material of a large group of plants is a big burden even for high-performance computers, and they often overwhelm the numerical calculation power and graphic rendering power. Thus, in order to accelerate the rendering of a group of plants, software techniques are often developed. In this paper, we focus on plant organs, i.e. leaves, flowers, fruits and internodes. Our approach is a simplification process of all sparse organs at the same time, i.e. Level of Detail (LOD) and multi-resolution models for plants. We explain the principle and construction of plant simplification, which is used to build LOD and multi-resolution models of sparse organs and branches of big trees. These approaches benefit from basic knowledge of plant architecture, clustering tree organs according to biological structures. We illustrate the potential of our approach on several big virtual plants for geometrical compression or LOD model definition. Finally, we demonstrate the efficiency of the proposed LOD models for realistic rendering with a virtual scene composed of 184 mature trees.
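
    The organ-clustering simplification of the paper is not reproduced here; the sketch below only shows the generic selection step such multi-resolution plant models plug into at render time, choosing the coarsest representation whose projected geometric error stays within a pixel tolerance. All numbers are illustrative assumptions.

```python
# Generic LOD selection sketch (not the organ-clustering method of the paper):
# pick the coarsest plant model whose projected geometric error stays below a
# pixel tolerance for the current viewing distance.
import math

def projected_error_px(geometric_error, distance, fov_y_rad, screen_height_px):
    """Screen-space size in pixels of a world-space error at a given distance."""
    return geometric_error * screen_height_px / (2.0 * distance * math.tan(fov_y_rad / 2.0))

def select_lod(lod_errors, distance, fov_y_rad=math.radians(60),
               screen_height_px=1080, tolerance_px=1.0):
    """lod_errors: geometric error per LOD, ordered from finest (index 0) to coarsest."""
    for level in reversed(range(len(lod_errors))):        # try coarsest first
        if projected_error_px(lod_errors[level], distance,
                              fov_y_rad, screen_height_px) <= tolerance_px:
            return level
    return 0                                              # fall back to the full model

# usage: a tree with four representations whose errors grow as detail is removed
errors = [0.0, 0.02, 0.08, 0.3]                           # metres of simplification error
for d in (5, 25, 100, 400):
    print(f"distance {d:4d} m -> LOD {select_lod(errors, d)}")
```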

  15. Using data mining techniques for building fusion models

    Science.gov (United States)

    Zhang, Zhongfei; Salerno, John J.; Regan, Maureen A.; Cutler, Debra A.

    2003-03-01

    Over the past decade many techniques have been developed which attempt to predict possible events through the use of given models or patterns of activity. These techniques work quite well provided one has a model or a valid representation of activity. In reality, however, this is usually not the case. Models that do exist were in many cases hand crafted, required many man-hours to develop, and are very brittle in the dynamic world in which we live. Data mining techniques have shown some promise in providing a set of solutions. In this paper we provide the details of our motivation, theory and techniques which we have developed, as well as the results of a set of experiments.

  16. A general technique to train language models on language models

    NARCIS (Netherlands)

    Nederhof, MJ

    2005-01-01

    We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained automaton is minimized.
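
    The paper's construction is analytic; the sketch below only illustrates the simpler, sampling-based alternative of estimating a bigram (finite-state) model from strings drawn from a small, hypothetical PCFG, which conveys the idea of training one language model from another.

```python
# Illustrative sketch only: approximate a small PCFG by a bigram model estimated
# from sampled strings. (The paper's method is analytic, not sampling-based.)
import random
from collections import defaultdict

PCFG = {                       # rules: nonterminal -> list of (rhs, probability)
    "S": [(("NP", "VP"), 1.0)],
    "NP": [(("det", "noun"), 0.7), (("noun",), 0.3)],
    "VP": [(("verb", "NP"), 0.6), (("verb",), 0.4)],
}
TERMINALS = {"det": ["the", "a"], "noun": ["dog", "cat"], "verb": ["sees", "chases"]}

def sample(symbol="S", rng=random):
    """Sample a word sequence from the PCFG by recursive expansion."""
    if symbol in TERMINALS:
        return [rng.choice(TERMINALS[symbol])]
    rhss, probs = zip(*PCFG[symbol])
    rhs = rng.choices(rhss, weights=probs)[0]
    return [w for part in rhs for w in sample(part, rng)]

def train_bigram(n_samples=50000, seed=0):
    """Estimate bigram transition probabilities from sampled sentences."""
    rng = random.Random(seed)
    counts = defaultdict(lambda: defaultdict(int))
    for _ in range(n_samples):
        words = ["<s>"] + sample(rng=rng) + ["</s>"]
        for prev, cur in zip(words, words[1:]):
            counts[prev][cur] += 1
    return {p: {w: c / sum(nxt.values()) for w, c in nxt.items()}
            for p, nxt in counts.items()}

bigram = train_bigram()
print(bigram["<s>"])           # estimated start-of-sentence word distribution
```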

  17. Percutaneous peritoneovenous shunt positioning: technique and preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Orsi, Franco; Grasso, Rosario Francesco; Bonomo, Guido; Marinucci, Irene [Division of Radiology, European Institute of Oncology, Milan (Italy); Monti, Cinzia [Institute of Radiology, University of Milan (Italy); Bellomi, Massimo [Division of Radiology, European Institute of Oncology, Milan (Italy); Institute of Radiology, University of Milan (Italy)

    2002-05-01

    Nine peritoneovenous shunts were positioned by percutaneous technique in seven patients with advanced malignancy causing severe refractory ascites, and in two patients with hepatic cirrhosis (one with hepatocarcinoma). In all patients the shunts were percutaneously placed through the subclavian vein in the angiographic suite under digital fluoroscopic guide. No complications directly related to the procedure occurred. The shunt was successfully positioned in all patients in 60 min average time. No patient showed symptoms related to pulmonary overload or to disseminated intravascular coagulation. All patients had a significant improvement of the objective symptoms related to ascites such as respiratory symptoms, dyspepsia, and functional impairment to evacuation describing an improvement of their quality of life. Maximum shunt patency was 273 days. Percutaneous placement of peritoneovenous shunt is a safe, fast, and inexpensive procedure, extremely useful in resolution of refractory ascites, reducing symptoms, and allowing effective palliation, with a great improvement in quality of life. (orig.)

  18. Prediction of survival with alternative modeling techniques using pseudo values.

    Directory of Open Access Journals (Sweden)

    Tjeerd van der Ploeg

    BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM) and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root of the mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) to predict 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising
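
    The sketch below shows the core quantity the record relies on: jackknife pseudo-observations of the Kaplan-Meier survival probability at 60 months, which can then be fed to ordinary regression or machine-learning models as if they were uncensored outcomes. It uses a hand-rolled Kaplan-Meier estimator and synthetic data, not the HNSCC cohort.

```python
# Minimal sketch: jackknife pseudo-observations of the Kaplan-Meier survival
# probability at t = 60 months, computed on synthetic censored data.
import numpy as np

def km_at(t, time, event):
    """Kaplan-Meier estimate of S(t) for durations `time` and boolean `event`."""
    surv = 1.0
    for u in np.unique(time[event & (time <= t)]):
        at_risk = np.sum(time >= u)
        deaths = np.sum((time == u) & event)
        surv *= 1.0 - deaths / at_risk
    return surv

def pseudo_values(t, time, event):
    """theta_i = n * theta_hat - (n - 1) * theta_hat^(-i) (leave-one-out jackknife)."""
    n = len(time)
    s_full = km_at(t, time, event)
    pv = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        pv[i] = n * s_full - (n - 1) * km_at(t, time[mask], event[mask])
    return pv

# synthetic example: exponential survival times with random censoring
rng = np.random.default_rng(1)
true_t = rng.exponential(80, 300)
cens_t = rng.exponential(120, 300)
time = np.minimum(true_t, cens_t)
event = true_t <= cens_t
pv60 = pseudo_values(60.0, time, event)
print(f"KM S(60) = {km_at(60.0, time, event):.3f}, mean pseudo value = {pv60.mean():.3f}")
```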

  19. Quantitative model validation techniques: new insights

    CERN Document Server

    Ling, You

    2012-01-01

    This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
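
    Of the four method families mentioned above, the area metric is the easiest to sketch: it integrates the absolute difference between the model's predicted CDF and the empirical CDF of the experimental observations. The code below uses synthetic stand-ins for both the model output and the measurements.

```python
# Sketch of the area validation metric: integrate |F_model(x) - S_n(x)| over x,
# where F_model is the model's predicted CDF (here from samples) and S_n the
# empirical CDF of the experimental observations. Data below are synthetic.
import numpy as np

def area_metric(model_samples, data_samples):
    """Area between the model CDF and the empirical data CDF."""
    grid = np.sort(np.concatenate([model_samples, data_samples]))
    f_model = np.searchsorted(np.sort(model_samples), grid, side="right") / len(model_samples)
    f_data = np.searchsorted(np.sort(data_samples), grid, side="right") / len(data_samples)
    gap = np.abs(f_model - f_data)
    return float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(grid)))  # trapezoid rule

rng = np.random.default_rng(2)
model_out = rng.normal(10.0, 1.0, 10000)       # stochastic model prediction
experiments = rng.normal(10.4, 1.2, 30)        # a handful of measurements
print(f"area metric = {area_metric(model_out, experiments):.3f}")
```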

  20. Impact of Domain Modeling Techniques on the Quality of Domain Model: An Experiment

    Directory of Open Access Journals (Sweden)

    Hiqmat Nisa

    2016-10-01

    The unified modeling language (UML) is widely used to analyze and design different software development artifacts in object-oriented development. The domain model is a significant artifact that models the problem domain and visually represents real-world objects and the relationships among them. It facilitates the comprehension process by identifying the vocabulary and key concepts of the business world. The category list technique identifies concepts and associations with the help of predefined categories, which are important to business information systems, whereas the noun phrasing technique performs grammatical analysis of the use case description to recognize concepts and associations. Both of these techniques are used for the construction of the domain model; however, no empirical evidence exists that evaluates the quality of the resulting domain model constructed via these two basic techniques. A controlled experiment was performed to investigate the impact of the category list and noun phrasing techniques on the quality of the domain model. The constructed domain model is evaluated for completeness, correctness and the effort required for its design. The obtained results show that the category list technique is better than the noun phrasing technique for the identification of concepts, as it avoids generating unnecessary elements, i.e. extra concepts, associations and attributes in the domain model. The noun phrasing technique produces a comprehensive domain model and requires less effort as compared to the category list. There is no statistically significant difference between the two techniques in the case of correctness.

  2. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  3. Geometrical geodesy techniques in Goddard earth models

    Science.gov (United States)

    Lerch, F. J.

    1974-01-01

    The method for combining geometrical data with satellite dynamical and gravimetry data for the solution of geopotential and station location parameters is discussed. Geometrical tracking data (simultaneous events) from the global network of BC-4 stations are currently being processed in a solution that will greatly enhance the geodetic world system of stations. Previously the stations in Goddard earth models have been derived only from dynamical tracking data. A linear regression model is formulated for combining the data, based upon the statistical technique of weighted least squares. Reduced normal equations, independent of satellite and instrumental parameters, are derived for the solution of the geodetic parameters. Exterior standards for the evaluation of the solution and for the scale of the earth's figure are discussed.
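
    As a textbook sketch of the weighted least squares machinery the record refers to (not the specific GSFC formulation), with A the design matrix, W the data weight matrix and y the combined observation vector, the normal equations read:

```latex
\left( A^{\top} W A \right)\,\hat{x} \;=\; A^{\top} W\, y ,
\qquad
\operatorname{Cov}(\hat{x}) \;=\; \left( A^{\top} W A \right)^{-1}
\quad \text{(when } W \text{ is the inverse data covariance)}
```

    Partitioning these equations by parameter type and eliminating the satellite-arc and instrumental blocks (for example via the Schur complement) yields the reduced normal equations in the geodetic parameters alone.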

  4. A Novel Microvascular Flow Technique: Initial Results in Thyroids.

    Science.gov (United States)

    Machado, Priscilla; Segal, Sharon; Lyshchik, Andrej; Forsberg, Flemming

    2016-03-01

    To evaluate the flow imaging capabilities of a new prototype ultrasound (US) image processing technique (superb micro-vascular imaging [SMI]; Toshiba Medical Systems, Tokyo, Japan) for depiction of microvascular flow in normal thyroid tissue and thyroid nodules compared with standard color and power Doppler US imaging. Ten healthy volunteers and 22 patients, with a total of 25 thyroid nodules, scheduled for US-guided fine needle aspiration were enrolled in this prospective study. Subjects underwent US examination consisting of grayscale, color and power Doppler imaging (CDI and PDI) followed by color and monochrome SMI and pulsed Doppler. SMI is a novel, microvascular flow imaging mode implemented on the Aplio 500 US system (Toshiba). SMI uses advanced clutter suppression to extract flow signals from large to small vessels and depicts this information at high frame rates as a color overlay image or as a monochrome map of flow. Two radiologists independently scored still images and digital clips for overall flow detection, vessel branching details and noise on a visual-analog scale of 1 (worst) to 10 (best). For the volunteers, SMI visualized microvasculature with significantly lower velocity than CDI and PDI. SMI demonstrated microvascular flow with significantly higher image scores and provided better depiction of the vessel branching details compared with CDI and PDI, with higher scores in the monochrome SMI mode than in the other modes, including color SMI. The monochrome SMI mode consistently improved the depiction of thyroid microvascular flow compared with standard CDI and PDI.

  5. New diagnostic technique for Zeeman-compensated atomic beam slowing: technique and results

    OpenAIRE

    Molenaar, P.A.; Van Der Straten, P.; Heideman, H.G.M.; Metcalf, H

    2001-01-01

    We have developed a new diagnostic tool for the study of Zeeman-compensated slowing of an alkali atomic beam. Our time-of-flight technique measures the longitudinal velocity distribution of the slowed atoms with a resolution below the Doppler limit of 30 cm/s. Furthermore, it can map the position and velocity distribution of atoms in either ground hyperfine level inside the solenoid without any devices inside the solenoid. The technique reveals the optical pumping effects, and shows in de...

  6. Comparison of results from different NDE techniques from ceramic matrix composites with varying porosity levels

    Science.gov (United States)

    Smyth, Imelda; Ojard, Greg; Santhosh, Unni; Ahmad, Jalees; Gowayed, Yasser

    2015-03-01

    Ceramic matrix composites (CMCs) are attractive materials for use in advanced turbine engines. Due to the nature of available processing techniques, however, the amount and distribution of porosity in CMCs can vary greatly. This can be particularly true in parts with complex geometries. It is therefore important to characterize the porosity with non-destructive techniques and understand its effect on properties. A series of CMC samples were fabricated with varying levels of porosity and analyzed with different NDE techniques. The results were categorized and analyzed with respect to ease of interpretation and the degree to which they could be quantified and used in models to determine the effects of defects. The results were also correlated with microstructural examination and mechanical properties.

  7. New diagnostic technique for Zeeman-compensated atomic beam slowing: technique and results

    NARCIS (Netherlands)

    Molenaar, P.A.; Straten, P. van der; Heideman, H.G.M.; Metcalf, H.

    2001-01-01

    We have developed a new diagnostic tool for the study of Zeeman-compensated slowing of an alkali atomic beam. Our time-of-flight technique measures the longitudinal velocity distribution of the slowed atoms with a resolution below the Doppler limit of 30 cm/s. Furthermore, it can map the position

  8. Double bundle posterior cruciate ligament reconstruction: surgical technique and results.

    Science.gov (United States)

    Fanelli, Gregory C; Beck, John D; Edson, Craig J

    2010-12-01

    The keys to successful posterior cruciate ligament reconstruction are to identify and treat all pathology, use strong graft material, accurately place tunnels in anatomic insertion sites, minimize graft bending, use a mechanical graft tensioning device, use primary and back-up graft fixation, and use the appropriate postoperative rehabilitation program. Adherence to these technical principles results in successful single and double-bundle arthroscopic transtibial tunnel posterior cruciate ligament reconstruction based upon stress radiography, arthrometer, knee ligament rating scales, and patient satisfaction measurements.

  9. Ear surgery techniques results on hearing threshold improvement

    Directory of Open Access Journals (Sweden)

    Farhad Mokhtarinejad

    2013-01-01

    Full Text Available Background: Bone conduction (BC) threshold depression is not always due to sensorineural hearing loss; sometimes it is an artifact caused by middle ear pathologies and ossicular chain problems. In this research, the influence of ear surgeries on bone conduction was evaluated. Materials and Methods: This study was conducted as a clinical trial. Ear surgery was performed on 83 patients classified into four categories: stapedectomy, tympanomastoid surgery, and partial or total ossicular reconstruction with a Partial Ossicular Replacement Prosthesis (PORP) or Total Ossicular Replacement Prosthesis (TORP). Bone conduction thresholds were assessed at frequencies of 250, 500, 1000, 2000 and 4000 Hz before and after surgery. Results: In the stapedectomy group, the average BC threshold improved in all frequencies, by approximately 6 dB at 2000 Hz. In the tympanomastoid group, the BC threshold at 500, 1000 and 2000 Hz changed by 4 dB (P-value < 0.05). Moreover, in the PORP group, a 5 dB enhancement was seen at 1000 and 2000 Hz. In the TORP group, the results confirmed that the BC threshold improved in all frequencies, especially at 4000 Hz, by about 6.5 dB. Conclusion: According to the results of this study, a BC threshold shift was seen after several ear surgeries such as stapedectomy, tympanoplasty, PORP and TORP. The average BC improvement was approximately 5 dB. It must be considered that BC depression might happen because of ossicular chain problems. Therefore, by resolving middle ear pathologies, a better BC threshold is obtained and fewer hearing problems are encountered.

  10. First successful lower-extremity transplantation: technique and functional result.

    Science.gov (United States)

    Zuker, Ronald M; Redett, Rick; Alman, Ben; Coles, John G; Timoney, Norma; Ein, Sigmund H

    2006-05-01

    Composite tissue transplantation has emerged as a viable alternative to prosthetics and complex reconstructive surgery. Thus far it is reserved for cases which cannot be effectively reconstructed and where it offers some benefits over prostheses. It has been used in the upper extremity with encouraging results and, most recently, in the face. This report outlines what is believed to be the first such use in the lower extremity. A normal lower limb in a 3-month-old ischiopagus twin who was not going to survive was transplanted to the appropriate pelvic position, revascularized, and reinnervated in an otherwise healthy sister. The limb survived and, because of the immune compatibility, did not require immune suppressive therapy. The return of muscle function in the transplanted limb is encouraging. The transplanted limb appears to be fully sensate. In addition to reinnervation, the limb is now spontaneously under the cortical control of the recipient.

  11. Measuring 35S of Aerosol Sulfate: Techniques and First Results

    Science.gov (United States)

    Brothers, L. A.; Dominguez, G.; Bluen, B.; Corbin, A.; Abramian, A.; Thiemens, M. H.

    2007-12-01

    On a global and regional level, the cycling of sulfur in the environment has consequences for air quality and human health, and may contribute to global climate change. Due to its multiple oxidation states, the sulfur cycle is very complex and poorly understood. Stable isotopes are currently used to understand reaction pathways as well as sources and sinks of sulfurous compounds in the environment. Sulfur also has one short-lived (τ1/2 ~87 d) radioactive isotope (35S) which is continuously made in the atmosphere by the cosmic ray spallation of argon, is then quickly oxidized to 35SO2 and enters the atmospheric sulfur cycle. The short-lived radioactive nature of this isotope of sulfur provides a potentially powerful tracer for understanding the time scales at which sulfur is oxidized, deposited, and transported in the atmosphere and the deposition of atmospheric sulfate into rivers and water catchments. However, despite its potential, the use of 35S as a tracer of aerosol chemistry has not been fully exploited. Here we present details of the instrumental setup for measuring 35S in aerosol sulfate and some preliminary results of measurements of 35S abundances in aerosols from Riverside (inland) and La Jolla (coastal), CA, and discuss the sensitivity and limitations of the measurements in providing insights into day/night aerosol chemistry (Riverside) as well as the uptake of SO2 pollution in coastal environments by sea-salt aerosols. We also present preliminary results from measurements of sulfate in river water in Ecuador before and after precipitation events.

  12. Variances in the projections, resulting from CLIMEX, Boosted Regression Trees and Random Forests techniques

    Science.gov (United States)

    Shabani, Farzin; Kumar, Lalit; Solhjouy-fard, Samaneh

    2016-05-01

    The aim of this study was to comparatively investigate and evaluate the capabilities of correlative and mechanistic modeling processes, applied to the projection of future distributions of date palm in novel environments, and to establish a method of minimizing uncertainty in the projections of differing techniques. The study focuses, on a global scale, on Middle Eastern countries. We compared the mechanistic model CLIMEX (CL) with the correlative models MaxEnt (MX), Boosted Regression Trees (BRT), and Random Forests (RF) to project current and future distributions of date palm (Phoenix dactylifera L.). The Global Climate Model (GCM), the CSIRO-Mk3.0 (CS) using the A2 emissions scenario, was selected for making projections. Both indigenous and alien distribution data of the species were utilized in the modeling process. The common areas predicted by MX, BRT, RF, and CL from the CS GCM were extracted and compared to ascertain projection uncertainty levels of each individual technique. The common areas identified by all four modeling techniques were used to produce a map indicating suitable and unsuitable areas for date palm cultivation in Middle Eastern countries, for the present and the year 2100. The four different modeling approaches predict fairly different distributions. Projections from CL were more conservative than from MX. The BRT and RF were the most conservative methods in terms of projections for the current time. The combination of the final CL and MX projections for the present and 2100 provides higher certainty concerning those areas that will become highly suitable for future date palm cultivation. According to the four models, cold, hot, and wet stress, with differences on a regional basis, appear to be the major restrictions on future date palm distribution. The results demonstrate variances in the projections resulting from the different techniques. The assessment and interpretation of model projections requires reservations

  13. Variances in the projections, resulting from CLIMEX, Boosted Regression Trees and Random Forests techniques

    Science.gov (United States)

    Shabani, Farzin; Kumar, Lalit; Solhjouy-fard, Samaneh

    2017-08-01

    The aim of this study was to comparatively investigate and evaluate the capabilities of correlative and mechanistic modeling processes, applied to the projection of future distributions of date palm in novel environments, and to establish a method of minimizing uncertainty in the projections of differing techniques. The study focuses, on a global scale, on Middle Eastern countries. We compared the mechanistic model CLIMEX (CL) with the correlative models MaxEnt (MX), Boosted Regression Trees (BRT), and Random Forests (RF) to project current and future distributions of date palm (Phoenix dactylifera L.). The Global Climate Model (GCM), the CSIRO-Mk3.0 (CS) using the A2 emissions scenario, was selected for making projections. Both indigenous and alien distribution data of the species were utilized in the modeling process. The common areas predicted by MX, BRT, RF, and CL from the CS GCM were extracted and compared to ascertain projection uncertainty levels of each individual technique. The common areas identified by all four modeling techniques were used to produce a map indicating suitable and unsuitable areas for date palm cultivation in Middle Eastern countries, for the present and the year 2100. The four different modeling approaches predict fairly different distributions. Projections from CL were more conservative than from MX. The BRT and RF were the most conservative methods in terms of projections for the current time. The combination of the final CL and MX projections for the present and 2100 provides higher certainty concerning those areas that will become highly suitable for future date palm cultivation. According to the four models, cold, hot, and wet stress, with differences on a regional basis, appear to be the major restrictions on future date palm distribution. The results demonstrate variances in the projections resulting from the different techniques. The assessment and interpretation of model projections requires reservations

  14. Randomization techniques for assessing the significance of gene periodicity results

    Directory of Open Access Journals (Sweden)

    Vuokko Niko

    2011-08-01

    Full Text Available Abstract Background: Modern high-throughput measurement technologies such as DNA microarrays and next generation sequencers produce extensive datasets. With large datasets the emphasis has been moving from traditional statistical tests to new data mining methods that are capable of detecting complex patterns, such as clusters, regulatory networks, or time series periodicity. Study of periodic gene expression is an interesting research question that is also a good example of the challenges involved in the analysis of high-throughput data in general. Unlike for classical statistical tests, the distribution of the test statistic for data mining methods cannot be derived analytically. Results: We describe the randomization-based approach to significance testing, and show how it can be applied to detect periodically expressed genes. We present four randomization methods, three of which have previously been used for gene cycle data. We propose a new method for testing the significance of periodicity in gene expression short time series data, such as from gene cycle and circadian clock studies. We argue that the underlying assumptions behind existing significance testing approaches are problematic and some of them unrealistic. We analyze the theoretical properties of the existing and proposed methods, showing how our method can be robustly used to detect genes with exceptionally high periodicity. We also demonstrate the large differences in the number of significant results depending on the chosen randomization methods and parameters of the testing framework. By reanalyzing gene cycle data from various sources, we show how previous estimates on the number of gene cycle controlled genes are not supported by the data. Our randomization approach combined with the widely adopted Benjamini-Hochberg multiple testing method yields better predictive power and produces more accurate null distributions than previous methods. Conclusions: Existing methods for testing significance
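
    As a rough illustration of the randomization idea described above (not the specific methods evaluated in this record), the following Python sketch permutes the time points of a short expression series to build a null distribution for a simple periodogram-style score; the scoring function, candidate period, and sample data are all hypothetical.

    ```python
    import numpy as np

    def periodicity_score(x, period):
        """Power of a sinusoid fit at the candidate period (simple periodogram-style score)."""
        n = len(x)
        t = np.arange(n)
        w = 2.0 * np.pi / period
        xc = x - x.mean()
        return (xc @ np.cos(w * t)) ** 2 + (xc @ np.sin(w * t)) ** 2

    def permutation_p_value(x, period, n_perm=10000, seed=None):
        """Permute time points to build a null distribution for the periodicity score."""
        rng = np.random.default_rng(seed)
        observed = periodicity_score(x, period)
        null = np.array([periodicity_score(rng.permutation(x), period) for _ in range(n_perm)])
        # one-sided p-value with the usual +1 correction
        return (1 + np.sum(null >= observed)) / (n_perm + 1)

    # toy example: a short, noisy series with a period of 8 samples
    rng = np.random.default_rng(0)
    n = 24
    signal = np.sin(2 * np.pi * np.arange(n) / 8) + 0.5 * rng.normal(size=n)
    print("p =", permutation_p_value(signal, period=8, seed=1))
    ```

    Across many genes, the resulting p-values would then be passed to a multiple-testing procedure such as Benjamini-Hochberg, as the record describes.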

  15. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change in equipment, and the model can easily be applied to both manufacturing and service industries.
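
    The record describes selecting improvement techniques via a mixed integer program. A minimal sketch of that kind of binary selection model is shown below, assuming made-up gain and cost coefficients and the open-source PuLP solver interface; the actual four-stage productivity formulation in the paper is more elaborate.

    ```python
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary, value

    # Hypothetical data: productivity gain and cost for each candidate technique.
    techniques = ["5S", "TPM", "SMED", "Kaizen", "Training"]
    gain = {"5S": 0.04, "TPM": 0.07, "SMED": 0.05, "Kaizen": 0.03, "Training": 0.06}
    cost = {"5S": 10, "TPM": 40, "SMED": 25, "Kaizen": 8, "Training": 15}
    budget = 60

    model = LpProblem("technique_selection", LpMaximize)
    x = {t: LpVariable(f"use_{t}", cat=LpBinary) for t in techniques}

    model += lpSum(gain[t] * x[t] for t in techniques)            # maximize total gain
    model += lpSum(cost[t] * x[t] for t in techniques) <= budget  # budget constraint

    model.solve()
    selected = [t for t in techniques if value(x[t]) > 0.5]
    print("selected techniques:", selected)
    ```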

  16. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  17. Microwave Diffraction Techniques from Macroscopic Crystal Models

    Science.gov (United States)

    Murray, William Henry

    1974-01-01

    Discusses the construction of a diffractometer table and four microwave models which are built of styrofoam balls with implanted metallic reflecting spheres and designed to simulate the structures of carbon (graphite structure), sodium chloride, tin oxide, and palladium oxide. Included are samples of Bragg patterns and computer-analysis results.…

  18. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  19. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
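
    The probabilistic techniques referred to here are specialized NASA tools coupled to the SPACE model; as a generic, hedged illustration of the underlying idea, the sketch below propagates input uncertainties through a toy power-capability function by Monte Carlo sampling. All input distributions and the capability function are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Hypothetical uncertain inputs for a toy power-capability function
    solar_flux  = rng.normal(1361.0, 5.0, n)     # W/m^2
    array_area  = rng.normal(375.0, 2.0, n)      # m^2
    efficiency  = rng.normal(0.145, 0.004, n)    # cell efficiency
    degradation = rng.uniform(0.90, 0.98, n)     # lifetime degradation factor
    losses      = rng.normal(0.85, 0.02, n)      # distribution / conversion losses

    power_kw = solar_flux * array_area * efficiency * degradation * losses / 1000.0

    print(f"mean capability: {power_kw.mean():.1f} kW")
    print(f"5th-95th percentile: {np.percentile(power_kw, [5, 95])}")
    ```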

  20. Assessing atrophy measurement techniques in dementia: Results from the MIRIAD atrophy challenge

    Science.gov (United States)

    Cash, David M.; Frost, Chris; Iheme, Leonardo O.; Ünay, Devrim; Kandemir, Melek; Fripp, Jurgen; Salvado, Olivier; Bourgeat, Pierrick; Reuter, Martin; Fischl, Bruce; Lorenzi, Marco; Frisoni, Giovanni B.; Pennec, Xavier; Pierson, Ronald K.; Gunter, Jeffrey L.; Senjem, Matthew L.; Jack, Clifford R.; Guizard, Nicolas; Fonov, Vladimir S.; Collins, D. Louis; Modat, Marc; Cardoso, M. Jorge; Leung, Kelvin K.; Wang, Hongzhi; Das, Sandhitsu R.; Yushkevich, Paul A.; Malone, Ian B.; Fox, Nick C.; Schott, Jonathan M.; Ourselin, Sebastien

    2015-01-01

    Structural MRI is widely used for investigating brain atrophy in many neurodegenerative disorders, with several research groups developing and publishing techniques to provide quantitative assessments of this longitudinal change. Often techniques are compared through computation of required sample size estimates for future clinical trials. However interpretation of such comparisons is rendered complex because, despite using the same publicly available cohorts, the various techniques have been assessed with different data exclusions and different statistical analysis models. We created the MIRIAD atrophy challenge in order to test various capabilities of atrophy measurement techniques. The data consisted of 69 subjects (46 Alzheimer's disease, 23 control) who were scanned multiple (up to twelve) times at nine visits over a follow-up period of one to two years, resulting in 708 total image sets. Nine participating groups from 6 countries completed the challenge by providing volumetric measurements of key structures (whole brain, lateral ventricle, left and right hippocampi) for each dataset and atrophy measurements of these structures for each time point pair (both forward and backward) of a given subject. From these results, we formally compared techniques using exactly the same dataset. First, we assessed the repeatability of each technique using rates obtained from short intervals where no measurable atrophy is expected. For those measures that provided direct measures of atrophy between pairs of images, we also assessed symmetry and transitivity. Then, we performed a statistical analysis in a consistent manner using linear mixed effect models. The models, one for repeated measures of volume made at multiple time-points and a second for repeated “direct” measures of change in brain volume, appropriately allowed for the correlation between measures made on the same subject and were shown to fit the data well. From these models, we obtained estimates of the

  1. Assessing atrophy measurement techniques in dementia: Results from the MIRIAD atrophy challenge.

    Science.gov (United States)

    Cash, David M; Frost, Chris; Iheme, Leonardo O; Ünay, Devrim; Kandemir, Melek; Fripp, Jurgen; Salvado, Olivier; Bourgeat, Pierrick; Reuter, Martin; Fischl, Bruce; Lorenzi, Marco; Frisoni, Giovanni B; Pennec, Xavier; Pierson, Ronald K; Gunter, Jeffrey L; Senjem, Matthew L; Jack, Clifford R; Guizard, Nicolas; Fonov, Vladimir S; Collins, D Louis; Modat, Marc; Cardoso, M Jorge; Leung, Kelvin K; Wang, Hongzhi; Das, Sandhitsu R; Yushkevich, Paul A; Malone, Ian B; Fox, Nick C; Schott, Jonathan M; Ourselin, Sebastien

    2015-12-01

    Structural MRI is widely used for investigating brain atrophy in many neurodegenerative disorders, with several research groups developing and publishing techniques to provide quantitative assessments of this longitudinal change. Often techniques are compared through computation of required sample size estimates for future clinical trials. However interpretation of such comparisons is rendered complex because, despite using the same publicly available cohorts, the various techniques have been assessed with different data exclusions and different statistical analysis models. We created the MIRIAD atrophy challenge in order to test various capabilities of atrophy measurement techniques. The data consisted of 69 subjects (46 Alzheimer's disease, 23 control) who were scanned multiple (up to twelve) times at nine visits over a follow-up period of one to two years, resulting in 708 total image sets. Nine participating groups from 6 countries completed the challenge by providing volumetric measurements of key structures (whole brain, lateral ventricle, left and right hippocampi) for each dataset and atrophy measurements of these structures for each time point pair (both forward and backward) of a given subject. From these results, we formally compared techniques using exactly the same dataset. First, we assessed the repeatability of each technique using rates obtained from short intervals where no measurable atrophy is expected. For those measures that provided direct measures of atrophy between pairs of images, we also assessed symmetry and transitivity. Then, we performed a statistical analysis in a consistent manner using linear mixed effect models. The models, one for repeated measures of volume made at multiple time-points and a second for repeated "direct" measures of change in brain volume, appropriately allowed for the correlation between measures made on the same subject and were shown to fit the data well. From these models, we obtained estimates of the
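
    As a sketch of the kind of repeated-measures analysis described (a random intercept per subject with fixed effects for time), the following uses the statsmodels mixed linear model on simulated volume data; the column names, effect sizes, and model formula are assumptions, not the challenge's actual specification.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated stand-in for repeated volume measurements (hypothetical columns)
    rng = np.random.default_rng(0)
    subjects = np.repeat(np.arange(30), 5)                 # 30 subjects, 5 visits each
    time = np.tile(np.arange(5) * 0.25, 30)                # years since baseline
    group = np.repeat(rng.integers(0, 2, 30), 5)           # 0 = control, 1 = AD
    subject_offset = np.repeat(rng.normal(0, 30, 30), 5)   # subject-level random intercepts
    volume = 1100 + subject_offset - (5 + 15 * group) * time + rng.normal(0, 5, subjects.size)

    df = pd.DataFrame({"subject": subjects, "time": time, "group": group, "volume": volume})

    # Random intercept per subject; fixed effects for time and its interaction with group
    model = smf.mixedlm("volume ~ time * group", df, groups=df["subject"])
    result = model.fit()
    print(result.summary())
    ```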

  2. Compact Models and Measurement Techniques for High-Speed Interconnects

    CERN Document Server

    Sharma, Rohit

    2012-01-01

    Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is laid on the unified approach (variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book will give a qualitative summary of the various reported modeling techniques and approaches and will help researchers and graduate students with deeper insights into interconnect models in particular and interconnect in general. Time domain and frequency domain measurement techniques and simulation methodology are also explained in this book.

  3. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    Full Text Available The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.
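
    In the same spirit of letting software carry the qualitative analysis, the sketch below simulates a standard Rosenzweig-MacArthur predator-prey model (not the paper's ratio-dependent model) with SciPy, computes the interior equilibrium analytically, and checks its local stability from the eigenvalues of a numerically estimated Jacobian; all parameter values are made up.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Rosenzweig-MacArthur predator-prey model with illustrative parameters
    r, K, a, h, e, m = 1.0, 10.0, 1.2, 0.4, 0.6, 0.3

    def rhs(t, y):
        n, p = y
        pred = a * n / (1 + a * h * n)          # type-II functional response
        return [r * n * (1 - n / K) - pred * p, e * pred * p - m * p]

    # Interior (coexistence) equilibrium from setting both rates to zero
    n_star = m / (a * (e - m * h))
    p_star = r * (1 - n_star / K) * (1 + a * h * n_star) / a

    def jacobian(y, eps=1e-6):
        J = np.zeros((2, 2))
        f0 = np.array(rhs(0.0, y))
        for j in range(2):
            yp = np.array(y, dtype=float); yp[j] += eps
            J[:, j] = (np.array(rhs(0.0, yp)) - f0) / eps
        return J

    print("equilibrium:", (n_star, p_star))
    # eigenvalues with positive real part mean the equilibrium is unstable (oscillations / limit cycle)
    print("eigenvalues:", np.linalg.eigvals(jacobian([n_star, p_star])))

    # Simulate a trajectory to see the long-term behaviour
    sol = solve_ivp(rhs, (0, 200), [1.0, 1.0], rtol=1e-8)
    print("final state:", sol.y[:, -1])
    ```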

  4. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much shorter time (microseconds instead of hours/days).
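
    A minimal sketch of the surrogate-model idea, assuming a cheap stand-in function in place of the RISMC thermal-hydraulic codes: a Gaussian process is trained on a modest number of "expensive" runs and then answers new queries almost instantly, with an uncertainty estimate.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_simulation(x):
        """Stand-in for a long-running physics code (e.g., peak temperature vs. two inputs)."""
        return 800 + 150 * np.sin(3 * x[:, 0]) + 80 * x[:, 1] ** 2

    rng = np.random.default_rng(1)
    X_train = rng.uniform(0, 1, size=(40, 2))     # a modest number of full simulation runs
    y_train = expensive_simulation(X_train)

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.2, 0.2]),
                                  normalize_y=True).fit(X_train, y_train)

    # The surrogate now answers queries in microseconds instead of hours
    X_query = rng.uniform(0, 1, size=(5, 2))
    mean, std = gp.predict(X_query, return_std=True)
    for m, s, truth in zip(mean, std, expensive_simulation(X_query)):
        print(f"surrogate {m:7.1f} +/- {s:5.1f}   truth {truth:7.1f}")
    ```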

  5. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there see...
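
    As a hedged illustration of fitting and reading an MLM (not the article's own worked example), the sketch below fits the statsmodels MNLogit estimator to synthetic three-way choice data and inspects predicted probabilities, which are usually easier to interpret than the raw coefficients.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500

    # Two hypothetical predictors of a three-way strategic choice (0, 1, 2)
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    X = sm.add_constant(np.column_stack([x1, x2]))

    # Generate choices from a true multinomial logit so the fit has something to recover
    utilities = np.column_stack([np.zeros(n), 0.8 * x1 - 0.3 * x2, -0.5 * x1 + 1.0 * x2])
    probs = np.exp(utilities) / np.exp(utilities).sum(axis=1, keepdims=True)
    y = np.array([rng.choice(3, p=p) for p in probs])

    res = sm.MNLogit(y, X).fit(disp=False)
    print(res.summary())

    # Coefficients are relative to the base outcome; predicted probabilities read more naturally
    print(res.predict(X[:5]))
    ```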

  6. Separable Watermarking Technique Using the Biological Color Model

    Directory of Open Access Journals (Sweden)

    David Nino

    2009-01-01

    Full Text Available Problem statement: The issue of having robust and fragile watermarking is still a main focus for various researchers worldwide. The performance of a watermarking technique depends on how complex it is as well as how feasible it is to implement. These issues are tested using various kinds of attacks, including geometry and transformation. Watermarking techniques in color images are more challenging than in gray images in terms of complexity and information handling. In this study, we focused on the implementation of a watermarking technique in color images using a biological color model. Approach: We proposed a novel method for watermarking using the spatial and the Discrete Cosine Transform (DCT) domains. The proposed method dealt with colored images in the biological color model, Hue, Saturation and Intensity (HSI). The technique was implemented and tested against various color images, including standard ones such as the pepper image. The experiments were done using various attacks such as cropping, transformation and geometry. Results: The method showed high accuracy in data retrieval, and the technique is fragile against geometric attacks. Conclusion: Watermark security was increased by using the Hadamard transform matrix. The watermarks used were meaningful and of varying sizes and details.

  7. Techniques And Results For The Calibration Of The MST Prototype For The Cherenkov Telescope Array

    CERN Document Server


    2016-01-01

    The next generation instrument for ground-based gamma-ray astronomy will be the Cherenkov Telescope Array (CTA), consisting of approximately 100 telescopes in three sizes, built on two sites with one each in the Northern and Southern Hemispheres. Up to 40 of these will be Medium Size Telescopes (MSTs) which will dominate sensitivity in the core energy range. Since 2012, a full size mechanical prototype for the modified 12 m Davies-Cotton design MST has been in operation in Berlin. This document describes the techniques which have been implemented to calibrate and optimise the mechanical and optical performance of the prototype, and gives the results of over three years of observations and measurements. Pointing calibration techniques will be discussed, along with the development of a bending model, and calibration of the CCD cameras used for pointing measurements. Additionally, alignment of mirror segments using the Bokeh method is shown.

  8. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
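
    As a generic illustration of sampling-based global sensitivity analysis (not the specific methods compared at Sandia), the sketch below computes standardized regression coefficients for a toy behavioral model with three invented inputs; variance-based methods would additionally capture the interaction term that this simple measure underestimates.

    ```python
    import numpy as np

    def behavior_model(x):
        """Toy stand-in for a nonlinear behavioral model with three uncertain inputs."""
        trust, fatigue, noise = x[:, 0], x[:, 1], x[:, 2]
        return np.tanh(2.0 * trust) - 0.5 * fatigue ** 2 + 0.1 * noise + 0.8 * trust * fatigue

    rng = np.random.default_rng(7)
    n = 20_000
    X = rng.uniform(0, 1, size=(n, 3))
    y = behavior_model(X)

    # Standardized regression coefficients: a cheap global sensitivity measure.
    # (They miss pure interaction effects, which is why variance-based methods matter.)
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    for name, s in zip(["trust", "fatigue", "noise"], src):
        print(f"{name:8s} SRC = {s:+.3f}")
    ```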

  9. Establishment of C6 brain glioma models through stereotactic technique for laser interstitial thermotherapy research

    Directory of Open Access Journals (Sweden)

    Jian Shi

    2015-01-01

    Conclusion: The rat C6 brain glioma model established in this study is well suited to studying LITT of glioma. The infrared thermography technique measured temperature conveniently and effectively; it is noninvasive, and the obtained data can be further processed using the software used in LITT research. To measure deep-tissue temperature, combining a thermocouple with the infrared thermography technique would give better results.

  10. Results of hip arthroplasty using Paavilainen technique in patients with congenitally dislocated hip

    Directory of Open Access Journals (Sweden)

    R. M. Tikhilov

    2014-01-01

    Full Text Available The purpose of the study was to analyze the medium- and long-term results of hip arthroplasty using the Paavilainen technique in patients with congenitally dislocated hips. Methods: From 2001 to 2012, 180 operations were carried out using the Paavilainen technique in 140 patients with high dislocation of the hip (Crowe IV). All patients were clinically evaluated using the Harris Hip Score (HHS), VAS and radiography. Statistical analysis was performed using Pearson correlation coefficients, multiple regression analysis and classification tree analysis. Results: The average Harris score improved from 41.6 (40.3-43.5) preoperatively to 79.3 (77.9-82.7) at final follow-up, and the difference was significant. Early complications occurred in 9% of cases (the most frequent were fractures of the proximal femur) and late complications in 16.7% (pseudoarthrosis of the greater trochanter, 13.9%; dislocations, 1.1%; aseptic loosening of the components, 1.7%); reoperation was performed in 8.3% of cases. Factors such as age and limb length had a statistically significant effect on functional outcomes. The established predictive model makes it possible to obtain the best possible functional outcome in such patients with severe dysplasia. Conclusions: Total hip arthroplasty using the Paavilainen technique is an effective method of surgical treatment in patients with a congenitally dislocated hip, but it is a technically difficult operation with a high incidence of complications in comparison with standard primary total hip replacement.

  11. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focuses on different related concepts of formal modelling techniques in human-computer interaction.

  12. Interpolation techniques in robust constrained model predictive control

    Science.gov (United States)

    Kheawhom, Soorathep; Bumroongsri, Pornchai

    2017-05-01

    This work investigates interpolation techniques that can be employed in off-line robust constrained model predictive control for a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control optimization problems. A sequence of nested, corresponding robustly positive invariant sets, either ellipsoidal or polyhedral, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with the innermost set is applied. Otherwise, the feedback gain is determined by a linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results show that the proposed interpolation techniques significantly improve the control performance of off-line robust model predictive control without significantly sacrificing on-line computational performance.
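
    A simplified sketch of the selection-and-interpolation step only, assuming nested ellipsoidal invariant sets {x : x'Px <= 1} with gains ordered innermost-first; the interpolation law and set construction in the paper may differ, and all matrices below are made up.

    ```python
    import numpy as np

    def interpolated_gain(x, P_list, K_list):
        """
        P_list, K_list: nested ellipsoidal sets {x: x'Px <= 1} and their feedback gains,
        ordered from innermost to outermost. Returns the gain to apply at state x.
        """
        levels = [float(x @ P @ x) for P in P_list]
        # index of the innermost set containing x
        idx = next((i for i, v in enumerate(levels) if v <= 1.0), None)
        if idx is None:
            raise ValueError("state outside the outermost invariant set")
        if idx == 0:
            return K_list[0]                      # use the pre-computed innermost gain
        # convex blend of the two adjacent gains, continuous across set boundaries
        lam = (1.0 - levels[idx]) / (levels[idx - 1] - levels[idx])
        return lam * K_list[idx - 1] + (1.0 - lam) * K_list[idx]

    # tiny 2-state example with made-up sets and gains
    P_list = [np.diag([4.0, 4.0]), np.diag([1.0, 1.0]), np.diag([0.25, 0.25])]
    K_list = [np.array([[-1.2, -0.5]]), np.array([[-0.9, -0.4]]), np.array([[-0.6, -0.3]])]
    x = np.array([0.7, 0.5])
    u = interpolated_gain(x, P_list, K_list) @ x
    print("control input:", u)
    ```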

  13. Hydraulic fracture model comparison study: Complete results

    Energy Technology Data Exchange (ETDEWEB)

    Warpinski, N.R. [Sandia National Labs., Albuquerque, NM (United States)]; Abou-Sayed, I.S. [Mobil Exploration and Production Services (United States)]; Moschovidis, Z. [Amoco Production Co. (US)]; Parker, C. [CONOCO (US)]

    1993-02-01

    Large quantities of natural gas exist in low permeability reservoirs throughout the US. Characteristics of these reservoirs, however, make production difficult and often uneconomical, and stimulation is required. Because of the diversity of application, hydraulic fracture design models must be able to account for widely varying rock properties, reservoir properties, in situ stresses, fracturing fluids, and proppant loads. As a result, fracture simulation has emerged as a highly complex endeavor that must be able to describe many different physical processes. The objective of this study was to develop a comparative study of hydraulic-fracture simulators in order to provide stimulation engineers with the necessary information to make rational decisions on the type of models most suited for their needs. This report compares the fracture modeling results of twelve different simulators, some of them run in different modes, for eight separate design cases. Comparisons of length, width, height, net pressure, maximum width at the wellbore, average width at the wellbore, and average width in the fracture have been made, both for the final geometry and as a function of time. For the models in this study, differences in fracture length, height and width are often greater than a factor of two. In addition, several comparisons of the same model with different options show a large variability in model output depending upon the options chosen. Two comparisons were made of the same model run by different companies; in both cases the agreement was good. 41 refs., 54 figs., 83 tabs.

  14. A novel technique for rat liver transplantation using Quick Linker system: a preliminary result.

    Science.gov (United States)

    Oldani, Graziano; Maestri, Marcello; Gaspari, Annalisa; Lillo, Ettore; Angelastri, Giacomo; Lenti, Luca Matteo; Rademacher, Johannes; Alessiani, Mario; Dionigi, Paolo

    2008-10-01

    The clinical success of liver transplantation is founded upon years of experimental research. Since Kamada and colleagues developed the "two-cuff" technique, the rat has become the best model for extensive investigations. Although the Kamada technique is technically complex and not easy to master, it is still the mainstay of orthotopic liver transplantation in rodents. We have developed a modified three-cuff version of this technique that facilitates anastomosis and markedly reduces warm ischemia time. The new technique involves a set of five microinstruments (the Quick-Linker system) designed and manufactured by our group. It was tested in male Lewis rats (group 1, donors n = 10, recipients n = 10). The graft was explanted as usual and standard cuffs were attached to the portal vein and the supra- and infrahepatic vena cavae. Corresponding vessels in the recipient were isolated, and Quick-Linker holding rings were attached to each. The vessels were then clamped and the native organ removed. Once the graft was positioned in the recipient's abdomen, the holding rings attached to the recipient vessels and the cuffs applied to the graft vessels were automatically aligned and joined with the aid of a special alignment tool. Warm ischemia times were always less than 6 minutes. Survival at postoperative day 10 was 80%. Liver function was well preserved in all of the surviving rats. The Quick-Linker technique significantly shortens warm ischemia time and allows rapid anastomosis that is relatively independent of operator skill. It can be considered a reliable option for microsurgeons looking for quick results and high success rates.

  15. One technique for refining the global Earth gravity models

    Science.gov (United States)

    Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.

    2017-01-01

    The results of theoretical and experimental research on a technique for refining global Earth geopotential models such as EGM2008 in continental regions are presented. The discussed technique is based on high-resolution satellite data for the Earth's surface topography, which enables allowance for the fine structure of the Earth's gravitational field without additional gravimetry data. The experimental studies are conducted on the example of the new GGMplus global gravity model of the Earth with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.

  16. Modeling Malaysia's Energy System: Some Preliminary Results

    Directory of Open Access Journals (Sweden)

    Ahmad M. Yusof

    2011-01-01

    Full Text Available Problem statement: The current dynamic and fragile world energy environment necessitates the development of a new energy model that caters solely to analyzing Malaysia's energy scenarios. Approach: The model is a network flow model that traces the flow of energy carriers from their sources (import and mining) through conversion and transformation processes for the production of energy products to final destinations (energy demand sectors). The integration with the economic sectors is done exogenously by specifying the annual sectoral energy demand levels. The model in turn optimizes the energy variables for a specified objective function to meet those demands. Results: By minimizing the inter-temporal petroleum product imports for the crude oil system, the annual extraction level of Tapis blend is projected at 579,600 barrels per day. The aggregate demand for petroleum products is projected to grow at 2.1% per year, while motor gasoline and diesel constitute 42 and 38% of the petroleum products demand mix respectively over the 5-year planning period. Petroleum products import is expected to grow at 6.0% per year. Conclusion: The preliminary results indicate that the model performs as expected. Thus other types of energy carriers such as natural gas, coal and biomass will be added to the energy system for the overall development of the Malaysia energy model.

  17. Techniques and Simulation Models in Risk Management

    OpenAIRE

    Mirela GHEORGHE

    2012-01-01

    In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then continues in the setting of the practical reality, thus providing simulation models for a broad range of inherent risks specific to any organization and simulation of those models, using the informatics instrument @Risk (Palisade). The reason behind this research lies in the need for simulation models that will allow the person in charge with decision taking i...

  18. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  19. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  20. Modelling rainfall erosion resulting from climate change

    Science.gov (United States)

    Kinnell, Peter

    2016-04-01

    It is well known that soil erosion leads to agricultural productivity decline and contributes to water quality decline. The current widely used models for determining soil erosion for management purposes in agriculture focus on long term (~20 years) average annual soil loss and are not well suited to determining variations that occur over short timespans and as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA with a view to modelling rainfall erosion in areas subject to climate change.
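
    For reference, the kind of model mentioned here is multiplicative: the classic USLE predicts long-term average annual soil loss as the product of erosivity, erodibility, topographic, cover and practice factors, while event-based variants such as the USLE-M bring runoff into the erosivity term. The sketch below uses hypothetical factor values.

    ```python
    def usle_annual_soil_loss(R, K, LS, C, P):
        """Classic USLE: average annual soil loss A (t/ha/yr) as a product of factors."""
        return R * K * LS * C * P

    # Hypothetical factor values for a cultivated hillslope plot
    A = usle_annual_soil_loss(R=2500,   # rainfall-runoff erosivity (MJ.mm/(ha.h.yr))
                              K=0.03,   # soil erodibility (t.ha.h/(ha.MJ.mm))
                              LS=1.4,   # slope length and steepness factor
                              C=0.20,   # cover-management factor
                              P=1.0)    # support practice factor
    print(f"predicted average annual soil loss: {A:.1f} t/ha/yr")

    # Event-based variants such as the USLE-M scale the event erosivity by the runoff ratio
    # (runoff/rainfall), which is one way such models respond to climate-driven runoff change.
    def event_erosivity_usle_m(EI30, runoff_ratio):
        return runoff_ratio * EI30
    ```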

  1. Simulation Modeling of Radio Direction Finding Results

    Directory of Open Access Journals (Sweden)

    K. Pelikan

    1994-12-01

    Full Text Available It is sometimes difficult to determine analytically the error probabilities of direction finding results when evaluating algorithms of practical interest. Probabilistic simulation models are described in this paper that can be used to study the error performance of new direction finding systems or of geographical modifications to existing configurations.
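
    A minimal sketch of the probabilistic-simulation idea, assuming two stations, Gaussian bearing errors and simple triangulation; the geometry, error model and threshold are all invented, and real DF error models are considerably richer.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Two DF stations and a true emitter position (arbitrary units)
    s1, s2 = np.array([0.0, 0.0]), np.array([50.0, 0.0])
    target = np.array([30.0, 40.0])
    sigma_deg = 1.5          # assumed RMS bearing error of each station
    n = 20_000

    def intersect(p1, b1, p2, b2):
        """Intersection of two bearing lines given as angles (radians, math convention)."""
        d1 = np.array([np.cos(b1), np.sin(b1)])
        d2 = np.array([np.cos(b2), np.sin(b2)])
        A = np.column_stack([d1, -d2])
        t = np.linalg.solve(A, p2 - p1)
        return p1 + t[0] * d1

    d1, d2 = target - s1, target - s2
    true_b1, true_b2 = np.arctan2(d1[1], d1[0]), np.arctan2(d2[1], d2[0])

    errors = []
    for _ in range(n):
        b1 = true_b1 + np.deg2rad(rng.normal(0, sigma_deg))
        b2 = true_b2 + np.deg2rad(rng.normal(0, sigma_deg))
        fix = intersect(s1, b1, s2, b2)
        errors.append(np.linalg.norm(fix - target))

    errors = np.array(errors)
    print(f"median fix error: {np.median(errors):.2f}")
    print(f"P(error > 5): {np.mean(errors > 5):.3f}")
    ```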

  2. Radiation Hardening by Software Techniques on FPGAs: Flight Experiment Evaluation and Results

    Science.gov (United States)

    Schmidt, Andrew G.; Flatley, Thomas

    2017-01-01

    We present our work on implementing Radiation Hardening by Software (RHBSW) techniques on the PowerPC 440 processors of the Xilinx Virtex5 FPGAs on the SpaceCube 2.0 platform. The techniques have been matured and tested through simulation modeling, fault emulation, laser fault injection, and now in a flight experiment, as part of the Space Test Program-Houston 4-ISS SpaceCube Experiment 2.0 (STP-H4-ISE 2.0). This work leverages concepts such as heartbeat monitoring, control flow assertions, and checkpointing, commonly used in the High Performance Computing industry, and adapts them for use in remote sensing embedded systems. These techniques incur extremely low overhead. The flight experiment involved updating the software, remotely uploading the new experiment to the ISS SpaceCube 2.0 platform, and conducting the experiment continuously for 16 days before the platform was decommissioned. The experiment was conducted on two PowerPCs embedded within the Virtex5 FPGA devices; it collected 19,400 checkpoints, processed 253,482 status messages, and incurred 0 faults. These results are highly encouraging, and future work is looking into longer duration testing as part of the STP-H5 flight experiment.

  3. Engineering model development and test results

    Science.gov (United States)

    Wellman, John A.

    1993-08-01

    The correctability of the primary mirror spherical error in the Wide Field/Planetary Camera (WF/PC) is sensitive to the precise alignment of the incoming aberrated beam onto the corrective elements. Articulating fold mirrors that provide +/- 1 milliradian of tilt in 2 axes are required to allow for alignment corrections in orbit as part of the fix for the Hubble space telescope. An engineering study was made by Itek Optical Systems and the Jet Propulsion Laboratory (JPL) to investigate replacement of fixed fold mirrors within the existing WF/PC optical bench with articulating mirrors. The study contract developed the base line requirements, established the suitability of lead magnesium niobate (PMN) actuators and evaluated several tilt mechanism concepts. Two engineering model articulating mirrors were produced to demonstrate the function of the tilt mechanism to provide +/- 1 milliradian of tilt, packaging within the space constraints and manufacturing techniques including the machining of the invar tilt mechanism and lightweight glass mirrors. The success of the engineering models led to the follow on design and fabrication of 3 flight mirrors that have been incorporated into the WF/PC to be placed into the Hubble Space Telescope as part of the servicing mission scheduled for late 1993.

  4. A TECHNIQUE OF DIGITAL SURFACE MODEL GENERATION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    It is usually a time-consuming process to set up in real time a 3D digital surface model (DSM) of an object with a complex surface. On the basis of the architectural survey project "Chilin Nunnery Reconstruction", this paper investigates an easy and feasible way, that is, applying digital close range photogrammetry and CAD techniques on the project site to establish the DSM for simulating ancient architectures with complex surfaces. The method has been proved very effective in practice.

  5. A Comparison of Evolutionary Computation Techniques for IIR Model Identification

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2014-01-01

    Full Text Available System identification is a complex optimization problem which has recently attracted the attention in the field of science and engineering. In particular, the use of infinite impulse response (IIR models for identification is preferred over their equivalent FIR (finite impulse response models since the former yield more accurate models of physical plants for real world applications. However, IIR structures tend to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Evolutionary computation techniques (ECT are used to estimate the solution to complex optimization problems. They are often designed to meet the requirements of particular problems because no single optimization algorithm can solve all problems competitively. Therefore, when new algorithms are proposed, their relative efficacies must be appropriately evaluated. Several comparisons among ECT have been reported in the literature. Nevertheless, they suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. This study presents the comparison of various evolutionary computation optimization techniques applied to IIR model identification. Results over several models are presented and statistically validated.
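
    As one concrete instance of an evolutionary technique applied to IIR identification (differential evolution, which is related to but not identical to the algorithms compared in the paper), the sketch below recovers the coefficients of a synthetic second-order plant from noisy input/output data, penalizing unstable candidate filters.

    ```python
    import numpy as np
    from scipy.signal import lfilter
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(0)

    # Synthetic identification data from an "unknown" second-order IIR plant
    b_true, a_true = [0.05, 0.10], [1.0, -1.2, 0.6]
    x = rng.normal(size=2000)
    y = lfilter(b_true, a_true, x) + 0.01 * rng.normal(size=2000)   # measurement noise

    def mse(params):
        b0, b1, a1, a2 = params
        # penalize unstable candidate filters instead of evaluating them
        if np.any(np.abs(np.roots([1.0, a1, a2])) >= 1.0):
            return 1e6
        y_hat = lfilter([b0, b1], [1.0, a1, a2], x)
        return float(np.mean((y - y_hat) ** 2))

    result = differential_evolution(mse, bounds=[(-1, 1), (-1, 1), (-2, 2), (-1, 1)],
                                    seed=1, tol=1e-8)
    print("estimated [b0, b1, a1, a2]:", np.round(result.x, 3))
    print("true      [b0, b1, a1, a2]:", [0.05, 0.10, -1.2, 0.6])
    ```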

  6. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  7. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    for the whole frequency range. However, certain applications (like controller reduction) require frequency weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency weighted model reduction techniques include the lack of stability of reduced order models (for the two-sided weighting case) and the lack of frequency response error bounds. A new frequency weighted technique for balanced model reduction for discrete time systems is proposed. The proposed technique guarantees stable reduced order models even for the case when two-sided weightings are present. An efficient technique for computing frequency weighted Gramians is also proposed. Results are compared with other existing frequency weighted model reduction techniques for discrete time systems. Moreover, the proposed technique yields frequency response error bounds.

  8. A New Mathematical Modeling Technique for Pull Production Control Systems

    Directory of Open Access Journals (Sweden)

    O. Srikanth

    2013-12-01

    Full Text Available The Kanban Control System is widely used to control the release of parts in multistage manufacturing systems operating under a pull production control system. Most of the work on the Kanban Control System deals with multi-product manufacturing systems. In this paper, we propose a regression modeling technique for a multistage manufacturing system that coordinates the release of parts into each stage of the system with the arrival of customer demands for final products. We also compare two variants of the Kanban Control System model and combine mathematical and Simulink models for the production coordination of parts in assembly manufacturing systems. In both variants, the production of a new subassembly is authorized only when an assembly kanban is available. Assembly kanbans become available when finished product is consumed. A simulation environment for the product line system is generated with the proposed model, and the mathematical model is implemented against the simulation model in MATLAB. Both the simulation and model outputs provide an in-depth analysis of each resulting control system for a product line system.

  9. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study on active cooperation between primary users and secondary users, i.e., (CCRN), followed by the discussions on research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, they model the CCRN based on orthogonal modulation and orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  10. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)]

    2007-07-01

    Rigorously validating the accuracy of a metamodel is an important research area in metamodeling techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot quantitatively measure the fidelity of the metamodel. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique, because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, such that it can be used as a stop criterion for sequential sampling.
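
    A rough sketch of the idea of driving sequential sampling with the kriging model's own predictive mean and variance, using a scikit-learn Gaussian process as the kriging surrogate and maximum-variance (entropy-like) point selection; the actual validation criterion proposed in the record differs in its details, and the stopping threshold below is arbitrary.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def true_response(x):
        return np.sin(3 * x) + 0.3 * x ** 2     # stand-in for an expensive experiment

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 3, size=(6, 1))          # initial design
    y = true_response(X).ravel()
    X_dense = np.linspace(0, 3, 400).reshape(-1, 1)

    for it in range(20):
        gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True).fit(X, y)
        mean, std = gp.predict(X_dense, return_std=True)
        # validation / stopping measure built from the model's own predictive variance
        if std.max() < 0.02:
            print(f"stopping after {it} added samples; max predictive std = {std.max():.4f}")
            break
        # sequential (entropy-like) sampling: add the point where the model is least certain
        x_new = X_dense[np.argmax(std)].reshape(1, -1)
        X = np.vstack([X, x_new])
        y = np.append(y, true_response(x_new).ravel())
    ```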

  11. Use of surgical techniques in the rat pancreas transplantation model

    Institute of Scientific and Technical Information of China (English)

    Yi Ma; Zhi-Yong Guo

    2008-01-01

    BACKGROUND: Pancreas transplantation is currently considered to be the most reliable and effective treatment for insulin-dependent diabetes mellitus (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years. We investigated the surgical techniques of pancreas transplantation in rats by analysing the difference between cervical segmental pancreas transplantation and abdominal pancreaticoduodenal transplantation. METHODS: Two hundred and forty male adult Wistar rats weighing 200-300 g were used, 120 as donors and 120 as recipients. Sixty cervical segmental pancreas transplants and 60 abdominal pancreaticoduodenal transplants were carried out, and vessel anastomoses were made with microsurgical techniques. RESULTS: The time of donor pancreas harvesting in the cervical and abdominal groups was 31±6 and 37.6±3.8 min, respectively, and the lengths of the recipient operations were 49.2±5.6 and 60.6±7.8 min. The time for the donor operation was not significantly different (P>0.05), but the recipient operation time in the abdominal group was longer than that in the cervical group (P<0.05). CONCLUSIONS: Both pancreas transplantation methods are stable models for immunological and physiological studies in pancreas transplantation. Since each has its own advantages and disadvantages, the designer can choose the appropriate method according to the requirements of the study.

  12. Concerning the Feasibility of Example-driven Modelling Techniques

    CERN Document Server

    Thorne, Simon R; Lawson, Z

    2008-01-01

    We report on a series of experiments concerning the feasibility of example-driven modelling. The main aim was to establish experimentally, within an academic environment, the relationship between error and task complexity using (a) traditional spreadsheet modelling and (b) example-driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several different variables. The experimental results compare the performance indicators for the treatment and control groups by comparing accuracy, experience, training, confidence measures, perceived difficulty and perceived completeness. The various results are thoroughly tested for statistical significance using the Chi-squared test, Fisher's exact test, Cochran's Q test and McNemar's test on difficulty.
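
    For readers unfamiliar with the significance tests named above, the following toy sketch shows how such tests can be run with scipy/statsmodels; the contingency tables are invented and unrelated to the study's data.

```python
# Illustrative only (toy data, not the study's results): the significance
# tests named in the abstract, applied with scipy/statsmodels.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
from statsmodels.stats.contingency_tables import mcnemar, cochrans_q

# 2x2 table: rows = control/treatment group, columns = task correct/incorrect.
table = np.array([[12, 18],
                  [21,  9]])

chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

# Paired correct/incorrect outcomes for the same participants on two tasks.
paired = np.array([[15, 5],
                   [2,  8]])
p_mcnemar = mcnemar(paired, exact=True).pvalue

# Binary outcomes of the same participants across three task conditions.
outcomes = np.array([[1, 1, 0],
                     [1, 0, 0],
                     [1, 1, 1],
                     [0, 1, 0],
                     [1, 1, 0]])
p_cochran = cochrans_q(outcomes).pvalue

print(p_chi2, p_fisher, p_mcnemar, p_cochran)
```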

  13. Three-dimensional nipple-areola tattooing: a new technique with superior results.

    Science.gov (United States)

    Halvorson, Eric G; Cormican, Michael; West, Misti E; Myers, Vinnie

    2014-05-01

    Traditional coloring techniques for nipple-areola tattooing ignore the artistic principles of light and shadow to create depth on a two-dimensional surface. The method presented in this article is essentially the inverse of traditional technique and results in a more realistic and three-dimensional reconstruction that can appear better than surgical methods. The application of three-dimensional techniques or "realism" in tattoo artistry has significant potential to improve the aesthetic outcomes of reconstructive surgery.

  14. Infrared thermography for CFRP inspection: computational model and experimental results

    Science.gov (United States)

    Fernandes, Henrique C.; Zhang, Hai; Morioka, Karen; Ibarra-Castanedo, Clemente; López, Fernando; Maldague, Xavier P. V.; Tarpani, José R.

    2016-05-01

    Infrared Thermography (IRT) is a well-known Non-destructive Testing (NDT) technique. In recent decades, it has been widely applied in several fields including inspection of composite materials (CM), especially fiber-reinforced polymer matrix composites. Consequently, it is important to develop and improve efficient NDT techniques to inspect and assess the quality of CM parts in order to guarantee airworthiness and, at the same time, reduce costs for airline companies. In this paper, active IRT is used to inspect a carbon fiber-reinforced polymer (CFRP) laminate with artificial inserts (built-in sample) placed at different layers prior to manufacturing. Two optical active IRT techniques are used. The first is pulsed thermography (PT), which is the most widely utilized IRT technique. The second is a line-scan thermography (LST) technique: a dynamic technique which can be employed for the inspection of materials by heating a component, line-by-line, while acquiring a series of thermograms with an infrared camera. It is especially suitable for inspection of large parts as well as complex-shaped parts. A computational model developed using COMSOL Multiphysics® was used in order to simulate the inspections. Sequences obtained from PT and LST were processed using principal component thermography (PCT) for comparison. Results showed that it is possible to detect insertions of different sizes at different depths using both PT and LST IRT techniques.
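
    A minimal sketch of principal component thermography (PCT) applied to a thermogram sequence is given below; the synthetic cooling sequence and the preprocessing choices are assumptions, not the paper's setup.

```python
# Minimal PCT sketch, assuming a thermogram sequence of shape
# (n_frames, height, width); all data here are synthetic placeholders.
import numpy as np

def pct(sequence, n_components=5):
    nt, ny, nx = sequence.shape
    A = sequence.reshape(nt, ny * nx).astype(float)
    # Standardize each pixel's time history (common PCT preprocessing).
    A -= A.mean(axis=0)
    A /= A.std(axis=0) + 1e-12
    # Thin SVD: rows of Vt are the empirical orthogonal functions (EOFs).
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    eofs = Vt[:n_components].reshape(n_components, ny, nx)
    return eofs, S

# Synthetic demo: a cooling plate with a slower-cooling "defect" patch.
t = np.linspace(0.1, 5.0, 50)[:, None, None]            # (50, 1, 1)
frames = np.exp(-t) * np.ones((50, 64, 64))              # uniform cooling
frames[:, 20:30, 20:30] += 0.2 * np.exp(-0.3 * t)        # slower-cooling defect

eofs, singular_values = pct(frames, n_components=3)
print(eofs.shape)   # (3, 64, 64): a low-order EOF typically highlights the defect
```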

  15. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited but epsilon-near-zero and sub-unitary refraction index are also...... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index...

  16. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited but epsilon-near-zero and sub-unitary refraction index are also...... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits...

  17. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
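
    As a concrete example of one of the condensation techniques compared in the book, the sketch below applies static (Guyan) condensation to a small stiffness/mass system; the matrices and the master/slave partition are random placeholders.

```python
# Sketch of static (Guyan) condensation: eliminate "slave" DOFs from a
# stiffness/mass system. Matrices here are small random placeholders.
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)            # symmetric positive-definite "stiffness"
M = np.eye(n)                          # lumped "mass"

master = [0, 1, 2]                     # retained (master) DOFs
slave = [3, 4, 5]                      # condensed-out (slave) DOFs
order = master + slave

Kp = K[np.ix_(order, order)]           # permute so masters come first
Mp = M[np.ix_(order, order)]
Kss = Kp[len(master):, len(master):]
Ksm = Kp[len(master):, :len(master)]

# Guyan assumption: u_slave = -Kss^{-1} Ksm u_master
T = np.vstack([np.eye(len(master)), -np.linalg.solve(Kss, Ksm)])
K_red = T.T @ Kp @ T
M_red = T.T @ Mp @ T
print(K_red.shape, M_red.shape)        # (3, 3) reduced-order system matrices
```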

  18. Comparing visualization techniques for learning second language prosody – first results

    DEFF Research Database (Denmark)

    Niebuhr, Oliver; Alm, Maria Helena; Schümchen, Nathalie

    2017-01-01

    and then analyzed in terms of (a) prosodic-pattern consistency and (b) correctness of the prosodic patterns. In addition, the participants rated the usability of the visualization techniques. The results from the phonological analysis converged with the usability ratings in showing that iconic techniques...

  19. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  20. Theoretical modeling techniques and their impact on tumor immunology.

    Science.gov (United States)

    Woelke, Anna Lena; Murgueitio, Manuela S; Preissner, Robert

    2010-01-01

    Currently, cancer is one of the leading causes of death in industrial nations. While conventional cancer treatment usually results in the patient suffering from severe side effects, immunotherapy is a promising alternative. Nevertheless, some questions remain unanswered with regard to using immunotherapy to treat cancer hindering it from being widely established. To help rectify this deficit in knowledge, experimental data, accumulated from a huge number of different studies, can be integrated into theoretical models of the tumor-immune system interaction. Many complex mechanisms in immunology and oncology cannot be measured in experiments, but can be analyzed by mathematical simulations. Using theoretical modeling techniques, general principles of tumor-immune system interactions can be explored and clinical treatment schedules optimized to lower both tumor burden and side effects. In this paper, we aim to explain the main mathematical and computational modeling techniques used in tumor immunology to experimental researchers and clinicians. In addition, we review relevant published work and provide an overview of its impact to the field.

  1. Spoken Document Retrieval Leveraging Unsupervised and Supervised Topic Modeling Techniques

    Science.gov (United States)

    Chen, Kuan-Yu; Wang, Hsin-Min; Chen, Berlin

    This paper describes the application of two attractive categories of topic modeling techniques to the problem of spoken document retrieval (SDR), viz. document topic model (DTM) and word topic model (WTM). Apart from using the conventional unsupervised training strategy, we explore a supervised training strategy for estimating these topic models, imagining a scenario that user query logs along with click-through information of relevant documents can be utilized to build an SDR system. This attempt has the potential to associate relevant documents with queries even if they do not share any of the query words, thereby improving on retrieval quality over the baseline system. Likewise, we also study a novel use of pseudo-supervised training to associate relevant documents with queries through a pseudo-feedback procedure. Moreover, in order to lessen SDR performance degradation caused by imperfect speech recognition, we investigate leveraging different levels of index features for topic modeling, including words, syllable-level units, and their combination. We provide a series of experiments conducted on the TDT (TDT-2 and TDT-3) Chinese SDR collections. The empirical results show that the methods deduced from our proposed modeling framework are very effective when compared with a few existing retrieval approaches.
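
    A toy sketch of document-topic-model style retrieval scoring, in the spirit of the DTM approach described above, is given below; the vocabulary, topic-word matrix, and document-topic mixtures are invented for illustration.

```python
# Minimal sketch of document-topic-model (DTM) style retrieval scoring:
# documents are ranked by the query likelihood under each document's topic
# mixture. The tiny vocabulary and the matrices below are made up.
import numpy as np

vocab = {"budget": 0, "election": 1, "storm": 2, "flood": 3}
# P(word | topic): rows are topics, columns are vocabulary words.
topic_word = np.array([[0.45, 0.45, 0.05, 0.05],    # "politics" topic
                       [0.05, 0.05, 0.50, 0.40]])   # "weather" topic
# P(topic | document): one mixture per spoken document.
doc_topic = np.array([[0.9, 0.1],
                      [0.2, 0.8],
                      [0.5, 0.5]])

def score(query_words):
    """log P(query | doc) = sum_w log sum_t P(t|doc) P(w|t)."""
    idx = [vocab[w] for w in query_words if w in vocab]
    p_w_given_doc = doc_topic @ topic_word[:, idx]      # (n_docs, len(query))
    return np.log(p_w_given_doc).sum(axis=1)

print(score(["election", "budget"]))   # highest for the politics-heavy document
```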

  2. Gas discharges modeling by Monte Carlo technique

    Directory of Open Access Journals (Sweden)

    Savić Marija

    2010-01-01

    Full Text Available The basic assumption of the Townsend theory - that ions produce secondary electrons - is valid only in a very narrow range of the reduced electric field E/N. In accordance with the revised Townsend theory that was suggested by Phelps and Petrović, secondary electrons are produced in collisions of ions, fast neutrals, metastable atoms or photons with the cathode, or in gas phase ionizations by fast neutrals. In this paper we tried to build up a Monte Carlo code that can be used to calculate secondary electron yields for different types of particles. The obtained results are in good agreement with the analytical results of Phelps and Petrović [Plasma Sourc. Sci. Technol. 8 (1999) R1].
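
    A generic Monte Carlo sketch of estimating an effective secondary-electron yield from several incident species is shown below; the flux fractions and per-particle yields are made-up placeholders, not values from the paper.

```python
# Toy Monte Carlo sketch of an effective secondary-electron yield at the
# cathode, combining several incident species. The per-particle yields and
# flux fractions are invented placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(1)

species = ["ion", "fast_neutral", "metastable", "photon"]
flux_fraction = {"ion": 0.4, "fast_neutral": 0.35, "metastable": 0.15, "photon": 0.10}
gamma = {"ion": 0.05, "fast_neutral": 0.02, "metastable": 0.3, "photon": 0.08}

n_particles = 200_000
kinds = rng.choice(species, size=n_particles, p=[flux_fraction[s] for s in species])
# Each arrival emits a secondary electron with probability gamma[kind].
emitted = rng.random(n_particles) < np.array([gamma[k] for k in kinds])

effective_yield = emitted.mean()
analytic = sum(flux_fraction[s] * gamma[s] for s in species)
print(f"MC estimate {effective_yield:.4f} vs analytic {analytic:.4f}")
```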

  3. The Danish national passenger modelModel specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level......, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest...... a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...
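
    To illustrate the cost-damping idea mentioned above, the sketch below evaluates a piecewise-linear spline in log(cost) as a disutility term; the knots and coefficients are placeholders, not the estimated parameters of the Danish National Model.

```python
# Sketch of cost damping with a piecewise-linear spline in log(cost).
# Knots and coefficients are illustrative placeholders only.
import numpy as np

knots = np.array([1.0, 2.0, 3.0, 4.0])            # knots in log(cost) space
betas = np.array([-1.2, -0.8, -0.5, -0.3, -0.2])  # slope (marginal disutility) per segment

def spline_utility(cost):
    """Piecewise-linear disutility in log(cost): the slope flattens for
    costlier trips, which is the cost-damping effect."""
    x = np.log(np.asarray(cost, dtype=float))
    bounds = np.concatenate(([0.0], knots, [np.inf]))
    u = np.zeros_like(x)
    for b, lo, hi in zip(betas, bounds[:-1], bounds[1:]):
        u += b * np.clip(x - lo, 0.0, hi - lo)
    return u

print(spline_utility([5, 20, 100, 500]))   # disutility grows ever more slowly with cost
```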

  4. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced....

  5. Meteorological Uncertainty of atmospheric Dispersion model results (MUD)

    DEFF Research Database (Denmark)

    Havskov Sørensen, Jens; Amstrup, Bjarne; Feddersen, Henrik

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the ‘most likely’ dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble......

  6. Updates on measurements and modeling techniques for expendable countermeasures

    Science.gov (United States)

    Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.

    2016-10-01

    The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted to those commonly used in IR band measurements. For example, the visual band measurements require higher fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.

  7. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    N.RATHIKA

    2014-07-01

    Full Text Available This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using the particle swarm optimization (PSO) technique with a constriction factor. Using a polyphase synchronous generator allows the total power circulating in the system to be distributed across all phases. Another advantage of a polyphase system is that a fault in one winding does not lead to system shutdown. Process optimization is the discipline of adjusting a process so as to optimize a stipulated set of parameters without violating constraints. Accurate parameter values can be extracted using PSO and the problem can be reformulated accordingly. Modeling and simulation of the machine are carried out, and MATLAB/Simulink is used to implement and validate the results.
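
    A minimal particle swarm optimization loop with Clerc's constriction factor is sketched below; the generator power-loss objective is replaced by a generic test function (an assumption), but the velocity update is the standard constriction-factor form.

```python
# Minimal PSO with a constriction factor, minimizing a stand-in loss function.
import numpy as np

rng = np.random.default_rng(42)

def loss(x):                       # placeholder for the generator power-loss model
    return np.sum(x**2, axis=-1)

dim, n_particles, iters = 4, 30, 200
phi1 = phi2 = 2.05
phi = phi1 + phi2
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))   # ~0.7298

x = rng.uniform(-5, 5, size=(n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), loss(x)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = chi * (v + phi1 * r1 * (pbest - x) + phi2 * r2 * (gbest - x))
    x = x + v
    val = loss(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best loss:", loss(gbest))
```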

  8. Modeling Results for the ITER Cryogenic Fore Pump

    Science.gov (United States)

    Zhang, Dongsheng

    The work presented here is the analysis and modeling of the ITER-Cryogenic Fore Pump (CFP), also called Cryogenic Viscous Compressor (CVC). Unlike common cryopumps that are usually used to create and maintain vacuum, the cryogenic fore pump is designed for ITER to collect and compress hydrogen isotopes during the regeneration process of the torus cryopumps. Different from common cryopumps, the ITER-CFP works in the viscous flow regime. As a result, both adsorption boundary conditions and transport phenomena contribute unique features to the pump performance. In this report, the physical mechanisms of cryopumping are studied, especially the diffusion-adsorption process, and these are coupled with the standard equations of species, momentum and energy balance, as well as the equation of state. Numerical models are developed, which include highly coupled non-linear conservation equations of species, momentum, and energy, and the equation of state. Thermal and kinetic properties are treated as functions of temperature, pressure, and composition of the gas fluid mixture. To solve such a set of equations, a novel numerical technique, identified as the Group-Member numerical technique, is proposed. This document presents three numerical models: a transient model, a steady state model, and a hemisphere (or molecular flow) model. The first two models are developed based on analysis of the raw experimental data while the third model is developed as a preliminary study. The modeling results are compared with available experimental data for verification. The models can be used for cryopump design, and can also benefit the study of problems such as loss of vacuum in a cryomodule or cryogenic desublimation. The scientific and engineering investigation being done here builds connections between Mechanical Engineering and other disciplines, such as Chemical Engineering, Physics, and Chemistry.

  9. Modeling results for the ITER cryogenic fore pump

    Science.gov (United States)

    Zhang, D. S.; Miller, F. K.; Pfotenhauer, J. M.

    2014-01-01

    The cryogenic fore pump (CFP) is designed for ITER to collect and compress hydrogen isotopes during the regeneration process of torus cryopumps. Different from common cryopumps, the ITER-CFP works in the viscous flow regime. As a result, both adsorption boundary conditions and transport phenomena contribute unique features to the pump performance. In this report, the physical mechanisms of cryopumping are studied, especially the diffusion-adsorption process, and these are coupled with standard equations of species, momentum and energy balance, as well as the equation of state. Numerical models are developed, which include highly coupled non-linear conservation equations of species, momentum, and energy, and the equation of state. Thermal and kinetic properties are treated as functions of temperature, pressure, and composition. To solve such a set of equations, a novel numerical technique, identified as the Group-Member numerical technique, is proposed. A 1D numerical model is presented here. The results include a comparison with the experimental data for pure hydrogen flow and a prediction for hydrogen flow with trace helium. An advanced 2D model and a detailed explanation of the Group-Member technique are to be presented in subsequent papers.

  10. Symmetry and partial order reduction techniques in model checking Rebeca

    NARCIS (Netherlands)

    Jaghouri, M.M.; Sirjani, M.; Mousavi, M.R.; Movaghar, A.

    2007-01-01

    Rebeca is an actor-based language with formal semantics that can be used in modeling concurrent and distributed software and protocols. In this paper, we study the application of partial order and symmetry reduction techniques to model checking dynamic Rebeca models. Finding symmetry based equivalen

  11. Prediction of survival with alternative modeling techniques using pseudo values

    NARCIS (Netherlands)

    T. van der Ploeg (Tjeerd); F.R. Datema (Frank); R.J. Baatenburg de Jong (Robert Jan); E.W. Steyerberg (Ewout)

    2014-01-01

    textabstractBackground: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo

  12. In vivo dosimetry for total body irradiation: five-year results and technique comparison.

    Science.gov (United States)

    Patel, Reshma P; Warry, Alison J; Eaton, David J; Collis, Christopher H; Rosenberg, Ivan

    2014-07-08

    The aim of this work is to establish if the new CT-based total body irradiation (TBI) planning techniques used at University College London Hospital (UCLH) and Royal Free Hospital (RFH) are comparable to the previous technique at the Middlesex Hospital (MXH) by analyzing predicted and measured diode results. TBI aims to deliver a homogeneous dose to the entire body, typically using extended SSD fields with beam modulation to limit doses to organs at risk. In vivo dosimetry is used to verify the accuracy of delivered doses. In 2005, when the Middlesex Hospital was decommissioned and merged with UCLH, both UCLH and the RFH introduced updated CT-planned TBI techniques, based on the old MXH technique. More CT slices and in vivo measurement points were used by both; UCLH introduced a beam modulation technique using MLC segments, while RFH updated to a combination of lead compensators and bolus. Semiconductor diodes were used to measure entrance and exit doses in several anatomical locations along the entire body. Diode results from both centers for over five years of treatments were analyzed and compared to the previous MXH technique for accuracy and precision of delivered doses. The most stable location was the field center with standard deviations of 4.1% (MXH), 3.7% (UCLH), and 1.7% (RFH). The least stable position was the ankles. Mean variation with fraction number was within 1.5% for all three techniques. In vivo dosimetry can be used to verify complex modulated CT-planned TBI, and demonstrate improvements and limitations in techniques. The results show that the new UCLH technique is no worse than the previous MXH one and comparable to the current RFH technique.

  13. CMS standard model Higgs boson results

    Directory of Open Access Journals (Sweden)

    Garcia-Abia Pablo

    2013-11-01

    Full Text Available In July 2012 CMS announced the discovery of a new boson with properties resembling those of the long-sought Higgs boson. The analysis of the proton-proton collision data recorded by the CMS detector at the LHC, corresponding to integrated luminosities of 5.1 fb−1 at √s = 7 TeV and 19.6 fb−1 at √s = 8 TeV, confirms the Higgs-like nature of the new boson, with a signal strength associated with vector bosons and fermions consistent with the expectations for a standard model (SM) Higgs boson, and spin-parity clearly favouring the scalar nature of the new boson. In this note I review the updated results of the CMS experiment.

  14. Use of surgical techniques in the rat pancreas transplantation model

    National Research Council Canada - National Science Library

    Ma, Yi; Guo, Zhi-Yong

    2008-01-01

    ... (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years...

  15. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    in the wellbore); and (3) accurate approaches to account for the effects of reservoir heterogeneity and for the optimization of nonconventional well deployment. An overview of our progress in each of these main areas is as follows. A general purpose object-oriented research simulator (GPRS) was developed under this project. The GPRS code is managed using modern software management techniques and has been deployed to many companies and research institutions. The simulator includes general black-oil and compositional modeling modules. The formulation is general in that it allows for the selection of a wide variety of primary and secondary variables and accommodates varying degrees of solution implicitness. Specifically, we developed and implemented an IMPSAT procedure (implicit in pressure and saturation, explicit in all other variables) for compositional modeling as well as an adaptive implicit procedure. Both of these capabilities allow for efficiency gains through selective implicitness. The code treats cell connections through a general connection list, which allows it to accommodate both structured and unstructured grids. The GPRS code was written to be easily extendable so new modeling techniques can be readily incorporated. Along these lines, we developed a new dual porosity module compatible with the GPRS framework, as well as a new discrete fracture model applicable for fractured or faulted reservoirs. Both of these methods display substantial advantages over previous implementations. Further, we assessed the performance of different preconditioners in an attempt to improve the efficiency of the linear solver. As a result of this investigation, substantial improvements in solver performance were achieved.

  16. Nerve-sparing techniques and results in robot-assisted radical prostatectomy

    Science.gov (United States)

    Aytac, Omer; Atug, Fatih

    2016-01-01

    Nerve-sparing techniques in robot-assisted radical prostatectomy (RARP) have advanced with the developments defining the prostate anatomy and robotic surgery in recent years. In this review we discussed the surgical anatomy, current nerve-sparing techniques and results of these operations. It is important to define the right and key anatomic landmarks for nerve-sparing in RARP which can demonstrate individual variations. The patients' risk assessment before the operation and intraoperative anatomic variations may affect the nerve-sparing technique, nerve-sparing degree and the approach. There is lack of randomized control trials for different nerve-sparing techniques and approaches in RARP, therefore accurate preoperative and intraoperative assessment of the patient is crucial. Current data shows that, performing the maximum possible nerve-sparing using athermal techniques have better functional outcomes. PMID:27995221

  17. Application of experimental design techniques to structural simulation meta-model building using neural network

    Institute of Scientific and Technical Information of China (English)

    费庆国; 张令弥

    2004-01-01

    Neural networks are being used to construct meta-models in numerical simulation of structures. In addition to network structures and training algorithms, training samples also greatly affect the accuracy of neural network models. In this paper, some existing main sampling techniques are evaluated, including techniques based on experimental design theory, random selection, and rotating sampling. First, the advantages and disadvantages of each technique are reviewed. Then, seven techniques are used to generate samples for training radial neural network models for two benchmarks: an antenna model and an aircraft model. Results show that the uniform design, when both the number of samples and the mean squared error of the network models are considered, is the best sampling technique for neural-network-based meta-model building.
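
    The sketch below illustrates the kind of comparison described above: two sampling strategies, plain random sampling and a space-filling Latin hypercube design (used here as a stand-in for a uniform design), for training a radial-basis metamodel of an assumed test function.

```python
# Illustrative sketch (not the paper's benchmarks): compare random sampling
# with a space-filling design for training a radial-basis metamodel.
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def simulator(X):                                  # stand-in for a structural analysis
    return np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1]) + 0.5 * X[:, 0]

rng = np.random.default_rng(7)
n_train, n_test = 40, 500
X_test = rng.uniform(0, 1, size=(n_test, 2))
y_test = simulator(X_test)

designs = {
    "random": rng.uniform(0, 1, size=(n_train, 2)),
    "latin_hypercube": qmc.LatinHypercube(d=2, seed=7).random(n_train),
}
for name, X_train in designs.items():
    model = RBFInterpolator(X_train, simulator(X_train))
    rmse = np.sqrt(np.mean((model(X_test) - y_test) ** 2))
    print(f"{name:16s} RMSE = {rmse:.4f}")
```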

  18. Nurse Practitioners' Use of Communication Techniques: Results of a Maryland Oral Health Literacy Survey.

    Directory of Open Access Journals (Sweden)

    Laura W Koo

    Full Text Available We examined nurse practitioners' use and opinions of recommended communication techniques for the promotion of oral health as part of a Maryland state-wide oral health literacy assessment. Use of recommended health-literate and patient-centered communication techniques has demonstrated improved health outcomes. A 27-item self-report survey, containing 17 communication technique items across 5 domains, was mailed to 1,410 licensed nurse practitioners (NPs) in Maryland in 2010. Use of communication techniques and opinions about their effectiveness were analyzed using descriptive statistics. General linear models explored provider and practice characteristics to predict differences in the total number and the mean number of communication techniques routinely used in a week. More than 80% of NPs (N = 194) routinely used 3 of the 7 basic communication techniques: simple language, limiting teaching to 2-3 concepts, and speaking slowly. More than 75% of respondents believed that 6 of the 7 basic communication techniques are effective. Sociodemographic provider characteristics and practice characteristics were not significant predictors of the mean number or the total number of communication techniques routinely used by NPs in a week. Potential predictors for using more of the 7 basic communication techniques, demonstrating significance in one general linear model each, were: assessing the office for user-friendliness and ever taking a communication course in addition to nursing school. NPs in Maryland self-reported routinely using some recommended health-literate communication techniques, with belief in their effectiveness. Our findings suggest that having assessed the office for patient-friendliness or having taken a communication course beyond initial education may predict use of more of the 7 basic communication techniques. These self-reported findings should be validated with observational studies. Graduate and continuing

  19. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and other man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main geomatics approaches are used for virtual 3D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; in the third method, researchers use terrestrial images through close-range photogrammetry with DSM and texture mapping. We start this paper with an introduction to the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data-input techniques (photogrammetry and laser techniques). After a detailed study of these, we give the conclusions of this study, together with a short justification and analysis and the present trends in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3D city models using geomatics techniques and the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point-cloud models are a modern trend for virtual 3D city models. Photo-realistic, Scalable, Geo-referenced virtual 3

  20. OSCILLATION RESULTS RELATED TO INTEGRAL AVERAGING TECHNIQUE FOR EVEN ORDER NEUTRAL DIFFERENTIAL EQUATION WITH DEVIATING ARGUMENTS

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we study an even order neutral differential equation with deviating arguments, and obtain new oscillation results without the assumptions which were required for related results given before. Our results extend and improve many known oscillation criteria, based on the standard integral averaging technique.

  1. Data Assimilation Techniques for Ionospheric Reference Scenarios - project overview and first results

    Science.gov (United States)

    Gerzen, Tatjana; Mainul Hoque, M.; Wilken, Volker; Minkwitz, David; Schlüter, Stefan

    2015-04-01

    The European Geostationary Navigation Overlay Service (EGNOS) is the European Satellite Based Augmentation Service (SBAS) that provides value added services, in particular to Safety of Life (SoL) users of the Global Navigation Satellite Systems (GNSS). In the frame of the European GNSS Evolution Programme (EGEP), ESA has launched several activities which aim to support the design, development and qualification of the future operational EGNOS infrastructure and associated services. The ionosphere is the part of the Earth's upper atmosphere between about 50 km and 1000 km above the Earth's surface, which contains sufficient free electrons to have a strong impact on radio signal propagation. Therefore, treatment of the ionosphere is a critical issue to guarantee the EGNOS system performance. In order to conduct the EGNOS end-to-end performance simulations and to assure the capability for maintaining integrity of the EGNOS system especially during ionospheric storm conditions, Ionospheric Reference Scenarios (IRSs) are introduced by ESA. The project Data Assimilation Techniques for Ionospheric Reference Scenarios (DAIS) aims to generate improved EGNOS IRSs by combining space borne and ground based GNSS observations. The main focus of this project is to demonstrate that ionospheric radio occultation (IRO) measurements can significantly contribute to fill data gaps in GNSS ground networks (particularly in Africa and over the oceans) when generating the IRSs. The primary tasks are the calculation and validation of time series of IRSs (i.e. TEC maps) by a 3D assimilation approach that combines IRO and ground based GNSS measurements with an ionospheric background model in an optimal way. In the first phase of the project we selected appropriate test periods, one representing perturbed and the other nominal ionospheric conditions, and collected and filtered the corresponding data. We defined and developed an applicable technique for the 3D assimilation and applied

  2. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency and time domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  3. Nerve-sparing techniques and results in robot-assisted radical prostatectomy

    OpenAIRE

    Tavukçu, Hasan Hüseyin; Aytac, Omer; Atug, Fatih

    2016-01-01

    Nerve-sparing techniques in robot-assisted radical prostatectomy (RARP) have advanced with the developments defining the prostate anatomy and robotic surgery in recent years. In this review we discussed the surgical anatomy, current nerve-sparing techniques and results of these operations. It is important to define the right and key anatomic landmarks for nerve-sparing in RARP which can demonstrate individual variations. The patients' risk assessment before the operation and intraoperative an...

  4. Liver metastases: interventional therapeutic techniques and results, state of the art

    Energy Technology Data Exchange (ETDEWEB)

    Vogl, T.J.; Mueller, P.K.; Mack, M.G.; Straub, R.; Engelmann, K. [Dept. of Radiology, Univ. of Frankfurt (Germany); Neuhaus, P. [Dept. of Surgery, Humboldt University of Berlin (Germany)

    1999-05-01

    The liver is the most common site of metastatic tumour deposits. Hepatic metastases are the major cause of morbidity and mortality in patients with gastrointestinal carcinomas and other malignant tumours. The rationale and results for interventional therapeutic techniques in the treatment of liver metastases are presented. For the treatment of patients with irresectable liver metastases, alternative local ablative therapeutic modalities have been developed. Technique and results of local interventional therapies are presented such as microwave-, radiofrequency (RF)- and ultrasound ablation, and laser-induced interstitial therapy (LITT), cryotherapy and local drug administration such as alcohol injection, endotumoral chemotherapy and regional chemoembolisation. In addition to cryotherapy, all ablative techniques can be performed percutaneously with low morbidity and mortality. Cryotherapy is an effective and precise technique for inducing tumour necrosis, but it is currently performed via laparotomy. Percutaneous local alcohol injection results in an inhomogeneous distribution in liver metastases with unreliable control rates. Local chemotherapeutic drug instillation and regional chemoembolisation produces relevant but non-reproducible lesions. Laser-induced interstitial thermotherapy (LITT) performed under MRI guidance results in precise and reproducible areas of induced necrosis with a local control of 94 %, and with an improved survival rate. Interventional therapeutic techniques of liver metastases do result in a remarkable local tumour control rate with improved survival results. (orig.) With 5 figs., 1 tab., 43 refs.

  5. The effect of sampling technique on PCR-based bacteriological results of bovine milk samples.

    Science.gov (United States)

    Hiitiö, Heidi; Simojoki, Heli; Kalmus, Piret; Holopainen, Jani; Pyörälä, Satu; Taponen, Suvi

    2016-08-01

    The aim of the study was to evaluate the effect of sampling technique on the microbiological results of bovine milk samples using multiplex real-time PCR. Comparison was made between a technique where the milk sample was taken directly from the udder cistern of the udder quarter using a needle and vacuum tube and conventional sampling. The effect of different cycle threshold (Ct) cutoff limits on the results was also tested to estimate the amount of amplified DNA in the samples. A total of 113 quarters from 53 cows were tested pairwise using both techniques, and each sample was studied with real-time PCR. Sampling from the udder cistern reduced the number of species per sample compared with conventional sampling. In conventional samples, the number of positive Staphylococcus spp. results was over twice that of samples taken with the needle technique, indicating that most of the Staphylococcus spp. originated from the teat or environmental sources. The Ct values also showed that Staphylococcus spp. were present in most samples only in low numbers. Routine use of multiplex real-time PCR in mastitis diagnostics could benefit from critical evaluation of positive Staphylococcus spp. results with Ct values between 34.0 and 37.0. Our results emphasize the importance of a careful aseptic milk sampling technique and a microbiologically positive result for a milk sample should not be automatically interpreted as an intramammary infection or mastitis. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  6. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    Directory of Open Access Journals (Sweden)

    Frederico R. Romero

    2007-02-01

    Full Text Available OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end-result of the gastric suturing and the bladder augmentation were evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 (range 3.5 - 8) hours: 84.5 (range 62 - 110) minutes for the gastric dissection, 56 (range 28 - 80) minutes for the gastric suturing, and 170.6 (range 70 - 200) minutes for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.

  7. Revisiting Runoff Model Calibration: Airborne Snow Observatory Results Allow Improved Modeling Results

    Science.gov (United States)

    McGurk, B. J.; Painter, T. H.

    2014-12-01

    Deterministic snow accumulation and ablation simulation models are widely used by runoff managers throughout the world to predict runoff quantities and timing. Model fitting is typically based on matching modeled runoff volumes and timing with observed flow time series at a few points in the basin. In recent decades, sparse networks of point measurements of the mountain snowpacks have been available to compare with modeled snowpack, but the comparability of results from a snow sensor or course to model polygons of 5 to 50 sq. km is suspect. However, snowpack extent, depth, and derived snow water equivalent have been produced by the NASA/JPL Airborne Snow Observatory (ASO) mission for spring of 2013 and 2014 in the Tuolumne River basin above Hetch Hetchy Reservoir. These high-resolution snowpack data have exposed the weakness in a model calibration based on runoff alone. The U.S. Geological Survey's Precipitation Runoff Modeling System (PRMS) calibration that was based on 30 years of inflow to Hetch Hetchy produces reasonable inflow results, but modeled spatial snowpack location and water quantity diverged significantly from the weekly measurements made by ASO during the two ablation seasons. The reason is that the PRMS model has many flow paths, storages, and water transfer equations, and a calibrated outflow time series can be right for many wrong reasons. The addition of a detailed knowledge of snow extent and water content constrains the model so that it is a better representation of the actual watershed hydrology. The mechanics of recalibrating PRMS to the ASO measurements will be described, and comparisons in observed versus modeled flow for both a small subbasin and the entire Hetch Hetchy basin will be shown. The recalibrated model provided a better fit to the snowmelt recession, a key factor for water managers as they balance declining inflows with demand for power generation and ecosystem releases during the final months of snow melt runoff.

  8. Modeling Malaysia's Energy System: Some Preliminary Results

    OpenAIRE

    Ahmad M. Yusof

    2011-01-01

    Problem statement: The current dynamic and fragile world energy environment necessitates the development of a new energy model that caters solely to analyzing Malaysia's energy scenarios. Approach: The model is a network flow model that traces the flow of energy carriers from their sources (import and mining) through some conversion and transformation processes for the production of energy products to final destinations (energy demand sectors). The integration to the economic sectors is done exogene...

  9. Engineering Glass Passivation Layers -Model Results

    Energy Technology Data Exchange (ETDEWEB)

    Skorski, Daniel C.; Ryan, Joseph V.; Strachan, Denis M.; Lepry, William C.

    2011-08-08

    The immobilization of radioactive waste into glass waste forms is a baseline process of nuclear waste management not only in the United States, but worldwide. The rate of radionuclide release from these glasses is a critical measure of the quality of the waste form. Over long-term tests and using extrapolations of ancient analogues, it has been shown that well designed glasses exhibit a dissolution rate that quickly decreases to a slow residual rate for the lifetime of the glass. The mechanistic cause of this decreased corrosion rate is a subject of debate, with one of the major theories suggesting that the decrease is caused by the formation of corrosion products in such a manner as to present a diffusion barrier on the surface of the glass. Although there is much evidence of this type of mechanism, there has been no attempt to engineer the effect to maximize the passivating qualities of the corrosion products. This study represents the first attempt to engineer the creation of passivating phases on the surface of glasses. Our approach utilizes interactions between the dissolving glass and elements from the disposal environment to create impermeable capping layers. By drawing from other corrosion studies in areas where passivation layers have been successfully engineered to protect the bulk material, we present here a report on mineral phases that are likely to have a morphological tendency to encrust the surface of the glass. Our modeling has focused on using the AFCI glass system in a carbonate-, sulfate-, and phosphate-rich environment. We evaluate the minerals predicted to form to determine the likelihood of the formation of a protective layer on the surface of the glass. We have also modeled individual ions in solutions vs. pH and the addition of aluminum and silicon. These results allow us to understand the pH and ion concentration dependence of mineral formation. We have determined that iron minerals are likely to form a complete incrustation layer and we plan

  10. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    . In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...

  11. Matrix eigenvalue model: Feynman graph technique for all genera

    Energy Technology Data Exchange (ETDEWEB)

    Chekhov, Leonid [Steklov Mathematical Institute, ITEP and Laboratoire Poncelet, Moscow (Russian Federation); Eynard, Bertrand [SPhT, CEA, Saclay (France)

    2006-12-15

    We present the diagrammatic technique for calculating the free energy of the matrix eigenvalue model (the model with an arbitrary power β of the Vandermonde determinant) to all orders of the 1/N expansion in the case where the limiting eigenvalue distribution spans an arbitrary (but fixed) number of disjoint intervals (curves)

  12. Manifold learning techniques and model reduction applied to dissipative PDEs

    CERN Document Server

    Sonday, Benjamin E; Gear, C William; Kevrekidis, Ioannis G

    2010-01-01

    We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relation of this nonlinear extension with the so-called "nonlinear Galerkin" methods developed in the context of Approximate Inertial Manifolds.
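
    A minimal proper orthogonal decomposition (POD) sketch, covering only the linear model-reduction side of the approach above, is shown below; the snapshot data are synthetic and no nonlinear extension is attempted.

```python
# Minimal POD sketch: extract dominant modes from simulation snapshots and
# project the state onto them. The snapshot data here are synthetic.
import numpy as np

n_space, n_snapshots, r = 200, 80, 3
x = np.linspace(0, 1, n_space)[:, None]
t = np.linspace(0, 2, n_snapshots)[None, :]
# Synthetic dissipative field: a few decaying spatial structures plus noise.
snapshots = (np.sin(np.pi * x) * np.exp(-t)
             + 0.3 * np.sin(3 * np.pi * x) * np.exp(-4 * t)
             + 0.01 * np.random.default_rng(0).standard_normal((n_space, n_snapshots)))

U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)
modes = U[:, :r]                         # POD basis (columns)
coeffs = modes.T @ snapshots             # reduced coordinates a(t), shape (r, n_snapshots)
reconstruction = modes @ coeffs

energy = np.cumsum(S**2) / np.sum(S**2)
print("energy captured by", r, "modes:", energy[r - 1])
print("relative reconstruction error:",
      np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots))
```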

  13. CIVA workstation for NDE: mixing of NDE techniques and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Benoist, P.; Besnard, R. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes et Systemes Avances; Bayon, G. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Reacteurs Experimentaux; Boutaine, J.L. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Applications et de la Metrologie des Rayonnements Ionisants

    1994-12-31

    In order to compare the capabilities of different NDE techniques, or to use complementary inspection methods, the same components are examined with different procedures. It is then very useful to have a single evaluation tool allowing direct comparison of the methods: CIVA is an open system for processing NDE data; it is adapted to a standard work station (UNIX, C, MOTIF) and can read different supports on which the digitized data are stored. It includes a large library of signal and image processing methods accessible and adapted to NDE data (filtering, deconvolution, 2D and 3D spatial correlations...). Different CIVA application examples are described: brazing inspection (neutronography, ultrasonic), tube inspection (eddy current, ultrasonic), aluminium welds examination (UT and radiography). Modelling and experimental results are compared. 16 fig., 7 ref.

  14. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    Science.gov (United States)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
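
    The comparison described above can be reproduced in miniature as follows; the one-dimensional multimodal test function, kernel, and sample sizes are assumptions chosen only to show the two surrogates side by side.

```python
# Hedged illustration: a least-squares quadratic polynomial versus a kriging
# (Gaussian process) interpolator on a toy response with multiple local extrema.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def response(x):
    return np.sin(4 * x) + 0.3 * x**2      # multiple local extrema

rng = np.random.default_rng(3)
X_train = np.sort(rng.uniform(-2, 2, 15))
y_train = response(X_train)
X_test = np.linspace(-2, 2, 400)
y_test = response(X_test)

# Quadratic polynomial fitted by least squares.
poly_coeffs = np.polyfit(X_train, y_train, deg=2)
rmse_poly = np.sqrt(np.mean((np.polyval(poly_coeffs, X_test) - y_test) ** 2))

# Kriging interpolation.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-10)
gp.fit(X_train.reshape(-1, 1), y_train)
rmse_krig = np.sqrt(np.mean((gp.predict(X_test.reshape(-1, 1)) - y_test) ** 2))

print(f"quadratic RMSE = {rmse_poly:.3f}, kriging RMSE = {rmse_krig:.3f}")
```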

  15. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  16. A finite element parametric modeling technique of aircraft wing structures

    Institute of Scientific and Technical Information of China (English)

    Tang Jiapeng; Xi Ping; Zhang Baoyuan; Hu Bifu

    2013-01-01

    A finite element parametric modeling method of aircraft wing structures is proposed in this paper because of time-consuming characteristics of finite element analysis pre-processing. The main research is positioned during the preliminary design phase of aircraft structures. A knowledge-driven system of fast finite element modeling is built. Based on this method, employing a template parametric technique, knowledge including design methods, rules, and expert experience in the process of modeling is encapsulated and a finite element model is established automatically, which greatly improves the speed, accuracy, and standardization degree of modeling. Skeleton model, geometric mesh model, and finite element model including finite element mesh and property data are established on parametric description and automatic update. The outcomes of research show that the method settles a series of problems of parameter association and model update in the process of finite element modeling, which establishes a key technical basis for finite element parametric analysis and optimization design.

  17. Quantitative magnetospheric models: results and perspectives.

    Science.gov (United States)

    Kuznetsova, M.; Hesse, M.; Gombosi, T.; Csem Team

    Global magnetospheric models are an indispensable tool that allows multi-point measurements to be put into global context. Significant progress has been achieved in global MHD modeling of magnetosphere structure and dynamics. Medium-resolution simulations confirm the general topological picture suggested by Dungey. State-of-the-art global models with adaptive grids allow simulations with a highly resolved magnetopause and magnetotail current sheet. Advanced high-resolution models are capable of reproducing transient phenomena such as FTEs associated with the formation of flux ropes or plasma bubbles embedded in the magnetopause, and demonstrate the generation of vortices at the magnetospheric flanks. On the other hand, there is still controversy about the global state of the magnetosphere predicted by MHD models, to the point of questioning the length of the magnetotail and the location of the reconnection sites within it. For example, for steady southward IMF driving conditions, resistive MHD simulations produce a steady configuration with an almost stationary near-Earth neutral line, while there is plenty of observational evidence of a periodic loading-unloading cycle during long periods of southward IMF. Successes and challenges in global modeling of magnetospheric dynamics will be addressed. One of the major challenges is to quantify the interaction between large-scale global magnetospheric dynamics and microphysical processes in diffusion regions near reconnection sites. Possible solutions to these controversies will be discussed.

  18. New results and implications for lunar crustal iron distribution using sensor data fusion techniques

    Science.gov (United States)

    Clark, P. E.; McFadden, L. A.

    2000-02-01

    Remote measurements of the Moon have provided iron maps, and thus essential constraints for models of lunar crustal formation and mare basalt petrogenesis. A bulk crustal iron map was produced for the equatorial region from Apollo gamma-ray (AGR) spectrometer measurements, and a global iron variation map from recent Clementine spectral reflectance (CSR) measurements. Both iron maps show bimodal distribution, but have significantly different peak values and variations. In this paper, CSR data have been recalibrated to pyroxene in lunar landing site soils. A residual iron map is derived from the difference between AGR (bulk) and recalibrated CSR (pyroxene) iron abundances. The most likely interpretation is that the residual represents ferrous iron in olivine. This residual iron is anticorrelated to basin age, with older basins containing less olivine, suggesting segregation of basin basalt sources from a progressively fractionating underlying source region at the time of basin formation. Results presented here provide a quantitative basis for (1) establishing the relationship between direct geochemical (gamma-ray, X-ray) and mineralogical (near-IR) remote sensing data sets using sensor data fusion techniques to allow (2) simultaneous determination of elemental and mineralogical component distribution on remote targets and (3) meaningful interpretation of orbital and ground-based spectral reflectance measurements. When calibrated data from the Lunar Prospector mission are available, mapping of bulk crustal iron and iron-bearing soil components will be possible for the entire Moon. Similar analyses for data from the Near Earth Asteroid Rendezvous (NEAR) mission to asteroid 433 Eros will constrain models of asteroid formation.
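
    As a rough illustration of the data-fusion step described above, the sketch below forms a residual map by differencing a bulk-iron map (gamma-ray style) and a pyroxene-hosted iron map (spectral-reflectance style) on a common grid. The array names and values are placeholders, not mission data.

    ```python
    import numpy as np

    # Placeholder 2-D iron abundance maps (wt% FeO) on a shared lat/lon grid.
    bulk_iron_agr = np.array([[8.0, 12.5], [15.0, 4.0]])      # bulk iron (gamma-ray)
    pyroxene_iron_csr = np.array([[5.5, 9.0], [10.5, 3.5]])   # pyroxene-hosted iron

    # Residual iron, interpreted in the paper as ferrous iron hosted in olivine.
    residual_iron = bulk_iron_agr - pyroxene_iron_csr
    print(residual_iron)
    ```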

  19. An Empirical Study of Smoothing Techniques for Language Modeling

    CERN Document Server

    Chen, S F; Chen, Stanley F.; Goodman, Joshua T.

    1996-01-01

    We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
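
    To make the interpolation idea concrete, here is a minimal sketch of Jelinek-Mercer-style linear interpolation between maximum-likelihood bigram and unigram estimates; the fixed interpolation weight and the tiny corpus are illustrative assumptions (in practice the weights are tuned on held-out data).

    ```python
    from collections import Counter

    tokens = "the cat sat on the mat the cat ran".split()
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens[:-1], tokens[1:]))
    total = len(tokens)

    def interpolated_prob(w_prev, w, lam=0.7):
        """P(w | w_prev) = lam * ML bigram estimate + (1 - lam) * ML unigram estimate."""
        p_uni = unigrams[w] / total
        p_bi = bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0
        return lam * p_bi + (1 - lam) * p_uni

    print(interpolated_prob("the", "cat"))   # bigram evidence dominates
    print(interpolated_prob("mat", "ran"))   # unseen bigram falls back to unigram mass
    ```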

  20. Tympanoplasty: a 5-year review of results using the a la demanda (AAD) technique.

    Science.gov (United States)

    Olaizola, F

    1988-07-01

    The effectiveness of different surgical procedures to eradicate cholesteatoma in the middle ear was studied. The author reviewed 1405 cases treated over 10 years (1974 to 1984) and found that the most important causes of failure are pocket cholesteatoma and residual cholesteatoma. With the goal of diminishing these factors, the a la demanda (AAD) technique has been used for the past 5 years, with encouraging results: only 2.4% failures during this period. Other causes of failure have also been studied. The evolution of the surgical technique has had two orientations: to improve the results and to eliminate the failures. In the author's clinic a persistent percentage of failures has motivated a shift toward more resolutive and destructive, rather than conservative, techniques.

  1. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three...... conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space...
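
    As a hedged sketch of the general space-mapping idea discussed in this thesis abstract (not the author's specific algorithms), the code below builds a surrogate by composing a cheap "coarse" model with an input mapping fitted so that the surrogate matches the expensive "fine" model at previously evaluated points; the model functions and the affine mapping are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares, minimize

    def fine_model(x):     # expensive model (placeholder analytic stand-in)
        return (x - 1.3) ** 2 + 0.05 * np.sin(8 * x)

    def coarse_model(z):   # cheap model with similar, shifted behaviour
        return (z - 1.0) ** 2

    def fit_input_mapping(x_samples, f_samples):
        """Fit an affine mapping p(x) = a*x + b so coarse(p(x)) matches fine(x)."""
        def residual(params):
            a, b = params
            return coarse_model(a * x_samples + b) - f_samples
        return least_squares(residual, x0=[1.0, 0.0]).x

    x_s = np.array([0.5, 1.0, 1.5, 2.0])
    a, b = fit_input_mapping(x_s, fine_model(x_s))

    # Optimize the cheap surrogate instead of the expensive fine model.
    surrogate = lambda x: float(coarse_model(a * x[0] + b))
    x_star = minimize(surrogate, x0=[1.0]).x
    ```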

  3. TAPP - Stuttgart technique and result of a large single center series

    Directory of Open Access Journals (Sweden)

    Bittner R

    2006-01-01

    Laparoscopic hernioplasty is assessed as a difficult operation. Operative technique determines the frequency of complications, the time of recovery and the rate of recurrences. A proper technique is absolutely necessary to achieve results that are superior to open hernia surgery. Technique: The key points in our technique are (1) use of non-disposable instruments; (2) use of blunt trocars, consisting of expanding and non-incisive cone-shaped tips; (3) spacious and curved opening of the peritoneum, high above all possible hernia openings; (4) meticulous dissection of the entire pelvic floor; (5) complete reduction of the hernial sac; (6) wide parietalization of the peritoneal sac, at least down to the middle of the psoas muscle; (7) implantation of a large mesh, at least 10 cm x 15 cm; (8) fixation of the mesh by clips to Cooper's ligament, to the rectus muscle and lateral to the epigastric vessels, high above the ileopubic tract; (9) the use of glue, which allows fixation also in the latero-caudal region; and (10) closure of the peritoneum by running suture. Results: With this technique in 12,678 hernia repairs, the following results could be achieved: operating time, 40 min; morbidity, 2.9%; recurrence rate, 0.7%; disability of work, 14 days. In all types of hernias (recurrence after previous open surgery, recurrence after previous preperitoneal operation, scrotal hernia, hernia in patients after transabdominal prostate resection), similar results could be achieved. Summary: Laparoscopic hernia repair can be performed successfully in clinical practice, even by surgeons in training. The precondition for success is a strictly standardized operative technique and a well-structured educational program.

  4. Galaxy Cluster Mass Reconstruction Project: I. Methods and first results on galaxy-based techniques

    CERN Document Server

    Old, L; Pearce, F R; Croton, D; Muldrew, S I; Muñoz-Cuartas, J C; Gifford, D; Gray, M E; von der Linden, A; Mamon, G A; Merrifield, M R; Müller, V; Pearson, R J; Ponman, T J; Saro, A; Sepp, T; Sifón, C; Tempel, E; Tundo, E; Wang, Y O; Wojtak, R

    2014-01-01

    This paper is the first in a series in which we perform an extensive comparison of various galaxy-based cluster mass estimation techniques that utilise the positions, velocities and colours of galaxies. Our primary aim is to test the performance of these cluster mass estimation techniques on a diverse set of models that will increase in complexity. We begin by providing participating methods with data from a simple model that delivers idealised clusters, enabling us to quantify the underlying scatter intrinsic to these mass estimation techniques. The mock catalogue is based on a Halo Occupation Distribution (HOD) model that assumes spherical Navarro, Frenk and White (NFW) haloes truncated at R_200, with no substructure nor colour segregation, and with isotropic, isothermal Maxwellian velocities. We find that, above 10^14 M_solar, recovered cluster masses are correlated with the true underlying cluster mass with an intrinsic scatter of typically a factor of two. Below 10^14 M_solar, the scatter rises as the nu...
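
    For reference, the spherical NFW profile assumed for these idealised haloes takes the standard form below (scale radius r_s, characteristic density rho_0, concentration c = R_200/r_s); the notation is the conventional one and is not taken from the paper itself.

    ```latex
    \rho_{\mathrm{NFW}}(r) = \frac{\rho_0}{(r/r_s)\,(1 + r/r_s)^{2}},
    \qquad
    M_{200} = 4\pi \rho_0 r_s^{3} \left[\ln(1 + c) - \frac{c}{1 + c}\right].
    ```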

  5. Long-Term Results of Endoscopic Lumbar Discectomy by "Destandau's Technique"

    Science.gov (United States)

    Kamble, Bhavna; Patond, Kisan

    2016-01-01

    Study Design: Prospective study. Purpose: The aim of the study was to present long-term results from a 10-year follow-up after endoscopic lumbar discectomy (ELD) by "Destandau's technique". Overview of Literature: Endoscopic disc surgery by Destandau's technique using the ENDOSPINE Karl Storz system is a relatively new technique. It was introduced in 1993. It has been gaining popularity among spine surgeons, as it is attractive for its small skin incision and allows gentle and excellent tissue dissection with excellent visualization. Many authors have published results of their own studies; however, in all these studies the long-term follow-up of the patients has not been emphasized. Methods: A total of 21 patients, selected on the basis of strict inclusion criteria, underwent ELD from November 2004 to March 2005. Surgery outcome was assessed using "Prolo's Anatomic-Functional-Economic Rating System" (1986). Patients were followed up for up to 10 years. In addition, we compared the results of our study with other studies. Results: Outcomes were excellent in 17 patients (80.95%), good in 3 (14.28%) and fair in 1 (4.78%), with no patients having a poor result. In our study, 19 patients (90.47%) were able to resume their previous work, and only 2 (9.52%) needed to change their jobs for lighter work. No patient retired from his or her previous daily routine following the operation. Conclusions: The initial and long-term results of endoscopic lumbar discectomy by Destandau's technique are very good. It is a safe and minimally invasive technique, and we recommend ELD in properly selected patients. PMID:27114770

  6. Team mental models: techniques, methods, and analytic approaches.

    Science.gov (United States)

    Langan-Fox, J; Code, S; Langfield-Smith, K

    2000-01-01

    Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.

  7. Modeling clicks beyond the first result page

    NARCIS (Netherlands)

    Chuklin, A.; Serdyukov, P.; de Rijke, M.

    2013-01-01

    Most modern web search engines yield a list of documents of a fixed length (usually 10) in response to a user query. The next ten search results are usually available in one click. These documents either replace the current result page or are appended to the end. Hence, in order to examine more
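
    To make the click-modeling setting concrete, here is a minimal sketch of a simple cascade-style click model over a ranked list that extends past the first result page; the attractiveness values and the page-boundary handling are illustrative assumptions, not the model proposed in the paper.

    ```python
    import random

    def simulate_cascade_clicks(attractiveness, page_size=10, p_next_page=0.3):
        """Scan results top-down; click with prob = attractiveness and stop after a click.

        At each page boundary the user continues to the next page with
        probability p_next_page, otherwise abandons the session.
        """
        clicks = []
        for rank, a in enumerate(attractiveness):
            if rank > 0 and rank % page_size == 0 and random.random() > p_next_page:
                break                      # user does not request the next page
            if random.random() < a:
                clicks.append(rank)
                break                      # cascade assumption: stop after first click
        return clicks

    random.seed(0)
    print(simulate_cascade_clicks([0.05] * 25))
    ```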

  9. Percutaneous radiofrequency ablation of osteoid osteomas. Technique and results; Perkutane Radiofrequenzablation von Osteoidosteomen. Technik und Ergebnisse

    Energy Technology Data Exchange (ETDEWEB)

    Bruners, P.; Penzkofer, T. [Lehrstuhl fuer Angewandte Medizintechnik, Helmholtz Inst. fuer Biomedizinische Technik, RWTH Aachen (Germany); Guenther, R. W.; Mahnken, A. [Klinik fuer Radiologische Diagnostik, Universitaetsklinikum RWTH Aachen (Germany)

    2009-08-15

    Purpose: Osteoid osteoma is a benign primary bone tumor that typically occurs in children and young adults. Besides local pain, which is often worse at night, prompt relief due to medication with acetylsalicylic acid (ASS) is characteristic for this bone lesion. Because long-term medication with ASS does not represent an alternative treatment strategy due to its potentially severe side effects, different minimally invasive image-guided techniques for the therapy of osteoid osteoma have been developed. In this context radiofrequency (RF) ablation in particular has become part of the clinical routine. The technique and results of image-guided RF ablation are compared to alternative treatment strategies. Materials and Methods: Using this technique, an often needle-shaped RF applicator is percutaneously placed into the tumor under image guidance. Then a high-frequency alternating current is applied by the tip of the applicator which leads to ionic motion within the tissue resulting in local heat development and thus in thermal destruction of the surrounding tissue including the tumor. Results: The published primary and secondary success rates of this technique are 87 and 83%, respectively. Surgical resection and open curettage show comparable success rates but are associated with higher complication rates. In addition image-guided RF ablation of osteoid osteomas is associated with low costs. (orig.)

  10. Separator Reconnection at the Magnetopause for Predominantly Northward and Southward IMF: Techniques and Results

    Science.gov (United States)

    Glocer, Alex; Dorelli, J.; Toth, G.; Komar, C. M.; Cassak, P. A.

    2016-01-01

    In this work, we demonstrate how to track magnetic separators in three-dimensional simulated magnetic fields with or without magnetic nulls, and apply these techniques to enhance our understanding of reconnection at the magnetopause. We present three methods for locating magnetic separators and apply them to 3-D resistive MHD simulations of the Earth's magnetosphere using the Block-Adaptive-Tree Solar-wind Roe-type Upwind Scheme code. The techniques for finding separators and determining the reconnection rate are insensitive to interplanetary magnetic field (IMF) clock angle and can in principle be applied to any magnetospheric model. Moreover, the techniques have a number of advantages over prior separator-finding techniques applied to the magnetosphere. The present work examines cases of high and low resistivity for two clock angles. We go beyond previous work to examine the separator during Flux Transfer Events (FTEs). Our analysis of reconnection at the magnetopause yields a number of interesting conclusions: reconnection occurs all along the separator, even during predominantly northward IMF cases; multiple separators form in low-resistivity conditions; and in the region of an FTE the separator splits into distinct branches. Moreover, the local contribution to the reconnection rate, as determined by the local parallel electric field, drops in the vicinity of an FTE with respect to the value when there are none.

  11. Concerning the Feasibility of Example-driven Modelling Techniques

    OpenAIRE

    Thorne, Simon; Ball, David; Lawson, Zoe Frances

    2008-01-01

    We report on a series of experiments concerning the feasibility of example-driven modelling. The main aim was to establish experimentally, within an academic environment, the relationship between error and task complexity using (a) traditional spreadsheet modelling and (b) example-driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several...

  12. Advanced Phase noise modeling techniques of nonlinear microwave devices

    OpenAIRE

    Prigent, M.; J. C. Nallatamby; R. Quere

    2004-01-01

    In this paper we present a coherent set of tools allowing an accurate and predictive design of low phase noise oscillators. Advanced phase noise modelling techniques for nonlinear microwave devices must be supported by a proven combination of the following: - Electrical modeling of the low-frequency noise of semiconductor devices, oriented to circuit CAD. The local noise sources will be either cyclostationary noise sources or quasistationary noise sources. - Theoretic...

  13. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  14. Microplasticity of MMC. Experimental results and modelling

    Energy Technology Data Exchange (ETDEWEB)

    Maire, E. (Groupe d'Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Lormand, G. (Groupe d'Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Gobin, P.F. (Groupe d'Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France)); Fougeres, R. (Groupe d'Etude de Metallurgie Physique et de Physique des Materiaux, INSA, 69 Villeurbanne (France))

    1993-11-01

    The microplastic behavior of several MMCs is investigated by means of tension and compression tests. This behavior is asymmetric: the proportional limit is higher in tension than in compression, but the work-hardening rate is higher in compression. These differences are analysed in terms of the maximum of the Tresca shear stress at the interface (proportional limit) and of the emission of dislocation loops during cooling (work-hardening rate). On the other hand, a model is proposed to calculate the value of the yield stress, describing the composite as a material composed of three phases: the inclusion, the unaffected matrix, and the matrix surrounding the inclusion, which has a gradient in the density of the thermally induced dislocations. (orig.)

  15. Validation of Models: Statistical Techniques and Data Availability

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1999-01-01

    This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability, three situations are distinguished: (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real

  16. Techniques and tools for efficiently modeling multiprocessor systems

    Science.gov (United States)

    Carpenter, T.; Yalamanchili, S.

    1990-01-01

    System-level tools and methodologies associated with an integrated approach to the development of multiprocessor systems are examined. Tools for capturing initial program structure, automated program partitioning, automated resource allocation, and high-level modeling of the combined application and resource are discussed. The primary language focus of the current implementation is Ada, although the techniques should be appropriate for other programming paradigms.

  17. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive levels validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  18. Study of Semi-Span Model Testing Techniques

    Science.gov (United States)

    Gatlin, Gregory M.; McGhee, Robert J.

    1996-01-01

    An investigation has been conducted in the NASA Langley 14- by 22-Foot Subsonic Tunnel in order to further the development of semi-span testing capabilities. A twin engine, energy efficient transport (EET) model with a four-element wing in a takeoff configuration was used for this investigation. Initially a full span configuration was tested and force and moment data, wing and fuselage surface pressure data, and fuselage boundary layer measurements were obtained as a baseline data set. The semi-span configurations were then mounted on the wind tunnel floor, and the effects of fuselage standoff height and shape as well as the effects of the tunnel floor boundary layer height were investigated. The effectiveness of tangential blowing at the standoff/floor juncture as an active boundary-layer control technique was also studied. Results indicate that the semi-span configuration was more sensitive to variations in standoff height than to variations in floor boundary layer height. A standoff height equivalent to 30 percent of the fuselage radius resulted in better correlation with full span data than no standoff or the larger standoff configurations investigated. Undercut standoff leading edges or the use of tangential blowing in the standoff/ floor juncture improved correlation of semi-span data with full span data in the region of maximum lift coefficient.

  19. Early and midterm results of kissing stent technique in the management of aortoiliac obstructive disease.

    Science.gov (United States)

    Pulli, Raffaele; Dorigo, Walter; Fargion, Aaron; Angiletta, Domenico; Azas, Leonidas; Pratesi, Giovanni; Alessi Innocenti, Alessandro; Pratesi, Carlo

    2015-04-01

    To retrospectively analyze the early and the midterm results of endovascular management of aortoiliac obstructive disease with the kissing stent technique. From January 2005 to September 2012, 229 consecutive endovascular interventions for aortoiliac obstructive disease were performed; data from all the interventions were prospectively collected in a dedicated database. In 41 patients, the kissing stent technique at the level of the aortic bifurcation was performed (group 1), whereas in the remaining 188 it was not (group 2). Perioperative results were compared with the chi-squared test. Follow-up results were analyzed with Kaplan-Meier curves and compared with the log-rank test. Trans-Atlantic Inter-Society Consensus II C and D lesions were present in 66% of patients in group 1 and in 28.5% in group 2 (P ...). The kissing stent technique provided satisfactory results in patients with obstructive aortoiliac disease, without an increase in immediate and midterm complications, representing an effective solution in complex anatomies. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    Science.gov (United States)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
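
    As a rough illustration of this kind of derivative-free parameter search (not the specific techniques benchmarked in the paper), the sketch below runs a simple (1+lambda) evolutionary loop that perturbs the current best parameter vector and keeps any candidate that lowers a model-observation misfit cost; the toy cost function and settings are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def cost(params, observations):
        """Toy model-observation misfit: here the 'model output' is just the parameters."""
        return float(np.sum((params - observations) ** 2))

    def evolutionary_search(observations, n_params, budget=100, lam=5, step=0.3):
        best = rng.uniform(0.0, 1.0, n_params)        # initial parameter guess
        best_cost = cost(best, observations)
        evaluations = 1
        while evaluations + lam <= budget:
            candidates = best + step * rng.standard_normal((lam, n_params))
            costs = [cost(c, observations) for c in candidates]
            evaluations += lam
            i = int(np.argmin(costs))
            if costs[i] < best_cost:                  # keep only improvements
                best, best_cost = candidates[i], costs[i]
        return best, best_cost

    obs = rng.uniform(0.0, 1.0, 10)                   # stand-in "observations"
    params, final_cost = evolutionary_search(obs, n_params=10)
    ```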

  1. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.
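
    One of the issues named above, the nonlinear averaging of metallurgical parameters, can be illustrated with a small hedged example: for a parameter that upscales harmonically rather than linearly (a common situation for rate-like quantities), the naive arithmetic mean of block values overestimates the upscaled value. The numbers and the harmonic-averaging assumption are illustrative and are not taken from the case study.

    ```python
    import numpy as np

    # Hypothetical block-scale values of a metallurgical parameter (e.g., a throughput rate).
    block_values = np.array([4.0, 6.0, 12.0, 24.0])

    arithmetic_mean = block_values.mean()
    harmonic_mean = len(block_values) / np.sum(1.0 / block_values)

    print(f"arithmetic mean: {arithmetic_mean:.2f}")  # 11.50
    print(f"harmonic mean:   {harmonic_mean:.2f}")    #  7.38
    ```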

  2. Rotational Acceleration during Head Impact Resulting from Different Judo Throwing Techniques

    Science.gov (United States)

    MURAYAMA, Haruo; HITOSUGI, Masahito; MOTOZAWA, Yasuki; OGINO, Masahiro; KOYAMA, Katsuhiro

    2014-01-01

    Most severe head injuries in judo are reported as acute subdural hematoma. It is thus necessary to examine the rotational acceleration of the head to clarify the mechanism of head injuries. We determined the rotational acceleration of the head when the subject is thrown by judo techniques. One Japanese male judo expert threw an anthropomorphic test device using two throwing techniques, Osoto-gari and Ouchi-gari. Rotational and translational head accelerations were measured with and without an under-mat. For Osoto-gari, peak resultant rotational acceleration ranged from 4,284.2 rad/s2 to 5,525.9 rad/s2 and peak resultant translational acceleration ranged from 64.3 g to 87.2 g; for Ouchi-gari, the accelerations respectively ranged from 1,708.0 rad/s2 to 2,104.1 rad/s2 and from 120.2 g to 149.4 g. The resultant rotational acceleration did not decrease with installation of an under-mat for either Ouchi-gari or Osoto-gari. We found that head contact with the tatami could produce the peak values of translational and rotational accelerations. In general, because the kinematics of the body strongly affects translational and rotational accelerations of the head, both accelerations should be measured to analyze the underlying mechanism of head injury. As a primary preventative measure, throwing techniques should be restricted to participants demonstrating ability in ukemi techniques to avoid head contact with the tatami. PMID:24477065
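
    For readers unfamiliar with the quantity being reported, the peak resultant rotational acceleration is simply the maximum of the vector magnitude of the three angular-acceleration components over the impact; a minimal sketch with made-up sample data is shown below.

    ```python
    import numpy as np

    # Hypothetical 3-axis rotational acceleration samples (rad/s^2) around head impact.
    alpha = np.array([
        [1200.0,  -300.0,  800.0],
        [3900.0, -1500.0, 2100.0],
        [2500.0,  -900.0, 1400.0],
    ])

    resultant = np.linalg.norm(alpha, axis=1)   # vector magnitude per sample
    peak_resultant = resultant.max()
    print(f"peak resultant rotational acceleration: {peak_resultant:.1f} rad/s^2")
    ```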

  3. Features of Intangible Assets Inventory Technique and the Recognition of Its Results

    OpenAIRE

    Nataliya Kantsedal

    2014-01-01

    Practical implementation of a science-based inventory methodology promotes objective and reliable estimates of results for further interpretation in the accounting system, as well as decision-making on eliminating identified shortcomings and violations. The scope of the article is an analysis of the legal provisions on the inventory of intangible assets and an organization of the views of individual authors, in order to identify problem issues to be addressed in specifying the technique for intangible assets inv...

  4. Functional results after cholesteatoma surgery in an adult population using the retrograde mastoidectomy technique.

    Science.gov (United States)

    Minovi, Amir; Venjacob, Johanna; Volkenstein, Stefan; Dornhoffer, John; Dazert, Stefan

    2014-03-01

    In this retrospective study, we analyzed the functional results of using the retrograde mastoidectomy technique for cholesteatoma removal in an adult patient population. The described technique was used at a tertiary referral center for cholesteatoma removal in 218 adult patients, representing 242 operated ears, with an average follow-up time of 20.3 months. With the retrograde mastoidectomy technique, the cholesteatoma is removed posteriorly through the canal wall, from the epitympanic region toward the mastoid, with the option to reconstruct the posterior bony canal wall or create an open mastoid cavity, depending on the size of the defect. Primary surgery was carried out in 58.7% of the ears, with the remaining 41.3% representing revision surgery. In 151 cases, the posterior canal wall was reconstructed, and in 91 cases a classical CWD with an open mastoid cavity was created. In the majority of the cases (n = 213, 88.0%), a primary hearing restoration was performed. There were 18 recurrences (12.7%) in primary cases and 22 recurrences (22%) in revision surgeries. Ninety percent of the recurrences (36 of 40 cases) occurred within 5 years. A postoperative air-bone gap of less than 20 dB was achieved in 61.6% of the operated ears. Ears with a reconstructed posterior canal wall had significantly better hearing results than those cases in which a CWD procedure was used (air-bone gap of 17.6 versus 22.5 dB, p < 0.05). The retrograde mastoidectomy technique for cholesteatoma removal resulted in satisfying hearing results in the majority of the cases, with a recurrence rate comparable to the current literature.

  5. Evaluation of the functional results after rotator cuff arthroscopic repair with the suture bridge technique

    Directory of Open Access Journals (Sweden)

    Alberto Naoki Miyazaki

    ABSTRACT OBJECTIVE: To evaluate the results of arthroscopic treatment of large and extensive rotator cuff injuries (RCI) that involved the supra- and infraspinatus muscles using the suture bridge (SB) technique. METHODS: Between July 2010 and November 2014, 37 patients with RCI who were treated with the SB technique were evaluated. The study included all patients with a minimum follow-up of 12 months who underwent primary surgery of the shoulder. Twenty-four patients were male and 13 were female. The mean age was 60 years (45-75). The dominant side was affected in 32 cases. The most common cause of injury was trauma (18 cases). The mean preoperative motion was 123°, 58°, T11. Through magnetic resonance imaging, 36 fatty degenerations were classified according to Goutallier. Patients underwent rotator cuff repair with the SB technique, which consists of using a medial row anchor with two Corkscrew(r) fibertape(r) or fiberwire(r) at the articular margin, associated with lateral fixation without stitches using PushLocks(r) or SwiveLocks(r). RESULTS: The mean age was 60 years and mean fatty degeneration was 2.6. The mean range of motion (following the AAOS) in the postoperative evaluation was 148° of forward elevation, 55° of lateral rotation and medial rotation at T9. Using the criteria of the University of California at Los Angeles (UCLA), 35 (94%) patients had excellent and good results; one (2.7%), fair; and one (2.7%), poor. CONCLUSION: Arthroscopic repair of a large and extensive RCI using the SB technique had good and excellent results in 94% of the patients.

  7. Arthroscopic Repair of Combined Bankart and SLAP Lesions: Operative Techniques and Clinical Results

    OpenAIRE

    Cho, Hyung Lae; Lee, Choon Key; Hwang, Tae Hyok; Suh, Kuen Tak; Park, Jong Won

    2010-01-01

    Background: To evaluate the clinical results and operative technique of arthroscopic repair of combined Bankart and superior labrum anterior to posterior (SLAP) lesions, all of which had an anterior-inferior Bankart lesion that continued superiorly to include separation of the biceps anchor, in patients presenting with recurrent shoulder dislocations. Methods: From May 2003 to January 2006, we reviewed 15 cases with combined Bankart and SLAP lesions among 62 patients with recurrent shoulder dislo...

  8. Functional results in airflow improvement using a "flip-flap" alar technique: our experience.

    Science.gov (United States)

    Di Stadio, Arianna; Macro, Carlo

    2017-02-21

    A pinched nasal tip can arise as a congenital malformation or as the result of unsuccessful surgery. The nasal valve alteration due to this problem is not only an esthetic issue but also a functional one, because it can modify the nasal airflow. Several surgical techniques have been proposed in the literature; here we propose ours. The purpose of the study is the evaluation of nasal airway flow using our flip-flap technique for correction of the pinched nasal tip. This is a retrospective study conducted on twelve patients. Tip cartilages were remodeled by means of autologous alar cartilage grafting. The patients underwent rhinomanometry pre- and post-surgery to evaluate the results, and they completed a self-survey to evaluate their degree of satisfaction in terms of improvement in airflow sensation. Rhinomanometry showed improved nasal airflow (range from 25% to 75%) in all patients. No significant differences were shown between unilateral and bilateral alar malformation (p=0.49). Patient satisfaction reached 87.5%. Our analysis of the combined results (rhinomanometry and surveys) showed that this technique led to improvement of nasal flow in all patients affected by a pinched nasal tip. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  9. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
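
    A hedged sketch of the cumulative-residual idea is given below, using an ordinary linear model and standard-normal multipliers rather than the full range of models treated in the paper: the observed cumulative-sum process of residuals over a covariate is compared against realizations of a zero-mean process generated by multiplying the residuals by simulated N(0,1) variables. For simplicity the sketch omits the correction term that accounts for estimating the regression coefficients.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data: y depends on x nonlinearly, but we fit a linear model.
    n = 200
    x = rng.uniform(0.0, 2.0, n)
    y = 1.0 + 0.5 * x**2 + rng.normal(0.0, 0.3, n)

    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta

    order = np.argsort(x)                       # cumulate residuals over the covariate
    observed = np.cumsum(resid[order]) / np.sqrt(n)
    observed_sup = np.abs(observed).max()

    # Realizations of the zero-mean reference process: residuals times N(0,1) draws.
    n_sim = 1000
    sims = np.array([
        np.abs(np.cumsum(resid[order] * rng.standard_normal(n)) / np.sqrt(n)).max()
        for _ in range(n_sim)
    ])
    p_value = (sims >= observed_sup).mean()
    print(f"sup |cumulative residual| = {observed_sup:.3f}, simulation p-value = {p_value:.3f}")
    ```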

  10. A Short-term Comparison Between Result of Palisade Cartilage Tympanoplasty and Temporalis Fascia Technique

    Directory of Open Access Journals (Sweden)

    Mahmood Shishegar

    2012-03-01

    Introduction: The use of cartilage as a grafting material has been advocated in cases where there is a high risk of graft failure, such as subtotal perforations, adhesive processes, and residual defects after primary tympanoplasties. The purpose of this study was to compare the graft acceptance rates and auditory outcomes of cartilage tympanoplasty operations using a palisade technique with those of primary tympanoplasty using temporalis fascia in a homogenous group of patients. Study Design: Prospective study. Materials and Methods: The study population included 54 patients who were operated on in two groups (palisade technique and temporalis fascia technique), with each group containing 27 patients. Patients with pure subtotal perforations (perforation of >50% of the whole tympanic membrane [TM] area), an intact ossicular chain, at least a one-month dry period, and normal middle ear mucosa were included in the study. Graft acceptance rates and pre- and post-operative audiograms were compared. The follow-up time was six months. Results: Graft acceptance was achieved in all patients (100%) in the palisade cartilage tympanoplasty group and in 25 patients (92.5%) in the temporalis fascia group. This difference was not statistically significant (P = 0.15). Comparison of the increases in mean speech reception threshold, air-bone gap, and pure-tone average scores between both techniques showed no significant changes. Conclusion: Our experience with the palisade cartilage technique demonstrates that subtotal or total perforation at high risk for graft failure can be treated efficiently, and that a durable and resistant reconstruction of the TM with reasonable auditory function can be achieved.

  11. A short-term evaluation between the result of palisade cartilage tympanoplasty and temporalis fascia technique

    Directory of Open Access Journals (Sweden)

    Irfan Ul Shamas

    2014-01-01

    Introduction: The use of cartilage as a grafting material has been advocated in cases where there is a high risk of graft failure, such as subtotal perforations, adhesive processes, and residual defects after primary tympanoplasties. The purpose of this study was to compare the graft acceptance rates and auditory outcomes of cartilage tympanoplasty operations using a palisade technique with those of primary tympanoplasty using temporalis fascia in a homogenous group of patients. Study Design: Prospective study. Materials and Methods: The study population included 54 patients who were operated on in two groups (palisade technique and temporalis fascia technique), with each group containing 27 patients. Patients with pure subtotal perforations (perforation of >50% of the whole tympanic membrane [TM] area), an intact ossicular chain, at least a 1-month dry period, and normal middle ear mucosa were included in the study. Graft acceptance rates and pre- and postoperative audiograms were compared. The follow-up time was 6 months. Results: Graft acceptance was achieved in all patients (100%) in the palisade cartilage tympanoplasty group and in 25 patients (92.5%) in the temporalis fascia group. This difference was not statistically significant (P = 0.15). Comparison of the increases in mean speech reception threshold, air-bone gap, and pure-tone average scores between both techniques showed no significant changes. Conclusion: Our experience with the palisade cartilage technique demonstrates that subtotal or total perforation at high risk for graft failure can be treated efficiently, and that a durable and resistant reconstruction of the TM with reasonable auditory function can be achieved.

  12. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  13. Videogrammetric Model Deformation Measurement Technique for Wind Tunnel Applications

    Science.gov (United States)

    Barrows, Danny A.

    2006-01-01

    Videogrammetric measurement technique developments at NASA Langley were driven largely by the need to quantify model deformation at the National Transonic Facility (NTF). This paper summarizes recent wind tunnel applications and issues at the NTF and other NASA Langley facilities, including the Transonic Dynamics Tunnel, 31-Inch Mach 10 Tunnel, 8-Ft High Temperature Tunnel, and the 20-Ft Vertical Spin Tunnel. In addition, several adaptations of wind tunnel techniques to non-wind tunnel applications are summarized. These applications include wing deformation measurements on vehicles in flight, determining aerodynamic loads based on optical elastic deformation measurements, measurements on ultra-lightweight and inflatable space structures, and the use of an object-to-image plane scaling technique to support NASA's Space Exploration program.

  14. An observational model for biomechanical assessment of sprint kayaking technique.

    Science.gov (United States)

    McDonnell, Lisa K; Hume, Patria A; Nolte, Volker

    2012-11-01

    Sprint kayaking stroke phase descriptions for biomechanical analysis of technique vary among kayaking literature, with inconsistencies not conducive for the advancement of biomechanics applied service or research. We aimed to provide a consistent basis for the categorisation and analysis of sprint kayak technique by proposing a clear observational model. Electronic databases were searched using key words kayak, sprint, technique, and biomechanics, with 20 sources reviewed. Nine phase-defining positions were identified within the kayak literature and were divided into three distinct types based on how positions were defined: water-contact-defined positions, paddle-shaft-defined positions, and body-defined positions. Videos of elite paddlers from multiple camera views were reviewed to determine the visibility of positions used to define phases. The water-contact-defined positions of catch, immersion, extraction, and release were visible from multiple camera views, therefore were suitable for practical use by coaches and researchers. Using these positions, phases and sub-phases were created for a new observational model. We recommend that kayaking data should be reported using single strokes and described using two phases: water and aerial. For more detailed analysis without disrupting the basic two-phase model, a four-sub-phase model consisting of entry, pull, exit, and aerial sub-phases should be used.
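
    A minimal sketch of how the proposed two-phase, four-sub-phase structure could be encoded for analysis is shown below; the timestamps and the function itself are illustrative assumptions, not part of the published model.

    ```python
    def label_subphases(catch, immersion, extraction, release, next_catch):
        """Map the four water-contact-defined positions of one stroke to sub-phases.

        Water phase = entry + pull + exit; the aerial phase spans release to next catch.
        Each sub-phase is returned as (name, start_time, end_time) in seconds.
        """
        return [
            ("entry", catch, immersion),
            ("pull", immersion, extraction),
            ("exit", extraction, release),
            ("aerial", release, next_catch),
        ]

    for name, t0, t1 in label_subphases(0.00, 0.08, 0.42, 0.50, 0.95):
        print(f"{name:6s}: {t0:.2f}-{t1:.2f} s ({t1 - t0:.2f} s)")
    ```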

  15. RESULTS OF THE USE OF PEEK CAGES IN THE TREATMENT OF BASILAR INVAGINATION BY GOEL TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Luís Eduardo Carelli Teixeira da Silva

    2016-03-01

    ABSTRACT Objective: Analysis of the use of polyetheretherketone (PEEK) cages for atlantoaxial facet realignment and distraction in the treatment of basilar invagination by the Goel technique. Method: Retrospective descriptive statistical analysis of the neurological status, pain, presence of subsidence, and bone fusion with the use of PEEK cages in 8 atlantoaxial joints of 4 patients with basilar invagination. All patients were treated with atlantoaxial facet distraction and realignment and subsequent C1-C2 arthrodesis by the technique of Goel, modified by the use of a PEEK cage. Results: All patients showed improvement on the Nurick neurological assessment scale and the Visual Analogue Scale (VAS) of pain. There were no cases of subsidence, migration, or damage to the vertebral artery during the insertion of the cage. All joints evolved with bone fusion, assessed by dynamic radiographs and computed tomography. Two patients developed neuropathic pain in the dermatome of C2, and one patient had a unilateral vertebral artery injury during C2 instrumentation, treated with insertion of a pedicle screw to control the bleeding. Conclusion: The treatment of basilar invagination by the Goel technique with the use of PEEK cages was shown to be effective and safe, although further studies are needed to confirm this use.

  16. Indication, surgical technique and results of endoscopic fascial release in plantar fasciitis (E FRPF).

    Science.gov (United States)

    Jerosch, Jörg; Schunck, Jochem; Liebsch, Dietrich; Filler, Tim

    2004-09-01

    The purpose of the present study is to present the surgical technique for, and review our indications and results after, endoscopic fascial release in patients with plantar fasciitis. In five Thiel-embalmed human specimens, a biportal technique for endoscopic release of the plantar fascia was established. The aim here was to evaluate the relation between the plantar fascia and the heel spur and to perform a release that would not exceed 50-70% of the diameter of the calcaneoplantar fascia. The endoscopic technique was performed within the last 5 years in ten male and seven female patients. All patients with the clinical entity of plantar fasciitis underwent conservative treatment for at least 6 months. The average age at surgery was 35 years (24-56 years). In the first five patients, surgery was performed under C-arm control. In all patients the operation could be completed endoscopically. The endoscopic portals healed without complications. The time for surgery during the learning curve ranged between 21 and 74 min (average 41 min) and was still longer compared to the open technique. The clinical follow-up ranged between 4 and 48 months (average 18.5 months). Out of 17 patients, 13 improved clinically, and they would choose the treatment option again. On the Ogilvie-Harris score, seven patients showed good and six excellent results. In two patients, the initial results were not satisfactory because of a bony stress reaction of the calcaneus. This complication was treated by 6 weeks of partial weight bearing, without any further problems. Two other patients developed secondary pain in the lateral column. In spite of the minimally invasive approach, it seems to be important to be careful in increasing weight bearing in early rehabilitation. The technique of endoscopic plantar fascia release (E FRPF) can be performed as a standardised and reproducible procedure. The follow-up examination showed good midterm results, but a loss of stability of the plantar arch

  17. Predicting Performance of Schools by Applying Data Mining Techniques on Public Examination Results

    Directory of Open Access Journals (Sweden)

    J. Macklin Abraham Navamani

    2015-02-01

    This study presents a systematic analysis of various features of higher grade school public examination results data in the state of Tamil Nadu, India, through different data mining classification algorithms, in order to predict the performance of schools. Nowadays, parents aim to select the right city, school, and the factors that contribute to the success of their children's school results. Factors such as ethnic mix, medium of study, and geography could possibly make a difference in results. The proposed work focuses on two aspects, namely machine learning algorithms to predict school performance with satisfactory accuracy, and evaluation of which data mining technique gives better accuracy among the learning algorithms. It was found that there exist some apparent and some less noticeable attributes that demonstrate a strong correlation with student performance. Data were collected through a credible source, with data preparation and correlation analysis. The findings revealed that the public examination results data were a very helpful predictor of school performance, helping to improve results to the maximum level, and the overall accuracy was also improved with the help of the AdaBoost technique.
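
    As a minimal sketch of the boosting step mentioned above (a generic scikit-learn AdaBoost classifier on synthetic data, not the authors' pipeline or dataset), consider:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    # Stand-in data: in the study this would be school/exam features and a performance label.
    X, y = make_classification(n_samples=500, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

    clf = AdaBoostClassifier(n_estimators=100, random_state=42)
    clf.fit(X_train, y_train)
    print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
    ```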

  18. Mathematical analysis techniques for modeling the space network activities

    Science.gov (United States)

    Foster, Lisa M.

    1992-01-01

    The objective of the present work was to explore and identify mathematical analysis techniques, and in particular, the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to understand the space network better. Finally, a small scale version of the system was modeled, variables were identified, data was gathered, and comparisons were made between actual and theoretical data.
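
    Since the abstract singles out linear programming, a generic hedged example of setting up and solving a small LP with SciPy is shown below; the scheduling-flavoured objective and constraints are invented for illustration and have no connection to the actual TDRSS model.

    ```python
    from scipy.optimize import linprog

    # Maximize 3*x1 + 2*x2 (time allocated to two hypothetical service requests)
    # subject to resource constraints; linprog minimizes, so negate the objective.
    c = [-3.0, -2.0]
    A_ub = [[1.0, 1.0],    # total antenna time:   x1 + x2 <= 40
            [2.0, 1.0]]    # power budget:        2*x1 + x2 <= 60
    b_ub = [40.0, 60.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)   # optimal allocation and objective value
    ```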

  19. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    OpenAIRE

    N.RATHIKA; Dr.A.Senthil kumar; A.ANUSUYA

    2014-01-01

    This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using a particle swarm optimization (PSO) technique with a constriction factor. The use of a polyphase synchronous generator mainly allows the total power circulating in the system to be distributed across all phases. Another advantage of a polyphase system is that a fault in one winding does not lead to system shutdown. Process optimization is the discipline of adjusting a process so as...
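
    For context, a generic particle swarm optimizer with Clerc's constriction factor (the standard textbook formulation, not the authors' specific implementation; the loss function below is a placeholder standing in for generator power losses) looks roughly like this:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def pso_constriction(loss, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0)):
        c1 = c2 = 2.05
        phi = c1 + c2
        chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))   # ~0.7298

        x = rng.uniform(bounds[0], bounds[1], (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_val = np.array([loss(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
            x = np.clip(x + v, bounds[0], bounds[1])
            vals = np.array([loss(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    # Placeholder loss standing in for generator power losses as a function of parameters.
    best, best_val = pso_constriction(lambda p: float(np.sum(p**2)), dim=4)
    ```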

  20. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  1. Equivalence and differences between structural equation modeling and state-space modeling techniques

    NARCIS (Netherlands)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, E.L.; Dolan, C.V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and

  3. Internet enabled modelling of extended manufacturing enterprises using the process based techniques

    OpenAIRE

    Cheng, K; Popov, Y

    2004-01-01

    The paper presents the preliminary results of an ongoing research project on Internet-enabled process-based modelling of extended manufacturing enterprises. It is proposed to apply the Open System Architecture for CIM (CIMOSA) modelling framework alongside object-oriented Petri Net models of enterprise processes and object-oriented techniques for extended enterprise modelling. The main features of the proposed approach are described and some components discussed. Elementary examples of ...

  4. Results of an ECE Varying Degrees of Corrosion and Time of Application of the Technique

    Directory of Open Access Journals (Sweden)

    Espericueta-González D.E.

    2012-10-01

    The objective of Electrochemical Chloride Extraction (ECE) is to remove the chloride ions (Cl-) which are embedded in concrete, since these ions are considered to accelerate the corrosion process in the steel reinforcement. The ECE is an electrochemical technique in which Cl- ions are transported to the outside of the concrete by means of an electric field. In this paper we present the results for mortar samples made in the laboratory. The specimens were previously contaminated with 2% NaCl by mass of cement, and stored in water curing tanks under constant laboratory conditions of 95% RH for one and sixty days. Afterward, the mortars underwent an ECE with treatment times of 15, 30 and 60 days. The results obtained under these conditions show that increasing the duration of the ECE removes a greater amount of Cl- ions from the mortar. It should be noted that the extent of the corroded reinforcement is a critical variable in the effectiveness of the technique.

  5. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.; Aziz, Khalid

    2001-08-23

    Research results for the second year of this project on the development of improved modeling techniques for non-conventional (e.g., horizontal, deviated or multilateral) wells were presented. The overall program entails the development of enhanced well modeling and general simulation capabilities. A general formulation for black-oil and compositional reservoir simulation was presented.

  6. Supramalleolar osteotomies for degenerative joint disease of the ankle joint: indication, technique and results.

    Science.gov (United States)

    Barg, Alexej; Pagenstert, Geert I; Horisberger, Monika; Paul, Jochen; Gloyer, Marcel; Henninger, Heath B; Valderrabano, Victor

    2013-09-01

    Patients with varus or valgus hindfoot deformities usually present with asymmetric ankle osteoarthritis. In-vitro biomechanical studies have shown that varus or valgus hindfoot deformity may lead to altered load distribution in the tibiotalar joint which may result in medial (varus) or lateral (valgus) tibiotalar joint degeneration in the short or medium term. The treatment of asymmetric ankle osteoarthritis remains challenging, because more than half of the tibiotalar joint surface is usually preserved. Therefore, joint-sacrificing procedures like total ankle replacement or ankle arthrodesis may not be the most appropriate treatment options. The short- and midterm results following realignment surgery are very promising, with substantial pain relief and functional improvement observed post-operatively. In this review article we describe the indications, surgical techniques, and results of realignment surgery of the ankle joint in the current literature.

  7. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
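
    The abstract describes varying an additional effective diffusivity until the computed and experimental profiles match, with FACETS::Core and DAKOTA handling the transport solve and the optimization. A minimal stand-in for that idea, using a toy one-dimensional steady diffusion profile and scipy in place of those tools (all profiles, sources and diffusivities below are invented for illustration), might look like:

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy 1-D steady-state particle balance on x in [0, 1] (all numbers invented):
    #   flux Gamma(x) = integral_0^x S dx',  dn/dx = -Gamma / (D_model + D_add),  n(1) = n_edge
    x = np.linspace(0.0, 1.0, 200)
    S = np.ones_like(x)                      # constant particle source
    D_model = 0.5 + 2.0 * x**2               # stand-in for the physics-model diffusivity

    def cumtrapz0(y, xgrid):
        """Cumulative trapezoidal integral, starting from 0 at xgrid[0]."""
        return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(xgrid))))

    def density_profile(D_add, n_edge=1.0):
        gamma = cumtrapz0(S, x)              # flux from the integrated source
        f = gamma / (D_model + D_add)        # equals -dn/dx
        F = cumtrapz0(f, x)
        return n_edge + (F[-1] - F)          # n(x) = n_edge + integral_x^1 f dx'

    # Synthetic "experimental" profile generated with a hidden additional diffusivity of 0.8:
    rng = np.random.default_rng(1)
    n_exp = density_profile(0.8) + 0.01 * rng.standard_normal(x.size)

    # Vary the additional diffusivity until the computed profile best matches the "data".
    res = minimize_scalar(lambda d: np.sum((density_profile(d) - n_exp) ** 2),
                          bounds=(0.0, 5.0), method="bounded")
    print(f"recovered additional diffusivity: {res.x:.3f}")
    ```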

  8. Evolution of Modelling Techniques for Service Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Mikit Kanakia

    2014-07-01

    Full Text Available Service-oriented architecture (SOA is a software design and architecture design pattern based on independent pieces of software providing functionality as services to other applications. The benefit of SOA in the IT infrastructure is to allow parallel use and data exchange between programs which are services to the enterprise. Unified Modelling Language (UML is a standardized general-purpose modelling language in the field of software engineering. The UML includes a set of graphic notation techniques to create visual models of object-oriented software systems. We want to make UML available for SOA as well. SoaML (Service oriented architecture Modelling Language is an open source specification project from the Object Management Group (OMG, describing a UML profile and meta-model for the modelling and design of services within a service-oriented architecture. BPMN was also extended for SOA but there were few pitfalls. There is a need of a modelling framework which dedicated to SOA. Michael Bell authored a framework called Service Oriented Modelling Framework (SOMF which is dedicated for SOA.

  9. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore truncation) with their algorithms and their encoding in the MATRED software, (4) the application of model reduction methods to the GENEC and ETNA cells and to a medium size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.

  10. Surgery for left ventricular aneurysm after myocardial infarction: technique selection and results assessment

    Institute of Scientific and Technical Information of China (English)

    CHEN Xin; QIU Zhi-bing; XU Ming; LIU Le-le; JIANG Ying-shuo; WANG Li-ming

    2012-01-01

    Background The most appropriate surgical approach for patients with post-infarction left ventricular (LV) aneurysm remains undetermined. We compared the efficacy of the linear versus patch repair techniques, and investigated the mid-term changes of LV geometry and cardiac function, for repair of LV aneurysms. Methods We reviewed the records of 194 patients who had surgery for a post-infarction LV aneurysm between 1998 and 2010. Short-term and mid-term outcomes, including complications, cardiac function and mortality, were assessed. LV end-diastolic and systolic dimensions (LVEDD and LVESD), LV end-diastolic and end-systolic volume indexes (LVEDVI and LVESVI) and LV ejection fraction (LVEF) were measured on pre-operative and follow-up echocardiography. Results Overall in-hospital mortality was 4.12%, and major morbidity showed no significant differences between the two groups. Multivariate analysis identified preoperative left ventricular end-diastolic pressure >20 mmHg, low cardiac output and aortic clamping time >2 hours as risk factors for early mortality. Follow-up revealed that LVEF improved from 37% pre-operation to 45% 12 months post-operation in the patch group (P=0.008), and from 44% pre-operation to 40% 12 months post-operation in the linear group (P=0.032). In contrast, the LVEDVI and LVESVI in the linear group were significantly reduced immediately after the operation, and increased again at follow-up. However, in the patch group, the LVEDVI and LVESVI were significantly reduced at follow-up. And there were significant differences in the correct value changes of LVEF and left ventricular remodeling between the linear repair and patch groups. Conclusions Persistent reduction of LV dimensions after the patch repair procedure seems to be a procedure-related problem. The choice of the technique should be tailored on an individual basis and surgeon's preference. The patch remodeling technique results in a better LVEF improvement, further significant reductions in LV dimensions

  11. VNIR spectral modeling of Mars analogue rocks: first results

    Science.gov (United States)

    Pompilio, L.; Roush, T.; Pedrazzi, G.; Sgavetti, M.

    Knowledge regarding the surface composition of Mars and other bodies of the inner solar system is fundamental to understanding of their origin, evolution, and internal structures. Technological improvements of remote sensors and associated implications for planetary studies have encouraged increased laboratory and field spectroscopy research to model the spectral behavior of terrestrial analogues for planetary surfaces. This approach has proven useful during Martian surface and orbital missions, and petrologic studies of Martian SNC meteorites. Thermal emission data were used to suggest two lithologies occurring on Mars surface: basalt with abundant plagioclase and clinopyroxene and andesite, dominated by plagioclase and volcanic glass [1,2]. Weathered basalt has been suggested as an alternative to the andesite interpretation [3,4]. Orbital VNIR spectral imaging data also suggest the crust is dominantly basaltic, chiefly feldspar and pyroxene [5,6]. A few outcrops of ancient crust have higher concentrations of olivine and low-Ca pyroxene, and have been interpreted as cumulates [6]. Based upon these orbital observations future lander/rover missions can be expected to encounter particulate soils, rocks, and rock outcrops. Approaches to qualitative and quantitative analysis of remotely-acquired spectra have been successfully used to infer the presence and abundance of minerals and to discover compositionally associated spectral trends [7-9]. Both empirical [10] and mathematical [e.g. 11-13] methods have been applied, typically with full compositional knowledge, to chiefly particulate samples and as a result cannot be considered as objective techniques for predicting the compositional information, especially for understanding the spectral behavior of rocks. Extending the compositional modeling efforts to include more rocks and developing objective criteria in the modeling are the next required steps. This is the focus of the present investigation. We present results of

  12. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    Science.gov (United States)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
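
    As a rough illustration of the distinction the abstract draws between the current least squares (equation-error) estimator and an output-error estimator, the sketch below fits a toy first-order discrete model in both ways. The dynamics, noise level and input signal are assumptions for the demo and have nothing to do with the actual MIT top-oil temperature model.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    a_true, b_true = 0.95, 0.08                     # toy first-order dynamics (assumed)
    u = rng.random(500)                             # input (e.g. a normalized load signal)
    T = np.zeros(501)
    for k in range(500):
        T[k + 1] = a_true * T[k] + b_true * u[k]
    y = T + 0.02 * rng.standard_normal(T.size)      # noisy measurements

    # Equation-error (ordinary least squares): regress y[k+1] on (y[k], u[k]).
    A = np.column_stack([y[:-1], u])
    theta_ls, *_ = np.linalg.lstsq(A, y[1:], rcond=None)

    # Output-error: simulate the model and minimize the simulation error instead.
    def sim(theta):
        a, b = theta
        Ts = np.zeros_like(T)
        for k in range(500):
            Ts[k + 1] = a * Ts[k] + b * u[k]
        return Ts

    res = least_squares(lambda th: sim(th) - y, x0=[0.9, 0.1])
    print("equation-error estimate:", theta_ls, " output-error estimate:", res.x)
    ```

    Under measurement noise the equation-error estimate tends to be biased, while the output-error fit stays closer to the true values, which is consistent with the advantages reported for the output-error technique above.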

  13. System identification and model reduction using modulating function techniques

    Science.gov (United States)

    Shen, Yan

    1993-01-01

    Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are initiated for continuous-time system identification using Fourier type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of above algorithms is demonstrated by comparing them with the LS/MFT and popular predicting error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended into MIMO transfer function system identification problems. The impact due to the discrepancy in bandwidths and gains among subsystem is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high-noise rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated and the AWLS is found to be strongly favored in almost all facets.

  14. LONG TERM FOLLOW UP RESULTS OF RUPTURE TENDO CALCANEUM TREATED BY LINDHOLM TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Sibaji

    2015-11-01

    Full Text Available INTRODUCTION: Rupture of the tendo calcaneum is a common problem. There are proponents of both conservative and operative methods. Inadequate strength and re-ruptures are frequent. To address both problems we have chosen the Lindholm technique and have been using it for the last 20 years with very good results. MATERIALS AND METHODS: From January 1994 to August 2013, 112 consecutive patients were treated by this method; 85 cases were fresh ruptures, 23 were neglected ruptures and four cases were re-ruptures after an operation done elsewhere. The torn tendo calcaneum was repaired with a Kessler suture and then augmented with two 8 cm by 1 cm turn-down flaps of gastrosoleus aponeurosis. Skin suture was done with utmost care. A below-knee plaster cast was applied with the ankle in equinus for four weeks, followed by gradual weight bearing with a heel-raised shoe for six months. RESULTS: All patients went back to their pre-injury activity level. In four patients there were superficial skin infections, which healed without skin necrosis. One patient needed a rotation flap. Evaluation was done with the modified Rupp score. It was found to be excellent in 47% of cases, good in 43% of cases and fair in 8% of cases. CONCLUSION: The Lindholm technique was originally described for neglected cases; we used it in all cases to avoid any complication in fresh cases and found it universally successful.

  15. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques, and then it also explores the model order reduction approach and its significance. The traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. They are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a unique technique of generating the genetic design from the tree-structured transfer function obtained from a Bond Graph. This research work combines bond graphs for model representation with genetic programming for exploring different ideas in the design space; the tree-structured transfer function results from replacing typical bond graph elements with their impedance equivalents, specifying impedance laws for the Bond Graph multiport. The tree-structured form thus obtained from the Bond Graph is applied for generating the genetic tree. Application studies will identify key issues important for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function obtained by the conventional and Bond Graph methods are analyzed, and then an approach towards model order reduction is followed. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th-order high-pass filter [1] with different approaches. The model order reduction technique developed in this paper has the least reduction error and, secondly, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and by the Bond Graph method are compared and
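
    The abstract mentions the balanced realization (Moore) method among the model order reduction techniques compared. A compact sketch of balanced truncation applied to an 11th-order high-pass filter is shown below; the Butterworth filter merely stands in for the filter of reference [1], whose coefficients are not given here, and the reduced order of 5 is arbitrary.

    ```python
    import numpy as np
    from scipy import signal, linalg

    # 11th-order analog high-pass Butterworth filter as the full-order test system
    # (a stand-in, not the paper's actual filter [1]).
    b, a = signal.butter(11, 1.0, btype="highpass", analog=True, output="ba")
    A, B, C, D = signal.tf2ss(b, a)

    def psd_sqrt(M):
        """Square-root factor L with L @ L.T = M for a (numerically) PSD matrix."""
        u, s, _ = np.linalg.svd((M + M.T) / 2.0)
        return u * np.sqrt(np.maximum(s, 0.0))

    def balanced_truncation(A, B, C, D, r):
        """Reduce a stable state-space model to order r by balanced (Moore) truncation."""
        P = linalg.solve_continuous_lyapunov(A, -B @ B.T)      # controllability gramian
        Q = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)    # observability gramian
        Lp, Lq = psd_sqrt(P), psd_sqrt(Q)
        U, s, Vt = np.linalg.svd(Lq.T @ Lp)                    # Hankel singular values in s
        S_half = np.diag(1.0 / np.sqrt(s[:r]))
        T = Lp @ Vt.T[:, :r] @ S_half                          # projection onto balanced states
        Ti = S_half @ U[:, :r].T @ Lq.T
        return Ti @ A @ T, Ti @ B, C @ T, D, s

    Ar, Br, Cr, Dr, hsv = balanced_truncation(A, B, C, D, r=5)
    w = np.logspace(-2, 2, 400)
    _, H_full = signal.freqresp(signal.StateSpace(A, B, C, D), w)
    _, H_red = signal.freqresp(signal.StateSpace(Ar, Br, Cr, Dr), w)
    print("max frequency-response error:", np.max(np.abs(H_full - H_red)))
    ```

    The vector `hsv` of Hankel singular values indicates how much each balanced state contributes to the input-output behavior and is the usual guide for choosing the reduced order.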

  16. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Shahid Ali

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques, and then it also explores the model order reduction approach and its significance. The traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. They are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a unique technique of generating the genetic design from the tree-structured transfer function obtained from a Bond Graph. This research work combines bond graphs for model representation with genetic programming for exploring different ideas in the design space; the tree-structured transfer function results from replacing typical bond graph elements with their impedance equivalents, specifying impedance laws for the Bond Graph multiport. The tree-structured form thus obtained from the Bond Graph is applied for generating the genetic tree. Application studies will identify key issues important for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function obtained by the conventional and Bond Graph methods are analyzed, and then an approach towards model order reduction is followed. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th-order high-pass filter [1] with different approaches. The model order reduction technique developed in this paper has the least reduction error and, secondly, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and by the Bond Graph method are compared and

  17. Crop Yield Forecasted Model Based on Time Series Techniques

    Institute of Scientific and Technical Information of China (English)

    Li Hong-ying; Hou Yan-lin; Zhou Yong-juan; Zhao Hui-ming

    2012-01-01

    Traditional studies on potential yield mainly referred to attainable yield: the maximum yield which could be reached by a crop in a given environment. A new concept of crop yield under average climate conditions, which is affected by the advancement of science and technology, was defined in this paper. Based on the new concept of crop yield, time series techniques relying on past yield data were employed to set up a forecasting model. The model was tested using average grain yields of Liaoning Province in China from 1949 to 2005. The testing combined dynamic n-choosing and micro tendency rectification, and the average forecasting error was 1.24%. A turning point might occur in the trend line of yield change, in which case an inflexion model was used to handle the yield turning point.
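
    As a simplified stand-in for the forecasting procedure sketched above (the paper's dynamic n-choosing and micro tendency rectification steps are not reproduced), the following combines a least-squares technology trend with an AR(1) model of the detrended residuals on a synthetic yield series; the data are invented, not the Liaoning records.

    ```python
    import numpy as np

    # Synthetic yearly yields standing in for a 1949-2005 series (NOT the real data).
    rng = np.random.default_rng(3)
    years = np.arange(1949, 2006)
    yield_t = 1.5 + 0.05 * (years - 1949) + rng.normal(0.0, 0.12, years.size)

    train, test = yield_t[:-5], yield_t[-5:]         # hold out the last 5 years
    t = np.arange(train.size)

    # 1) technology trend: ordinary least-squares line through the training yields
    slope, intercept = np.polyfit(t, train, 1)
    resid = train - (slope * t + intercept)

    # 2) AR(1) model of the detrended residuals
    phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])

    # 3) forecast = trend + decaying AR(1) residual for the holdout years
    steps = np.arange(1, test.size + 1)
    forecast = slope * (t[-1] + steps) + intercept + resid[-1] * phi**steps
    mape = np.mean(np.abs((forecast - test) / test)) * 100.0
    print(f"holdout mean absolute percentage error: {mape:.2f}%")
    ```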

  18. Comparison of NDT techniques to evaluate CFRP. Results obtained in a MAIzfp round robin test

    Energy Technology Data Exchange (ETDEWEB)

    Grosse, Christian U. [Technische Univ. Muenchen (Germany). Chair of Non-destructive Testing; Goldammer, Matthias; Grager, Jan-Carl [Siemens AG Corporate Technology, Muenchen (Germany); and others

    2016-10-01

    Fiber reinforced polymeric materials are used for lightweight constructions and are an integral part of cars, airplanes or rotor blades of wind turbines. Nondestructive testing (NDT) methods play an increasing role concerning the manufacturing process and the inspection during lifetime. The selection of the best NDT technique for a certain application depends - of course - on many factors including the type, position and size of the defect to be detected but also on secondary issues like accessibility, automation, testing costs, reliability and resolution to mention only some. For the more technical-scientific part of these issues, the determination of the probability of detection (PoD) plays a significant role. Early in the design process questions should be raised concerning the probability with which certain attribute of interest (a defect that has an effect on the structural behavior) can be detected (and localized) in a certain construction. Several defect types have been identified to be critical like impact damages, undulations and porosity. Test samples out of differently processed Carbon Fiber-Reinforced Polymers (CFRP) as used in the automotive or aeronautical industry have been produced including defects of different type and size. In order to determine the PoD and to check whether a technique is applicable the different partners applied a broad variety of selected NDT techniques including Micro CT, Ultrasound (including phased-array and air-coupled UT), Active Thermography, Eddy Current, Vibration and Visual Analysis and Local Acoustic Resonance Spectroscopy (LARS). The presentation will summarize some of the results of the experiments and ongoing data analysis.

  19. THE IMPROVEMENT OF THE COMPUTATIONAL PERFORMANCE OF THE ZONAL MODEL POMA USING PARALLEL TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Yao Yu

    2014-01-01

    Full Text Available The zonal modeling approach is a new simplified computational method used to predict temperature distribution, energy use in multi-zone buildings and indoor airflow thermal behavior. Although this approach is known to use fewer computer resources than CFD models, the computational time is still an issue, especially when buildings are characterized by complicated geometry and indoor layouts of furnishings. Therefore, applying a new computing technique to current zonal models in order to reduce the computational time is a promising way to further improve model performance and promote the wide application of zonal models. Parallel computing techniques provide a way to accomplish these purposes. Unlike the serial computations that are commonly used in current zonal models, these parallel techniques decompose the serial program into several discrete instructions which can be executed simultaneously on different processors/threads. As a result, the computational time of the parallelized program can be significantly reduced compared to that of the traditional serial program. In this article, a parallel computing technique, Open Multi-Processing (OpenMP), is applied to the zonal model Pressurized zOnal Model with Air diffuser (POMA) in order to improve the model's computational performance, including the reduction of computational time and the investigation of the model's scalability.

  20. Laparoscopic vasectomy in African savannah elephant (Loxodonta africana); surgical technique and results.

    Science.gov (United States)

    Marais, Hendrik J; Hendrickson, Dean A; Stetter, Mark; Zuba, Jeffery R; Penning, Mark; Siegal-Willott, Jess; Hardy, Christine

    2013-12-01

    Several small, enclosed reserves in southern Africa are experiencing significant elephant population growth, which has resulted in associated environmental damage and changes in biodiversity. Although several techniques exist to control elephant populations, e.g., culling, relocation, and immunocontraception, the technique of laparoscopic vasectomy of free-ranging bull elephants was investigated. Bilateral vasectomies were performed in 45 elephants. Of these elephants, one died within 24 hr of recovery and two had complications during surgery but recovered uneventfully. Histologic examination confirmed the resected tissue as ductus deferens in all the bulls. Most animals recovered uneventfully and showed no abnormal behavior after surgery. Complications recorded included incisional dehiscence, 1 full-thickness and 2 partial-thickness lacerations of the large intestine, and initial sling-associated complications, for example, deep radial nerve paresis. One bull was found dead 6 weeks after surgery without showing any prior abnormal signs. Vasectomy in free-ranging African bull elephants may be effectively performed in their normal environment. The surgical procedure can be used as a realistic population management tool in free-ranging elephants without major anesthetic, surgical, or postoperative complications.

  1. Creation of a Neovagina by Laparoscopic Modified Vecchietti Technique: Anatomic and Functional Results.

    Science.gov (United States)

    Baptista, Eduardo; Carvalho, Giselda; Nobre, Carlos; Dias, Isabel; Torgal, Isabel

    2016-09-01

    Purpose To evaluate the anatomic and functional results of a laparoscopic modified Vecchietti technique for the creation of a neovagina in patients with congenital vaginal aplasia. Methods Retrospective study of nine patients with congenital vaginal aplasia submitted to the laparoscopic Vecchietti procedure, in our department, between 2006 and 2013. The anatomical results were evaluated by assessing the length, width and epithelialization of the neovagina at the postoperative visits. The functional outcome was evaluated using the Rosen Female Sexual Function Index (FSFI) questionnaire and comparing the patients' results to those of a control group of 20 healthy women. The statistical analysis was performed using SPSS Statistics version 19.0 (IBM, Armonk, NY, USA), Student t-test, Mann-Whitney U test and Fisher exact test. Results The condition underlying the vaginal aplasia was Mayer-Rokitansky-Küster-Hauser syndrome in eight cases, and androgen insensitivity syndrome in one case. The average preoperative vaginal length was 2.9 cm. At surgery, the mean age of the patients was 22.2 years. The surgery was performed successfully in all patients and no intra or postoperative complications were recorded. At the first postoperative visit (6 to 8 weeks after surgery), the mean vaginal length was 8.1 cm. In all cases, the neovagina was epithelialized and had an appropriate width. The mean FSFI total and single domain scores did not differ significantly from those of the control group: 27.5 vs. 30.6 (total); 4.0 vs. 4.2 (desire); 4.4 vs. 5.2 (arousal); 5.2 vs. 5.3 (lubrication); 4.2 vs. 5.0 (orgasm); 5.3 vs. 5.5 (satisfaction) and 4.4 vs. 5.4 (comfort). Conclusions This modified laparoscopic Vecchietti technique is a simple, safe and effective procedure, which allows patients with congenital vaginal aplasia to have a satisfactory sexual activity, comparable to that of normal controls.

  2. Results of Patello-Tibial Cerclage Wire Technique for Comminuted Patella Fractures Treated with Partial Patellectomy

    Directory of Open Access Journals (Sweden)

    Ender Alagöz

    2014-12-01

    Full Text Available Aim: The partial patellectomy and patellotibial cerclage technique used in comminuted inferior pole patellar fractures was evaluated and the results were discussed. Methods: Thirteen patients who had undergone partial distal patellar excision were evaluated in the study. In all patients, the inferior pole of the patella was resected, the patellar tendon was sutured to the proximal patellar fragment and patellotibial cerclage was performed. At the last visit, the patients were evaluated using measurement of the distance between the superior pole of the patella and the tibial tubercle, the Lysholm knee scoring scale, knee range of motion and thigh circumference measurement. Results: The mean flexion value was 131.10 (±4.6) in normal knees and 117.20 (±8.0) in operated knees. The mean thigh diameter was 49.5 (±3.7) cm and 46.4 (±4.5) cm in normal knees and in operated knees, respectively. The mean Lysholm knee score in the patient group was 84.3 (±17.1) points. The mean distance between the superior pole of the patella and the tibial tubercle was 10.6 (±1.0) cm in normal knees and 10.1 (±1.2) cm in operated knees. The extensor mechanism was intact in all patients and no revision surgery was performed. Conclusion: The patellotibial cerclage technique performed after partial patellectomy permits early motion and protects patients from the harmful effects of immobilization, and good functional results are obtained if patients start early knee motion.

  3. RESULTS OF INTERBANK EXCHANGE RATES FORECASTING USING STATE SPACE MODEL

    Directory of Open Access Journals (Sweden)

    Muhammad Kashif

    2008-07-01

    Full Text Available This study evaluates the performance of three alternative models for forecasting the daily interbank exchange rate of the U.S. dollar measured in Pak rupees. Simple ARIMA models and complex models such as GARCH-type models and a state space model are discussed and compared. Four different measures are used to evaluate forecasting accuracy. The main result is that the state space model provides the best performance among all the models.
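
    The abstract states that four accuracy measures were used but does not name them. The sketch below computes four measures that are commonly used for this purpose (RMSE, MAE, MAPE and Theil's U); the exchange-rate values and the two candidate forecasts are made up purely to show the comparison mechanics.

    ```python
    import numpy as np

    def forecast_accuracy(actual, forecast):
        """Common accuracy measures for comparing competing forecasting models."""
        actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
        err = forecast - actual
        rmse = np.sqrt(np.mean(err**2))
        mae = np.mean(np.abs(err))
        mape = np.mean(np.abs(err / actual)) * 100.0
        # Theil's U: forecast errors relative to a naive (no-change) benchmark.
        theils_u = np.sqrt(np.mean(err[1:]**2) / np.mean((actual[1:] - actual[:-1])**2))
        return {"RMSE": rmse, "MAE": mae, "MAPE(%)": mape, "Theil U": theils_u}

    # Hypothetical daily PKR/USD rates and two competing out-of-sample forecasts:
    rates = np.array([60.10, 60.15, 60.12, 60.20, 60.25, 60.22, 60.30])
    arima_fc = np.array([60.08, 60.14, 60.13, 60.18, 60.23, 60.24, 60.28])
    state_space_fc = np.array([60.10, 60.15, 60.13, 60.19, 60.24, 60.23, 60.29])
    for name, fc in [("ARIMA", arima_fc), ("state space", state_space_fc)]:
        print(name, forecast_accuracy(rates, fc))
    ```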

  4. A formal model for integrity protection based on DTE technique

    Institute of Scientific and Technical Information of China (English)

    JI Qingguang; QING Sihan; HE Yeping

    2006-01-01

    In order to provide integrity protection for a secure operating system that satisfies the structured protection class requirements, a DTE-technique-based integrity protection formal model is proposed after the implications and structures of the integrity policy have been analyzed in detail. This model consists of some basic rules for configuring DTE and a state transition model, which are used to specify how the domains and types are set and how security invariants obtained from the initial configuration are maintained in the process of system transition, respectively. In this model, ten invariants are introduced; in particular, some new invariants dealing with information flow are proposed, and their relations with corresponding invariants described in the literature are also discussed. The thirteen transition rules with well-formed atomicity are presented in a well-operational manner. The basic security theorems corresponding to these invariants and transition rules are proved. The rationale for proposing the invariants is further explained by analyzing the differences between this model and the ones described in the literature. Last but not least, future work is outlined; in particular, it is pointed out that it is possible to use this model to analyze SE-Linux security.

  5. Identification techniques for phenomenological models of hysteresis based on the conjugate gradient method

    Energy Technology Data Exchange (ETDEWEB)

    Andrei, Petru [Electrical and Computer Engineering Department, Florida State Unviersity, Tallahassee, FL 32310 (United States) and Electrical and Computer Engineering Department, Florida A and M Unviersity, Tallahassee, FL 32310 (United States)]. E-mail: pandrei@eng.fsu.edu; Oniciuc, Liviu [Electrical and Computer Engineering Department, Florida State Unviersity, Tallahassee, FL 32310 (United States); Stancu, Alexandru [Faculty of Physics, ' Al. I. Cuza' University, Iasi 700506 (Romania); Stoleriu, Laurentiu [Faculty of Physics, ' Al. I. Cuza' University, Iasi 700506 (Romania)

    2007-09-15

    An identification technique for the parameters of phenomenological models of hysteresis is presented. The basic idea of our technique is to set up a system of equations for the parameters of the model as a function of known quantities on the major or minor hysteresis loops (e.g. coercive force, susceptibilities at various points, remanence), or other magnetization curves. This system of equations can be either over or underspecified and is solved by using the conjugate gradient method. Numerical results related to the identification of parameters in the Energetic, Jiles-Atherton, and Preisach models are presented.
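
    As a schematic version of the identification idea described above, the sketch below fits the parameters of a simple tanh-shaped loop branch (not the Energetic, Jiles-Atherton or Preisach models) to a handful of assumed loop points using the conjugate gradient method; all field and magnetization values are invented for the demo.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Illustrative phenomenological ascending branch: M(H) = Ms * tanh((H - Hc) / a)
    def ascending_branch(H, Ms, Hc, a):
        return Ms * np.tanh((H - Hc) / a)

    # "Known quantities" sampled on the major loop (assumed values for the sketch):
    H_pts = np.array([-2.0, 0.0, 1.0, 3.0, 6.0])         # applied field samples
    M_meas = np.array([-0.95, -0.46, 0.00, 0.76, 0.99])  # corresponding magnetization (normalized)

    def objective(p):
        Ms, Hc, a = p
        return np.sum((ascending_branch(H_pts, Ms, Hc, a) - M_meas) ** 2)

    # Conjugate-gradient solution of the (here over-determined) identification problem.
    res = minimize(objective, x0=[1.2, 0.5, 3.0], method="CG")
    Ms, Hc, a = res.x
    print(f"Ms={Ms:.3f}, Hc={Hc:.3f}, a={a:.3f}")
    ```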

  6. Application of a systematic finite-element model modification technique to dynamic analysis of structures

    Science.gov (United States)

    Robinson, J. C.

    1982-01-01

    A systematic finite-element model modification technique has been applied to two small problems and a model of the main wing box of a research drone aircraft. The procedure determines the sensitivity of the eigenvalues and eigenvector components to specific structural changes, calculates the required changes and modifies the finite-element model. Good results were obtained where large stiffness modifications were required to satisfy large eigenvalue changes. Sensitivity matrix conditioning problems required the development of techniques to insure existence of a solution and accelerate its convergence. A method is proposed to assist the analyst in selecting stiffness parameters for modification.
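
    A minimal sketch of the sensitivity-based modification step described above, using a two-degree-of-freedom spring-mass system instead of the drone wing-box model; the masses, stiffnesses and target eigenvalue shift are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    # Two-DOF spring-mass chain: ground - k1 - m1 - k2 - m2 (illustrative only).
    m1 = m2 = 1.0
    k1, k2 = 100.0, 150.0
    M = np.diag([m1, m2])

    def stiffness(k1, k2):
        return np.array([[k1 + k2, -k2],
                         [-k2,      k2]])

    lam, phi = eigh(stiffness(k1, k2), M)        # eigenvalues lam = omega^2, mass-normalized modes

    # Sensitivity of the first eigenvalue to k1:  dlam/dk1 = phi1^T (dK/dk1) phi1
    dK_dk1 = np.array([[1.0, 0.0], [0.0, 0.0]])
    sens = phi[:, 0] @ dK_dk1 @ phi[:, 0]

    # First-order stiffness change required to raise the first eigenvalue by 10%:
    target_shift = 0.10 * lam[0]
    dk1 = target_shift / sens
    lam_new, _ = eigh(stiffness(k1 + dk1, k2), M)
    print(f"predicted shift {target_shift:.2f}, achieved {lam_new[0] - lam[0]:.2f}")
    ```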

  7. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-05-01

    In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes.

  8. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than a traditional simulation. But a key challenge of ABMS is the difficulty of validation and verification. Because of frequent emergence patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems to be necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  9. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than a traditional simulation. But a key challenge of ABMS is the difficulty of validation and verification. Because of frequent emergence patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems to be necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  10. Cardiac CT for the assessment of chest pain: Imaging techniques and clinical results

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Hans-Christoph, E-mail: christoph.becker@med.uni-muenchen.de [Ludwig-Maximilians-University, Grosshadern Clinic, Department of Clinical Radiology, Marchioninistr. 15, 81377 Munich (Germany); Johnson, Thorsten [Ludwig-Maximilians-University, Grosshadern Clinic, Department of Clinical Radiology, Marchioninistr. 15, 81377 Munich (Germany)

    2012-12-15

    Immediate and efficient risk stratification and management of patients with acute chest pain in the emergency department is challenging. Traditional management of these patients includes serial ECG, laboratory tests and, further on, radionuclide perfusion imaging or ECG treadmill testing. Due to the advances of multi-detector CT technology, dedicated coronary CT angiography provides the potential to rapidly and reliably diagnose or exclude acute coronary artery disease. Life-threatening causes of chest pain, such as aortic dissection and pulmonary embolism, can simultaneously be assessed with a single scan, sometimes referred to as a “triple rule out” scan. With appropriate patient selection, cardiac CT can accurately diagnose heart disease or other sources of chest pain, markedly decrease health care costs, and reliably predict clinical outcomes. This article reviews imaging techniques and clinical results for CT used to evaluate patients with chest pain entering the emergency department.

  11. An evaluation of the partial reflection technique and results from the winter 1971 - 1972 D region

    Science.gov (United States)

    Dasilva, L. C.; Bowhill, S. A.

    1974-01-01

    Fundamental physical and chemical processes, and measurement techniques on the D region are reviewed. Design considerations about a partial-reflection system are made, and the main characteristics of the partial-reflection system at the University of Illinois are presented. The nature of the partial reflections is discussed, particularly reflections produced by gradients in electron density and by random fluctuations in a locally homogeneous random medium. Possible reasons for disagreement between partial reflections and rocket measurements are discussed. Some suggestions are made to improve partial-reflection data reduction, including the use of only maximums of the reflections and deconvolution of the data. The results of partial-reflection measurements at Wallops Island, Virginia during the 1971-1972 winter are presented and compared to rocket measurements.

  12. Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique

    OpenAIRE

    Hussein Rappel; Aghil Yousefi-Koma; Jalil Jamali; Ako Bahari

    2014-01-01

    This paper presents a numerical model of Lamb wave propagation in a homogeneous steel plate using the elastodynamic finite integration technique (EFIT) as well as its validation against analytical results. The Lamb wave method is a long-range inspection technique which is considered to have a unique future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate frequency to generate the waves for adequate transmission capab...

  13. Competitive elite golf: a review of the relationships between playing results, technique and physique.

    Science.gov (United States)

    Hellström, John

    2009-01-01

    Elite golfers commonly use fitness and technical training to become more competitive. The aim of this paper was to review the literature regarding the relationships between elite golfers' playing results, technique and physique. The competitive outcome is a direct function of the score. The three golf statistical measures that show the strongest correlations to scoring average are greens in regulation (GIR), scrambling, and putts per GIR. However, more detailed game statistics are needed where the distances to the targets are known before and after the strokes. Players affect ball displacement by controlling clubhead velocity and clubface angle during club and ball impact. X-factor studies have produced ambiguous results, possibly caused by different definitions of upper torso, rotation and top of backswing. Higher clubhead speed is generally associated with larger spinal rotation and shoulder girdle protraction at the top of the backswing. It is also associated with higher ground reaction forces and torques, a bottom-up and sequential increase of body segment angular velocities, a rapid increase of spinal rotation and a late adduction of the wrists during the downswing. Players can increase the clubhead speed generated by a swinging motion by actively adding a force couple. Wrist, elbow and shoulder force couple strategies should be differentiated when investigating the technique. Physical parameters such as anthropometrics, strength and flexibility are associated with skill level and clubhead speed. Current studies have investigated the linear correlation between arm and shaft lengths and clubhead speed, but a quadratic relationship may be stronger due to changes in moment of inertia. Fitness training can increase and perhaps decrease the clubhead speed and striking distance, depending on training methods and the player's fitness and level of skill. Future studies may focus on individual training needs and the relationship between physique, execution and its

  14. Advanced computer modeling techniques expand belt conveyor technology

    Energy Technology Data Exchange (ETDEWEB)

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  15. TESTING DIFFERENT SURVEY TECHNIQUES TO MODEL ARCHITECTONIC NARROW SPACES

    Directory of Open Access Journals (Sweden)

    A. Mandelli

    2017-08-01

    Full Text Available In the architectural survey field, a vast number of automated techniques has spread. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and not so common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan’s cathedral indoors. Tests have shown that the quality of the results is strongly affected by side-issues like the impossibility of following the theoretical ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  16. Testing Different Survey Techniques to Model Architectonic Narrow Spaces

    Science.gov (United States)

    Mandelli, A.; Fassi, F.; Perfetti, L.; Polari, C.

    2017-08-01

    In the architectural survey field, a vast number of automated techniques has spread. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and not so common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan's cathedral indoors. Tests have shown that the quality of the results is strongly affected by side-issues like the impossibility of following the theoretical ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  17. Determination of Complex-Valued Parametric Model Coefficients Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    A. M. Aibinu

    2010-01-01

    Full Text Available A new approach for determining the coefficients of a complex-valued autoregressive (CAR) model and a complex-valued autoregressive moving average (CARMA) model using the complex-valued neural network (CVNN) technique is discussed in this paper. The CAR and complex-valued moving average (CMA) coefficients which constitute a CARMA model are computed simultaneously from the adaptive weights and coefficients of the linear activation functions in a two-layered CVNN. The performance of the proposed technique has been evaluated using simulated complex-valued data (CVD) with three different types of activation functions. The results show that the proposed method can accurately determine the model coefficients provided that the network is properly trained. Furthermore, application of the developed CVNN-based technique to MRI k-space reconstruction results in images with improved resolution.
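
    For comparison with the CVNN approach described above, a much simpler baseline is to estimate CAR coefficients by complex-valued least squares. The sketch below does this on synthetic CAR(2) data with assumed coefficients; it is not the authors' network, only a reference point.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    a_true = np.array([0.6 + 0.3j, -0.2 + 0.1j])    # CAR(2) coefficients (assumed for the demo)
    N = 2000
    x = np.zeros(N, dtype=complex)
    noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    for n in range(2, N):
        x[n] = a_true[0] * x[n - 1] + a_true[1] * x[n - 2] + noise[n]

    # Complex least-squares estimate of the CAR coefficients from lagged regressors.
    p = 2
    X = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    y = x[p:]
    a_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("true:", a_true, " estimated:", np.round(a_hat, 3))
    ```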

  18. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  19. Antegrade scrotal sclerotherapy of internal spermatic veins for varicocele treatment: technique, complications, and results

    Directory of Open Access Journals (Sweden)

    Alessandro Crestani

    2016-01-01

    Full Text Available Varicocele repair is mainly indicated in young adult patients with a clinically palpable varicocele and abnormal semen parameters. Varicocele treatment is associated with a significant improvement in sperm concentration, motility, morphology, and pregnancy rate. Antegrade scrotal sclerotherapy (ASS) represented one of the main alternatives to traditional inguinal or suprainguinal surgical ligation. This article reviews the use of ASS for varicocele treatment. We provide a brief overview of the history of the procedure and present our methods used in ASS. In addition, we review the complications and success of ASS, including our own retrospective data from treating 674 patients over the last 17 years. Herein, we analyze the ASS technique step by step and describe our results with an original modified technique and a long follow-up. Between December 1997 and December 2014, we performed 674 ASS procedures. Mean operative time was 14 min (range 9 to 50 min). No significant intraoperative complications were reported. Within 90 days of the procedure, postoperative complications were recorded in 49 (7.2%) patients overall. No major complications were recorded. A persistent/recurrent varicocele was detected in 40 (5.9%) cases. In 32/40 (80%) cases, patients showed preoperative grade III varicoceles. In patients with a low sperm number before surgery, sperm count improved from 13 × 10⁶ to 21 × 10⁶ ml⁻¹ (P < 0.001). The median percentage of progressive motile forms at 1 h improved from 25% to 45% (P < 0.001). The percentage of normal forms increased from 17% before surgery to 35% 1 year after the procedure (P < 0.001). In the subgroup of 168 infertile patients, 52 (31%) fathered offspring at a 12-month-minimum follow-up. Therefore, ASS is an effective minimally invasive treatment for varicocele with a low recurrence/persistence rate.

  20. Transmission resonance Raman spectroscopy: experimental results versus theoretical model calculations.

    Science.gov (United States)

    Gonzálvez, Alicia G; González Ureña, Ángel

    2012-10-01

    A laser spectroscopic technique is described that combines transmission and resonance-enhanced Raman inelastic scattering with low laser power. From a theoretical point of view, a model for the dependence of the Raman signal on the sample thickness is also presented. Essentially, the model considers the sample to be homogeneous and describes the underlying physics using only three parameters: the Raman cross-section, the laser-radiation attenuation cross-section, and the Raman signal attenuation cross-section. The model was applied successfully to describe the sample-size dependence of the Raman signal in both β-carotene standards and carrot roots. The present technique could be useful for direct, fast, and nondestructive investigations in food quality control and analytical or physiological studies of animal and human tissues.
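
    The abstract names the three model parameters but not the functional form. One plausible form consistent with that description, treating Raman generation at depth x with attenuation of the laser over x and of the Raman signal over the remaining thickness d - x, is sketched below; the cross-section values are placeholders and the expression is an assumption, not the authors' published formula.

    ```python
    import numpy as np

    def transmission_raman_signal(d, sigma_R, sigma_L, sigma_S, n_density=1.0, I0=1.0):
        """Relative transmission Raman signal from a homogeneous slab of thickness d.

        Assumed form (illustrative only):
            I(d) = I0 * sigma_R * n * integral_0^d exp(-sigma_L n x) exp(-sigma_S n (d - x)) dx
        """
        a = sigma_L * n_density            # attenuation rate of the exciting laser light
        b = sigma_S * n_density            # attenuation rate of the generated Raman signal
        d = np.asarray(d, dtype=float)
        integral = np.where(
            np.isclose(a, b),
            d * np.exp(-a * d),                                        # limit a -> b
            np.exp(-b * d) * (1.0 - np.exp(-(a - b) * d)) / (a - b),   # closed-form integral
        )
        return I0 * sigma_R * n_density * integral

    thickness = np.linspace(0.1, 5.0, 6)   # arbitrary units
    print(transmission_raman_signal(thickness, sigma_R=1e-3, sigma_L=0.8, sigma_S=0.3))
    ```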

  1. Antenna pointing system for satellite tracking based on Kalman filtering and model predictive control techniques

    Science.gov (United States)

    Souza, André L. G.; Ishihara, João Y.; Ferreira, Henrique C.; Borges, Renato A.; Borges, Geovany A.

    2016-12-01

    The present work proposes a new approach for an antenna pointing system for satellite tracking. Such a system uses the received signal to estimate the beam pointing deviation and then adjusts the antenna pointing. The present work has two contributions. First, the estimation is performed by a Kalman-filter-based conical scan technique. This technique uses the Kalman filter, avoiding the batch estimator, and applies a mathematical manipulation that avoids the linearization approximations. Second, a control technique based on model predictive control, together with an explicit state feedback solution, is obtained in order to reduce the computational burden. Numerical examples illustrate the results.
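
    A generic linear Kalman filter tracking a drifting two-axis pointing deviation is sketched below; the paper's specific conical-scan measurement manipulation and its MPC controller are not reproduced, and the noise covariances and direct-measurement assumption are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Random-walk model of the (az, el) pointing deviation with an assumed direct,
    # noisy measurement derived from the received-signal processing each scan period.
    F = np.eye(2)                      # state transition (slow drift)
    H = np.eye(2)                      # measurement matrix (assumed direct observation)
    Q = 1e-4 * np.eye(2)               # process noise covariance
    R = 5e-3 * np.eye(2)               # measurement noise covariance

    x_true = np.array([0.05, -0.02])   # initial true deviation (deg)
    x_est, P = np.zeros(2), np.eye(2)

    for k in range(200):
        x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)   # true drift
        z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)        # noisy measurement
        # predict
        x_est, P = F @ x_est, F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + K @ (z - H @ x_est)
        P = (np.eye(2) - K @ H) @ P

    print("true deviation:", np.round(x_true, 4), " estimate:", np.round(x_est, 4))
    ```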

  2. Photolysis frequency measurement techniques: results of a comparison within the ACCENT project

    Directory of Open Access Journals (Sweden)

    K. C. Clemitshaw

    2008-09-01

    Full Text Available An intercomparison of different radiometric techniques measuring atmospheric photolysis frequencies j(NO2), j(HCHO) and j(O1D) was carried out in a two-week field campaign in June 2005 at Jülich, Germany. Three double-monochromator based spectroradiometers (DM-SR), three single-monochromator based spectroradiometers with diode-array detectors (SM-SR) and seventeen filter radiometers (FR; ten j(NO2)-FR, seven j(O1D)-FR) took part in this comparison. For j(NO2), all spectroradiometer results agreed within ±3%. For j(HCHO), agreement was slightly poorer, between −8% and +4% of the DM-SR reference result. For the SM-SR, deviations were explained by poorer spectral resolutions and lower accuracies caused by decreased sensitivities of the photodiode arrays in a wavelength range below 350 nm. For j(O1D), the results were more complex, within +8% and −4%, with increasing deviations towards larger solar zenith angles for the SM-SR. The direction and the magnitude of the deviations were dependent on the technique of background determination. All j(NO2)-FR showed good linearity, with single calibration factors being sufficient to convert from output voltages to j(NO2). Measurements were feasible until sunset, and comparison with previous calibrations showed good long-term stability. For the j(O1D)-FR, conversion from output voltages to j(O1D) needed calibration factors and correction functions considering the influences of total ozone column and elevation of the sun. All instruments showed good linearity at photolysis frequencies exceeding about 10% of maximum values. At larger solar zenith angles, the agreement was non-uniform, with deviations explainable by insufficient correction functions. Comparison with previous calibrations for some j(O1D)-FR indicated

  3. Changes in selected biochemical indices resulting from various pre-sampling handling techniques in broilers.

    Science.gov (United States)

    Chloupek, Petr; Bedanova, Iveta; Chloupek, Jan; Vecerek, Vladimir

    2011-05-13

    Since it is not yet clear whether it is possible to satisfactorily avoid sampling-induced stress interference in poultry, more studies on the pattern of physiological response and detailed quantification of stress connected with the first few minutes of capture and pre-sampling handling in poultry are required. This study focused on detection of changes in the corticosterone level and concentrations of other selected biochemical parameters in broilers handled in two different manners during blood sampling (involving catching, carrying, restraint, and blood collection itself) that lasted for various time periods within the interval 30-180 seconds. Stress effects of pre-sampling handling were studied in a group (n = 144) of unsexed ROSS 308 broiler chickens aged 42 d. Handling (catching, carrying, restraint, and blood sampling itself) was carried out in a gentle (caught, held and carried carefully in an upright position) or rough (caught by the leg, held and carried with lack of care in inverted position) manner and lasted for 30 s, 60 s, 90 s, 120 s, 150 s, and 180 s. Plasma corticosterone, albumin, glucose, cholesterol, lactate, triglycerides and total protein were measured in order to assess the stress-induced changes to these biochemical indices following handling in the first few minutes of capture. Pre-sampling handling in a rough manner resulted in considerably higher plasma concentrations of all biochemical indices monitored when compared with gentle handling. Concentrations of plasma corticosterone after 150 and 180 s of handling were considerably higher (P technique. Concentrations of plasma lactate were also increased by prolonged handling duration. Handling for 90-180 seconds resulted in a highly significant elevation of lactate concentration in comparison with 30 s handling regardless of handling technique. Similarly to corticosterone concentrations, a strong positive correlation was found between plasma lactate and duration of pre-sampling handling

  4. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    Science.gov (United States)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
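
    The separable idea can be illustrated with a toy single-exponential-plus-background fit: for each trial value of the nonlinear rate constant, the linear amplitudes are obtained in closed form by linear least squares, so the outer search runs over one dimension instead of three. This is only a schematic of the separable least-squares principle under assumed model and variable names, not the authors' multi-tracer compartment formulation.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # synthetic time-activity curve: amplitude * exp(-k*t) + background + noise
        t = np.linspace(0, 60, 121)                       # minutes
        rng = np.random.default_rng(1)
        y = 5.0 * np.exp(-0.1 * t) + 0.5 + rng.normal(scale=0.05, size=t.size)

        def projected_residual(k):
            """For a fixed nonlinear parameter k, solve the linear amplitudes
            (exponential amplitude and constant background) by linear least squares
            and return the residual sum of squares."""
            basis = np.column_stack([np.exp(-k * t), np.ones_like(t)])
            coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)
            return np.sum((y - basis @ coeffs) ** 2)

        # one-dimensional search over the nonlinear parameter only
        res = minimize_scalar(projected_residual, bounds=(1e-4, 1.0), method="bounded")
        k_hat = res.x
        basis = np.column_stack([np.exp(-k_hat * t), np.ones_like(t)])
        amp_hat, bkg_hat = np.linalg.lstsq(basis, y, rcond=None)[0]
        print(k_hat, amp_hat, bkg_hat)                    # roughly 0.1, 5.0, 0.5

    Because the linear coefficients are eliminated analytically, an exhaustive or bounded search over the remaining nonlinear parameters becomes feasible, which is the property the abstract exploits.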

  5. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems

    Science.gov (United States)

    Timmis, Jon; Qwarnstrom, Eva E.

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414

  6. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    Science.gov (United States)

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.

  7. A Morphing Technique Applied to Lung Motions in Radiotherapy: Preliminary Results

    Directory of Open Access Journals (Sweden)

    R. Laurent

    2010-01-01

    Full Text Available Organ motion leads to dosimetric uncertainties during a patient’s treatment. Much work has been done to quantify the dosimetric effects of lung movement during radiation treatment. There is a particular need for a good description and prediction of organ motion. To describe lung motion more precisely, we have examined the possibility of using a computer technique: a morphing algorithm. Morphing is an iterative method which consists of blending one image into another image. To evaluate the use of morphing, Four-Dimensional Computed Tomography (4DCT) acquisition of a patient was performed. The lungs were automatically segmented for different phases, and morphing was performed using the end-inspiration and the end-expiration phase scans only. Intermediate morphing files were compared with 4DCT intermediate images. The results showed good agreement between morphing images and 4DCT images: fewer than 2% of the 512 by 256 voxels were wrongly classified as belonging/not belonging to a lung section. This paper presents preliminary results, and our morphing algorithm needs improvement. We can infer that morphing offers considerable advantages in terms of radiation protection of the patient during the diagnosis phase, handling of artifacts, definition of organ contours and description of organ motion.

  8. Arthroscopic repair of combined Bankart and SLAP lesions: operative techniques and clinical results.

    Science.gov (United States)

    Cho, Hyung Lae; Lee, Choon Key; Hwang, Tae Hyok; Suh, Kuen Tak; Park, Jong Won

    2010-03-01

    To evaluate the clinical results and operative technique of arthroscopic repair of combined Bankart and superior labrum anterior to posterior (SLAP) lesions, all of which had an anterior-inferior Bankart lesion that continued superiorly to include separation of the biceps anchor, in patients presenting with recurrent shoulder dislocations. From May 2003 to January 2006, we reviewed 15 cases with combined Bankart and SLAP lesions among 62 patients with recurrent shoulder dislocations who underwent arthroscopic repair. The average age at surgery was 24.2 years (range, 16 to 38 years), with an average follow-up period of 15 months (range, 13 to 28 months). During the operation, we repaired the unstable SLAP lesion first with absorbable suture anchors and then repaired the Bankart lesion in an inferior-to-superior fashion. We analyzed the preoperative and postoperative results by visual analogue scale (VAS) for pain, the range of motion, and the American Shoulder and Elbow Surgeon (ASES) and Rowe shoulder scoring systems. We compared the results with those of isolated Bankart lesions. VAS for pain decreased from 4.9 preoperatively to 1.9 postoperatively. Mean ASES and Rowe shoulder scores improved from 56.4 and 33.7 preoperatively to 91.8 and 94.1 postoperatively, respectively. There were no specific complications and no significant limitation of motion of more than 10 degrees at final follow-up. We found that the range of motion after arthroscopic repair in combined lesions was regained more slowly than in patients with isolated Bankart lesions. In recurrent dislocation of the shoulder with a combined Bankart and SLAP lesion, arthroscopic repair using absorbable suture anchors produced favorable clinical results. Although it is technically demanding, the concomitant unstable SLAP lesion should be repaired in a manner that stabilizes the glenohumeral joint, as the Bankart lesion can be repaired if the unstable SLAP lesion is repaired first.

  9. Técnica de suturas ajustables: Resultados Technique of adjustable sutures: Results

    Directory of Open Access Journals (Sweden)

    Lourdes R. Hernández Santos

    2001-06-01

    Full Text Available Se realizó un estudio sensorial y motor preoperatorio y posoperatorio a 84 pacientes que acudieron a la consulta de Visión Binocular con el diagnóstico de estrabismo horizontal a partir de los 13 años de edad. El método estadístico utilizado fue "t" o Chi cuadrado. Nos trazamos como objetivo determinar los resultados posoperatorios de la cirugía de estrabismo realizada con la técnica de suturas ajustables, que fueron los siguientes: el 61 % de los pacientes con exotropía y el 71,4 % con el diagnóstico de exotropía se encontraban en ortotropía a los 6 meses de la intervención. El 71,4 % de los pacientes con esotropía y el 83,3 % con el diagnóstico de esotropía se encontraban en ortotropía al año de la intervención. Esta técnica quirúrgica permite la modificación de la desviación en el posoperatorio inmediato.A preoperative and postoperative sensorial and motor study was conducted among 84 patients who received attention at the consultation room of Binocular Vision with the diagnosis of horizontal strabismus from the age of 13 years old on. The statistical method used was "t" or chi square test. Our objective was to determine the postoperative results of the strabismus surgery performed by the technique of adjustable sutures. The results were as follows: 61 % of the patients with exotropia and 71.4 % with the diagnosis of exotropia were in orthotropia 6 months after the operation. 71.4 % of the patients with exotropia and 83.3 % with the diagnosis of exotropia were in orthotropia a year after the operation. This surgical technique allows the modification of the deviation in the immediate postoperative.

  10. Ambiguities in results obtained with 2D gel replicon mapping techniques

    NARCIS (Netherlands)

    Linskens, Maarten H.K.; Huberman, Joel A.

    1990-01-01

    Recently, two 2-dimensional (2D) gel techniques, termed neutral/neutral and neutral/alkaline, have been developed and employed to map replication origins in eukaryotic plasmids and chromosomal DNA. The neutral/neutral technique, which requires less DNA for analysis, has been preferentially used in r

  11. Total cysto-prostatectomy: Technique description and results in 2 dogs.

    Science.gov (United States)

    Bacon, Nicholas; Souza, Carlos H de M; Franz, Sarah

    2016-02-01

    We describe a novel technique for total cysto-prostatectomy, followed by uretero-urethral anastomosis in 2 dogs. The technique was successful and was performed without pubic osteotomy. Post-operative urinary tract infections may be a potentially serious event.

  12. Proximal gastric vagotomy: effects of two operative techniques on clinical and gastric secretory results.

    Science.gov (United States)

    Hallenbeck, G A; Gleysteen, J J; Aldrete, J S; Slaughter, R L

    1976-01-01

    PGV performed in 39 patients by separating the lesser omentum from the stomach beginning 6 or 7 cm proximal to the pylorus and skeletonizing the distal 1 to 2 cm of esophagus was followed by 15.4% of proven and 10.2% of suspected recurrent ulcers. Insulin tests were done during the first 3 months postoperatively on 31 of the patients, including the 6 with proven and the 4 with suspected recurrent ulcers. The peak acid output to insulin minus the basal acid output (PAOI-BAO) was less than 5 mEq/hr in 16 cases (52%) and from 5 to 25 mEq/hr in the remaining 15 cases. In 6 patients with proven recurrent ulcer, PAOI-BAO averaged 21.9 mEq/hr (range, 11.3 to 41.8); in the 4 patients with suspected recurrence, 9.5 (range, 4.4 to 11.8). The operative technique was changed in one respect; the distal 5 to 7.5 cm of the esophagus was skeletonized. In 14 patients, the mean PAOI-BAO ± S.E. within 3 months of PGV was 1985 ± 0.7 mEq/hr, and 13 of 14 values were less than 5 mEq/hr. One patient developed recurrent ulcer and required re-operation; this patient's value for PAOI-BAO was 1.8 mEq/hr. The results show quantitatively that great differences in the completeness of PGV result from differences in the periesophageal dissection and emphasize its importance if optimal results are to be obtained and, especially, if the efficacy of the operation is to be judged. PMID:1015889

  13. First results on a process-oriented rain area classification technique using Meteosat Second Generation SEVIRI nighttime data

    Directory of Open Access Journals (Sweden)

    B. Thies

    2008-04-01

    Full Text Available A new technique for process-oriented rain area classification using Meteosat Second Generation SEVIRI nighttime data is introduced. It is based on a combination of the Advective Convective Technique (ACT), which focuses on precipitation areas connected to convective processes, and the Rain Area Delineation Scheme during Nighttime (RADS-N), a new technique for the improved detection of stratiform precipitation areas (e.g. in connection with mid-latitude frontal systems). The ACT, which uses positive brightness temperature differences between the water vapour (WV) and the infrared (IR) channels (ΔT(WV-IR)) for the detection of convective clouds and connected precipitating clouds, has been transferred from the Meteosat First Generation (MFG) Meteosat Visible and Infra-Red Imager (MVIRI) radiometer to the Meteosat Second Generation (MSG) Spinning Enhanced Visible and InfraRed Imager (SEVIRI). RADS-N is based on the new conceptual model that precipitating cloud areas are characterised by a large cloud water path (cwp) and the presence of ice particles in the upper part of the cloud. The technique considers information about both parameters inherent in the channel differences ΔT(3.9-10.8), ΔT(3.9-7.3), ΔT(8.7-10.8), and ΔT(10.8-12.1) to detect potentially precipitating cloud areas. All four channel differences are used to gain implicit knowledge about the cwp. ΔT(8.7-10.8) and ΔT(10.8-12.1) are additionally considered to gain information about the cloud phase. First results of a comparison study between the classified rain areas and corresponding ground-based radar data for precipitation events in connection with a cold front occlusion show encouraging performance of the newly proposed process-oriented rain area classification scheme.
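
    A minimal sketch of the channel-difference logic described above is given below. The threshold values, array names and toy brightness temperatures are purely hypothetical placeholders, since the record does not state the actual thresholds or decision rules used in RADS-N.

        import numpy as np

        def candidate_rain_mask(t039, t073, t087, t108, t121, thresholds):
            """Flag pixels whose brightness-temperature differences (in K) suggest a
            large cloud water path and ice at cloud top; thresholds are hypothetical."""
            d1 = t039 - t108   # channel differences carrying implicit cwp information (see text)
            d2 = t039 - t073
            d3 = t087 - t108   # differences additionally used for cloud-phase information
            d4 = t108 - t121
            return ((d1 > thresholds["d1"]) & (d2 > thresholds["d2"]) &
                    (d3 > thresholds["d3"]) & (d4 > thresholds["d4"]))

        # toy 2x2 scene in Kelvin, values invented for illustration
        t039 = np.array([[225.0, 250.0], [230.0, 260.0]])
        t073 = np.array([[220.0, 240.0], [225.0, 245.0]])
        t087 = np.array([[224.0, 248.0], [229.0, 255.0]])
        t108 = np.array([[223.0, 252.0], [228.0, 258.0]])
        t121 = np.array([[222.0, 251.0], [227.0, 257.0]])
        mask = candidate_rain_mask(t039, t073, t087, t108, t121,
                                   thresholds={"d1": 0.0, "d2": 2.0, "d3": 0.0, "d4": 0.5})
        print(mask)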

  14. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    Full Text Available The requirements engineering process starts from the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and has a very high impact on the subsequent design and build phases as well. Failure to accurately capture system requirements is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform requirements elicitation in no less than a perfect manner. One of the most difficult jobs for the elicitor is to select an appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during the elicitation process is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing non-verbal communication, and this classification is used as a basis for elicitation technique selection. We also propose an efficient plan for requirements elicitation which intends to overcome the constraints faced by the elicitor.

  15. Conjunction Assessment Techniques and Operational Results from the Magnetospheric Multiscale Mission

    Science.gov (United States)

    Williams, Trevor; Carpenter, Russell; Farahmand, Mitra; Ottenstein, Neil; Demoret, Michael; Godine, Dominic

    2017-01-01

    This paper will describe the results that have been obtained to date during the MMS mission concerning conjunction assessment. MMS navigation makes use of a weak-signal GPS-based system: this allows signals to be received even when MMS is flying above the GPS orbits, producing a highly accurate determination of the four MMS orbits. This data is downlinked to the MMS Mission Operations Center (MOC) and used by the Flight Dynamics Operations Area (FDOA) for both maneuver design and conjunction assessment. The MMS fly in tetrahedron formations around apogee, in order to collect simultaneous particles and fields science data. The original plan was to fly tetrahedra between 10 and 160 km in size; however, after Phase 1a of the mission, the science team requested that smaller sizes be flown if feasible. After analysis (to be detailed in a companion paper), a new minimum size of 7 km was decided upon. Flying at this reduced scale size makes conjunction assessment between the MMS spacecraft even more important: the methods that are used by the MMS FDOA to address this problem will be described in the paper, and a summary given of the previous analyses that went into the development of these techniques. Details will also be given of operational experiences to date. Finally, two CA mitigation maneuver types that have been designed (but never yet required to actually be performed) will also be outlined.

  16. Laparoscopic parastomal hernia repair: a description of the technique and initial results.

    Science.gov (United States)

    Zacharakis, Emmanouil; Hettige, Roland; Purkayastha, Sanjay; Aggarwal, Rajesh; Athanasiou, Thanos; Darzi, Ara; Ziprin, Paul

    2008-06-01

    In this study, the authors review their initial results with the laparoscopic approach for parastomal hernia repair. Between 2006 and 2007, 4 patients were treated laparoscopically at our institution. The hernia sac was not excised. A piece of Gore-Tex DualMesh with a central keyhole and a radial incision was cut so that it could provide at least 3 to 5 cm of overlap of the fascial defect. The mesh was secured to the margins of the hernia with circumferential metal tacking and trans-fascial sutures. No complications occurred in the postoperative period. After a median follow-up of 9 months, recurrence occurred in 1 patient. This was our first patient in whom mesh fixation was performed only with circumferential metal tacking. The laparoscopic repair of parastomal hernias seems to be a safe, feasible and promising technique offering the advantages of minimally-invasive surgery. The success of this approach depends on longer follow-up reports and standardization of the technical elements.

  17. INVESTIGATION OF DAMPENED SYSTEM AND DRY OFFSET PRINTING TECHNIQUES AND COMPARISON OF REPRODUCTION RESULTS

    Directory of Open Access Journals (Sweden)

    Sinan ULU

    2006-03-01

    Full Text Available Printing is the process of rapidly reproducing and transferring images, inscriptions, figures and graphics onto a surface in their original format. Dampened-system offset printing is a smooth printing system. Smooth printing is based on the principle that water is repelled by the oil in the ink and does not mix with it. The printed and unprinted areas of the plate differ in their chemical characteristics: the unprinted areas capture water and repel ink, while the printed areas capture ink and repel water. The dry offset system is a combination of the flexographic and dampened-system offset methods. In this method the relief part of the plate transfers the image to a smooth-surfaced rubber printing cylinder, and the cylinder transfers the image to the material. Water is not needed for dampening, so the plate is not wetted before printing. Photographs or positive transparencies cannot be printed by the dry offset method, but hand drawings, very detailed designs and quite small texts can be printed in up to six colors. In this study, a comparative evaluation of the process stages and reproduction results of the two techniques is conducted.

  18. Development of a New Technique to Assess Susceptibility to Predation Resulting from Sublethal Stresses (Indirect Mortality)

    Energy Technology Data Exchange (ETDEWEB)

    Cada, G.F.

    2003-08-25

    Fish that pass through a hydroelectric turbine may not be killed directly, but may nonetheless experience sublethal stresses that will increase their susceptibility to predators (indirect mortality). There is a need to develop reliable tests for indirect mortality so that the full consequences of passage through turbines (and other routes around a hydroelectric dam) can be assessed. We evaluated a new technique for assessing indirect mortality, based on a behavioral response to a startling stimulus (akin to perceiving an approaching predator). We compare this technique to the standard predator preference test. The behavioral response is a rapid movement commonly referred to as a startle response, escape response, or C-shape, based on the characteristic body position assumed by the fish. When viewed from above, a startled fish bends into a C-shape, then springs back and swims away in a direction different from its original orientation. This predator avoidance (escape) behavior can be compromised by sublethal stresses that temporarily stun or disorient the fish. We subjected striped shiners and fathead minnows to varying intensities of either turbulence (10-, 20- or 30-min) or 2-min exposures to a fish anesthetic (100 or 200 mg/L of tricaine methanesulfonate), and evaluated their subsequent behavior. Individual fish were given a startle stimulus and filmed with a high-speed video camera. Each fish was startled and filmed twice before being stressed, and then at 1-, 5-, 15-, and 30-min post-exposure. The resulting image files were analyzed for a variety of behavioral measures including: presence of a response, time to first reaction, duration of reaction, time to formation of maximum C-shape, time to completion of C-shape, and completeness of C-shape. The most immediate measure of potential changes in fish behavior was whether stressed fish exhibited a startle response. For striped shiners, the number of fish not responding to the stimulus was significantly different

  19. Evaluation of Midterm Clinical Results of All inside Suture Technique in Meniscus Repair

    Directory of Open Access Journals (Sweden)

    Murat Gül

    2015-03-01

    Full Text Available Aim: The aim of this study was to evaluate the functional outcomes of arthroscopic all-inside meniscal repair at an average 5-year follow-up. Methods: Thirty-two patients (29 males, 3 females; 19 right knees, 13 left knees) who underwent arthroscopic all-inside meniscal repair were included in the study. Clinical examination and magnetic resonance imaging were the main diagnostic tools. The mean age of the patients was 28 years (range, 23-41 years). ACL reconstruction was performed in the same session in 12 patients with meniscal injury associated with ACL tear. Preoperative and postoperative functional knee scores of the patients were assessed by the modified Marshall functional knee score at their last follow-up. Results: The mean follow-up period was 58 months (range, 49-81). Marshall knee scores at the last follow-up were found to be excellent in 23 patients, good in 8 patients, and moderate in 1 patient. A statistically significant functional improvement was detected in patients with meniscal repair after 5 years. Conclusion: This study showed that the all-inside meniscal repair technique is an easy and reliable method for the treatment of meniscus tears. (The Medical Bulletin of Haseki 2015; 53:47-51)

  20. The 21-SPONGE HI Absorption Survey I: Techniques and Initial Results

    CERN Document Server

    Murray, Claire E; Goss, W M; Dickey, John M; Heiles, Carl; Lindner, Robert R; Babler, Brian; Pingel, Nickolas M; Lawrence, Allen; Jencson, Jacob; Hennebelle, Patrick

    2015-01-01

    We present methods and results from "21-cm Spectral Line Observations of Neutral Gas with the EVLA" (21-SPONGE), a large survey for Galactic neutral hydrogen (HI) absorption with the Karl G. Jansky Very Large Array (VLA). With the upgraded capabilities of the VLA, we reach median root-mean-square (RMS) noise in optical depth of $\\sigma_{\\tau}=9\\times 10^{-4}$ per $0.42\\rm\\,km\\,s^{-1}$ channel for the 31 sources presented here. Upon completion, 21-SPONGE will be the largest HI absorption survey with this high sensitivity. We discuss the observations and data reduction strategies, as well as line fitting techniques. We prove that the VLA bandpass is stable enough to detect broad, shallow lines associated with warm HI, and show that bandpass observations can be combined in time to reduce spectral noise. In combination with matching HI emission profiles from the Arecibo Observatory ($\\sim3.5'$ angular resolution), we estimate excitation (or spin) temperatures ($\\rm T_s$) and column densities for Gaussian componen...
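
    Given a matched absorption (optical depth) spectrum and emission (brightness temperature) spectrum, the spin temperature and column density estimates mentioned above can be sketched as follows. This is a generic, textbook-style isothermal calculation with invented arrays, not the 21-SPONGE pipeline itself.

        import numpy as np

        def spin_temperature(t_b, tau):
            """Isothermal estimate per velocity channel: T_s = T_B / (1 - exp(-tau))."""
            return t_b / (1.0 - np.exp(-tau))

        def hi_column_density(t_s, tau, dv_kms):
            """N(HI) = 1.823e18 * sum(T_s * tau * dv) in cm^-2, the standard
            optical-depth-corrected relation with T_s in K and dv in km/s."""
            return 1.823e18 * np.sum(t_s * tau) * dv_kms

        # synthetic Gaussian absorption/emission pair, 0.42 km/s channel width
        v = np.arange(-20.0, 20.0, 0.42)
        tau = 0.05 * np.exp(-0.5 * (v / 3.0) ** 2)
        t_b = 60.0 * (1.0 - np.exp(-tau))          # emission from gas assumed at T_s = 60 K
        t_s = spin_temperature(t_b, tau)
        print(np.nanmax(t_s), hi_column_density(t_s, tau, 0.42))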

  1. Fluid-Structure Interaction in Abdominal Aortic Aneurysm: Effect of Modeling Techniques

    Directory of Open Access Journals (Sweden)

    Shengmao Lin

    2017-01-01

    Full Text Available In this work, the impact of modeling techniques on predicting the mechanical behaviors of abdominal aortic aneurysm (AAA) is systematically investigated. The fluid-structure interaction (FSI) model for simultaneously capturing the transient interaction between blood flow dynamics and wall mechanics was compared with its simplified techniques, that is, computational fluid dynamics (CFD) or computational solid stress (CSS) model. Results demonstrated that CFD exhibited relatively smaller vortexes and tends to overestimate the fluid wall shear stress, compared to FSI. On the contrary, the minimal differences in wall stresses and deformation were observed between FSI and CSS models. Furthermore, it was found that the accuracy of CSS prediction depends on the applied pressure profile for the aneurysm sac. A large pressure drop across AAA usually led to the underestimation of wall stresses and thus the AAA rupture. Moreover, the assumed isotropic AAA wall properties, compared to the anisotropic one, will aggravate the difference between the simplified models with the FSI approach. The present work demonstrated the importance of modeling techniques on predicting the blood flow dynamics and wall mechanics of the AAA, which could guide the selection of appropriate modeling technique for significant clinical implications.

  2. Fluid-Structure Interaction in Abdominal Aortic Aneurysm: Effect of Modeling Techniques.

    Science.gov (United States)

    Lin, Shengmao; Han, Xinwei; Bi, Yonghua; Ju, Siyeong; Gu, Linxia

    2017-01-01

    In this work, the impact of modeling techniques on predicting the mechanical behaviors of abdominal aortic aneurysm (AAA) is systematically investigated. The fluid-structure interaction (FSI) model for simultaneously capturing the transient interaction between blood flow dynamics and wall mechanics was compared with its simplified techniques, that is, computational fluid dynamics (CFD) or computational solid stress (CSS) model. Results demonstrated that CFD exhibited relatively smaller vortexes and tends to overestimate the fluid wall shear stress, compared to FSI. On the contrary, the minimal differences in wall stresses and deformation were observed between FSI and CSS models. Furthermore, it was found that the accuracy of CSS prediction depends on the applied pressure profile for the aneurysm sac. A large pressure drop across AAA usually led to the underestimation of wall stresses and thus the AAA rupture. Moreover, the assumed isotropic AAA wall properties, compared to the anisotropic one, will aggravate the difference between the simplified models with the FSI approach. The present work demonstrated the importance of modeling techniques on predicting the blood flow dynamics and wall mechanics of the AAA, which could guide the selection of appropriate modeling technique for significant clinical implications.

  3. Fluid-Structure Interaction in Abdominal Aortic Aneurysm: Effect of Modeling Techniques

    Science.gov (United States)

    Lin, Shengmao; Han, Xinwei; Bi, Yonghua; Ju, Siyeong

    2017-01-01

    In this work, the impact of modeling techniques on predicting the mechanical behaviors of abdominal aortic aneurysm (AAA) is systematically investigated. The fluid-structure interaction (FSI) model for simultaneously capturing the transient interaction between blood flow dynamics and wall mechanics was compared with its simplified techniques, that is, computational fluid dynamics (CFD) or computational solid stress (CSS) model. Results demonstrated that CFD exhibited relatively smaller vortexes and tends to overestimate the fluid wall shear stress, compared to FSI. On the contrary, the minimal differences in wall stresses and deformation were observed between FSI and CSS models. Furthermore, it was found that the accuracy of CSS prediction depends on the applied pressure profile for the aneurysm sac. A large pressure drop across AAA usually led to the underestimation of wall stresses and thus the AAA rupture. Moreover, the assumed isotropic AAA wall properties, compared to the anisotropic one, will aggravate the difference between the simplified models with the FSI approach. The present work demonstrated the importance of modeling techniques on predicting the blood flow dynamics and wall mechanics of the AAA, which could guide the selection of appropriate modeling technique for significant clinical implications. PMID:28321413

  4. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies to sell the right product to the right customer, at the right time, and for the right price. Therefore the challenge for any company is to determine how much to sell, at what price, and to which market segment while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a suitable dynamical system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.
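
    As a loose illustration of receding-horizon pricing, the sketch below searches a small price grid over a short horizon for a linear price-demand model with finite inventory and applies only the first decision at each step. The demand model, capacity and parameter values are invented for illustration and are not taken from the paper, which formulates the controller quite differently.

        import itertools
        import numpy as np

        # hypothetical linear price-demand model and finite inventory (all values invented)
        a, b = 120.0, 8.0                      # demand per period = a - b * price
        prices = np.linspace(5.0, 14.0, 10)    # admissible price grid
        horizon = 3                            # prediction horizon (periods)

        def plan_revenue(price_seq, stock):
            """Revenue of a candidate price sequence given the remaining stock."""
            total = 0.0
            for p in price_seq:
                sold = min(max(a - b * p, 0.0), stock)
                total += p * sold
                stock -= sold
            return total

        stock, applied = 250.0, []
        for step in range(6):                  # receding-horizon (MPC-style) loop
            best = max(itertools.product(prices, repeat=horizon),
                       key=lambda seq: plan_revenue(seq, stock))
            p0 = best[0]                       # apply only the first decision, then re-plan
            sold = min(max(a - b * p0, 0.0), stock)
            stock -= sold
            applied.append(round(p0, 2))
        print(applied, round(stock, 1))

    As the remaining stock shrinks, the re-planning step naturally pushes the applied price upward, which is the behaviour a revenue-oriented MPC scheme is meant to capture.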

  5. Early Results of Chimney Technique for Type B Aortic Dissections Extending to the Aortic Arch

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Chen [Affiliated Hospital of Nantong University, Department of General Surgery (China); Tang, Hanfei; Qiao, Tong; Liu, Changjian; Zhou, Min, E-mail: 813477618@qq.com [The Affiliated Hospital of Nanjing University Medical School, Department of Vascular Surgery, Nanjing Drum Tower Hospital (China)

    2016-01-15

    Objective: To summarize our early experience gained from the chimney technique for type B aortic dissection (TBAD) extending to the aortic arch and to evaluate the aortic remodeling in the follow-up period. Methods: From September 2011 to July 2014, 27 consecutive TBAD patients without adequate proximal landing zones were retrospectively analyzed. Chimney stent-grafts were deployed parallel to the main endografts to preserve flow to branch vessels while extending the landing zones. In the follow-up period, aortic remodeling was observed with computed tomography angiography. Results: The technical success rate was 100 %, and endografts were deployed in zone 0 (n = 3, 11.1 %), zone 1 (n = 18, 66.7 %), and zone 2 (n = 6, 22.2 %). Immediately, proximal endoleaks were detected in 5 patients (18.5 %). During a mean follow-up period of 17.6 months, computed tomography angiography showed all the aortic stent-grafts and chimney grafts to be patent. Favorable remodeling was observed at the level of maximum descending aorta and left subclavian artery with expansion of true lumen (from 18.4 ± 4.8 to 25 ± 0.86 mm, p < 0.001 and 27.1 ± 0.62 to 28.5 ± 0.37 mm, p < 0.001) and depressurization of false lumen (from 23.7 ± 2.7 to 8.7 ± 3.8 mm, p < 0.001, from 5.3 ± 1.2 to 2.1 ± 2.1 mm, p < 0.001). While at the level of maximum abdominal aorta, suboptimal remodeling of the total aorta (from 24.1 ± 0.4 to 23.6 ± 1.5 mm, p = 0.06) and true lumen (from 13.8 ± 0.6 to 14.5 ± 0.4 mm, p = 0.08) was observed. Conclusion: Based on our limited experience, the chimney technique with thoracic endovascular repair is demonstrated to be promising for TBAD extending to the arch with favorable aortic remodeling.

  6. Double-wire sternal closure technique in bovine animal models for total artificial heart implant.

    Science.gov (United States)

    Karimov, Jamshid H; Sunagawa, Gengo; Golding, Leonard A R; Moazami, Nader; Fukamachi, Kiyotaka

    2015-08-01

    In vivo preclinical testing of mechanical circulatory devices requires large animal models that provide reliable physiological and hemodynamic conditions by which to test the device and investigate design and development strategies. Large bovine species are commonly used for mechanical circulatory support device research. The animals used for chronic in vivo support require high-quality care and excellent surgical techniques as well as advanced methods of postoperative care. These techniques are constantly being updated and new methods are emerging. We report results of our double steel-wire closure technique in large bovine models used for Cleveland Clinic's continuous-flow total artificial heart development program. This is the first report of double-wire sternal fixation used in large bovine models.

  7. Study on modeling of vehicle dynamic stability and control technique

    Institute of Scientific and Technical Information of China (English)

    GAO Yun-ting; LI Pan-feng

    2012-01-01

    In order to address the problem of enhancing vehicle driving stability and safety, which has been a central research question for scientists and engineers in the vehicle industry, a new control method was investigated. After analysis of tire motion characteristics and of the forces acting on the vehicle, a tire model based on an extended Pacejka magic formula combining longitudinal and lateral motion was developed, and a nonlinear vehicle dynamic stability model with seven degrees of freedom was built. A new model reference adaptive control scheme was designed that uses the body slip angle and yaw rate as the output and feedback variables and adjusts the body torque to control vehicle stability. A simulation model was also built in Matlab/Simulink to evaluate this control scheme. It was made up of several mathematical subsystem models, mainly including the tire model module, the yaw moment calculation module, the center-of-mass parameter calculation module, the tire parameter calculation module, and so forth. The results of a severe lane-change simulation show that this vehicle model and the model reference adaptive control method perform excellently.
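
    For readers unfamiliar with the magic-formula tyre description mentioned above, a minimal lateral-only version is sketched below. The coefficient values are generic placeholders, and the combined longitudinal/lateral extension used in the paper is not reproduced here.

        import numpy as np

        def pacejka_lateral_force(slip_angle_rad, Fz, B=10.0, C=1.3, D_mu=1.0, E=0.97):
            """Simplified Pacejka 'magic formula' for lateral tyre force:
            Fy = D * sin(C * arctan(B*alpha - E*(B*alpha - arctan(B*alpha)))),
            with the peak factor D taken proportional to the vertical load Fz."""
            D = D_mu * Fz
            Ba = B * slip_angle_rad
            return D * np.sin(C * np.arctan(Ba - E * (Ba - np.arctan(Ba))))

        alpha = np.deg2rad(np.linspace(-12, 12, 7))     # slip angles in radians
        print(pacejka_lateral_force(alpha, Fz=4000.0))  # lateral force in newtons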

  8. Percutaneous elastic intramedullary nailing of metacarpal fractures: Surgical technique and clinical results study

    Directory of Open Access Journals (Sweden)

    Farook Mohamed Z

    2011-07-01

    Full Text Available Abstract Background We reviewed our results and complications of using a pre-bent 1.6 mm Kirschner wire (K-wire for extra-articular metacarpal fractures. The surgical procedure was indicated for angulation at the fracture site in a true lateral radiograph of at least 30 degrees and/or in the presence of a rotatory deformity. Methods A single K-wire is pre-bent in a lazy-S fashion with a sharp bend at approximately 5 millimeters and a longer smooth curve bent in the opposite direction. An initial entry point is made at the base of the metacarpal using a 2.5 mm drill by hand. The K-wire is inserted blunt end first in an antegrade manner and the fracture reduced as the wire is passed across the fracture site. With the wire acting as three-point fixation, early mobilisation is commenced at the metacarpo-phalangeal joint in a Futuro hand splint. The wire is usually removed with pliers post-operatively at four weeks in the fracture clinic. Results We studied internal fixation of 18 little finger and 2 ring finger metacarpal fractures from November 2007 to August 2009. The average age of the cohort was 25 years with 3 women and 17 men. The predominant mechanism was a punch injury with 5 diaphyseal and 15 metacarpal neck fractures. The time to surgical intervention was a mean 13 days (range 4 to 28 days. All fractures proceeded to bony union. The wire was extracted at an average of 4.4 weeks (range three to six weeks. At an average follow up of 8 weeks, one fracture had to be revised for failed fixation and three superficial wound infections needed antibiotic treatment. Conclusions With this simple and minimally invasive technique performed as day-case surgery, all patients were able to start mobilisation immediately. The general outcome was good hand function with few complications.

  9. Modeling and teaching techniques for conceptual and logical relational database design.

    Science.gov (United States)

    Thompson, Cheryl Bagley; Sward, Katherine

    2005-10-01

    This paper proposes a series of techniques to be used in teaching database design. Common ERD notations are discussed. The authors developed an ERD notation, adapted from the Unified Modeling Language, which facilitates student learning of the database design process. The paper presents a specific step by step process for representing the ERD components as tables and for normalizing the resulting set of tables.

  10. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    2003-01-01

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find th

  11. Combined Rock-physical Modelling and Seismic Inversion Techniques for Characterisation of the Posidonia Shale Formation

    NARCIS (Netherlands)

    Justiniano, A.; Jaya, M.; Diephuis, G.

    2015-01-01

    The objective of this study is to characterise the Jurassic Posidonia Shale Formation at Block Q16 located in the West Netherlands Basin. The characterisation was carried out through combining rock-physics modelling and seismic inversion techniques. The results show that the Posidonia Shale Formatio

  12. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find

  13. Surgical technique: establishing a pre-clinical large animal model to test aortic valve leaflet substitute

    Science.gov (United States)

    Knirsch, Walter; Cesarovic, Niko; Krüger, Bernard; Schmiady, Martin; Frauenfelder, Thomas; Frese, Laura; Dave, Hitendu; Hoerstrup, Simon Philipp; Hübler, Michael

    2016-01-01

    To overcome current limitations of valve substitutes and tissue substitutes, the technology of tissue engineering (TE) continues to offer new perspectives in congenital cardiac surgery. We report our experience and results of implanting a decellularized TE patch in nine sheep in the orthotopic position as an aortic valve leaflet substitute. Establishment of the animal model, feasibility, cardiopulmonary bypass issues and operative technique are highlighted. PMID:28149571

  14. Vaccination strategies for SEIR models using feedback linearization. Preliminary results

    CERN Document Server

    De la Sen, M; Alonso-Quesada, S

    2011-01-01

    A linearization-based feedback-control strategy for a SEIR epidemic model is discussed. The vaccination objective is for the removed-by-immunity population to asymptotically track the total population while simultaneously driving the remaining population (i.e. susceptible plus infected plus infectious) asymptotically to zero. The disease control policy is designed based on a feedback linearization technique, which provides a general method to generate families of vaccination policies with a sound technical background.
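
    The record gives no equations, so the sketch below only sets out a standard SEIR model with a simple constant-rate vaccination of susceptibles to show the populations being discussed. The rate constants and the vaccination law are illustrative assumptions, not the feedback-linearizing controller of the paper.

        def simulate_seir(days=300, dt=0.1, beta=0.4, sigma=0.2, gamma=0.1, v_rate=0.02):
            """Forward-Euler SEIR with constant per-capita vaccination of susceptibles.
            S: susceptible, E: exposed, I: infectious, R: removed/immune (normalized)."""
            S, E, I, R = 0.99, 0.0, 0.01, 0.0
            for _ in range(int(days / dt)):
                new_inf = beta * S * I
                dS = -new_inf - v_rate * S
                dE = new_inf - sigma * E
                dI = sigma * E - gamma * I
                dR = gamma * I + v_rate * S
                S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR
            return S, E, I, R

        # S + E + I should shrink toward zero while R approaches the total population
        print(simulate_seir())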

  15. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle since it controls the intensity of heat and water exchange between the soil-vegetation cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, the problem of snow cover depth modeling is addressed using both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the investigation results and further development of this theme by the scientific community. In this research we used the daily observational data on the snow cover and surface meteorological parameters obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankyla (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, present-day machine learning models: decision trees, adaptive boosting, and gradient boosting. It is demonstrated that the combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and its melting. The purposeful character of the learning process for models of the gradient boosting type, their ensemble character, and the use of combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
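
    A minimal example of the kind of gradient-boosting regression described above is given below using scikit-learn. The synthetic predictors, target and split are placeholders standing in for the station records, so only the modelling pattern, not the reported accuracy, is reproduced.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        # synthetic daily meteorological predictors standing in for station data
        rng = np.random.default_rng(42)
        n = 1500
        X = np.column_stack([
            rng.normal(-2, 8, n),      # air temperature, degC
            rng.gamma(2.0, 2.0, n),    # precipitation, mm/day
            rng.uniform(0, 15, n),     # wind speed, m/s
        ])
        # invented target: snow depth grows with precipitation at sub-zero temperatures
        snow_depth = np.clip(5 * X[:, 1] * (X[:, 0] < 0) - 0.5 * X[:, 0], 0, None)
        snow_depth += rng.normal(0, 2, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, snow_depth, random_state=0)
        model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                          max_depth=3, random_state=0)
        model.fit(X_tr, y_tr)
        print(mean_absolute_error(y_te, model.predict(X_te)))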

  16. A comparative assessment of efficient uncertainty analysis techniques for environmental fate and transport models: application to the FACT model

    Science.gov (United States)

    Balakrishnan, Suhrid; Roy, Amit; Ierapetritou, Marianthi G.; Flach, Gregory P.; Georgopoulos, Panos G.

    2005-06-01

    This work presents a comparative assessment of efficient uncertainty modeling techniques, including Stochastic Response Surface Method (SRSM) and High Dimensional Model Representation (HDMR). This assessment considers improvement achieved with respect to conventional techniques of modeling uncertainty (Monte Carlo). Given that traditional methods for characterizing uncertainty are very computationally demanding, when they are applied in conjunction with complex environmental fate and transport models, this study aims to assess how accurately these efficient (and hence viable) techniques for uncertainty propagation can capture complex model output uncertainty. As a part of this effort, the efficacy of HDMR, which has primarily been used in the past as a model reduction tool, is also demonstrated for uncertainty analysis. The application chosen to highlight the accuracy of these new techniques is the steady state analysis of the groundwater flow in the Savannah River Site General Separations Area (GSA) using the subsurface Flow And Contaminant Transport (FACT) code. Uncertain inputs included three-dimensional hydraulic conductivity fields, and a two-dimensional recharge rate field. The output variables under consideration were the simulated stream baseflows and hydraulic head values. Results show that the uncertainty analysis outcomes obtained using SRSM and HDMR are practically indistinguishable from those obtained using the conventional Monte Carlo method, while requiring orders of magnitude fewer model simulations.

  17. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters along with the stochastic nature of this technology leads to a complex process control, which requires a work focused in process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling in Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed characterizing the profile through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as a function of process parameters. Combination of both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.
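
    To make the two profile descriptors concrete, the sketch below reconstructs a kerf cross-section from a maximum cutting depth and a full width at half maximum, assuming a Gaussian-like shape. The Gaussian form and the numbers are illustrative assumptions, since the record does not give the profile equation actually used.

        import numpy as np

        def kerf_profile(x_mm, max_depth_mm, fwhm_mm):
            """Gaussian-shaped kerf cross-section parameterized by its maximum
            cutting depth and its full width at half maximum (both in mm)."""
            sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
            return max_depth_mm * np.exp(-0.5 * (x_mm / sigma) ** 2)

        x = np.linspace(-2.0, 2.0, 9)                 # lateral position across the slot, mm
        depth = kerf_profile(x, max_depth_mm=0.8, fwhm_mm=1.2)
        print(np.round(depth, 3))                     # half of the maximum depth is reached near ±0.6 mm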

  18. Stapled Haemorrhoidectomy with Longo's Technique: Our Results After Three Years

    Institute of Scientific and Technical Information of China (English)

    David BERNASCONI; Arianna C. SANZO; Giacomo URSO; Giovanni PIAZZA; Nello GRASSI; Alfonso M. MAIORANA

    2001-01-01

    The authors evaluate a new surgical procedure for the treatment of haemorrhoids. The technique was introduced by Longo in 1994 and consists of what is defined as "anal lifting", performed with the help of a special kit equipped with a stapler. The surgical procedure offers several advantages, particularly in post-operative management: no need for medications and no post-operative pain. The technique is also suitable for a day-surgery regimen. The authors stress the need for further long-term evaluation.

  19. Results of a M = 5.3 heat transfer test of the integrated vehicle using phase-change paint techniques on the 0.0175-scale model 56-OTS in the NASA/Ames Research Center 3.5-foot hypersonic wind tunnel

    Science.gov (United States)

    Marroquin, J.

    1985-01-01

    An experimental investigation was performed in the NASA/Ames Research Center 3.5-foot Hypersonic Wind Tunnel to obtain supersonic heat-distribution data in areas between the orbiter and external tank using phase-change paint techniques. The tests used Novamide SSV Model 56-OTS in the first- and second-stage ascent configurations. Data were obtained at a nominal Mach number of 5.3 and a Reynolds number per foot of 5 × 10^6 with angles of attack of 0 deg and +/- 5 deg, and sideslip angles of 0 deg and +/- 5 deg.

  20. Modeling and comparative study of various detection techniques for FMCW LIDAR using optisystem

    Science.gov (United States)

    Elghandour, Ahmed H.; Ren, Chen D.

    2013-09-01

    In this paper we investigated different detection techniques, namely direct detection, coherent heterodyne detection and coherent homodyne detection, for an FMCW LIDAR system using the Optisystem package. Models for the target, the propagation channel and the various detection techniques were developed in Optisystem, and a comparative study among the detection techniques for FMCW LIDAR systems was carried out analytically and by simulation using the developed model. The performance of direct detection, heterodyne detection and homodyne detection for the FMCW LIDAR system was calculated and simulated using the Optisystem package, and the simulated performance was checked against simulation results from a MATLAB simulator. The results show that direct detection responds to the intensity of the received electromagnetic signal and has the advantage of lower system complexity over the other detection architectures, at the expense of thermal noise being the dominant noise source and relatively poor sensitivity. In addition, much higher detection sensitivity can be achieved using coherent optical mixing, which is performed by heterodyne and homodyne detection.
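
    The record describes the comparison qualitatively; as background, the basic FMCW range relation that all three receiver architectures ultimately exploit can be written out as below. The chirp parameters are invented for illustration.

        C = 299_792_458.0          # speed of light, m/s

        def fmcw_range(beat_freq_hz, sweep_bandwidth_hz, sweep_period_s):
            """Range from the measured beat frequency of a linear FMCW chirp:
            R = c * f_beat * T / (2 * B)."""
            return C * beat_freq_hz * sweep_period_s / (2.0 * sweep_bandwidth_hz)

        # hypothetical chirp: 1 GHz frequency sweep over 10 microseconds
        print(fmcw_range(beat_freq_hz=2.0e6, sweep_bandwidth_hz=1.0e9, sweep_period_s=10e-6))
        # about 3.0 m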

  1. Techniques for the manufacturing of stiff and lightweight optical mirror panels based on slumping of glass sheets: concepts and results

    Science.gov (United States)

    Canestrari, R.; Ghigo, M.; Pareschi, G.; Basso, S.; Motta, G.; Doro, M.; Giro, E.; Lessio, L.

    2009-08-01

    In the last decade Very High Energy (VHE) gamma-ray astronomy has improved rapidly opening a new window for ground-based astronomy with surprising implications in the theoretical models. Nowadays, it is possible to make imaging, photometry and spectroscopy of sources with good sensitivity and angular resolution using new facilities as MAGIC, HESS and VERITAS. The latest results of astronomy in the TeV band obtained using such facilities demonstrate the essential role of this window for high energy astrophysics. For this reason new projects (e.g. CTA and AGIS) have been started with the aim to increase the sensitivity and expand the energy band coverage. For such telescopes arrays probably tens of thousands of optical mirror panels must be manufactured with an adequate industrial process, then tested and mounted into the telescopes. Because of the high number of mirrors it is mandatory to perform feasibility studies to test various techniques to meet the technical and cost-effectiveness requirements for the next generation TeV telescopes as CTA and AGIS. In this context at the Astronomical Observatory of Brera (INAF-OAB) we have started the investigation of different techniques for the manufacturing of stiff and lightweight optical glass mirror panels. These panels show a sandwich-like structure with two thin glass skins on both sides, the reflective one being optically shaped using an ad-hoc slumping procedure. The technologies here presented can be addressed both for primary or secondary mirrors for the next generation of Cherenkov telescopes. In this paper we present and discuss the different techniques we are investigating with some preliminary results obtained from test panels realized.

  2. Household water use and conservation models using Monte Carlo techniques

    Science.gov (United States)

    Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.

    2013-10-01

    The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate cost-effectiveness of water conservation programs.
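
    The flavour of the end-use Monte Carlo simulation can be conveyed with a toy version in which each household draws its fixture parameters from probability distributions. The distributions and numbers below are invented placeholders, not the calibrated San Ramon values.

        import numpy as np

        rng = np.random.default_rng(7)
        n_households = 1000

        # hypothetical end-use parameter distributions (per household)
        persons     = rng.integers(1, 6, n_households)                 # occupants
        shower_lpm  = rng.normal(9.0, 2.0, n_households).clip(4, 15)   # shower flow rate, L/min
        shower_min  = rng.lognormal(np.log(8.0), 0.4, n_households)    # shower duration, min/person/day
        toilet_lpf  = rng.choice([4.8, 6.0, 13.0], n_households,
                                 p=[0.3, 0.4, 0.3])                    # litres per flush
        flushes     = rng.poisson(5, n_households)                     # flushes/person/day
        outdoor_lpd = rng.gamma(2.0, 150.0, n_households)              # outdoor use, L/day

        daily_use = (persons * (shower_lpm * shower_min + toilet_lpf * flushes)
                     + outdoor_lpd)                                    # litres/household/day
        print(round(daily_use.mean(), 1), round(np.percentile(daily_use, 90), 1))

    Summing sampled end uses household by household is what allows such a model to be compared against metered data and to be coupled to a conservation-optimization stage.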

  3. Results of a nuclear power plant application of A New Technique for Human Error Analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Whitehead, D.W.; Forester, J.A. [Sandia National Labs., Albuquerque, NM (United States); Bley, D.C. [Buttonwood Consulting, Inc. (United States)] [and others]

    1998-03-01

    A new method to analyze human errors has been demonstrated at a pressurized water reactor (PWR) nuclear power plant. This was the first application of the new method referred to as A Technique for Human Error Analysis (ATHEANA). The main goals of the demonstration were to test the ATHEANA process as described in the frame-of-reference manual and the implementation guideline, test a training package developed for the method, test the hypothesis that plant operators and trainers have significant insight into the error-forcing-contexts (EFCs) that can make unsafe actions (UAs) more likely, and to identify ways to improve the method and its documentation. A set of criteria to evaluate the success of the ATHEANA method as used in the demonstration was identified. A human reliability analysis (HRA) team was formed that consisted of an expert in probabilistic risk assessment (PRA) with some background in HRA (not ATHEANA) and four personnel from the nuclear power plant. Personnel from the plant included two individuals from their PRA staff and two individuals from their training staff. Both individuals from training are currently licensed operators and one of them was a senior reactor operator on shift until a few months before the demonstration. The demonstration was conducted over a 5-month period and was observed by members of the Nuclear Regulatory Commission's ATHEANA development team, who also served as consultants to the HRA team when necessary. Example results of the demonstration to date, including identified human failure events (HFEs), UAs, and EFCs, are discussed. Also addressed is how simulator exercises are used in the ATHEANA demonstration project.

  4. The Oblique Metaphyseal Shortening Osteotomy of the Distal Ulna: Surgical Technique and Results of Ten Patients.

    Science.gov (United States)

    Benis, Szabolcs; Goubau, Jean F; Mermuys, Koen; Van Hoonacker, Petrus; Berghs, Bart; Kerckhove, Diederick; Vanmierlo, Bert

    2017-02-01

    Background Ulnocarpal abutment is a common condition following distal radius fractures. There are different surgical methods of treatment for this pathology: the open or arthroscopic wafer procedure, or an ulnar shortening osteotomy. We describe an oblique metaphyseal shortening osteotomy of the distal ulna using two cannulated headless compression screws. We report the results of 10 patients treated with this method. Materials and Methods Out of 17 patients, 10 could be reviewed retrospectively for this study. Patient-rated outcomes were measured using the VAS (visual analogue scale) for pain, the PRWHE (patient-rated wrist and hand evaluation) survey, and the Quick-DASH (disability of arm, shoulder and hand) survey for functional outcomes. At the review we measured the range of motion (ROM) of the wrist (extension and flexion, ulnar and radial deviation, pronation and supination). Grip strength and pronation and supination strength of the forearm were measured using a calibrated hydraulic dynamometer. ROM and strength of the affected wrist were compared with ROM and strength of the unaffected wrist. Surgical Procedure Oblique long metaphyseal osteotomy of the distal ulna (from proximal-ulnar to distal-radial), fixed with two cannulated headless compression screws. Results The average postoperative VAS score for pain was 23.71 (standard deviation [SD] of 30.41). The average postoperative PRWHE score was 32.55 (SD of 26.28). The average postoperative Quick-DASH score was 28.65 (SD of 27.21). The majority of patients had comparable ROM and strength between the operated and the non-operated side. Conclusion This surgical technique has the advantage of reducing the amount of hardware and decreasing the potential hindrance it causes in the medium term. Moreover, the incision remains smaller, and the anatomic metaphyseal localization of the osteotomy potentially allows better and more rapid healing.

  5. Some results regarding the comparison of the Earth's atmospheric models

    Directory of Open Access Journals (Sweden)

    Šegan S.

    2005-01-01

    Full Text Available In this paper we examine air densities derived from our realization of aeronomic atmosphere models based on accelerometer measurements from satellites in low Earth orbit (LEO). Using the adapted algorithms we derive comparison parameters. The first results concerning the adjustment of the aeronomic models to the total-density model are given.

  6. Suitability of sheet bending modelling techniques in CAPP applications

    NARCIS (Netherlands)

    Streppel, A.H.; de Vin, L.J.; de Vin, L.J.; Brinkman, J.; Brinkman, J.; Kals, H.J.J.

    1993-01-01

    The use of CNC machine tools, together with decreasing lot sizes and stricter tolerance prescriptions, has led to changes in sheet-metal part manufacturing. In this paper, problems introduced by the difference between the actual material behaviour and the results obtained from analytical models and

  7. Accuracy Enhanced Stability and Structure Preserving Model Reduction Technique for Dynamical Systems with Second Order Structure

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    A method for model reduction of dynamical systems with the second order structure is proposed in this paper. The proposed technique preserves the second order structure of the system, and also preserves the stability of the original systems. The method uses the controllability and observability...... gramians within the time interval to build the appropriate Petrov-Galerkin projection for dynamical systems within the time interval of interest. The bound on approximation error is also derived. The numerical results are compared with the counterparts from other techniques. The results confirm...

  8. Modelling the potential spatial distribution of mosquito species using three different techniques.

    Science.gov (United States)

    Cianci, Daniela; Hartemink, Nienke; Ibáñez-Justicia, Adolfo

    2015-02-27

    Models for the spatial distribution of vector species are important tools in the assessment of the risk of establishment and subsequent spread of vector-borne diseases. The aims of this study are to define the environmental conditions suitable for several mosquito species through species distribution modelling techniques, and to compare the results produced with the different techniques. Three different modelling techniques, i.e., non-linear discriminant analysis, random forest and generalised linear model, were used to investigate the environmental suitability in the Netherlands for three indigenous mosquito species (Culiseta annulata, Anopheles claviger and Ochlerotatus punctor). Results obtained with the three statistical models were compared with regard to: (i) environmental suitability maps, (ii) environmental variables associated with occurrence, (iii) model evaluation. The models indicated that precipitation, temperature and population density were associated with the occurrence of Cs. annulata and An. claviger, whereas land surface temperature and vegetation indices were associated with the presence of Oc. punctor. The maps produced with the three different modelling techniques showed consistent spatial patterns for each species, but differences in the ranges of the predictions. Non-linear discriminant analysis had lower predictions than other methods. The model with the best classification skills for all the species was the random forest model, with specificity values ranging from 0.89 to 0.91, and sensitivity values ranging from 0.64 to 0.95. We mapped the environmental suitability for three mosquito species with three different modelling techniques. For each species, the maps showed consistent spatial patterns, but the level of predicted environmental suitability differed; NLDA gave lower predicted probabilities of presence than the other two methods. The variables selected as important in the models were in agreement with the existing knowledge about
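    As an illustration of the kind of comparison reported above, the sketch below fits two of the three techniques (a GLM in the form of logistic regression, and a random forest) to synthetic presence/absence data and reports sensitivity and specificity. The covariates and records are invented stand-ins, not the Dutch mosquito dataset, and non-linear discriminant analysis is omitted.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)

    # Synthetic stand-in for environmental covariates (e.g. temperature,
    # precipitation, population density); not the Dutch mosquito data.
    n = 2000
    X = rng.normal(size=(n, 3))
    logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))   # presence/absence

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

    models = {
        "GLM (logistic regression)": LogisticRegression(max_iter=1000),
        "Random forest": RandomForestClassifier(n_estimators=200, random_state=1),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"{name}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
    ```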

  9. A Hybrid Model for the Mid-Long Term Runoff Forecasting by Evolutionary Computation Techniques

    Institute of Scientific and Technical Information of China (English)

    Zou Xiu-fen; Kang Li-shan; Cae Hong-qing; Wu Zhi-jian

    2003-01-01

    The mid-long term hydrology forecasting is one of the most challenging problems in hydrological studies. This paper proposes an efficient dynamical system prediction model using evolutionary computation techniques. The new model overcomes some disadvantages of conventional hydrology forecasting models. The observed data is divided into two parts: the slow "smooth and steady" data, and the fast "coarse and fluctuation" data. Under this divide-and-conquer strategy, the behavior of the smooth data is modeled by ordinary differential equations based on evolutionary modeling, and that of the coarse data is modeled using the gray correlative forecasting method. Our model is verified on the test data of the mid-long term hydrology forecast in the northeast region of China. The experimental results show that the model is superior to the gray system prediction model (GSPM).
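    The gray-forecasting half of such a hybrid can be sketched with the classical GM(1,1) grey prediction model; the series below is illustrative, not the paper's hydrological record, and the evolutionary ODE component is not reproduced.

    ```python
    import numpy as np

    def gm11_forecast(x0, steps=3):
        """Classical GM(1,1) grey prediction model.

        x0 : 1-D array of positive observations; returns the fitted series plus
        `steps` forecasts beyond the last observation.
        """
        x0 = np.asarray(x0, dtype=float)
        n = len(x0)
        x1 = np.cumsum(x0)                            # accumulated generating operation
        z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
        B = np.column_stack([-z1, np.ones(n - 1)])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey development & control coefficients

        k = np.arange(n + steps)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # whitened response
        x0_hat = np.empty(n + steps)
        x0_hat[0] = x0[0]
        x0_hat[1:] = np.diff(x1_hat)                  # inverse accumulation
        return x0_hat

    runoff = [320.0, 335.0, 310.0, 355.0, 360.0, 348.0]   # illustrative annual values
    print(gm11_forecast(runoff, steps=2).round(1))
    ```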

  10. The Effect of Bathymetric Filtering on Nearshore Process Model Results

    Science.gov (United States)

    2009-01-01

    Authors: Nathaniel G. Plant, Kacey L. Edwards, James M. Kaihatu, Jayaram Veeramony, Yuan-Huang L. Hsu, K. Todd Holland. Only fragments of the abstract are recoverable from this record, e.g. "Nearshore process models are capable of predicting ..." and "... assimilation efforts that require this information." (Published by Elsevier B.V.)

  11. Measuring Three-Dimensional Thorax Motion Via Biplane Radiographic Imaging: Technique and Preliminary Results.

    Science.gov (United States)

    Baumer, Timothy G; Giles, Joshua W; Drake, Anne; Zauel, Roger; Bey, Michael J

    2016-01-01

    Measures of scapulothoracic motion are dependent on accurate imaging of the scapula and thorax. Advanced radiographic techniques can provide accurate measures of scapular motion, but the limited 3D imaging volume of these techniques often precludes measurement of thorax motion. To overcome this, a thorax coordinate system was defined based on the position of rib pairs and then compared to a conventional sternum/spine-based thorax coordinate system. Alignment of the rib-based coordinate system was dependent on the rib pairs used, with the rib3:rib4 pairing aligned to within 4.4 ± 2.1 deg of the conventional thorax coordinate system.

  12. Fast Spectral Velocity Estimation Using Adaptive Techniques: In-Vivo Results

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Udesen, Jesper

    2007-01-01

    spectral Capon (BPC) method is based on a standard minimum variance technique adapted to account for both averaging over slow-time and depth. The Blood Amplitude and Phase Estimation technique (BAPES) is based on finding a set of matched filters (one for each velocity component of interest) and filtering...... the blood process over slow-time and averaging over depth to find the power spectral density estimate. In this paper, the two adaptive methods are explained, and performance is assessed in controlled steady flow experiments and in-vivo measurements. The three methods were tested on a circulating flow rig...

  13. Seasonal differences in the subjective assessment of outdoor thermal conditions and the impact of analysis techniques on the obtained results

    Science.gov (United States)

    Kántor, Noémi; Kovács, Attila; Takács, Ágnes

    2016-11-01

    Wide research attention has been paid in the last two decades to the thermal comfort conditions of different outdoor and semi-outdoor urban spaces. Field studies were conducted in a wide range of geographical regions in order to investigate the relationship between the thermal sensation of people and thermal comfort indices. Researchers found that the original threshold values of these indices did not describe precisely the actual thermal sensation patterns of subjects, and they reported neutral temperatures that vary among nations and with time of the year. For that reason, thresholds of some objective indices were rescaled and new thermal comfort categories were defined. This research investigates the outdoor thermal perception patterns of Hungarians regarding the Physiologically Equivalent Temperature (PET) index, based on more than 5800 questionnaires. The surveys were conducted in the city of Szeged on 78 days in spring, summer, and autumn. Various, frequently applied analysis approaches (simple descriptive technique, regression analysis, and probit models) were adopted to reveal seasonal differences in the thermal assessment of people. Thermal sensitivity and neutral temperatures were found to be significantly different, especially between summer and the two transient seasons. Challenges of international comparison are also emphasized, since the results prove that neutral temperatures obtained through different analysis techniques may be considerably different. The outcomes of this study underline the importance of the development of standard measurement and analysis methodologies in order to make future studies comprehensible, hereby facilitating the broadening of the common scientific knowledge about outdoor thermal comfort.
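    One of the analysis approaches mentioned above, regressing mean thermal sensation votes against PET to obtain a thermal sensitivity and a neutral temperature, can be sketched as follows. The votes and the 21 °C "true" neutral point are synthetic, not the Szeged survey data.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Synthetic survey data: PET (deg C) and thermal sensation votes (-3 cold ... +3 hot).
    pet   = rng.uniform(5, 40, 800)
    votes = np.clip(0.11 * (pet - 21.0) + rng.normal(0, 0.8, pet.size), -3, 3)

    # Bin PET into 1 deg C classes and regress mean votes on the bin centres
    # (the "simple descriptive technique + regression analysis" approach).
    bins    = np.arange(5, 41, 1.0)
    centres = 0.5 * (bins[:-1] + bins[1:])
    idx     = np.digitize(pet, bins) - 1
    mean_votes = np.array([votes[idx == i].mean() if np.any(idx == i) else np.nan
                           for i in range(len(centres))])
    mask = ~np.isnan(mean_votes)

    slope, intercept = np.polyfit(centres[mask], mean_votes[mask], 1)
    neutral_pet = -intercept / slope      # PET at which the mean sensation crosses zero
    print(f"thermal sensitivity: {slope:.3f} vote/degC, neutral PET: {neutral_pet:.1f} degC")
    ```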

  14. Seasonal differences in the subjective assessment of outdoor thermal conditions and the impact of analysis techniques on the obtained results.

    Science.gov (United States)

    Kántor, Noémi; Kovács, Attila; Takács, Ágnes

    2016-11-01

    Wide research attention has been paid in the last two decades to the thermal comfort conditions of different outdoor and semi-outdoor urban spaces. Field studies were conducted in a wide range of geographical regions in order to investigate the relationship between the thermal sensation of people and thermal comfort indices. Researchers found that the original threshold values of these indices did not describe precisely the actual thermal sensation patterns of subjects, and they reported neutral temperatures that vary among nations and with time of the year. For that reason, thresholds of some objective indices were rescaled and new thermal comfort categories were defined. This research investigates the outdoor thermal perception patterns of Hungarians regarding the Physiologically Equivalent Temperature (PET) index, based on more than 5800 questionnaires. The surveys were conducted in the city of Szeged on 78 days in spring, summer, and autumn. Various, frequently applied analysis approaches (simple descriptive technique, regression analysis, and probit models) were adopted to reveal seasonal differences in the thermal assessment of people. Thermal sensitivity and neutral temperatures were found to be significantly different, especially between summer and the two transient seasons. Challenges of international comparison are also emphasized, since the results prove that neutral temperatures obtained through different analysis techniques may be considerably different. The outcomes of this study underline the importance of the development of standard measurement and analysis methodologies in order to make future studies comprehensible, hereby facilitating the broadening of the common scientific knowledge about outdoor thermal comfort.

  15. Meso-damage modelling of polymer based particulate composites using finite element technique

    Science.gov (United States)

    Tsui, Chi Pong

    The development of a new particulate polymer composite (PPC) with desired mechanical properties is usually accomplished by an experimental trial-and-error approach. A new technique, which predicts the damage mechanism and its effects on the mechanical properties of PPC, has been proposed. This meso-mechanical modelling technique, which offers a means to bridge the micro-damage mechanism and the macro-structural behaviour, has been implemented in a finite element code. A three-dimensional finite element meso-cell model has been designed and constructed to simulate the damage mechanism of PPC. The meso-cell model consists of a micro-particle, an interface, and a matrix. The initiation of the particle/polymer matrix debonding process has been predicted on the basis of a tensile criterion. By considering the meso-cell model as a representative volume element (RVE), the effects of damage on the macro-structural constitutive behaviour of PPC have been determined. An experimental investigation has been made on glass beads (GB) reinforced polyphenylene oxides (PPO) for verification of the meso-cell model and the meso-mechanical finite element technique. The predicted constitutive relation has been found to be in good agreement with the experimental results. The results of the in-situ microscopic test also verify the correctness of the meso-cell model. The application of the meso-mechanical finite element modelling technique has been extended to a macro-structural analysis to simulate the response of an engineering structure made of PPC under a static load. In the simulation, a damage variable has been defined in terms of the computational results of the cell model on the meso-scale. Hence, the damage-coupled constitutive relation of the GB/PPO composite could be derived. A user-defined subroutine VUMAT in FORTRAN language describing the damage-coupled constitutive behaviour has then been incorporated into the ABAQUS finite element code. On a macro-scale, the ABAQUS finite element code

  16. Steel Containment Vessel Model Test: Results and Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Costello, J.F.; Hashimote, T.; Hessheimer, M.F.; Luk, V.K.

    1999-03-01

    A high pressure test of the steel containment vessel (SCV) model was conducted on December 11-12, 1996 at Sandia National Laboratories, Albuquerque, NM, USA. The test model is a mixed-scaled model (1:10 in geometry and 1:4 in shell thickness) of an improved Mark II boiling water reactor (BWR) containment. A concentric steel contact structure (CS), installed over the SCV model and separated at a nominally uniform distance from it, provided a simplified representation of a reactor shield building in the actual plant. The SCV model and contact structure were instrumented with strain gages and displacement transducers to record the deformation behavior of the SCV model during the high pressure test. This paper summarizes the conduct and the results of the high pressure test and discusses the posttest metallurgical evaluation results on specimens removed from the SCV model.

  17. Results and analysis of the 2008-2009 Insulin Injection Technique Questionnaire survey

    NARCIS (Netherlands)

    De Coninck, Carina; Frid, Anders; Gaspar, Ruth; Hicks, Debbie; Hirsch, Larry; Kreugel, Gillian; Liersch, Jutta; Letondeur, Corinne; Sauvanet, Jean-Pierre; Tubiana, Nadia; Strauss, Kenneth

    2010-01-01

    Background: The efficacy of injection therapy in diabetes depends on correct injection technique and, to provide patients with guidance in this area, we must understand how they currently inject. Methods: From September 2008 to June 2009, 4352 insulin-injecting Type 1 and Type 2 diabetic patients fr

  18. Entire papilla preservation technique in the regenerative treatment of deep intrabony defects: 1-Year results.

    Science.gov (United States)

    Aslan, Serhat; Buduneli, Nurcan; Cortellini, Pierpaolo

    2017-09-01

    This study evaluates the clinical outcomes of a novel tunnel-like surgical technique in the treatment of isolated deep intrabony defects. Twelve patients presenting with at least one isolated deep intrabony defect received regenerative periodontal treatment with "entire papilla preservation (EPP)" technique. Access to the intrabony defect for debridement was provided by a bevelled vertical releasing incision positioned in the buccal gingiva of the neighbouring inter-dental space. Following the elevation of a buccal flap, an inter-dental tunnel was prepared undermining the defect-associated papilla. Granulation tissue was removed, root surfaces were carefully debrided and bone substitutes and enamel matrix derivative were applied. Microsurgical suturing technique was used for optimal wound closure. Early healing was uneventful in all cases, and 100% wound closure was maintained during the entire healing period. At 1-year, there was significant attachment gain of 6.83±2.51 mm (p<0.001). The 7±2.8 mm reduction in probing depth was also significant (p<0.001), which was associated with minimal increase in gingival recession (0.16±0.38 mm, p=0.166). Tunnel-like "EPP" technique may limit the risk of wound failure particularly in the early healing phase, thereby preventing exposure of regenerative biomaterials, possibly enhancing stabilization of blood clot in deep intrabony defects and leading to optimal clinical outcomes. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  19. Use of allogenic (iliac) corticocancellous graft for Le Fort I interpositional defects: technique and results.

    Science.gov (United States)

    Posnick, Jeffrey C; Sami, Ali

    2015-01-01

    We performed a retrospective review of a consecutive series of 50 patients who had undergone Le Fort I with interpositional grafting during a 3-year period. Maxillary repositioning included horizontal advancement and vertical and transverse change to the extent that an interpositional graft was considered necessary. Allogenic (iliac) corticocancellous bone was used in all cases. Each patient underwent analytic model planning to document the maxillary vector change data points. The recorded data served as an indicator of the osteotomy site gaps requiring grafting. Standardized photographs in centric relation at a minimum of 12 months after treatment were analyzed to measure overjet, overbite, midline position, and first molar lateral occlusion. Specific maxillary region wound healing parameters were reviewed. The patients' mean age at surgery was 32 years (range 15 to 60). Analytic model planning clarified that the study patients had an average of 8 mm horizontal advancement, 2 mm vertical lengthening, and 2 mm of transverse expansion. The data confirmed a favorable occlusion at a minimum of 1 year after surgery with maintenance of a normal overjet (49 of 50 patients, 98%), normal overbite (48 of 50 patients, 96%), planned dental midline positioning (45 of 50 patients, 90%), and ideal first molar lateral occlusion (48 of 50 patients, 96%) for most patients. None of the study patients sustained wound healing complications. Also, no cases of postoperative sepsis or viral illness developed. The results of the present study have confirmed that iliac corticocancellous allograft has minimal systemic or recipient site complications and can be safely used to fill complex 3-dimensional interpositional defects associated with Le Fort I osteotomy and/or repositioning. Copyright © 2015 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  20. Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique

    Directory of Open Access Journals (Sweden)

    Hussein Rappel

    2014-01-01

    integration technique (EFIT) as well as its validation with analytical results. The Lamb wave method is a long-range inspection technique which is considered to have a unique future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate excitation frequency so that the generated waves are adequately transmitted, propagate properly in the material, interact with defects/damage, and are received in good condition. Modern simulation tools based on numerical methods such as the finite integration technique (FIT), the finite element method (FEM), and the boundary element method (BEM) may be used for modeling. In this paper, two sets of simulations are performed. In the first set, group velocities of Lamb waves in a steel plate are obtained numerically. The results are then compared with analytical results to validate the simulation. In the second set, EFIT is employed to study the interaction of the fundamental symmetric mode with a surface-breaking defect.

  1. Modeling and Control of a Photovoltaic Energy System Using the State-Space Averaging Technique

    Directory of Open Access Journals (Sweden)

    Mohd S. Jamri

    2010-01-01

    Full Text Available Problem statement: This study presented the modeling and control of a stand-alone Photovoltaic (PV) system using the state-space averaging technique. Approach: The PV module was modeled based on the parameters obtained from a commercial PV data sheet, while the state-space method was used to model the power converter. A DC-DC boost converter was chosen to step up the input DC voltage of the PV module, while a DC-AC single-phase full-bridge square-wave inverter was chosen to convert the DC output of the boost converter into AC. The integrated state-space model was simulated under constant and variable changes of solar irradiance and temperature. In addition, a maximum power point tracking method was also included in the model to ensure optimum use of the PV module. A circuitry simulation was performed under the same test conditions in order to validate the state-space model. Results: Results showed that the state-space averaging model yields similar performance to the circuitry simulation in terms of the voltage, current and power generated. Conclusion/Recommendations: The state-space averaging technique is simple to implement for the modeling and control of either simple or complex systems, and yields performance similar to that of the circuitry method.
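    A minimal sketch of the state-space averaging idea for the boost-converter stage, using illustrative component values rather than the parameters of the PV system in the paper: the two switched topologies are weighted by the duty cycle, and the averaged model is solved for its DC steady state.

    ```python
    import numpy as np

    # Illustrative boost-converter parameters (not the values from the paper).
    L, C, R = 1e-3, 470e-6, 10.0        # inductance (H), capacitance (F), load (ohm)
    Vin, d  = 24.0, 0.5                 # PV-side DC voltage (V) and duty cycle

    # State x = [inductor current, capacitor voltage].
    A_on  = np.array([[0.0,  0.0], [0.0,   -1/(R*C)]])   # switch closed
    A_off = np.array([[0.0, -1/L], [1/C,   -1/(R*C)]])   # switch open
    B_on  = np.array([[1/L], [0.0]])
    B_off = np.array([[1/L], [0.0]])

    # State-space averaging: weight the two switched topologies by the duty cycle.
    A = d * A_on + (1 - d) * A_off
    B = d * B_on + (1 - d) * B_off

    # DC steady state of the averaged model: 0 = A x + B Vin  =>  x = -A^{-1} B Vin
    x_ss = -np.linalg.solve(A, B * Vin)
    print(f"steady-state inductor current: {x_ss[0, 0]:.2f} A")
    print(f"steady-state output voltage:   {x_ss[1, 0]:.2f} V (ideal: {Vin/(1-d):.2f} V)")
    ```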

  2. The results of cataract nigra cases operated with the mini-nuc technique.

    Science.gov (United States)

    Yuzbasioglu, Erdal; Helvacioglu, Firat; Tugcu, Betul; Terzi, Nazire; Keskinbora, Kadircan

    2009-12-01

    The purpose of this study was to evaluate the efficacy and safety of the mini-nuc technique for the removal of brunescent and black cataracts. A prospective study was carried out in 33 eyes of 33 patients with cataract nigra operated with the mini-nuc technique between April 2002 and June 2003. Slit-lamp examinations, intraocular pressure (IOP) measurements, and best-corrected visual acuity (BCVA) were assessed pre- and postoperatively. Accompanying systemic diseases were noted. Intraoperative and postoperative complications were evaluated. Unilateral eyes of 33 patients (18 male [54.5%], 15 female [45.5%]) aged between 65 and 90 years (mean 72 years) were operated with the mini-nuc technique. Preoperative BCVA values varied between light perception and 0.2 in the Snellen chart. Intraocular lenses (IOL) were implanted into all of the patients (27 in-the-bag [81.8%], four to sulcus [12.1%], and two with scleral fixation [6.1%]). During the surgery, five patients (15.15%) had zonular dialysis and two (6.1%) had posterior capsule rupture and vitreous loss. Postoperatively, three (9.1%) rises in IOP, two (6.1%) hyphema, and one (3%) IOL subluxation were observed. At the first day visit, the mean of the uncorrected visual acuities (UCVA) was 0.5 in the Snellen chart. At the third month visit, the mean BCVA was observed to be 0.8. The residual mean astigmatism was 0.75 D against the rule. The mini-nuc technique was effective in removing brunescent and black cataracts with a low rate of serious complications. The mini-nuc technique, which is also performed with a small incision and without sutures, might be an alternative to phacoemulsification in cases of cataract nigra.

  3. The feasibility of computational modelling technique to detect the bladder cancer.

    Science.gov (United States)

    Keshtkar, Ahmad; Mesbahi, Asghar; Rasta, S H; Keshtkar, Asghar

    2010-01-01

    A numerical technique, finite element analysis (FEA), was used to model the electrical properties, i.e. the bioimpedance, of the bladder tissue in order to predict bladder cancer. The model results showed that normal bladder tissue has significantly higher impedance than malignant tissue, which is the opposite of what the impedance measurements, i.e. the experimental results, showed. This difference may be explained by the effects of inflammation and oedema on the urothelium and by the property of the bladder as a distensible organ. Furthermore, the different current distributions inside the bladder tissue (in the histological layers) in normal and malignant cases, and finally the different pressures applied over the bladder tissue, can cause different impedances for the bladder tissue. Finally, it is believed that further studies have to be carried out to characterise the human bladder tissue using electrical impedance measurement and modelling techniques.

  4. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling.

    Science.gov (United States)

    Dotto, Cintia B S; Mannina, Giorgio; Kleidorfer, Manfred; Vezzaro, Luca; Henrichs, Malte; McCarthy, David T; Freni, Gabriele; Rauch, Wolfgang; Deletic, Ana

    2012-05-15

    Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. The quantification of the uncertainty associated with the models is a must, although it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multi-objective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For ill-posed water quality model the differences between the results were much wider; and the paper provides the specific advantages and disadvantages of each method. In relation to computational efficiency (i.e. number of iterations required to generate the probability
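    Of the techniques compared above, GLUE is the simplest to sketch. The example below applies it to a toy linear-reservoir rainfall-runoff model with synthetic observations; the model, data, likelihood threshold and use of plain percentiles for the prediction bounds are all illustrative simplifications, not the case-study setup of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def linear_reservoir(rain, k):
        """Toy rainfall-runoff model: a single linear reservoir with recession coefficient k."""
        q, s = np.zeros_like(rain), 0.0
        for t, r in enumerate(rain):
            s += r
            q[t] = k * s
            s -= q[t]
        return q

    # Synthetic "observed" runoff generated with a known k plus noise (illustrative data).
    rain  = rng.gamma(2.0, 2.0, 200)
    q_obs = linear_reservoir(rain, 0.3) + rng.normal(0.0, 0.2, 200)

    # GLUE: Monte Carlo sampling of the parameter, an informal likelihood (here NSE),
    # and retention of "behavioural" parameter sets above a threshold.
    k_samples = rng.uniform(0.05, 0.95, 5000)
    nse = np.array([1.0 - np.sum((linear_reservoir(rain, k) - q_obs) ** 2)
                          / np.sum((q_obs - q_obs.mean()) ** 2) for k in k_samples])
    behavioural = nse > 0.7

    # Prediction bounds from the behavioural simulations (GLUE proper weights the
    # quantiles by likelihood; plain percentiles are used here for brevity).
    sims = np.array([linear_reservoir(rain, k) for k in k_samples[behavioural]])
    lower, upper = np.percentile(sims, [5, 95], axis=0)
    print(f"{behavioural.sum()} behavioural sets, "
          f"k range [{k_samples[behavioural].min():.2f}, {k_samples[behavioural].max():.2f}], "
          f"mean 90% band width {(upper - lower).mean():.2f}")
    ```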

  5. Results from a new Cocks-Ashby style porosity model

    Science.gov (United States)

    Barton, Nathan

    2017-01-01

    A new porosity evolution model is described, along with preliminary results. The formulation makes use of a Cocks-Ashby style treatment of porosity kinetics that includes rate dependent flow in the mechanics of porosity growth. The porosity model is implemented in a framework that allows for a variety of strength models to be used for the matrix material, including ones with significant changes in rate sensitivity as a function of strain rate. Results of the effect of changing strain rate sensitivity on porosity evolution are shown. The overall constitutive model update involves the coupled solution of a system of nonlinear equations.

  6. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  7. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification
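    A minimal sketch of sensitivity-based parameter selection on a toy within-host infection model; parameters whose perturbation barely changes the model response are candidates for removal before calibration. The model and all parameter values are generic illustrative stand-ins, not the HIV model of [2, 3, 8] or its verified selection procedure.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy within-host infection model: target cells T, infected cells I, virus V.
    # Parameter values are illustrative only.
    p0 = {"lam": 10.0, "d": 0.01, "beta": 2e-5, "delta": 0.5, "pv": 100.0, "c": 3.0}

    def rhs(t, y, p):
        T, I, V = y
        return [p["lam"] - p["d"] * T - p["beta"] * T * V,
                p["beta"] * T * V - p["delta"] * I,
                p["pv"] * I - p["c"] * V]

    def viral_load(p):
        sol = solve_ivp(rhs, (0, 50), [1000.0, 0.0, 1e-3], args=(p,),
                        t_eval=np.linspace(0, 50, 101))
        return sol.y[2]

    # Rank parameters by local sensitivity: perturb each by 1% and measure the
    # relative change of the viral-load trajectory (finite differences).
    base = viral_load(p0)
    scores = {}
    for name in p0:
        p = dict(p0)
        p[name] *= 1.01
        scores[name] = np.linalg.norm(viral_load(p) - base) / np.linalg.norm(base)

    for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name:>5s}: relative sensitivity {s:.3e}")
    ```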

  8. Error statistics of hidden Markov model and hidden Boltzmann model results

    Directory of Open Access Journals (Sweden)

    Newberg Lee A

    2009-07-01

    Full Text Available Abstract Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than are existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results.
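    The naive-sampling baseline that the paper improves upon can be sketched as follows: score many random sequences with a toy log-odds model (a stand-in for a profile-HMM score) and fit an extreme-value (Gumbel) tail to estimate the chance that random data achieves a given score or better. The scoring model and all numbers are illustrative; this is not the HMMER implementation or the paper's more efficient estimator.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Toy log-odds scoring model over the DNA alphabet (a stand-in for a profile-HMM).
    motif_len = 8
    pssm = rng.normal(0.0, 1.0, (motif_len, 4))   # log-odds per position and base

    def best_window_score(seq):
        """Best-scoring window of an integer-coded sequence under the toy model."""
        scores = [pssm[np.arange(motif_len), seq[i:i + motif_len]].sum()
                  for i in range(len(seq) - motif_len + 1)]
        return max(scores)

    # Naive sampling of the null score distribution: score many random sequences,
    # then fit a Gumbel (extreme value) tail by the method of moments.
    null_scores = np.array([best_window_score(rng.integers(0, 4, 200))
                            for _ in range(2000)])
    beta = np.sqrt(6.0 * null_scores.var()) / np.pi
    mu = null_scores.mean() - 0.5772156649 * beta   # Euler-Mascheroni constant

    def p_value(score):
        """P(random sequence achieves `score` or better) under the fitted Gumbel tail."""
        return 1.0 - np.exp(-np.exp(-(score - mu) / beta))

    print(f"P-value of a score of {null_scores.max():.2f}: {p_value(null_scores.max()):.3g}")
    ```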

  9. Error statistics of hidden Markov model and hidden Boltzmann model results

    Science.gov (United States)

    Newberg, Lee A

    2009-01-01

    Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than are existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results. PMID:19589158

  10. Results of the Marine Ice Sheet Model Intercomparison Project, MISMIP

    Directory of Open Access Journals (Sweden)

    F. Pattyn

    2012-05-01

    Full Text Available Predictions of marine ice-sheet behaviour require models that are able to robustly simulate grounding line migration. We present results of an intercomparison exercise for marine ice-sheet models. Verification is effected by comparison with approximate analytical solutions for flux across the grounding line using simplified geometrical configurations (no lateral variations, no effects of lateral buttressing). Unique steady state grounding line positions exist for ice sheets on a downward sloping bed, while hysteresis occurs across an overdeepened bed, and stable steady state grounding line positions only occur on the downward-sloping sections. Models based on the shallow ice approximation, which does not resolve extensional stresses, do not reproduce the approximate analytical results unless appropriate parameterizations for ice flux are imposed at the grounding line. For extensional-stress resolving "shelfy stream" models, differences between model results were mainly due to the choice of spatial discretization. Moving grid methods were found to be the most accurate at capturing grounding line evolution, since they track the grounding line explicitly. Adaptive mesh refinement can further improve accuracy, including fixed grid models that generally perform poorly at coarse resolution. Fixed grid models, with nested grid representations of the grounding line, are able to generate accurate steady state positions, but can be inaccurate over transients. Only one full-Stokes model was included in the intercomparison, and consequently the accuracy of shelfy stream models as approximations of full-Stokes models remains to be determined in detail, especially during transients.

  11. Technique for Early Reliability Prediction of Software Components Using Behaviour Models

    Science.gov (United States)

    Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    2016-01-01

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
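    A minimal sketch of the final step described above: computing reliability from a component probabilistic dependency graph with a stack-based traversal. The graph topology, transition probabilities and component reliabilities are invented for illustration, and the construction of the CPDG from the synthesized state machine is not reproduced.

    ```python
    # Nodes carry component reliabilities, edges carry transition probabilities,
    # and the system reliability is the reliability-weighted sum over all paths
    # from the start component to the terminal component (illustrative values).

    reliability = {"A": 0.99, "B": 0.97, "C": 0.98, "D": 0.995}
    edges = {                      # node -> list of (next node, transition probability)
        "A": [("B", 0.6), ("C", 0.4)],
        "B": [("D", 1.0)],
        "C": [("D", 1.0)],
        "D": [],                   # terminal component
    }

    def system_reliability(start="A", end="D"):
        total = 0.0
        stack = [(start, reliability[start])]          # (node, accumulated reliability)
        while stack:
            node, acc = stack.pop()
            if node == end:
                total += acc
                continue
            for nxt, p in edges[node]:
                stack.append((nxt, acc * p * reliability[nxt]))
        return total

    print(f"estimated system reliability: {system_reliability():.4f}")
    ```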

  12. Technique for Early Reliability Prediction of Software Components Using Behaviour Models.

    Science.gov (United States)

    Ali, Awad; N A Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction.

  13. Biomechanical characterisation of lumbar belt by full-field techniques: Preliminary results

    CERN Document Server

    Bonnaire, Rebecca; Calmels, Paul; Convert, Reynald

    2013-01-01

    In France, 50% of the population suffers from low back pain each year. Lumbar belts are frequently proposed as a part of the treatment of this pathology. However, the mechanical mode of action of this medical device is not clearly understood, although abdominal pressure is often invoked. An optical method was therefore developed in this study to measure strain at the lumbar belt-trunk interface and to derive a pressure estimate. The optical method consisted of coupling fringe projection and digital image correlation (DIC). Measurements were carried out on the right side of a manikin wearing a lumbar belt. The average strain is 0.2 and the average pressure is 1 kPa. The continuation of this study will compare strain and pressure in different areas of the lumbar belt (left side, front and back) and compare different lumbar belts. The results will be used in a finite element model to determine the effect of the lumbar belt inside the body. In the long term, this kind of study will be performed on humans.

  14. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model.
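    A minimal sketch of the node-shifting step in the morphing stage, using random point clouds as stand-ins for the source mesh nodes and the CT-derived target surface. Each exterior source node is simply snapped to its nearest target vertex; the published approach additionally constrains the shift along the nodal normal.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(5)

    # Illustrative point clouds standing in for the exterior nodes of the source
    # pelvis mesh and the CT-derived target surface vertices.
    source_nodes    = rng.normal(0.0, 1.0, (500, 3))
    target_vertices = rng.normal(0.0, 1.1, (2000, 3))

    # Simplified morphing step: move each exterior source node to its closest
    # target vertex (nearest-neighbour query on a k-d tree).
    tree = cKDTree(target_vertices)
    dist, idx = tree.query(source_nodes)
    morphed_nodes = target_vertices[idx]

    print(f"mean node shift: {dist.mean():.3f}, max shift: {dist.max():.3f} (model units)")
    ```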

  15. Comparative analysis of system identification techniques for nonlinear modeling of the neuron-microelectrode junction.

    Science.gov (United States)

    Khan, Saad Ahmad; Thakore, Vaibhav; Behal, Aman; Bölöni, Ladislau; Hickman, James J

    2013-03-01

    Applications of non-invasive neuroelectronic interfacing in the fields of whole-cell biosensing, biological computation and neural prosthetic devices depend critically on an efficient decoding and processing of information retrieved from a neuron-electrode junction. This necessitates development of mathematical models of the neuron-electrode interface that realistically represent the extracellular signals recorded at the neuroelectronic junction without being computationally expensive. Extracellular signals recorded using planar microelectrode or field effect transistor arrays have, until now, primarily been represented using linear equivalent circuit models that fail to reproduce the correct amplitude and shape of the signals recorded at the neuron-microelectrode interface. In this paper, to explore viable alternatives for a computationally inexpensive and efficient modeling of the neuron-electrode junction, input-output data from the neuron-electrode junction is modeled using a parametric Wiener model and a Nonlinear Auto-Regressive network with eXogenous input trained using a dynamic Neural Network model (NARX-NN model). Results corresponding to a validation dataset from these models are then employed to compare and contrast the computational complexity and efficiency of the aforementioned modeling techniques with the Lee-Schetzen technique of cross-correlation for estimating a nonlinear dynamic model of the neuroelectronic junction.

  16. Planned posterior assisted levitation in severe subluxated cataract: Surgical technique and clinical results

    Directory of Open Access Journals (Sweden)

    Tova Lifshitz

    2012-01-01

    Full Text Available We report the surgical technique and outcome of planned posterior assisted levitation (P-PAL) in four cases of subluxated cataract. P-PAL was planned as the preferred approach in all cases. A spatula was inserted via the pars plana, the whole lens was lifted to the anterior chamber and then removed through a scleral tunnel incision. Anterior chamber intraocular lenses were implanted in all cases. All four eyes had severe subluxation of the crystalline lenses with marked phacodonesis. Two eyes had history of blunt trauma, and the other two eyes had severe pseudoexfoliation with spontaneous lens subluxation. Follow-up ranged from 1 to 2 years in three cases. The postoperative visual acuity was 20/80 or better. No intraoperative complications were observed. In conclusion, the P-PAL technique was successfully performed during cataract surgery in four eyes with severe subluxated cataracts. There were no complications over the long-term follow-up.

  17. Planned posterior assisted levitation in severe subluxated cataract: Surgical technique and clinical results

    Science.gov (United States)

    Lifshitz, Tova; Levy, Jaime; Kratz, Assaf; Belfair, Nadav; Tsumi, Erez

    2012-01-01

    We report the surgical technique and outcome of planned posterior assisted levitation (P-PAL) in four cases of subluxated cataract. P-PAL was planned as the preferred approach in all cases. A spatula was inserted via the pars plana, the whole lens was lifted to the anterior chamber and then removed through a scleral tunnel incision. Anterior chamber intraocular lenses were implanted in all cases. All four eyes had severe subluxation of the crystalline lenses with marked phacodonesis. Two eyes had history of blunt trauma, and the other two eyes had severe pseudoexfoliation with spontaneous lens subluxation. Follow-up ranged from 1 to 2 years in three cases. The postoperative visual acuity was 20/80 or better. No intraoperative complications were observed. In conclusion, the P-PAL technique was successfully performed during cataract surgery in four eyes with severe subluxated cataracts. There were no complications over the long-term follow-up. PMID:23202402

  18. A truly knotless technique for scleral fixation of intraocular lenses: Two-year results

    Directory of Open Access Journals (Sweden)

    Naresh K Yadav

    2012-01-01

    Full Text Available Scleral fixated intraocular lens (SFIOL) is a safe and effective option for managing optical aphakia. Suture-related complications like suture erosion, suture breakage, endophthalmitis, etc. are unique to SFIOL. The knots can be covered by partial-thickness flaps or they can be rotated into the scleral tissue without flaps to reduce these complications. We performed a recently described novel technique which obviates the need for knots and scleral flaps in securing the SFIOL. This novel 2-point ab externo knotless technique may reduce knot-related problems. Twenty-three eyes undergoing this knotless SFIOL procedure were analyzed for intraoperative and postoperative complications. Twenty-two eyes either maintained or improved on their preoperative vision. All patients had a minimum follow-up of 24 months.

  19. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 1: Concepts and methodology

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available A comprehensive data driven modeling experiment is presented in a two-part paper. In this first part, an extensive data-driven modeling experiment is proposed. The most important concerns regarding the way data driven modeling (DDM) techniques and data were handled, compared, and evaluated, and the basis on which findings and conclusions were drawn, are discussed. A concise review of key articles that presented comparisons among various DDM techniques is presented. Six DDM techniques, namely, neural networks, genetic programming, evolutionary polynomial regression, support vector machines, M5 model trees, and K-nearest neighbors are proposed and explained. Multiple linear regression and naïve models are also suggested as a baseline for comparison with the various techniques. Five datasets from Canada and Europe representing evapotranspiration, upper and lower layer soil moisture content, and the rainfall-runoff process are described and proposed, in the second paper, for the modeling experiment. Twelve different realizations (groups) from each dataset are created by a procedure involving random sampling. Each group contains three subsets: training, cross-validation, and testing. Each modeling technique is proposed to be applied to each of the 12 groups of each dataset. This way, both the prediction accuracy and the uncertainty of the modeling techniques can be evaluated. The description of the datasets, the implementation of the modeling techniques, results and analysis, and the findings of the modeling experiment are deferred to the second part of this paper.
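    A minimal sketch of the random-sampling procedure that builds the twelve realizations, each with training, cross-validation and testing subsets. The group count matches the paper, but the record count and split fractions below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2024)

    def make_realizations(n_records, n_groups=12, fractions=(0.6, 0.2, 0.2)):
        """Create `n_groups` random realizations of a dataset, each split into
        training, cross-validation and testing index sets (fractions are illustrative)."""
        n_train = int(fractions[0] * n_records)
        n_cv    = int(fractions[1] * n_records)
        groups = []
        for _ in range(n_groups):
            perm = rng.permutation(n_records)
            groups.append({
                "train": perm[:n_train],
                "cross_validation": perm[n_train:n_train + n_cv],
                "test": perm[n_train + n_cv:],
            })
        return groups

    groups = make_realizations(n_records=3000)
    print(len(groups), {name: len(idx) for name, idx in groups[0].items()})
    ```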

  20. The comparison of measured impedance of the bladder tissue with the computational modeling results

    Directory of Open Access Journals (Sweden)

    Ahmad Keshtkar

    2015-11-01

    Full Text Available Introduction: The electrical impedance spectroscopy technique can be used to measure the electrical impedance of human bladder tissue, for differentiating pathological changes in the urothelium. Methods: In this study, the electrical impedance spectroscopy technique and then a numerical technique, finite element analysis (FEA), were used to model the electrical properties of this tissue to predict the impedance spectrum of the normal and malignant areas of this organ. Results: After determining and comparing the modeled data with the experimental results, it is believed that there are some factors that may affect the measurement results. Thus, the effects of inflammation, edema, changes in the pressure applied over the probe and the distensible property of the bladder tissue were considered. Furthermore, the current distribution inside the human bladder tissue was modeled in normal and malignant cases using the FEA. The model results showed that very little of the current actually flows through the urothelium and much of the injected current flows through the connective tissue beneath the urothelium. Conclusion: The results of the models do not explain the measurement results. In conclusion, there are many factors which may account for discrepancies between the measured and modeled data.

  1. Updated Results for the Wake Vortex Inverse Model

    Science.gov (United States)

    Robins, Robert E.; Lai, David Y.; Delisi, Donald P.; Mellman, George R.

    2008-01-01

    NorthWest Research Associates (NWRA) has developed an Inverse Model for inverting aircraft wake vortex data. The objective of the inverse modeling is to obtain estimates of the vortex circulation decay and crosswind vertical profiles, using time history measurements of the lateral and vertical position of aircraft vortices. The Inverse Model performs iterative forward model runs using estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Iterations are performed until a user-defined criterion is satisfied. Outputs from an Inverse Model run are the best estimates of the time history of the vortex circulation derived from the observed data, the vertical crosswind profile, and several vortex parameters. The forward model, named SHRAPA, used in this inverse modeling is a modified version of the Shear-APA model, and it is described in Section 2 of this document. Details of the Inverse Model are presented in Section 3. The Inverse Model was applied to lidar-observed vortex data at three airports: FAA acquired data from San Francisco International Airport (SFO) and Denver International Airport (DEN), and NASA acquired data from Memphis International Airport (MEM). The results are compared with observed data. This Inverse Model validation is documented in Section 4. A summary is given in Section 5. A user's guide for the inverse wake vortex model is presented in a separate NorthWest Research Associates technical report (Lai and Delisi, 2007a).
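    A minimal sketch of the iterative inverse-modeling loop described above, using a toy forward model (an exponentially decaying circulation driving vortex descent plus a constant crosswind) in place of SHRAPA, synthetic "lidar" observations, and a generic least-squares solver. All parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(8)

    B = 30.0                               # illustrative vortex-pair spacing (m)
    t = np.linspace(0.0, 60.0, 31)         # wake age (s)

    def forward(params, t):
        """Toy forward model (a stand-in for SHRAPA): decaying circulation drives
        vortex descent, a constant crosswind drives lateral drift."""
        gamma0, tau, crosswind = params
        y = crosswind * t                                               # lateral position (m)
        z = -(gamma0 * tau / (2 * np.pi * B)) * (1 - np.exp(-t / tau))  # descent (m)
        return np.concatenate([y, z])

    # Synthetic "lidar" observations from known parameters plus noise.
    true_params = (400.0, 40.0, 2.0)       # circulation (m^2/s), decay time (s), crosswind (m/s)
    obs = forward(true_params, t) + rng.normal(0.0, 1.0, 2 * t.size)

    # Inverse model: iterate forward-model runs until the misfit criterion is satisfied.
    fit = least_squares(lambda p: forward(p, t) - obs, x0=(200.0, 20.0, 0.0),
                        bounds=([0.0, 1.0, -10.0], [1000.0, 200.0, 10.0]))
    print("estimated [circulation, decay time, crosswind]:", np.round(fit.x, 1))
    ```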

  2. Application of Discrete Fracture Modeling and Upscaling Techniques to Complex Fractured Reservoirs

    Science.gov (United States)

    Karimi-Fard, M.; Lapene, A.; Pauget, L.

    2012-12-01

    During the last decade, an important effort has been made to improve data acquisition (seismic and borehole imaging) and workflow for reservoir characterization which has greatly benefited the description of fractured reservoirs. However, the geological models resulting from the interpretations need to be validated or calibrated against dynamic data. Flow modeling in fractured reservoirs remains a challenge due to the difficulty of representing mass transfers at different heterogeneity scales. The majority of the existing approaches are based on dual continuum representation where the fracture network and the matrix are represented separately and their interactions are modeled using transfer functions. These models are usually based on idealized representation of the fracture distribution which makes the integration of real data difficult. In recent years, due to increases in computer power, discrete fracture modeling techniques (DFM) are becoming popular. In these techniques the fractures are represented explicitly allowing the direct use of data. In this work we consider the DFM technique developed by Karimi-Fard et al. [1] which is based on an unstructured finite-volume discretization. The mass flux between two adjacent control-volumes is evaluated using an optimized two-point flux approximation. The result of the discretization is a list of control-volumes with the associated pore-volumes and positions, and a list of connections with the associated transmissibilities. Fracture intersections are simplified using a connectivity transformation which contributes considerably to the efficiency of the methodology. In addition, the method is designed for general purpose simulators and any connectivity based simulator can be used for flow simulations. The DFM technique is either used standalone or as part of an upscaling technique. The upscaling techniques are required for large reservoirs where the explicit representation of all fractures and faults is not possible
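
    A minimal sketch of what a connectivity-based discretization like the one above hands to a flow simulator: a list of control-volume connections with transmissibilities, from which a steady single-phase pressure system can be assembled via the two-point flux approximation and solved. Boundary handling and the fracture-specific preprocessing are omitted, and all names are assumptions.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import spsolve

def solve_pressure(n_cells, connections, transmissibilities, q):
    """Assemble and solve sum_j T_ij * (p_i - p_j) = q_i from a connection list.
    `connections` is a sequence of (i, j) control-volume index pairs,
    `transmissibilities` the matching T_ij values, and `q` a source/sink vector."""
    rows, cols, vals = [], [], []
    for (i, j), T in zip(connections, transmissibilities):
        rows += [i, j, i, j]
        cols += [i, j, j, i]
        vals += [T, T, -T, -T]      # diagonal accumulation and off-diagonal coupling
    A = coo_matrix((vals, (rows, cols)), shape=(n_cells, n_cells)).tolil()
    A[0, :] = 0.0                   # pin one pressure to remove the null space (illustrative)
    A[0, 0] = 1.0
    b = np.asarray(q, dtype=float).copy()
    b[0] = 0.0
    return spsolve(A.tocsr(), b)
```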

  3. Updating prediction models by dynamical relaxation - An examination of the technique. [for numerical weather forecasting

    Science.gov (United States)

    Davies, H. C.; Turner, R. E.

    1977-01-01

    A dynamical relaxation technique for updating prediction models is analyzed with the help of the linear and nonlinear barotropic primitive equations. It is assumed that a complete four-dimensional time history of some prescribed subset of the meteorological variables is known. The rate of adaptation of the flow variables toward the true state is determined for a linearized f-model, and for mid-latitude and equatorial beta-plane models. The results of the analysis are corroborated by numerical experiments with the nonlinear shallow-water equations.
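
    The relaxation idea can be illustrated with one explicit time step in which the usual model tendency is augmented by a term nudging the observed subset of variables toward their prescribed time history. The relaxation time scale and all names below are assumptions for this sketch rather than quantities from the paper.

```python
import numpy as np

def nudged_step(state, rhs, observed, observed_mask, dt, tau):
    """Advance the model one explicit step with dynamical relaxation:
    d(state)/dt = rhs(state) - mask * (state - observed) / tau,
    where the mask selects the prescribed (known) subset of variables."""
    tendency = rhs(state) - observed_mask * (state - observed) / tau
    return state + dt * tendency
```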

  4. Prediction of Monthly Summer Monsoon Rainfall Using Global Climate Models Through Artificial Neural Network Technique

    Science.gov (United States)

    Nair, Archana; Singh, Gurjeet; Mohanty, U. C.

    2017-08-01

    The monthly prediction of summer monsoon rainfall is very challenging because of its complex and chaotic nature. In this study, a non-linear technique known as the Artificial Neural Network (ANN) has been employed on the outputs of Global Climate Models (GCMs) to bring out the vagaries inherent in monthly rainfall prediction. The GCMs that are considered in the study are from the International Research Institute (IRI) (2-tier CCM3v6) and the National Centre for Environmental Prediction (Coupled-CFSv2). The ANN technique is applied to different ensemble members of the individual GCMs to obtain monthly scale predictions over India as a whole and over its spatial grid points. In the present study, a double cross-validation and simple randomization technique was used to avoid over-fitting during the training process of the ANN model. The performance of the ANN-predicted rainfall from GCMs is judged by analysing the absolute error, box plots, percentiles and the difference in linear error in probability space. Results suggest that there is significant improvement in the prediction skill of these GCMs after applying the ANN technique. The performance analysis reveals that the ANN model is able to capture the year-to-year variations in monsoon months with fairly good accuracy, in extreme years as well. The ANN model is also able to simulate the correct signs of rainfall anomalies over different spatial points of the Indian domain.
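
    A minimal sketch of this kind of statistical post-processing, assuming GCM ensemble-member outputs arranged as feature columns and observed monthly rainfall as the target; the double cross-validation of the study is simplified here to a single random split, and the network size is an arbitrary choice.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

def train_ann_on_gcm(ensemble_outputs, observed_rain, seed=0):
    """Fit an ANN mapping GCM ensemble-member outputs to observed monthly rainfall
    and report the mean absolute error on a held-out validation split."""
    X_train, X_val, y_train, y_val = train_test_split(
        ensemble_outputs, observed_rain, test_size=0.3, random_state=seed)
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=seed)
    model.fit(X_train, y_train)
    return model, mean_absolute_error(y_val, model.predict(X_val))
```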

  5. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a Neural Network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. The Global Positioning System (GPS) and Ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 have been considered. A correlation analysis between GPS S4 and Ionosonde drift velocities (hmf2 and fof2) data has been conducted for forecasting the S4 values. The results indicate that the forecasted S4 values closely follow the measured S4 values under both quiet and disturbed conditions. The outcome of this work will be useful for understanding the ionospheric scintillation phenomena over low latitude regions.
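
    The NN-PSO idea can be sketched as a small feed-forward network whose flattened weight vector is optimized by a basic particle swarm instead of backpropagation. The network size and swarm coefficients below are common textbook choices, not the settings of the cited study.

```python
import numpy as np

def nn_forward(weights, X, n_hidden):
    """One-hidden-layer network; `weights` is a flat vector packed as w1, b1, w2, b2."""
    n_in = X.shape[1]
    w1 = weights[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = weights[n_in * n_hidden:n_in * n_hidden + n_hidden]
    w2 = weights[-(n_hidden + 1):-1]
    b2 = weights[-1]
    return np.tanh(X @ w1 + b1) @ w2 + b2

def pso_train(X, y, n_hidden=5, n_particles=30, iters=200, seed=0):
    """Train the network with a basic particle swarm: each particle is a candidate
    weight vector pulled toward its personal best and the swarm's global best."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
    pos = rng.normal(0.0, 1.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    cost = lambda w: np.mean((nn_forward(w, X, n_hidden) - y) ** 2)
    pbest = pos.copy()
    pbest_cost = np.array([cost(w) for w in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        costs = np.array([cost(w) for w in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest
```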

  6. Detecting feature interactions in Web services with model checking techniques

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    As a platform-independent software system, a Web service is designed to offer interoperability among diverse and heterogeneous applications. With the introduction of service composition in Web service creation, various message interactions among the atomic services result in a problem resembling the feature interaction problem in the telecommunication area. This article defines the problem as feature interaction in Web services and proposes a model checking-based detection method. In the method, the Web service description is translated to the Promela language, the input language of the model checker Simple Promela Interpreter (SPIN), and the specific properties, expressed as linear temporal logic (LTL) formulas, are formulated according to our classification of feature interaction. Then, SPIN is used to check these specific properties to detect the feature interaction in Web services.

  7. Semantic techniques for enabling knowledge reuse in conceptual modelling

    NARCIS (Netherlands)

    Gracia, J.; Liem, J.; Lozano, E.; Corcho, O.; Trna, M.; Gómez-Pérez, A.; Bredeweg, B.

    2010-01-01

    Conceptual modelling tools allow users to construct formal representations of their conceptualisations. These models are typically developed in isolation, unrelated to other user models, thus losing the opportunity of incorporating knowledge from other existing models or ontologies that might enrich

  8. Generalised Chou-Yang model and recent results

    Energy Technology Data Exchange (ETDEWEB)

    Fazal-e-Aleem [International Centre for Theoretical Physics, Trieste (Italy); Rashid, H. [Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics

    1996-12-31

    It is shown that most recent results of the E710 and UA4/2 collaborations for the total cross section and ρ, together with earlier measurements, give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of the Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author) 16 refs.

  9. IMPROVED SOFTWARE QUALITY ASSURANCE TECHNIQUES USING SAFE GROWTH MODEL

    Directory of Open Access Journals (Sweden)

    M. Sangeetha

    2010-09-01

    Full Text Available Our lives are governed by large, complex systems with increasingly complex software, and the safety, security, and reliability of these systems has become a major concern. As the software in today's systems grows larger, it has more defects, and these defects adversely affect the safety, security, and reliability of the systems. Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software. Software quality divides into two pieces: internal and external quality characteristics. External quality characteristics are those parts of a product that face its users, whereas internal quality characteristics are those that do not. Quality is conformance to product requirements and should be free. This research concerns the role of software quality. Software reliability is an important facet of software quality. It is the probability of failure-free operation of a computer program in a specified environment for a specified time. In software reliability modeling, the parameters of the model are typically estimated from the test data of the corresponding component. However, the widely used point estimators are subject to random variations in the data, resulting in uncertainties in these estimated parameters. This research describes a new approach to the problem of software testing. The approach is based on Bayesian graphical models and presents formal mechanisms for the logical structuring of the software testing problem, the probabilistic and statistical treatment of the uncertainties to be addressed, the test design and analysis process, and the incorporation and implication of test results. Once constructed, the models produced are dynamic representations of the software testing problem. It explains why the common test-and-fix software quality strategy is no longer adequate, and characterizes the properties of the quality strategy.

  10. Analysis of polyethylene wear of reverse shoulder components: A validated technique and initial clinical results.

    Science.gov (United States)

    Lewicki, Kathleen A; Bell, John-Erik; Van Citters, Douglas W

    2016-06-27

    One of the most prevalent phenomena associated with reverse total shoulder arthroplasty (rTSA) is scapular notching. Current methods examine only the damage to the scapula and no methods are available for quantifying the total wear volume of the polyethylene humeral bearing. Quantifying the polyethylene material loss may provide insight into the mechanism for scapular notching and into the particle dose delivered to the patient. A coordinate measurement machine (CMM) and custom computer algorithms were employed to quantify the volumetric wear of polyethylene humeral bearings. This technique was validated using two never-implanted polyethylene humeral liners with a controlled amount of wear in clinically relevant locations. The technique was determined to be accurate to within 10% of the known value and within 5 mm³ of the gravimetrically determined values. Following validation, ten retrieved polyethylene humeral liners were analyzed to determine a baseline for future clinical tests. Four of the ten polyethylene humeral liners showed visible and measurable wear volumes ranging from 40 mm³ to 90 mm³ total, with a maximum wear rate as high as 470 mm³/year in one short-duration and significantly damaged humeral liner. This validated technique has the potential to relate patient outcomes such as scapular notching grades to polyethylene release into the body. While the total wear volumes are less than reported in the literature for cases of osteolysis in knee and hip patients, dosages are well within the osteolytic thresholds that have been suggested, indicating that osteolysis may be a clinical concern in the shoulder. This work provides the basis for future studies that relate volumetric wear to patient outcomes.

  11. Autonomous selection of PDE inpainting techniques vs. exemplar inpainting techniques for void fill of high resolution digital surface models

    Science.gov (United States)

    Rahmes, Mark; Yates, J. Harlan; Allen, Josef DeVaughn; Kelley, Patrick

    2007-04-01

    High resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors/malfunctions for various collection platforms, and other factors. DSM voids are also created during bare earth processing where culture and vegetation features have been extracted. The Harris LiteSite™ Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equations (PDEs) and exemplar based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited for different input conditions. The PDE technique works better where the area to be void filled does not have disproportionately high frequency data in the neighborhood of the boundary of the void. Conversely, the exemplar based technique is better suited for high frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is being repaired.
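
    One way to realize this dynamic selection is to measure how much high-frequency content surrounds each void and route smooth neighbourhoods to the PDE fill and rough ones to the exemplar fill. The roughness statistic and threshold below are illustrative assumptions, not the actual LiteSite criteria.

```python
import numpy as np
from scipy import ndimage

def choose_void_fill(dsm, void_mask, ring_width=3, roughness_threshold=4.0):
    """Pick a fill technique for one void: examine a ring of cells around the void
    boundary and use the spread of terrain gradients there as a roughness proxy."""
    ring = ndimage.binary_dilation(void_mask, iterations=ring_width) & ~void_mask
    gy, gx = np.gradient(np.where(void_mask, np.nan, dsm))   # ignore cells inside the void
    roughness = np.nanstd(np.hypot(gx, gy)[ring])
    return "exemplar" if roughness > roughness_threshold else "pde"
```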

  12. Life cycle Prognostic Model Development and Initial Application Results

    Energy Technology Data Exchange (ETDEWEB)

    Jeffries, Brien; Hines, Wesley; Nam, Alan; Sharp, Michael; Upadhyaya, Belle [The University of Tennessee, Knoxville (United States)

    2014-08-15

    In order to obtain more accurate Remaining Useful Life (RUL) estimates based on empirical modeling, a Lifecycle Prognostics algorithm was developed that integrates various prognostic models. These models can be categorized into three types based on the type of data they process. The application of multiple models takes advantage of the most useful information available as the system or component operates through its lifecycle. The Lifecycle Prognostics is applied to an impeller test bed, and the initial results serve as a proof of concept.

  13. Computable General Equilibrium Techniques for Carbon Tax Modeling

    Directory of Open Access Journals (Sweden)

    Al-Amin

    2009-01-01

    Full Text Available Problem statement: For lack of proper environmental models, environmental pollution is now a serious problem in many developing countries, particularly in Malaysia. Some empirical studies worldwide reveal that the imposition of a carbon tax significantly decreases carbon emissions and does not dramatically reduce economic growth. To our knowledge there has not been any research done to simulate the economic impact of emission control policies in Malaysia. Approach: Therefore this study developed an environmental computable general equilibrium model for Malaysia and investigated carbon tax policy responses in the economy, applying exogenously different degrees of carbon tax to the model. Three simulations were carried out using a Malaysian social accounting matrix. Results: The carbon tax policy illustrated that a 1.21% reduction of carbon emission reduced the nominal GDP by 0.82% and exports by 2.08%; a 2.34% reduction of carbon emission reduced the nominal GDP by 1.90% and exports by 3.97%; and a 3.40% reduction of carbon emission reduced the nominal GDP by 3.17% and exports by 5.71%. Conclusion/Recommendations: Imposition of successively higher carbon taxes results in increased government revenue from the baseline by 26.67, 53.07 and 79.28%, respectively. However, fixed capital investment increased in scenario 1a by 0.43% and decreased in scenarios 1b and 1c by 0.26 and 1.79%, respectively, from the baseline. According to our policy findings, policy makers should consider the first (scenario 1a) carbon tax policy. This policy results in achieving reasonably good environmental impacts without losing investment, fixed capital investment, the investment share of nominal GDP and government revenue.

  14. Statistically-constrained shallow text marking: techniques, evaluation paradigm and results

    Science.gov (United States)

    Murphy, Brian; Vogel, Carl

    2007-02-01

    We present three natural language marking strategies based on fast and reliable shallow parsing techniques, and on widely available lexical resources: lexical substitution, adjective conjunction swaps, and relativiser switching. We test these techniques on a random sample of the British National Corpus. Individual candidate marks are checked for goodness of structural and semantic fit, using both lexical resources, and the web as a corpus. A representative sample of marks is given to 25 human judges to evaluate for acceptability and preservation of meaning. This establishes a correlation between corpus based felicity measures and perceived quality, and makes qualified predictions. Grammatical acceptability correlates with our automatic measure strongly (Pearson's r = 0.795, p = 0.001), allowing us to account for about two thirds of variability in human judgements. A moderate but statistically insignificant (Pearson's r = 0.422, p = 0.356) correlation is found with judgements of meaning preservation, indicating that the contextual window of five content words used for our automatic measure may need to be extended.

  15. Trochanteric osteotomies in revision total hip arthroplasty: contemporary techniques and results.

    Science.gov (United States)

    Jando, Victor T; Greidanus, Nelson V; Masri, Bassam A; Garbuz, Donald S; Duncan, Clive P

    2005-01-01

    Revision total hip arthroplasty (THA) presents several challenges to the orthopaedic surgeon and typically requires the use of a more extensile surgical approach. Osteotomy of the greater trochanter can be considered as the ultimate extensile exposure in revision THA. The methods of trochanteric osteotomy can be categorized into three types: the standard trochanteric osteotomy, the trochanteric slide, and the extended trochanteric osteotomy. Although the standard osteotomy and trochanteric slide osteotomy provide excellent acetabular exposure, in the revision setting they are frequently associated with an unacceptably high rate of nonunion and proximal migration of the trochanteric fragment. The extended trochanteric osteotomy (ETO) has increased in popularity as the number and complexity of revision THAs continue to increase. Two commonly used techniques are the ETO via a posterolateral approach or via a modified direct lateral approach. Both techniques provide wide exposure of the acetabulum, facilitate femoral component exposure and removal, aid in canal preparation and femoral reconstruction, and allow for correction of proximal femoral deformity. The osteotomy fragment is easily secured and may be advanced distally to achieve proper tensioning of the abductors. Recent literature demonstrates that the ETO has a relatively low rate of nonunion and is associated with fewer intraoperative femoral fractures or cortical perforations, as well as decreased surgical time.

  16. Introduction results of radiosurgery technique in cervical pathology consultation. Analysis of five years 2004 – 2009.

    Directory of Open Access Journals (Sweden)

    Rafael E. Pérez Castro

    2011-06-01

    Full Text Available A retrospective cross-sectional study was performed in patients with cervical intraepithelial neoplasia (CIN), mainly high-grade lesions (CIN II, CIN III and carcinoma in situ), treated with the radiosurgery technique in the University General Hospital "Camilo Cienfuegos", Sancti Spiritus, from May 2004 to May 2009, with the objective of evaluating the performance of radiosurgery over that period. The study group comprised 550 patients with high-grade lesions or persistent low-grade lesions (CIN I) with a previous biopsy; a smaller group of 24 cases consisted of benign pathologies suitable for this procedure. High-grade lesions were the main indication (73.1%). Sixty-nine cases were false negatives (13.3%); high- and low-grade cervical lesions were present from the second decade of life onward; a good punch-to-cone biopsy correlation was verified; and the margins of the surgical specimen behaved appropriately, below the figure established in the international standard (6.7). The procedure had a great economic and social impact, because it is an ambulatory technique with few difficulties, low cost, and a fast return of patients to normal work.

  17. A New Profile Learning Model for Recommendation System based on Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Shereen H. Ali

    2016-03-01

    Full Text Available Recommender systems (RSs) have been used to successfully address the information overload problem by providing personalized and targeted recommendations to the end users. RSs are software tools and techniques providing suggestions for items to be of use to a user; hence, they typically apply techniques and methodologies from data mining. The main contribution of this paper is to introduce a new user profile learning model to promote the recommendation accuracy of vertical recommendation systems. The proposed profile learning model employs the vertical classifier that has been used in the multi-classification module of the Intelligent Adaptive Vertical Recommendation (IAVR) system to discover the user's area of interest, and then builds the user's profile accordingly. Experimental results have proven the effectiveness of the proposed profile learning model, which accordingly will promote the recommendation accuracy.

  18. Groundwater Resources Assessment For Joypurhat District Using Mathematical Modelling Technique

    Directory of Open Access Journals (Sweden)

    Md. Iquebal Hossain

    2015-06-01

    Full Text Available In this study, potential recharge as well as groundwater availability for five Upazillas (Akkelpur, Kalai, Joypurhat Sadar, Khetlal and Panchbibi) of Joypurhat district has been estimated using MIKE SHE modelling tools. The main aquifers of the study area are dominated by medium sands, medium and coarse sands with little gravels. The top of the aquifers ranges from 15 m to 24 m and the screenable thickness of the aquifers ranges from 33 m to 46 m within the depth range from 57 m to 87 m. Heavy abstraction of groundwater for agricultural, industrial and domestic uses results in excessive lowering of the water table, making the shallow and hand tubewells inoperable in the dry season. The upazila-wise potential recharge for the study area was estimated through a mathematical model using MIKE SHE modelling tools in an integrated approach. The required data were collected from the different relevant organisations. The potential recharge of the present study varies from 452 mm to 793 mm. Maximum depth to the groundwater table in most places occurs at the end of April. At this time, the groundwater table in most parts of Kalai, Khetlal, Akkelpur and Panchbibi goes below the suction limit, rendering hand tubewells (HTWs) and shallow tubewells (STWs) partially or fully inoperable.

  19. PENGEMBANGAN MODEL INTERNALISASI NILAI KARAKTER DALAM PEMBELAJARAN SEJARAH MELALUI MODEL VALUE CLARIFICATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Nunuk Suryani

    2013-07-01

    Full Text Available This research produces a product model for the internalization of character values in history learning through the Value Clarification Technique (VCT), as a revitalization of the role of social studies in the formation of national character. In general, the research consists of three stages: (1) a pre-survey to identify the current condition of character-value learning in history instruction; (2) development of a model based on the pre-survey findings, using the Dick and Carey model; and (3) validation of the model. Model development was carried out through limited trials and extensive testing. The findings of this study lead to the conclusion that the VCT model is effective for internalizing character values in history learning and for increasing the role of history learning in the formation of student character. It can be concluded that the VCT model is effective for improving the quality of the processes and products of character-value learning in junior secondary (SMP) social studies, especially in Surakarta. Keywords: internalization, character values, VCT model, history learning, social studies learning. This research aims to produce a product model for the internalization of character values in social studies learning through the Value Clarification Technique model, as a revitalization of the role of social studies learning in the formation of national character. In outline, the research stages comprise (1) a pre-survey to identify the current condition of character-value learning in ongoing junior secondary (SMP) social studies history instruction, (2) model development based on the pre-survey results, using the Dick and Carey model, and (3) model validation. Model development was carried out through limited and extensive trials. The findings of this research lead to the conclusion that the VCT model is effective in internalizing character values in history learning and effective for increasing the role of history learning in the formation of student character.

  20. Application of nonlinear forecasting techniques for meteorological modeling

    Directory of Open Access Journals (Sweden)

    V. Pérez-Muñuzuri

    Full Text Available A nonlinear forecasting method was used to predict the behavior of a cloud coverage time series several hours in advance. The method is based on the reconstruction of a chaotic strange attractor using four years of cloud absorption data obtained from half-hourly Meteosat infrared images from Northwestern Spain. An exhaustive nonlinear analysis of the time series was carried out to reconstruct the phase space of the underlying chaotic attractor. The forecast values are used by the non-hydrostatic meteorological model ARPS for daily weather prediction, and the results are compared with surface temperature measurements from a meteorological station and a vertical sounding. The effect of noise in the time series is analyzed in terms of the prediction results.

    Key words: Meteorology and atmospheric dynamics (mesoscale meteorology; general) – General (new fields)
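
    The forecasting approach outlined above can be sketched as a delay embedding of the scalar series followed by a nearest-neighbour analogue prediction on the reconstructed attractor. The embedding dimension, delay, and neighbour count below are placeholders; the paper derives such settings from its nonlinear analysis of the cloud-absorption data.

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Reconstruct a phase space from a scalar series: dim delayed copies, tau apart."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

def analogue_forecast(series, dim=4, tau=1, k=5, horizon=1):
    """Find the k reconstructed states most similar to the present one and average
    the values that followed them `horizon` steps later."""
    emb = delay_embed(np.asarray(series, dtype=float), dim, tau)
    current = emb[-1]
    candidates = emb[:-horizon]                      # states whose future is known
    dist = np.linalg.norm(candidates - current, axis=1)
    nearest = np.argsort(dist)[:k]
    futures = nearest + (dim - 1) * tau + horizon    # map embedding rows back to series indices
    return series[futures].mean()
```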

  1. A titration model for evaluating calcium hydroxide removal techniques

    Directory of Open Access Journals (Sweden)

    Mark PHILLIPS

    2015-02-01

    Full Text Available Objective Calcium hydroxide (Ca(OH)2) has been used in endodontics as an intracanal medicament due to its antimicrobial effects and its ability to inactivate bacterial endotoxin. The inability to totally remove this intracanal medicament from the root canal system, however, may interfere with the setting of eugenol-based sealers or inhibit bonding of resin to dentin, thus presenting clinical challenges with endodontic treatment. This study used a chemical titration method to measure residual Ca(OH)2 left after different endodontic irrigation methods. Material and Methods Eighty-six human canine roots were prepared for obturation. Thirty teeth were filled with known but different amounts of Ca(OH)2 for 7 days, which were dissolved out and titrated to quantitate the residual Ca(OH)2 recovered from each root to produce a standard curve. Forty-eight of the remaining teeth were filled with equal amounts of Ca(OH)2 followed by gross Ca(OH)2 removal using hand files and randomized treatment of either: (1) syringe irrigation; (2) syringe irrigation with use of an apical file; (3) syringe irrigation with added 30 s of passive ultrasonic irrigation (PUI); or (4) syringe irrigation with apical file and PUI (n=12/group). Residual Ca(OH)2 was dissolved with glycerin and titrated to measure residual Ca(OH)2 left in the root. Results No method completely removed all residual Ca(OH)2. The addition of 30 s PUI, with or without apical file use, removed Ca(OH)2 significantly better than irrigation alone. Conclusions This technique allowed quantification of residual Ca(OH)2. The use of PUI (with or without apical file) resulted in significantly lower Ca(OH)2 residue compared to irrigation alone.
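
    The quantitation step described above amounts to fitting a standard curve from the calibration roots and inverting it for each treated root. A minimal sketch, assuming a linear titrant-volume response; the function and variable names are placeholders.

```python
import numpy as np

def residual_caoh2(standard_amounts, standard_titrant_vols, sample_titrant_vol):
    """Fit a linear standard curve (titrant volume vs. known Ca(OH)2 amount) and
    invert it to estimate the residual Ca(OH)2 recovered from an unknown sample."""
    slope, intercept = np.polyfit(standard_amounts, standard_titrant_vols, 1)
    return (sample_titrant_vol - intercept) / slope
```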

  2. [Tibial defects and infected non-unions : Treatment results after Masquelet technique].

    Science.gov (United States)

    Moghaddam, A; Ermisch, C; Fischer, C; Zietzschmann, S; Schmidmaier, G

    2017-03-01

    The treatment of non-unions with large bone defects or osteitis is a major challenge in orthopedic and trauma surgery. A new concept of therapy is a two-step procedure: the Masquelet technique according to the diamond concept. Between February 2010 and June 2014, 55 patients with tibia non-unions or infections were treated with the two-step Masquelet technique in our center. The patients' average age was 48 years (median 50; range 15-72), with an average BMI (body mass index) of 28 (27; 18-52). There were 10 (18 %) female and 45 (82 %) male patients in the group. All study patients underwent follow-up. Bone healing and clinical functional data were collected, as well as data according to subjective patient statements about pain and everyday limitations. In 42 cases (76.4 %) the outcome was sufficient bony consolidation. On average, the time to heal was 10.3 (8.5; 3-40) months, defect gaps were 4 cm (3 cm; 0.6-26 cm), and on average the patients had had 6 (median 4; range 1-31) previous operations. In all cases patients received osteosynthesis as well as defect filling with RIA (reamer-irrigator-aspirator) and the growth factor BMP-7 (bone morphogenetic protein-7). In 13 cases (23.6 %) there was no therapeutic success. In the evaluation of the SF12 questionnaire, the mental health score increased from 47.4 (49.1; 27.6-65.7) to 49.8 (53.0; 28.7-69.4) and the well-being score from 32.7 (32.7; 16.9-55.7) to 36.6 (36.5; 24.6-55.9). The two-step bone grafting method in the Masquelet technique used for tibia non-unions according to the diamond concept is a promising treatment option. Its application for tibia shaft non-unions with large bone defects or infections means a high degree of safety for the patient.

  3. [ECCE with self-sealing cataract incision. Technique and clinical results].

    Science.gov (United States)

    Pham, D T; Wollensak, J; Drosch, S

    1995-06-01

    We present a modified technique for sutureless ECCE with a trapezoidal tunnel incision of 11 mm. The operation can be performed in a closed system because of the self-sealing wound construction. Compared to the sutured corneoscleral ECCE, the new procedure has important advantages: it remains safe even during the critical phase following nucleus extraction, it is faster and more economical, and suture-induced astigmatism is avoided. Clinical experience after 2 years showed that postoperative complications were reduced significantly. Iris prolapse, wound dehiscence and hyphema occurred at a rate of 2%. The astigmatism (Jaffe analysis) was about 2 D, stable within 4 weeks after surgery, and did not change up to 2 years postoperatively. The induced astigmatism was then reduced by about 0.5 D by a radical suture.

  4. Early Results Show Reduced Infection Rate Using No-touch Technique for Expander/ADM Breast Reconstruction

    OpenAIRE

    Henry B. Wilson, MD, FACS

    2015-01-01

    Summary: Infection is a common complication of immediate breast reconstruction that often leads to device removal, a result emotionally devastating to the patient and frustrating for her surgeon. “No-touch” techniques have been used in other surgical disciplines and plastic surgery, but they have not been reported for breast reconstruction with tissue expanders or implants and acellular dermis. We report a novel technique of tissue expander and acellular dermis placement using no-touch princi...

  5. Percutaneous Nucleoplasty Using Coblation Technique for the Treatment of Chronic Nonspecific Low Back Pain: 5-year Follow-up Results

    Directory of Open Access Journals (Sweden)

    Da-Jiang Ren

    2015-01-01

    Conclusions: Although previously published short- and medium-term outcomes after percutaneous nucleoplasty appeared to be satisfactory, our long-term follow-up results show a significant decline in patient satisfaction over time. Percutaneous nucleoplasty is a safe and simple technique, with therapeutic effectiveness for the treatment of chronic LBP in selected patients. The technique is minimally invasive and can be used as part of a stepwise treatment plan for chronic LBP.

  6. Clinical and Histological Results of Vertical Ridge Augmentation of Atrophic Posterior Mandible with Inlay Technique of Cancellous Equine Bone Blocks

    OpenAIRE

    Pistilli R; Barausse C; Checchi L; Nannmark U; Felice P

    2013-01-01

    Aim: We want to evaluate a new bone block material in the inlay technique, for the vertical bone augmentation of a posterior atrophic mandible, in order to perform aesthetic and prosthetic rehabilitation and enable implant insertion. Materials & Methods: The inlay technique and the subsequent successful implant rehabilitation in the atrophic right posterior mandible of a 42-year-old woman were completed using a cancellous equine bone block as the grafting material. Results: Three months after ...

  7. [Bernese periacetabular osteotomy. : Indications, technique and results 30 years after the first description].

    Science.gov (United States)

    Lerch, T D; Steppacher, S D; Liechti, E F; Siebenrock, K A; Tannast, M

    2016-08-01

    The Bernese periacetabular osteotomy (PAO) is a surgical technique for the treatment of (1) hip dysplasia and (2) femoroacetabular impingement due to acetabular retroversion. The aim of the surgery is to prevent secondary osteoarthritis by improvement of the hip biomechanics. In contrast to other pelvic osteotomies, the posterior column remains intact with this technique. This improves the inherent stability of the acetabular fragment and thereby facilitates postoperative rehabilitation. The birth canal remains unchanged. Through a shortened ilioinguinal incision, four osteotomies and one controlled fracture around the acetabulum are performed. The direction of acetabular reorientation differs for both indications while the sequence of the osteotomies remains the same. This surgical approach allows for a concomitant osteochondroplasty in the case of an aspherical femoral head-neck junction. The complication rate is relatively low despite the complexity of the procedure. The key point for a successful long term outcome is an optimal reorientation of the acetabulum for both indications. With an optimal reorientation and a spherical femoral head, the cumulative survivorship of the hip after 10 years is 80-90 %. For the very first 75 patients, the cumulative 20-year survivorship was 60 %. The preliminary evaluation of the same series at a 30-year follow-up still showed a survivorship of approximately 30 %. The PAO has become the standard procedure for the surgical therapy of hip dysplasia in adolescents and adults.

  8. A Titration Technique for Demonstrating a Magma Replenishment Model.

    Science.gov (United States)

    Hodder, A. P. W.

    1983-01-01

    Conductometric titrations can be used to simulate subduction-setting volcanism. Suggestions are made as to the use of this technique in teaching volcanic mechanisms and geochemical indications of tectonic settings. (JN)

  9. New Developments and Techniques in Structural Equation Modeling

    CERN Document Server

    Marcoulides, George A

    2001-01-01

    Featuring contributions from some of the leading researchers in the field of SEM, most chapters are written by the author(s) who originally proposed the technique and/or contributed substantially to its development. Content highlights include latent varia

  10. Molecular dynamics techniques for modeling G protein-coupled receptors.

    Science.gov (United States)

    McRobb, Fiona M; Negri, Ana; Beuming, Thijs; Sherman, Woody

    2016-10-01

    G protein-coupled receptors (GPCRs) constitute a major class of drug targets and modulating their signaling can produce a wide range of pharmacological outcomes. With the growing number of high-resolution GPCR crystal structures, we have the unprecedented opportunity to leverage structure-based drug design techniques. Here, we discuss a number of advanced molecular dynamics (MD) techniques that have been applied to GPCRs, including long time scale simulations, enhanced sampling techniques, water network analyses, and free energy approaches to determine relative binding free energies. On the basis of the many success stories, including those highlighted here, we expect that MD techniques will be increasingly applied to aid in structure-based drug design and lead optimization for GPCRs.

  11. Hybrid models for hydrological forecasting: integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following top

  15. Comparison of different modelling techniques for longitudinally invariant integrated optical waveguides

    Science.gov (United States)

    de Zutter, D.; Lagasse, P.; Buus, J.; Young, T. P.; Dillon, B. M.

    1989-10-01

    In order to compare various modeling techniques for the eigenmode analysis of integrated optical waveguides, twelve different methods are applied to the analysis of two typical III-V rib waveguides. Both a single and a coupled waveguide case are considered. Results focus on the effective refractive index value for the lowest order TE-mode in the case of the single waveguide, and on the coupling length between the lowest order symmetric and antisymmetric TE-modes of the coupled waveguides.

  16. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    Science.gov (United States)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

    offline to ensure no feedback to larger scale conditions within the modelling domain. Initially, participants were issued with just forcing data from an unknown urban site (termed "Alpha"); in subsequent stages, further details of the site were provided. Results from each stage, for each participating model, were then compared using a variety of statistical and graphical techniques. * The EGU2009-5713 Team: C.S.B. Grimmond1, M. Blackett1, M. Best2 and J. Barlow3, and J.-J. Baik4, S. Belcher3, S. Bohnenstengel3, I. Calmet5, F. Chen6, A. Dandou7, K. Fortuniak8, M. Gouvea1, R. Hamdi9, M. Hendry2, H. Kondo10, S. Krayenhoff11, S. H. Lee4, T. Loridan1, A. Martilli12, S. Miao13, K. Oleson6, G. Pigeon14, A. Porson2,3, F. Salamanca12, L. Shashua-Bar15, G.-J. Steeneveld16, M. Tombrou7, J. Voogt17, N. Zhang18. 1King's College London, UK, 2UK Met Office, UK, 3University of Reading, UK, 4Seoul National University, Korea, 5Ecole Centrale de Nantes, France, 6National Center for Atmospheric Research, USA, 7University of Athens, Greece, 8University of Łódź, Poland, 9Royal Meteorological Institute, Belgium, 10National Institute of Advanced Industrial Science and Technology, Japan, 11University of British Columbia, Canada, 12CIEMAT, Spain, 13IUM, CMA, China, 14Meteo France, France, 15Ben Gurion University, Israel, 16Wageningen University, Netherlands, 17University of Western Ontario, Canada, 18Nanjing University, China.

  17. Mathematical Existence Results for the Doi-Edwards Polymer Model

    Science.gov (United States)

    Chupin, Laurent

    2017-01-01

    In this paper, we present some mathematical results on the Doi-Edwards model describing the dynamics of flexible polymers in melts and concentrated solutions. This model, developed in the late 1970s, has been used and extensively tested in modeling and simulation of polymer flows. From a mathematical point of view, the Doi-Edwards model consists in a strong coupling between the Navier-Stokes equations and a highly nonlinear constitutive law. The aim of this article is to provide a rigorous proof of the well-posedness of the Doi-Edwards model, namely that it has a unique regular solution. We also prove, which is generally much more difficult for flows of viscoelastic type, that the solution is global in time in the two dimensional case, without any restriction on the smallness of the data.

  18. The Results of Teaching Middle School Students Two Relaxation Techniques as Part of a Conflict Prevention Program.

    Science.gov (United States)

    Dacey, John S.; And Others

    1997-01-01

    Boston College Conflict Prevention Program techniques for relaxation and self-control were taught to middle-school students in two Boston schools. Preliminary results from teacher interviews revealed that students spontaneously used these methods to calm their "fight-or-flight" reactions in real conflicts. Results also indicated that…

  19. Comparison of NASCAP modelling results with lumped circuit analysis

    Science.gov (United States)

    Stang, D. B.; Purvis, C. K.

    1980-01-01

    Engineering design tools that can be used to predict the development of absolute and differential potentials by realistic spacecraft under geomagnetic substorm conditions are described. Two types of analyses are in use: (1) the NASCAP code, which computes quasistatic charging of geometrically complex objects with multiple surface materials in three dimensions; (2) lumped element equivalent circuit models that are used for analyses of particular spacecraft. The equivalent circuit models require very little computation time, however, they cannot account for effects, such as the formation of potential barriers, that are inherently multidimensional. Steady state potentials of structure and insulation are compared with those resulting from the equivalent circuit model.

  20. The East model: recent results and new progresses

    CERN Document Server

    Faggionato, Alessandra; Roberto, Cyril; Toninelli, Cristina

    2012-01-01

    The East model is a particular one dimensional interacting particle system in which certain transitions are forbidden according to some constraints depending on the configuration of the system. As such it has received particular attention in the physics literature as a special case of a more general class of systems referred to as kinetically constrained models, which play a key role in explaining some features of the dynamics of glasses. In this paper we give an extensive overview of recent rigorous results concerning the equilibrium and non-equilibrium dynamics of the East model together with some new improvements.

  1. Constraining hybrid inflation models with WMAP three-year results

    CERN Document Server

    Cardoso, A

    2006-01-01

    We reconsider the original model of quadratic hybrid inflation in light of the WMAP three-year results and study the possibility of obtaining a spectral index of primordial density perturbations, $n_s$, smaller than one from this model. The original hybrid inflation model naturally predicts $n_s \geq 1$ in the false vacuum dominated regime but it is also possible to have $n_s < 1$ when the quadratic term dominates. We therefore investigate whether there is also an intermediate regime compatible with the latest constraints, where the scalar field value during the last 50 e-folds of inflation is less than the Planck scale.

  2. Recent MEG Results and Predictive SO(10) Models

    CERN Document Server

    Fukuyama, Takeshi

    2011-01-01

    Recent MEG results of a search for the lepton flavor violating (LFV) muon decay, $\mu \to e \gamma$, show 3 events as the best value for the number of signals in the maximum likelihood fit. Although this result is still far from evidence/discovery from a statistical point of view, it might be a sign of a certain new physics beyond the Standard Model. As has been well known, supersymmetric (SUSY) models can generate a $\mu \to e \gamma$ decay rate within the search reach of the MEG experiment. A certain class of SUSY grand unified theory (GUT) models, such as the minimal SUSY SO(10) model (we call this class of models "predictive SO(10) models"), can unambiguously determine fermion Yukawa coupling matrices, in particular, the neutrino Dirac Yukawa matrix. Based on the universal boundary conditions for soft SUSY breaking parameters at the GUT scale, we calculate the rate of the $\mu \to e \gamma$ process by using the completely determined Dirac Yukawa matrix in two examples of predictive SO(10) models. If we ...

  3. Ultra-mini PNL (UMP): Material, indications, technique, advantages and results.

    Science.gov (United States)

    Desai, Janak D

    2017-01-01

    Stone disease has afflicted mankind for centuries; records from the ancient civilisations of India and Egypt have shown stones in human bodies. The scientific mind of humans has always made smart endeavours to remove kidney stones. From large instruments made like the beaks of different animals and birds in 600 BC (Indian civilisation) to the extremely sophisticated and miniaturised endoscopic instruments of today, the human race has travelled a long way. The theme has always been to remove the stones with minimal morbidity and mortality and with minimum pain to the patient. The article takes you through the journey of instruments used from 600 BC until today. The story of instrumentation is a symbiosis of medical minds along with engineering advances. The story of miniaturisation could not have moved further without the development of lasers, fiberoptics and sophisticated cameras. As the field stands today, we remove more complex stones by larger endoscopic intervention and smaller stones by miniaturised instruments. The article discusses all the merits and shortcomings of the various techniques: from open surgery to standard PCNL to Mini PCNL to Ultra-Mini PCNL to Micro-PCNL.

  4. Pulsed transthrombotic fibrinolysis: technique and results in the management of occluded lower limb bypass grafts.

    Science.gov (United States)

    Payelle, G; Maiza, D; Coffin, O; Alachkar, F; Alweis, S; Courtheoux, P; Khayat, M C; Gérard, J L; Théron, J

    1997-03-01

    Between March 1987 and March 1993 we used pulsed transthrombotic fibrinolysis to treat 58 symptomatic thrombotic occlusions of lower limb bypass grafts in 45 patients. There were 17 suprainguinal grafts and 28 infrainguinal grafts. Treatment consisted of pulsed infusion of fibrinolytic agents into the thrombus followed by continuous infusion using an electric pump. Minor percutaneous or surgical procedures were often associated. The mean delay to treatment was 7 days. The mean duration of treatment was 150 +/- 66 minutes. Immediate patency was achieved in 88% of cases with no significant difference between suprainguinal and infrainguinal grafts. The clinical success rate was 55%. Actuarial patency at 1 year was 54% +/- 11% for suprainguinal grafts and 26% +/- 7% for infrainguinal grafts. The probability of patency was much lower in patients whose grafts had been implanted within 3 months before occlusion and in patients in whom an adjuvant procedure had not been performed. This study demonstrates that, in cases not requiring immediate surgery, pulsed transthrombotic fibrinolysis can achieve durable patency by treating both the bypass and distal arterial network. This technique allows identification of lesions causing thrombosis and adaptation of treatment specifically to these lesions.

  5. Ecological Footprint Model Using the Support Vector Machine Technique

    Science.gov (United States)

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of service to total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078%, respectively. Our results demonstrate that the EF model based on SVM has good calculation performance. PMID:22291949
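
    A minimal sketch of such an SVM-based EF model, assuming the five factors are arranged as feature columns; the kernel and hyperparameters are generic defaults rather than the values tuned in the paper.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def fit_ef_model(X, y):
    """Support-vector regression of per capita EF on the five factors
    (GDP, urbanization, Gini coefficient, export share, service share)."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
    model.fit(X, y)
    return model

def average_relative_error(model, X, y):
    """Accuracy metric analogous to the paper's average relative error, in percent."""
    pred = model.predict(X)
    return float(np.mean(np.abs(pred - y) / np.abs(y)) * 100.0)
```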

  6. Summary of FY15 results of benchmark modeling activities

    Energy Technology Data Exchange (ETDEWEB)

    Arguello, J. Guadalupe [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    Sandia is a contributing partner in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt, during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere, in the post-operating phase.

  7. Korean round-robin result for new international program to assess the reliability of emerging nondestructive techniques

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung Cho; Kim, Jin Gyum; Kang, Sung Sik; Jhung, Myung Jo [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2017-04-15

    The Korea Institute of Nuclear Safety, as a representative organization of Korea, in February 2012 participated in an international Program to Assess the Reliability of Emerging Nondestructive Techniques initiated by the U.S. Nuclear Regulatory Commission. The goal of the Program to Assess the Reliability of Emerging Nondestructive Techniques is to investigate the performance of emerging and prospective novel nondestructive techniques to find flaws in nickel-alloy welds and base materials. In this article, Korean round-robin test results were evaluated with respect to the test blocks and various nondestructive examination techniques. The test blocks were prepared to simulate large-bore dissimilar metal welds, small-bore dissimilar metal welds, and bottom-mounted instrumentation penetration welds in nuclear power plants. Also, lessons learned from the Korean round-robin test were summarized and discussed.

  8. Korean Round-Robin Tests Result for New International Program to Assess the Reliability of Emerging Nondestructive Techniques

    Directory of Open Access Journals (Sweden)

    Kyung Cho Kim

    2017-04-01

    Full Text Available The Korea Institute of Nuclear Safety, as a representative organization of Korea, in February 2012 participated in an international Program to Assess the Reliability of Emerging Nondestructive Techniques initiated by the U.S. Nuclear Regulatory Commission. The goal of the Program to Assess the Reliability of Emerging Nondestructive Techniques is to investigate the performance of emerging and prospective novel nondestructive techniques to find flaws in nickel-alloy welds and base materials. In this article, Korean round-robin test results were evaluated with respect to the test blocks and various nondestructive examination techniques. The test blocks were prepared to simulate large-bore dissimilar metal welds, small-bore dissimilar metal welds, and bottom-mounted instrumentation penetration welds in nuclear power plants. Also, lessons learned from the Korean round-robin test were summarized and discussed.

  9. Standard Model physics results from ATLAS and CMS

    CERN Document Server

    Dordevic, Milos

    2015-01-01

    The most recent results of Standard Model physics studies in proton-proton collisions at 7 TeV and 8 TeV center-of-mass energy based on data recorded by ATLAS and CMS detectors during the LHC Run I are reviewed. This overview includes studies of vector boson production cross section and properties, results on V+jets production with light and heavy flavours, latest VBS and VBF results, measurement of diboson production with an emphasis on ATGC and QTGC searches, as well as results on inclusive jet cross sections with strong coupling constant measurement and PDF constraints. The outlined results are compared to the prediction of the Standard Model.

  10. Clustering the Results of Brainstorm Sessions: Applying Word Similarity Techniques to Cluster Dutch Nouns

    NARCIS (Netherlands)

    Amrit, Chintan; Hek, Jeroen

    2016-01-01

    This research addresses the problem of clustering the results of brainstorm sessions. Going through all ideas and clustering them can be a time consuming task. In this research we design a computer-aided approach that can help with clustering of these results. We have limited ourselves to looking at

  12. New Diagnostic, Launch and Model Control Techniques in the NASA Ames HFFAF Ballistic Range

    Science.gov (United States)

    Bogdanoff, David W.

    2012-01-01

    This report presents new diagnostic, launch and model control techniques used in the NASA Ames HFFAF ballistic range. High speed movies were used to view the sabot separation process and the passage of the model through the model splap paper. Cavities in the rear of the sabot, to catch the muzzle blast of the gun, were used to control sabot finger separation angles and distances. Inserts were installed in the powder chamber to greatly reduce the ullage volume (empty space) in the chamber. This resulted in much more complete and repeatable combustion of the powder and hence, in much more repeatable muzzle velocities. Sheets of paper or cardstock, impacting one half of the model, were used to control the amplitudes of the model pitch oscillations.

  13. Automatic parameter extraction technique for gate leakage current modeling in double gate MOSFET

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-11-01

    Direct Tunneling (DT) and Trap Assisted Tunneling (TAT) gate leakage current parameters have been extracted and verified using an automatic parameter extraction approach. The industry-standard package IC-CAP is used to extract the leakage current model parameters. The model is coded in Verilog-A, and the comparison between the model and measured data yields the model parameter values and the correlations/relations between parameters. The model and the parameter extraction techniques have been used to study the impact of individual parameters on the gate leakage current based on the extracted parameter values. It is shown that the gate leakage current depends more strongly on the interfacial barrier height than on the barrier height of the dielectric layer. A similar behaviour is observed for the carrier effective masses in the interfacial layer and the dielectric layer. The comparison between the simulated results and available measured gate leakage current characteristics of Trigate MOSFETs shows good agreement.
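
    As a rough illustration of the kind of extraction loop described above (the actual model is coded in Verilog-A and driven from IC-CAP), the sketch below fits two parameters of a simplified tunneling-current expression to synthetic gate-current data with SciPy. The current expression, the parameter names and the data are assumptions for illustration only, not the authors' model.

        # Illustrative sketch only: fit two parameters of a simplified
        # direct-tunneling-like gate-current expression to synthetic I-V data.
        import numpy as np
        from scipy.optimize import curve_fit

        def dt_current(vg, i0, b):
            # simplified Fowler-Nordheim-like form (assumed): I = i0 * Vg^2 * exp(-b / Vg)
            return i0 * vg ** 2 * np.exp(-b / np.clip(vg, 1e-3, None))

        # synthetic "measured" data standing in for IC-CAP measurements
        vg = np.linspace(0.2, 1.2, 30)
        meas = dt_current(vg, 1e-9, 4.0) * (1 + 0.05 * np.random.default_rng(0).normal(size=vg.size))

        popt, pcov = curve_fit(dt_current, vg, meas, p0=(1e-10, 1.0))
        print("extracted i0, b:", popt)
        print("i0-b correlation:", pcov[0, 1] / np.sqrt(pcov[0, 0] * pcov[1, 1]))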

  14. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    Science.gov (United States)

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  15. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  16. Iliac crest allograft glenoid reconstruction for recurrent anterior shoulder instability in athletes: Surgical technique and results

    Directory of Open Access Journals (Sweden)

    Randy Mascarenhas

    2014-01-01

    Full Text Available Performing a labral repair alone in patients with recurrent anterior instability and a large glenoid defect has led to poor outcomes. We present a technique involving the use of the iliac crest allograft inserted into the glenoid defect in athletes with recurrent anterior shoulder instability and large bony defects of the glenoid (>25% of glenoid diameter). All athletes with recurrent anterior shoulder instability and a large glenoid defect that underwent open anterior shoulder stabilization and glenoid reconstruction with the iliac crest allograft were followed over a 4-year period. Preoperatively, a detailed history and physical exam were obtained along with standard radiographs and magnetic resonance imaging of the affected shoulder. All patients also completed the Simple Shoulder Test (SST) and American Shoulder and Elbow Surgeons (ASES) evaluation forms preoperatively. A computed tomography scan was obtained postoperatively to assess osseous union of the graft, and the patients again went through a physical exam in addition to completing the SST, ASES, and Western Ontario Shoulder Instability Index (WOSI) forms. 10 patients (9 males, 1 female) were followed for an average of 16 months (4-36 months) and had a mean age of 24.4 years. All patients exhibited a negative apprehension/relocation test and full shoulder strength at final follow-up. Eight of 10 patients had achieved osseous union at 6 months (80.0%). ASES scores improved from 64.3 to 97.8, and SST scores improved from 66.7 to 100. Average postoperative WOSI scores were 93.8%. The use of the iliac crest allograft provides a safe and clinically useful alternative to previously described procedures for recurrent shoulder instability in the face of glenoid deficiency.

  17. Mucosal proctectomy and ileoanal pull-through technique and functional results in 23 consecutive patients.

    Science.gov (United States)

    Bodzin, J H; Kestenberg, W; Kaufmann, R; Dean, K

    1987-07-01

    Mucosal proctectomy with ileoanal pull-through in the treatment of ulcerative colitis and familial polyposis provides a technique for the preservation of the anal sphincters and relatively normal mechanisms of continence. Five patients had straight ileoanal anastomosis while 18 had the construction of a J-pouch. A two-team approach was used for simultaneous abdominal and perineal procedures to facilitate a shortened operating time. A loop ileostomy was routinely used in the postoperative period and was closed an average of 4.5 months (range: 2-16 months) later without complication. Prolonged preoperative hospitalization was rarely necessary and outpatient steroid enema preparation was routinely used. There were no deaths. Nineteen patients with functioning pull-through procedures have been followed an average of 23 months (range: 3-42 months). Two other patients have not had ileostomy closure because of complications. The two remaining patients had intractable diarrhea and have since undergone conversion to a permanent ileostomy. The 19 patients are continent, having three to nine bowel movements each day. Nearly all wear a perineal sanitary pad because of rare, unpredictable leakage of small amounts of fluid, especially at night. Complications were significant in this group of patients. Intestinal obstruction was a frequent problem, occurring in 52 per cent of the entire series and necessitating reoperation in 22 per cent. Anal stricture was a problem in another five patients. A variety of other minor problems occurred and most were treated nonoperatively. In spite of moderate diarrhea and occasional leakage of stool, all patients with functioning pull-through procedures prefer their current status to life with an ileostomy.(ABSTRACT TRUNCATED AT 250 WORDS)

  18. A comparison of modelling techniques for computing wall stress in abdominal aortic aneurysms

    Directory of Open Access Journals (Sweden)

    McGloughlin Timothy M

    2007-10-01

    Full Text Available Abstract Background Aneurysms, in particular abdominal aortic aneurysms (AAA), form a significant portion of cardiovascular related deaths. There is much debate as to the most suitable tool for rupture prediction and interventional surgery of AAAs, and currently maximum diameter is used clinically as the determining factor for surgical intervention. Stress analysis techniques, such as finite element analysis (FEA) to compute the wall stress in patient-specific AAAs, have been regarded by some authors to be more clinically important than the use of a "one-size-fits-all" maximum diameter criterion, since some small AAAs have been shown to have higher wall stress than larger AAAs and have been known to rupture. Methods A patient-specific AAA was selected from our AAA database and 3D reconstruction was performed. The AAA was then modelled in this study using three different approaches, namely, AAA(SIMP), AAA(MOD) and AAA(COMP), with each model examined using linear and non-linear material properties. All models were analysed using the finite element method for wall stress distributions. Results Wall stress results show marked differences in peak wall stress between the three methods. Peak wall stress was shown to reduce when more realistic parameters were utilised. It was also noted that wall stress was shown to reduce by 59% when modelled using the most accurate non-linear complex approach, compared to the same model without intraluminal thrombus. Conclusion The results here show that using more realistic parameters affects the resulting wall stress. The use of simplified computational modelling methods can lead to inaccurate stress distributions. Care should be taken when examining stress results found using simplified techniques, in particular, if the wall stress results are to have clinical importance.

  19. Modified urethrovesical anastomosis during robot-assisted simple prostatectomy: Technique and results

    Directory of Open Access Journals (Sweden)

    Octavio Castillo

    2016-06-01

    Conclusion: The results of our study show that RASP with UVA is a feasible, safe, and reproducible procedure with low morbidity. Additional series with larger patient cohorts are needed to validate this approach.

  20. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....

  1. Evaluating machine learning and statistical prediction techniques for landslide susceptibility modeling

    Science.gov (United States)

    Goetz, J. N.; Brenning, A.; Petschko, H.; Leopold, P.

    2015-08-01

    curvature were consistently highly ranked variables. The prediction methods that create splits in the predictors (RF, BPLDA and WOE) resulted in heterogeneous prediction maps full of spatial artifacts. In contrast, the GAM, GLM and SVM produced smooth prediction surfaces. Overall, it is suggested that the framework of this model evaluation approach can be applied to assist in selection of a suitable landslide susceptibility modeling technique.
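
    A minimal sketch of the kind of model comparison described above, assuming a generic presence/absence data set: a GLM (logistic regression) and a random forest are scored by cross-validated AUROC with scikit-learn. The synthetic predictors stand in for terrain attributes such as slope and curvature and are not the study's data.

        # Minimal sketch: compare a GLM and a random forest on synthetic
        # presence/absence data using cross-validated AUROC.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(500, 4))                       # placeholders for slope, curvature, etc.
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

        for name, model in [("GLM", LogisticRegression(max_iter=1000)),
                            ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
            auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
            print(name, "AUROC: %.3f +/- %.3f" % (auc.mean(), auc.std()))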

  2. Ensembles of signal transduction models using Pareto Optimal Ensemble Techniques (POETs).

    Science.gov (United States)

    Song, Sang Ok; Chakrabarti, Anirikh; Varner, Jeffrey D

    2010-07-01

    Mathematical modeling of complex gene expression programs is an emerging tool for understanding disease mechanisms. However, identification of large models sometimes requires training using qualitative, conflicting or even contradictory data sets. One strategy to address this challenge is to estimate experimentally constrained model ensembles using multiobjective optimization. In this study, we used Pareto Optimal Ensemble Techniques (POETs) to identify a family of proof-of-concept signal transduction models. POETs integrate Simulated Annealing (SA) with Pareto optimality to identify models near the optimal tradeoff surface between competing training objectives. We modeled a prototypical signaling network using mass-action kinetics within an ordinary differential equation (ODE) framework (64 ODEs in total). The true model was used to generate synthetic immunoblots from which the POET algorithm identified the 117 unknown model parameters. POET generated an ensemble of signaling models, which collectively exhibited population-like behavior. For example, scaled gene expression levels were approximately normally distributed over the ensemble following the addition of extracellular ligand. Also, the ensemble recovered robust and fragile features of the true model, despite significant parameter uncertainty. Taken together, these results suggest that experimentally constrained model ensembles could capture qualitatively important network features without exact parameter information.
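
    The toy sketch below illustrates the general idea behind POETs as summarized above: a simulated-annealing walk whose acceptance step uses Pareto dominance over two competing objectives, with non-dominated parameter sets archived as the ensemble. The objective functions, proposal distribution and cooling schedule are invented for illustration and are not the published algorithm.

        # Toy sketch of Pareto-ranked simulated annealing over two competing
        # objectives; objectives, proposals and schedule are illustrative only.
        import numpy as np

        rng = np.random.default_rng(2)

        def objectives(p):
            # two conflicting "training errors" (invented for illustration)
            return np.array([np.sum((p - 1.0) ** 2), np.sum((p + 1.0) ** 2)])

        def dominates(a, b):
            # a Pareto-dominates b if it is no worse in all objectives and better in one
            return bool(np.all(a <= b) and np.any(a < b))

        archive = []                                   # (parameters, objectives) kept if non-dominated
        p = rng.normal(size=3)
        f = objectives(p)
        for k in range(2000):
            T = max(1.0 * 0.999 ** k, 1e-3)            # simple cooling schedule
            q = p + rng.normal(scale=0.1, size=3)      # random proposal
            g = objectives(q)
            worse = max(float(np.sum(g - f)), 0.0)
            if dominates(g, f) or rng.random() < np.exp(-worse / T):
                p, f = q, g
                if not any(dominates(af, f) for _, af in archive):
                    archive = [(ap, af) for ap, af in archive if not dominates(f, af)]
                    archive.append((p.copy(), f.copy()))
        print("ensemble size (non-dominated parameter sets):", len(archive))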

  3. Results of error correction techniques applied on two high accuracy coordinate measuring machines

    Energy Technology Data Exchange (ETDEWEB)

    Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R. (Sandia National Labs., Albuquerque, NM (USA); National Inst. of Standards and Technology, Gaithersburg, MD (USA))

    1990-01-01

    The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs which are fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The original error mapping procedure was very complicated and did not make any assumptions about the rigidity of the machine as it moved; each of the possible error motions was measured independently at each point of the error map. A simpler mapping procedure, developed during the early 1980s, assumed rigid body motion of the machine. This method has been used to calibrate lower accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid body model has not yet been used on highly repeatable CMMs such as the M48. In this report we present early mapping data for the two M48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs.
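
    For a three-axis machine the rigid-body model reduces the error map to 21 geometric error components (positioning, straightness, angular and squareness terms). The sketch below shows, with assumed placeholder values, the simplest ingredient of such a software correction: interpolating a measured single-axis positioning-error map and subtracting it from the nominal reading. It is an illustration of the general idea only, not the correction scheme used on the M48 machines.

        # Sketch of single-axis software error correction: interpolate a
        # measured positioning-error map and subtract it from the reading.
        import numpy as np

        # nominal x-axis positions (mm) and a hypothetical positioning-error map (micrometres)
        grid_mm = np.linspace(0.0, 1000.0, 11)
        err_um = np.array([0.0, 0.3, 0.5, 0.4, 0.6, 0.8, 0.7, 0.9, 1.1, 1.0, 1.2])

        def corrected_x(x_mm):
            # interpolate the error map at the nominal reading and subtract it
            return x_mm - np.interp(x_mm, grid_mm, err_um) * 1e-3   # convert um to mm

        print(corrected_x(np.array([123.4, 567.8, 910.1])))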

  4. Relationship Marketing results: proposition of a cognitive mapping model

    Directory of Open Access Journals (Sweden)

    Iná Futino Barreto

    2015-12-01

    Full Text Available Objective - This research sought to develop a cognitive model that expresses how marketing professionals understand the relationship between the constructs that define relationship marketing (RM). It also tried to understand, using the obtained model, how objectives in this field are achieved. Design/methodology/approach – Through cognitive mapping, we traced 35 individual mental maps, highlighting how each respondent understands the interactions between RM elements. Based on the views of these individuals, we established an aggregate mental map. Theoretical foundation – The topic is based on a literature review that explores the RM concept and its main elements. Based on this review, we listed eleven main constructs. Findings – We established an aggregate mental map that represents the RM structural model. Model analysis identified that CLV is understood as the final result of RM. We also observed that the impact of most of the RM elements on CLV is mediated by loyalty. Personalization and quality, on the other hand, proved to be process input elements, and are the ones that most strongly impact the others. Finally, we highlight that elements that punish customers are much less effective than elements that benefit them. Contributions - The model was able to incorporate core elements of RM that are absent from most formal models: CLV and customization. The analysis allowed us to understand the interactions between the RM elements and how the end result of RM (CLV) is formed. This understanding improves knowledge on the subject and helps guide, assess and correct actions.

  5. A Novel Algorithmic Cost Estimation Model Based on Soft Computing Technique

    Directory of Open Access Journals (Sweden)

    Iman Attarzadeh

    2010-01-01

    Full Text Available Problem statement: Software development effort estimation is the process of predicting the most realistic use of effort required for developing software based on some parameters. It has been one of the biggest challenges in Computer Science for decades, because time and cost estimates at the early stages of software development are the most difficult to obtain and are often the least accurate. Traditional algorithmic techniques such as regression models, Software Life Cycle Management (SLIM), the COCOMO II model and function points require a lengthy estimation process, which nowadays is not acceptable for software developers and companies. Newer soft computing approaches to effort estimation based on non-algorithmic techniques such as Fuzzy Logic (FL) may offer an alternative for solving the problem. This work aims to propose a new realistic fuzzy logic model to achieve more accuracy in software effort estimation. The main objective of this research was to investigate the role of the fuzzy logic technique in improving effort estimation accuracy by characterizing input parameters using two-side Gaussian functions, which give a superior transition from one interval to another. Approach: The methodology adopted in this study was the use of a fuzzy logic approach rather than the classical intervals in COCOMO II. Using the advantages of fuzzy logic, such as fuzzy sets, input parameters can be specified by the distribution of their possible values, and these fuzzy sets are represented by membership functions. In this study, to get a smoother transition in the membership functions of the input parameters, their associated linguistic values were represented by two-side Gaussian Membership Functions (2-D GMF) and rules. Results: After analyzing the results attained by applying COCOMO II and the proposed fuzzy-logic-based model to the NASA dataset and to a created artificial dataset, it was found that the proposed model was performing
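
    As a hedged illustration of the two-side Gaussian membership idea mentioned above, the sketch below defines a membership function with independent left and right widths and uses two such sets to blend adjacent COCOMO-style effort multipliers. All ratings, widths and multiplier values are invented for illustration and are not the calibrated model of the paper.

        # Sketch of a two-sided Gaussian membership function and a simple
        # weighted blend of adjacent effort multipliers (values illustrative).
        import numpy as np

        def gauss2mf(x, s1, c1, s2, c2):
            # two-sided Gaussian: membership 1 between c1 and c2, Gaussian shoulders outside
            x = np.asarray(x, dtype=float)
            left = np.exp(-0.5 * ((x - c1) / s1) ** 2)
            right = np.exp(-0.5 * ((x - c2) / s2) ** 2)
            return np.where(x < c1, left, np.where(x > c2, right, 1.0))

        rating = 3.4                                     # hypothetical cost-driver rating
        mu_nominal = gauss2mf(rating, 0.4, 2.8, 0.4, 3.2)
        mu_high = gauss2mf(rating, 0.4, 3.8, 0.4, 4.2)
        weights = np.array([mu_nominal, mu_high], dtype=float)
        multipliers = np.array([1.00, 1.15])             # illustrative effort multipliers
        blended = float(np.sum(weights * multipliers) / np.sum(weights))
        print("blended effort multiplier:", round(blended, 3))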

  6. Survey of the results of acute sciatic nerve repair comparing epineural and perineurial techniques in the lower extremities of rat

    Institute of Scientific and Technical Information of China (English)

    Hamid Karimi; Kamal Seyed Forootan; Gholamreza Moein; Seyed Jaber Mosavi; Batol Ghorbani Iekta

    2015-01-01

    Objective: To study the results of nerve repair with the two mentioned techniques in rats, in order to find a proper answer to the existing disagreement. Methods: Twenty adult male rats were included in the treatment group. The acutely transected sciatic nerve was repaired by the epineural technique in half of the rats; in the other half the perineurial technique was applied. After 80 d, the number of regenerated axons distal to the repair site was counted using an optical microscope. Additionally, the return of motor function was evaluated by studying the rats' footprints. Results: In the epineural group the SFI was 56.33±32.30 and in the perineurial group 55.71±30.31 (P value=0.930), with no difference between the two surgical techniques. However, within each group, statistical tests showed significant functional improvement in comparison with the day before surgery (P value=0.0001). Statistical tests showed that the mean number of axons distal to the anastomosis site was 349±80 in the epineural group and 405±174 in the perineurial group; these groups showed no significant difference regarding the number of axons (P value=0.36). Conclusion: The epineural and perineurial surgical techniques show no difference in nerve repair, SFI, or distal axon counts.

  7. OFF-LINE HANDWRITING RECOGNITION USING VARIOUS HYBRID MODELING TECHNIQUES AND CHARACTER N-GRAMS

    NARCIS (Netherlands)

    Brakensiek, A.; Rottland, J.; Kosmala, A.; Rigoll, G.

    2004-01-01

    In this paper a system for on-line cursive handwriting recognition is described. The system is based on Hidden Markov Models (HMMs) using discrete and hybrid modeling techniques. Here, we focus on two aspects of the recognition system. First, we present different hybrid modeling techniques, whereas

  8. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computations are carried out on a wind turbine exposed to a representative

  9. Value of the distant future: Model-independent results

    Science.gov (United States)

    Katz, Yuri A.

    2017-01-01

    This paper shows that a model-independent account of correlations in an interest rate process or a log-consumption growth process leads to declining long-term tails of discount curves. Under the assumption of an exponentially decaying memory in fluctuations of risk-free real interest rates, I derive the analytical expression for an apt value of the long-run discount factor and provide a detailed comparison of the obtained result with the outcome of the benchmark risk-free interest rate models. Utilizing the standard consumption-based model with an isoelastic power utility of the representative economic agent, I derive the non-Markovian generalization of the Ramsey discounting formula. The obtained analytical results, which allow simple calibration, may augment the rigorous cost-benefit and regulatory impact analysis of long-term environmental and infrastructure projects.
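
    A sketch of the standard calculation that underlies results of this kind, in notation that is not necessarily the paper's: for a stationary Gaussian rate with exponentially decaying autocovariance, the certainty-equivalent discount rate declines from its mean toward a lower long-run limit; the textbook Ramsey rule for the risk-free rate is quoted for reference.

        % Certainty-equivalent discounting for a stationary Gaussian rate r(s)
        % with exponentially decaying autocovariance (notation illustrative):
        D(t) = \mathbb{E}\!\left[e^{-\int_0^t r(s)\,\mathrm{d}s}\right]
             = \exp\!\Big(-\bar{r}\,t + \tfrac{1}{2}\,\mathrm{Var}\Big[\textstyle\int_0^t r(s)\,\mathrm{d}s\Big]\Big),
        \qquad
        \mathrm{Cov}\big[r(s),r(s')\big] = \sigma^{2} e^{-|s-s'|/\tau}.

        \mathrm{Var}\Big[\textstyle\int_0^t r(s)\,\mathrm{d}s\Big]
          = 2\sigma^{2}\tau\,\big(t-\tau(1-e^{-t/\tau})\big)
        \quad\Longrightarrow\quad
        R(t)\equiv-\tfrac{1}{t}\ln D(t)\;\xrightarrow[t\to\infty]{}\;\bar{r}-\sigma^{2}\tau.

        % Textbook Ramsey rule for the risk-free rate (time preference \delta,
        % relative risk aversion \eta, expected consumption growth g):
        r = \delta + \eta\, g.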

  10. Marginal production in the Gulf of Mexico - II. Model results

    Energy Technology Data Exchange (ETDEWEB)

    Kaiser, Mark J.; Yu, Yunke [Center for Energy Studies, Louisiana State University, Baton Rouge, LA 70803 (United States)

    2010-08-15

    In the second part of this two-part article on marginal production in the Gulf of Mexico, we estimate the number of committed assets in water depth less than 1000 ft that are expected to be marginal over a 60-year time horizon. We compute the expected quantity and value of the production and gross revenue streams of the Gulf's committed asset inventory circa January 2007 using a probabilistic model framework. Cumulative hydrocarbon production from the producing inventory is estimated to be 1056 MMbbl oil and 13.3 Tcf gas. Marginal production from the committed asset inventory is expected to contribute 4.1% of total oil production and 5.4% of gas production. A meta-evaluation procedure is adapted to present the results of sensitivity analysis. Model results are discussed along with a description of the model framework and limitations of the analysis. (author)

  11. Platelet-Rich Fibrin (PRF) in Implants Dentistry in Combination with New Bone Regenerative Flapless Technique: Evolution of the Technique and Final Results.

    Science.gov (United States)

    Cortese, Antonio; Pantaleo, Giuseppe; Amato, Massimo; Howard, Candace M; Pedicini, Lorenzo; Claudio, Pier Paolo

    2017-01-01

    The most common techniques for alveolar bone augmentation are guided bone regeneration (GBR) and autologous bone grafting. GBR studies have demonstrated long-term resorption when heterologous bone grafts are used. A general consensus has been achieved in implant surgery for a minimal amount of 2 mm of healthy bone around the implant. A current height loss of about 3-4 mm will result in proper, deeper implant insertion when alveolar bone expansion is not planned, because of the dome shape of the alveolar crest. To manage this situation a split crest technique has been proposed for alveolar bone expansion and implant insertion in one-stage surgery. Platelet-rich fibrin (PRF) is a healing biomaterial with a great potential for bone and soft tissue regeneration without inflammatory reactions, and may be used alone or in combination with bone grafts, promoting hemostasis, bone growth, and maturation. The aim of this study was to demonstrate the clinical effectiveness of PRF combined with a new split crest flapless modified technique in 5 patients vs. 5 control patients. Ten patients with horizontal alveolar crest deficiency were treated in this study, divided into 2 groups: Group 1 (test) of 5 patients treated by the new flapless split crest procedure; Group 2 (control) of 5 patients treated by the traditional technique with deeper insertion of smaller implants without split crest. Follow-up was performed with x-ray orthopantomography and intraoral radiographs at T0 (before surgery), T1 (operation time), T2 (3 months) and T3 (6 months) post-operation. All cases were successful; there were no problems at surgery or during the post-operative period. All implants achieved osseointegration and all patients underwent uneventful prosthetic rehabilitation. Mean bone height loss was 1 mm, measured as the most coronal bone-implant contact (Δ-BIC), and occurred by the immediate T2 post-operative time point (3 months). No alveolar bone height loss was detected at implant insertion time, which was instead

  12. Exact results for car accidents in a traffic model

    Science.gov (United States)

    Huang, Ding-wei

    1998-07-01

    Within the framework of a recent model for car accidents on single-lane highway traffic, we study analytically the probability of the occurrence of car accidents. Exact results are obtained. Various scaling behaviours are observed. The linear dependence of the occurrence of car accidents on density is understood as the dominance of a single velocity in the distribution.

  13. Tools, Techniques, and Training: Results of an E-Resources Troubleshooting Survey

    Science.gov (United States)

    Rathmel, Angela; Mobley, Liisa; Pennington, Buddy; Chandler, Adam

    2015-01-01

    A primary role of any e-resources librarian or staff is troubleshooting electronic resources (e-resources). While much progress has been made in many areas of e-resources management (ERM) to understand the ERM lifecycle and to manage workflows, troubleshooting access remains a challenge. This collaborative study is the result of the well-received…

  14. Scapular allograft reconstruction after total scapulectomy: surgical technique and functional results

    NARCIS (Netherlands)

    Capanna, R.; Totti, F.; Geest, I.C.M. van der; Muller, D.A.

    2015-01-01

    HYPOTHESIS: Scapular allograft reconstruction after total scapulectomy preserving the rotator cuff muscles is an oncologically safe procedure and results in good functional outcome with a low complication rate. METHODS: The data of 6 patients who underwent scapular allograft reconstruction after a

  15. Treatment of simple bone cysts by topical infiltrations of methylprednisolone acetate: Technique and results

    Energy Technology Data Exchange (ETDEWEB)

    Carrata, A.; Garbagna, P.; Mapelli, S.; Zucchi, V.

    1983-02-01

    The authors report their experience with the percutaneous treatment of simple bone cysts by intra-cystic local infiltrations of methylprednisolone acetate. In particular, the method adopted, the evolution of the radiologic picture and the results achieved are described. Sixty patients were successfully treated without complications or surgery.

  16. Modelling, analysis and validation of microwave techniques for the characterisation of metallic nanoparticles

    Science.gov (United States)

    Sulaimalebbe, Aslam

    High Frequency Structure Simulator (HFSS), followed by the electrical characterisation of synthesised Pt NP films using the novel miniature fabricated OCP technique. The results obtained from this technique provided the inspiration to synthesise and evaluate the microwave properties of Au NPs. The findings from this technique provided the motivation to characterise both the Pt and Au NP films using the DR technique. Unlike the OCP technique, the DR method is highly sensitive but the achievable measurement accuracy is limited since this technique does not have broadband frequency capability like the OCP method. The results obtained from the DR technique show a good agreement with the theoretical prediction. In the last phase of this research, a further validation of the aperture admittance models on different types OCP (i.e. RG-405 and RG-402 cables and SMA connector) have been carried out on the developed 3D full wave models using HFSS software, followed by the development of universal models for the aforementioned OCPs based on the same 3D full wave models.

  17. SURGICAL TECHNIQUE, SHORT- AND LONG-TERM RESULTS OF THE HORSESHOE KIDNEY TRANSPLANTATION

    Directory of Open Access Journals (Sweden)

    Sh. R. Galeev

    2015-01-01

    Full Text Available Experience with horseshoe kidney transplantation is significantly restricted. Transplant surgeons often refuse to use horseshoe kidneys due to a number of serious abnormalities of the vessels and upper urinary tract in these organs. However, the constant shortage of donor organs and the increase in patients on the waiting list for kidney transplantation make us reconsider our approach to the selection of donor organs. The aim of this work was to demonstrate our results of horseshoe kidney transplantation.

  18. Clinical Results After Prostatic Artery Embolization Using the PErFecTED Technique: A Single-Center Study

    Energy Technology Data Exchange (ETDEWEB)

    Amouyal, Gregory, E-mail: gregamouyal@hotmail.com; Thiounn, Nicolas, E-mail: nicolas.thiounn@aphp.fr; Pellerin, Olivier, E-mail: olivier.pellerin@aphp.fr [Université Paris Descartes - Sorbonne - Paris - Cité, Faculté de Médecine (France); Yen-Ting, Lin, E-mail: ymerically@gmail.com [Assistance Publique - Hôpitaux de Paris, Hôpital Européen Georges Pompidou, Interventional Radiology Department (France); Giudice, Costantino Del, E-mail: costantino.delgiudice@aphp.fr [Université Paris Descartes - Sorbonne - Paris - Cité, Faculté de Médecine (France); Dean, Carole, E-mail: carole.dean@aphp.fr [Assistance Publique - Hôpitaux de Paris, Hôpital Européen Georges Pompidou, Interventional Radiology Department (France); Pereira, Helena, E-mail: helena.pereira@aphp.fr [Assistance Publique - Hôpitaux de Paris, Hôpital Européen Georges Pompidou, Clinical Research Unit (France); Chatellier, Gilles, E-mail: gilles.chatellier@aphp.fr; Sapoval, Marc, E-mail: marc.sapoval2@aphp.fr [Université Paris Descartes - Sorbonne - Paris - Cité, Faculté de Médecine (France)

    2016-03-15

    Background: Prostatic artery embolization (PAE) has been performed for a few years, but there is no report on PAE using the PErFecTED technique outside the team that initiated this approach. Objective: This single-center retrospective open label study reports our experience and clinical results on patients suffering from symptomatic BPH who underwent PAE aiming at using the PErFecTED technique. Materials and Methods: We treated 32 consecutive patients, mean age 65 (52–84 years old), between December 2013 and January 2015. Patients were referred for PAE after failure of medical treatment and refusal of or contra-indication to surgery. They were treated using the PErFecTED technique, when feasible, with 300–500 µm calibrated microspheres (two-night hospital stay or outpatient procedure). Follow-up was performed at 3, 6, and 12 months. Results: We had a 100 % immediate technical success of embolization (68 % feasibility of the PErFecTED technique) with no immediate complications. After a mean follow-up of 7.7 months, we observed a 78 % rate of clinical success. Mean IPSS decreased from 15.3 to 4.2 (p = .03), mean QoL from 5.4 to 2 (p = .03), mean Qmax increased from 9.2 to 19.2 (p = .25), and mean prostatic volume decreased from 91 to 62 mL (p = .009). There was no retrograde ejaculation and no major complication. Conclusion: PAE using the PErFecTED technique is a safe and efficient technique to treat bothersome LUTS related to BPH. It is of interest to note that the PErFecTED technique cannot be performed in some cases for anatomical reasons.

  19. Hybrid Model Testing Technique for Deep-Sea Platforms Based on Equivalent Water Depth Truncation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, an inner turret moored FPSO which works in water of 320 m depth is selected to study the so-called "passively-truncated + numerical-simulation" type of hybrid model testing technique, with a truncated water depth of 160 m and a model scale of λ=80. During the investigation, the optimization design of the equivalent-depth truncated system is performed by using the similarity of the static characteristics between the truncated system and the full-depth one as the objective function. The corresponding physical test model is then built according to the truncated system. By adopting the coupled time-domain simulation method, the truncated system model test is numerically reconstructed to carefully verify the computer simulation software and to adjust the corresponding hydrodynamic parameters. Based on the above work, the numerical extrapolation to the full-depth system is performed by using the verified software and the adjusted hydrodynamic parameters. The full-depth system model test is then performed in the basin and the results are compared with those from the numerical extrapolation. Finally, the implementation procedure and the key techniques of hybrid model testing for deep-sea platforms are summarized and presented. Through the above investigations, some beneficial conclusions are drawn.
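
    A toy sketch of the equivalent-truncation idea described above, under heavily simplified assumptions: the truncated mooring parameters are chosen by least-squares matching of a static restoring curve against the full-depth target. Both restoring-force expressions and all numbers are placeholders, not the study's mooring analysis.

        # Toy sketch: fit truncated-system parameters so its static restoring
        # curve matches the full-depth target in a least-squares sense.
        import numpy as np
        from scipy.optimize import least_squares

        offsets = np.linspace(0.0, 20.0, 21)            # platform offset, m

        def full_depth_restoring(x):
            # placeholder target curve for the full-depth (320 m) system, kN
            return 50.0 * x + 0.8 * x ** 2

        def truncated_restoring(x, k1, k2):
            # simplified restoring model of the truncated (160 m) system
            return k1 * x + k2 * x ** 2

        def residual(params):
            k1, k2 = params
            return truncated_restoring(offsets, k1, k2) - full_depth_restoring(offsets)

        sol = least_squares(residual, x0=[30.0, 0.1])
        print("equivalent truncated stiffness coefficients:", sol.x)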

  20. Using Interior Point Method Optimization Techniques to Improve 2- and 3-Dimensional Models of Earth Structures

    Science.gov (United States)

    Zamora, A.; Gutierrez, A. E.; Velasco, A. A.

    2014-12-01

    2- and 3-Dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can strongly affect the resulting model. Through the implementation of an interior-point constrained optimization technique, we improve 2-D and 3-D models of Earth structures representing known density contrasts, mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and to gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by eliminating unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible) through the reduction of the solution space.
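
    The sketch below illustrates, on synthetic stand-ins, a bound-constrained linearized gravity inversion solved with SciPy's 'trust-constr' solver (a trust-region interior-point style method). The forward operator, data, damping and density-contrast bounds are all assumed values, not those of the Rio Grande Rift or Cooper Flat Mine data sets.

        # Sketch: damped linear inversion with bounds on the density contrast,
        # solved with SciPy's interior-point style 'trust-constr' method.
        import numpy as np
        from scipy.optimize import minimize, Bounds

        rng = np.random.default_rng(3)
        G = rng.normal(size=(40, 25))                   # forward operator (stations x cells), synthetic
        m_true = np.clip(rng.normal(scale=0.2, size=25), 0.0, 0.4)
        d = G @ m_true + 0.01 * rng.normal(size=40)     # synthetic gravity data
        alpha = 0.1                                     # Tikhonov damping

        def misfit(m):
            r = G @ m - d
            return 0.5 * float(r @ r) + 0.5 * alpha * float(m @ m)

        def grad(m):
            return G.T @ (G @ m - d) + alpha * m

        bounds = Bounds(0.0, 0.4)                       # admissible density contrast (assumed units)
        res = minimize(misfit, x0=np.full(25, 0.2), jac=grad,
                       method="trust-constr", bounds=bounds)
        print("converged:", res.success, " max density contrast:", round(res.x.max(), 3))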

  1. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Ned; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
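
    The following is a minimal sketch of the tornado-diagram screening step mentioned above: each logic-tree branch is swung between alternative values while the others stay at a baseline, and branches are ranked by the resulting swing of the loss metric. The branch names, values and loss function are placeholders, not UCERF3-TD quantities.

        # Minimal sketch of tornado-diagram screening: swing one branch at a
        # time with the others fixed at baseline, rank by output swing.
        baseline = {"scaling_relation": 1.0, "max_mag": 8.0, "deformation_model": 0.5}
        alternatives = {"scaling_relation": (0.8, 1.2),
                        "max_mag": (7.6, 8.4),
                        "deformation_model": (0.3, 0.7)}

        def portfolio_loss(branches):                   # placeholder metric, not UCERF3-TD
            return (100.0 * branches["scaling_relation"]
                    + 5.0 * branches["max_mag"]
                    + 20.0 * branches["deformation_model"])

        swings = {}
        for name, (lo, hi) in alternatives.items():
            losses = [portfolio_loss(dict(baseline, **{name: v})) for v in (lo, hi)]
            swings[name] = max(losses) - min(losses)

        for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
            print(f"{name:20s} swing = {swing:.1f}")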

  2. Transgastric endoscopic gastrojejunostomy using holing followed by interrupted suture technique in a porcine model

    Institute of Scientific and Technical Information of China (English)

    Su-Yu Chen; Hong Shi; Sheng-Jun Jiang; Yong-Guang Wang; Kai Lin; Zhao-Fei Xie; Xiao-Jing Liu

    2015-01-01

    AIM: To demonstrate the feasibility and reproducibility of a pure natural orifice transluminal endoscopic surgery (NOTES) gastrojejunostomy using holing followed by an interrupted suture technique using a single endoloop matched with a pair of clips in a non-survival porcine model. METHODS: NOTES gastrojejunostomy was performed on three female domestic pigs as follows: gastrostomy, selection and retrieval of a free-floating loop of the small bowel into the stomach pouch, holding and exposure of the loop in the gastric cavity using a submucosal inflation technique, execution of a gastro-jejunal mucosal-seromuscular layer approximation using holing followed by the interrupted suture technique with endoloop/clips, and full-thickness incision of the loop with a Dual knife. RESULTS: Pure NOTES side-to-side gastrojejunostomy was successfully performed in all three animals. No leakage was identified via methylene blue evaluation following surgery. CONCLUSION: This novel technique for performing a gastrointestinal anastomosis exclusively by NOTES is technically feasible and reproducible in an animal model but warrants further improvement.

  3. A modeling technique for active control design studies with application to spacecraft microvibrations.

    Science.gov (United States)

    Aglietti, G S; Gabriel, S B; Langley, R S; Rogers, E

    1997-10-01

    Microvibrations, at frequencies between 1 and 1000 Hz, generated by on board equipment, can propagate throughout a spacecraft structure and affect the performance of sensitive payloads. To investigate strategies to reduce these dynamic disturbances by means of active control systems, realistic yet simple structural models are necessary to represent the dynamics of the electromechanical system. In this paper a modeling technique which meets this requirement is presented, and the resulting mathematical model is used to develop some initial results on active control strategies. Attention is focused on a mass loaded panel subjected to point excitation sources, the objective being to minimize the displacement at an arbitrary output location. Piezoelectric patches acting as sensors and actuators are employed. The equations of motion are derived by using Lagrange's equation with vibration mode shapes as the Ritz functions. The number of sensors/actuators and their location is variable. The set of equations obtained is then transformed into state variables and some initial controller design studies are undertaken. These are based on standard linear systems optimal control theory where the resulting controller is implemented by a state observer. It is demonstrated that the proposed modeling technique is a feasible realistic basis for in-depth controller design/evaluation studies.
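
    As a hedged illustration of the controller design outlined above, the sketch below computes a standard LQR state-feedback gain (via the continuous algebraic Riccati equation) and a Luenberger observer gain for a toy single-mode plant using SciPy. The plant matrices, weights and pole locations are assumptions, not the mass-loaded panel model of the paper.

        # Sketch: LQR state feedback plus a Luenberger observer for a toy
        # single-mode plant; all numbers are illustrative assumptions.
        import numpy as np
        from scipy.linalg import solve_continuous_are
        from scipy.signal import place_poles

        wn, zeta = 2 * np.pi * 50.0, 0.02               # 50 Hz panel mode, light damping (assumed)
        A = np.array([[0.0, 1.0], [-wn ** 2, -2 * zeta * wn]])
        B = np.array([[0.0], [1.0]])                    # normalized piezo actuator influence
        C = np.array([[1.0, 0.0]])                      # displacement measurement

        # LQR gain minimizing the integral of x'Qx + u'Ru
        Q, R = np.diag([wn ** 2, 1.0]), np.array([[1e-3]])
        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)                 # state-feedback gain, u = -K x_hat

        # Luenberger observer gain by pole placement on the dual system
        L = place_poles(A.T, C.T, [-5.0 * wn, -6.0 * wn]).gain_matrix.T
        print("LQR gain:", K, "\nobserver gain:", L.ravel())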

  4. Modeling Results For the ITER Cryogenic Fore Pump. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Pfotenhauer, John M. [University of Wisconsin, Madison, WI (United States); Zhang, Dongsheng [University of Wisconsin, Madison, WI (United States)

    2014-03-31

    A numerical model characterizing the operation of a cryogenic fore-pump (CFP) for ITER has been developed at the University of Wisconsin – Madison during the period from March 15, 2011 through June 30, 2014. The purpose of the ITER-CFP is to separate hydrogen isotopes from helium gas, both making up the exhaust components from the ITER reactor. The model explicitly determines the amount of hydrogen that is captured by the supercritical-helium-cooled pump as a function of the inlet temperature of the supercritical helium, its flow rate, and the inlet conditions of the hydrogen gas flow. Furthermore the model computes the location and amount of hydrogen captured in the pump as a function of time. Throughout the model’s development, and as a calibration check for its results, it has been extensively compared with the measurements of a CFP prototype tested at Oak Ridge National Lab. The results of the model demonstrate that the quantity of captured hydrogen is very sensitive to the inlet temperature of the helium coolant on the outside of the cryopump. Furthermore, the model can be utilized to refine those tests, and suggests methods that could be incorporated in the testing to enhance the usefulness of the measured data.

  5. Assessment of Galileo modal test results for mathematical model verification

    Science.gov (United States)

    Trubert, M.

    1984-01-01

    The modal test program for the Galileo Spacecraft was completed at the Jet Propulsion Laboratory in the summer of 1983. The multiple sine dwell method was used for the baseline test. The Galileo Spacecraft is a rather complex 2433 kg structure made of a central core on which seven major appendages representing 30 percent of the total mass are attached, resulting in a high modal density structure. The test revealed a strong nonlinearity in several major modes. This nonlinearity discovered in the course of the test necessitated running additional tests at the unusually high response levels of up to about 21 g. The high levels of response were required to obtain a model verification valid at the level of loads for which the spacecraft was designed. Because of the high modal density and the nonlinearity, correlation between the dynamic mathematical model and the test results becomes a difficult task. Significant changes in the pre-test analytical model are necessary to establish confidence in the upgraded analytical model used for the final load verification. This verification, using a test verified model, is required by NASA to fly the Galileo Spacecraft on the Shuttle/Centaur launch vehicle in 1986.

  6. Injection-moulded models of major and minor arteries: the variability of model wall thickness owing to casting technique.

    Science.gov (United States)

    O'Brien, T; Morris, L; O'Donnell, M; Walsh, M; McGloughlin, T

    2005-09-01

    Cardiovascular disease of major and minor arteries is a common cause of death in Western society. The wall mechanics and haemodynamics within the arteries are considered to be important factors in the disease formation process. This paper is concerned with the development of an efficient computer-integrated technique to manufacture idealized and realistic models of diseased major and minor arteries from radiological images and to address the issue of model wall thickness variability. Variations in wall thickness from the original computer models to the final castings are quantified using a CCD camera. The results showed that wall thickness variations of the idealized major and minor artery models from the design specification were insignificant, up to a maximum of 16 per cent. In realistic models, however, differences were up to 23 per cent in the major arterial models and 58 per cent in the minor arterial models, but the wall thickness variability remained within the limits of previously reported wall thickness results. It is concluded that the described injection moulding procedure yields idealized and realistic castings suitable for use in experimental investigations, with idealized models giving better agreement with design. Wall thickness is variable and should be assessed after the models are manufactured.

  7. A MODEL FOR OVERLAPPING TRIGRAM TECHNIQUE FOR TELUGU SCRIPT

    Directory of Open Access Journals (Sweden)

    B.Vishnu Vardhan

    2007-09-01

    Full Text Available N-grams are consecutive overlapping N-character sequences formed from an input stream. N-grams are used as alternatives to word-based retrieval in a number of systems. In this paper we propose a model applicable to the categorization of Telugu documents. Telugu, derived from the ancient Brahmi script, is the official language of the state of Andhra Pradesh. Brahmi-based scripts are noted for complex conjunct formations. The canonical structure is described as ((C)C)CV: any character evolves from a set of basic syllables known as vowels and consonants, where the consonant-vowel (CV) core is the basic unit, optionally preceded by one or two consonants. A huge set of characters, each pairing a phonetic value with an equivalent character shape, is derived from this canonical structure. Words formed from this set have evolved into a large corpus, and stringent grammar rules for word formation are part of this corpus. Certain word combinations, in which the last character of the first word and the first character of the successive word are combined, result in the formation of a single word and also have to be addressed. Keeping these complexities in view, we propose a trigram-based system that provides a reasonable alternative to a word-based system for document categorization in Telugu.
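
    A minimal sketch of overlapping character trigram profiles and a cosine-similarity comparison, as an illustration of the general approach described above; the sample strings are placeholders (any Unicode text, including Telugu script, can be passed), and the profile-matching rule is a generic one rather than the categorization scheme of the paper.

        # Minimal sketch: overlapping character trigram profiles compared by
        # cosine similarity; sample strings are placeholders.
        from collections import Counter
        import math

        def trigrams(text, n=3):
            text = " " + text.strip() + " "          # pad so word edges form n-grams
            return Counter(text[i:i + n] for i in range(len(text) - n + 1))

        def cosine(p, q):
            common = set(p) & set(q)
            num = sum(p[g] * q[g] for g in common)
            den = (math.sqrt(sum(v * v for v in p.values()))
                   * math.sqrt(sum(v * v for v in q.values())))
            return num / den if den else 0.0

        category_profile = trigrams("sample category document text")
        new_document = trigrams("another sample document to categorize")
        print("similarity:", round(cosine(category_profile, new_document), 3))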

  8. [Conservative surgery for supraglottic carcinoma. Surgical technique. Oncologic and functional results].

    Science.gov (United States)

    Vega, S F; Scola, B; Vega, M F; Martinez, T; Scola, E

    1996-08-01

    We present the results of a retrospective study of 817 patients treated with conservative surgery for carcinomas of the supraglottic larynx at the ENT department of the Gregorio Marañón Hospital between 1962 and 1993. The disease was staged using the criteria set forth in 1988 by the AJCC; 36.2% were stages III and IV. Of the 817 patients treated with conservative surgery, 230 were extended supraglottic laryngectomies. Our theoretical treatment protocol is presented. The 5-year actuarial uncorrected survival rate by stage was 83.9%, 83.2%, 78.5% and 55.3% for stages I, II, III and IV respectively. Local-regional failure occurred in 32.9% of patients overall, and the most common site of local-regional failure was the cervical nodes. The 5-year local control rate by stage was 86.97%, 89.1%, 82.15% and 66.55% for stages I, II, III and IV respectively. In extended supraglottic laryngectomies the 5-year uncorrected survival rate was 62.6% in supraglottic laryngectomies (SL) extended to the base of the tongue, 62.5% in SL extended to the hypopharynx, 72.5% in SL extended to the arytenoid and 79.4% in SL extended to the vocal cord. The 5-year local control rate was 87% in SL extended to the base of the tongue, 85.7% in SL extended to the hypopharynx, 97% in SL extended to the arytenoid and 90.8% in SL extended to the vocal cord. Functional results were evaluated according to a three-grade scale. Good and fair results were obtained in 97.6% of cases for swallowing, 90% for respiration and 95.8% for quality of voice.

  9. Changes in Selected Biochemical Indices Resulting from Various Pre-sampling Handling Techniques in Broilers

    Directory of Open Access Journals (Sweden)

    Chloupek Petr

    2011-05-01

    Full Text Available Abstract Background Since it is not yet clear whether it is possible to satisfactorily avoid sampling-induced stress interference in poultry, more studies on the pattern of physiological response and detailed quantification of the stress connected with the first few minutes of capture and pre-sampling handling in poultry are required. This study focused on detection of changes in the corticosterone level and concentrations of other selected biochemical parameters in broilers handled in two different manners during blood sampling (involving catching, carrying, restraint, and blood collection itself) that lasted for various time periods within the interval of 30-180 seconds. Methods Stress effects of pre-sampling handling were studied in a group (n = 144) of unsexed ROSS 308 broiler chickens aged 42 d. Handling (catching, carrying, restraint, and blood sampling itself) was carried out in a gentle (caught, held and carried carefully in an upright position) or rough (caught by the leg, held and carried with lack of care in an inverted position) manner and lasted for 30 s, 60 s, 90 s, 120 s, 150 s, and 180 s. Plasma corticosterone, albumin, glucose, cholesterol, lactate, triglycerides and total protein were measured in order to assess the stress-induced changes in these biochemical indices following handling in the first few minutes of capture. Results Pre-sampling handling in a rough manner resulted in considerably higher plasma concentrations of all biochemical indices monitored when compared with gentle handling. Concentrations of plasma corticosterone after 150 and 180 s of handling were considerably higher (P Conclusions These results indicate that the pre-sampling procedure may be a considerably stressful procedure for broilers, particularly when carried out with lack of care and exceeding 120 seconds.

  10. Multidetector computed tomography of urolithiasis. Technique and results; Multidetektor-Computertomografie der Urolithiasis. Technik und Ergebnisse

    Energy Technology Data Exchange (ETDEWEB)

    Karul, M.; Regier, M. [Universitaetsklinikum Hamburg-Eppendorf, Hamburg (Germany). Zentrum fuer Radiologie und Endoskopie; Heuer, R. [Universitaetsklinikum Hamburg-Eppendorf, Hamburg (Germany). Zentrum fuer Operative Medizin

    2013-02-15

    The diagnosis of acute urolithiasis is made by unenhanced multidetector computed tomography (MDCT). This examination assesses the functional and anatomical likelihood of passage of a ureteral calculus, whose localization and dimensions are important parameters for further therapy. Alternatively, chronic urolithiasis can be ruled out by magnetic resonance urography (MRU). MRU is the first choice especially in pregnant women and children for reasons of radiation protection. Enhanced MDCT must be emphasized as an alternative to intravenous urography (IVU) for the diagnosis of complex urinary drainage and suspected disorders of the involved kidney. This review illustrates the principles of the different tests and their clinical relevance. (orig.)

  11. Modeling vertical loads in pools resulting from fluid injection. [BWR

    Energy Technology Data Exchange (ETDEWEB)

    Lai, W.; McCauley, E.W.

    1978-06-15

    Table-top model experiments were performed to investigate pressure suppression pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peachbottom Mark I boiling water reactor containment system. The results guided subsequent conduct of experiments in the 1/5-scale facility and provided new insight into the vertical load function (VLF). Model experiments show an oscillatory VLF with the download typically double-spiked followed by a more gradual sinusoidal upload. The load function contains a high frequency oscillation superimposed on a low frequency one; evidence from measurements indicates that the oscillations are initiated by fluid dynamics phenomena.

  12. [Distraction osteogenesis in the midface. Indications, technique and first long-term results].

    Science.gov (United States)

    Kessler, Peter; Kloss, Frank; Hirschfelder, Ursula; Neukam, Friedrich Wilhelm; Wiltfang, Jörg

    2003-01-01

    Since the beginning of 1998, eleven patients have been treated by osteodistraction to correct hypoplasia of the maxilla and midface of various origins. Among them were six patients who were treated by high LeFort I osteotomies and insertion of subcutaneous intraoral distraction devices in the malar region. In the remaining five patients extraoral distraction devices were applied after LeFort I, II and III osteotomies. Distraction osteogenesis was successful in all cases, resulting in a mean sagittal bone gain measured parallel to the skull base of 9.5 mm (range 4.5-12.0) in the group treated with intraoral distractors and a mean of 19.4 mm in the extraoral distraction group (range 15.0-25.0). All patients were kept under orthodontic supervision before, during, and after osteodistraction. In eight patients long-term cephalometric and clinical evaluation after a mean follow-up period of 24 months in the intraoral distraction group (range 22-26) and 12 months in the extraoral distraction group (range 10-14) show stable results concerning the skeletal and dental relations. Long-term follow-up is necessary.

  13. Comparison of the results of MIS-TLIF and open TLIF techniques in laborers

    Directory of Open Access Journals (Sweden)

    Daniel De Abreu Oliveira

    2014-01-01

    Full Text Available Objective: To compare clinical outcomes in laborers who have undergone open transforaminal interbody fusion (TLIF) and minimally invasive transforaminal interbody fusion (MIS TLIF). Methods: 78 patients underwent lumbar arthrodesis performed by the same two partner spine surgeons from January 2008 to December 2012. Forty-one underwent traditional open arthrodesis and 37 the minimally invasive procedure. Three patients were not included because they had already retired from work. The analyzed variables were length of hospitalization, length of follow-up, type of access (TLIF or MIS TLIF), need for blood transfusion, percentage of improvement or worsening after surgery, pre- and postoperative VAS scale, time off work, pre- and postoperative Oswestry disability index, and general characteristics of the laborers such as age, education, profession, working time, amount of daily weight carried at work, and use or not of personal protective equipment. Results: Time off work was longer in the TLIF group (average of 9.84 months) compared with the MIS TLIF group (average of 3.20 months). Significant improvement in postoperative VAS and Oswestry scores was achieved in both groups. Average length of hospitalization was 5.73 days for the TLIF group and 2.76 days for the MIS TLIF group. Conclusions: Minimally invasive transforaminal lumbar interbody fusion presents similar results when compared to open TLIF, but has the benefits of less postoperative morbidity, shorter hospitalization times, and faster rehabilitation in laborer patients.

  14. GLOBAL CONVERGENCE RESULTS OF A THREE TERM MEMORY GRADIENT METHOD WITH A NON-MONOTONE LINE SEARCH TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    Sun Qingying

    2005-01-01

    In this paper, a new class of three-term memory gradient methods with a nonmonotone line search technique for unconstrained optimization is presented. Global convergence properties of the new methods are discussed. By combining the quasi-Newton method with the new method, the former is modified to possess the global convergence property. Numerical results show that the new algorithm is efficient.
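
    As an illustration of the nonmonotone line search idea, the sketch below applies a Grippo-Lampariello-Lucidi-type acceptance rule (comparing against the maximum of the last few function values) inside a plain gradient iteration; the three-term memory gradient direction of the paper is replaced by steepest descent purely for brevity, and the test function and constants are arbitrary.

        # Sketch: nonmonotone (GLL-type) Armijo backtracking in a gradient
        # iteration; test function, memory length and constants are arbitrary.
        import numpy as np

        def f(x):  return 0.5 * x @ x + 0.25 * np.sum(x ** 4)
        def gf(x): return x + x ** 3

        x = np.array([3.0, -2.0])
        history, M, sigma, beta = [f(x)], 5, 1e-4, 0.5
        for it in range(100):
            g = gf(x)
            if np.linalg.norm(g) < 1e-8:
                break
            d = -g                                   # search direction (steepest descent here)
            fmax = max(history[-M:])                 # nonmonotone reference value
            t = 1.0
            while f(x + t * d) > fmax + sigma * t * (g @ d):
                t *= beta                            # backtrack
            x = x + t * d
            history.append(f(x))
        print("iterations:", it, " f(x*):", f(x))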

  15. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition with four additional chapters presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  16. A vortex model for Darrieus turbine using finite element techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ponta, Fernando L. [Universidad de Buenos Aires, Dept. de Electrotecnia, Grupo ISEP, Buenos Aires (Argentina); Jacovkis, Pablo M. [Universidad de Buenos Aires, Dept. de Computacion and Inst. de Calculo, Buenos Aires (Argentina)

    2001-09-01

    Since 1970 several aerodynamic prediction models have been formulated for the Darrieus turbine. We can identify two families of models: stream-tube and vortex. The former needs much less computation time but the latter is more accurate. The purpose of this paper is to show a new option for modelling the aerodynamic behaviour of Darrieus turbines. The idea is to combine a classic free vortex model with a finite element analysis of the flow in the surroundings of the blades. This avoids some of the remaining deficiencies in classic vortex models. The agreement between analysis and experiment when predicting instantaneous blade forces and near wake flow behind the rotor is better than the one obtained in previous models. (Author)

  17. Angiography of the temporomandibular joint. Description of an experimental technique with initial results.

    Science.gov (United States)

    Takagi, R; Shimoda, T; Westesson, P L; Takahashi, A; Morris, T W; Sano, T; Moses, J J

    1994-10-01

    The vascular supply to the temporomandibular joint is not completely understood. To form a base for advancement in this area we developed a method for experimental angiography of the temporomandibular joint that was applied to fresh temporomandibular joint autopsy specimens. Via the external carotid artery the vessels were infused with a mixture of barium and an acrylic resin. The specimens were sectioned and contact radiographs were obtained. These showed the vascularity of the joint and the surrounding structures with great detail. Most of the vascular supply appears to come from the lateral and medial aspects of the condyle head and from the anterior and posterior disk attachments. The method was applied to both normal and abnormal joints and the results suggest that this method could be used to gather further understanding of the vascularity of the temporomandibular joint relative to disease.

  18. Contact therapy in the region of pharynx and oral cavity: Afterloading technique, irradiation planning, and results

    Energy Technology Data Exchange (ETDEWEB)

    Fritz, P.; Bauer, M.; Fehrentz, D.; Zum Winkel, K.; Weidauer, H.; Singer, R.

    1985-05-01

    The contact irradiation in the region of pharynx and mouth with an afterloading unit is presented. Twelve patients with recurrent carcinomas of the squamous cell epithelium have been treated. A stable and reproducible positioning of the source probes in the tumor region is made possible by special applicator prostheses which are adapted to the post-operative situation. The irradiation scheme is based on the transformation of the source co-ordinates from the stereoradiographic localization system into the co-ordinate system of the computed tomogram. At least three fixed metal points which are inserted in the applicator prostheses and visualized by stereoradiography as well as by computed tomography serve as mutual reference points for both co-ordinate systems. The source positioning in the tumor region is optimized by CT irradiation planning. Three cases are presented in order to describe the principles of the method. Preliminary results are discussed.

  19. Thermodynamic Spectrum of Solar Flares Based on SDO/EVE Observations: Techniques and First Results

    Science.gov (United States)

    Wang, Yuming; Zhou, Zhenjun; Zhang, Jie; Liu, Kai; Liu, Rui; Shen, Chenglong; Chamberlin, Phillip C.

    2016-01-01

    The Solar Dynamics Observatory (SDO)/EUV Variability Experiment (EVE) provides rich information on the thermodynamic processes of solar activities, particularly on solar flares. Here, we develop a method to construct thermodynamic spectrum (TDS) charts based on the EVE spectral lines. This tool could potentially be useful for extreme ultraviolet (EUV) astronomy to learn about the eruptive activities on distant astronomical objects. Through several cases, we illustrate what we can learn from the TDS charts. Furthermore, we apply the TDS method to 74 flares equal to or greater than the M5.0 class, and reach the following statistical results. First, EUV peaks are always behind the soft X-ray (SXR) peaks and stronger flares tend to have faster cooling rates. There is a power-law correlation between the peak delay times and the cooling rates, suggesting a coherent cooling process of flares from SXR to EUV emissions. Second, there are two distinct temperature drift patterns, called Type I and Type II. For Type I flares, the enhanced emission drifts from high to low temperature like a quadrilateral, whereas for Type II flares the drift pattern looks like a triangle. Statistical analysis suggests that Type II flares are more impulsive than Type I flares. Third, for late-phase flares, the peak intensity ratio of the late phase to the main phase is roughly correlated with the flare class, and the flares with a strong late phase are all confined. We believe that the re-deposition of the energy carried by a flux rope, which unsuccessfully erupts out, into thermal emissions is responsible for the strong late phase found in a confined flare. Furthermore, we show the signatures of the flare thermodynamic process in the chromosphere and transition region in the TDS charts. These results provide new clues to advance our understanding of the thermodynamic processes of solar flares and associated solar eruptions, e.g., coronal mass ejections.

  20. Extended endoscopic endonasal transsphenoidal approach for retrochiasmatic craniopharyngioma: Surgical technique and results

    Directory of Open Access Journals (Sweden)

    Suresh K Sankhla

    2015-01-01

    Full Text Available Objective: Surgical treatment of retrochiasmatic craniopharyngioma still remains a challenge. While complete removal of the tumor with preservation of the vital neurovascular structures is often the goal of the treatment, there is no optimal surgical approach available to achieve this goal. Transcranial and transsphenoidal microsurgical approaches, commonly used in the past, have considerable technical limitations. The extended endonasal endoscopic surgical route, obtained by removal of tuberculum sellae and planum sphenoidale, offers direct midline access to the retrochiasmatic space and provides excellent visualization of the undersurface of the optic chiasm. In this report, we describe the technical details of the extended endoscopic approach, and review our results using this approach in the surgical management of retrochiasmatic craniopharyngiomas. Methods: Fifteen children, including 9 girls and 6 boys, aged 8 to 15 years underwent surgery using extended endoscopic transsphenoidal approach between 2008 and 2014. Nine patients had a surgical procedure done previously and presented with recurrence of symptoms and regrowth of their residual tumors. Results: A gross total or near total excision was achieved in 10 (66.7%) patients, subtotal resection in 4 (26.7%), and partial removal in 1 (6.7%) patient. Postoperatively, headache improved in 93.3%, vision recovered in 77.3%, and the hormonal levels stabilised in 66.6%. Three patients (20%) developed postoperative CSF leaks which were managed conservatively. Three (20%) patients with diabetes insipidus and 2 (13.3%) with panhypopituitarism required long-term hormonal replacement therapy. Conclusions: Our early experience suggests that the extended endonasal endoscopic approach is a reasonable option for removal of the retrochiasmal craniopharyngiomas. Compared to other surgical approaches, it provides better opportunities for greater tumor removal and visual improvement without any increase in risks.

  1. Experimental technique of calibration of symmetrical air pollution models

    Indian Academy of Sciences (India)

    P Kumar

    2005-10-01

    Based on the inherent property of symmetry of air pollution models, a Symmetrical Air Pollution Model Index (SAPMI) has been developed to calibrate the accuracy of predictions made by such models, where the initial quantity of release at the source is not known. For exact prediction the value of SAPMI should be equal to 1. If the predicted values are overestimating then SAPMI is > 1, and if they are underestimating then SAPMI is < 1. A specific design for the layout of receptors has been suggested as a requirement for the calibration experiments. SAPMI is applicable to all variations of symmetrical air pollution dispersion models.
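
    The abstract specifies how SAPMI values are read (= 1 exact, > 1 over-prediction, < 1 under-prediction) but not the exact formula, so the short sketch below simply uses the ratio of total predicted to total observed concentration over a symmetric receptor layout as an illustrative stand-in; the receptor values are invented.

    ```python
    import numpy as np

    def symmetry_index(predicted, observed):
        """Illustrative stand-in for SAPMI: ratio of total predicted to total observed concentration."""
        return float(np.sum(predicted) / np.sum(observed))

    def interpret(index, tol=0.05):
        if abs(index - 1.0) <= tol:
            return "prediction close to exact (index ~ 1)"
        return "model over-estimates (index > 1)" if index > 1.0 else "model under-estimates (index < 1)"

    # Hypothetical concentrations at receptors laid out symmetrically about the plume axis.
    predicted = np.array([12.0, 18.5, 22.0, 18.4, 12.1])
    observed = np.array([10.8, 17.9, 20.5, 17.6, 11.0])

    idx = symmetry_index(predicted, observed)
    print(f"index = {idx:.3f} -> {interpret(idx)}")
    ```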

  2. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  3. Assessment of the reliability of reproducing two-dimensional resistivity models using an image processing technique.

    Science.gov (United States)

    Ishola, Kehinde S; Nawawi, Mohd Nm; Abdullah, Khiruddin; Sabri, Ali Idriss Aboubakar; Adiat, Kola Abdulnafiu

    2014-01-01

    This study attempts to combine the results of geophysical images obtained from three commonly used electrode configurations using an image processing technique in order to assess their capability to reproduce two-dimensional (2-D) resistivity models. All the inverse resistivity models were processed using the PCI Geomatica software package, commonly used for remote sensing data sets. Preprocessing of the 2-D inverse models was carried out to facilitate further processing and statistical analyses. Four raster layers were created: three of these layers were used for the input images and the fourth layer was used as the output of the combined images. The data sets were merged using a basic statistical approach. Interpreted results show that all images resolved and reconstructed the essential features of the models. An assessment of the accuracy of the images for the four geologic models was performed using four criteria: the mean absolute error, the mean percentage absolute error, the resistivity values of the reconstructed blocks, and their displacements from the true models. Generally, the blocks of the images from the maximum-value approach give the smallest estimated errors. Also, the displacement of the reconstructed blocks from the true blocks is the smallest, and the reconstructed resistivities of the blocks are closer to the true blocks than with any other combination used. Thus, it is corroborated that when inverse resistivity models are combined, more reliable and detailed information about the geologic models is obtained than by using individual data sets.
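
    The sketch below mimics the kind of pixel-wise combination and error assessment described above: three synthetic 2-D inverse-resistivity sections (standing in for the three electrode configurations) are merged with simple mean and maximum statistics and compared with the true model using the mean absolute error and mean percentage absolute error. All arrays and noise levels are made up; the study itself used PCI Geomatica on real inversion results.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical true 2-D resistivity model: 100 ohm-m background with a 500 ohm-m block.
    true_model = np.full((20, 40), 100.0)
    true_model[8:14, 15:25] = 500.0

    # Stand-ins for inverse models from three electrode configurations (different noise levels).
    inversions = [true_model + rng.normal(0.0, s, true_model.shape) for s in (30.0, 45.0, 60.0)]

    combined_mean = np.mean(inversions, axis=0)   # average of the three images
    combined_max = np.max(inversions, axis=0)     # maximum-value combination

    def mae(model, reference):
        """Mean absolute error."""
        return np.mean(np.abs(model - reference))

    def mpae(model, reference):
        """Mean percentage absolute error."""
        return 100.0 * np.mean(np.abs(model - reference) / reference)

    for name, img in [("mean", combined_mean), ("max", combined_max)]:
        print(f"{name}: MAE = {mae(img, true_model):.1f} ohm-m, MPAE = {mpae(img, true_model):.1f} %")
    ```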

  4. A Comparative Modelling Study of PWM Control Techniques for Multilevel Cascaded Inverter

    Directory of Open Access Journals (Sweden)

    A. TAHRI

    2005-01-01

    Full Text Available The use of multilevel converters has been increasing over the last decade. These new types of converters are suitable for high-voltage and high-power applications due to their ability to synthesize waveforms with a better harmonic spectrum. Numerous topologies have been introduced and widely studied for utility and drive applications. Amongst these topologies, the multilevel cascaded inverter was introduced in Static Var compensation and drive systems. This paper investigates several control techniques applied to the multilevel cascaded inverter in order to ensure efficient voltage utilization and a better harmonic spectrum. A modelling and control strategy for a single-phase multilevel cascaded inverter is also investigated. Computer simulation results using Matlab are reported and discussed, together with a comparative study of the different control techniques for the multilevel cascaded inverter. Moreover, experimental results are obtained on a scaled-down prototype to prove the effectiveness of the proposed analysis.
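
    As a concrete illustration of one control technique of the kind compared in the paper, the sketch below implements phase-shifted carrier sinusoidal PWM for a single-phase cascaded H-bridge with three cells and estimates the distortion of the synthesized waveform. The cell count, carrier frequency and DC-link voltage are arbitrary example values, not those of the paper or its prototype.

    ```python
    import numpy as np

    def triangle(t, f_carrier, phase):
        """Unit triangular carrier in [0, 1]."""
        x = t * f_carrier + phase
        return 2.0 * np.abs(x - np.floor(x + 0.5))

    def cascaded_pwm(n_cells=3, m_index=0.9, f_ref=50.0, f_carrier=1050.0,
                     v_dc=100.0, t_end=0.04, n_samples=40000):
        t = np.linspace(0.0, t_end, n_samples, endpoint=False)
        ref = m_index * np.sin(2.0 * np.pi * f_ref * t)
        v_out = np.zeros_like(t)
        for i in range(n_cells):
            carrier = triangle(t, f_carrier, i / (2.0 * n_cells))   # phase-shifted carriers
            cell = np.where(ref > carrier, v_dc, 0.0) + np.where(ref < -carrier, -v_dc, 0.0)
            v_out += cell                       # series-connected H-bridge outputs add up
        return t, v_out

    def thd(signal, f_ref, t_end):
        """Rough total harmonic distortion relative to the fundamental at f_ref."""
        spec = np.abs(np.fft.rfft(signal)) / len(signal)
        freqs = np.fft.rfftfreq(len(signal), d=t_end / len(signal))
        fund = spec[np.argmin(np.abs(freqs - f_ref))]
        harm = np.sqrt(np.sum(spec[freqs > 1.5 * f_ref] ** 2))
        return 100.0 * harm / fund

    t, v = cascaded_pwm()
    print(f"levels in output: {len(np.unique(v))}, THD ~ {thd(v, 50.0, 0.04):.1f} %")
    ```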

  5. Early Results Show Reduced Infection Rate Using No-touch Technique for Expander/ADM Breast Reconstruction

    Directory of Open Access Journals (Sweden)

    Henry B. Wilson, MD, FACS

    2015-03-01

    Full Text Available Summary: Infection is a common complication of immediate breast reconstruction that often leads to device removal, a result emotionally devastating to the patient and frustrating for her surgeon. “No-touch” techniques have been used in other surgical disciplines and plastic surgery, but they have not been reported for breast reconstruction with tissue expanders or implants and acellular dermis. We report a novel technique of tissue expander and acellular dermis placement using no-touch principles with a self-retaining retractor system that holds promise to decrease infectious complications of breast reconstruction.

  6. DBSolve Optimum: a software package for kinetic modeling which allows dynamic visualization of simulation results

    Directory of Open Access Journals (Sweden)

    Gizzatkulov Nail M

    2010-08-01

    Full Text Available Abstract Background Systems biology research and applications require the creation, validation, extensive usage of mathematical models and visualization of simulation results by end-users. Our goal is to develop a novel method for visualization of simulation results and implement it in a simulation software package equipped with sophisticated mathematical and computational techniques for model development, verification and parameter fitting. Results We present the mathematical simulation workbench DBSolve Optimum, which is a significantly improved and extended successor of the well-known simulation software DBSolve5. The concept of "dynamic visualization" of simulation results has been developed and implemented in DBSolve Optimum. In the framework of this concept, graphical objects representing metabolite concentrations and reactions change their volume and shape in accordance with the simulation results. This technique is applied to visualize both the kinetic response of the model and the dependence of its steady state on a parameter. The use of dynamic visualization is illustrated with a kinetic model of the Krebs cycle. Conclusion DBSolve Optimum is a user-friendly simulation software package that simplifies the construction, verification, analysis and visualization of kinetic models. The dynamic visualization tool implemented in the software allows the user to animate simulation results and, thereby, present them in a more comprehensible mode. DBSolve Optimum and the built-in dynamic visualization module are free for both academic and commercial use. It can be downloaded directly from http://www.insysbio.ru.

  7. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this effort is to test and assess the model's behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One of the purposes of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.

  8. Seed dispersal into wetlands: Techniques and results for a restored tidal freshwater marsh

    Science.gov (United States)

    Neff, K.P.; Baldwin, A.H.

    2005-01-01

    Although seed dispersal is assumed to be a major factor determining plant community development in restored wetlands, little research exists on density and species richness of seed available through dispersal in these systems. We measured composition and seed dispersal rates at a restored tidal freshwater marsh in Washington, DC, USA by collecting seed dispersing through water and wind. Seed dispersal by water was measured using two methods of seed collection: (1) stationary traps composed of coconut fiber mat along an elevation gradient bracketing the tidal range and (2) a floating surface trawl net attached to a boat. To estimate wind dispersal rates, we collected seed from stationary traps composed of coconut fiber mat positioned above marsh vegetation. We also collected a small number of samples of debris deposited along high tide lines (drift lines) and feces of Canada Goose to explore their seed content. We used the seedling emergence method to determine seed density in all samples, which involved placing the fiber mats or sample material on top of potting soil in a greenhouse misting room and enumerating emerging seedlings. Seedlings from a total of 125 plant species emerged during this study (including 82 in river trawls, 89 in stationary water traps, 21 in drift lines, 39 in wind traps, and 10 in goose feces). The most abundant taxa included Bidens frondosa, Boehmeria cylindrica, Cyperus spp., Eclipta prostrata, and Ludwigia palustris. Total seedling density was significantly greater for the stationary water traps (212 ± 30.6 seeds/m2/month) than the equal-sized stationary wind traps (18 ± 6.0 seeds/m2/month). Lower-bound estimates of total species richness based on the non-parametric Chao 2 asymptotic estimators were greater for seeds in water (106 ± 1.4 for stationary water traps and 104 ± 5.5 for trawl samples) than for wind (54 ± 6.4). Our results indicate that water is the primary source of seeds dispersing to the site and that a species-rich pool
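
    The Chao 2 estimator mentioned above is incidence-based: it extrapolates total species richness from the numbers of species found in exactly one and exactly two samples. The sketch below implements one common bias-corrected form of the estimator on a small made-up trap-by-species incidence matrix; the real study applied it to the seed-trap data, not to these numbers.

    ```python
    import numpy as np

    def chao2(incidence):
        """Bias-corrected Chao 2 richness estimate from a samples-by-species 0/1 matrix."""
        incidence = np.asarray(incidence, dtype=bool)
        m = incidence.shape[0]                 # number of samples (traps)
        counts = incidence.sum(axis=0)         # number of traps each species occurs in
        s_obs = int((counts > 0).sum())        # observed species richness
        q1 = int((counts == 1).sum())          # species found in exactly one trap
        q2 = int((counts == 2).sum())          # species found in exactly two traps
        return s_obs + ((m - 1) / m) * q1 * (q1 - 1) / (2.0 * (q2 + 1))

    # Hypothetical incidence matrix: 6 traps (rows) x 8 species (columns).
    traps = np.array([
        [1, 1, 0, 0, 1, 0, 0, 0],
        [1, 0, 1, 0, 0, 0, 0, 0],
        [1, 1, 0, 1, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 1, 0, 0],
        [1, 0, 0, 0, 0, 0, 1, 0],
        [1, 1, 1, 0, 0, 0, 0, 1],
    ])
    print(f"observed species: {int((traps.sum(axis=0) > 0).sum())}, Chao 2 estimate: {chao2(traps):.1f}")
    ```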

  9. Triple labrum tears repaired with the JuggerKnot™ soft anchor: Technique and results

    Directory of Open Access Journals (Sweden)

    Vivek Agrawal

    2015-01-01

    Full Text Available Purpose: The 2-year outcomes of patients undergoing repair of triple labrum tears using an all-suture anchor device were assessed. Materials and Methods: Eighteen patients (17 male, one female; mean age 36.4 years, range: 14.2-62.3 years) with triple labrum tears underwent arthroscopic repair using the 1.4 mm JuggerKnot Soft Anchor (mean number of anchors 11.5, range: 9-19 anchors). Five patients had prior surgeries performed on their operative shoulder. Patients were followed for a mean of 2.0 years (range: 1.6-3.0 years). Constant-Murley shoulder score (CS) and Flexilevel scale of shoulder function (FLEX-SF) scores were measured, with preoperative and final postoperative mean scores compared with a paired Student's t-test (P < 0.05). Magnetic resonance imaging (MRI) was also performed at the final postoperative follow-up. Results: Overall total CS and FLEX-SF scores increased from 52.9 ± 20.4 to 84.3 ± 10.7 (P < 0.0001) and from 29.3 ± 4.7 to 42.0 ± 7.3 (P < 0.0001), respectively. When divided into two groups by whether or not glenohumeral arthrosis was present at the time of surgery (n = 9 each group), significant improvements in CS and FLEX-SF were obtained for both groups (P < 0.0015). There were no intraoperative complications. All patients, including contact athletes, returned to their preinjury level of sports activity and were satisfied. MRI evaluation revealed no instances of subchondral cyst formation or tunnel expansion. Anchor tracts appeared to heal with fibrous tissue, complete bony healing, or combined fibro-osseous healing. Conclusion: Our results are encouraging, demonstrating a consistent healing of the anchor tunnels through arthroscopic treatment of complex labrum lesions with a completely suture-based implant. It further demonstrates a meaningful improvement in patient outcomes, a predictable return to activity, and a high rate of patient satisfaction. Level of Evidence: Level IV case series.

  10. THERMODYNAMIC SPECTRUM OF SOLAR FLARES BASED ON SDO/EVE OBSERVATIONS: TECHNIQUES AND FIRST RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yuming; Zhou, Zhenjun; Liu, Kai; Liu, Rui; Shen, Chenglong [CAS Key Laboratory of Geospace Environment, Department of Geophysics and Planetary Sciences, University of Science and Technology of China, Hefei, Anhui 230026 (China); Zhang, Jie [School of Physics, Astronomy and Computational Sciences, George Mason University, 4400 University Drive, MSN 6A2, Fairfax, VA 22030 (United States); Chamberlin, Phillip C., E-mail: ymwang@ustc.edu.cn [Solar Physics Laboratory, Heliophysics Division, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2016-03-15

    The Solar Dynamics Observatory (SDO)/EUV Variability Experiment (EVE) provides rich information on the thermodynamic processes of solar activities, particularly on solar flares. Here, we develop a method to construct thermodynamic spectrum (TDS) charts based on the EVE spectral lines. This tool could potentially be useful for extreme ultraviolet (EUV) astronomy to learn about the eruptive activities on distant astronomical objects. Through several cases, we illustrate what we can learn from the TDS charts. Furthermore, we apply the TDS method to 74 flares equal to or greater than the M5.0 class, and reach the following statistical results. First, EUV peaks are always behind the soft X-ray (SXR) peaks and stronger flares tend to have faster cooling rates. There is a power-law correlation between the peak delay times and the cooling rates, suggesting a coherent cooling process of flares from SXR to EUV emissions. Second, there are two distinct temperature drift patterns, called Type I and Type II. For Type I flares, the enhanced emission drifts from high to low temperature like a quadrilateral, whereas for Type II flares the drift pattern looks like a triangle. Statistical analysis suggests that Type II flares are more impulsive than Type I flares. Third, for late-phase flares, the peak intensity ratio of the late phase to the main phase is roughly correlated with the flare class, and the flares with a strong late phase are all confined. We believe that the re-deposition of the energy carried by a flux rope, which unsuccessfully erupts out, into thermal emissions is responsible for the strong late phase found in a confined flare. Furthermore, we show the signatures of the flare thermodynamic process in the chromosphere and transition region in the TDS charts. These results provide new clues to advance our understanding of the thermodynamic processes of solar flares and associated solar eruptions, e.g., coronal mass ejections.

  11. Alternative decision modelling techniques for the evaluation of health care technologies: Markov processes versus discrete event simulation.

    Science.gov (United States)

    Karnon, Jonathan

    2003-10-01

    Markov models have traditionally been used to evaluate the cost-effectiveness of competing health care technologies that require the description of patient pathways over extended time horizons. Discrete event simulation (DES) is a more flexible, but more complicated decision modelling technique, that can also be used to model extended time horizons. Through the application of a Markov process and a DES model to an economic evaluation comparing alternative adjuvant therapies for early breast cancer, this paper compares the respective processes and outputs of these alternative modelling techniques. DES displays increased flexibility in two broad areas, though the outputs from the two modelling techniques were similar. These results indicate that the use of DES may be beneficial only when the available data demonstrates particular characteristics.
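
    For readers unfamiliar with the Markov side of the comparison, the sketch below shows a minimal three-state Markov cohort model of the kind contrasted with DES in the paper: a cohort is propagated through annual cycles while discounted costs and QALYs accumulate, and an incremental cost-effectiveness ratio is computed for two hypothetical adjuvant therapies. Every transition probability, cost, utility and the extra drug cost are invented for illustration only.

    ```python
    import numpy as np

    def run_markov(transition, costs, utilities, cycles=20, discount=0.035):
        """Propagate a cohort and return total discounted cost and QALYs per patient."""
        state = np.array([1.0, 0.0, 0.0])       # everyone starts disease-free
        total_cost, total_qaly = 0.0, 0.0
        for cycle in range(cycles):
            disc = 1.0 / (1.0 + discount) ** cycle
            total_cost += disc * state.dot(costs)
            total_qaly += disc * state.dot(utilities)
            state = state.dot(transition)        # one annual cycle
        return total_cost, total_qaly

    # States: disease-free, recurrence, dead (costs and utilities per year in each state).
    costs = np.array([500.0, 8000.0, 0.0])
    utilities = np.array([0.85, 0.60, 0.0])

    # Hypothetical annual transition matrices for two adjuvant therapies.
    therapy_a = np.array([[0.88, 0.09, 0.03],
                          [0.00, 0.75, 0.25],
                          [0.00, 0.00, 1.00]])
    therapy_b = np.array([[0.92, 0.05, 0.03],
                          [0.00, 0.75, 0.25],
                          [0.00, 0.00, 1.00]])

    cost_a, qaly_a = run_markov(therapy_a, costs, utilities)
    cost_b, qaly_b = run_markov(therapy_b, costs, utilities)
    extra_cost_b = 2000.0 * 5                    # assumed additional drug cost for therapy B
    icer = (cost_b + extra_cost_b - cost_a) / (qaly_b - qaly_a)
    print(f"ICER of therapy B vs A: {icer:.0f} per QALY gained")
    ```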

  12. Thermodynamic Spectrum of Solar Flares Based on SDO/EVE Observations: Techniques and First Results

    CERN Document Server

    Wang, Yuming; Zhang, Jie; Liu, Kai; Liu, Rui; Shen, Chenglong; Chamberlin, Phillip C

    2015-01-01

    SDO/EVE provides rich information on the thermodynamic processes of solar activities, particularly of solar flares. Here, we develop a method to construct thermodynamic spectrum (TDS) charts based on the EVE spectral lines. Reading from the charts, we are able to easily recognize whether there is a late phase following the main phase of a flare, and to learn the onset, peak and end times of the flare as well as the drift of the temperature, i.e., the cooling rate, of the heated plasma during the flare. Through four M-class flares of different types, we illustrate which thermodynamic information can be revealed from the TDS charts. Further, we investigate the TDS charts of all the flares greater than M5.0, and some interesting results are achieved. First, there are two distinct drift patterns, called Type I and Type II. For Type I flares, the enhanced emission drifts from high to low temperature, whereas for Type II flares, the drift is somewhat reversed, suggesting a more violent and durable heating during Type I...

  13. Ultrasound-guided endocavitary drainage of pelvic abscesses: Technique, results and complications

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, R.S.; McGrath, F P.; Haslam, P.J.; Varghese, J.C.; Lee, M.J

    2003-01-01

    AIM: To evaluate the experience in our institution with ultrasound-guided transrectal and transvaginal (endocavitary) drainage of pelvic abscesses. MATERIALS AND METHODS: Eighteen patients (four male, 14 female; mean age 55 years, range 30-78 years) presenting with pelvic abscesses were referred to our institution for therapeutic drainage over a 4-year period. Patients received broad-spectrum antibiotics prior to drainage, which was performed by either the transvaginal or transrectal route under ultrasound guidance. Patients were given sedo-analgesia in the form of midazolam and fentanyl and local anaesthesia was also employed. 8-French catheters were inserted into the abscess cavities, and patients were subsequently monitored on a daily basis by a member of the interventional radiology team until such time as it was deemed appropriate to remove the catheter. RESULTS: Eighteen catheters were placed in 17 patients, and transvaginal aspiration alone was performed in one patient. Drainage was successful in 16 of 17 patients, but a transgluteal approach was ultimately required in the remaining patient to enable passage of a larger catheter into an infected haematoma. The mean duration of drainage was 5 days, mean time to defervesce 2 days. Spontaneous catheter dislodgement occurred in four patients associated with straining, but this did not have any adverse effect in three of the four patients. CONCLUSION: Endocavitary drainage is an effective method of treatment for pelvic abscesses. Spontaneous catheter dislodgement does not affect patient outcome.

  14. Medial release and lateral imbrication for intractable anterior knee pain: diagnostic process, technique, and results

    Directory of Open Access Journals (Sweden)

    Meldrum AR

    2015-01-01

    Full Text Available Alexander R Meldrum,1 Jeremy R Reed,2 Megan D Dash3 1Department of Surgery, Section of Orthopedic Surgery, University of Calgary, Calgary, AB, Canada; 2Department of Surgery, University of Saskatchewan College of Medicine, Regina, SK, Canada; 3Department of Family Medicine, College of Medicine, University of Saskatchewan, Regina, SK, Canada Purpose: To present two cases of intractable patellofemoral pain syndrome treated with a novel procedure: arthroscopic medial release and lateral imbrication of the patellar retinaculum. Patients and methods: This case series presents the treatment of three knees in two patients (one bilateral) in whom an all-inside arthroscopic medial release and lateral imbrication of the patellar retinaculum was performed. Subjective measurement of pain was the primary outcome measurement, and subjective patellofemoral instability was the secondary outcome measurement. Results: Subjectively, the two patients had full resolution of their pain, without any patellofemoral instability. Conclusion: Medial release and lateral imbrication of the patellar retinaculum is a new surgical procedure that has been used in the treatment of intractable patellofemoral pain syndrome. This is the first report of its kind in the literature. While outcome measurements were less than ideal, the patients had positive outcomes, both functionally and in terms of pain. Keywords: anterior knee pain syndrome, chondromalacia patellae, runner's knee, patellar chondropathy, patellofemoral dysfunction, patellofemoral tracking disorder

  15. Iterative reconstruction techniques for computed tomography part 2: initial results in dose reduction and image quality

    Energy Technology Data Exchange (ETDEWEB)

    Willemink, Martin J.; Leiner, Tim; Jong, Pim A. de; Nievelstein, Rutger A.J.; Schilham, Arnold M.R. [Utrecht University Medical Center, Department of Radiology, P.O. Box 85500, Utrecht (Netherlands); Heer, Linda M. de [Cardiothoracic Surgery, Utrecht (Netherlands); Budde, Ricardo P.J. [Utrecht University Medical Center, Department of Radiology, P.O. Box 85500, Utrecht (Netherlands); Gelre Hospital, Department of Radiology, Apeldoorn (Netherlands)

    2013-06-15

    To present the results of a systematic literature search aimed at determining to what extent the radiation dose can be reduced with iterative reconstruction (IR) for cardiopulmonary and body imaging with computed tomography (CT) in the clinical setting and what the effects on image quality are with IR versus filtered back-projection (FBP) and to provide recommendations for future research on IR. We searched Medline and Embase from January 2006 to January 2012 and included original research papers concerning IR for CT. The systematic search yielded 380 articles. Forty-nine relevant studies were included. These studies concerned: the chest (n = 26), abdomen (n = 16), both chest and abdomen (n = 1), head (n = 4), spine (n = 1), and no specific area (n = 1). IR reduced noise and artefacts, and it improved subjective and objective image quality compared to FBP at the same dose. Conversely, low-dose IR and normal-dose FBP showed similar noise, artefacts, and subjective and objective image quality. Reported dose reductions ranged from 23 to 76 % compared to locally used default FBP settings. However, IR has not yet been investigated for ultra-low-dose acquisitions with clinical diagnosis and accuracy as endpoints. Benefits of IR include improved subjective and objective image quality as well as radiation dose reduction while preserving image quality. Future studies need to address the value of IR in ultra-low-dose CT with clinically relevant endpoints. (orig.)

  16. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.

    2017-07-06

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular data assimilation sequential techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches of Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also accounting for error distributions. Particularly, the deterministic EnKF is tested to avoid perturbing observations before assimilation (that is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which improve the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.

  17. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    Science.gov (United States)

    Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.

    2017-09-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular data assimilation sequential techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches of Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also accounting for error distributions. Particularly, the deterministic EnKF is tested to avoid perturbing observations before assimilation (that is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), respectively, improving the model groundwater estimation errors by 34% and 31% compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
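
    The sketch below illustrates the stochastic EnKF analysis step, one of the schemes evaluated above: an ensemble of gridded water-storage states is updated with a single basin-averaged TWS observation using the sample covariance, a Kalman gain and member-wise perturbed observations. The grid size, ensemble size, error levels and the common storage bias are synthetic stand-ins, not the W3RA/GRACE configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_ens, n_state = 30, 50                      # ensemble members, grid cells

    truth = rng.normal(300.0, 40.0, n_state)     # "true" storage field (mm)
    # Forecast ensemble: a common wet bias, a member-wise offset and cell-level noise.
    ensemble = (truth[None, :] + 60.0
                + rng.normal(0.0, 60.0, (n_ens, 1))
                + rng.normal(0.0, 20.0, (n_ens, n_state)))

    H = np.full((1, n_state), 1.0 / n_state)     # observation operator: basin average
    obs_err = 15.0                               # GRACE-like observation error (mm)
    y = H.dot(truth) + rng.normal(0.0, obs_err, 1)

    # Sample forecast-error covariance from the ensemble anomalies.
    anomalies = ensemble - ensemble.mean(axis=0)
    P = anomalies.T.dot(anomalies) / (n_ens - 1)
    R = np.array([[obs_err ** 2]])

    # Kalman gain and stochastic update (observations perturbed per member).
    K = P.dot(H.T).dot(np.linalg.inv(H.dot(P).dot(H.T) + R))
    perturbed_obs = y + rng.normal(0.0, obs_err, (n_ens, 1))
    analysis = ensemble + (perturbed_obs - ensemble.dot(H.T)).dot(K.T)

    rmse = lambda ens: np.sqrt(np.mean((ens.mean(axis=0) - truth) ** 2))
    print(f"ensemble-mean RMSE: forecast {rmse(ensemble):.1f} mm, analysis {rmse(analysis):.1f} mm")
    ```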

  18. Optimization of liquid overlay technique to formulate heterogenic 3D co-cultures models.

    Science.gov (United States)

    Costa, Elisabete C; Gaspar, Vítor M; Coutinho, Paula; Correia, Ilídio J

    2014-08-01

    Three-dimensional (3D) cell culture models of solid tumors are currently having a tremendous impact on the in vitro screening of candidate anti-tumoral therapies. These 3D models provide more reliable results than those provided by standard 2D in vitro cell cultures. However, 3D manufacturing techniques need to be further optimized in order to increase the robustness of these models and provide data that can be properly correlated with the in vivo situation. Therefore, in the present study, the parameters used for producing multicellular tumor spheroids (MCTS) by the liquid overlay technique (LOT) were optimized in order to produce heterogeneous cellular agglomerates composed of cancer cells and stromal cells over long periods. Spheroids were produced under highly controlled conditions, namely: (i) agarose coatings; (ii) horizontal stirring; and (iii) a known initial cell number. The simultaneous optimization of these parameters promoted the assembly of the characteristic 3D cellular organization found in solid tumors in vivo. Such improvements in the LOT technique promoted the assembly of highly reproducible, individual 3D spheroids at a low production cost that can be used for future in vitro drug screening assays.

  19. Use of System Identification Techniques to Explore the Hydrological Cycle Response to Perturbations in Climate Models

    Science.gov (United States)

    Kravitz, B.; MacMartin, D. G.; Rasch, P. J.; Wang, H.

    2015-12-01

    Identifying the influence of radiative forcing on hydrological cycle changes in climate models can be challenging due to low signal-to-noise ratios, particularly for regional changes. One method of improving the signal-to-noise ratio, even for short simulations, is to use techniques from engineering, broadly known as system identification. Through this method, forcing (or any other chosen field) in multiple regions in a climate model is perturbed simultaneously by using mutually uncorrelated signals with a chosen frequency content, depending upon the climate behavior one wishes to reveal. The result is the sensitivity of a particular climate field (e.g., temperature, precipitation, or cloud cover) to changes in any perturbed region. We demonstrate this technique in the Community Earth System Model (CESM). We perturbed surface air temperatures in 22 regions by up to 1°C. The amount of temperature perturbation was changed every day corresponding to a predetermined sequence of random numbers between -1 and 1, filtered to contain particular frequency content. The matrix of sequences was then orthogonalized such that all individual sequences were mutually uncorrelated. We performed CESM simulations with both fixed sea surface temperatures and a fully coupled ocean. We discuss the various patterns of climate response in several fields relevant to the hydrological cycle, including precipitation and surface latent heat fluxes. We also discuss the potential limits of this technique in terms of the spatial and temporal scales over which it would be appropriate to use.
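
    A minimal sketch of the perturbation-sequence construction described above: independent random daily sequences are band-limited with a simple FFT filter and then orthogonalised so that the regional forcing signals are mutually uncorrelated. This is an illustrative stand-in, not the exact CESM configuration; the 30-day cut-off, ten-year length and QR-based orthogonalisation are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_regions, n_days = 22, 3650          # 22 regions, ten years of daily values

    # Start from white noise, one sequence per region.
    raw = rng.standard_normal((n_days, n_regions))

    # Keep only low-frequency content (periods longer than ~30 days) with an FFT filter.
    spectrum = np.fft.rfft(raw, axis=0)
    freqs = np.fft.rfftfreq(n_days, d=1.0)          # cycles per day
    spectrum[freqs > 1.0 / 30.0, :] = 0.0
    filtered = np.fft.irfft(spectrum, n=n_days, axis=0)

    # Orthogonalise the columns (QR) so the sequences are exactly uncorrelated; linear
    # combinations of band-limited signals stay band-limited, so the frequency content survives.
    q, _ = np.linalg.qr(filtered - filtered.mean(axis=0))
    perturbations = q / np.abs(q).max(axis=0)        # rescale so each peaks at +/- 1 (degC)

    corr = np.corrcoef(perturbations, rowvar=False)
    off_diag = np.abs(corr - np.eye(n_regions)).max()
    print(f"max off-diagonal correlation between regional sequences: {off_diag:.2e}")
    ```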

  20. Variational Data Assimilation Technique in Mathematical Modeling of Ocean Dynamics

    Science.gov (United States)

    Agoshkov, V. I.; Zalesny, V. B.

    2012-03-01

    Problems of the variational data assimilation for the primitive equation ocean model constructed at the Institute of Numerical Mathematics, Russian Academy of Sciences, are considered. The model has a flexible computational structure and consists of two parts: a forward prognostic model and its adjoint analog. The numerical algorithm for the forward and adjoint models is constructed based on the method of multicomponent splitting. The method includes splitting with respect to physical processes and space coordinates. Numerical experiments are performed with the use of the Indian Ocean and the World Ocean as examples. These numerical examples support the theoretical conclusions and demonstrate the soundness of combining an ocean dynamics model with an observational data assimilation procedure.
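
    A toy variational analysis in the same spirit: a quadratic cost function penalises departures from the model background and from sparse observations, and its gradient (playing the role the adjoint model plays in the real system) drives a simple descent iteration. The state size, observation operator, covariances and step size are illustrative assumptions, unrelated to the INM ocean model.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, m = 40, 10                                   # state size, number of observations

    truth = np.sin(np.linspace(0.0, 2.0 * np.pi, n))
    background = truth + rng.normal(0.0, 0.3, n)    # forward-model forecast
    H = np.zeros((m, n))
    H[np.arange(m), np.arange(0, n, n // m)] = 1.0  # observe every 4th grid point
    y = H.dot(truth) + rng.normal(0.0, 0.05, m)

    B_inv = np.eye(n) / 0.3 ** 2                    # background-error precision
    R_inv = np.eye(m) / 0.05 ** 2                   # observation-error precision

    def cost(x):
        db, do = x - background, H.dot(x) - y
        return 0.5 * db.dot(B_inv).dot(db) + 0.5 * do.dot(R_inv).dot(do)

    def grad(x):                                    # gradient of the cost function
        return B_inv.dot(x - background) + H.T.dot(R_inv).dot(H.dot(x) - y)

    x = background.copy()
    for _ in range(500):                            # plain gradient descent
        x = x - 2e-4 * grad(x)

    rmse = lambda v: np.sqrt(np.mean((v - truth) ** 2))
    print(f"cost: {cost(background):.1f} -> {cost(x):.1f}; "
          f"RMSE: background {rmse(background):.3f}, analysis {rmse(x):.3f}")
    ```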

  1. Difficulties and Problematic Steps in Teaching the Onstep Technique for Inguinal Hernia Repair, Results from a Focus Group Interview

    DEFF Research Database (Denmark)

    Andresen, Kristoffer; Laursen, Jannie; Rosenberg, Jacob

    2016-01-01

    Background. When a new surgical technique is brought into a department, it is often experienced surgeons that learn it first and then pass it on to younger surgeons in training. This study seeks to clarify the problems and positive experiences when teaching and training surgeons in the Onstep technique for inguinal hernia repair, seen from the instructor's point of view. Methods. We designed a qualitative study using a focus group to allow participants to elaborate freely and facilitate a discussion. Participants were surgeons with extensive experience in performing the Onstep technique from Germany, UK, France, Belgium, Italy, Greece, and Sweden. Results. Four main themes were found, with one theme covering three subthemes: instruction of others (experience, patient selection, and tailored teaching), comfort, concerns/fear, and anatomy. Conclusion. Surgeons receiving a one-day training...

  2. Comparison of the cytology technique and the frozen section results in intraoperative consultation of the breast lesions

    Directory of Open Access Journals (Sweden)

    "Haeri H

    2002-07-01

    Full Text Available Cytology is an effective and reliable technique in intraoperative consultation. This study was performed to evaluate the accuracy of the cytology study in intraoperative consultation of breast lesions. 125 specimens of breast lesions were examined and studied at Imam Khomeini Hospital during the years 1998-99. The sensitivity, specificity and accuracy were 87.5%, 95% and 90.5% for the cytological method and 92.4%, 100% and 95.4% for the frozen section, respectively. The false positive rate was 2% for the cytology technique, and the most important source of error and false positive reports in this method was fibroadenoma. Reviewing the results, it can be concluded that the combination of these two techniques is beneficial and more reliable for intraoperative consultation reports of breast lesions.

  3. Clinical and Histological Results of Vertical Ridge Augmentation of Atrophic Posterior Mandible with Inlay Technique of Cancellous Equine Bone Blocks

    Directory of Open Access Journals (Sweden)

    Pistilli R

    2013-12-01

    Full Text Available Aim: To evaluate a new bone block material in the inlay technique for the vertical bone augmentation of an atrophic posterior mandible, in order to perform aesthetic and prosthetic rehabilitation and enable implant insertion. Materials & Methods: The inlay technique and the subsequent successful implant rehabilitation in the atrophic right posterior mandible of a 42-year-old woman were completed using a cancellous equine bone block as grafting material. Results: Three months after the surgical procedure, both clinical and histological aspects showed complete integration of the biomaterial with the surrounding bone, and three dental implants were placed. Computed tomography and conventional radiography showed a 5 mm mean vertical bone gain. Conclusion: Cancellous equine bone grafts may be an effective alternative to autogenous bone and/or inorganic bovine bone for reconstruction of the posterior mandible using the inlay technique.

  4. Importance of integrated results of different non-destructive techniques in order to evaluate defects in panel paintings: the contribution of infrared, optical and ultrasonic techniques

    Science.gov (United States)

    Sfarra, S.; Theodorakeas, P.; Ibarra-Castanedo, C.; Avdelidis, N. P.; Paoletti, A.; Paoletti, D.; Hrissagis, K.; Bendada, A.; Koui, M.; Maldague, X.

    2011-06-01

    The increasing deterioration of panel paintings can be due to physical processes that take place during exhibition or transit, or as a result of temperature and humidity fluctuations within a building, church or museum. In response to environmental alterations, a panel painting can expand or contract and a new equilibrium state is eventually reached. These adjustments, though, are usually accompanied by a change in shape in order to accommodate the new conditions. In this work, a holographic method for detecting detached regions and micro-cracks is described. Some of these defects are confirmed by the Thermographic Signal Reconstruction (TSR) technique. In addition, Pulsed Phase Thermography (PPT) and Principal Component Thermography (PCT) make it possible to identify, with greater contrast, two artificial defects in Mylar which are crucial to understanding the topic of interest: the discrimination between defect materials. Finally, traditional contact ultrasound applications, widely used for the evaluation of wood quality, are employed in several characterization procedures. Inspecting the specimen from the front side, the natural and artificial defects of the specimen are confirmed. Experimental results derived from the application of the integrated methods to an Italian panel painting reproduction, called The Angel specimen, are presented. The main advantages that these techniques can offer to the conservation and restoration of artworks are emphasized.

  5. Adult spinal deformity treated with minimally invasive surgery. Description of surgical technique, radiological results and literature review.

    Science.gov (United States)

    Domínguez, I; Luque, R; Noriega, M; Rey, J; Alía, J; Urda, A; Marco, F

    2017-09-06

    The prevalence of adult spinal deformity has been increasing exponentially over time. Surgery has been credited with good radiological and clinical results, but the incidence of complications is high. MIS techniques provide good results with fewer complications. This is a retrospective study of 25 patients with adult spinal deformity treated by MIS surgery, with a minimum follow-up of 6 months. Radiological improvement included an SVA reduction from 5 to 2 cm, coronal Cobb angle from 31° to 6°, and lumbar lordosis from 18° to 38°. All of these parameters remained stable over time. We also present the complications that appeared in 4 patients (16%). Only one patient needed reoperation. We describe the technique used and review the references on the subject. We conclude that the MIS technique for treating adult spinal deformity has results comparable to those of conventional techniques but with fewer complications. Copyright © 2017 SECOT. Published by Elsevier España, S.L.U. All rights reserved.

  6. Simulating lightning into the RAMS model: implementation and preliminary results

    Directory of Open Access Journals (Sweden)

    S. Federico

    2014-05-01

    Full Text Available This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented into the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity. Results show that the model predicts both cases reasonably well and that the lightning activity is well reproduced, especially for the most intense case. However, there are errors in timing and positioning of the convection, whose magnitude depends on the case study, and these are mirrored in timing and positioning errors of the lightning distribution. To assess objectively the performance of the methodology, standard scores are presented for four additional case studies. Scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection. This shows the importance of the use of computationally efficient lightning schemes, such as the one described in this paper, in forecast models.

  7. Modeling air quality over China: Results from the Panda project

    Science.gov (United States)

    Katinka Petersen, Anna; Bouarar, Idir; Brasseur, Guy; Granier, Claire; Xie, Ying; Wang, Lili; Wang, Xuemei

    2015-04-01

    China faces severe air pollution problems related to the rapid economic development of the past decade and an increasing demand for energy. Air quality monitoring stations often report high levels of particulate matter and ozone all over the country. Given its long-term health impacts, air pollution has become a pressing problem not only in China but also in other Asian countries. The PANDA project is the result of cooperation between scientists from Europe and China who have joined their efforts to better understand the processes controlling air pollution in China, to improve methods for monitoring air quality, and to elaborate indicators in support of European and Chinese policies. A modeling system for air pollution is being set up within the PANDA project and includes advanced global (MACC, EMEP) and regional (WRF-Chem, EMEP) meteorological and chemical models to analyze and monitor air quality in China. The poster describes the accomplishments obtained within the first year of the project. Model simulations for January and July 2010 are evaluated with satellite measurements (SCIAMACHY NO2 and MOPITT CO) and in-situ data (O3, CO, NOx, PM10 and PM2.5) observed at several surface stations in China. Using the WRF-Chem model, we investigate the sensitivity of the model performance to emissions (MACCity, HTAPv2), horizontal resolution (60 km, 20 km) and the choice of initial and boundary conditions.

  8. Preferred tools and techniques for implantation of cardiac electronic devices in Europe: results of the European Heart Rhythm Association survey.

    Science.gov (United States)

    Bongiorni, Maria Grazia; Proclemer, Alessandro; Dobreanu, Dan; Marinskis, Germanas; Pison, Laurent; Blomstrom-Lundqvist, Carina

    2013-11-01

    The aim of this European Heart Rhythm Association (EHRA) survey was to assess clinical practice in relation to the tools and techniques used for cardiac implantable electronic device procedures in European countries. Responses to the questionnaire were received from 62 members of the EHRA research network. The survey involved high-, medium-, and low-volume implanting centres, performing, respectively, more than 200, 100-199 and under 100 implants per year. The following topics were explored: the side approach for implantation, surgical techniques for pocket incision, first venous access for lead implantation, preference of lead fixation, preferred coil number for implantable cardioverter-defibrillator (ICD) leads, right ventricular pacing site, generator placement site, subcutaneous ICD implantation, specific tools and techniques for cardiac resynchronization therapy (CRT), lead implantation sequence in CRT, coronary sinus cannulation technique, target site for left ventricular lead placement, strategy in left ventricular lead implant failure, mean CRT implantation time, optimization of the atrioventricular (AV) and ventriculo-ventricular intervals, CRT implants in patients with permanent atrial fibrillation, and AV node ablation in patients with permanent AF. This panoramic view allows us to identify operator preferences regarding the techniques and tools for device implantation in Europe. The results showed different practices in all the fields we investigated; nevertheless, the survey also outlines good adherence to common standards and recommendations.

  9. Using an inverse modelling approach to evaluate the water retention in a simple water harvesting technique

    Directory of Open Access Journals (Sweden)

    K. Verbist

    2009-06-01

    Full Text Available In arid and semi-arid zones runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts were observed to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in Northern Chile. Rainfall simulations were combined with high spatial and temporal resolution water content monitoring in order to construct a useful dataset for inverse modelling purposes. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (Ksat) independently. Tension infiltrometer measurements proved a good estimator of the Ksat value and a proxy for those measured under simulated rainfall, whereas the pressure and constant head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested as a combination of soil-water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient to obtain high model accuracy, due to large scatter on the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. Correlation between observed soil water content and simulated values was as high as R2=0.93 for ten selected observation points used in the model calibration phase, with overall correlation for the 22 observation points equal to 0.85. Model results indicate that the infiltration trench has a significant effect on

  10. Using an inverse modelling approach to evaluate the water retention in a simple water harvesting technique

    Directory of Open Access Journals (Sweden)

    K. Verbist

    2009-10-01

    Full Text Available In arid and semi-arid zones, runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts were observed to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study, a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in northern Chile. Rainfall simulations were combined with high spatial and temporal resolution water content monitoring in order to construct a useful dataset for inverse modelling purposes. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (Ksat) independently. The tension infiltrometer measurements proved a good estimator of the Ksat value and a proxy for those measured under simulated rainfall, whereas the pressure and constant head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested as a combination of soil-water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient to obtain high model accuracy, due to large scatter on the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. Correlation between the observed soil water content and the simulated values was as high as R2=0.93 for ten selected observation points used in the model calibration phase, with overall correlation for the 22 observation points equal to 0.85. The model results indicate that the infiltration trench has a

  11. Using an inverse modelling approach to evaluate the water retention in a simple water harvesting technique

    Science.gov (United States)

    Verbist, K.; Cornelis, W. M.; Gabriels, D.; Alaerts, K.; Soto, G.

    2009-10-01

    In arid and semi-arid zones, runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts were observed to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study, a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in northern Chile. Rainfall simulations were combined with high spatial and temporal resolution water content monitoring in order to construct a useful dataset for inverse modelling purposes. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (Ksat) independently. The tension infiltrometer measurements proved a good estimator of the Ksat value and a proxy for those measured under simulated rainfall, whereas the pressure and constant head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested as a combination of soil-water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient to obtain high model accuracy, due to large scatter on the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. Correlation between the observed soil water content and the simulated values was as high as R2=0.93 for ten selected observation points used in the model calibration phase, with overall correlation for the 22 observation points equal to 0.85. The model results indicate that the infiltration trench has a significant effect on soil-water storage, especially at the base of the
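
    As a much simpler stand-in for the HYDRUS-2D inverse modelling used in these studies, the sketch below fits Philip's two-term infiltration equation I(t) = S*sqrt(t) + A*t to synthetic cumulative-infiltration data by linear least squares; S is the sorptivity and the steady-state coefficient A is commonly used as a rough proxy for the saturated hydraulic conductivity. The data, noise level and parameter values are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic "measured" cumulative infiltration (cm) over two hours.
    t = np.linspace(0.05, 2.0, 40)                # time (h)
    s_true, a_true = 2.0, 0.8                     # sorptivity (cm/h^0.5), steady-state term (cm/h)
    infiltration = s_true * np.sqrt(t) + a_true * t + rng.normal(0.0, 0.05, t.size)

    # I(t) is linear in (S, A), so ordinary least squares fits both parameters directly.
    design = np.column_stack([np.sqrt(t), t])
    (s_fit, a_fit), *_ = np.linalg.lstsq(design, infiltration, rcond=None)
    print(f"fitted sorptivity S = {s_fit:.2f} cm/h^0.5, steady-state term A = {a_fit:.2f} cm/h")
    ```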

  12. A simple technique for combining simplified models and its application to direct stop production

    CERN Document Server

    Barnard, James; French, Sky; White, Martin

    2014-01-01

    The results of many LHC searches for supersymmetric particles are interpreted using simplified models, in which one fixes the masses and couplings of most sparticles then scans over a few remaining masses of interest. We present a new technique for combining multiple simplified models (that requires no additional simulation) thereby demonstrating the utility and limitations of simplified models in general, and suggesting a simple way of improving LHC search strategies. The technique is used to derive limits on the stop mass that are model independent, modulo some reasonably generic assumptions which are quantified precisely. We find that current ATLAS and CMS results exclude stop masses up to 340 GeV for neutralino masses up to 120 GeV, provided that the total branching ratio into channels other than top-neutralino and bottom-chargino is small, and that there is no mass difference smaller than 10 GeV in the mass spectrum. In deriving these limits we place upper bounds on the branching ratios for complete stop...

  13. Exact results for the one dimensional asymmetric exclusion model

    Science.gov (United States)

    Derrida, B.; Evans, M. R.; Hakim, V.; Pasquier, V.

    1993-11-01

    The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices.
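
    The exact results summarized above rest on a matrix-product representation of the steady state, which is not reproduced here. Purely as a complementary illustration of the model itself, the sketch below runs a direct Monte Carlo simulation of an open-boundary asymmetric exclusion process; the lattice size and the injection and extraction rates are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

L, alpha, beta = 100, 0.3, 0.7   # lattice size, injection and extraction rates
steps = 200_000                  # random sequential update attempts
tau = np.zeros(L, dtype=int)     # occupation numbers (hard-core exclusion)
bulk_hops = 0

for _ in range(steps):
    i = rng.integers(-1, L)      # -1 means an injection attempt at the left boundary
    if i == -1:
        if tau[0] == 0 and rng.random() < alpha:
            tau[0] = 1
    elif i == L - 1:
        if tau[-1] == 1 and rng.random() < beta:
            tau[-1] = 0          # extraction at the right boundary
    else:
        if tau[i] == 1 and tau[i + 1] == 0:   # hop in the preferred direction
            tau[i], tau[i + 1] = 0, 1
            bulk_hops += 1

print("bulk hops per update attempt:", bulk_hops / steps)
print("mean density:", tau.mean())
```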

  14. Exact results for the one dimensional asymmetric exclusion model

    Energy Technology Data Exchange (ETDEWEB)

    Derrida, B.; Evans, M.R.; Pasquier, V. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Service de Physique Theorique]; Hakim, V. [Ecole Normale Superieure, 75 - Paris (France)]

    1993-12-31

    The asymmetric exclusion model describes a system of particles hopping in a preferred direction with hard core repulsion. These particles can be thought of as charged particles in a field, as steps of an interface, as cars in a queue. Several exact results concerning the steady state of this system have been obtained recently. The solution consists of representing the weights of the configurations in the steady state as products of non-commuting matrices. (author).

  15. APPLYING LOGISTIC REGRESSION MODEL TO THE EXAMINATION RESULTS DATA

    Directory of Open Access Journals (Sweden)

    Goutam Saha

    2011-01-01

    Full Text Available The binary logistic regression model is used to analyze the school examination results (scores) of 1002 students. The analysis is performed on the basis of the independent variables, viz. gender, medium of instruction, type of schools, category of schools, board of examinations and location of schools, where scores or marks are assumed to be dependent variables. The odds ratio analysis compares the scores obtained in two examinations, viz. matriculation and higher secondary.
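
    As an illustration of this kind of analysis (with synthetic data and hypothetical column names, not the 1002-student data set used in the paper), a binary logistic regression and its odds ratios might be computed as follows.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1002

# Synthetic stand-in for the examination data set (all columns are hypothetical).
df = pd.DataFrame({
    "passed": rng.integers(0, 2, n),              # 1 = pass, 0 = fail
    "gender": rng.choice(["M", "F"], n),
    "medium": rng.choice(["English", "Bengali"], n),
    "school_type": rng.choice(["Govt", "Private"], n),
    "location": rng.choice(["Urban", "Rural"], n),
})

fit = smf.logit("passed ~ C(gender) + C(medium) + C(school_type) + C(location)",
                data=df).fit(disp=False)

# Exponentiated coefficients give the odds ratios for each factor level.
print(np.exp(fit.params))
```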

  16. Analytical results for a three-phase traffic model.

    Science.gov (United States)

    Huang, Ding-wei

    2003-10-01

    We study analytically a cellular automaton model, which is able to present three different traffic phases on a homogeneous highway. The characteristics displayed in the fundamental diagram can be well discerned by analyzing the evolution of density configurations. Analytical expressions for the traffic flow and shock speed are obtained. The synchronized flow in the intermediate-density region is the result of an aggressive driving scheme and is determined mainly by the stochastic noise.
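
    The specific three-phase cellular automaton analyzed in the paper is not reproduced here; purely as an illustration of this class of models, the sketch below runs a Nagel-Schreckenberg-type cellular automaton on a ring and estimates the flow at a few densities. All parameter values are arbitrary.

```python
import numpy as np

def nasch_flow(density, length=1000, vmax=5, p_noise=0.3, steps=2000, seed=0):
    """Average flow of a Nagel-Schreckenberg cellular automaton on a ring."""
    rng = np.random.default_rng(seed)
    n_cars = int(density * length)
    pos = np.sort(rng.choice(length, n_cars, replace=False))
    vel = np.zeros(n_cars, dtype=int)
    moved = 0
    for _ in range(steps):
        gaps = (np.roll(pos, -1) - pos - 1) % length   # empty cells ahead of each car
        vel = np.minimum(vel + 1, vmax)                # acceleration
        vel = np.minimum(vel, gaps)                    # deceleration (hard-core exclusion)
        vel = np.where(rng.random(n_cars) < p_noise,
                       np.maximum(vel - 1, 0), vel)    # stochastic noise
        pos = (pos + vel) % length                     # parallel update
        moved += vel.sum()
    return moved / (steps * length)                    # cars passing per cell per step

for rho in (0.1, 0.3, 0.6):
    print(rho, round(nasch_flow(rho), 3))
```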

  17. Wave Propagation in Fluids Models and Numerical Techniques

    CERN Document Server

    Guinot, Vincent

    2007-01-01

    This book presents the physical principles of wave propagation in fluid mechanics and hydraulics. The mathematical techniques that allow the behavior of the waves to be analyzed are presented, along with existing numerical methods for the simulation of wave propagation. Particular attention is paid to discontinuous flows, such as steep fronts and shock waves, and their mathematical treatment. A number of practical examples are taken from various areas of fluid mechanics and hydraulics, such as contaminant transport, the motion of immiscible hydrocarbons in aquifers, river flow, pipe transients an

  18. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tessellation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified...... that is extremely sensitive to structural changes in the system. This quantity, which is derived from the edge-length distribution function of the Voronoi polygons, displays a dramatic change at the solid-liquid transition. This is found to be more useful for locating the transition than either the defect density...
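
    The dynamically maintained tessellation and the order parameter described above are specific to the authors' NpT scheme and are not reproduced here. The fragment below merely illustrates, for an arbitrary random point set, how an edge-length distribution of the Voronoi polygons can be extracted with SciPy.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(2)
points = rng.random((500, 2))          # stand-in for hard-disk centre positions
vor = Voronoi(points)

# Edge lengths of all finite Voronoi ridges (edges of the Voronoi polygons).
lengths = []
for (i, j) in vor.ridge_vertices:
    if i >= 0 and j >= 0:              # skip ridges extending to infinity
        lengths.append(np.linalg.norm(vor.vertices[i] - vor.vertices[j]))
lengths = np.array(lengths)

hist, bin_edges = np.histogram(lengths, bins=30, density=True)
print("mean edge length:", round(float(lengths.mean()), 4))
print("most populated bin:", bin_edges[np.argmax(hist)], "-", bin_edges[np.argmax(hist) + 1])
```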

  19. Multiple Fan-Beam Optical Tomography: Modelling Techniques

    Directory of Open Access Journals (Sweden)

    Pang Jon Fea

    2009-10-01

    Full Text Available This paper explains in detail the solution to the forward and inverse problem faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, but with similar system properties, to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problems are important in reconstructing the tomographic image.

  20. Size reduction techniques for vital compliant VHDL simulation models

    Science.gov (United States)

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. Then the system collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of the selected instance. Then, the system repeats this process for every delay value in the standard delay file (310) that corresponds to every instance of every logic gate in the logic model. The system then outputs a reduced size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.
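
    The sketch below is only a loose illustration of the idea of collapsing many per-instance delay entries into a single summary per instance; it is not the patented method, and the instance names and delay records are hypothetical.

```python
from collections import defaultdict

# Hypothetical per-instance delay records parsed from an SDF-like file:
# (instance, rise_delay_ns, fall_delay_ns)
delays = [
    ("u1/and2_0", 0.12, 0.15),
    ("u1/and2_0", 0.14, 0.16),
    ("u2/inv_3",  0.05, 0.07),
    ("u2/inv_3",  0.06, 0.06),
]

super_generics = defaultdict(lambda: {"rise": [], "fall": []})
for inst, rise, fall in delays:
    super_generics[inst]["rise"].append(rise)
    super_generics[inst]["fall"].append(fall)

# One min/max envelope ("super generic") per instance instead of many entries.
for inst, d in super_generics.items():
    print(inst,
          "rise:", (min(d["rise"]), max(d["rise"])),
          "fall:", (min(d["fall"]), max(d["fall"])))
```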

  1. Liquid propellant analogy technique in dynamic modeling of launch vehicle

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The coupling effects among the lateral, longitudinal and torsional modes of a launch vehicle cannot be taken into account in traditional dynamic analysis, which uses a lateral beam model and a longitudinal spring-mass model separately. To deal with this problem, propellant analogy methods based on a beam model are proposed, and the coupled mass matrix of the liquid propellant is constructed through additional mass in the present study. An integrated model of the launch vehicle for free vibration analysis is then established, with which the interactions between the longitudinal and lateral modes, and between the longitudinal and torsional modes, of the launch vehicle can be investigated. Numerical examples for tandem tanks validate the present method and demonstrate its necessity.

  2. Evaluation of dynamical models: dissipative synchronization and other techniques.

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A B

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams--which in turn is much greater than, say, that of correlation dimension--but at a much lower computational cost.
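
    As a toy illustration of validation by synchronization (not the authors' exact procedure; the coupling gain and initial conditions are arbitrary), the sketch below drives a Hénon-map model with "data" generated by the same map through a dissipative coupling term and monitors the synchronization error, which stays small only when the model reproduces the dynamics underlying the data and the gain is strong enough.

```python
import numpy as np

a, b = 1.4, 0.3                    # canonical Henon-map parameters
k = 0.9                            # coupling gain (must be strong enough to synchronize)

def henon(x, y):
    return 1.0 - a * x * x + y, b * x

# "Data": a trajectory of the true system (here the model is perfect by construction).
N = 2000
xd, yd = np.empty(N), np.empty(N)
xd[0], yd[0] = 0.1, 0.0
for n in range(N - 1):
    xd[n + 1], yd[n + 1] = henon(xd[n], yd[n])

# Model driven by the data through a dissipative coupling term on x.
xm, ym = -0.5, 0.2                 # deliberately different initial condition
err = []
for n in range(N - 1):
    xf, yf = henon(xm, ym)                   # free model step
    xm = xf + k * (xd[n + 1] - xf)           # coupling pulls the model toward the data
    ym = yf
    err.append(abs(xd[n + 1] - xm))

print("mean |x_data - x_model| over the last 100 steps:", np.mean(err[-100:]))
```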

  3. Evaluation of dynamical models: Dissipative synchronization and other techniques

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A. B.

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams—which in turn is much greater than, say, that of correlation dimension—but at a much lower computational cost.

  4. Challenges in validating model results for first year ice

    Science.gov (United States)

    Melsom, Arne; Eastwood, Steinar; Xie, Jiping; Aaboe, Signe; Bertino, Laurent

    2017-04-01

    In order to assess the quality of model results for the distribution of first year ice, a comparison with a product based on observations from satellite-borne instruments has been performed. Such a comparison is not straightforward due to the contrasting algorithms that are used in the model product and the remote sensing product. The implementation of the validation is discussed in light of the differences between this set of products, and validation results are presented. The model product is the daily updated 10-day forecast from the Arctic Monitoring and Forecasting Centre in CMEMS. The forecasts are produced with the assimilative ocean prediction system TOPAZ. Presently, observations of sea ice concentration and sea ice drift are introduced in the assimilation step, but data for sea ice thickness and ice age (or roughness) are not included. The model computes the age of the ice by recording and updating the time passed after ice formation as sea ice grows and deteriorates as it is advected inside the model domain. Ice that is younger than 365 days is classified as first year ice. The fraction of first-year ice is recorded as a tracer in each grid cell. The Ocean and Sea Ice Thematic Assembly Centre in CMEMS redistributes a daily product from the EUMETSAT OSI SAF of gridded sea ice conditions which include "ice type", a representation of the separation of regions between those infested by first year ice, and those infested by multi-year ice. The ice type is parameterized based on data for the gradient ratio GR(19,37) from SSMIS observations, and from the ASCAT backscatter parameter. This product also includes information on ambiguity in the processing of the remote sensing data, and the product's confidence level, which have a strong seasonal dependency.
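
    A minimal sketch of the first-year/multi-year separation based on an ice-age tracer is given below; the arrays and the ice-presence threshold are hypothetical, and the CMEMS/TOPAZ and OSI SAF products use their own grids and criteria.

```python
import numpy as np

SECONDS_PER_DAY = 86400.0

# Hypothetical per-grid-cell ice age tracer (seconds since ice formation)
# and ice concentration, as a model might carry them.
ice_age_seconds = np.array([5.0e6, 4.0e7, 2.0e6, 6.0e7])
ice_conc = np.array([0.9, 0.8, 0.1, 0.95])

age_days = ice_age_seconds / SECONDS_PER_DAY
is_ice = ice_conc > 0.15                       # hypothetical ice-presence threshold
first_year = is_ice & (age_days < 365.0)       # ice younger than one year
multi_year = is_ice & (age_days >= 365.0)

print("first-year ice cells:", np.where(first_year)[0])
print("multi-year ice cells:", np.where(multi_year)[0])
```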

  5. Advanced modeling techniques in application to plasma pulse treatment

    Science.gov (United States)

    Pashchenko, A. F.; Pashchenko, F. F.

    2016-06-01

    Different approaches are considered for the simulation of the plasma pulse treatment process. The assumption of a significant non-linearity of the processes involved in the treatment of oil wells has been confirmed. A method of functional transformations and fuzzy logic methods are suggested for the construction of a mathematical model. It is shown that models based on fuzzy logic are able to provide a satisfactory accuracy of simulation and prediction of the non-linear processes observed.

  6. Analysis of computational modeling techniques for complete rotorcraft configurations

    Science.gov (United States)

    O'Brien, David M., Jr.

    Computational fluid dynamics (CFD) provides the helicopter designer with a powerful tool for identifying problematic aerodynamics. Through the use of CFD, design concepts can be analyzed in a virtual wind tunnel long before a physical model is ever created. Traditional CFD analysis tends to be a time consuming process, where much of the effort is spent generating a high quality computational grid. Recent increases in computing power and memory have created renewed interest in alternative grid schemes such as unstructured grids, which facilitate rapid grid generation by relaxing restrictions on grid structure. Three rotor models have been incorporated into a popular fixed-wing unstructured CFD solver to increase its capability and facilitate availability to the rotorcraft community. The benefit of unstructured grid methods is demonstrated through rapid generation of high fidelity configuration models. The simplest rotor model is the steady state actuator disk approximation. By transforming the unsteady rotor problem into a steady state one, the actuator disk can provide rapid predictions of performance parameters such as lift and drag. The actuator blade and overset blade models provide a depiction of the unsteady rotor wake, but incur a larger computational cost than the actuator disk. The actuator blade model is convenient when the unsteady aerodynamic behavior needs to be investigated, but the computational cost of the overset approach is too large. The overset or chimera method allows the blade loads to be computed from first-principles and therefore provides the most accurate prediction of the rotor wake for the models investigated. The physics of the flow fields generated by these models for rotor/fuselage interactions are explored, along with efficiencies and limitations of each method.
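
    For orientation, the simplest of the three rotor models mentioned, the steady actuator disk, reduces in hover to the classical momentum-theory relations evaluated in the sketch below; the numbers are illustrative and not taken from the thesis.

```python
import math

def actuator_disk_hover(thrust_n, radius_m, rho=1.225):
    """Uniformly loaded actuator disk in hover (classical momentum theory)."""
    area = math.pi * radius_m ** 2
    v_induced = math.sqrt(thrust_n / (2.0 * rho * area))   # induced velocity at the disk
    power_ideal = thrust_n * v_induced                     # ideal induced power
    return v_induced, power_ideal

# Illustrative numbers only: 30 kN of thrust, 7 m rotor radius, sea-level density.
vi, p = actuator_disk_hover(30e3, 7.0)
print(f"induced velocity ~ {vi:.1f} m/s, ideal induced power ~ {p / 1e3:.0f} kW")
```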

  7. Preliminary results of the study on wind erosion in the Qinghai-Tibetan Plateau using 137Cs technique

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The worldwide fallout of caesium-137 (137Cs) associated with the nuclear weapon tests during the 1950s and 1960s has provided a valuable man-made tracer for studies of soil erosion and sediment delivery. But relatively few researchers have used it to estimate wind erosion. In this note, the 137Cs technique is introduced into the studies of wind erosion and its modern processes in the Qinghai-Tibetan Plateau. Two 137Cs reference inventories of 982.11 and 2 376.04 Bq·m-2 were established preliminarily, distributed in the south and middle-north parts of the studied area respectively. By analyzing the patterns of 137Cs depth profiles from sampling sites, the aeolian processes of erosion and deposition over nearly 40 years have been revealed, i.e. the shrub coppice dunes (S1) and semi-fixed dunefields (S3) experienced the alternation of erosion and deposition, while the grasslands (S4, S6 and S7) and dry farmlands (S5) suffered erosion only. By using the 137Cs model, the average wind erosion rates for shrub coppice dune (S1), semi-fixed dune fields (S3), dry farmlands (S5) and grasslands (S4, S6 and S7) were estimated to be 84.14, 69.43, 30.68 and 21.84 t·ha-1·a-1 respectively, averaging 47.59 t·ha-1·a-1 for the whole plateau, which can be regarded as a medium erosion level. These results, derived from 137Cs for the first time, have significant implications for further research on wind erosion and desertification control in the Qinghai-Tibetan Plateau.

  8. Models of adopting the convicted to the imprisonment conditions – the results of my own research

    Directory of Open Access Journals (Sweden)

    Dorota Kanarek-Lizik

    2013-06-01

    Full Text Available Convicts who are sent to penitentiary units in order to serve a sentence of imprisonment are obliged to choose a proper technique (model) of coping with the discomfort of imprisonment and, at the same time, a way of minimizing the discrepancy between the restricted and the outer world. In order to identify these techniques, a special questionnaire was developed which applies to models of adaptation of the convicted to imprisonment conditions. This questionnaire is based on the types of adaptation enumerated by E. Goffman: withdrawal, rebellion, settling down, cold calculation and conversion. In this article I present the results of my own research concerning the models of adaptation of the convicted to imprisonment conditions. The survey included recidivists and adults serving a sentence of imprisonment for the first time.

  9. Model Calibration of a Groundwater Flow Analysis for an Underground Structure Using Data Assimilation Technique

    Science.gov (United States)

    Yamamoto, S.; Honda, M.; Sakurai, H.

    2015-12-01

    Model calibration of groundwater flow analysis is a difficult task, especially in the complicated hydrogeological condition, because available information about hydrogeological properties is very limited. This often causes non-negligible differences between predicted results and real observations. We applied the Ensemble Kalman Filter (EnKF), which is a type of data assimilation technique, to groundwater flow simulation in order to obtain a valid model that can reproduce accurately the observations. Unlike conventional manual calibration, this scheme not only makes the calibration work efficient but also provides an objective approach not depending on the skills of engineers. In this study, we focused on estimating hydraulic conductivities of bedrocks and fracture zones around an underground fuel storage facility. Two different kinds of groundwater monitoring data were sequentially assimilated into the unsteady groundwater flow model via the EnKF. Synthetic test results showed that estimated hydraulic conductivities matched their true values and our method works well in groundwater flow analysis. Further, influences of each observation in the state updating process were quantified through sensitivity analysis. To assess the feasibility under practical conditions, the assimilation experiments using real field measurements were performed. The results showed that the identified model was able to approximately simulate the behavior of groundwater flow. On the other hand, it was difficult to reproduce the observation data correctly in a specific local area. This suggests that inaccurate area is included in the assumed hydrogeological conceptual model of this site, and could be useful information for the model validation.
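
    A minimal sketch of a stochastic EnKF analysis step for parameter estimation is given below, assuming for simplicity a linear observation operator (in the groundwater application the operator is the flow model itself, which is nonlinear); all dimensions and values are illustrative, not taken from the study.

```python
import numpy as np

def enkf_update(ensemble, H, y_obs, obs_err_std, rng):
    """Stochastic EnKF analysis step.

    ensemble : (n_state, n_members) array of parameter vectors (e.g. log hydraulic
               conductivities of bedrock and fracture zones)
    H        : (n_obs, n_state) linear observation operator
    y_obs    : (n_obs,) observed values (e.g. groundwater heads)
    """
    n_state, n_mem = ensemble.shape
    X_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - X_mean                              # ensemble anomalies
    P = A @ A.T / (n_mem - 1)                          # sample covariance
    R = np.eye(len(y_obs)) * obs_err_std ** 2          # observation error covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
    # Perturbed observations, one realization per ensemble member.
    Y = y_obs[:, None] + rng.normal(0.0, obs_err_std, (len(y_obs), n_mem))
    return ensemble + K @ (Y - H @ ensemble)           # updated ensemble

# Tiny synthetic example.
rng = np.random.default_rng(3)
ens = rng.normal(-5.0, 1.0, (4, 50))                   # 4 parameters, 50 members
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])                   # two of them are observed
updated = enkf_update(ens, H, np.array([-4.2, -5.8]), 0.1, rng)
print(updated.mean(axis=1))
```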

  10. Comparison of Artificial Neural Network And M5 Model Tree Technique In Water Level Forecasting of Solo River

    Science.gov (United States)

    Lasminto, Umboro; Hery Mularta, Listya

    2010-05-01

    Flood events along the Solo River at the end of December 2007 caused loss of property and lives. Floods occurred in the cities of Ngawi, Madiun, Bojonegoro, Babat and surrounding areas. To reduce future losses, one important effort is to provide information about the magnitude and timing of a flood before it occurs, so that people can act to reduce its impact. A flood forecasting model can provide information on the water level in the river some time before the event. This paper compares flood forecasting models for Bojonegoro City built using the Artificial Neural Network (ANN) technique and the M5 Model Tree (M5MT) technique. The models forecast the water level 1, 3 and 6 hours ahead at the water level recorder in the City of Bojonegoro, using as input the water levels at upstream recorders such as Karangnongko, Sekayu, Jurug and Wonogiri. The same data set of hourly water level records is used to build the ANN and M5MT models. The selection of parameters and the setup of the ANN and M5MT techniques were tuned to obtain the best results. The results of the models are evaluated by calculating the Root Mean Square Error (RMSE) between predictions and observations. The RMSE produced by the water level forecasting models 1, 3 and 6 hours ahead with the M5MT technique is 0.2723, 0.6279 and 0.7176 meters, respectively, while with the ANN technique it is 0.1829, 0.3192 and 0.517 meters. The ANN technique has a better ability in predicting low flows, whereas the M5 Model Tree technique has a better ability in predicting high flows. Keywords: Water level forecasting, Solo River, M5 Model Tree, Artificial Neural Network
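
    The evaluation criterion used above is straightforward to reproduce; the sketch below computes the RMSE of two hypothetical sets of forecasts against observed water levels (the values are illustrative and are not the Solo River data).

```python
import numpy as np

def rmse(observed, predicted):
    """Root Mean Square Error between observations and model predictions."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((observed - predicted) ** 2)))

# Hypothetical hourly water levels (m) and two sets of model forecasts.
obs      = [27.1, 27.4, 27.9, 28.3, 28.6, 28.4]
ann_pred = [27.0, 27.5, 27.8, 28.1, 28.5, 28.3]
m5_pred  = [27.3, 27.1, 28.2, 28.6, 28.2, 28.8]

print("ANN  RMSE:", round(rmse(obs, ann_pred), 4))
print("M5MT RMSE:", round(rmse(obs, m5_pred), 4))
```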

  11. Results of Satellite Brightness Modeling Using Kriging Optimized Interpolation

    Science.gov (United States)

    Weeden, C.; Hejduk, M.

    At the 2005 AMOS conference, Kriging Optimized Interpolation (KOI) was presented as a tool to model satellite brightness as a function of phase angle and solar declination angle (J.M. Okada and M.D. Hejduk). Since November 2005, this method has been used to support the tasking algorithm for all optical sensors in the Space Surveillance Network (SSN). The satellite brightness maps generated by the KOI program are compared to each sensor's ability to detect an object as a function of the brightness of the background sky and angular rate of the object. This will determine if the sensor can technically detect an object based on an explicit calculation of the object's probability of detection. In addition, recent upgrades at Ground-Based Electro-Optical Deep Space Surveillance (GEODSS) sites have increased the amount and quality of brightness data collected and therefore available for analysis. This in turn has provided enough data to study the modeling process in more detail in order to obtain the most accurate brightness prediction of satellites. Analysis of two years of brightness data gathered from optical sensors and modeled via KOI solutions is outlined in this paper. By comparison, geo-stationary objects (GEO) were tracked less than non-GEO objects but had higher density tracking in phase angle due to artifices of scheduling. A statistically-significant fit to a deterministic model was possible less than half the time in both GEO and non-GEO tracks, showing that a stochastic model must often be used alone to produce brightness results, but such results are nonetheless serviceable. Within the Kriging solution, the exponential variogram model was the most frequently employed in both GEO and non-GEO tracks, indicating that monotonic brightness variation with both phase and solar declination angle is common and testifying to the suitability of the application of regionalized variable theory to this particular problem. Finally, the average nugget value, or
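
    As an illustration of the interpolation machinery referred to above (reduced here to one dimension, phase angle only, with arbitrary variogram parameters rather than the fitted KOI values), ordinary kriging with an exponential variogram might look as follows.

```python
import numpy as np

def exp_variogram(h, nugget=0.02, sill=0.3, corr_len=25.0):
    """Exponential variogram model: gamma(h) = nugget + sill * (1 - exp(-h / corr_len))."""
    return nugget + sill * (1.0 - np.exp(-h / corr_len))

def ordinary_kriging(x_obs, z_obs, x0):
    """Ordinary kriging prediction at x0 from 1-D samples (x_obs, z_obs)."""
    n = len(x_obs)
    G = np.ones((n + 1, n + 1))
    G[:n, :n] = exp_variogram(np.abs(x_obs[:, None] - x_obs[None, :]))
    np.fill_diagonal(G[:n, :n], 0.0)       # gamma(0) = 0: exact interpolation
    G[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = exp_variogram(np.abs(x_obs - x0))
    weights = np.linalg.solve(G, rhs)[:n]  # kriging weights (sum to 1 via Lagrange row)
    return float(weights @ z_obs)

# Hypothetical satellite brightness samples (visual magnitude) versus phase angle (deg).
phase = np.array([10.0, 25.0, 40.0, 60.0, 80.0])
mag = np.array([12.1, 12.6, 13.0, 13.6, 14.1])
print("predicted magnitude at 50 deg phase:", round(ordinary_kriging(phase, mag, 50.0), 2))
```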

  12. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  13. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.
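
    For context on the "well index" mentioned above, the sketch below evaluates a standard Peaceman-type well index for a conventional vertical well in a Cartesian grid block; it is background only and does not reproduce the project's improved techniques for non-conventional wells or wellbore flow. The parameter values are illustrative.

```python
import math

def peaceman_well_index(kx, ky, dx, dy, h, rw, skin=0.0):
    """Standard Peaceman-type well index for a vertical well in a Cartesian grid block."""
    # Peaceman equivalent radius for an anisotropic rectangular block.
    r_eq = (0.28 * math.sqrt(math.sqrt(ky / kx) * dx ** 2 + math.sqrt(kx / ky) * dy ** 2)
            / ((ky / kx) ** 0.25 + (kx / ky) ** 0.25))
    k_eff = math.sqrt(kx * ky)                     # effective isotropic permeability
    return 2.0 * math.pi * k_eff * h / (math.log(r_eq / rw) + skin)

# Illustrative values: 100 mD isotropic block, 100 m x 100 m x 10 m, 0.1 m wellbore radius.
mD = 9.869e-16                                     # 1 millidarcy in m^2
wi = peaceman_well_index(100 * mD, 100 * mD, 100.0, 100.0, 10.0, 0.1)
print(f"well index ~ {wi:.3e} m^3")
```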

  14. Excimer laser "corneal shaping": a new technique for customized trephination in penetrating keratoplasty. First experimental results in rabbits.

    Science.gov (United States)

    Schmitz, Klaus; Schreiber, Wolfram; Behrens-Baumann, Wolfgang

    2003-05-01

    The aim of the presented experimental work was to develop a technique for congruent trephination of donor and recipient corneas in free form using a 193-nm excimer laser and to study the clinical follow-up after the application of the technique in a rabbit model. In 12 New Zealand White rabbits homologous penetrating keratoplasty was performed. Trephination of donor buttons and recipient beds was achieved in six animals by conventional mechanical trephination and in six by excimer laser trephination with a guided laser beam in a non-circular geometry. The surgical procedure and its applicability to human subjects were evaluated and the postoperative clinical course was followed for 6 months. The surgical procedure of full-thickness excimer laser trephination could be performed reproducibly in the animal model both for dissection of the donor buttons and for preparation of the recipient beds. Keratoplasty was performed with kidney-shaped transplants after trephination in free form with the guided laser beam. Postoperative clinical follow-up did not show any differences between the two trephination groups that could be related to the applied trephination technique. After 6 months we observed well-adapted and clear corneal grafts, kidney-shaped in the excimer trephination group and circular in the mechanical trephination group. No side effects on the crystalline lens and the central retina could be clinically observed following excimer laser trephination. We present the first experimental study of keratoplasty with freely selected transplant geometry and perfect congruence of donor button and recipient bed. The application of this technique in certain corneal disorders in humans will offer improved treatment options in the future.

  15. Titan Chemistry: Results From A Global Climate Model

    Science.gov (United States)

    Wilson, Eric; West, R. A.; Friedson, A. J.; Oyafuso, F.

    2008-09-01

    We present results from a 3-dimensional global climate model of Titan's atmosphere and surface. This model, a modified version of NCAR's CAM-3 (Community Atmosphere Model), has been optimized for analysis of Titan's lower atmosphere and surface. With the inclusion of forcing from Saturn's gravitational tides, interaction from the surface, transfer of longwave and shortwave radiation, and parameterization of haze properties, constrained by Cassini observations, a dynamical field is generated, which serves to advect 14 long-lived species. The concentrations of these chemical tracers are also affected by 82 chemical reactions and the photolysis of 21 species, based on the Wilson and Atreya (2004) model, that provide sources and sinks for the advected species along with 23 additional non-advected radicals. In addition, the chemical contribution to haze conversion is parameterized along with the microphysical processes that serve to distribute haze opacity throughout the atmosphere. References Wilson, E.H. and S.K. Atreya, J. Geophys. Res., 109, E06002, 2004.

  16. Why Does a Kronecker Model Result in Misleading Capacity Estimates?

    CERN Document Server

    Raghavan, Vasanthan; Sayeed, Akbar M

    2008-01-01

    Many recent works that study the performance of multi-input multi-output (MIMO) systems in practice assume a Kronecker model where the variances of the channel entries, upon decomposition on to the transmit and the receive eigen-bases, admit a separable form. Measurement campaigns, however, show that the Kronecker model results in poor estimates for capacity. Motivated by these observations, a channel model that does not impose a separable structure has been recently proposed and shown to fit the capacity of measured channels better. In this work, we show that this recently proposed modeling framework can be viewed as a natural consequence of channel decomposition on to its canonical coordinates, the transmit and/or the receive eigen-bases. Using tools from random matrix theory, we then establish the theoretical basis behind the Kronecker mismatch at the low- and the high-SNR extremes: 1) Sparsity of the dominant statistical degrees of freedom (DoF) in the true channel at the low-SNR extreme, and 2) Non-regul...

  17. Techniques and results

    Digital Repository Service at National Institute of Oceanography (India)

    Mudholkar, A.V.; Pattan, J.N.; Sudhakar, M.

    was carried out to narrow down the area for detailed exploration. These studies were done for candidate mine sites in the area of high potential. The database to record details was created at the National Institute of Oceanography (NIO), Goa, India, and Engineers...

  18. New DNS and modeling results for turbulent pipe flow

    Science.gov (United States)

    Johansson, Arne; El Khoury, George; Grundestam, Olof; Schlatter, Philipp; Brethouwer, Geert; Linne Flow Centre Team

    2013-11-01

    The near-wall region of turbulent pipe and channel flows (as well as zero-pressure gradient boundary layers) has been shown to exhibit a very high degree of similarity in terms of all statistical moments and many other features, while even the mean velocity profile in the two cases exhibits significant differences in the outer region. The wake part of the profile, i.e. the deviation from the log-law, in the outer region is of substantially larger amplitude in pipe flow as compared to channel flow (although weaker than in boundary layer flow). This intriguing feature has been well known but has no simple explanation. Model predictions typically give identical results for the two flows. We have analyzed a new set of DNS for pipe and channel flows (el Khoury et al. 2013, Flow, Turbulence and Combustion) for friction Reynolds numbers up to 1000 and made comparative calculations with differential Reynolds stress models (DRSM). We have strong indications that the key factor behind the difference in mean velocity in the outer region can be coupled to differences in the turbulent diffusion in this region. This is also supported by DRSM results, where interesting differences are seen depending on the sophistication of modeling the turbulent diffusion coefficient.

  19. Some Results on Optimal Dividend Problem in Two Risk Models

    Directory of Open Access Journals (Sweden)

    Shuaiqi Zhang

    2010-12-01

    Full Text Available The compound Poisson risk model and the compound Poisson risk model perturbed by diffusion are considered in the presence of a dividend barrier with solvency constraints. Moreover, it extends the known result due to [1], which finds that the optimal dividend policy is of barrier type for a jump-diffusion model with exponentially distributed jumps. In this paper, it turns out that there can be two different solutions depending on the model's parameters. Furthermore, an interesting result is given: the proportional transaction cost has no effect on the dividend barrier. The objective of the corporation is to maximize the cumulative expected discounted dividend payout with solvency constraints before the time of ruin. It is well known that under some reasonable assumptions the optimal dividend strategy is a barrier strategy, i.e., there is a level b_1 (b_2) so that whenever the surplus goes above the level b_1 (b_2), the excess is paid out as dividends. However, the optimal level b_1 (b_2) may be unacceptably low from a solvency point of view. Therefore, some constraints should be imposed on an insurance company, such as not to pay out dividends unless the surplus has reached a level b_c^1 > b_1 (b_c^2 > b_2). We show that in this case a barrier strategy at b_c^1 (b_c^2) is optimal.
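
    A purely illustrative Monte Carlo sketch of a barrier dividend strategy for the (unperturbed) compound Poisson model, without transaction costs or solvency constraints and with arbitrary parameter values, is given below; it is not the paper's analytical solution.

```python
import numpy as np

def discounted_dividends(b, c=1.5, lam=1.0, mean_claim=1.0, x0=5.0,
                         horizon=200.0, delta=0.05, n_paths=2000, seed=4):
    """Monte Carlo estimate of expected discounted dividends under a barrier strategy.

    The surplus follows a compound Poisson risk process: premium rate c,
    exponentially distributed claims arriving at Poisson rate lam.  Whenever the
    surplus sits at the barrier b, the premium income is paid out as dividends.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        t, x, paid = 0.0, x0, 0.0
        while t < horizon:
            if x > b:                                   # lump-sum payout of any excess
                paid += (x - b) * np.exp(-delta * t)
                x = b
            dt = rng.exponential(1.0 / lam)             # time until the next claim
            reach = min(t + dt, horizon)
            grow = x + c * (reach - t)                  # surplus just before the claim
            if grow > b:
                t_hit = t + (b - x) / c                 # moment the barrier is reached
                # Dividends paid continuously at rate c while the surplus sits at b.
                paid += c * (np.exp(-delta * t_hit) - np.exp(-delta * reach)) / delta
                x = b
            else:
                x = grow
            t = reach
            if t >= horizon:
                break
            x -= rng.exponential(mean_claim)            # claim occurs
            if x < 0:
                break                                   # ruin: dividend stream stops
        total += paid
    return total / n_paths

for barrier in (4.0, 8.0):
    print(f"barrier b = {barrier}: expected discounted dividends ~ "
          f"{discounted_dividends(barrier):.3f}")
```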

  20. Generalization Technique for 2D+SCALE Dhe Data Model

    Science.gov (United States)

    Karim, Hairi; Rahman, Alias Abdul; Boguslawski, Pawel

    2016-10-01

    Different users or applications need models at different scales, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised on fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension such as scale or/and time into a 3D model, but the implementation of the scale dimension faces some problems due to the limitations and availability of data structures and data models. Nowadays, various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting the scale dimension. Generally, the Dual Half Edge (DHE) data structure was designed to work with any perfect 3D spatial object such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with the scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models with the DHE data structure forms the major discussion of this paper. We strongly believe that some advantages, such as local modification and topological elements (navigation, query and semantic information) in the scale dimension, could be used for future 3D-scale applications.