WorldWideScience

Sample records for linear techniques failed

  1. A comparison of publicly available linear MRI stereotaxic registration techniques.

    Science.gov (United States)

    Dadar, Mahsa; Fonov, Vladimir S; Collins, D Louis

    2018-07-01

    Linear registration to a standard space is one of the major steps in processing and analyzing magnetic resonance images (MRIs) of the brain. Here we present an overview of linear stereotaxic MRI registration and compare the performance of 5 publicly available and extensively used linear registration techniques in medical image analysis. A set of 9693 T1-weighted MR images was obtained for testing from 4 datasets: ADNI, PREVENT-AD, PPMI, and HCP, two of which have multi-center and multi-scanner data and three of which have longitudinal data. Each individual native image was linearly registered to the MNI ICBM152 average template using five versions of MRITOTAL from MINC tools, FLIRT from FSL, two versions of Elastix, spm_affreg from SPM, and ANTs linear registration techniques. Quality control (QC) images were generated from the registered volumes and viewed by an expert rater to assess the quality of the registrations. The QC image contained 60 sub-images (20 of each of axial, sagittal, and coronal views at different levels throughout the brain) overlaid with contours of the ICBM152 template, enabling the expert rater to label the registration as acceptable or unacceptable. The performance of the registration techniques was then compared across different datasets. In addition, the effect of image noise, intensity non-uniformity, age, head size, and atrophy on the performance of the techniques was investigated by comparing age, scaling factor, ventricle volume, brain volume, and white matter hyperintensity (WMH) volume between passed and failed cases for each method. The average registration failure rate among all datasets was 27.41%, 27.14%, 12.74%, 13.03%, 0.44% for the five versions of MRITOTAL techniques, 8.87% for ANTs, 11.11% for FSL, 12.35% for Elastix Affine, 24.40% for Elastix Similarity, and 30.66% for SPM. There were significant effects of signal to noise ratio, image intensity non-uniformity estimates, as well as age, head size, and

  2. Advanced analysis technique for the evaluation of linear alternators and linear motors

    Science.gov (United States)

    Holliday, Jeffrey C.

    1995-01-01

    A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.

  3. Assessment of Snared-Loop Technique When Standard Retrieval of Inferior Vena Cava Filters Fails

    International Nuclear Information System (INIS)

    Doody, Orla; Noe, Geertje; Given, Mark F.; Foley, Peter T.; Lyon, Stuart M.

    2009-01-01

    Purpose To identify the success and complications related to a variant technique used to retrieve inferior vena cava filters when simple snare approach has failed. Methods A retrospective review of all Cook Guenther Tulip filters and Cook Celect filters retrieved between July 2006 and February 2008 was performed. During this period, 130 filter retrievals were attempted. In 33 cases, the standard retrieval technique failed. Retrieval was subsequently attempted with our modified retrieval technique. Results The retrieval was successful in 23 cases (mean dwell time, 171.84 days; range, 5-505 days) and unsuccessful in 10 cases (mean dwell time, 162.2 days; range, 94-360 days). Our filter retrievability rates increased from 74.6% with the standard retrieval method to 92.3% when the snared-loop technique was used. Unsuccessful retrieval was due to significant endothelialization (n = 9) and caval penetration by the filter (n = 1). A single complication occurred in the group, in a patient developing pulmonary emboli after attempted retrieval. Conclusion The technique we describe increased the retrievability of the two filters studied. Hook endothelialization is the main factor resulting in failed retrieval and continues to be a limitation with these filters.

  4. A New Linearization Technique Using Multi-sinh Doublet

    Directory of Open Access Journals (Sweden)

    CEHAN, V.

    2009-06-01

    In this paper a new linearization technique using a multi-sinh doublet, implemented with a second-generation current conveyor, is presented. This new linearization technique is compared with the one based on multi-tanh doublets with series-connected linearization diodes on the branches. The comparative study of the two linearization techniques is carried out using both dynamic range analysis, expressed by the linearity error and the calculated THD value of the output current, and the noise behavior of the two analyzed doublets. For the multi-sinh linearization technique proposed in the paper, a method is presented which increases the dynamic range while keeping the transconductance value constant. This is done by using two design parameters: the number of series-connected diodes N, which specifies the desired linear operating range, and the emitter-area ratio k of the input-stage transistors, which establishes the transconductance value. The paper also shows that if the transconductances of the two analyzed doublets are identical, and for the same values of the N and k parameters, the current consumption of the multi-sinh doublet is always smaller than that of the multi-tanh doublet.

  5. Analysis of the efficiency of the linearization techniques for solving multi-objective linear fractional programming problems by goal programming

    Directory of Open Access Journals (Sweden)

    Tunjo Perić

    2017-01-01

    This paper presents and analyzes the applicability of three linearization techniques used for solving multi-objective linear fractional programming problems using the goal programming method. The three linearization techniques are: (1) Taylor’s polynomial linearization approximation, (2) the method of variable change, and (3) a modification of the method of variable change proposed in [20]. All three linearization techniques are presented and analyzed in two variants: (a) using the optimal values of the objective functions as the decision makers’ aspirations, and (b) with the decision makers’ aspirations given by the decision makers. As the criteria for the analysis we use the efficiency of the obtained solutions and the difficulties the analyst encounters in preparing the linearization models. To analyze the applicability of the linearization techniques incorporated in the linear goal programming method, we use an example of a financial structure optimization problem.
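
    As an illustration of the first of these techniques, a first-order Taylor linearization of a generic linear fractional objective about an expansion point x* can be sketched as follows (generic notation, not taken from the paper; c_i, d_i, alpha_i, beta_i and x* are placeholders):

      f_i(x) = \frac{c_i^{\top} x + \alpha_i}{d_i^{\top} x + \beta_i}
             \approx f_i(x^{*}) + \nabla f_i(x^{*})^{\top} (x - x^{*}),
      \qquad
      \nabla f_i(x^{*}) = \frac{c_i \,(d_i^{\top} x^{*} + \beta_i) - d_i \,(c_i^{\top} x^{*} + \alpha_i)}{(d_i^{\top} x^{*} + \beta_i)^{2}}.

    The linearized objectives can then be used directly as goals in the linear goal programming model.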

  6. All-Arthroscopic Revision Eden-Hybinette Procedure for Failed Instability Surgery: Technique and Preliminary Results.

    Science.gov (United States)

    Giannakos, Antonios; Vezeridis, Peter S; Schwartz, Daniel G; Jany, Richard; Lafosse, Laurent

    2017-01-01

    To describe the technique of an all-arthroscopic Eden-Hybinette procedure in the revision setting for treatment of a failed instability procedure, particularly after failed Latarjet, as well as to present preliminary results of this technique. Between 2007 and 2011, 18 shoulders with persistent instability after failed instability surgery were treated with an arthroscopic Eden-Hybinette technique using an autologous bicortical iliac crest bone graft. Of 18 patients, 12 (9 men, 3 women) were available for follow-up. The average follow-up was 28.8 months (range, 15 to 60 months). A Latarjet procedure was performed as an index surgery in 10 patients (83%). Two patients (17%) had a prior arthroscopic Bankart repair. Eight patients (67%) obtained a good or excellent result, whereas 4 patients (33%) reported a fair or poor result. Seven patients (58%) returned to sport activities. A positive apprehension test persisted in 5 patients (42%), including 2 patients (17%) with recurrent subluxations. The Rowe score increased from 30.00 to 78.33 points (P Instability Index score showed a good result of 28.71% (603 points). The average anterior flexion was 176° (range, 150° to 180°), and the average external rotation was 66° (range, 0° to 90°). Two patients (16.67%) showed a progression of glenohumeral osteoarthritic changes, with each patient increasing by one stage in the Samilson-Prieto classification. All 4 patients (33%) with a fair or poor result had a nonunion identified on postoperative computed tomography scan. An all-arthroscopic Eden-Hybinette procedure in the revision setting for failed instability surgery, although technically demanding, is a safe, effective, and reproducible technique. Although the learning curve is considerable, this procedure offers all the advantages of arthroscopic surgery and allows reconstruction of glenoid defects and restoration of shoulder stability in this challenging patient population. In our hands, this procedure yields good

  7. A comparison of linear and nonlinear statistical techniques in performance attribution.

    Science.gov (United States)

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on standard linear multifactor model and three nonlinear techniques-model selection, additive models, and neural networks-are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.

  8. Compensation techniques for non-linearities in H-bridge inverters

    Directory of Open Access Journals (Sweden)

    Daniel Zammit

    2016-12-01

    This paper presents compensation techniques for component non-linearities in H-bridge inverters such as those used in grid-connected photovoltaic (PV) inverters. Novel compensation techniques depending on the switching device current were formulated to compensate for the non-linearities in inverter circuits caused by the voltage drops on the switching devices. Both simulation and experimental results will be presented. Testing was carried out on a PV inverter which was designed and constructed for this research. Very satisfactory results were obtained from all the compensation techniques presented; however, the exact compensation method was the most effective, providing the highest reduction in harmonics.
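
    A minimal sketch of the general idea of current-dependent compensation follows: estimate the switching-device voltage drop from the instantaneous load current and add it, in the direction of current flow, to the reference voltage before modulation. The drop-model parameters (v_on, r_on) and the function names are illustrative assumptions, not the exact method of the paper.

      import numpy as np

      def device_drop(i, v_on=1.0, r_on=0.05):
          # Simple switching-device drop model: a threshold voltage plus a resistive part.
          return v_on + r_on * abs(i)

      def compensated_reference(v_ref, i_load):
          # Shift the reference by the estimated drop, in the direction of current flow,
          # so the averaged inverter output matches the ideal reference despite the drops.
          return v_ref + np.sign(i_load) * device_drop(i_load)

      # Example: one 50 Hz fundamental period with a slightly lagging load current.
      t = np.linspace(0.0, 0.02, 400)
      v_ref = 230 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)
      i_load = 10 * np.sin(2 * np.pi * 50 * t - 0.3)
      v_cmd = compensated_reference(v_ref, i_load)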

  9. Linearizing of Low Noise Power Amplifier Using 5.8GHz Double Loop Feedforward Linearization Technique

    Directory of Open Access Journals (Sweden)

    Abdulkareem Mokif Obais

    2017-05-01

    In this paper, a double loop feedforward linearization technique is analyzed and built with a MMIC low noise amplifier “HMC753” as the main amplifier and a two-stage class-A power amplifier as the error amplifier. The system is operated with a 5 V DC supply at a center frequency of 5.8 GHz and a bandwidth of 500 MHz. The proposed technique increases the linearity of the MMIC amplifier from 18 dBm at the 1 dB compression point to more than 26 dBm. In addition, the proposed system is tested with an OFDM signal and it reveals good response in maximizing the linearity region and eliminating distortions. The proposed system is designed and simulated on Advanced Wave Research-Microwave Office (AWR-MWO).

  10. CHEBYSHEV ACCELERATION TECHNIQUE FOR SOLVING FUZZY LINEAR SYSTEM

    Directory of Open Access Journals (Sweden)

    S.H. Nasseri

    2011-07-01

    In this paper, the Chebyshev acceleration technique is used to solve the fuzzy linear system (FLS). This method is discussed in detail and followed by a summary of some other acceleration techniques. Moreover, we show that in some situations where methods such as Jacobi, Gauss-Seidel, SOR and conjugate gradient are divergent, our proposed method is applicable, and the acquired results are illustrated by some numerical examples.
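
    The sketch below applies Chebyshev (semi-iterative) acceleration to a plain Jacobi splitting of a crisp system Ax = b; in the fuzzy setting the same acceleration is used after the FLS has been embedded into an extended crisp system. The example matrix and the eigenvalue-based estimate of the spectral radius are illustrative assumptions.

      import numpy as np

      def chebyshev_accelerated_jacobi(A, b, iters=50):
          D = np.diag(np.diag(A))
          G = np.eye(len(b)) - np.linalg.solve(D, A)    # Jacobi iteration matrix
          c = np.linalg.solve(D, b)
          rho = max(abs(np.linalg.eigvals(G)))          # spectral radius estimate
          x_prev = np.zeros_like(b)
          x = G @ x_prev + c                            # ordinary first Jacobi step
          omega = 2.0 / (2.0 - rho**2)                  # Chebyshev weight omega_2
          for _ in range(iters):
              x_next = omega * (G @ x + c - x_prev) + x_prev
              x_prev, x = x, x_next
              omega = 1.0 / (1.0 - 0.25 * rho**2 * omega)
          return x

      A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
      b = np.array([1.0, 2.0, 3.0])
      print(chebyshev_accelerated_jacobi(A, b))         # close to np.linalg.solve(A, b)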

  11. CHEBYSHEV ACCELERATION TECHNIQUE FOR SOLVING FUZZY LINEAR SYSTEM

    Directory of Open Access Journals (Sweden)

    S.H. Nasseri

    2009-10-01

    In this paper, the Chebyshev acceleration technique is used to solve the fuzzy linear system (FLS). This method is discussed in detail and followed by a summary of some other acceleration techniques. Moreover, we show that in some situations where methods such as Jacobi, Gauss-Seidel, SOR and conjugate gradient are divergent, our proposed method is applicable, and the acquired results are illustrated by some numerical examples.

  12. Non-linear wave equations: Mathematical techniques

    International Nuclear Information System (INIS)

    1978-01-01

    An account of certain well-established mathematical methods, which prove useful for dealing with non-linear partial differential equations, is presented. Within the strict framework of Functional Analysis, it describes Semigroup Techniques in Banach Spaces as well as variational approaches towards critical points. Detailed proofs are given of the existence of local and global solutions of the Cauchy problem and of the stability of stationary solutions. The formal approach based upon invariance under Lie transformations deserves attention due to its wide range of applicability, even if the explicit solutions thus obtained do not allow for a deep analysis of the equations. A comprehensive introduction to the inverse scattering approach and to the solution concept for certain non-linear equations of physical interest is also presented. A detailed discussion is made of certain convergence and stability problems which arise in this context and whose importance need not be emphasized. (author)

  13. Object matching using a locally affine invariant and linear programming techniques.

    Science.gov (United States)

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires a lot fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
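
    The central construction, representing each template point as an affine combination of its neighbours and reusing the weights to score a candidate match, can be sketched with a small least-squares computation (generic NumPy illustration; the neighbour selection and names are assumptions, not the paper's exact formulation):

      import numpy as np

      def affine_weights(point, neighbors):
          # Least-squares weights w with sum(w) = 1 such that neighbors.T @ w ~= point.
          # neighbors is a (k, 2) array of neighbouring template points.
          A = np.vstack([neighbors.T, np.ones(len(neighbors))])   # (3, k) system
          y = np.append(point, 1.0)
          w, *_ = np.linalg.lstsq(A, y, rcond=None)
          return w

      def reconstruction_error(matched_point, matched_neighbors, w):
          # Penalty for how badly a candidate match preserves the local affine structure.
          return np.linalg.norm(matched_neighbors.T @ w - matched_point)

      template_pt = np.array([1.0, 2.0])
      template_nbrs = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0], [0.5, 2.5]])
      w = affine_weights(template_pt, template_nbrs)
      print(reconstruction_error(template_pt, template_nbrs, w))  # ~0 for the template itself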

  14. Distributed CMOS Bidirectional Amplifiers Broadbanding and Linearization Techniques

    CERN Document Server

    El-Khatib, Ziad; Mahmoud, Samy A

    2012-01-01

    This book describes methods to design distributed amplifiers useful for performing circuit functions such as duplexing, paraphase amplification, phase shifting, power splitting, and power combining applications. A CMOS bidirectional distributed amplifier is presented that combines for the first time device-level with circuit-level linearization, suppressing the third-order intermodulation distortion. It is implemented in 0.13μm RF CMOS technology for use in highly linear, low-cost UWB Radio-over-Fiber communication systems. Describes CMOS distributed amplifiers for optoelectronic applications such as Radio-over-Fiber systems, base station transceivers and picocells; Presents the most recent techniques for linearization of CMOS distributed amplifiers; Includes coverage of CMOS I-V transconductors, as well as CMOS on-chip inductor integration and modeling; Includes circuit applications for UWB Radio-over-Fiber networks.

  15. Optically stimulated luminescence from quartz measured using the linear modulation technique

    DEFF Research Database (Denmark)

    Bulur, E.; Bøtter-Jensen, L.; Murray, A.S.

    2000-01-01

    The optically stimulated luminescence (OSL) from heated natural quartz has been investigated using the linear modulation technique (LMT), in which the excitation light intensity is increased linearly during stimulation. In contrast to conventional stimulation, which usually produces a monotonically decreasing signal, linearly increasing the stimulation power gives peaks in the signal as a function of time...

  16. Chiropractic biophysics technique: a linear algebra approach to posture in chiropractic.

    Science.gov (United States)

    Harrison, D D; Janik, T J; Harrison, G R; Troyanovich, S; Harrison, D E; Harrison, S O

    1996-10-01

    This paper discusses linear algebra as applied to human posture in chiropractic, specifically chiropractic biophysics technique (CBP). Rotations, reflections and translations are geometric functions studied in vector spaces in linear algebra. These mathematical functions are termed rigid body transformations and are applied to segmental spinal movement in the literature. Review of the literature indicates that these linear algebra concepts have been used to describe vertebral motion. However, these rigid body movers are presented here as applying to the global postural movements of the head, thoracic cage and pelvis. The unique inverse functions of rotations, reflections and translations provide a theoretical basis for making postural corrections in neutral static resting posture. Chiropractic biophysics technique (CBP) uses these concepts in examination procedures, manual spinal manipulation, instrument assisted spinal manipulation, postural exercises, extension traction and clinical outcome measures.
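
    As a toy illustration of the rotation/translation language used above, the following builds a planar rigid-body transformation and its inverse; the inverse is the sense in which an inverse function describes a postural "correction" (a generic NumPy example under that reading, not CBP's clinical procedure):

      import numpy as np

      def rigid_transform(theta, t):
          # Rotation by theta (radians) followed by translation t, stored as (R, t).
          R = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
          return R, np.asarray(t, dtype=float)

      def apply(R, t, x):
          return R @ x + t

      def inverse(R, t):
          # Inverse transform: rotate back and undo the translation.
          return R.T, -R.T @ t

      R, t = rigid_transform(np.deg2rad(10), [2.0, -1.0])   # a postural displacement
      Rc, tc = inverse(R, t)                                 # the corrective transform
      x = np.array([0.5, 1.5])
      print(apply(Rc, tc, apply(R, t, x)))                   # recovers x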

  17. Failed endotracheal intubation

    Directory of Open Access Journals (Sweden)

    Sheykhol Islami V

    1995-07-01

    The incidence of failed intubation is higher in obstetric than in other surgical patients. Failed intubation was the second commonest cause of mortality during anesthesia. Bearing in mind that failure to intubate may be unavoidable in certain circumstances, it is worth reviewing the factors which may contribute to a disastrous outcome. Priorities of subsequent management must include maintaining oxygenation and preventing aspiration of gastric contents. Fiberoptic intubation is now the technique of choice, with a high success rate and the least trauma to the patient.

  18. A High Performance Silicon-on-Insulator LDMOST Using Linearly Increasing Thickness Techniques

    International Nuclear Information System (INIS)

    Yu-Feng, Guo; Zhi-Gong, Wang; Gene, Sheu; Jian-Bing, Cheng

    2010-01-01

    We present a new technique to achieve uniform lateral electric field and maximum breakdown voltage in lateral double-diffused metal-oxide-semiconductor transistors fabricated on silicon-on-insulator substrates. A linearly increasing drift-region thickness from the source to the drain is employed to improve the electric field distribution in the devices. Compared to the lateral linear doping technique and the reduced surface field technique, two-dimensional numerical simulations show that the new device exhibits reduced specific on-resistance, maximum off- and on-state breakdown voltages, superior quasi-saturation characteristics and improved safe operating area. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  19. Immediate Reconstruction of Failed Implants in the Esthetic Zone Using a Flapless Technique and Autogenous Composite Tuberosity Graft

    NARCIS (Netherlands)

    Raghoebar, Gerry M; Meijer, Henny J A; van Minnen, Baucke; Vissink, Arjan

    We describe a technique for immediate reconstruction of bone after removal of failed dental implants in the esthetic region to optimize the esthetic outcome of retreatment. We conducted a study of 16 consecutive patients in whom the bony defect resulting from implant removal was immediately

  20. Linear models in the mathematics of uncertainty

    CERN Document Server

    Mordeson, John N; Clark, Terry D; Pham, Alex; Redmond, Michael A

    2013-01-01

    The purpose of this book is to present new mathematical techniques for modeling global issues. These mathematical techniques are used to determine linear equations between a dependent variable and one or more independent variables in cases where standard techniques such as linear regression are not suitable. In this book, we examine cases where the number of data points is small (effects of nuclear warfare), where the experiment is not repeatable (the breakup of the former Soviet Union), and where the data is derived from expert opinion (how conservative is a political party). In all these cases the data  is difficult to measure and an assumption of randomness and/or statistical validity is questionable.  We apply our methods to real world issues in international relations such as  nuclear deterrence, smart power, and cooperative threat reduction. We next apply our methods to issues in comparative politics such as successful democratization, quality of life, economic freedom, political stability, and fail...

  1. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  2. Better Drumming Through Calibration: Techniques for Pre-Performance Robotic Percussion Optimization

    OpenAIRE

    Murphy, Jim; Kapur, Ajay; Carnegie, Dale

    2012-01-01

    A problem with many contemporary musical robotic percussion systems lies in the fact that solenoids fail to respond linearly to linear increases in input velocity. This nonlinearity forces performers to individually tailor their compositions to specific robotic drummers. To address this problem, we introduce a method of pre-performance calibration using metaheuristic search techniques. A variety of such techniques are introduced and evaluated and the results of the optimized solenoid-based p...

  3. Incomplete factorization technique for positive definite linear systems

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1980-01-01

    This paper describes a technique for solving the large sparse symmetric linear systems that arise from the application of finite element methods. The technique combines an incomplete factorization method called the shifted incomplete Cholesky factorization with the method of generalized conjugate gradients. The shifted incomplete Cholesky factorization produces a splitting of the matrix A that is dependent upon a parameter α. It is shown that if A is positive definite, then there is some α for which this splitting is possible and that this splitting is at least as good as the Jacobi splitting. The method is shown to be more efficient on a set of test problems than either direct methods or explicit iteration schemes

  4. Laparoscopic hysterotomy for a failed termination of pregnancy: a first case report with demonstration of a new surgical technique.

    Science.gov (United States)

    Baekelandt, Jan; Bosteels, Jan

    2015-01-01

    To show a new technique of hysterotomy via laparoscopy for a failed termination of pregnancy as an alternative for a hysterotomy via laparotomy. Step-by-step explanation of the technique using parts of the original video of the procedure (Canadian Task Force classification III). A 39-year-old woman, para 1 gravida 2, was diagnosed with a trisomy 21 pregnancy at 18 weeks' gestation. After 7 days of failed medical and mechanical induction, including misoprostol per vaginam, intravenous sulprostone , intravenous oxytocin, a transcervical Foley catheter, and a transcervical Bakri balloon (Cooke Medical, Bloomington, IN), the decision was made to perform a laparoscopic hysterotomy. A laparoscopic hysterotomy was performed with extraction of the fetus and placenta in an endobag. The uterus was sutured using a double layer of 2 continuous Vicryl 1 sutures (Ethicon, Cincinnati, OH). The umbilical incision was enlarged to 2.5 cm to extract the endobags. The procedure was performed using only standard reusable laparoscopic equipment. The patient's postoperative recovery was uneventful. On the postoperative ultrasound, we suspected that a small piece of placental tissue had been left in the uterine cavity. A hysteroscopy confirmed this and showed a normal uterine cavity. The small placental fragment regressed spontaneously on the follow-up ultrasounds. A 2-year follow-up of the patient has shown no minor or major complications. The patient has used contraception since the procedure because she has no further desire for childbearing. This new technique can help surgeons avoid a laparotomy when a hysterotomy for a failed midtrimester termination of pregnancy is required. The risk of uterine rupture in a next pregnancy needs to be taken into account. This frugally innovative technique may potentially be performed in a low-resource setting because only standard reusable laparoscopic equipment was used. Copyright © 2015 AAGL. Published by Elsevier Inc. All rights reserved.

  5. Technique tip: Simultaneous first metatarsal lengthening and metatarsophalangeal joint fusion for failed hallux valgus surgery with transfer metatarsalgia.

    Science.gov (United States)

    Chowdhary, Ashwin; Drittenbass, Lisca; Stern, Richard; Assal, Mathieu

    2017-03-01

    Failed hallux valgus surgery may result in residual or recurrent hallux valgus, as well as transfer metatarsalgia. The present technical tip concerns the combination of fusion of the first metatarsophalangeal (MTP) joint and lengthening of the first metatarsal (MT) through a scarf osteotomy. Six patients underwent the presented technique, all for the indication of failed hallux valgus surgery with shortening of the first MT and degenerative changes in the 1st MTP joint. Follow-up at six months revealed all patients had complete healing of the osteotomy and arthrodesis sites. They were all asymptomatic, fully active, and completely satisfied with the outcome. Combined fusion of the first MTP joint and lengthening of the first MT through a scarf osteotomy results in an excellent outcome in patients with failed hallux valgus surgery with shortening of the first MT and degenerative changes in the 1st MTP joint. Copyright © 2016 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.

  6. GDTM-Padé technique for the non-linear differential-difference equation

    Directory of Open Access Journals (Sweden)

    Lu Jun-Feng

    2013-01-01

    This paper focuses on applying the GDTM-Padé technique to solve the non-linear differential-difference equation. The bell-shaped solitary wave solution of the Belov-Chaltikian lattice equation is considered. Comparison between the approximate solutions and the exact ones shows that this technique is an efficient and attractive method for solving differential-difference equations.

  7. Variance estimation for complex indicators of poverty and inequality using linearization techniques

    Directory of Open Access Journals (Sweden)

    Guillaume Osier

    2009-12-01

    The paper presents the Eurostat experience in calculating measures of precision, including standard errors, confidence intervals and design effect coefficients - the ratio of the variance of a statistic with the actual sample design to the variance of that statistic with a simple random sample of the same size - for the "Laeken" indicators, that is, a set of complex indicators of poverty and inequality which had been set out in the framework of the EU-SILC project (European Statistics on Income and Living Conditions). The Taylor linearization method (Tepping, 1968; Woodruff, 1971; Wolter, 1985; Tille, 2000) is a well-established method to obtain variance estimators for nonlinear statistics such as ratios, correlation or regression coefficients. It consists of approximating a nonlinear statistic with a linear function of the observations by using first-order Taylor series expansions. Then, an easily found variance estimator of the linear approximation is used as an estimator of the variance of the nonlinear statistic. Although the Taylor linearization method handles all the nonlinear statistics which can be expressed as a smooth function of estimated totals, the approach fails to encompass the "Laeken" indicators since the latter have more complex mathematical expressions. Consequently, a generalized linearization method (Deville, 1999), which relies on the concept of the influence function (Hampel, Ronchetti, Rousseeuw and Stahel, 1986), has been implemented. After presenting the EU-SILC instrument and the main target indicators for which variance estimates are needed, the paper elaborates on the main features of the linearization approach based on influence functions. Ultimately, estimated standard errors, confidence intervals and design effect coefficients obtained from this approach are presented and discussed.
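
    For a simple nonlinear statistic such as a ratio, the Taylor linearization step described above amounts to replacing the statistic by a weighted total of a linearized variable (standard textbook form in generic notation, not the Laeken-specific derivation):

      \hat{R} = \frac{\hat{Y}}{\hat{X}} = \frac{\sum_{k \in s} w_k y_k}{\sum_{k \in s} w_k x_k},
      \qquad
      z_k = \frac{y_k - \hat{R} x_k}{\hat{X}},
      \qquad
      \widehat{\mathrm{Var}}(\hat{R}) \approx \widehat{\mathrm{Var}}\Big(\sum_{k \in s} w_k z_k\Big).

    The influence-function approach generalizes the construction of the linearized variable z_k to indicators, such as the Laeken set, that are not smooth functions of estimated totals.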

  8. Weighted thinned linear array design with the iterative FFT technique

    CSIR Research Space (South Africa)

    Du Plessis, WP

    2011-09-01

    Full Text Available techniques utilise simulated annealing [3]?[5], [10], mixed integer linear programming [7], genetic algorithms [9], and a hyrid approach combining a genetic algorithm and a local optimiser [8]. The iterative Fourier technique (IFT) developed by Keizer [2... algorithm being well- suited to obtaining low CTRs. Test problems from the literature are considered, and the results obtained with the IFT considerably exceed those achieved with other algorithms. II. DESCRIPTION OF THE ALGORITHM A flowchart describing...

  9. A TECHNIQUE OF EXPERIMENTAL INVESTIGATIONS OF LINEAR IMPULSE ELECTROMECHANICAL CONVERTERS

    Directory of Open Access Journals (Sweden)

    V.F. Bolyukh

    2017-04-01

    Purpose. Development of a technique for experimental studies of the parameters of linear pulse electromechanical converters, which are used as shock-power devices and electromechanical accelerators, and comparison of the experimental results with the calculated indices obtained using the mathematical model. Methodology. The method of experimental investigation of a linear electromechanical converter is that the electrical parameters (inductor winding current) and the mechanical parameters characterizing the power and speed indicators of the yoke with the actuator are recorded simultaneously. Power indicators are primarily important for shock-power devices, and high velocity for electromechanical accelerators. Power indicators were investigated using piezoelectric sensors, a system of strain sensors, a pressure pulsation sensor and high-speed video recording. Velocity indicators were investigated using a resistive movement sensor, which allows the character of the armature movement with the actuating element to be recorded at each moment. Results. A technique of experimental research is developed which consists in the simultaneous recording of the electrical and the mechanical power and velocity parameters of the linear pulse electromechanical converter. In the converter used as a shock-power device, power indicators are recorded using a piezoelectric transducer, a strain sensor system, a pressure pulsation sensor and high-speed video. The parameters of the inductor winding current pulse, the time lag of the mechanical processes in relation to the onset of the inductor winding current, the average speed of the yoke, and the magnitude and impulse of the electrodynamic forces acting on the struck plate are experimentally determined. In the converter used as an electromechanical accelerator, velocity performance is recorded using resistive displacement sensors. It is shown that electromechanical converter processes have a complex spatial-temporal character. The experimental results are in good agreement with the calculated

  10. Linearization and efficiency enhancement techniques for silicon power amplifiers from RF to mmW

    CERN Document Server

    Kerhervé, Eric

    2015-01-01

    This book provides an overview of current efficiency enhancement and linearization techniques for silicon power amplifier designs. It examines the latest state of the art technologies and design techniques to address challenges for RF cellular mobile, base stations, and RF and mmW WLAN applications. Coverage includes material on current silicon (CMOS, SiGe) RF and mmW power amplifier designs, focusing on advantages and disadvantages compared with traditional GaAs implementations. With this book you will learn: The principles of linearization and efficiency improvement techniques; The arch...

  11. Analytical vs. Simulation Solution Techniques for Pulse Problems in Non-linear Stochastic Dynamics

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.

    Advantages and disadvantages of available analytical and simulation techniques for pulse problems in non-linear stochastic dynamics are discussed. First, random pulse problems, both those which do and do not lead to Markov theory, are presented. Next, the analytical and analytically-numerical tec...

  12. Linear circuit transfer functions an introduction to fast analytical techniques

    CERN Document Server

    Basso, Christophe P

    2016-01-01

    Linear Circuit Transfer Functions: An introduction to Fast Analytical Techniques teaches readers how to determine transfer functions of linear passive and active circuits by applying Fast Analytical Circuits Techniques. Building on their existing knowledge of classical loop/nodal analysis, the book improves and expands their skills to unveil transfer functions in a swift and efficient manner. Starting with simple examples, the author explains step-by-step how expressing circuit time constants in different configurations leads to writing transfer functions in a compact and insightful way. By learning how to organize numerators and denominators in the fastest possible way, readers will speed up analysis and predict the frequency response of simple to complex circuits. In some cases, they will be able to derive the final expression by inspection, without writing a line of algebra. Key features: * Emphasizes analysis through employing time constant-based methods discussed in other text books but not widely us...

  13. Emergency gastroduodenal artery embolization by sandwich technique for angiographically obvious and oblivious, endotherapy failed bleeding duodenal ulcers

    Energy Technology Data Exchange (ETDEWEB)

    Anil, G., E-mail: ivyanil10@gmail.com [Department of Diagnostic Imaging, National University Hospital (Singapore); Department of Radiology, Changi General Hospital (Singapore); Tan, A.G.S.; Cheong, H.-W.; Ng, K.-S.; Teoh, W.-C. [Department of Radiology, Changi General Hospital (Singapore)

    2012-05-15

    Aim: To determine the feasibility, safety, and efficacy of adopting a standardized protocol for emergency transarterial embolization (TAE) of the gastroduodenal artery (GDA) with a uniform sandwich technique in endotherapy-failed bleeding duodenal ulcers (DU). Materials and methods: Between December 2009 and December 2010, 15 patients with endotherapy-failed bleeding DU underwent embolization. Irrespective of active extravasation, the segment of the GDA supplying the bleeding DU as indicated by endoscopically placed clips was embolized by a uniform sandwich technique with gelfoam between metallic coils. The clinical profile of the patients, re-bleeding, mortality rates, and response time of the interventional radiology (IR) team were recorded. The angioembolizations were reviewed for their technical success, clinical success, and complications. Mean duration of follow-up was 266.5 days. Results: Active contrast-medium extravasation was seen in three patients (20%). Early re-bleeding was noted in two patients (13.33%). No patient required surgery. There was 100% technical success, while primary and secondary clinical success rates for TAE were 86.6 and 93.3%, respectively. Focal pancreatitis was the single major procedure-related complication. There was no direct bleeding-DU-related death. The response time of the IR service averaged 150 min (range 60-360 min) with mean value of 170 min. Conclusion: Emergency embolization of the GDA using the sandwich technique is a safe and highly effective therapeutic option for bleeding DUs refractory to endotherapy. A prompt response from the IR service can be ensured with an institutional protocol in place for such common medical emergencies.

  14. A review on prognostic techniques for non-stationary and non-linear rotating systems

    Science.gov (United States)

    Kan, Man Shan; Tan, Andy C. C.; Mathew, Joseph

    2015-10-01

    The field of prognostics has attracted significant interest from the research community in recent times. Prognostics enables the prediction of failures in machines resulting in benefits to plant operators such as shorter downtimes, higher operation reliability, reduced operations and maintenance cost, and more effective maintenance and logistics planning. Prognostic systems have been successfully deployed for the monitoring of relatively simple rotating machines. However, machines and associated systems today are increasingly complex. As such, there is an urgent need to develop prognostic techniques for such complex systems operating in the real world. This review paper focuses on prognostic techniques that can be applied to rotating machinery operating under non-linear and non-stationary conditions. The general concept of these techniques, the pros and cons of applying these methods, as well as their applications in the research field are discussed. Finally, the opportunities and challenges in implementing prognostic systems and developing effective techniques for monitoring machines operating under non-stationary and non-linear conditions are also discussed.

  15. A non-linear procedure for the numerical analysis of crack development in beams failing in shear

    Directory of Open Access Journals (Sweden)

    P. Bernardi

    2016-01-01

    In this work, a consistent formulation for the representation of concrete behavior before and after cracking has been implemented into a non-linear model for the analysis of reinforced concrete structures, named 2D-PARC. Several studies have indeed pointed out that the adoption of an effective modeling for concrete, combined with an accurate failure criterion, is crucial for the correct prediction of the structural behavior, not only in terms of failure load, but also with reference to a realistic representation of crack initiation and development. This last aspect is particularly relevant at serviceability conditions in order to verify the fulfillment of structural requirements provided by Design Codes, which limit the maximum crack width due to appearance and durability issues. In more detail, a constitutive model originally proposed by Ottosen and based on non-linear elasticity has been incorporated into 2D-PARC in order to improve the numerical efficiency of the adopted algorithm, providing at the same time an accurate prediction of the structural response. The effectiveness of this procedure has been verified against significant experimental results available in the technical literature and relative to reinforced concrete beams without stirrups failing in shear, which represent a problem of great theoretical and practical importance in the field of structural engineering. Numerical results have been compared to experimental evidence not only in terms of global structural response (i.e., applied load vs. midspan deflection), but also in terms of crack pattern evolution and maximum crack widths.

  16. Solving Linear Equations by Classical Jacobi-SR Based Hybrid Evolutionary Algorithm with Uniform Adaptation Technique

    OpenAIRE

    Jamali, R. M. Jalal Uddin; Hashem, M. M. A.; Hasan, M. Mahfuz; Rahman, Md. Bazlar

    2013-01-01

    Solving a set of simultaneous linear equations is probably the most important topic in numerical methods. For solving linear equations, iterative methods are preferred over direct methods, especially when the coefficient matrix is sparse. The rate of convergence of an iteration method is increased by using the Successive Relaxation (SR) technique. However, the SR technique is very sensitive to the relaxation factor ω. Recently, hybridization of classical Gauss-Seidel based successive relaxation t...
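
    A plain SOR sweep, showing where the relaxation factor ω enters, is sketched below (crisp NumPy illustration with an assumed fixed ω; the paper's evolutionary adaptation of ω is not reproduced):

      import numpy as np

      def sor(A, b, omega=1.25, iters=100, tol=1e-10):
          n = len(b)
          x = np.zeros(n)
          for _ in range(iters):
              x_old = x.copy()
              for i in range(n):
                  # Use already-updated entries (j < i) and old entries (j > i).
                  sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                  x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
              if np.linalg.norm(x - x_old, np.inf) < tol:
                  break
          return x

      A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
      b = np.array([2.0, 4.0, 10.0])
      print(sor(A, b))   # agrees with np.linalg.solve(A, b)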

  17. The efficacy of laparoscopic intracorporeal linear suture technique as a strategy for reducing recurrences in pediatric inguinal hernia.

    Science.gov (United States)

    Lee, S R; Choi, S B

    2017-06-01

    Pediatric laparoscopic herniorrhaphy has rare complications, but recurrence might occur. The purpose of this manuscript is to evaluate the efficacy of linear suture technique of laparoscopic pediatric herniorrhaphy in reducing recurrences. Laparoscopic surgery was performed on 2223 pediatric patients (under 10 years old) from September 2012 to December 2014 in Damsoyu Hospital, Seoul, Republic of Korea. The causes of recurrence were investigated case by case. The patients were categorized into two groups according to the suture method used in closing the hernia orifice: Group 1 (purse-string suture, 1009 patients) and Group 2 (linear suture, 1214 patients). There were 1413 (63.6%) male and 810 (36.4%) female patients. Mean age was 30.5 ± 29.2 months. A significantly higher proportion of male patients, contralateral patent processus vaginalis, and less proportion of recurrence were observed in Group 2. There were ten cases of recurrence in Group 1 because the internal ring suture could not endure the tension. One recurrence occurred in Group 2. The suture technique and age were found to be a significant risk factor for recurrence. Linear suture technique had a lower recurrence rate (odds ratio = 0.07, with 95% confidence interval 0.01-0.53, and p = 0.004). Purse-string suture technique causes significantly higher occurrence of hernia recurrences than linear suture technique. Linear suture technique can reduce recurrence by increasing the endurance to tension around the internal ring by distributing pressure to a wider area along the linear suture line. Linear suture technique can effectively reduce recurrence in pediatric inguinal herniorrhaphy.

  18. Optically stimulated luminescence from quartz measured using the linear modulation technique

    International Nuclear Information System (INIS)

    Bulur, E.; Boetter-Jensen, L.; Murray, A.S.

    2000-01-01

    The optically stimulated luminescence (OSL) from heated natural quartz has been investigated using the linear modulation technique (LMT), in which the excitation light intensity is increased linearly during stimulation. In contrast to conventional stimulation, which usually produces a monotonically decreasing signal, linearly increasing the stimulation power gives peaks in the signal as a function of time. In cases where the OSL signal contains more than one component, the linear increase in power of the stimulation light may result in a curve containing overlapping peaks, where the most easily stimulated component occurs at a shorter time. This allows the separation of the overlapping OSL components, which are assumed to originate from different traps. The LM-OSL curve from quartz shows an initial peak followed by a broad one. Deconvolution using curve fitting has shown that the composite OSL curve from quartz can be approximated well by using a linear combination of first-order peaks. In addition to the three known components, i.e. fast, medium and slow components from continuous-wave-OSL studies, an additional slow component is also identified for the first time. The dose responses and thermal stabilities of the various components are also studied
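
    For a single first-order component, the LM-OSL intensity has a simple closed form that shows why a linearly ramped stimulation turns an exponential decay into a peak (standard first-order expression; b denotes the detrapping rate at full stimulation power and P the total ramp time):

      I(t) = n_0 \, b \, \frac{t}{P} \, \exp\!\left(-\frac{b \, t^{2}}{2P}\right),
      \qquad 0 \le t \le P,

    which peaks at t_max = sqrt(P/b); components with larger photoionization cross-sections (larger b) therefore peak earlier, which is the basis of the deconvolution into fast, medium and slow components described above.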

  19. Failing Failed States

    DEFF Research Database (Denmark)

    Holm, Hans-Henrik

    2002-01-01

    When states are failing, when basic state functions are no longer carried out, and when people have no security, humanitarian crises erupt. In confronting this problem, the stronger states have followed an ad hoc policy of intervention and aid. In some cases, humanitarian disasters have resulted from inaction. Often, the media are blamed. Politicians complain about the media when they interfere (the CNN effect), and when they do not. This article looks at how the media do cover failing states. Sierra Leone and Congo are used as examples. The analysis shows that there is little independent coverage. A Danish survey of newsrooms shows that the national world-view and prevalent news criteria prevent consistent coverage. It is argued that politicians are the ones who determine national agendas: it is from political initiatives, rather than media coverage, that failing states and humanitarian...

  20. Ultrasonics aids the identification of failed fuel rods

    International Nuclear Information System (INIS)

    Anon.

    1985-01-01

    Over a number of years Brown Boveri Reaktor of West Germany has developed and commercialized an ultrasonic failed fuel rod detection system. Sipping has up to now been the standard technique for failed fuel detection, but sipping can only indicate whether or not an assembly contains defective rods; the BBR system can tell which rod is defective. (author)

  1. Log-binomial models: exploring failed convergence.

    Science.gov (United States)

    Williamson, Tyler; Eliasziw, Misha; Fick, Gordon Hilton

    2013-12-13

    Relative risk is a summary metric that is commonly used in epidemiological investigations. Increasingly, epidemiologists are using log-binomial models to study the impact of a set of predictor variables on a single binary outcome, as they naturally offer relative risks. However, standard statistical software may report failed convergence when attempting to fit log-binomial models in certain settings. The methods that have been proposed in the literature for dealing with failed convergence use approximate solutions to avoid the issue. This research looks directly at the log-likelihood function for the simplest log-binomial model where failed convergence has been observed, a model with a single linear predictor with three levels. The possible causes of failed convergence are explored and potential solutions are presented for some cases. Among the principal causes is a failure of the fitting algorithm to converge despite the log-likelihood function having a single finite maximum. Despite these limitations, log-binomial models are a viable option for epidemiologists wishing to describe the relationship between a set of predictors and a binary outcome where relative risk is the desired summary measure. Epidemiologists are encouraged to continue to use log-binomial models and advocate for improvements to the fitting algorithms to promote the widespread use of log-binomial models.
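
    For reference, the quantity under study is the log-binomial log-likelihood; with covariate vector x_i and binary outcome y_i, the model and its log-likelihood take the standard form

      \log \pi_i = x_i^{\top} \beta, \qquad
      \ell(\beta) = \sum_{i=1}^{n} \Big[ y_i \, x_i^{\top} \beta + (1 - y_i) \log\!\big(1 - e^{x_i^{\top} \beta}\big) \Big],
      \qquad \text{subject to } x_i^{\top} \beta \le 0 \ \text{for all } i,

    and convergence problems typically appear when the maximum lies on or near the boundary of this constrained parameter space, even though the log-likelihood may have a single finite maximum.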

  2. Rescue of failed filtering blebs with ab interno trephination.

    Science.gov (United States)

    Shihadeh, Wisam A; Ritch, Robert; Liebmann, Jeffrey M

    2006-06-01

    We evaluated the effectiveness of ab interno automated trephination as a technique for rescuing failed mature filtering blebs. A retrospective chart review of 40 failed blebs of 38 patients who had a posttrephination follow-up period of at least 3 months was done. With success defined as intraocular pressure (IOP) control with other modalities of management. Complications were few. We believe that ab interno trephination is an excellent option for rescuing selected failed filtering blebs.

  3. A HYBRID TECHNIQUE FOR PAPR REDUCTION OF OFDM USING DHT PRECODING WITH PIECEWISE LINEAR COMPANDING

    Directory of Open Access Journals (Sweden)

    Thammana Ajay

    2016-06-01

    Orthogonal Frequency Division Multiplexing (OFDM) is a fascinating approach for wireless communication applications which require huge data rates. However, the OFDM signal suffers from its large Peak-to-Average Power Ratio (PAPR), which results in significant distortion while passing through a nonlinear device, such as a transmitter high power amplifier (HPA). Due to this high PAPR, the complexity of the HPA as well as of the DAC also increases. Many techniques are available for the reduction of PAPR in OFDM. Among them, companding is an attractive low complexity technique for reducing the PAPR of the OFDM signal. Recently, a piecewise linear companding technique was recommended aiming at minimizing companding distortion. In this paper, a combined piecewise linear companding approach with the Discrete Hartley Transform (DHT) precoding method is proposed to reduce the peak-to-average power ratio of OFDM to a great extent. Simulation results show that this new proposed method obtains significant PAPR reduction while maintaining improved performance in Bit Error Rate (BER) and Power Spectral Density (PSD) compared to the piecewise linear companding method.
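
    A compact simulation of the two ingredients, DHT precoding of the data symbols before the OFDM IFFT followed by a piecewise linear compander that leaves small samples untouched and linearly compresses the peaks, might look as follows. The inflexion point, compression slope and QPSK mapping are illustrative assumptions, not the parameters of the cited scheme.

      import numpy as np

      def dht(x):
          # Discrete Hartley transform (real cas kernel), applied separately to the
          # real and imaginary parts of a complex symbol vector via the FFT identity
          # DHT{v} = Re(FFT(v)) - Im(FFT(v)) for real v.
          def dht_real(v):
              V = np.fft.fft(v)
              return np.real(V) - np.imag(V)
          return dht_real(np.real(x)) + 1j * dht_real(np.imag(x))

      def papr_db(x):
          p = np.abs(x) ** 2
          return 10 * np.log10(p.max() / p.mean())

      def piecewise_linear_compand(x, a_i, k=0.4):
          # Keep |x| <= a_i unchanged; compress the excess above a_i with slope k.
          mag = np.abs(x)
          new_mag = np.where(mag <= a_i, mag, a_i + k * (mag - a_i))
          return new_mag * np.exp(1j * np.angle(x))

      rng = np.random.default_rng(1)
      n = 256
      qpsk = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
      precoded = dht(qpsk)                       # DHT precoding of the data symbols
      ofdm = np.fft.ifft(precoded)               # OFDM modulation
      a_i = 1.2 * np.sqrt(np.mean(np.abs(ofdm) ** 2))
      companded = piecewise_linear_compand(ofdm, a_i)
      print(papr_db(ofdm), papr_db(companded))   # companding lowers the PAPR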

  4. Endoscopic Ultrasound-guided Rendezvous Technique after Failed Endoscopic Retrograde Cholangiopancreatography: Which Approach Route Is the Best?

    Science.gov (United States)

    Okuno, Nozomi; Hara, Kazuo; Mizuno, Nobumasa; Hijioka, Susumu; Tajika, Masahiro; Tanaka, Tsutomu; Ishihara, Makoto; Hirayama, Yutaka; Onishi, Sachiyo; Niwa, Yasumasa; Yamao, Kenji

    2017-12-01

    Objective The endoscopic ultrasound-guided rendezvous technique (EUS-RV) is a salvage method for failed selective biliary cannulation. Three puncture routes have been reported, with many comparisons between the intra-hepatic and extra-hepatic biliary ducts. We used the trans-esophagus (TE) and trans-jejunum (TJ) routes. In the present study, the utility of EUS-RV for biliary access was evaluated, focusing on the approach routes. Methods and Patients In 39 patients, 42 puncture routes were evaluated in detail. EUS-RV was performed between January 2010 and December 2014. The patients were prospectively enrolled, and their clinical data were retrospectively collected. Results The patients' median age was 71 (range 29-84) years. The indications for endoscopic retrograde cholangiopancreatography (ERCP) were malignant biliary obstruction in 24 patients and benign biliary disease in 15. The technical success rate was 78.6% (33/42) and was similar among approach routes (p=0.377). The overall complication rate was 16.7% (7/42) and was similar among approach routes (p=0.489). However, mediastinal emphysema occurred in 2 TE route EUS-RV patients. No EUS-RV-related deaths occurred. Conclusion EUS-RV proved reliable after failed ERCP. The selection of the appropriate route based on the patient's condition is crucial.

  5. The Recommendations for Linear Measurement Techniques on the Measurements of Nonlinear System Parameters of a Joint.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Scott A [Univ. of Maryland Baltimore County (UMBC), Baltimore, MD (United States); Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Catalfamo, Simone [Univ. of Stuttgart (Germany); Brake, Matthew R. W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rice Univ., Houston, TX (United States); Schwingshackl, Christoph W. [Imperial College, London (United Kingdom); Reusb, Pascal [Daimler AG, Stuttgart (Germany)

    2017-01-01

    In the study of the dynamics of nonlinear systems, experimental measurements often convolute the response of the nonlinearity of interest and the effects of the experimental setup. To reduce the influence of the experimental setup on the deduction of the parameters of the nonlinearity, the response of a mechanical joint is investigated under various experimental setups. These experiments first focus on quantifying how support structures and measurement techniques affect the natural frequency and damping of a linear system. The results indicate that support structures created from bungees have negligible influence on the system in terms of frequency and damping ratio variations. The study then focuses on the effects of the excitation technique on the response for a linear system. The findings suggest that thinner stingers should not be used, because under the high force requirements the stinger bending modes are excited adding unwanted torsional coupling. The optimal configuration for testing the linear system is then applied to a nonlinear system in order to assess the robustness of the test configuration. Finally, recommendations are made for conducting experiments on nonlinear systems using conventional/linear testing techniques.

  6. Improving biomedical information retrieval by linear combinations of different query expansion techniques.

    Science.gov (United States)

    Abdulla, Ahmed AbdoAziz Ahmed; Lin, Hongfei; Xu, Bo; Banbhrani, Santosh Kumar

    2016-07-25

    Biomedical literature retrieval is becoming increasingly complex, and there is a fundamental need for advanced information retrieval systems. Information Retrieval (IR) programs scour unstructured materials such as text documents in large reserves of data that are usually stored on computers. IR is related to the representation, storage, and organization of information items, as well as to access. In IR one of the main problems is to determine which documents are relevant to the user's needs and which are not. Under the current regime, users cannot precisely construct queries in an accurate way to retrieve particular pieces of data from large reserves of data. Basic information retrieval systems produce low-quality search results. In this paper we present a new technique to refine Information Retrieval searches to better represent the user's information need and enhance the performance of information retrieval, by using different query expansion techniques and applying linear combinations between them, where each combination is formed linearly from two expansion results at a time. Query expansions expand the search query, for example, by finding synonyms and reweighting original terms. They provide significantly more focused, particularized search results than do basic search queries. The retrieval performance is measured by some variants of MAP (Mean Average Precision). According to our experimental results, the combination of the best query expansion results enhances the retrieved documents and outperforms our baseline by 21.06 %; it even outperforms a previous study by 7.12 %. We propose several query expansion techniques and their linear combinations to make user queries more cognizable to search engines and to produce higher-quality search results.
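
    The linear combination step itself is straightforward: given the relevance scores produced by two expansion techniques for the same query, a weighted sum re-ranks the documents. A minimal sketch (the weight lam and the toy scores are assumptions):

      def combine_linear(scores_a, scores_b, lam=0.6):
          # Linearly combine two {doc_id: score} dictionaries from two expansion runs.
          docs = set(scores_a) | set(scores_b)
          return {d: lam * scores_a.get(d, 0.0) + (1 - lam) * scores_b.get(d, 0.0)
                  for d in docs}

      # Toy example: scores from a synonym-based run and a term-reweighting run.
      run_synonyms = {"doc1": 0.9, "doc2": 0.4, "doc3": 0.2}
      run_reweight = {"doc1": 0.5, "doc3": 0.7, "doc4": 0.3}
      fused = combine_linear(run_synonyms, run_reweight)
      ranking = sorted(fused, key=fused.get, reverse=True)
      print(ranking)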

  7. Results of Latarjet Coracoid Transfer to Revise Failed Arthroscopic Instability Repairs

    OpenAIRE

    Nicholson, Gregory P.; Rahman, Zain; Verma, Nikhil N.; Romeo, Anthony A.; Cole, Brian J.; Gupta, Anil Kumar; Bruce, Benjamin

    2014-01-01

    Objectives: Arthroscopic instability repair has supplanted open techniques to anatomically reconstruct anteroinferior instability pathology. Arthroscopic technique can fail for a variety of reasons. We have utilized the Latarjet as a revision option in failed arthroscopic instability repairs when there is altered surgical anatomy, capsular deficiency and/or glenoid bone compromise and recurrent glenohumeral instability. Methods: We reviewed 51 shoulders (40 ?, 11?) that underwent Latarjet cor...

  8. Solution of the fully fuzzy linear systems using iterative techniques

    International Nuclear Information System (INIS)

    Dehghan, Mehdi; Hashemi, Behnam; Ghatee, Mehdi

    2007-01-01

This paper mainly intends to discuss the iterative solution of fully fuzzy linear systems, which we call FFLS. We employ Dubois and Prade's approximate arithmetic operators on LR fuzzy numbers for finding a positive fuzzy vector x-tilde which satisfies A-tilde x-tilde = b-tilde, where A-tilde and b-tilde are a fuzzy matrix and a fuzzy vector, respectively. Please note that the positivity assumption is not so restrictive in applied problems. We transform FFLS and propose iterative techniques such as Richardson, Jacobi, Jacobi overrelaxation (JOR), Gauss-Seidel, successive overrelaxation (SOR), accelerated overrelaxation (AOR), symmetric and unsymmetric SOR (SSOR and USSOR) and extrapolated modified Aitken (EMA) for solving FFLS. In addition, the methods of Newton, quasi-Newton and conjugate gradient are proposed from nonlinear programming for solving a fully fuzzy linear system. Various numerical examples are also given to show the efficiency of the proposed schemes.
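    A minimal sketch of one of the iterative schemes named above (Jacobi), shown on a crisp linear system Ax = b rather than on LR fuzzy numbers; carrying the same iteration over to the centers and spreads of the fuzzy quantities is the paper's contribution and is not reproduced here.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration x_{k+1} = D^{-1} (b - R x_k), with D the diagonal of A."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)
    R = A - np.diagflat(D)              # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = [[4.0, 1.0], [2.0, 5.0]]            # diagonally dominant, so Jacobi converges
b = [9.0, 12.0]
print(jacobi(A, b))                     # approaches the exact solution of Ax = b
```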

  9. Laparoscopic revision of failed antireflux operations.

    Science.gov (United States)

    Serafini, F M; Bloomston, M; Zervos, E; Muench, J; Albrink, M H; Murr, M; Rosemurgy, A S

    2001-01-01

A small number of patients fail fundoplication and require reoperation. Laparoscopic techniques have been applied to reoperative fundoplications. We reviewed our experience with reoperative laparoscopic fundoplication. Reoperative laparoscopic fundoplication was undertaken in 28 patients, 19 F and 9 M, of mean age 56 years +/- 12. Previous antireflux procedures included 19 open and 12 laparoscopic antireflux operations. Symptoms were heartburn (90%), dysphagia (35%), and atypical symptoms (30%). The mean interval from antireflux procedure to revision was 13 months +/- 4.2. The mean DeMeester score was 78+/-32 (normal 14.7). Eighteen patients (64%) had hiatal breakdown, 17 (60%) had wrap failure, 2 (7%) had slipped Nissen, 3 (11%) had paraesophageal hernias, and 1 (3%) had an excessively tight wrap. Twenty-five revisions were completed laparoscopically, while 3 patients required conversion to the open technique. Complications occurred in 9 of 17 (53%) patients failing previous open fundoplications and in 4 of 12 patients (33%) failing previous laparoscopic fundoplications and included 15 gastrotomies and 1 esophagotomy, all repaired laparoscopically, 3 postoperative gastric leaks, and 4 pneumothoraces requiring tube thoracostomy. No deaths occurred. Median length of stay was 5 days (range 2-90 days). At a mean follow-up of 20 months +/- 17, 2 patients (7%) have failed revision of their fundoplications, with the rest of the patients being essentially asymptomatic (93%). The results achieved with reoperative laparoscopic fundoplication are similar to those of primary laparoscopic fundoplications. Laparoscopic reoperations, particularly of primary open fundoplication, can be technically challenging and fraught with complications. Copyright 2001 Academic Press.

  10. Evaluation of linear regression techniques for atmospheric applications: the importance of appropriate weighting

    Directory of Open Access Journals (Sweden)

    C. Wu

    2018-03-01

Full Text Available Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated and the degree of bias is more pronounced with a low R2 XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.

  11. Evaluation of linear regression techniques for atmospheric applications: the importance of appropriate weighting

    Science.gov (United States)

    Wu, Cheng; Zhen Yu, Jian

    2018-03-01

Linear regression techniques are widely used in atmospheric science, but they are often improperly applied due to lack of consideration or inappropriate handling of measurement uncertainty. In this work, numerical experiments are performed to evaluate the performance of five linear regression techniques, significantly extending previous works by Chu and Saylor. The five techniques are ordinary least squares (OLS), Deming regression (DR), orthogonal distance regression (ODR), weighted ODR (WODR), and York regression (YR). We first introduce a new data generation scheme that employs the Mersenne twister (MT) pseudorandom number generator. The numerical simulations are also improved by (a) refining the parameterization of nonlinear measurement uncertainties, (b) inclusion of a linear measurement uncertainty, and (c) inclusion of WODR for comparison. Results show that DR, WODR and YR produce an accurate slope, but the intercept by WODR and YR is overestimated and the degree of bias is more pronounced with a low R2 XY dataset. The importance of a proper weighting parameter λ in DR is investigated by sensitivity tests, and it is found that an improper λ in DR can lead to a bias in both the slope and intercept estimation. Because the λ calculation depends on the actual form of the measurement error, it is essential to determine the exact form of measurement error in the XY data during the measurement stage. If a priori error in one of the variables is unknown, or the measurement error described cannot be trusted, DR, WODR and YR can provide the least biases in slope and intercept among all tested regression techniques. For these reasons, DR, WODR and YR are recommended for atmospheric studies when both X and Y data have measurement errors. An Igor Pro-based program (Scatter Plot) was developed to facilitate the implementation of error-in-variables regressions.
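    For illustration, a sketch of Deming regression, one of the error-in-variables estimators compared in this work; the parameter delta below is the ratio of the error variance in y to the error variance in x (the paper's weighting parameter λ follows a related convention), and the data are simulated rather than atmospheric measurements.

```python
import numpy as np

# Hedged sketch of Deming regression (DR) with a known error-variance ratio.
def deming_regression(x, y, delta=1.0):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    slope = (syy - delta * sxx +
             np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = ym - slope * xm
    return slope, intercept

rng = np.random.default_rng(0)
truth = rng.normal(size=200)
x = truth + rng.normal(scale=0.3, size=200)                 # measurement error in x
y = 2.0 * truth + 1.0 + rng.normal(scale=0.3, size=200)     # measurement error in y
print(deming_regression(x, y, delta=1.0))                   # slope near 2, intercept near 1
```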

  12. Stability, performance and sensitivity analysis of I.I.D. jump linear systems

    Science.gov (United States)

    Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven

    2018-06-01

This paper presents a symmetric Kronecker product analysis of independent and identically distributed jump linear systems to develop new, lower dimensional equations for the stability and performance analysis of this type of system than those currently available. In addition, new closed form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example where the communication links are allowed to fail randomly.

  13. A Unique Technique to get Kaprekar Iteration in Linear Programming Problem

    Science.gov (United States)

    Sumathi, P.; Preethy, V.

    2018-04-01

This paper explores a curious number popularly known as the Kaprekar constant, together with the Kaprekar numbers. A large number of courses and the different classroom capacities, with differences in study periods, make the assignment between classrooms and courses complicated. An approach for obtaining the minimum and maximum number of iterations needed to reach the Kaprekar constant for four-digit numbers is presented using linear programming techniques.
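    The Kaprekar routine itself is easy to state in code; the sketch below simply runs the iteration for four-digit numbers and counts steps (the paper's linear programming treatment of the minimum and maximum counts is not reproduced here).

```python
# Kaprekar iteration for four-digit numbers: repeatedly subtract the ascending
# digit arrangement from the descending one until the Kaprekar constant 6174 is
# reached, counting the iterations needed.

def kaprekar_iterations(n):
    if len(set(f"{n:04d}")) == 1:
        raise ValueError("all four digits equal: the routine stalls at 0")
    count = 0
    while n != 6174:
        digits = sorted(f"{n:04d}")
        n = int("".join(reversed(digits))) - int("".join(digits))
        count += 1
    return count

print(kaprekar_iterations(3524))   # 3 iterations: 3087 -> 8352 -> 6174
print(max(kaprekar_iterations(k) for k in range(1000, 10000)
          if len(set(f"{k:04d}")) > 1))  # maximum over all valid four-digit inputs (7)
```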

  14. Multiple regression technique for Pth degree polynomials with and without linear cross products

    Science.gov (United States)

    Davis, J. W.

    1973-01-01

A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these programs evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
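    As an illustration of the idea (not the original programs), the sketch below fits a second-degree polynomial in two variables by least squares, once without and once with the linear cross-product term; the data are synthetic.

```python
import numpy as np

def design_matrix(x1, x2, cross=True):
    """Second-degree polynomial design matrix, optionally with the x1*x2 cross product."""
    cols = [np.ones_like(x1), x1, x2, x1**2, x2**2]
    if cross:
        cols.append(x1 * x2)            # the linear cross product
    return np.column_stack(cols)

rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 100)
x2 = rng.uniform(-1, 1, 100)
y = 1.0 + 2.0*x1 - 3.0*x2 + 0.5*x1**2 + 1.5*x1*x2 + rng.normal(scale=0.05, size=100)

for cross in (False, True):
    X = design_matrix(x1, x2, cross)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("with cross product:" if cross else "without cross product:", coef.round(2))
```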

  15. Optimal technique of linear accelerator-based stereotactic radiosurgery for tumors adjacent to brainstem.

    Science.gov (United States)

    Chang, Chiou-Shiung; Hwang, Jing-Min; Tai, Po-An; Chang, You-Kang; Wang, Yu-Nong; Shih, Rompin; Chuang, Keh-Shih

    2016-01-01

Stereotactic radiosurgery (SRS) is a well-established technique that is replacing whole-brain irradiation in the treatment of intracranial lesions, which leads to better preservation of brain functions, and therefore a better quality of life for the patient. There are several available forms of linear accelerator (LINAC)-based SRS, and the goal of the present study is to identify which of these techniques is best (as evaluated statistically by dosimetric outcomes) when the target is located adjacent to the brainstem. We collected the records of 17 patients with lesions close to the brainstem who had previously been treated with single-fraction radiosurgery. In all, 5 different lesion catalogs were collected, and the patients were divided into 2 distance groups: 1 consisting of 7 patients with a target-to-brainstem distance of less than 0.5 cm, and the other of 10 patients with a target-to-brainstem distance of ≥ 0.5 and < 1 cm. Based on this retrospective statistical evidence, we recommend VMAT as the optimal technique for delivering treatment to tumors adjacent to the brainstem. Copyright © 2016 American Association of Medical Dosimetrists. All rights reserved.

  16. Restoring Low Sidelobe Antenna Patterns with Failed Elements in a Phased Array Antenna

    Science.gov (United States)

    2016-02-01

Optimum low sidelobes are demonstrated in several examples. Index terms: array signal processing, beams, linear algebra, phased arrays, shaped-beam antennas. For many phased array antenna applications, low spatial sidelobes are required, and it is desirable to maintain them when elements fail; the restored pattern is represented by a linear combination of low sidelobe beamformers with no failed elements in a neighborhood around it, under the constraint that the linear...

  17. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    Science.gov (United States)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

In this paper, we propose a hybrid model which is a combination of a multiple linear regression model and the fuzzy c-means method. This research involved the relationship between 20 variates of the topsoil that were analyzed prior to the planting of paddy at standard fertilizer rates. The data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model, with a lower mean square error.
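    A hedged sketch of the hybrid idea follows: a small fuzzy c-means implementation clusters the yield into two groups, and a separate multiple linear regression is fitted within each cluster. The soil variates and yields below are simulated stand-ins, not the MARDI data.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Basic fuzzy c-means: returns membership matrix U (rows sum to 1) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum((dist[:, :, None] / dist[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, centers

rng = np.random.default_rng(2)
soil = rng.normal(size=(120, 3))                           # three stand-in soil variates
yield_t = np.where(soil[:, 0] > 0, 4.0, 2.0) + soil @ np.array([0.3, -0.2, 0.1]) \
          + rng.normal(0, 0.1, 120)

U, _ = fuzzy_c_means(yield_t.reshape(-1, 1), c=2)          # cluster on the yield, as in the paper
labels = U.argmax(axis=1)
for k in (0, 1):                                           # regression fitted per cluster
    mask = labels == k
    X = np.column_stack([np.ones(mask.sum()), soil[mask]])
    coef, *_ = np.linalg.lstsq(X, yield_t[mask], rcond=None)
    print(f"cluster {k}: intercept and soil coefficients", coef.round(2))
```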

  18. America, Linearly Cyclical

    Science.gov (United States)

    2013-05-10

C2C Jessica Adams; Dr. Brissett ... his desires, his failings, and his aspirations follow the same general trend throughout history and throughout cultures. The founding fathers sought...

  19. Non-Linear Optical Studies On Sol-Gel Derived Lead Chloride Crystals Using Z-Scan Technique

    OpenAIRE

    Rejeena, I; Lillibai, B; Toms, Roseleena; Nampoori, VP N; Radhakrishnan, P

    2014-01-01

In this paper we report the preparation, optical characterization and non-linear optical behavior of pure lead chloride crystals. Lead chloride samples subjected to UV and IR irradiation and to electric and magnetic fields have also been investigated. Optical nonlinearity in these lead chloride samples was determined using the single-beam, highly sensitive Z-scan technique. Non-linear optical studies of these materials in single-distilled water show reverse saturable absorption, which makes th...

  20. Redefining Hybrid Warfare: Russia's Non-linear War against the West

    Directory of Open Access Journals (Sweden)

    Tad Schnaufer

    2017-03-01

Full Text Available The term hybrid warfare fails to properly describe Russian operations in Ukraine and elsewhere. Russia has undertaken unconventional techniques to build its influence and test the boundaries of a shaken international system. Notably, Russia's actions in Ukraine display an evolved style of warfare that goes beyond its initial label of hybrid warfare. The term non-linear war (NLW) will be defined in this article to encompass Russia's actions and allow policymakers the correct framework to discuss and respond to Russia. NLW plays to the advantage of countries like Russia and constitutes the future of warfare.

  1. A Comparative Evaluation of the Linear Dimensional Accuracy of Four Impression Techniques using Polyether Impression Material

    OpenAIRE

    Manoj, Smita Sara; Cherian, K. P.; Chitre, Vidya; Aras, Meena

    2013-01-01

    There is much discussion in the dental literature regarding the superiority of one impression technique over the other using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregu...

  2. Building a new predictor for multiple linear regression technique-based corrective maintenance turnaround time.

    Science.gov (United States)

    Cruz, Antonio M; Barr, Cameron; Puñales-Pozo, Elsa

    2008-01-01

This research's main goals were to build a predictor for a turnaround time (TAT) indicator for estimating its values and to use a numerical clustering technique for finding possible causes of undesirable TAT values. The following stages were used: domain understanding, data characterisation and sample reduction, and insight characterisation. Multiple linear regression and clustering techniques were used to build the TAT indicator predictor and to improve corrective maintenance task efficiency in a clinical engineering department (CED). The indicator being studied was turnaround time (TAT). Multiple linear regression was used for building a predictive TAT value model. The variables contributing to this model were clinical engineering department response time (CE(rt), 0.415 positive coefficient), stock service response time (Stock(rt), 0.734 positive coefficient), priority level (0.21 positive coefficient) and service time (0.06 positive coefficient). The regression process showed heavy reliance on Stock(rt), CE(rt) and priority, in that order. Clustering techniques revealed the main causes of high TAT values. This examination has provided a means for analysing current technical service quality and effectiveness. In doing so, it has demonstrated a process for identifying areas and methods of improvement and a model against which to analyse these methods' effectiveness.
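    As a simple illustration, the reported coefficients can be assembled into a predictor of the stated form; the intercept is not given in the abstract, so a placeholder of zero is used, and the input values are hypothetical.

```python
# Hedged sketch of a TAT predictor using the coefficients reported above:
# CE response time (0.415), stock response time (0.734), priority (0.21),
# service time (0.06). The intercept is a placeholder, purely for illustration.

def predict_tat(ce_rt, stock_rt, priority, service_time, intercept=0.0):
    return (intercept
            + 0.415 * ce_rt
            + 0.734 * stock_rt
            + 0.21 * priority
            + 0.06 * service_time)

# Hypothetical work order: CE responds in 4 h, stock in 10 h, priority level 2, 1.5 h of service.
print(predict_tat(ce_rt=4, stock_rt=10, priority=2, service_time=1.5))
```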

  3. Management of the failed posterior/multidirectional instability patient.

    Science.gov (United States)

    Forsythe, Brian; Ghodadra, Neil; Romeo, Anthony A; Provencher, Matthew T

    2010-09-01

Although the results of operative treatment of posterior and multidirectional instability (P-MDI) of the shoulder have improved, they are not as reliable as those treated for anterior instability of the shoulder. This may be attributed to the complexities in the classification, etiology, and physical examination of a patient with suspected posterior and multidirectional instability. Failure to address the primary and concurrent lesion adequately and the development of pain and/or stiffness are contributing factors to the failure of P-MDI procedures. Other pitfalls include errors in history and physical examination, failure to recognize concomitant pathology, and problems with the surgical technique or implant failure. Patulous capsular tissues and glenoid version also play a role in the management of failed P-MDI patients. With an improved understanding of pertinent clinical complaints and physical examination findings and the advent of arthroscopic techniques and improved implants, successful strategies for the nonoperative and operative management of the patient after a failed posterior or multidirectional instability surgery may be elucidated. This article highlights the common presentation, physical findings, and radiographic workup in a patient that presents after a failed P-MDI repair and offers strategies for revision surgical repair.

  4. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but up to recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.

  5. Robust intelligent backstepping tracking control for uncertain non-linear chaotic systems using H∞ control technique

    International Nuclear Information System (INIS)

    Peng, Y.-F.

    2009-01-01

The cerebellar model articulation controller (CMAC) is a non-linear adaptive system with built-in simple computation, good generalization capability and fast learning property. In this paper, a robust intelligent backstepping tracking control (RIBTC) system combined with adaptive CMAC and H ∞ control technique is proposed for a class of chaotic systems with unknown system dynamics and external disturbance. In the proposed control system, an adaptive backstepping cerebellar model articulation controller (ABCMAC) is used to mimic an ideal backstepping control (IBC), and a robust H ∞ controller is designed to attenuate the effect of the residual approximation errors and external disturbances with a desired attenuation level. Moreover, all the adaptation laws of the RIBTC system are derived based on the Lyapunov stability analysis, the Taylor linearization technique and H ∞ control theory, so that the stability of the closed-loop system and the H ∞ tracking performance can be guaranteed. Finally, three application examples, including a Duffing-Holmes chaotic system, a Genesio chaotic system and a Sprott circuit system, are used to demonstrate the effectiveness and performance of the proposed robust control technique.

  6. The solution of linear and nonlinear systems of Volterra functional equations using Adomian-Pade technique

    International Nuclear Information System (INIS)

    Dehghan, Mehdi; Shakourifar, Mohammad; Hamidi, Asgar

    2009-01-01

    The purpose of this study is to implement Adomian-Pade (Modified Adomian-Pade) technique, which is a combination of Adomian decomposition method (Modified Adomian decomposition method) and Pade approximation, for solving linear and nonlinear systems of Volterra functional equations. The results obtained by using Adomian-Pade (Modified Adomian-Pade) technique, are compared to those obtained by using Adomian decomposition method (Modified Adomian decomposition method) alone. The numerical results, demonstrate that ADM-PADE (MADM-PADE) technique, gives the approximate solution with faster convergence rate and higher accuracy than using the standard ADM (MADM).

  7. A study of the use of linear programming techniques to improve the performance in design optimization problems

    Science.gov (United States)

    Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
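    For reference, the Kreisselmeier-Steinhauser aggregation mentioned above replaces many constraints g_i(x) ≤ 0 with a single smooth constraint; a short sketch (with an arbitrary draw-down factor rho) is given below.

```python
import numpy as np

# Sketch of the Kreisselmeier-Steinhauser (KS) aggregation: many constraints
# g_i <= 0 are replaced by the single smooth constraint KS(g) <= 0. Subtracting
# g_max before exponentiating keeps the computation from overflowing.

def ks_aggregate(g, rho=50.0):
    g = np.asarray(g, dtype=float)
    g_max = g.max()
    return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho

g = [-0.4, -0.05, -0.2]           # three satisfied constraints (all <= 0)
print(ks_aggregate(g, rho=50.0))  # close to max(g) = -0.05, slightly above it

g_violated = [-0.4, 0.1, -0.2]    # one violated constraint
print(ks_aggregate(g_violated))   # positive, so the aggregated constraint is violated
```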

  8. Comparison of acrylamide intake from Western and guideline based diets using probabilistic techniques and linear programming.

    Science.gov (United States)

    Katz, Josh M; Winter, Carl K; Buttrey, Samuel E; Fadel, James G

    2012-03-01

Western and guideline based diets were compared to determine if dietary improvements resulting from following dietary guidelines reduce acrylamide intake. Acrylamide forms in heat treated foods and is a human neurotoxin and animal carcinogen. Acrylamide intake from the Western diet was estimated with probabilistic techniques using teenage (13-19 years) National Health and Nutrition Examination Survey (NHANES) food consumption estimates combined with FDA data on the levels of acrylamide in a large number of foods. Guideline based diets were derived from NHANES data using linear programming techniques to comport to recommendations from the Dietary Guidelines for Americans, 2005. Whereas the guideline based diets were more properly balanced and rich in consumption of fruits, vegetables, and other dietary components than the Western diets, acrylamide intake (mean ± SE) was significantly greater (P < …). The results demonstrate that linear programming techniques can be used to model specific diets for the assessment of toxicological and nutritional dietary components. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Dynamic acousto-elastic testing of concrete with a coda-wave probe: comparison with standard linear and nonlinear ultrasonic techniques.

    Science.gov (United States)

    Shokouhi, Parisa; Rivière, Jacques; Lake, Colton R; Le Bas, Pierre-Yves; Ulrich, T J

    2017-11-01

    The use of nonlinear acoustic techniques in solids consists in measuring wave distortion arising from compliant features such as cracks, soft intergrain bonds and dislocations. As such, they provide very powerful nondestructive tools to monitor the onset of damage within materials. In particular, a recent technique called dynamic acousto-elasticity testing (DAET) gives unprecedented details on the nonlinear elastic response of materials (classical and non-classical nonlinear features including hysteresis, transient elastic softening and slow relaxation). Here, we provide a comprehensive set of linear and nonlinear acoustic responses on two prismatic concrete specimens; one intact and one pre-compressed to about 70% of its ultimate strength. The two linear techniques used are Ultrasonic Pulse Velocity (UPV) and Resonance Ultrasound Spectroscopy (RUS), while the nonlinear ones include DAET (fast and slow dynamics) as well as Nonlinear Resonance Ultrasound Spectroscopy (NRUS). In addition, the DAET results correspond to a configuration where the (incoherent) coda portion of the ultrasonic record is used to probe the samples, as opposed to a (coherent) first arrival wave in standard DAET tests. We find that the two visually identical specimens are indistinguishable based on parameters measured by linear techniques (UPV and RUS). On the contrary, the extracted nonlinear parameters from NRUS and DAET are consistent and orders of magnitude greater for the damaged specimen than those for the intact one. This compiled set of linear and nonlinear ultrasonic testing data including the most advanced technique (DAET) provides a benchmark comparison for their use in the field of material characterization. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Comparative study of linear and nonlinear ultrasonic techniques for evaluation thermal damage of tube like structures

    International Nuclear Information System (INIS)

    Li, Weibin; Cho, Younho; Li, Xianqiang

    2013-01-01

    Ultrasonic guided wave techniques have been widely used for long range nondestructive detection in tube like structures. The present paper investigates the ultrasonic linear and nonlinear parameters for evaluating the thermal damage in aluminum pipe. Specimens were subjected to thermal loading. Flexible polyvinylidene fluoride (PVDF) comb transducers were used to generate and receive the ultrasonic waves. The second harmonic wave generation technique was used to check the material nonlinearity change after different heat loadings. The conventional linear ultrasonic approach based on attenuation was also used to evaluate the thermal damages in specimens. The results show that the proposed experimental setup is viable to assess the thermal damage in an aluminum pipe. The ultrasonic nonlinear parameter is a promising candidate for the prediction of micro damages in a tube like structure

  11. Advanced statistics: linear regression, part I: simple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
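    A minimal sketch of simple linear regression by the method of least squares, using a small hypothetical dose-response dataset:

```python
import numpy as np

def simple_linear_regression(x, y):
    """Least-squares slope and intercept for one predictor and one outcome."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Hypothetical clinical-style example: dose (mg) versus measured response.
dose = [1, 2, 3, 4, 5, 6]
response = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]
slope, intercept = simple_linear_regression(dose, response)
print(f"response ~= {intercept:.2f} + {slope:.2f} * dose")
```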

  12. IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS

    Science.gov (United States)

    Fogle, F. R.

    1994-01-01

IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
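    A much-simplified sketch of the IESIP idea (not the original Pascal code): solve the LP relaxation, round down to an integer start, then apply Hooke-and-Jeeves-style ±1 exploratory moves while preserving feasibility. The two-variable problem below is hypothetical, and rounding down is assumed to stay feasible for it.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical problem: maximize 5*x1 + 4*x2
#   s.t. x1 + x2 <= 5, 10*x1 + 6*x2 <= 45, x >= 0, x integer.
c = np.array([5.0, 4.0])                 # objective coefficients (to maximize)
A = np.array([[1.0, 1.0], [10.0, 6.0]])  # constraint matrix, A @ x <= b
b = np.array([5.0, 45.0])

def feasible(x):
    return np.all(A @ x <= b + 1e-9) and np.all(x >= 0)

# 1. Continuous solution of the relaxation, then round down to an integer start.
relaxed = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
x = np.floor(relaxed.x)

# 2. Exploratory +/-1 moves on each variable, accepting feasible improvements.
improved = True
while improved:
    improved = False
    for i in range(len(x)):
        for step in (+1, -1):
            trial = x.copy()
            trial[i] += step
            if feasible(trial) and c @ trial > c @ x:
                x, improved = trial, True

print("relaxed optimum:", relaxed.x, "integer solution:", x, "objective:", c @ x)
# Expected here: relaxed optimum (3.75, 1.25); integer solution (3, 2) with objective 23.
```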

  13. Profiling of barrier capacitance and spreading resistance using a transient linearly increasing voltage technique.

    Science.gov (United States)

    Gaubas, E; Ceponis, T; Kusakovskij, J

    2011-08-01

    A technique for the combined measurement of barrier capacitance and spreading resistance profiles using a linearly increasing voltage pulse is presented. The technique is based on the measurement and analysis of current transients, due to the barrier and diffusion capacitance, and the spreading resistance, between a needle probe and sample. To control the impact of deep traps in the barrier capacitance, a steady state bias illumination with infrared light was employed. Measurements of the spreading resistance and barrier capacitance profiles using a stepwise positioned probe on cross sectioned silicon pin diodes and pnp structures are presented.

  14. The application of LQR synthesis techniques to the turboshaft engine control problem. [Linear Quadratic Regulator

    Science.gov (United States)

    Pfeil, W. H.; De Los Reyes, G.; Bobula, G. A.

    1985-01-01

A power turbine governor was designed for a recent-technology turboshaft engine coupled to a modern, articulated rotor system using Linear Quadratic Regulator (LQR) and Kalman Filter (KF) techniques. A linear, state-space model of the engine and rotor system was derived for six engine power settings from flight idle to maximum continuous. An integrator was appended to the fuel flow input to reduce the steady-state governor error to zero. Feedback gains were calculated for the system states at each power setting using the LQR technique. The main rotor tip speed state is not measurable, so a Kalman Filter of the rotor was used to estimate this state. The crossover of the system was increased to 10 rad/s compared to 2 rad/s for a current governor. Initial computer simulations with a nonlinear engine model indicate a significant decrease in power turbine speed variation with the LQR governor compared to a conventional governor.
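    A generic sketch of the LQR step described above (not the actual engine/rotor model): for a hypothetical two-state plant dx/dt = Ax + Bu, the quadratic cost with weights Q and R is minimized by solving the continuous algebraic Riccati equation and forming the state-feedback gain K = R^-1 B' P.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])      # hypothetical 2-state plant
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])          # state weighting
R = np.array([[0.1]])             # control weighting

P = solve_continuous_are(A, B, Q, R)   # Riccati solution
K = np.linalg.solve(R, B.T @ P)        # optimal feedback u = -K x
print("LQR gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```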

  15. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    Science.gov (United States)

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.

  16. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of a linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG) because it calculates linear complexity using the algebraic expression of its algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≅ 2^n) denotes the period. Since existing methods calculate using the output sequence, an initial value of the PRNG influences the resultant value of linear complexity. Therefore, a linear complexity is generally given as an estimated value. On the other hand, since a linearization method calculates from the algorithm of the PRNG, it can determine the lower bound of linear complexity.
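    For comparison, the Berlekamp-Massey algorithm referred to above can be sketched in a few lines; it computes the linear complexity of a bit sequence from the output sequence itself. The test sequence below is a 15-bit m-sequence from a 4-stage LFSR, so the expected result is 4.

```python
# Berlekamp-Massey over GF(2): returns the linear complexity L of a bit sequence,
# i.e. the length of the shortest LFSR that generates it.

def berlekamp_massey(bits):
    n = len(bits)
    c = [0] * n; b = [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between the sequence and the current LFSR's prediction
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

seq = [1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # m-sequence of a 4-stage LFSR
print(berlekamp_massey(seq))  # expected: 4
```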

  17. Beam-based alignment technique for the SLC [Stanford Linear Collider] linac

    International Nuclear Information System (INIS)

    Adolphsen, C.E.; Lavine, T.L.; Atwood, W.B.

    1989-03-01

Misalignment of quadrupole magnets and beam position monitors (BPMs) in the linac of the SLAC Linear Collider (SLC) causes the electron and positron beams to be steered off-center in the disk-loaded waveguide accelerator structures. Off-center beams produce wakefields which limit the SLC performance at high beam intensities by causing emittance growth. Here, we present a general method for simultaneously determining quadrupole magnet and BPM offsets using beam trajectory measurements. Results from the application of the method to the SLC linac are described. The alignment precision achieved is approximately 100 μm, which is significantly better than that obtained using optical surveying techniques. 2 refs., 4 figs

  18. Problem Based Learning Technique and Its Effect on Acquisition of Linear Programming Skills by Secondary School Students in Kenya

    Science.gov (United States)

    Nakhanu, Shikuku Beatrice; Musasia, Amadalo Maurice

    2015-01-01

    The topic Linear Programming is included in the compulsory Kenyan secondary school mathematics curriculum at form four. The topic provides skills for determining best outcomes in a given mathematical model involving some linear relationship. This technique has found application in business, economics as well as various engineering fields. Yet many…

  19. Optical asymmetric cryptography based on elliptical polarized light linear truncation and a numerical reconstruction technique.

    Science.gov (United States)

    Lin, Chao; Shen, Xueju; Wang, Zhisong; Zhao, Cheng

    2014-06-20

    We demonstrate a novel optical asymmetric cryptosystem based on the principle of elliptical polarized light linear truncation and a numerical reconstruction technique. The device of an array of linear polarizers is introduced to achieve linear truncation on the spatially resolved elliptical polarization distribution during image encryption. This encoding process can be characterized as confusion-based optical cryptography that involves no Fourier lens and diffusion operation. Based on the Jones matrix formalism, the intensity transmittance for this truncation is deduced to perform elliptical polarized light reconstruction based on two intensity measurements. Use of a quick response code makes the proposed cryptosystem practical, with versatile key sensitivity and fault tolerance. Both simulation and preliminary experimental results that support theoretical analysis are presented. An analysis of the resistance of the proposed method on a known public key attack is also provided.

  20. Optimal technique of linear accelerator–based stereotactic radiosurgery for tumors adjacent to brainstem

    International Nuclear Information System (INIS)

    Chang, Chiou-Shiung; Hwang, Jing-Min; Tai, Po-An; Chang, You-Kang; Wang, Yu-Nong; Shih, Rompin; Chuang, Keh-Shih

    2016-01-01

Stereotactic radiosurgery (SRS) is a well-established technique that is replacing whole-brain irradiation in the treatment of intracranial lesions, which leads to better preservation of brain functions, and therefore a better quality of life for the patient. There are several available forms of linear accelerator (LINAC)-based SRS, and the goal of the present study is to identify which of these techniques is best (as evaluated statistically by dosimetric outcomes) when the target is located adjacent to the brainstem. We collected the records of 17 patients with lesions close to the brainstem who had previously been treated with single-fraction radiosurgery. In all, 5 different lesion catalogs were collected, and the patients were divided into 2 distance groups: 1 consisting of 7 patients with a target-to-brainstem distance of less than 0.5 cm, and the other of 10 patients with a target-to-brainstem distance of ≥ 0.5 and < 1 cm. Comparison was then made among the following 3 types of LINAC-based radiosurgery: dynamic conformal arcs (DCA), intensity-modulated radiosurgery (IMRS), and volumetric modulated arc radiotherapy (VMAT). All techniques included multiple noncoplanar beams or arcs with or without intensity-modulated delivery. The gross tumor volume (GTV) ranged from 0.2 cm^3 to 21.9 cm^3. The dose homogeneity index (HI_ICRU) and conformity index (CI_ICRU) showed no statistically significant difference between techniques. However, the average CI_ICRU = 1.09 ± 0.56 achieved by VMAT was the best of the 3 techniques. Moreover, a notable improvement in gradient index (GI) was observed when VMAT was used (0.74 ± 0.13), and this result was significantly better than those achieved by the 2 other techniques (p < 0.05). For V_4Gy of the brainstem, both VMAT (2.5%) and IMRS (2.7%) were significantly lower than DCA (4.9%), both at the p < 0.05 level. Regarding V_2Gy of normal brain, VMAT plans attained 6.4 ± 5%; this was significantly better

  1. A new technique for generating the isotropic and linearly anisotropic components of elastic and discrete inelastic transfer matrices

    International Nuclear Information System (INIS)

    Garcia, R.D.M.

    1984-01-01

A new technique for generating the isotropic and linearly anisotropic components of elastic and discrete inelastic transfer matrices is proposed. The technique allows certain angular integrals to be expressed in terms of functions that can be computed by recursion relations or series expansions, as an alternative to the use of numerical quadratures. (Author) [pt

  2. Connection between perturbation theory, projection-operator techniques, and statistical linearization for nonlinear systems

    International Nuclear Information System (INIS)

    Budgor, A.B.; West, B.J.

    1978-01-01

    We employ the equivalence between Zwanzig's projection-operator formalism and perturbation theory to demonstrate that the approximate-solution technique of statistical linearization for nonlinear stochastic differential equations corresponds to the lowest-order β truncation in both the consolidated perturbation expansions and in the ''mass operator'' of a renormalized Green's function equation. Other consolidated equations can be obtained by selectively modifying this mass operator. We particularize the results of this paper to the Duffing anharmonic oscillator equation

  3. Study of 1D complex resistivity inversion using digital linear filter technique; Linear filter ho wo mochiita fukusohi teiko no gyakukaisekiho no kento

    Energy Technology Data Exchange (ETDEWEB)

    Sakurai, K; Shima, H [OYO Corp., Tokyo (Japan)

    1996-10-01

    This paper proposes a modeling method of one-dimensional complex resistivity using linear filter technique which has been extended to the complex resistivity. In addition, a numerical test of inversion was conducted using the monitoring results, to discuss the measured frequency band. Linear filter technique is a method by which theoretical potential can be calculated for stratified structures, and it is widely used for the one-dimensional analysis of dc electrical exploration. The modeling can be carried out only using values of complex resistivity without using values of potential. In this study, a bipolar method was employed as a configuration of electrodes. The numerical test of one-dimensional complex resistivity inversion was conducted using the formulated modeling. A three-layered structure model was used as a numerical model. A multi-layer structure with a thickness of 5 m was analyzed on the basis of apparent complex resistivity calculated from the model. From the results of numerical test, it was found that both the chargeability and the time constant agreed well with those of the original model. A trade-off was observed between the chargeability and the time constant at the stage of convergence. 3 refs., 9 figs., 1 tab.

  4. A Trivial Linear Discriminant Function

    Directory of Open Access Journals (Sweden)

    Shuichi Shinmura

    2015-11-01

Full Text Available In this paper, we focus on a new model selection procedure for discriminant analysis. Combining a re-sampling technique with k-fold cross validation, we develop a k-fold cross validation method for small samples. By this breakthrough, we obtain the mean error rate in the validation samples (M2) and the 95% confidence interval (CI) of the discriminant coefficients. Moreover, we propose a model selection procedure in which the model having the minimum M2 is chosen as the best model. We apply this new method and procedure to the pass/fail determination of exam scores. In this case, we fix the constant at 1 for seven linear discriminant functions (LDFs), and several good results were obtained, as follows: (1) M2 of Fisher's LDF is over 4.6% worse than Revised IP-OLDF. (2) A soft-margin SVM with penalty c=1 (SVM1) is worse than the other mathematical programming (MP) based LDFs and logistic regression. (3) The 95% CI of the best discriminant coefficients was obtained. The seven LDFs other than Fisher's LDF are almost the same as a trivial LDF for the linearly separable model. Furthermore, if we choose the median of the coefficients of the seven LDFs other than Fisher's LDF, those are almost the same as the trivial LDF for the linearly separable model.

  5. Linear and Non-Linear Control Techniques Applied to Actively Lubricated Journal Bearings

    DEFF Research Database (Denmark)

    Nicoletti, Rodrigo; Santos, Ilmar

    2003-01-01

The main objectives of actively lubricated bearings are the simultaneous reduction of wear and vibration between rotating and stationary machinery parts. For reducing wear and dissipating vibration energy up to certain limits, one can count on conventional hydrodynamic lubrication. For further reduction of shaft vibrations one can count on the active lubrication action, which is based on injecting pressurised oil into the bearing gap through orifices machined in the bearing sliding surface. The design and efficiency of some linear (PD, PI and PID) and non-linear controllers, applied to the vibration reduction of the unbalance response of a rigid rotor, are investigated; the PD and the non-linear P controllers show better performance for the frequency range of study (0 to 80 Hz). The feasibility of eliminating rotor-bearing instabilities (whirl phenomena) by using active lubrication is also investigated...
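    Since the entry compares PD, PI and PID action, a minimal discrete-time PID sketch may be useful; the gains, sample time and one-line plant stand-in below are hypothetical and unrelated to the paper's bearing model.

```python
# Hedged sketch of a discrete-time PID law: the control signal combines
# proportional, integral and derivative action on the measured error.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: drive a measured shaft displacement toward zero.
pid = PID(kp=2.0, ki=0.5, kd=0.001, dt=0.01)
displacement = 0.8                       # mm, stand-in measurement
for _ in range(5):
    u = pid.update(0.0 - displacement)   # setpoint: zero vibration
    displacement += 0.1 * u              # crude stand-in for the plant response
    print(round(displacement, 4))
```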

  6. Efficacy of repeated 5-fluorouracil needling for failing and failed filtering surgeries based on simple gonioscopic examination

    Directory of Open Access Journals (Sweden)

    Rashad MA

    2012-12-01

Full Text Available Mohammad A Rashad, Ophthalmology Department, Faculty of Medicine, Ain Shams University, Cairo, Egypt. Purpose: To evaluate the success rate of a modified bleb needling technique in eyes with previous glaucoma surgery that had elevated intraocular pressure. Methods: A retrospective study of 24 eyes of 24 patients that underwent repeated bleb needling performed for failing and failed blebs on the slit lamp with 5-fluorouracil (5-FU) injections on demand. This was performed after gonioscopic examination to define levels of filtration block. Results: There was a significant reduction of mean IOP from 36.91 mmHg to 14.73 mmHg at the final follow-up (P < 0.001). The overall success rate was 92%. Conclusion: Repeated needling with adjunctive 5-FU proved a highly effective, safe alternative to revive filtration surgery rather than another medication or surgery. Keywords: bleb, failure, 5-FU, needling, gonioscopy

  7. Failed healing of rotator cuff repair correlates with altered collagenase and gelatinase in supraspinatus and subscapularis tendons.

    Science.gov (United States)

    Robertson, Catherine M; Chen, Christopher T; Shindle, Michael K; Cordasco, Frank A; Rodeo, Scott A; Warren, Russell F

    2012-09-01

    Despite improvements in arthroscopic rotator cuff repair technique and technology, a significant rate of failed tendon healing persists. Improving the biology of rotator cuff repairs may be an important focus to decrease this failure rate. The objective of this study was to determine the mRNA biomarkers and histological characteristics of repaired rotator cuffs that healed or developed persistent defects as determined by postoperative ultrasound. Increased synovial inflammation and tendon degeneration at the time of surgery are correlated with the failed healing of rotator cuff tendons. Case-control study; Level of evidence, 3. Biopsy specimens from the subscapularis tendon, supraspinatus tendon, glenohumeral synovium, and subacromial bursa of 35 patients undergoing arthroscopic rotator cuff repair were taken at the time of surgery. Expression of proinflammatory cytokines, tissue remodeling genes, and angiogenesis factors was evaluated by quantitative real-time polymerase chain reaction. Histological characteristics of the affected tissue were also assessed. Postoperative (>6 months) ultrasound was used to evaluate the healing of the rotator cuff. General linear modeling with selected mRNA biomarkers was used to predict rotator cuff healing. Thirty patients completed all analyses, of which 7 patients (23%) had failed healing of the rotator cuff. No differences in demographic data were found between the defect and healed groups. American Shoulder and Elbow Surgeons shoulder scores collected at baseline and follow-up showed improvement in both groups, but there was no significant difference between groups. Increased expression of matrix metalloproteinase 1 (MMP-1) and MMP-9 was found in the supraspinatus tendon in the defect group versus the healed group (P = .006 and .02, respectively). Similar upregulation of MMP-9 was also found in the subscapularis tendon of the defect group (P = .001), which was consistent with the loss of collagen organization as determined by

  8. Cognitive Levels and Approaches Taken by Students Failing Written Examinations in Mathematics

    Science.gov (United States)

    Roegner, Katherine

    2013-01-01

    A study was conducted at the Technical University Berlin involving students who twice failed the written examination in the first semester course Linear Algebra for Engineers in order to better understand the reasons behind their failure. The study considered student understanding in terms of Bloom's taxonomy and the ways in which students…

  9. Behavioral and macro modeling using piecewise linear techniques

    NARCIS (Netherlands)

    Kruiskamp, M.W.; Leenaerts, D.M.W.; Antao, B.

    1998-01-01

    In this paper we will demonstrate that most digital, analog as well as behavioral components can be described using piecewise linear approximations of their real behavior. This leads to several advantages from the viewpoint of simulation. We will also give a method to store the resulting linear

  10. Technique of Critical Current Density Measurement of Bulk Superconductor with Linear Extrapolation Method

    International Nuclear Information System (INIS)

    Adi, Wisnu Ari; Sukirman, Engkir; Winatapura, Didin S.

    2000-01-01

A technique for critical current density (Jc) measurement of HTc bulk ceramic superconductors has been performed by using linear extrapolation with the four-point probe method. The measurement of critical current density of HTc bulk ceramic superconductors usually causes damage at the contact resistance. In order to decrease this damage factor, we introduce an extrapolation method. The extrapolated data show that the critical current densities Jc for YBCO (123) and BSCCO (2212) at 77 K are 10.85(6) A cm^-2 and 14.46(6) A cm^-2, respectively. This technique is easier and simpler, and the current used is low, so it will not damage the contact resistance of the sample. We expect that the method can give a better solution for bulk superconductor applications. Keywords: superconductor, critical temperature, critical current density

  11. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have, as a result, been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular value structure of the matrix. As a result, the new modified model is expected to provide a better, more stable solution when used to estimate the original signal by minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
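    A small sketch of the regularized least-squares setting discussed above: Tikhonov (ridge) regularization on an ill-conditioned problem, with the regularization parameter swept by hand rather than selected automatically as COPRA does. The data are synthetic.

```python
import numpy as np

# Ridge/Tikhonov regularized least squares: x_hat = (A^T A + lam*I)^-1 A^T b.
def regularized_ls(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 10))
A[:, 9] = A[:, 8] + 1e-6 * rng.normal(size=50)   # near-collinear columns -> ill-conditioned
x_true = rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=50)

for lam in (0.0, 1e-3, 1e-1, 1.0):               # sweep the regularization parameter
    x_hat = regularized_ls(A, b, lam)
    print(f"lam={lam:g}  MSE={np.mean((x_hat - x_true) ** 2):.4f}")
```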

  12. Genetic design of interpolated non-linear controllers for linear plants

    International Nuclear Information System (INIS)

    Ajlouni, N.

    2000-01-01

    The techniques of genetic algorithms are proposed as a means of designing non-linear PID control systems. It is shown that the use of genetic algorithms for this purpose results in highly effective non-linear PID control systems. These results are illustrated by using genetic algorithms to design a non-linear PID control system and contrasting the results with an optimally tuned linear PID controller. (author)

  13. [Fractographic analysis of clinically failed anterior all ceramic crowns].

    Science.gov (United States)

    DU, Qian; Zhou, Min-bo; Zhang, Xin-ping; Zhao, Ke

    2012-04-01

    To identify the site of crack initiation and propagation path of clinically failed all ceramic crowns by fractographic analysis. Three clinically failed anterior IPS Empress II crowns and two anterior In-Ceram alumina crowns were retrieved. Fracture surfaces were examined using both optical stereo and scanning electron microscopy. Fractographic theory and fracture mechanics principles were applied to disclose the damage characteristics and fracture mode. All the crowns failed by cohesive failure within the veneer on the labial surface. Critical crack originated at the incisal contact area and propagated gingivally. Porosity was found within the veneer because of slurry preparation and the sintering of veneer powder. Cohesive failure within the veneer is the main failure mode of all ceramic crown. Veneer becomes vulnerable when flaws are present. To reduce the chances of chipping, multi-point occlusal contacts are recommended, and layering and sintering technique of veneering layer should also be improved.

  14. Application of Linear Quadratic Gaussian and Coefficient Diagram Techniques to Distributed Load Frequency Control of Power Systems

    Directory of Open Access Journals (Sweden)

    Tarek Hassan Mohamed

    2015-12-01

    Full Text Available This paper presented both the linear quadratic Gaussian technique (LQG) and the coefficient diagram method (CDM) as load frequency controllers in a multi-area power system to deal with the problem of variations in system parameters and load demand change. The full states of the system including the area frequency deviation have been estimated using the Kalman filter technique. The efficiency of the proposed control method has been checked using a digital simulation. Simulation results indicated that, with the proposed CDM + LQG technique, the system is robust in the face of parameter uncertainties and load disturbances. A comparison between the proposed technique and other schemes is carried out, confirming the superiority of the proposed CDM + LQG technique.
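
    A hedged sketch of the LQG half of such a controller is given below: an LQR state-feedback gain and a Kalman filter gain are computed from algebraic Riccati equations for a small single-area load-frequency model. The matrices are illustrative placeholders, not the multi-area system of the paper, and the CDM part is not shown.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Small single-area load-frequency model (illustrative values only):
      # state x = [frequency deviation, mechanical power, valve position].
      A = np.array([[-0.05,  6.0,   0.0],
                    [ 0.0,  -2.0,   2.0],
                    [-0.1,   0.0, -12.5]])
      B = np.array([[0.0], [0.0], [12.5]])
      C = np.array([[1.0, 0.0, 0.0]])           # only frequency deviation measured

      # LQG = LQR state feedback combined with a Kalman filter state estimator.
      Q, R = np.diag([10.0, 1.0, 1.0]), np.array([[1.0]])
      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)           # control law u = -K x_hat

      W, V = 0.01 * np.eye(3), np.array([[0.001]])   # process / measurement noise
      S = solve_continuous_are(A.T, C.T, W, V)
      L = S @ C.T @ np.linalg.inv(V)            # Kalman filter gain

      print("LQR gain K:", np.round(K, 3))
      print("Kalman gain L:", np.round(L.ravel(), 3))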

  15. A METHOD FOR SOLVING LINEAR PROGRAMMING PROBLEMS WITH FUZZY PARAMETERS BASED ON MULTIOBJECTIVE LINEAR PROGRAMMING TECHNIQUE

    OpenAIRE

    M. ZANGIABADI; H. R. MALEKI

    2007-01-01

    In the real-world optimization problems, coefficients of the objective function are not known precisely and can be interpreted as fuzzy numbers. In this paper we define the concepts of optimality for linear programming problems with fuzzy parameters based on those for multiobjective linear programming problems. Then by using the concept of comparison of fuzzy numbers, we transform a linear programming problem with fuzzy parameters to a multiobjective linear programming problem. To this end, w...

  16. An improved exploratory search technique for pure integer linear programming problems

    Science.gov (United States)

    Fogle, F. R.

    1990-01-01

    The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
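
    The following Python sketch mimics the general recipe described above on a two-variable toy instance: solve the continuous relaxation, round to a feasible integer point, then perform a Hooke-and-Jeeves-style plus/minus-one exploratory search. The instance data are invented, and the search is far simpler than the documented procedure.

      import itertools
      import numpy as np
      from scipy.optimize import linprog

      # Toy instance: maximize c.x subject to A x <= b, x >= 0 and integer.
      c = np.array([5.0, 4.0])
      A = np.array([[6.0, 4.0], [1.0, 2.0]])
      b = np.array([24.0, 6.0])

      # 1) Continuous relaxation via the simplex-family solver (linprog minimizes).
      relax = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(c), method="highs")
      x = np.floor(relax.x)                      # simple rounding to an integer start

      def feasible(v):
          return np.all(A @ v <= b + 1e-9) and np.all(v >= -1e-9)

      # 2) Exploratory +/-1 neighborhood search around the current point.
      improved = True
      while improved:
          improved = False
          for step in itertools.product((-1, 0, 1), repeat=len(c)):
              cand = x + np.array(step)
              if feasible(cand) and c @ cand > c @ x:
                  x, improved = cand, True
      print("integer solution:", x, "objective value:", c @ x)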

  17. Linear triangular optimization technique and pricing scheme in residential energy management systems

    Science.gov (United States)

    Anees, Amir; Hussain, Iqtadar; AlKhaldi, Ali Hussain; Aslam, Muhammad

    2018-06-01

    This paper presents a new linear optimization algorithm for power scheduling of electric appliances. The proposed system is applied in a smart home community, in which the community controller acts as a virtual distribution company for the end consumers. We also present a pricing scheme between the community controller and its residential users based on real-time pricing and likely block rates. The results of the proposed optimization algorithm demonstrate that, by applying the anticipated technique, not only can end users minimise their consumption cost, but the power peak-to-average ratio can also be reduced, which is beneficial for the utilities as well.
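
    As a minimal sketch of the cost-minimizing core of such a scheduler, the snippet below schedules a single deferrable appliance over 24 hours with scipy's LP solver under an assumed hourly price vector; the community controller, block rates, and peak-reduction logic of the paper are not modeled.

      import numpy as np
      from scipy.optimize import linprog

      # One deferrable appliance over 24 hours: it must draw 6 kWh in total and at
      # most 1.5 kW in any hour, under an assumed hourly real-time price signal.
      price = 0.10 + 0.08 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))   # $/kWh
      total_energy, p_max = 6.0, 1.5

      res = linprog(price,                                   # minimize total cost
                    A_eq=np.ones((1, 24)), b_eq=[total_energy],
                    bounds=[(0.0, p_max)] * 24, method="highs")
      schedule = res.x
      print("hourly schedule (kW):", np.round(schedule, 2))
      print("total cost ($): %.3f" % (price @ schedule))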

  18. Bayesian techniques for fatigue life prediction and for inference in linear time dependent PDEs

    KAUST Repository

    Scavino, Marco

    2016-01-08

    In this talk we introduce first the main characteristics of a systematic statistical approach to model calibration, model selection and model ranking when stress-life data are drawn from a collection of records of fatigue experiments. Focusing on Bayesian prediction assessment, we consider fatigue-limit models and random fatigue-limit models under different a priori assumptions. In the second part of the talk, we present a hierarchical Bayesian technique for the inference of the coefficients of time dependent linear PDEs, under the assumption that noisy measurements are available in both the interior of a domain of interest and from boundary conditions. We present a computational technique based on the marginalization of the contribution of the boundary parameters and apply it to inverse heat conduction problems.

  19. Detecting failed elements on phased array ultrasound transducers using the Edinburgh Pipe Phantom

    Science.gov (United States)

    Inglis, Scott; Pye, Stephen D

    2016-01-01

    Aims Imaging faults with ultrasound transducers are common. Failed elements on linear and curvilinear array transducers can usually be detected with a simple image uniformity or 'paperclip' test. However, this method is less effective for phased array transducers, commonly used in cardiac imaging. The aim of this study was to assess whether the presence of failed elements could be detected through measurement of the resolution integral (R) using the Edinburgh Pipe Phantom. Methods A 128-element paediatric phased array transducer was studied. Failed elements were simulated using layered polyvinyl chloride (PVC) tape as an attenuator, and measurements of the resolution integral were carried out for several widths of attenuator. Results All widths of attenuator greater than 0.5 mm resulted in a statistically significant reduction in resolution integral and low contrast penetration measurements compared to baseline, suggesting that these measurements can be used alongside existing tests to detect failed elements on phased array transducers. Particularly encouraging is the result for low contrast penetration, as this is a quick and simple measurement to make and can be performed with many different test objects, thus enabling 'in-the-field' checks. PMID:27482276

  20. Is additional shielding required for the linear accelerator room when modern treatment techniques are intensively used

    International Nuclear Information System (INIS)

    Miller, Albert V.; Atkocius, Vydmantas; Aleknavicius, Eduardas

    2001-01-01

    Full text: Introduction - When a new linear accelerator is to be installed in a radiotherapy department, the responsible personnel should perform the necessary estimations and calculations of the protective barriers for the accelerator treatment room. These methods are described in detail in the literature. However, if modern treatment techniques are planned to be used intensively on this machine, additional concern arises regarding the adequacy of these calculations. The new Saturne-43 linear accelerator with three photon energies of 8, 15 and 25 MV recently installed at our department was planned to be used for conventional treatment techniques as well as for conformal and total body treatments. The method of conformal therapy generally employs more small fields per treated patient than conventional techniques. This leads to the use of more linear accelerator monitor units for the average treatment. It was estimated that the 'beam on' time of the accelerator needed to deliver the same dose to the tumor is up to 3 times longer than for conventional methods. The total body technique contributes extra 'beam on' time because of the extended distance to the dose prescription point. Altogether, intensive clinical use of these modern techniques will noticeably increase the 'beam on' time of the accelerator and raises the question of the validity of the traditionally calculated shielding of the treatment room. Materials and methods - IAEA-TECDOC-1040 and NCRP Report No 49 suggest considering three main components incident on the protective barriers: direct radiation, scatter radiation and leakage radiation. The formulas for these components are similar, and dose equivalent limits are proportional to the workload. For conventional treatments, the workloads of direct, scattered and leakage radiation are equal and are calculated by dividing the total dose prescribed to the machine isocenter (for all patients treated per week) by the average tissue maximum ratio. These workloads for conformal and TBI
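
    The workload bookkeeping described above can be illustrated with a few lines of arithmetic (all numbers below are assumptions, not site data): the weekly dose delivered to the isocentre is divided by an average tissue-maximum ratio, and the result is scaled by the estimated factor of up to 3 for intensive conformal use.

      # All numbers below are assumptions, not site data.
      patients_per_week = 40
      dose_per_patient_gy = 2.0        # assumed dose per fraction at the isocentre
      average_tmr = 0.7                # assumed average tissue-maximum ratio

      conventional_workload = patients_per_week * dose_per_patient_gy / average_tmr
      conformal_workload = 3.0 * conventional_workload   # up to ~3x 'beam on' time

      print("conventional workload: %.0f Gy/week at the isocentre" % conventional_workload)
      print("upper bound with intensive conformal use: %.0f Gy/week" % conformal_workload)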

  1. Sensitivity Analysis of Multicarrier Digital Pre-distortion/ Equalization Techniques for Non-linear Satellite Channels

    OpenAIRE

    Piazza, Roberto; Shankar, Bhavani; Zenteno, Efrain; Ronnow, Daniel; Liolis, Kostantinos; Zimmer, Frank; Grasslin, Michael; Berheide, Tobias; Cioni, Stefano

    2013-01-01

    On-board joint power amplification of multiple-carrier DVB-S2 signals using a single High-Power Amplifier (HPA) is an emerging configuration that aims to reduce flight hardware and weight. However, effects specific to such a scenario degrade power and spectral efficiencies with increased Adjacent Channel Interference caused by non-linear characteristic of the HPA and power efficiency loss due to the increased Peak to Average Power Ratio (PAPR). The paper studies signal processing techniques ...

  2. Non-linear optical techniques and optical properties of condensed molecular systems

    Science.gov (United States)

    Citroni, Margherita

    2013-06-01

    Structure, dynamics, and optical properties of molecular systems can be largely modified by the applied pressure, with remarkable consequences on their chemical stability. Several examples of selective reactions yielding technologically attractive products can be cited, which are particularly efficient when photochemical effects are exploited in conjunction with the structural conditions attained at high density. Non-linear optical techniques are a basic tool to unveil key aspects of the chemical reactivity and dynamic properties of molecules. Their application to high-pressure samples is experimentally challenging, mainly because of the small sample dimensions and of the non-linear effects generated in the anvil materials. In this talk I will present results on the electronic spectra of several aromatic crystals obtained through two-photon induced fluorescence and two-photon excitation profiles measured as a function of pressure (typically up to about 25 GPa), and discuss the relationship between the pressure-induced modifications of the electronic structure and the chemical reactivity at high pressure. I will also present the first successful pump-probe infrared measurement performed as a function of pressure on a condensed molecular system. The system under examination is liquid water, in a sapphire anvil cell, up to 1 GPa along isotherms at 298 and 363 K. These measurements give a new enlightening insight into the dynamical properties of low- and high-density water allowing a definition of the two structures.

  3. Krylov subspace method with communication avoiding technique for linear system obtained from electromagnetic analysis

    International Nuclear Information System (INIS)

    Ikuno, Soichiro; Chen, Gong; Yamamoto, Susumu; Itoh, Taku; Abe, Kuniyoshi; Nakamura, Hiroaki

    2016-01-01

    Krylov subspace methods and the variable preconditioned Krylov subspace method with a communication avoiding technique for a linear system obtained from electromagnetic analysis are numerically investigated. In the k-skip Krylov method, the inner product calculations are expanded in the Krylov basis, and the inner products are transformed into scalar operations. The k-skip CG method is applied as the inner-loop solver of variable preconditioned Krylov subspace methods, and the converged solution of the electromagnetic problem is obtained using the method. (author)
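
    For orientation, a plain (non-communication-avoiding) conjugate gradient iteration is sketched below in Python; the k-skip variant mentioned above reorganizes the inner-product calculations, but the recurrence shown here is the underlying method. The test matrix is a random symmetric positive definite placeholder.

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          """Textbook CG for a symmetric positive definite matrix A."""
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs_old = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs_old / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      # Random symmetric positive definite test system.
      rng = np.random.default_rng(1)
      M = rng.standard_normal((100, 100))
      A = M @ M.T + 100.0 * np.eye(100)
      b = rng.standard_normal(100)
      x = conjugate_gradient(A, b)
      print("residual norm:", np.linalg.norm(b - A @ x))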

  4. The art of linear electronics

    CERN Document Server

    Hood, John Linsley

    2013-01-01

    The Art of Linear Electronics presents the principal aspects of linear electronics and techniques in linear electronic circuit design. The book provides a wide range of information on the elucidation of the methods and techniques in the design of linear electronic circuits. The text discusses such topics as electronic component symbols and circuit drawing; passive and active semiconductor components; DC and low frequency amplifiers; and the basic effects of feedback. Subjects on frequency response modifying circuits and filters; audio amplifiers; low frequency oscillators and waveform generato

  5. A minimax technique for time-domain design of preset digital equalizers using linear programming

    Science.gov (United States)

    Vaughn, G. L.; Houts, R. C.

    1975-01-01

    A linear programming technique is presented for the design of a preset finite-impulse response (FIR) digital filter to equalize the intersymbol interference (ISI) present in a baseband channel with known impulse response. A minimax technique is used which minimizes the maximum absolute error between the actual received waveform and a specified raised-cosine waveform. Transversal and frequency-sampling FIR digital filters are compared as to the accuracy of the approximation, the resultant ISI and the transmitted energy required. The transversal designs typically have slightly better waveform accuracy for a given distortion; however, the frequency-sampling equalizer uses fewer multipliers and requires less transmitted energy. A restricted transversal design is shown to use the least number of multipliers at the cost of a significant increase in energy and loss of waveform accuracy at the receiver.
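
    The minimax design can be posed directly as a linear program: minimize a bound t on the absolute error between the combined channel-equalizer response and a desired waveform. The Python sketch below does this for a short transversal equalizer with scipy's LP solver; the channel taps and the delayed-impulse target are invented stand-ins (a sampled raised-cosine target would be handled identically).

      import numpy as np
      from scipy.linalg import toeplitz
      from scipy.optimize import linprog

      h = np.array([0.05, 0.6, 1.0, 0.4, 0.1])   # assumed channel impulse response
      L = 11                                      # number of equalizer taps
      n_out = len(h) + L - 1
      C = toeplitz(np.r_[h, np.zeros(L - 1)], np.r_[h[0], np.zeros(L - 1)])
      d = np.zeros(n_out); d[n_out // 2] = 1.0    # desired (delayed-impulse) waveform

      # Minimax design as an LP over variables [w, t]: minimize t subject to
      #   C w - d <= t*1  and  -(C w - d) <= t*1.
      A_ub = np.block([[ C, -np.ones((n_out, 1))],
                       [-C, -np.ones((n_out, 1))]])
      b_ub = np.r_[d, -d]
      cost = np.r_[np.zeros(L), 1.0]
      res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(None, None)] * L + [(0, None)], method="highs")
      w, t = res.x[:L], res.x[-1]
      print("equalizer taps:", np.round(w, 4))
      print("maximum absolute waveform error:", round(t, 6))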

  6. Fail marketing, marketingová technika plánované krizové komunikace

    OpenAIRE

    Kolek, Ondřej

    2014-01-01

    The aim of the bachelor thesis "Fail marketing, marketing technique of planned crisis communication" is a description of the theoretical foundation of the working mechanics this marketing technique possesses, with the use of case studies, and the confirmation or disproof of the technique's existence. The theoretical basis consists of a detailed analysis of crisis communication, spin doctoring and customer psychology. Communication activities of McDonald's ČR spol. s. r. o. and Domino's Pizza, Inc., which caused or used n...

  7. On-chip power-combining techniques for watt-level linear power amplifiers in 0.18 μm CMOS

    International Nuclear Information System (INIS)

    Ren Zhixiong; Zhang Kefeng; Liu Lanqi; Li Cong; Chen Xiaofei; Liu Dongsheng; Liu Zhenglin; Zou Xuecheng

    2015-01-01

    Three linear CMOS power amplifiers (PAs) with high output power (more than watt-level output power) for high data-rate mobile applications are introduced. To realize watt-level output power, there are two 2.4 GHz PAs using an on-chip parallel combining transformer (PCT) and one 1.95 GHz PA using an on-chip series combining transformer (SCT) to combine output signals of multiple power stages. Furthermore, some linearization techniques including adaptive bias, diode linearizer, multi-gated transistors (MGTR) and the second harmonic control are applied in these PAs. Using the proposed power combiner, these three PAs are designed and fabricated in TSMC 0.18 μm RFCMOS process. According to the measurement results, the proposed two linear 2.4 GHz PAs achieve a gain of 33.2 dB and 34.3 dB, a maximum output power of 30.7 dBm and 29.4 dBm, with 29% and 31.3% of peak PAE, respectively. According to the simulation results, the presented linear 1.95 GHz PA achieves a gain of 37.5 dB, a maximum output power of 34.3 dBm with 36.3% of peak PAE. (paper)

  8. Railway Crossing Risk Area Detection Using Linear Regression and Terrain Drop Compensation Techniques

    Science.gov (United States)

    Chen, Wen-Yuan; Wang, Mei; Fu, Zhou-Xing

    2014-01-01

    Most railway accidents happen at railway crossings. Therefore, how to detect humans or objects present in the risk area of a railway crossing and thus prevent accidents are important tasks. In this paper, three strategies are used to detect the risk area of a railway crossing: (1) we use a terrain drop compensation (TDC) technique to solve the problem of the concavity of railway crossings; (2) we use a linear regression technique to predict the position and length of an object from image processing; (3) we have developed a novel strategy called calculating local maximum Y-coordinate object points (CLMYOP) to obtain the ground points of the object. In addition, image preprocessing is also applied to filter out the noise and successfully improve the object detection. From the experimental results, it is demonstrated that our scheme is an effective and corrective method for the detection of railway crossing risk areas. PMID:24936948

  9. Railway Crossing Risk Area Detection Using Linear Regression and Terrain Drop Compensation Techniques

    Directory of Open Access Journals (Sweden)

    Wen-Yuan Chen

    2014-06-01

    Full Text Available Most railway accidents happen at railway crossings. Therefore, how to detect humans or objects present in the risk area of a railway crossing and thus prevent accidents are important tasks. In this paper, three strategies are used to detect the risk area of a railway crossing: (1) we use a terrain drop compensation (TDC) technique to solve the problem of the concavity of railway crossings; (2) we use a linear regression technique to predict the position and length of an object from image processing; (3) we have developed a novel strategy called calculating local maximum Y-coordinate object points (CLMYOP) to obtain the ground points of the object. In addition, image preprocessing is also applied to filter out the noise and successfully improve the object detection. From the experimental results, it is demonstrated that our scheme is an effective and corrective method for the detection of railway crossing risk areas.

  10. Advanced statistics: linear regression, part II: multiple linear regression.

    Science.gov (United States)

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
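
    A minimal multiple-regression example in the spirit of the article is sketched below: an outcome is modeled on two hypothetical predictors plus their interaction, with coefficients obtained by ordinary least squares on a design matrix; the data are synthetic.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      age = rng.uniform(20, 80, n)                 # hypothetical predictor 1
      dose = rng.uniform(0, 10, n)                 # hypothetical predictor 2
      y = 5.0 + 0.3 * age - 1.2 * dose + 0.02 * age * dose + rng.normal(0.0, 2.0, n)

      # Design matrix with intercept and interaction term; ordinary least squares.
      X = np.column_stack([np.ones(n), age, dose, age * dose])
      beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
      y_hat = X @ beta
      r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
      print("coefficients (intercept, age, dose, age*dose):", np.round(beta, 3))
      print("R^2: %.3f" % r2)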

  11. Modelling female fertility traits in beef cattle using linear and non-linear models.

    Science.gov (United States)

    Naya, H; Peñagaricano, F; Urioste, J I

    2017-06-01

    Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed better fit than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h2 > 0.23 and r > 0.24 for the non-linear models, with lower values for their linear counterparts). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.

  12. MAGNETIC RESONANCE IMAGING IN FAILED BACK SURGERY SYNDROME

    OpenAIRE

    SEN, KK; SINGH, AMARJIT

    1999-01-01

    The failed back surgery syndrome (FBSS) is a severe, long-lasting, disabling and relatively frequent (5-10%) complication of lumbosacral spine surgery. Wrong level surgery, inadequate surgical techniques, vertebral instability, recurrent disc herniation, and lumbosacral fibrosis are the most frequent causes of FBSS. The results after repeated surgery on recurrent disc herniations are comparable to those after the first intervention, whereas repeated surgery for fibrosis gives only 30-35% succ...

  13. Usefulness of combined percutaneous-endoscopic rendezvous techniques after failed therapeutic endoscopic retrograde cholangiography in the era of endoscopic ultrasound guided rendezvous.

    Science.gov (United States)

    Yang, Min Jae; Kim, Jin Hong; Hwang, Jae Chul; Yoo, Byung Moo; Kim, Soon Sun; Lim, Sun Gyo; Won, Je Hwan

    2017-12-01

    The rendezvous approach is a salvage technique after failure of endoscopic retrograde cholangiography (ERC). In certain circumstances, percutaneous-endoscopic rendezvous (PE-RV) is preferred, and endoscopic ultrasound-guided rendezvous (EUS-RV) is difficult to perform. We aimed to evaluate PE-RV outcomes, describe the PE-RV techniques, and identify potential indications for PE-RV over EUS-RV. Retrospective analysis was conducted of a prospectively designed ERC database between January 2005 and December 2016 at a tertiary referral center, including cases where PE-RV was used as a salvage procedure after ERC failure. During the study period, PE-RV was performed in 42 cases after failed therapeutic ERC; 15 had a surgically altered enteric anatomy. The technical success rate of PE-RV was 92.9% (39/42), with a therapeutic success rate of 88.1% (37/42). Potential indications for PE-RV over EUS-RV were identified in 23 cases, and either PE-RV or EUS-RV could have effectively been used in 19 cases. Endoscopic bile duct access was successfully achieved with PE-RV in 39 cases with an accessible biliary orifice, using one of the PE-RV cannulation techniques (classic, n = 11; parallel, n = 19; and adjunctive maneuvers, n = 9). PE-RV uses a unique technology and has clinical indications that distinguish it from EUS-RV. Therefore, PE-RV can still be considered a useful salvage technique for the treatment of biliary obstruction after ERC failure.

  14. Pulsed-laser time-resolved thermal mirror technique in low-absorbance homogeneous linear elastic materials.

    Science.gov (United States)

    Lukasievicz, Gustavo V B; Astrath, Nelson G C; Malacarne, Luis C; Herculano, Leandro S; Zanuto, Vitor S; Baesso, Mauro L; Bialkowski, Stephen E

    2013-10-01

    A theoretical model for a time-resolved photothermal mirror technique using pulsed-laser excitation was developed for low absorption samples. Analytical solutions to the temperature and thermoelastic deformation equations are found for three characteristic pulse profiles and are compared to finite element analysis methods results for finite samples. An analytical expression for the intensity of the center of a continuous probe laser at the detector plane is derived using the Fresnel diffraction theory, which allows modeling of experimental results. Experiments are performed in optical glasses, and the models are fitted to the data. The parameters of the fit are in good agreement with previous literature data for absorption, thermal diffusion, and thermal expansion of the materials tested. The combined modeling and experimental techniques are shown to be useful for quantitative determination of the physical properties of low absorption homogeneous linear elastic material samples.

  15. Evolution Is Linear: Debunking Life's Little Joke.

    Science.gov (United States)

    Jenner, Ronald A

    2018-01-01

    Linear depictions of the evolutionary process are ubiquitous in popular culture, but linear evolutionary imagery is strongly rejected by scientists who argue that evolution branches. This point is frequently illustrated by saying that we didn't evolve from monkeys, but that we are related to them as collateral relatives. Yet, we did evolve from monkeys, but our monkey ancestors are extinct, not extant. Influential voices, such as the late Stephen Jay Gould, have misled audiences for decades by falsely portraying the linear and branching aspects of evolution to be in conflict, and by failing to distinguish between the legitimate linearity of evolutionary descent, and the branching relationships among collateral relatives that result when lineages of ancestors diverge. The purpose of this article is to correct the widespread misplaced rejection of linear evolutionary imagery, and to re-emphasize the basic truth that the evolutionary process is fundamentally linear. © 2017 WILEY Periodicals, Inc.

  16. Method for repairing failed fuel

    International Nuclear Information System (INIS)

    Shakudo, Taketomi.

    1986-01-01

    Purpose: To repair fuel elements that became failed during burnup in a reactor or during handling. Method: After the surface in the vicinity of a failed part of a fuel element is cleaned, a socket made of a shape-memory alloy having a ring form or a horseshoe form made by cutting a part of the ring form is inserted into the failed position according to the position of the failed fuel element. The shape memory alloy socket remembers a slightly larger inside diameter in its original phase (high-temperature side) than the outside diameter of the cladding tube and also a slightly larger inside diameter of the socket in the martensite phase (low-temperature side) than the outside diameter of the cladding tube, such that the socket can easily be inserted into the failed position. The socket, inserted into the failed part of the cladding tube, is heated by a heating jig. The socket recovers the original phase, and the shape also tends to recover a smaller diameter than the outside diameter of the cladding tube that has been remembered, and accordingly the failed part of the cladding tube is fastened with a great force and the failed part is fully closed with the socket, thus keeping radioactive materials from going out. (Horiuchi, T.)

  17. An improved direct feedback linearization technique for transient stability enhancement and voltage regulation of power generators

    Energy Technology Data Exchange (ETDEWEB)

    Kenne, Godpromesse [Laboratoire d' Automatique et d' Informatique Appliquee (LAIA), Departement de Genie Electrique, Universite de Dschang, B.P. 134 Bandjoun, Cameroun; Goma, Raphael; Lamnabhi-Lagarrigue, Francoise [Laboratoire des Signaux et Systemes (L2S), CNRS-SUPELEC, Universite Paris XI, 3 Rue Joliot Curie, 91192 Gif-sur-Yvette (France); Nkwawo, Homere [Departement GEII, Universite Paris XIII, IUT Villetaneuse, 99 Avenue Jean Baptiste Clement, 93430 Villetaneuse (France); Arzande, Amir; Vannier, Jean Claude [Departement Energie, Ecole Superieure d' Electricite-SUPELEC, 3 Rue Joliot Curie, 91192 Gif-sur-Yvette (France)

    2010-09-15

    In this paper, a simple improved direct feedback linearization design method for transient stability and voltage regulation of power systems is discussed. Starting with the classical direct feedback linearization technique currently applied to power systems, an adaptive nonlinear excitation control of synchronous generators is proposed, which is new and effective for engineering. The power angle and mechanical power input are not assumed to be available. The proposed method is based on a standard third-order model of a synchronous generator which requires only information about the physical available measurements of angular speed, active electric power and generator terminal voltage. Experimental results of a practical power system show that fast response, robustness, damping, steady-state and transient stability as well as voltage regulation are all achieved satisfactorily. (author)

  18. Multivariate mixed linear model analysis of longitudinal data: an information-rich statistical technique for analyzing disease resistance data

    Science.gov (United States)

    The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...

  19. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages and problems get easily out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.
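
    As a much-simplified sketch of the sampling side of this approach, the snippet below estimates the expected second-stage (recourse) cost of a toy two-stage problem by plain Monte Carlo over sampled scenarios; Benders cuts and the importance-sampling re-weighting of the report are not implemented.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(0)
      c_first, c_recourse = 1.0, 3.0     # first-stage and recourse unit costs (toy)

      def expected_cost(x, n_samples=500):
          """Plain Monte Carlo estimate of first-stage cost plus expected recourse
          cost for a newsvendor-style toy problem with normally distributed demand."""
          demands = rng.normal(100.0, 20.0, n_samples)
          total = 0.0
          for d in demands:
              # Second stage: min c_recourse * y  subject to  y >= d - x,  y >= 0.
              res = linprog([c_recourse], A_ub=[[-1.0]], b_ub=[-(d - x)],
                            bounds=[(0, None)], method="highs")
              total += res.fun
          return c_first * x + total / n_samples

      for x in (80.0, 100.0, 120.0):
          print("first-stage decision %5.1f -> estimated total cost %.2f" % (x, expected_cost(x)))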

  20. Use of linear model analysis techniques in the evaluation of radiation effects on the life span of the beagle

    International Nuclear Information System (INIS)

    Angleton, G.M.; Lee, A.C.; Benjamin, S.A.

    1986-01-01

    The dependency of the beagle-dog life span on level of and age at exposure to 60Co gamma radiation was analyzed by several techniques; one of these methods was linear model analysis. Beagles of both sexes were given single, bilateral exposures at 8, 28, or 55 days postcoitus (dpc) or at 2, 70, or 365 days postpartum (dpp). Dogs exposed at 8, 28, or 55 dpc or at 2 dpp received 0, 20, or 100 R, whereas those exposed at 70 or 365 dpp received 0 or 100 R. Beagles were designated initially either as sacrifice or as life-span animals. All deaths of life-span study animals were classified as spontaneous, hence for this group the mean age of death was a quantitative response that can be analyzed by linear model analysis techniques. Such analyses for each age group were performed, taking into account differences due to sex, linear and quadratic dependency on dose, and interaction between sex and dose. At this time most of the animals have reached 11 years of age. No significant effects of radiation on mean life span have been detected. 6 refs., 3 figs., 3 tabs

  1. Linear accelerator-based intensity-modulated total marrow irradiation technique for treatment of hematologic malignancies: a dosimetric feasibility study.

    Science.gov (United States)

    Yeginer, Mete; Roeske, John C; Radosevich, James A; Aydogan, Bulent

    2011-03-15

    To investigate the dosimetric feasibility of linear accelerator-based intensity-modulated total marrow irradiation (IM-TMI) in patients with hematologic malignancies. Linear accelerator-based IM-TMI treatment planning was performed for 9 patients using the Eclipse treatment planning system. The planning target volume (PTV) consisted of all the bones in the body from the head to the mid-femur, except for the forearms and hands. Organs at risk (OAR) to be spared included the lungs, heart, liver, kidneys, brain, eyes, oral cavity, and bowel and were contoured by a physician on the axial computed tomography images. The three-isocenter technique previously developed by our group was used for treatment planning. We developed and used a common dose-volume objective method to reduce the planning time and planner subjectivity in the treatment planning process. PTV coverage of 95% with 99% of the prescribed dose of 12 Gy was achieved for all nine patients. The average dose reduction in OAR ranged from 19% for the lungs to 68% for the lenses. The common dose-volume objective method decreased the planning time by an average of 35% and reduced the inter- and intra-planner subjectivity. The results from the present study suggest that the linear accelerator-based IM-TMI technique is clinically feasible. We have demonstrated that linear accelerator-based IM-TMI plans with good PTV coverage and improved OAR sparing can be obtained within a clinically reasonable time using the common dose-volume objective method proposed in the present study. Copyright © 2011. Published by Elsevier Inc.

  2. Forecasting Volatility of Dhaka Stock Exchange: Linear Vs Non-linear models

    Directory of Open Access Journals (Sweden)

    Masudul Islam

    2012-10-01

    Full Text Available Prior information about a financial market is very essential for investors deciding whether to purchase shares from the stock market, which can strengthen the economy. The study examines the relative ability of various models to forecast the future volatility of daily stock indexes. The forecasting models employed range from simple to relatively complex ARCH-class models. It is found that among linear models of stock index volatility, the moving average model ranks first using the root mean square error, mean absolute percent error, Theil-U and Linex loss function criteria. We also examine five nonlinear models: ARCH, GARCH, EGARCH, TGARCH and restricted GARCH models. We find that the nonlinear models fail to dominate the linear models under the different error measurement criteria, and the moving average model appears to be the best. We then forecast the next two months of future stock index price volatility with the best (moving average) model.
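
    A hedged sketch of the winning moving-average approach is shown below: realized volatility is proxied by a rolling standard deviation of log returns, the previous window serves as the forecast, and RMSE/MAPE are computed; the price series is synthetic, and ARCH-class alternatives would require a dedicated estimation package.

      import numpy as np
      import pandas as pd

      # Synthetic daily price series standing in for a stock index.
      rng = np.random.default_rng(0)
      prices = pd.Series(100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 600))))
      returns = np.log(prices).diff().dropna()

      window = 22                                      # roughly one trading month
      realized_vol = returns.rolling(window).std()     # volatility proxy
      ma_forecast = realized_vol.shift(1)              # moving-average style forecast

      valid = realized_vol.notna() & ma_forecast.notna()
      err = realized_vol[valid] - ma_forecast[valid]
      rmse = float(np.sqrt(np.mean(err ** 2)))
      mape = float(np.mean(np.abs(err / realized_vol[valid]))) * 100.0
      print("RMSE: %.5f   MAPE: %.2f%%" % (rmse, mape))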

  3. Lysis solution composition and non-linear dose-response to ionizing radiation in the non-denaturing DNA filter elution technique

    International Nuclear Information System (INIS)

    Radford, I.R.

    1990-01-01

    The suggestion by Okayasu and Iliakis (1989) that the non-linear dose-response curve, obtained with the non-denaturing filter elution technique for mammalian cells exposed to low-LET radiation, is the result of a technical artefact, was not confirmed. (author)

  4. Engaging Future Failing States

    Science.gov (United States)

    2011-03-23

    military missions in the Middle East, the Balkans, Africa, Asia, and South America. There is an increasing proliferation of failed and failing states...disparity, overpopulation, food security, health services availability, migration pressures, environmental degradation, personal and community

  5. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kookjin [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science; Carlberg, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Elman, Howard C. [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science and Inst. for Advanced Computer Studies

    2018-03-29

    Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.

  6. Linear b-gauges for open string fields

    International Nuclear Information System (INIS)

    Kiermaier, Michael; Zwiebach, Barton; Sen, Ashoke

    2008-01-01

    Motivated by Schnabl's gauge choice, we explore open string perturbation theory in gauges where a linear combination of antighost oscillators annihilates the string field. We find that in these linear b-gauges different gauge conditions are needed at different ghost numbers. We derive the full propagator and prove the formal properties which guarantee that the Feynman diagrams reproduce the correct on-shell amplitudes. We find that these properties can fail due to the need to regularize the propagator, and identify a large class of linear b-gauges for which they hold rigorously. In these gauges the propagator has a non-anomalous Schwinger representation and builds Riemann surfaces by adding strip-like domains. Projector-based gauges, like Schnabl's, are not in this class of gauges but we construct a family of regular linear b-gauges which interpolate between Siegel gauge and Schnabl gauge

  7. Why conventional detection methods fail in identifying the existence of contamination events.

    Science.gov (United States)

    Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han

    2016-04-15

    Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variation. This analysis revealed why the conventional MED and LPF methods failed to identify existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Monitoring of Non-Linear Ground Movement in an Open Pit Iron Mine Based on an Integration of Advanced DInSAR Techniques Using TerraSAR-X Data

    Directory of Open Access Journals (Sweden)

    José Claudio Mura

    2016-05-01

    Full Text Available This work presents an investigation to determine ground deformation based on an integration of DInSAR Time-Series (DTS) and Persistent Scatterer Interferometry (PSI) techniques aiming at detecting high rates of linear and non-linear ground movement. The combined techniques were applied in an open pit iron mine located in Carajás Mineral Province (Brazilian Amazon region), using a set of 33 TerraSAR-X-1 images acquired from March 2012 to April 2013 when, due to a different deformation behavior during the dry and wet seasons in the Amazon region, a non-linear deformation was detected. The DTS analysis was performed on a stack of multi-look unwrapped interferograms using an extension of the SVD (Singular Value Decomposition), where a set of additional weighted constraints on the acceleration of the displacement was incorporated to control the smoothness of the time-series solutions, whose objective was to correct the atmospheric phase artifacts. The height errors and the deformation history provided by the DTS technique were used as prior information to perform the PSI analysis. This procedure improved the capability of the PSI technique to detect non-linear movement as well as increased the point density of the final results. The results of the combined techniques are presented and compared with total station/prisms and ground-based radar (GBR) measurements.

  9. Linear programming

    CERN Document Server

    Solow, Daniel

    2014-01-01

    This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.

  10. Stability of multi-objective bi-level linear programming problems under fuzziness

    Directory of Open Access Journals (Sweden)

    Abo-Sinna Mahmoud A.

    2013-01-01

    Full Text Available This paper deals with multi-objective bi-level linear programming problems under a fuzzy environment. In the proposed method, tentative solutions are obtained and evaluated by using the partial information on the preferences of the decision-makers at each level. The existing results concerning the qualitative analysis of some basic notions in parametric linear programming problems are reformulated to study the stability of multi-objective bi-level linear programming problems. An algorithm for obtaining any subset of the parametric space, which has the same corresponding Pareto optimal solution, is presented. Also, this paper establishes a model for the supply-demand interaction in the age of electronic commerce (EC). First of all, the study uses the individual objectives of both parties as the foundation of the supply-demand interaction. Subsequently, it divides the interaction, in the age of electronic commerce, into the following two classifications: (i) market transactions, with the primary focus on the supply-demand relationship in the marketplace; and (ii) information service, with the primary focus on the provider and the user of information service. By applying the bi-level programming technique to the interaction process, the study develops an analytical process to explain how supply-demand interaction achieves a compromise or why the process fails. Finally, a numerical example of information service is provided for the sake of illustration.

  11. On the solution of two-point linear differential eigenvalue problems. [numerical technique with application to Orr-Sommerfeld equation

    Science.gov (United States)

    Antar, B. N.

    1976-01-01

    A numerical technique is presented for locating the eigenvalues of two point linear differential eigenvalue problems. The technique is designed to search for complex eigenvalues belonging to complex operators. With this method, any domain of the complex eigenvalue plane could be scanned and the eigenvalues within it, if any, located. For an application of the method, the eigenvalues of the Orr-Sommerfeld equation of the plane Poiseuille flow are determined within a specified portion of the c-plane. The eigenvalues for alpha = 1 and R = 10,000 are tabulated and compared for accuracy with existing solutions.
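
    A related, but different, way to locate eigenvalues in a region of the complex plane is sketched below: once the differential operator has been discretized into a generalized matrix eigenvalue problem A v = c B v, all eigenvalues can be computed and filtered to a rectangle of interest. The matrices used here are random placeholders, not an Orr-Sommerfeld discretization, and this is not the scanning technique of the paper.

      import numpy as np
      from scipy.linalg import eig

      # Random placeholder discretization of a generalized problem A v = c B v.
      rng = np.random.default_rng(0)
      n = 50
      A = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(n)
      B = np.eye(n) + 0.1 * rng.standard_normal((n, n))

      eigvals = eig(A, B, right=False)        # all generalized eigenvalues

      def in_region(c, re_min=-1.0, re_max=1.0, im_min=-1.0, im_max=1.0):
          """Keep eigenvalues inside a chosen rectangle of the complex c-plane."""
          return re_min <= c.real <= re_max and im_min <= c.imag <= im_max

      selected = [c for c in eigvals if in_region(c)]
      print("eigenvalues found in the scanned region:", np.round(selected, 4))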

  12. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    Science.gov (United States)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP

  13. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    Science.gov (United States)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
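
    A loose sketch of the SVD-compression idea from the two records above is given below: because the dose influence matrix is nearly rank deficient, a least-squares fit of non-negative beam weights to a prescribed dose can be carried out on a much smaller compressed system before low-weight beams are pruned. The data are random placeholders, and a non-negative least-squares fit stands in for the paper's constrained LP model.

      import numpy as np
      from scipy.optimize import nnls

      # Random low-rank stand-in for a dose influence matrix (voxels x beams).
      rng = np.random.default_rng(0)
      n_vox, n_beams, true_rank = 2000, 300, 40
      D = rng.random((n_vox, true_rank)) @ rng.random((true_rank, n_beams))
      d = np.full(n_vox, 60.0)                       # prescribed dose (Gy)

      # Compress the voxel dimension through the SVD before fitting beam weights.
      U, s, Vt = np.linalg.svd(D, full_matrices=False)
      k = int(np.sum(s > 1e-8 * s[0]))               # numerical rank
      A_small = s[:k, None] * Vt[:k]                 # k x n_beams compressed system
      b_small = U[:, :k].T @ d

      w, _ = nnls(A_small, b_small)                  # non-negative beam weights
      w[w < 1e-3 * w.max()] = 0.0                    # prune very low-weight beams
      print("beams kept:", int(np.count_nonzero(w)), "of", n_beams)
      print("dose fit RMSE: %.3f Gy" % (np.linalg.norm(D @ w - d) / np.sqrt(n_vox)))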

  14. Computer Program For Linear Algebra

    Science.gov (United States)

    Krogh, F. T.; Hanson, R. J.

    1987-01-01

    Collection of routines provided for basic vector operations. The Basic Linear Algebra Subprograms (BLAS) library is a collection of FORTRAN-callable routines for employing standard techniques to perform the basic operations of numerical linear algebra.

  15. Dams designed to fail

    Energy Technology Data Exchange (ETDEWEB)

    Penman, A. [Geotechnical Engineering Consultants, Harpenden (United Kingdom)

    2004-09-01

    New developments in geotechnical engineering have led to methods for designing and constructing safe embankment dams. Failed dams can be categorized as those designed to fail, and those that have failed unexpectedly. This presentation outlined 3 dam failures: the 61 m high Malpasset Dam in France in 1959 which killed 421; the 71 m high Baldwin Hills Dam in the United States in 1963 which killed 5; and, the Vajont Dam in Italy in 1963 which killed 2,600 people. Following these incidents, the International Commission for Large Dams (ICOLD) reviewed regulations on reservoir safety. The 3 dams were found to have inadequate spillways and their failures were due to faults in their design. Fuse plug spillways, which address this problem, are designed to fail if an existing spillway proves inadequate. They allow additional discharge to prevent overtopping of the embankment dam. This solution can only be used if there is an adjacent valley to take the additional discharge. Examples of fuse gates were presented along with their effect on dam safety. A research program is currently underway in Norway in which high embankment dams are being studied for overtopping failure and failure due to internal erosion. Internal erosion has been the main reason why dams have failed unexpectedly. To prevent failures, designers suggested the use of a clay blanket placed under the upstream shoulder. However, for dams with soft clay cores, these underblankets could provide a route for a slip surface and that could lead to failure of the upstream shoulder. It was concluded that a safe arrangement for embankment dams includes the use of tipping gates or overturning gates which always fail at a required flood water level. Many have been installed in old and new dams around the world. 14 refs., 19 figs.

  16. MIDAS: Regionally linear multivariate discriminative statistical mapping.

    Science.gov (United States)

    Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos

    2018-07-01

    Statistical parametric maps formed via voxel-wise mass-univariate tests, such as the general linear model, are commonly used to test hypotheses about regionally specific effects in neuroimaging cross-sectional studies where each subject is represented by a single image. Despite being informative, these techniques remain limited as they ignore multivariate relationships in the data. Most importantly, the commonly employed local Gaussian smoothing, which is important for accounting for registration errors and making the data follow Gaussian distributions, is usually chosen in an ad hoc fashion. Thus, it is often suboptimal for the task of detecting group differences and correlations with non-imaging variables. Information mapping techniques, such as searchlight, which use pattern classifiers to exploit multivariate information and obtain more powerful statistical maps, have become increasingly popular in recent years. However, existing methods may lead to important interpretation errors in practice (i.e., misidentifying a cluster as informative, or failing to detect truly informative voxels), while often being computationally expensive. To address these issues, we introduce a novel efficient multivariate statistical framework for cross-sectional studies, termed MIDAS, seeking highly sensitive and specific voxel-wise brain maps, while leveraging the power of regional discriminant analysis. In MIDAS, locally linear discriminative learning is applied to estimate the pattern that best discriminates between two groups, or predicts a variable of interest. This pattern is equivalent to local filtering by an optimal kernel whose coefficients are the weights of the linear discriminant. By composing information from all neighborhoods that contain a given voxel, MIDAS produces a statistic that collectively reflects the contribution of the voxel to the regional classifiers as well as the discriminative power of the classifiers. Critically, MIDAS efficiently assesses the

  17. Development of the delayed-neutron triangulation technique for locating failed fuel in LMFBR

    International Nuclear Information System (INIS)

    Kryter, R.C.

    1975-01-01

    Two major accomplishments of the ORNL delayed neutron triangulation program are (1) an analysis of anticipated detector counting rates and sensitivities to unclad fuel and erosion types of pin failure, and (2) an experimental assessment of the accuracy with which the position of failed fuel can be determined in the FFTF (this was performed in a quarter-scale water mockup of realistic outlet plenum geometry using electrolyte injections and conductivity cells to simulate delayed-neutron precursor releases and detections, respectively). The major results and conclusions from these studies are presented, along with plans for further DNT development work at ORNL for the FFTF and CRBR. (author)

  18. The use of linear programming techniques to design optimal digital filters for pulse shaping and channel equalization

    Science.gov (United States)

    Houts, R. C.; Burlage, D. W.

    1972-01-01

    A time domain technique is developed to design finite-duration impulse response digital filters using linear programming. Two related applications of this technique in data transmission systems are considered. The first is the design of pulse shaping digital filters to generate or detect signaling waveforms transmitted over bandlimited channels that are assumed to have ideal low pass or bandpass characteristics. The second is the design of digital filters to be used as preset equalizers in cascade with channels that have known impulse response characteristics. Example designs are presented which illustrate that excellent waveforms can be generated with frequency-sampling filters and the ease with which digital transversal filters can be designed for preset equalization.
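
    For contrast with the minimax (linear-programming) criterion used in these designs, the snippet below computes a preset transversal equalizer by ordinary least squares against a delayed-impulse target and reports the residual intersymbol interference; the channel impulse response is a made-up example.

      import numpy as np
      from scipy.linalg import toeplitz

      h = np.array([0.1, 0.8, 1.0, 0.5, 0.2])     # assumed channel impulse response
      L = 15                                       # transversal equalizer taps
      n_out = len(h) + L - 1
      C = toeplitz(np.r_[h, np.zeros(L - 1)], np.r_[h[0], np.zeros(L - 1)])
      d = np.zeros(n_out); d[n_out // 2] = 1.0     # target: a single delayed pulse

      w, *_ = np.linalg.lstsq(C, d, rcond=None)    # taps minimizing ||C w - d||_2
      combined = C @ w                             # equalized overall response
      isi = np.max(np.abs(np.delete(combined, n_out // 2)))
      print("worst residual ISI sample: %.4f" % isi)
      print("main-tap value: %.4f" % combined[n_out // 2])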

  19. Charge transport and recombination in bulk heterojunction solar cells studied by the photoinduced charge extraction in linearly increasing voltage technique

    OpenAIRE

    Mozer, AJ; Sariciftci, NS; Osterbacka, R; Westerling, M; Juska, G; LUTSEN, Laurence; VANDERZANDE, Dirk

    2005-01-01

    Charge carrier mobility and recombination in a bulk heterojunction solar cell based on the mixture of poly[2-methoxy-5-(3,7-dimethyloctyloxy)-phenylene vinylene] (MDMO-PPV) and 1-(3-methoxycarbonyl)propyl-1-phenyl-(6,6)-C-61 (PCBM) has been studied using the novel technique of photoinduced charge carrier extraction in a linearly increasing voltage (Photo-CELIV). In this technique, charge carriers are photogenerated by a short laser flash, and extracted under a reverse bias voltage ramp after ...

  20. Applied linear regression

    CERN Document Server

    Weisberg, Sanford

    2013-01-01

    Praise for the Third Edition ""...this is an excellent book which could easily be used as a course text...""-International Statistical Institute The Fourth Edition of Applied Linear Regression provides a thorough update of the basic theory and methodology of linear regression modeling. Demonstrating the practical applications of linear regression analysis techniques, the Fourth Edition uses interesting, real-world exercises and examples. Stressing central concepts such as model building, understanding parameters, assessing fit and reliability, and drawing conclusions, the new edition illus

  1. Posterior transpedicular approach with circumferential debridement and anterior reconstruction as a salvage procedure for symptomatic failed vertebroplasty

    OpenAIRE

    Chiu, Yen-Chun; Yang, Shih-Chieh; Chen, Hung-Shu; Kao, Yu-Hsien; Tu, Yuan-Kun

    2015-01-01

    Background Complications and failure of vertebroplasty, such as cement dislodgement, cement leakage, or spinal infection, usually result in spinal instability and neural element compression. Combined anterior and posterior approaches are the most common salvage procedure for symptomatic failed vertebroplasty. The purpose of this study is to evaluate the feasibility and efficacy of a single posterior approach technique for the treatment of patients with symptomatic failed vertebroplasty. Metho...

  2. Considerations for handling failed fuel at the Barnwell Nuclear Fuel Plant

    International Nuclear Information System (INIS)

    Anderson, R.T.; Cholister, R.J.

    1982-05-01

    The impact of failed fuel receipt on reprocessing operations is qualitatively described. It appears that extended storage of fuel, particularly with advanced storage techniques, will increase the quantity of failed fuel and may alter the nature and possibly the configuration of the fuel. The receipt of failed fuel at the BNFP increases handling problems, waste volumes, and operator exposure. If it is necessary to impose special operating precautions to minimize this impact, a loss in plant throughput will result. Hence, ideally, the reprocessing plant operator would take every reasonable precaution so that no failed fuel is received. An alternative policy would be to require that failed fuel be placed in a sealed canister. In the latter case the canister must be compatible with the shipping cask and suitable for in-plant storage. A required inspection of bare fuel would be made at the reactor prior to shipping off-site. This would verify fuel integrity. These requirements are obviously idealistic. Due to the current uncertain status of reprocessing and the need to keep reactors operating, business or governmental policy may be enacted resulting in the receipt of a negotiated quantity of non-standard fuel (including failed fuel). In this situation, BNFP fuel receiving policy based solely on fuel cladding integrity would be difficult to enforce. There are certain areas where process incompatibility does exist and where a compromise would be virtually impossible, e.g., canned fuel for which material or dimensional conflicts exist. This fuel would have to be refused or the fuel would require recanning prior to shipment. In other cases, knowledge of the type and nature of the failure may be acceptable to the operator. A physical inspection of the fuel either before shipment or after the cask unloading operation would be warranted. In this manner, concerns with pool contamination can be identified and the assembly canned if deemed necessary.

  3. Salvage Flexor Hallucis Longus Transfer for a Failed Achilles Repair: Endoscopic Technique.

    Science.gov (United States)

    Gonçalves, Sérgio; Caetano, Rubén; Corte-Real, Nuno

    2015-10-01

    Flexor hallucis longus (FHL) transfer is a well-established treatment option in failed Achilles tendon (AT) repair and has been routinely performed as an open procedure. We detail the surgical steps needed to perform an arthroscopic transfer of the FHL for a chronic AT rupture. The FHL tendon is harvested as it enters in its tunnel beneath the sustentaculum tali; a tunnel is then drilled in the calcaneus as near to the AT footprint as possible. By use of a suture-passing device, the free end of the FHL is advanced to the plantar aspect of the foot. After adequate tension is applied to the construct, the tendon is fixed in place with an interference screw in an inside-out fashion. This minimally invasive approach is a safe and valid alternative to classic open procedures with the obvious advantages of preserving the soft-tissue envelope and using a biologically intact tendon.

  4. Simultaneous stent expansion/balloon deflation technique to salvage failed balloon remodeling.

    Science.gov (United States)

    Ladner, Travis R; He, Lucy; Davis, Brandon J; Froehler, Michael T; Mocco, J

    2016-04-01

    Herniation, with possible embolization, of coils into the parent vessel following aneurysm coiling remains a frequent challenge. For this reason, balloon or stent assisted embolization remains an important technique. Despite the use of balloon remodeling, there are occasions where, on deflation of the balloon, some coils, or even the entire coil mass, may migrate. We report the successful use of a simultaneous adjacent stent deployment bailout technique in order to salvage coil prolapse during balloon remodeling in three patients. Case No 1 was a wide neck left internal carotid artery bifurcation aneurysm, measuring 9 mm×7.9 mm×6 mm with a 5 mm neck. Case No 2 was a complex left superior hypophyseal artery aneurysm, measuring 5.3 mm×4 mm×5 mm with a 2.9 mm neck. Case No 3 was a ruptured right posterior communicating artery aneurysm, measuring 4 mm×4 mm×4.5 mm with a 4 mm neck. This technique successfully returned the prolapsed coil mass into the aneurysm sac in all cases without procedural complications. The closed cell design of the Enterprise VRD (Codman and Shurtleff Inc, Raynham, Massachusetts, USA) makes it ideal for this bailout technique, by allowing the use of an 0.021 inch delivery catheter (necessary for simultaneous access) and by avoiding the possibility of an open cell strut getting caught on the deflated balloon. We hope this technique will prove useful to readers who may find themselves in a similar predicament. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  5. FAILED FUEL DISPOSITION STUDY

    International Nuclear Information System (INIS)

    THIELGES, J.R.

    2004-01-01

    In May 2004 alpha contamination was found on the lid of the pre-filter housing in the Sodium Removal Ion Exchange System during routine filter change. Subsequent investigation determined that the alpha contamination likely came from a fuel pin(s) contained in an Ident-69 (ID-69) type pin storage container serial number 9 (ID-69-9) that was washed in the Sodium Removal System (SRS) in January 2004. Because all evidence indicated that the wash water interacted with the fuel, this ID-69 is designated as containing a failed fuel pin with gross cladding defect and was set aside in the Interim Examination and Maintenance (IEM) Cell until it could be determined how to proceed for long term dry storage of the fuel pin container. This ID-69 contained fuel pins from the driver fuel assembly (DFA) 16392, which was identified as a Delayed Neutron Monitor (DNM) leaker assembly. However, this DFA was disassembled and the fuel pin that was thought to be the failed pin was encapsulated and was not located in this ID-69 container. This failed fuel disposition study discusses two alternatives that could be used to address long term storage for the contents of ID-69-9. The first alternative evaluated utilizes the current method of identifying and storing DNM leaker fuel pin(s) in tubes and thus, verifying that the alpha contamination found in the SRS came from a failed pin in this pin container. This approach will require unloading selected fuel pins from the ID-69, visually examining and possibly weighing suspect fuel pins to identify the failed pin(s), inserting the failed pin(s) in storage tubes, and reloading the fuel pins into ID-69 containers. Safety analysis must be performed to revise the 200 Area Interim Storage Area (ISA) Final Safety Analysis Report (FSAR) (Reference 1) for this fuel configuration. The second alternative considered is to store the failed fuel as-is in the ID-69. This was evaluated to determine if this approach would comply with storage requirements. This

  6. FAILED FUEL DISPOSITION STUDY

    Energy Technology Data Exchange (ETDEWEB)

    THIELGES, J.R.

    2004-12-20

    In May 2004 alpha contamination was found on the lid of the pre-filter housing in the Sodium Removal Ion Exchange System during routine filter change. Subsequent investigation determined that the alpha contamination likely came from a fuel pin(s) contained in an Ident-69 (ID-69) type pin storage container serial number 9 (ID-69-9) that was washed in the Sodium Removal System (SRS) in January 2004. Because all evidence indicated that the wash water interacted with the fuel, this ID-69 is designated as containing a failed fuel pin with gross cladding defect and was set aside in the Interim Examination and Maintenance (IEM) Cell until it could be determined how to proceed for long term dry storage of the fuel pin container. This ID-69 contained fuel pins from the driver fuel assembly (DFA) 16392, which was identified as a Delayed Neutron Monitor (DNM) leaker assembly. However, this DFA was disassembled and the fuel pin that was thought to be the failed pin was encapsulated and was not located in this ID-69 container. This failed fuel disposition study discusses two alternatives that could be used to address long term storage for the contents of ID-69-9. The first alternative evaluated utilizes the current method of identifying and storing DNM leaker fuel pin(s) in tubes and thus, verifying that the alpha contamination found in the SRS came from a failed pin in this pin container. This approach will require unloading selected fuel pins from the ID-69, visually examining and possibly weighing suspect fuel pins to identify the failed pin(s), inserting the failed pin(s) in storage tubes, and reloading the fuel pins into ID-69 containers. Safety analysis must be performed to revise the 200 Area Interim Storage Area (ISA) Final Safety Analysis Report (FSAR) (Reference 1) for this fuel configuration. The second alternative considered is to store the failed fuel as-is in the ID-69. This was evaluated to determine if this approach would comply with storage requirements. This

  7. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    Science.gov (United States)

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
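
    The contrast drawn here is easy to reproduce with standard tools. The sketch below fits both a simple linear model and a mixed model with a random intercept per animal to synthetic clustered data; the column names and effect sizes are made up for illustration, not taken from the study.

    # Simple linear model vs. mixed model with a random intercept per animal,
    # as recommended for clustered Sholl-type data.  Synthetic data only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    animals = np.repeat(np.arange(10), 8)                 # 10 animals, 8 neurons each
    genotype = (animals < 5).astype(int)                  # 5 animals per group
    animal_effect = rng.normal(0, 2.0, size=10)[animals]  # intra-class correlation
    intersections = 20 + 1.5 * genotype + animal_effect + rng.normal(0, 1.0, animals.size)
    df = pd.DataFrame({"intersections": intersections,
                       "genotype": genotype,
                       "animal": animals})

    ols = smf.ols("intersections ~ genotype", data=df).fit()
    mixed = smf.mixedlm("intersections ~ genotype", data=df, groups=df["animal"]).fit()

    # The OLS standard error ignores clustering and is typically too small.
    print("OLS   p-value:", ols.pvalues["genotype"])
    print("Mixed p-value:", mixed.pvalues["genotype"])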

  8. Validity of linear measurements of the jaws using ultralow-dose MDCT and the iterative techniques of ASIR and MBIR.

    Science.gov (United States)

    Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig

    2016-10-01

    To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements were analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95 % confidence interval limits being within the range of ±1.15 mm. A nearly 97.5 % reduction in dose did not significantly affect the height and width measurements of edentulous jaws regardless of the reconstruction algorithm used.
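
    The agreement analysis used here (Bland-Altman limits of agreement) reduces to a short computation; the sketch below uses synthetic measurement pairs, not the study's data.

    # Bland-Altman agreement between reference-dose and ultralow-dose measurements.
    import numpy as np

    rng = np.random.default_rng(2)
    reference = rng.uniform(5, 15, size=40)               # ridge height/width in mm
    test = reference + rng.normal(0.0, 0.4, size=40)      # low-dose protocol

    diff = test - reference
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    print(f"mean difference: {bias:.2f} mm")
    print(f"95% limits of agreement: {bias - loa:.2f} to {bias + loa:.2f} mm")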

  9. Critical thinking: are the ideals of OBE failing us or are we failing the ...

    African Journals Online (AJOL)

    Critical thinking: are the ideals of OBE failing us or are we failing the ideals of OBE? K Lombard, M Grosser. Abstract. No Abstract. South African Journal of Education Vol. 28 (4) 2008: pp. 561-580.

  10. Efficient Non Linear Loudspeakers

    DEFF Research Database (Denmark)

    Petersen, Bo R.; Agerkvist, Finn T.

    2006-01-01

    Loudspeakers have traditionally been designed to be as linear as possible. However, as techniques for compensating non-linearities are emerging, it becomes possible to use other design criteria. This paper presents and examines a new idea for improving the efficiency of loudspeakers at high levels...... by changing the voice coil layout. This deliberate non-linear design has the benefit that a smaller amplifier can be used, which reduces system cost as well as power consumption.

  11. Endoscopic Ultrasound-Guided Rendezvous Technique for Failed Biliary Cannulation in Benign and Resectable Malignant Biliary Disorders.

    Science.gov (United States)

    Shiomi, Hideyuki; Yamao, Kentaro; Hoki, Noriyuki; Hisa, Takeshi; Ogura, Takeshi; Minaga, Kosuke; Masuda, Atsuhiro; Matsumoto, Kazuya; Kato, Hironari; Kamada, Hideki; Goto, Daisuke; Imai, Hajime; Takenaka, Mamoru; Noguchi, Chishio; Nishikiori, Hidefumi; Chiba, Yasutaka; Kutsumi, Hiromu; Kitano, Masayuki

    2018-03-01

    Endoscopic ultrasound-guided rendezvous technique (EUS-RV) has emerged as an effective salvage method for unsuccessful biliary cannulation. However, its application for benign and resectable malignant biliary disorders has not been fully evaluated. To assess the efficacy and safety of EUS-RV for benign and resectable malignant biliary disorders. This was a multicenter prospective study from 12 Japanese referral centers. Patients who underwent EUS-RV after failed biliary cannulation for biliary disorder were candidates for this study. Inclusion criteria were unsuccessful biliary cannulation for therapeutic endoscopic retrograde cholangiopancreatography with benign and potentially resectable malignant biliary obstruction. Exclusion criteria included unresectable malignant biliary obstruction, inaccessible papillae due to surgically altered upper gastrointestinal anatomy or duodenal stricture, and previous sphincterotomy and/or biliary stent placement. The primary outcome was the technical success rate of biliary cannulation; procedure time, adverse events, and clinical outcomes were secondary outcomes. Twenty patients were prospectively enrolled. The overall technical success rate and median procedure time were 85% and 33 min, respectively. Guidewire manipulation using a 4-Fr tapered tip catheter contributed to the success in advancing the guidewire into the duodenum. Adverse events were identified in 15% of patients, including 2 with biliary peritonitis and 1 mild pancreatitis. EUS-RV did not affect surgical maneuvers or complications associated with surgery, or postoperative course. EUS-RV may be a safe and feasible salvage method for unsuccessful biliary cannulation for benign or resectable malignant biliary disorders. Use of a 4-Fr tapered tip catheter may improve the overall EUS-RV success rate.

  12. Failed medial patellofemoral ligament reconstruction: Causes and surgical strategies

    Science.gov (United States)

    Sanchis-Alfonso, Vicente; Montesinos-Berry, Erik; Ramirez-Fuentes, Cristina; Leal-Blanquet, Joan; Gelber, Pablo E; Monllau, Joan Carles

    2017-01-01

    Patellar instability is a common clinical problem encountered by orthopedic surgeons specializing in the knee. For patients with chronic lateral patellar instability, the standard surgical approach is to stabilize the patella through a medial patellofemoral ligament (MPFL) reconstruction. Foreseeably, an increasing number of revision surgeries of the reconstructed MPFL will be seen in upcoming years. In this paper, the causes of failed MPFL reconstruction are analyzed: (1) incorrect surgical indication or inappropriate surgical technique/patient selection; (2) a technical error; and (3) an incorrect assessment of the concomitant risk factors for instability. An understanding of the anatomy and biomechanics of the MPFL, caution with the imaging techniques while favoring clinical over radiological findings, and the use of common sense to determine the adequate surgical technique for each particular case are critical to minimizing MPFL surgery failure. Additionally, our approach to dealing with failure after primary MPFL reconstruction is also presented. PMID:28251062

  13. Linear programming using Matlab

    CERN Document Server

    Ploskas, Nikolaos

    2017-01-01

    This book offers a theoretical and computational presentation of a variety of linear programming algorithms and methods with an emphasis on the revised simplex method and its components. A theoretical background and mathematical formulation is included for each algorithm as well as comprehensive numerical examples and corresponding MATLAB® code. The MATLAB® implementations presented in this book  are sophisticated and allow users to find solutions to large-scale benchmark linear programs. Each algorithm is followed by a computational study on benchmark problems that analyze the computational behavior of the presented algorithms. As a solid companion to existing algorithmic-specific literature, this book will be useful to researchers, scientists, mathematical programmers, and students with a basic knowledge of linear algebra and calculus.  The clear presentation enables the reader to understand and utilize all components of simplex-type methods, such as presolve techniques, scaling techniques, pivoting ru...

  14. A Dynamic Range Enhanced Readout Technique with a Two-Step TDC for High Speed Linear CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Zhiyuan Gao

    2015-11-01

    This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high speed linear CMOS image sensors. A multi-capacitor and self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is auto-adjusted by switching different capacitors to the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate. The conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within −Tclk~+Tclk. A linear CMOS image sensor pixel array is designed in the 0.13 μm CMOS process to verify this DR-enhanced high speed readout technique. The post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and the ADC achieves 60.22 dB SNDR and 9.71 bit ENOB at a conversion rate of 2 MS/s after calibration, a 14.04 dB and 2.4 bit improvement over the SNDR and ENOB obtained without calibration.

  15. Random linear codes in steganography

    Directory of Open Access Journals (Sweden)

    Kamil Kaczyński

    2016-12-01

    Syndrome coding using linear codes is a technique that allows improvement in the parameters of steganographic algorithms. The use of random linear codes gives great flexibility in choosing the parameters of the linear code. In parallel, it offers easy generation of the parity-check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear code [8, 2] was used as a base for the algorithm modification. The implementation of the proposed algorithm, along with a practical evaluation of the algorithm's parameters based on the test images, was made. Keywords: steganography, random linear codes, RLC, LSB
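
    A minimal sketch of the underlying matrix-embedding idea is shown below: with an [8, 2] code, six message bits are hidden in the LSBs of eight cover pixels by flipping a minimum-weight pattern whose syndrome matches the message. The systematic parity-check construction and brute-force coset search are simplifications for illustration, not the paper's implementation.

    # Syndrome (matrix) embedding with a binary [8, 2] linear code.
    import itertools
    import numpy as np

    rng = np.random.default_rng(3)
    n, k = 8, 2                                   # an [8, 2] binary linear code
    # Parity-check matrix in systematic form [I | A] with a random A: an easy way
    # to obtain a full-rank check matrix for a random code (assumption).
    H = np.hstack([np.eye(n - k, dtype=int), rng.integers(0, 2, size=(n - k, k))])

    def embed(cover_lsbs, message):
        """Flip a minimum-weight pattern e so that H @ (x ^ e) == message (mod 2)."""
        target = (message - H @ cover_lsbs) % 2
        for weight in range(n + 1):               # brute-force coset-leader search
            for idx in itertools.combinations(range(n), weight):
                e = np.zeros(n, dtype=int)
                e[list(idx)] = 1
                if np.array_equal(H @ e % 2, target):
                    return cover_lsbs ^ e
        raise ValueError("unreachable for a full-rank H")

    def extract(stego_lsbs):
        return H @ stego_lsbs % 2

    x = rng.integers(0, 2, size=n)                # LSBs of 8 cover pixels
    m = rng.integers(0, 2, size=n - k)            # 6 message bits to hide
    y = embed(x, m)
    assert np.array_equal(extract(y), m)
    print("LSBs flipped:", int((x ^ y).sum()), "out of", n)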

  16. Application of a local linearization technique for the solution of a system of stiff differential equations associated with the simulation of a magnetic bearing assembly

    Science.gov (United States)

    Kibler, K. S.; Mcdaniel, G. A.

    1981-01-01

    A digital local linearization technique was used to solve a system of stiff differential equations which simulate a magnetic bearing assembly. The results prove the technique to be accurate, stable, and efficient when compared to a general purpose variable order Adams method with a stiff option.
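
    The flavor of a local linearization step can be sketched as follows: on every step the right-hand side is linearized about the current state and the linearized system is advanced exactly with a matrix exponential, which keeps large steps stable on stiff problems. The test equation below is a generic stiff example, not the magnetic bearing model.

    # One local linearization (exponential) step for a stiff ODE y' = f(y).
    import numpy as np
    from scipy.linalg import expm

    lam = -1000.0                                   # stiff decay rate

    def f(y):                                       # y = [x, t]; x' = lam*(x - sin t) + cos t
        x, t = y
        return np.array([lam * (x - np.sin(t)) + np.cos(t), 1.0])

    def jac(y):
        x, t = y
        return np.array([[lam, -lam * np.cos(t) - np.sin(t)],
                         [0.0, 0.0]])

    def ll_step(y, h):
        """Advance the linearized system y' ≈ f(y_n) + J (y - y_n) exactly over h."""
        n = y.size
        M = np.zeros((n + 1, n + 1))
        M[:n, :n] = jac(y)
        M[:n, n] = f(y)
        return y + expm(h * M)[:n, n]               # top-right block = phi_1(hJ) h f(y_n)

    h, steps = 0.1, 30                              # step is 100x larger than 1/|lam|
    y = np.array([0.0, 0.0])
    for _ in range(steps):
        y = ll_step(y, h)
    print("x(3.0) =", y[0], " exact =", np.sin(3.0))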

  17. TU-FG-201-04: Computer Vision in Autonomous Quality Assurance of Linear Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Yu, H; Jenkins, C; Yu, S; Yang, Y; Xing, L [Stanford University, Stanford, CA (United States)

    2016-06-15

    Purpose: Routine quality assurance (QA) of linear accelerators represents a critical and costly element of a radiation oncology center. Recently, a system was developed to autonomously perform routine quality assurance on linear accelerators. The purpose of this work is to extend this system and contribute computer vision techniques for obtaining quantitative measurements for a monthly multi-leaf collimator (MLC) QA test specified by TG-142, namely leaf position accuracy, and demonstrate extensibility for additional routines. Methods: Grayscale images of a picket fence delivery on a radioluminescent phosphor coated phantom are captured using a CMOS camera. Collected images are processed to correct for camera distortions, rotation and alignment, reduce noise, and enhance contrast. The location of each MLC leaf is determined through logistic fitting and a priori modeling based on knowledge of the delivered beams. Using the data collected and the criteria from TG-142, a decision is made on whether or not the leaf position accuracy of the MLC passes or fails. Results: The locations of all MLC leaf edges are found for three different picket fence images in a picket fence routine to 0.1mm/1pixel precision. The program to correct for image alignment and determination of leaf positions requires a runtime of 21– 25 seconds for a single picket, and 44 – 46 seconds for a group of three pickets on a standard workstation CPU, 2.2 GHz Intel Core i7. Conclusion: MLC leaf edges were successfully found using techniques in computer vision. With the addition of computer vision techniques to the previously described autonomous QA system, the system is able to quickly perform complete QA routines with minimal human contribution.

  18. Topics in computational linear optimization

    DEFF Research Database (Denmark)

    Hultberg, Tim Helge

    2000-01-01

    Linear optimization has been an active area of research ever since the pioneering work of G. Dantzig more than 50 years ago. This research has produced a long sequence of practical as well as theoretical improvements of the solution techniques available for solving linear optimization problems...... of high quality solvers and the use of algebraic modelling systems to handle the communication between the modeller and the solver. This dissertation features four topics in computational linear optimization: A) automatic reformulation of mixed 0/1 linear programs, B) direct solution of sparse unsymmetric...... systems of linear equations, C) reduction of linear programs and D) integration of algebraic modelling of linear optimization problems in C++. Each of these topics is treated in a separate paper included in this dissertation. The efficiency of solving mixed 0-1 linear programs by linear programming based...

  19. Online failed fuel identification using delayed neutron detector signals in pool type reactors

    International Nuclear Information System (INIS)

    Upadhyay, Chandra Kant; Sivaramakrishna, M.; Nagaraj, C.P.; Madhusoodanan, K.

    2011-01-01

    In today's world, nuclear reactors are at the forefront of modern-day innovation and reactor designs are increasingly incorporating cutting-edge technology. It is of utmost importance to detect failure or defects in any part of a nuclear reactor for healthy operation of the reactor as well as for the safety of the environment. Despite careful fabrication and manufacturing of fuel pins, there is a chance of clad failure. After fuel pin clad rupture takes place, fission products are allowed to enter the sodium pool. There are some potential consequences of this, such as Total Instantaneous Blockage (TIB) of coolant and primary component contamination. At present, failed fuel detection techniques such as cover gas monitoring (alarming the operator), delayed neutron detection (DND, automatic trip) and a standalone failed fuel localization module (FFLM) are exercised in various reactors. The first technique is a quantitative measurement of the increase in the cover gas activity background, whereas the DND system causes an automatic trip on detecting a certain level of activity during clad wet rupture. FFLM is subsequently used to identify the failed fuel subassembly. The latter, although accurate, mainly suffers from downtime and reduction in power during the identification process. The proposed scheme, reported in this paper, reduces the operation of the FFLM by predicting the faulty sector and therefore reducing reactor downtime and thermal shocks. The neutron evolution pattern gets modulated because fission products are the delayed neutron precursors. As they travel along with the coolant to the intermediate heat exchangers, they experience three effects, i.e., delay, decay and dilution, which make the neutron pulse frequency vary depending on the location of the failed fuel subassembly. This paper discusses the method that is followed to study the frequency domain properties, so that it is possible to detect the exact fuel subassembly failure online, before the reactor automatically trips. (author)

  20. Long-term outcome of urethroplasty after failed urethrotomy versus primary repair.

    Science.gov (United States)

    Barbagli, G; Palminteri, E; Lazzeri, M; Guazzoni, G; Turini, D

    2001-06-01

    A urethral stricture recurring after repeat urethrotomy challenges even a skilled urologist. To address the question of whether to repeat urethrotomy or perform open reconstructive surgery, we retrospectively review a series of 93 patients comparing those who underwent primary repair versus those who had undergone urethrotomy and underwent secondary treatment. From 1975 to 1998, 93 males between age 13 and 78 years (mean 39) underwent surgical treatment for bulbar urethral stricture. In 46 (49%) of the patients urethroplasty was performed as primary repair, and in 47 (51%) after previously failed urethrotomy. The strictures were localized in the bulbous urethra without involvement of penile or membranous tracts. The etiology was ischemic in 37 patients, traumatic in 23, unknown in 17 and inflammatory in 16. To simplify evaluation of the results, the clinical outcome was considered either a success or a failure at the time any postoperative procedure was needed, including dilation. In our 93 patients primary urethroplasty had a final success rate of 85%, and after failed urethrotomy 87%. Previously failed urethrotomy did not influence the long-term outcome of urethroplasty. The long-term results of different urethroplasty techniques had a final success rate ranging from 77% to 96%. We conclude that failed urethrotomy does not condition the long-term result of surgical repair. With extended followup, the success rate of urethroplasty decreases with time but it is in fact still higher than that of urethrotomy.

  1. Application of the successive linear programming technique to the optimum design of a high flux reactor using LEU fuel

    International Nuclear Information System (INIS)

    Mo, S.C.

    1991-01-01

    The successive linear programming technique is applied to obtain the optimum thermal flux in the reflector region of a high flux reactor using LEU fuel. The design variables are the reactor power, core radius and coolant channel thickness. The constraints are the cycle length, average heat flux and peak/average power density ratio. The characteristics of the optimum solutions with various constraints are discussed
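
    A generic successive linear programming loop looks like the sketch below: the nonlinear objective and constraints are replaced by first-order expansions at the current iterate and the resulting LP is solved within move limits, which are gradually tightened. The toy objective and constraint are placeholders, not the reactor design model.

    # Generic successive linear programming (SLP) loop on a toy problem.
    import numpy as np
    from scipy.optimize import linprog

    def obj(x):      return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
    def obj_grad(x): return np.array([2 * (x[0] - 1.0), 2 * (x[1] - 2.5)])
    def con(x):      return x[0] ** 2 + x[1] ** 2 - 4.0          # require con(x) <= 0
    def con_grad(x): return np.array([2 * x[0], 2 * x[1]])

    x = np.array([0.5, 0.5])          # feasible starting point
    trust = 0.5                       # move limit on each variable
    for _ in range(60):
        res = linprog(obj_grad(x),
                      A_ub=con_grad(x).reshape(1, -1),
                      b_ub=[-con(x)],
                      bounds=[(-trust, trust)] * 2)
        if not res.success:
            break
        x = x + res.x
        trust *= 0.9                  # shrink move limits to stabilize the iteration
    print("x* ≈", x, " objective:", obj(x), " constraint:", con(x))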

  2. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    International Nuclear Information System (INIS)

    Garcia-Pareja, S.; Vilches, M.; Lallena, A.M.

    2007-01-01

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, investigation of the 'hot' regions of the accelerator, information which is essential for developing a source model for this therapy tool.

  3. Failing Decision

    DEFF Research Database (Denmark)

    Knudsen, Morten

    2014-01-01

    Recently the Danish subway trains have begun to announce “on time” when they arrive at a station on time. This action reflects a worrying acceptance of the normality of failure. If trains were generally expected to be on time, there would be no reason to – triumphantly – announce it. This chapter deals not with traffic delays, but with failing decisions in organizations. The assumption of this chapter is that failing decisions today are as normal as delayed trains. Instead of being the exception, failure is part of the everyday reproduction of organizations – as an uncontrolled effect but also...... by an interest in failure as one way of improving understanding of present-day decision making in organizations.

  4. Simulating fail-stop in asynchronous distributed systems

    Science.gov (United States)

    Sabel, Laura; Marzullo, Keith

    1994-01-01

    The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.

  5. Non-Linear Dynamics and Fundamental Interactions

    CERN Document Server

    Khanna, Faqir

    2006-01-01

    The book is directed to researchers and graduate students pursuing an advanced degree. It provides details of techniques directed towards solving problems in non-linear dynamics and chaos that are, in general, not amenable to a perturbative treatment. The consideration of fundamental interactions is a prime example where non-perturbative techniques are needed. Extension of these techniques to finite temperature problems is considered. At present these ideas are primarily used in a perturbative context. However, non-perturbative techniques have been considered in some specific cases. Experts in the field of non-linear dynamics and chaos and fundamental interactions elaborate the techniques and provide a critical look at the present status and explore future directions that may be fruitful. The text of the main talks will be very useful to young graduate students who are starting their studies in these areas.

  6. Ranking Forestry Investments With Parametric Linear Programming

    Science.gov (United States)

    Paul A. Murphy

    1976-01-01

    Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.

  7. One and a half ventricle repair in association with tricuspid valve repair according to "peacock tail" technique in patients with Ebstein's malformation and failing right ventricle.

    Science.gov (United States)

    Prifti, Edvin; Baboci, Arben; Esposito, Giampiero; Kajo, Efrosina; Dado, Elona; Vanini, Vittorio

    2014-05-01

    The aim of this study was to evaluate the outcome in a series of patients with Ebstein's anomaly and a failing right ventricle (RV) undergoing tricuspid valve (TV) repair and bidirectional Glenn cavopulmonary anastomosis (BDG). Between January 2006 and September 2013, 11 consecutive patients diagnosed with severe forms of Ebstein's anomaly and a failing RV underwent TV surgery and BDG. The mean age was 16.5 ± 7 years. Most frequently found symptoms were cyanosis, dyspnea, and arrhythmias. The azygos or hemiazygos veins were left open. The TV was repaired using the "peacock tail" technique, which consisted of total detachment of the anterior and posterior leaflets of the TV and rotation in both directions reimplanting them to the true annulus. The mean follow-up was 3.8 ± 2.4 years (range three months to six years). Hospital mortality was 9% (one patient). TV repair was possible in 10 patients. None of the patients had AV block postoperatively. At one year after surgery, the indexed RV and RA diameters were reduced significantly versus the preoperative data (p = 0.003 and p ...). The indexed ... and TV area were 1.2 ± 0.42 and 1.6 ± 0.6 mm/m2, significantly lower than preoperatively (p = 0.001 and p = 0.008, respectively). The mean NYHA functional class, SaO2, and cardiothoracic ratio were significantly improved. The peacock tail technique for TV repair in combination with BDG in patients with Ebstein's malformation and depressed RV function results in TV preservation, a low incidence of recurrent regurgitation, favorable functional status and RV function, and resolution of cyanosis. © 2014 Wiley Periodicals, Inc.

  8. New Developments in FPGA: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective

    Science.gov (United States)

    Berg, Melanie D.; Label, Kenneth A.; Pellish, Jonathan

    2016-01-01

    It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures. Subsequently, fail-safe and mitigation strategies will differ per FPGA type. In this session several design approaches for safe systems will be presented. It will also explore the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies; so that users can select appropriate mitigation and perform the required trade for system insertion. The presentation will describe three types of FPGA devices and their susceptibilities in radiation environments.

  9. Linear Unlearning for Cross-Validation

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Larsen, Jan

    1996-01-01

    The leave-one-out cross-validation scheme for generalization assessment of neural network models is computationally expensive due to replicated training sessions. In this paper we suggest linear unlearning of examples as an approach to approximative cross-validation. Further, we discuss...... time series prediction benchmark demonstrate the potential of the linear unlearning technique...

  10. A novel sensor for two-degree-of-freedom motion measurement of linear nanopositioning stage using knife edge displacement sensing technique

    Science.gov (United States)

    Zolfaghari, Abolfazl; Jeon, Seongkyul; Stepanick, Christopher K.; Lee, ChaBum

    2017-06-01

    This paper presents a novel method for measuring two-degree-of-freedom (DOF) motion of flexure-based nanopositioning systems based on optical knife-edge sensing (OKES) technology, which utilizes the interference of two superimposed waves: a geometrical wave from the primary source of light and a boundary diffraction wave from the secondary source. This technique allows for two-DOF motion measurement of the linear and pitch motions of nanopositioning systems. Two capacitive sensors (CSs) are used for a baseline comparison with the proposed sensor by simultaneously measuring the motions of the nanopositioning system. The experimental results show that the proposed sensor closely agrees with the fundamental linear motion of the CS. However, the two-DOF OKES technology was shown to be approximately three times more sensitive to the pitch motion than the CS. The discrepancy in the two sensor outputs is discussed in terms of measuring principle, linearity, bandwidth, control effectiveness, and resolution.

  11. Linear optical response of finite systems using multishift linear system solvers

    Energy Technology Data Exchange (ETDEWEB)

    Hübener, Hannes; Giustino, Feliciano [Department of Materials, University of Oxford, Oxford OX1 3PH (United Kingdom)

    2014-07-28

    We discuss the application of multishift linear system solvers to linear-response time-dependent density functional theory. Using this technique the complete frequency-dependent electronic density response of finite systems to an external perturbation can be calculated at the cost of a single solution of a linear system via conjugate gradients. We show that multishift time-dependent density functional theory yields excitation energies and oscillator strengths in perfect agreement with the standard diagonalization of the response matrix (Casida's method), while being computationally advantageous. We present test calculations for benzene, porphin, and chlorophyll molecules. We argue that multishift solvers may find broad applicability in the context of excited-state calculations within density-functional theory and beyond.

  12. Hyaluronic acid solution injection for upper and lower gastrointestinal bleeding after failed conventional endoscopic therapy.

    Science.gov (United States)

    Lee, Jin Wook; Kim, Hyung Hun

    2014-03-01

    Hyaluronic acid solution injection can be an additional endoscopic modality for controlling bleeding in difficult cases when other techniques have failed. We evaluated 12 cases in which we used hyaluronic acid solution injection for stopping bleeding. Immediately following hyaluronic acid solution injection, bleeding was controlled in 11 out of 12 cases. There was no clinical evidence of renewed bleeding in 11 cases during follow up. Hyaluronic acid solution injection can be a simple and efficient additional method for controlling upper and lower gastrointestinal bleeding after failed endoscopic therapy. © 2013 The Authors. Digestive Endoscopy © 2013 Japan Gastroenterological Endoscopy Society.

  13. Percutaneous anterior C1/2 transarticular screw fixation: salvage of failed percutaneous odontoid screw fixation for odontoid fracture

    OpenAIRE

    Wu, Ai-Min; Jin, Hai-Ming; Lin, Zhong-Ke; Chi, Yong-Long; Wang, Xiang-Yang

    2017-01-01

    Background The objective of this study is to investigate the outcomes and safety of using percutaneous anterior C1/2 transarticular screw fixation as a salvage technique for odontoid fracture if percutaneous odontoid screw fixation fails. Methods In fifteen of 108 odontoid fracture patients (planned to be treated by percutaneous anterior odontoid screw fixation), a satisfactory odontoid screw trajectory could not be introduced. To salvage this problem, we chose the percutaneous anterior C1/2 trans...

  14. Linearization instability for generic gravity in AdS spacetime

    Science.gov (United States)

    Altas, Emel; Tekin, Bayram

    2018-01-01

    In general relativity, perturbation theory about a background solution fails if the background spacetime has a Killing symmetry and a compact spacelike Cauchy surface. This failure, dubbed linearization instability, shows itself as non-integrability of the perturbative infinitesimal deformation to a finite deformation of the background. Namely, the linearized field equations have spurious solutions which cannot be obtained from the linearization of exact solutions. In practice, one can show the failure of the linear perturbation theory by showing that a certain quadratic (integral) constraint on the linearized solutions is not satisfied. For non-compact Cauchy surfaces, the situation is different and, for example, Minkowski space, having a non-compact Cauchy surface, is linearization stable. Here we study the linearization instability in generic metric theories of gravity where Einstein's theory is modified with additional curvature terms. We show that, unlike the case of general relativity, for modified theories even in the non-compact Cauchy surface cases, there are some theories which show linearization instability about their anti-de Sitter backgrounds. Recent D dimensional critical and three dimensional chiral gravity theories are two such examples. This observation sheds light on the paradoxical behavior of vanishing conserved charges (mass, angular momenta) for non-vacuum solutions, such as black holes, in these theories.

  15. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario ' Carlos Haya' , Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario ' Virgen de las Nieves' , Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2007-09-21

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators of use in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, investigation of the 'hot' regions of the accelerator, information which is essential for developing a source model for this therapy tool.

  16. High-Order Sparse Linear Predictors for Audio Processing

    DEFF Research Database (Denmark)

    Giacobello, Daniele; van Waterschoot, Toon; Christensen, Mads Græsbøll

    2010-01-01

    Linear prediction has generally failed to make a breakthrough in audio processing, as it has done in speech processing. This is mostly due to its poor modeling performance, since an audio signal is usually an ensemble of different sources. Nevertheless, linear prediction comes with a whole set...... of interesting features that make the idea of using it in audio processing not far fetched, e.g., the strong ability of modeling the spectral peaks that play a dominant role in perception. In this paper, we provide some preliminary conjectures and experiments on the use of high-order sparse linear predictors...... in audio processing. These predictors, successfully implemented in modeling the short-term and long-term redundancies present in speech signals, will be used to model tonal audio signals, both monophonic and polyphonic. We will show how the sparse predictors are able to model efficiently the different...
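
    One simple way to obtain such a high-order sparse predictor is an l1-regularized least-squares fit of the prediction coefficients; the sketch below does this with scikit-learn's Lasso on a synthetic decaying tone. The order, regularization weight and signal are illustrative choices, not the solvers or criteria used in the paper.

    # High-order but sparse linear predictor x[n] ≈ sum_k a_k x[n-k] via Lasso.
    import numpy as np
    from sklearn.linear_model import Lasso

    fs = 8000
    t = np.arange(0, 0.5, 1 / fs)
    x = np.exp(-3 * t) * (np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t))
    x += 0.001 * np.random.default_rng(4).normal(size=t.size)

    order = 200                                          # high-order predictor
    rows = len(x) - order
    X = np.lib.stride_tricks.sliding_window_view(x, order)[:rows]
    X = X[:, ::-1]                                       # column k holds x[n-1-k]
    y = x[order:order + rows]

    # The l1 penalty drives most prediction coefficients to exactly zero.
    lasso = Lasso(alpha=1e-4, max_iter=50000).fit(X, y)
    nonzero = np.flatnonzero(lasso.coef_)
    print("non-zero taps:", len(nonzero), "of", order, "first lags:", nonzero[:10] + 1)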

  17. Linear time relational prototype based learning.

    Science.gov (United States)

    Gisbrecht, Andrej; Mokbel, Bassam; Schleif, Frank-Michael; Zhu, Xibin; Hammer, Barbara

    2012-10-01

    Prototype based learning offers an intuitive interface to inspect large quantities of electronic data in supervised or unsupervised settings. Recently, many techniques have been extended to data described by general dissimilarities rather than Euclidean vectors, so-called relational data settings. Unlike the Euclidean counterparts, the techniques have quadratic time complexity due to the underlying quadratic dissimilarity matrix. Thus, they are infeasible already for medium sized data sets. The contribution of this article is twofold: On the one hand we propose a novel supervised prototype based classification technique for dissimilarity data based on popular learning vector quantization (LVQ), on the other hand we transfer a linear time approximation technique, the Nyström approximation, to this algorithm and an unsupervised counterpart, the relational generative topographic mapping (GTM). This way, linear time and space methods result. We evaluate the techniques on three examples from the biomedical domain.
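
    The Nyström step that makes these methods linear in time and space can be sketched on its own: the full similarity matrix is approximated from a small set of landmark columns as K ≈ C W^+ C^T. The RBF similarity and landmark count below are illustrative assumptions.

    # Nyström low-rank approximation of an n x n kernel matrix from m landmarks.
    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.normal(size=(2000, 10))
    gamma = 0.1

    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    m = 100
    landmarks = rng.choice(len(X), size=m, replace=False)
    C = rbf(X, X[landmarks])                 # n x m block of the kernel matrix
    W = C[landmarks]                         # m x m block among the landmarks
    K_approx = C @ np.linalg.pinv(W) @ C.T   # formed fully here only for checking

    # Spot-check the approximation error on a random submatrix.
    idx = rng.choice(len(X), size=200, replace=False)
    K_true = rbf(X[idx], X[idx])
    err = np.linalg.norm(K_true - K_approx[np.ix_(idx, idx)]) / np.linalg.norm(K_true)
    print("relative error on sampled block:", err)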

  18. Linear Programming and Network Flows

    CERN Document Server

    Bazaraa, Mokhtar S; Sherali, Hanif D

    2011-01-01

    The authoritative guide to modeling and solving complex problems with linear programming-extensively revised, expanded, and updated The only book to treat both linear programming techniques and network flows under one cover, Linear Programming and Network Flows, Fourth Edition has been completely updated with the latest developments on the topic. This new edition continues to successfully emphasize modeling concepts, the design and analysis of algorithms, and implementation strategies for problems in a variety of fields, including industrial engineering, management science, operations research

  19. Systematic study of doping dependence on linear magnetoresistance in p-PbTe

    International Nuclear Information System (INIS)

    Schneider, J. M.; Chitta, V. A.; Oliveira, N. F.; Peres, M. L.; Castro, S. de; Soares, D. A. W.; Wiedmann, S.; Zeitler, U.; Abramof, E.; Rappl, P. H. O.; Mengui, U. A.

    2014-01-01

    We report on a large linear magnetoresistance effect observed in doped p-PbTe films. While undoped p-PbTe reveals a sublinear magnetoresistance, p-PbTe films doped with BaF2 exhibit a transition to a nearly perfect linear magnetoresistance behaviour that is persistent up to 30 T. The linear magnetoresistance slope ΔR/ΔB is, to a good approximation, independent of temperature. This is in agreement with the theory of Quantum Linear Magnetoresistance. We also performed magnetoresistance simulations using a classical model of linear magnetoresistance. We found that this model fails to explain the experimental data. A systematic study of the doping dependence reveals that the linear magnetoresistance response has a maximum for small BaF2 doping levels and diminishes rapidly for increasing doping levels. Exploiting the huge impact of doping on the linear magnetoresistance signal could lead to new classes of devices with giant magnetoresistance behavior.

  20. Dynamics of unsymmetric piecewise-linear/non-linear systems using finite elements in time

    Science.gov (United States)

    Wang, Yu

    1995-08-01

    The dynamic response and stability of a single-degree-of-freedom system with unsymmetric piecewise-linear/non-linear stiffness are analyzed using the finite element method in the time domain. Based on Hamilton's weak principle, this method provides a simple and efficient approach for predicting all possible fundamental and sub-periodic responses. The stability of the steady state response is determined by using Floquet's theory without any special effort for calculating transition matrices. This method is applied to a number of examples, demonstrating its effectiveness even for a strongly non-linear problem involving both clearance and continuous stiffness non-linearities. Close agreement is found between available published findings and the predictions of the finite element in time approach, which appears to be an efficient and reliable alternative technique for non-linear dynamic response and stability analysis of periodic systems.

  1. Why did occidental modernity fail in the Arab Middle East: the failed modern state?

    OpenAIRE

    Sardar, Aziz

    2011-01-01

    This thesis asks a straightforward but nevertheless complex question, that is: Why did modernity fail in the Arab Middle East? The notion of modernity in this thesis signifies the occidental modernity which reached the region in many different forms and through various channels. This occidental modernity had an impact on many areas and changed the societies and politics of the region. But these changes stopped short of reaching modernity; in other words, they failed to change the society from ...

  2. Input/Output linearizing control of a nuclear reactor

    International Nuclear Information System (INIS)

    Perez C, V.

    1994-01-01

    The feedback linearization technique is an approach to nonlinear control design. The basic idea is to transform, by means of algebraic methods, the dynamics of a nonlinear control system into a full or partial linear system. As a result of this linearization process, the well-known basic linear control techniques can be used to obtain some desired dynamic characteristics. When full linearization is achieved, the method is referred to as input-state linearization, whereas when partial linearization is achieved, the method is referred to as input-output linearization. We will deal with the latter. By means of input-output linearization, the dynamics of a nonlinear system can be decomposed into an external part (input-output), and an internal part (unobservable). Since the external part consists of a linear relationship between the output of the plant and the auxiliary control input mentioned above, it is easy to design such an auxiliary control input so that we get the output to behave in a predetermined way. Since the internal dynamics of the system is known, we can check its dynamic behavior in order to ensure that the internal states are bounded. The linearization method described here can be applied to systems with one input/one output, as well as to systems with multiple inputs/multiple outputs. Typical control problems such as stabilization and reference path tracking can be solved using this technique. In this work, the input/output linearization theory is presented, as well as the problem of getting the output variable to track some desired trajectories. Further, the design of an input/output control system applied to the nonlinear model of a research nuclear reactor is included, along with the results obtained by computer simulation. (Author)
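
    The input-output linearization recipe can be illustrated on a textbook single-input system rather than the reactor model: with output y = x1 and relative degree two, the control cancels the plant nonlinearity and imposes stable linear error dynamics on the tracking error. Everything in the sketch below (plant, gains, reference) is an illustrative assumption.

    # Input-output linearization for a textbook system (NOT the reactor model):
    #   x1' = x2,  x2' = -a*x1**3 + u,  y = x1  (relative degree 2).
    # The control cancels the cubic term and imposes e'' + k1 e' + k0 e = 0 on
    # the tracking error e = y - y_ref.
    import numpy as np
    from scipy.integrate import solve_ivp

    a, k1, k0 = 2.0, 4.0, 4.0                   # plant nonlinearity and error-dynamics gains

    def y_ref(t):   return np.sin(t)
    def dy_ref(t):  return np.cos(t)
    def ddy_ref(t): return -np.sin(t)

    def closed_loop(t, x):
        x1, x2 = x
        e, de = x1 - y_ref(t), x2 - dy_ref(t)
        v = ddy_ref(t) - k1 * de - k0 * e       # new (linear) input
        u = a * x1 ** 3 + v                     # cancel the nonlinearity
        return [x2, -a * x1 ** 3 + u]

    sol = solve_ivp(closed_loop, (0, 10), [0.5, 0.0], dense_output=True, max_step=0.01)
    t = np.linspace(0, 10, 500)
    err = np.abs(sol.sol(t)[0] - y_ref(t))
    print("max tracking error after t = 5 s:", err[t > 5].max())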

  3. Discriminative Elastic-Net Regularized Linear Regression.

    Science.gov (United States)

    Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen

    2017-03-01

    In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminative representations to make the final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods are available at http://www.yongxu.org/lunwen.html.
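
    The general recipe of regressing class-indicator targets with an elastic-net penalty can be sketched with off-the-shelf tools; the baseline below uses plain one-hot targets and scikit-learn's multi-task elastic net, without the paper's relaxed target matrix or singular-value regularizer.

    # Elastic-net regularized linear regression onto one-hot class targets,
    # followed by nearest-target (argmax) classification.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.linear_model import MultiTaskElasticNet
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    X, y = load_digits(return_X_y=True)
    Y = np.eye(10)[y]                                    # zero-one target matrix
    Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.3, random_state=0)
    scaler = StandardScaler().fit(Xtr)
    Xtr, Xte = scaler.transform(Xtr), scaler.transform(Xte)

    model = MultiTaskElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=5000).fit(Xtr, Ytr)
    pred = model.predict(Xte).argmax(axis=1)
    print("accuracy:", (pred == Yte.argmax(axis=1)).mean())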

  4. Linear programming mathematics, theory and algorithms

    CERN Document Server

    1996-01-01

    Linear Programming provides an in-depth look at simplex based as well as the more recent interior point techniques for solving linear programming problems. Starting with a review of the mathematical underpinnings of these approaches, the text provides details of the primal and dual simplex methods with the primal-dual, composite, and steepest edge simplex algorithms. This then is followed by a discussion of interior point techniques, including projective and affine potential reduction, primal and dual affine scaling, and path following algorithms. Also covered is the theory and solution of the linear complementarity problem using both the complementary pivot algorithm and interior point routines. A feature of the book is its early and extensive development and use of duality theory. Audience: The book is written for students in the areas of mathematics, economics, engineering and management science, and professionals who need a sound foundation in the important and dynamic discipline of linear programming.

  5. Fast Solvers for Dense Linear Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kauers, Manuel [Research Institute for Symbolic Computation (RISC), Altenbergerstrasse 69, A4040 Linz (Austria)

    2008-10-15

    It appears that large-scale calculations in particle physics often require solving systems of linear equations with rational number coefficients exactly. If classical Gaussian elimination is applied to a dense system, the time needed to solve such a system grows exponentially in the size of the system. In this tutorial paper, we present a standard technique from computer algebra that avoids this exponential growth: homomorphic images. Using this technique, big dense linear systems can be solved in a much more reasonable time than using Gaussian elimination over the rationals.
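
    A hedged sketch of the homomorphic-image idea follows: solve the system modulo several primes and recombine the residues with the Chinese Remainder Theorem. For brevity the solution is assumed to have integer entries, whereas a full implementation would add rational reconstruction; the primes and the example system are arbitrary.

```python
# Solve an integer linear system via modular (homomorphic) images plus CRT.
# Assumes the exact solution is integral; rational reconstruction is omitted.
from sympy import Matrix

def solve_mod(A, b, p):
    # Gaussian elimination over GF(p): invert A modulo p and multiply by b
    x = Matrix(A).inv_mod(p) * Matrix(b)
    return [int(v) % p for v in x]

def crt_pair(r1, m1, r2, m2):
    # combine x = r1 (mod m1) and x = r2 (mod m2) into x mod m1*m2
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return r1 + m1 * t

A = [[3, 1], [1, 2]]
b = [9, 8]                                   # exact solution x = (2, 3)
primes = [101, 103]

images = [solve_mod(A, b, p) for p in primes]
x, M = images[0], primes[0]
for img, p in zip(images[1:], primes[1:]):
    x = [crt_pair(xi, M, si, p) for xi, si in zip(x, img)]
    M *= p
# shift into the symmetric range so small negative entries are recovered too
print([xi if xi <= M // 2 else xi - M for xi in x])
```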

  6. Multi-criteria approach with linear combination technique and analytical hierarchy process in land evaluation studies

    Directory of Open Access Journals (Sweden)

    Orhan Dengiz

    2018-01-01

    Full Text Available Land evaluation analysis is a prerequisite to achieving optimum utilization of the available land resources. Lack of knowledge on the best combination of factors that suit production has contributed to low yields. The aim of this study was to determine the most suitable areas for agricultural use. To determine the land suitability classes of the study area, a multi-criteria approach was used with the linear combination technique and the analytical hierarchy process, taking into consideration land and soil physico-chemical characteristics such as slope, texture, depth, drainage, stoniness, erosion, pH, EC, CaCO3 and organic matter. These data and the land mapping units were taken from a detailed digital soil map at a scale of 1:5,000. In addition, a GIS program was used to produce a land suitability map of the study area. This study was carried out at the Mahmudiye, Karaamca, Yazılı, Çiçeközü, Orhaniye and Akbıyık villages in the Yenişehir district of Bursa province. The total study area is 7059 ha; 6890 ha has been used for irrigated agriculture, dry farming and pasture, while 169 ha has been used for non-agricultural activities such as settlement, roads and water bodies. The average annual temperature and precipitation of the study area are 16.1 °C and 1039.5 mm, respectively. After determination of the land suitability distribution classes for the study area, it was found that 15.0% of the study area is highly (S1) or moderately (S2) suitable, while 85% is marginally suitable or unsuitable, coded as S3 and N. Some relationships were also determined by comparing the results of the linear combination technique with other approaches such as the Land Use Capability Classification and Suitability Class for Agricultural Use methods.
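
    The weighted linear combination step described above can be sketched as follows; the criterion list, standardized scores, AHP-style weights, and class thresholds are invented for illustration and do not reproduce the study's values.

```python
# Weighted linear combination of standardized criterion scores for land units,
# followed by a simple threshold classification into S1/S2/S3/N.
import numpy as np

criteria = ["slope", "texture", "depth", "drainage", "pH", "organic_matter"]
weights = np.array([0.28, 0.20, 0.17, 0.15, 0.12, 0.08])   # assumed AHP-derived weights
assert abs(weights.sum() - 1.0) < 1e-9

# standardized scores (0 = unsuitable, 1 = ideal) for three mapping units
scores = np.array([
    [0.9, 0.8, 0.7, 0.9, 0.8, 0.6],
    [0.4, 0.5, 0.6, 0.3, 0.7, 0.5],
    [0.1, 0.3, 0.2, 0.2, 0.4, 0.3],
])

def suitability_class(s):
    if s >= 0.75: return "S1"
    if s >= 0.50: return "S2"
    if s >= 0.25: return "S3"
    return "N"

suitability = scores @ weights            # linear combination per mapping unit
for unit, s in enumerate(suitability, start=1):
    print(f"unit {unit}: score {s:.2f} -> {suitability_class(s)}")
```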

  7. New Developments in FPGA Devices: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective

    Science.gov (United States)

    Berg, Melanie; LaBel, Kenneth; Pellish, Jonathan

    2016-01-01

    It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures. Subsequently, fail-safe and mitigation strategies will differ per FPGA type. In this session several design approaches for safe systems will be presented. It will also explore the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies; so that users can select appropriate mitigation and perform the required trade for system insertion. The presentation will describe three types of FPGA devices and their susceptibilities in radiation environments.

  8. A Physical Model of Mass Ejection in Failed Supernovae

    Science.gov (United States)

    Coughlin, Eric R.; Quataert, Eliot; Fernández, Rodrigo; Kasen, Daniel

    2018-03-01

    During the core collapse of massive stars, the formation of the protoneutron star is accompanied by the emission of a significant amount of mass-energy (˜0.3 M⊙) in the form of neutrinos. This mass-energy loss generates an outward-propagating pressure wave that steepens into a shock near the stellar surface, potentially powering a weak transient associated with an otherwise-failed supernova. We analytically investigate this mass-loss-induced wave generation and propagation. Heuristic arguments provide an accurate estimate of the amount of energy contained in the outgoing sound pulse. We then develop a general formalism for analyzing the response of the star to centrally concentrated mass loss in linear perturbation theory. To build intuition, we apply this formalism to polytropic stellar models, finding qualitative and quantitative agreement with simulations and heuristic arguments. We also apply our results to realistic pre-collapse massive star progenitors (both giants and compact stars). Our analytic results for the sound pulse energy, excitation radius, and steepening in the stellar envelope are in good agreement with full time-dependent hydrodynamic simulations. We show that prior to the sound pulses arrival at the stellar photosphere, the photosphere has already reached velocities ˜20 - 100% of the local sound speed, thus likely modestly decreasing the stellar effective temperature prior to the star disappearing. Our results provide important constraints on the physical properties and observational appearance of failed supernovae.

  9. A physical model of mass ejection in failed supernovae

    Science.gov (United States)

    Coughlin, Eric R.; Quataert, Eliot; Fernández, Rodrigo; Kasen, Daniel

    2018-06-01

    During the core collapse of massive stars, the formation of the proto-neutron star is accompanied by the emission of a significant amount of mass energy (˜0.3 M⊙) in the form of neutrinos. This mass-energy loss generates an outward-propagating pressure wave that steepens into a shock near the stellar surface, potentially powering a weak transient associated with an otherwise-failed supernova. We analytically investigate this mass-loss-induced wave generation and propagation. Heuristic arguments provide an accurate estimate of the amount of energy contained in the outgoing sound pulse. We then develop a general formalism for analysing the response of the star to centrally concentrated mass loss in linear perturbation theory. To build intuition, we apply this formalism to polytropic stellar models, finding qualitative and quantitative agreement with simulations and heuristic arguments. We also apply our results to realistic pre-collapse massive star progenitors (both giants and compact stars). Our analytic results for the sound pulse energy, excitation radius, and steepening in the stellar envelope are in good agreement with full time-dependent hydrodynamic simulations. We show that prior to the sound pulses arrival at the stellar photosphere, the photosphere has already reached velocities ˜ 20-100 per cent of the local sound speed, thus likely modestly decreasing the stellar effective temperature prior to the star disappearing. Our results provide important constraints on the physical properties and observational appearance of failed supernovae.

  10. Menu-Driven Solver Of Linear-Programming Problems

    Science.gov (United States)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    Program assists inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) computer program is full-featured LP analysis program. Solves plain linear-programming problems as well as more-complicated mixed-integer and pure-integer programs. Also contains efficient technique for solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. Packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).

  11. Linear Water Waves

    Science.gov (United States)

    Kuznetsov, N.; Maz'ya, V.; Vainberg, B.

    2002-08-01

    This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy, and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes'.

  12. Closed External Fixation for Failing or Failed Femoral Shaft Plating in a Developing Country.

    Science.gov (United States)

    Aliakbar, Adil; Witwit, Ibrahim; Al-Algawy, Alaa A Hussein

    2017-08-01

    Femoral shaft fractures are among the common injuries treated by open reduction with internal fixation by plate and screws or by intramedullary nailing, which can achieve a high union rate. The aim was to evaluate the outcome of using closed external fixation to salvage a failing plate (signs of screw loosening and an increasing bone/plate gap) or a failed plate (broken plate, or screws completely out of bone with redisplacement of the fracture). A retrospective study was done on 18 patients, aged 17-42 years, who presented 6-18 weeks after initial surgical fixation with pain, impaired limb function, deformity and abnormal movement at the fracture site. X-rays showed plating failure with an acceptable amount of callus that had unfortunately refractured. Cases associated with infection or with no radiological evidence of callus formation were excluded from this study. Closed reduction was done by manipulation, followed by fracture fixation with an AO external fixator. The patients were encouraged to bear full weight as early as possible, with dynamization later on. Of the 18 patients who underwent external fixation after closed reduction, 15 showed bone healing within 11-18 weeks (mean 14.27 weeks) with good radiological alignment. The external fixator was then removed and physical therapy followed. Closed external fixation for the treatment of failing or failed femoral plating achieves a good success rate, has few complications and is a short procedure, especially in a hospital with limited resources.

  13. Neglected City Narratives And Failed Rebranding

    DEFF Research Database (Denmark)

    Mousten, Birthe; Locmele, Gunta

    2017-01-01

    Rīga, Latvia went through a failed rebranding process as the forerunner of its status as a European Capital of Culture (2014). The same thing happened in Aarhus, Denmark. Aarhus will be a European Capital of Culture (2017) and leading to this, it went through a failed rebranding process. Based on...

  14. When Organization Fails: Why Authority Matters

    DEFF Research Database (Denmark)

    Blaschke, Steffen

    2015-01-01

    Review of: James R. Taylor and Elizabeth J. Van Every / When Organization Fails: Why Authority Matters. (New York: Routledge, 2014. 220 pp. ISBN: 978 0415741668).

  15. A Comparative Evaluation of the Linear Dimensional Accuracy of Four Impression Techniques using Polyether Impression Material.

    Science.gov (United States)

    Manoj, Smita Sara; Cherian, K P; Chitre, Vidya; Aras, Meena

    2013-12-01

    There is much discussion in the dental literature regarding the superiority of one impression technique over the other using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregum Soft™, 3 M ESPE and the four impression techniques used were (1) Monophase impression technique using medium body impression material. (2) One step double mix impression technique using heavy body and light body impression materials simultaneously. (3) Two step double mix impression technique using a cellophane spacer (heavy body material used as a preliminary impression to create a wash space with a cellophane spacer, followed by the use of light body material). (4) Matrix impression using a matrix of polyether occlusal registration material. The matrix is loaded with heavy body material followed by a pick-up impression in medium body material. For each technique, thirty impressions were made of a stainless steel master model that contained three complete crown abutment preparations, which were used as the positive control. Accuracy was assessed by measuring eight dimensions (mesiodistal, faciolingual and inter-abutment) on stone dies poured from impressions of the master model. A two-tailed t test was carried out to test the significance in difference of the distances between the master model and the stone models. One way analysis of variance (ANOVA) was used for multiple group comparison followed by the Bonferroni's test for pair wise comparison. The accuracy was tested at α = 0.05. In general, polyether impression material produced stone dies that were smaller except for the dies produced from the one step double mix impression technique. The ANOVA revealed a highly

  16. On some perturbation techniques for quasi-linear parabolic equations

    Directory of Open Access Journals (Sweden)

    Igor Malyshev

    1990-01-01

    Full Text Available We study a nonhomogeneous quasi-linear parabolic equation and introduce a method that allows us to find the solution of a nonlinear boundary value problem in “explicit” form. This task is accomplished by perturbing the original equation with a source function, which is then found as a solution of some nonlinear operator equation.

  17. Use Residual Correction Method and Monotone Iterative Technique to Calculate the Upper and Lower Approximate Solutions of Singularly Perturbed Non-linear Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Chi-Chang Wang

    2013-09-01

    Full Text Available This paper uses the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed non-linear boundary value problems. First, the monotonicity of the non-linear differential equation is reinforced using the monotone iterative technique; the cubic-spline method is then applied to discretize the differential equation and convert it into a mathematical programming problem with inequality constraints; finally, based on the residual correction concept, the complex constrained problems are transformed into simpler equational iterations. As verified by the four examples given in this paper, the proposed method can quickly obtain the upper and lower solutions of problems of this kind and easily identify the error range between the mean approximate solutions and the exact solutions.

  18. Compressed Sensing with Linear Correlation Between Signal and Measurement Noise

    DEFF Research Database (Denmark)

    Arildsen, Thomas; Larsen, Torben

    2014-01-01

    Existing convex relaxation-based approaches to reconstruction in compressed sensing assume that noise in the measurements is independent of the signal of interest. We consider the case of noise being linearly correlated with the signal and introduce a simple technique for improving compressed sensing reconstruction from such measurements. The technique is based on a linear model of the correlation of additive noise with the signal. The modification of the reconstruction algorithm based on this model is very simple and has negligible additional computational cost compared to standard reconstruction algorithms, but is not known in existing literature. The proposed technique reduces reconstruction error considerably in the case of linearly correlated measurements and noise. Numerical experiments confirm the efficacy of the technique. The technique is demonstrated with application to low...
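
    One simple reading of the linear correlation model above is that the additive noise contains a component proportional to the noiseless measurements; the sketch below compares a LASSO reconstruction that ignores this with one that folds the assumed correlation coefficient into the sensing matrix. This is an illustrative interpretation, not necessarily the authors' exact algorithm; the sizes and the coefficient c are made up.

```python
# Compressed sensing toy example with noise partly proportional to the signal.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, m, k, c = 200, 80, 8, 0.3        # signal length, measurements, sparsity, correlation

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x + c * (A @ x) + 0.01 * rng.normal(size=m)   # correlated + white noise

naive = Lasso(alpha=0.01, fit_intercept=False).fit(A, y).coef_            # ignores c
aware = Lasso(alpha=0.01, fit_intercept=False).fit((1 + c) * A, y).coef_  # models c

for name, est in [("naive", naive), ("model-aware", aware)]:
    print(name, np.linalg.norm(est - x) / np.linalg.norm(x))
```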

  19. An efficient formulation for linear and geometric non-linear membrane elements

    Directory of Open Access Journals (Sweden)

    Mohammad Rezaiee-Pajand

    Full Text Available Utilizing the strain-gradient notation process and the free formulation, an efficient way of constructing membrane elements is proposed. This strategy can be used for linear and geometrically non-linear problems. In the suggested formulation, the optimization constraints of insensitivity to distortion, rotational invariance and freedom from parasitic shear error are employed. In addition, the equilibrium equations are established based on constraints among the strain states. The authors' technique can easily separate the rigid body motions from the deformational motions. In this article, a novel triangular element, named SST10, is formulated. This element is used in several plane problems having irregular meshes and complicated geometry with linear and geometrically nonlinear behavior. The numerical outcomes clearly demonstrate the efficiency of the new formulation.

  20. 7 CFR 983.52 - Failed lots/rework procedure.

    Science.gov (United States)

    2010-01-01

    7 CFR § 983.52 (2010), Failed lots/rework procedure: (a) Substandard pistachios... The committee may establish, with the Secretary's approval, appropriate rework procedures. (b) Failed lot...

  1. Posterior transpedicular approach with circumferential debridement and anterior reconstruction as a salvage procedure for symptomatic failed vertebroplasty.

    Science.gov (United States)

    Chiu, Yen-Chun; Yang, Shih-Chieh; Chen, Hung-Shu; Kao, Yu-Hsien; Tu, Yuan-Kun

    2015-02-10

    Complications and failure of vertebroplasty, such as cement dislodgement, cement leakage, or spinal infection, usually result in spinal instability and neural element compression. Combined anterior and posterior approaches are the most common salvage procedure for symptomatic failed vertebroplasty. The purpose of this study is to evaluate the feasibility and efficacy of a single posterior approach technique for the treatment of patients with symptomatic failed vertebroplasty. Ten patients with symptomatic failed vertebroplasty underwent circumferential debridement and anterior reconstruction surgery through a single-stage posterior transpedicular approach (PTA) from January 2009 to December 2011 at our institution. The differences in visual analog scale (VAS) score, neurologic status, and vertebral body reconstruction before and after surgery were recorded. The clinical outcomes of patients were categorized as excellent, good, fair, or poor based on modified Brodsky's criteria. The symptomatic failed vertebroplasty occurred between the T11 and L3 vertebrae with one- or two-level involvement. The average VAS score was 8.3 (range, 7 to 9) before surgery and decreased significantly to 3.2 (range, 2 to 4) after surgery (p < ...). The ... after surgery was 17.3° (range, 4° to 35°) (p < ...), and the ... after surgery was 1 mm (range, 0 to 2). The neurologic status on Frankel's scale improved significantly after surgery (p = 0.014) and at 1 year after surgery (p = 0.046). No one experienced severe complications such as deep wound infection or neurologic deterioration. All patients achieved good or excellent outcomes after surgery based on modified Brodsky's criteria (p < ...). The single-stage posterior approach with circumferential debridement and anterior reconstruction provides good clinical outcomes and a low complication rate, and can be considered an alternative to combined anterior and posterior approaches for patients with symptomatic failed vertebroplasty.

  2. Acoustic emission linear pulse holography

    International Nuclear Information System (INIS)

    Collins, H.D.; Busse, L.J.; Lemon, D.K.

    1983-01-01

    This paper describes acoustic emission (AE) linear pulse holography, which produces a chronological linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. A thirty-two point sampling array is used to construct phase-only linear holograms of simulated acoustic emission sources on large metal plates. The concept behind AE linear pulse holography is illustrated, and a block diagram of a data acquisition system to implement the concept is given. Array element spacing, synthetic frequency criteria, and lateral depth resolution are specified. A reference timing transducer, positioned between the array and the inspection zone, which initiates the time-of-flight measurements is also described. The results graphically illustrate the technique using a one-dimensional FFT computer algorithm (i.e., linear backward wave) for AE image reconstruction.

  3. Failed common bile duct cannulation during pregnancy: Rescue with endoscopic ultrasound guided rendezvous procedure.

    Science.gov (United States)

    Singla, Vikas; Arora, Anil; Tyagi, Pankaj; Sharma, Praveen; Bansal, Naresh; Kumar, Ashish

    2016-01-01

    Common bile duct (CBD) stones can lead to serious complications and require intervention with either endoscopic retrograde cholangiopancreatography (ERCP) or laparoscopic techniques for urgent relief. On an average 10%-20% of the patients with gall bladder stones can have associated CBD stones. CBD stones during pregnancy can be associated with hazardous complications for both the mother and the fetus. Failed cannulation while performing ERCP during pregnancy is a technically demanding situation, which requires immediate rescue with special techniques. Conventional rescue techniques may not be feasible and can be associated with hazardous consequences. Endoscopic ultrasound (EUS) guided rendezvous technique has now emerged as a safe alternative, and in one of our patients, this technique was successfully attempted. To the best of our knowledge, this is the first case report in the literature on EUS-guided rendezvous procedure during pregnancy.

  4. Seismic analysis of equipment system with non-linearities such as gap and friction using equivalent linearization method

    International Nuclear Information System (INIS)

    Murakami, H.; Hirai, T.; Nakata, M.; Kobori, T.; Mizukoshi, K.; Takenaka, Y.; Miyagawa, N.

    1989-01-01

    Many of the equipment systems of nuclear power plants contain a number of non-linearities, such as gap and friction, due to their mechanical functions. It is desirable to take such non-linearities into account appropriately for the evaluation of aseismic soundness. However, in usual design work, a linear analysis method with rough assumptions is applied from an engineering point of view. An equivalent linearization method is considered to be one of the effective analytical techniques for evaluating non-linear responses, provided that errors to a certain extent are tolerated, because it offers greater simplicity in analysis and savings in computing time compared with non-linear analysis. The objective of this paper is to investigate the applicability of the equivalent linearization method to evaluating the maximum earthquake response of equipment systems such as the CANDU Fuelling Machine, which has multiple non-linearities.

  5. Non-surgical retreatment of a failed apicoectomy without retrofilling using white mineral trioxide aggregate as an apical barrier.

    Science.gov (United States)

    Stefopoulos, Spyridon; Tzanetakis, Giorgos N; Kontakiotis, Evangelos G

    2012-01-01

    Root-end resected teeth with persistent apical periodontitis are usually retreated surgically or a combination of non-surgical and surgical retreatment is employed. However, patients are sometimes unwilling to be subjected to a second surgical procedure. The apical barrier technique that is used for apical closure of immature teeth with necrotic pulps may be an alternative to non-surgically retreat a failed apicoectomy. Mineral trioxide aggregate (MTA) has become the material of choice in such cases because of its excellent biocompatibility, sealing ability and osseoinductive properties. This case report describes the non-surgical retreatment of a failed apicoectomy with no attempt at retrofilling of a maxillary central incisor. White MTA was used to induce apical closure of the wide resected apical area. Four-year follow-up examination revealed an asymptomatic, fully functional tooth with a satisfactory healing of the apical lesion. White MTA apical barrier may constitute a reliable and efficient technique to non-surgically retreat teeth with failed root-end resection. The predictability of such a treatment is of great benefit for the patient who is unwilling to be submitted to a second surgical procedure.

  6. Linearity of holographic entanglement entropy

    Energy Technology Data Exchange (ETDEWEB)

    Almheiri, Ahmed [Stanford Institute for Theoretical Physics, Department of Physics,Stanford University, Stanford, CA 94305 (United States); Dong, Xi [School of Natural Sciences, Institute for Advanced Study,Princeton, NJ 08540 (United States); Swingle, Brian [Stanford Institute for Theoretical Physics, Department of Physics,Stanford University, Stanford, CA 94305 (United States)

    2017-02-14

    We consider the question of whether the leading contribution to the entanglement entropy in holographic CFTs is truly given by the expectation value of a linear operator as is suggested by the Ryu-Takayanagi formula. We investigate this property by computing the entanglement entropy, via the replica trick, in states dual to superpositions of macroscopically distinct geometries and find it consistent with evaluating the expectation value of the area operator within such states. However, we find that this fails once the number of semi-classical states in the superposition grows exponentially in the central charge of the CFT. Moreover, in certain such scenarios we find that the choice of surface on which to evaluate the area operator depends on the density matrix of the entire CFT. This nonlinearity is enforced in the bulk via the homology prescription of Ryu-Takayanagi. We thus conclude that the homology constraint is not a linear property in the CFT. We also discuss the existence of ‘entropy operators’ in general systems with a large number of degrees of freedom.

  7. Compact multi-energy electron linear accelerators

    International Nuclear Information System (INIS)

    Tanabe, E.; Hamm, R.W.

    1985-01-01

    Two distinctly different concepts that have been developed for compact multi-energy, single-section, standing-wave electron linear accelerator structures are presented. These new concepts, which utilize (a) variable nearest neighbor couplings and (b) accelerating field phase switching, provide the capability of continuously varying the electron output energy from the accelerator without degrading the energy spectrum. These techniques also provide the means for continuously varying the energy spectrum while maintaining a given average electron energy, and have been tested successfully with several accelerators of lengths from 0.1 m to 1.9 m. Theoretical and experimental results from these accelerators, and demonstrated applications of these techniques to medical and industrial linear accelerator technology, will be described. In addition, possible new applications available to research and industry from these techniques are presented. (orig.)

  8. Failed fuel detector

    International Nuclear Information System (INIS)

    Martucci, J.A.

    1975-01-01

    A failed fuel detection apparatus is described for a nuclear reactor having a liquid cooled core comprising a gas collection hood adapted to engage the top of the suspect assembly and means for delivering a stripping gas to the vicinity of the bottom of the suspect fuel assembly. (U.S.)

  9. Programmable Solution for Solving Non-linearity Characteristics of Smart Sensor Applications

    Directory of Open Access Journals (Sweden)

    S. Khan

    2007-10-01

    Full Text Available This paper presents a simple but programmable technique to solve the problem of non-linear characteristics of sensors used in more sensitive applications. The nonlinearity of the output response becomes a very sensitive issue in cases where a proportional increase in the physical quantity fails to bring about a proportional increase in the signal measured. The nonlinearity is addressed by using the interpolation method on the characteristics of a given sensor, approximating it to a set of tangent lines, the tangent points of which are recognized in the code of the processor by IF-THEN code. The method suggested here eliminates the use of external circuits for interfacing, and eases the programming burden on the processor at the cost of proportionally reduced memory requirements. The mathematically worked out results are compared with the simulation and experimental results for an IR sensor selected for the purpose and used for level measurement. This work will be of paramount importance and significance in applications where the controlled signal is required to follow the input signal precisely particularly in sensitive robotic applications.
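
    The tangent-line interpolation with IF-THEN selection described above can be sketched as a piecewise linear lookup; the calibration breakpoints below are hypothetical, and a bisection search stands in for the explicit IF-THEN chain.

```python
# Piecewise linear (tangent-line) correction of a nonlinear sensor response.
import bisect

# (raw_reading, true_value) calibration breakpoints for a hypothetical IR sensor
BREAKPOINTS = [(0.10, 2.0), (0.25, 5.0), (0.45, 12.0), (0.70, 25.0), (0.95, 60.0)]

def linearize(raw):
    """Map a raw reading to the physical quantity by interpolating between
    the two nearest calibration breakpoints."""
    xs = [p[0] for p in BREAKPOINTS]
    i = min(max(bisect.bisect_left(xs, raw), 1), len(xs) - 1)
    (x0, y0), (x1, y1) = BREAKPOINTS[i - 1], BREAKPOINTS[i]
    slope = (y1 - y0) / (x1 - x0)            # tangent-line segment
    return y0 + slope * (raw - x0)

print(linearize(0.30))   # falls on the 0.25-0.45 segment
```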

  10. The role of chemometrics in single and sequential extraction assays: a review. Part II. Cluster analysis, multiple linear regression, mixture resolution, experimental design and other techniques.

    Science.gov (United States)

    Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo

    2011-03-04

    Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview on these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. Solving fault diagnosis problems linear synthesis techniques

    CERN Document Server

    Varga, Andreas

    2017-01-01

    This book addresses fault detection and isolation topics from a computational perspective. Unlike most existing literature, it bridges the gap between the existing well-developed theoretical results and the realm of reliable computational synthesis procedures. The model-based approach to fault detection and diagnosis has been the subject of ongoing research for the past few decades. While the theoretical aspects of fault diagnosis on the basis of linear models are well understood, most of the computational methods proposed for the synthesis of fault detection and isolation filters are not satisfactory from a numerical standpoint. Several features make this book unique in the fault detection literature: Solution of standard synthesis problems in the most general setting, for both continuous- and discrete-time systems, regardless of whether they are proper or not; consequently, the proposed synthesis procedures can solve a specific problem whenever a solution exists Emphasis on the best numerical algorithms to ...

  12. Correlation and simple linear regression.

    Science.gov (United States)

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
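
    For readers who want to reproduce the three quantities discussed above, the sketch below computes the Pearson correlation coefficient, the Spearman rho, and a simple linear regression fit with SciPy on invented data.

```python
# Pearson r, Spearman rho, and simple linear regression on toy data.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8])

r, p_r = stats.pearsonr(x, y)          # linear association
rho, p_rho = stats.spearmanr(x, y)     # monotonic (rank) association
fit = stats.linregress(x, y)           # y ~ slope*x + intercept

print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
print(f"y = {fit.slope:.3f} x + {fit.intercept:.3f}, R^2 = {fit.rvalue**2:.3f}")
```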

  13. Two biased estimation techniques in linear regression: Application to aircraft

    Science.gov (United States)

    Klein, Vladislav

    1988-01-01

    Several ways for detection and assessment of collinearity in measured data are discussed. Because data collinearity usually results in poor least squares estimates, two estimation techniques which can limit a damaging effect of collinearity are presented. These two techniques, the principal components regression and mixed estimation, belong to a class of biased estimation techniques. Detection and assessment of data collinearity and the two biased estimation techniques are demonstrated in two examples using flight test data from longitudinal maneuvers of an experimental aircraft. The eigensystem analysis and parameter variance decomposition appeared to be a promising tool for collinearity evaluation. The biased estimators had far better accuracy than the results from the ordinary least squares technique.
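
    Of the two biased estimators mentioned, principal components regression is straightforward to sketch: regress on the leading principal components to tame collinearity, then map the coefficients back to the original regressors. The synthetic data and the number of retained components are assumptions for illustration, not flight-test data.

```python
# Principal components regression versus ordinary least squares on collinear data.
import numpy as np

rng = np.random.default_rng(2)
n = 100
z = rng.normal(size=n)
X = np.column_stack([z, z + 0.01 * rng.normal(size=n), rng.normal(size=n)])  # columns 1-2 collinear
beta_true = np.array([1.0, 1.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)

Xc, yc = X - X.mean(axis=0), y - y.mean()
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                         # keep the two dominant components
theta = (U[:, :k].T @ yc) / s[:k]             # regression in component space
beta_pcr = Vt[:k].T @ theta                   # map back to the original regressors

beta_ols = np.linalg.lstsq(Xc, yc, rcond=None)[0]
print("OLS:", np.round(beta_ols, 2), " PCR:", np.round(beta_pcr, 2))
```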

  14. A fail-safe microprocessor-based protection system utilising low-level multiplexed sensor signals

    International Nuclear Information System (INIS)

    Orme, S.; Evans, N.J.; Wey, B.O.

    1985-01-01

    The paper describes a fail-safe reactor protection system, called the individual sub-assembly temperature monitoring system (ISAT). It is being developed for the commercial demonstration fast reactor. The system incorporates recent advances in solid-state electronics and in particular microprocessors to implement time-shared data acquisition techniques to obtain and process data from around 1400 fast response thermocouples whilst meeting the required levels for reliability and availability. (author)

  15. Failed fuel detection method

    International Nuclear Information System (INIS)

    Utamura, Motoaki; Urata, Megumu.

    1976-01-01

    Object: To detect failed fuel elements in a reactor with high precision by measuring the radioactivity concentrations of more than one fission-product nuclide (131I and 132I, for example) contained in each sample of coolant from a fuel channel. Method: The radioactivity concentrations in the sampled coolant are obtained from gamma spectra measured with a pulse height analyser after suitable cooling periods chosen according to the half-lives of the fission products to be measured. The first measurement, for 132I, is made two hours after sampling, and the second, for 131I, is started one day after sampling. A fuel element showing high radioactivity concentrations for both 131I and 132I can be identified with confidence as having failed.

  16. A new astrophysical solution to the Too Big To Fail problem. Insights from the moria simulations

    NARCIS (Netherlands)

    Verbeke, R.; Papastergis, E.; Ponomareva, A. A.; Rathi, S.; De Rijcke, S.

    2017-01-01

    Aims: We test whether or not realistic analysis techniques of advanced hydrodynamical simulations can alleviate the Too Big To Fail problem (TBTF) for late-type galaxies. TBTF states that isolated dwarf galaxy kinematics imply that dwarfs live in halos with lower mass than is expected in a Λ cold dark matter (ΛCDM) cosmology.

  17. Parameterized Linear Longitudinal Airship Model

    Science.gov (United States)

    Kulczycki, Eric; Elfes, Alberto; Bayard, David; Quadrelli, Marco; Johnson, Joseph

    2010-01-01

    A parameterized linear mathematical model of the longitudinal dynamics of an airship is undergoing development. This model is intended to be used in designing control systems for future airships that would operate in the atmospheres of Earth and remote planets. Heretofore, the development of linearized models of the longitudinal dynamics of airships has been costly in that it has been necessary to perform extensive flight testing and to use system-identification techniques to construct models that fit the flight-test data. The present model is a generic one that can be relatively easily specialized to approximate the dynamics of specific airships at specific operating points, without need for further system identification, and with significantly less flight testing. The approach taken in the present development is to merge the linearized dynamical equations of an airship with techniques for estimation of aircraft stability derivatives, and to thereby make it possible to construct a linearized dynamical model of the longitudinal dynamics of a specific airship from geometric and aerodynamic data pertaining to that airship. (It is also planned to develop a model of the lateral dynamics by use of the same methods.) All of the aerodynamic data needed to construct the model of a specific airship can be obtained from wind-tunnel testing and computational fluid dynamics

  18. Bone Gap Management Using Linear Rail System (LRS): Initial ...

    African Journals Online (AJOL)

    Background: Vascularized fibular grafting, free fibular graft, tibia profibula synostosis, amputation with a good prosthesis and Ilizarov technique are some of the suitable options for managing bone gaps that result from trauma or treatment of tumours, bone infection, congenital pseudoarthrosis and repeated failed ...

  19. Perturbation analysis of linear control problems

    International Nuclear Information System (INIS)

    Petkov, Petko; Konstantinov, Mihail

    2017-01-01

    The paper presents a brief overview of the technique of splitting operators, proposed by the authors and intended for perturbation analysis of control problems involving unitary and orthogonal matrices. Combined with the technique of Lyapunov majorants and the implementation of the Banach and Schauder fixed point principles, it allows to obtain rigorous non-local perturbation bounds for a set of sensitivity analysis problems. Among them are the reduction of linear systems into orthogonal canonical forms, the feedback synthesis problem and pole assignment problem in particular, as well as other important problems in control theory and linear algebra. Key words: perturbation analysis, canonical forms, feedback synthesis

  20. Method of detecting a failed fuel

    International Nuclear Information System (INIS)

    Utamura, Motoaki; Urata, Megumi; Uchida, Shunsuke.

    1976-01-01

    Object: To improve the detection accuracy for a failed fuel by eliminating the coolant temperature distribution in a fuel assembly. Structure: A failed fuel is detected from the content of nuclear fission products in the coolant by shutting off the upper portion of a fuel assembly immersed in the coolant and sampling the coolant within the assembly. The temperature distribution in the fuel assembly is eliminated by injecting coolant at a higher temperature than the coolant inside and outside the assembly during sampling, thereby replacing the existing coolant in the assembly with the higher-temperature coolant. The failed fuel is then detected from the content of fission products in the coolant by sampling the higher-temperature coolant of the fuel assembly after a certain time has passed. (Moriyama, K.)

  1. Failed fuel rod detector

    Energy Technology Data Exchange (ETDEWEB)

    Uchida, Katsuya; Matsuda, Yasuhiko

    1984-05-02

    The purpose of the project is to enable failed fuel rod detection simply with no requirement for dismantling the fuel assembly. A gamma-ray detection section is arranged so as to attend on the optional fuel rods in the fuel assembly. The fuel assembly is adapted such that a gamma-ray shielding plate is detachably inserted into optional gaps of the fuel rods or, alternatively, the fuel assembly can detachably be inserted to the gamma-ray shielding plate. In this way, amount of gaseous fission products accumulated in all of the plenum portions in the fuel rods as the object of the measurement can be determined without dismantling the fuel assembly. Accordingly, by comparing the amounts of the gaseous fission products, the failed fuel rod can be detected.

  2. Characteristic polynomials of linear polyacenes and their ...

    Indian Academy of Sciences (India)

    Coefficients of characteristic polynomials (CP) of linear polyacenes (LP) have been shown to be obtainable from Pascal's triangle by using a graph factorisation and squaring technique. Strong subspectrality existing among the members of the linear polyacene series has been shown from the derivation of the CP's. Thus it ...

  3. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  4. Circularly polarized light to study linear magneto-optics for ferrofluids: θ-scan technique

    Science.gov (United States)

    Meng, Xiangshen; Huang, Yan; He, Zhenghong; Lin, Yueqiang; Liu, Xiaodong; Li, Decai; Li, Jian; Qiu, Xiaoyan

    2018-06-01

    Circularly polarized light can be divided into two mutually perpendicular linearly polarized light beams with ±π/2 phase differences. In the presence of an external magnetic field, when circularly polarized light travels through a ferrofluid film whose thickness is no more than that of a λ/4 plate, magneto-optical, magnetic birefringence and dichroism effects cause the transmitted light to behave as elliptically polarized light. Using an angular scan by a continuously rotating polarizer as analyzer, the angular (θ) distribution curve of relative intensity (T) corresponding to the elliptically polarized light can be measured. From the elliptical T-θ curve, parameters such as the ratio of the short to the long axis and the angular orientation of the long axis relative to the vertical field direction can be obtained. Thus, magnetic birefringence and dichroism can be probed simultaneously by measuring magneto-optical, positive or negative birefringence and dichroism features in the transmission mode. The proposed method, called the θ-scan technique, can accurately determine sample stability and magnetic field direction, and can cancel the intrinsic ellipticity of the light source. This study may be helpful to further research on ferrofluids and other similar colloidal samples with anisotropic optics.

  5. Simplified Linear Equation Solvers users manual

    Energy Technology Data Exchange (ETDEWEB)

    Gropp, W. [Argonne National Lab., IL (United States); Smith, B. [California Univ., Los Angeles, CA (United States)

    1993-02-01

    The solution of large sparse systems of linear equations is at the heart of many algorithms in scientific computing. The SLES package is a set of easy-to-use yet powerful and extensible routines for solving large sparse linear systems. The design of the package allows new techniques to be used in existing applications without any source code changes in the applications.
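
    The task SLES addresses, solving large sparse linear systems, can be illustrated with SciPy's sparse tools (not the SLES package itself); the 1-D Poisson-like system below is a stand-in example.

```python
# Direct and iterative solution of a large sparse tridiagonal system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 10_000
A = sp.diags([-1, 2, -1], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = spla.spsolve(A, b)          # sparse direct (LU) solve
x_iter, info = spla.cg(A, b)           # conjugate gradient (A is SPD)
print(info, np.linalg.norm(A @ x_iter - b))
```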

  6. Pass/fail patterns of candidates who failed COMLEX-USA level 2-PE because of misrepresentation of clinical findings on postencounter notes.

    Science.gov (United States)

    Langenau, Erik E; Sandella, Jeanne M

    2011-07-01

    In 2007, The National Board of Osteopathic Medical Examiners (NBOME) instituted a policy to address the accuracy and integrity of postencounter written documentation recorded during the Comprehensive Osteopathic Medical Licensing Examination Level 2-Performance Evaluation (COMLEX-USA Level 2-PE). This policy was instituted not only to protect the integrity of the examination, but also to highlight that overdocumentation of clinical findings not obtained during patient encounters may jeopardize patient safety. To investigate overall and domain pass/fail patterns of candidates who misrepresented clinical findings with regard to past and subsequent performance on COMLEX-USA Level 2-PE. Specifically, to investigate what percentage of candidates failed because of misrepresentation on first attempts and how they performed on subsequent administrations, as well as the previous performance patterns of candidates who failed because of misrepresentation on examination retakes. Historical records from NBOME's COMLEX-USA Level 2-PE database (testing cycles 2007-2008, 2008-2009, and 2009-2010) were used to analyze overall and domain pass/fail patterns of candidates who failed at least once because of misrepresentation of clinical findings. Of the 24 candidates who failed because of misrepresentation of postencounter (SOAP) notes, 20 candidates (83%) were first-time examinees. Four candidates (17%) were repeating the examination, 2 of whom were making a third attempt to pass. Among these 20 candidates who failed because of misrepresentation of clinical findings on their first attempt, 19 passed on their next attempt. At the time of study analysis, all but 2 candidates eventually passed the examination in subsequent attempts. Among candidates found to have misrepresented clinical findings on postencounter written documentation on COMLEX-USA Level 2-PE, no pattern existed between their past or subsequent performance with regard to overall or domain pass/fail results. The vast

  7. Linear accelerator radiosurgery for trigeminal neuralgia: case report

    Energy Technology Data Exchange (ETDEWEB)

    Yun, Hyong Geun [Dongguk University International Hospital, Goyang (Korea, Republic of)

    2006-06-15

    Trigeminal neuralgia is defined as an episodic, electrical shock-like sensation in a dermatomal distribution of the trigeminal nerve. When medications fail to control pain, various procedures are used in an attempt to control refractory pain. Of the available procedures, stereotactic radiosurgery is the least invasive and has been demonstrated to produce significant pain relief with minimal side effects. Recently, linear accelerators were introduced as a tool for radiosurgery of trigeminal neuralgia alongside the already accepted gamma unit. The author has experienced one case of trigeminal neuralgia treated with a linear accelerator. The patient was treated with 85 Gy by means of a 5 mm collimator directed to the trigeminal nerve root entry zone. The patient became pain free without medication 20 days after the procedure and remained pain free at 6 months after the procedure. He did not experience facial numbness or other side effects.

  8. One-stage dorsal lingual mucosal graft urethroplasty for the treatment of failed hypospadias repair

    Directory of Open Access Journals (Sweden)

    Hong-Bin Li

    2016-01-01

    Full Text Available The aim of this study was to retrospectively investigate the outcomes of patients who underwent one-stage onlay or inlay urethroplasty using a lingual mucosal graft (LMG) after failed hypospadias repairs. Inclusion criteria included a history of failed hypospadias repair, insufficiency of the local skin that made a reoperation with skin flaps difficult, and necessity of an oral mucosal graft urethroplasty. Patients were excluded if they had undergone a failed hypospadias repair using the foreskin or a multistage repair urethroplasty. Between January 2008 and December 2012, 110 patients with failed hypospadias repairs were treated in our center. Of these patients, 56 underwent a one-stage onlay or inlay urethroplasty using LMG. The median age was 21.8 years (range: 4-45 years). Of the 56 patients, one-stage onlay LMG urethroplasty was performed in 42 patients (group 1), and a modified Snodgrass technique using one-stage inlay LMG urethroplasty was performed in 14 (group 2). The median LMG urethroplasty length was 5.6 ± 1.6 cm (range: 4-13 cm). The mean follow-up was 34.7 months (range: 10-58 months), and complications developed in 12 of 56 patients (21.4%), including urethrocutaneous fistulas in 7 (6 in group 1, 1 in group 2) and neourethral strictures in 5 (4 in group 1, 1 in group 2). The total success rate was 78.6%. Our survey suggests that one-stage onlay or inlay urethroplasty with LMG may be an effective option to treat patients with insufficient available skin after failed hypospadias repairs; LMG harvesting is easy and safe, irrespective of the patient's age.

  9. Non-linear imaging techniques visualize the lipid profile of C. elegans

    Science.gov (United States)

    Mari, Meropi; Petanidou, Barbara; Palikaras, Konstantinos; Fotakis, Costas; Tavernarakis, Nektarios; Filippidis, George

    2015-07-01

    The non-linear techniques Second and Third Harmonic Generation (SHG, THG) have been employed simultaneously to record three dimensional (3D) imaging and localize the lipid content of the muscular areas (ectopic fat) of Caenorhabditis elegans (C. elegans). Simultaneously, Two-Photon Fluorescence (TPEF) was used initially to localize the stained lipids with Nile Red, but also to confirm the THG potential to image lipids successfully. In addition, GFP labelling of the somatic muscles, proves the initial suggestion of the existence of ectopic fat on the muscles and provides complementary information to the SHG imaging of the pharynx. The ectopic fat may be related to a complex of pathological conditions including type-2 diabetes, hypertension and cardiovascular diseases. The elucidation of the molecular path leading to the development of metabolic syndrome is a vital issue with high biological significance and necessitates accurate methods competent of monitoring lipid storage distribution and dynamics in vivo. THG microscopy was employed as a quantitative tool to monitor the lipid accumulation in non-adipose tissues in the pharyngeal muscles of 12 unstained specimens while the SHG imaging revealed the anatomical structure of the muscles. The ectopic fat accumulation on the pharyngeal muscles increases in wild type (N2) C. elegans between 1 and 9 days of adulthood. This suggests a correlation of the ectopic fat accumulation with the aging. Our results can provide new evidence relating the deposition of ectopic fat with aging, but also validate SHG and THG microscopy modalities as new, non-invasive tools capable of localizing and quantifying selectively lipid accumulation and distribution.

  10. Charge transport and recombination in bulk heterojunction solar cells studied by the photoinduced charge extraction in linearly increasing voltage technique

    Science.gov (United States)

    Mozer, A. J.; Sariciftci, N. S.; Lutsen, L.; Vanderzande, D.; Österbacka, R.; Westerling, M.; Juška, G.

    2005-03-01

    Charge carrier mobility and recombination in a bulk heterojunction solar cell based on the mixture of poly[2-methoxy-5-(3,7-dimethyloctyloxy)-phenylene vinylene] (MDMO-PPV) and 1-(3-methoxycarbonyl)propyl-1-phenyl-(6,6)-C61 (PCBM) has been studied using the novel technique of photoinduced charge carrier extraction in a linearly increasing voltage (Photo-CELIV). In this technique, charge carriers are photogenerated by a short laser flash, and extracted under a reverse bias voltage ramp after an adjustable delay time (tdel). The Photo-CELIV mobility at room temperature is found to be μ =2×10-4cm2V-1s-1, which is almost independent on charge carrier density, but slightly dependent on tdel. Furthermore, determination of charge carrier lifetime and demonstration of an electric field dependent mobility is presented.

  11. Failed fuel detection device

    International Nuclear Information System (INIS)

    Sudo, Takayuki.

    1983-01-01

    Purpose: To enable early and sure detection of failed fuels by automatically changing the alarm set value depending on the operation states of a nuclear reactor. Constitution: Gaseous fission products released into coolants are transferred further into cover gases and then introduced through a pipeway to a failed fuel detector. The cover gases introduced from the pipeway to the pipeway or chamber within the detection device are detected by a radiation detector for the radiation dose of the gaseous fission products contained therein. The detected value is converted and amplified as a signal and inputted to a comparator. While on the other hand, a signal corresponding to the reactor power is converted by an alarm setter into a set value and inputted to the comparator. In such a structure, early and sure detection can be made for the fuel failures. (Yoshino, Y.)

  12. Convergence of hybrid methods for solving non-linear partial ...

    African Journals Online (AJOL)

    This paper is concerned with the numerical solution and convergence analysis of non-linear partial differential equations using a hybrid method. The solution technique involves discretizing the non-linear system of PDE to obtain a corresponding non-linear system of algebraic difference equations to be solved at each time ...

  13. Non-linear neutron star oscillations viewed as deviations from an equilibrium state

    International Nuclear Information System (INIS)

    Sperhake, U

    2002-01-01

    A numerical technique is presented which facilitates the evolution of non-linear neutron star oscillations with a high accuracy essentially independent of the oscillation amplitude. We apply this technique to radial neutron star oscillations in a Lagrangian formulation and demonstrate the superior performance of the new scheme compared with 'conventional' techniques. The key feature of our approach is to describe the evolution in terms of deviations from an equilibrium configuration. In contrast to standard perturbation analysis we keep all higher order terms in the evolution equations and thus obtain a fully non-linear description. The advantage of our scheme lies in the elimination of background terms from the equations and the associated numerical errors. The improvements thus achieved will be particularly significant in the study of mildly non-linear effects where the amplitude of the dynamic signal is small compared with the equilibrium values but large enough to warrant non-linear effects. We apply the new technique to the study of non-linear coupling of Eigenmodes and non-linear effects in the oscillations of marginally stable neutron stars. We find non-linear effects in low amplitude oscillations to be particularly pronounced in the range of modes with vanishing frequency which typically mark the onset of instability. (author)

  14. Applied linear algebra

    CERN Document Server

    Olver, Peter J

    2018-01-01

    This textbook develops the essential tools of linear algebra, with the goal of imparting technique alongside contextual understanding. Applications go hand-in-hand with theory, each reinforcing and explaining the other. This approach encourages students to develop not only the technical proficiency needed to go on to further study, but an appreciation for when, why, and how the tools of linear algebra can be used across modern applied mathematics. Providing an extensive treatment of essential topics such as Gaussian elimination, inner products and norms, and eigenvalues and singular values, this text can be used for an in-depth first course, or an application-driven second course in linear algebra. In this second edition, applications have been updated and expanded to include numerical methods, dynamical systems, data analysis, and signal processing, while the pedagogical flow of the core material has been improved. Throughout, the text emphasizes the conceptual connections between each application and the un...

  15. Linearized Flux Evolution (LiFE): A technique for rapidly adapting fluxes from full-physics radiative transfer models

    Science.gov (United States)

    Robinson, Tyler D.; Crisp, David

    2018-05-01

    Solar and thermal radiation are critical aspects of planetary climate, with gradients in radiative energy fluxes driving heating and cooling. Climate models require that radiative transfer tools be versatile, computationally efficient, and accurate. Here, we describe a technique that uses an accurate full-physics radiative transfer model to generate a set of atmospheric radiative quantities which can be used to linearly adapt radiative flux profiles to changes in the atmospheric and surface state-the Linearized Flux Evolution (LiFE) approach. These radiative quantities describe how each model layer in a plane-parallel atmosphere reflects and transmits light, as well as how the layer generates diffuse radiation by thermal emission and by scattering light from the direct solar beam. By computing derivatives of these layer radiative properties with respect to dynamic elements of the atmospheric state, we can then efficiently adapt the flux profiles computed by the full-physics model to new atmospheric states. We validate the LiFE approach, and then apply this approach to Mars, Earth, and Venus, demonstrating the information contained in the layer radiative properties and their derivatives, as well as how the LiFE approach can be used to determine the thermal structure of radiative and radiative-convective equilibrium states in one-dimensional atmospheric models.
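
    To make the linear-adaptation idea concrete, the sketch below applies a first-order update of a flux profile using precomputed derivatives, in the spirit of the LiFE approach; the array names and numbers are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Minimal sketch of linearized flux adaptation (LiFE-style), under the
# assumption that a full-physics model has already supplied a baseline
# flux profile and its derivatives with respect to layer temperatures.
# All names (F0, dF_dT, T0, T_new) are illustrative, not from the paper.

def adapt_fluxes(F0, dF_dT, T0, T_new):
    """First-order update of a flux profile to a perturbed temperature state.

    F0     : (n_levels,) baseline net fluxes from the full-physics model [W m^-2]
    dF_dT  : (n_levels, n_layers) derivatives of fluxes w.r.t. layer temperatures
    T0     : (n_layers,) baseline layer temperatures [K]
    T_new  : (n_layers,) perturbed layer temperatures [K]
    """
    return F0 + dF_dT @ (T_new - T0)

# Example with synthetic numbers: 4 flux levels bounding 3 layers.
F0 = np.array([240.0, 180.0, 120.0, 60.0])
dF_dT = np.array([[0.8, 0.3, 0.1],
                  [0.2, 0.9, 0.3],
                  [0.1, 0.2, 1.0],
                  [0.0, 0.1, 0.4]])
T0 = np.array([220.0, 250.0, 280.0])
T_new = T0 + np.array([0.0, 1.0, 2.0])   # warm the lower layers slightly

print(adapt_fluxes(F0, dF_dT, T0, T_new))
```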

  16. Failed fuel action plan guidelines: Special report

    International Nuclear Information System (INIS)

    1987-11-01

    The objective of this document is to provide a generic guideline that can be used to formulate a failed fuel action plan (FFAP) for specific application by a utility. This document is intended to be part of a comprehensive fuel reliability monitoring, management, and improvement program. The utilities may utilize this document as one resource in developing a failed fuel action plan. This document is not intended to be used as a failed fuel action plan standard. This document is intended to provide guidance on: management responsibilities; fuel performance parameters; cost/benefit analysis; action levels; long-term improvement methods; and data collection, analysis, and trending. 3 refs., 4 figs., 6 tabs

  17. An efficient linear approach in the reservoirs operation for electric power generation; Uma eficiente abordagem linear na operacao de reservatorios para geracao de energia eletrica

    Energy Technology Data Exchange (ETDEWEB)

    Zambon, Katia Livia

    1997-07-01

    A new approach to the scheduling of hydrothermal systems is presented, with a formulation that allows the problem to be solved by linear programming techniques, in contrast to the original form, which is complex and difficult to solve. The models were developed from a linear form of the generation function of the hydroelectric plants, followed by successive linearization of the cost function of the problem. The linear technique used was the Simplex Method, with some modifications, which is efficient, fast and simple. The important physical aspects of the system were preserved, such as the individual representation of the hydroelectric plants, the exponentially increasing cost function and the head effect. Moreover, this formulation can lead to stochastic approaches. All the optimization methods were implemented for the solution of the problem. The performances obtained were compared with each other and with those obtained through non-linear techniques. The algorithms proved to be efficient, with good results very close to the optimal reservoir operation planning obtained by traditional methods. (author)
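
    As a toy illustration of the kind of linearized formulation described above (not the author's actual model), the following sketch schedules a single reservoir over three stages with scipy's linear programming solver, assuming generation proportional to turbined outflow and a linearized thermal cost; all data are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy sketch of a linearized reservoir-operation problem, not the author's
# actual formulation: a single reservoir over 3 stages, with hydro generation
# assumed proportional to turbined outflow (rho) and thermal generation
# filling the remaining demand at a linearized unit cost.
rho, cost_thermal = 0.9, 50.0          # MW per unit outflow; $/MWh (illustrative)
inflow = [30.0, 20.0, 10.0]            # natural inflows per stage
demand = [40.0, 40.0, 40.0]            # load per stage
v0, vmax, qmax = 50.0, 100.0, 35.0     # initial storage, storage bound, outflow bound

# Decision variables per stage t: [q_t (outflow), v_t (end storage), g_t (thermal)]
n = 3
c = np.zeros(3 * n)
c[2::3] = cost_thermal                  # minimize thermal cost only

A_eq, b_eq = [], []
for t in range(n):
    # Water balance: q_t + v_t - v_{t-1} = inflow_t  (v_{-1} = v0)
    row = np.zeros(3 * n)
    row[3 * t] = 1.0        # q_t
    row[3 * t + 1] = 1.0    # v_t
    if t > 0:
        row[3 * (t - 1) + 1] = -1.0
    A_eq.append(row); b_eq.append(inflow[t] + (v0 if t == 0 else 0.0))
    # Load balance: rho*q_t + g_t = demand_t
    row = np.zeros(3 * n)
    row[3 * t] = rho
    row[3 * t + 2] = 1.0
    A_eq.append(row); b_eq.append(demand[t])

bounds = [(0, qmax), (0, vmax), (0, None)] * n
res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds, method="highs")
print(res.x.reshape(n, 3))              # columns: outflow, storage, thermal
```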

  18. Linear accelerator use in the nuclear field

    International Nuclear Information System (INIS)

    Lecomte, J.-C.

    Radiography of internal conformity is performed on weldments and thick castings using linear accelerators. The basic principles relating to linear accelerators are outlined and their advantages over Co-60 sources are described. Linear accelerator operation-related requirements are presented, as well as the use of this apparatus as a method of volumetric inspection during fabrication of French Nuclear Steam Supply Systems (NSSS). Finally, the resources needed to use this technique as an inspection method are dealt with. [fr]

  19. Effect of linear and non-linear blade modelling techniques on simulated fatigue and extreme loads using Bladed

    Science.gov (United States)

    Beardsell, Alec; Collier, William; Han, Tao

    2016-09-01

    There is a trend in the wind industry towards ever larger and more flexible turbine blades. Blade tip deflections in modern blades now commonly exceed 10% of blade length. Historically, the dynamic response of wind turbine blades has been analysed using linear models of blade deflection which include the assumption of small deflections. For modern flexible blades, this assumption is becoming less valid. In order to continue to simulate dynamic turbine performance accurately, routine use of non-linear models of blade deflection may be required. This can be achieved by representing the blade as a connected series of individual flexible linear bodies - referred to in this paper as the multi-part approach. In this paper, Bladed is used to compare load predictions using single-part and multi-part blade models for several turbines. The study examines the impact on fatigue and extreme loads and blade deflection through reduced sets of load calculations based on IEC 61400-1 ed. 3. Damage equivalent load changes of up to 16% and extreme load changes of up to 29% are observed at some turbine load locations. It is found that there is no general pattern in the loading differences observed between single-part and multi-part blade models. Rather, changes in fatigue and extreme loads with a multi-part blade model depend on the characteristics of the individual turbine and blade. Key underlying causes of damage equivalent load change are identified as differences in edgewise- torsional coupling between the multi-part and single-part models, and increased edgewise rotor mode damping in the multi-part model. Similarly, a causal link is identified between torsional blade dynamics and changes in ultimate load results.

  20. A Reduced Dantzig-Wolfe Decomposition for a Suboptimal Linear MPC

    DEFF Research Database (Denmark)

    Standardi, Laura; Poulsen, Niels Kjølstad; Jørgensen, John Bagterp

    2014-01-01

    Linear Model Predictive Control (MPC) is an efficient control technique that repeatedly solves online constrained linear programs. In this work we propose an economic linear MPC strategy for operation of energy systems consisting of multiple and independent power units. These systems cooperate...

  1. The failing firm defence: merger policy and entry

    OpenAIRE

    Mason, Robin; Weeds, Helen

    2003-01-01

    This Paper considers the 'failing firm defence'. Under this principle, found in most antitrust jurisdictions, a merger that would otherwise be blocked due to its adverse effect on competition is permitted when the firm to be acquired is a failing firm, and an alternative, less detrimental merger is unavailable. Competition authorities have shown considerable reluctance to accept the failing firm defence, and it has been successfully used in just a handful of cases. The Paper considers the def...

  2. Coping Styles of Failing Brunei Vocational Students

    Science.gov (United States)

    Mundia, Lawrence; Salleh, Sallimah

    2017-01-01

    Purpose: The purpose of this paper is to determine the prevalence of two types of underachieving students (n = 246) (active failing (AF) and passive failing (PF)) in Brunei vocational and technical education (VTE) institutions and their patterns of coping. Design/methodology/approach: The field survey method was used to directly reach many…

  3. Studying the method of linearization of exponential calibration curves

    International Nuclear Information System (INIS)

    Bunzh, Z.A.

    1989-01-01

    The results of a study of a method for the linearization of exponential calibration curves are given. The calibration technique is described, and the proposed method is compared with piecewise-linear approximation and power series expansion.

  4. Effects of collisions on linear and non-linear spectroscopic line shapes

    International Nuclear Information System (INIS)

    Berman, P.R.

    1978-01-01

    A fundamental physical problem is the determination of atom-atom, atom-molecule and molecule-molecule differential and total scattering cross sections. In this work, a technique for studying atomic and molecular collisions using spectroscopic line shape analysis is discussed. Collisions occurring within an atomic or molecular sample influence the sample's absorptive or emissive properties. Consequently the line shapes associated with the linear or non-linear absorption of external fields by an atomic system reflect the collisional processes occurring in the gas. Explicit line shape expressions are derived characterizing linear or saturated absorption by two-or three-level 'active' atoms which are undergoing collisions with perturber atoms. The line shapes may be broadened, shifted, narrowed, or distorted as a result of collisions which may be 'phase-interrupting' or 'velocity-changing' in nature. Systematic line shape studies can be used to obtain information on both the differential and total active atom-perturber scattering cross sections. (Auth.)

  5. Development of a parameter optimization technique for the design of automatic control systems

    Science.gov (United States)

    Whitaker, P. H.

    1977-01-01

    Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.
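
    The following sketch illustrates the general idea of minimizing a model performance index over a small set of design parameters, in the spirit of the approach described above but without reproducing the original program; the plant, reference model, and gains are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch (not the program described above): pick PD gains for a
# double-integrator plant so that its closed-loop step response tracks a
# desired second-order reference model, by minimizing a model performance
# index J = sum of squared differences between the two responses.
dt, T = 0.01, 5.0
t = np.arange(0.0, T, dt)
wn, zeta = 2.0, 0.7                      # reference model natural frequency and damping

def simulate(accel, x0=(0.0, 0.0)):
    """Euler simulation of x'' = accel(r, x, x') for a unit step command r = 1."""
    x, v = x0
    out = np.empty_like(t)
    for i in range(len(t)):
        a = accel(1.0, x, v)
        x += v * dt
        v += a * dt
        out[i] = x
    return out

# Desired model response: x'' = wn^2 (r - x) - 2 zeta wn x'
y_model = simulate(lambda r, x, v: wn**2 * (r - x) - 2 * zeta * wn * v)

def performance_index(gains):
    kp, kd = gains
    y = simulate(lambda r, x, v: kp * (r - x) - kd * v)   # PD-controlled double integrator
    return np.sum((y - y_model) ** 2) * dt

res = minimize(performance_index, x0=[1.0, 1.0], method="Nelder-Mead")
print("optimal gains (kp, kd):", res.x, " J =", res.fun)
```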

  6. LINEAR AND NONLINEAR CORRECTIONS IN THE RHIC INTERACTION REGIONS

    International Nuclear Information System (INIS)

    PILAT, F.; CAMERON, P.; PTITSYN, V.; KOUTCHOUK, J.P.

    2002-01-01

    A method has been developed to measure operationally the linear and non-linear effects of the interaction region triplets, which gives access to the multipole content through the action kick, by applying closed orbit bumps and analyzing tune and orbit shifts. This technique has been extensively tested and used during the RHIC operations in 2001. Measurements were taken at 3 different interaction regions and for different focusing at the interaction point. Non-linear effects up to the dodecapole have been measured, as well as the effects of linear, sextupolar and octupolar corrections. An analysis package for the data processing has been developed that, through a precise fit of the experimental tune shift data (measured by a phase lock loop technique to better than 10⁻⁵ resolution), determines the multipole content of an IR triplet.

  7. Linear approximation model network and its formation via ...

    Indian Academy of Sciences (India)

    To overcome the deficiency of `local model network' (LMN) techniques, an alternative `linear approximation model' (LAM) network approach is proposed. Such a network models a nonlinear or practical system with multiple linear models fitted along operating trajectories, where individual models are simply networked ...

  8. Prediction of minimum temperatures in an alpine region by linear and non-linear post-processing of meteorological models

    Directory of Open Access Journals (Sweden)

    R. Barbiero

    2007-05-01

    Full Text Available Model Output Statistics (MOS) refers to a method of post-processing the direct outputs of numerical weather prediction (NWP) models in order to reduce the biases introduced by a coarse horizontal resolution. This technique is especially useful in orographically complex regions, where large differences can be found between the NWP elevation model and the true orography. This study carries out a comparison of linear and non-linear MOS methods, aimed at the prediction of minimum temperatures in a fruit-growing region of the Italian Alps, based on the output of two different NWPs (ECMWF T511–L60 and LAMI-3). Temperature, of course, is a particularly important NWP output; among other roles it drives the local frost forecast, which is of great interest to agriculture. The mechanisms of cold air drainage, a distinctive aspect of mountain environments, are often unsatisfactorily captured by global circulation models. The simplest post-processing technique applied in this work was a correction for the mean bias, assessed at individual model grid points. We also implemented a multivariate linear regression on the output at the grid points surrounding the target area, and two non-linear models based on machine learning techniques: Neural Networks and Random Forest. We compare the performance of all these techniques on four different NWP data sets. Downscaling the temperatures clearly improved the temperature forecasts with respect to the raw NWP output, and also with respect to the basic mean bias correction. Multivariate methods generally yielded better results, but the advantage of using non-linear algorithms was small if not negligible. RF, the best performing method, was implemented on ECMWF prognostic output at 06:00 UTC over the 9 grid points surrounding the target area. Mean absolute errors in the prediction of 2 m temperature at 06:00 UTC were approximately 1.2°C, close to the natural variability inside the area itself.
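
    A schematic of the linear versus non-linear MOS comparison is sketched below on synthetic data; the real study uses ECMWF and LAMI output at the grid points around the target area, whereas all numbers here are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Schematic MOS comparison on synthetic data: NWP grid-point temperatures as
# predictors, "observed" minimum temperature as the target.
rng = np.random.default_rng(0)
n_days, n_grid = 1000, 9
X = rng.normal(0.0, 5.0, size=(n_days, n_grid))           # NWP 2 m temperatures at grid points
bias = -2.0 + 0.3 * X.mean(axis=1)                         # state-dependent model bias
y = X.mean(axis=1) - bias + rng.normal(0.0, 1.0, n_days)   # synthetic observed minimum temperature

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("multivariate linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name}: MAE = {mae:.2f} deg C")

# Raw-NWP baseline: average grid-point temperature with no post-processing.
print("raw NWP MAE:", mean_absolute_error(y_te, X_te.mean(axis=1)), "deg C")
```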

  9. Enhancing Linearity of Voltage Controlled Oscillator Thermistor Signal Conditioning Circuit Using Linear Search

    Science.gov (United States)

    Rana, K. P. S.; Kumar, Vineet; Prasad, Tapan

    2018-02-01

    Temperature to Frequency Converters (TFCs) are potential signal conditioning circuits (SCCs) usually employed in temperature measurements using thermistors. An NE/SE-566 based SCC has recently been used in several reported works as a TFC. Application of an NE/SE-566 based SCC requires a mechanism for finding the optimal values of the SCC parameters yielding the optimal linearity and desired sensitivity performances. Two classical methods, namely the inflection point and the three point method, have been employed for this task. In this work, the application of these two methods to an NE/SE-566 based SCC used as a TFC is investigated in detail, and the conditions for their effective usage are developed. Further, since these classical methods offer only an approximate linearization of the temperature-frequency relationship, a linear search based technique is proposed to further enhance the linearity. The implemented linear search method uses the results obtained from the above-mentioned classical methods. The presented simulation studies, for three different industrial-grade thermistors, revealed that linearity enhancements of 21.7, 18.3 and 17.8% can be achieved over the inflection point method and 4.9, 4.7 and 4.7% over the three point method, for an input temperature range of 0-100 °C.
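
    The sketch below illustrates a linear (brute-force) search for a linearizing circuit parameter; it uses the common series-resistor divider with an assumed Beta-model thermistor as a stand-in rather than the NE/SE-566 circuit of the paper, so all component values are illustrative.

```python
import numpy as np

# Illustrative linear search for thermistor linearization using a simple
# series-resistor divider (a stand-in circuit, not the paper's SCC).
R0, T0, beta = 10e3, 298.15, 3950.0          # 10 kOhm @ 25 C, Beta = 3950 K (assumed)
T = np.linspace(273.15, 373.15, 201)         # 0-100 C working range
R_th = R0 * np.exp(beta * (1.0 / T - 1.0 / T0))

def nonlinearity(Rs):
    """Peak deviation of the divider output from its best straight-line fit."""
    v = Rs / (Rs + R_th)                     # normalized divider output vs temperature
    a, b = np.polyfit(T, v, 1)               # best linear approximation
    return np.max(np.abs(v - (a * T + b)))

# Linear search over candidate series resistances.
candidates = np.linspace(1e3, 20e3, 400)
errors = np.array([nonlinearity(Rs) for Rs in candidates])
best = candidates[np.argmin(errors)]
print(f"best series resistance ~ {best:.0f} Ohm, peak nonlinearity {errors.min():.4f}")
```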

  10. Cavity characterization for general use in linear electron accelerators

    International Nuclear Information System (INIS)

    Souza Neto, M.V. de.

    1985-01-01

    The main objective of this work is to develop measurement techniques for the characterization of microwave cavities used in linear electron accelerators. Methods are developed for the measurement of parameters that are essential to the design of an accelerator structure, using conventional techniques of resonant cavities at low power. Disk-loaded cavities were designed and built, similar to those in most existing linear electron accelerators. The methods developed and the estimated accuracy were compared with those from other investigators. The results of this work are relevant for the design of cavities with the objective of developing linear electron accelerators. (author) [pt]

  11. Optimal non-linear health insurance.

    Science.gov (United States)

    Blomqvist, A

    1997-06-01

    Most theoretical and empirical work on efficient health insurance has been based on models with linear insurance schedules (a constant co-insurance parameter). In this paper, dynamic optimization techniques are used to analyse the properties of optimal non-linear insurance schedules in a model similar to one originally considered by Spence and Zeckhauser (American Economic Review, 1971, 61, 380-387) and reminiscent of those that have been used in the literature on optimal income taxation. The results of a preliminary numerical example suggest that the welfare losses from the implicit subsidy to employer-financed health insurance under US tax law may be a good deal smaller than previously estimated using linear models.

  12. Numerical solution of two-dimensional non-linear partial differential ...

    African Journals Online (AJOL)

    linear partial differential equations using a hybrid method. The solution technique involves discretizing the non-linear system of partial differential equations (PDEs) to obtain a corresponding nonlinear system of algebraic difference equations to be ...

  13. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    Science.gov (United States)

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  14. Numerical linear algebra with applications using Matlab

    CERN Document Server

    Ford, William

    2014-01-01

    Designed for those who want to gain a practical knowledge of modern computational techniques for the numerical solution of linear algebra problems, Numerical Linear Algebra with Applications contains all the material necessary for a first year graduate or advanced undergraduate course on numerical linear algebra with numerous applications to engineering and science. With a unified presentation of computation, basic algorithm analysis, and numerical methods to compute solutions, this book is ideal for solving real-world problems. It provides necessary mathematical background information for

  15. Identification of an Equivalent Linear Model for a Non-Linear Time-Variant RC-Structure

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Andersen, P.; Brincker, Rune

    This paper considers estimation of the maximum softening for a RC-structure subjected to earthquake excitation. The so-called Maximum Softening damage indicator relates the global damage state of the RC-structure to the relative decrease of the fundamental eigenfrequency in an equivalent linear... are investigated and compared with ARMAX models used on a running window. The techniques are evaluated using simulated data generated by the non-linear finite element program SARCOF, modeling a 10-storey 3-bay concrete structure subjected to amplitude-modulated Gaussian white noise filtered through a Kanai...

  16. Can a Linear Sigma Model Describe Walking Gauge Theories at Low Energies?

    Science.gov (United States)

    Gasbarro, Andrew

    2018-03-01

    In recent years, many investigations of confining Yang-Mills gauge theories near the edge of the conformal window have been carried out using lattice techniques. These studies have revealed that the spectrum of hadrons in nearly conformal ("walking") gauge theories differs significantly from the QCD spectrum. In particular, a light singlet scalar appears in the spectrum which is nearly degenerate with the PNGBs at the lightest currently accessible quark masses. This state is a viable candidate for a composite Higgs boson. Presently, an acceptable effective field theory (EFT) description of the light states in walking theories has not been established. Such an EFT would be useful for performing chiral extrapolations of lattice data and for serving as a bridge between lattice calculations and phenomenology. It has been shown that the chiral Lagrangian fails to describe the IR dynamics of a theory near the edge of the conformal window. Here we assess a linear sigma model as an alternate EFT description by performing explicit chiral fits to lattice data. In a combined fit to the Goldstone (pion) mass and decay constant, a tree-level linear sigma model has a χ²/d.o.f. = 0.5 compared to χ²/d.o.f. = 29.6 from fitting next-to-leading order chiral perturbation theory. When the 0⁺⁺ (σ) mass is included in the fit, χ²/d.o.f. = 4.9. We remark on future directions for providing better fits to the σ mass.

  17. Technique and dosimetry for total body irradiation with an 8-MV linear accelerator

    International Nuclear Information System (INIS)

    Svahn-Tapper, G.; Nilsson, P.; Jonsson, C.; Alvegard, T.A.

    1987-01-01

    The aim of the study was to develop a method for calculation of the absorbed dose at an arbitrary point in the patient (adults and children). The method should be accurate but simple to use in clinical routine, and it should as far as possible follow the recommendations by ICRU for conventional radiotherapy. An 8-MV linear accelerator is used with a diamond-shaped field and an isocentric technique at a focus-axis distance of 430 cm. The dose rate at an arbitrary point in the patient is calculated from the absorbed dose rate at dose maximum for a phantom size of 30 × 30 × 30 cm³ in the TBI field, an inverse square law factor, the tissue-maximum ratio, an equivalent field size correction factor determined from the patient contour using the Clarkson method, a factor correcting for lack of backscattering material, an off-axis output correction factor, and a factor that corrects for off-axis variations in effective photon beam energy and for oblique beam penetration of the patient. A personal computer is used for the dose calculations. The formula was tested with TLD measurements in a RT Humanoid (adult) phantom and in a Pedo-RT Humanoid (child) phantom. In vivo dose measurements are also presented.
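
    A minimal sketch of the point-dose calculation described above is given below, assuming the listed correction factors simply multiply the reference dose rate; the paper's exact formula and factor values are not reproduced, and all numbers are invented.

```python
# Hedged sketch of a TBI point-dose calculation, assuming the listed
# correction factors multiply the reference dose rate (illustrative only).
def tbi_dose_rate(d_max_rate, inverse_square, tmr, field_size_corr,
                  backscatter_corr, off_axis_output, off_axis_energy_obliquity):
    """Absorbed dose rate at an arbitrary point in the patient [Gy/min]."""
    return (d_max_rate * inverse_square * tmr * field_size_corr
            * backscatter_corr * off_axis_output * off_axis_energy_obliquity)

# Example with illustrative values for a mid-plane point.
rate = tbi_dose_rate(d_max_rate=0.05,                     # Gy/min at d_max, 30x30x30 cm phantom
                     inverse_square=(430.0 / 445.0) ** 2,  # hypothetical point 15 cm beyond the axis
                     tmr=0.78, field_size_corr=0.98, backscatter_corr=0.99,
                     off_axis_output=1.02, off_axis_energy_obliquity=0.97)
print(f"dose rate at point: {rate:.4f} Gy/min")
```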

  18. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    Full Text Available The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function for solving large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations regardless of the characteristics of the coefficient matrix. For finding the global minimizers of this new convex function, any low-cost iterative optimization technique can be applied. In particular, we propose to use the low-cost globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
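
    As a simplified illustration of a spectral gradient iteration for linear systems, the sketch below minimizes the quadratic merit 0.5·||Ax − b||² with Barzilai-Borwein steps and no projection; it is not the paper's non-quadratic convex function or constrained SPG scheme.

```python
import numpy as np

# Simplified sketch of a spectral-gradient iteration for a linear system,
# using the quadratic merit f(x) = 0.5*||Ax - b||^2 rather than the paper's
# non-quadratic convex function, and with no projection or constraints.
def spectral_gradient(A, b, x0=None, tol=1e-8, max_iter=2000):
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    g = A.T @ (A @ x - b)                      # gradient of the merit function
    alpha = 1.0                                # initial step length
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = A.T @ (A @ x_new - b)
        if np.linalg.norm(g_new) < tol:
            return x_new
        s, y = x_new - x, g_new - g
        alpha = (s @ s) / (s @ y) if s @ y > 0 else 1.0   # Barzilai-Borwein step
        x, g = x_new, g_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 40))                  # rectangular (least-squares) system
x_true = rng.normal(size=40)
b = A @ x_true
x = spectral_gradient(A, b)
print("residual:", np.linalg.norm(A @ x - b))
```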

  19. Linear differential equations to solve nonlinear mechanical problems: A novel approach

    OpenAIRE

    Nair, C. Radhakrishnan

    2004-01-01

    Often a non-linear mechanical problem is formulated as a non-linear differential equation. A new method is introduced to find out new solutions of non-linear differential equations if one of the solutions of a given non-linear differential equation is known. Using the known solution of the non-linear differential equation, linear differential equations are set up. The solutions of these linear differential equations are found using standard techniques. Then the solutions of the linear differe...

  20. Who Really Failed? Commentary

    Science.gov (United States)

    Maiuri, Katherine M.; Leon, Raul A.

    2012-01-01

    Scott Jaschik's (2010) article "Who Really Failed?" details the experience of Dominique Homberger, a tenured faculty member at Louisiana State University (LSU) who was removed from teaching her introductory biology course citing student complaints in regards to "the extreme nature" of the grading policy. This removal has…

  1. Abortion: Strong's counterexamples fail

    DEFF Research Database (Denmark)

    Di Nucci, Ezio

    2009-01-01

    This paper shows that the counterexamples proposed by Strong in 2008 in the Journal of Medical Ethics to Marquis's argument against abortion fail. Strong's basic idea is that there are cases--for example, terminally ill patients--where killing an adult human being is prima facie seriously morally...

  2. Using multiple linear regression techniques to quantify carbon ...

    African Journals Online (AJOL)

    Fallow ecosystems provide a significant carbon stock that can be quantified for inclusion in the accounts of global carbon budgets. Process and statistical models of productivity, though useful, are often technically rigid as the conditions for their application are not easy to satisfy. Multiple regression techniques have been ...

  3. Observer-based linear parameter varying H∞ tracking control for hypersonic vehicles

    Directory of Open Access Journals (Sweden)

    Yiqing Huang

    2016-11-01

    Full Text Available This article aims to develop an observer-based linear parameter varying output feedback H∞ tracking controller for hypersonic vehicles. Due to the complexity of the original nonlinear model of the hypersonic vehicle dynamics, a slow–fast loop linear parameter varying polytopic model is introduced for system stability analysis and controller design. Then, a state observer is developed by the linear parameter varying technique in order to estimate the unmeasured attitude angles for the slow-loop system. Also, based on the designed linear parameter varying state observer, an attitude tracking controller is presented to reduce tracking errors for all bounded reference attitude angle inputs. The closed-loop linear parameter varying system is proved to be quadratically stable by the Lyapunov function technique. Finally, simulation results show that the developed linear parameter varying H∞ controller has good tracking capability for reference commands.

  4. A new trajectory correction technique for linacs

    International Nuclear Information System (INIS)

    Raubenheimer, T.O.; Ruth, R.D.

    1990-06-01

    In this paper, we describe a new trajectory correction technique for high energy linear accelerators. Current correction techniques force the beam trajectory to follow misalignments of the Beam Position Monitors. Since the particle bunch has a finite energy spread and particles with different energies are deflected differently, this causes ''chromatic'' dilution of the transverse beam emittance. The algorithm, which we describe in this paper, reduces the chromatic error by minimizing the energy dependence of the trajectory. To test the method we compare the effectiveness of our algorithm with a standard correction technique in simulations on a design linac for a Next Linear Collider. The simulations indicate that chromatic dilution would be debilitating in a future linear collider because of the very small beam sizes required to achieve the necessary luminosity. Thus, we feel that this technique will prove essential for future linear colliders. 3 refs., 6 figs., 2 tabs

  5. Arthroscopic Revision Surgery for Failure of Open Latarjet Technique.

    Science.gov (United States)

    Cuéllar, Adrián; Cuéllar, Ricardo; de Heredia, Pablo Beltrán

    2017-05-01

    To evaluate the efficacy of arthroscopy in treating pain, limited range of motion, and continued instability after a failed open Latarjet technique. A retrospective review of patients who underwent arthroscopic capsule plication after failure of an open Latarjet technique was performed. Revision surgery was indicated in cases of recurrent instability and associated pain. Only patients with a glenoid defect … were included; failure due to capsular redundancy is amenable to successful treatment with arthroscopic capsuloplasty. Arthroscopic approaches can offer a good solution for treating previously failed open Latarjet procedures. Level IV, therapeutic case series. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  6. Linearly polarized photons at ELSA

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, Holger [Physikalisches Institut, Universitaet Bonn (Germany)

    2009-07-01

    To investigate the nucleon resonance regime in meson photoproduction, double polarization experiments are currently performed at the electron accelerator ELSA in Bonn. The experiments make use of a polarized target and circularly or linearly polarized photon beams. Linearly polarized photons are produced by coherent bremsstrahlung from an accurately aligned diamond crystal. The orientation of the crystal with respect to the electron beam is measured using the Stonehenge technique. Both the energy of maximum polarization and the plane of polarization can be deliberately chosen for the experiment. The linearly polarized beam provides the basis for the measurement of azimuthal beam asymmetries, such as Σ (unpolarized target) and G (polarized target).

  7. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m

  8. Emittance control in linear colliders

    International Nuclear Information System (INIS)

    Ruth, R.D.

    1991-01-01

    Before completing a realistic design of a next-generation linear collider, the authors must first learn the lessons taught by the first generation, the SLC. Given that, they must make designs fault tolerant by including correction and compensation in the basic design. They must also try to eliminate these faults by improved alignment and stability of components. When these two efforts cross, they have a realistic design. The techniques of generation and control of emittance reviewed here provide a foundation for a design which can obtain the necessary luminosity in a next-generation linear collider

  9. On the use of iterative techniques for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1994-01-01

    It is possible to use feedforward predictive control for transverse position and trajectory-angle jitter correction. The control procedure is straightforward, but creation of the predictive filter is not as obvious. The two processes tested were the least mean squares (LMS) and Kalman filter methods. The controller parameters calculated offline are downloaded to a real-time analog correction system between macropulses. These techniques worked well for both interpulse (pulse-to-pulse) correction and intrapulse (within a pulse) correction, with the Kalman filter method being the clear winner. A simulation based on interpulse data taken at the Stanford Linear Collider showed an improvement factor of almost three in the average rms jitter over standard feedback techniques for the Kalman filter. An improvement factor of over three was found for the Kalman filter on intrapulse data taken at the Los Alamos Meson Physics Facility. The feedforward systems also improved the correction bandwidth.
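
    A minimal LMS sketch of pulse-to-pulse feedforward prediction is given below; the jitter data, filter length, and step size are invented, and the real systems described above download the filter parameters to analog hardware between macropulses.

```python
import numpy as np

# Minimal LMS sketch of feedforward jitter prediction: predict the next
# macropulse's transverse position error from the last few pulses, then
# treat the prediction as the applied correction.
rng = np.random.default_rng(2)
n_pulses, taps, mu = 2000, 4, 0.05
# Synthetic correlated jitter: slow drift plus noise (stand-in for real BPM data).
drift = np.cumsum(rng.normal(0, 0.02, n_pulses))
jitter = drift + rng.normal(0, 0.05, n_pulses)

w = np.zeros(taps)                       # adaptive filter weights
residual = np.zeros(n_pulses)
for k in range(taps, n_pulses):
    x = jitter[k - taps:k][::-1]         # most recent samples first
    prediction = w @ x                   # feedforward estimate of the next pulse
    error = jitter[k] - prediction       # what remains after the correction
    w += 2 * mu * error * x              # LMS weight update
    residual[k] = error

print("rms jitter, uncorrected:", jitter[taps:].std())
print("rms jitter, LMS feedforward:", residual[taps:].std())
```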

  10. Breaking the Failed-State Cycle

    National Research Council Canada - National Science Library

    Haims, Marla C; Gompert, David C; Treverton, Gregory F; Stearns, Brooke K

    2008-01-01

    In their research and field experience, the authors have observed a wide gulf separating the treatment of the security problems of failed states from the treatment of those states' economic problems...

  11. Development of failed fuel detection system for PWR (III)

    International Nuclear Information System (INIS)

    Hwang, Churl Kew; Kang, Hee Dong; Jeong, Seung Ho; Cho, Byung Sub; Yoon, Byeong Joo; Yoon, Jae Seong

    1987-12-01

    Ultrasonic transducers satisfying the conditions for failed fuel rod detection have been designed and built, and performance tests on them have been carried out. Ultrasonic signal processing units, a manipulator guiding the ultrasonic probe through the fuel assembly lanes, and its control units have been constructed. The performance of the system has been verified experimentally to be successful in failed fuel rod detection. (Author)

  12. Development and Application of an MSALL-Based Approach for the Quantitative Analysis of Linear Polyethylene Glycols in Rat Plasma by Liquid Chromatography Triple-Quadrupole/Time-of-Flight Mass Spectrometry.

    Science.gov (United States)

    Zhou, Xiaotong; Meng, Xiangjun; Cheng, Longmei; Su, Chong; Sun, Yantong; Sun, Lingxia; Tang, Zhaohui; Fawcett, John Paul; Yang, Yan; Gu, Jingkai

    2017-05-16

    Polyethylene glycols (PEGs) are synthetic polymers composed of repeating ethylene oxide subunits. They display excellent biocompatibility and are widely used as pharmaceutical excipients. To fully understand the biological fate of PEGs requires accurate and sensitive analytical methods for their quantitation. Application of conventional liquid chromatography-tandem mass spectrometry (LC-MS/MS) is difficult because PEGs have polydisperse molecular weights (MWs) and tend to produce multicharged ions in-source resulting in innumerable precursor ions. As a result, multiple reaction monitoring (MRM) fails to scan all ion pairs so that information on the fate of unselected ions is missed. This Article addresses this problem by application of liquid chromatography-triple-quadrupole/time-of-flight mass spectrometry (LC-Q-TOF MS) based on the MS ALL technique. This technique performs information-independent acquisition by allowing all PEG precursor ions to enter the collision cell (Q2). In-quadrupole collision-induced dissociation (CID) in Q2 then effectively generates several fragments from all PEGs due to the high collision energy (CE). A particular PEG product ion (m/z 133.08592) was found to be common to all linear PEGs and allowed their total quantitation in rat plasma with high sensitivity, excellent linearity and reproducibility. Assay validation showed the method was linear for all linear PEGs over the concentration range 0.05-5.0 μg/mL. The assay was successfully applied to the pharmacokinetic study in rat involving intravenous administration of linear PEG 600, PEG 4000, and PEG 20000. It is anticipated the method will have wide ranging applications and stimulate the development of assays for other pharmaceutical polymers in the future.

  13. 77 FR 9846 - Source of Income From Qualified Fails Charges

    Science.gov (United States)

    2012-02-21

    ... temporary regulations noted that no trading practice existed at that time for fails charges on securities other than Treasuries, but that if a fails charge trading practice pertaining to other securities was... sources within the United States, and the income from the qualified fails charge is treated as effectively...

  14. An acceleration technique for the Gauss-Seidel method applied to symmetric linear systems

    Directory of Open Access Journals (Sweden)

    Jesús Cajigas

    2014-06-01

    Full Text Available A preconditioning technique to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry is proposed. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that under certain conditions the application of the preconditioner a finite number of times reduces the matrix to a diagonal. A series of numerical experiments using matrices from spatial discretizations of partial differential equations demonstrates that both versions of the preconditioner, the point and the block version, exhibit lower iteration counts than the non-symmetric version. Summary. A preconditioning technique is proposed to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that under certain conditions applying the preconditioner a finite number of steps reduces the matrix of the preconditioned system to a diagonal. A series of experiments with matrices arising from the discretization of partial differential equations shows that both versions of the preconditioner, point and block, require fewer iterations than the version that does not preserve symmetry.
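
    For reference, the sketch below implements a plain Gauss-Seidel sweep on a symmetric positive definite test matrix of the kind arising from PDE discretizations; the paper's I + K preconditioner itself is not reproduced.

```python
import numpy as np

# Plain Gauss-Seidel iteration for a symmetric positive definite system, as a
# baseline for the kind of sweep the I + K preconditioner is meant to accelerate.
def gauss_seidel(A, b, x0=None, tol=1e-8, max_sweeps=5000):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for sweep in range(max_sweeps):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            return x, sweep + 1
    return x, max_sweeps

# 1-D Poisson matrix (tridiagonal, SPD), a typical PDE-discretization test case.
n = 30
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, sweeps = gauss_seidel(A, b)
print("sweeps:", sweeps, " residual:", np.linalg.norm(A @ x - b))
```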

  15. Thermal radiation analysis for small satellites with single-node model using techniques of equivalent linearization

    International Nuclear Information System (INIS)

    Anh, N.D.; Hieu, N.N.; Chung, P.N.; Anh, N.T.

    2016-01-01

    Highlights: • Linearization criteria are presented for a single-node model of satellite thermal analysis. • A nonlinear algebraic system for the linearization coefficients is obtained. • The temperature evolutions obtained from different methods are explored. • The temperature mean and amplitudes versus the heat capacity are discussed. • The dual criterion approach yields smaller errors than other approximate methods. - Abstract: In this paper, the method of equivalent linearization is extended to the thermal analysis of a satellite using both conventional and dual criteria of linearization. These criteria are applied to a nonlinear differential equation of the single-node model of the heat transfer of a small satellite in Low Earth Orbit. A system of nonlinear algebraic equations for the linearization coefficients is obtained in closed form and then solved by an iteration method. The temperature evolution, average values and amplitudes versus the heat capacity obtained by various approaches, including the Runge–Kutta algorithm, the conventional and dual criteria of equivalent linearization, and Grande's approach, are compared. Numerical results reveal that temperature responses obtained from the method of linearization and Grande's approach are quite close to those obtained from the Runge–Kutta method. The dual criterion yields smaller errors than those of the remaining methods when the nonlinearity of the system increases, namely, when the heat capacity varies in the range [1.0, 3.0] × 10⁴ J K⁻¹.

  16. ALPS: A Linear Program Solver

    Science.gov (United States)

    Ferencz, Donald C.; Viterna, Larry A.

    1991-01-01

    ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve for integer, mixed integer, and binary type problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex, and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.

  17. Inferior alveolar nerve block: Alternative technique

    OpenAIRE

    Thangavelu, K.; Kannan, R.; Kumar, N. Senthil

    2012-01-01

    Background: Inferior alveolar nerve block (IANB) is a technique of dental anesthesia, used to produce anesthesia of the mandibular teeth, gingivae of the mandible and lower lip. The conventional IANB is the most commonly used the nerve block technique for achieving local anesthesia for mandibular surgical procedures. In certain cases, however, this nerve block fails, even when performed by the most experienced clinician. Therefore, it would be advantageous to find an alternative simple techni...

  18. Why Companies Fail? The Boiling Frog Syndrome

    OpenAIRE

    Ozcan, Rasim

    2018-01-01

    Why nations fail? An answer is given by Acemoglu and Robinson (2012) by pointing out the importance of institutions for an economy that leads to innovations for economic growth. Christensen (2012) asks a similar question for a firm and diagnoses why companies fail. In this study, I relate Acemoglu and Robinson (2012) with Christensen (2012) in order to better understand how to make companies more prosperous, more powerful, healthier, and live longer via innovations.

  19. Frame sequences analysis technique of linear objects movement

    Science.gov (United States)

    Oshchepkova, V. Y.; Berg, I. A.; Shchepkin, D. V.; Kopylova, G. V.

    2017-12-01

    Obtaining data by noninvasive methods is often needed in many fields of science and engineering. This is achieved through video recording at various frame rates and in various light spectra. Quantitative analysis of the movement of the objects being studied thus becomes an important component of the research. This work discusses the analysis of the motion of linear objects in the two-dimensional plane. The complexity of this problem increases when the frame contains numerous objects whose images may overlap. This study uses a sequence of 30 frames at a resolution of 62 × 62 pixels and a frame rate of 2 Hz. The task was to determine the average velocity of the objects' motion. This velocity was found as an average over 8-12 objects with an error of 15%. After processing, dependencies of the average velocity on the control parameters were found. The processing was performed in the software environment GMimPro, with subsequent approximation of the data using the Hill equation.
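
    The average-velocity estimate described above can be illustrated with a toy calculation: given per-frame centroid positions of a tracked object and the frame rate, the mean displacement between consecutive frames gives the speed. The positions below are synthetic, not data from the study.

```python
import numpy as np

# Toy velocity estimate from a frame sequence: centroid coordinates of one
# tracked object (pixels) over 30 frames at 2 Hz, with invented motion.
frame_rate = 2.0                                  # Hz, as in the described sequence
rng = np.random.default_rng(4)
true_v = np.array([1.5, -0.7])                    # px/frame, invented
positions = np.cumsum(np.vstack([[10.0, 50.0], np.tile(true_v, (29, 1))]), axis=0)
positions += rng.normal(0, 0.3, positions.shape)  # localization noise

steps = np.diff(positions, axis=0)                # displacement per frame, px
speed = np.linalg.norm(steps, axis=1).mean() * frame_rate
print(f"average speed ~ {speed:.2f} px/s")
```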

  20. On the analysis of clonogenic survival data: Statistical alternatives to the linear-quadratic model

    International Nuclear Information System (INIS)

    Unkel, Steffen; Belka, Claus; Lauber, Kirsten

    2016-01-01

    The most frequently used method to quantitatively describe the response to ionizing irradiation in terms of clonogenic survival is the linear-quadratic (LQ) model. In the LQ model, the logarithm of the surviving fraction is regressed linearly on the radiation dose by means of a second-degree polynomial. The ratio of the estimated parameters for the linear and quadratic term, respectively, represents the dose at which both terms have the same weight in the abrogation of clonogenic survival. This ratio is known as the α/β ratio. However, there are plausible scenarios in which the α/β ratio fails to sufficiently reflect differences between dose-response curves, for example when curves with similar α/β ratio but different overall steepness are being compared. In such situations, the interpretation of the LQ model is severely limited. Colony formation assays were performed in order to measure the clonogenic survival of nine human pancreatic cancer cell lines and immortalized human pancreatic ductal epithelial cells upon irradiation at 0-10 Gy. The resulting dataset was subjected to LQ regression and non-linear log-logistic regression. Dimensionality reduction of the data was performed by cluster analysis and principal component analysis. Both the LQ model and the non-linear log-logistic regression model resulted in accurate approximations of the observed dose-response relationships in the dataset of clonogenic survival. However, in contrast to the LQ model the non-linear regression model allowed the discrimination of curves with different overall steepness but similar α/β ratio and revealed an improved goodness-of-fit. Additionally, the estimated parameters in the non-linear model exhibit a more direct interpretation than the α/β ratio. Dimensionality reduction of clonogenic survival data by means of cluster analysis was shown to be a useful tool for classifying radioresistant and sensitive cell lines. More quantitatively, principal component analysis allowed
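
    A minimal least-squares fit of the LQ model, ln(SF) = −(αD + βD²), is sketched below with invented survival data; it illustrates how α, β, and the α/β ratio are obtained, not the paper's full analysis.

```python
import numpy as np

# Least-squares fit of the linear-quadratic (LQ) model to clonogenic survival
# data: ln(SF) = -(alpha*D + beta*D^2).  The survival values below are invented
# (generated from alpha = 0.3 /Gy, beta = 0.03 /Gy^2 plus scatter).
dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])            # Gy
sf = np.exp(-(0.3 * dose + 0.03 * dose**2))
sf *= np.exp(np.random.default_rng(3).normal(0, 0.05, sf.size))  # measurement scatter

# Design matrix for ln(SF) = -alpha*D - beta*D^2 (no intercept: SF(0) = 1).
X = np.column_stack([-dose, -dose**2])
coef, *_ = np.linalg.lstsq(X, np.log(sf), rcond=None)
alpha, beta = coef
print(f"alpha = {alpha:.3f} /Gy, beta = {beta:.4f} /Gy^2, alpha/beta = {alpha/beta:.1f} Gy")
```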

  1. Approach to assurance of reliability of linear accelerator operation observations

    International Nuclear Information System (INIS)

    Bakov, S.M.; Borovikov, A.A.; Kavkun, S.L.

    1994-01-01

    A systems approach to assuring the reliability of observations of linear accelerator operation is proposed. The basic principles of this method consist in exploiting dependences between the facility parameters and in reducing the number of data-acquisition channels in the system without replacing a failed channel with a reserve one. The signal commutation unit, whose introduction into the data acquisition system substantially increases the reliability of the measurement system through active redundancy, is considered in detail. 8 refs. 6 figs.

  2. On the use of iterative techniques for feedforward control of transverse angle and position jitter in linear particle beam accelerators

    International Nuclear Information System (INIS)

    Barr, D.S.

    1995-01-01

    It is possible to use feedforward predictive control for transverse position and trajectory-angle jitter correction. The control procedure is straightforward, but creation of the predictive filter is not as obvious. The two processes tested were the least mean squares (LMS) and Kalman filter methods. The controller parameters calculated offline are downloaded to a real-time analog correction system between macropulses. These techniques worked well for both interpulse (pulse-to-pulse) correction and intrapulse (within a pulse) correction, with the Kalman filter method being the clear winner. A simulation based on interpulse data taken at the Stanford Linear Collider showed an improvement factor of almost three in the average rms jitter over standard feedback techniques for the Kalman filter. An improvement factor of over three was found for the Kalman filter on intrapulse data taken at the Los Alamos Meson Physics Facility. The feedforward systems also improved the correction bandwidth. copyright 1995 American Institute of Physics

  3. Lean Transformation Guidance: Why Organizations Fail To Achieve and Sustain Excellence Through Lean Improvement

    OpenAIRE

    Mohammed Hamed Ahmed

    2013-01-01

    Many companies complain that lean did not achieve their long-term goals and that the improvement impact was very short-lived. Seven out of every ten lean projects fail as companies try to use lean like a toolkit, copying and pasting the techniques without trying to adapt the employees' culture, manage the improvement process, sustain the results, and develop their leaders. When the Toyota production system was created, the main goal was to remove wastes from the shop floor us...

  4. Open Bankart repair for revision of failed prior stabilization: outcome analysis at a mean of more than 10 years.

    Science.gov (United States)

    Neviaser, Andrew S; Benke, Michael T; Neviaser, Robert J

    2015-06-01

    The purpose of this study was to analyze the outcome of open Bankart repair for failed stabilization surgery at a mean follow-up of >10 years. Thirty patients underwent revision open Bankart repair by a single surgeon for failed prior stabilization surgery, with a standard technique and postoperative rehabilitation. All patients were referred by other surgeons. Evaluation was by an independent examiner, at a mean follow-up of 10.2 years. Evaluation included a history, physical examination for range of motion, outcome scores, recurrence, return to athletics, and radiographic examination. All cases had persistent Bankart and Hill-Sachs lesions. Failures included 14 patients with a failed single arthroscopic Bankart repair; 1 patient with 2 failed arthroscopic Bankart repairs; 1 patient with an arthroscopic failure and an open Bankart repair; 7 patients with failed open Bankart repairs; and 1 patient with a failed open Bankart repair, then a failed arthroscopic attempt. Two patients had had thermal capsulorrhaphy; 2 others had staple capsulorrhaphy, 1 with an open capsular shift and 1 after a failed arthroscopic Bankart repair, an open Bankart repair, and then a coracoid transfer. All arthroscopic Bankart repairs had anchors placed medial and superior on the glenoid neck. Mean motion loss compared with the normal contralateral side was as follows: elevation 1.15°, abduction 4.2°, external rotation at the side 3.2°, external rotation in abduction 5.1°, and internal rotation 0.6 vertebral levels (NS). No patient had an apprehension sign, pain, or instability. Of 23 who played sports, 22 resumed after. Outcomes scores were as follows: American Shoulder and Elbow Surgeons, 89.44; Rowe, 86.67; Western Ontario Shoulder Instability Index, 476.26. On radiographic examination, there were 13 normal radiographs and 7 with mild, 2 with moderate, and 0 with severe arthritic changes. The open Bankart repair offers a reliable, consistently successful option for revision of

  5. Characterization of failure modes in deep UV and deep green LEDs utilizing advanced semiconductor localization techniques.

    Energy Technology Data Exchange (ETDEWEB)

    Tangyunyong, Paiboon; Miller, Mary A.; Cole, Edward Isaac, Jr.

    2012-03-01

    We present the results of a two-year early career LDRD that focused on defect localization in deep green and deep ultraviolet (UV) light-emitting diodes (LEDs). We describe the laser-based techniques (TIVA/LIVA) used to localize the defects and interpret data acquired. We also describe a defect screening method based on a quick electrical measurement to determine whether defects should be present in the LEDs. We then describe the stress conditions that caused the devices to fail and how the TIVA/LIVA techniques were used to monitor the defect signals as the devices degraded and failed. We also describe the correlation between the initial defects and final degraded or failed state of the devices. Finally we show characterization results of the devices in the failed conditions and present preliminary theories as to why the devices failed for both the InGaN (green) and AlGaN (UV) LEDs.

  6. Rotational total skin electron irradiation with a linear accelerator

    Science.gov (United States)

    Evans, Michael D.C.; Devic, Slobodan; Parker, William; Freeman, Carolyn R.; Roberge, David; Podgorsak, Ervin B.

    2008-01-01

    The rotational total skin electron irradiation (RTSEI) technique at our institution has undergone several developments over the past few years. Replacement of the formerly used linear accelerator has prompted many modifications to the previous technique. With the current technique, the patient is treated with a single large field while standing on a rotating platform, at a source‐to‐surface distance of 380 cm. The electron field is produced by a Varian 21EX linear accelerator using the commercially available 6 MeV high dose rate total skin electron mode, along with a custom‐built flattening filter. Ionization chambers, radiochromic film, and MOSFET (metal oxide semiconductor field effect transistor) detectors have been used to determine the dosimetric properties of this technique. Measurements investigating the stationary beam properties, the effects of full rotation, and the dose distributions to a humanoid phantom are reported. The current treatment technique and dose regimen are also described. PACS numbers: 87.55.ne, 87.53.Hv, 87.53.Mr

  7. Linear Programming for Vocational Education Planning. Interim Report.

    Science.gov (United States)

    Young, Robert C.; And Others

    The purpose of the paper is to define for potential users of vocational education management information systems a quantitative analysis technique and its utilization to facilitate more effective planning of vocational education programs. Defining linear programming (LP) as a management technique used to solve complex resource allocation problems…

  8. Nelson's stochastic quantization of free linearized gravitational field and its Markovian structure

    International Nuclear Information System (INIS)

    Lim, S.C.

    1983-05-01

    It is shown that by applying Nelson's stochastic quantization scheme to free linearized gravitational field tensor one can associate with the resulting stochastic system a stochastic tensor field which coincides with the "space" part of the Riemannian tensor in Euclidean space-time. However, such a stochastic field fails to satisfy the Markov property. Instead, it satisfies the reflection positivity. The Markovian structure of the stochastic fields associated with the electromagnetic field is also discussed. (author)

  9. 7 CFR 983.152 - Failed lots/rework procedure.

    Science.gov (United States)

    2010-01-01

    7 CFR 983.152 (Title 7, Agriculture; 2010-01-01 edition), ..., Arizona, and New Mexico, Rules and Regulations: Failed lots/rework procedure. (a) Inshell rework procedure for aflatoxin. If inshell rework is selected as a remedy to meet the aflatoxin regulations of this...

  10. A novel amplitude modulated triangular carrier gain linearization technique for SPWM inverter

    Directory of Open Access Journals (Sweden)

    Ramkumar Subburam

    2009-01-01

    Full Text Available This paper presents a new method to extend the linearity of the sinusoidal pulse width modulation (SPWM) to the full range of the pulse dropping region. The proposed amplitude modulated triangular carrier PWM method (AMTCPWM) increases the dynamic range of the SPWM control and eliminates the need for nonlinear modulation in the pulse dropping region to reach the square wave boundary. The novel method combines the spectral quality of SPWM with the efficient single-mode linear control. A simple analytical characterization of the exact method is presented and its effectiveness is demonstrated using simulation for the basic single-phase H-bridge inverter circuit. The hardware results of the designed prototype inverter are presented to validate the improvement offered by the novel scheme.

  11. LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently need thus to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII Format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 Version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. The linear-linear is not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear by an interval halving algorithm. Each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional error thinning algorithm to minimize the size of each cross section table.
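
    As a companion to the method description above, the following short Python sketch illustrates the interval-halving idea on a toy 1/v-shaped cross section given by two points under a log-log interpolation law; the energies, values and tolerance are illustrative and this is not the LINEAR code itself.

        import numpy as np

        def loglog_value(e, e1, s1, e2, s2):
            # Cross section at energy e under log-log interpolation between (e1, s1) and (e2, s2).
            return s1 * (e / e1) ** (np.log(s2 / s1) / np.log(e2 / e1))

        def linearize(e1, s1, e2, s2, tol=1e-3, out=None):
            # Halve [e1, e2] until lin-lin interpolation reproduces the log-log law within 'tol'.
            if out is None:
                out = [(e1, s1)]
            em = 0.5 * (e1 + e2)                               # middle of the interval
            exact = loglog_value(em, e1, s1, e2, s2)
            approx = s1 + (s2 - s1) * (em - e1) / (e2 - e1)    # lin-lin estimate at em
            if abs(approx - exact) > tol * abs(exact):
                linearize(e1, s1, em, exact, tol, out)         # left half, then right half
                linearize(em, exact, e2, s2, tol, out)
            else:
                out.append((e2, s2))
            return out

        # toy example: a 1/v-shaped cross section given by two points, log-log interpolated
        table = linearize(1.0, 10.0, 1.0e3, 10.0 / np.sqrt(1.0e3))
        print(len(table), "points give 0.1% lin-lin accuracy over three decades")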

  12. Efficient decomposition and linearization methods for the stochastic transportation problem

    International Nuclear Information System (INIS)

    Holmberg, K.

    1993-01-01

    The stochastic transportation problem can be formulated as a convex transportation problem with nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the more well known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
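
    The Frank-Wolfe scheme mentioned above can be illustrated with a small Python sketch on a toy convex transportation-type problem: the objective is linearized at the current point, the resulting transportation LP is solved, and an exact line search is taken toward the LP vertex. The quadratic shortage penalty, supplies, demands and costs below are hypothetical stand-ins for the stochastic cost term, not the authors' formulation.

        import numpy as np
        from scipy.optimize import linprog, minimize_scalar

        # Toy convex transportation-type problem (hypothetical data):
        # minimise sum_ij c_ij x_ij + 0.5 * sum_j q_j (d_j - y_j)^2, with y_j = sum_i x_ij,
        # subject to sum_j x_ij <= s_i and x >= 0.
        rng = np.random.default_rng(0)
        m, n = 3, 4
        c = rng.uniform(1.0, 5.0, (m, n))        # linear transport costs
        s = np.array([20.0, 30.0, 25.0])         # supplies
        d = np.array([15.0, 20.0, 10.0, 25.0])   # expected demands
        q = np.full(n, 2.0)                      # quadratic shortage penalty weights

        def obj(x):
            y = x.sum(axis=0)
            return (c * x).sum() + 0.5 * (q * (d - y) ** 2).sum()

        def grad(x):
            y = x.sum(axis=0)
            return c - q * (d - y)               # broadcasts the demand-node term over rows

        A_ub = np.kron(np.eye(m), np.ones(n))    # one supply constraint per row of x
        x = np.zeros((m, n))
        for _ in range(50):
            g = grad(x)
            vertex = linprog(g.ravel(), A_ub=A_ub, b_ub=s, bounds=(0, None),
                             method="highs").x.reshape(m, n)   # linearized subproblem
            step = minimize_scalar(lambda t: obj(x + t * (vertex - x)),
                                   bounds=(0.0, 1.0), method="bounded").x
            x = x + step * (vertex - x)

        print("objective after 50 Frank-Wolfe iterations:", round(obj(x), 3))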

  13. The mathematical structure of the approximate linear response relation

    International Nuclear Information System (INIS)

    Yasuda, Muneki; Tanaka, Kazuyuki

    2007-01-01

    In this paper, we study the mathematical structures of the linear response relation based on Plefka's expansion and the cluster variation method in terms of the perturbation expansion, and we show how this linear response relation approximates the correlation functions of the specified system. Moreover, by comparing the perturbation expansions of the correlation functions estimated by the linear response relation based on these approximation methods with exact perturbative forms of the correlation functions, we are able to explain why the approximate techniques using the linear response relation work well

  14. Post-processing through linear regression

    Science.gov (United States)

    van Schaeybroeck, B.; Vannitsem, S.

    2011-03-01

    Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz 63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.
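
    The simplest of the compared schemes, ordinary least-square (OLS) post-processing, can be sketched in a few lines of Python; the synthetic forecast/observation pairs below are purely illustrative and do not reproduce the study's experiments.

        import numpy as np

        rng = np.random.default_rng(1)
        obs = rng.normal(15.0, 5.0, 500)                        # past observations
        fcst = 0.8 * obs + 3.0 + rng.normal(0.0, 1.5, 500)      # biased, noisy model output

        X = np.column_stack([np.ones_like(fcst), fcst])
        beta, *_ = np.linalg.lstsq(X, obs, rcond=None)          # OLS intercept and slope

        new_fcst = np.array([10.0, 18.0, 25.0])
        corrected = beta[0] + beta[1] * new_fcst                # post-processed forecast
        print("raw:", new_fcst, "corrected:", corrected.round(2))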

  15. Contact analysis and experimental investigation of a linear ultrasonic motor.

    Science.gov (United States)

    Lv, Qibao; Yao, Zhiyuan; Li, Xiang

    2017-11-01

    The effects of surface roughness are not considered in the traditional motor model, which fails to reflect the actual contact mechanism between the stator and slider. An analytical model for calculating the tangential force of a linear ultrasonic motor is proposed in this article. The presented model differs from the previous spring contact model in that the asperities in contact between stator and slider are considered. The influences of preload and exciting voltage on tangential force in the moving direction are analyzed. An experiment is performed to verify the feasibility of this proposed model by comparing the simulation results with the measured data. Moreover, the proposed model and spring model are compared. The results reveal that the proposed model is more accurate than the spring model. The discussion is helpful for the design and modeling of linear ultrasonic motors. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Management of Chronic Recurrent Dislocation of Temporomandibular Joint Using 'U' Shaped Graft: A New Restrictive Technique.

    Science.gov (United States)

    Gadre, Kiran; Singh, Divya; Gadre, Pushkar; Halli, Rajshekhar

    2017-06-01

    Numerous procedures have been described for the treatment of chronic recurrent dislocation of the temporo-mandibular joint (TMJ), either in the form of enhancement or restriction of the condylar movement, with their obvious merits and demerits. We present a new technique using a 'U'-shaped iliac bone graft to restrict the condylar movement and its advantages over the conventional techniques. We have used this technique successfully in 8 cases where Dautrey's procedure had failed, with a follow-up period of 2 years. No patient complained of recurrent dislocation postoperatively. This is a very simple and effective technique where other procedures have failed.

  17. Reliability testing of failed fuel location system

    International Nuclear Information System (INIS)

    Vieru, G.

    1996-01-01

    This paper presents the experimental reliability tests performed in order to prove the reliability parameters for the Failed Fuel Location System (FFLS), equipment used to detect in which channel of a particular heat transport loop a fuel failure is located, and to find which particular bundle pair in that channel has failed. To do so, D2O samples from each reactor channel are sequentially monitored to detect a comparatively high level of delayed neutron activity. 15 refs, 8 figs, 2 tabs

  18. Development of novel segmented-plate linearly tunable MEMS capacitors

    International Nuclear Information System (INIS)

    Shavezipur, M; Khajepour, A; Hashemi, S M

    2008-01-01

    In this paper, novel MEMS capacitors with flexible moving electrodes and high linearity and tunability are presented. The moving plate is divided into small and rigid segments connected to one another by connecting beams at their end nodes. Under each node there is a rigid step which selectively limits the vertical displacement of the node. A lumped model is developed to analytically solve the governing equations of coupled structural-electrostatic physics with mechanical contact. Using the analytical solver, an optimization program finds the best set of step heights that provides the highest linearity. Analytical and finite element analyses of two capacitors with three-segmented- and six-segmented-plate confirm that the segmentation technique considerably improves the linearity while the tunability remains as high as that of a conventional parallel-plate capacitor. Moreover, since the new designs require customized fabrication processes, to demonstrate the applicability of the proposed technique for standard processes, a modified capacitor with flexible steps designed for PolyMUMPs is introduced. Dimensional optimization of the modified design results in a combination of high linearity and tunability. Constraining the displacement of the moving plate can be extended to more complex geometries to obtain smooth and highly linear responses

  19. Failing by design.

    Science.gov (United States)

    McGrath, Rita Gunther

    2011-04-01

    It's hardly news that business leaders work in increasingly uncertain environments, where failures are bound to be more common than successes. Yet if you ask executives how well, on a scale of one to 10, their organizations learn from failure, you'll often get a sheepish "Two-or maybe three" in response. Such organizations are missing a big opportunity: Failure may be inevitable but, if managed well, can be very useful. A certain amount of failure can help you keep your options open, find out what doesn't work, create the conditions to attract resources and attention, make room for new leaders, and develop intuition and skill. The key to reaping these benefits is to foster "intelligent failure" throughout your organization. McGrath describes several principles that can help you put intelligent failure to work. You should decide what success and failure would look like before you start a project. Document your initial assumptions, test and revise them as you go, and convert them into knowledge. Fail fast-the longer something takes, the less you'll learn-and fail cheaply, to contain your downside risk. Limit the number of uncertainties in new projects, and build a culture that tolerates, and sometimes even celebrates, failure. Finally, codify and share what you learn. These principles won't give you a means of avoiding all failures down the road-that's simply not realistic. They will help you use small losses to attain bigger wins over time.

  20. Observations of linear and nonlinear processes in the foreshock wave evolution

    Directory of Open Access Journals (Sweden)

    Y. Narita

    2007-07-01

    Full Text Available Waves in the foreshock region are studied on the basis of a hypothesis that the linear process first excites the waves and further wave-wave nonlinearities scatter the energy of the primary waves into a number of daughter waves. To examine this wave evolution scenario, the dispersion relations, the wave number spectra of the magnetic field energy, and the dimensionless cross helicity are determined from the observations made by the four Cluster spacecraft. The results confirm that the linear process is the ion/ion right-hand resonant instability, but the wave-wave interactions are not clearly identified. We discuss various reasons why the test for the wave-wave nonlinearities fails, and conclude that the higher order statistics would provide direct evidence for the wave coupling phenomena.

  1. A non-linear programming approach to the computer-aided design of regulators using a linear-quadratic formulation

    Science.gov (United States)

    Fleming, P.

    1985-01-01

    A design technique is proposed for linear regulators in which a feedback controller of fixed structure is chosen to minimize an integral quadratic objective function subject to the satisfaction of integral quadratic constraint functions. Application of a non-linear programming algorithm to this mathematically tractable formulation results in an efficient and useful computer-aided design tool. Particular attention is paid to computational efficiency and various recommendations are made. Two design examples illustrate the flexibility of the approach and highlight the special insight afforded to the designer.
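
    A minimal Python sketch of the same idea, under simplifying assumptions, is given below: a fixed-structure state-feedback gain is tuned by a general optimizer to minimize an integral quadratic cost evaluated through a Lyapunov equation. The plant matrices, weights and initial state are invented for illustration, and the integral quadratic constraint functions of the paper are omitted (they could be added as penalty terms).

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov
        from scipy.optimize import minimize

        # Illustrative plant, weights and initial state (not from the paper)
        A = np.array([[0.0, 1.0], [-2.0, -0.5]])
        B = np.array([[0.0], [1.0]])
        Q = np.eye(2)
        R = np.array([[0.1]])
        x_init = np.array([1.0, 0.0])

        def lq_cost(k):
            K = k.reshape(1, 2)                              # fixed-structure feedback u = -K x
            Acl = A - B @ K
            if np.max(np.linalg.eigvals(Acl).real) >= 0.0:   # penalise unstable closed loops
                return 1e6
            # J = x_init' P x_init, with Acl' P + P Acl + Q + K' R K = 0
            P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
            return float(x_init @ P @ x_init)

        res = minimize(lq_cost, np.array([1.0, 1.0]), method="Nelder-Mead")
        print("optimised gains:", res.x.round(3), "cost:", round(res.fun, 4))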

  2. In-core sipping method for the identification of failed fuel assemblies

    International Nuclear Information System (INIS)

    Wu Zhongwang; Zhang Yajun

    2000-01-01

    The failed fuel assembly identification system is an important safety system which ensures safe operation of the reactor and immediate treatment of failed fuel rod cladding. The system uses an internationally recognized method to identify failed fuel assemblies in a reactor with fuel element cases. The in-core sipping method is customarily used to identify failed fuel assemblies during refueling or after fuel rod cladding failure accidents. The test is usually performed after reactor shutdown by taking samples from each fuel element case while the cases are still in their original core positions. The sample activity is then measured to identify failed fuel assemblies. A failed fuel assembly identification system was designed for the NHR-200 based on the properties of the NHR-200 and national requirements. The design provides an internationally recognized level of safety to ensure the safety of the NHR-200.

  3. Approximations for W-Pair Production at Linear-Collider Energies

    CERN Document Server

    Denner, A

    1997-01-01

    We determine the accuracy of various approximations to the O(alpha) corrections for on-shell W-pair production. While an approximation based on the universal corrections arising from initial-state radiation, from the running of alpha, and from corrections proportional to m_t^2 fails in the Linear-Collider energy range, a high-energy approximation improved by the exact universal corrections is sufficiently good above about 500 GeV. These results indicate that in Monte Carlo event generators for off-shell W-pair production the incorporation of the universal corrections is not sufficient and more corrections should be included.

  4. Weeded Out? Gendered Responses to Failing Calculus.

    Science.gov (United States)

    Sanabria, Tanya; Penner, Andrew

    2017-06-01

    Although women graduate from college at higher rates than men, they remain underrepresented in science, technology, engineering, and mathematics (STEM) fields. This study examines whether women react to failing a STEM weed-out course by switching to a non-STEM major and graduating with a bachelor's degree in a non-STEM field. While competitive courses designed to weed out potential STEM majors are often invoked in discussions around why students exit the STEM pipeline, relatively little is known about how women and men react to failing these courses. We use detailed individual-level data from the National Educational Longitudinal Study (NELS) Postsecondary Transcript Study (PETS): 1988-2000 to show that women who failed an introductory calculus course are substantially less likely to earn a bachelor's degree in STEM. In doing so, we provide evidence that weed-out course failure might help us to better understand why women are less likely to earn degrees.

  5. DECOFF Probabilities of Failed Operations

    DEFF Research Database (Denmark)

    Gintautas, Tomas

    2015-01-01

    A statistical procedure of estimation of Probabilities of Failed Operations is described and exemplified using ECMWF weather forecasts and SIMO output from Rotor Lift test case models. Also safety factor influence is investigated. DECOFF statistical method is benchmarked against standard Alpha-factor...

  6. Failed fuel detection device

    International Nuclear Information System (INIS)

    Kawai, Masayoshi; Hayashida, Yoshihisa; Niidome, Jiro.

    1985-01-01

    Purpose: To prevent intrusion of background neutrons to neutron detectors thereby improve the S/N ratio of the detectors in the failed fuel detection device of LMFBR type reactors. Constitution: Neutrons from the reactor core pass through the gaps around the penetration holes in which the primary pipeways pass through the concrete shielding walls and pass through the gaps between the thermal shielding members and the neutron moderating shielding members of the failed fuel detection device and then intrude into the neutron detectors. In view of the above, inner neutron moderating shielding members and movable or resilient neutron shielding members are disposed to the inside of the neutron moderating shielding member. Graphite or carbon hydrides such as paraffin or synthetic resin with a large neutron moderation effect are used as the outer moderating shielding member and materials such as boron or carbon are used for the inner members. As a result, the background neutrons are shielded by the inner neutron moderating shielding members and the resilient neutron shielding members, by which the S/N ratio of the neutron detectors can be increased to 2 - 4 times. (Moriyama, K.)

  7. Failed fuel detector

    International Nuclear Information System (INIS)

    Kogure, Sumio; Seya, Toru; Watanabe, Masaaki.

    1976-01-01

    Purpose: To enhance the reliability of a failed fuel detector which detects the radioactivity of nuclear fission products leaked from fuel elements into the cooling water. Constitution: The collected specimen is introduced into a separator, where co-existing material considered to be an impediment is separated and removed by ion exchange resins, after which the specimen is introduced into a container housing a detector to systematically measure radioactivity. This makes it possible to detect a signal with less background variation, and inspection work also becomes simple. (Kawakami, Y.)

  8. Time series prediction: statistical and neural techniques

    Science.gov (United States)

    Zahirniak, Daniel R.; DeSimio, Martin P.

    1996-03-01

    In this paper we compare the performance of nonlinear neural network techniques to those of linear filtering techniques in the prediction of time series. Specifically, we compare the results of using the nonlinear systems, known as multilayer perceptron and radial basis function neural networks, with the results obtained using the conventional linear Wiener filter, Kalman filter and Widrow-Hoff adaptive filter in predicting future values of stationary and non-stationary time series. Our results indicate the performance of each type of system is heavily dependent upon the form of the time series being predicted and the size of the system used. In particular, the linear filters perform adequately for linear or near linear processes while the nonlinear systems perform better for nonlinear processes. Since the linear systems take much less time to be developed, they should be tried prior to using the nonlinear systems when the linearity properties of the time series process are unknown.
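
    One of the linear techniques named above, the Widrow-Hoff (LMS) adaptive filter, is easy to sketch in Python as a one-step-ahead predictor; the test signal, filter order and step size below are illustrative only.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.arange(2000)
        x = np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=t.size)  # narrowband test signal

        order, mu = 8, 0.05            # filter length and LMS step size (illustrative)
        w = np.zeros(order)
        err = []
        for k in range(order, len(x)):
            window = x[k - order:k][::-1]     # most recent sample first
            pred = w @ window                 # linear one-step-ahead prediction
            e = x[k] - pred
            w += mu * e * window              # Widrow-Hoff (LMS) weight update
            err.append(e)

        print("mean squared prediction error, last 500 steps:",
              round(float(np.mean(np.square(err[-500:]))), 5))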

  9. LINEARLY POLARIZED PROBES OF SURFACE CHIRALITY

    NARCIS (Netherlands)

    VERBIEST, T; KAURANEN, M; MAKI, JJ; TEERENSTRA, MN; SCHOUTEN, AJ; NOLTE, RJM; PERSOONS, A

    1995-01-01

    We present a new nonlinear optical technique to study surface chirality. We demonstrate experimentally that the efficiency of second-harmonic generation from isotropic chiral surfaces is different for excitation with fundamental light that is +45 degrees and -45 degrees linearly polarized with

  10. Why did the League of Nations fail?

    OpenAIRE

    Jari Eloranta

    2011-01-01

    Why did the League of Nations ultimately fail to achieve widespread disarmament, its most fundamental goal? This article shows that the failure of the League of Nations had two important dimensions: (1) the failure to provide adequate security guarantees for its members (like an alliance); (2) the failure of this organization to achieve the disarmament goals it set out in the 1920s and 1930s. Thus, it was doomed from the outset to fail, due to built-in institutional contradictions. It can als...

  11. Final Focus Systems in Linear Colliders

    International Nuclear Information System (INIS)

    Raubenheimer, Tor

    1998-01-01

    In colliding beam facilities, the ''final focus system'' must demagnify the beams to attain the very small spot sizes required at the interaction points. The first final focus system with local chromatic correction was developed for the Stanford Linear Collider where very large demagnifications were desired. This same conceptual design has been adopted by all the future linear collider designs as well as the SuperConducting Supercollider, the Stanford and KEK B-Factories, and the proposed Muon Collider. In this paper, the over-all layout, physics constraints, and optimization techniques relevant to the design of final focus systems for high-energy electron-positron linear colliders are reviewed. Finally, advanced concepts to avoid some of the limitations of these systems are discussed

  12. Artificial neural networks environmental forecasting in comparison with multiple linear regression technique: From heavy metals to organic micropollutants screening in agricultural soils

    Science.gov (United States)

    Bonelli, Maria Grazia; Ferrini, Mauro; Manni, Andrea

    2016-12-01

    The assessment of metal and organic micropollutant contamination in agricultural soils is a difficult challenge due to the extensive area used to collect and analyze a very large number of samples. For dioxins and dioxin-like PCBs, regarding measurement methods and the subsequent treatment of data, the European Community advises developing low-cost and fast methods allowing routine analysis of a great number of samples, providing rapid measurement of these compounds in the environment, feeds and food. The aim of the present work has been to find a method suitable to describe the relations occurring between organic and inorganic contaminants and use the value of the latter in order to forecast the former. In practice, the use of a portable soil metal analyzer coupled with an efficient statistical procedure enables the required objective to be achieved. Compared to Multiple Linear Regression, the Artificial Neural Networks technique has been shown to be an excellent forecasting method, though there is no linear correlation between the variables to be analyzed.
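
    The methodological comparison can be sketched on synthetic data with scikit-learn; the nonlinear relation between the 'metal' predictors and the 'organic' target below is invented purely to illustrate multiple linear regression versus a small ANN, and is not the study's data.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        metals = rng.uniform(0.0, 100.0, size=(400, 4))                # four metal readings per sample
        organics = (0.02 * metals[:, 0] * np.sqrt(metals[:, 1])
                    + 0.3 * metals[:, 2] + rng.normal(0.0, 2.0, 400))  # invented nonlinear target

        X_tr, X_te, y_tr, y_te = train_test_split(metals, organics, random_state=0)

        mlr = LinearRegression().fit(X_tr, y_tr)
        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                                         random_state=0)).fit(X_tr, y_tr)

        print("MLR R^2 on held-out data:", round(mlr.score(X_te, y_te), 3))
        print("ANN R^2 on held-out data:", round(ann.score(X_te, y_te), 3))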

  13. Preface: Introductory Remarks: Linear Scaling Methods

    Science.gov (United States)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up

  14. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    Science.gov (United States)

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
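
    For illustration, a generic 'slope corridor' (swinging-door style) scheme with the same flavour, constant work and memory per sample and a hard per-segment error bound, is sketched below in Python; it is not the algorithm proposed in the paper.

        import numpy as np

        def online_pla(ts, ys, eps):
            # Streaming piecewise linear approximation with max error eps per segment,
            # O(1) work and O(1) memory per incoming sample.
            anchor_t, anchor_y = ts[0], ys[0]
            kept_t, kept_y = [anchor_t], [anchor_y]
            lo, hi = -np.inf, np.inf                 # admissible slopes from the current anchor
            for i in range(1, len(ts)):
                dt = ts[i] - anchor_t
                new_lo = max(lo, (ys[i] - eps - anchor_y) / dt)
                new_hi = min(hi, (ys[i] + eps - anchor_y) / dt)
                if new_lo > new_hi:                  # no single line fits any more: close segment
                    slope = 0.5 * (lo + hi)          # any slope in [lo, hi] is within eps of all points
                    anchor_t, anchor_y = ts[i - 1], anchor_y + slope * (ts[i - 1] - anchor_t)
                    kept_t.append(anchor_t); kept_y.append(anchor_y)
                    dt = ts[i] - anchor_t
                    lo = (ys[i] - eps - anchor_y) / dt
                    hi = (ys[i] + eps - anchor_y) / dt
                else:
                    lo, hi = new_lo, new_hi
            slope = 0.5 * (lo + hi)
            kept_t.append(ts[-1]); kept_y.append(anchor_y + slope * (ts[-1] - anchor_t))
            return np.array(kept_t), np.array(kept_y)

        rng = np.random.default_rng(4)
        t = np.linspace(0.0, 10.0, 2000)
        y = np.sin(t) + 0.01 * rng.normal(size=t.size)
        kt, ky = online_pla(t, y, eps=0.05)
        print(f"kept {len(kt)} of {len(t)} samples")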

  15. Linearly Polarized IR Spectroscopy Theory and Applications for Structural Analysis

    CERN Document Server

    Kolev, Tsonko

    2011-01-01

    A technique that is useful in the study of pharmaceutical products and biological molecules, polarization IR spectroscopy has undergone continuous development since it first emerged almost 100 years ago. Capturing the state of the science as it exists today, "Linearly Polarized IR Spectroscopy: Theory and Applications for Structural Analysis" demonstrates how the technique can be properly utilized to obtain important information about the structure and spectral properties of oriented compounds. The book starts with the theoretical basis of linear-dichroic infrared (IR-LD) spectroscopy.

  16. Failed total carpometacarpal joint prosthesis of the thumb

    DEFF Research Database (Denmark)

    Hansen, Torben Bæk; Homilius, Morten

    2010-01-01

    Total joint prosthesis in carpometacarpal joint arthritis of the thumb often fails. Loosening of the implant is often treated by resection arthroplasty, and we reviewed 10 patients, mean age 54 years (range 47-63), who were treated by resection arthroplasty after a failed total joint prosthesis. The male:female ratio was 1:4 and the mean duration of observation 32 months (range 6-52). In three patients the revised implant was a MOJE uncemented carpometacarpal joint prosthesis and in seven patients an Elektra uncemented one. At follow-up, grip strength was reduced to less than 90% of the other hand in eight of 10 patients, but the mean Disabilities of the Arm, Shoulder, and Hand (DASH) scores, self-reported pinch-grip-related function, and pain were comparable with our earlier published results with the Elektra carpometacarpal total joint prosthesis.

  17. Decentralised stabilising controllers for a class of large-scale linear ...

    Indian Academy of Sciences (India)

    subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order. Keywords. Decentralised stabilisation; large-scale linear systems; optimal feedback control; algebraic ...

  18. Stochastic equivalent linearization in 3-D hysteretic frames

    International Nuclear Information System (INIS)

    Casciati, F.; Faravelli, L.

    1987-01-01

    Stochastic equivalent linearization technique for hysteretic systems is extended to study the dynamic response of 3-D frames with hysteretic constitutive laws in the potential plastic hinges. The constitutive law is idealized by an appropriate endochronic model. A general purpose finite element code is adopted in order to generate the matrices by which the equations of motion to be linearized are built. (orig./HP)

  19. [Results of revision after failed surgical treatment for traumatic anterior shoulder instability].

    Science.gov (United States)

    Lópiz-Morales, Y; Alcobe-Bonilla, J; García-Fernández, C; Francés-Borrego, A; Otero-Fernández, R; Marco-Martínez, F

    2013-01-01

    Persistent or recurrent glenohumeral instability after a previous operative stabilization can be a complex problem. Our aim is to establish the incidence of recurrence and its revision surgery, and to analyse the functional results of the revision instability surgery, as well as to determine surgical protocols to perform it. A retrospective analysis was conducted on 16 patients with recurrent instability out of 164 patients operated on between 1999 and 2011. The mean follow-up was 57 months and the mean age was 29 years. To evaluate functional outcome we employed Constant, Rowe, UCLA scores and the visual analogue scale. Of the 12 patients who failed the initial arthroscopic surgery, 6 patients underwent an arthroscopic antero-inferior labrum repair technique, 4 using open labrum repair techniques, and 2 coracoid transfer. The two cases of open surgery with recurrences underwent surgery for coracoid transfer. Results of the Constant score were excellent or good in 64% of patients. Surgical revision of instability is a complex surgery essentially for two reasons: the difficulty in recognising the problem, and the technical demand (greater variety and the increasingly complex techniques). Copyright © 2012 SECOT. Published by Elsevier Espana. All rights reserved.

  20. Higher-order techniques for some problems of nonlinear control

    Directory of Open Access Journals (Sweden)

    Sarychev Andrey V.

    2002-01-01

    Full Text Available A natural first step when dealing with a nonlinear problem is an application of some version of the linearization principle. This includes the well known linearization principles for controllability, observability and stability and also first-order optimality conditions such as the Lagrange multiplier rule or Pontryagin's maximum principle. In many interesting and important problems of nonlinear control the linearization principle fails to provide a solution. In the present paper we provide some examples of how higher-order methods of differential geometric control theory can be used for the study of nonlinear control systems in such cases. The presentation includes: nonlinear systems with impulsive and distribution-like inputs; second-order optimality conditions for bang–bang extremals of optimal control problems; methods of high-order averaging for studying stability and stabilization of time-variant control systems.

  1. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    Science.gov (United States)

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network (ANN) models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN correlations: r = 0.89, RMSE: 1.07-1.08 METs; linear models correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN correlation: r = 0.88, RMSE: 1.12 METs; linear models correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and the ANN models had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs; linear models correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.05). For the wrist-worn accelerometers, ANN models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers.

  2. Thermal analysis of the failed equipment storage vault system

    International Nuclear Information System (INIS)

    Jerrell, J.; Lee, S.Y.; Shadday, A.

    1995-07-01

    A storage facility for failed glass melters is required for radioactive operation of the Defense Waste Processing Facility (DWPF). It is currently proposed that the failed melters be stored in the Failed Equipment Storage Vaults (FESV's) in S area. The FESV's are underground reinforced concrete structures constructed in pairs, with adjacent vaults sharing a common wall. A failed melter is to be placed in a steel Melter Storage Box (MSB), sealed, and lowered into the vault. A concrete lid is then placed over the top of the FESV. Two melters will be placed within the FESV/MSB system, separated by the common wall. There is no forced ventilation within the vault so that the melter is passively cooled. Temperature profiles in the Failed Equipment Storage Vault Structures have been generated using the FLOW3D software to model heat conduction and convection within the FESV/MSB system. Due to complexities in modeling radiation with FLOW3D, P/THERMAL software has been used to model radiation using the conduction/convection temperature results from FLOW3D. The final conjugate model includes heat transfer by conduction, convection, and radiation to predict steady-state temperatures. Also, the FLOW3D software has been validated as required by the technical task request

  3. Linear Optimization Techniques for Product-Mix of Paints Production in Nigeria

    Directory of Open Access Journals (Sweden)

    Sulaimon Olanrewaju Adebiyi

    2014-02-01

    Full Text Available Many paint producers in Nigeria do not employ a flexible production process, which is important for them to manage the use of resources for effective optimal production. These goals can be achieved through the application of optimization models in their resource allocation and utilisation. This research focuses on linear optimization for achieving product-mix optimization, in terms of identifying the products and the right quantities to produce, in paint production in Nigeria for better profit and optimum firm performance. The computational experiments in this research contain data and information on the unit item costs, unit contribution margins, maximum resource capacities, individual products' absorption rates and other constraints that are particular to each of the five products produced in the company employed as a case study. In the data analysis, a linear programming model was employed with the aid of LINDO 11 software. The results showed that only two out of the five products under consideration are profitable. They also revealed the extent to which the company needs to reduce the costs incurred on the other three products before they become profitable to produce.
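
    A product-mix linear program of this kind is easy to reproduce with open tools (the study itself used LINDO 11); the contribution margins, resource usage rates and capacities in the Python sketch below are hypothetical, not the company's data.

        import numpy as np
        from scipy.optimize import linprog

        margin = np.array([420.0, 550.0, 310.0, 600.0, 480.0])     # per-batch contribution of 5 paints
        usage = np.array([[2.0, 3.0, 1.5, 4.0, 2.5],               # resin (kg per batch)
                          [1.0, 1.2, 0.8, 1.5, 1.1],               # pigment (kg per batch)
                          [3.0, 4.0, 2.0, 5.0, 3.5]])              # mixer time (h per batch)
        capacity = np.array([500.0, 200.0, 700.0])                 # weekly resource limits

        # linprog minimises, so negate the margins to maximise total contribution
        res = linprog(-margin, A_ub=usage, b_ub=capacity, bounds=(0, None), method="highs")
        print("optimal batches per product:", res.x.round(1))
        print("maximum weekly contribution:", round(-res.fun, 1))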

  4. Evaluation and Management of Failed Shoulder Instability Surgery.

    Science.gov (United States)

    Cartucho, António; Moura, Nuno; Sarmento, Marco

    2017-01-01

    Failed shoulder instability surgery is mostly considered to be the recurrence of shoulder dislocation, but subluxation and a painful or unreliable shoulder are also reasons for patient dissatisfaction and should be included in the notion. The authors performed a review of the literature and online content on evaluation and management of failed shoulder instability surgery. When we look at the reasons for failure of shoulder instability surgery we point the finger at poor patient selection, technical error and an additional traumatic event. More than 80% of surgical failures, for shoulder instability, are associated with bone loss. Quantification of glenoid bone loss and investigation of an engaging Hill-Sachs lesion are determining factors. Adequate imaging studies are essential to assess labral and capsular lesions and to rule out associated pathology such as rotator cuff tears. CT-scan is the method of choice to diagnose and quantify bone loss. Arthroscopic soft tissue procedures are indicated in patients with minimal bone loss and no contact sports. Open soft tissue procedures should be performed in patients with small bone defects, with hyperlaxity, and in those practicing contact sports. Soft tissue techniques, such as postero-inferior capsular plication and remplissage, may be used in patients with less than 25% of glenoid bone loss and Hill-Sachs lesions. Bone block procedures should be used for larger glenoid bone defects in the presence of an engaging Hill-Sachs lesion or in the presence of poor soft tissue quality. A tricortical iliac crest graft may be used as a primary procedure or as a salvage procedure after failure of a Bristow or a Latarjet procedure. Less frequently, the surgeon has to address the Hill-Sachs lesion. When a 30% loss of humeral head circumference is present, a filling graft should be used. Reasons for failure are multifactorial. In order to address this entity, surgeons must correctly identify the causes and tailor the right solution.

  5. Is laparoscopic reoperation for failed antireflux surgery feasible?

    Science.gov (United States)

    Floch, N R; Hinder, R A; Klingler, P J; Branton, S A; Seelig, M H; Bammer, T; Filipi, C J

    1999-07-01

    Laparoscopic techniques can be used to treat patients whose antireflux surgery has failed. Case series. Two academic medical centers. Forty-six consecutive patients, of whom 21 were male and 25 were female (mean age, 55.6 years; range, 15-80 years). Previous antireflux procedures were laparoscopic (21 patients), laparotomy (21 patients), thoracotomy (3 patients), and thoracoscopy (1 patient). The cause of failure, operative and postoperative morbidity, and the level of follow-up satisfaction were determined for all patients. The causes of failure were hiatal herniation (31 patients [67%]), fundoplication breakdown (20 patients [43%]), fundoplication slippage (9 patients [20%]), tight fundoplication (5 patients [11%]), misdiagnosed achalasia (2 patients [4%]), and displaced Angelchik prosthesis (2 patients [4%]). Twenty-two patients (48%) had more than 1 cause. Laparoscopic reoperative procedures were Nissen fundoplication (n = 22), Toupet fundoplication (n = 13), paraesophageal hernia repair (n = 4), Dor procedure (n = 2), Angelchik prosthesis removal (n = 2), Heller myotomy (n = 2), and the takedown of a wrap (n = 1). In addition, 18 patients required crural repair and 13 required paraesophageal hernia repair. The mean +/- SEM duration of surgery was 3.5+/-1.1 hours. Operative complications were fundus tear (n = 8), significant bleeding (n = 4), bougie perforation (n = 1), small bowel enterotomy (n = 1), and tension pneumothorax (n = 1). The conversion rate (from laparoscopic to an open procedure) was 20% overall (9 patients) but 0% in the last 10 patients. Mortality was 0%. The mean +/- SEM hospital stay was 2.3+/-0.9 days for operations completed laparoscopically. Follow-up was possible in 35 patients (76%) at 17.2+/-11.8 months. The well-being score (1, best; 10, worst) was 8.6+/-2.1 before and 2.9+/-2.4 after surgery. The laparoscopic approach may be used successfully to treat patients with failed antireflux operations. Good results were achieved despite the technical difficulty of these reoperations.

  6. The role of dendritic non-linearities in single neuron computation

    Directory of Open Access Journals (Sweden)

    Boris Gutkin

    2014-05-01

    Full Text Available Experiments have demonstrated that summation of excitatory post-synaptic potentials (EPSPs) in dendrites is non-linear. The sum of multiple EPSPs can be larger than their arithmetic sum, a superlinear summation due to the opening of voltage-gated channels and similar to somatic spiking: the so-called dendritic spike. The sum of multiple EPSPs can also be smaller than their arithmetic sum, because the synaptic current necessarily saturates at some point. While these observations are well explained by biophysical models, the impact of dendritic spikes on computation remains a matter of debate. One reason is that dendritic spikes may fail to make the neuron spike; similarly, dendritic saturations are sometimes presented as a glitch which should be corrected by dendritic spikes. We will provide solid arguments against this claim and show that dendritic saturations as well as dendritic spikes enhance single neuron computation, even when they cannot directly make the neuron fire. To explore the computational impact of dendritic spikes and saturations, we are using a binary neuron model in conjunction with Boolean algebra. We demonstrate using these tools that a single dendritic non-linearity, either spiking or saturating, combined with somatic non-linearity, enables a neuron to compute linearly non-separable Boolean functions (lnBfs). These functions are impossible to compute when summation is linear and the exclusive OR is a famous example of lnBfs. Importantly, the implementation of these functions does not require the dendritic non-linearity to make the neuron spike. Next, we show that reduced and realistic biophysical models of the neuron are capable of computing lnBfs. Within these models and contrary to the binary model, the dendritic and somatic non-linearities are tightly coupled. Yet we show that these neuron models are capable of linearly non-separable computations.
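
    The central claim, that one saturating dendritic subunit plus a somatic threshold suffices for XOR (a canonical linearly non-separable Boolean function), can be checked with a toy Python script; the weights below are hand-picked for illustration and are not the authors' biophysically constrained construction.

        from itertools import product

        def linear_neuron(x1, x2, theta=0.5):
            return int(1.0 * x1 + 1.0 * x2 >= theta)          # single linear threshold unit (computes OR here)

        def dendrite_neuron(x1, x2, theta=0.5):
            dendrite = min(x1 + x2, 1.0)                      # saturating dendritic subunit
            soma = 2.0 * dendrite - (x1 + x2)                 # soma combines subunit and direct drive
            return int(soma >= theta)

        for x1, x2 in product([0, 1], repeat=2):
            print(f"x1={x1} x2={x2}  XOR={x1 ^ x2}  "
                  f"dendritic={dendrite_neuron(x1, x2)}  purely_linear={linear_neuron(x1, x2)}")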

  7. Analysis of failed nuclear plant components

    International Nuclear Information System (INIS)

    Diercks, D.R.

    1993-01-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  8. Analysis of failed nuclear plant components

    International Nuclear Information System (INIS)

    Diercks, D.R.

    1992-07-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (a) intergranular stress corrosion cracking of core spray injection piping in a boiling water reactor, (b) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (c) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (d) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  9. SU-E-T-226: Junction Free Craniospinal Irradiation in Linear Accelerator Using Volumetric Modulated Arc Therapy : A Novel Technique Using Dose Tapering

    International Nuclear Information System (INIS)

    Sarkar, B; Roy, S; Paul, S; Munshi, A; Roy, Shilpi; Jassal, K; Ganesh, T; Mohanti, BK

    2014-01-01

    Purpose: Spatially separated fields are required for craniospinal irradiation due to the field size limitation of a linear accelerator. Field junction shifts are conventionally used to avoid hot or cold spots. Our study aimed to demonstrate the feasibility of a junction-free craniospinal irradiation (CSI) plan for medulloblastoma cases treated on a linear accelerator using the volumetric modulated arc therapy (VMAT) technique. Methods: VMAT was planned using multiple isocenters in Monaco V 3.3.0 and delivered on an Elekta Synergy linear accelerator. A full arc brain field and 40° posterior arc spine fields were planned using two isocentres for short (<1.3 m height) and three isocentres for taller patients. Unrestricted jaw movement was used in the superior-inferior direction. The prescribed dose to the PTV was achieved by partial contributions from adjacent beams. A very low dose gradient was generated to taper the isodoses over a long length (>10 cm) at the conventional field junction. Results: In this preliminary study, five patients were planned and three patients were treated using this novel technique. As the dose contributions from the adjacent beams were varied (graded) to create the complete dose distribution, no specific junction exists in the plan. The junctions extended over 10–14 cm depending on the treatment plan. Dose gradients were 9.6±2.3% per cm for the brain field and 7.9±1.7% per cm for the spine field, respectively. Dose delivery errors due to positional inaccuracies of ±1 mm, ±2 mm, ±3 mm and ±5 mm were 1%–0.8%, 2%–1.6%, 2.8%–2.4% and 4.3%–4%, respectively, for the brain and spine fields. Conclusion: Dose tapering in junction-free CSI does not require a junction shift, so daily imaging of all fields is also not essential. Due to inverse planning, doses to organs at risk such as the thyroid, kidneys, heart and testes can be reduced significantly. VMAT gives a quicker delivery than step-and-shoot or dynamic IMRT.
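
    The dose-tapering idea can be illustrated numerically: two fields whose profiles ramp linearly in opposite directions across a long overlap sum to a flat dose, and a longitudinal setup error produces a deviation of roughly (gradient x shift), the same order as the errors quoted above. The Python sketch below uses idealized linear ramps and an illustrative 12 cm overlap, not the clinical plans.

        import numpy as np

        z = np.linspace(-10.0, 22.0, 3201)            # longitudinal axis (cm); overlap spans 0-12 cm
        ramp = np.clip(z / 12.0, 0.0, 1.0)
        upper_field = 1.0 - ramp                      # cranial field tapers down over the overlap
        lower_field = ramp                            # caudal field tapers up over the overlap
        print("dose gradient in the overlap: %.1f %%/cm" % (100.0 / 12.0))
        print("max deviation, perfect setup: %.2f %%"
              % (100.0 * np.max(np.abs(upper_field + lower_field - 1.0))))

        for shift_mm in (1, 2, 3, 5):                 # longitudinal setup error of the caudal field
            shifted = np.clip((z - shift_mm / 10.0) / 12.0, 0.0, 1.0)
            dev = 100.0 * np.max(np.abs(upper_field + shifted - 1.0))
            print("max deviation for %d mm shift: %.1f %%" % (shift_mm, dev))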

  10. SU-E-T-226: Junction Free Craniospinal Irradiation in Linear Accelerator Using Volumetric Modulated Arc Therapy : A Novel Technique Using Dose Tapering

    Energy Technology Data Exchange (ETDEWEB)

    Sarkar, B; Roy, S; Paul, S; Munshi, A; Roy, Shilpi; Jassal, K; Ganesh, T; Mohanti, BK [Fortis Memorial Research Institute, Gurgaon (India)

    2014-06-01

    Purpose: Spatially separated fields are required for craniospinal irradiation due to the field size limitation of a linear accelerator. Field junction shifts are conventionally used to avoid hot or cold spots. Our study aimed to demonstrate the feasibility of a junction-free craniospinal irradiation (CSI) plan for medulloblastoma cases treated on a linear accelerator using the volumetric modulated arc therapy (VMAT) technique. Methods: VMAT was planned using multiple isocenters in Monaco V 3.3.0 and delivered on an Elekta Synergy linear accelerator. A full arc brain field and 40° posterior arc spine fields were planned using two isocentres for short (<1.3 m height) and three isocentres for taller patients. Unrestricted jaw movement was used in the superior-inferior direction. The prescribed dose to the PTV was achieved by partial contributions from adjacent beams. A very low dose gradient was generated to taper the isodoses over a long length (>10 cm) at the conventional field junction. Results: In this preliminary study, five patients were planned and three patients were treated using this novel technique. As the dose contributions from the adjacent beams were varied (graded) to create the complete dose distribution, no specific junction exists in the plan. The junctions extended over 10–14 cm depending on the treatment plan. Dose gradients were 9.6±2.3% per cm for the brain field and 7.9±1.7% per cm for the spine field, respectively. Dose delivery errors due to positional inaccuracies of ±1 mm, ±2 mm, ±3 mm and ±5 mm were 1%–0.8%, 2%–1.6%, 2.8%–2.4% and 4.3%–4%, respectively, for the brain and spine fields. Conclusion: Dose tapering in junction-free CSI does not require a junction shift, so daily imaging of all fields is also not essential. Due to inverse planning, doses to organs at risk such as the thyroid, kidneys, heart and testes can be reduced significantly. VMAT gives a quicker delivery than step-and-shoot or dynamic IMRT.

  11. Post-processing through linear regression

    Directory of Open Access Journals (Sweden)

    B. Van Schaeybroeck

    2011-03-01

    Full Text Available Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast and multicollinearity. The regression schemes under consideration include the ordinary least-square (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-square method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified.

    These techniques are applied in the context of the Lorenz 63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors plays an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR), which yield the correct variability and the largest correlation between ensemble error and spread, should be preferred.

  12. A new approach of binary addition and subtraction by non-linear ...

    Indian Academy of Sciences (India)

    optical domain by exploitation of proper non-linear material-based switching technique. In this communication, the authors extend this technique for both adder and subtractor accommodating the spatial input encoding system.

  13. Linear signal noise summer accurately determines and controls S/N ratio

    Science.gov (United States)

    Sundry, J. L.

    1966-01-01

    Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.

  14. Closure of oroantral communication with buccal fat pad after removing bilateral failed zygomatic implants: A case report and 6-month follow-up.

    Science.gov (United States)

    Peñarrocha-Oltra, David; Alonso-González, Rocio; Pellicer-Chover, Hilario; Aloy-Prósper, Amparo; Peñarrocha-Diago, María

    2015-02-01

    The aim of this study was to assess the use of the buccal fat pad (BFP) technique as an option to close oroantral communications (OAC) after removing failed zygomatic implants in a patient with a severely resorbed maxilla, and to determine the degree of patient satisfaction. A 64-year-old woman presented with recurrent sinusitis and a permanent oroantral communication caused by bilateral failed zygomatic implants, 3 years after prosthetic loading. The zygomatic implants were removed following antibiotic treatment, and the BFP flap technique was used to treat the OAC and maxillary defect. The degree of patient satisfaction after treatment was assessed through a visual analogue scale (VAS). At 6-month follow-up, the patient showed complete healing and good function, and the results in terms of phonetics, aesthetics and chewing were highly rated by the patient. Key words: Bichat fat pad, buccal fat pad, zygomatic implants, oroantral communication.

  15. Development of failed fuel detection and location system in sodium-cooled large reactor. Sampling method of failed fuels under the slit

    International Nuclear Information System (INIS)

    Aizawa, Kousuke; Fujita, Kaoru; Kamide, Hideki; Kasahara, Naoto

    2010-01-01

    A conceptual design study of the Japan Sodium-cooled Fast Reactor (JSFR) is in progress as part of the 'Fast Reactor Cycle Technology Development (FaCT)' project in Japan. JSFR adopts a Selector-Valve mechanism for the failed fuel detection and location (FFDL) system. The Selector-Valve FFDL system identifies failed fuel subassemblies by sampling sodium from each fuel subassembly outlet and detecting fission products. One of the JSFR design features is employing an upper internal structure (UIS) with a radial slit, in which an arm of the fuel handling machine can move and access the fuel assemblies under the UIS. Thus, JSFR cannot place sampling nozzles right above the fuel subassemblies located under the slit. In this study, the sampling method for identifying under-slit failed fuel subassemblies has been demonstrated by water experiments. (author)

  16. Modified rendezvous intrahepatic bile duct cannulation technique to pass a PTBD catheter in ERCP.

    Science.gov (United States)

    Lee, Tae Hoon; Park, Sang-Heum; Lee, Sae Hwan; Lee, Chang-Kyun; Lee, Suck-Ho; Chung, Il-Kwun; Kim, Hong Soo; Kim, Sun-Joo

    2010-11-14

    The rendezvous procedure combines an endoscopic technique with percutaneous transhepatic biliary drainage (PTBD). When a selective common bile duct cannulation fails, PTBD allows successful drainage and retrograde access for subsequent rendezvous techniques. Traditionally, rendezvous procedures such as the PTBD-assisted over-the-wire cannulation method, or the parallel cannulation technique, may be available when a bile duct cannot be selectively cannulated. When selective intrahepatic bile duct (IHD) cannulation fails, this modified rendezvous technique may be a feasible alternative. We report the case of a modified rendezvous technique, in which the guidewire was retrogradely passed into the IHD through the C2 catheter after end-to-end contact between the tips of the sphincterotome and the C2 catheter at the ampulla's orifice, in a 39-year-old man who had been diagnosed with gallbladder carcinoma with a metastatic right IHD obstruction. Clinically this procedure may be a feasible and timesaving technique.

  17. Fast non-linear extraction of plasma equilibrium parameters using a neural network mapping

    International Nuclear Information System (INIS)

    Lister, J.B.; Schnurrenberger, H.

    1990-07-01

    The shaping of non-circular plasmas requires a non-linear mapping between the measured diagnostic signals and selected equilibrium parameters. The particular configuration of Neural Network known as the multi-layer perceptron provides a powerful and general technique for formulating an arbitrary continuous non-linear multi-dimensional mapping. This technique has been successfully applied to the extraction of equilibrium parameters from measurements of single-null diverted plasmas in the DIII-D tokamak; the results are compared with a purely linear mapping. The method is promising, and hardware implementation is straightforward. (author) 15 refs., 7 figs

  18. Failing States or Failing Models?: Accounting for the Incidence of State Collapse

    OpenAIRE

    Martin Doornbos

    2010-01-01

    In recent years the notion and phenomenon of 'failing' states - states deemed incapable of fulfilling the basic tasks of providing security for their populace - has been rapidly drawing attention. I will start off with a closer look at the incidence of fragile states and state failure, more specifically of state collapse. Connected with this, I will raise the question of differential degrees of propensity to failure and collapse among contemporary state systems, and point to apparent region...

  19. Studying the formation of non-linear bursts in fully turbulent channel flows

    Science.gov (United States)

    Encinar, Miguel P.; Jimenez, Javier

    2017-11-01

    Linear transient growth has been suggested as a possible explanation for the intermittent behaviour, or `bursting', in shear flows with a stable mean velocity profile. Analysing fully non-linear DNS databases yields a similar Orr+lift-up mechanism, but acting on spatially localised wave packets rather than on monochromatic infinite wavetrains. The Orr mechanism requires the presence of backwards-leaning wall-normal velocity perturbations as initial condition, but the linear theory fails to clarify how these perturbations are formed. We investigate the latter in a time-resolved wavelet-filtered turbulent channel database, which allows us to assign an amplitude and an inclination angle to a flow region of selected size. This yields regions that match the dynamics of linear Orr for short times. We find that a short streamwise velocity (u) perturbation (i.e. a streak meander) consistently appears before the burst, but disappears before the burst reaches its maximum amplitude. Lift-up then generates a longer streamwise velocity perturbation. The initial streamwise velocity is also found to be backwards-leaning, contrary to the averaged energy-containing scales, which are known to be tilted forward. Funded by the ERC COTURB project.

  20. Linearized gravity in terms of differential forms

    Science.gov (United States)

    Baykal, Ahmet; Dereli, Tekin

    2017-01-01

    A technique to linearize gravitational field equations is developed in which the perturbation metric coefficients are treated as second rank, symmetric, 1-form fields belonging to the Minkowski background spacetime by using the exterior algebra of differential forms.

  1. Modern linear control design a time-domain approach

    CERN Document Server

    Caravani, Paolo

    2013-01-01

    This book offers a compact introduction to modern linear control design.  The simplified overview presented of linear time-domain methodology paves the road for the study of more advanced non-linear techniques. Only rudimentary knowledge of linear systems theory is assumed - no use of Laplace transforms or frequency design tools is required. Emphasis is placed on assumptions and logical implications, rather than abstract completeness; on interpretation and physical meaning, rather than theoretical formalism; on results and solutions, rather than derivation or solvability.  The topics covered include transient performance and stabilization via state or output feedback; disturbance attenuation and robust control; regional eigenvalue assignment and constraints on input or output variables; asymptotic regulation and disturbance rejection. Lyapunov theory and Linear Matrix Inequalities (LMI) are discussed as key design methods. All methods are demonstrated with MATLAB to promote practical use and comprehension. ...

  2. Analog fault diagnosis by inverse problem technique

    KAUST Repository

    Ahmed, Rania F.

    2011-12-01

    A novel algorithm for detecting soft faults in linear analog circuits based on the inverse problem concept is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and so to detect and diagnose a single fault in analog circuits. The validation of the algorithm is illustrated by applying it to a Sallen-Key second-order band-pass filter; the results show a detection efficiency of 100% and a maximum parameter-estimation error of 0.7%. This technique can be applied to any other linear circuit and can also be extended to non-linear circuits. © 2011 IEEE.

  3. Local hyperspectral data multisharpening based on linear/linear-quadratic nonnegative matrix factorization by integrating lidar data

    Science.gov (United States)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2015-10-01

    In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data in this zone. This variance is compared to a threshold value and the adequate linear/linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The obtained spectral and spatial information thus respectively extracted from the hyper/multispectral images are then recombined in the considered zone, according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and literature linear/linear-quadratic approaches used on the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the used literature methods.

  4. Recent advances in FIB-TEM specimen preparation techniques

    International Nuclear Information System (INIS)

    Li Jian; Malis, T.; Dionne, S.

    2006-01-01

    Preparing high-quality transmission electron microscopy (TEM) specimens is of paramount importance in TEM studies. The development of the focused ion beam (FIB) microscope has greatly enhanced TEM specimen preparation capabilities. In recent years, various FIB-TEM foil preparation techniques have been developed. However, the currently available techniques fail to produce TEM specimens from fragile and ultra-fine specimens such as fine fibers. In this paper, the conventional FIB-TEM specimen preparation techniques are reviewed, and their advantages and shortcomings are compared. In addition, a new technique suitable to prepare TEM samples from ultra-fine specimens is demonstrated

  5. Free-piston engine linear generator for hybrid vehicles modeling study

    Science.gov (United States)

    Callahan, T. J.; Ingram, S. K.

    1995-05-01

    Development of a free piston engine linear generator was investigated for use as an auxiliary power unit for a hybrid electric vehicle. The main focus of the program was to develop an efficient linear generator concept to convert the piston motion directly into electrical power. Computer modeling techniques were used to evaluate five different designs for linear generators. These designs included permanent magnet generators, reluctance generators, linear DC generators, and two and three-coil induction generators. The efficiency of the linear generator was highly dependent on the design concept. The two-coil induction generator was determined to be the best design, with an efficiency of approximately 90 percent.

  6. Approximating the Pareto set of multiobjective linear programs via robust optimization

    NARCIS (Netherlands)

    Gorissen, B.L.; den Hertog, D.

    2012-01-01

    We consider problems with multiple linear objectives and linear constraints and use adjustable robust optimization and polynomial optimization as tools to approximate the Pareto set with polynomials of arbitrarily large degree. The main difference with existing techniques is that we optimize a

  7. Dropout Prediction in E-Learning Courses through the Combination of Machine Learning Techniques

    Science.gov (United States)

    Lykourentzou, Ioanna; Giannoukos, Ioannis; Nikolopoulos, Vassilis; Mpardis, George; Loumos, Vassili

    2009-01-01

    In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to…
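    As a rough illustration of combining heterogeneous classifiers for dropout prediction, the sketch below uses scikit-learn models with simple majority voting on synthetic data; the decision tree merely stands in for the fuzzy ARTMAP component, and nothing here reproduces the authors' pipeline or student features:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic "student activity" features standing in for the detailed course data.
X, y = make_classification(n_samples=1000, n_features=12, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Majority vote over three heterogeneous classifiers.
ensemble = VotingClassifier([
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
    ("svm", SVC(kernel="rbf", random_state=0)),
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),   # stand-in third model
], voting="hard")
ensemble.fit(X_tr, y_tr)
print("dropout-prediction accuracy (hold-out):", ensemble.score(X_te, y_te))
```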

  8. Biostatistics Series Module 6: Correlation and Linear Regression.

    Science.gov (United States)

    Hazra, Avijit; Gogtay, Nithya

    2016-01-01

    Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient (r). If the normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ), may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns a P value; a confidence interval of the correlation coefficient can also be calculated for an idea of the correlation in the population. The value r² denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation (y = a + bx), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of a linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous.
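    A minimal sketch of the quantities described above (Pearson's r, Spearman's rho, the least-squares line y = a + bx and r²), using SciPy on made-up paired data:

```python
import numpy as np
from scipy import stats

# Paired observations (illustrative values, not from the article).
x = np.array([1.2, 2.0, 2.8, 3.5, 4.1, 5.0, 6.2, 7.1])
y = np.array([2.3, 2.9, 3.8, 4.1, 5.0, 5.4, 6.9, 7.4])

# Pearson's r (assumes roughly normal variables) and Spearman's rho (rank-based).
r, p_pearson = stats.pearsonr(x, y)
rho, p_spearman = stats.spearmanr(x, y)

# Least-squares regression line y = a + b*x and the coefficient of determination r^2.
res = stats.linregress(x, y)
print(f"Pearson r = {r:.3f} (P = {p_pearson:.4f}), Spearman rho = {rho:.3f}")
print(f"y = {res.intercept:.3f} + {res.slope:.3f} * x, r^2 = {res.rvalue**2:.3f}")
```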

  9. Analysis of failed nuclear plant components

    Science.gov (United States)

    Diercks, D. R.

    1993-12-01

    Argonne National Laboratory has conducted analyses of failed components from nuclear power-generating stations since 1974. The considerations involved in working with and analyzing radioactive components are reviewed here, and the decontamination of these components is discussed. Analyses of four failed components from nuclear plants are then described to illustrate the kinds of failures seen in service. The failures discussed are (1) intergranular stress-corrosion cracking of core spray injection piping in a boiling water reactor, (2) failure of canopy seal welds in adapter tube assemblies in the control rod drive head of a pressurized water reactor, (3) thermal fatigue of a recirculation pump shaft in a boiling water reactor, and (4) failure of pump seal wear rings by nickel leaching in a boiling water reactor.

  10. Non-linear Imaging using an Experimental Synthetic Aperture Real Time Ultrasound Scanner

    DEFF Research Database (Denmark)

    Rasmussen, Joachim; Du, Yigang; Jensen, Jørgen Arendt

    2011-01-01

    This paper presents the first non-linear B-mode image of a wire phantom using pulse inversion attained via an experimental synthetic aperture real-time ultrasound scanner (SARUS). The purpose of this study is to implement and validate non-linear imaging on SARUS for the further development of new non-linear techniques. This study presents non-linear and linear B-mode images attained via SARUS and an existing ultrasound system as well as a Field II simulation. The non-linear image shows an improved spatial resolution and lower full width half max and -20 dB resolution values compared to linear...

  11. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing; Guibas, Leonidas J.; Mitra, Niloy J.

    2014-01-01

    as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both

  12. A review on creatinine measurement techniques.

    Science.gov (United States)

    Mohabbati-Kalejahi, Elham; Azimirad, Vahid; Bahrami, Manouchehr; Ganbari, Ahmad

    2012-08-15

    This paper reviews recent global trends in creatinine measurement. Creatinine biosensors involve complex relationships between biology and micro-mechatronics to which the blood is subjected. Comparison between new and old methods shows that new techniques (e.g. Molecular Imprinted Polymer-based algorithms) are better than old methods (e.g. Elisa) in terms of stability and linear range. All methods and their details for serum, plasma, urine and blood samples are surveyed. They are categorized into five main algorithms: optical, electrochemical, impedometric, Ion-Selective Field-Effect Transistor (ISFET)-based and chromatographic techniques. Response time, detection limit, linear range and selectivity of reported sensors are discussed. The potentiometric technique has the lowest response time (4-10 s), while the lowest detection limit (0.28 nmol L(-1)) belongs to the chromatographic technique. Comparison between the various measurement techniques indicates that the best selectivity belongs to the MIP-based and chromatographic techniques. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Experience with failed or damaged spent fuel and its impacts on handling

    International Nuclear Information System (INIS)

    Bailey, W.J.

    1989-12-01

    Spent fuel management planning needs to include consideration of failed or damaged spent light-water reactor (LWR) fuel. Described in this paper, which was prepared under the Commercial Spent Fuel Management (CSFM) Program that is sponsored by the US Department of Energy (DOE), are the following: the importance of fuel integrity and the behavior of failed fuel, the quantity and burnup of failed or damaged fuel in storage, types of defects, difficulties in evaluating data on failed or damaged fuel, experience with wet storage, experience with dry storage, handling of failed or damaged fuel, transporting of fuel, experience with higher burnup fuel, and conclusions. 15 refs

  14. I Failed the edTPA

    Science.gov (United States)

    Kuranishi, Adam; Oyler, Celia

    2017-01-01

    In this article, co-written by a teacher and a professor, the authors examine possible explanations for why Adam (first author), a New York City public school special educator, failed the edTPA, a teacher performance assessment required by all candidates for state certification. Adam completed a yearlong teaching residency where he was the special…

  15. Risk factors for failed conversion of labor epidural analgesia to cesarean delivery anesthesia: a systematic review and meta-analysis of observational trials.

    Science.gov (United States)

    Bauer, M E; Kountanis, J A; Tsen, L C; Greenfield, M L; Mhyre, J M

    2012-10-01

    This systematic review and meta-analysis evaluates evidence for seven risk factors associated with failed conversion of labor epidural analgesia to cesarean delivery anesthesia. Online scientific literature databases were searched using a strategy which identified observational trials, published between January 1979 and May 2011, which evaluated risk factors for failed conversion of epidural analgesia to anesthesia or documented a failure rate resulting in general anesthesia. 1450 trials were screened, and 13 trials were included for review (n=8628). Three factors increase the risk for failed conversion: an increasing number of clinician-administered boluses during labor (OR=3.2, 95% CI 1.8-5.5), greater urgency for cesarean delivery (OR=40.4, 95% CI 8.8-186), and a non-obstetric anesthesiologist providing care (OR=4.6, 95% CI 1.8-11.5). Insufficient evidence is available to support combined spinal-epidural versus standard epidural techniques, duration of epidural analgesia, cervical dilation at the time of epidural placement, and body mass index or weight as risk factors for failed epidural conversion. The risk of failed conversion of labor epidural analgesia to anesthesia is increased with an increasing number of boluses administered during labor, an enhanced urgency for cesarean delivery, and care being provided by a non-obstetric anesthesiologist. Further high-quality studies are needed to evaluate the many potential risk factors associated with failed conversion of labor epidural analgesia to anesthesia for cesarean delivery. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Method of detecting failed fuels

    International Nuclear Information System (INIS)

    Ishizaki, Hideaki; Suzumura, Takeshi.

    1982-01-01

    Purpose: To enable the settlement of the temperature of an adequate filling high temperature pure water by detecting the outlet temperature of a high temperature pure water filling tube to a fuel assembly to control the heating of the pure water and detecting the failed fuel due to the sampling of the pure water. Method: A temperature sensor is provided at a water tube connected to a sipping cap for filling high temperature pure water to detect the temperature of the high temperature pure water at the outlet of the tube, and the temperature is confirmed by a temperature indicator. A heater is controlled on the basis of this confirmation, an adequate high temperature pure water is filled in the fuel assembly, and the pure water is replaced with coolant. Then, it is sampled to settle the adequate temperature of the high temperature coolant used for detecting the failure of the fuel assembly. As a result, the sipping effect does not decrease, and the failed fuel can be precisely detected. (Yoshihara, H.)

  17. Pass-fail grading: laying the foundation for self-regulated learning.

    Science.gov (United States)

    White, Casey B; Fantone, Joseph C

    2010-10-01

    Traditionally, medical schools have tended to make assumptions that students will "automatically" engage in self-education effectively after graduation and subsequent training in residency and fellowships. In reality, the majority of medical graduates out in practice feel unprepared for learning on their own. Many medical schools are now adopting strategies and pedagogies to help students become self-regulating learners. Along with these changes in practices and pedagogy, many schools are eliminating a cornerstone of extrinsic motivation: discriminating grades. To study the effects of the switch from discriminating to pass-fail grading in the second year of medical school, we compared internal and external assessments and evaluations for a second-year class with a discriminating grading scale (Honors, High Pass, Pass, Fail) and for a second-year class with a pass-fail grading scale. Of the measures we compared (MCATs, GPAs, means on second-year examinations, USMLE Step 1 scores, and residency placement, in which there were no statistically significant changes), the only statistically significant decreases (lower performance with pass-fail) were found in two of the second-year courses. Performance in one other course also improved significantly. Pass-fail grading can meet several important intended outcomes, including "leveling the playing field" for incoming students with different academic backgrounds, reducing competition and fostering collaboration among members of a class, and allowing more time for extracurricular interests and personal activities. Pass-fail grading also fosters intrinsic motivation, which is key to self-regulated, lifelong learning.

  18. Salvage of a failed open gastrocutaneous fistula repair with an endoscopic over-the-scope clip

    Directory of Open Access Journals (Sweden)

    Joshua Jaramillo

    2016-05-01

    Full Text Available Once enteral access via gastrostomy tube (G-tube is no longer indicated, the tube is typically removed in clinic with a high probability of spontaneous closure. When spontaneous closure is not achieved, the formation of a gastrocutaneous fistula (GCF is possible. The incidence of GCF is directly related with the length of time the tube has been placed. When conservative management fails, surgical intervention is the standard treatment. Endoscopic techniques have been described for primary closure of GCF in adults including banding and cauterizing of the fistula tract with placement of a standard endoscopic clip. Over-the-scope clips (OTSC have recently been reported in primary GCF closure in children (Wright et al., 2015. In patients with skin irritation surrounding a GCF making surgical repair difficult, endoscopic OTSC closure provides particular benefit. It is our belief that this is the first case report of endoscopically salvaging a leak from a failed open GCF repair.

  19. Plasma Brightenings in a Failed Solar Filament Eruption

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.; Ding, M. D., E-mail: yingli@nju.edu.cn [School of Astronomy and Space Science, Nanjing University, Nanjing 210023 (China)

    2017-03-20

    Failed filament eruptions are solar eruptions that are not associated with coronal mass ejections. In a failed filament eruption, the filament materials usually show some ascending and falling motions as well as generating bright EUV emissions. Here we report a failed filament eruption (SOL2016-07-22) that occurred in a quiet-Sun region observed by the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory . In this event, the filament spreads out but gets confined by the surrounding magnetic field. When interacting with the ambient magnetic field, the filament material brightens up and flows along the magnetic field lines through the corona to the chromosphere. We find that some materials slide down along the lifting magnetic structure containing the filament and impact the chromosphere, and through kinetic energy dissipation, cause two ribbon-like brightenings in a wide temperature range. There is evidence suggesting that magnetic reconnection occurs between the filament magnetic structure and the surrounding magnetic fields where filament plasma is heated to coronal temperatures. In addition, thread-like brightenings show up on top of the erupting magnetic fields at low temperatures, which might be produced by an energy imbalance from a fast drop of radiative cooling due to plasma rarefaction. Thus, this single event of a failed filament eruption shows the existence of a variety of plasma brightenings that may be caused by completely different heating mechanisms.

  20. Fast non-linear extraction of plasma equilibrium parameters using a neural network mapping

    International Nuclear Information System (INIS)

    Lister, J.B.; Schnurrenberger, H.

    1991-01-01

    The shaping of non-circular plasmas requires a non-linear mapping between the measured diagnostic signals and selected equilibrium parameters. The particular configuration of neural network known as the multilayer perceptron provides a powerful and general technique for formulating an arbitrary continuous non-linear multi-dimensional mapping. This technique has been successfully applied to the extraction of equilibrium parameters from measurements of single-null diverted plasmas in the DIII-D tokamak; the results are compared with a purely linear mapping. The method is promising, and hardware implementation is straightforward. (author). 17 refs, 8 figs, 2 tab

  1. Revision surgery for failed thermal capsulorrhaphy.

    Science.gov (United States)

    Park, Hyung Bin; Yokota, Atsushi; Gill, Harpreet S; El Rassi, George; McFarland, Edward G

    2005-09-01

    With the failure of thermal capsulorrhaphy for shoulder instability, there have been concerns with capsular thinning and capsular necrosis affecting revision surgery. To report the findings at revision surgery for failed thermal capsulorrhaphy and to evaluate the technical effects on subsequent revision capsular plication. Case series; Level of evidence, 4. Fourteen patients underwent arthroscopic evaluation and open reconstruction for a failed thermal capsulorrhaphy. The cause of the failure, the quality of the capsule, and the ability to suture the capsule were recorded. The patients were evaluated at follow-up for failure, which was defined as recurrent subluxations or dislocations. The origin of the instability was traumatic (n = 6) or atraumatic (n = 8). At revision surgery in the traumatic group, 4 patients sustained failure of the Bankart repair with capsular laxity, and the others experienced capsular laxity alone. In the atraumatic group, all patients experienced capsular laxity as the cause of failure. Of the 14 patients, the capsule quality was judged to be thin in 5 patients and ablated in 1 patient. A glenoid-based capsular shift could be accomplished in all 14 patients. At follow-up (mean, 35.4 months; range, 22 to 48 months), 1 patient underwent revision surgery and 1 patient had a subluxation, resulting in a failure rate of 14%. Recurrent capsular laxity after failed thermal capsular shrinkage is common and frequently associated with capsular thinning. In most instances, the capsule quality does not appear to technically affect the revision procedure.

  2. National intelligence estimates and the Failed State Index.

    Science.gov (United States)

    Voracek, Martin

    2013-10-01

    Across 177 countries around the world, the Failed State Index, a measure of state vulnerability, was reliably negatively associated with the estimates of national intelligence. Psychometric analysis of the Failed State Index, compounded of 12 social, economic, and political indicators, suggested factorial unidimensionality of this index. The observed correspondence of higher national intelligence figures to lower state vulnerability might arise through these two macro-level variables possibly being proxies of even more pervasive historical and societal background variables that affect both.

  3. Failed State and the Mandate of Peacekeeping Operations

    OpenAIRE

    Eka Nizmi, Yusnarida

    2011-01-01

    By 1990, Somalia had become a good example of what was becoming known as a “failed state”: a people without a government strong enough to govern the country or represent it in international organizations; a country whose poverty, disorganization, refugee flows, political instability, and random warfare had the potential to spread across borders and threaten the stability of other states and the peace of the region.[1] At the end of the cold war there were several such failed states in Africa,...

  4. Non-linear realizations of supersymmetry with off-shell central charges

    International Nuclear Information System (INIS)

    Santos Filho, P.B.; Oliveira Rivelles, V. de.

    1985-01-01

    A new class of non-linear realizations of the extended supersymmetry algebra with central charges is presented. They were obtained by applying the technique of dimensional reduction by Legendre transformation to a non-linear realization without central charges in one higher dimension. As a result, an off-shell central charge is obtained. The non-linear Lagrangian is the same as in the case of vanishing central charge. On-shell the central charge vanishes, so this non-linear realization differs from that without central charges only off-shell. The construction is worked out in two dimensions and its extension to higher dimensions is discussed. (Author) [pt

  5. Cosmological perturbations beyond linear order

    CERN Multimedia

    CERN. Geneva

    2013-01-01

    Cosmological perturbation theory is the standard tool to understand the formation of the large-scale structure in the Universe. However, its degree of applicability is limited by the growth of the amplitude of the matter perturbations with time. This problem can be tackled by using N-body simulations or analytical techniques that go beyond the linear calculation. In my talk, I'll summarise some recent efforts of the latter kind that ameliorate the poor convergence of the standard perturbative expansion. The new techniques allow better analytical control on observables (such as the matter power spectrum) over scales very relevant to understanding the expansion history and formation of structure in the Universe.

  6. Linearization and efficiency enhancement of power amplifiers using digital predistortion

    Energy Technology Data Exchange (ETDEWEB)

    Safari, Nima

    2008-07-01

    Today, the demand for higher spectral efficiency forces wireless communication systems to employ non-constant envelope modulation schemes such as Quadrature Amplitude Modulation (QAM), Code Division Multiple Access (CDMA) and Orthogonal Frequency-Division Multiplexing (OFDM). These modulation techniques generate signals with a wide range of envelope fluctuation. This property makes these schemes sensitive to nonlinear amplification. Nonlinearities introduced by Power Amplifiers (PA) cause both a distortion of the signal and an increased out-of-band output spectrum, which leads to a rise in adjacent channel interference. Thus, in order to ensure high spectral efficiency and to avoid spectral regrowth, a linearization technique is required. Among all linearization techniques, baseband Digital Predistortion (DPD) is one of the most commonly used, characterized by robust operation, low implementation cost and high accuracy. In the first chapter of this thesis, an introduction on the motivation and necessity of using PA linearization techniques is presented. Digital predistortion as a popular linearization technique aims to improve the efficiency and linearity of RF power amplifiers. The scope of the thesis, the goals to be achieved and the contributions are also discussed in chapter one. Chapter two mainly discusses sample-by-sample updating algorithms in digital predistorters to adaptively linearize memoryless PA nonlinearities. Look-up Table (LUT) and polynomial approaches are studied and implemented in hardware using a test-bed provided by Nera Research. The experimental results together with a discussion are then given. A new DPD algorithm based on block estimation is proposed in chapter three to avoid real-time signal processing, reduce complexity, and avoid the poor performance during the slow adaptation of adaptive algorithms with respect to the Adjacent Channel Power Ratio (ACPR) and Error Vector Magnitude (EVM) requirements.
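    To illustrate the look-up-table flavour of memoryless predistortion discussed in chapter two, the sketch below inverts a toy saturating AM/AM characteristic with an amplitude-indexed gain LUT; the PA model, LUT size and test signal are all my own assumptions and are unrelated to the Nera test-bed or the thesis implementation:

```python
import numpy as np

def pa(x, sat=1.0):
    """Toy memoryless PA with a saturating AM/AM characteristic (Rapp-like, no AM/PM)."""
    a = np.abs(x)
    gain = 1.0 / (1.0 + (a / sat) ** 4) ** 0.25
    return gain * x

# Build an amplitude-indexed LUT by numerically inverting the PA characteristic.
n_bins = 256
desired_out = np.linspace(0, 0.8, n_bins)          # stay below hard saturation
grid = np.linspace(0, 3.0, 10000)
pa_amp = np.abs(pa(grid))
required_in = np.array([grid[np.argmin(np.abs(pa_amp - d))] for d in desired_out])

def predistort(x):
    """Index the LUT by desired output amplitude and scale to the required PA input."""
    a = np.clip(np.abs(x), 0, desired_out[-1])
    idx = np.round(a / desired_out[-1] * (n_bins - 1)).astype(int)
    scale = np.where(a > 1e-12, required_in[idx] / np.maximum(a, 1e-12), 1.0)
    return scale * x

# 16-QAM-like complex baseband test signal.
rng = np.random.default_rng(1)
sym = (rng.integers(0, 4, 2000) * 2 - 3 + 1j * (rng.integers(0, 4, 2000) * 2 - 3)) / 6
lin_err = np.mean(np.abs(pa(sym) - sym) ** 2)
dpd_err = np.mean(np.abs(pa(predistort(sym)) - sym) ** 2)
print(f"EVM-like error without DPD: {lin_err:.2e}, with DPD: {dpd_err:.2e}")
```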

  7. A method of failed fuel detection

    International Nuclear Information System (INIS)

    Uchida, Shunsuke; Utamura, Motoaki; Urata, Megumu.

    1976-01-01

    Object: To keep the coolant fed to a fuel assembly at a level below the temperature of existing coolant to detect a failed fuel with high accuracy without using a heater. Structure: When a coolant in a coolant pool disposed at the upper part of a reactor container is fed by a coolant feed system into a fuel assembly through a cap to fill therewith and exchange while forming a boundary layer between said coolant and the existing coolant, the temperature distribution of the feed coolant is heated by fuel rods so that the upper part is low whereas the lower part is high. Then, the lower coolant is upwardly moved by the agitating action and fission products leaked through a failed opening at the lower part of the fuel assembly and easily extracted by the sampling system. (Yoshino, Y.)

  8. Hydraulic experiments on the failed fuel location module of prototype fast breeder reactor

    International Nuclear Information System (INIS)

    Rajesh, K.; Kumar, S.; Padmakumar, G.; Prakash, V.; Vijayashree, R.; Rajan Babu, V.; Govinda Rajan, S.; Vaidyanathan, G.; Prabhaker, R.

    2003-01-01

    The design of Prototype Fast Breeder Reactor (PFBR) is based on sound design concepts with emphasis on intrinsic safety. The uncertainties involved in the design of various components, which are difficult to assess theoretically, are experimentally verified before design is validated. In PFBR core, the coolant (liquid sodium) enters the bottom of the fuel subassembly, passes over the fuel pins picking up the fission heat and issues in to a hot pool. If there is any breach in the fuel pins, the fission products come in direct contact with the coolant. This is undesirable and it is necessary to locate the subassembly with the failed fuel pin and to isolate it. A component called Failed Fuel Location Module (FFLM) is employed for locating the failed SA by monitoring the coolant samples coming out of each Subassembly. The coolant sample from each Subassembly is drawn by FFLM using an EM pump through sampling tube and selector valve and is monitored for the presence of delayed neutrons which is an indication of failure of the Subassembly. The pressure drop across the selector valve determines the rating of the EM Pump. The dilution of the coolant sample across the selector valve determines the effectiveness of monitoring for contamination. It is not possible to predict pressure drop across the selector valve and dilution of the coolant sample theoretically. These two parameters are determined using a hydraulic experiment on the FFLM. The experiment was carried out in conditions that simulate the reactor conditions following appropriate similarity laws. The paper discusses the details of the model, techniques of experiments and the results from the studies

  9. Robust control technique for nuclear power plants

    International Nuclear Information System (INIS)

    Murphy, G.V.; Bailey, J.M.

    1989-03-01

    This report summarizes the linear quadratic Gaussian (LQG) design technique with loop transfer recovery (LQG/LTR) for the design of control systems. The concepts of return ratio, return difference, inverse return difference, and singular values are summarized. The LQG/LTR design technique allows the synthesis of a robust control system. To illustrate the LQG/LTR technique, a linearized model of a simple process has been chosen. The process has three state variables, one input, and one output. Three control system design methods are compared: LQG, LQG/LTR, and a proportional plus integral controller (PI). 7 refs., 20 figs., 6 tabs
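    A minimal sketch of the LQR and Kalman-filter halves of an LQG design for a 3-state, single-input, single-output plant, using SciPy's continuous-time Riccati solver; the plant matrices and weights are illustrative and are not the model used in the report:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative 3-state SISO plant (not the process model from the report).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])

# LQR state-feedback gain: minimizes the integral of x'Qx + u'Ru.
Q, R = np.eye(3), np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)            # control law u = -K x

# Stationary Kalman filter gain for process noise W and measurement noise V.
# (For LTR, W is typically inflated toward q*B@B.T and the filter re-solved.)
W, V = np.eye(3), np.array([[0.1]])
S = solve_continuous_are(A.T, C.T, W, V)
L = S @ C.T @ np.linalg.inv(V)             # observer gain

# Separation principle: regulator poles eig(A - B K), estimator poles eig(A - L C).
print("regulator poles:", np.linalg.eigvals(A - B @ K))
print("estimator poles:", np.linalg.eigvals(A - L @ C))
```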

  10. CLIC e+e- Linear Collider Studies

    CERN Document Server

    Dannheim, Dominik; Linssen, Lucie; Schulte, Daniel; Simon, Frank; Stapnes, Steinar; Toge, Nobukazu; Weerts, Harry; Wells, James

    2012-01-01

    This document provides input from the CLIC e+e- linear collider studies to the update process of the European Strategy for Particle Physics. It is submitted on behalf of the CLIC/CTF3 collaboration and the CLIC physics and detector study. It describes the exploration of fundamental questions in particle physics at the energy frontier with a future TeV-scale e+e- linear collider based on the Compact Linear Collider (CLIC) two-beam acceleration technique. A high-luminosity high-energy e+e- collider allows for the exploration of Standard Model physics, such as precise measurements of the Higgs, top and gauge sectors, as well as for a multitude of searches for New Physics, either through direct discovery or indirectly, via high-precision observables. Given the current state of knowledge, following the observation of a ~125 GeV Higgs-like particle at the LHC, and pending further LHC results at 8 TeV and 14 TeV, a linear e+e- collider built and operated in centre-of-mass energy stages from a few-hundred GeV up t...

  11. Numerical computation of linear instability of detonations

    Science.gov (United States)

    Kabanov, Dmitry; Kasimov, Aslan

    2017-11-01

    We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.
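    A minimal sketch of the Dynamic Mode Decomposition step used to turn time-resolved snapshots into growth rates and frequencies; the snapshot data here are synthetic and the rank truncation is an arbitrary choice, not the detonation simulations of the abstract:

```python
import numpy as np

def dmd(X, Y, rank):
    """Standard DMD: fit a linear operator with Y ≈ A X and return its leading eigenvalues/modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Synthetic snapshots: one growing and one decaying oscillatory component plus noise.
dt, n_t, n_x = 0.05, 200, 64
t = np.arange(n_t) * dt
x = np.linspace(0, 1, n_x)[:, None]
data = (np.sin(2 * np.pi * 3 * x) * np.exp((0.3 + 2j * np.pi * 1.5) * t).real
        + np.cos(2 * np.pi * 5 * x) * np.exp((-0.2 + 2j * np.pi * 4.0) * t).real
        + 0.01 * np.random.default_rng(0).normal(size=(n_x, n_t)))

eigvals, _ = dmd(data[:, :-1], data[:, 1:], rank=6)
# Continuous-time eigenvalues: real part = growth rate, imaginary part / 2*pi = frequency.
omega = np.log(eigvals) / dt
for w in sorted(omega, key=lambda z: -z.real)[:4]:
    print(f"growth rate = {w.real:+.3f}, frequency = {w.imag / (2 * np.pi):+.3f}")
```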

  12. Mound's decommissioning experience, tooling, and techniques

    International Nuclear Information System (INIS)

    Combs, A.B.; Davis, W.P.; Elswick, T.C.; Garner, J.M.; Geichman, J.R.

    1982-01-01

    Monsanto Research Corporation (MRC), which operates Mound for the Department of Energy (DOE), has been decommissioning radioactively contaminated facilities since 1949. We are currently decommissioning three plutonium-238 contaminated facilities (approximately 50,000 ft²) that contained 1100 linear ft of gloveboxes; 900 linear ft of conveyor housing; 2650 linear ft of dual underground liquid waste lines; and associated contaminated piping, services, equipment, structures, and soil. As of June 1982, over 29,000 Ci of plutonium-238 have been removed in waste and scrap residues. As a result of the current and previous decommissioning projects, valuable experience has been gained in tooling and techniques. Special techniques have been developed in planning, exposure control, contamination control, equipment removal, structural decontamination, and waste packaging

  13. ESPRIT And Uniform Linear Arrays

    Science.gov (United States)

    Roy, R. H.; Goldburg, M.; Ottersten, B. E.; Swindlehurst, A. L.; Viberg, M.; Kailath, T.

    1989-11-01

    ESPRIT is a recently developed and patented technique for high-resolution estimation of signal parameters. It exploits an invariance structure designed into the sensor array to achieve a reduction in computational requirements of many orders of magnitude over previous techniques such as MUSIC, Burg's MEM, and Capon's ML, and in addition achieves performance improvement as measured by parameter estimate error variance. It is also manifestly more robust with respect to sensor errors (e.g. gain, phase, and location errors) than other methods. Whereas ESPRIT only requires that the sensor array possess a single invariance, best visualized by considering two identical but otherwise arbitrary arrays of sensors displaced (but not rotated) with respect to each other, many arrays currently in use in various applications are uniform linear arrays of identical sensor elements. Phased array radars are commonplace in high-resolution direction finding systems, and uniform tapped delay lines (i.e., constant rate A/D converters) are the rule rather than the exception in digital signal processing systems. Such arrays possess many invariances, and are amenable to other types of analysis, which is one of the main reasons such structures are so prevalent. Recent developments in high-resolution algorithms of the signal/noise subspace genre including total least squares (TLS) ESPRIT applied to uniform linear arrays are summarized. ESPRIT is also shown to be a generalization of the root-MUSIC algorithm (applicable only to the case of uniform linear arrays of omni-directional sensors and unimodular cisoids). Comparisons with various estimator bounds, including Cramér-Rao bounds, are presented.
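    A minimal LS-ESPRIT sketch for a uniform linear array (the paper discusses the TLS variant; least squares is used here only to keep the example short), with synthetic narrowband sources and my own choices of array size, spacing and noise level:

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """LS-ESPRIT for a uniform linear array with element spacing d (in wavelengths).
    X: (n_sensors, n_snapshots) complex data matrix. Returns DOAs in degrees."""
    R = X @ X.conj().T / X.shape[1]                    # sample covariance
    _, eigvecs = np.linalg.eigh(R)
    Es = eigvecs[:, -n_sources:]                       # signal subspace
    # Shift invariance between the two staggered subarrays: E2 ≈ E1 @ Psi,
    # whose eigenvalues are exp(j*2*pi*d*sin(theta)).
    E1, E2 = Es[:-1], Es[1:]
    Psi = np.linalg.lstsq(E1, E2, rcond=None)[0]
    phases = np.angle(np.linalg.eigvals(Psi))
    return np.degrees(np.arcsin(phases / (2 * np.pi * d)))

# Synthetic ULA data: 8 sensors, two narrowband sources at -20 and 35 degrees.
rng = np.random.default_rng(0)
n_sensors, n_snapshots, d = 8, 400, 0.5
angles = np.radians([-20.0, 35.0])
A = np.exp(2j * np.pi * d * np.outer(np.arange(n_sensors), np.sin(angles)))
S = (rng.normal(size=(2, n_snapshots)) + 1j * rng.normal(size=(2, n_snapshots))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.normal(size=(n_sensors, n_snapshots))
                   + 1j * rng.normal(size=(n_sensors, n_snapshots)))

print("estimated DOAs (deg):", np.sort(esprit_doa(X, n_sources=2)))
```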

  14. FAST COMPENSATION OF GLOBAL LINEAR COUPLING IN RHIC USING AC DIPOLES

    International Nuclear Information System (INIS)

    CALAGA, R.; FRANCHI, A.; TOMAS, R. (CERN)

    2006-01-01

    Global linear coupling has been extensively studied in accelerators and several methods have been developed to compensate the coupling coefficient C using scans of skew quadrupole families. However, scanning techniques can become very time-consuming, especially during the commissioning of an energy ramp. In this paper they illustrate a new technique to measure and compensate, in a single machine cycle, global linear coupling from turn-by-turn BPM data without the need for a skew quadrupole scan. The algorithm is applied to RHIC BPM data using AC dipoles and compared with traditional methods

  15. Constrained non-linear waves for offshore wind turbine design

    International Nuclear Information System (INIS)

    Rainey, P J; Camp, T R

    2007-01-01

    Advancements have been made in the modelling of extreme wave loading in the offshore environment. We give an overview of wave models used at present, and their relative merits. We describe a method for embedding existing non-linear solutions for large, regular wave kinematics into linear, irregular seas. Although similar methods have been used before, the new technique is shown to offer advances in computational practicality, repeatability, and accuracy. NewWave theory has been used to constrain the linear simulation, allowing best possible fit with the large non-linear wave. GH Bladed was used to compare the effect of these models on a generic 5 MW turbine mounted on a tripod support structure

  16. Fail-safe reactivity compensation method for a nuclear reactor

    Science.gov (United States)

    Nygaard, Erik T.; Angelo, Peter L.; Aase, Scott B.

    2018-01-23

    The present invention relates generally to the field of compensation methods for nuclear reactors and, in particular to a method for fail-safe reactivity compensation in solution-type nuclear reactors. In one embodiment, the fail-safe reactivity compensation method of the present invention augments other control methods for a nuclear reactor. In still another embodiment, the fail-safe reactivity compensation method of the present invention permits one to control a nuclear reaction in a nuclear reactor through a method that does not rely on moving components into or out of a reactor core, nor does the method of the present invention rely on the constant repositioning of control rods within a nuclear reactor in order to maintain a critical state.

  17. Recursive and non-linear logistic regression: moving on from the original EuroSCORE and EuroSCORE II methodologies.

    Science.gov (United States)

    Poullis, Michael

    2014-11-01

    EuroSCORE II, despite improving on the original EuroSCORE system, has not solved all the calibration and predictability issues. Recursive, non-linear and mixed recursive and non-linear regression analyses were assessed with regard to sensitivity, specificity and predictability of the original EuroSCORE and EuroSCORE II systems. The original logistic EuroSCORE, EuroSCORE II and recursive, non-linear and mixed recursive and non-linear regression analyses of these risk models were assessed via receiver operator characteristic curves (ROC) and Hosmer-Lemeshow statistic analysis with regard to the accuracy of predicting in-hospital mortality. Analysis was performed for isolated coronary artery bypass grafts (CABGs) (n = 2913), aortic valve replacement (AVR) (n = 814), mitral valve surgery (n = 340), combined AVR and CABG (n = 517), aortic (n = 350), miscellaneous cases (n = 642), and combinations of the above cases (n = 5576). The original EuroSCORE had an ROC below 0.7 for isolated AVR and combined AVR and CABG. None of the methods described increased the ROC above 0.7. The EuroSCORE II risk model had an ROC below 0.7 for isolated AVR only. Recursive regression, non-linear regression, and mixed recursive and non-linear regression all increased the ROC above 0.7 for isolated AVR. The original EuroSCORE had a Hosmer-Lemeshow statistic that was above 0.05 for all patients and the subgroups analysed. All of the techniques markedly increased the Hosmer-Lemeshow statistic. The EuroSCORE II risk model had a Hosmer-Lemeshow statistic that was significant for all patients; recursive and non-linear regression alone failed to improve on the original Hosmer-Lemeshow statistic. The mixed recursive and non-linear regression using the EuroSCORE II risk model was the only model that produced an ROC of 0.7 or above for all patients and procedures and had a Hosmer-Lemeshow statistic that was highly non-significant. The original EuroSCORE and the EuroSCORE II risk models do not have adequate ROC and Hosmer
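    A minimal sketch of the two evaluation metrics used in the study, the ROC area and the Hosmer-Lemeshow statistic, computed for a logistic model fitted to synthetic data (the data and decile grouping are illustrative assumptions, not the EuroSCORE cohorts):

```python
import numpy as np
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    """Hosmer-Lemeshow goodness-of-fit: chi-square over deciles of predicted risk."""
    order = np.argsort(y_prob)
    chi_sq = 0.0
    for g in np.array_split(order, n_groups):
        obs, exp, n = y_true[g].sum(), y_prob[g].sum(), len(g)
        chi_sq += (obs - exp) ** 2 / (exp * (1 - exp / n) + 1e-12)
    return chi_sq, chi2.sf(chi_sq, df=n_groups - 2)

# Synthetic "risk factor" data standing in for the surgical cohorts.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
logits = X @ np.array([1.0, -0.5, 0.8, 0.0]) - 2.0
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

model = LogisticRegression(max_iter=1000).fit(X, y)
p = model.predict_proba(X)[:, 1]
hl, p_value = hosmer_lemeshow(y, p)
print(f"ROC AUC = {roc_auc_score(y, p):.3f}, Hosmer-Lemeshow chi2 = {hl:.2f} (P = {p_value:.3f})")
```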

  18. Active Fail-Safe Micro-Array Flow Control for Advanced Embedded Propulsion Systems

    Science.gov (United States)

    Anderson, Bernhard H.; Mace, James L.; Mani, Mori

    2009-01-01

    The primary objective of this research effort was to develop and analytically demonstrate enhanced first-generation active "fail-safe" hybrid flow-control techniques to simultaneously manage the boundary layer on the vehicle fore-body and to control the secondary flow generated within modern serpentine or embedded inlet S-duct configurations. The enhanced first-generation technique focused on both micro-vanes and micro-ramps highly integrated with micro-jets to provide nonlinear augmentation of the "strength" or effectiveness of highly integrated flow control systems. The study focused on the micro-jet mass flow ratio (Wjet/Waip) range from 0.10 to 0.30 percent and jet total pressure ratios (Pjet/Po) from 1.0 to 3.0. The engine bleed airflow range under study represents about a 10-fold decrease in micro-jet airflow compared with that previously required. Therefore, by pre-conditioning, or injecting a very small amount of high-pressure jet flow into the vortex generated by the micro-vane and/or micro-ramp, active flow control is achieved and substantial augmentation of the controlling flow is realized.

  19. Bounded Perturbation Regularization for Linear Least Squares Estimation

    KAUST Repository

    Ballal, Tarig; Suliman, Mohamed Abdalla Elhag; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper addresses the problem of selecting the regularization parameter for linear least-squares estimation. We propose a new technique called bounded perturbation regularization (BPR). In the proposed BPR method, a perturbation with a bounded
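    The BPR criterion itself is not given in this record, so the sketch below only illustrates the underlying problem: Tikhonov (ridge) regularized least squares with the regularization parameter chosen by a crude hold-out sweep; all sizes and the selection rule are my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 80, 40
A = rng.normal(size=(n, p))
x_true = rng.normal(size=p)
y = A @ x_true + 0.5 * rng.normal(size=n)

def ridge(A, y, lam):
    """Tikhonov-regularized LS: minimize ||A x - y||^2 + lam * ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

# Crude parameter selection: hold out part of the data and sweep lambda.
A_tr, y_tr, A_va, y_va = A[:60], y[:60], A[60:], y[60:]
lams = np.logspace(-3, 2, 30)
errs = [np.linalg.norm(A_va @ ridge(A_tr, y_tr, lam) - y_va) for lam in lams]
best = lams[int(np.argmin(errs))]
print(f"selected lambda = {best:.3g}, estimation error = "
      f"{np.linalg.norm(ridge(A, y, best) - x_true):.3f}")
```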

  20. Linear Algebra and Smarandache Linear Algebra

    OpenAIRE

    Vasantha, Kandasamy

    2003-01-01

    The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...

  1. Fail-safe first wall for preclusion of little leakage

    International Nuclear Information System (INIS)

    Shibui, Masanao; Nakahira, Masataka; Tada, Eisuke; Takatsu, Hideyuki

    1994-05-01

    Leakages, although excluded by design measures, would most probably occur in highly stressed areas, weldments and locations where the state cannot be classified by in-service inspection. In a water-cooled first wall, the allowable leak rate of water is generally very small, and therefore locating the leak under a highly activated environment will be very difficult and time-consuming. The double-wall concept is promising for the ITER first wall, because it can be made fail-safe by the application of the leak-before-break and multiple-load-path concepts, and because it has a potential capability to solve the little leak problem. When the fail-safe strength is well defined, subcritical crack growth in the damaged wall can be permitted. This makes it possible to detect stable leakage of coolant without deteriorating plasma operation. The paper deals with the little leak problem and presents a method for evaluating the small leak rate of a liquid coolant from crack-like defects. The fail-safe first wall with the double-wall concept is also proposed for preclusion of little leakage and its fail-safety is discussed. (author)

  2. Linear regressive model structures for estimation and prediction of compartmental diffusive systems

    NARCIS (Netherlands)

    Vries, D; Keesman, K.J.; Zwart, Heiko J.

    In input-output relations of (compartmental) diffusive systems, physical parameters appear non-linearly, resulting in the use of (constrained) non-linear parameter estimation techniques with their shortcomings regarding global optimality and computational effort. Given an LTI system in state space

  3. Linear regressive model structures for estimation and prediction of compartmental diffusive systems

    NARCIS (Netherlands)

    Vries, D.; Keesman, K.J.; Zwart, H.

    2006-01-01

    Abstract In input-output relations of (compartmental) diffusive systems, physical parameters appear non-linearly, resulting in the use of (constrained) non-linear parameter estimation techniques with their shortcomings regarding global optimality and computational effort. Given an LTI system in state

  4. Development of non-linear TWB parts

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.; Yoon, C.S.; Lim, J.D. [Hyundai Motor Company and Kia Motors Corp. (Korea). Advanced Technology Center; Park, H.C. [Hyundai Hysco (Korea). Technical Research Lab.

    2005-07-01

    New manufacturing methods have been applied to automotive parts to reduce the total weight of the car, resulting in improved fuel efficiency. The TWB technique is applied to auto body parts, especially the door inner, side inner and outer panels, and the center floor panel, to accomplish this goal. We applied non-linear (circular-welded) TWB to the shock absorber housing to reduce the total weight of the shock absorber housing assembly. The welding line and the shape of the blank were determined by FEM analysis. High-formability steel sheet and 440 MPa grade high-strength steel sheet were laser welded and press formed into the final shock absorber housing (S/ABS HSG) panel and assembled with other sub-parts. As a result, the total weight of the shock absorber housing assembly could be reduced by more than 10% compared with the mass of the same part manufactured by the conventional method. The circular welding technique also made it possible to design an optimum welding line for the TWB part. This paper presents the results of the FEM analysis and the development procedure of the non-linear TWB part (shock absorber housing assembly). (orig.)

  5. Lean Transformation Guidance: Why Organizations Fail To Achieve and Sustain Excellence Through Lean Improvement

    Directory of Open Access Journals (Sweden)

    Mohammed Hamed Ahmed

    2013-06-01

    Full Text Available Many companies complain that lean did not achieve their long-term goals and that the improvement impact was very short-lived. Seven out of ten lean projects fail as companies try to use lean like a toolkit, copying and pasting the techniques without trying to adapt the employees' culture, manage the improvement process, sustain the results, and develop their leaders. When the Toyota production system was created, the main goal was to remove waste from the shop floor using some lean techniques and tools. What was not clear is that this required from Toyota a long process of leadership development and a high commitment to training and coaching their employees. A failure to achieve and sustain improvement is a problem of both management and leadership, as well as of an improper understanding of human behavior and the culture required for success.

  6. Failed fuel detector

    International Nuclear Information System (INIS)

    Onodera, Koichi.

    1981-01-01

    Purpose: To improve the reliability of detecting the failure of a fuel rod by imparting a wire-disconnection detecting function to the central electrode of the detector. Constitution: A wire-disconnection detecting terminal is provided at the end opposite to the signal output terminal of the central electrode in a failed fuel detector used for detecting the failure of a fuel rod in an atomic power plant using liquid metal as a coolant, and a voltage monitor for monitoring the terminal voltage is connected to that terminal. Disconnection of the central electrode is detected by the loss of the output of the voltage monitor, and an alarm is thus generated. (Aizawa, K.)

  7. Rendezvous technique for recanalization of long-segmental chronic total occlusion above the knee following unsuccessful standard angioplasty.

    Science.gov (United States)

    Cao, Jun; Lu, Hai-Tao; Wei, Li-Ming; Zhao, Jun-Gong; Zhu, Yue-Qi

    2016-04-01

    To assess the technical feasibility and efficacy of the rendezvous technique, a type of subintimal retrograde wiring, for the treatment of long-segmental chronic total occlusions above the knee following unsuccessful standard angioplasty. The rendezvous technique was attempted in eight limbs of eight patients with chronic total occlusions above the knee after standard angioplasty failed. The clinical symptoms and ankle-brachial index were compared before and after the procedure. At follow-up, pain relief, wound healing, limb salvage, and the presence of restenosis of the target vessels were evaluated. The rendezvous technique was performed successfully in seven patients (87.5%) and failed in one patient (12.5%). Foot pain improved in all seven patients who underwent successful treatment, with ankle-brachial indexes improving from 0.23 ± 0.13 before to 0.71 ± 0.09 after the procedure. The rendezvous technique is a feasible and effective treatment for chronic total occlusions above the knee when standard angioplasty fails. © The Author(s) 2015.

  8. Effect of Image Linearization on Normalized Compression Distance

    Science.gov (United States)

    Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela

    Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
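
    The abstract above describes the general NCD pipeline rather than giving code; the following is a minimal sketch of that pipeline, assuming grayscale images, zlib as the stand-in compressor, and two illustrative scan orders (row-major and column-major) as the linearizations. It is not the authors' experimental setup, only an illustration of how changing the linearization changes the NCD value for the same spatial transformation.

```python
import zlib
import numpy as np

def compressed_len(data: bytes) -> int:
    """Length of the zlib-compressed byte string (stand-in for Kolmogorov complexity)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance between two byte strings."""
    cx, cy, cxy = compressed_len(x), compressed_len(y), compressed_len(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def linearize(img: np.ndarray, order: str = "row") -> bytes:
    """Turn a 2D grayscale image into a 1D byte string (two illustrative scan orders)."""
    if order == "row":           # row-major raster scan
        return img.astype(np.uint8).tobytes()
    if order == "column":        # column-major scan
        return img.astype(np.uint8).T.tobytes()
    raise ValueError(f"unknown linearization order: {order}")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.integers(0, 256, size=(64, 64))
    b = np.roll(a, 5, axis=1)    # a spatially transformed copy of a
    for order in ("row", "column"):
        print(order, round(ncd(linearize(a, order), linearize(b, order)), 3))
```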

  9. Linear collider systems and costs

    International Nuclear Information System (INIS)

    Loew, G.A.

    1993-05-01

    The purpose of this paper is to examine some of the systems and sub-systems involved in so-called "conventional" e+e− linear colliders and to study how their design affects the overall cost of these machines. There are presently a total of at least six 500 GeV c. of m. linear collider projects under study in the world. Aside from TESLA (superconducting linac at 1.3 GHz) and CLIC (two-beam accelerator with main linac at 30 GHz), the other four proposed e+e− linear colliders can be considered "conventional" in that their main linacs use the proven technique of driving room-temperature accelerator sections with pulsed klystrons and modulators. The centrally distinguishing feature between these projects is their main linac rf frequency: 3 GHz for the DESY machine, 11.424 GHz for the SLAC and JLC machines, and 14 GHz for the VLEPP machine. The other systems, namely the electron and positron sources, preaccelerators, compressors, damping rings and final foci, are fairly similar from project to project. Probably more than 80% of the cost of these linear colliders will be incurred in the two main linacs facing each other, and it is therefore in their design and construction that major savings or extra costs may be found.

  10. Linear Programming and Its Application to Pattern Recognition Problems

    Science.gov (United States)

    Omalley, M. J.

    1973-01-01

    Linear programming and linear-programming-like techniques as applied to pattern recognition problems are discussed. Three relatively recent research articles on such applications are summarized. The main results of each paper are described, indicating the theoretical tools needed to obtain them. A synopsis of the author's comments is presented with regard to the applicability or non-applicability of his methods to particular problems, including computational results wherever given.

  11. Multivariate correlation analysis technique based on euclidean distance map for network traffic characterization

    NARCIS (Netherlands)

    Tan, Zhiyuan; Jamdagni, Aruna; He, Xiangjian; Nanda, Priyadarsi; Liu, Ren Ping; Qing, Sihan; Susilo, Willy; Wang, Guilin; Liu, Dongmei

    2011-01-01

    The quality of feature has significant impact on the performance of detection techniques used for Denial-of-Service (DoS) attack. The features that fail to provide accurate characterization for network traffic records make the techniques suffer from low accuracy in detection. Although researches

  12. Method for detecting a failed fuel

    International Nuclear Information System (INIS)

    Utamura, Motoaki; Urata, Megumu; Uchida, Shunsuke.

    1976-01-01

    Purpose: To provide a method for the detection of failed fuel by pouring hot water, in which the pouring speed and the temperature of the poured liquid are controlled to prevent leakage. Constitution: The method comprises blocking the top of a fuel assembly placed in the coolant to stop the flow of coolant, pouring a liquid of higher temperature than the coolant into the fuel assembly, sampling the poured liquid, and measuring the radioactivity concentration of the sampled liquid to detect a failed fuel. The pouring speed of the liquid is set to about 25 l/min, and the temperature difference between the poured liquid and the coolant is kept below about 15 °C. (Furukawa, Y.)

  13. A variational formulation for linear models in coupled dynamic thermoelasticity

    International Nuclear Information System (INIS)

    Feijoo, R.A.; Moura, C.A. de.

    1981-07-01

    A variational formulation for linear models in coupled dynamic thermoelasticity, which quite naturally motivates the design of a numerical scheme for the problem, is studied. When linked to regularization or penalization techniques, this algorithm may be applied to more general models, namely those that consider non-linear constraints associated with variational inequalities. The basic postulates of Mechanics and Thermodynamics as well as some well-known mathematical techniques are described. A thorough description of the algorithm's implementation with the finite-element method is also provided. Proofs of existence and uniqueness of solutions and of convergence of the approximations are presented, and some numerical results are exhibited. (Author) [pt

  14. Fail-safe logic elements for use with reactor safety systems

    International Nuclear Information System (INIS)

    Bobis, J.P.; McDowell, W.P.

    1976-01-01

    A complete fail-safe trip circuit is described which utilizes fail-safe logic elements. The logic elements used are analog multipliers and active bandpass filter networks. These elements perform Boolean operations on a set of AC signals from the output of a reactor safety-channel trip comparator

  15. Linear Viscoelasticity, Reptation, Chain Stretching and Constraint Release

    DEFF Research Database (Denmark)

    Neergaard, Jesper; Schieber, Jay D.; Venerus, David C.

    2000-01-01

    A recently proposed self-consistent reptation model - already successful at describing highly nonlinear shearing flows of many types using no adjustable parameters - is used here to interpret the linear viscoelasticity of the same entangled polystyrene solution. Using standard techniques, a relaxatio...

  16. Peroral endoscopic remyotomy for failed Heller myotomy: a prospective single-center study.

    Science.gov (United States)

    Zhou, P H; Li, Q L; Yao, L Q; Xu, M D; Chen, W F; Cai, M Y; Hu, J W; Li, L; Zhang, Y Q; Zhong, Y S; Ma, L L; Qin, W Z; Cui, Z

    2013-01-01

    Recurrence/persistence of symptoms occurs in approximately 20% of patients after Heller myotomy for achalasia. Controversy exists regarding the therapy for patients in whom Heller myotomy has failed. The aim of the current study was to evaluate the efficacy and feasibility of peroral endoscopic myotomy (POEM), a new endoscopic myotomy technique, for patients with failed Heller myotomy. A total of 12 patients with recurrence/persistence of symptoms after Heller myotomy, as diagnosed by established methods and an Eckardt score of ≥ 4, were prospectively included. The primary outcome was symptom relief during follow-up, defined as an Eckardt score of ≤ 3. Secondary outcomes were procedure-related adverse events, lower esophageal sphincter (LES) pressure on manometry, reflux symptoms, and medication use before and after POEM. All 12 patients underwent successful POEM after a mean of 11.9 years (range 2 - 38 years) from the time of the primary Heller myotomy. No serious complications related to POEM were encountered. During a mean follow-up period of 10.4 months (range 5 - 14 months), treatment success was achieved in 11/12 patients (91.7%; mean score pre- vs. post-treatment 9.2 vs. 1.3). POEM is a feasible treatment after failed Heller myotomy, resulting in short-term symptom relief in > 90% of cases. Previous Heller myotomy may make subsequent endoscopic remyotomy more challenging, but does not prevent successful POEM. © Georg Thieme Verlag KG Stuttgart · New York.

  17. Variance-to-mean method generalized by linear difference filter technique

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji

    1998-01-01

    The conventional variance-to-mean method (Feynman-α method) suffers seriously from divergence of the variance under transient conditions such as a reactor power drift. Strictly speaking, then, the use of the Feynman-α method is restricted to a steady state. To apply the method to more practical situations, it is desirable to overcome this kind of difficulty. For this purpose, we propose the use of a higher-order difference filter technique to reduce the effect of the reactor power drift, and derive several new formulae taking account of the filtering. The capability of the proposed formulae was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergence of the variance can be effectively suppressed by the filtering technique, and that a higher-order filter becomes necessary with increasing variation rate in power
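
    As a toy illustration of why a difference filter helps, the sketch below compares the conventional Feynman-Y statistic computed on drifting Poisson gate counts with a version computed from first-differenced counts, normalising the variance by the sum of squared filter coefficients (here 2). The gate structure, the Poisson model, and the first-order filter are illustrative assumptions only; they are not the generalized formulae derived in the paper.

```python
import numpy as np

def feynman_y(counts: np.ndarray) -> float:
    """Conventional Feynman-Y: gated-count variance-to-mean ratio minus one."""
    return counts.var(ddof=1) / counts.mean() - 1.0

def feynman_y_filtered(counts: np.ndarray) -> float:
    """Feynman-Y after a first-order difference filter.

    The variance of the differenced series is normalised by the sum of squared
    filter coefficients (1^2 + (-1)^2 = 2) times the mean count, so an
    uncorrelated Poisson source again gives Y close to 0 even under a slow drift.
    """
    d = np.diff(counts.astype(float))
    return d.var(ddof=1) / (2.0 * counts.mean()) - 1.0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    drift = np.linspace(100.0, 140.0, 20000)    # slowly drifting mean count rate per gate
    counts = rng.poisson(drift)                 # uncorrelated Poisson counts with a power drift
    print("raw Y (inflated by the drift):      ", round(feynman_y(counts), 3))
    print("Y with 1st-order difference filter: ", round(feynman_y_filtered(counts), 3))
```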

  18. A Linearized Relaxing Algorithm for the Specific Nonlinear Optimization Problem

    Directory of Open Access Journals (Sweden)

    Mio Horai

    2016-01-01

    Full Text Available We propose a new method for a specific nonlinear and nonconvex global optimization problem by using a linear relaxation technique. To simplify the specific nonlinear and nonconvex optimization problem, we transform the problem into its lower linear relaxation form, and we solve the linear relaxation optimization problem by the Branch and Bound Algorithm. Under some reasonable assumptions, the global convergence of the algorithm is established for the problem. Numerical results show that this method is more efficient than the previous methods.

  19. Betatron tomography with the use of non-linear backprojection techniques

    International Nuclear Information System (INIS)

    Baranov, V.A.; Temnik, A.K.; Chakhlov, V.L.; Chekalin, A.S.

    1995-01-01

    The testing of heavy components under non-steady-state conditions (at erection and building sites, on jigs, for testing of welded joints and valving of oil and gas pipelines, for power and boiler plant repair, in building construction, and for testing of castings and welded joints of large thickness) traditionally belongs to the most pressing NDT problems. One of the essential prerequisites for success at this point was the development of appropriate high-energy radiation sources, in particular small-size pulsed betatrons like the MIB-4 and MIB-6 with energies of 4 and 6 MeV. Now, taking into account the new possibilities of tomography, the adaptation of fresh methods of cross-sectional visualisation (like non-linear tomosynthesis) to this conventional problem-solving area is of special interest. (orig./RHM)

  20. International linear collider simulations using BDSIM

    Indian Academy of Sciences (India)

    BDSIM is a Geant4 [1] extension toolkit for the simulation of particle transport in accelerator beamlines. It is a code that combines accelerator-style particle tracking with traditional Geant-style tracking based on Runge–Kutta techniques. A more detailed description of the code can be found in [2]. In an e+e− linear collider ...

  1. Multiple Linear Regression: A Realistic Reflector.

    Science.gov (United States)

    Nutt, A. T.; Batsell, R. R.

    Examples of the use of Multiple Linear Regression (MLR) techniques are presented. This is done to show how MLR aids data processing and decision-making by providing the decision-maker with freedom in phrasing questions and by accurately reflecting the data on hand. A brief overview of the rationale underlying MLR is given, some basic definitions…

  2. Re-examining TG-142 recommendations in light of modern techniques for linear accelerator based radiosurgery.

    Science.gov (United States)

    Faught, Austin M; Trager, Michael; Yin, Fang-Fang; Kirkpatrick, John; Adamson, Justus

    2016-10-01

    The recent development of multifocal stereotactic radiosurgery (SRS) using a single-isocenter volumetric modulated arc therapy (VMAT) technique warrants a re-examination of the quality assurance (QA) tolerances for routine mechanical QA recommended by the American Association of Physicists in Medicine Task Group Report Number 142. Multifocal SRS can result in targets with small volumes being at a large off-axis distance from the treatment isocenter. Consequently, angular errors in the collimator, patient support assembly (PSA), or gantry could have an increased impact on target coverage. The authors performed a retrospective analysis of dose deviations caused by systematic errors in PSA, collimator, and gantry angle at the tolerance level for routine linear accelerator QA as recommended by TG-142. Dosimetric deviations from multifocal SRS plans (N = 10) were compared to traditional single-target SRS using dynamic conformal arcs (N = 10). The chief dosimetric quantities used in determining clinical impact were V100% and D99% of the individual planning target volumes and V12Gy of the healthy brain. Induced errors at tolerance levels showed the greatest change in multifocal SRS target coverage for collimator rotations (±1.0°), with the average changes to V100% and D99% being 5% and 6%, respectively, and maximum changes of 33% and 20%. A reduction in the induced error to half the TG-142 tolerance (±0.5°) demonstrated changes in coverage loss similar to traditional single-target SRS assessed at the recommended tolerance level. The observed change in coverage for multifocal SRS was reduced for gantry errors (±1.0°), at 2% and 4.5% for V100% and D99%, respectively, with maximum changes of 18% and 12%. Minimal change in coverage was noted for errors in PSA rotation. This study indicates that institutions utilizing a single-isocenter VMAT technique for multifocal disease should pay careful attention to the angular mechanical tolerances in designing a robust and

  3. In Search of Black Swans: Identifying Students at Risk of Failing Licensing Examinations.

    Science.gov (United States)

    Barber, Cassandra; Hammond, Robert; Gula, Lorne; Tithecott, Gary; Chahine, Saad

    2018-03-01

    To determine which admissions variables and curricular outcomes are predictive of being at risk of failing the Medical Council of Canada Qualifying Examination Part 1 (MCCQE1), how quickly student risk of failure can be predicted, and to what extent predictive modeling is possible and accurate in estimating future student risk. Data from five graduating cohorts (2011-2015), Schulich School of Medicine & Dentistry, Western University, were collected and analyzed using hierarchical generalized linear models (HGLMs). Area under the receiver operating characteristic curve (AUC) was used to evaluate the accuracy of predictive models and determine whether they could be used to predict future risk, using the 2016 graduating cohort. Four predictive models were developed to predict student risk of failure at admissions, year 1, year 2, and pre-MCCQE1. The HGLM analyses identified gender, MCAT verbal reasoning score, two preclerkship course mean grades, and the year 4 summative objective structured clinical examination score as significant predictors of student risk. The predictive accuracy of the models varied. The pre-MCCQE1 model was the most accurate at predicting a student's risk of failing (AUC 0.66-0.93), while the admissions model was not predictive (AUC 0.25-0.47). Key variables predictive of students at risk were found. The predictive models developed suggest, while it is not possible to identify student risk at admission, we can begin to identify and monitor students within the first year. Using such models, programs may be able to identify and monitor students at risk quantitatively and develop tailored intervention strategies.

  4. Systems with randomly failing repairable components

    DEFF Research Database (Denmark)

    Der Kiureghian, Armen; Ditlevsen, Ove Dalager; Song, Junho

    2005-01-01

    Closed-form expressions are derived for the steady-state availability, mean rate of failure, mean duration of downtime and reliability of a general system with randomly and independently failing repairable components. Component failures are assumed to be homogeneous Poisson events in time and rep...

  5. Evaluating forest management policies by parametric linear programing

    Science.gov (United States)

    Daniel I. Navon; Richard J. McConnen

    1967-01-01

    An analytical and simulation technique, parametric linear programing explores alternative conditions and devises an optimal management plan for each condition. Its application in solving policy-decision problems in the management of forest lands is illustrated in an example.

  6. Setting and validating the pass/fail score for the NBDHE.

    Science.gov (United States)

    Tsai, Tsung-Hsun; Dixon, Barbara Leatherman

    2013-04-01

    This report describes the overall process used for setting the pass/fail score for the National Board Dental Hygiene Examination (NBDHE). The Objective Standard Setting (OSS) method was used for setting the pass/fail score for the NBDHE. The OSS method requires a panel of experts to determine the criterion items and the proportion of these items that minimally competent candidates would answer correctly, the percentage of mastery, and the confidence level of the error band. A panel of 11 experts was selected by the Joint Commission on National Dental Examinations (Joint Commission). Panel members represented geographic distribution across the U.S. and had the following characteristics: full-time dental hygiene practitioners with experience in areas of preventive, periodontal, geriatric and special needs care, and full-time dental hygiene educators with experience in areas of the scientific basis for dental hygiene practice, provision of clinical dental hygiene services, and community health/research principles. Utilizing the expert panel's judgments, the pass/fail score was set and then the score scale was established using the Rasch measurement model. Statistical and psychometric analysis shows that the actual failure rate and the OSS failure rate are reasonably consistent (2.4% vs. 2.8%). The analysis also showed that the lowest error of measurement (an index of precision at the pass/fail score point) and the highest reliability (0.97) are achieved at the pass/fail score point. The pass/fail score is a valid guide for making decisions about candidates for dental hygiene licensure. This new standard was reviewed and approved by the Joint Commission and was implemented beginning in 2011.

  7. Very Preterm Infants Failing CPAP Show Signs of Fatigue Immediately after Birth

    Science.gov (United States)

    Siew, Melissa L.; van Vonderen, Jeroen J.; Hooper, Stuart B.; te Pas, Arjan B.

    2015-01-01

    Objective: To investigate the differences in breathing pattern and effort in infants at birth who failed or succeeded on continuous positive airway pressure (CPAP) during the first 48 hours after birth. Methods: Respiratory function recordings of 32 preterm infants were reviewed, of which 15 infants with a gestational age of 28.6 (0.7) weeks failed CPAP and 17 infants with a GA of 30.1 (0.4) weeks did not fail CPAP. Frequency, duration and tidal volumes (VT) of expiratory holds (EHs), peak inspiratory flows, CPAP levels and FiO2 levels were analysed. Results: EH incidence increased in both CPAP-fail and CPAP-success infants. At 9-12 minutes, CPAP-fail infants more frequently used smaller VTs (0-9 ml/kg) and required higher peak inspiratory flows, whereas CPAP-success infants more often used large VTs (>9 ml/kg) with higher peak inspiratory flows than CPAP-fail infants (71.8 ± 15.8 vs. 15.5 ± 5.2 ml/kg.s). CPAP-fail infants required higher FiO2 (0.31 ± 0.03 vs. 0.21 ± 0.01), higher CPAP pressures (6.62 ± 0.3 vs. 5.67 ± 0.26 cmH2O) and more positive-pressure-delivered breaths (45 ± 12 vs. 19 ± 9%). Conclusions: CPAP-fail infants more commonly used lower VTs and required higher peak inspiratory flow rates while receiving greater respiratory support. VT was less variable and larger VT was infrequently used, reflecting early signs of fatigue. PMID:26052947

  8. Pictorial essay: Role of ultrasound in failed carpal tunnel decompression

    Directory of Open Access Journals (Sweden)

    Rajesh Botchu

    2012-01-01

    Full Text Available USG has been used for the diagnosis of carpal tunnel syndrome. Scarring and incomplete decompression are the main causes for persistence or recurrence of symptoms. We performed a retrospective study to assess the role of ultrasound in failed carpal tunnel decompression. Of 422 USG studies of the wrist performed at our center over the last 5 years, 14 were for failed carpal tunnel decompression. Scarring was noted in three patients, incomplete decompression in two patients, synovitis in one patient, and an anomalous muscle belly in one patient. No abnormality was detected in seven patients. We present a pictorial review of USG findings in failed carpal tunnel decompression.

  9. Pictorial essay: Role of ultrasound in failed carpal tunnel decompression.

    Science.gov (United States)

    Botchu, Rajesh; Khan, Aman; Jeyapalan, Kanagaratnam

    2012-01-01

    USG has been used for the diagnosis of carpal tunnel syndrome. Scarring and incomplete decompression are the main causes for persistence or recurrence of symptoms. We performed a retrospective study to assess the role of ultrasound in failed carpal tunnel decompression. Of 422 USG studies of the wrist performed at our center over the last 5 years, 14 were for failed carpal tunnel decompression. Scarring was noted in three patients, incomplete decompression in two patients, synovitis in one patient, and an anomalous muscle belly in one patient. No abnormality was detected in seven patients. We present a pictorial review of USG findings in failed carpal tunnel decompression.

  10. Non-parametric data predistortion for non-linear channels with memory

    OpenAIRE

    Piazza, Roberto; Shankar, Bhavani; Ottersten, Björn

    2013-01-01

    With the growing application of high-order modulation techniques, the mitigation of the non-linear distortions introduced by power amplification has become a major issue in telecommunication. More sophisticated techniques to counteract the strong interference that is generated need to be investigated in order to achieve the desired power and spectral efficiency. This work proposes a novel approach to the definition of a transmitter technique (predistortion) that outperforms the standard method...

  11. Detection device for the failed position in fuels

    International Nuclear Information System (INIS)

    Tokunaga, Kensuke; Nomura, Teiji; Hiruta, Koji

    1985-01-01

    Purpose: To detect the failed position of a fuel assembly easily and safely. Constitution: A fuel assembly is tightly enclosed in a sipper tube equipped with a gas supply tube and a gas exhaust tube at the upper portion and a purified-water injection tube and a draining tube at the lower end. First, water in the sipper tube is drained to below the fuel assembly by gas pressure while the gas supply tube and the draining tube are open and the exhaust tube and the injection tube are closed. Then, after closing the gas supply tube and the draining tube and opening the exhaust tube and the injection tube, purified water is injected into the sipper tube from the injection tube to an arbitrary height up to which the fuel assembly is immersed. After a predetermined period of time, the water is sampled and its radioactive material density is measured. Since the radioactive material density changes when the injection level passes the failed position, the failed position can be detected easily by varying the injection level of the purified water. (Sekiya, K.)

  12. Reduction of Linear Programming to Linear Approximation

    OpenAIRE

    Vaserstein, Leonid N.

    2006-01-01

    It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.

  13. Parallel beam dynamics simulation of linear accelerators

    International Nuclear Information System (INIS)

    Qiang, Ji; Ryne, Robert D.

    2002-01-01

    In this paper we describe parallel particle-in-cell methods for the large scale simulation of beam dynamics in linear accelerators. These techniques have been implemented in the IMPACT (Integrated Map and Particle Accelerator Tracking) code. IMPACT is being used to study the behavior of intense charged particle beams and as a tool for the design of next-generation linear accelerators. As examples, we present applications of the code to the study of emittance exchange in high intensity beams and to the study of beam transport in a proposed accelerator for the development of accelerator-driven waste transmutation technologies

  14. On the k-independence required by linear probing and minwise independence

    DEFF Research Database (Denmark)

    Pǎtraşcu, Mihai; Thorup, Mikkel

    2016-01-01

    We show that linear probing requires 5-independent hash functions for expected constant-time performance, matching an upper bound of Pagh et al. [2009]. More precisely, we construct a random 4-independent hash function yielding expected logarithmic search time for certain keys. For (1 + ε)-approximate minwise independence, we show that Ω(lg(1/ε))-independent hash functions are required, matching an upper bound of Indyk [2001]. We also show that the very fast 2-independent multiply-shift scheme of Dietzfelbinger [1996] fails badly in both applications.
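
    For readers unfamiliar with the two ingredients named above, here is a minimal sketch of a linear-probing insertion routine driven by a 2-independent multiply-shift hash (random a, b over 64-bit words, 32-bit keys). The table size, load factor, and key range are arbitrary choices for illustration; the abstract's point is precisely that 2-independence alone does not guarantee good expected probe counts for adversarial key sets.

```python
import random

random.seed(0)                  # reproducible illustration

W = 64                          # word length of the multiply-shift arithmetic
TABLE_BITS = 10
TABLE_SIZE = 1 << TABLE_BITS
MASK = (1 << W) - 1

# 2-independent multiply-shift hashing: a, b drawn uniformly from [0, 2^W);
# a 32-bit key x is mapped to the top TABLE_BITS bits of (a*x + b) mod 2^W.
a = random.randrange(1 << W)
b = random.randrange(1 << W)

def h(x: int) -> int:
    return ((a * x + b) & MASK) >> (W - TABLE_BITS)

table = [None] * TABLE_SIZE

def insert(key: int) -> int:
    """Insert a key with linear probing; return the number of probes used."""
    i, probes = h(key), 1
    while table[i] is not None and table[i] != key:
        i = (i + 1) % TABLE_SIZE
        probes += 1
    table[i] = key
    return probes

if __name__ == "__main__":
    keys = random.sample(range(1 << 32), TABLE_SIZE // 2)   # fill the table to 50% load
    avg = sum(insert(k) for k in keys) / len(keys)
    print(f"average probes per insertion at 50% load: {avg:.2f}")
```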

  15. Underachievement, Failing Youth and Moral Panics

    Science.gov (United States)

    Smith, Emma

    2010-01-01

    This paper considers contemporary "moral panics" around the underachievement of boys in school examinations in the UK and America. In the UK, in particular, the underachievement of boys is central to current "crisis accounts" about falling standards and failing pupils. "Underachievement" is a familiar word to those…

  16. Direct probe of the bent and linear geometries of the core-excited Renner-Teller pair states by means of the triple-ion-coincidence momentum imaging technique

    International Nuclear Information System (INIS)

    Muramatsu, Y.; Ueda, K.; Chiba, H.; Saito, N.; Lavollee, M.; Czasch, A.; Weber, T.; Jagutzki, O.; Schmidt-Boecking, H.; Moshammer, R.; Becker, U.; Kubozuka, K.; Koyano, I.

    2002-01-01

    The doubly degenerate core-excited Π state of CO2 splits into two due to the static Renner-Teller effect. Using the triple-ion-coincidence momentum imaging technique and focusing on the dependence of the measured quantities on the polarization of the incident light, we have probed, directly and separately, the linear and bent geometries of the B1 and A1 Renner-Teller pair states, as a direct proof of the static Renner-Teller effect

  17. On the interaction of small-scale linear waves with nonlinear solitary waves

    Science.gov (United States)

    Xu, Chengzhu; Stastna, Marek

    2017-04-01

    In the study of environmental and geophysical fluid flows, linear wave theory is well developed and its application has been considered for phenomena of various length and time scales. However, due to the nonlinear nature of fluid flows, in many cases results predicted by linear theory do not agree with observations. One such case is internal wave dynamics. While small-amplitude wave motion may be approximated by linear theory, large-amplitude waves tend to be solitary-like. In some cases, when the wave is highly nonlinear, even weakly nonlinear theories fail to predict the wave properties correctly. We study the interaction of small-scale linear waves with nonlinear solitary waves using highly accurate pseudospectral simulations that begin with a fully nonlinear solitary wave and a train of small-amplitude waves initialized from linear waves. The solitary wave then interacts with the linear waves through either an overtaking collision or a head-on collision. During the collision, there is a net energy transfer from the linear wave train to the solitary wave, resulting in an increase in the kinetic energy carried by the solitary wave and a phase shift of the solitary wave with respect to a freely propagating solitary wave. At the same time, the linear waves are greatly reduced in amplitude. The percentage of energy transferred depends primarily on the wavelength of the linear waves. We found that after one full collision cycle, the longest waves may retain as much as 90% of the kinetic energy they had initially, while the shortest waves lose almost all of their initial energy. We also found that a head-on collision is more efficient in destroying the linear waves than an overtaking collision. On the other hand, the initial amplitude of the linear waves has very little impact on the percentage of energy that can be transferred to the solitary wave. Because of the nonlinearity of the solitary wave, these results provide us some insight into wave-mean flow

  18. General Linearized Theory of Quantum Fluctuations around Arbitrary Limit Cycles.

    Science.gov (United States)

    Navarrete-Benlloch, Carlos; Weiss, Talitha; Walter, Stefan; de Valcárcel, Germán J

    2017-09-29

    The theory of Gaussian quantum fluctuations around classical steady states in nonlinear quantum-optical systems (also known as standard linearization) is a cornerstone for the analysis of such systems. Its simplicity, together with its accuracy far from critical points or situations where the nonlinearity reaches the strong coupling regime, has turned it into a widespread technique, being the first method of choice in most works on the subject. However, such a technique finds strong practical and conceptual complications when one tries to apply it to situations in which the classical long-time solution is time dependent, a most prominent example being spontaneous limit-cycle formation. Here, we introduce a linearization scheme adapted to such situations, using the driven Van der Pol oscillator as a test bed for the method, which allows us to compare it with full numerical simulations. On a conceptual level, the scheme relies on the connection between the emergence of limit cycles and the spontaneous breaking of the symmetry under temporal translations. On the practical side, the method keeps the simplicity and linear scaling with the size of the problem (number of modes) characteristic of standard linearization, making it applicable to large (many-body) systems.

  19. Chaos as an intermittently forced linear system.

    Science.gov (United States)

    Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kaiser, Eurika; Kutz, J Nathan

    2017-05-30

    Understanding the interplay of order and disorder in chaos is a central challenge in modern quantitative science. Approximate linear representations of nonlinear dynamics have long been sought, driving considerable interest in Koopman theory. We present a universal, data-driven decomposition of chaos as an intermittently forced linear system. This work combines delay embedding and Koopman theory to decompose chaotic dynamics into a linear model in the leading delay coordinates with forcing by low-energy delay coordinates; this is called the Hankel alternative view of Koopman (HAVOK) analysis. This analysis is applied to the Lorenz system and real-world examples including Earth's magnetic field reversal and measles outbreaks. In each case, forcing statistics are non-Gaussian, with long tails corresponding to rare intermittent forcing that precedes switching and bursting phenomena. The forcing activity demarcates coherent phase space regions where the dynamics are approximately linear from those that are strongly nonlinear. The huge amount of data generated in fields like neuroscience or finance calls for effective strategies that mine data to reveal underlying dynamics. Here Brunton et al. develop a data-driven technique to analyze chaotic systems and predict their dynamics in terms of a forced linear model.
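
    A minimal sketch of the HAVOK recipe as summarized above: simulate the Lorenz system, build a Hankel matrix from one measured coordinate, take its SVD to obtain delay coordinates, and fit a linear model on the leading coordinates with the last retained coordinate treated as forcing. The delay length, the truncation rank, and the plain least-squares fit (instead of the sparse regression used in the paper) are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Simulate a long trajectory and keep only one measured coordinate, x(t).
dt, n = 0.01, 20000
t_eval = np.arange(n) * dt
sol = solve_ivp(lorenz, [0.0, t_eval[-1]], [1.0, 1.0, 1.0], t_eval=t_eval)
x = sol.y[0]

# Hankel matrix of q time-shifted copies of the measurement.
q = 100
H = np.column_stack([x[i:n - q + i] for i in range(q)]).T      # shape (q, n - q)

# Delay coordinates from the SVD; keep the leading r of them.
U, S, Vt = np.linalg.svd(H, full_matrices=False)
r = 15
v = Vt[:r].T                                                    # eigen-time-series, shape (n - q, r)

# Fit dv/dt ~ A v on the first r-1 coordinates, with v_r acting as external forcing.
dv = np.gradient(v, dt, axis=0)
coeffs, *_ = np.linalg.lstsq(v, dv[:, : r - 1], rcond=None)     # least squares in place of sparse regression
A = coeffs[: r - 1].T      # linear dynamics of the leading delay coordinates
B = coeffs[r - 1 :].T      # how the forcing coordinate v_r enters
print("A shape:", A.shape, " forcing-term B shape:", B.shape)
```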

  20. Obstetric Anaesthetists' Association and Difficult Airway Society guidelines for the management of difficult and failed tracheal intubation in obstetrics*

    OpenAIRE

    Mushambi, M C; Kinsella, S M; Popat, M; Swales, H; Ramaswamy, K K; Winton, A L; Quinn, A C

    2015-01-01

    The Obstetric Anaesthetists' Association and Difficult Airway Society have developed the first national obstetric guidelines for the safe management of difficult and failed tracheal intubation during general anaesthesia. They comprise four algorithms and two tables. A master algorithm provides an overview. Algorithm 1 gives a framework on how to optimise a safe general anaesthetic technique in the obstetric patient, and emphasises: planning and multidisciplinary communication; how to prevent ...

  1. Controlling attribute effect in linear regression

    KAUST Repository

    Calders, Toon; Karim, Asim A.; Kamiran, Faisal; Ali, Wasif Mohammad; Zhang, Xiangliang

    2013-01-01

    In data mining we often have to learn from biased data, because, for instance, data comes from different batches or there was a gender or racial bias in the collection of social data. In some applications it may be necessary to explicitly control this bias in the models we learn from the data. This paper is the first to study learning linear regression models under constraints that control the biasing effect of a given attribute such as gender or batch number. We show how propensity modeling can be used for factoring out the part of the bias that can be justified by externally provided explanatory attributes. Then we analytically derive linear models that minimize squared error while controlling the bias by imposing constraints on the mean outcome or residuals of the models. Experiments with discrimination-aware crime prediction and batch effect normalization tasks show that the proposed techniques are successful in controlling attribute effects in linear regression models. © 2013 IEEE.

  2. Controlling attribute effect in linear regression

    KAUST Repository

    Calders, Toon

    2013-12-01

    In data mining we often have to learn from biased data, because, for instance, data comes from different batches or there was a gender or racial bias in the collection of social data. In some applications it may be necessary to explicitly control this bias in the models we learn from the data. This paper is the first to study learning linear regression models under constraints that control the biasing effect of a given attribute such as gender or batch number. We show how propensity modeling can be used for factoring out the part of the bias that can be justified by externally provided explanatory attributes. Then we analytically derive linear models that minimize squared error while controlling the bias by imposing constraints on the mean outcome or residuals of the models. Experiments with discrimination-aware crime prediction and batch effect normalization tasks show that the proposed techniques are successful in controlling attribute effects in linear regression models. © 2013 IEEE.
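
    The following is a minimal sketch of one possible realisation of the idea described above: an equality-constrained least-squares fit, solved through its KKT system, that forces the mean predicted outcome to be equal across two groups defined by a sensitive attribute. The synthetic data, the specific constraint, and the KKT solver are illustrative assumptions; the paper's own estimators also handle residual constraints and a propensity-based correction.

```python
import numpy as np

def constrained_lstsq(X, y, C, d):
    """Minimise ||X w - y||^2 subject to C w = d via the KKT linear system."""
    n_feat, n_con = X.shape[1], C.shape[0]
    kkt = np.block([[2.0 * X.T @ X, C.T],
                    [C, np.zeros((n_con, n_con))]])
    rhs = np.concatenate([2.0 * X.T @ y, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n_feat]            # drop the Lagrange multipliers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500
    group = rng.integers(0, 2, n)                 # sensitive attribute (e.g., gender or batch)
    x = 1.5 * group + rng.normal(size=n)          # feature correlated with the attribute
    X = np.column_stack([np.ones(n), x])
    y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

    # Constraint: equal mean predicted outcome in both groups.
    c = (X[group == 1].mean(axis=0) - X[group == 0].mean(axis=0))[None, :]
    w_plain = np.linalg.lstsq(X, y, rcond=None)[0]
    w_ctrl = constrained_lstsq(X, y, c, np.zeros(1))

    for name, w in [("unconstrained", w_plain), ("constrained", w_ctrl)]:
        pred = X @ w
        gap = pred[group == 1].mean() - pred[group == 0].mean()
        print(f"{name:>13} prediction-mean gap between groups: {gap:.3f}")
```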

  3. Efficient Feedforward Linearization Technique Using Genetic Algorithms for OFDM Systems

    Directory of Open Access Journals (Sweden)

    García Paloma

    2010-01-01

    Full Text Available Feedforward is a linearization method that simultaneously offers wide bandwidth and good intermodulation distortion suppression, so it is a good choice for Orthogonal Frequency Division Multiplexing (OFDM) systems. The feedforward structure consists of two loops, and an accurate adjustment between them is necessary over time and whenever temperature, environmental, or operating changes occur. Amplitude and phase imbalances of the circuit elements in both loops produce mismatch effects that degrade its performance. A method is proposed to compensate for these mismatches by introducing two complex coefficients calculated by means of a genetic algorithm. A full study is carried out to choose the optimal parameters of the genetic algorithm applied to wideband systems based on OFDM technologies, which are very sensitive to nonlinear distortions. The functionality of the method has been verified by means of simulation.

  4. Spectral theories for linear differential equations

    International Nuclear Information System (INIS)

    Sell, G.R.

    1976-01-01

    The use of spectral analysis in the study of linear differential equations with constant coefficients is not only a fundamental technique but also leads to far-reaching consequences in describing the qualitative behaviour of the solutions. The spectral analysis, via the Jordan canonical form, will not only lead to a representation theorem for a basis of solutions, but will also give a rather precise statement of the (exponential) growth rates of various solutions. Various attempts have been made to extend this analysis to linear differential equations with time-varying coefficients. The most complete such extension is the Floquet theory for equations with periodic coefficients. For time-varying linear differential equations with aperiodic coefficients, several authors have attempted to "extend" the Floquet theory. The precise meaning of such an extension is itself a problem, and we present here several attempts in this direction that are related to the general problem of extending the spectral analysis of equations with constant coefficients. The main purpose of this paper is to introduce some problems of current research. The primary problem we shall examine occurs in the context of linear differential equations with almost periodic coefficients. We call it "the Floquet problem". (author)

  5. Failed fuel detection and location of LMFBR

    International Nuclear Information System (INIS)

    Mimoto, Yasuhide; Hukuda, Tooru; Nakamoto, Koichiro

    1974-01-01

    This is a summary report on Failed Fuel Detection and Location Methods of liquid metal cooled fast breeder reactors, and describes an outline of related research and development conducted by PNC. (auth.)

  6. Triggered activity and automaticity in ventricular trabeculae of failing human and rabbit hearts

    NARCIS (Netherlands)

    Vermeulen, J. T.; McGuire, M. A.; Opthof, T.; Coronel, R.; de Bakker, J. M.; Klöpping, C.; Janse, M. J.

    1994-01-01

    The aim of the study was to assess the occurrence of triggered activity and automaticity in ventricular trabeculae from failing human hearts and normal and failing rabbit hearts during exposure to a normal and altered extracellular environment. Ventricular trabeculae were harvested from failing

  7. Weak and Failing States: Evolving Security Threats and U.S. Policy

    National Research Council Canada - National Science Library

    Wyler, Liana S

    2008-01-01

    .... national security goal since the end of the Cold War. Numerous U.S. government documents point to several threats emanating from states that are variously described as weak, fragile, vulnerable, failing, precarious, failed, in crisis, or collapsed...

  8. Quasi-linear evolution of tearing modes

    International Nuclear Information System (INIS)

    Pellat, R.; Frey, M.; Tagger, M.

    1983-07-01

    The growth of a tearing instability in Rutherford's nonlinear regime is investigated. Using a singular perturbation technique, Rutherford's lowest-order result is recovered. To the following order, it is shown that the mode generates a quasi-linear deformation of the equilibrium flux profile, whose resistive diffusion slows down the growth and shows the possibility of a saturation of the instability.

  9. Linear collider accelerator physics issues regarding alignment

    International Nuclear Information System (INIS)

    Seeman, J.T.

    1990-01-01

    The next generation of linear colliders will require more stringent alignment tolerances than those for the SLC with regard to the accelerating structures, quadrupoles, and beam position monitors. New techniques must be developed to achieve these tolerances. A combination of mechanical-electrical and beam-based methods will likely be needed

  10. Families of Linear Recurrences for Catalan Numbers

    Science.gov (United States)

    Gauthier, N.

    2011-01-01

    Four different families of linear recurrences are derived for Catalan numbers. The derivations rest on John Riordan's 1973 generalization of Catalan numbers to a set of polynomials. Elementary differential and integral calculus techniques are used and the results should be of interest to teachers and students of introductory courses in calculus…
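
    The abstract does not list the four families of recurrences, so the sketch below simply checks the single best-known linear recurrence for Catalan numbers, (n + 2) C_{n+1} = 2(2n + 1) C_n, against the closed form, as a representative example of the kind of identity discussed.

```python
from math import comb

def catalan_closed(n: int) -> int:
    """Closed form: C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def catalan_recurrence(n_max: int) -> list[int]:
    """Generate C_0..C_{n_max} from the linear recurrence (n + 2) C_{n+1} = 2(2n + 1) C_n."""
    c = [1]
    for n in range(n_max):
        c.append(2 * (2 * n + 1) * c[-1] // (n + 2))   # division is exact
    return c

if __name__ == "__main__":
    cs = catalan_recurrence(10)
    assert cs == [catalan_closed(n) for n in range(11)]
    print(cs)   # [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796]
```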

  11. Integration of post-irradiation examination results of failed WWER fuel rods

    International Nuclear Information System (INIS)

    Smirnov, A.; Markov, D.; Smirnov, V.; Polenok, V.; Perepelkin, S.

    2003-01-01

    The aim of the work is to investigate the causes of WWER fuel rod failures and to reveal the dependence of the failed fuel rod behaviour and state on the damage characteristics and the duration of operation in the core. The post-irradiation examination of 12 leaky fuel assemblies (5 for WWER-440 and 7 for WWER-1000) has been done at SSC RF RIAR. The results show that the main mechanism responsible for the majority of cases of WWER fuel rod perforation is debris damage of the claddings. Debris fretting marks on the claddings are spread randomly over the fuel assembly cross-section and are registered in the area of the bundle supporting grid or under the lower spacer grids along the fuel assembly height. In the WWER fuel rods, the areas of secondary hydrogenating of the cladding are, as a rule, spaced from the primary defects by ∼2500-3000 mm and are often closely adjacent to the upper welded joints. There is no pronounced dependence of the distance between the primary and secondary cladding defects either on the linear power at which the fuel rods were operated or on the period of their operation in the leaky state. The time period for the formation of significant secondary damage is about 250 ± 50 calendar days for WWER fuel rods with slight through primary defects (∼0.1-0.5 mm²) operated in the linear power range 170-215 W/cm. Cladding degradation due to secondary hydrogenating does not occur in the case of large through debris defects during operation up to 600 calendar days.

  12. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, and the least-square estimation method is employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. The color and texture features are extracted on each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is modeled based on the features, are formed as a vector called a feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The obtained results expose that the proposed technique is comparable to the other existing techniques.
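
    A minimal sketch of the pipeline described above, under several illustrative assumptions: each region's colour channels and a crude texture measure (local gradient magnitude) are related by a multiple linear regression, the least-squares parameter estimates form the fused feature vector, and the Canberra distance compares the vectors. The particular regression model, the synthetic regions, and the descriptors are not taken from the paper.

```python
import numpy as np

def region_feature_vector(region: np.ndarray) -> np.ndarray:
    """Fuse colour and texture descriptors of one image region via multiple linear regression.

    Illustrative model: regress a crude texture measure (local gradient magnitude of the grey
    level) on the colour channels; the least-squares coefficients are the fused feature vector.
    """
    r, g, b = region[..., 0].ravel(), region[..., 1].ravel(), region[..., 2].ravel()
    grey = (0.299 * r + 0.587 * g + 0.114 * b).reshape(region.shape[:2])
    gy, gx = np.gradient(grey)
    texture = np.hypot(gx, gy).ravel()
    X = np.column_stack([np.ones_like(r), r, g, b])          # design matrix: intercept + colour
    beta, *_ = np.linalg.lstsq(X, texture, rcond=None)       # least-squares parameter estimation
    return beta

def canberra(u: np.ndarray, v: np.ndarray) -> float:
    """Canberra distance between two fused feature vectors."""
    denom = np.abs(u) + np.abs(v)
    mask = denom > 0
    return float(np.sum(np.abs(u - v)[mask] / denom[mask]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:64, 0:64] / 63.0
    query = np.stack([xx, yy, xx * yy], axis=-1)                               # smooth synthetic region
    noisy_copy = np.clip(query + rng.normal(scale=0.002, size=query.shape), 0, 1)
    other = np.stack([np.sin(6 * xx) ** 2, yy ** 2, 0.5 * np.ones_like(xx)], axis=-1)
    fq = region_feature_vector(query)
    print("distance to noisy copy:      ", round(canberra(fq, region_feature_vector(noisy_copy)), 3))
    print("distance to different region:", round(canberra(fq, region_feature_vector(other)), 3))
```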

  13. The density matrix - The story of a failed transfer

    Energy Technology Data Exchange (ETDEWEB)

    Blum, Alexander [MPI fuer Wissenschaftsgeschichte, Berlin (Germany)

    2013-07-01

    With the discovery of the positron in 1933, Paul Dirac (along with most other physicists) was forced to really take seriously his earlier suggestion that in the world as we know it all negative energy states are occupied and we are thus surrounded by an infinite sea of electrons. What was needed was a way to treat this large number of electrons in a manageable fashion. Dirac resorted to the use of the density matrix, a technique he had earlier used to describe the large number of electrons in complex atoms. Initially, this transfer from atomic physics to what we would nowadays call particle physics was quite successful, and for a few years the density matrix was the state of the art in describing the Dirac electron sea, but then rapidly fell out of favor. I investigate the causes of this ultimately failed transfer and how it relates to changes in the physical notion of the vacuum, changes which eventually eliminated the analogy on which the transfer had been based in the first place.

  14. The Reputational Consequences of Failed Replications and Wrongness Admission among Scientists.

    Directory of Open Access Journals (Sweden)

    Adam K Fetterman

    Full Text Available Scientists are dedicating more attention to replication efforts. While the scientific utility of replications is unquestionable, the impact of failed replication efforts and the discussions surrounding them deserve more attention. Specifically, the debates about failed replications on social media have led to worry, in some scientists, regarding reputation. In order to gain data-informed insights into these issues, we collected data from 281 published scientists. We assessed whether scientists overestimate the negative reputational effects of a failed replication in a scenario-based study. Second, we assessed the reputational consequences of admitting wrongness (versus not) as an original scientist of an effect that has failed to replicate. Our data suggest that scientists overestimate the negative reputational impact of a hypothetical failed replication effort. We also show that admitting wrongness about a non-replicated finding is less harmful to one's reputation than not admitting. Finally, we discovered a hint of evidence that feelings about the replication movement can be affected by whether replication efforts are aimed at one's own work versus the work of another. Given these findings, we then present potential ways forward in these discussions.

  15. The Reputational Consequences of Failed Replications and Wrongness Admission among Scientists.

    Science.gov (United States)

    Fetterman, Adam K; Sassenberg, Kai

    2015-01-01

    Scientists are dedicating more attention to replication efforts. While the scientific utility of replications is unquestionable, the impact of failed replication efforts and the discussions surrounding them deserve more attention. Specifically, the debates about failed replications on social media have led to worry, in some scientists, regarding reputation. In order to gain data-informed insights into these issues, we collected data from 281 published scientists. We assessed whether scientists overestimate the negative reputational effects of a failed replication in a scenario-based study. Second, we assessed the reputational consequences of admitting wrongness (versus not) as an original scientist of an effect that has failed to replicate. Our data suggest that scientists overestimate the negative reputational impact of a hypothetical failed replication effort. We also show that admitting wrongness about a non-replicated finding is less harmful to one's reputation than not admitting. Finally, we discovered a hint of evidence that feelings about the replication movement can be affected by whether replication efforts are aimed at one's own work versus the work of another. Given these findings, we then present potential ways forward in these discussions.

  16. Effects of dual-energy CT with non-linear blending on abdominal CT angiography

    International Nuclear Information System (INIS)

    Li, Sulan; Wang, Chaoqin; Jiang, Xiao Chen; Xu, Ge

    2014-01-01

    To determine whether non-linear blending technique for arterial-phase dual-energy abdominal CT angiography (CTA) could improve image quality compared to the linear blending technique and conventional 120 kVp imaging. This study included 118 patients who had undergone dual-energy abdominal CTA in the arterial phase. They were assigned to Sn140/80 kVp protocol (protocol A, n = 40) if body mass index (BMI) < 25 or Sn140/100 kVp protocol (protocol B, n = 41) if BMI ≥ 25. Non-linear blending images and linear blending images with a weighting factor of 0.5 in each protocol were generated and compared with the conventional 120 kVp images (protocol C, n = 37). The abdominal vascular enhancements, image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and radiation dose were assessed. Statistical analysis was performed using one-way analysis of variance test, independent t test, Mann-Whitney U test, and Kruskal-Wallis test. Mean vascular attenuation, CNR, SNR and subjective image quality score for the non-linear blending images in each protocol were all higher compared to the corresponding linear blending images and 120 kVp images (p values ranging from < 0.001 to 0.007) except when comparing the non-linear blending images for protocol B and the 120 kVp images in CNR and SNR. No significant differences were found in image noise among the three kinds of images or among the same kind of images in different protocols, but the lowest radiation dose was shown in protocol A. Non-linear blending technique of dual-energy CT can improve the image quality of arterial-phase abdominal CTA, especially with the Sn140/80 kVp scanning.

  17. Effects of dual-energy CT with non-linear blending on abdominal CT angiography

    Energy Technology Data Exchange (ETDEWEB)

    Li, Sulan; Wang, Chaoqin; Jiang, Xiao Chen; Xu, Ge [Dept. of Radiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou (China)

    2014-08-15

    To determine whether non-linear blending technique for arterial-phase dual-energy abdominal CT angiography (CTA) could improve image quality compared to the linear blending technique and conventional 120 kVp imaging. This study included 118 patients who had undergone dual-energy abdominal CTA in the arterial phase. They were assigned to Sn140/80 kVp protocol (protocol A, n = 40) if body mass index (BMI) < 25 or Sn140/100 kVp protocol (protocol B, n = 41) if BMI ≥ 25. Non-linear blending images and linear blending images with a weighting factor of 0.5 in each protocol were generated and compared with the conventional 120 kVp images (protocol C, n = 37). The abdominal vascular enhancements, image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and radiation dose were assessed. Statistical analysis was performed using one-way analysis of variance test, independent t test, Mann-Whitney U test, and Kruskal-Wallis test. Mean vascular attenuation, CNR, SNR and subjective image quality score for the non-linear blending images in each protocol were all higher compared to the corresponding linear blending images and 120 kVp images (p values ranging from < 0.001 to 0.007) except when comparing the non-linear blending images for protocol B and the 120 kVp images in CNR and SNR. No significant differences were found in image noise among the three kinds of images or among the same kind of images in different protocols, but the lowest radiation dose was shown in protocol A. Non-linear blending technique of dual-energy CT can improve the image quality of arterial-phase abdominal CTA, especially with the Sn140/80 kVp scanning.

  18. Advanced condition monitoring techniques and plant life extension studies at EBR-2

    International Nuclear Information System (INIS)

    Singer, R.M.; Gross, K.C.; Perry, W.H.; King, R.W.

    1991-01-01

    Numerous advanced techniques have been evaluated and tested at EBR-2 as part of a plant-life extension program for detection of degradation and other abnormalities in plant systems. Two techniques have been determined to be of considerable assistance in planning for the extended-life operation of EBR-2. The first, a computer-based pattern-recognition system (System State Analyzer or SSA) is used for surveillance of the primary system instrumentation, primary sodium pumps and plant heat balances. This surveillance has indicated that the SSA can detect instrumentation degradation and system performance degradation over varying time intervals and can be used to provide derived signal values to replace signals from failed sensors. The second technique, also a computer-based pattern-recognition system (Sequential Probability Ratio Test or SPRT) is used to validate signals and to detect incipient failures in sensors and components or systems. It is being used on the failed fuel detection system and is experimentally used on the primary coolant pumps. Both techniques are described and experience with their operation presented

  19. Failure to Fail in a Final Pre-Service Teaching Practicum

    Science.gov (United States)

    Danyluk, Patricia J.; Luhanga, Florence; Gwekwerere, Yovita N.; MacEwan, Leigh; Larocque, Sylvie

    2015-01-01

    This article presents a Canadian perspective on the issue of failure to fail in Bachelor of Education programs. The issue of failure to fail in Bachelor of Education programs is one that has not been explored in any great detail. What literature does exist focuses on the strain that a teacher experiences when s/he mentors a student teacher…

  20. "To big to fail"-doktrinen står for fald?

    DEFF Research Database (Denmark)

    Grosen, Anders

    2010-01-01

    If President Barack Obama gets his way, the classic "too big to fail" banking doctrine will be replaced by a "small enough to fail" doctrine. This follows from the president's plans to break up the big banks into smaller units and to ban the banks' proprietary trading. If Barack Obama gets...

  1. Recipes for stable linear embeddings from Hilbert spaces to R^m

    OpenAIRE

    Puy, Gilles; Davies, Michael; Gribonval, Remi

    2017-01-01

    We consider the problem of constructing a linear map from a Hilbert space H (possibly infinite dimensional) to R^m that satisfies a restricted isometry property (RIP) on an arbitrary signal model, i.e., a subset of H. We present a generic framework that handles a large class of low-dimensional subsets but also unstructured and structured linear maps. We provide a simple recipe to prove that a random linear map satisfies a general RIP with high probability. We also describe a generic technique ...

  2. Laser beam propagation in non-linearly absorbing media

    CSIR Research Space (South Africa)

    Forbes, A

    2006-08-01

    Full Text Available Many analytical techniques exist to explore the propagation of certain laser beams in free space, or in a linearly absorbing medium. When the medium is nonlinearly absorbing the propagation must be described by an iterative process using the well...

  3. SNR Estimation in Linear Systems with Gaussian Matrices

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Alrashdi, Ayed; Ballal, Tarig; Al-Naffouri, Tareq Y.

    2017-01-01

    This letter proposes a highly accurate algorithm to estimate the signal-to-noise ratio (SNR) for a linear system from a single realization of the received signal. We assume that the linear system has a Gaussian matrix with one sided left correlation. The unknown entries of the signal and the noise are assumed to be independent and identically distributed with zero mean and can be drawn from any distribution. We use the ridge regression function of this linear model in company with tools and techniques adapted from random matrix theory to achieve, in closed form, accurate estimation of the SNR without prior statistical knowledge on the signal or the noise. Simulation results show that the proposed method is very accurate.

  4. Non linear identification applied to PWR steam generators

    International Nuclear Information System (INIS)

    Poncet, B.

    1982-11-01

    For the precise industrial purpose of PWR nuclear power plant steam generator water level control, a natural method is developed for cases where classical techniques do not seem efficient enough. Starting from this essentially non-linear practical problem, an input-output identification of dynamic systems is proposed. Through homodynamic systems, characterized by a regularity property found in most industrial processes with a balance set, state-form realizations are built that achieve the exact joining of local dynamic behaviours, in both the discrete- and continuous-time cases, without any load parameter. Specifically non-linear analytical modelling means, which have no influence on the local joined behaviours, are also pointed out. Non-linear autoregressive realizations allow us to perform indirect adaptive control under the constraint of an admissible given dynamic family [fr

  5. SNR Estimation in Linear Systems with Gaussian Matrices

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-09-27

    This letter proposes a highly accurate algorithm to estimate the signal-to-noise ratio (SNR) for a linear system from a single realization of the received signal. We assume that the linear system has a Gaussian matrix with one sided left correlation. The unknown entries of the signal and the noise are assumed to be independent and identically distributed with zero mean and can be drawn from any distribution. We use the ridge regression function of this linear model in company with tools and techniques adapted from random matrix theory to achieve, in closed form, accurate estimation of the SNR without prior statistical knowledge on the signal or the noise. Simulation results show that the proposed method is very accurate.

  6. Thin Films of Novel Linear-Dendritic Diblock Copolymers

    Science.gov (United States)

    Iyer, Jyotsna; Hammond, Paula

    1998-03-01

    A series of diblock copolymers with one linear block and one dendrimeric block have been synthesized with the objective of forming ultrathin film nanoporous membranes. Polyethyleneoxide serves as the linear hydrophilic portion of the diblock copolymer. The hyperbranched dendrimeric block consists of polyamidoamine with functional end groups. Thin films of these materials made by spin casting and Langmuir-Blodgett techniques are being studied. The effect of the polyethylene oxide block size and the number and chemical nature of the dendrimer end group on the nature and stability of the films formed will be discussed.

  7. When the science fails and the ethics works: 'Fail-safe' ethics in the FEM-PrEP study.

    Science.gov (United States)

    Kingori, Patricia

    2015-12-01

    This paper will explore the concept of 'fail safe' ethics in the FEM-PrEP trial, and the practice of research and ethics on the ground. FEM-PrEP examined the efficacy of PrEP in African women after promising outcomes in research conducted with MSM. This was a hugely optimistic time and FEM-PrEP was mobilised using rights-based ethical arguments that women should have access to PrEP. This paper will present data collected during an ethnographic study of frontline research workers involved in FEM-PrEP. During our discussions, 'fail-safe' ethics emerged as a concept that encapsulated their confidence that their ethics could not fail. However, in 2011, FEM-PrEP was halted and deemed a failure. The women involved in the study were held responsible because, contrary to researchers' expectations, they were not taking the oral PrEP being researched. This examination of FEM-PrEP will show that ethical arguments are increasingly deployed to mobilise, maintain and in some cases stop trials in ways which, at times, are superseded or co-opted by other interests. While promoting the interests of women, rights-based approaches are argued to indirectly justify the continuation of individualised, biomedical interventions which have been problematic in other women-centred trials. In this examination of FEM-PrEP, the rights-based approach obscured: ethical concerns beyond access to PrEP; the complexities of power relationships between donor and host countries; the operations of the HIV industry in research-saturated areas; and the cumulative effect of unfulfilled expectations in HIV research and how this has shaped ideas of research and ethics.

  8. A Scientific Investigation into why Firms Fail: A Model of corporate ...

    African Journals Online (AJOL)

    A Scientific Investigation into why Firms Fail: A Model of corporate health trajectory. ... to analyse the data of 20 banks, 10 that failed and 10 that were successful. Key words: Corporate collapse, trajectories of failure, bank failure, bank distress, ...

  9. Providing nuclear reactor control information in the presence of instrument failures

    International Nuclear Information System (INIS)

    Tylee, J.L.; Purviance, J.E.

    1986-01-01

    A technique for using unfailed instrument outputs to generate optimal estimates of failed sensor outputs is presented and evaluated. The technique uses a bank of discrete, linear Kalman filters, each dedicated to one instrument, and a combinatory logic to perform the output estimation. The technique is tested using measurement data from a university research reactor
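    A toy version of the idea is easy to sketch. The code below is our own minimal illustration, not the cited implementation: a scalar plant state is watched by three redundant instruments, each with its own discrete Kalman filter, and when one instrument is flagged as failed its output is reconstructed from the estimates of the filters attached to the healthy instruments (a simple average stands in for the combinatory logic). All model parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
F, Q = 1.0, 1e-4                    # random-walk model of the plant variable
H = np.array([1.0, 1.0, 1.0])       # each instrument measures the state directly
R = np.array([0.02, 0.02, 0.02])    # measurement noise variances

n_steps = 200
true_state = 1.0 + np.cumsum(rng.normal(0.0, np.sqrt(Q), n_steps))
meas = true_state[:, None] + rng.normal(0.0, np.sqrt(R), (n_steps, 3))
failed = np.zeros((n_steps, 3), dtype=bool)
failed[100:, 2] = True              # instrument 2 fails halfway through the run

x_hat, P = np.ones(3), np.ones(3)   # one filter (estimate, variance) per instrument
reconstructed = np.zeros(n_steps)

for t in range(n_steps):
    for i in range(3):
        x_hat[i] = F * x_hat[i]                 # time update
        P[i] = F * P[i] * F + Q
        if failed[t, i]:
            continue                            # failed instrument: skip the measurement update
        K = P[i] * H[i] / (H[i] * P[i] * H[i] + R[i])
        x_hat[i] += K * (meas[t, i] - H[i] * x_hat[i])
        P[i] *= 1.0 - K * H[i]
    healthy = ~failed[t]
    reconstructed[t] = x_hat[healthy].mean()    # stand-in estimate of the failed sensor's output

print("final reconstruction error:", abs(reconstructed[-1] - true_state[-1]))
```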

  10. Modelling Loudspeaker Non-Linearities

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    2007-01-01

    This paper investigates different techniques for modelling the non-linear parameters of the electrodynamic loudspeaker. The methods are tested not only for their accuracy within the range of original data, but also for the ability to work reasonable outside that range, and it is demonstrated...... that polynomial expansions are rather poor at this, whereas an inverse polynomial expansion or localized fitting functions such as the gaussian are better suited for modelling the Bl-factor and compliance. For the inductance the sigmoid function is shown to give very good results. Finally the time varying...
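    As a concrete (and entirely synthetic) illustration of the localized-fitting idea mentioned above, the sketch below fits a single Gaussian to a made-up force-factor curve Bl(x); the parameter values and the measurement noise are invented, and the abstract's polynomial, inverse-polynomial and sigmoid alternatives are not shown.

```python
import numpy as np
from scipy.optimize import curve_fit

def bl_gauss(x, bl0, width, offset):
    """Localized (gaussian) model of the loudspeaker force factor Bl(x)."""
    return bl0 * np.exp(-((x - offset) / width) ** 2)

x = np.linspace(-6e-3, 6e-3, 61)                    # voice-coil displacement [m]
bl_true = bl_gauss(x, 7.5, 5e-3, 0.5e-3)            # synthetic "measured" Bl [T*m]
bl_meas = bl_true + np.random.default_rng(3).normal(0.0, 0.05, x.size)

popt, _ = curve_fit(bl_gauss, x, bl_meas, p0=[7.0, 4e-3, 0.0])
print("fitted peak Bl=%.2f T*m, width=%.2f mm, offset=%.2f mm"
      % (popt[0], popt[1] * 1e3, popt[2] * 1e3))
```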

  11. Linear Prediction Using Refined Autocorrelation Function

    Directory of Open Access Journals (Sweden)

    M. Shahidur Rahman

    2007-07-01

    Full Text Available This paper proposes a new technique for improving the performance of linear prediction analysis by utilizing a refined version of the autocorrelation function. Problems in analyzing voiced speech using linear prediction occur often due to the harmonic structure of the excitation source, which causes the autocorrelation function to be an aliased version of that of the vocal tract impulse response. To estimate the vocal tract characteristics accurately, however, the effect of aliasing must be eliminated. In this paper, we employ homomorphic deconvolution technique in the autocorrelation domain to eliminate the aliasing effect occurred due to periodicity. The resulted autocorrelation function of the vocal tract impulse response is found to produce significant improvement in estimating formant frequencies. The accuracy of formant estimation is verified on synthetic vowels for a wide range of pitch frequencies typical for male and female speakers. The validity of the proposed method is also illustrated by inspecting the spectral envelopes of natural speech spoken by high-pitched female speaker. The synthesis filter obtained by the current method is guaranteed to be stable, which makes the method superior to many of its alternatives.
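    For readers who want to experiment, the baseline that the refinement builds on, linear prediction by the autocorrelation method with the Levinson-Durbin recursion, is sketched below on a synthetic AR(2) signal. The cepstral (homomorphic) refinement of the autocorrelation function described in the paper is deliberately omitted, and all signal parameters are invented.

```python
import numpy as np

def lpc_autocorr(x, order):
    """Autocorrelation-method linear prediction via the Levinson-Durbin recursion.
    (Baseline only; the paper's homomorphic refinement of r[k] is not included.)"""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= 1.0 - k * k
    return a, err

# Synthetic "vocal tract": a stable AR(2) filter driven by white noise
rng = np.random.default_rng(4)
x = rng.standard_normal(4000)
for n in range(2, len(x)):
    x[n] += 1.3 * x[n - 1] - 0.64 * x[n - 2]

a_hat, _ = lpc_autocorr(x, 2)
print("estimated predictor:", np.round(a_hat, 3))   # expect roughly [1, -1.3, 0.64]
```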

  12. Assessing the potential cost of a failed Doha round

    OpenAIRE

    Antoine Bouet

    2010-01-01

    This study offers new conclusions on the economic cost of a failed Doha Development Agenda (DDA). We assess potential outcome of the Doha Round as well as four protectionist scenarios using the MIRAGE Computable General Equilibrium (CGE) model. In a scenario where applied tariffs of World Trade Organization (WTO) economies would go up to currently bound tariff rates, world trade would decrease by 7.7 % and world welfare by US$353 billion. The economic cost of a failed DDA is here evaluated by...

  13. 7 CFR 1951.264 - Action when borrower fails to cooperate, respond or graduate.

    Science.gov (United States)

    2010-01-01

    ... Analyzing Credit Needs and Graduation of Borrowers § 1951.264 Action when borrower fails to cooperate, respond or graduate. (a) When borrowers with other than FCP loans fail to: (1) Provide information... appeal the decision. (b) If an FCP borrower fails to cooperate after a lender expresses a willingness to...

  14. Synthesis Report on the understanding of failed LMFBR fuel element performance

    International Nuclear Information System (INIS)

    Plitz, H.; Bagley, K.; Harbourne, B.

    1990-07-01

    In the course of LMFBR operation fuel element failures cannot be entirely avoided, as experienced during the operation of PFR, PHENIX and KNK II, where 44 failed fuel elements were registered between 1978 and 1989. In earlier irradiations, post-irradiation examinations showed mixed oxide pin diameter increases up to the pin pitch distance, raising reactor safety questions on the potential for fuel pin failure propagation within pin bundles. The chemical interaction of sodium with mixed oxide fuel is regarded as the key to understanding failed fuel behavior. Valuable results on failed fuel pin behavior during operation were obtained from the SILOE sodium loop test. Based on the bulk of experience with the detection of fuel pin failures, with continued operation and with the handling of failed pins and elements, one can state: 1. All fuel pin failures have been detected securely in time and have been located. 2. Small defects develop slowly. 3. Even large defects at end-of-life pins resulted in limited fuel loss. 4. Clad failures behave benignly in the main aspects. 5. The chemical interaction of sodium with mixed oxide is an important factor in the behavior of failed fuel pins, especially at high burnup. 6. Despite different pin designs and different operating conditions, on the basis of 44 failed elements in PFR, PHENIX and KNK II no pin-to-pin propagation was observed and fuel release was rather low, often not detectable. 7. In no case have hazard conditions affecting reactor safety been experienced

  15. Universal Linear Precoding for NBI-Proof Widely Linear Equalization in MC Systems

    Directory of Open Access Journals (Sweden)

    Donatella Darsena

    2007-09-01

    Full Text Available In multicarrier (MC systems, transmitter redundancy, which is introduced by means of finite-impulse response (FIR linear precoders, allows for perfect or zero-forcing (ZF equalization of FIR channels (in the absence of noise. Recently, it has been shown that the noncircular or improper nature of some symbol constellations offers an intrinsic source of redundancy, which can be exploited to design efficient FIR widely-linear (WL receiving structures for MC systems operating in the presence of narrowband interference (NBI. With regard to both cyclic-prefixed and zero-padded transmission techniques, it is shown in this paper that, with appropriately designed precoders, it is possible to synthesize in both cases WL-ZF universal equalizers, which guarantee perfect symbol recovery for any FIR channel. Furthermore, it is theoretically shown that the intrinsic redundancy of the improper symbol sequence also enables WL-ZF equalization, based on the minimum mean output-energy criterion, with improved NBI suppression capabilities. Finally, results of numerical simulations are presented, which assess the merits of the proposed precoding designs and validate the theoretical analysis carried out.

  16. Technical challenge of future linear colliders

    International Nuclear Information System (INIS)

    Himel, T.

    1986-05-01

    The next generation of high energy e+e− colliders is likely to be built with colliding linear accelerators. A lot of research and development is needed before such a machine can be practically built. Some of the problems and recent progress made toward their solution are described here. Quantum corrections to beamstrahlung, the production of low emittance beams and strong focusing techniques are covered

  17. Evolution of Godoy & Godoy manual lymph drainage. Technique with linear movements

    Directory of Open Access Journals (Sweden)

    José Maria Pereira de Godoy

    2017-10-01

    Full Text Available Manual lymph drainage has been the mainstay in the treatment of lymphedema for decades now. Five evolving variants have been described by Godoy & Godoy over the years: (i) manual lymph drainage using rollers; (ii) self-applied manual lymph drainage using rollers; (iii) manual lymph drainage using the hands (manual lymphatic therapy); (iv) mechanical lymphatic therapy using the RAGodoy® device; and (v) lymphatic therapy using cervical stimulation in general lymphatic treatment. An adapted technique with intermittent compression therapy has also been used after breast cancer treatment. Lymphoscintigraphy, volumetry and bioimpedance were employed to analyze such treatment techniques applied to the upper and lower extremities. These treatment and evaluation topics are described in this brief report.

  18. Estimating monotonic rates from biological data using local linear regression.

    Science.gov (United States)

    Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R

    2017-03-01

    Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.
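    The package itself is in R; the sketch below is a much-simplified Python analogue of the underlying idea (not LoLinR's actual scoring, which combines several window metrics): fit straight lines to every sufficiently long contiguous window of a noisy time series and report the slope of the window whose fit has the smallest relative standard error. The toy trace and the window-selection rule are our own assumptions.

```python
import numpy as np

def best_local_slope(t, y, min_window=10):
    """Pick the window whose straight-line fit has the smallest relative standard error."""
    best = None
    for i in range(len(t) - min_window + 1):
        for j in range(i + min_window, len(t) + 1):
            slope, intercept = np.polyfit(t[i:j], y[i:j], 1)
            resid = y[i:j] - (slope * t[i:j] + intercept)
            se = resid.std(ddof=2) / np.sqrt(j - i)
            score = np.inf if slope == 0 else se / abs(slope)
            if best is None or score < best[0]:
                best = (score, slope, (i, j))
    return best[1], best[2]

# Toy respirometry-like trace: linear oxygen decline after a noisy settling phase
rng = np.random.default_rng(5)
t = np.arange(0.0, 60.0, 0.5)
y = 8.0 - 0.03 * t + 0.3 * np.exp(-t / 5.0) + rng.normal(0.0, 0.02, t.size)

rate, window = best_local_slope(t, y)
print(f"estimated rate: {rate:.4f} units/min over samples {window}")
```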

  19. Alternative Techniques for Cannulation of Biliary Strictures Resistant to the 0.035 System Following Living Donor Liver Transplantation

    International Nuclear Information System (INIS)

    Yoon, Hee Mang; Kim, Jin Hyoung; Ko, Gi Young; Song, Ho Young; Gwon, Dong Il; Sung, Kyu Bo

    2012-01-01

    To assess the clinical efficacy of alternative techniques for biliary stricture cannulation in patients undergoing living donor liver transplantation (LDLT), after cannulation failure with a conventional (0.035-inch guidewire) technique. Of 293 patients with biliary strictures after LDLT, 19 (6%) patients, 11 men and 8 women of mean age 48.5 years, had the failed cannulation of the stricture by conventional techniques. Recannulation was attempted by using two alternative methods, namely a micro-catheter set via percutaneous access and a snare (rendezvous) technique using percutaneous and endoscopic approaches. Strictures were successfully cannulated in 16 (84%) of the 19 patients. A microcatheter set was used in 12 and a snare technique in four patients. Stricture cannulation failed in the remaining three patients, who finally underwent surgical revision. Most technical failures using a conventional technique for biliary stricture cannulation after LDLT can be overcome by using a microcatheter set or a snare (rendezvous) technique.

  20. Alternative Techniques for Cannulation of Biliary Strictures Resistant to the 0.035 System Following Living Donor Liver Transplantation

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, Hee Mang; Kim, Jin Hyoung; Ko, Gi Young; Song, Ho Young; Gwon, Dong Il; Sung, Kyu Bo [Asan Medical Center, University of Ulsan College of Medicine, Seoul (Korea, Republic of)

    2012-03-15

    To assess the clinical efficacy of alternative techniques for biliary stricture cannulation in patients undergoing living donor liver transplantation (LDLT), after cannulation failure with a conventional (0.035-inch guidewire) technique. Of 293 patients with biliary strictures after LDLT, 19 (6%) patients, 11 men and 8 women of mean age 48.5 years, had the failed cannulation of the stricture by conventional techniques. Recannulation was attempted by using two alternative methods, namely a micro-catheter set via percutaneous access and a snare (rendezvous) technique using percutaneous and endoscopic approaches. Strictures were successfully cannulated in 16 (84%) of the 19 patients. A microcatheter set was used in 12 and a snare technique in four patients. Stricture cannulation failed in the remaining three patients, who finally underwent surgical revision. Most technical failures using a conventional technique for biliary stricture cannulation after LDLT can be overcome by using a microcatheter set or a snare (rendezvous) technique.

  1. Diggers failing to become diggers

    DEFF Research Database (Denmark)

    Jensen, Lars

    Mining has in recent years emerged as a national discourse in Australia as the combined result of the mining boom and national anxieties over the GFC featured prominently in references to Australia as a failed competitive state (the folding of manufacturing, where the closure of car factories pla...... to the broader issue of how mining relates to the question of the society Australia wants to be – on the scale from ecological sanctuary to global quarry....

  2. Independent checks of linear accelerators equipped with multileaf collimators

    International Nuclear Information System (INIS)

    Pavlikova, I.; Ekendahl, D.; Horakova, I.

    2005-01-01

    The National Radiation Protection Institute (NRPI) provides independent checks of therapeutic equipment as a part of state supervision. At the end of 2003, the audit was broadened to cover linear accelerators equipped with multileaf collimators (MLC). NRPI provides TLD postal audits and on-site independent checks. This contribution describes the tests for multileaf collimators and the intensity modulated radiation therapy (IMRT) technique that are carried out within the independent on-site check of linear accelerators. The character and type of tests that need to be performed for a multileaf collimator depend on the application technique. There are three basic applications of the MLC. The first, called 'static MLC', serves to replace conventional blocking or to adjust the field shape to match the beam's-eye-view projection of a planning target volume during an arc rotation of the x-ray beam; this procedure is called conformal radiotherapy. The most advanced technique with the MLC is intensity modulated radiation therapy. The dose can be delivered to the patient with IMRT in several different ways: dynamic MLC, segmented MLC and IMRT arc therapy. Independent audits represent an important instrument of quality assurance. The methodology for the independent check of static MLC was successfully verified on two types of accelerators: Varian and Elekta. Results from pilot measurements with dynamic MLC imply that the methodology is applicable to Varian accelerators. In the future, experience with other types of linear accelerators will contribute to renewing, modifying, and broadening the independent check methodology. (authors)

  3. Stability Analysis for Multi-Parameter Linear Periodic Systems

    DEFF Research Database (Denmark)

    Seyranian, A.P.; Solem, Frederik; Pedersen, Pauli

    1999-01-01

    This paper is devoted to stability analysis of general linear periodic systems depending on real parameters. The Floquet method and perturbation technique are the basis of the development. We start out with the first and higher-order derivatives of the Floquet matrix with respect to problem...

  4. Linear zonal atmospheric prediction for adaptive optics

    Science.gov (United States)

    McGuire, Patrick C.; Rhoadarmer, Troy A.; Coy, Hanna A.; Angel, J. Roger P.; Lloyd-Hart, Michael

    2000-07-01

    We compare linear zonal predictors of atmospheric turbulence for adaptive optics. Zonal prediction has the possible advantage of being able to interpret and utilize wind-velocity information from the wavefront sensor better than modal prediction. For simulated open-loop atmospheric data for a 2- meter 16-subaperture AO telescope with 5 millisecond prediction and a lookback of 4 slope-vectors, we find that Widrow-Hoff Delta-Rule training of linear nets and Back- Propagation training of non-linear multilayer neural networks is quite slow, getting stuck on plateaus or in local minima. Recursive Least Squares training of linear predictors is two orders of magnitude faster and it also converges to the solution with global minimum error. We have successfully implemented Amari's Adaptive Natural Gradient Learning (ANGL) technique for a linear zonal predictor, which premultiplies the Delta-Rule gradients with a matrix that orthogonalizes the parameter space and speeds up the training by two orders of magnitude, like the Recursive Least Squares predictor. This shows that the simple Widrow-Hoff Delta-Rule's slow convergence is not a fluke. In the case of bright guidestars, the ANGL, RLS, and standard matrix-inversion least-squares (MILS) algorithms all converge to the same global minimum linear total phase error (approximately 0.18 rad2), which is only approximately 5% higher than the spatial phase error (approximately 0.17 rad2), and is approximately 33% lower than the total 'naive' phase error without prediction (approximately 0.27 rad2). ANGL can, in principle, also be extended to make non-linear neural network training feasible for these large networks, with the potential to lower the predictor error below the linear predictor error. We will soon scale our linear work to the approximately 108-subaperture MMT AO system, both with simulations and real wavefront sensor data from prime focus.
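    The recursive-least-squares predictor that the passage singles out is simple to reproduce in a one-dimensional toy setting. The sketch below is our illustration, not the authors' 16-subaperture zonal predictor: it trains an RLS filter to predict the next sample of a slowly varying signal from a lookback of 4 samples. The forgetting factor, lookback and signal are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(3000)
signal = np.sin(2 * np.pi * t / 200) + 0.05 * rng.standard_normal(t.size)

lookback, lam = 4, 0.995            # filter length and forgetting factor
w = np.zeros(lookback)              # predictor weights
P = 1e3 * np.eye(lookback)          # inverse-correlation estimate
sq_err = []

for n in range(lookback, len(signal)):
    x = signal[n - lookback:n][::-1]        # most recent samples first
    e = signal[n] - w @ x                   # a-priori prediction error
    k = P @ x / (lam + x @ P @ x)           # RLS gain vector
    w = w + k * e
    P = (P - np.outer(k, x @ P)) / lam
    sq_err.append(e ** 2)

print("mean squared prediction error over the last 500 steps:", np.mean(sq_err[-500:]))
```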

  5. Privacy-Preserving Distributed Linear Regression on High-Dimensional Data

    Directory of Open Access Journals (Sweden)

    Gascón Adrià

    2017-10-01

    Full Text Available We propose privacy-preserving protocols for computing linear regression models, in the setting where the training dataset is vertically distributed among several parties. Our main contribution is a hybrid multi-party computation protocol that combines Yao’s garbled circuits with tailored protocols for computing inner products. Like many machine learning tasks, building a linear regression model involves solving a system of linear equations. We conduct a comprehensive evaluation and comparison of different techniques for securely performing this task, including a new Conjugate Gradient Descent (CGD algorithm. This algorithm is suitable for secure computation because it uses an efficient fixed-point representation of real numbers while maintaining accuracy and convergence rates comparable to what can be obtained with a classical solution using floating point numbers. Our technique improves on Nikolaenko et al.’s method for privacy-preserving ridge regression (S&P 2013, and can be used as a building block in other analyses. We implement a complete system and demonstrate that our approach is highly scalable, solving data analysis problems with one million records and one hundred features in less than one hour of total running time.
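    Stripped of the privacy machinery, the numerical core referred to above is a conjugate-gradient solve of the regression normal equations. The sketch below shows only that core in the clear (our simplification; the paper's garbled-circuit/fixed-point layer and the vertical partitioning are not reproduced), applied to synthetic data with a small ridge term added for conditioning.

```python
import numpy as np

def conjugate_gradient(A, b, iters=50, tol=1e-10):
    """Plain conjugate gradient for A w = b with A symmetric positive definite."""
    w = np.zeros_like(b)
    r = b - A @ w
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        w += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return w

rng = np.random.default_rng(7)
X = rng.standard_normal((1000, 20))
y = X @ rng.standard_normal(20) + 0.1 * rng.standard_normal(1000)

lam = 1e-3                                           # small ridge term for conditioning
w = conjugate_gradient(X.T @ X + lam * np.eye(20), X.T @ y)
print("max |w - lstsq|:", np.max(np.abs(w - np.linalg.lstsq(X, y, rcond=None)[0])))
```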

  6. Supervised linear dimensionality reduction with robust margins for object recognition

    Science.gov (United States)

    Dornaika, F.; Assoum, A.

    2013-01-01

    Linear Dimensionality Reduction (LDR) techniques have been increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques rely on the use of local margins in order to get good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explore the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial in order to get robust performance in the presence of outliers.

  7. Examination of a failed fifth wheel coupling

    CSIR Research Space (South Africa)

    Fernandes, PJL

    1998-03-01

    Full Text Available Examination of a fifth wheel coupling which had failed in service showed that it had been modified and that the operating handle had been moved from its original design position. This modification completely eliminated the safety device designed...

  8. Very Preterm Infants Failing CPAP Show Signs of Fatigue Immediately after Birth.

    Directory of Open Access Journals (Sweden)

    Melissa L Siew

    Full Text Available To investigate the differences in breathing pattern and effort in infants at birth who failed or succeeded on continuous positive airway pressure (CPAP) during the first 48 hours after birth. Respiratory function recordings of 32 preterm infants were reviewed, of which 15 infants with a gestational age of 28.6 (0.7) weeks failed CPAP and 17 infants with a GA of 30.1 (0.4) weeks did not fail CPAP. Frequency, duration and tidal volumes (VT) of expiratory holds (EHs), peak inspiratory flows, CPAP levels and FiO2 levels were analysed. EH incidence increased 9 ml/kg with higher peak inspiratory flows than CPAP-fail infants (71.8 ± 15.8 vs. 15.5 ± 5.2 ml/kg.s, p <0.05). CPAP-fail infants required higher FiO2 (0.31 ± 0.03 vs. 0.21 ± 0.01), higher CPAP pressures (6.62 ± 0.3 vs. 5.67 ± 0.26 cmH2O) and more positive pressure-delivered breaths (45 ± 12 vs. 19 ± 9%) (p <0.05). At 9-12 minutes after birth, CPAP-fail infants more commonly used lower VTs and required higher peak inspiratory flow rates while receiving greater respiratory support. VT was less variable and larger VT was infrequently used, reflecting early signs of fatigue.

  9. Integration of differential equations by the pseudo-linear (PL) approximation

    International Nuclear Information System (INIS)

    Bonalumi, Riccardo A.

    1998-01-01

    A new method of integrating differential equations, originating with a technique for approximate calculation of the integrals called the pseudo-linear (PL) procedure, is presented; the method is A-stable. This article contains the following examples: 1st order ordinary differential equations (ODEs), 2nd order linear ODEs, a stiff system of ODEs (neutron kinetics), and one-dimensional parabolic (diffusion) partial differential equations. In this latter case, the PL method coincides with the Crank-Nicolson method
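    Since the abstract notes that the PL method coincides with Crank-Nicolson for the one-dimensional diffusion equation, a Crank-Nicolson reference run is a useful sanity check. The sketch below is that standard scheme (not the PL procedure itself) for u_t = D u_xx with homogeneous Dirichlet boundaries, compared against the analytic decay of a sine initial profile; all grid and time-step values are arbitrary.

```python
import numpy as np

D, L, nx, nt, dt = 1.0, 1.0, 51, 200, 1e-3
dx = L / (nx - 1)
r = D * dt / (2 * dx ** 2)

x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)                     # initial condition, u(0)=u(L)=0

# Crank-Nicolson matrices on the interior nodes: A u^{n+1} = B u^n
n = nx - 2
A = np.diag(np.full(n, 1 + 2 * r)) + np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1)
B = np.diag(np.full(n, 1 - 2 * r)) + np.diag(np.full(n - 1, r), 1) + np.diag(np.full(n - 1, r), -1)

for _ in range(nt):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

exact = np.exp(-np.pi ** 2 * D * nt * dt) * np.sin(np.pi * x)
print("max error vs analytic solution:", np.max(np.abs(u - exact)))
```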

  10. The Jejunal Serosal Patch Procedure: A Successful Technique for ...

    African Journals Online (AJOL)

    Background: The selection of the most appropriate technique for the repair of peptic ulcer perforations, especially when the initial attempt at closure has failed, has been the concern of many surgeons. Since the experimental report regarding the jejunal serosal patch procedure by Koboldin in 1963, authors have reported its ...

  11. Hybrid Spectral Unmixing: Using Artificial Neural Networks for Linear/Non-Linear Switching

    Directory of Open Access Journals (Sweden)

    Asmau M. Ahmed

    2017-07-01

    Full Text Available Spectral unmixing is a key process in identifying the spectral signature of materials and quantifying their spatial distribution over an image. The linear model is expected to provide acceptable results when two assumptions are satisfied: (1) the mixing process occurs at the macroscopic level and (2) photons interact with a single material before reaching the sensor. However, these assumptions do not always hold and more complex nonlinear models are required. This study proposes a new hybrid method for switching between linear and nonlinear spectral unmixing of hyperspectral data based on artificial neural networks. The neural network was trained with parameters computed within a window around the pixel under consideration. These parameters represent the diversity of the neighboring pixels and are based on the Spectral Angular Distance, covariance and a non-linearity parameter. The endmembers were extracted using Vertex Component Analysis while the abundances were estimated using the method identified by the neural network (Vertex Component Analysis, Fully Constrained Least Squares, Polynomial Post-Nonlinear Mixing Model or Generalized Bilinear Model). Results show that the hybrid method performs better than each of the individual techniques, with high overall accuracy, while the abundance estimation error is significantly lower than that obtained using the individual methods. Experiments on both a synthetic dataset and real hyperspectral images demonstrated that the proposed hybrid switching method is efficient for spectral unmixing of hyperspectral images as compared to the individual algorithms.
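    One of the abundance estimators listed above, fully constrained least squares under the linear mixing model, can be approximated with a standard trick: append a heavily weighted sum-to-one row to a non-negative least-squares problem. The sketch below does exactly that on synthetic endmembers (our illustration; the neural-network switching, the VCA extraction and the nonlinear models from the paper are not reproduced).

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(8)
n_bands, n_end = 50, 3
E = np.abs(rng.standard_normal((n_bands, n_end)))       # synthetic endmember spectra (columns)
a_true = np.array([0.6, 0.3, 0.1])                      # true abundances
pixel = E @ a_true + rng.normal(0.0, 0.01, n_bands)     # linear mixture plus noise

delta = 1e2                                             # weight enforcing the sum-to-one constraint
E_aug = np.vstack([E, delta * np.ones((1, n_end))])
pixel_aug = np.append(pixel, delta)

a_hat, _ = nnls(E_aug, pixel_aug)                       # non-negativity handled by NNLS
print("estimated abundances:", np.round(a_hat, 3))
```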

  12. Computational linear and commutative algebra

    CERN Document Server

    Kreuzer, Martin

    2016-01-01

    This book combines, in a novel and general way, an extensive development of the theory of families of commuting matrices with applications to zero-dimensional commutative rings, primary decompositions and polynomial system solving. It integrates the Linear Algebra of the Third Millennium, developed exclusively here, with classical algorithmic and algebraic techniques. Even the experienced reader will be pleasantly surprised to discover new and unexpected aspects in a variety of subjects including eigenvalues and eigenspaces of linear maps, joint eigenspaces of commuting families of endomorphisms, multiplication maps of zero-dimensional affine algebras, computation of primary decompositions and maximal ideals, and solution of polynomial systems. This book completes a trilogy initiated by the uncharacteristically witty books Computational Commutative Algebra 1 and 2 by the same authors. The material treated here is not available in book form, and much of it is not available at all. The authors continue to prese...

  13. Linear versus non-linear supersymmetry, in general

    Energy Technology Data Exchange (ETDEWEB)

    Ferrara, Sergio [Theoretical Physics Department, CERN,CH-1211 Geneva 23 (Switzerland); INFN - Laboratori Nazionali di Frascati,Via Enrico Fermi 40, I-00044 Frascati (Italy); Department of Physics and Astronomy, UniversityC.L.A.,Los Angeles, CA 90095-1547 (United States); Kallosh, Renata [SITP and Department of Physics, Stanford University,Stanford, California 94305 (United States); Proeyen, Antoine Van [Institute for Theoretical Physics, Katholieke Universiteit Leuven,Celestijnenlaan 200D, B-3001 Leuven (Belgium); Wrase, Timm [Institute for Theoretical Physics, Technische Universität Wien,Wiedner Hauptstr. 8-10, A-1040 Vienna (Austria)

    2016-04-12

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models with all unconstrained superfields and linearly realized supersymmetry is presented here, in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM’s: chiral superfields, linear superfields, general complex superfields, some of them are multiplets with a spin.

  14. Linear versus non-linear supersymmetry, in general

    International Nuclear Information System (INIS)

    Ferrara, Sergio; Kallosh, Renata; Proeyen, Antoine Van; Wrase, Timm

    2016-01-01

    We study superconformal and supergravity models with constrained superfields. The underlying version of such models with all unconstrained superfields and linearly realized supersymmetry is presented here, in addition to the physical multiplets there are Lagrange multiplier (LM) superfields. Once the equations of motion for the LM superfields are solved, some of the physical superfields become constrained. The linear supersymmetry of the original models becomes non-linearly realized, its exact form can be deduced from the original linear supersymmetry. Known examples of constrained superfields are shown to require the following LM’s: chiral superfields, linear superfields, general complex superfields, some of them are multiplets with a spin.

  15. Ultrabroadband optical chirp linearization for precision metrology applications.

    Science.gov (United States)

    Roos, Peter A; Reibel, Randy R; Berg, Trenton; Kaylor, Brant; Barber, Zeb W; Babbitt, Wm Randall

    2009-12-01

    We demonstrate precise linearization of ultrabroadband laser frequency chirps via a fiber-based self-heterodyne technique to enable extremely high-resolution, frequency-modulated cw laser-radar (LADAR) and a wide range of other metrology applications. Our frequency chirps cover bandwidths up to nearly 5 THz with frequency errors as low as 170 kHz, relative to linearity. We show that this performance enables 31-μm transform-limited LADAR range resolution (FWHM) and 86 nm range precision over a 1.5 m range baseline. Much longer range baselines are possible but are limited by atmospheric turbulence and fiber dispersion.

  16. Resitting or Compensating a Failed Examination: Does It Affect Subsequent Results?

    Science.gov (United States)

    Arnold, Ivo

    2017-01-01

    Institutions of higher education commonly employ a conjunctive standard setting strategy, which requires students to resit failed examinations until they pass all tests. An alternative strategy allows students to compensate a failing grade with other test results. This paper uses regression discontinuity design to compare the effect of first-year…

  17. Hip arthroplasty in failed intertrochanteric fractures in elderly

    Directory of Open Access Journals (Sweden)

    Javahir A Pachore

    2013-01-01

    Full Text Available Background: Failed intertrochanteric fractures in elderly patients are a surgical challenge with limited options. Hip arthroplasty is a good salvage procedure even though it involves technical issues such as implant removal, bone loss, poor bone quality, trochanteric nonunion and difficulty of surgical exposure. Materials and Methods: 30 patients with failed intertrochanteric fractures in whom hip arthroplasty was done between May 2008 and December 2011 were included in the study. 13 were males and 17 were females, with an average age of 67.3 years. There were 2 cemented bipolar arthroplasties, 19 uncemented bipolar arthroplasties, 4 cemented total hip arthroplasties and 5 uncemented total hip arthroplasties. 16 patients had a trochanteric nonunion, which was treated by tension band principles. Total hip arthroplasty was considered where there was acetabular damage due to penetration of the implant. Results: The average followup was 20 months (range 6-48 months). None of the patients were lost to followup. There was no dislocation. All patients were ambulatory at the final followup. Conclusion: A predictable functional outcome can be achieved by hip arthroplasty in elderly patients with failed intertrochanteric fractures. Though technically demanding, properly performed hip arthroplasty can be a good salvage option for this patient group.

  18. Measurements of Rayleigh-Taylor-Induced Magnetic Fields in the Linear and Non-linear Regimes

    Science.gov (United States)

    Manuel, Mario

    2012-10-01

    Magnetic fields are generated in plasmas by the Biermann-battery, or thermoelectric, source driven by non-collinear temperature and density gradients. The ablation front in laser-irradiated targets is susceptible to Rayleigh-Taylor (RT) growth that produces gradients capable of generating magnetic fields. Measurements of these RT-induced magnetic fields in planar foils have been made using a combination of x-ray and monoenergetic-proton radiography techniques. At a perturbation wavelength of 120 μm, proton radiographs indicate an increase of the magnetic-field strength from ˜1 to ˜10 Tesla during the linear growth phase. A characteristic change in field structure was observed later in time for irradiated foils of different initial surface perturbations. Proton radiographs show a regular cellular configuration initiated at the same time during the drive, independent of the initial foil conditions. This non-linear behavior has been experimentally investigated and the source of these characteristic features will be discussed.

  19. Nonoscillation of half-linear dynamic equations

    Czech Academy of Sciences Publication Activity Database

    Matucci, S.; Řehák, Pavel

    2010-01-01

    Roč. 60, č. 5 (2010), s. 1421-1429 ISSN 0898-1221 R&D Projects: GA AV ČR KJB100190701 Grant - others:GA ČR(CZ) GA201/07/0145 Institutional research plan: CEZ:AV0Z10190503 Keywords : half-linear dynamic equation * time scale * (non)oscillation * Riccati technique Subject RIV: BA - General Mathematics Impact factor: 1.472, year: 2010 http://www.sciencedirect.com/science/article/pii/S0898122110004384

  20. Intrauterine adhesions as a risk factor for failed first-trimester pregnancy termination.

    Science.gov (United States)

    Luk, Janelle; Allen, Rebecca H; Schantz-Dunn, Julianna; Goldberg, Alisa B

    2007-10-01

    Risk factors for failed first-trimester surgical abortion include endometrial distortion caused by leiomyomas, uterine anomalies and malposition and cervical stenosis. This report introduces intrauterine adhesions as an additional risk factor. A multiparous woman presented for pregnancy termination at 6 weeks' gestation. Three suction-curettage attempts failed to remove what appeared to be an intrauterine pregnancy. Rising beta-hCG levels and concern for an interstitial ectopic pregnancy prompted a diagnostic laparoscopy and exploratory laparotomy without the identification of an ectopic pregnancy. After methotrexate treatment failed, the patient underwent ultrasound-guided hysteroscopy and suction curettage using a cannula with a whistle-cut aperture for the successful removal of a pregnancy implanted behind intrauterine adhesions. Intrauterine adhesions are a cause of failed surgical abortion. Ultrasound-guided hysteroscopy may be required for diagnosis.

  1. Post-deportation risks for failed asylum seekers

    Directory of Open Access Journals (Sweden)

    Jill Alpes

    2017-02-01

    Full Text Available What happens to people who are deported after their asylum applications have failed? Many who are deported are at risk of harm when they return to their country of origin but there is little monitoring done of deportation outcomes.

  2. Pixel multiplexing technique for real-time three-dimensional-imaging laser detection and ranging system using four linear-mode avalanche photodiodes

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Fan; Wang, Yuanqing, E-mail: yqwang@nju.edu.cn; Li, Fenfang [School of Electronic Science and Engineering, Nanjing University, Nanjing 210046 (China)

    2016-03-15

    The avalanche-photodiode-array (APD-array) laser detection and ranging (LADAR) system has been continually developed owing to its advantages of nonscanning operation, large field of view, high sensitivity, and high precision. However, achieving more efficient detection and better integration of the LADAR system for real-time three-dimensional (3D) imaging continues to be a problem. In this study, a novel LADAR system using four linear-mode APDs (LmAPDs) is developed for high-efficiency detection by adopting a modulation and multiplexing technique. Furthermore, an automatic control system for the array LADAR system is proposed and designed by applying the virtual instrumentation technique. The control system aims to achieve four functions: synchronization of laser emission and the rotating platform, multi-channel synchronous data acquisition, real-time Ethernet upper-level monitoring, and real-time signal processing and 3D visualization. The structure and principle of the complete system are described in the paper. The experimental results demonstrate that the LADAR system is capable of achieving real-time 3D imaging on an omnidirectional rotating platform under the control of the virtual instrumentation system. The automatic imaging LADAR system utilized only 4 LmAPDs to achieve 256-pixel-per-frame detection by employing a 64-bit demodulator. Moreover, the lateral resolution is ∼15 cm and the range accuracy is ∼4 cm root-mean-square error at a distance of ∼40 m.

  3. Modeling ionospheric foF2 response during geomagnetic storms using neural network and linear regression techniques

    Science.gov (United States)

    Tshisaphungo, Mpho; Habarulema, John Bosco; McKinnell, Lee-Anne

    2018-06-01

    In this paper, the modeling of the ionospheric foF2 changes during geomagnetic storms by means of neural network (NN) and linear regression (LR) techniques is presented. The results will lead to a valuable tool to model the complex ionospheric changes during disturbed days in an operational space weather monitoring and forecasting environment. The storm-time foF2 data during 1996-2014 from Grahamstown (33.3°S, 26.5°E), South Africa ionosonde station was used in modeling. In this paper, six storms were reserved to validate the models and hence not used in the modeling process. We found that the performance of both NN and LR models is comparable during selected storms which fell within the data period (1996-2014) used in modeling. However, when validated on storm periods beyond 1996-2014, the NN model gives a better performance (R = 0.62) compared to LR model (R = 0.56) for a storm that reached a minimum Dst index of -155 nT during 19-23 December 2015. We also found that both NN and LR models are capable of capturing the ionospheric foF2 responses during two great geomagnetic storms (28 October-1 November 2003 and 6-12 November 2004) which have been demonstrated to be difficult storms to model in previous studies.

  4. The Value of Failing in Career Development: A Chaos Theory Perspective

    Science.gov (United States)

    Pryor, Robert G. L.; Bright, James E. H.

    2012-01-01

    Failing is a neglected topic in career development theory and counselling practice. Most theories see failing as simply the opposite of success and something to be avoided. It is contended that the Chaos Theory of Careers, with its emphasis on complexity, uncertainty and consequent human limitations, provides a conceptually coherent account of…

  5. Linearized gyro-kinetic equation

    International Nuclear Information System (INIS)

    Catto, P.J.; Tsang, K.T.

    1976-01-01

    An ordering of the linearized Fokker-Planck equation is performed in which gyroradius corrections are retained to lowest order and the radial dependence appropriate for sheared magnetic fields is treated without resorting to a WKB technique. This description is shown to be necessary to obtain the proper radial dependence when the product of the poloidal wavenumber and the gyroradius is large (k rho much greater than 1). A like particle collision operator valid for arbitrary k rho also has been derived. In addition, neoclassical, drift, finite β (plasma pressure/magnetic pressure), and unperturbed toroidal electric field modifications are treated

  6. Identify too big to fail banks and capital insurance: An equilibrium approach

    OpenAIRE

    Katerina Ivanov

    2017-01-01

    The objective of this paper is to develop a rational expectations equilibrium model of capital insurance to identify too-big-to-fail banks. The main results of this model include: (1) too-big-to-fail banks can be identified explicitly by a systemic risk measure, the loss beta, computed for all banks in the entire financial sector; (2) the too-big-to-fail feature can be largely justified by a high level of loss beta; (3) the capital insurance proposal benefits market participants and reduces the systemic risk; ...

  7. Fail-safe computer-based plant protection systems

    International Nuclear Information System (INIS)

    Keats, A.B.

    1983-01-01

    A fail-safe mode of operation for computers used in nuclear reactor protection systems was first evolved in the UK for application to a sodium cooled fast reactor. The fail-safe properties of both the hardware and the software were achieved by permanently connecting test signals to some of the multiplexed inputs. This results in an unambiguous data pattern, each time the inputs are sequentially scanned by the multiplexer. The ''test inputs'' simulate transient excursions beyond defined safe limits. The alternating response of the trip algorithms to the ''out-of-limits'' test signals and the normal plant measurements is recognised by hardwired pattern recognition logic external to the computer system. For more general application to plant protection systems, a ''Test Signal Generator'' (TSG) is used to compute and generate test signals derived from prevailing operational conditions. The TSG, from its knowledge of the sensitivity of the trip algorithm to each of the input variables, generates a ''test disturbance'' which is superimposed upon each variable in turn, to simulate a transient excursion beyond the safe limits. The ''tripped'' status yielded by the trip algorithm when using data from a ''disturbed'' input forms part of a pattern determined by the order in which the disturbances are applied to the multiplexer inputs. The data pattern formed by the interleaved test disturbances is again recognised by logic external to the protection system's computers. This fail-safe mode of operation of computer-based protection systems provides a powerful defence against common-mode failure. It also reduces the importance of software verification in the licensing procedure. (author)

  8. Methods in half-linear asymptotic theory

    Directory of Open Access Journals (Sweden)

    Pavel Rehak

    2016-10-01

    Full Text Available We study the asymptotic behavior of eventually positive solutions of the second-order half-linear differential equation $$ (r(t)|y'|^{\alpha-1}\hbox{sgn}\, y')'=p(t)|y|^{\alpha-1}\hbox{sgn}\, y, $$ where $r(t)$ and $p(t)$ are positive continuous functions on $[a,\infty)$, $\alpha\in(1,\infty)$. The aim of this article is twofold. On the one hand, we show applications of a wide variety of tools, like the Karamata theory of regular variation, the de Haan theory, the Riccati technique, comparison theorems, the reciprocity principle, a certain transformation of dependent variable, and principal solutions. On the other hand, we solve open problems posed in the literature and generalize existing results. Most of our observations are new also in the linear case.

  9. SU-E-T-479: IMRT Plan Recalculation in Patient Based On Dynalog Data and the Effect of a Single Failing MLC Motor

    International Nuclear Information System (INIS)

    Morcos, M; Mitrou, E

    2015-01-01

    Purpose: Using Linac dynamic logs (Dynalogs) we evaluate the impact of a single failing MLC motor on the deliverability of an IMRT plan by assessing the recalculated dose volume histograms (DVHs) taking the delivered MLC positions and beam hold-offs into consideration. Methods: This is a retrospective study based on a deteriorating MLC motor (leaf 36B) which was observed to be failing via Dynalog analysis. To investigate further, Eclipse-importable MLC files were generated from Dynalogs to recalculate the actual delivered dose and to assess the clinical impact through DVHs. All deliveries were performed on a Varian 21EX linear accelerator equipped with Millennium-120 MLC. The analysis of Dynalog files and subsequent conversion to Eclipse-importable MLC files were all performed by in-house programming in Python. Effects on plan DVH are presented in the following section on a particular brain-IMRT plan which was delivered with a failing MLC motor which was then replaced. Results: Global max dose increased by 13.5%, max dose to the brainstem PRV increased by 8.2%, max dose to the optic chiasm increased by 7.6%, max dose to optic nerve increased by 8.8% and the mean dose to the PTV increased by 7.9% when comparing the original plan to the fraction with the failing MLC motor. The reason the dose increased was due to the failure being on the B-bank which is the lagging side on a sliding window delivery, therefore any failures on this side will cause an over-irradiation as the B-bank leaves struggles to keep the window from growing. Conclusion: Our findings suggest that a single failing MLC motor may jeopardize the entire delivery. This may be due to the bad MLC motor drawing too much current causing all MLCs on the same bank to underperform. This hypothesis will be investigated in a future study

  10. SU-E-T-479: IMRT Plan Recalculation in Patient Based On Dynalog Data and the Effect of a Single Failing MLC Motor

    Energy Technology Data Exchange (ETDEWEB)

    Morcos, M [Vantage Oncology, San Bernardino, CA (United States); Mitrou, E [Centre Hospitalier de l’Universite de Montreal, Montreal, QC (Canada)

    2015-06-15

    Purpose: Using Linac dynamic logs (Dynalogs) we evaluate the impact of a single failing MLC motor on the deliverability of an IMRT plan by assessing the recalculated dose volume histograms (DVHs) taking the delivered MLC positions and beam hold-offs into consideration. Methods: This is a retrospective study based on a deteriorating MLC motor (leaf 36B) which was observed to be failing via Dynalog analysis. To investigate further, Eclipse-importable MLC files were generated from Dynalogs to recalculate the actual delivered dose and to assess the clinical impact through DVHs. All deliveries were performed on a Varian 21EX linear accelerator equipped with Millennium-120 MLC. The analysis of Dynalog files and subsequent conversion to Eclipse-importable MLC files were all performed by in-house programming in Python. Effects on plan DVH are presented in the following section on a particular brain-IMRT plan which was delivered with a failing MLC motor which was then replaced. Results: Global max dose increased by 13.5%, max dose to the brainstem PRV increased by 8.2%, max dose to the optic chiasm increased by 7.6%, max dose to optic nerve increased by 8.8% and the mean dose to the PTV increased by 7.9% when comparing the original plan to the fraction with the failing MLC motor. The reason the dose increased was due to the failure being on the B-bank which is the lagging side on a sliding window delivery, therefore any failures on this side will cause an over-irradiation as the B-bank leaves struggles to keep the window from growing. Conclusion: Our findings suggest that a single failing MLC motor may jeopardize the entire delivery. This may be due to the bad MLC motor drawing too much current causing all MLCs on the same bank to underperform. This hypothesis will be investigated in a future study.
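    The records above hinge on comparing planned and delivered leaf positions extracted from the delivery logs. The sketch below is a hypothetical illustration of that comparison only: it assumes the planned and actual leaf trajectories have already been parsed into arrays (in millimetres) and flags leaves whose RMS tracking error exceeds a tolerance. The array shapes, the tolerance and the injected lag on one leaf are all invented, and no real Varian Dynalog fields or the Eclipse MLC-file format are reproduced.

```python
import numpy as np

def flag_lagging_leaves(planned_mm, actual_mm, tol_mm=1.0):
    """Return indices of leaves whose RMS position error exceeds tol_mm.
    planned_mm/actual_mm: arrays of shape (n_samples, n_leaves), already parsed from a log."""
    rms = np.sqrt(np.mean((planned_mm - actual_mm) ** 2, axis=0))
    return np.flatnonzero(rms > tol_mm), rms

rng = np.random.default_rng(9)
planned = np.cumsum(rng.uniform(0.0, 0.5, (500, 60)), axis=0)   # sliding-window style motion
actual = planned + rng.normal(0.0, 0.1, planned.shape)          # normal tracking jitter
actual[:, 35] -= np.linspace(0.0, 3.0, 500)                     # one leaf progressively lags

bad, rms = flag_lagging_leaves(planned, actual)
print("leaves exceeding tolerance (1-based):", bad + 1, "; worst RMS error [mm]:", round(float(rms.max()), 2))
```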

  11. Signals and transforms in linear systems analysis

    CERN Document Server

    Wasylkiwskyj, Wasyl

    2013-01-01

    Signals and Transforms in Linear Systems Analysis covers the subject of signals and transforms, particularly in the context of linear systems theory. Chapter 2 provides the theoretical background for the remainder of the text. Chapter 3 treats Fourier series and integrals. Particular attention is paid to convergence properties at step discontinuities. This includes the Gibbs phenomenon and its amelioration via the Fejer summation techniques. Special topics include modulation and analytic signal representation, Fourier transforms and analytic function theory, time-frequency analysis and frequency dispersion. Fundamentals of linear system theory for LTI analogue systems, with a brief account of time-varying systems, are covered in Chapter 4. Discrete systems are covered in Chapters 6 and 7. The Laplace transform treatment in Chapter 5 relies heavily on analytic function theory as does Chapter 8 on Z-transforms. The necessary background on complex variables is provided in Appendix A. This book is intended to...

  12. Merger incentives and the failing firm defense

    NARCIS (Netherlands)

    Bouckaert, J.M.C.; Kort, P.M.

    2014-01-01

    The merger incentives between profitable firms differ fundamentally from the incentives of a profitable firm to merge with a failing firm. We investigate these incentives under different modes of price competition and Cournot behavior. Our main finding is that firms strictly prefer exit of the

  13. A general technique for characterizing x-ray position sensitive arrays

    International Nuclear Information System (INIS)

    Dufresne, E.; Bruning, R.; Sutton, M.; Stephenson, G.B.

    1994-03-01

    We present a general statistical technique for characterizing x-ray sensitive linear diode arrays and CCD arrays. We apply this technique to characterize the response of a linear diode array, Princeton Instrument model X-PDA, and a virtual phase CCD array, TI 4849, to direct illumination by x-rays. We find that the response of the linear array is linearly proportional to the incident intensity and uniform over its length to within 2 %. Its quantum efficiency is 38 % for Cu Kα x-rays. The resolution function is evaluated from the spatial autocorrelation function and falls to 10 % of its peak value after one pixel. On the other hand, the response of the CCD detecting system to direct x-ray exposure is non-linear. To properly quantify the scattered x-rays, one must correct for the non-linearity. The resolution is two pixels along the serial transfer direction. We characterize the noise of the CCD and propose a model that takes into account the non-linearity and the resolution function to estimate the quantum efficiency of the detector. The quantum efficiency is 20 %
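    The statistical part of the procedure, estimating a resolution function from the spatial autocorrelation of intensity fluctuations, can be mimicked on synthetic data. The sketch below generates flat-field frames with Poisson noise blurred by an invented charge-sharing kernel, removes the fixed-pattern mean, and reports the normalised autocorrelation at small lags; none of the numbers correspond to the X-PDA or TI 4849 detectors in the paper.

```python
import numpy as np

rng = np.random.default_rng(10)
n_pix, n_frames = 1024, 200

# Synthetic flat-field frames: Poisson photon noise blurred by a charge-sharing kernel
kernel = np.exp(-0.5 * (np.arange(-4, 5) / 1.0) ** 2)
kernel /= kernel.sum()
frames = np.array([np.convolve(rng.poisson(500, n_pix), kernel, mode="same")
                   for _ in range(n_frames)], dtype=float)

fluct = frames - frames.mean(axis=0)          # remove the fixed-pattern (mean) response
ac = np.empty(6)
for lag in range(6):
    ac[lag] = np.mean(fluct[:, :n_pix - lag] * fluct[:, lag:])
ac /= ac[0]
print("normalised spatial autocorrelation, lags 0-5:", np.round(ac, 3))
```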

  14. A discrete homotopy perturbation method for non-linear Schrodinger equation

    Directory of Open Access Journals (Sweden)

    H. A. Wahab

    2015-12-01

    Full Text Available A general analysis is made by the homotopy perturbation method, taking advantage of the initial guess, the appearance of the embedding parameter, and different choices of the linear operator, to obtain an approximate solution to the non-linear Schrodinger equation. We do not depend upon the Adomian polynomials and find the linear forms of the components without these calculations. The discretised forms of the nonlinear Schrodinger equation allow us either to apply any numerical technique to the discretised forms or to proceed with a perturbation solution of the problem. The discretised forms obtained by the constructed homotopy provide the linear parts of the components of the solution series and hence a new discretised form is obtained. The general discretised form for the NLSE allows us to choose any initial guess and obtain the solution in closed form.

  15. More than 100 Colleges Fail Education Department's Test of Financial Strength

    Science.gov (United States)

    Blumenstyk, Goldie

    2009-01-01

    A newly compiled analysis by the U.S. Department of Education and obtained by "The Chronicle" shows that 114 private nonprofit degree-granting colleges were in such fragile financial condition at the end of their last fiscal year that they failed the department's financial-responsibility test. Colleges that fail the test are subject to extra…

  16. Gait recognition using kinect and locally linear embedding ...

    African Journals Online (AJOL)

    This paper presents the use of locally linear embedding (LLE) as feature extraction technique for classifying a person's identity based on their walking gait patterns. Skeleton data acquired from Microsoft Kinect camera were used as an input for (1). Multilayer Perceptron (MLP) and (2). LLE with MLP. The MLP classification ...

  17. Cool-season precipitation in the southwestern USA since AD 1000: comparison of linear and nonlinear techniques for reconstruction

    Science.gov (United States)

    Ni, Fenbiao; Cavazos, Tereza; Hughes, Malcolm K.; Comrie, Andrew C.; Funkhouser, Gary

    2002-11-01

    A 1000 year reconstruction of cool-season (November-April) precipitation was developed for each climate division in Arizona and New Mexico from a network of 19 tree-ring chronologies in the southwestern USA. Linear regression (LR) and artificial neural network (NN) models were used to identify the cool-season precipitation signal in tree rings. Using 1931-88 records, the stepwise LR model was cross-validated with a leave-one-out procedure and the NN was validated with a bootstrap technique. The final models were also independently validated using the 1896-1930 precipitation data. In most of the climate divisions, both techniques can successfully reconstruct dry and normal years, and the NN seems to capture large precipitation events and more variability better than the LR. In the 1000 year reconstructions the NN also produces more distinctive wet events and more variability, whereas the LR produces more distinctive dry events. The 1000 year reconstructed precipitation from the two models shows several sustained dry and wet periods comparable to the 1950s drought (e.g. 16th century mega drought) and to the post-1976 wet period (e.g. 1330s, 1610s). The impact of extreme periods on the environment may be stronger during sudden reversals from dry to wet, which were not uncommon throughout the millennium, such as the 1610s wet interval that followed the 16th century mega drought. The instrumental records suggest that strong dry to wet precipitation reversals in the past 1000 years might be linked to strong shifts from cold to warm El Niño-southern oscillation events and from a negative to positive Pacific decadal oscillation.
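    The leave-one-out validation of the linear model is straightforward to reproduce in miniature. The sketch below uses purely synthetic stand-in data (19 random "chronologies" over a 58-year calibration period, not the actual tree-ring or precipitation records) and reports the cross-validated correlation of a least-squares reconstruction; the neural-network comparison is omitted.

```python
import numpy as np

rng = np.random.default_rng(11)
n_years, n_chron = 58, 19                            # e.g. a 1931-88 calibration window, 19 chronologies
X = rng.standard_normal((n_years, n_chron))          # stand-in tree-ring indices
precip = X @ (0.3 * rng.standard_normal(n_chron)) + rng.normal(0.0, 0.5, n_years)

preds = np.zeros(n_years)
for i in range(n_years):                             # leave-one-out cross-validation
    keep = np.arange(n_years) != i
    design = np.column_stack([np.ones(keep.sum()), X[keep]])
    coef, *_ = np.linalg.lstsq(design, precip[keep], rcond=None)
    preds[i] = coef[0] + X[i] @ coef[1:]

r = np.corrcoef(preds, precip)[0, 1]
print(f"leave-one-out validation correlation: r = {r:.2f}")
```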

  18. Perceived adherence barriers among patients failing second-line ...

    African Journals Online (AJOL)

    Perceived adherence barriers among patients failing second-line antiretroviral therapy in Khayelitsha, South Africa. W Barnett, G Patten, B Kerschberger, K Conradie, DB Garone, G van Cutsem, CJ Colvin ...

  19. Rescue EUS-guided intrahepatic biliary drainage for malignant hilar biliary stricture after failed transpapillary re-intervention.

    Science.gov (United States)

    Minaga, Kosuke; Takenaka, Mamoru; Kitano, Masayuki; Chiba, Yasutaka; Imai, Hajime; Yamao, Kentaro; Kamata, Ken; Miyata, Takeshi; Omoto, Shunsuke; Sakurai, Toshiharu; Watanabe, Tomohiro; Nishida, Naoshi; Kudo, Masatoshi

    2017-11-01

    Treatment of unresectable malignant hilar biliary stricture (UMHBS) is challenging, especially after failure of repeated transpapillary endoscopic stenting. Endoscopic ultrasonography-guided intrahepatic biliary drainage (EUS-IBD) is a recent technique for intrahepatic biliary decompression, but indications for its use for complex hilar strictures have not been well studied. The aim of this study was to assess the feasibility and safety of EUS-IBD for UMHBS after failed transpapillary re-intervention. Retrospective analysis of all consecutive patients with UMHBS of Bismuth II grade or higher who, between December 2008 and May 2016, underwent EUS-IBD after failed repeated transpapillary interventions. The technical success, clinical success, and complication rates were evaluated. Factors associated with clinical ineffectiveness of EUS-IBD were explored. A total of 30 patients (19 women, median age 66 years [range 52-87]) underwent EUS-IBD for UMHBS during the study period. Hilar biliary stricture morphology was classified as Bismuth II, III, or IV in 5, 13, and 12 patients, respectively. The median number of preceding endoscopic interventions was 4 (range 2-14). EUS-IBD was required because the following procedures failed: duodenal scope insertion (n = 4), accessing the papilla after duodenal stent insertion (n = 5), or achieving desired intrahepatic biliary drainage (n = 21). Technical success with EUS-IBD was achieved in 29 of 30 patients (96.7%) and clinical success was attained in 22 of these 29 (75.9%). Mild peritonitis occurred in three of 30 (10%) and was managed conservatively. Stent dysfunction occurred in 23.3% (7/30). There was no procedure-related mortality. On multivariable analysis, Bismuth IV stricture predicted clinical ineffectiveness (odds ratio = 12.7, 95% CI 1.18-135.4, P = 0.035). EUS-IBD may be a feasible and effective rescue alternative with few major complications after failed transpapillary endoscopic re-intervention in patients

  20. PAPR reduction in FBMC using an ACE-based linear programming optimization

    Science.gov (United States)

    van der Neut, Nuan; Maharaj, Bodhaswar TJ; de Lange, Frederick; González, Gustavo J.; Gregorio, Fernando; Cousseau, Juan

    2014-12-01

    This paper presents four novel techniques for peak-to-average power ratio (PAPR) reduction in filter bank multicarrier (FBMC) modulation systems. The main contribution is to extend current active constellation extension (ACE) PAPR-reduction methods, as used in orthogonal frequency division multiplexing (OFDM), to an FBMC implementation. The four techniques fall into two groups: linear programming (LP) optimization ACE-based techniques and smart gradient-project (SGP) ACE techniques. The LP-based techniques compensate for the symbol overlaps by using a frame-based approach and provide a theoretical upper bound on the achievable performance of the overlapping ACE techniques. The overlapping ACE techniques, on the other hand, can handle symbol-by-symbol processing. Furthermore, as a result of FBMC properties, the proposed techniques do not require side-information transmission. The PAPR performance of the techniques is shown to match, or in some cases improve on, current PAPR techniques for FBMC. Initial analysis of the computational complexity of the SGP techniques indicates that the complexity issues with PAPR reduction in FBMC implementations can be addressed. The out-of-band interference introduced by the techniques is investigated, and it is shown that this interference can be compensated for while still maintaining decent PAPR performance. Additional results are provided by studying the PAPR reduction of the proposed techniques at a fixed clipping probability. The bit error rate (BER) degradation is investigated to ensure that the trade-off in terms of BER degradation is not too severe. As illustrated by exhaustive simulations, the proposed SGP ACE-based techniques are ideal candidates for practical implementation in systems employing the low-complexity polyphase implementation of FBMC modulators. The methods are shown to offer significant PAPR reduction and increase the feasibility of FBMC as
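
    For context, the sketch below illustrates the underlying ACE idea on plain OFDM rather than FBMC (handling FBMC's overlapping symbols is precisely what the paper contributes): time-domain peaks are clipped, and the resulting QPSK constellation points are only allowed to move outward from their nominal positions, so no side information is needed. The subcarrier count, clipping level, and iteration count are illustrative assumptions, not the paper's settings.

```python
# Minimal clip-and-project ACE sketch for an OFDM symbol with QPSK subcarriers.
import numpy as np

rng = np.random.default_rng(2)
n_subcarriers, clip_db, n_iters = 256, 4.0, 5

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Nominal QPSK symbols on each subcarrier
qpsk = (rng.choice([-1.0, 1.0], n_subcarriers)
        + 1j * rng.choice([-1.0, 1.0], n_subcarriers)) / np.sqrt(2)
print(f"PAPR before ACE: {papr_db(np.fft.ifft(qpsk)):.2f} dB")

X = qpsk.copy()
for _ in range(n_iters):                        # clip-and-project iterations
    x = np.fft.ifft(X)
    limit = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (clip_db / 20)
    x = np.where(np.abs(x) > limit, limit * x / np.abs(x), x)   # clip time-domain peaks
    C = np.fft.fft(x)
    # ACE constraint: each rail may only move outward from its nominal QPSK value,
    # so the distance to the decision boundaries never shrinks (no BER penalty there).
    ok_re = (np.sign(C.real) == np.sign(qpsk.real)) & (np.abs(C.real) >= np.abs(qpsk.real))
    ok_im = (np.sign(C.imag) == np.sign(qpsk.imag)) & (np.abs(C.imag) >= np.abs(qpsk.imag))
    X = np.where(ok_re, C.real, qpsk.real) + 1j * np.where(ok_im, C.imag, qpsk.imag)

print(f"PAPR after ACE:  {papr_db(np.fft.ifft(X)):.2f} dB")
```

    The paper's LP-based variants replace this simple projection with a linear program over a whole FBMC frame, which is how the symbol overlaps are accounted for and an upper performance bound is obtained.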