WorldWideScience

Sample records for problem requires corrective

  1. 49 CFR 40.208 - What problem requires corrective action but does not result in the cancellation of a test?

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false What problem requires corrective action but does not result in the cancellation of a test? 40.208 Section 40.208 Transportation Office of the Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems...

  2. Correcting environmental problems facing the nuclear weapons complex

    International Nuclear Information System (INIS)

    Rezendes, V.S.

    1990-06-01

    This report discusses DOE's efforts to correct the environmental problems facing the nuclear weapons complex. It focuses on three main points. First, the weapons complex faces a variety of serious and costly environmental problems. Second, during the past year, DOE has made some important changes to its organization that should help change its management focus from one that emphasizes materials production to one that more clearly focuses on environmental concerns. Third, because resolution of DOE's environmental problems will require considerable resources during a period of budgetary constraints, it is imperative that DOE have internal controls in place to ensure that resources are spent efficiently

  3. Developing Formal Correctness Properties from Natural Language Requirements

    Science.gov (United States)

    Nikora, Allen P.

    2006-01-01

    This viewgraph presentation reviews the rationale of a program to transform natural language specifications into formal notation; specifically, to automate the generation of Linear Temporal Logic (LTL) correctness properties from natural language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting certain types of specification and design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, specification languages and their associated tools have a high learning curve, and schedule and budget pressure on projects reduces training opportunities for engineers; and (4) formulating correctness properties for system models can be a difficult problem. This is relevant to NASA in that it would simplify the development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content for space mission software systems. The presentation also discusses potential applications, accomplishments, technology transfer potential, and next steps.
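
    As an illustration of the kind of LTL correctness property such a tool would generate (a sketch only; the proposition names are hypothetical and not taken from the presentation), a natural-language requirement such as "every request shall eventually be followed by a response" corresponds to

      G (request -> F response)

    where G ("globally") and F ("finally", i.e. eventually) are the standard LTL temporal operators: in every state, if request holds, then response holds in some later state.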

  4. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    Full Text Available The work highlights the most important principles of software reliability management (SRM). The SRM concept forms the basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. It applies a complexity metric to evaluate each requirement and a double-sorting technique that ranks requirements by priority and complexity. The method improves requirements correctness by identifying a higher number of defects with restricted resources. Practical application of the proposed method during requirements review yielded a tangible technical and economic effect.

  5. 49 CFR 40.205 - How are drug test problems corrected?

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false How are drug test problems corrected? 40.205 Section 40.205 Transportation Office of the Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests § 40.205 How are drug test problems...

  6. 49 CFR 40.271 - How are alcohol testing problems corrected?

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false How are alcohol testing problems corrected? 40.271 Section 40.271 Transportation Office of the Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Alcohol Testing § 40.271 How are alcohol testing...

  7. Requirements Elicitation Problems: A Literature Analysis

    Directory of Open Access Journals (Sweden)

    Bill Davey

    2015-06-01

    Full Text Available Requirements elicitation is the process through which analysts determine the software requirements of stakeholders. Requirements elicitation is seldom well done, and an inaccurate or incomplete understanding of user requirements has led to the downfall of many software projects. This paper proposes a classification of problem types that occur in requirements elicitation. The classification has been derived from a literature analysis. Papers reporting on techniques for improving requirements elicitation practice were examined for the problem the technique was designed to address. In each classification the most recent or prominent techniques for ameliorating the problems are presented. The classification allows the requirements engineer to be sensitive to problems as they arise and the educator to structure delivery of requirements elicitation training.

  8. Solving the simple plant location problem using a data correcting approach

    NARCIS (Netherlands)

    Goldengorin, B.; Tijssen, G.A.; Ghosh, D.; Sierksma, G.

    The Data Correcting Algorithm is a branch and bound type algorithm in which the data of a given problem instance is `corrected' at each branching in such a way that the new instance will be as close as possible to a polynomially solvable instance and the result satisfies an acceptable accuracy (the
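
    For readers unfamiliar with the underlying combinatorial problem, the sketch below simply enumerates every subset of candidate facilities for a tiny simple plant location instance; it illustrates the objective function the data correcting algorithm works on, not the algorithm itself, which uses branching and data correction precisely to avoid this exponential enumeration. The instance data are invented for illustration.

      from itertools import combinations

      def splp_bruteforce(fixed, cost):
          """Exhaustive solver for a tiny uncapacitated simple plant location instance.

          fixed[i]   : cost of opening facility i
          cost[i][j] : cost of serving client j from facility i
          Returns (best_total_cost, best_open_set).  Exponential in the number of
          facilities -- illustration only.
          """
          n_fac, n_cli = len(fixed), len(cost[0])
          best = (float("inf"), frozenset())
          for r in range(1, n_fac + 1):
              for subset in combinations(range(n_fac), r):
                  total = sum(fixed[i] for i in subset)
                  total += sum(min(cost[i][j] for i in subset) for j in range(n_cli))
                  if total < best[0]:
                      best = (total, frozenset(subset))
          return best

      # toy instance: 3 candidate sites, 4 clients
      fixed = [4.0, 3.0, 5.0]
      cost = [[2, 3, 4, 5],
              [4, 1, 3, 2],
              [5, 4, 1, 1]]
      print(splp_bruteforce(fixed, cost))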

  9. High-order multi-implicit spectral deferred correction methods for problems of reactive flow

    International Nuclear Information System (INIS)

    Bourlioux, Anne; Layton, Anita T.; Minion, Michael L.

    2003-01-01

    Models for reacting flow are typically based on advection-diffusion-reaction (A-D-R) partial differential equations. Many practical cases correspond to situations where the relevant time scales associated with each of the three sub-processes can be widely different, leading to disparate time-step requirements for robust and accurate time-integration. In particular, interesting regimes in combustion correspond to systems in which diffusion and reaction are much faster processes than advection. The numerical strategy introduced in this paper is a general procedure to account for this time-scale disparity. The proposed methods are high-order multi-implicit generalizations of spectral deferred correction methods (MISDC methods), constructed for the temporal integration of A-D-R equations. Spectral deferred correction methods compute a high-order approximation to the solution of a differential equation by using a simple, low-order numerical method to solve a series of correction equations, each of which increases the order of accuracy of the approximation. The key feature of MISDC methods is their flexibility in handling several sub-processes implicitly but independently, while avoiding the splitting errors present in traditional operator-splitting methods and also allowing for different time steps for each process. The stability, accuracy, and efficiency of MISDC methods are first analyzed using a linear model problem and the results are compared to semi-implicit spectral deferred correction methods. Furthermore, numerical tests on simplified reacting flows demonstrate the expected convergence rates for MISDC methods of orders three, four, and five. The gain in efficiency by independently controlling the sub-process time steps is illustrated for nonlinear problems, where reaction and diffusion are much stiffer than advection. Although the paper focuses on this specific time-scales ordering, the generalization to any ordering combination is straightforward
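
    The spectral deferred correction idea described above (a low-order provisional solution refined by correction sweeps that reuse a spectral quadrature of the right-hand side) can be illustrated with a minimal single-step sketch for a scalar ODE. The sweeps here are purely explicit, so this is not the multi-implicit MISDC scheme of the paper, only the underlying SDC building block.

      import numpy as np

      def sdc_step(f, y0, t0, t1, n_nodes=4, n_sweeps=3):
          """One SDC step for y' = f(t, y): forward-Euler provisional solution on
          equispaced substeps, then explicit correction sweeps; each sweep raises
          the formal order by one, up to the order of the quadrature."""
          t = np.linspace(t0, t1, n_nodes)
          # S[m, j] = integral of the j-th Lagrange basis polynomial over [t_m, t_{m+1}]
          S = np.zeros((n_nodes - 1, n_nodes))
          for j in range(n_nodes):
              others = np.delete(t, j)
              lagrange = np.poly(others) / np.prod(t[j] - others)
              antideriv = np.polyint(lagrange)
              for m in range(n_nodes - 1):
                  S[m, j] = np.polyval(antideriv, t[m + 1]) - np.polyval(antideriv, t[m])
          # provisional solution: forward Euler between the nodes
          y = np.zeros(n_nodes)
          y[0] = y0
          for m in range(n_nodes - 1):
              y[m + 1] = y[m] + (t[m + 1] - t[m]) * f(t[m], y[m])
          # correction sweeps
          for _ in range(n_sweeps):
              fk = np.array([f(tm, ym) for tm, ym in zip(t, y)])
              y_new = np.zeros_like(y)
              y_new[0] = y0
              for m in range(n_nodes - 1):
                  dt = t[m + 1] - t[m]
                  y_new[m + 1] = (y_new[m]
                                  + dt * (f(t[m], y_new[m]) - fk[m])  # low-order update of the error
                                  + S[m] @ fk)                        # spectral integral of f(t, y_old)
              y = y_new
          return y[-1]

      # usage: y' = -y on [0, 0.5]; exact answer exp(-0.5)
      print(sdc_step(lambda t, y: -y, 1.0, 0.0, 0.5), np.exp(-0.5))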

  10. 78 FR 29247 - Contractor Legal Management Requirements; Acquisition Regulations; Correction

    Science.gov (United States)

    2013-05-20

    ... DEPARTMENT OF ENERGY 48 CFR Part 952 RIN 1990-AA37 Contractor Legal Management Requirements; Acquisition Regulations; Correction AGENCY: Department of Energy. ACTION: Final rule; correction. SUMMARY: The... (78 FR 25795). In this document, DOE revised existing regulations covering contractor legal management...

  11. Differences in Visual Attention between Those Who Correctly and Incorrectly Answer Physics Problems

    Science.gov (United States)

    Madsen, Adrian M.; Larson, Adam M.; Loschky, Lester C.; Rebello, N. Sanjay

    2012-01-01

    This study investigated how visual attention differed between those who correctly versus incorrectly answered introductory physics problems. We recorded eye movements of 24 individuals on six different conceptual physics problems where the necessary information to solve the problem was contained in a diagram. The problems also contained areas…

  12. Problem-Oriented Requirements in Practice

    DEFF Research Database (Denmark)

    Lauesen, Søren

    2018-01-01

    [Context and motivation] Traditional requirements describe what the system shall do. This gives suppliers little freedom to use what they have already. In contrast, problem-oriented requirements describe the customer’s demands: what he wants to use the system for and which problems he wants to remove. The supplier specifies how his system will deal with these issues. The author developed the problem-oriented approach in 2007 on request from the Danish Government, and named it SL-07. [Question/problem] SL-07 has been used in many projects, usually with success. However, we had no detailed reports of the effects. [Principal ideas/results] This paper is a case study of SL-07 in the acquisition of a complex case-management system. The author wrote the requirements and managed the supplier selection. Next, he was asked to run the entire acquisition project, although he was a novice project manager. Some...

  13. Generalised Batho correction factor

    International Nuclear Information System (INIS)

    Siddon, R.L.

    1984-01-01

    There are various approximate algorithms available to calculate the radiation dose in the presence of a heterogeneous medium. The Webb and Fox product over layers formulation of the generalised Batho correction factor requires determination of the number of layers and the layer densities for each ray path. It has been shown that the Webb and Fox expression is inefficient for the heterogeneous medium which is expressed as regions of inhomogeneity rather than layers. The inefficiency of the layer formulation is identified as the repeated problem of determining for each ray path which inhomogeneity region corresponds to a particular layer. It has been shown that the formulation of the Batho correction factor as a product over inhomogeneity regions avoids that topological problem entirely. The formulation in terms of a product over regions simplifies the computer code and reduces the time required to calculate the Batho correction factor for the general heterogeneous medium. (U.K.)

  14. Dispatching power system for preventive and corrective voltage collapse problem in a deregulated power system

    Science.gov (United States)

    Alemadi, Nasser Ahmed

    generation capability to specific generators to allow a load flow solution to be obtained. The minimum control solvability problem can also obtain solution of the load flow without curtailing transactions that shed load and generation as recommended by VSSAD. A minimum control solvability problem will be implemented as a corrective control, that will achieve the above objectives by using minimum control changes. The control includes; (1) voltage setpoint on generator bus voltage terminals; (2) under load tap changer tap positions and switchable shunt capacitors; and (3) active generation at generator buses. The minimum control solvability problem uses the VSSAD recommendation to obtain the feasible stable starting point but completely eliminates the impossible or onerous recommendation made by VSSAD. This thesis reviews the capabilities of Voltage Stability Security Assessment and Diagnosis and how it can be used to implement a contingency selection module for the Open Access System Dispatch (OASYDIS). The OASYDIS will also use the corrective control computed by Security Constrained Dispatch. The corrective control would be computed off line and stored for each contingency that produces voltage instability. The control is triggered and implemented to correct the voltage instability in the agent experiencing voltage instability only after the equipment outage or operating changes predicted to produce voltage instability have occurred. The advantages and the requirements to implement the corrective control are also discussed.

  15. Differences in visual attention between those who correctly and incorrectly answer physics problems

    Directory of Open Access Journals (Sweden)

    N. Sanjay Rebello

    2012-05-01

    Full Text Available This study investigated how visual attention differed between those who correctly versus incorrectly answered introductory physics problems. We recorded eye movements of 24 individuals on six different conceptual physics problems where the necessary information to solve the problem was contained in a diagram. The problems also contained areas consistent with a novicelike response and areas of high perceptual salience. Participants ranged from those who had only taken one high school physics course to those who had completed a Physics Ph.D. We found that participants who answered correctly spent a higher percentage of time looking at the relevant areas of the diagram, and those who answered incorrectly spent a higher percentage of time looking in areas of the diagram consistent with a novicelike answer. Thus, when solving physics problems, top-down processing plays a key role in guiding visual selective attention either to thematically relevant areas or novicelike areas depending on the accuracy of a student’s physics knowledge. This result has implications for the use of visual cues to redirect individuals’ attention to relevant portions of the diagrams and may potentially influence the way they reason about these problems.

  16. Dijkstra's interpretation of the approach to solving a problem of program correctness

    Directory of Open Access Journals (Sweden)

    Markoski Branko

    2010-01-01

    Full Text Available Proving program correctness and designing correct programs are two connected theoretical problems of great practical importance. The first is solved within program analysis, and the second within program synthesis, although the two processes often intertwine because analysis and synthesis of programs are closely connected. Nevertheless, having in mind automated methods of proving correctness and methods of automatic program synthesis, the difference is easy to tell. This paper presents a denotational interpretation of the programming calculus, explaining semantics by formulae φ and ψ in such a way that they can be used to define state sets for a program P.
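
    A standard textbook example of such formulae (not taken from the paper itself): in Dijkstra's weakest-precondition calculus, the assignment x := x + 1 and the postcondition x > 0 are related by

      wp(x := x + 1, x > 0)  =  (x + 1 > 0)  =  (x > -1)

    so the predicate x > -1 characterizes exactly the set of states from which the assignment is guaranteed to establish x > 0.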

  17. 76 FR 16588 - Risk Management Requirements for Derivatives Clearing Organizations; Correction

    Science.gov (United States)

    2011-03-24

    ... COMMODITY FUTURES TRADING COMMISSION 17 CFR Part 39 RIN 3038-AC98 Risk Management Requirements for Derivatives Clearing Organizations; Correction AGENCY: Commodity Futures Trading Commission. ACTION: Notice of... Register of January 20, 2011, regarding Risk Management Requirements for Derivatives Clearing Organizations...

  18. Large radiative corrections to the effective potential and the gauge hierarchy problem

    International Nuclear Information System (INIS)

    Sachrajda, C.T.C.

    1982-01-01

    We study the higher order corrections to the effective potential in a simple toy model and in the SU(5) grand unified theory, with a view to seeing what their effects are on the stability equations, and hence on the gauge hierarchy problem for these theories. These corrections contain powers of log(v²/h²), where v and h are the large and small vacuum expectation values respectively, and hence cannot a priori be neglected. Nevertheless, after summing these large logarithms we find that the stability equations always contain two equations for v (i.e. these equations are independent of h) and hence can only be satisfied by a special (and hence unnatural) choice of parameters. This we claim is the precise statement of the gauge hierarchy problem. (orig.)

  19. Solving the simple plant location problem using a data correcting approach

    NARCIS (Netherlands)

    Goldengorin, Boris

    2001-01-01

    The Data Correcting Algorithm is a branch and bound algorithm in which the data of a given problem instance is ‘corrected’ at each branching in such a way that the new instance will be as close as possible to a polynomially solvable instance and the result satisfies an acceptable accuracy (the

  20. 76 FR 50481 - Announcement of Requirements and Registration for “Lifeline Facebook App Challenge”; Correction

    Science.gov (United States)

    2011-08-15

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Announcement of Requirements and Registration for ``Lifeline Facebook App Challenge''; Correction AGENCY: Office of the Assistant Secretary for Preparedness... Requirements and Registration for ``Lifeline Facebook App Challenge''. DATES: This correction is effective...

  1. Closed orbit related problems: Correction, feedback, and analysis

    International Nuclear Information System (INIS)

    Bozoki, E.S.

    1995-01-01

    Orbit correction - moving the orbit to a desired orbit, orbit stability - keeping the orbit on the desired orbit using feedback to filter out unwanted noise, and orbit analysis - to learn more about the model of the machine, are strongly interrelated. They are the three facets of the same problem. The better one knows the model of the machine, the better the predictions that can be made on the behavior of the machine (inverse modeling) and the more accurately one can control the machine. On the other hand, one of the tools to learn more about the machine (modeling) is to study and analyze the orbit response to "kicks."
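
    A minimal numerical sketch of the orbit-correction facet, assuming the usual linear response model in which the readings x at the beam position monitors respond to corrector kicks theta through a response matrix R, x = x0 + R @ theta. The matrix and the uncorrected orbit below are random stand-ins, not data from the paper; in practice R comes from the machine model or from measured responses, and the pseudo-inverse is usually replaced by an SVD with a singular-value cutoff.

      import numpy as np

      rng = np.random.default_rng(0)
      n_bpm, n_corr = 12, 6
      R = rng.normal(size=(n_bpm, n_corr))      # assumed response matrix (machine model)
      x0 = rng.normal(scale=1.0, size=n_bpm)    # measured uncorrected orbit

      theta = -np.linalg.pinv(R) @ x0           # least-squares corrector kicks
      x_corrected = x0 + R @ theta              # predicted corrected orbit

      print("rms orbit before:", np.sqrt(np.mean(x0**2)))
      print("rms orbit after :", np.sqrt(np.mean(x_corrected**2)))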

  2. 78 FR 47154 - Core Principles and Other Requirements for Swap Execution Facilities; Correction

    Science.gov (United States)

    2013-08-05

    ... COMMODITY FUTURES TRADING COMMISSION 17 CFR Part 37 RIN 3038-AD18 Core Principles and Other Requirements for Swap Execution Facilities; Correction AGENCY: Commodity Futures Trading Commission. ACTION... Principles [Corrected] 2. On page 33600, in the second column, under the heading Core Principle 3 of Section...

  3. Solving large instances of the quadratic cost of partition problem on dense graphs by data correcting algorithms

    NARCIS (Netherlands)

    Goldengorin, Boris; Vink, Marius de

    1999-01-01

    The Data-Correcting Algorithm (DCA) corrects the data of a hard problem instance in such a way that we obtain an instance of a well solvable special case. For a given prescribed accuracy of the solution, the DCA uses a branch and bound scheme to make sure that the solution of the corrected instance

  4. Illumination correction in psoriasis lesions images

    DEFF Research Database (Denmark)

    Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær

    2003-01-01

    An approach to automatically correct illumination problems in dermatological images is presented. The illumination function is estimated after combining the thematic map indicating skin-produced by an automated classification scheme- with the dermatological image data. The user is only required t...

  5. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    Science.gov (United States)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller size objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model
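
    A back-of-the-envelope sketch of why object size drives the need for attenuation correction in PET: along a line of response through a uniform water-equivalent object of diameter D, the fraction of coincident 511 keV photon pairs escaping unattenuated is roughly exp(-mu * D), independent of where on the line the annihilation occurred. The diameters and the attenuation coefficient below are approximate illustrative values, not the phantom definitions used in the study.

      import numpy as np

      MU_WATER_511KEV = 0.096   # approximate linear attenuation coefficient of water, 1/cm

      for label, diameter_cm in [("mouse", 3.0), ("rat", 6.0), ("rabbit", 12.0),
                                 ("dog", 25.0), ("human", 35.0)]:
          surviving = np.exp(-MU_WATER_511KEV * diameter_cm)
          print(f"{label:7s} D = {diameter_cm:5.1f} cm   unattenuated fraction ~ {surviving:.3f}")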

  6. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  7. 76 FR 58226 - Waiver of Citizenship Requirements for Crewmembers on Commercial Fishing Vessels; Correction

    Science.gov (United States)

    2011-09-20

    ... DEPARTMENT OF HOMELAND SECURITY Coast Guard 46 CFR Part 28 [Docket No. USCG-2011-0887] RIN 1625-AB61 Waiver of Citizenship Requirements for Crewmembers on Commercial Fishing Vessels; Correction... August 18, 2011, entitled ``Waiver of Citizenship Requirements for Crewmembers on Commercial Fishing...

  8. Problems in Operating a Drug Rehabilitation Center in an Adult Correctional Setting and Some Preventive Guidelines or Strategies.

    Science.gov (United States)

    Smith, S. Mae; And Others

    1979-01-01

    Some of the problems include differences in philosophy, nontherapeutic aspects of the prison environment, dependency on the prison environment, and unique staff problems. The authors conclude that changes can be made and effective treatment can exist within the correctional setting. (Author)

  9. 49 CFR 40.203 - What problems cause a drug test to be cancelled unless they are corrected?

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false What problems cause a drug test to be cancelled unless they are corrected? 40.203 Section 40.203 Transportation Office of the Secretary of Transportation PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Problems in Drug Tests § 40.203...

  10. Requirements Engineering as Creative Problem Solving: A Research Agenda for Idea Finding

    OpenAIRE

    Maiden, N.; Jones, S.; Karlsen, I. K.; Neill, R.; Zachos, K.; Milne, A.

    2010-01-01

    This vision paper frames requirements engineering as a creative problem solving process. Its purpose is to enable requirements researchers and practitioners to recruit relevant theories, models, techniques and tools from creative problem solving to understand and support requirements processes more effectively. It uses 4 drivers to motivate the case for requirements engineering as a creative problem solving process. It then maps established requirements activities onto one of the longest-esta...

  11. A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Simonetto, Andrea [Universite catholique de Louvain

    2017-07-25

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
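
    A minimal sketch of the prediction-correction idea for an unconstrained time-varying quadratic cost (illustrative only: the cost, step sizes and drift model are invented, and the prediction below solves a linear system, whereas the paper's first-order method is designed to avoid that inversion and to handle constraints).

      import numpy as np

      # cost at time t: f(x; t) = 0.5 * x' A x - b(t)' x, optimizer x*(t) = A^{-1} b(t)
      A = np.array([[3.0, 0.5],
                    [0.5, 2.0]])
      b = lambda t: np.array([np.sin(t), np.cos(0.5 * t)])
      db_dt = lambda t: np.array([np.cos(t), -0.5 * np.sin(0.5 * t)])

      h, alpha, n_steps = 0.1, 0.2, 200    # sampling period, correction step size, iterations
      x = np.zeros(2)
      for k in range(n_steps):
          t = k * h
          x = x + h * np.linalg.solve(A, db_dt(t))   # prediction: follow the optimizer drift
          grad = A @ x - b(t + h)                    # correction: gradient step on the new cost
          x = x - alpha * grad

      print("tracked :", x)
      print("optimal :", np.linalg.solve(A, b(n_steps * h)))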

  12. Correct safety requirements during the life cycle of heating plants; Korrekta saekerhetskrav under vaermeanlaeggningars livscykel

    Energy Technology Data Exchange (ETDEWEB)

    Tegehall, Jan; Hedberg, Johan [Swedish National Testing and Research Inst., Boraas (Sweden)

    2006-10-15

    The safety of old steam boilers or hot water generators is in principle based on electromechanical components which are generally easy to understand. The use of safety-PLC is a new and flexible way to design a safe system. A programmable system offers more degrees of freedom and consequently new problems may arise. As a result, new standards which use the Safety Integrity Level (SIL) concept for the level of safety have been elaborated. The goal is to define a way of working to handle requirements on safety in control systems of heat and power plants. SIL-requirements are relatively new within the domain and there is a need for guidance to be able to follow the requirements. The target of this report is the people who work with safety questions during new construction, reconstruction, or modification of furnace plants. In the work, the Pressure Equipment Directive, 97/23/EC, as well as standards which use the SIL concept have been studied. Additionally, standards for water-tube boilers have been studied. The focus has been on the safety systems (safety functions) which are used in water-tube boilers for heat and power plants; other systems, which are parts of these boilers, have not been considered. Guidance has been given for the aforementioned standards as well as safety requirements specification and risk analysis. An old hot water generator and a relatively new steam boiler have been used as case studies. The design principles and safety functions of the furnaces have been described. During the risk analysis important hazards were identified. A method for performing a risk analysis has been described and the appropriate content of a safety requirements specification has been defined. If a heat or power plant is constructed, modified, or reconstructed, a safety life cycle shall be followed. The purpose of the safety life cycle is to plan, describe, document, perform, check, test, and validate that everything is correctly done. The components of the safety

  13. Problem in the surgical correction of long-face with vertical open bite

    Directory of Open Access Journals (Sweden)

    Coen Pramono D

    2005-12-01

    Full Text Available Long-face cases usually require both orthodontic treatment and surgery. The problems encountered in correcting a long face may be related to complicating factors such as crowded teeth and excessive vertical height. A Class III malocclusion and an excessive open bite may also accompany a long face. This situation can worsen facial aesthetics and increase the difficulty of orthodontic treatment. The orthodontic approach is oriented toward positioning the teeth pre-surgically to facilitate the surgical plan. A mandible that has grown downward in the region of the mandibular angle produces an extreme vertical open bite. The maxilla usually presents with hypoplasia. Double-jaw surgery was performed, as correction of the lower jaw alone would produce a flattened facial appearance and difficulty in repositioning the mandible to achieve a good facial result. Several cephalometric points were measured to observe the progress of the facial situation after surgery. Two cases of long face are reported; the same surgical treatment was performed in both and showed different results.

  14. Requirements for qualification of manufacture of the ITER Central Solenoid and Correction Coils

    Energy Technology Data Exchange (ETDEWEB)

    Libeyre, Paul, E-mail: paul.libeyre@iter.org [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul lez Durance (France); Li, Hongwei [ITER China, 15B Fuxing Road, Beijing 100862 (China); Reiersen, Wayne [US ITER Project Office, 1055 Commerce Park Dr., Oak Ridge, TN 37831 (United States); Dolgetta, Nello; Jong, Cornelis; Lyraud, Charles; Mitchell, Neil; Laurenti, Adamo [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul lez Durance (France); Sgobba, Stefano [CERN, CH-1211 Genève 23 (Switzerland); Turck, Bernard [ITER Organization, Route de Vinon sur Verdon, 13115 St Paul lez Durance (France); Martovetsky, Nicolai; Everitt, David; Freudenberg, K.; Litherland, Steve; Rosenblad, Peter [US ITER Project Office, 1055 Commerce Park Dr., Oak Ridge, TN 37831 (United States); Smith, John; Spitzer, Jeff [General Atomics, P.O. Box 85608, San Diego, CA 92186-5608 (United States); Wei, Jing; Dong, Xiaoyu; Fang, Chao [ASIPP, Shushan Hu Road 350, Hefei, Anhui 230031 (China); and others

    2015-10-15

    Highlights: • A manufacturing line is installed for the ITER Correction Coils. • A manufacturing line is under installation for the ITER Central Solenoid. • Qualification of the manufacturing procedures has started for both manufacturing lines and acceptance criteria set. • Winding procedure of Correction Coils is qualified. - Abstract: The manufacturing line of the ITER Correction Coils (CC) at ASIPP in Hefei (China) was completed in 2013 and the manufacturing line of the ITER Central Solenoid (CS) modules is under installation at General Atomic premises in Poway (USA). In both cases, before starting production of the first coils, qualification of the manufacturing procedures is achieved by the construction of a set of mock-ups and prototypes to demonstrate that design requirements defined by the ITER Organization are effectively met. For each qualification item, the corresponding mock-ups are presented with the tests to be performed and the related acceptance criteria. The first qualification results are discussed.

  15. Centrifuge Modelling of Two Civil-Environmental Problems

    National Research Council Canada - National Science Library

    Goodings, Deborah

    2001-01-01

    Research Problem 1: Frost heave and thaw induced settlement in silt and silty clay developing over a year have been modelled correctly using a geotechnical centrifuge with tests requiring less than a day...

  16. Renormalization group in the theory of fully developed turbulence. Problem of the infrared relevant corrections to the Navier-Stokes equation

    International Nuclear Information System (INIS)

    Antonov, N.V.; Borisenok, S.V.; Girina, V.I.

    1996-01-01

    Within the framework of the renormalization group approach to the theory of fully developed turbulence we consider the problem of possible IR relevant corrections to the Navier-Stokes equation. We formulate an exact criterion of the actual IR relevance of the corrections. In accordance with this criterion we verify the IR relevance for certain classes of composite operators. 17 refs., 2 tabs

  17. Logarithmic corrections of the two-body QED problem

    International Nuclear Information System (INIS)

    Khriplovich, I.B.; Mil'shtejn, A.I.; Elkhovskij, A.S.

    1992-01-01

    The logarithmic part of the Lamb shift, the contribution of the relative order α³log(1/α) to the atomic state energy, is related to the usual infrared divergence. For positronium, the calculated logarithmic correction does not vanish only in n³S₁ states and constitutes (5/24)mα⁶log(1/α)/n³. Logarithmic corrections of the relative order α²log(1/α) to the positronium decay rate are also of relativistic origin and can be easily computed within the same approach. 31 refs.; 11 figs.

  18. Corrective Jaw Surgery

    Medline Plus

    Full Text Available Orthognathic surgery is performed to correct the misalignment ...

  19. Semantics and correctness proofs for programs with partial functions

    International Nuclear Information System (INIS)

    Yakhnis, A.; Yakhnis, V.

    1996-01-01

    This paper presents a portion of the work on specification, design, and implementation of safety-critical systems such as reactor control systems. A natural approach to this problem, once all the requirements are captured, would be to state the requirements formally and then either to prove (preferably via automated tools) that the system conforms to spec (program verification), or to try to simultaneously generate the system and a mathematical proof that the requirements are being met (program derivation). An obstacle to this is the frequent presence of partially defined operations within the software and its specifications. Indeed, the usual proofs via first order logic presuppose everywhere defined operations. Recognizing this problem, David Gries, in "The Science of Programming," 1981, introduced the concept of partial functions into the mainstream of program correctness and gave hints how his treatment of partial functions could be formalized. Still, however, existing theorem provers and software verifiers have difficulties in checking software with partial functions, because of the absence of a uniform first order treatment of partial functions within classical 2-valued logic. Several rigorous mechanisms that took partiality into account were introduced [Wirsing 1990, Breu 1991, VDM 1986, 1990, etc.]. However, they either did not discuss correctness proofs or departed from first order logic. To fill this gap, the authors provide a semantics for software correctness proofs with partial functions within classical 2-valued first order logic. They formalize the Gries treatment of partial functions and also cover computations of functions whose argument lists may be only partially available. An example is nuclear reactor control relying on sensors which may fail to deliver sense data. This approach is sufficiently general to cover correctness proofs in various implementation languages.
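
    A small illustration of the underlying issue (a sketch, not the paper's formalism): a partially defined operation such as division is made usable in a correctness proof by attaching an explicit precondition; the proof obligation then falls on every caller to establish that precondition before the call.

      def safe_div(num: float, den: float) -> float:
          """Partial operation: only defined when den != 0.
          Precondition: den != 0 (the caller must establish this)."""
          assert den != 0, "precondition violated: division undefined for den == 0"
          return num / den

      def average_rate(total: float, count: int) -> float:
          # The caller discharges the precondition explicitly before the call.
          if count == 0:
              return 0.0            # defined fallback chosen by the specification
          return safe_div(total, count)

      print(average_rate(10.0, 4), average_rate(10.0, 0))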

  20. The ρ - ω mass difference in a relativistic potential model with pion corrections

    International Nuclear Information System (INIS)

    Palladino, B.E.; Ferreira, P.L.

    1988-01-01

    The problem of the ρ - ω mass difference is studied in the framework of the relativistic, harmonic, S+V independent quark model implemented by center-of-mass, one-gluon exchange and pion-cloud corrections stemming from the requirement of chiral symmetry in the (u,d) SU(2) flavour sector of the model. The pionic self-energy corrections with different intermediate energy states are instrumental in the analysis of the problem, which requires an appropriate parametrization of the mesonic sector different from that previously used to calculate the mass spectrum of the S-wave baryons. The right ρ - ω mass splitting is found, together with a satisfactory value for the mass of the pion, calculated as a bound state of a quark-antiquark pair. An analogous discussion based on the cloudy-bag model is also presented. (author)

  1. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    Science.gov (United States)

    Kilcrease, D. P.; Brookes, S.

    2013-12-01

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross-sections for ions is to use the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross-sections that employs the Elwert-Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross-section similar to that obtained with the more time-consuming Coulomb Born approximation. We compare this new approximation with other, often employed correction procedures. We also look at some further modifications to our Born-Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb Born approximation for singly charged ions, which more accurately approximates convergent close coupling calculations.
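
    For orientation, one commonly used form of the Elwert-Sommerfeld factor multiplies the plane-wave Born cross section by the ratio of Coulomb enhancement factors for the incoming and outgoing electron,

      sigma ~ f_E * sigma_Born
      f_E = (eta_f / eta_i) * (1 - exp(-2*pi*eta_i)) / (1 - exp(-2*pi*eta_f))
      eta_{i,f} = Z_eff * alpha * c / v_{i,f}

    where v_i and v_f are the electron speeds before and after the collision. This is the textbook form of the factor; the exact prescription used in the paper (and its combination with Kim's Coulomb Born correction) may differ in detail.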

  2. The effect of assessment form to the ability of student to answer the problem correctly

    Directory of Open Access Journals (Sweden)

    Arifian Dimas

    2017-02-01

    Full Text Available Assessment is an important part of education. For educators, it is a means of collecting information about students' learning and about the learning process. For students, assessment is the process of informing them about the progress of their learning. An effective assessment process is responsive to the strengths, needs and clearly articulated learning objectives of students. This research aimed to determine the effect of assessment form on students' ability to answer problems correctly in the kinematics and dynamics of motion. The method used in this research is descriptive qualitative. The data collection methods are an assessment test and interviews. The assessment instruments are a written test and an animation-form test. The questions were taken from the "Force Concept Inventory" on kinematics and dynamics concepts. The sample was 36 sixth-semester students of the Physics Undergraduate Department at Sebelas Maret University. The results show that for kinematics concepts more students answered correctly when the test was presented in animation form, but for dynamics concepts the conventional test was better.

  3. Is Necessary Attenuation Correction for Cat Brain PET?

    International Nuclear Information System (INIS)

    Kim, Jin Su; Lee, Jae Sung; Park, Min Hyun; Im, Ki Chun; Oh, Seung Ha; Lee, Dong Soo; Moon, Dae Hyuk

    2007-01-01

    Photon attenuation and scatter corrections (AC and SC) are necessary for quantification in human PET. However, there is no consensus on whether AC and SC are necessary for cat brain PET imaging. Since post-injection transmission (TX) PET scans are not permitted or provided to microPET scanner users at present, additional time for performing a TX scan and awaiting FDG uptake is required for attenuation and scatter correction. The increased probability of subject movement and the possible biological effects of long-term anesthesia are drawbacks of an additional TX scan. The aim of this study was to examine the effect of AC and SC on the quantification of cat brain PET data.

  4. Scalable effective-temperature reduction for quantum annealers via nested quantum annealing correction

    Science.gov (United States)

    Vinci, Walter; Lidar, Daniel A.

    2018-02-01

    Nested quantum annealing correction (NQAC) is an error-correcting scheme for quantum annealing that allows for the encoding of a logical qubit into an arbitrarily large number of physical qubits. The encoding replaces each logical qubit by a complete graph of degree C. The nesting level C represents the distance of the error-correcting code and controls the amount of protection against thermal and control errors. Theoretical mean-field analyses and empirical data obtained with a D-Wave Two quantum annealer (supporting up to 512 qubits) showed that NQAC has the potential to achieve a scalable effective-temperature reduction, Teff ∼ C^(-η) with η > 0, relative to the physical temperature of a quantum annealer. Such effective-temperature reduction is relevant for machine-learning applications. Since we demonstrate that NQAC achieves error correction via a reduction of the effective temperature of the quantum annealing device, our results address the problem of the "temperature scaling law for quantum annealers," which requires the temperature of quantum annealers to be reduced as problems of larger size are attempted.

  5. Self-correcting Multigrid Solver

    International Nuclear Information System (INIS)

    Lewandowski, Jerome L.V.

    2004-01-01

    A new multigrid algorithm based on the method of self-correction for the solution of elliptic problems is described. The method exploits information contained in the residual to dynamically modify the source term (right-hand side) of the elliptic problem. It is shown that the self-correcting solver is more efficient at damping the short wavelength modes of the algebraic error than its standard equivalent. When used in conjunction with a multigrid method, the resulting solver displays an improved convergence rate with no additional computational work
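
    For context, the sketch below is the standard two-grid correction for the 1-D Poisson problem: smooth, restrict the residual, solve the coarse error equation, interpolate and correct, smooth again. It is not the paper's self-correcting variant (which additionally uses the residual to modify the source term); it only shows the multigrid building block such a method starts from.

      import numpy as np

      def poisson_matrix(n):
          """Standard 3-point finite-difference matrix for -u'' on (0,1), u(0)=u(1)=0."""
          h = 1.0 / (n + 1)
          return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

      def weighted_jacobi(A, u, f, sweeps=3, w=2.0 / 3.0):
          d_inv = 1.0 / np.diag(A)
          for _ in range(sweeps):
              u = u + w * d_inv * (f - A @ u)
          return u

      def two_grid(A, u, f):
          n = len(f)
          u = weighted_jacobi(A, u, f)                              # pre-smoothing
          r = f - A @ u                                             # fine-grid residual
          rc = 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])       # full-weighting restriction
          ec = np.linalg.solve(poisson_matrix(len(rc)), rc)         # coarse-grid error equation
          e = np.zeros(n)                                           # linear interpolation back
          e[1:-1:2] = ec
          e[0:-2:2] += 0.5 * ec
          e[2::2] += 0.5 * ec
          return weighted_jacobi(A, u + e, f)                       # correct + post-smoothing

      n = 63                                    # the coarse grid then has 31 points
      A = poisson_matrix(n)
      x = np.linspace(0.0, 1.0, n + 2)[1:-1]
      f = np.pi**2 * np.sin(np.pi * x)          # exact solution sin(pi x)
      u = np.zeros(n)
      for _ in range(10):
          u = two_grid(A, u, f)
      print("max error vs exact solution:", np.max(np.abs(u - np.sin(np.pi * x))))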

  6. Corrective agricultural actions: ecological bases and problems relating to their implementation

    International Nuclear Information System (INIS)

    Vandecasteele, C.M.; Burton, O.; Kirchmann, R.

    1997-01-01

    Several types of corrective actions, more or less scientific or empirical, have been implemented with the aim of limiting the contamination of products ingested by humans or animals. Although based on scientific grounds, a significant number of measures appear inapplicable or too expensive to put into effect in real situations. Generally, preference should be given to corrective actions whose application relies not on new technologies, which require specific checking periods before becoming operative, but on currently available materials and machines. Better results may often be obtained by resorting to combinations of measures run either simultaneously or sequentially. The efficiency of directives may vary depending on the conditions of implementation and may sometimes be accompanied by undesirable side effects. For instance, lime used in excess may entail precipitation of micro-nutrients and induce deficiencies in the plants and animals nourished with deficient forage; substantial fertilization of a semi-natural system may result in profound modifications of the ecosystems. It is worth noting that certain measures are irreversible, or almost so, and that the situation can hardly be restored if these measures were not rationally applied. The sections of the paper deal with: direct and indirect contamination of vegetation, radioactivity transfer to animals, influence of the chemical properties of the radionuclides, influence of chemical species, influence of the alimentary regime, species idiosyncrasy, physiological parameters, limiting the contamination of animal products, and food processing.

  7. Qubits in phase space: Wigner-function approach to quantum-error correction and the mean-king problem

    International Nuclear Information System (INIS)

    Paz, Juan Pablo; Roncaglia, Augusto Jose; Saraceno, Marcos

    2005-01-01

    We analyze and further develop a method to represent the quantum state of a system of n qubits in a phase-space grid of N×N points (where N = 2ⁿ). The method, which was recently proposed by Wootters and co-workers (Gibbons et al., Phys. Rev. A 70, 062101 (2004)), is based on the use of the elements of the finite field GF(2ⁿ) to label the phase-space axes. We present a self-contained overview of the method, we give insights into some of its features, and we apply it to investigate problems which are of interest for quantum-information theory: We analyze the phase-space representation of stabilizer states and quantum error-correction codes and present a phase-space solution to the so-called mean king problem.

  8. Algebraic reasoning and bat-and-ball problem variants: Solving isomorphic algebra first facilitates problem solving later.

    Science.gov (United States)

    Hoover, Jerome D; Healy, Alice F

    2017-12-01

    The classic bat-and-ball problem is used widely to measure biased and correct reasoning in decision-making. University students overwhelmingly tend to provide the biased answer to this problem. To what extent might reasoners be led to modify their judgement, and, more specifically, is it possible to facilitate problem solution by prompting participants to consider the problem from an algebraic perspective? One hundred ninety-seven participants were recruited to investigate the effect of algebraic cueing as a debiasing strategy on variants of the bat-and-ball problem. Participants who were cued to consider the problem algebraically were significantly more likely to answer correctly relative to control participants. Most of this cueing effect was confined to a condition that required participants to solve isomorphic algebra equations corresponding to the structure of bat-and-ball question types. On a subsequent critical question with differing item and dollar amounts presented without a cue, participants were able to generalize the learned information to significantly reduce overall bias. Math anxiety was also found to be significantly related to bat-and-ball problem accuracy. These results suggest that, under specific conditions, algebraic reasoning is an effective debiasing strategy on bat-and-ball problem variants, and provide the first documented evidence for the influence of math anxiety on Cognitive Reflection Test performance.
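
    For reference, the classic problem reads: a bat and a ball cost $1.10 in total, and the bat costs $1.00 more than the ball; how much does the ball cost? The intuitive (biased) answer is 10 cents; the algebraic route that the cueing manipulation encourages is

      ball + (ball + 1.00) = 1.10
      2 * ball = 0.10
      ball = 0.05

    i.e., the ball costs 5 cents and the bat $1.05.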

  9. FORMATION OF TOILET SKILLS IN CHILDREN IN RUSSIA. PROBLEM ANALYSIS

    Directory of Open Access Journals (Sweden)

    G. A. Karkashadze

    2012-01-01

    Full Text Available The article is devoted to one of the most pressing and widely discussed problems, not only in pediatrics but also in pedagogy and psychology: toilet skills training for children. The authors formulate a number of tasks required to solve the problem of correct toilet training, and the questions discussed include: at what age should formation of this skill properly begin, and what constitutes conscious use of the potty? In addition, the problem of toilet training is considered from the viewpoints of different parties: doctors, as well as parents, manufacturers, and the law. The important role of professional organizations in solving this problem at the state level is shown, as well as the need for uniform terminology whose meaning is understood in the same way by all stakeholders.

  10. High order field-to-field corrections for imaging and overlay to achieve sub 20-nm lithography requirements

    Science.gov (United States)

    Mulkens, Jan; Kubis, Michael; Hinnen, Paul; de Graaf, Roelof; van der Laan, Hans; Padiy, Alexander; Menchtchikov, Boris

    2013-04-01

    Immersion lithography is being extended to the 20-nm and 14-nm nodes, and the lithography performance requirements need to be tightened further to enable this shrink. In this paper we present an integral method to enable high-order field-to-field corrections for both imaging and overlay, and we show that this method improves performance by 20% - 50%. The lithography architecture we build for these higher order corrections connects the dynamic scanner actuators with the angle resolved scatterometer via a separate application server. Improvements of CD uniformity are based on enabling the use of the freeform intra-field dose actuator and field-to-field control of focus. The feedback control loop uses CD and focus targets placed on the production mask. For the overlay metrology we use small in-die diffraction based overlay targets. Improvements of overlay are based on using the high order intra-field correction actuators on a field-to-field basis. We use this to reduce the machine matching error, extending the heating control and extending the correction capability for process induced errors.

  11. Justifications of policy-error correction: a case study of error correction in the Three Mile Island Nuclear Power Plant Accident

    International Nuclear Information System (INIS)

    Kim, Y.P.

    1982-01-01

    The sensational Three Mile Island Nuclear Power Plant Accident of 1979 raised many policy problems. Since the TMI accident, many authorities in the nation, including the President's Commission on TMI, Congress, GAO, as well as the NRC, have researched lessons and recommended various corrective measures for the improvement of nuclear regulatory policy. As an effort to translate the recommendations into effective actions, the NRC developed the TMI Action Plan. How sound are these corrective actions? The NRC approach to the TMI Action Plan is justifiable to the extent that decisions were reached by procedures to reduce the effects of judgmental bias. Major findings from the NRC's effort to justify the corrective actions include: (A) The deficiencies and errors in the operations at the Three Mile Island Plant were not defined through a process of comprehensive analysis. (B) Instead, problems were identified pragmatically and segmentally, through empirical investigations. These problems tended to take one of two forms - determinate problems subject to regulatory correction on the basis of available causal knowledge, and indeterminate problems solved by interim rules plus continuing study. The information to justify the solution was adjusted to the problem characteristics. (C) Finally, uncertainty in the determinate problems was resolved by seeking more causal information, while efforts to resolve indeterminate problems relied upon collective judgment and a consensus rule governing decisions about interim resolutions.

  12. Use Residual Correction Method and Monotone Iterative Technique to Calculate the Upper and Lower Approximate Solutions of Singularly Perturbed Non-linear Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Chi-Chang Wang

    2013-09-01

    Full Text Available This paper uses the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed non-linear boundary value problems. First, the monotonicity of the non-linear differential equation is reinforced using the monotone iterative technique; then the cubic-spline method is applied to discretize and convert the differential equation into a mathematical programming problem with inequality constraints; and finally, based on the residual correction concept, the complex constrained solution problem is transformed into a simpler problem of equation iteration. As verified by the four examples given in this paper, the proposed method can be used to quickly obtain the upper and lower solutions of problems of this kind, and to easily identify the error range between the mean approximate solutions and the exact solutions.

  13. Nuclear security. Improving correction of security deficiencies at DOE's weapons facilities

    International Nuclear Information System (INIS)

    Wells, James E.; Cannon, Doris E.; Fenzel, William F.; Lightner, Kenneth E. Jr.; Curtis, Lois J.; DuBois, Julia A.; Brown, Gail W.; Trujillo, Charles S.; Tumler, Pamela K.

    1992-11-01

    The US nuclear weapons research, development, and production are conducted at 10 DOE nuclear weapons facilities by contractors under the guidance and oversight of 9 DOE field offices. Because these facilities house special nuclear materials used in making nuclear weapons and nuclear weapons components, DOE administers a security program to protect (1) against theft, sabotage, espionage, terrorism, or other risks to national security and (2) the safety and health of DOE employees and the public. DOE spends almost $1 billion a year on this security program. DOE administers the security program through periodic inspections that evaluate and monitor the effectiveness of facilities' safeguards and security. Security inspections identify deficiencies, instances of noncompliance with safeguards and security requirements or poor performance of the systems being evaluated, that must be corrected to maintain adequate security. The contractors and DOE share responsibility for correcting deficiencies. Contractors, in correcting deficiencies, must comply with several DOE orders. The contractors' performances were not adequate in conducting four of the eight procedures considered necessary in meeting DOE's deficiency correction requirements. For 19 of the 20 deficiency cases we reviewed, contractors could not demonstrate that they had conducted three critical deficiency analyses (root cause, risk assessment, and cost-benefit) required by DOE. Additionally, the contractors did not always adequately verify that corrective actions taken were appropriate, effective, and complete. The contractors performed the remaining four procedures (reviewing deficiencies for duplication, entering deficiencies into a data base, tracking the status of deficiencies, and preparing and implementing a corrective action plan) adequately in all 20 cases. DOE's oversight of the corrective action process could be improved in three areas. The computerized systems used to track the status of security

  14. Gauge hierarchy problem in grand unified theories

    International Nuclear Information System (INIS)

    Alhendi, H.A.A.

    1982-01-01

    In grand unification schemes, several mass scales have to be introduced, some of them much larger than the others, to cope with experimental observations, in which elementary particles of higher mass require higher energy to be observed than elementary particles of lower mass. There have been controversial arguments in the literature about such a hierarchical scale structure when radiative corrections are taken into account. It has been asserted that the gauge hierarchy depends on the choice of the subtraction point (in the classical field space) of the four-point function at zero external momentum. It has also been asserted that whenever the gauge hierarchy can be maintained in one sector of particles, it can also be maintained in the other sectors. These two assertions have been studied in a prototype model, namely an O(3) model with two triplets of real scalar Higgs fields. Our analysis shows that, within ordinary perturbation theory, neither of these two assertions is quite correct.

  15. Assessment of the NASA Space Shuttle Program's Problem Reporting and Corrective Action System

    Science.gov (United States)

    Korsmeryer, D. J.; Schreiner, J. A.; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper documents the general findings and recommendations of the Design for Safety Program's study of the Space Shuttle Program's (SSP) Problem Reporting and Corrective Action (PRACA) System. The goals of this study were to evaluate and quantify the technical aspects of the SSP's PRACA systems and to recommend enhancements addressing specific deficiencies in preparation for future system upgrades. The study determined that the extant SSP PRACA systems accomplished a project-level support capability through the use of a large pool of domain experts and a variety of distributed formal and informal database systems. This operational model is vulnerable to staff turnover and loss of the vast corporate knowledge that is not currently being captured by the PRACA system. A need for a Program-level PRACA system providing improved insight, unification, knowledge capture, and collaborative tools was defined in this study.

  16. Vision Problems in Homeless Children.

    Science.gov (United States)

    Smith, Natalie L; Smith, Thomas J; DeSantis, Diana; Suhocki, Marissa; Fenske, Danielle

    2015-08-01

    Vision problems in homeless children can decrease educational achievement and quality of life. To estimate the prevalence and specific diagnoses of vision problems in children in an urban homeless shelter, a prospective series of 107 homeless children and teenagers underwent screening with a vision questionnaire, eye chart screening (if mature enough), and, if a vision problem was suspected, evaluation by a pediatric ophthalmologist. Glasses and other therapeutic interventions were provided if necessary. The prevalence of vision problems in this population was 25%. Common diagnoses included astigmatism, amblyopia, anisometropia, myopia, and hyperopia. Glasses were required and provided for 24 children (22%). Vision problems in homeless children are common and frequently correctable with ophthalmic intervention. Evaluation by a pediatric ophthalmologist is crucial for accurate diagnosis and treatment. Our system of screening and evaluation is feasible, efficacious, and reproducible in other homeless care situations.

  17. Eigenvectors phase correction in inverse modal problem

    Science.gov (United States)

    Qiao, Guandong; Rahmatalla, Salam

    2017-12-01

    The solution of the inverse modal problem for the spatial parameters of mechanical and structural systems is heavily dependent on the quality of the modal parameters obtained from the experiments. Because experimental and environmental noise is always present during modal testing, the resulting modal parameters are expected to be corrupted with different levels of noise. A novel methodology is presented in this work to mitigate the errors in the eigenvectors when solving the inverse modal problem for the spatial parameters. The phases of the eigenvector components were utilized as design variables within an optimization problem that minimizes the difference between the calculated and experimental transfer functions. The equation of motion in terms of the modal and spatial parameters was used as a constraint in the optimization problem. Constraints that preserve the positive and semi-positive definiteness and the inter-connectivity of the spatial matrices were implemented using semi-definite programming. Numerical examples utilizing noisy eigenvectors with added Gaussian white noise of 1%, 5%, and 10% were used to demonstrate the efficacy of the proposed method. The results showed that the proposed method is superior when compared with a known method in the literature.

  18. Insight Is Not in the Problem: Investigating Insight in Problem Solving across Task Types.

    Science.gov (United States)

    Webb, Margaret E; Little, Daniel R; Cropper, Simon J

    2016-01-01

    The feeling of insight in problem solving is typically associated with the sudden realization of a solution that appears obviously correct (Kounios et al., 2006). Salvi et al. (2016) found that a solution accompanied with sudden insight is more likely to be correct than a problem solved through conscious and incremental steps. However, Metcalfe (1986) indicated that participants would often present an inelegant but plausible (wrong) answer as correct with a high feeling of warmth (a subjective measure of closeness to solution). This discrepancy may be due to the use of different tasks or due to different methods in the measurement of insight (i.e., using a binary vs. continuous scale). In three experiments, we investigated both findings, using many different problem tasks (e.g., Compound Remote Associates, so-called classic insight problems, and non-insight problems). Participants rated insight-related affect (feelings of Aha-experience, confidence, surprise, impasse, and pleasure) on continuous scales. As expected we found that, for problems designed to elicit insight, correct solutions elicited higher proportions of reported insight in the solution compared to non-insight solutions; further, correct solutions elicited stronger feelings of insight compared to incorrect solutions.

  19. Sequence-specific bias correction for RNA-seq data using recurrent neural networks.

    Science.gov (United States)

    Zhang, Yao-Zhong; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru

    2017-01-25

    The recent success of deep learning techniques in machine learning and artificial intelligence has stimulated a great deal of interest among bioinformaticians, who now wish to bring the power of deep learning to bear on a host of bioinformatical problems. Deep learning is ideally suited for biological problems that require automatic or hierarchical feature representation for biological data when prior knowledge is limited. In this work, we address the sequence-specific bias correction problem for RNA-seq data using Recurrent Neural Networks (RNNs) to model nucleotide sequences without pre-determining sequence structures. The sequence-specific bias of a read is then calculated based on the sequence probabilities estimated by RNNs, and used in the estimation of gene abundance. We explore the application of two popular RNN recurrent units for this task and demonstrate that RNN-based approaches provide a flexible way to model nucleotide sequences without knowledge of predetermined sequence structures. Our experiments show that training an RNN-based nucleotide sequence model is efficient and RNN-based bias correction methods compare well with the state-of-the-art sequence-specific bias correction method on the commonly used MAQC-III data set. RNNs provide an alternative and flexible way to calculate sequence-specific bias without explicitly pre-determining sequence structures.
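
    As a rough illustration of the idea (not the authors' trained model or code), the sketch below scores a read-start context with a small recurrent network and turns the ratio of background to foreground sequence probabilities into a per-read bias weight; the architecture, layer sizes, and toy sequence are assumptions made for the example.

```python
# Hedged sketch, not the paper's model: an untrained GRU "nucleotide sequence
# model" scoring a read-start context; a trained foreground/background pair
# would give the sequence-specific bias weight for that read.
import torch
import torch.nn as nn

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

class NucleotideRNN(nn.Module):
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.embed = nn.Embedding(4, 8)
        self.gru = nn.GRU(8, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 4)

    def log_prob(self, seq: str) -> torch.Tensor:
        """Sum of log P(base_t | bases_<t) under the model."""
        idx = torch.tensor([[BASES[b] for b in seq]])
        hidden_states, _ = self.gru(self.embed(idx[:, :-1]))
        logp = torch.log_softmax(self.out(hidden_states), dim=-1)
        return logp.gather(-1, idx[:, 1:].unsqueeze(-1)).sum()

foreground = NucleotideRNN()   # would be trained on observed read-start contexts
background = NucleotideRNN()   # would be trained on uniformly sampled contexts

read_context = "ACGTTGCA"      # toy example sequence
with torch.no_grad():
    weight = torch.exp(background.log_prob(read_context) - foreground.log_prob(read_context))
print(f"bias weight for this read: {weight.item():.3f}")
```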

  20. Manifold corrections on spinning compact binaries

    International Nuclear Information System (INIS)

    Zhong Shuangying; Wu Xin

    2010-01-01

    This paper deals mainly with a discussion of three new manifold correction methods and three existing ones, which can numerically preserve or correct all integrals in the conservative post-Newtonian Hamiltonian formulation of spinning compact binaries. Two of them are listed here. One is a new momentum-position scaling scheme for complete consistency of both the total energy and the magnitude of the total angular momentum, and the other is Nacozy's approach with least-squares correction of the four integrals including the total energy and the total angular momentum vector. The post-Newtonian contributions, the spin effects, and the classification of orbits play an important role in the effectiveness of these six manifold corrections. All of them correct the integrals to nearly the level of machine epsilon for the pure Kepler problem. Once the third-order post-Newtonian contributions are added to the pure orbital part, three of these corrections have only minor effects on controlling the errors of these integrals. When the spin effects are also included, the effectiveness of Nacozy's approach becomes further weakened, and it even becomes useless in the chaotic case. In all cases tested, the new momentum-position scaling scheme always shows the optimal performance. It requires only a modest additional computational cost when the spin effects exist and several time-saving techniques are used. As an interesting case, the efficiency of the correction for chaotic eccentric orbits is generally better than that for quasicircular regular orbits. Besides this, the corrected fast Lyapunov indicators and Lyapunov exponents of chaotic eccentric orbits are large compared with the uncorrected counterparts. The amplification is a true expression of the original dynamical behavior. With the aid of both the manifold correction added to a certain low-order integration algorithm as a fast and high-precision device and the fast Lyapunov
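
    The flavor of a scaling-type manifold correction can be sketched for the pure Kepler problem mentioned above: after each integration step, the velocity is rescaled so the orbit regains the exact conserved energy. This is a simplified, assumption-laden illustration (first-order integrator, energy only; the paper's momentum-position scheme also restores the angular momentum magnitude), not the authors' implementation.

```python
# Minimal sketch of a scaling correction for the pure Kepler problem: rescale
# the velocity after each step so the integrated orbit keeps the exact energy.
# Step size, initial conditions, and the simple integrator are illustrative.
import numpy as np

GM = 1.0

def energy(r, v):
    return 0.5 * np.dot(v, v) - GM / np.linalg.norm(r)

r = np.array([1.0, 0.0])
v = np.array([0.0, 1.1])          # slightly eccentric bound orbit
E0 = energy(r, v)                 # the integral to be preserved
dt = 1e-3

for _ in range(20000):
    a = -GM * r / np.linalg.norm(r) ** 3
    v = v + dt * a                # simple first-order step (energy drifts)
    r = r + dt * v
    # scaling correction: choose s so that 0.5*|s*v|**2 - GM/|r| = E0
    kinetic_target = E0 + GM / np.linalg.norm(r)
    if kinetic_target > 0:
        s = np.sqrt(2.0 * kinetic_target / np.dot(v, v))
        v = s * v

print(f"relative energy error after correction: {abs(energy(r, v) - E0) / abs(E0):.2e}")
```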

  1. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean-field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures

  2. Growing geometric reasoning in solving problems of analytical geometry through the mathematical communication problems to state Islamic university students

    Science.gov (United States)

    Mujiasih; Waluya, S. B.; Kartono; Mariani

    2018-03-01

    Skill in working on geometry problems depends greatly on competence in Geometric Reasoning. As teacher candidates, State Islamic University (UIN) students need to have this Geometric Reasoning competence. When geometric reasoning in solving geometry problems has developed well, students are expected to be able to write their ideas communicatively for the reader. A student's mathematical communication ability is therefore supposed to serve as a marker of the growth of their Geometric Reasoning. Thus, the search for the growth of geometric reasoning in solving analytic geometry problems will be characterized by the growth of mathematical communication abilities whose work is complete, correct, and sequential, especially in writing. Based on qualitative research, this article is the result of a study that explores the problem: can the growth of geometric reasoning in solving analytic geometry problems be characterized by the growth of mathematical communication abilities? The main activities in this research were carried out through a series of steps: (1) The lecturer trained the students to work on analytic geometry problems that were not routine and algorithmic but instead required high reasoning and were divergent/open ended. (2) Students were asked to do the problems independently, in detail, completely, in order, and correctly. (3) Student answers were then corrected at each stage. (4) Six students were then taken as the subjects of this research. (5) The research subjects were interviewed and the researchers conducted triangulation. The results of this research: (1) Mathematics Education students of UIN Semarang had adequate mathematical communication ability, (2) this mathematical communication ability could be a marker of geometric reasoning in problem solving, and (3) the geometric reasoning of UIN students had grown into a category that tends to be good.

  3. The Prerogative of "Corrective Recasts" as a Sign of Hegemony in the Use of Language: Further Thoughts on Eric Hauser's (2005) "Coding 'Corrective Recasts': The Maintenance of Meaning and More Fundamental Problems"

    Science.gov (United States)

    Rajagopalan, Kanavillil

    2006-01-01

    The objective of this response article is to think through some of what I see as the far-reaching implications of a recent paper by Eric Hauser (2005) entitled "Coding 'corrective recasts': the maintenance of meaning and more fundamental problems". Hauser makes a compelling, empirically-backed case for his contention that, contrary to widespread…

  4. Coding "Corrective Recasts": The Maintenance of Meaning and More Fundamental Problems

    Science.gov (United States)

    Hauser, Eric

    2005-01-01

    A fair amount of descriptive research in the field of second language acquisition has looked at the presence of what have been labeled corrective recasts. This research has relied on the methodological practice of coding to identify particular turns as "corrective recasts." Often, the coding criteria make use of the notion of the maintenance of…

  5. MCNP: Photon benchmark problems

    International Nuclear Information System (INIS)

    Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.

    1991-09-01

    The recent widespread, markedly increased use of radiation transport codes has produced greater user and institutional demand for assurance that such codes give correct results. Responding to these pressing requirements for code validation, the general purpose Monte Carlo transport code MCNP has been tested on six different photon problem families. MCNP was used to simulate these six sets numerically. Results for each were compared to the set's analytical or experimental data. MCNP successfully predicted the analytical or experimental results of all six families within the statistical uncertainty inherent in the Monte Carlo method. From this we conclude that MCNP can accurately model a broad spectrum of photon transport problems. 8 refs., 30 figs., 5 tabs

  6. Precise method for correcting count-rate losses in scintillation cameras

    International Nuclear Information System (INIS)

    Madsen, M.T.; Nickles, R.J.

    1986-01-01

    Quantitative studies performed with scintillation detectors often require corrections for lost data because of the finite resolving time of the detector. Methods that monitor losses by means of a reference source or pulser have unacceptably large statistical fluctuations associated with their correction factors. Analytic methods that model the detector as a paralyzable system require an accurate estimate of the system resolving time. Because the apparent resolving time depends on many variables, including the window setting, source distribution, and the amount of scattering material, significant errors can be introduced by relying on a resolving time obtained from phantom measurements. These problems can be overcome by curve-fitting the data from a reference source to a paralyzable model in which the true total count rate in the selected window is estimated from the observed total rate. The resolving time becomes a free parameter in this method which is optimized to provide the best fit to the observed reference data. The fitted curve has the inherent accuracy of the reference source method with the precision associated with the observed total image count rate. Correction factors can be simply calculated from the ratio of the true reference source rate and the fitted curve. As a result, the statistical uncertainty of the data corrected by this method is not significantly increased
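
    A hedged sketch of the kind of fit described above, assuming synthetic reference data and an illustrative resolving time: the paralyzable model is fitted to reference-source rates, and the fitted curve is then used to form correction factors.

```python
# Illustrative only: fit a paralyzable dead-time model to (synthetic) reference
# data with scipy, then form correction factors from the fitted curve. The
# resolving time and rates below are assumed values, not measured camera data.
import numpy as np
from scipy.optimize import curve_fit

def paralyzable(true_rate, tau):
    """Observed rate predicted by a paralyzable detector with resolving time tau."""
    return true_rate * np.exp(-true_rate * tau)

rng = np.random.default_rng(1)
true_total = np.linspace(1e4, 3e5, 30)                  # counts per second
observed_total = paralyzable(true_total, 2e-6)          # tau = 2 microseconds (assumed)
observed_total = observed_total + rng.normal(0.0, 50.0, observed_total.size)

# The resolving time is the free parameter optimized to reproduce the reference data.
(tau_fit,), _ = curve_fit(paralyzable, true_total, observed_total, p0=[1e-6])

# Correction factor = true rate / rate the fitted model predicts would be observed.
correction = true_total / paralyzable(true_total, tau_fit)
print(f"fitted tau = {tau_fit:.2e} s, correction at highest rate = {correction[-1]:.2f}")
```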

  7. Multifield methods for nuclear thermohydraulics problems

    International Nuclear Information System (INIS)

    Banerjee, S.

    1987-01-01

    The multifield model, in which separate sets of conservation equations are written for each phase, or clearly identifiable portions of a phase, is derived by averaging the local instantaneous equations. The closure relationships required to replace information lost in the averaging process are discussed. The mathematical structure of the model is considered and it is shown that application to a variety of problems in which the phases are well separated leads to good predictions of experimental data. For problems in which the phases are more closely coupled, the model is more difficult to apply correctly. However, careful consideration of interfield momentum and heat transfer is shown to give excellent results for some complex problems like density wave propagation in bubbly flows. The model in its present form is shown to be less useful for highly intermittent regimes like slug and churn flows. Data on a reflux condensation situation near the flooding point are discussed to indicate directions in which further work is required

  8. Automatic Power Factor Correction Using Capacitive Bank

    OpenAIRE

    Mr.Anant Kumar Tiwari,; Mrs. Durga Sharma

    2014-01-01

    The power factor correction of electrical loads is a problem common to all industrial companies. Earlier, power factor correction was done by adjusting the capacitive bank manually [1]. The automated power factor corrector (APFC) using a capacitive load bank is helpful in providing power factor correction. The proposed automated project involves measuring the power factor of the load using a microcontroller. The design of this auto-adjustable power factor correction is ...
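
    For illustration only (not the project's microcontroller firmware), the following sketch sizes the capacitor bank needed to raise a measured power factor to a target value; the load, voltage, and frequency values are assumed examples.

```python
# Illustrative sketch: size a capacitor bank from a measured power factor using
# Qc = P * (tan(phi1) - tan(phi2)). Load, voltage, and frequency are assumed.
import math

def required_kvar(real_power_kw, pf_measured, pf_target=0.95):
    """Reactive power (kVAR) the capacitor bank must supply."""
    phi1 = math.acos(pf_measured)
    phi2 = math.acos(pf_target)
    return real_power_kw * (math.tan(phi1) - math.tan(phi2))

def capacitance_uF(kvar, voltage_v=400.0, freq_hz=50.0):
    """Capacitance (microfarads) of a single-phase equivalent bank at the given voltage."""
    return kvar * 1e3 / (2 * math.pi * freq_hz * voltage_v ** 2) * 1e6

q = required_kvar(50.0, 0.72)           # 50 kW load at 0.72 lagging (assumed)
print(f"bank size: {q:.1f} kVAR, about {capacitance_uF(q):.0f} uF at 400 V / 50 Hz")
```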

  9. The Systematic Bias of Ingestible Core Temperature Sensors Requires a Correction by Linear Regression.

    Science.gov (United States)

    Hunt, Andrew P; Bach, Aaron J E; Borg, David N; Costello, Joseph T; Stewart, Ian B

    2017-01-01

    An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of -0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C, and limits of agreement (95%) to 0.00-0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) - 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). In conclusion, these data show that using an uncalibrated ingestible temperature sensor may provide inaccurate data that still appears to be statistically, physiologically, and clinically meaningful. Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures systematic bias is within ±0.1°C in 98% of the sensors (generalized function).
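
    A minimal sketch of the individualized linear correction described above, assuming illustrative sensor readings rather than the study data: regress the reference temperatures on the sensor readings and apply the fitted line.

```python
# Minimal sketch of an individualized linear correction for one sensor.
# The readings below are illustrative, not the study's measurements.
import numpy as np

sensor_readings = np.array([35.20, 37.41, 39.55, 41.70, 43.58])   # one sensor, five baths
reference_temps = np.array([35.12, 37.33, 39.48, 41.58, 43.47])   # certified thermometer

slope, intercept = np.polyfit(sensor_readings, reference_temps, 1)

def corrected(temp_c: float) -> float:
    """Apply the sensor-specific linear correction."""
    return slope * temp_c + intercept

print(f"T = {slope:.5f} * sensor + {intercept:.3f}; a reading of 38.00 corrects to {corrected(38.00):.2f} C")
```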

  10. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... their surgery, orthognathic surgery is performed to correct functional problems. Jaw Surgery can have a dramatic effect on many aspects of life. Following are some of the conditions that may ... front, or side Facial injury Birth defects Receding lower jaw and ...

  11. CORRECTIVE ACTION IN CAR MANUFACTURING

    Directory of Open Access Journals (Sweden)

    H. Rohne

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: In this paper the important issues involved in successfully implementing corrective action systems in quality management are discussed. The work is based on experience in implementing and operating such a system in an automotive manufacturing enterprise in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem solving methodology is essential to resolve the quality related problems identified by the system. In the following paragraphs the general corrective action process is discussed and the elements of a corrective action system are identified, followed by a more detailed discussion of each element. Finally specific results from the application are discussed.

    AFRIKAANSE OPSOMMING (translated): Important considerations in the successful implementation of corrective action systems in quality management are discussed in this article. The work is based on experience in implementing and operating such a system at a motor manufacturer in South Africa. The core of a corrective action system is good documentation, supported by a computerised information system. Secondly, a systematic problem-solving methodology is needed to address the quality-related problems identified by the system. In the following paragraphs the general corrective action process is discussed and the elements of the corrective action system are identified. Each element is then discussed in more detail. Finally, specific results of the application are briefly dealt with.

  12. Corrections in clinical Magnetic Resonance Spectroscopy and SPECT

    DEFF Research Database (Denmark)

    de Nijs, Robin

    infants. In Iodine-123 SPECT the problem of downscatter was addressed. This thesis is based on two papers. Paper I deals with the problem of motion in Single Voxel Spectroscopy. Two novel methods for the identification of outliers in the set of repeated measurements were implemented and compared... a detrimental effect of the extra-uterine environment on brain development. Paper II describes a method to correct for downscatter in low-count Iodine-123 SPECT with a broad energy window above the normal imaging window. Both spatial dependency and weight factors were measured. As expected, the implicitly... be performed by the subtraction of an energy window, a method was developed to perform scatter and downscatter correction simultaneously. A phantom study has been performed, in which the downscatter correction described in Paper II was extended with scatter correction. This new combined correction was compared...

  13. Spontaneous gestures influence strategy choices in problem solving.

    Science.gov (United States)

    Alibali, Martha W; Spencer, Robert C; Knox, Lucy; Kita, Sotaro

    2011-09-01

    Do gestures merely reflect problem-solving processes, or do they play a functional role in problem solving? We hypothesized that gestures highlight and structure perceptual-motor information, and thereby make such information more likely to be used in problem solving. Participants in two experiments solved problems requiring the prediction of gear movement, either with gesture allowed or with gesture prohibited. Such problems can be correctly solved using either a perceptual-motor strategy (simulation of gear movements) or an abstract strategy (the parity strategy). Participants in the gesture-allowed condition were more likely to use perceptual-motor strategies than were participants in the gesture-prohibited condition. Gesture promoted use of perceptual-motor strategies both for participants who talked aloud while solving the problems (Experiment 1) and for participants who solved the problems silently (Experiment 2). Thus, spontaneous gestures influence strategy choices in problem solving.

  14. Open-Start Mathematics Problems: An Approach to Assessing Problem Solving

    Science.gov (United States)

    Monaghan, John; Pool, Peter; Roper, Tom; Threlfall, John

    2009-01-01

    This article describes one type of mathematical problem, open-start problems, and discusses their potential for use in assessment. In open-start problems how one starts to address the problem can vary but they have a correct answer. We argue that the use of open-start problems in assessment could positively influence classroom mathematics…

  15. Esthetics built to last: treatment of functional anomalies may need to precede esthetic corrections.

    Science.gov (United States)

    Bassett, Joyce L

    2014-02-01

    In this case of a 33-year-old male patient seeking a more esthetically pleasing smile, comprehensive restorative treatment planning included recognition of the patient's incisor position and morphology, dentofacial requirements, and appropriate vertical dimension. The accepted treatment plan consisted of orthodontic correction of the patient's anterior constriction, followed by placement of eight maxillary veneers and composite augmentation on the mandibular incisors and canines. Keys to achieving a successful outcome included knowledge of smile design, material selection, and preparation techniques. The case demonstrates how functional problems oftentimes must be addressed before esthetic correction can be made.

  16. Correction magnet power supplies for APS machine

    International Nuclear Information System (INIS)

    Kang, Y.G.

    1991-04-01

    A number of correction magnets are required for the advanced photon source (APS) machine to correct the beam. There are five kinds of correction magnets for the storage ring, two for the injector synchrotron, and two for the positron accumulator ring (PAR). Table I shows a summary of the correction magnet power supplies for the APS machine. For the storage ring, the displacement of the quadrupole magnets due to the low frequency vibration below 25 Hz has the most significant effect on the stability of the positron closed orbit. The primary external source of the low frequency vibration is the ground motion of approximately 20 μm amplitude, with frequency components concentrated below 10 Hz. These low frequency vibrations can be corrected by using the correction magnets, whose field strengths are controlled individually through the feedback loop comprising the beam position monitoring system. The correction field required could be either positive or negative. Thus for all the correction magnets, bipolar power supplies (BPSs) are required to produce both polarities of correction fields. Three different types of BPS are used for all the correction magnets. Type I BPSs cover all the correction magnets for the storage ring, except for the trim dipoles. The maximum output current of the Type I BPS is 140 Adc. A Type II BPS powers a trim dipole, and its maximum output current is 60 Adc. The injector synchrotron and PAR correction magnets are powered from Type III BPSs, whose maximum output current is 25 Adc

  17. Problems about the analysis of technical requirements compliance in NPPPCI systems

    International Nuclear Information System (INIS)

    Perello, M.

    1978-01-01

    The problems involved in analysing compliance with technical requirements are presented, above all in nuclear power plant projects, together with the influence of national and international standards on the analysis of compliance with the requirements established by the nuclear safety authorities of the different countries. In the oral presentation greater emphasis is placed on the difficulties that the PSAR evaluation brings when the lack of technical standards in the owner country makes it necessary to use the rules of other countries. (author)

  18. A Predictor-Corrector Method for Solving Equilibrium Problems

    Directory of Open Access Journals (Sweden)

    Zong-Ke Bao

    2014-01-01

    Full Text Available We suggest and analyze a predictor-corrector method for solving nonsmooth convex equilibrium problems based on the auxiliary problem principle. In the main algorithm each stage of computation requires two proximal steps. One step serves to predict the next point; the other helps to correct the new prediction. At the same time, we present a convergence analysis under perfect and imperfect foresight. In particular, we introduce a stopping criterion which gives rise to Δ-stationary points. Moreover, we apply this algorithm to the particular case of variational inequalities.
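
    To make the predict-then-correct structure concrete, here is a hedged sketch for the special case the authors mention (a variational inequality with a monotone operator over a convex set); it is an extragradient-style illustration with assumed data, not the paper's algorithm.

```python
# Hedged sketch of a two-step (predict, then correct) proximal iteration for a
# variational inequality over the nonnegative orthant. The operator and step
# size below are illustrative assumptions.
import numpy as np

def project(x):
    """Projection onto the feasible set (here: the nonnegative orthant)."""
    return np.maximum(x, 0.0)

def F(x):
    """A monotone (affine) operator; matrix and vector are example data."""
    M = np.array([[4.0, 1.0], [1.0, 3.0]])
    q = np.array([-1.0, -2.0])
    return M @ x + q

x = np.zeros(2)
step = 0.1
for _ in range(200):
    y = project(x - step * F(x))        # predictor (first proximal step)
    x_new = project(x - step * F(y))    # corrector (second proximal step)
    if np.linalg.norm(x_new - x) < 1e-10:
        x = x_new
        break
    x = x_new

print("approximate solution:", x)       # satisfies x >= 0, F(x) >= 0, x . F(x) = 0
```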

  19. On the Correctness of Real-Time Modular Computer Systems Modeling with Stopwatch Automata Networks

    Directory of Open Access Journals (Sweden)

    Alevtina B. Glonina

    2018-01-01

    Full Text Available In this paper, we consider a schedulability analysis problem for real-time modular computer systems (RT MCS). A system configuration is called schedulable if all the jobs finish within their deadlines. The authors propose a stopwatch automata-based general model of RT MCS operation. A model instance for a given RT MCS configuration is a network of stopwatch automata (NSA) and it can be built automatically using the general model. A system operation trace, which is necessary for checking the schedulability criterion, can be obtained from the corresponding NSA trace. The paper substantiates the correctness of the proposed approach. A set of correctness requirements to models of system components and to the whole system model were derived from RT MCS specifications. The authors proved that if all models of system components satisfy the corresponding requirements, the whole system model built according to the proposed approach satisfies its correctness requirements and is deterministic (i.e. for a given configuration a trace generated by the corresponding model run is uniquely determined). The model determinism implies that any model run can be used for schedulability analysis. This fact is crucial for the approach efficiency, as the number of possible model runs grows exponentially with the number of jobs in a system. Correctness requirements to models of system components can be checked automatically by a verifier using the observer automata approach. The authors proved by using the UPPAAL verifier that all the developed models of system components satisfy the corresponding requirements. User-defined models of system components can also be used for system modeling if they satisfy the requirements.

  20. SORM correction of FORM results for the FBC load combination problem

    DEFF Research Database (Denmark)

    Ditlevsen, Ove

    2005-01-01

    The old stochastic load combination model of Ferry Borges and Castanheta and the corresponding extreme random load effect value are considered. The evaluation of the distribution function of the extreme value by use of a particular first-order reliability method was first described in a celebrated... calculations. The calculation gives a limit state curvature correction factor on the probability approximation obtained by the RF algorithm. This correction factor is based on Breitung's celebrated asymptotic formula. Example calculations with comparisons with exact results show an impressive accuracy...
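
    For reference, the curvature correction alluded to above is of the Breitung type; in the usual notation (assumed here, not taken from the paper), with reliability index β and principal curvatures κ_i of the limit state at the design point, the second-order approximation reads:

```latex
P_f \;\approx\; \Phi(-\beta)\,\prod_{i=1}^{n-1}\bigl(1 + \beta\,\kappa_i\bigr)^{-1/2}
```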

  1. Testing and inference in nonlinear cointegrating vector error correction models

    DEFF Research Database (Denmark)

    Kristensen, D.; Rahbek, A.

    2013-01-01

    We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full asymptotic theory for estimators and test statistics. The derived asymptotic results prove to be nonstandard compared to results found elsewhere in the literature due to the impact of the estimated cointegration relations. This complicates implementation of tests motivating the introduction of bootstrap...

  2. Directional Overcurrent Relays Coordination Problems in Distributed Generation Systems

    Directory of Open Access Journals (Sweden)

    Jakub Ehrenberger

    2017-09-01

    Full Text Available This paper proposes a new approach to the distributed generation system protection coordination based on directional overcurrent protections with inverse-time characteristics. The key question of protection coordination is the determination of correct values of all inverse-time characteristics coefficients. The coefficients must be correctly chosen considering the sufficiently short tripping times and the sufficiently long selectivity times. In the paper a new approach to protection coordination is designed, in which not only some, but all the required types of short-circuit contributions are taken into account. In radial systems, if the pickup currents are correctly chosen, protection coordination for maximum contributions is enough to ensure selectivity times for all the required short-circuit types. In distributed generation systems, due to different contributions flowing through the primary and selective protections, coordination for maximum contributions is not enough; all the short-circuit types must be taken into account, and the protection coordination becomes a complex problem. A possible solution to the problem, based on an appropriately designed optimization, has been proposed in the paper. By repeating a simple optimization considering only one short-circuit type, protection coordination considering all the required short-circuit types has been achieved. To show the importance of considering all the types of short-circuit contributions, setting optimizations with one (the highest) and with all the types of short-circuit contributions have been performed. Finally, selectivity time values are explored throughout the entire protected section, and both settings are compared.
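
    As a small numerical illustration of the inverse-time characteristics involved (assumed example settings, not the paper's test system), the sketch below computes IEC standard-inverse trip times for a primary and a backup relay and checks the grading margin for one short-circuit contribution.

```python
# Illustrative coordination check (not the paper's optimization): IEC
# standard-inverse trip times for a primary and a backup relay, with assumed
# pickup currents, time multiplier settings, and fault current.

def iec_standard_inverse(i_fault, i_pickup, tms, a=0.14, b=0.02):
    """IEC 60255 standard-inverse characteristic: t = TMS * A / ((I/Ip)^B - 1)."""
    return tms * a / ((i_fault / i_pickup) ** b - 1.0)

fault_current = 2000.0                      # A, one short-circuit contribution type
t_primary = iec_standard_inverse(fault_current, i_pickup=400.0, tms=0.10)
t_backup = iec_standard_inverse(fault_current, i_pickup=500.0, tms=0.25)

margin = t_backup - t_primary               # grading (selectivity) margin
print(f"primary {t_primary:.2f} s, backup {t_backup:.2f} s, margin {margin:.2f} s")
print("selective" if margin >= 0.3 else "NOT selective for this contribution")
```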

  3. Optical proximity correction for anamorphic extreme ultraviolet lithography

    Science.gov (United States)

    Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas

    2017-10-01

    The change from isomorphic to anamorphic optics in high numerical aperture (NA) extreme ultraviolet (EUV) scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated, and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking (MRC). OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs which are more tolerant to mask errors.

  4. Spherical aberration correction with threefold symmetric line currents.

    Science.gov (United States)

    Hoque, Shahedul; Ito, Hiroyuki; Nishi, Ryuji; Takaoka, Akio; Munro, Eric

    2016-02-01

    It has been shown that an N-fold symmetric line current (henceforth denoted as N-SYLC) produces 2N-pole magnetic fields. In this paper, a threefold symmetric line current (N3-SYLC in short) is proposed for correcting the 3rd order spherical aberration of round lenses. N3-SYLC can be realized without using magnetic materials, which makes it free of the problems of hysteresis, inhomogeneity and saturation. We investigate theoretically the basic properties of an N3-SYLC configuration which can in principle be realized by simple wires. By optimizing the parameters of a system with a beam energy of 5.5 keV, the required excitation current for correcting a 3rd order spherical aberration coefficient of 400 mm is less than 1 AT, and the residual higher order aberrations can be kept sufficiently small to obtain a beam size of less than 1 nm for initial slopes up to 5 mrad. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
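
    For contrast with the article's estimator (which avoids the normality assumption), the classical binormal attenuation correction can be sketched as follows; the reliability value is an assumed example, and this is not the authors' proposed method.

```python
# Classical binormal attenuation correction of an observed AUC (shown for
# contrast with the article's normality-free estimator). The reliability value
# (fraction of biomarker variance not due to measurement error) is assumed.
from scipy.stats import norm

def corrected_auc_binormal(observed_auc: float, reliability: float) -> float:
    """Under a binormal model, AUC_true = Phi(Phi^-1(AUC_obs) / sqrt(reliability))."""
    return norm.cdf(norm.ppf(observed_auc) / reliability ** 0.5)

print(f"corrected AUC: {corrected_auc_binormal(0.70, 0.60):.3f}")   # attenuated 0.70 corrects upward
```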

  6. The Systematic Bias of Ingestible Core Temperature Sensors Requires a Correction by Linear Regression

    Directory of Open Access Journals (Sweden)

    Andrew P. Hunt

    2017-04-01

    Full Text Available An accurate measure of core body temperature is critical for monitoring individuals, groups and teams undertaking physical activity in situations of high heat stress or prolonged cold exposure. This study examined the range in systematic bias of ingestible temperature sensors compared to a certified and traceable reference thermometer. A total of 119 ingestible temperature sensors were immersed in a circulated water bath at five water temperatures (TEMP A: 35.12 ± 0.60°C, TEMP B: 37.33 ± 0.56°C, TEMP C: 39.48 ± 0.73°C, TEMP D: 41.58 ± 0.97°C, and TEMP E: 43.47 ± 1.07°C) along with a certified traceable reference thermometer. Thirteen sensors (10.9%) demonstrated a systematic bias > ±0.1°C, of which 4 (3.3%) were > ±0.5°C. Limits of agreement (95%) indicated that systematic bias would likely fall in the range of −0.14 to 0.26°C, highlighting that it is possible for temperatures measured between sensors to differ by more than 0.4°C. The proportion of sensors with systematic bias > ±0.1°C (10.9%) confirms that ingestible temperature sensors require correction to ensure their accuracy. An individualized linear correction achieved a mean systematic bias of 0.00°C, and limits of agreement (95%) to 0.00–0.00°C, with 100% of sensors achieving ±0.1°C accuracy. Alternatively, a generalized linear function (Corrected Temperature (°C) = 1.00375 × Sensor Temperature (°C) − 0.205549), produced as the average slope and intercept of a sub-set of 51 sensors and excluding sensors with accuracy outside ±0.5°C, reduced the systematic bias to < ±0.1°C in 98.4% of the remaining sensors (n = 64). In conclusion, these data show that using an uncalibrated ingestible temperature sensor may provide inaccurate data that still appears to be statistically, physiologically, and clinically meaningful. Correction of sensor temperature to a reference thermometer by linear function eliminates this systematic bias (individualized functions) or ensures

  7. Lessons Learned for Cx PRACA. Constellation Program Problem Reporting, Analysis and Corrective Action Process and System

    Science.gov (United States)

    Kelle, Pido I.; Ratterman, Christian; Gibbs, Cecil

    2009-01-01

    This slide presentation reviews the Constellation Program Problem Reporting, Analysis and Corrective Action Process and System (Cx PRACA). The goal of the Cx PRACA is to incorporate lessons learned from the Shuttle, ISS, and Orbiter programs by creating a single tool for managing the PRACA process that clearly defines the scope of PRACA applicability and what must be reported, and that defines the ownership and responsibility for managing the PRACA process, including disposition approval authority. CxP PRACA is a process, supported by a single information-gathering data module which will be integrated with a single CxP Information System, providing interoperability and import and export capability, making the CxP PRACA a more effective and user-friendly technical and management tool.

  8. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  9. CC-MUSIC: An Optimization Estimator for Mutual Coupling Correction of L-Shaped Nonuniform Array with Single Snapshot

    Directory of Open Access Journals (Sweden)

    Yuguan Hou

    2015-01-01

    Full Text Available In the single-snapshot case, the integrated SNR gain of multiple snapshots cannot be obtained, which degrades mutual coupling correction performance at lower SNR. In this paper, a Convex Chain MUSIC (CC-MUSIC) algorithm is proposed for the mutual coupling correction of an L-shaped nonuniform array with a single snapshot. It is an online self-calibration algorithm and does not require prior knowledge of the correction matrix initialization or a calibration source with a known position. An optimization is derived for the approximation between the mutual-coupling-free covariance matrix without the interpolated transformation and the covariance matrix with mutual coupling and the interpolated transformation. A global optimization problem is formed for the mutual coupling correction and the spatial spectrum estimation. Furthermore, the nonconvex optimization problem of this global optimization is transformed into a chain of convex optimizations, which is basically an alternating optimization routine. The simulation results demonstrate the effectiveness of the proposed method, which improves the resolution and the estimation accuracy for multiple sources with a single snapshot.

  10. Generalized Tellegen Principle and Physical Correctness of System Representations

    Directory of Open Access Journals (Sweden)

    Vaclav Cerny

    2006-06-01

    Full Text Available The paper deals with a new problem of physical correctness detection in the area of strictly causal system representations. The proposed approach to the problem's solution is based on a generalization of Tellegen's theorem, well known from electrical engineering. Consequently, mathematically as well as physically correct results are obtained. In addition, some known and often-used system representation structures are discussed from the developed point of view.

  11. High order corrections to the renormalon

    International Nuclear Information System (INIS)

    Faleev, S.V.

    1997-01-01

    High order corrections to the renormalon are considered. Each new type of insertion into the renormalon chain of graphs generates a correction to the asymptotics of perturbation theory of order 1. However, this series of corrections to the asymptotics is not itself asymptotic (i.e., the mth correction does not grow like m!). The summation of these corrections for the UV renormalon may change the asymptotics by a factor N^δ. For the traditional IR renormalon the mth correction diverges like (-2)^m. However, this divergence has no infrared origin and may be removed by a proper redefinition of the IR renormalon. On the other hand, for IR renormalons in hadronic event shapes one should naturally expect these multiloop contributions to decrease like (-2)^(-m). Some problems expected upon reaching the best accuracy of perturbative QCD are also discussed. (orig.)

  12. Functional requirements for the man-vehicle systems research facility. [identifying and correcting human errors during flight simulation

    Science.gov (United States)

    Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.

    1980-01-01

    The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies which are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of research facility is reviewed and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.

  13. 1D Additive correction strategy for solving two-dimensional problems of heat and mass transfer in porous media with non-rectangular domain

    International Nuclear Information System (INIS)

    Al Mers, A.; Mimet, A.

    2006-01-01

    We propose a new procedure using a 1D additive correction (AC) strategy for the resolution of a two-dimensional problem of heat and mass transfer in the field reactor of an adsorption cooling machine. The reactor contains a porous medium constituted of activated carbon reacting by adsorption with ammonia. The present paper demonstrates how the new AC procedure proposed here can be used, in the case of a non-rectangular domain and strongly anisotropic coefficients, to improve the convergence rate of different iterative solvers currently used: point Gauss-Seidel (GS), line Gauss-Seidel (LGS), the strongly implicit procedure (SIP), and the strongly implicit solver (SIS). Results show that, for the different solvers, convergence is efficiently improved by using the new additive correction procedure. (Author)

  14. Corrective Action Plan for Corrective Action Unit 562: Waste Systems, Nevada National Security Site, Nevada

    International Nuclear Information System (INIS)

    2011-01-01

    This Corrective Action Plan has been prepared for Corrective Action Unit (CAU) 562, Waste Systems, in accordance with the Federal Facility Agreement and Consent Order (1996; as amended March 2010). CAU 562 consists of 13 Corrective Action Sites (CASs) located in Areas 2, 23, and 25 of the Nevada National Security Site. Site characterization activities were performed in 2009 and 2010, and the results are presented in Appendix A of the Corrective Action Decision Document for CAU 562. The scope of work required to implement the recommended closure alternatives is summarized. (1) CAS 02-26-11, Lead Shot, will be clean closed by removing shot. (2) CAS 02-44-02, Paint Spills and French Drain, will be clean closed by removing paint and contaminated soil. As a best management practice (BMP), asbestos tile will be removed. (3) CAS 02-59-01, Septic System, will be clean closed by removing septic tank contents. As a BMP, the septic tank will be removed. (4) CAS 02-60-01, Concrete Drain, contains no contaminants of concern (COCs) above action levels. No further action is required; however, as a BMP, the concrete drain will be removed. (5) CAS 02-60-02, French Drain, was clean closed. Corrective actions were completed during corrective action investigation activities. As a BMP, the drain grates and drain pipe will be removed. (6) CAS 02-60-03, Steam Cleaning Drain, will be clean closed by removing contaminated soil. As a BMP, the steam cleaning sump grate and outfall pipe will be removed. (7) CAS 02-60-04, French Drain, was clean closed. Corrective actions were completed during corrective action investigation activities. (8) CAS 02-60-05, French Drain, will be clean closed by removing contaminated soil. (9) CAS 02-60-06, French Drain, contains no COCs above action levels. No further action is required. (10) CAS 02-60-07, French Drain, requires no further action. The french drain identified in historical documentation was not located during corrective action investigation

  15. The analysis of normative requirements to materials of PWR components, basing on LBB concepts

    International Nuclear Information System (INIS)

    Anikovsky, V.V.; Karzov, G.P.; Timofeev, B.T.

    1997-01-01

    The paper discusses the advisability of correcting the Norms to solve, in terms of materials science, the problem of how the normative requirements for materials must be changed in terms of the "leak before break" (LBB) concept

  16. The analysis of normative requirements to materials of PWR components, basing on LBB concepts

    Energy Technology Data Exchange (ETDEWEB)

    Anikovsky, V.V.; Karzov, G.P.; Timofeev, B.T. [CRISM Prometey, St. Petersburg (Russian Federation)

    1997-04-01

    The paper discusses the advisability of correcting the Norms to solve, in terms of materials science, the problem of how the normative requirements for materials must be changed in terms of the "leak before break" (LBB) concept.

  17. Text recognition and correction for automated data collection by mobile devices

    Science.gov (United States)

    Ozarslan, Suleyman; Eren, P. Erhan

    2014-03-01

    Participatory sensing is an approach which allows mobile devices such as mobile phones to be used for data collection, analysis and sharing processes by individuals. Data collection is the first and most important part of a participatory sensing system, but it is time consuming for the participants. In this paper, we discuss automatic data collection approaches for reducing the time required for collection, and increasing the amount of collected data. In this context, we explore automated text recognition on images of store receipts which are captured by mobile phone cameras, and the correction of the recognized text. Accordingly, our first goal is to evaluate the performance of the Optical Character Recognition (OCR) method with respect to data collection from store receipt images. Images captured by mobile phones exhibit some typical problems, and common image processing methods cannot handle some of them. Consequently, the second goal is to address these types of problems through our proposed Knowledge Based Correction (KBC) method used in support of the OCR, and also to evaluate the KBC method with respect to the improvement on the accurate recognition rate. Results of the experiments show that the KBC method improves the accurate data recognition rate noticeably.
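
    Purely as an illustration of a knowledge-based clean-up pass of the sort motivated above (the substitution table and total-consistency check are assumptions for the example, not the authors' KBC rules), the sketch below repairs common OCR confusions in the price fields of a receipt.

```python
# Illustrative sketch only: a tiny knowledge-based correction pass over OCR
# output from a receipt. The confusion table and the price/total checks are
# assumptions made for this example.
import re

CONFUSIONS = str.maketrans({"O": "0", "o": "0", "l": "1", "I": "1", "S": "5", "B": "8"})

def fix_price_token(token: str) -> str:
    """Repair common character confusions inside tokens that should be prices."""
    candidate = token.translate(CONFUSIONS).replace(",", ".")
    return candidate if re.fullmatch(r"\d+\.\d{2}", candidate) else token

def correct_receipt(lines):
    """Apply the price fix to the last token of each line and sum the item prices."""
    items, corrected = [], []
    for line in lines:
        head, _, tail = line.rpartition(" ")
        price = fix_price_token(tail)
        corrected.append(f"{head} {price}")
        if re.fullmatch(r"\d+\.\d{2}", price) and not head.upper().startswith("TOTAL"):
            items.append(float(price))
    return corrected, sum(items)

ocr_lines = ["MILK 1L 1.O5", "BREAD 2,S0", "TOTAL 3.55"]
fixed, items_sum = correct_receipt(ocr_lines)
print(fixed, f"items sum to {items_sum:.2f}")
```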

  18. Robust Active Label Correction

    DEFF Research Database (Denmark)

    Kremer, Jan; Sha, Fei; Igel, Christian

    2018-01-01

    Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To minimize... To select labels for correction, we adopt the active learning strategy of maximizing the expected model change. We consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing...

  19. Operating experience and corrective action program at Ontario Hydro Nuclear

    International Nuclear Information System (INIS)

    Collingwood, Barry; Turner, David

    1998-01-01

    This is a slide-based talk given at the 5th COG/IAEA Technical Committee Meeting on 'Exchange of operating experience of pressurized heavy water reactors'. The introduction presents the operating experience (OPEX) program of OHN and the OPEX Program Mission: ensuring that the right information gets to the right staff at the right time. The OPEX processes are analysed. These are: - Internal Corrective Action; - Inter-site Lesson Transfer; - External Lesson Transfer; - External Posting of OHN Events; - Internalizing Operating Experience. Steps in the Corrective Action Program are described: - Identify the Problem; - Notify Immediate Supervision/Manager; - Evaluate the Problem; - Correct the Problem; - Monitor/Report Status. The Internal Corrective Action process is then presented as a flowchart. Internalizing operating experience is presented under three aspects: - Communication; - Interface; - Training. The following items are discussed, respectively: peer meetings, department/section meetings, safety meetings, e-mail folders, newsletters and bulletin boards; work planning, pre-job briefings, supervisors' briefing cards; classroom initial and refresher training (case studies), simulator, and management courses. A diagram is presented showing the flow and treatment of information within OHN, centered on the weekly screening meetings. Finally, the corrective action processes are depicted in a flowchart and analysed in detail.

  20. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... surgery. It is important to understand that your treatment, which will probably include orthodontics before and after ... to realistically estimate the time required for your treatment. Correction of Common Dentofacial Deformities ​ ​ The information provided ...

  1. Effects of the Eccentricity of a Perturbing Third Body on the Orbital Correction Maneuvers of a Spacecraft

    Directory of Open Access Journals (Sweden)

    R. C. Domingos

    2014-01-01

    Full Text Available The fuel consumption required by the orbital maneuvers when correcting perturbations on the orbit of a spacecraft due to a perturbing body was estimated. The main goals are the measurement of the influence of the eccentricity of the perturbing body on the fuel consumption required by the station keeping maneuvers and the validation of the averaged methods when applied to the problem of predicting orbital maneuvers. To study the evolution of the orbits, the restricted elliptic three-body problem and the single- and double-averaged models are used. Maneuvers are made by using impulsive and low-thrust maneuvers. The results indicated that the averaged models are good for making predictions of the orbital maneuvers when the spacecraft is in a highly inclined orbit. The eccentricity of the perturbing body plays an important role in increasing the effects of the perturbation and the fuel consumption required for the station keeping maneuvers. It is shown that the use of more frequent maneuvers decreases the annual cost of the station keeping to correct the orbit of a spacecraft. An example of an eccentric planetary system of importance for applying the present study is the dwarf planet Haumea and its moons, one of them in an eccentric orbit.

  2. Patients with proximal junctional kyphosis requiring revision surgery have higher postoperative lumbar lordosis and larger sagittal balance corrections.

    Science.gov (United States)

    Kim, Han Jo; Bridwell, Keith H; Lenke, Lawrence G; Park, Moon Soo; Song, Kwang Sup; Piyaskulkaew, Chaiwat; Chuntarapas, Tapanut

    2014-04-20

    Case control study. To evaluate risk factors in patients in 3 groups: those without proximal junctional kyphosis (PJK) (N), with PJK but not requiring revision (P), and then those with PJK requiring revision surgery (S). It is becoming clear that some patients maintain stable PJK angles, whereas others progress and develop severe PJK necessitating revision surgery. A total of 206 patients at a single institution from 2002 to 2007 with adult scoliosis with 2-year minimum follow-up (average 3.5 yr) were analyzed. Inclusion criteria were age more than 18 years and primary fusions greater than 5 levels from any thoracic upper instrumented vertebra to any lower instrumented vertebrae. Revisions were excluded. Radiographical assessment included Cobb measurements in the coronal/sagittal plane and measurements of the PJK angle at postoperative time points: 1 to 2 months, 2 years, and final follow-up. PJK was defined as an angle greater than 10°. The prevalence of PJK was 34%. The average age in N was 49.9 vs. 51.3 years in P and 60.1 years in S. Sex, body mass index, and smoking status were not significantly different between groups. Fusions extending to the pelvis were 74%, 85%, and 91% of the cases in groups N, P, and S. Instrumentation type was significantly different between groups N and S, with a higher number of upper instrumented vertebra hooks in group N. Radiographical parameters demonstrated a higher postoperative lumbar lordosis and a larger sagittal balance change, with surgery in those with PJK requiring revision surgery. Scoliosis Research Society postoperative pain scores were inferior in group N vs. P and S, and Oswestry Disability Index scores were similar between all groups. Patients with PJK requiring revision were older, had higher postoperative lumbar lordosis, and larger sagittal balance corrections than patients without PJK. Based on these data, it seems as though older patients with large corrections in their lumbar lordosis and sagittal balance

  3. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2014-01-01

    The aim in this paper is to develop a new local defect correction approach to gridding for problems with localised regions of high activity in the boundary element method. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume methods.

  4. Operating experience feedback report - Air systems problems

    International Nuclear Information System (INIS)

    Ornstein, H.L.

    1987-12-01

    This report highlights significant operating events involving observed or potential failures of safety-related systems in U.S. plants that resulted from degraded or malfunctioning non-safety grade air systems. Based upon the evaluation of these events, the Office for Analysis and Evaluation of Operational Data (AEOD) concludes that the issue of air systems problems is an important one which requires additional NRC and industry attention. This report also provides AEOD's recommendations for corrective actions to deal with the issue. (author)

  5. Measured attenuation correction methods

    International Nuclear Information System (INIS)

    Ostertag, H.; Kuebler, W.K.; Doll, J.; Lorenz, W.J.

    1989-01-01

    Accurate attenuation correction is a prerequisite for the determination of exact local radioactivity concentrations in positron emission tomography. Attenuation correction factors range from 4-5 in brain studies to 50-100 in whole body measurements. This report gives an overview of the different methods of determining the attenuation correction factors by transmission measurements using an external positron emitting source. The long-lived generator nuclide 68Ge/68Ga is commonly used for this purpose. The additional patient dose from the transmission source is usually a small fraction of the dose due to the subsequent emission measurement. Ring-shaped transmission sources as well as rotating point or line sources are employed in modern positron tomographs. By masking a rotating line or point source, random and scattered events in the transmission scans can be effectively suppressed. The problems of measured attenuation correction are discussed: transmission/emission mismatch, random and scattered event contamination, counting statistics, transmission/emission scatter compensation, transmission scan after administration of activity to the patient. By using a double masking technique, simultaneous emission and transmission scans become feasible. (orig.)

  6. Error Correcting Codes

    Indian Academy of Sciences (India)

    successful consumer products of all time - the Compact Disc (CD) digital audio ... We can make ... only 2t additional parity check symbols are required, to be able to correct t ... display information (containing music-related data and a table.

  7. 32 CFR 555.9 - Reporting requirements for work in support of DOE.

    Science.gov (United States)

    2010-07-01

    ... 32 National Defense 3 2010-07-01 2010-07-01 true Reporting requirements for work in support of DOE. 555.9 Section 555.9 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY MILITARY.... This notification shall include: (1) A brief statement of the problem. (2) Nature of corrective action...

  8. Corrective Action Investigation Plan for Corrective Action Unit 552: Area 12 Muckpile and Ponds, Nevada Test Site, Nevada, Rev.1

    International Nuclear Information System (INIS)

    Boehlecke, Robert F.

    2005-01-01

    Corrective Action Unit 552 is being investigated because man-made radionuclides and chemical contaminants may be present in concentrations that could potentially pose an unacceptable risk to human health and/or the environment. The CAI will be conducted following the data quality objectives (DQOs) developed by representatives of the Nevada Division of Environmental Protection (NDEP) and the DOE National Nuclear Security Administration Nevada Site Office (NNSA/NSO). The DQOs are used to identify the type, amount, and quality of data needed to define the nature and extent of contamination and identify and evaluate the most appropriate corrective action alternatives for CAU 552. The primary problem statement for the investigation is: 'Existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives for CAS 12-23-05.' To address this problem statement, the resolution of the following two decision statements is required: (1) The Decision I statement is: 'Is a contaminant present within the CAU at a concentration that could pose an unacceptable risk to human health and the environment?' Any site-related contaminant detected at a concentration exceeding the corresponding preliminary action level (PAL), as defined in Section A.1.4.2, will be considered a contaminant of concern (COC). A COC is defined as a site-related constituent that exceeds the screening criteria (PAL). The presence of a contaminant within each CAS is defined as the analytical detection of a COC. (2) The Decision II statement is: 'Determine the extent of contamination identified above PALs.' This decision will be achieved by the collection of data that are adequate to define the extent of COCs. Decision II samples are used to determine the lateral and vertical extent of the contamination as well as the likelihood of COCs to migrate outside of the site boundaries. The migration pattern can be derived from the Decision II

  9. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 547: Miscellaneous Contaminated Waste Sites, Nevada National Security Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Mark Krauss

    2011-09-01

    the CASs were sufficient to meet the DQOs and evaluate CAAs without additional investigation. As a result, further investigation of the CAU 547 CASs was not required. The following CAAs were identified for the gas sampling assemblies: (1) clean closure, (2) closure in place, (3) modified closure in place, (4) no further action (with administrative controls), and (5) no further action. Based on the CAAs evaluation, the recommended corrective action for the three CASs in CAU 547 is closure in place. This corrective action will involve construction of a soil cover on top of the gas sampling assembly components and establishment of use restrictions at each site. The closure in place alternative was selected as the best and most appropriate corrective action for the CASs at CAU 547 based on the following factors: (1) Provides long-term protection of human health and the environment; (2) Minimizes short-term risk to site workers in implementing corrective action; (3) Is easily implemented using existing technology; (4) Complies with regulatory requirements; (5) Fulfills FFACO requirements for site closure; (6) Does not generate transuranic waste requiring offsite disposal; (7) Is consistent with anticipated future land use of the areas (i.e., testing and support activities); and (8) Is consistent with other NNSS site closures where contamination was left in place.

  10. Patient motion correction for single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Geckle, W.J.; Becker, L.C.; Links, J.M.; Frank, T.

    1986-01-01

    An investigation has been conducted to develop and validate techniques for the correction of projection images in SPECT studies of the myocardium subject to misalignment due to voluntary patient motion. The problem is frequently encountered due to the uncomfortable position the patient must assume during the 30 minutes required to obtain a 180 degree set of projection images. The reconstruction of misaligned projections can lead to troublesome artifacts in reconstructed images and degrade the diagnostic potential of the procedure. Significant improvement in the quality of heart reconstructions has been realized with the implementation of an algorithm to provide detection of and correction for patient motion. Normal, involuntary motion is not corrected for, however, since such movement is below the spatial resolution of the thallium imaging system under study. The algorithm is based on a comparison of the positions of an object in a set of projection images to the known, sinusoidal trajectory of an off-axis fixed point in space. Projection alignment, therefore, is achieved by shifting the position of a point or set of points in a projection image to the sinusoid of a fixed position in space
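
    A minimal sketch of the sinusoid-based correction idea described above (not the authors' implementation; the function names and the nearest-pixel shift are assumptions): the tracked point's transverse position should follow a sinusoid across the gantry angles, so each projection is shifted back by its deviation from the fitted curve.

    ```python
    import numpy as np

    def correct_projections(angles_deg, measured_pos, projections):
        """angles_deg: gantry angles; measured_pos: tracked transverse position of a
        reference point in each projection (pixels); projections: list of 2D images."""
        measured_pos = np.asarray(measured_pos, dtype=float)
        theta = np.radians(angles_deg)
        # Expected trajectory of an off-axis fixed point: x(theta) = a*cos(theta) + b*sin(theta) + c
        design = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
        coeffs, *_ = np.linalg.lstsq(design, measured_pos, rcond=None)
        residual = measured_pos - design @ coeffs      # deviation attributed to patient motion
        # Shift each projection back onto the fitted sinusoid (nearest-pixel shift for simplicity)
        corrected = [np.roll(p, -int(round(r)), axis=1) for p, r in zip(projections, residual)]
        return corrected, residual
    ```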

  11. Corrective Action Decision Document/Closure Report for Corrective Action Unit 232: Area 25 Sewage Lagoons, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    US Department of Energy Nevada Operations Office

    1999-01-01

    This Corrective Action Decision Document/Closure Report (CADD/CR) has been prepared for Corrective Action Unit (CAU) 232, Area 25 Sewage Lagoons, in accordance with the Federal Facility Agreement and Consent Order. Located at the Nevada Test Site in Nevada, approximately 65 miles northwest of Las Vegas, CAU 232 is comprised of Corrective Action Site 25-03-01, Sewage Lagoon. This CADD/CR identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's (DOE/NV's) recommendation that no corrective action is deemed necessary for CAU 232. The Corrective Action Decision Document and Closure Report have been combined into one report because sample data collected during the July 1999 corrective action investigation (CAI) activities disclosed no evidence of contamination at the site. Contaminants of potential concern (COPCs) addressed during the CAI included total volatile organic compounds, total semivolatile organic compounds, total Resource Conservation and Recovery Act metals, total pesticides, total herbicides, total petroleum hydrocarbons (gasoline and diesel/oil range), polychlorinated biphenyls, isotopic uranium, isotopic plutonium, strontium-90, and gamma-emitting radionuclides. The data confirmed that none of the COPCs identified exceeded preliminary action levels outlined in the CAIP; therefore, no corrective actions were necessary for CAU 232. After the CAI, best management practice activities were completed and included installation of a fence and signs to limit access to the lagoons, cementing Manhole No. 2 and the diverter box, and closing off influent and effluent ends of the sewage lagoon piping. As a result of the CAI, the DOE/NV recommended that: (1) no further actions were required; (2) no Corrective Action Plan would be required; and (3) no use restrictions were required to be placed on the CAU

  12. Correction of Severe Traditional Medication-induced Lower Lid ...

    African Journals Online (AJOL)

    Setting: The correction of the lower lid tarsal ectropion was carried out at the Rachel Eye Center in Abuja, Nigeria. Result: After conservative intervention failed, a free preauricular skin graft of the floppy ectropion led to a stable correction. Conclusions: Harmful traditional eye medication continues to be a problem in the ...

  13. Electroweak vacuum stability and finite quadratic radiative corrections

    Energy Technology Data Exchange (ETDEWEB)

    Masina, Isabella [Ferrara Univ. (Italy). Dipt. di Fisica e Scienze della Terra; INFN, Sezione di Ferrara (Italy); Southern Denmark Univ., Odense (Denmark). CP3-Origins; Southern Denmark Univ., Odense (Denmark). DIAS; Nardini, Germano [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Quiros, Mariano [Institucio Catalana de Recerca i Estudis Avancats (ICREA), Barcelona (Spain); IFAE-IAB, Barcelona (Spain)

    2015-07-15

    If the Standard Model (SM) is an effective theory, as currently believed, it is valid up to some energy scale Λ to which the Higgs vacuum expectation value is sensitive through radiative quadratic terms. The latter destabilize the electroweak vacuum and generate the SM hierarchy problem. For a given perturbative Ultraviolet (UV) completion, the SM cutoff can be computed in terms of fundamental parameters. If the UV mass spectrum involves several scales, the cutoff is not unique and each SM sector has its own UV cutoff Λ_i. We have performed this calculation assuming the Minimal Supersymmetric Standard Model (MSSM) is the SM UV completion. As a result, from the SM point of view, the quadratic corrections to the Higgs mass are equivalent to finite threshold contributions. For the measured values of the top quark and Higgs masses, and depending on the values of the different cutoffs Λ_i, these contributions can cancel even at renormalization scales as low as multi-TeV, unlike the case of a single cutoff where the cancellation only occurs at Planckian energies, a result originally obtained by Veltman. From the MSSM point of view, the requirement of stability of the electroweak minimum under radiative corrections is incorporated into the matching conditions and provides an extra constraint on the Focus Point solution to the little hierarchy problem in the MSSM. These matching conditions can be employed for precise calculations of the Higgs sector in scenarios with heavy supersymmetric fields.

  14. 7 CFR 1730.25 - Corrective action.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Corrective action. 1730.25 Section 1730.25... AGRICULTURE ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Operations and Maintenance Requirements § 1730.25 Corrective action. (a) For any items on the RUS Form 300 rated unsatisfactory (i.e., 0 or 1) by the borrower...

  15. Corrective Action Decision Document for Corrective Action Unit 562: Waste Systems Nevada Test Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Mark Krause

    2010-08-01

    This Corrective Action Decision Document (CADD) presents information supporting the selection of corrective action alternatives (CAAs) leading to the closure of Corrective Action Unit (CAU) 562, Waste Systems, in Areas 2, 23, and 25 of the Nevada Test Site, Nevada. This complies with the requirements of the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the State of Nevada; U.S. Department of Energy (DOE), Environmental Management; U.S. Department of Defense; and DOE, Legacy Management. Corrective Action Unit 562 comprises the following corrective action sites (CASs): • 02-26-11, Lead Shot • 02-44-02, Paint Spills and French Drain • 02-59-01, Septic System • 02-60-01, Concrete Drain • 02-60-02, French Drain • 02-60-03, Steam Cleaning Drain • 02-60-04, French Drain • 02-60-05, French Drain • 02-60-06, French Drain • 02-60-07, French Drain • 23-60-01, Mud Trap Drain and Outfall • 23-99-06, Grease Trap • 25-60-04, Building 3123 Outfalls The purpose of this CADD is to identify and provide the rationale for the recommendation of CAAs for the 13 CASs within CAU 562. Corrective action investigation (CAI) activities were performed from July 27, 2009, through May 12, 2010, as set forth in the CAU 562 Corrective Action Investigation Plan. The purpose of the CAI was to fulfill the following data needs as defined during the data quality objective (DQO) process: • Determine whether COCs are present. • If COCs are present, determine their nature and extent. • Provide sufficient information and data to complete appropriate corrective actions. A data quality assessment (DQA) performed on the CAU 562 data demonstrated the quality and acceptability of the data for use in fulfilling the DQO data needs. Analytes detected during the CAI were evaluated against appropriate final action levels (FALs) to identify the COCs for each CAS. The results of the CAI identified COCs at 10 of the 13 CASs in CAU 562, and thus corrective

  16. 11Li-12C scattering as a four-body problem

    International Nuclear Information System (INIS)

    Formanek, J.; Lombard, R.J.

    1995-01-01

    11Li-12C scattering is described as a four-body problem. The succession of approximations required to obtain the simplified approach of Yabana et al (1992) is examined in detail. We found that whereas their simple model is roughly acceptable for total cross sections (with discrepancies of the order of a few per cent), it has dramatic effects on the differential cross section. Beyond the very forward angles, the problem has to be treated in its full complexity; the various possible intermediate approximations generate differential cross sections which deviate noticeably from the correct calculation. (author)

  17. An open-source software program for performing Bonferroni and related corrections for multiple comparisons

    Directory of Open Access Journals (Sweden)

    Kyle Lesack

    2011-01-01

    Full Text Available Increased type I error resulting from multiple statistical comparisons remains a common problem in the scientific literature. This may result in the reporting and promulgation of spurious findings. One approach to this problem is to correct groups of P-values for "family-wide significance" using a Bonferroni correction or the less conservative Bonferroni-Holm correction, or to correct for the "false discovery rate" with a Benjamini-Hochberg correction. Although several solutions are available for performing these corrections through commercially available software, there are no widely available, easy-to-use open-source programs to perform these calculations. In this paper we present an open-source program written in Python 3.2 that performs calculations for the standard Bonferroni, Bonferroni-Holm and Benjamini-Hochberg corrections.
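
    The abstract names three standard adjustments; a minimal, independent Python sketch of them (not the authors' program) is given below, returning adjusted p-values that can be compared directly with the chosen significance level.

    ```python
    import numpy as np

    def bonferroni(p):
        """Adjusted p-values: multiply by the number of tests, capped at 1."""
        p = np.asarray(p, dtype=float)
        return np.minimum(p * p.size, 1.0)

    def holm(p):
        """Bonferroni-Holm step-down adjustment."""
        p = np.asarray(p, dtype=float)
        m, order = p.size, np.argsort(p)
        adj = np.minimum(np.maximum.accumulate((m - np.arange(m)) * p[order]), 1.0)
        out = np.empty(m)
        out[order] = adj
        return out

    def benjamini_hochberg(p):
        """Benjamini-Hochberg false discovery rate adjustment."""
        p = np.asarray(p, dtype=float)
        m, order = p.size, np.argsort(p)
        ranked = p[order] * m / np.arange(1, m + 1)
        adj = np.minimum(np.minimum.accumulate(ranked[::-1])[::-1], 1.0)
        out = np.empty(m)
        out[order] = adj
        return out

    print(holm([0.01, 0.04, 0.03]))   # [0.03 0.06 0.06]
    ```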

  18. New component-based normalization method to correct PET system models

    International Nuclear Information System (INIS)

    Kinouchi, Shoko; Miyoshi, Yuji; Suga, Mikio; Yamaya, Taiga; Yoshida, Eiji; Nishikido, Fumihiko; Tashima, Hideaki

    2011-01-01

    Normalization correction is necessary to obtain high-quality reconstructed images in positron emission tomography (PET). There are two basic types of normalization methods: the direct method and component-based methods. The former method suffers from the problem that a huge number of counts in the blank scan data is required. Therefore, the latter methods have been proposed to obtain normalization coefficients with high statistical accuracy from a small number of counts in the blank scan data. In iterative image reconstruction methods, on the other hand, the quality of the obtained reconstructed images depends on the system modeling accuracy. Therefore, the normalization weighting approach, in which normalization coefficients are applied directly to the system matrix instead of to a sinogram, has been proposed. In this paper, we propose a new component-based normalization method to correct errors in the system model. In the proposed method, two components are defined and are calculated iteratively in such a way as to minimize errors of system modeling. To compare the proposed method and the direct method, we applied both methods to our small OpenPET prototype system. We achieved acceptable statistical accuracy of normalization coefficients while reducing the count number of the blank scan data to one-fortieth of that required in the direct method. (author)

  19. Correction magnet power supplies for APS machine

    International Nuclear Information System (INIS)

    Kang, Y.G.

    1991-01-01

    The Advanced Photon Source machine requires a number of correction magnets; five kinds for the storage ring, two for the injector synchrotron, and two for the positron accumulator ring. Three types of bipolar power supply will be used for all the correction magnets. This paper describes the design aspects and considerations for correction magnet power supplies for the APS machine. 3 refs., 3 figs., 1 tab

  20. [Aggression and mobbing among correctional officers].

    Science.gov (United States)

    Merecz-Kot, Dorota; Cebrzyńska, Joanna

    2008-01-01

    The paper addresses the issue of violence among correctional officers. The aim of the study was to assess the frequency of exposure to violence in this professional group. The study comprised a sample of 222 correctional officers who voluntarily and anonymously completed the MDM questionnaire. The MDM Questionnaire allows for assessing exposure to aggression and mobbing at work. Preliminary assessment of exposure to single aggressive acts and mobbing shows a quite alarming tendency: around one third of the subjects under study experienced repetitive aggressive acts from coworkers and/or superiors. The problem of organizational aggression in correctional institutions should be examined in detail in order to develop effective preventive measures against violent behaviors occurring at work.

  1. Elementary education in the correctional institution in Valjevo

    Directory of Open Access Journals (Sweden)

    Milak Siniša

    2015-01-01

    Full Text Available Education of the residents of correctional institutions for juveniles does not have a long tradition in our country. That is the reason why many questions linked to the organization and forms of their education, especially elementary education, remain open. The aim of this article is to present the process of acquiring elementary education by the residents of the correctional institution for juveniles. Since the Correctional Institute in Valjevo is at the moment the only institution in our country in which juvenile offenders serve their sentences, the question of their education in the institution is worth our attention. The paper presents the results of a qualitative research study conducted in this institution. The examinees were the teachers who teach the residents in the school within the Correctional Institute in Valjevo. The teachers evaluated the existing organization and education of residents mainly positively, and singled out only some financial, space, and technical problems. The research pointed out the importance and value of this form of education for the residents, especially for their subsequent resocialization and integration into society after leaving the Institute, as well as the importance of pedagogy for solving the problems of educating residents of correctional institutions for juveniles.

  2. Geometrical E-beam proximity correction for raster scan systems

    Science.gov (United States)

    Belic, Nikola; Eisenmann, Hans; Hartmann, Hans; Waas, Thomas

    1999-04-01

    High pattern fidelity is a basic requirement for the generation of masks containing sub-micron structures and for direct writing. Increasing needs, mainly emerging from OPC at mask level and from x-ray lithography, require a correction of the e-beam proximity effect. Most e-beam writers are raster scan systems. This paper describes a new method for geometrical pattern correction in order to provide a correction solution for e-beam systems that are not able to apply variable doses.

  3. Differential requirement for utrophin in the induced pluripotent stem cell correction of muscle versus fat in muscular dystrophy mice.

    Directory of Open Access Journals (Sweden)

    Amanda J Beck

    Full Text Available Duchenne muscular dystrophy (DMD) is an incurable degenerative muscle disorder. We injected WT mouse induced pluripotent stem cells (iPSCs) into mdx and mdx∶utrophin mutant blastocysts, which are predisposed to develop DMD with an increasing degree of severity (mdx <<< mdx∶utrophin). In mdx chimeras, iPSC-dystrophin was supplied to the muscle sarcolemma to effect corrections at morphological and functional levels. Dystrobrevin was observed in dystrophin-positive and, to a lesser extent, utrophin-positive areas. In the mdx∶utrophin mutant chimeras, although iPSC-dystrophin was also supplied to the muscle sarcolemma, mice still displayed poor skeletal muscle histopathology and negligible levels of dystrobrevin in dystrophin- and utrophin-negative areas. Dystrophin-expressing tissues are not the only tissues affected by iPSCs. Mdx and mdx∶utrophin mice have a reduced fat/body weight ratio, but iPSC injection normalized this parameter in both mdx and mdx∶utrophin chimeras, despite the fact that utrophin was compromised in the mdx∶utrophin chimeric fat. The results suggest that the presence of utrophin is required for the iPSC corrections in skeletal muscle. Furthermore, the results highlight a potential (utrophin-independent) non-cell autonomous role for iPSC-dystrophin in the correction of non-muscle tissue like fat, which is intimately related to the muscle.

  4. NP-hardness of decoding quantum error-correction codes

    Science.gov (United States)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy would simplify the decoding since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  5. NP-hardness of decoding quantum error-correction codes

    International Nuclear Information System (INIS)

    Hsieh, Min-Hsiu; Le Gall, Francois

    2011-01-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very much different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy would simplify the decoding since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  6. A simple model for correcting the zero point energy problem in classical trajectory simulations of polyatomic molecules

    International Nuclear Information System (INIS)

    Miller, W.H.; Hase, W.L.; Darling, C.L.

    1989-01-01

    A simple model is proposed for correcting problems with zero point energy in classical trajectory simulations of dynamical processes in polyatomic molecules. The ''problems'' referred to are that classical mechanics allows the vibrational energy in a mode to decrease below its quantum zero point value, and since the total energy is conserved classically this can allow too much energy to pool in other modes. The proposed model introduces hard sphere-like terms in action-angle variables that prevent the vibrational energy in any mode from falling below its zero point value. The algorithm which results is quite simple in terms of the cartesian normal modes of the system: if the energy in a mode k, say, decreases below its zero point value at time t, then at this time the momentum P_k for that mode has its sign changed, and the trajectory continues. This is essentially a time reversal for mode k (only!), and it conserves the total energy of the system. One can think of the model as supplying impulsive ''quantum kicks'' to a mode whose energy attempts to fall below its zero point value, a kind of ''Planck demon'' analogous to a Brownian-like random force. The model is illustrated by application to a model of CH overtone relaxation
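
    A minimal sketch of the momentum-reversal rule just described (assumptions: mass-weighted harmonic normal modes with coordinates q, momenta p and frequencies omega, and hbar = 1 units; this is an illustration, not the authors' code):

    ```python
    import numpy as np

    def enforce_zero_point(q, p, omega, hbar=1.0):
        """Flip the momentum of any normal mode whose energy drops below its zero-point value."""
        e_mode = 0.5 * (p**2 + (omega * q)**2)   # harmonic energy per normal mode
        e_zpe = 0.5 * hbar * omega               # zero-point energy per mode
        # Time-reverse only the offending modes; the total energy is unchanged.
        return np.where(e_mode < e_zpe, -p, p)
    ```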

  7. On the decoding process in ternary error-correcting output codes.

    Science.gov (United States)

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

    A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
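
    As a rough illustration of the ternary setting (the decoding measures proposed in the paper are not reproduced here; the normalization choice below is an assumption), the sketch decodes a vector of binary classifier outputs against a ternary code matrix while ignoring the "do not care" positions of each class codeword:

    ```python
    import numpy as np

    def decode(M, outputs):
        """M: (n_classes, n_classifiers) code matrix over {-1, 0, +1}; outputs: predictions in {-1, +1}.
        Returns the index of the closest class, counting disagreements only on non-zero positions."""
        M = np.asarray(M)
        outputs = np.asarray(outputs)
        care = (M != 0)
        mismatches = care & (M != outputs)                 # disagreements on positions the class cares about
        dist = mismatches.sum(axis=1) / care.sum(axis=1)   # normalise so sparse codewords are not favoured
        return int(np.argmin(dist))

    M = np.array([[+1, +1,  0],
                  [-1,  0, +1],
                  [ 0, -1, -1]])
    print(decode(M, [+1, -1, -1]))   # -> 2
    ```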

  8. ORBIT CORRECTION IN A NON-SCALING FFAG

    CERN Document Server

    Kelliher, D J; Sheehy, S L

    2010-01-01

    EMMA - the Electron Model of Many Applications - is to be built at the STFC Daresbury Laboratory in the UK and will be the first non-scaling FFAG ever constructed. The purpose of EMMA is to study beam dynamics in such an accelerator. The EMMA orbit correction scheme must deal with two characteristics of a non-scaling FFAG: i.e. the lack of a well defined reference orbit and the variation with momentum of the phase advance. In this study we present a novel orbit correction scheme that avoids the former problem by instead aiming to maximise both the symmetry of the orbit and the physical aperture of the beam. The latter problem is dealt with by optimising the corrector strengths over the energy range.

  9. Assessing student expertise in introductory physics with isomorphic problems. II. Effect of some potential factors on problem solving and transfer

    Directory of Open Access Journals (Sweden)

    Chandralekha Singh

    2008-03-01

    Full Text Available In this paper, we explore the use of isomorphic problem pairs (IPPs to assess introductory physics students’ ability to solve and successfully transfer problem-solving knowledge from one context to another in mechanics. We call the paired problems “isomorphic” because they require the same physics principle to solve them. We analyze written responses and individual discussions for a range of isomorphic problems. We examine potential factors that may help or hinder transfer of problem-solving skills from one problem in a pair to the other. For some paired isomorphic problems, one context often turned out to be easier for students in that it was more often correctly solved than the other. When quantitative and conceptual questions were paired and given back to back, students who answered both questions in the IPP often performed better on the conceptual questions than those who answered the corresponding conceptual questions only. Although students often took advantage of the quantitative counterpart to answer a conceptual question of an IPP correctly, when only given the conceptual question, students seldom tried to convert it into a quantitative question, solve it, and then reason about the solution conceptually. Even in individual interviews when students who were given only conceptual questions had difficulty and the interviewer explicitly encouraged them to convert the conceptual question into the corresponding quantitative problem by choosing appropriate variables, a majority of students were reluctant and preferred to guess the answer to the conceptual question based upon their gut feeling. Misconceptions associated with friction in some problems were so robust that pairing them with isomorphic problems not involving friction did not help students discern their underlying similarities. Alternatively, from the knowledge-in-pieces perspective, the activation of the knowledge resource related to friction was so strongly and automatically

  10. A hybrid numerical method for orbit correction

    International Nuclear Information System (INIS)

    White, G.; Himel, T.; Shoaee, H.

    1997-09-01

    The authors describe a simple hybrid numerical method for beam orbit correction in particle accelerators. The method both overcomes degeneracy in the linear system being solved and respects bounds on the solution. It uses the Singular Value Decomposition (SVD) to find and remove the null-space in the system, followed by a bounded Linear Least Squares analysis of the remaining recast problem. It was developed for correcting orbit and dispersion in the B-factory rings
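
    A minimal numpy/scipy sketch of the two-step idea described above (variable names and the truncation tolerance are assumptions, not the original implementation): an SVD truncation removes the (near-)null-space of the response matrix, and the recast problem is then solved with bounds on the corrector strengths.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    def correct_orbit(response, orbit, bound, tol=1e-6):
        """response: orbit response matrix (BPM reading per unit corrector kick);
        orbit: measured orbit to cancel; bound: maximum allowed |corrector strength|."""
        U, s, Vt = np.linalg.svd(response, full_matrices=False)
        keep = s > tol * s[0]                            # drop directions spanning the (near-)null-space
        reduced = (U[:, keep] * s[keep]) @ Vt[keep]      # rank-reduced, non-degenerate system
        result = lsq_linear(reduced, -orbit, bounds=(-bound, bound))
        return result.x                                  # corrector settings within the allowed range
    ```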

  11. Correction of the closed orbit and vertical dispersion and the tuning and field correction system in ISABELLE

    International Nuclear Information System (INIS)

    Parzen, G.

    1979-01-01

    Each ring in ISABELLE will have 10 separately powered systematic field correction coils to make required corrections which are the same in corresponding magnets around the ring. These corrections include changing the ν-value, shaping the working line in ν-space, correction of field errors due to iron saturation effects, the conductor arrangements, the construction of the coil ends, diamagnetic effects in the superconductor and to rate-dependent induced currents. The twelve insertion quadrupoles in the insertion surrounding each crossing point will each have a quadrupole trim coil. The closed orbit will be controlled by a system of 84 horizontal dipole coils and 90 vertical dipole coils in each ring, each coil being separately powered. This system of dipole coils will also be used to correct the vertical dispersion at the crossing points. Two families of skew quadrupoles per ring will be provided for correction of the coupling between the horizontal and vertical motions. Although there will be 258 separately powered correction coils in each ring

  12. Simulation and correction of the closed orbit in the cooler synchrotron COSY

    International Nuclear Information System (INIS)

    Dinev, D.

    1990-11-01

    In this paper the problem of COSY closed orbit control and its correction is discussed. The results of a simulation of the COSY closed orbit and its correction using different correction methods are given. The interactive computer program ORBIT, created especially for the simulation and correction of the COSY closed orbit, is described. The paper also includes a survey of orbit correction methods and related topics. (orig.)

  13. Exploring students’ perceived and actual ability in solving statistical problems based on Rasch measurement tools

    Science.gov (United States)

    Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati

    2017-09-01

    One of the important skills required of any student learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This will enable them to arrive at a conclusion and make a significant contribution and decision for society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on hypothesis testing which required them to solve the problems using the confidence interval, traditional and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems and it is listed as one of the concepts students find difficult to grasp. The objectives of this study are to explore students' perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult to grasp. Students' perceived and actual ability were measured based on the instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is developed based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving the statistical problems using the confidence interval and p-value approaches even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students attempted these problems correctly, for reasons that include their lack of understanding of confidence intervals and probability values.

  14. Single molecule sequencing-guided scaffolding and correction of draft assemblies.

    Science.gov (United States)

    Zhu, Shenglong; Chen, Danny Z; Emrich, Scott J

    2017-12-06

    Although single molecule sequencing is still improving, the lengths of the generated sequences are inevitably an advantage in genome assembly. Prior work that utilizes long reads to conduct genome assembly has mostly focused on correcting sequencing errors and improving the contiguity of de novo assemblies. We propose a disassembling-reassembling approach for both correcting structural errors in the draft assembly and scaffolding a target assembly based on error-corrected single molecule sequences. To achieve this goal, we formulate a maximum alternating path cover problem. We prove that this problem is NP-hard, and solve it by a 2-approximation algorithm. Our experimental results show that our approach can improve the structural correctness of target assemblies at the cost of some contiguity, even with smaller amounts of long reads. In addition, our reassembling process can also serve as a competitive scaffolder relative to well-established assembly benchmarks.

  15. Global intensity correction in dynamic scenes

    NARCIS (Netherlands)

    Withagen, P.J.; Schutte, K.; Groen, F.C.A.

    2007-01-01

    Changing image intensities causes problems for many computer vision applications operating in unconstrained environments. We propose generally applicable algorithms to correct for global differences in intensity between images recorded with a static or slowly moving camera, regardless of the cause

  16. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    Science.gov (United States)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.
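
    A minimal sketch of the dictionary-driven correction idea (the actual dictionary entries, edit costs and heuristic search of the paper are not reproduced; networkx subgraph matching and the greedy loop are illustrative assumptions): each entry pairs an erroneous subgraph pattern with a rewrite function, and matching edits are applied one at a time until no pattern matches.

    ```python
    import networkx as nx                       # pattern and input graphs are networkx Graph objects
    from networkx.algorithms import isomorphism

    def correct_topology(graph, dictionary, max_edits=100):
        """dictionary: list of (pattern_graph, rewrite_fn) pairs; rewrite_fn(graph, mapping)
        edits the graph in place, where mapping sends graph nodes to pattern nodes."""
        for _ in range(max_edits):
            for pattern, rewrite in dictionary:
                matcher = isomorphism.GraphMatcher(graph, pattern)
                mapping = next(matcher.subgraph_isomorphisms_iter(), None)
                if mapping is not None:
                    rewrite(graph, mapping)   # apply the stored correction for this error type
                    break                     # re-scan: one edit may create or remove other matches
            else:
                return graph                  # no pattern matched: graph is considered corrected
        return graph
    ```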

  17. Corrective Action Investigation Plan for Corrective Action Unit 552: Area 12 Muckpile and Ponds, Nevada Test Site, Nevada, Rev. 1

    Energy Technology Data Exchange (ETDEWEB)

    Robert F. Boehlecke

    2005-01-01

    Corrective Action Unit 552 is being investigated because man-made radionuclides and chemical contaminants may be present in concentrations that could potentially pose an unacceptable risk to human health and/or the environment. The CAI will be conducted following the data quality objectives (DQOs) developed by representatives of the Nevada Division of Environmental Protection (NDEP) and the DOE National Nuclear Security Administration Nevada Site Office (NNSA/NSO). The DQOs are used to identify the type, amount, and quality of data needed to define the nature and extent of contamination and identify and evaluate the most appropriate corrective action alternatives for CAU 552. The primary problem statement for the investigation is: ''Existing information on the nature and extent of potential contamination is insufficient to evaluate and recommend corrective action alternatives for CAS 12-23-05.'' To address this problem statement, the resolution of the following two decision statements is required: (1) The Decision I statement is: ''Is a contaminant present within the CAU at a concentration that could pose an unacceptable risk to human health and the environment?'' Any site-related contaminant detected at a concentration exceeding the corresponding preliminary action level (PAL), as defined in Section A.1.4.2, will be considered a contaminant of concern (COC). A COC is defined as a site-related constituent that exceeds the screening criteria (PAL). The presence of a contaminant within each CAS is defined as the analytical detection of a COC. (2) The Decision II statement is: ''Determine the extent of contamination identified above PALs.'' This decision will be achieved by the collection of data that are adequate to define the extent of COCs. Decision II samples are used to determine the lateral and vertical extent of the contamination as well as the likelihood of COCs to migrate outside of the site

  18. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)

  19. Detection and correction of patient movement in prostate brachytherapy seed reconstruction

    Science.gov (United States)

    Lam, Steve T.; Cho, Paul S.; Marks, Robert J., II; Narayanan, Sreeram

    2005-05-01

    Intraoperative dosimetry of prostate brachytherapy can help optimize the dose distribution and potentially improve clinical outcome. Evaluation of dose distribution during the seed implant procedure requires the knowledge of 3D seed coordinates. Fluoroscopy-based seed localization is a viable option. From three x-ray projections obtained at different gantry angles, 3D seed positions can be determined. However, when local anaesthesia is used for prostate brachytherapy, the patient movement during fluoroscopy image capture becomes a practical problem. If uncorrected, the errors introduced by patient motion between image captures would cause seed mismatches. Subsequently, the seed reconstruction algorithm would either fail to reconstruct or yield erroneous results. We have developed an algorithm that permits detection and correction of patient movement that may occur between fluoroscopy image captures. The patient movement is decomposed into translational shifts along the tabletop and rotation about an axis perpendicular to the tabletop. The property of spatial invariance of the co-planar imaging geometry is used for lateral movement correction. Cranio-caudal movement is corrected by analysing the perspective invariance along the x-ray axis. Rotation is estimated by an iterative method. The method can detect and correct for the range of patient movement commonly seen in the clinical environment. The algorithm has been implemented for routine clinical use as the preprocessing step for seed reconstruction.

  20. Noticing relevant problem features: activating prior knowledge affects problem solving by guiding encoding

    Science.gov (United States)

    Crooks, Noelle M.; Alibali, Martha W.

    2013-01-01

    This study investigated whether activating elements of prior knowledge can influence how problem solvers encode and solve simple mathematical equivalence problems (e.g., 3 + 4 + 5 = 3 + __). Past work has shown that such problems are difficult for elementary school students (McNeil and Alibali, 2000). One possible reason is that children's experiences in math classes may encourage them to think about equations in ways that are ultimately detrimental. Specifically, children learn a set of patterns that are potentially problematic (McNeil and Alibali, 2005a): the perceptual pattern that all equations follow an “operations = answer” format, the conceptual pattern that the equal sign means “calculate the total”, and the procedural pattern that the correct way to solve an equation is to perform all of the given operations on all of the given numbers. Upon viewing an equivalence problem, knowledge of these patterns may be reactivated, leading to incorrect problem solving. We hypothesized that these patterns may negatively affect problem solving by influencing what people encode about a problem. To test this hypothesis in children would require strengthening their misconceptions, and this could be detrimental to their mathematical development. Therefore, we tested this hypothesis in undergraduate participants. Participants completed either control tasks or tasks that activated their knowledge of the three patterns, and were then asked to reconstruct and solve a set of equivalence problems. Participants in the knowledge activation condition encoded the problems less well than control participants. They also made more errors in solving the problems, and their errors resembled the errors children make when solving equivalence problems. Moreover, encoding performance mediated the effect of knowledge activation on equivalence problem solving. Thus, one way in which experience may affect equivalence problem solving is by influencing what students encode about the

  1. Maintenance performance improvement with System Dynamics : A Corrective Maintenance showcase

    NARCIS (Netherlands)

    Deenen, R.E.M.; Van Daalen, C.E.; Koene, E.G.C.

    2008-01-01

    This paper presents a case study of an analysis of a Corrective Maintenance process to realize performance improvement. The Corrective Maintenance process is supported by SAP, which has indicated the performance realisation problem. System Dynamics is used in a Group Model Building process to

  2. Economic benefits of power factor correction at a nuclear facility

    International Nuclear Information System (INIS)

    Boger, R.M.; Dalos, W.; Juguilon, M.E.

    1986-01-01

    The economic benefits of correcting poor power factor at an operating nuclear facility are shown. A project approach for achieving rapid return of investment without disrupting plant availability is described. Examples of technical problems associated with using capacitors for power factor correction are presented

  3. Real-Time Corrected Traffic Correlation Model for Traffic Flow Forecasting

    Directory of Open Access Journals (Sweden)

    Hua-pu Lu

    2015-01-01

    Full Text Available This paper focuses on the problem of short-term traffic flow forecasting. The main goal is to put forward a traffic correlation model and a real-time correction algorithm for traffic flow forecasting. The traffic correlation model is established based on the temporal-spatial-historical correlation characteristics of traffic big data. In order to simplify the traffic correlation model, this paper presents a correction coefficients optimization algorithm. Considering the multistate characteristic of traffic big data, a dynamic part is added to the traffic correlation model. A real-time correction algorithm based on a Fuzzy Neural Network is presented to overcome the nonlinear mapping problems. A case study based on a real-world road network in Beijing, China, is implemented to test the efficiency and applicability of the proposed modeling methods.

  4. Von Weizsaecker and exchange corrections in the Thomas Fermi theory

    International Nuclear Information System (INIS)

    Benguria, R.D.

    1979-01-01

    Two corrections to the Thomas-Fermi theory of atoms are studied. First the correction for exchange, that is, the effect of the Pauli principle on the interaction energy, is considered. The defining variational problem is non-convex and standard techniques to prove existence of a minimizing solution do not apply. Existence and uniqueness of solutions are established by convexifying or relaxing the energy functional. Properties of the minimizing solution are studied. A second correction due to von Weizsaecker is also discussed. Finally the dual principle to the Thomas-Fermi variational problem is studied (only in the neutral case). A dual principle is suggested for the ionic case. Also, a review of recent rigorous results concerning Thomas-Fermi theory is presented

  5. Development of a reactivity worth correction scheme for the one-dimensional transient analysis

    International Nuclear Information System (INIS)

    Cho, J. Y.; Song, J. S.; Joo, H. G.; Kim, H. Y.; Kim, K. S.; Lee, C. C.; Zee, S. Q.

    2003-11-01

    This work develops a reactivity worth correction scheme for the MASTER one-dimensional (1-D) calculation model. The 1-D cross section variations according to the core state in the MASTER input file, which are produced for the 1-D calculation performed by the MASTER code, are incorrect in all core states except the exact core state for which the variations were produced. Therefore this scheme performs reactivity worth correction factor calculations before the main 1-D transient calculation, and generates correction factors for the boron worth, the Doppler and moderator temperature coefficients, and the control rod worth, respectively. These correction factors force the one-dimensional calculation to reproduce the same reactivity worths as the 3-dimensional calculation. The scheme is applied to the control bank withdrawal accident of Yonggwang unit 1 cycle 14, and its performance is examined by comparing the 1-D results with the 3-D results. This problem is analyzed by the RETRAN-MASTER consolidated code system. Most results of the 1-D calculation, including the transient power behavior and the peak power and its timing, are very similar to the 3-D results. In terms of MASTER neutronics computing time, the 1-D calculation, including the correction factor calculation, requires negligible time compared with the 3-D case. Therefore, the reactivity worth correction scheme is concluded to perform very well in that it enables the 1-D calculation to produce very accurate results in little computing time
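
    A trivial numerical illustration of the idea (the numbers are hypothetical; the actual MASTER factors are computed per parameter from full 3-D solutions): a correction factor is the ratio of the 3-D reactivity worth to the uncorrected 1-D worth, so scaling the 1-D value by it reproduces the 3-D worth by construction.

    ```python
    def correction_factor(worth_3d, worth_1d):
        """Ratio that forces the 1-D model to reproduce the 3-D reactivity worth."""
        return worth_3d / worth_1d

    # Hypothetical control rod worths in pcm: 3-D reference vs. uncorrected 1-D model
    f_rod = correction_factor(950.0, 870.0)      # ~1.09
    print(f_rod * 870.0)                         # 950.0 -- the 3-D worth is recovered
    ```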

  6. A Generalized Correction for Attenuation.

    Science.gov (United States)

    Petersen, Anne C.; Bock, R. Darrell

    Use of the usual bivariate correction for attenuation with more than two variables presents two statistical problems. This pairwise method may produce a covariance matrix which is not at least positive semi-definite, and the bivariate procedure does not consider the possible influences of correlated errors among the variables. The method described…
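
    For context, a short sketch of the usual bivariate (pairwise) correction the abstract refers to, together with the positive semi-definiteness check that the pairwise approach can fail (the reliabilities and correlations below are made-up numbers):

    ```python
    import numpy as np

    def disattenuate(r_xy, rel_x, rel_y):
        """Classical bivariate correction: r_true = r_xy / sqrt(rel_x * rel_y)."""
        return r_xy / np.sqrt(rel_x * rel_y)

    R = np.array([[1.0, 0.6, 0.7],        # observed correlations (hypothetical)
                  [0.6, 1.0, 0.5],
                  [0.7, 0.5, 1.0]])
    rel = np.array([0.7, 0.6, 0.8])       # reliabilities (hypothetical)

    R_corrected = R / np.sqrt(np.outer(rel, rel))   # pairwise correction applied to every entry
    np.fill_diagonal(R_corrected, 1.0)
    # The pairwise-corrected matrix need not be positive semi-definite:
    print(np.linalg.eigvalsh(R_corrected).min())
    ```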

  7. Physical Health Problems and Environmental Challenges Influence Balancing Behaviour in Laying Hens.

    Directory of Open Access Journals (Sweden)

    Stephanie LeBlanc

    Full Text Available With rising public concern for animal welfare, many major food chains and restaurants are changing their policies, strictly buying their eggs from non-cage producers. However, with the additional space in these cage-free systems to perform natural behaviours and movements comes the risk of injury. We evaluated the ability to maintain balance in adult laying hens with health problems (footpad dermatitis, keel damage, poor wing feather cover; n = 15) using a series of environmental challenges and compared such abilities with those of healthy birds (n = 5). Environmental challenges consisted of visual and spatial constraints, created using a head mask, perch obstacles, and static and swaying perch states. We hypothesized that perch movement, environmental challenges, and diminished physical health would negatively impact perching performance demonstrated as balance (as measured by time spent on the perch and by the number of falls off the perch) and would require more exaggerated correctional movements. We measured perching stability whereby each bird underwent eight 30-second trials on a static and swaying perch: with and without disrupted vision (head mask), with and without space limitations (obstacles), and combinations thereof. Video recordings (600 Hz) and a three-axis accelerometer/gyroscope (100 Hz) were used to measure the number of jumps/falls, latencies to leave the perch, as well as the magnitude and direction of both linear and rotational balance-correcting movements. Laying hens with and without physical health problems, in both challenged and unchallenged environments, managed to perch and remain off the ground. We attribute this capacity to our training of the birds. Environmental challenges and physical state had an effect on the use of accelerations and rotations to stabilize themselves on a perch. Birds with physical health problems performed a higher frequency of rotational corrections to keep the body centered over the perch, whereas, for both

  8. Physical Health Problems and Environmental Challenges Influence Balancing Behaviour in Laying Hens.

    Science.gov (United States)

    LeBlanc, Stephanie; Tobalske, Bret; Quinton, Margaret; Springthorpe, Dwight; Szkotnicki, Bill; Wuerbel, Hanno; Harlander-Matauschek, Alexandra

    2016-01-01

    With rising public concern for animal welfare, many major food chains and restaurants are changing their policies, strictly buying their eggs from non-cage producers. However, with the additional space in these cage-free systems to perform natural behaviours and movements comes the risk of injury. We evaluated the ability to maintain balance in adult laying hens with health problems (footpad dermatitis, keel damage, poor wing feather cover; n = 15) using a series of environmental challenges and compared such abilities with those of healthy birds (n = 5). Environmental challenges consisted of visual and spatial constraints, created using a head mask, perch obstacles, and static and swaying perch states. We hypothesized that perch movement, environmental challenges, and diminished physical health would negatively impact perching performance demonstrated as balance (as measured by time spent on perch and by number of falls of the perch) and would require more exaggerated correctional movements. We measured perching stability whereby each bird underwent eight 30-second trials on a static and swaying perch: with and without disrupted vision (head mask), with and without space limitations (obstacles) and combinations thereof. Video recordings (600 Hz) and a three-axis accelerometer/gyroscope (100 Hz) were used to measure the number of jumps/falls, latencies to leave the perch, as well as magnitude and direction of both linear and rotational balance-correcting movements. Laying hens with and without physical health problems, in both challenged and unchallenged environments, managed to perch and remain off the ground. We attribute this capacity to our training of the birds. Environmental challenges and physical state had an effect on the use of accelerations and rotations to stabilize themselves on a perch. Birds with physical health problems performed a higher frequency of rotational corrections to keep the body centered over the perch, whereas, for both health categories

  9. Manual for investigation and correction of feedwater heater failures

    International Nuclear Information System (INIS)

    Bell, R.J.; Diaz-Tous, I.A.; Bartz, J.A.

    1993-01-01

    The Electric Power Research Institute (EPRI) has sponsored the development of a recently published manual designed to assist utility personnel in identifying and correcting closed feedwater heater problems. The main portion of the manual describes common failure modes, probable means of identifying root causes, and appropriate corrective actions. These cover materials selection, fabrication practices, design, normal/abnormal operation, and maintenance. The manual appendices include various data intended to aid those involved in monitoring and condition assessment of feedwater heaters. This paper contains a detailed overview of the manual content and suggested means for its efficient use by utility engineers and operations and maintenance personnel who are charged with performing investigations to identify the root cause(s) of closed feedwater heater problems/failures and to provide appropriate corrective actions. 4 refs., 3 figs., 2 tabs

  10. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 97: Yucca Flat/Climax Mine Nevada National Security Site, Nevada, Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Farnham, Irene [Navarro, Las Vegas, NV (United States)

    2017-08-01

    This corrective action decision document (CADD)/corrective action plan (CAP) has been prepared for Corrective Action Unit (CAU) 97, Yucca Flat/Climax Mine, Nevada National Security Site (NNSS), Nevada. The Yucca Flat/Climax Mine CAU is located in the northeastern portion of the NNSS and comprises 720 corrective action sites. A total of 747 underground nuclear detonations took place within this CAU between 1957 and 1992 and resulted in the release of radionuclides (RNs) in the subsurface in the vicinity of the test cavities. The CADD portion describes the Yucca Flat/Climax Mine CAU data-collection and modeling activities completed during the corrective action investigation (CAI) stage, presents the corrective action objectives, and describes the actions recommended to meet the objectives. The CAP portion describes the corrective action implementation plan. The CAP presents CAU regulatory boundary objectives and initial use-restriction boundaries identified and negotiated by DOE and the Nevada Division of Environmental Protection (NDEP). The CAP also presents the model evaluation process designed to build confidence that the groundwater flow and contaminant transport modeling results can be used for the regulatory decisions required for CAU closure. The UGTA strategy assumes that active remediation of subsurface RN contamination is not feasible with current technology. As a result, the corrective action is based on a combination of characterization and modeling studies, monitoring, and institutional controls. The strategy is implemented through a four-stage approach that comprises the following: (1) corrective action investigation plan (CAIP), (2) CAI, (3) CADD/CAP, and (4) closure report (CR) stages.

  11. Why should correction values be better known than the measurand true value?

    International Nuclear Information System (INIS)

    Pavese, Franco

    2013-01-01

    Since the beginning of the history of modern measurement science, experimenters have faced the problem of dealing with systematic effects, as distinct from, and opposed to, random effects. Two main schools of thinking stemmed from the empirical and theoretical exploration of the problem, one dictating that the two species should be kept and reported separately, the other indicating ways to combine the two species into a single numerical value for the total uncertainty (often indicated as 'error'). The second way of thinking was adopted by the GUM, which generally assumes that the expected value of systematic effects is null by requiring, for all systematic effects taken into account in the model, that corresponding 'corrections' be applied to the measured values before the uncertainty analysis is performed. On the other hand, regarding the value of the measurand intended to be the object of measurement, classical statistics calls it the 'true value', admitting that a value should exist objectively (e.g. the value of a fundamental constant), and that any experimental operation aims at obtaining an ideally exact measure of it. However, due to the uncertainty affecting every measurement process, this goal can be attained only approximately, in the sense that nobody can ever know exactly how much any measured value differs from the true value. The paper discusses the credibility of the numerical value attributed to an estimated correction, compared with the credibility of the estimate of the location of the true value, concluding that the true value of a correction should be considered as imprecisely evaluable as the true value of any 'input quantity', and of the measurand itself. From this conclusion, one should derive that the distinction between 'input quantities' and 'corrections' is not justified and not useful.

  12. A Preliminary Bloom's Taxonomy Assessment of End-of-Chapter Problems in Business School Textbooks

    Science.gov (United States)

    Marshall, Jennings B.; Carson, Charles M.

    2008-01-01

    This article examines textbook problems used in a sampling of some of the most common core courses found in schools of business to ascertain what level of learning, as defined by Bloom's Taxonomy, is required to provide a correct answer. A set of working definitions based on Bloom's Taxonomy (Bloom & Krathwohl, 1956) was developed for the six…

  13. Parametric linear programming for a materials requirement planning problem solution with uncertainty

    OpenAIRE

    Martin Darío Arango Serna; Conrado Augusto Serna; Giovanni Pérez Ortega

    2010-01-01

    Using fuzzy set theory as a methodology for modelling and analysing decision systems is particularly interesting for researchers in industrial engineering because it allows qualitative and quantitative analysis of problems involving uncertainty and imprecision. Thus, in an effort to gain a better understanding of the use of fuzzy logic in industrial engineering, more specifically in the field of production planning, this article was aimed at providing a materials requirement planning (MRP) pr...

  14. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 447: Project Shoal Area, Subsurface, Nevada, Rev. No.: 3 with Errata Sheet

    Energy Technology Data Exchange (ETDEWEB)

    Tim Echelard

    2006-03-01

    This Corrective Action Decision Document/Corrective Action Plan (CADD/CAP) has been prepared for Corrective Action Unit (CAU) 447, Project Shoal Area (PSA)-Subsurface, Nevada, in accordance with the "Federal Facility Agreement and Consent Order" (FFACO) (1996). Corrective Action Unit 447 is located in the Sand Springs Mountains in Churchill County, Nevada, approximately 48 kilometers (30 miles) southeast of Fallon, Nevada. The CADD/CAP combines the decision document (CADD) with the Corrective Action Plan (CAP) and provides or references the specific information necessary to recommend corrective actions for CAU 447, as provided in the FFACO. Corrective Action Unit 447 consists of two corrective action sites (CASs): CAS 57-49-01, Emplacement Shaft, and CAS 57-57-001, Cavity. The emplacement shaft (CAS 57-49-01) was backfilled and plugged in 1996 and will not be evaluated further. The purpose of the CADD portion of the document (Section 1.0 to Section 4.0) is to identify and provide a rationale for the selection of a recommended corrective action alternative for the subsurface at PSA. To achieve this, the following tasks were required: (1) Develop corrective action objectives. (2) Identify corrective action alternative screening criteria. (3) Develop corrective action alternatives. (4) Perform detailed and comparative evaluations of the corrective action alternatives in relation to the corrective action objectives and screening criteria. (5) Recommend a preferred corrective action alternative for the subsurface at PSA. The original Corrective Action Investigation Plan (CAIP) for the PSA was approved in September 1996 and described a plan to drill and test four characterization wells, followed by flow and transport modeling (DOE/NV, 1996). The resultant drilling is described in a data report (DOE/NV, 1998e) and the data analysis and modeling in an interim modeling report (Pohll et al., 1998). After considering the results of the modeling effort

  15. Coulomb corrections for interferometry analysis of expanding hadron systems

    Energy Technology Data Exchange (ETDEWEB)

    Sinyukov, Yu.M.; Lednicky, R.; Pluta, J.; Erazmus, B. [Centre National de la Recherche Scientifique, 44 - Nantes (France). Lab. de Physique Subatomique et des Technologies Associees; Akkelin, S.V. [ITP, Kiev (Ukraine)

    1997-09-01

    The problem of the Coulomb corrections to the two-boson correlation functions for the systems formed in ultra-relativistic heavy ion collisions is considered for large effective system volumes. The modification of the standard zero-distance correction (so called Gamow or Coulomb factor) has been proposed for such a kind of systems. For the π⁺π⁺ and K⁺K⁺ correlation functions the analytical calculations of the Coulomb correction are compared with the exact numerical results. (author). 20 refs.

  16. Optimization of Broadband Wavefront Correction at the Princeton High Contrast Imaging Laboratory

    Science.gov (United States)

    Groff, Tyler Dean; Kasdin, N.; Carlotti, A.

    2011-01-01

    Wavefront control for imaging of terrestrial planets using coronagraphic techniques requires improving the performance of the wavefront control techniques to expand the correction bandwidth and the size of the dark hole over which it is effective. At the Princeton High Contrast Imaging Laboratory we have focused on increasing the search area using two deformable mirrors (DMs) in series to achieve symmetric correction by correcting both amplitude and phase aberrations. Here we are concerned with increasing the bandwidth of light over which this correction is effective so we include a finite bandwidth into the optimization problem to generate a new stroke minimization algorithm. This allows us to minimize the actuator stroke on the DMs given contrast constraints at multiple wavelengths which define a window over which the dark hole will persist. This windowed stroke minimization algorithm is written in such a way that a weight may be applied to dictate the relative importance of the outer wavelengths to the central wavelength. In order to supply the estimates at multiple wavelengths a functional relationship to a central estimation wavelength is formed. Computational overhead and new experimental results of this windowed stroke minimization algorithm are discussed. The tradeoff between symmetric correction and achievable bandwidth is compared to the observed contrast degradation with wavelength in the experimental results. This work is supported by NASA APRA Grant #NNX09AB96G. The author is also supported under an NESSF Fellowship.
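
    A minimal numerical sketch of the idea, penalizing the residual focal-plane field at several wavelengths with per-wavelength weights while regularizing deformable-mirror stroke, is shown below. The quadratic form, the real-valued fields, and the variable names are simplifying assumptions for illustration, not the laboratory's actual algorithm.

```python
import numpy as np

# Illustrative "windowed" stroke minimization (assumed quadratic form):
#   minimize  alpha*||w||^2 + sum_k lam_k * ||E_k + G_k @ w||^2
# w     : deformable-mirror actuator commands
# E_k   : estimated focal-plane field at wavelength k (real-valued here for brevity)
# G_k   : Jacobian of the field with respect to the actuators at wavelength k
# lam_k : weight giving the relative importance of wavelength k

def windowed_stroke_min(G_list, E_list, weights, alpha=1.0):
    n_act = G_list[0].shape[1]
    A = alpha * np.eye(n_act)
    b = np.zeros(n_act)
    for G, E, lam in zip(G_list, E_list, weights):
        A += lam * (G.T @ G)
        b -= lam * (G.T @ E)
    return np.linalg.solve(A, b)  # actuator command update

# Hypothetical example: 3 wavelengths, 5 focal-plane samples, 4 actuators
rng = np.random.default_rng(0)
G_list = [rng.normal(size=(5, 4)) for _ in range(3)]
E_list = [rng.normal(size=5) for _ in range(3)]
weights = [0.5, 1.0, 0.5]  # outer wavelengths weighted less than the central one
print(windowed_stroke_min(G_list, E_list, weights))
```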

  17. Automatic Contextual Text Correction Using the Linguistic Habits Graph (LHG)

    Directory of Open Access Journals (Sweden)

    Marcin Gadamer

    2009-01-01

    Full Text Available Automatic text correction is an essential problem for today's text processors and editors. This paper introduces a novel algorithm for the automation of contextual text correction using a Linguistic Habit Graph (LHG), also introduced in this paper. A specialist internet crawler has been constructed for searching through web sites in order to build a Linguistic Habit Graph from text corpora gathered on Polish web sites. The correction results achieved by this algorithm using the LHG were compared with commercial programs that also enable text correction: Microsoft Word 2007, Open Office Writer 3.0, and the search engine Google. The achieved results of text correction were much better than the corrections made by these commercial tools.
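
    The abstract does not spell out how the graph is built, but the underlying idea, scoring candidate words by how often they co-occur with their neighbours in a large corpus, can be sketched roughly as follows. The bigram-based graph and the scoring rule are assumptions made for illustration, not the published LHG construction.

```python
from collections import defaultdict

# Rough sketch of a "linguistic habit" graph: nodes are words, weighted edges count
# how often one word follows another in the corpus (a simple bigram graph).
def build_graph(corpus_sentences):
    graph = defaultdict(lambda: defaultdict(int))
    for sentence in corpus_sentences:
        words = sentence.lower().split()
        for left, right in zip(words, words[1:]):
            graph[left][right] += 1
    return graph

def score_candidate(graph, prev_word, candidate, next_word):
    """Score a candidate correction by its observed habits with both neighbours."""
    return graph[prev_word][candidate] + graph[candidate][next_word]

corpus = ["the cat sat on the mat", "the cat ate the fish", "a dog sat on the rug"]
g = build_graph(corpus)

# Choose between two candidate words for the slot in "the ___ sat":
for candidate in ("cat", "car"):
    print(candidate, score_candidate(g, "the", candidate, "sat"))
```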

  18. DOE's efforts to correct environmental problems of the nuclear weapons complex

    International Nuclear Information System (INIS)

    Rezendes, V.S.

    1990-03-01

    This report focuses on four main issues: the environmental problems at DOE's nuclear weapons complex, recent changes in DOE's organizational structure, DOE's 1991 budget request, and the need for effective management systems. This report concludes that the environmental problems are enormous and will take decades to resolve. Widespread contamination can be found at many DOE sites, and the full extent of the environmental problems is unknown. DOE has taken several steps during the past year to better deal with these problems, including making organizational improvements and requesting additional funds for environmental restoration and waste management activities

  19. Local Dynamic Reactive Power for Correction of System Voltage Problems

    Energy Technology Data Exchange (ETDEWEB)

    Kueck, John D [ORNL; Rizy, D Tom [ORNL; Li, Fangxing [ORNL; Xu, Yan [ORNL; Li, Huijuan [University of Tennessee, Knoxville (UTK); Adhikari, Sarina [ORNL; Irminger, Philip [ORNL

    2008-12-01

    Distribution systems are experiencing outages due to a phenomenon known as local voltage collapse. Local voltage collapse is occurring in part because modern air conditioner compressor motors are much more susceptible to stalling during a voltage dip than older motors. These motors can stall in less than 3 cycles (0.05 s) when a fault, such as on the sub-transmission system, causes voltage to sag to 60 to 70%. The reasons for this susceptibility are discussed in the report. During a local voltage collapse, voltages are depressed for a period of perhaps one or two minutes. There is a concern that these local events are interacting together over larger areas and may present a challenge to system reliability. An effective method of preventing local voltage collapse is the use of voltage regulation from Distributed Energy Resources (DER) that can supply or absorb reactive power. DER, when properly controlled, can provide a rapid correction to voltage dips and prevent motor stall. This report discusses the phenomenon and causes of local voltage collapse as well as the control methodology we have developed to counter voltage sag. The problem is growing because low-inertia, high-efficiency air conditioner (A/C) compressor motors are increasingly used and electric A/C is becoming a larger percentage of system load. A method for local dynamic voltage regulation is discussed which uses reactive power injection or absorption from local DER. This method is independent, rapid, and will not interfere with conventional utility system voltage control. The results of simulations of this method are provided. The method has also been tested at ORNL's Distributed Energy Communications and Control (DECC) Laboratory using our research inverter and synchronous condenser. These systems at the DECC Lab are interconnected to an actual distribution system, the ORNL distribution system, which is fed from TVA's 161-kV sub-transmission backbone. The test results
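
    A common way to realize this kind of local, autonomous voltage support is a volt-var droop, in which the inverter injects reactive power in proportion to the deviation of the measured voltage from its setpoint, up to the device rating. The sketch below is a generic illustration of that idea with made-up parameters, not the specific ORNL controller.

```python
def voltvar_droop(v_measured_pu, v_ref_pu=1.0, gain_kvar_per_pu=500.0, q_max_kvar=300.0):
    """Reactive power command in kvar (+ = injection) from a simple droop rule."""
    q_cmd = gain_kvar_per_pu * (v_ref_pu - v_measured_pu)
    return max(-q_max_kvar, min(q_max_kvar, q_cmd))  # respect the inverter rating

# During a deepening sag the controller pushes reactive power toward its limit:
for v in (1.00, 0.95, 0.85, 0.70):
    print(f"V = {v:.2f} pu -> Q = {voltvar_droop(v):.0f} kvar")
```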

  20. A practical procedure to improve the accuracy of radiochromic film dosimetry. A integration with a correction method of uniformity correction and a red/blue correction method

    International Nuclear Information System (INIS)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-01-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements using radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused by the EBT2 film and the light scattering was also evaluated. In addition, the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000 G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films. A corrected dose distribution data set was subsequently created. The correction method showed more than 10%-better pass ratios in the dose difference evaluation than when the correction method was not applied. The red/blue correction method resulted in a 5% improvement compared with the standard procedure that employed the red color only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical intensity modulated radiation therapy (IMRT) dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of the red/blue correction method may improve the accuracy, but we recommend that it be used carefully, with an understanding of the characteristics of EBT2 for the red color only and for the red/blue correction method. (author)

  1. [A practical procedure to improve the accuracy of radiochromic film dosimetry: a integration with a correction method of uniformity correction and a red/blue correction method].

    Science.gov (United States)

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that the light scattering could worsen the accuracy of dose distribution measurement using a radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools. The effectiveness of a correction method for the non-uniformity caused from EBT2 film and the light scattering was also evaluated. In addition the efficacy of this correction method integrated with the red/blue correction method was assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences on the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2, but exceeded 3% (Maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films. A corrected dose distribution data was subsequently created. The correction method showed more than 10%-better pass ratios in dose difference evaluation than when the correction method was not applied. The red/blue correction method resulted in 5%-improvement compared with the standard procedure that employed red color only. The correction method with EBT2 proved to be able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification if the accuracy of EBT2 is required to be similar to that of EDR2. The use of red/blue correction method may improve the accuracy, but we recommend we should use the red/blue correction method carefully and understand the characteristics of EBT2 for red color only and the red/blue correction method.

  2. The Comparative Effect of Online Self-Correction, Peer-Correction, and Teacher Correction in Descriptive Writing Tasks on Intermediate EFL Learners' Grammar Knowledge: The Prospect of Mobile Assisted Language Learning (MALL)

    Directory of Open Access Journals (Sweden)

    Mojtaba Aghajani

    2018-05-01

    Full Text Available Sixty participants were selected based on their scores on the Nelson proficiency test and divided into three Telegram groups comprising a peer-correction, a self-correction, and a teacher-correction group, each with 20 students. The pretest was administered to measure the subjects' grammar knowledge. Subsequently, three Telegram groups, each with 21 members (20 students + 1 teacher), were formed. Then, during a course of nearly one academic term, the grammatical notions were taught by the teacher. The members were required to write on the prompt in about 50 to 70 words and post it to the group. Then, their writings were corrected through self-correction, peer-correction, and teacher-correction under the feedback provided by the researcher. The study used a pretest-posttest design to compare the learners' progress after the application of three different types of treatment. A one-way between-groups ANOVA was run to test whether there was any statistically significant difference in the grammar knowledge in descriptive writing of intermediate EFL learners who received mobile-assisted self-correction, peer-correction, and teacher-correction. The researcher also used post-hoc tests to determine the exact differences between correction methods. Online self-correction, peer-correction, and teacher-correction were the independent variables and grammar knowledge was the dependent variable. The results of the study show that the difference between self-correction and teacher-correction was the strongest (sig. = 0.000), the difference between peer-correction and teacher-correction was slightly weaker, and no significant difference was observed between self-correction and peer-correction.
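
    For readers unfamiliar with the statistics, the comparison described here is a standard one-way between-groups ANOVA followed by post-hoc pairwise comparisons. The sketch below uses made-up scores, not the study's data, and SciPy's generic routines rather than whatever software the authors used.

```python
import numpy as np
from scipy import stats

# Hypothetical post-test grammar scores for the three correction groups (20 each)
rng = np.random.default_rng(1)
self_corr    = rng.normal(70, 8, 20)
peer_corr    = rng.normal(72, 8, 20)
teacher_corr = rng.normal(80, 8, 20)

# One-way between-groups ANOVA on the dependent variable (grammar knowledge)
f_stat, p_value = stats.f_oneway(self_corr, peer_corr, teacher_corr)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc test to locate which pairs of correction methods differ
print(stats.tukey_hsd(self_corr, peer_corr, teacher_corr))
```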

  3. Orthodontic correction of Class III malocclusion in a young patient with the use of a simple fixed appliance.

    Science.gov (United States)

    Park, Jae Hyun

    2012-01-01

    Anterior crossbites are one of the most common orthodontic problems we observe in growing children. The first step in treating an anterior crossbite is to determine whether the crossbite is dental or skeletal in nature. To determine a precise diagnosis, a thorough clinical, radiographic and model analysis is required. This article shows the treatment of Class III malocclusion by correcting anterior dental crossbite with the use of a simple fixed appliance.

  4. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 413: Clean Slate II Plutonium Dispersion (TTR) Tonopah Test Range, Nevada. Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick [Navarro, Las Vegas, NV (United States)

    2017-05-01

    This Corrective Action Decision Document/Corrective Action Plan provides the rationale and supporting information for the selection and implementation of corrective actions at Corrective Action Unit (CAU) 413, Clean Slate II Plutonium Dispersion (TTR). CAU 413 is located on the Tonopah Test Range and includes one corrective action site, TA-23-02CS. CAU 413 consists of the release of radionuclides to the surface and shallow subsurface from the Clean Slate II (CSII) storage–transportation test conducted on May 31, 1963. The CSII test was a non-nuclear detonation of a nuclear device located inside a concrete bunker covered with 2 feet of soil. To facilitate site investigation and the evaluation of data quality objectives decisions, the releases at CAU 413 were divided into seven study groups: (1) Undisturbed Areas, (2) Disturbed Areas, (3) Sedimentation Areas, (4) Former Staging Area, (5) Buried Debris, (6) Potential Source Material, and (7) Soil Mounds. Corrective action investigation (CAI) activities, as set forth in the CAU 413 Corrective Action Investigation Plan, were performed from June 2015 through May 2016. Radionuclides detected in samples collected during the CAI were used to estimate total effective dose using the Construction Worker exposure scenario. Corrective action was required for areas where total effective dose exceeded, or was assumed to exceed, the radiological final action level (FAL) of 25 millirem per year. The results of the CAI and the assumptions made in the data quality objectives resulted in the following conclusions: the FAL is exceeded in surface soil in SG1, Undisturbed Areas; the FAL is assumed to be exceeded in SG5, Buried Debris, where contaminated debris and soil were buried after the CSII test; and the FAL is not exceeded at SG2, SG3, SG4, SG6, or SG7. Because the FAL is exceeded at CAU 413, corrective action is required and corrective action alternatives (CAAs) must be evaluated. For CAU 413, three CAAs were evaluated: no further action, clean closure, and

  5. Assessing Requirements Quality through Requirements Coverage

    Science.gov (United States)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system, the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that is, determining that the model accurately captures the customer's high-level requirements, has received little attention and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software

  6. Die Defects and Die Corrections in Metal Extrusion

    Directory of Open Access Journals (Sweden)

    Sayyad Zahid Qamar

    2018-05-01

    Full Text Available Extrusion is a very popular and multi-faceted manufacturing process. A large number of products for the automotive, aerospace, and construction sectors are produced through aluminum extrusion. Many defects in the extruded products occur because of the conditions of the dies and tooling. The problems in dies can be due to material issues, design and manufacturing, or severe usage. They can be avoided by maintaining the billet quality, by controlling the extrusion process parameters, and through routine maintenance. Die problems that occur on a day-to-day basis are mostly repairable and are rectified through various types of die correction operations. These defects and repair operations have not been reported in detail in the published literature. The current paper presents an in-depth description of repairable die defects and related die correction operations in metal extrusion. All major die defects are defined and classified, and their causes, preventive measures, and die correction operations are described. A brief frequency-based statistical study of die defects is also carried out to identify the most frequent die corrections. This work can be of direct benefit to plant engineers and operators and to researchers and academics in the field of metal extrusion.

  7. Mean Field Analysis of Quantum Annealing Correction.

    Science.gov (United States)

    Matsuura, Shunji; Nishimori, Hidetoshi; Albash, Tameem; Lidar, Daniel A

    2016-06-03

    Quantum annealing correction (QAC) is a method that combines encoding with energy penalties and decoding to suppress and correct errors that degrade the performance of quantum annealers in solving optimization problems. While QAC has been experimentally demonstrated to successfully error correct a range of optimization problems, a clear understanding of its operating mechanism has been lacking. Here we bridge this gap using tools from quantum statistical mechanics. We study analytically tractable models using a mean-field analysis, specifically the p-body ferromagnetic infinite-range transverse-field Ising model as well as the quantum Hopfield model. We demonstrate that for p=2, where the phase transition is of second order, QAC pushes the transition to increasingly larger transverse field strengths. For p≥3, where the phase transition is of first order, QAC softens the closing of the gap for small energy penalty values and prevents its closure for sufficiently large energy penalty values. Thus QAC provides protection from excitations that occur near the quantum critical point. We find similar results for the Hopfield model, thus demonstrating that our conclusions hold in the presence of disorder.

  8. Comparison of violence and abuse in juvenile correctional facilities and schools.

    Science.gov (United States)

    Davidson-Arad, Bilha; Benbenishty, Rami; Golan, Miriam

    2009-02-01

    Peer violence, peer sexual harassment and abuse, and staff abuse experienced by boys and girls in juvenile correctional facilities are compared with those experienced by peers in schools in the community. Responses of 360 youths in 20 gender-separated correctional facilities in Israel to a questionnaire tapping these forms of mistreatment were compared with those of 7,012 students in a representative sample of Israeli junior high and high schools. Victimization was reported more frequently by those in correctional facilities than by those in schools. However, some of the more prevalent forms of violence and abuse were reported with equal frequency in both settings, and some more frequently in schools. Despite being victimized more frequently, those in the correctional facilities tended to view their victimization as a significantly less serious problem than those in the schools and to rate the staff as doing a better job of dealing with the problem.

  9. Automatic computation of radiative corrections

    International Nuclear Information System (INIS)

    Fujimoto, J.; Ishikawa, T.; Shimizu, Y.; Kato, K.; Nakazawa, N.; Kaneko, T.

    1997-01-01

    Automated systems are reviewed focusing on their general structure and requirement specific to the calculation of radiative corrections. Detailed description of the system and its performance is presented taking GRACE as a concrete example. (author)

  10. Paralegals in Corrections: A Proposed Model.

    Science.gov (United States)

    McShane, Marilyn D.

    1987-01-01

    Describes the legal assistance program currently offered by the Texas Department of Corrections which demonstrates the wide range of questions and problems that the paralegal can address. Reviews paralegal's functions in the prison setting and the services they can provide in assisting prisoners to maintain their rights. (Author/ABB)

  11. The Mentally Retarded Offender and Corrections.

    Science.gov (United States)

    Santamour, Miles; West, Bernadette

    The booklet provides an overview of the issues involved in correctional rehabilitation for the mentally retarded offender. Reviewed are clinical and legal definitions of criminal behavior and retardation, and discussed are such issues as law enforcement and court proceedings problems, pros and cons of special facilities, labeling, normalization,…

  12. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

    [This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  13. Low-level radioactive waste management handbook series: corrective measures technology for shallow land burial

    International Nuclear Information System (INIS)

    1984-10-01

    The purpose of this document is to serve as a handbook to operators of low-level waste burial sites for dealing with conditions which can cause problems in waste isolation. This handbook contains information on planning and applying corrective actions, and is organized in such a way as to assist the operator in associating problems or potential problems with causative conditions. Thus, the operator is encouraged to direct actions at those conditions, rather than the possible temporary expedient of treating symptoms. In Chapter 2 of this handbook, corrective action planning is briefly presented. Chapter 3 discusses the application of corrective measures by addressing, in separate sections, the following conditions which can occur at burial sites: eroding trench cover; permeable trench cover; subsidence of trench; groundwater entering trenches; trench intrusion by deep-rooted plants; and trench intrusion by burrowing animals. In each of these sections, a condition is introduced and related to burial-site problems. It is followed by a discussion of alternative methods for correcting the condition. This discussion includes descriptive information, application considerations for these alternatives, a listing of potential advantages and disadvantages, presentation of generalized cost information, and in conclusion, a statement of recommendations regarding application of corrective action technologies. 66 references, 21 figures, 24 tables

  14. Corrective Action Decision Document/Closure Report for Corrective Action Unit 477: Area 12 N-Tunnel Muckpile, Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Environmental Restoration

    2010-03-15

    This Corrective Action Decision Document (CADD)/Closure Report (CR) was prepared by the Defense Threat Reduction Agency (DTRA) for Corrective Action Unit (CAU) 477, N-Tunnel Muckpile. This CADD/CR is consistent with the requirements of the Federal Facility Agreement and Consent Order (FFACO) agreed to by the State of Nevada, the U.S. Department of Energy, and the U.S. Department of Defense. Corrective Action Unit 477 is comprised of one Corrective Action Site (CAS): • 12-06-03, Muckpile. The purpose of this CADD/CR is to provide justification and documentation supporting the recommendation for closure with no further action, by placing use restrictions on CAU 477.

  15. Corrective Action Decision Document/Closure Report for Corrective Action Unit 476: Area 12 T-Tunnel Muckpile, Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Environmental Restoration

    2010-03-15

    This Corrective Action Decision Document (CADD)/Closure Report (CR) was prepared by the Defense Threat Reduction Agency (DTRA) for Corrective Action Unit (CAU) 476, Area 12 T-Tunnel Muckpile. This CADD/CR is consistent with the requirements of the Federal Facility Agreement and Consent Order (FFACO) agreed to by the State of Nevada, the U.S. Department of Energy, and the U.S. Department of Defense. Corrective Action Unit 476 is comprised of one Corrective Action Site (CAS): • 12-06-02, Muckpile. The purpose of this CADD/CR is to provide justification and documentation supporting the recommendation for closure in place with use restrictions for CAU 476.

  16. Correction for near vision in pseudophakic patients

    Directory of Open Access Journals (Sweden)

    Dujić Mirjana

    2004-01-01

    Full Text Available The objective of the study was to show the mean values of correction for near vision and to discuss presbyopic correction in pseudophakic patients. The setting was the eye department where the authors work. Inclusion criteria for 55 patients were native or corrected distance vision of 0.8-1.0 on Snellen's chart; 0.6 on Jaeger's chart for near vision; a round pupil; and good position of the implant. Biometry of the anterior chamber depth with Alcon biophysics during distance and near vision was performed in our study. A chi-square test was carried out, and it was concluded that patients younger than 59 years (41 eyes) had a median correction of +2.0 dsph, while patients older than 60 years (36 eyes) had a correction of +3.0 dsph, but the difference was not statistically significant. There was no statistically significant difference in the correction between pseudophakic (41) and phakic (19) eyes in patients younger than 59 years. The anterior movement of the IOL was 0.18 mm in the younger group and 0.15 mm in the older group. With good IOL movement and new materials which could have changeable refractive power, the problem of pseudophakic correction for near vision might be solved.

  17. Solar neutrino problem

    Energy Technology Data Exchange (ETDEWEB)

    Faulkner, D J [Australian National Univ., Canberra. Mount Stromlo and Siding Spring Observatories

    1975-10-01

    This paper reviews several recent attempts to solve the problem in terms of modified solar interior models. Some of these have removed the count rate discrepancy, but have violated other observational data for the sun. One successfully accounts for the Davis results at the expense of introducing an ad hoc correction with no current physical explanation. An introductory description of the problem is given.

  18. Experimental quantum annealing: case study involving the graph isomorphism problem.

    Science.gov (United States)

    Zick, Kenneth M; Shehab, Omar; French, Matthew

    2015-06-08

    Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N² to fewer than N log₂ N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers.
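
    The classical post-processing step mentioned here, majority voting over repeated annealing reads, can be sketched generically as follows. The read format and the decoding rule are illustrative assumptions rather than the exact workflow used with the D-Wave Two.

```python
from collections import Counter

# Majority-vote error correction over repeated annealing reads.
# Each read is a tuple of spins (+1/-1), one entry per problem variable.
def majority_vote(reads):
    n_vars = len(reads[0])
    decoded = []
    for i in range(n_vars):
        votes = Counter(read[i] for read in reads)
        decoded.append(max(votes, key=votes.get))  # most common spin wins
    return tuple(decoded)

reads = [(+1, -1, -1),
         (+1, -1, +1),   # one read disagrees on the last variable
         (+1, -1, -1)]
print(majority_vote(reads))  # -> (1, -1, -1)
```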

  19. A weighted least-squares lump correction algorithm for transmission-corrected gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.

    1993-01-01

    With transmission-corrected gamma-ray nondestructive assay instruments such as the Segmented Gamma Scanner (SGS) and the Tomographic Gamma Scanner (TGS) that is currently under development at Los Alamos National Laboratory, the amount of gamma-ray emitting material can be underestimated for samples in which the emitting material consists of particles or lumps of highly attenuating material. This problem is encountered in the assay of uranium- and plutonium-bearing samples. To correct for this source of bias, we have developed a least-squares algorithm that uses transmission-corrected assay results for several emitted energies and a weighting function to account for statistical uncertainties in the assay results. The variation of effective lump size in the fitted model is parameterized; this allows the correction to be performed for a wide range of lump-size distributions. It may be possible to use the reduced chi-squared value obtained in the fit to identify samples in which assay assumptions have been violated. We found that the algorithm significantly reduced bias in simulated assays and improved SGS assay results for plutonium-bearing samples. Further testing will be conducted with the TGS, which is expected to be less susceptible than the SGS to this systematic source of bias.
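
    The fitting step described, weighting each energy's assay result by its statistical uncertainty and judging the fit by reduced chi-squared, is ordinary weighted least squares. The sketch below uses a hypothetical slab-lump self-attenuation model and made-up data purely to illustrate the structure; the actual SGS/TGS correction model is more involved.

```python
import numpy as np

# Generic weighted least-squares fit of assay results at several gamma-ray energies.
# Hypothetical model: observed mass m_obs(E) = m_true * f(mu(E), d), where f is a
# slab self-attenuation factor and d is an effective lump size.
def f_lump(mu, d):
    return (1.0 - np.exp(-mu * d)) / (mu * d)

def chi2(params, mu, m_obs, sigma):
    m_true, d = params
    resid = (m_obs - m_true * f_lump(mu, d)) / sigma
    return float(np.sum(resid**2))

mu    = np.array([2.0, 1.0, 0.5, 0.2])   # attenuation coefficients (1/cm), made up
m_obs = np.array([6.0, 7.5, 8.8, 9.6])   # assay results (g), biased low at low energy
sigma = np.array([0.3, 0.3, 0.4, 0.5])   # 1-sigma statistical uncertainties

# Crude grid search over (m_true, d); a real implementation would use an optimizer.
grid = ((m, d) for m in np.linspace(5, 15, 201) for d in np.linspace(0.01, 2.0, 200))
best = min(grid, key=lambda p: chi2(p, mu, m_obs, sigma))
reduced_chi2 = chi2(best, mu, m_obs, sigma) / (len(m_obs) - 2)
print("fit (m_true, d):", best, "reduced chi2:", round(reduced_chi2, 2))
```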

  20. TPC cross-talk correction: CERN-Dubna-Milano algorithm and results

    CERN Document Server

    De Min, A; Guskov, A; Krasnoperov, A; Nefedov, Y; Zhemchugov, A

    2003-01-01

    The CDM (CERN-Dubna-Milano) algorithm for TPC Xtalk correction is presented and discussed in detail. It is a data-driven, model-independent approach to the problem of Xtalk correction. It accounts for arbitrary amplitudes and pulse shapes of signals, and corrects (almost) all generations of Xtalk, with a view to handling (almost) correctly even complex multi-track events. Results on preamp amplification and preamp linearity from the analysis of test-charge injection data of all six TPC sectors are presented. The minimal expected error on the measurement of signal charges in the TPC is discussed. Results are given on the application of the CDM Xtalk correction to test-charge events and krypton events.

  1. ENERGY CORRECTION FOR HIGH POWER PROTON/H MINUS LINAC INJECTORS.

    Energy Technology Data Exchange (ETDEWEB)

    RAPARIA, D.; LEE, Y.Y.; WEI, J.

    2005-05-16

    High-energy (> 1 GeV) proton/H-minus linac injectors suffer from energy jitter due to RF amplitude and phase instability. Especially in high-power injectors, this energy jitter results in beam losses of more than 1 W/m, exceeding the limit required for hands-on maintenance. Depending upon the requirements of the next accelerator in the chain, this energy jitter may or may not need to be corrected. This paper discusses the sources of this energy jitter and correction schemes, with specific examples.

  2. Relativistic and the first sectorial harmonics corrections in the critical inclination

    Science.gov (United States)

    Rahoma, W. A.; Khattab, E. H.; Abd El-Salam, F. A.

    2014-05-01

    The problem of the critical inclination is treated in the Hamiltonian framework taking into consideration post-Newtonian corrections as well as the main correction term of sectorial harmonics for an earth-like planet. The Hamiltonian is expressed in terms of Delaunay canonical variables. A canonical transformation is applied to eliminate short period terms. A modified critical inclination is obtained due to relativistic and the first sectorial harmonics corrections.
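
    For reference, the classical (Newtonian, J2-only) critical inclination comes from the vanishing of the secular perigee drift; the relativistic and sectorial corrections studied in the paper shift this value slightly. The standard first-order condition, stated here as background rather than taken from the paper, is:

```latex
% Secular rate of the argument of perigee under the J_2 zonal harmonic (first order):
\dot{\omega} \simeq \frac{3}{4}\, n\, J_2 \left(\frac{R_e}{a(1-e^2)}\right)^{2} \left(5\cos^2 i - 1\right)
% The classical critical inclination makes this rate vanish:
5\cos^2 i - 1 = 0 \quad\Longrightarrow\quad i \approx 63.43^\circ \ \text{or} \ 116.57^\circ
```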

  3. 34 CFR 200.42 - Corrective action.

    Science.gov (United States)

    2010-07-01

    ... action; and (ii) Any underlying staffing, curriculum, or other problems in the school; (2) Is designed to... provide all students enrolled in the school with the option to transfer to another public school in... Programs Operated by Local Educational Agencies Lea and School Improvement § 200.42 Corrective action. (a...

  4. Touchless attitude correction for satellite with constant magnetic moment

    Science.gov (United States)

    Ao, Hou-jun; Yang, Le-ping; Zhu, Yan-wei; Zhang, Yuan-wen; Huang, Huan

    2017-09-01

    Rescue of a satellite with an attitude fault is of great value. A satellite with an improper injection attitude may lose contact with the ground as the antenna points in the wrong direction, or encounter energy problems as the solar arrays are not facing the sun. An improper uploaded command may set the attitude out of control, as exemplified by the Japanese Hitomi spacecraft. In engineering practice, traditional physical-contact approaches have been applied, yet with a potential risk of collision and a lack of versatility, since the mechanical systems are mission-specific. This paper puts forward a touchless attitude correction approach in which three satellites are considered, one having a constant dipole and two having magnetic coils to control the attitude of the first. Particular correction configurations are designed and analyzed to maintain the target's orbit during the attitude correction process. A reference coordinate system is introduced to simplify the control process and avoid the singularity problem of Euler angles. Based on basic spherical-triangle relations, the accurate varying geomagnetic field is considered in the attitude dynamic model. The sliding mode control method is utilized to design the correction law. Finally, numerical simulation is conducted to verify the theoretical derivation. It can be safely concluded that the no-contact attitude correction approach for a satellite with a uniaxial constant magnetic moment is feasible and potentially applicable to on-orbit operations.
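
    The physical coupling that makes the touchless correction possible is the magnetic torque exerted on the target's constant dipole by the field generated at its location by the servicer satellites' coils; the general dipole-torque relation (background, not the paper's full dynamic model) is:

```latex
% Torque on the target's constant magnetic moment m due to the external field B
% produced by the controlling satellites' coils:
\boldsymbol{\tau} = \mathbf{m} \times \mathbf{B}
```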

  5. Quantum mean-field decoding algorithm for error-correcting codes

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Saika, Yohei; Okada, Masato

    2009-01-01

    We numerically examine a quantum version of a TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check its usefulness for retrieving the original bit sequence (message) of finite length. The decoding dynamics is derived explicitly and we evaluate the average-case performance through the bit-error rate (BER).

  6. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step

  7. Solid Waste Management in Nigeria: Problems and Issues.

    Science.gov (United States)

    AGUNWAMBA

    1998-11-01

    This paper is a presentation of the problems of solid waste management in Nigeria and certain important issues that must be addressed in order to achieve success. At the core of the problems of solid waste management are the absence of adequate policies, enabling legislation, and an environmentally stimulated and enlightened public. Government policies on the environment are piecemeal where they exist and are poorly implemented. Public enlightenment programs lacked the needed coverage, intensity, and continuity to correct the apathetic public attitude towards the environment. Up to now the activities of the state environmental agencies have been hampered by poor funding, inadequate facilities and human resources, inappropriate technology, and an inequitable taxation system. Successful solid waste management in Nigeria will require a holistic program that will integrate all the technical, economic, social, cultural, and psychological factors that are often ignored in solid waste programs. KEY WORDS: Solid waste; Management; Problems; Solutions; Nigeria

  8. Correcting geometric and photometric distortion of document images on a smartphone

    Science.gov (United States)

    Simon, Christian; Williem; Park, In Kyu

    2015-01-01

    A set of document image processing algorithms for improving the optical character recognition (OCR) capability of smartphone applications is presented. The scope of the problem covers the geometric and photometric distortion correction of document images. The proposed framework was developed to satisfy industrial requirements. It is implemented on an off-the-shelf smartphone with limited resources in terms of speed and memory. Geometric distortions, i.e., skew and perspective distortion, are corrected by sending horizontal and vertical vanishing points toward infinity in a downsampled image. Photometric distortion includes image degradation from moiré pattern noise and specular highlights. Moiré pattern noise is removed using low-pass filters with different sizes independently applied to the background and text region. The contrast of the text in a specular highlighted area is enhanced by locally enlarging the intensity difference between the background and text while the noise is suppressed. Intensive experiments indicate that the proposed methods show a consistent and robust performance on a smartphone with a runtime of less than 1 s.
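
    A rough illustration of the geometric step, estimating a homography that rectifies the page and thereby sends the horizontal and vertical vanishing points to infinity, is sketched below with OpenCV. The corner-based formulation is a simplification assumed for the sketch; the paper works from detected vanishing points on a downsampled image rather than from known page corners.

```python
import cv2
import numpy as np

# Rectify a photographed document given four detected page corners.
# Mapping the quadrilateral to an axis-aligned rectangle implicitly sends both
# vanishing points to infinity, removing skew and perspective distortion.
def rectify_document(image, corners, out_w=1000, out_h=1400):
    src = np.float32(corners)                                  # TL, TR, BR, BL (pixels)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(src, dst)                  # 3x3 homography
    return cv2.warpPerspective(image, H, (out_w, out_h))

# Usage (hypothetical corner coordinates from a corner/vanishing-point detector):
# img = cv2.imread("page.jpg")
# rectified = rectify_document(img, [(120, 80), (900, 60), (950, 1300), (90, 1350)])
```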

  9. RCRA corrective action determination of no further action

    International Nuclear Information System (INIS)

    1996-06-01

    On July 27, 1990, the U.S. Environmental Protection Agency (EPA) proposed a regulatory framework (55 FR 30798) for responding to releases of hazardous waste and hazardous constituents from solid waste management units (SWMUs) at facilities seeking permits or permitted under the Resource Conservation and Recovery Act (RCRA). The proposed rule, 'Corrective Action for Solid Waste Management Units at Hazardous Waste Facilities', would create a new Subpart S under the 40 CFR 264 regulations, and outlines requirements for conducting RCRA Facility Investigations, evaluating potential remedies, and selecting and implementing remedies (i.e., corrective measures) at RCRA facilities. EPA anticipates instances where releases or suspected releases of hazardous wastes or constituents from SWMUs identified in a RCRA Facility Assessment, and subsequently addressed as part of required RCRA Facility Investigations, will be found to be non-existent or non-threatening to human health or the environment. Such releases may require no further action. For such situations, EPA proposed a mechanism for making a determination that no further corrective action is needed. This mechanism is known as a Determination of No Further Action (DNFA) (55 FR 30875). This information Brief describes what a DNFA is and discusses the mechanism for making a DNFA. This is one of a series of Information Briefs on RCRA corrective action

  10. Corrective action program reengineering project

    International Nuclear Information System (INIS)

    Vernick, H.R.

    1996-01-01

    A series of similar refueling floor events that occurred during the early 1990s prompted Susquehanna steam electric station (SSES) management to launch a broad-based review of how the Nuclear Department conducts business. This was accomplished through the formation of several improvement initiative teams. Clearly, one of the key areas that benefited from this management initiative was the corrective action program. The corrective action improvement team was charged with taking a comprehensive look at how the Nuclear Department identified and resolved problems. The 10-member team included management and bargaining unit personnel as well as an external management consultant. This paper provides a summary of this self-assessment initiative, including a discussion of the issues identified, opportunities for improvement, and subsequent completed or planned actions

  11. Power corrections and renormalons in Transverse Momentum Distributions

    Energy Technology Data Exchange (ETDEWEB)

    Scimemi, Ignazio [Departamento de Física Teórica II, Universidad Complutense de Madrid,Ciudad Universitaria, 28040 Madrid (Spain); Vladimirov, Alexey [Institut für Theoretische Physik, Universität Regensburg,D-93040 Regensburg (Germany)

    2017-03-01

    We study the power corrections to Transverse Momentum Distributions (TMDs) by analyzing renormalon divergences of the perturbative series. The renormalon divergences arise independently in two constituents of TMDs: the rapidity evolution kernel and the small-b matching coefficient. The renormalon contributions (and consequently power corrections and non-perturbative corrections to the related cross sections) have a non-trivial dependence on the Bjorken variable and the transverse distance. We discuss the consistency requirements for power corrections for TMDs and suggest inputs for the TMD phenomenology in accordance with this study. Both unpolarized quark TMD parton distribution function and fragmentation function are considered.

  12. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

    Science.gov (United States)

    Griffiths, Lauren; Higham, Philip A

    2018-02-01

    Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

  13. Corrective Action Decision Document/Closure Report for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada, Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick

    2014-01-01

    The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 105 based on the implementation of the corrective actions. Corrective action investigation (CAI) activities were performed from October 22, 2012, through May 23, 2013, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites; and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices.

  14. True coincidence summing corrections for an extended energy range HPGe detector

    Energy Technology Data Exchange (ETDEWEB)

    Venegas-Argumedo, Y. [Centro de Investigación en Materiales Avanzados (CIMAV), Miguel de Cervantes 120, Chihuahua, Chih 31109 (Mexico); M.S. Student at CIMAV (Mexico); Montero-Cabrera, M. E., E-mail: elena.montero@cimav.edu.mx [Centro de Investigación en Materiales Avanzados (CIMAV), Miguel de Cervantes 120, Chihuahua, Chih 31109 (Mexico)

    2015-07-23

    True coincidence summing (TCS) effect for the natural radioactive families of U-238 and Th-232 represents a problem when an environmental sample is measured in a close source-detector geometry. By using a certified multi-nuclide standard source to calibrate an extended energy range (XtRa) HPGe detector, it is possible to obtain an intensity spectrum only slightly affected by the TCS effect at energies from 46 to 1836 keV. In this work, the equations and other considerations required to calculate the TCS correction factor for isotopes of the natural radioactive chains are described. A validation of the calibration, performed with the IAEA-CU-2006-03 samples (soil and water), is planned.
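    As a rough illustration of the kind of correction discussed above, the following minimal Python sketch applies the standard first-order summing-out factor for a simple two-step gamma cascade. The formula and all numerical values here are illustrative textbook assumptions, not the equations or efficiencies from the record above; angular correlations and higher-order cascade terms are ignored.

        # Minimal sketch (assumptions): for a cascade g1 -> g2 in which g2 is
        # emitted in coincidence with g1, the full-energy peak of g1 loses counts
        # whenever g2 deposits any energy in the detector ("summing-out").
        # A first-order correction factor is COI(g1) = 1 / (1 - p2 * eps_tot(g2)),
        # where p2 is the probability that g2 follows g1 and eps_tot is the TOTAL
        # detection efficiency at the energy of g2.

        def summing_out_correction(p_coincident, eps_total_coincident):
            """Multiplicative correction for the counting loss of the observed peak."""
            return 1.0 / (1.0 - p_coincident * eps_total_coincident)

        # Example: gamma always followed by a coincident gamma seen with 8% total efficiency
        print(summing_out_correction(1.0, 0.08))   # ~1.087 -> peak area scaled up ~8.7%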

  15. Corrective Action Decision Document/Closure Report for Corrective Action Unit 478: Area 12 T-Tunnel Ponds, Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Environmental Restoration

    2010-03-15

    This Corrective Action Decision Document (CADD)/Closure Report (CR) was prepared by the Defense Threat Reduction Agency (DTRA) for Corrective Action Unit (CAU) 478, Area 12 T-Tunnel Ponds. This CADD/CR is consistent with the requirements of the Federal Facility Agreement and Consent Order (FFACO) agreed to by the State of Nevada, the U.S. Department of Energy (DOE), and the U.S. Department of Defense. Corrective Action Unit 478 is comprised of one corrective action site (CAS): • 12-23-01, Ponds (5) RAD Area The purpose of this CADD/CR is to provide justification and documentation supporting the recommendation for closure in place with use restrictions for CAU 478.

  16. Corrective Action Decision Document/Closure Report for Corrective Action Unit 559: T Tunnel Compressor/Blower Pad, Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Environmental Restoration

    2010-03-15

    This Corrective Action Decision Document (CADD)/Closure Report (CR) was prepared by the Defense Threat Reduction Agency (DTRA) for Corrective Action Unit (CAU) 559, T-Tunnel Compressor/Blower Pad. This CADD/CR is consistent with the requirements of the Federal Facility Agreement and Consent Order (FFACO) agreed to by the State of Nevada, the U.S. Department of Energy, and the U.S. Department of Defense. Corrective Action Unit 559 is comprised of one Corrective Action Site (CAS): • 12-25-13, Oil Stained Soil and Concrete The purpose of this CADD/CR is to provide justification and documentation supporting the recommendation for closure in place with use restrictions for CAU 559.

  17. MIL-HDBK-338: Environmental Conversion Table Correction

    Science.gov (United States)

    Hark, Frank; Novack, Steven

    2017-01-01

    In reliability analysis, especially for launch vehicles, limited data is frequently a problem, and component data from other environments must be used. MIL-HDBK-338 has a matrix showing the conversion factors between environments. Due to round-off, the conversions are not commutative: converting from A to B will not equal converting from B to A. Agenda: Introduction to environment conversions; Original table; Original table with edits; How big is the problem?; First attempt at correction; Proposed solution.
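    A small Python sketch of the non-commutativity issue described above, and of one possible repair. The factor values below are invented placeholders, not values from MIL-HDBK-338.

        # Illustrative only: rounding makes a conversion table non-commutative,
        # and one repair is to keep the forward factor and derive the reverse exactly.
        K = {                                   # hypothetical rounded conversion factors
            ("ground", "airborne"): 2.4,
            ("airborne", "ground"): 0.4,        # note: 1 / 2.4 = 0.4167, so A->B->A != 1
        }

        def round_trip(a, b):
            return K[(a, b)] * K[(b, a)]

        print(round_trip("ground", "airborne"))     # 0.96, not 1.0

        K[("airborne", "ground")] = 1.0 / K[("ground", "airborne")]
        print(round_trip("ground", "airborne"))     # 1.0 by construction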

  18. Problems of unsteady temperature measurements in a pulsating flow of gas

    International Nuclear Information System (INIS)

    Olczyk, A

    2008-01-01

    Unsteady flow temperature is one of the most difficult and complex flow parameters to measure. Main problems concern insufficient dynamic properties of applied sensors and an interpretation of recorded signals, composed of static and dynamic temperatures. An attempt is made to solve these two problems in the case of measurements conducted in a pulsating flow of gas in the 0–200 Hz range of frequencies, which corresponds to real conditions found in exhaust pipes of modern diesel engines. As far as sensor dynamics is concerned, an analysis of requirements related to the thermometer was made, showing that there was no possibility of assuring such a high frequency band within existing solutions. Therefore, a method of double-channel correction of sensor dynamics was proposed and experimentally tested. The results correspond well with the calculations made by means of the proposed model of sensor dynamics. In the case of interpretation of the measured temperature signal, a method for distinguishing its two components was proposed. This decomposition considerably helps with a correct interpretation of unsteady flow phenomena in pipes
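    The following Python sketch illustrates the basic idea behind first-order correction of sensor dynamics that underlies double-channel methods of this kind: a thermocouple with time constant tau obeys tau*dTs/dt + Ts = Tgas, so the gas temperature can be reconstructed as Tgas = Ts + tau*dTs/dt. The time constants, signal, and frequency below are invented for illustration and this is not the record's actual two-channel correction model.

        import numpy as np

        def reconstruct_gas_temperature(t, T_sensor, tau):
            dTdt = np.gradient(T_sensor, t)
            return T_sensor + tau * dTdt

        t = np.linspace(0.0, 0.1, 2000)                  # 0.1 s window
        T_gas = 500.0 + 30.0 * np.sin(2*np.pi*100*t)     # 100 Hz pulsation (illustrative)

        for tau in (5e-3, 15e-3):                        # two channels with different lags
            T_s = np.empty_like(t)                       # simulate first-order sensor response
            T_s[0] = T_gas[0]
            dt = t[1] - t[0]
            for i in range(1, len(t)):
                T_s[i] = T_s[i-1] + dt/tau * (T_gas[i-1] - T_s[i-1])
            T_rec = reconstruct_gas_temperature(t, T_s, tau)
            print(tau, np.max(np.abs(T_rec - T_gas)))    # residual error after correction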

  19. Histogram-driven cupping correction (HDCC) in CT

    Science.gov (United States)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require the knowledge of spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements a C-arm flat-detector CT (FD-CT) system with a 30×40 cm2 detector, a kilovoltage on-board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus executing without time-consuming operations. HDCC reduced cupping artifacts without the need for pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. However, the method can also work with other cupping correction algorithms or in a calibration manner.
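    A heavily simplified Python sketch of the optimization loop described above. The real HDCC method applies the polynomial to forward-projected raw data and reconstructs on the GPU; here, purely for illustration, the polynomial is applied directly to a stand-in image and the joint entropy of the image and its gradient magnitude is minimized with a Nelder-Mead simplex.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.ndimage import sobel

        def joint_entropy(img, grad, bins=64):
            h, _, _ = np.histogram2d(img.ravel(), grad.ravel(), bins=bins)
            p = h / h.sum()
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        def cost(coeffs, img):
            # 4th-order polynomial with the linear term fixed to the identity baseline
            x = img
            corrected = x + coeffs[0]*x**2 + coeffs[1]*x**3 + coeffs[2]*x**4
            grad = np.hypot(sobel(corrected, 0), sobel(corrected, 1))
            return joint_entropy(corrected, grad)

        img = np.random.rand(128, 128)            # stand-in for a cupped CT slice
        res = minimize(cost, x0=np.zeros(3), args=(img,), method="Nelder-Mead")
        print(res.x)                              # coefficients minimizing the joint entropy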

  20. Bias Correction with Jackknife, Bootstrap, and Taylor Series

    OpenAIRE

    Jiao, Jiantao; Han, Yanjun; Weissman, Tsachy

    2017-01-01

    We analyze the bias correction methods using jackknife, bootstrap, and Taylor series. We focus on the binomial model, and consider the problem of bias correction for estimating $f(p)$, where $f \\in C[0,1]$ is arbitrary. We characterize the supremum norm of the bias of general jackknife and bootstrap estimators for any continuous functions, and demonstrate that in the delete-$d$ jackknife, different values of $d$ may lead to drastically different behavior. We show that in the binomial ...
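    For readers unfamiliar with the delete-1 jackknife, the following Python sketch shows the standard bias-corrected estimator in the binomial-type setting; the function f and the sample are illustrative choices, not taken from the paper.

        import numpy as np

        def plug_in(x, f):
            return f(np.mean(x))

        def jackknife(x, f):
            n = len(x)
            theta_hat = plug_in(x, f)
            leave_one_out = np.array([plug_in(np.delete(x, i), f) for i in range(n)])
            # bias-corrected estimate: n*theta_hat - (n-1)*mean(theta_(i))
            return n * theta_hat - (n - 1) * leave_one_out.mean()

        rng = np.random.default_rng(0)
        x = rng.binomial(1, 0.3, size=50)          # Bernoulli draws with p = 0.3
        f = lambda p: p * (1 - p)                  # estimate f(p) = p(1-p)
        print(plug_in(x, f), jackknife(x, f))      # jackknife removes the O(1/n) bias term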

  1. Study of lung density corrections in a clinical trial (RTOG 88-08)

    International Nuclear Information System (INIS)

    Orton, Colin G.; Chungbin, Suzanne; Klein, Eric E.; Gillin, Michael T.; Schultheiss, Timothy E.; Sause, William T.

    1998-01-01

    Purpose: To investigate the effect of lung density corrections on the dose delivered to lung cancer radiotherapy patients in a multi-institutional clinical trial, and to determine whether commonly available density-correction algorithms are sufficient to improve the accuracy and precision of dose calculation in the clinical trials setting. Methods and Materials: A benchmark problem was designed (and a corresponding phantom fabricated) to test density-correction algorithms under standard conditions for photon beams ranging from 60Co to 24 MV. Point doses and isodose distributions submitted for a Phase III trial in regionally advanced, unresectable non-small-cell lung cancer (Radiation Therapy Oncology Group 88-08) were calculated with and without density correction. Tumor doses were analyzed for 322 patients and 1236 separate fields. Results: For the benchmark problem studied here, the overall correction factor for a four-field treatment varied significantly with energy, ranging from 1.14 (60Co) to 1.05 (24 MV) for measured doses, or 1.17 (60Co) to 1.05 (24 MV) for doses calculated by conventional density-correction algorithms. For the patient data, overall correction factors (calculated) ranged from 0.95 to 1.28, with a mean of 1.05 and distributional standard deviation of 0.05. The largest corrections were for lateral fields, with a mean correction factor of 1.11 and standard deviation of 0.08. Conclusions: Lung inhomogeneities can lead to significant variations in delivered dose between patients treated in a clinical trial. Existing density-correction algorithms are accurate enough to significantly reduce these variations.

  2. Capacitor requirements for controlled thermonuclear experiments and reactors

    International Nuclear Information System (INIS)

    Boicourt, G.P.; Hoffman, P.S.

    1975-01-01

    Future controlled thermonuclear experiments as well as controlled thermonuclear reactors will require substantial numbers of capacitors. The demands on these units are likely to be quite severe and quite different from the normal demands placed on either present energy storage capacitors or present power factor correction capacitors. It is unlikely that these two types will suffice for all necessary Controlled Thermonuclear Research (CTR) applications. The types of capacitors required for the various CTR operating conditions are enumerated. Factors that influence the life, cost and operating abilities of these types of capacitors are discussed. The problems of capacitors in a radiation environment are considered. Areas are defined where future research is needed. Some directions that this research should take are suggested. (U.S.)

  3. Capacitor requirements for controlled thermonuclear experiments and reactors

    International Nuclear Information System (INIS)

    Boicourt, G.P.; Hoffman, P.S.

    1975-01-01

    Future controlled thermonuclear experiments as well as controlled thermonuclear reactors will require substantial numbers of capacitors. The demands on these units are likely to be quite severe and quite different from the normal demands placed on either present energy storage capacitors or present power factor correction capacitors. It is unlikely that these two types will suffice for all necessary Controlled Thermonuclear Research (CTR) applications. The types of capacitors required for the various CTR operating conditions are enumerated. Factors that influence the life, cost and operating abilities of these types of capacitors are discussed. The problems of capacitors in a radiation environment are considered. Areas are defined where future research is needed. Some directions that this research should take are suggested

  4. Comprehension and computation in Bayesian problem solving

    Directory of Open Access Journals (Sweden)

    Eric D. Johnson

    2015-07-01

    Full Text Available Humans have long been characterized as poor probabilistic reasoners when presented with explicit numerical information. Bayesian word problems provide a well-known example of this, where even highly educated and cognitively skilled individuals fail to adhere to mathematical norms. It is widely agreed that natural frequencies can facilitate Bayesian reasoning relative to normalized formats (e.g. probabilities, percentages), both by clarifying logical set-subset relations and by simplifying numerical calculations. Nevertheless, between-study performance on transparent Bayesian problems varies widely, and generally remains rather unimpressive. We suggest there has been an over-focus on this representational facilitator (i.e. transparent problem structures) at the expense of the specific logical and numerical processing requirements and the corresponding individual abilities and skills necessary for providing Bayesian-like output given specific verbal and numerical input. We further suggest that understanding this task-individual pair could benefit from considerations from the literature on mathematical cognition, which emphasizes text comprehension and problem solving, along with contributions of online executive working memory, metacognitive regulation, and relevant stored knowledge and skills. We conclude by offering avenues for future research aimed at identifying the stages in problem solving at which correct versus incorrect reasoners depart, and how individual differences might influence this time point.
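    A short worked example in Python of the contrast between the two formats discussed above. The base rate, sensitivity, and false-positive rate are the classic textbook mammography numbers, used here only for illustration; they are not drawn from this record.

        base_rate   = 0.01     # P(disease)
        sensitivity = 0.80     # P(positive | disease)
        false_pos   = 0.096    # P(positive | no disease)

        # Probability format: Bayes' rule
        posterior = (sensitivity * base_rate) / (
            sensitivity * base_rate + false_pos * (1 - base_rate))

        # Natural-frequency format: imagine 1000 people
        n = 1000
        sick_and_pos    = base_rate * n * sensitivity          # 8 people
        healthy_and_pos = (1 - base_rate) * n * false_pos      # ~95 people
        posterior_freq  = sick_and_pos / (sick_and_pos + healthy_and_pos)

        print(round(posterior, 3), round(posterior_freq, 3))   # both ~0.078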

  5. RESTORATION OF WEAK PHASE-CONTRAST IMAGES RECORDED WITH A HIGH DEGREE OF DEFOCUS: THE"TWIN IMAGE" PROBLEM ASSOCIATED WITH CTF CORRECTION

    Energy Technology Data Exchange (ETDEWEB)

    Downing, Kenneth H.; Glaeser, Robert M.

    2008-03-28

    Relatively large values of objective-lens defocus must normally be used to produce detectable levels of image contrast for unstained biological specimens, which are generally weak phase objects. As a result, a subsequent restoration operation must be used to correct for oscillations in the contrast transfer function (CTF) at higher resolution. Currently used methods of CTF-correction assume the ideal case in which Friedel mates in the scattered wave have contributed pairs of Fourier components that overlap with one another in the image plane. This "ideal" situation may be only poorly satisfied, or not satisfied at all, as the particle size gets smaller, the defocus value gets larger, and the resolution gets higher. We have therefore investigated whether currently used methods of CTF correction are also effective in restoring the single-sideband image information that becomes displaced (delocalized) by half (or more) the diameter of a particle of finite size. Computer simulations are used to show that restoration either by "phase flipping" or by multiplying by the CTF recovers only about half of the delocalized information. The other half of the delocalized information goes into a doubly defocused "twin" image of the type produced during optical reconstruction of an in-line hologram. Restoration with a Wiener filter is effective in recovering the delocalized information only when the signal-to-noise ratio (S/N) is orders of magnitude higher than that which exists in low-dose images of biological specimens, in which case the Wiener filter approaches division by the CTF (i.e. the formal inverse). For realistic values of the S/N, however, the "twin image" problem seen with a Wiener filter is very similar to that seen when either phase flipping or multiplying by the CTF are used for restoration. The results of these simulations suggest that CTF correction is a poor alternative to using a Zernike-type phase plate when imaging biological specimens, in which case the images can
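    A minimal Python sketch of the "phase flipping" restoration referred to above, using a deliberately simplified CTF in which spherical aberration and envelope terms are omitted; all numerical parameters are illustrative assumptions, not values from the simulations in this record.

        import numpy as np

        def simple_ctf(shape, pixel_size, defocus, wavelength):
            # Simplified weak-phase CTF: sin(pi * lambda * defocus * k^2)
            fy = np.fft.fftfreq(shape[0], d=pixel_size)
            fx = np.fft.fftfreq(shape[1], d=pixel_size)
            k2 = fx[None, :]**2 + fy[:, None]**2
            return np.sin(np.pi * wavelength * defocus * k2)

        def phase_flip(image, ctf):
            F = np.fft.fft2(image)
            flip = np.where(ctf < 0, -1.0, 1.0)     # flip sign where the CTF is negative
            return np.real(np.fft.ifft2(F * flip))

        img = np.random.rand(256, 256)              # stand-in for a low-dose micrograph
        ctf = simple_ctf(img.shape, pixel_size=1.0,           # Angstrom per pixel
                         defocus=20000.0, wavelength=0.02)    # 2 um defocus, ~0.02 A electrons
        restored = phase_flip(img, ctf)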

  6. Immediate postoperative outcome of orthognathic surgical planning, and prediction of positional changes in hard and soft tissue, independently of the extent and direction of the surgical corrections required

    DEFF Research Database (Denmark)

    Donatsky, Ole; Bjørn-Jørgensen, Jens; Hermund, Niels Ulrich

    2011-01-01

    orthognathic correction using the computerised, cephalometric, orthognathic, surgical planning system (TIOPS). Preoperative cephalograms were analysed and treatment plans and prediction tracings produced by computerised interactive simulation. The planned changes were transferred to models and finally...... with the presently included soft tissue algorithms, the current study shows relatively high mean predictability of the immediately postoperative hard and soft tissue outcome, independent of the extent and direction of required orthognathic correction. Because of the relatively high individual variability, caution...

  7. Problems in quantum cosmology

    International Nuclear Information System (INIS)

    Amsterdamski, P.

    1986-01-01

    The standard cosmological model is reviewed and shown not to be self-sufficient in that it requires initial conditions most likely to be supplied by quantum cosmology. The possible approaches to the issue of initial conditions for cosmology are then discussed. In this thesis, the author considers three separate problems related to this issue. First, the possibility of inflation is investigated in detail by analyzing the evolution of metric perturbations and fluctuations in the expectation value of a scalar field prior to a phase transition; finite temperature effects are also included. Since the inhomogeneities were damped well before the onset of a phase transition, it is concluded that inflation was possible. Next, the effective action of neutrino and photon fields is calculated for homogeneous spacetimes with small anisotropy; it is shown that quantum corrections to the action due to these fields influence the evolution of an early Universe in the same way as do the analogous correction terms arising from a conformally invariant scalar, which has been studied previously. Finally, the question of an early anisotropy is also discussed in the framework of the Hartle-Hawking wave function of the Universe. A wave function of a Bianchi IX type Universe is calculated in a semiclassical approximation.

  8. A novel surgical correction and innovative splint for swan neck deformity in hypermobility syndrome

    Directory of Open Access Journals (Sweden)

    Karthik Vishwanathan

    2018-01-01

    Full Text Available Splinting is an important domain of the occupational therapy profession. The choice of splint for a patient depends on the nature of the problems and deformities. Swan neck deformity is an uncommon condition that can be seen in rheumatoid arthritis, cerebral palsy, and after trauma. Conservative treatment of swan neck deformity is available only through various static splints. There are very few reports of surgical correction of swan neck deformity in benign hypermobility syndrome. This case report describes the result of a novel surgical intervention and an innovative hand splint in a 20-year-old female with a history of cardiovascular stroke with no residual neurological deficit. She presented with a correctable swan neck deformity and had failed to improve with static ring splints intended to correct the deformity. She underwent volar plate plication of the proximal interphalangeal joint of the left ring finger along with hemitenodesis of the ulnar slip of the flexor digitorum superficialis (FDS) tendon, whereby the ulnar slip of the FDS was passed through a small surgically created rent in the A2 pulley and sutured back to itself. Postoperatively, the patient was referred to occupational therapy for splinting, with the instruction that the splint should function at times as a static splint and at times as a dynamic splint for positioning and correction of the finger. After occupational therapy intervention and splinting, the patient had full correction of the swan neck deformity with near-full flexion of the operated finger and could work independently.

  9. Corrective Action Decision Document/Closure Report for Corrective Action Unit 567: Miscellaneous Soil Sites - Nevada National Security Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick [Navarro-Intera, LLC (N-I), Las Vegas, NV (United States)

    2014-12-01

    This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 567: Miscellaneous Soil Sites, Nevada National Security Site, Nevada. The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 567 based on the implementation of the corrective actions. The corrective actions implemented at CAU 567 were developed based on an evaluation of analytical data from the CAI, the assumed presence of COCs at specific locations, and the detailed and comparative analysis of the CAAs. The CAAs were selected on technical merit focusing on performance, reliability, feasibility, safety, and cost. The implemented corrective actions meet all requirements for the technical components evaluated. The CAAs meet all applicable federal and state regulations for closure of the site. Based on the implementation of these corrective actions, the DOE, National Nuclear Security Administration Nevada Field Office provides the following recommendations: • No further corrective actions are necessary for CAU 567. • The Nevada Division of Environmental Protection issue a Notice of Completion to the DOE, National Nuclear Security Administration Nevada Field Office for closure of CAU 567. • CAU 567 be moved from Appendix III to Appendix IV of the FFACO.

  10. Corrective Action Decision Document/Closure Report for Corrective Action Unit 266: Area 25 Building 3124 Leachfield, Nevada Test Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    NNSA/NV

    2000-02-17

    This Corrective Action Decision Document/Closure Report (CADD/CR) was prepared for Corrective Action Unit (CAU) 266, Area 25 Building 3124 Leachfield, in accordance with the Federal Facility Agreement and Consent Order. Located in Area 25 at the Nevada Test Site in Nevada, CAU 266 includes Corrective Action Site (CAS) 25-05-09. The Corrective Action Decision Document and Closure Report were combined into one report because sample data collected during the corrective action investigation (CAI) indicated that contaminants of concern (COCs) were either not present in the soil, or present at concentrations not requiring corrective action. This CADD/CR identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's recommendation that no corrective action was necessary for CAU 266. From February through May 1999, CAI activities were performed as set forth in the related Corrective Action Investigation Plan. Analytes detected during the three-stage CAI of CAU 266 were evaluated against preliminary action levels (PALs) to determine COCs, and the analysis of the data generated from soil collection activities indicated the PALs were not exceeded for total volatile/semivolatile organic compounds, total petroleum hydrocarbons, polychlorinated biphenyls, total Resource Conservation and Recovery Act metals, gamma-emitting radionuclides, isotopic uranium/plutonium, and strontium-90 for any of the samples. However, COCs were identified in samples from within the septic tank and distribution box; and the isotopic americium concentrations in the two soil samples did exceed PALs. Closure activities were performed at the site to address the COCs identified in the septic tank and distribution box. Further, no use restrictions were required to be placed on CAU 266 because the CAI revealed soil contamination to be less than the 100 millirems per year limit established by DOE Order 5400.5.

  11. Corrective Action Decision Document/Closure Report for Corrective Action Unit 266: Area 25 Building 3124 Leachfield, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    2000-01-01

    This Corrective Action Decision Document/Closure Report (CADD/CR) was prepared for Corrective Action Unit (CAU) 266, Area 25 Building 3124 Leachfield, in accordance with the Federal Facility Agreement and Consent Order. Located in Area 25 at the Nevada Test Site in Nevada, CAU 266 includes Corrective Action Site (CAS) 25-05-09. The Corrective Action Decision Document and Closure Report were combined into one report because sample data collected during the corrective action investigation (CAI) indicated that contaminants of concern (COCs) were either not present in the soil, or present at concentrations not requiring corrective action. This CADD/CR identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's recommendation that no corrective action was necessary for CAU 266. From February through May 1999, CAI activities were performed as set forth in the related Corrective Action Investigation Plan. Analytes detected during the three-stage CAI of CAU 266 were evaluated against preliminary action levels (PALs) to determine COCs, and the analysis of the data generated from soil collection activities indicated the PALs were not exceeded for total volatile/semivolatile organic compounds, total petroleum hydrocarbons, polychlorinated biphenyls, total Resource Conservation and Recovery Act metals, gamma-emitting radionuclides, isotopic uranium/plutonium, and strontium-90 for any of the samples. However, COCs were identified in samples from within the septic tank and distribution box; and the isotopic americium concentrations in the two soil samples did exceed PALs. Closure activities were performed at the site to address the COCs identified in the septic tank and distribution box. Further, no use restrictions were required to be placed on CAU 266 because the CAI revealed soil contamination to be less than the 100 millirems per year limit established by DOE Order 5400.5

  12. Color correction pipeline optimization for digital cameras

    Science.gov (United States)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed to adapt the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since the illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.
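    A minimal Python sketch of the two modules described above: a diagonal (von Kries-style) illuminant correction followed by a 3x3 color matrix. The gains and matrix are illustrative placeholders, not the optimized, content-adaptive values proposed in the record.

        import numpy as np

        def color_correction_pipeline(raw_rgb, illuminant_gains, color_matrix):
            # raw_rgb: (H, W, 3) sensor image after demosaicing
            balanced = raw_rgb * illuminant_gains            # illuminant correction
            flat = balanced.reshape(-1, 3) @ color_matrix.T  # sensor space -> target space
            return np.clip(flat.reshape(raw_rgb.shape), 0.0, 1.0)

        raw = np.random.rand(4, 4, 3)
        gains = np.array([1.9, 1.0, 1.6])                    # estimated illuminant gains
        M = np.array([[ 1.7, -0.5, -0.2],
                      [-0.3,  1.6, -0.3],
                      [ 0.0, -0.6,  1.6]])                   # rows sum to 1 (white preserving)
        print(color_correction_pipeline(raw, gains, M).shape)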

  13. From Answer-Getters to Problem Solvers

    Science.gov (United States)

    Flynn, Mike

    2017-01-01

    In some math classrooms, students are taught to follow and memorize procedures to arrive at the correct solution to problems. In this article, author Mike Flynn suggests a way to move beyond answer-getting to true problem solving. He describes an instructional approach called three-act tasks in which students solve an engaging math problem in…

  14. Incremental Interactive Verification of the Correctness of Object-Oriented Software

    DEFF Research Database (Denmark)

    Mehnert, Hannes

    Development of correct object-oriented software is difficult, in particular if a formalised proof of its correctness is demanded. A lot of current software is developed using the object-oriented programming paradigm. This paradigm compensated for safety and security issues with imperative...... structure. For efficiency, our implementation uses copy-on-write and shared mutable data, not observable by a client. I further use this data structure to verify the correctness of a solution to the point location problem. The results demonstrate that I am able to verify the correctness of object-oriented...... programming, such as manual memory management. Popularly used integrated development environments (IDEs) provide features such as debugging and unit testing to facilitate development of robust software, but hardly any development environment supports the development of provable correct software. A tight...

  15. Corrective Action Plan for Corrective Action Unit 563: Septic Systems, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    2009-01-01

    This Corrective Action Plan (CAP) has been prepared for Corrective Action Unit (CAU) 563, Septic Systems, in accordance with the Federal Facility Agreement and Consent Order. CAU 563 consists of four Corrective Action Sites (CASs) located in Areas 3 and 12 of the Nevada Test Site. CAU 563 consists of the following CASs: CAS 03-04-02, Area 3 Subdock Septic Tank CAS 03-59-05, Area 3 Subdock Cesspool CAS 12-59-01, Drilling/Welding Shop Septic Tanks CAS 12-60-01, Drilling/Welding Shop Outfalls Site characterization activities were performed in 2007, and the results are presented in Appendix A of the CAU 563 Corrective Action Decision Document. The scope of work required to implement the recommended closure alternatives is summarized below. CAS 03-04-02, Area 3 Subdock Septic Tank, contains no contaminants of concern (COCs) above action levels. No further action is required for this site; however, as a best management practice (BMP), all aboveground features (e.g., riser pipes and bumper posts) will be removed, the septic tank will be removed, and all open pipe ends will be sealed with grout. CAS 03-59-05, Area 3 Subdock Cesspool, contains no COCs above action levels. No further action is required for this site; however, as a BMP, all aboveground features (e.g., riser pipes and bumper posts) will be removed, the cesspool will be abandoned by filling it with sand or native soil, and all open pipe ends will be sealed with grout. CAS 12-59-01, Drilling/Welding Shop Septic Tanks, will be clean closed by excavating approximately 4 cubic yards (yd3) of arsenic- and chromium-impacted soil. In addition, as a BMP, the liquid in the South Tank will be removed, the North Tank will be removed or filled with grout and left in place, the South Tank will be filled with grout and left in place, all open pipe ends will be sealed with grout or similar material, approximately 10 yd3 of chlordane-impacted soil will be excavated, and debris within the CAS boundary will be removed. CAS 12

  16. The problem in 180 deg data sampling and radioactivity decay correction in gated cardiac blood pool scanning using SPECT

    International Nuclear Information System (INIS)

    Ohtake, Tohru; Watanabe, Toshiaki; Nishikawa, Junichi

    1986-01-01

    In cardiac blood pool scanning using SPECT, half 180 deg data collection (HD) vs. full 360 deg data collection (FD) and Tc-99m decay are problems in quantifying the ejection count (EC) (end-diastolic count - end-systolic count) of both ventricles and the ratio of the ejection count of the right and left ventricles (RVEC/LVEC). We studied the change produced by altering the starting position of data sampling in HD scans. In our results of phantom and 4 clinical cases, when the cardiac axis deviation was not large and there was not remarkable cardiac enlargement, the change in LVEC, RVEC and RVEC/LVEC was small (1 - 4 %) within 12 degree change of the starting position, and the difference between the results of HD scan with a good starting position (the average of LV peak and RV peak) and FD scan was not large (less than 7 %). Because of this, we think HD scan can be used in those cases. But when the cardiac axis deviation was large or there was remarkable cardiac enlargement, the change of LVEC, RVEC and RVEC/LVEC was large (more than 10 %) even within 12 degree change of the starting position. So we think FD scan would be better in those cases. In our results of 6 patients, the half-life of Tc-99m labeled albumin in blood varied from 2 to 4 hr (3.03 ± 0.59 hr, mean ± s.d.). Using a program for radioactivity (RA) decay correction, we studied the change in LVEC, RVEC and LVEC/RVEC in 11 cases. When RA decay correction was performed using a halflife of 3.0 hr, LVEC increased 7.5 %, RVEC increased 8.7 % and RVEC/LVEC increased 0.9 % on the average in HD scans of 8 cases (LPO to RAO, 32 views, 60 beat/1 view). We think RA decay correction would not be needed in quantifying RVEC/LVEC in most cases because the change of RVEC/LVEC was very small. (author)
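    The radioactivity decay correction referred to above amounts to scaling counts back to a common reference time. A simple Python sketch follows; the 3.0-hr effective half-life matches the mean value reported in the record, while the elapsed time is an illustrative assumption.

        import math

        def decay_corrected(counts, elapsed_hours, half_life_hours=3.0):
            # Scale counts recorded at time t back to acquisition start
            return counts * math.exp(math.log(2.0) * elapsed_hours / half_life_hours)

        # A view acquired 30 minutes into the study is scaled up by ~12%
        print(decay_corrected(10000, 0.5))   # ~11225 counts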

  17. New Y2K problem for mask making (or, Surviving mask data problems after 2000)

    Science.gov (United States)

    Sturgeon, Roger

    1999-08-01

    The Y2K problem has analogies in the mask-making world. With the Y2K problem, a date field has just two bytes for the year; similarly, there are some cases of mask-making data in which the file size cannot exceed 2 gigabytes. Where a two-digit date field can only unambiguously use a limited range of values (00 to 99), design coordinates can only cover a range of about 4 billion values, which is getting a little uncomfortable for all of the new applications. In retrospect, with a degree of foresight and planning the Y2K date problem could have been easily solved if new encodings had been allowed in the two-digit field. Likewise, in the mask-making industry we currently have the opportunity to achieve far superior data compression if we allow some new forms of data encoding in our data. But this will require universal agreement. The correct way to look at the Y2K problem is that some information was left out of the data stream due to common understandings that made the additional information superfluous. But as the year 2000 approaches, it has become widely recognized that missing data needs to be stated explicitly, and any ambiguities in the representation of the data will need to be eliminated with precise specifications. In a similar way, old mask data generation methods have had numerous flaws that we have been able to ignore for a long time. But now is the time to fix these flaws and provide extended capabilities. What is not yet clear is whether the old data generation methods can be modified to meet these developing needs. Unilateral action is not likely to lead to much progress, so some united effort is required by all interested parties if success is to be achieved in the brief time that remains.

  18. De-confusing the THOG problem: the Pythagorean solution.

    Science.gov (United States)

    Griggs, R A; Koenig, C S; Alea, N L

    2001-08-01

    Sources of facilitation for Needham and Amado's (1995) Pythagoras version of Wason's THOG problem were systematically examined in three experiments with 174 participants. Although both the narrative structure and figural notation used in the Pythagoras problem independently led to significant facilitation (40-50% correct), pairing hypothesis generation with either factor or pairing the two factors together was found to be necessary to obtain substantial facilitation (> 50% correct). Needham and Amado's original finding for the complete Pythagoras problem was also replicated. These results are discussed in terms of the "confusion theory" explanation for performance on the standard THOG problem. The possible role of labelling as a de-confusing factor in other versions of the THOG problem and the implications of the present findings for human reasoning are also considered.

  19. Effects of image distortion correction on voxel-based morphometry

    International Nuclear Information System (INIS)

    Goto, Masami; Abe, Osamu; Kabasawa, Hiroyuki

    2012-01-01

    We aimed to show that correcting image distortion significantly affects brain volumetry using voxel-based morphometry (VBM) and to assess whether the processing of distortion correction reduces system dependency. We obtained contiguous sagittal T1-weighted images of the brain from 22 healthy participants using 1.5- and 3-tesla magnetic resonance (MR) scanners, preprocessed images using Statistical Parametric Mapping 5, and tested the relation between distortion correction and brain volume using VBM. Local brain volume significantly increased or decreased on corrected images compared with uncorrected images. In addition, the method used to correct image distortion for gradient nonlinearity produced fewer volumetric errors from MR system variation. This is the first study to show more precise volumetry using VBM with corrected images. These results indicate that multi-scanner or multi-site imaging trials require correction for distortion induced by gradient nonlinearity. (author)

  20. Correcting ligands, metabolites, and pathways

    Directory of Open Access Journals (Sweden)

    Vriend Gert

    2006-11-01

    Full Text Available Background: A wide range of research areas in bioinformatics, molecular biology and medicinal chemistry require precise chemical structure information about molecules and reactions, e.g. drug design, ligand docking, metabolic network reconstruction, and systems biology. Most available databases, however, treat chemical structures more as illustrations than as a data field in its own right. Lack of chemical accuracy impedes progress in the areas mentioned above. We present a database of metabolites called BioMeta that augments the existing pathway databases by explicitly assessing the validity, correctness, and completeness of chemical structure and reaction information. Description: The main bulk of the data in BioMeta were obtained from the KEGG Ligand database. We developed a tool for chemical structure validation which assesses the chemical validity and stereochemical completeness of a molecule description. The validation tool was used to examine the compounds in BioMeta, showing that a relatively small number of compounds had an incorrect constitution (connectivity only, not considering stereochemistry) and that a considerable number (about one third) had incomplete or even incorrect stereochemistry. We made a large effort to correct the errors and to complete the structural descriptions. A total of 1468 structures were corrected and/or completed. We also established the reaction balance of the reactions in BioMeta and corrected 55% of the unbalanced (stoichiometrically incorrect) reactions in an automatic procedure. The BioMeta database was implemented in PostgreSQL and provided with a web-based interface. Conclusion: We demonstrate that the validation of metabolite structures and reactions is a feasible and worthwhile undertaking, and that the validation results can be used to trigger corrections and improvements to BioMeta, our metabolite database. BioMeta provides some tools for rational drug design, reaction searches, and
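    A minimal Python sketch of the stoichiometric balance check mentioned above: a reaction is balanced when, for every element, the summed atom counts on both sides agree. Formulas are given here as pre-parsed element counts for illustration; the actual BioMeta validation tool is considerably richer.

        from collections import Counter

        def side_totals(compounds):
            total = Counter()
            for coeff, elements in compounds:
                for elem, n in elements.items():
                    total[elem] += coeff * n
            return total

        def is_balanced(reactants, products):
            return side_totals(reactants) == side_totals(products)

        glucose = {"C": 6, "H": 12, "O": 6}
        o2, co2, h2o = {"O": 2}, {"C": 1, "O": 2}, {"H": 2, "O": 1}
        print(is_balanced([(1, glucose), (6, o2)], [(6, co2), (6, h2o)]))   # True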

  1. Investigation of the relationship between students' problem solving and conceptual understanding of electricity

    Science.gov (United States)

    Cobanoglu Aktan, Derya

    The purpose of this study was to investigate the relationship between students' qualitative problem solving and conceptual understanding of electricity. For the analysis data were collected from observations of group problem solving, from their homework artifacts, and from semi-structured interviews. The data for six undergraduate students were analyzed by qualitative research methods. The students in the study were found to use tools (such as computer simulations and formulas) differently from one another, and they made different levels of interpretations for the electricity representations. Consequently each student had different problem solving strategies. The students exhibited a wide range of levels of understanding of the electricity concepts. It was found that students' conceptual understandings and their problem solving strategies were closely linked with one another. The students who tended to use multiple tools to make high level interpretations for representations to arrive at a single solution exhibited a higher level of understanding than the students who tended to use tools to make low level interpretations to reach a solution. This study demonstrates a relationship between conceptual understanding and problem solving strategies. Similar to the results of the existing research on students' quantitative problem solving, it was found that students were able to give correct answers to some problems without fully understanding the concepts behind the problem. However, some problems required a conceptual understanding in order for a student to arrive at a correct answer. An implication of this study is that careful selection of qualitative questions is necessary for capturing high levels of conceptual understanding. Additionally, conceptual understanding among some types of problem solvers can be improved by activities or tasks that can help them reflect on their problem solving strategies and the tools they use.

  2. The hazards of correcting myths about health care reform.

    Science.gov (United States)

    Nyhan, Brendan; Reifler, Jason; Ubel, Peter A

    2013-02-01

    Misperceptions are a major problem in debates about health care reform and other controversial health issues. We conducted an experiment to determine if more aggressive media fact-checking could correct the false belief that the Affordable Care Act would create "death panels." Participants from an opt-in Internet panel were randomly assigned to either a control group in which they read an article on Sarah Palin's claims about "death panels" or an intervention group in which the article also contained corrective information refuting Palin. The correction reduced belief in death panels and strong opposition to the reform bill among those who view Palin unfavorably and those who view her favorably but have low political knowledge. However, it backfired among politically knowledgeable Palin supporters, who were more likely to believe in death panels and to strongly oppose reform if they received the correction. These results underscore the difficulty of reducing misperceptions about health care reform among individuals with the motivation and sophistication to reject corrective information.

  3. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  4. Fully multidimensional flux-corrected transport algorithms for fluids

    International Nuclear Information System (INIS)

    Zalesak, S.T.

    1979-01-01

    The theory of flux-corrected transport (FCT) developed by Boris and Book is placed in a simple, generalized format, and a new algorithm for implementing the critical flux limiting stage in multidimensions without resort to time splitting is presented. The new flux limiting algorithm allows the use of FCT techniques in multidimensional fluid problems for which time splitting would produce unacceptable numerical results, such as those involving incompressible or nearly incompressible flow fields. The 'clipping' problem associated with the original one dimensional flux limiter is also eliminated or alleviated. Test results and applications to a two dimensional fluid plasma problem are presented
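    For readers new to FCT, the following Python sketch shows a minimal one-dimensional step in the spirit of the generalized flux limiter described above (low-order upwind transport plus limited Lax-Wendroff antidiffusion, with a Zalesak-style limiter). It is an illustrative reduction, not the paper's multidimensional algorithm; grid size, Courant number, and initial profile are arbitrary choices.

        import numpy as np

        def fct_advect_1d(u, c):
            """One periodic step of 1-D linear advection (a > 0) with flux-corrected
            transport. `c` is the Courant number a*dt/dx (0 < c <= 1)."""
            up1 = np.roll(u, -1)                       # u[i+1]
            # Fluxes at interface i+1/2, already multiplied by dt/dx
            f_low  = c * u                             # upwind (low order)
            f_high = c * (0.5*(u + up1) - 0.5*c*(up1 - u))   # Lax-Wendroff (high order)
            a_flux = f_high - f_low                    # antidiffusive flux

            u_td = u - (f_low - np.roll(f_low, 1))     # low-order (diffused) update

            # Allowed extrema from the low-order solution and its neighbours
            u_max = np.maximum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])
            u_min = np.minimum.reduce([np.roll(u_td, 1), u_td, np.roll(u_td, -1)])

            p_plus  = np.maximum(0.0, np.roll(a_flux, 1)) - np.minimum(0.0, a_flux)
            p_minus = np.maximum(0.0, a_flux) - np.minimum(0.0, np.roll(a_flux, 1))
            with np.errstate(divide="ignore", invalid="ignore"):
                r_plus  = np.where(p_plus  > 0, np.minimum(1.0, (u_max - u_td) / p_plus),  0.0)
                r_minus = np.where(p_minus > 0, np.minimum(1.0, (u_td - u_min) / p_minus), 0.0)

            rp1 = np.roll(r_plus, -1)                  # R+ and R- of the downwind cell i+1
            rm1 = np.roll(r_minus, -1)
            limiter = np.where(a_flux >= 0, np.minimum(rp1, r_minus), np.minimum(r_plus, rm1))

            a_lim = limiter * a_flux
            return u_td - (a_lim - np.roll(a_lim, 1))

        # Advect a square pulse once around a periodic domain without new over/undershoots
        u = np.zeros(200); u[40:80] = 1.0
        for _ in range(400):                           # c = 0.5 -> 400 steps = one full period
            u = fct_advect_1d(u, 0.5)
        print(u.min(), u.max())                        # stays within [0, 1]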

  5. Identification and analysis of student conceptions used to solve chemical equilibrium problems

    Science.gov (United States)

    Voska, Kirk William

    This study identified and quantified chemistry conceptions students use when solving chemical equilibrium problems requiring the application of Le Chatelier's principle, and explored the feasibility of designing a paper and pencil test for this purpose. It also demonstrated the utility of conditional probabilities to assess test quality. A 10-item pencil-and-paper, two-tier diagnostic instrument, the Test to Identify Student Conceptualizations (TISC) was developed and administered to 95 second-semester university general chemistry students after they received regular course instruction concerning equilibrium in homogeneous aqueous, heterogeneous aqueous, and homogeneous gaseous systems. The content validity of TISC was established through a review of TISC by a panel of experts; construct validity was established through semi-structured interviews and conditional probabilities. Nine students were then selected from a stratified random sample for interviews to validate TISC. The probability that TISC correctly identified an answer given by a student in an interview was p = .64, while the probability that TISC correctly identified a reason given by a student in an interview was p=.49. Each TISC item contained two parts. In the first part the student selected the correct answer to a problem from a set of four choices. In the second part students wrote reasons for their answer to the first part. TISC questions were designed to identify students' conceptions concerning the application of Le Chatelier's principle, the constancy of the equilibrium constant, K, and the effect of a catalyst. Eleven prevalent incorrect conceptions were identified. This study found students consistently selected correct answers more frequently (53% of the time) than they provided correct reasons (33% of the time). The association between student answers and respective reasons on each TISC item was quantified using conditional probabilities calculated from logistic regression coefficients. The
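    As a small illustration of how a conditional probability can be read off a fitted logistic regression, as done for the TISC item analysis, the Python sketch below evaluates P(correct reason | answer correct or incorrect); the coefficients are invented, not the study's fitted values.

        import math

        def logistic_probability(intercept, slope, x):
            return 1.0 / (1.0 + math.exp(-(intercept + slope * x)))

        b0, b1 = -1.2, 1.8          # illustrative fitted coefficients
        p_reason_given_correct   = logistic_probability(b0, b1, x=1)   # answer correct
        p_reason_given_incorrect = logistic_probability(b0, b1, x=0)   # answer incorrect
        print(round(p_reason_given_correct, 2), round(p_reason_given_incorrect, 2))  # 0.65 0.23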

  6. 76 FR 49650 - Regulations Governing Practice Before the Internal Revenue Service; Correction

    Science.gov (United States)

    2011-08-11

    ... governing of practice before the IRS and the standards with respect to tax returns. DATES: This correction... Part 10 Accountants, Administrative practice and procedure, Lawyers, Reporting and recordkeeping requirements, Taxes. Correction of Publication Accordingly, 31 CFR part 10 is corrected by making the following...

  7. Real time prediction and correction of ADCS problems in LEO satellites using fuzzy logic

    Directory of Open Access Journals (Sweden)

    Yassin Mounir Yassin

    2017-06-01

    Full Text Available This approach is concerned with adapting the operations of the attitude determination and control subsystem (ADCS) of low earth orbit (LEO) satellites by analyzing the telemetry readings received by the mission control center and then responding to ADCS off-nominal situations. This can be achieved by sending corrective operational telecommands in real time. Our approach maps off-nominal telemetry readings to corrective actions through fuzzy membership functions and a set of fuzzy rules based on an understanding of the ADCS modes inferred from the satellite telemetry readings. Responding in real time gives us a chance to avoid risky situations. The approach is tested on the EgyptSat-1 engineering model, which is used to simulate the results.
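    A toy Python sketch of the fuzzy-rule idea described above: a triangular membership function over a telemetry deviation and a single Mamdani-style rule deciding whether a corrective telecommand should be queued. The membership breakpoints, signal names, and the rule itself are invented for illustration; they are not EgyptSat-1 values or the authors' rule base.

        def triangular(x, a, b, c):
            """Triangular membership function peaking at b, zero outside [a, c]."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def correction_urgency(rate_error_deg_s, wheel_current_amp):
            # Rule: IF rate error is HIGH AND wheel current is HIGH THEN urgency is HIGH
            mu_rate    = triangular(abs(rate_error_deg_s), 0.05, 0.25, 0.50)
            mu_current = triangular(wheel_current_amp,     0.30, 0.80, 1.20)
            return min(mu_rate, mu_current)          # Mamdani AND = min

        urgency = correction_urgency(0.2, 0.9)
        print(urgency, "-> send corrective telecommand" if urgency > 0.5 else "-> keep monitoring")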

  8. Geological Corrections in Gravimetry

    Science.gov (United States)

    Mikuška, J.; Marušiak, I.

    2015-12-01

    Applying corrections for the known geology to gravity data can be traced back to the first quarter of the 20th century. Later on, mostly in areas with sedimentary cover, at local and regional scales, the correction known as gravity stripping has been in use since the mid-1960s, provided that there was enough geological information. Stripping at regional to global scales became possible after the release of the CRUST 2.0 and later CRUST 1.0 models in the years 2000 and 2013, respectively. The latter model in particular provides quite a new view on the relevant geometries and on the topographic and crustal densities as well as on the crust/mantle density contrast. Thus, the isostatic corrections, which have often been used in the past, can now be replaced by procedures working with independent information interpreted primarily from seismic studies. We have developed software for performing geological corrections in the space domain, based on a priori geometry and density grids which can be of either rectangular or spherical/ellipsoidal types with cells of the shapes of rectangles, tesseroids or triangles. It enables us to calculate the required gravitational effects not only in the form of surface maps or profiles but, for instance, also along vertical lines, which can shed some additional light on the nature of the geological correction. The software can work at a variety of scales and considers the input information to an optional distance from the calculation point up to the antipodes. Our main objective is to treat geological correction as an alternative to accounting for the topography with varying densities since the bottoms of the topographic masses, namely the geoid or ellipsoid, generally do not represent geological boundaries. We would also like to call attention to the possible distortions of the corrected gravity anomalies. This work was supported by the Slovak Research and Development Agency under the contract APVV-0827-12.
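    The Python snippet below is far simpler than the spherical and tesseroid computations described above, but it illustrates the size of effect a geological correction removes: the attraction of an infinite flat (Bouguer) slab of known density. The layer thickness and density are illustrative.

        import math

        G = 6.674e-11                        # gravitational constant, m^3 kg^-1 s^-2

        def bouguer_slab_mgal(density_kg_m3, thickness_m):
            # 2*pi*G*rho*h, converted from m/s^2 to mGal
            return 2.0 * math.pi * G * density_kg_m3 * thickness_m * 1e5

        # Effect of a 500-m sedimentary layer with density 2300 kg/m^3
        print(round(bouguer_slab_mgal(2300.0, 500.0), 1))   # ~48.2 mGal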

  9. A spectrum correction method for fuel assembly rehomogenization

    International Nuclear Information System (INIS)

    Lee, Kyung Taek; Cho, Nam Zin

    2004-01-01

    To overcome the limitation of existing homogenization methods based on the single-assembly calculation with a zero-current boundary condition, we propose a new rehomogenization method, named the spectrum correction method (SCM), consisting of a multigroup energy spectrum approximation by spectrum correction and condensed two-group heterogeneous single-assembly calculations with non-zero current boundary conditions. In SCM, the spectrum-shifting phenomena caused by currents across assembly interfaces are first accounted for by the spectrum correction at the group-condensation stage. Then, heterogeneous single-assembly calculations with two-group cross sections condensed using the corrected multigroup energy spectrum are performed to obtain rehomogenized nodal diffusion parameters, i.e., assembly-wise homogenized cross sections and discontinuity factors. To evaluate the performance of SCM, it was applied to the analytic function expansion nodal (AFEN) method and several test problems were solved. The results show that SCM can significantly reduce the errors both in multiplication factors and in assembly-averaged power distributions.
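    A minimal Python sketch of flux-weighted group condensation, the operation whose weighting spectrum the spectrum correction adjusts. The multigroup cross sections and spectra below are invented to show how a shifted spectrum changes the condensed value; they are not data from the record.

        import numpy as np

        def condense(sigma_g, phi_g):
            """Collapse multigroup cross sections with a (possibly corrected) spectrum."""
            sigma_g, phi_g = np.asarray(sigma_g), np.asarray(phi_g)
            return np.sum(sigma_g * phi_g) / np.sum(phi_g)

        sigma = [0.010, 0.015, 0.080, 0.120]      # multigroup absorption cross sections (1/cm)
        phi_infinite  = [0.40, 0.30, 0.20, 0.10]  # zero-current (single-assembly) spectrum
        phi_corrected = [0.35, 0.28, 0.22, 0.15]  # spectrum shifted by interface currents

        print(condense(sigma, phi_infinite), condense(sigma, phi_corrected))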

  10. ECOLOGICAL THERAPY AS CORRECTIONAL AND PEDAGOGICAL ELEMENT OF INTEGRATED APPROACH IN THE TREATMENT OF LOGONEUROSIS AMONG PRESCHOOL CHILDREN

    Directory of Open Access Journals (Sweden)

    I. V. Kalashnikova

    2016-01-01

    Full Text Available The aims of the publication are: to analyse domestic experience in the education and training of preschool children with logoneurosis; to identify the causes and manifestations of this disorder; to define the problems arising in the organization of correctional work in preschool institutions, taking into account the modern requirements of Russian legislation and the climatic features of the regions; and to present a possible solution to these problems. Methods. The methods of theoretical analysis and generalization of scientific and methodological publications, and of the legislative base on the problem of correctional work on stuttering among preschool children, are used. Results and scientific novelty. The publication describes the authors’ supplementary education program with correctional elements, «Ecotherapy for Children Aged 5–7 Years with Logoneurosis», developed by the staff of the Ecotherapy Laboratory of the Polar Alpine Botanical Garden – Institute named after N. A. Avrorin. The program complies with the modern requirements of the Federal State Educational Standard of preschool education and is aimed at tutors and speech-language therapists of correctional groups and logocentres. In the course of mastering the program, a child, by means of a game, immediately joins in search and investigative activity in the field of biology and ecology, with visualization of an ultimate goal and an obligatory practical realization of the results of the work. From the point of view of medical expediency, the program includes special breathing and relaxation exercises selected to match the lesson topic. The effectiveness of combining standard logopedic methods with nonconventional methods of art-, garden-, and animal-assisted therapy in correctional pedagogy is confirmed. The proposed techniques and methods are especially relevant in the conditions of the Polar region (the region, wherein during the period of an exit from polar night

  11. Corrective Action Decision Document/Closure Report for Corrective Action Unit 383: Area E-Tunnel Sites, Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Environmental Restoration

    2010-03-15

    This Corrective Action Decision Document/Closure Report (CADD/CR) was prepared by the Defense Threat Reduction Agency (DTRA) for Corrective Action Unit (CAU) 383, Area 12 E-Tunnel Sites, which is the joint responsibility of DTRA and the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO). This CADD/CR is consistent with the requirements of the Federal Facility Agreement and Consent Order (FFACO) agreed to by the State of Nevada, the DOE, and the U.S. Department of Defense. Corrective Action Unit 383 is comprised of three Corrective Action Sites (CASs) and two adjacent areas: • CAS 12-06-06, Muckpile • CAS 12-25-02, Oil Spill • CAS 12-28-02, Radioactive Material • Drainage below the Muckpile • Ponds 1, 2, and 3 The purpose of this CADD/CR is to provide justification and documentation to support the recommendation for closure with no further corrective action, by placing use restrictions at the three CASs and two adjacent areas of CAU 383.

  12. Relativistic neoclassical transport coefficients with momentum correction

    International Nuclear Information System (INIS)

    Marushchenko, I.; Azarenkov, N.A.

    2016-01-01

    The parallel momentum correction technique is generalized to the relativistic approach. It is required for the proper calculation of the parallel neoclassical flows and, in particular, of the bootstrap current at fusion temperatures. It is shown that the obtained system of linear algebraic equations for the parallel fluxes can be solved directly, without calculation of the distribution function, if the relativistic mono-energetic transport coefficients are already known. The first relativistic correction terms for the Braginskii matrix coefficients are calculated.

  13. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program using linear algebraic techniques is presented to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple-pass orbits.
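    A hedged Python sketch of the linear-algebra core common to orbit correction: solving R * kicks = -orbit in the least-squares sense with a truncated SVD, trading correction strength against the number of singular values retained. The response matrix and measured orbit below are random stand-ins, and SVD truncation is shown only as one standard technique, not necessarily the exact program described in the record.

        import numpy as np

        def corrector_kicks(response, orbit, n_singular):
            U, s, Vt = np.linalg.svd(response, full_matrices=False)
            s_inv = np.where(np.arange(len(s)) < n_singular, 1.0 / s, 0.0)
            return -(Vt.T * s_inv) @ (U.T @ orbit)       # truncated pseudo-inverse times -orbit

        rng = np.random.default_rng(1)
        R = rng.normal(size=(40, 12))            # 40 BPMs x 12 correctors (illustrative)
        orbit = rng.normal(size=40)              # measured orbit distortion

        for k in (4, 8, 12):                     # fewer singular values = gentler correction
            kicks = corrector_kicks(R, orbit, k)
            print(k, np.linalg.norm(orbit + R @ kicks))   # residual orbit after correction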

  14. Corrective Action Plan for Corrective Action Unit 139: Waste Disposal Sites, Nevada Test Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Environmental Restoration

    2007-07-01

    Corrective Action Unit (CAU) 139, Waste Disposal Sites, is listed in the Federal Facility Agreement and Consent Order (FFACO) of 1996 (FFACO, 1996). CAU 139 consists of seven Corrective Action Sites (CASs) located in Areas 3, 4, 6, and 9 of the Nevada Test Site (NTS), which is located approximately 65 miles (mi) northwest of Las Vegas, Nevada (Figure 1). CAU 139 consists of the following CASs: CAS 03-35-01, Burn Pit; CAS 04-08-02, Waste Disposal Site; CAS 04-99-01, Contaminated Surface Debris; CAS 06-19-02, Waste Disposal Site/Burn Pit; CAS 06-19-03, Waste Disposal Trenches; CAS 09-23-01, Area 9 Gravel Gertie; and CAS 09-34-01, Underground Detection Station. Details of the site history and site characterization results for CAU 139 are provided in the approved Corrective Action Investigation Plan (CAIP) (U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office [NNSA/NSO], 2006) and in the approved Corrective Action Decision Document (CADD) (NNSA/NSO, 2007). The purpose of this Corrective Action Plan (CAP) is to present the detailed scope of work required to implement the recommended corrective actions as specified in Section 4.0 of the approved CADD (NNSA/NSO, 2007). The approved closure activities for CAU 139 include removal of soil and debris contaminated with plutonium (Pu)-239, excavation of geophysical anomalies, removal of surface debris, construction of an engineered soil cover, and implementation of use restrictions (URs). Table 1 presents a summary of CAS-specific closure activities and contaminants of concern (COCs). Specific details of the corrective actions to be performed at each CAS are presented in Section 2.0 of this report.

  15. Corrective Action Plan for Corrective Action Unit 139: Waste Disposal Sites, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    NSTec Environmental Restoration

    2007-01-01

    Corrective Action Unit (CAU) 139, Waste Disposal Sites, is listed in the Federal Facility Agreement and Consent Order (FFACO) of 1996 (FFACO, 1996). CAU 139 consists of seven Corrective Action Sites (CASs) located in Areas 3, 4, 6, and 9 of the Nevada Test Site (NTS), which is located approximately 65 miles (mi) northwest of Las Vegas, Nevada (Figure 1). CAU 139 consists of the following CASs: CAS 03-35-01, Burn Pit; CAS 04-08-02, Waste Disposal Site; CAS 04-99-01, Contaminated Surface Debris; CAS 06-19-02, Waste Disposal Site/Burn Pit; CAS 06-19-03, Waste Disposal Trenches; CAS 09-23-01, Area 9 Gravel Gertie; and CAS 09-34-01, Underground Detection Station. Details of the site history and site characterization results for CAU 139 are provided in the approved Corrective Action Investigation Plan (CAIP) (U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office [NNSA/NSO], 2006) and in the approved Corrective Action Decision Document (CADD) (NNSA/NSO, 2007). The purpose of this Corrective Action Plan (CAP) is to present the detailed scope of work required to implement the recommended corrective actions as specified in Section 4.0 of the approved CADD (NNSA/NSO, 2007). The approved closure activities for CAU 139 include removal of soil and debris contaminated with plutonium (Pu)-239, excavation of geophysical anomalies, removal of surface debris, construction of an engineered soil cover, and implementation of use restrictions (URs). Table 1 presents a summary of CAS-specific closure activities and contaminants of concern (COCs). Specific details of the corrective actions to be performed at each CAS are presented in Section 2.0 of this report

  16. Diagnosing plant problems

    Science.gov (United States)

    Cheryl A. Smith

    2008-01-01

    Diagnosing Christmas tree problems can be a challenge, requiring a basic knowledge of plant culture and physiology, the effect of environmental influences on plant health, and the ability to identify the possible causes of plant problems. Developing a solution or remedy to the problem depends on a proper diagnosis, a process that requires recognition of a problem and...

  17. Some ethical problems of hazardous substances in the working environment1

    Science.gov (United States)

    Lee, W. R.

    1977-01-01

    ABSTRACT Exposure of persons to conditions at work may involve some risk to health. It is not possible always to ensure that exposure can be kept below a level from which it may be categorically stated that there is no risk. The decision that has to be made, what ought to be done, poses an ethical problem. What principles are available for examining such ethical problems? Two theories from the study of ethics seem relevant. On the one hand Intuitionism asserts that we possess a moral sense which, correctly applied, enables us to determine what is a right action. The familiar use of 'conscience' and the teachings of some of the influential Western religions follow this theory. On the other hand Utilitarianism (in particular Objective Utilitarianism) asserts that we may judge the rightness of an action by looking at its consequences. This theory, translated into legislative reform, has provided a substantial basis for much of the social reforming legislation of the last century. In economic terms it appears as cost benefit analysis. Despite its attraction and almost plausible objectivity, Utilitarianism requires the quantification and even costing of consequences which cannot always be measured (for example, emotions) but which form an important part of the totality of life. Decisions about the right course of action are required politically but cannot always be made objectively. They may require an element of judgement—a correct application of the moral sense—to use the Intuitionists' phrase. Doctors, used to making ethical decisions in the clinical setting, must examine carefully their role when contributing to ethical decisions in the industrial setting. PMID:588483

  18. Systems analysis determining critical items, critical assembly processes, primary failure modes and corrective actions on ASST magnets

    International Nuclear Information System (INIS)

    Arden, C.S.

    1993-04-01

    During the assembly process through the completion of the Accelerator Surface String Test (ASST) phase one test, Magnet Systems Division Reliability Engineering has tracked all the known discrepancies utilizing the Failure Reporting, Analysis and Corrective Action System (FRACAS) and database. This paper discusses the critical items, critical assembly processes, primary failure modes and corrective actions (lessons learned) based on actual data for the ASST magnets. The ASST magnets include seven Brookhaven Lab Dipoles (DCA-207 through 213), fourteen Fermi Lab Dipoles (DCA-310 through 323) and five Lawrence Berkeley Lab Quadrupoles (QCC-402 through 406). Between all the ASST magnets built there were one hundred eighty six (186) class one discrepancies reported out of approximately eleven hundred total discrepancy reports. The class one or critical discrepancies are defined as form, fit, function, safety or reliability problems. Each and every ASST magnet is considered a success, as they all achieved the quench performance requirements and were capable of being incorporated into the string test. This paper also discusses some specific magnet discrepancies, including failure cause(s), corrective action and possible open issues

  19. Systems analysis determining critical items, critical assembly processes, primary failure modes and corrective actions on ASST magnets

    International Nuclear Information System (INIS)

    Arden, C.S.

    1994-01-01

    During the assembly process through the completion of the Accelerator Surface String Test (ASST) phase one test, Magnet Systems Division Reliability Engineering has tracked all the known discrepancies utilizing the Failure Reporting, Analysis and Corrective Action System (FRACAS) and data base. This paper discusses the critical items, critical assembly processes, primary failure modes and corrective actions (lessons learned) based on actual data for the ASST magnets. The ASST magnets include seven Brookhaven Lab Dipoles (DCA-207 through 213), fourteen Fermi Lab Dipoles (DCA-310 through 323) and five Lawrence Berkeley Lab Quadrupoles (QCC-402 through 406). Between all the ASST magnets built there were one hundred eighty six (186) class one discrepancies reported out of approximately eleven hundred total discrepancy reports. The class one or critical discrepancies are defined as form, fit, function, safety or reliability problem. Each and every ASST magnet is considered a success, as they all achieved the quench performance requirements and were capable of being incorporated into the string test. This paper will also discuss some specific magnet discrepancies, including failure cause(s), corrective action and possible open issues

  20. Class and Home Problems: Optimization Problems

    Science.gov (United States)

    Anderson, Brian J.; Hissam, Robin S.; Shaeiwitz, Joseph A.; Turton, Richard

    2011-01-01

    Optimization problems suitable for all levels of chemical engineering students are available. These problems do not require advanced mathematical techniques, since they can be solved using typical software used by students and practitioners. The method used to solve these problems forces students to understand the trends for the different terms…

  1. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 573: Alpha Contaminated Sites Nevada National Security Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick [Nevada Site Office, Las Vegas, NV (United States)

    2016-02-01

    CAU 573 comprises the following corrective action sites (CASs): • 05-23-02, GMX Alpha Contaminated Area • 05-45-01, Atmospheric Test Site - Hamilton These two CASs include the release at the Hamilton weapons-related tower test and a series of 29 atmospheric experiments conducted at GMX. The two CASs are located in two distinctly separate areas within Area 5. To facilitate site investigation and data quality objective (DQO) decisions, all identified releases (i.e., CAS components) were organized into study groups. The reporting of investigation results and the evaluation of DQO decisions are at the release level. The corrective action alternatives (CAAs) were evaluated at the FFACO CAS level. The purpose of this CADD/CAP is to evaluate potential CAAs, provide the rationale for the selection of recommended CAAs, and provide the plan for implementation of the recommended CAA for CAU 573. Corrective action investigation (CAI) activities were performed from January 2015 through November 2015, as set forth in the CAU 573 Corrective Action Investigation Plan (CAIP). Analytes detected during the CAI were evaluated against appropriate final action levels (FALs) to identify the contaminants of concern. Assessment of the data generated from investigation activities conducted at CAU 573 revealed the following: • Radiological contamination within CAU 573 does not exceed the FALs (based on the Occasional Use Area exposure scenario). • Chemical contamination within CAU 573 does not exceed the FALs. • Potential source material—including lead plates, lead bricks, and lead-shielded cables—was removed during the investigation and requires no additional corrective action.

  2. Corrective Action Decision Document/Corrective Action Plan for Corrective Action Unit 573: Alpha Contaminated Sites Nevada National Security Site, Nevada, Revision 0

    International Nuclear Information System (INIS)

    Matthews, Patrick

    2016-01-01

    CAU 573 comprises the following corrective action sites (CASs): • 05-23-02, GMX Alpha Contaminated Area • 05-45-01, Atmospheric Test Site - Hamilton These two CASs include the release at the Hamilton weapons-related tower test and a series of 29 atmospheric experiments conducted at GMX. The two CASs are located in two distinctly separate areas within Area 5. To facilitate site investigation and data quality objective (DQO) decisions, all identified releases (i.e., CAS components) were organized into study groups. The reporting of investigation results and the evaluation of DQO decisions are at the release level. The corrective action alternatives (CAAs) were evaluated at the FFACO CAS level. The purpose of this CADD/CAP is to evaluate potential CAAs, provide the rationale for the selection of recommended CAAs, and provide the plan for implementation of the recommended CAA for CAU 573. Corrective action investigation (CAI) activities were performed from January 2015 through November 2015, as set forth in the CAU 573 Corrective Action Investigation Plan (CAIP). Analytes detected during the CAI were evaluated against appropriate final action levels (FALs) to identify the contaminants of concern. Assessment of the data generated from investigation activities conducted at CAU 573 revealed the following: • Radiological contamination within CAU 573 does not exceed the FALs (based on the Occasional Use Area exposure scenario). • Chemical contamination within CAU 573 does not exceed the FALs. • Potential source material - including lead plates, lead bricks, and lead-shielded cables was removed during the investigation and requires no additional corrective action.

  3. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada: Revision 0, Including Errata Sheet

    Energy Technology Data Exchange (ETDEWEB)

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2004-04-01

    This Corrective Action Decision Document identifies the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's corrective action alternative recommendation for each of the corrective action sites (CASs) within Corrective Action Unit (CAU) 204: Storage Bunkers, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. An evaluation of analytical data from the corrective action investigation, review of current and future operations at each CAS, and a detailed comparative analysis of potential corrective action alternatives were used to determine the appropriate corrective action for each CAS. There are six CASs in CAU 204, which are all located between Areas 1, 2, 3, and 5 on the NTS. The No Further Action alternative was recommended for CASs 01-34-01, 02-34-01, 03-34-01, and 05-99-02; and a Closure in Place with Administrative Controls recommendation was the preferred corrective action for CASs 05-18-02 and 05-33-01. These alternatives were judged to meet all requirements for the technical components evaluated as well as applicable state and federal regulations for closure of the sites and will eliminate potential future exposure pathways to the contaminated media at CAU 204.

  4. Corrective Action Decision Document/Closure Report for Corrective Action Unit 500: Test Cell A Septic System, Nevada Test Site, Nevada, Rev. 0

    International Nuclear Information System (INIS)

    2000-01-01

    This Corrective Action Decision Document/Closure Report (CADD/CR) has been prepared for Corrective Action Unit (CAU) 500: Test Cell A Septic System, in accordance with the Federal Facility Agreement and Consent Order. Located in Area 25 at the Nevada Test Site in Nevada, CAU 500 is comprised of one Corrective Action Site, CAS 25-04-05. This CADD/CR identifies and rationalizes the U.S. Department of Energy, Nevada Operations Office's (DOE/NV's) recommendation that no corrective action is deemed necessary for CAU 500. The Corrective Action Decision Document and Closure Report have been combined into one report based on sample data collected during the field investigation performed between February and May 1999, which showed no evidence of soil contamination at this site. The clean closure justification for CAU 500 is based on these results. Analytes detected were evaluated against preliminary action levels (PALs) to determine contaminants of concern (COCs) for CAU 500, and it was determined that the PALs were not exceeded for total volatile organic compounds, total semivolatile organic compounds, total petroleum hydrocarbons, polychlorinated biphenyls, total Resource Conservation and Recovery Act metals, gamma-emitting radionuclides, isotopic uranium, and strontium-90 for any of the soil samples collected. COCs were identified only within the septic tank and distribution box at the CAU. No COCs were identified outside these two areas; therefore, no corrective action was necessary for the soil. Closure activities were performed to address the COCs identified within the septic tank and distribution box. The DOE/NV recommended that neither corrective action nor a corrective action plan was required at CAU 500. Further, no use restrictions were required to be placed on CAU 500, and the septic tank and distribution box have been closed in accordance with all applicable state and federal regulations for closure of the site

  5. A simulation study of linear coupling effects and their correction in RHIC

    International Nuclear Information System (INIS)

    Parzen, G.

    1993-01-01

    This paper describes a possible skew quadrupole correction system for linear coupling effects for the RHIC92 lattice. A simulation study has been done for this correction system. Results are given for the performance of the correction system and the required strength of the skew quadrupole corrections. The location of the correctors is discussed. For RHIC92, it appears possible to use the same 2 family correction system for all the likely choices of β*. The simulation study gives results for the residual tune splitting that remains after correction with a 2 family correction system. It also gives results for the beta functions before and after correction

  6. Inquiry-based problem solving in introductory physics

    Science.gov (United States)

    Koleci, Carolann

    What makes problem solving in physics difficult? How do students solve physics problems, and how does this compare to an expert physicist's strategy? Over the past twenty years, physics education research has revealed several differences between novice and expert problem solving. The work of Chi, Feltovich, and Glaser demonstrates that novices tend to categorize problems based on surface features, while experts categorize according to theory, principles, or concepts1. If there are differences between how problems are categorized, then are there differences between how physics problems are solved? Learning more about the problem solving process, including how students like to learn and what is most effective, requires both qualitative and quantitative analysis. In an effort to learn how novices and experts solve introductory electricity problems, a series of in-depth interviews were conducted, transcribed, and analyzed, using both qualitative and quantitative methods. One-way ANOVA tests were performed in order to learn if there are any significant problem solving differences between: (a) novices and experts, (b) genders, (c) students who like to answer questions in class and those who don't, (d) students who like to ask questions in class and those who don't, (e) students employing an interrogative approach to problem solving and those who don't, and (f) those who like physics and those who dislike it. The results of both the qualitative and quantitative methods reveal that inquiry-based problem solving is prevalent among novices and experts, and frequently leads to the correct physics. These findings serve as impetus for the third dimension of this work: the development of Choose Your Own Adventure Physics(c) (CYOAP), an innovative teaching tool in physics which encourages inquiry-based problem solving. 1Chi, M., P. Feltovich, R. Glaser, "Categorization and Representation of Physics Problems by Experts and Novices", Cognitive Science, 5, 121--152 (1981).

  7. Fast decoding techniques for extended single-and-double-error-correcting Reed Solomon codes

    Science.gov (United States)

    Costello, D. J., Jr.; Deng, H.; Lin, S.

    1984-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. For example, some 256K-bit dynamic random access memories are organized as 32K x 8 bit-bytes. Byte-oriented codes such as Reed Solomon (RS) codes provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. Some special high-speed decoding techniques for extended single- and double-error-correcting RS codes are presented. These techniques are designed to find the error locations and the error values directly from the syndrome without having to form the error locator polynomial and solve for its roots.
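
    To make the "directly from the syndrome" idea concrete, here is a minimal sketch of single-error correction for an RS-style code over GF(2^4). It is not the paper's extended-code algorithm; the primitive polynomial, the position-indexing convention, and the function names are assumptions.

        # GF(2^4) arithmetic via exp/log tables, primitive polynomial x^4 + x + 1 (0x13)
        EXP = [0] * 30
        LOG = [0] * 16
        x = 1
        for i in range(15):
            EXP[i] = x
            LOG[x] = i
            x <<= 1
            if x & 0x10:
                x ^= 0x13
        for i in range(15, 30):
            EXP[i] = EXP[i - 15]          # duplicated so products index without a modulo

        def gf_mul(a, b):
            return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

        def gf_div(a, b):
            return 0 if a == 0 else EXP[(LOG[a] - LOG[b]) % 15]

        def correct_single_error(codeword, s1, s2):
            # Assumes at most one symbol error, with syndromes defined as
            # s_k = sum_i c_i * a^(k*i); then s1 = e * a^j and s2 = e * a^(2j),
            # so the location and value follow directly, with no locator polynomial.
            if s1 == 0 and s2 == 0:
                return list(codeword)          # no error detected
            j = LOG[gf_div(s2, s1)]            # error position: a^j = s2 / s1
            e = gf_div(gf_mul(s1, s1), s2)     # error value:    e  = s1^2 / s2
            fixed = list(codeword)
            fixed[j] ^= e                      # addition in GF(2^m) is XOR
            return fixed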

  8. Correcting for static shift of magnetotelluric data with airborne electromagnetic measurements: a case study from Rathlin Basin, Northern Ireland

    Directory of Open Access Journals (Sweden)

    R. Delhaye

    2017-05-01

    Full Text Available Galvanic distortions of magnetotelluric (MT) data, such as the static-shift effect, are a known problem that can lead to incorrect estimation of resistivities and erroneous modelling of geometries with resulting misinterpretation of subsurface electrical resistivity structure. A wide variety of approaches have been proposed to account for these galvanic distortions, some depending on the target area, with varying degrees of success. The natural laboratory for our study is a hydraulically permeable volume of conductive sediment at depth, the internal resistivity structure of which can be used to estimate reservoir viability for geothermal purposes; however, static-shift correction is required in order to ensure robust and precise modelling accuracy. We present here a possible method to employ frequency-domain electromagnetic data in order to correct static-shift effects, illustrated by a case study from Northern Ireland. In our survey area, airborne frequency-domain electromagnetic (FDEM) data are regionally available with high spatial density. The spatial distributions of the derived static-shift corrections are analysed and applied to the uncorrected MT data prior to inversion. Two comparative inversion models are derived, one with and one without static-shift corrections, with instructive results. As expected from the one-dimensional analogy of static-shift correction, at shallow model depths, where the structure is controlled by a single local MT site, the correction of static-shift effects leads to vertical scaling of resistivity–thickness products in the model, with the corrected model showing improved correlation to existing borehole wireline resistivity data. In turn, as these vertical scalings are effectively independent of adjacent sites, lateral resistivity distributions are also affected, with up to half a decade of resistivity variation between the models estimated at depths down to 2000 m. Simple estimation of differences in bulk
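
    As a minimal illustration of the static-shift idea itself (a frequency-independent multiplicative factor on apparent resistivity), and not of the authors' FDEM-based workflow, a per-site correction might be estimated against a reference curve as follows; the function names and the geometric-mean estimator are assumptions.

        import numpy as np

        def static_shift_factor(rho_obs, rho_ref):
            # Static shift moves log(apparent resistivity) up or down without changing its
            # shape, so a single multiplicative factor is estimated as a geometric-mean
            # ratio between the observed curve and a reference curve (e.g. EM-derived)
            # over the overlapping frequencies.
            return float(np.exp(np.mean(np.log(rho_ref) - np.log(rho_obs))))

        def correct_static_shift(rho_obs, shift):
            return np.asarray(rho_obs) * shift   # apply the same factor at every frequency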

  9. Daily online bony correction is required for prostate patients without fiducial markers or soft-tissue imaging.

    Science.gov (United States)

    Johnston, M L; Vial, P; Wiltshire, K L; Bell, L J; Blome, S; Kerestes, Z; Morgan, G W; O'Driscoll, D; Shakespeare, T P; Eade, T N

    2011-09-01

    To compare online position verification strategies with offline correction protocols for patients undergoing definitive prostate radiotherapy. We analysed 50 patients with implanted fiducial markers undergoing curative prostate radiation treatment, all of whom underwent daily kilovoltage imaging using an on-board imager. For each treatment, patients were set up initially with skin tattoos and in-room lasers. Orthogonal on-board imager images were acquired and the couch shift to match both bony anatomy and the fiducial markers recorded. The set-up error using skin tattoos and offline bone correction was compared with online bone correction. The fiducial markers were used as the reference. Data from 1923 fractions were analysed. The systematic error was ≤1 mm for all protocols. The average random error was 2-3 mm for online bony correction and 3-5 mm for skin tattoos or offline-bone. Online-bone showed a significant improvement compared with offline-bone in the number of patients with >5 mm set-up errors for more than 10% and more than 20% of fractions. Daily online bony correction is required for prostate patients without fiducial markers or daily soft-tissue imaging. Copyright © 2011 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  10. Mixed-Precision Spectral Deferred Correction: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
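
    A rough sketch of the idea follows: an explicit SDC step in which the first correction sweeps evaluate the right-hand side with the state cast to float32, and the final sweeps use full float64. This is a toy scalar version with equispaced nodes, not the S3D implementation; all names and the choice of forward Euler as the low-order propagator are assumptions.

        import numpy as np

        def sdc_step(f, t0, y0, dt, n_nodes=4, n_sweeps=4, low_prec_sweeps=2):
            # Quadrature matrix on unit-interval nodes: S[m, j] integrates the j-th Lagrange
            # basis polynomial over [node_m, node_{m+1}]; scaled by dt afterwards.
            nodes = np.linspace(0.0, 1.0, n_nodes)
            S = np.zeros((n_nodes - 1, n_nodes))
            for j in range(n_nodes):
                c = np.zeros(n_nodes); c[j] = 1.0
                icoef = np.polyint(np.polyfit(nodes, c, n_nodes - 1))
                for m in range(n_nodes - 1):
                    S[m, j] = np.polyval(icoef, nodes[m + 1]) - np.polyval(icoef, nodes[m])
            S *= dt
            tau = t0 + dt * nodes

            # Provisional solution: forward Euler between the nodes.
            y = np.full(n_nodes, float(y0))
            for m in range(n_nodes - 1):
                y[m + 1] = y[m] + (tau[m + 1] - tau[m]) * f(tau[m], y[m])

            for k in range(n_sweeps):
                reduced = k < low_prec_sweeps    # early sweeps use reduced-precision evaluations
                F = np.array([f(tau[m], np.float32(y[m]) if reduced else y[m])
                              for m in range(n_nodes)], dtype=np.float64)
                for m in range(n_nodes - 1):
                    # explicit SDC update: Euler on the defect plus spectral quadrature of F
                    y[m + 1] = (y[m] + (tau[m + 1] - tau[m]) * (f(tau[m], y[m]) - F[m])
                                + S[m] @ F)
            return y[-1]

    For a quick sanity check, sdc_step(lambda t, y: -y, 0.0, 1.0, 0.1) should land close to exp(-0.1).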

  11. Local defect correction for boundary integral equation methods

    NARCIS (Netherlands)

    Kakuba, G.; Anthonissen, M.J.H.

    2013-01-01

    This paper presents a new approach to gridding for problems with localised regions of high activity. The technique of local defect correction has been studied for other methods such as finite difference methods and finite volume methods. In this paper we develop the technique for the boundary element

  12. Corrective Action Decision Document, Area 15 Environmental Protection Agency Farm Laboratory Building, Corrective Action Unit No. 95, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-08-18

    This report is the Corrective Action Decision Document (CADD) for the Nevada Test Site (NTS) Area 15 U.S. Environmental Protection Agency (EPA) Farm, Laboratory Building (Corrective Action Unit [CAU] No. 95), at the Nevada Test Site, Nye County, Nevada. The scope of this CADD is to identify and evaluate potential corrective action alternatives for the decommissioning and decontamination (D and D) of the Laboratory Building, which were selected based on the results of investigative activities. Based on this evaluation, a preferred corrective action alternative is recommended. Studies were conducted at the EPA Farm from 1963 to 1981 to determine the animal intake and retention of radionuclides. The main building, the Laboratory Building, has approximately 370 square meters (4,000 square feet) of operational space. Other CAUS at the EPA Farm facility that will be investigated and/or remediated through other environmental restoration subprojects are not included in this CADD, with the exception of housekeeping sites. Associated structures that do not require classification as CAUS are considered in the evaluation of corrective action alternatives for CAU 95.

  13. Corrective Action Decision Document, Area 15 Environmental Protection Agency Farm Laboratory Building, Corrective Action Unit No. 95, Revision 0

    International Nuclear Information System (INIS)

    1997-01-01

    This report is the Corrective Action Decision Document (CADD) for the Nevada Test Site (NTS) Area 15 U.S. Environmental Protection Agency (EPA) Farm, Laboratory Building (Corrective Action Unit [CAU] No. 95), at the Nevada Test Site, Nye County, Nevada. The scope of this CADD is to identify and evaluate potential corrective action alternatives for the decommissioning and decontamination (D and D) of the Laboratory Building, which were selected based on the results of investigative activities. Based on this evaluation, a preferred corrective action alternative is recommended. Studies were conducted at the EPA Farm from 1963 to 1981 to determine the animal intake and retention of radionuclides. The main building, the Laboratory Building, has approximately 370 square meters (4,000 square feet) of operational space. Other CAUS at the EPA Farm facility that will be investigated and/or remediated through other environmental restoration subprojects are not included in this CADD, with the exception of housekeeping sites. Associated structures that do not require classification as CAUS are considered in the evaluation of corrective action alternatives for CAU 95

  14. Attenuation correction for SPECT

    International Nuclear Information System (INIS)

    Hosoba, Minoru

    1986-01-01

    Attenuation correction is required for the reconstruction of a quantitative SPECT image. A new method for detecting body contours, which are important for the correction of tissue attenuation, is presented. The effect of body contours, detected by the newly developed method, on the reconstructed images was evaluated using various techniques for attenuation correction. The count rates in the specified region of interest in the phantom image by the Radial Post Correction (RPC) method, the Weighted Back Projection (WBP) method, Chang's method were strongly affected by the accuracy of the contours, as compared to those by Sorenson's method. To evaluate the effect of non-uniform attenuators on the cardiac SPECT, computer simulation experiments were performed using two types of models, the uniform attenuator model (UAM) and the non-uniform attenuator model (NUAM). The RPC method showed the lowest relative percent error (%ERROR) in UAM (11 %). However, 20 to 30 percent increase in %ERROR was observed for NUAM reconstructed with the RPC, WBP, and Chang's methods. Introducing an average attenuation coefficient (0.12/cm for Tc-99m and 0.14/cm for Tl-201) in the RPC method decreased %ERROR to the levels for UAM. Finally, a comparison between images, which were obtained by 180 deg and 360 deg scans and reconstructed from the RPC method, showed that the degree of the distortion of the contour of the simulated ventricles in the 180 deg scan was 15 % higher than that in the 360 deg scan. (Namekawa, K.)
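
    For orientation, the sketch below shows a first-order Chang-type correction using a detected body contour and a uniform attenuation coefficient; it is a generic illustration of that class of method rather than the RPC or WBP algorithms evaluated in the paper, and the ray-marching scheme and names are assumptions.

        import numpy as np

        def chang_correction(image, body_mask, mu=0.12, n_angles=64, step=1.0):
            # image:     reconstructed (uncorrected) SPECT slice, 2-D array
            # body_mask: True inside the detected body contour
            # mu:        effective linear attenuation coefficient per pixel unit
            ny, nx = image.shape
            angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
            corrected = np.zeros_like(image, dtype=float)
            for iy in range(ny):
                for ix in range(nx):
                    if not body_mask[iy, ix]:
                        continue
                    factors = []
                    for th in angles:
                        dx, dy = np.cos(th), np.sin(th)
                        x, y, length = float(ix), float(iy), 0.0
                        # march along the ray until it leaves the body contour
                        while (0 <= int(round(x)) < nx and 0 <= int(round(y)) < ny
                               and body_mask[int(round(y)), int(round(x))]):
                            x += dx * step
                            y += dy * step
                            length += step
                        factors.append(np.exp(-mu * length))
                    # first-order Chang: divide by the mean attenuation factor over angles
                    corrected[iy, ix] = image[iy, ix] / np.mean(factors)
            return corrected

    The accuracy of body_mask directly controls the computed path lengths, which is the contour-sensitivity effect quantified above.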

  15. Vacuum chamber eddy current correction coil for the AGS Booster

    International Nuclear Information System (INIS)

    Danby, G.; Jackson, J.

    1988-01-01

    The AGS Booster injector will perform a variety of functions. Heavy ion acceleration requires a bakeable, ultra-high vacuum system (VC). Acceleration for intense proton beams requires rapid cycling (B ≤ 10 T/sec). If straight forward heavy walled VC are used, the field perturbations due to eddy currents are large. The state of the art lattice has highly distributed lumped sextupoles capable of substantially correcting the induced field nonlinearity. Nevertheless, for the very highest space charge-intensity limits, it is desirable to have the capability to remove eddy current fields at the source. Correction coils attached to the outside of the VC cancel its current aberrations over the required good field aperture. These can be passively powered by transformer action, using two turn windings around the magnet yoke. Programmed power supplies can also be used. This inexpensive additional correction option uses a three turn per quadrant coil which follows the local contour of the VC. Transverse movements of several mms of the VC will have no beam optical effect since the large field aberrations and their corrections have the same displaced coordinates. Experimental and computer studies will be presented, as well as mechanical and electrical design of a simple method of construction. 6 figs

  16. Vacuum chamber eddy current correction coil for the AGS booster

    International Nuclear Information System (INIS)

    Danby, G.; Jackson, J.

    1988-01-01

    This paper reports on the AGS Booster injector that performs a variety of functions. Heavy ion acceleration requires a bakeable, ultra-high vacuum system (VC). Acceleration for intense proton beams requires rapid cycling (B ≤ 10 T/sec). If straight forward heavy walled VC are used, the field perturbations due to eddy currents are large. The state of the art lattice has highly distributed lumped sextupoles capable of substantially correcting the induced field nonlinearity. Nevertheless, for the very highest space charge-intensity limits, it is desirable to have the capability to remove eddy current fields at the source. Correction coils attached to the outside of the VC cancel its current aberrations over the required good field aperture. These can be passively powered by transformer action, using two turn windings around the magnet yoke. Programmed power supplies can also be used. This inexpensive additional correction option uses a three turn per quadrant coil which follows the local contour of the VC. Transverse movements of several mms of the VC will have no beam optical effect since the large field aberrations and their corrections have the same displaced coordinates. Experimental and computer studies will be presented, as well as mechanical and electrical design of a simple method of construction

  17. LAGRANGE SOLUTIONS TO THE DISCRETE-TIME GENERAL THREE-BODY PROBLEM

    International Nuclear Information System (INIS)

    Minesaki, Yukitaka

    2013-01-01

    There is no known integrator that yields exact orbits for the general three-body problem (G3BP). It is difficult to verify whether a numerical procedure yields the correct solutions to the G3BP because doing so requires knowledge of all 11 conserved quantities, whereas only six are known. Without tracking all of the conserved quantities, it is possible to show that the discrete general three-body problem (d-G3BP) yields the correct orbits corresponding to Lagrange solutions of the G3BP. We show that the d-G3BP yields the correct solutions to the G3BP for two special cases: the equilateral triangle and collinear configurations. For the triangular solution, we use the fact that the solution to the three-body case is a superposition of the solutions to the three two-body cases, and we show that the three bodies maintain the same relative distances at all times. To obtain the collinear solution, we assume a specific permutation of the three bodies arranged along a straight rotating line, and we show that the d-G3BP maintains the same distance ratio between two bodies as in the G3BP. Proving that the d-G3BP solutions for these cases are equivalent to those of the G3BP makes it likely that the d-G3BP and G3BP solutions are equivalent in other cases. To our knowledge, this is the first work that proves the equivalence of the discrete solutions and the Lagrange orbits.

  18. Blind retrospective motion correction of MR images.

    Science.gov (United States)

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2013-12-01

    Subject motion can severely degrade MR images. A retrospective motion correction algorithm, Gradient-based motion correction, is proposed; it significantly reduces ghosting and blurring artifacts due to subject motion. The technique uses the raw data of standard imaging sequences; no sequence modifications or additional equipment such as tracking devices are required. Rigid motion is assumed. The approach iteratively searches for the motion trajectory yielding the sharpest image as measured by the entropy of spatial gradients. The vast space of motion parameters is efficiently explored by gradient-based optimization with a convergence guarantee. The method has been evaluated on both synthetic and real data in two and three dimensions using standard imaging techniques. MR images are consistently improved over different kinds of motion trajectories. Using a graphics processing unit implementation, computation times are in the order of a few minutes for a full three-dimensional volume. The presented technique can be an alternative or a complement to prospective motion correction methods and is able to improve images with strong motion artifacts from standard imaging sequences without requiring additional data. Copyright © 2013 Wiley Periodicals, Inc., a Wiley company.
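
    The objective driving that search can be written down compactly. The snippet below implements only the sharpness metric (entropy of the normalized spatial-gradient magnitude); how candidate rigid-motion corrections are applied in k-space is not shown, and the normalization details are assumptions.

        import numpy as np

        def gradient_entropy(image, eps=1e-12):
            # Sharp images concentrate gradient energy in few pixels, giving low entropy,
            # so the motion search minimizes this value over candidate trajectories.
            gy, gx = np.gradient(np.asarray(image, dtype=float))
            mag = np.sqrt(gx ** 2 + gy ** 2)
            p = mag / (mag.sum() + eps)        # treat gradient magnitudes as a distribution
            p = p[p > 0]
            return float(-np.sum(p * np.log(p)))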

  19. RELEVANT PROBLEMS OF UKRAINE’S INTEGRATION INTO GLOBAL ECONOMY

    Directory of Open Access Journals (Sweden)

    Yevheniia Duliba

    2017-12-01

    Full Text Available The purpose of this research is to study the main problems that prevent Ukraine from integrating into the global economy and to determine the correct focuses of the foreign economic policy of Ukraine against the background of strengthening globalization tendencies throughout the world. The research is based on the development of the foreign economic policy of Ukraine and the improvement of the Ukrainian economy against the background of international integration. At the heart of the research methodology is a dialectical method of scientific knowledge and, besides, special methods of research based on modern scientific bases of economic, management, and related knowledge: the economic and statistical method – for the assessment of the modern state of foreign trade and investment activity of Ukraine; the method of analysis and synthesis – for the determination of tendencies in the development of integration processes in Ukraine; comparative analysis – for the comparison of information concerning the development of specific indicators of foreign economic activities in Ukraine. Results. As a result of the research, the main blocks of problems, which impede the integration of Ukraine into the global economy, and the requirements for their complex solution are determined. Besides, the interdependence and interdetermination of the problems, which impede the integration of Ukraine into the global economy, and the requirements for their complex solution are explained. Political and legal, economic, sociocultural, and infrastructural preconditions that are necessary for the effective integration of Ukraine into the global economy are highlighted. Practical implications. Analysis of the existing problems related to the real economy, investments, and innovation processes gives the possibility to determine the vector of development of Ukraine’s economy, taking into account recommendations concerning its improvement for the purposes of integration into the global economy. Value

  20. 77 FR 3379 - Biorefinery Assistance Guaranteed Loans; Correction

    Science.gov (United States)

    2012-01-24

    ... credit risk analysis. The Agency will require an evaluation and either a credit rating or a credit... provisions as to what an applicant is to do in the event either an appraisal is not completed or a credit... Correction As published, the interim rule requires applicants to submit a "credit rating" with the...

  1. Resonating-group method for nuclear many-body problems

    International Nuclear Information System (INIS)

    Tang, Y.C.; LeMere, M.; Thompson, D.R.

    1977-01-01

    The resonating-group method is a microscopic method which uses fully antisymmetric wave functions, treats correctly the motion of the total center of mass, and takes cluster correlation into consideration. In this review, the formulation of this method is discussed for various nuclear many-body problems, and a complex-generator-coordinate technique which has been employed to evaluate matrix elements required in resonating-group calculations is described. Several illustrative examples of bound-state, scattering, and reaction calculations, which serve to demonstrate the usefulness of this method, are presented. Finally, by utilization of the results of these calculations, the role played by the Pauli principle in nuclear scattering and reaction processes is discussed. 21 figures, 2 tables, 185 references

  2. T-branes and α′-corrections

    Energy Technology Data Exchange (ETDEWEB)

    Marchesano, Fernando; Schwieger, Sebastian [Instituto de Física Teórica UAM-CSIC,Cantoblanco, 28049 Madrid (Spain)

    2016-11-21

    We study α′-corrections in multiple D7-brane configurations with non-commuting profiles for their transverse position fields. We focus on T-brane systems, crucial in F-theory GUT model building. There α′-corrections modify the D-term piece of the BPS equations which, already at leading order, require a non-primitive Abelian worldvolume flux background. We find that α′-corrections may either i) leave this flux background invariant, ii) modify the Abelian non-primitive flux profile, or iii) deform it to a non-Abelian profile. The last case typically occurs when primitive fluxes, a necessary ingredient to build 4d chiral models, are added to the system. We illustrate these three cases by solving the α′-corrected D-term equations in explicit examples, and describe their appearance in more general T-brane backgrounds. Finally, we discuss implications of our findings for F-theory GUT local models.

  3. Detection and defect correction of operating process

    International Nuclear Information System (INIS)

    Vasendina, Elena; Plotnikova, Inna; Levitskaya, Anastasiya; Kvesko, Svetlana

    2016-01-01

    The article addresses the current problem of raising enterprise competitiveness under the harsh and competitive conditions of the business environment. The importance of modern equipment for the detection of defects and their correction is explained. The production of chipboard is used as the object of research. A short description and the main results of the efficiency estimation of the enterprises' innovative solutions are given. (paper)

  4. ERRORS AND CORRECTIVE FEEDBACK IN WRITING: IMPLICATIONS TO OUR CLASSROOM PRACTICES

    Directory of Open Access Journals (Sweden)

    Maria Corazon Saturnina A Castro

    2017-10-01

    Full Text Available Error correction is one of the most contentious and misunderstood issues in both foreign and second language teaching. Despite varying positions on the effectiveness of error correction or the lack of it, corrective feedback remains an institution in the writing classes. Given this context, this action research endeavors to survey prevalent attitudes of teachers and students toward corrective feedback and examine their implications to classroom practices.  This paper poses the major problem:  How do teachers’ perspectives on corrective feedback match the students’ views and expectations about error treatment in their writing? Professors of the University of the Philippines who teach composition classes and over a hundred students enrolled in their classes were surveyed.  Results showed that there are differing perceptions of teachers and students regarding corrective feedback. These oppositions must be addressed as they have implications to current pedagogical practices which include constructing and establishing appropriate lesson goals, using alternative corrective strategies, teaching grammar points in class even in the tertiary level, and further understanding the learning process.

  5. Seeing atoms with aberration-corrected sub-Angstroem electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    O' Keefe, Michael A. [Materials Science Division, Lawrence Berkeley National Laboratory, National Center for Electron Microscopy, 2R0200, 1 Cyclotron Road, Berkeley, CA 94720-8197 (United States)], E-mail: sub-Angstrom@comcast.net

    2008-02-15

    High-resolution electron microscopy is able to provide atomic-level characterization of many materials in low-index orientations. To achieve the same level of characterization in more complex orientations requires that instrumental resolution be improved to values corresponding to the sub-Angstroem separations of atom positions projected into these orientations. Sub-Angstroem resolution in the high-resolution transmission electron microscope has been achieved in the last few years by software aberration correction, electron holography, and hardware aberration correction; the so-called 'one-Angstroem barrier' has been left behind. Aberration correction of the objective lens currently allows atomic-resolution imaging at the sub-0.8 A level and is advancing towards resolutions in the deep sub-Angstroem range (near 0.5 A). At current resolution levels, images with sub-Rayleigh resolution require calibration in order to pinpoint atom positions correctly. As resolution levels approach the 'sizes' of atoms, the atoms themselves will produce a limit to resolution, no matter how much the instrumental resolution is improved. By arranging imaging conditions suitably, each atom peak in the image can be narrower, so atoms are imaged smaller and may be resolved at finer separations.

  6. A method of detector correction for cosmic ray muon radiography

    International Nuclear Information System (INIS)

    Liu Yuanyuan; Zhao Ziran; Chen Zhiqiang; Zhang Li; Wang Zhentian

    2008-01-01

    Cosmic-ray muon radiography, which has good penetrability and sensitivity to high-Z materials, is an effective way of detecting shielded nuclear materials. Data correction is one of the key problems of the muon radiography technique. Because of the influence of the environmental background, environmental noise, and detector errors, the raw data cannot be used directly. If the raw data were used for reconstruction without any correction, severe artifacts would appear. Based on the characteristics of the muon radiography system and aimed at detector errors, this paper proposes a method of detector correction. Simulation experiments demonstrate that this method can effectively correct the errors produced by the detectors. It therefore brings the technique of cosmic-ray muon radiography a step closer to practical use. (authors)

  7. Quantum-electrodynamics corrections in pionic hydrogen

    NARCIS (Netherlands)

    Schlesser, S.; Le Bigot, E. -O.; Indelicato, P.; Pachucki, K.

    2011-01-01

    We investigate all pure quantum-electrodynamics corrections to the np --> 1s, n = 2-4 transition energies of pionic hydrogen larger than 1 meV, which requires an accurate evaluation of all relevant contributions up to order α^5. These values are needed to extract an accurate strong interaction

  8. Accounting for Chromatic Atmospheric Effects on Barycentric Corrections

    Energy Technology Data Exchange (ETDEWEB)

    Blackman, Ryan T.; Szymkowiak, Andrew E.; Fischer, Debra A.; Jurgenson, Colby A., E-mail: ryan.blackman@yale.edu [Department of Astronomy, Yale University, 52 Hillhouse Avenue, New Haven, CT 06511 (United States)

    2017-03-01

    Atmospheric effects on stellar radial velocity measurements for exoplanet discovery and characterization have not yet been fully investigated for extreme precision levels. We carry out calculations to determine the wavelength dependence of barycentric corrections across optical wavelengths, due to the ubiquitous variations in air mass during observations. We demonstrate that radial velocity errors of at least several cm s^-1 can be incurred if the wavelength dependence is not included in the photon-weighted barycentric corrections. A minimum of four wavelength channels across optical spectra (380–680 nm) are required to account for this effect at the 10 cm s^-1 level, with polynomial fits of the barycentric corrections applied to cover all wavelengths. Additional channels may be required in poor observing conditions or to avoid strong telluric absorption features. Furthermore, consistent flux sampling on the order of seconds throughout the observation is necessary to ensure that accurate photon weights are obtained. Finally, we describe how a multiple-channel exposure meter will be implemented in the EXtreme PREcision Spectrograph (EXPRES).
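
    A minimal sketch of the bookkeeping described here (photon-weighted barycentric correction per exposure-meter channel, extended across wavelength with a polynomial fit) is given below; the array shapes, units, and function name are assumptions, and the per-epoch barycentric values themselves would come from a dedicated calculation not shown.

        import numpy as np

        def chromatic_barycentric_correction(bc_time_series, flux_weights,
                                             channel_wavelengths, eval_wavelengths, deg=2):
            # bc_time_series:      (n_times,) barycentric correction at each flux sample [m/s]
            # flux_weights:        (n_channels, n_times) exposure-meter counts per channel
            # channel_wavelengths: (n_channels,) central wavelengths of the channels [nm]
            # eval_wavelengths:    wavelengths at which the correction is needed [nm]
            w = np.asarray(flux_weights, dtype=float)
            bc_channel = (w @ np.asarray(bc_time_series, dtype=float)) / w.sum(axis=1)
            coeffs = np.polyfit(channel_wavelengths, bc_channel, deg)   # fit across wavelength
            return np.polyval(coeffs, eval_wavelengths)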

  9. Classification of cancerous cells based on the one-class problem approach

    Science.gov (United States)

    Murshed, Nabeel A.; Bortolozzi, Flavio; Sabourin, Robert

    1996-03-01

    One of the most important factors in reducing the effect of cancerous diseases is the early diagnosis, which requires a good and a robust method. With the advancement of computer technologies and digital image processing, the development of a computer-based system has become feasible. In this paper, we introduce a new approach for the detection of cancerous cells. This approach is based on the one-class problem approach, through which the classification system need only be trained with patterns of cancerous cells. This reduces the burden of the training task by about 50%. Based on this approach, a computer-based classification system is developed, based on the Fuzzy ARTMAP neural networks. Experimental results were performed using a set of 542 patterns taken from a sample of breast cancer. Results of the experiment show 98% correct identification of cancerous cells and 95% correct identification of non-cancerous cells.
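
    To show the one-class training idea in code (with a standard one-class classifier swapped in for the Fuzzy ARTMAP network actually used in the paper), a sketch on synthetic feature vectors might look like this; the feature dimensions and parameter values are placeholders.

        import numpy as np
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(0)
        cancer_train = rng.normal(loc=1.0, scale=0.2, size=(200, 4))   # synthetic "cancerous" patterns
        cancer_test = rng.normal(loc=1.0, scale=0.2, size=(100, 4))
        other_test = rng.normal(loc=0.0, scale=0.2, size=(100, 4))     # synthetic "non-cancerous" patterns

        # Train on cancerous patterns only; anything unlike them is rejected at test time.
        clf = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(cancer_train)

        print("cancerous identified:  ", (clf.predict(cancer_test) == 1).mean())
        print("non-cancerous rejected:", (clf.predict(other_test) == -1).mean())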

  10. A solution to nonlinearity problems

    International Nuclear Information System (INIS)

    Neuffer, D.V.

    1989-01-01

    New methods of correcting dynamic nonlinearities resulting from the multipole content of a synchrotron or transport line are presented. In a simplest form, correction elements are places at the center (C) of the accelerator half-cells as well as near the focusing (F) and defocusing (D) quadrupoles. In a first approximation, the corrector strengths follow Simpson's Rule, forming an accurate quasi-local canceling approximation to the nonlinearity. The F, C, and D correctors may also be used to obtain precise control of the horizontal, coupled, and vertical motion. Correction by three or more orders of magnitude can be obtained, and simple solutions to a fundamental problem in beam transport have been obtained. 13 refs., 1 fig., 1 tab
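
    For reference, the Simpson's Rule analogy amounts to weighting the three correctors over a half-cell of length L as in the standard quadrature formula (a textbook statement of the rule, not a quote from the paper): the integral of a kick density k(s) over the half-cell is approximated by (L/6)[k(F) + 4 k(C) + k(D)], so the F, C, and D correctors carry relative strengths of roughly 1 : 4 : 1.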

  11. Corrective Action Decision Document for Corrective Action Unit 563: Septic Systems, Nevada Test Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Grant Evenson

    2008-02-01

    This Corrective Action Decision Document has been prepared for Corrective Action Unit (CAU) 563, Septic Systems, in accordance with the Federal Facility Agreement and Consent Order (FFACO, 1996; as amended January 2007). The corrective action sites (CASs) for CAU 563 are located in Areas 3 and 12 of the Nevada Test Site, Nevada, and are comprised of the following four sites: •03-04-02, Area 3 Subdock Septic Tank •03-59-05, Area 3 Subdock Cesspool •12-59-01, Drilling/Welding Shop Septic Tanks •12-60-01, Drilling/Welding Shop Outfalls The purpose of this Corrective Action Decision Document is to identify and provide the rationale for the recommendation of a corrective action alternative (CAA) for the four CASs within CAU 563. Corrective action investigation (CAI) activities were performed from July 17 through November 19, 2007, as set forth in the CAU 563 Corrective Action Investigation Plan (NNSA/NSO, 2007). Analytes detected during the CAI were evaluated against appropriate final action levels (FALs) to identify the contaminants of concern (COCs) for each CAS. The results of the CAI identified COCs at one of the four CASs in CAU 563 and required the evaluation of CAAs. Assessment of the data generated from investigation activities conducted at CAU 563 revealed the following: •CASs 03-04-02, 03-59-05, and 12-60-01 do not contain contamination at concentrations exceeding the FALs. •CAS 12-59-01 contains arsenic and chromium contamination above FALs in surface and near-surface soils surrounding a stained location within the site. Based on the evaluation of analytical data from the CAI, review of future and current operations at CAS 12-59-01, and the detailed and comparative analysis of the potential CAAs, the following corrective actions are recommended for CAU 563.

  12. First turn beam correction for the Advanced Photon Source storage ring

    International Nuclear Information System (INIS)

    Qian, Y.; Crosbie, E.; Teng, L.

    1991-01-01

    A procedure was developed for precise realignment of the quadrupoles in a synchrotron radiation storage ring which can substantially ease the required precision of the initial survey. The procedure consists of first using the injected beam to obtain a closed orbit which is centered on the beam position monitors by the correction dipoles. The strengths of the correction dipoles then give the required fine-adjustment of the quadrupole positions. In this paper the authors discuss only the algorithm for obtaining the closed orbit

  13. Relative Radiometric Normalization and Atmospheric Correction of a SPOT 5 Time Series

    Directory of Open Access Journals (Sweden)

    Matthieu Rumeau

    2008-04-01

    Full Text Available Multi-temporal images acquired at high spatial and temporal resolution are an important tool for detecting change and analyzing trends, especially in agricultural applications. However, to ensure a reliable use of this kind of data, a rigorous radiometric normalization step is required. Normalization can be addressed by performing an atmospheric correction of each image in the time series. The main problem is the difficulty of obtaining an atmospheric characterization at a given acquisition date. In this paper, we investigate whether relative radiometric normalization can substitute for atmospheric correction. We develop an automatic method for relative radiometric normalization based on calculating linear regressions between unnormalized and reference images. Regressions are obtained using the reflectances of automatically selected invariant targets. We compare this method with an atmospheric correction method that uses the 6S model. The performances of both methods are compared using 18 images from a SPOT 5 time series acquired over Reunion Island. Results obtained for a set of manually selected invariant targets show excellent agreement between the two methods in all spectral bands: values of the coefficient of determination (r²) exceed 0.960, and bias magnitude values are less than 2.65. There is also a strong correlation between normalized NDVI values of sugarcane fields (r² = 0.959). Despite a relative error of 12.66% between values, very comparable NDVI patterns are observed.
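
    The per-band regression step described here is simple enough to sketch directly; the function below is an illustrative stand-in (the names and the use of numpy's least-squares line fit are assumptions), not the authors' automated invariant-target selection.

        import numpy as np

        def normalize_band(target_band, reference_band, invariant_mask):
            # Fit reference ~ gain * target + offset on the pseudo-invariant pixels only,
            # then apply the same linear transform to the whole band.
            x = target_band[invariant_mask].astype(float).ravel()
            y = reference_band[invariant_mask].astype(float).ravel()
            gain, offset = np.polyfit(x, y, 1)
            return gain * target_band.astype(float) + offset, (gain, offset)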

  14. Corrective Jaw Surgery

    Medline Plus

    Full Text Available Orthognathic surgery is performed to correct the misalignment of jaws ...

  15. Calculation and measurement of radiation corrections for plasmon resonances in nanoparticles

    Science.gov (United States)

    Hung, L.; Lee, S. Y.; McGovern, O.; Rabin, O.; Mayergoyz, I.

    2013-08-01

    The problem of plasmon resonances in metallic nanoparticles can be formulated as an eigenvalue problem under the condition that the wavelengths of the incident radiation are much larger than the particle dimensions. As the nanoparticle size increases, the quasistatic condition is no longer valid. For this reason, the accuracy of the electrostatic approximation may be compromised and appropriate radiation corrections for the calculation of resonance permittivities and resonance wavelengths are needed. In this paper, we present the radiation corrections in the framework of the eigenvalue method for plasmon mode analysis and demonstrate that the computational results accurately match analytical solutions (for nanospheres) and experimental data (for nanorings and nanocubes). We also demonstrate that the optical spectra of silver nanocube suspensions can be fully assigned to dipole-type resonance modes when radiation corrections are introduced. Finally, our method is used to predict the resonance wavelengths for face-to-face silver nanocube dimers on glass substrates. These results may be useful for the indirect measurements of the gaps in the dimers from extinction cross-section observations.

  16. Subjective-personal readiness of correctional teachers to education of ASD children

    Directory of Open Access Journals (Sweden)

    Kateryna Ostrovska

    2017-07-01

    Full Text Available ASD teachers require skills that go beyond the realm of most educators, including professional competences and high moral qualities. The work presents theoretical approaches and experimental research on the problem of the subjective-personal readiness of correctional teachers for the education of ASD children. A psychological investigation was conducted, measuring psychological indices of 40 teachers of ASD children from the boarding school "Trust" and 40 teachers from mainstream schools of Lviv city, aged from 28 to 59 years. The following methods are used: "Questionnaire for the measurement of tolerance" (Magun, Zhamkochyan, Magura, 2000); "Shein’s Career Anchors" method aimed at studying the career orientations of the teachers (Shein, 2010); "Diagnostics of empathy level" (Viktor Boiko, 2001); and the method of studying "Motivation of professional activities" by Catelin Zamfir in the modification of Artur Rean (Bordovskaya & Rean, 2001). Based on these studies, a program for the development of the subjective-personal readiness of correctional teachers to work with ASD children is proposed. The program consists of the following components: a motivational component (professional competence, self-development, self-determination, self-control); a cognitive component (intellectual personality autonomy, self-identification, stability, challenge, integration of lifestyles); and an emotionally-volitional component (empathy, positive attitude toward a child, intellectual analysis of emotions, self-regulation).

  17. Job requirements compared to medical school education: differences between graduates from problem-based learning and conventional curricula.

    Science.gov (United States)

    Schlett, Christopher L; Doll, Hinnerk; Dahmen, Janosch; Polacsek, Ole; Federkeil, Gero; Fischer, Martin R; Bamberg, Fabian; Butzlaff, Martin

    2010-01-14

    Problem-based Learning (PBL) has been suggested as a key educational method of knowledge acquisition to improve medical education. We sought to evaluate the differences in medical school education between graduates from PBL-based and conventional curricula and to what extent these curricula fit job requirements. Graduates from all German medical schools who graduated between 1996 and 2002 were eligible for this study. Graduates self-assessed nine competencies as required at their day-to-day work and as taught in medical school on a 6-point Likert scale. Results were compared between graduates from a PBL-based curriculum (University Witten/Herdecke) and conventional curricula. Three schools were excluded because of low response rates. Baseline demographics between graduates of the PBL-based curriculum (n = 101, 49% female) and the conventional curricula (n = 4720, 49% female) were similar. No major differences were observed regarding job requirements, with priorities for "Independent learning/working" and "Practical medical skills". All competencies were rated as better taught in the PBL-based curriculum than in the conventional curricula (all differences statistically significant), with the largest advantages for "Independent learning/working" (Delta +0.57), "Psycho-social competence" (Delta +0.56), "Teamwork" (Delta +0.39) and "Problem-solving skills" (Delta +0.36), whereas "Research competence" (Delta -1.23) and "Business competence" (Delta -1.44) in the PBL-based curriculum needed improvement. Among medical graduates in Germany, PBL demonstrated benefits with regard to competencies which were highly required in the job of physicians. Research and business competence deserve closer attention in future curricular development.

  18. Writing testable software requirements

    Energy Technology Data Exchange (ETDEWEB)

    Knirk, D. [Sandia National Labs., Albuquerque, NM (United States)

    1997-11-01

    This tutorial identifies common problems in analyzing requirements in the problem domain and in constructing a written specification of what the software is to do. It deals with two main problem areas: identifying and describing problem requirements, and analyzing and describing behavior specifications.

  19. Relative amplitude preservation processing utilizing surface consistent amplitude correction. Part 3; Surface consistent amplitude correction wo mochiita sotai shinpuku hozon shori. 3

    Energy Technology Data Exchange (ETDEWEB)

    Saeki, T [Japan National Oil Corporation, Tokyo (Japan). Technology Research Center

    1996-10-01

    For the seismic reflection method conducted on the ground surface, the generator and geophone are set on the surface, so the observed waveforms are affected by the ground surface and the surface layer. Consequently, to discuss the physical properties of the deep underground, the influence of the surface layer must first be removed. For the surface-consistent amplitude correction, the properties of the generator and geophone were removed by assuming that the observed waveforms can be expressed as equations of convolution. This is a correction method for obtaining records unaffected by surface conditions. For the analysis and correction of waveforms, wavelet transformation was examined. Using the amplitude patterns after correction, the significant-signal region, noise-dominant region, and surface-wave-dominant region can be separated from each other. Since the corrected amplitude values in the significant-signal region show only small variation, a representative value can be assigned, which can be used for analyzing the surface-consistent amplitude correction. The efficiency of the process can be enhanced by considering the change of frequency. 3 refs., 5 figs.

  20. Evaluation criteria for communications-related corrective action plans

    International Nuclear Information System (INIS)

    1997-02-01

    This document provides guidance and criteria for US Nuclear Regulatory Commission (NRC) personnel to use in evaluating corrective action plans for nuclear power plant communications. The document begins by describing the purpose, scope, and applicability of the evaluation criteria. Next, it presents background information concerning the communication process, root causes of communication errors, and development and implementation of corrective actions. The document then defines specific criteria for evaluating the effectiveness of the corrective action plan, interview protocols, and an observation protocol related to communication processes. This document is intended only as guidance. It is not intended to have the effect of a regulation, and it does not establish any binding requirements or interpretations of NRC regulations

  1. Aberration-corrected STEM: current performance and future directions

    Energy Technology Data Exchange (ETDEWEB)

    Nellist, P D [Department of Physics, University of Dublin, Trinity College, Dublin 2 (Ireland); Chisholm, M F [Condensed Matter Sciences Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6030 (United States); Lupini, A R [Condensed Matter Sciences Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6030 (United States); Borisevich, A [Condensed Matter Sciences Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6030 (United States); Jr, W H Sides [Condensed Matter Sciences Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6030 (United States); Pennycook, S J [Condensed Matter Sciences Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831-6030 (United States); Dellby, N [Nion Co., 1102 8th St., Kirkland, WA 98033 (United States); Keyse, R [Nion Co., 1102 8th St., Kirkland, WA 98033 (United States); Krivanek, O L [Nion Co., 1102 8th St., Kirkland, WA 98033 (United States); Murfitt, M F [Nion Co., 1102 8th St., Kirkland, WA 98033 (United States); Szilagyi, Z S [Nion Co., 1102 8th St., Kirkland, WA 98033 (United States)

    2006-02-22

    Through the correction of spherical aberration in the scanning transmission electron microscope (STEM), the resolving of a 78 pm atomic column spacing has been demonstrated along with information transfer to 61 pm. The achievement of this resolution required careful control of microscope instabilities, parasitic aberrations and the compensation of uncorrected, higher order aberrations. Many of these issues are improved in a next generation STEM fitted with a new design of aberration corrector, and an initial result demonstrating aberration correction to a convergence semi-angle of 40 mrad is shown. The improved spatial resolution and beam convergence allowed for by such correction has implications for the way in which experiments are performed and how STEM data should be interpreted.

  2. 40 CFR 63.1192 - What recordkeeping requirements must I meet?

    Science.gov (United States)

    2010-07-01

    ... maintenance, include the date and time of the problem, when corrective actions were initiated, the cause of..., corrective action, maintenance, record, or report. The most recent two years of records must be retained at... detection system alarms. Include the date and time of the alarm, when corrective actions were initiated, the...

  3. Where is the problem?

    International Nuclear Information System (INIS)

    Levy-Leblond, J.-M.

    1990-01-01

    This paper examines the problem of the reduction of the state vector in quantum theory. The author suggests that this issue ceases to cause difficulties if viewed from the correct perspective, for example by giving the state vector an auxiliary rather than fundamental status. He advocates changing the conceptual framework of quantum theory and working with quantons rather than particles and/or waves. He denies that reduction is a psychophysiological problem of observation, and raises the question of the relevance of the experimental apparatus. He concludes by venturing the suggestion that the problem of the reduction of the quantum state vector lies not in quantum theory, but in classical perspectives. (UK)

  4. Correction of measured multiplicity distributions by the simulated annealing method

    International Nuclear Information System (INIS)

    Hafidouni, M.

    1993-01-01

    Simulated annealing is a method used to solve combinatorial optimization problems. It is used here for the correction of the observed multiplicity distribution from S-Pb collisions at 200 GeV/c per nucleon. (author) 11 refs., 2 figs
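
    For readers unfamiliar with the technique, the sketch below shows a generic simulated annealing loop of the kind referred to here; it is not the author's specific formulation for multiplicity distributions, and the energy and neighbor functions are placeholders to be supplied by the application.

```python
import math
import random

def simulated_annealing(initial_state, energy, neighbor,
                        t_start=1.0, t_end=1e-3, cooling=0.95, steps_per_t=100):
    """Generic simulated annealing loop: minimizes `energy` over candidate states."""
    state, e = initial_state, energy(initial_state)
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            candidate = neighbor(state)
            e_new = energy(candidate)
            # Accept downhill moves always, uphill moves with Boltzmann probability
            if e_new < e or random.random() < math.exp((e - e_new) / t):
                state, e = candidate, e_new
        t *= cooling  # geometric cooling schedule
    return state, e
```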

  5. Automatic Correction Algorithm of Hydrology Feature Attribute in National Geographic Census

    Science.gov (United States)

    Li, C.; Guo, P.; Liu, X.

    2017-09-01

    A subset of the attributes of hydrologic feature data in the national geographic census is not clear; the current solution to this problem has been manual filling, which is inefficient and error-prone. This paper therefore proposes an automatic correction algorithm for hydrologic feature attributes. Based on an analysis of the structural characteristics and topological relations, we put forward three basic principles of correction: network proximity, structural robustness, and topological ductility. Based on the WJ-III map workstation, we realize the automatic correction of hydrologic features. Finally, practical data are used to validate the method. The results show that our method is highly reasonable and efficient.

  6. Haplotyping Problem, A Clustering Approach

    International Nuclear Information System (INIS)

    Eslahchi, Changiz; Sadeghi, Mehdi; Pezeshk, Hamid; Kargar, Mehdi; Poormohammadi, Hadi

    2007-01-01

    Construction of two haplotypes from a set of Single Nucleotide Polymorphism (SNP) fragments is called the haplotype reconstruction problem. One of the most popular computational models for this problem is Minimum Error Correction (MEC). Since MEC is an NP-hard problem, we propose a novel heuristic algorithm based on clustering analysis in data mining for the haplotype reconstruction problem. Based on the Hamming distance and similarity between two fragments, our iterative algorithm produces two clusters of fragments; in each iteration, the algorithm assigns a fragment to one of the clusters. Our results suggest that the algorithm has a lower reconstruction error rate in comparison with other algorithms.
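
    A minimal sketch of a clustering heuristic in this spirit is shown below; it is a simplified illustration (the seed choice, fragment encoding, and stopping rule are ours), not the authors' exact algorithm.

```python
def hamming(frag_a, frag_b):
    """Distance over positions covered by both fragments ('-' marks a gap)."""
    return sum(1 for a, b in zip(frag_a, frag_b)
               if a != '-' and b != '-' and a != b)

def consensus(cluster, length):
    """Column-wise majority vote over the fragments in a cluster."""
    cols = []
    for i in range(length):
        alleles = [f[i] for f in cluster if f[i] != '-']
        cols.append(max(set(alleles), key=alleles.count) if alleles else '-')
    return ''.join(cols)

def reconstruct_haplotypes(fragments, length, iterations=10):
    """Greedy two-cluster assignment of SNP fragments by Hamming distance."""
    h1, h2 = fragments[0], fragments[1]          # arbitrary seeds
    for _ in range(iterations):
        c1, c2 = [], []
        for f in fragments:
            (c1 if hamming(f, h1) <= hamming(f, h2) else c2).append(f)
        h1, h2 = consensus(c1, length), consensus(c2, length)
    return h1, h2
```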

  7. Features of an Error Correction Memory to Enhance Technical Texts Authoring in LELIE

    Directory of Open Access Journals (Sweden)

    Patrick SAINT-DIZIER

    2015-12-01

    Full Text Available In this paper, we investigate the notion of an error correction memory applied to technical texts. The main purpose is to introduce flexibility and context sensitivity into the detection and correction of errors related to Constrained Natural Language (CNL) principles. This is realized by enhancing error detection paired with relatively generic correction patterns and contextual correction recommendations. Patterns are induced from previous corrections made by technical writers for a given type of text. The impact of such an error correction memory is also investigated from the point of view of the technical writer's cognitive activity. The notion of error correction memory is developed within the framework of the LELIE project; an experiment is carried out on the case of fuzzy lexical items and negation, which are both major problems in technical writing. Language processing and knowledge representation aspects are developed together with evaluation directions.

  8. Reed-Solomon Codes and the Deep Hole Problem

    Science.gov (United States)

    Keti, Matt

    In many types of modern communication, a message is transmitted over a noisy medium. When this is done, there is a chance that the message will be corrupted. An error-correcting code adds redundant information to the message which allows the receiver to detect and correct errors accrued during the transmission. We will study the famous Reed-Solomon code (found in QR codes, compact discs, deep space probes, ...) and investigate the limits of its error-correcting capacity. It can be shown that understanding this is related to understanding the "deep hole" problem, which is a question of determining when a received message has, in a sense, incurred the worst possible corruption. We partially resolve this in its traditional context, when the code is based on the finite field F_q or F_q*, as well as new contexts, when it is based on a subgroup of F_q* or the image of a Dickson polynomial. This is a new and important problem that could give insight on the true error-correcting potential of the Reed-Solomon code.
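
    As background, a Reed-Solomon codeword can be viewed as the evaluations of the message polynomial at distinct field elements. The toy encoder below illustrates this over a small prime field; the prime, message, and evaluation points are arbitrary illustrative choices, and no decoder is shown.

```python
# Toy Reed-Solomon encoder over the prime field GF(P): a message of k symbols
# is treated as the coefficients of a polynomial, which is evaluated at n
# distinct field points to produce the codeword (n - k redundant symbols).
P = 929  # a small prime, so GF(P) is just the integers mod P

def rs_encode(message, eval_points):
    """Evaluate the message polynomial at each point of eval_points (mod P)."""
    codeword = []
    for x in eval_points:
        acc = 0
        for coeff in reversed(message):   # Horner's rule
            acc = (acc * x + coeff) % P
        codeword.append(acc)
    return codeword

# Example: a 4-symbol message encoded into a 7-symbol codeword;
# a full decoder could correct up to (7 - 4) // 2 = 1 symbol error.
codeword = rs_encode([3, 14, 15, 9], eval_points=list(range(1, 8)))
```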

  9. THE EFFECT OF NON-ROUTINE GEOMETRY PROBLEM ON ELEMENTARY STUDENTS BELIEF IN MATHEMATICS: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Khoerul Umam

    2018-03-01

    Full Text Available Many learners hold traditional beliefs about perimeter and area: that a shape with a larger area must have a larger perimeter, while shapes with the same perimeter must have the same area. To address this issue, a non-routine geometry problem is given. This qualitative descriptive research was used to reach the goal and to explore the effect of a non-routine geometry problem on elementary students' belief in mathematics. The instrument was developed to accommodate students' intuitive beliefs and their beliefs about the concept of perimeter. The results provide evidence that students' intuitive beliefs about perimeter can be changed through a non-routine geometry problem that requires understanding and some mathematical analysis. Fortunately, the problem has helped the elementary students revise and correct their beliefs, thoughts, and understandings relating to the circumference of a shape.

  10. 77 FR 16802 - Submission for OMB Review; Comment Request, Correction

    Science.gov (United States)

    2012-03-22

    ... information technology should be addressed to: Desk Officer for Agriculture, Office of Information and... DEPARTMENT OF AGRICULTURE Submission for OMB Review; Comment Request, Correction March 19, 2012. The Department of Agriculture has submitted the following information collection requirement(s) to OMB...

  11. Correction of Motion Artifacts for Real-Time Structured Light

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Paulsen, Rasmus Reinhold

    2015-01-01

    While the problem of motion is often mentioned in conjunction with structured light imaging, few solutions have thus far been proposed. A method is demonstrated to correct for object or camera motion during structured light 3D scene acquisition. The method is based on the combination of a suitabl...

  12. 30 CFR 49.6 - Equipment and maintenance requirements.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Equipment and maintenance requirements. 49.6... TRAINING MINE RESCUE TEAMS § 49.6 Equipment and maintenance requirements. (a) Each mine rescue station... indicates that a corrective action is necessary, the corrective action shall be made and the person shall...

  13. Mathematical Problems in Creating Large Astronomical Catalogs

    Directory of Open Access Journals (Sweden)

    Prokhorov M. E.

    2016-12-01

    Full Text Available The next stage after performing observations and their primary reduction is to transform the set of observations into a catalog. To this end, objects that are irrelevant to the catalog should be excluded from the observations and gross errors should be discarded. To transform such a prepared data set into a high-precision catalog, we need to identify and correct systematic errors. Therefore, each object of the survey should be observed several, preferably many, times. The problem formally reduces to solving an overdetermined set of equations. However, in the case of catalogs this system of equations has a very specific form: it is extremely sparse, and its sparseness increases rapidly with the number of objects in the catalog. Such equation systems require special methods for storing data on disks and in RAM, and for the choice of the techniques for their solving. Another specific feature of such systems is their high "stiffness", which also increases with the volume of the catalog. Special stable mathematical methods should be used in order not to lose precision when solving such systems of equations. We illustrate the problem by the example of photometric star catalogs, although similar problems arise in the case of positional, radial-velocity, and parallax catalogs.
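
    As an illustration of the kind of sparse overdetermined system that arises, the sketch below sets up a toy photometric reduction in which each observation constrains one star magnitude and one per-frame zero point, and solves it with a sparse least-squares routine. The data layout and variable names are assumptions for the example, and the global zero-point degeneracy is left unconstrained.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def solve_photometric_catalog(observations, n_stars, n_frames):
    """observations: list of (star_id, frame_id, observed_magnitude).

    Each observation gives one equation  m_obs = m_star + zp_frame, so the
    design matrix has only two nonzero entries per row and is extremely sparse.
    """
    n_obs = len(observations)
    A = lil_matrix((n_obs, n_stars + n_frames))
    b = np.zeros(n_obs)
    for row, (star, frame, mag) in enumerate(observations):
        A[row, star] = 1.0                 # catalog magnitude of the star
        A[row, n_stars + frame] = 1.0      # zero point of the frame
        b[row] = mag
    solution = lsqr(A.tocsr(), b)[0]       # sparse iterative least squares
    return solution[:n_stars], solution[n_stars:]   # magnitudes, zero points
```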

  14. Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation

    Science.gov (United States)

    1989-08-01

    Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation, by David C. Wilkins and Yong... (DTIC report front matter garbled in extraction and omitted). Probabilistic rules are shown to be sociopathic, and so this problem is very widespread. Sociopathicity has important consequences for rule induction.

  15. Correcting the Chromatic Aberration in Barrel Distortion of Endoscopic Images

    Directory of Open Access Journals (Sweden)

    Y. M. Harry Ng

    2003-04-01

    Full Text Available Modern endoscopes offer physicians a wide-angle field of view (FOV) for minimally invasive therapies. However, the high level of barrel distortion may prevent accurate perception of the image. Fortunately, this kind of distortion may be corrected by digital image processing. In this paper we investigate the chromatic aberrations in the barrel distortion of endoscopic images. In the past, chromatic aberration in endoscopes has been corrected by achromatic lenses or active lens control. In contrast, we take a computational approach by modifying the concept of image warping and the existing barrel distortion correction algorithm to tackle the chromatic aberration problem. In addition, an error function for determining the level of centroid coincidence is proposed. Simulation and experimental results confirm the effectiveness of our method.
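
    The computational approach can be illustrated with a simple radial (barrel) distortion model applied per color channel, so that slightly different coefficients compensate the color-dependent distortion. The sketch below is only illustrative: the distortion model, coefficients, and nearest-neighbor resampling are our assumptions, not the authors' algorithm.

```python
import numpy as np

def undistort_channel(channel, k, center):
    """Correct barrel distortion in one color channel with a radial model.

    channel : 2-D image array for a single color channel
    k       : radial distortion coefficient for this channel (per-channel
              values approximate correction of lateral chromatic aberration)
    center  : (cx, cy) distortion center in pixels
    """
    h, w = channel.shape
    cx, cy = center
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    x, y = xx - cx, yy - cy
    r2 = x * x + y * y
    # Inverse mapping: sample the distorted image at radially displaced positions
    xs = np.clip(np.round(cx + x * (1 + k * r2)).astype(int), 0, w - 1)
    ys = np.clip(np.round(cy + y * (1 + k * r2)).astype(int), 0, h - 1)
    return channel[ys, xs]

# Per-channel coefficients (illustrative values) compensate the color-dependent distortion:
# corrected = np.dstack([undistort_channel(img[..., c], k, (320, 240))
#                        for c, k in enumerate([1.1e-6, 1.0e-6, 0.9e-6])])
```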

  16. Corrective Action Investigation Plan for Corrective Action Unit 232: Area 25 Sewage Lagoons Nevada Test Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    DOE/NV Operations Office

    1999-05-01

    This Corrective Action Investigation Plan (CAIP) has been developed in accordance with the Federal Facility Agreement and Consent Order (FFACO) (1996) that was agreed to by the US Department of Energy, Nevada Operations Office (DOE/NV); the State of Nevada Division of Environmental Protection (NDEP); and the US Department of Defense. The CAIP is a document that provides or references all of the specific information for investigation activities associated with Corrective Action Units (CAUs) or Corrective Action Sites (CASs). According to the FFACO, CASs are sites potentially requiring corrective action(s) and may include solid waste management units or individual disposal or release sites. A CAU consists of one or more CASs grouped together based on geography, technical similarity, or agency responsibility for the purpose of determining corrective actions. This CAIP contains the environmental sample collection objectives and criteria for conducting site investigation activities at CAU 232, Area 25 Sewage Lagoons. Corrective Action Unit 232 consists of CAS 25-03-01, Sewage Lagoon, located in Area 25 of the Nevada Test Site (NTS). The NTS is approximately 65 miles (mi) northwest of Las Vegas, Nevada (Figure 1-1) (DOE/NV, 1996a). The Area 25 Sewage Lagoons (Figure 1-2) (IT, 1999b) are located approximately 0.3 mi south of the Test Cell 'C' (TCC) Facility and were used for the discharge of sanitary effluent from the TCC facility. For purposes of this discussion, this site will be referred to as either CAU 232 or the sewage lagoons.

  17. A multilevel correction adaptive finite element method for Kohn-Sham equation

    Science.gov (United States)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method is proposed for solving the Kohn-Sham equation with the multilevel correction technique. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, in which the finite element space is successively improved by solving the derived boundary value problems on a series of adaptively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is avoided effectively, and solving the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, a significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  18. Error Correction for Non-Abelian Topological Quantum Computation

    Directory of Open Access Journals (Sweden)

    James R. Wootton

    2014-03-01

    Full Text Available The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred in order to negate their effects has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction, in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_3). This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error correction methods. It is found that error correction is possible under a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.

  19. Parent, Teacher, and Student Perspectives on How Corrective Lenses Improve Child Wellbeing and School Function.

    Science.gov (United States)

    Dudovitz, Rebecca N; Izadpanah, Nilufar; Chung, Paul J; Slusser, Wendelin

    2016-05-01

    Up to 20 % of school-age children have a vision problem identifiable by screening, over 80 % of which can be corrected with glasses. While vision problems are associated with poor school performance, few studies describe whether and how corrective lenses affect academic achievement and health. Further, there are virtually no studies exploring how children with correctable visual deficits, their parents, and teachers perceive the connection between vision care and school function. We conducted a qualitative evaluation of Vision to Learn (VTL), a school-based program providing free corrective lenses to low-income students in Los Angeles. Nine focus groups with students, parents, and teachers from three schools served by VTL explored the relationships between poor vision, receipt of corrective lenses, and school performance and health. Twenty parents, 25 teachers, and 21 students from three elementary schools participated. Participants described how uncorrected visual deficits reduced students' focus, perseverance, and class participation, affecting academic functioning and psychosocial stress; how receiving corrective lenses improved classroom attention, task persistence, and willingness to practice academic skills; and how serving students in school rather than in clinics increased both access to and use of corrective lenses. Implications for practice: corrective lenses may positively impact families, teachers, and students coping with visual deficits by improving school function and psychosocial wellbeing. Practices that increase ownership and use of glasses, such as serving students in school, may significantly improve both child health and academic performance.

  20. Corrected Fourier series and its application to function approximation

    Directory of Open Access Journals (Sweden)

    Qing-Hua Zhang

    2005-01-01

    Full Text Available Any quasismooth function f(x) in a finite interval [0, x0], which has only a finite number of finite discontinuities and has only a finite number of extremes, can be approximated by a uniformly convergent Fourier series and a correction function. The correction function consists of algebraic polynomials and Heaviside step functions and is required by the aperiodicity at the endpoints (i.e., f(0) ≠ f(x0)) and the finite discontinuities in between. The uniformly convergent Fourier series and the correction function are collectively referred to as the corrected Fourier series. We prove that in order for the mth derivative of the Fourier series to be uniformly convergent, the order of the polynomial need not exceed (m+1). In other words, including the no-more-than-(m+1)-order polynomial has eliminated the Gibbs phenomenon of the Fourier series until its mth derivative. The corrected Fourier series is then applied to function approximation; the procedures to determine the coefficients of the corrected Fourier series are illustrated in detail using examples.
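
    Schematically, the corrected Fourier series has the following form; the notation below is ours and is intended only to illustrate the structure described in the abstract.

```latex
% Schematic form of the corrected Fourier series (illustrative notation):
\[
  f(x) \;\approx\;
  \underbrace{\sum_{n=0}^{N}\Bigl(a_n\cos\frac{2\pi n x}{x_0}
        + b_n\sin\frac{2\pi n x}{x_0}\Bigr)}_{\text{uniformly convergent Fourier series}}
  \;+\;
  \underbrace{\sum_{j=0}^{m+1} c_j\,x^{j}
        \;+\; \sum_{k} d_k\,H(x - x_k)}_{\text{correction function}}
\]
% The polynomial of order at most (m+1) and the Heaviside steps H(x - x_k) at the
% interior discontinuities x_k absorb the aperiodicity and the jumps, so the Fourier
% part can be differentiated term by term up to order m without Gibbs oscillations.
```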

  1. 12 CFR 621.14 - Certification of correctness.

    Science.gov (United States)

    2010-01-01

    ... REQUIREMENTS Report of Condition and Performance § 621.14 Certification of correctness. Each report of financial condition and performance filed with the Farm Credit Administration shall be certified as having... accurate representation of the financial condition and performance of the institution to which it applies...

  2. Self-adjoint extensions and spectral analysis in the Calogero problem

    International Nuclear Information System (INIS)

    Gitman, D M; Tyutin, I V; Voronov, B L

    2010-01-01

    In this paper, we present a mathematically rigorous quantum-mechanical treatment of a one-dimensional motion of a particle in the Calogero potential αx⁻². Although the problem is quite old and well studied, we believe that our consideration, based on a uniform approach to constructing a correct quantum-mechanical description for systems with singular potentials and/or boundaries, proposed in our previous works, adds some new points to its solution. To demonstrate that a consideration of the Calogero problem requires mathematical accuracy, we discuss some 'paradoxes' inherent in the 'naive' quantum-mechanical treatment. Using a self-adjoint extension method, we construct and study all possible self-adjoint operators (self-adjoint Hamiltonians) associated with a formal differential expression for the Calogero Hamiltonian. In particular, we discuss a spontaneous scale-symmetry breaking associated with self-adjoint extensions. A complete spectral analysis of all self-adjoint Hamiltonians is presented.

  3. Requirements for fault-tolerant factoring on an atom-optics quantum computer.

    Science.gov (United States)

    Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae

    2013-01-01

    Quantum information processing and its associated technologies have reached a pivotal stage in their development, with many experiments having established the basic building blocks. Moving forward, the challenge is to scale up to larger machines capable of performing computational tasks not possible today. This raises questions that need to be urgently addressed, such as what resources these machines will consume and how large will they be. Here we estimate the resources required to execute Shor's factoring algorithm on an atom-optics quantum computer architecture. We determine the runtime and size of the computer as a function of the problem size and physical error rate. Our results suggest that once the physical error rate is low enough to allow quantum error correction, optimization to reduce resources and increase performance will come mostly from integrating algorithms and circuits within the error correction environment, rather than from improving the physical hardware.

  4. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    Science.gov (United States)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists either of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotical error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
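
    A minimal sketch of a prediction-correction tracking loop in this spirit is given below. It uses a finite-difference estimate of the gradient's time variation (as in the approximate variants) rather than the exact iso-residual dynamics, and all function names and step sizes are illustrative.

```python
import numpy as np

def track_minimizer(grad, hess, x0, t_grid, h, correction_steps=1, alpha=0.1):
    """Prediction-correction tracking of the minimizer of f(x; t), sampled every h seconds.

    grad(x, t), hess(x, t) : gradient and Hessian of the time-varying objective.
    """
    x = np.asarray(x0, dtype=float)
    trajectory = []
    for t in t_grid:
        # Prediction: keep the optimality residual constant to first order,
        #   dx/dt = -H(x, t)^{-1} d(grad)/dt, with d(grad)/dt ~ finite difference
        g_dot = (grad(x, t + h) - grad(x, t)) / h
        x = x - h * np.linalg.solve(hess(x, t), g_dot)
        # Correction: a few gradient descent steps on the newly sampled objective
        for _ in range(correction_steps):
            x = x - alpha * grad(x, t + h)
        trajectory.append(x.copy())
    return trajectory
```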

  5. Corrective Action Decision Document/Closure Report for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick

    2013-09-01

    This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 105: Area 2 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada. CAU 105 comprises the following five corrective action sites (CASs): 02-23-04, Atmospheric Test Site - Whitney (closure in place); 02-23-05, Atmospheric Test Site T-2A (closure in place); 02-23-06, Atmospheric Test Site T-2B (clean closure); 02-23-08, Atmospheric Test Site T-2 (closure in place); and 02-23-09, Atmospheric Test Site - Turk (closure in place). The purpose of this Corrective Action Decision Document/Closure Report is to provide justification and documentation supporting the recommendation that no further corrective action is needed for CAU 105 based on the implementation of the corrective actions. Corrective action investigation (CAI) activities were performed from October 22, 2012, through May 23, 2013, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 105: Area 2 Yucca Flat Atmospheric Test Sites; and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices.

  6. Model-Based Illumination Correction for Face Images in Uncontrolled Scenarios

    NARCIS (Netherlands)

    Boom, B.J.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2009-01-01

    Face Recognition under uncontrolled illumination conditions is partly an unsolved problem. Several illumination correction methods have been proposed, but these are usually tested on illumination conditions created in a laboratory. Our focus is more on uncontrolled conditions. We use the Phong model

  7. Corrective Action Decision Document/Closure Report for Corrective Action Unit 570: Area 9 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada, Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick [Navarro-Intera, LLC (N-I), Las Vegas, NV (United States)

    2013-11-01

    This Corrective Action Decision Document/Closure Report presents information supporting the closure of Corrective Action Unit (CAU) 570: Area 9 Yucca Flat Atmospheric Test Sites, Nevada National Security Site, Nevada. This complies with the requirements of the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the State of Nevada; U.S. Department of Energy (DOE), Environmental Management; U.S. Department of Defense; and DOE, Legacy Management. The purpose of the CADD/CR is to provide justification and documentation supporting the recommendation that no further corrective action is needed.

  8. A Hold-out method to correct PCA variance inflation

    DEFF Research Database (Denmark)

    Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Hansen, Lars Kai

    2012-01-01

    In this paper we analyze the problem of variance inflation experienced by the PCA algorithm when working in an ill-posed scenario where the dimensionality of the training set is larger than its sample size. In an earlier article a correction method based on a Leave-One-Out (LOO) procedure...

  9. Beta/gamma test problems for ITS

    International Nuclear Information System (INIS)

    Mei, G.T.

    1993-01-01

    The Integrated Tiger Series of Coupled Electron/Photon Monte Carlo Transport Codes (ITS 3.0, PC Version) was used at Oak Ridge National Laboratory (ORNL) to compare with and extend the experimental findings of the beta/gamma response of selected health physics instruments. In order to assure that ITS gives correct results, several beta/gamma problems have been tested. ITS was used to simulate these problems numerically, and results for each were compared to the problem's experimental or analytical results. ITS successfully predicted the experimental or analytical results of all tested problems within the statistical uncertainty inherent in the Monte Carlo method

  10. Controlling qubit drift by recycling error correction syndromes

    Science.gov (United States)

    Blume-Kohout, Robin

    2015-03-01

    Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE

  11. System requirements and design description for the environmental requirements management interface (ERMI)

    International Nuclear Information System (INIS)

    Biebesheimer, E.

    1997-01-01

    This document describes system requirements and the design description for the Environmental Requirements Management Interface (ERMI). The ERMI database assists Tank Farm personnel with scheduling, planning, and documenting procedure compliance, performance verification, and selected corrective action tracking activities for Tank Farm S/RID requirements. The ERMI database was developed by Science Applications International Corporation (SAIC). This document was prepared by SAIC and edited by LMHC

  12. Corrective Action Decision Document for Corrective Action Unit 145: Wells and Storage Holes, Nevada Test Site, Nevada, Rev. No.: 0, with ROTC No. 1 and Addendum

    Energy Technology Data Exchange (ETDEWEB)

    David Strand

    2006-04-01

    This Corrective Action Decision Document has been prepared for Corrective Action Unit (CAU) 145, Wells and Storage Holes in Area 3 of the Nevada Test Site, Nevada, in accordance with the ''Federal Facility Agreement and Consent Order'' (1996). Corrective Action Unit 145 is comprised of the following corrective action sites (CASs): (1) 03-20-01, Core Storage Holes; (2) 03-20-02, Decon Pad and Sump; (3) 03-20-04, Injection Wells; (4) 03-20-08, Injection Well; (5) 03-25-01, Oil Spills; and (6) 03-99-13, Drain and Injection Well. The purpose of this Corrective Action Decision Document is to identify and provide the rationale for the recommendation of a corrective action alternative for the six CASs within CAU 145. Corrective action investigation activities were performed from August 1, 2005, through November 8, 2005, as set forth in the CAU 145 Corrective Action Investigation Plan and Record of Technical Change No. 1. Analytes detected during the Corrective Action Investigation (CAI) were evaluated against appropriate final action levels to identify the contaminants of concern for each CAS. The results of the CAI identified contaminants of concern at one of the six CASs in CAU 145 and required the evaluation of corrective action alternatives. Assessment of the data generated from investigation activities conducted at CAU 145 revealed the following: CASs 03-20-01, 03-20-02, 03-20-04, 03-20-08, and 03-99-13 do not contain contamination; and CAS 03-25-01 has pentachlorophenol and arsenic contamination in the subsurface soils. Based on the evaluation of analytical data from the CAI, review of future and current operations at the six CASs, and the detailed and comparative analysis of the potential corrective action alternatives, the following corrective actions are recommended for CAU 145. No further action is the preferred corrective action for CASs 03-20-01, 03-20-02, 03-20-04, 03-20-08, and 03-99-13. Close in place is the preferred corrective action

  13. Color correction for chromatic distortion in a multi-wavelength digital holographic system

    International Nuclear Information System (INIS)

    Lin, Li-Chien; Huang, Yi-Lun; Tu, Han-Yen; Lai, Xin-Ji; Cheng, Chau-Jern

    2011-01-01

    A multi-wavelength digital holographic (MWDH) system has been developed to record and reconstruct color images. In comparison to working with digital cameras, however, high-quality color reproduction is difficult to achieve, because of the imperfections from the light sources, optical components, optical recording devices and recording processes. Thus, we face the problem of correcting the colors altered during the digital holographic process. We therefore propose a color correction scheme to correct the chromatic distortion caused by the MWDH system. The scheme consists of two steps: (1) creating a color correction profile and (2) applying it to the correction of the distorted colors. To create the color correction profile, we generate two algorithms: the sequential algorithm and the integrated algorithm. The ColorChecker is used to generate the distorted colors and their desired corrected colors. The relationship between these two color patches is fixed into a specific mathematical model, the parameters of which are estimated, creating the profile. Next, the profile is used to correct the color distortion of images, capturing and preserving the original vibrancy of the reproduced colors for different reconstructed images
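
    The general idea of building a correction profile from ColorChecker measurements can be illustrated by fitting an affine map from distorted to reference colors by least squares. The sketch below is a generic illustration and does not reproduce the paper's sequential or integrated algorithms; all names and the affine model are our assumptions.

```python
import numpy as np

def build_color_profile(distorted_patches, reference_patches):
    """Fit an affine color-correction profile from ColorChecker measurements.

    distorted_patches, reference_patches : (N, 3) arrays of RGB values for the
    same N patches as recorded by the system and as they should appear.
    Returns a 3x4 matrix M such that corrected = M @ [r, g, b, 1].
    """
    n = distorted_patches.shape[0]
    X = np.hstack([distorted_patches, np.ones((n, 1))])     # (N, 4) with bias term
    M, *_ = np.linalg.lstsq(X, reference_patches, rcond=None)
    return M.T                                               # (3, 4)

def apply_color_profile(image, M):
    """Apply the fitted profile to every pixel of an (H, W, 3) image."""
    h, w, _ = image.shape
    flat = np.hstack([image.reshape(-1, 3), np.ones((h * w, 1))])
    return (flat @ M.T).reshape(h, w, 3)
```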

  14. An identification problem: correction of an analytical model using experimental data

    Directory of Open Access Journals (Sweden)

    Gabriela Covatariu

    2009-01-01

    Full Text Available The procedure for correcting an analytical model adopted for a building structure is preceded by a comparison between the set of experimental data and the set of analytical data, as a preliminary check that the two agree reasonably well. For the dynamic identification of parameters, various methods for correcting the stiffness and damping matrices have been developed, based on the least-squares method in the frequency domain. The proposed algorithm results in the correction of the stiffness matrix of a computational model using as input only the data recorded during the experimental tests.

  15. Polytomy refinement for the correction of dubious duplications in gene trees.

    Science.gov (United States)

    Lafond, Manuel; Chauve, Cedric; Dondi, Riccardo; El-Mabrouk, Nadia

    2014-09-01

    Large-scale methods for inferring gene trees are error-prone. Correcting gene trees for weakly supported features often results in non-binary trees, i.e. trees with polytomies, thus raising the natural question of refining such polytomies into binary trees. A feature pointing toward potential errors in gene trees is duplications that are not supported by the presence of multiple gene copies. We introduce the problem of refining polytomies in a gene tree while minimizing the number of created non-apparent duplications in the resulting tree. We show that this problem can be described as a graph-theoretical optimization problem. We provide a bounded heuristic with guaranteed optimality for well-characterized instances. We apply our algorithm to a set of ray-finned fish gene trees from the Ensembl database to illustrate its ability to correct dubious duplications. The C++ source code for the algorithms and simulations described in the article is available at http://www-ens.iro.umontreal.ca/~lafonman/software.php. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.

  16. The critical spot eraser—a method to interactively control the correction of local hot and cold spots in IMRT planning

    International Nuclear Information System (INIS)

    Süss, Philipp; Bortz, Michael; Küfer, Karl-Heinz; Thieke, Christian

    2013-01-01

    Common problems in inverse radiotherapy planning are localized dose insufficiencies like hot spots in organs at risk or cold spots inside targets. These are hard to correct since the optimization is based on global evaluations like maximum/minimum doses, equivalent uniform doses or dose–volume constraints for whole structures. In this work, we present a new approach to locally correct the dose of any given treatment plan. Once a treatment plan has been found that is acceptable in general but requires local corrections, these areas are marked by the planner. Then the system generates new plans that fulfil the local dose goals. Consequently, it is possible to interactively explore all plans between the locally corrected plans and the original treatment plan, allowing one to exactly adjust the degree of local correction and how the plan changes overall. Both the amount (in Gy) and the size of the local dose change can be navigated. The method is introduced formally as a new mathematical optimization setting, and is evaluated using a clinical example of a meningioma at the base of the skull. It was possible to eliminate a hot spot outside the target volume while controlling the dose changes to all other parts of the treatment plan. The proposed method has the potential to become the final standard step of inverse treatment planning. For more information on this article, see medicalphysicsweb.org (paper)

  17. Corrective Action Plan for Corrective Action Unit 261: Area 25 Test Cell A Leachfield System, Nevada Test Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    T. M. Fitzmaurice

    2000-08-01

    This Corrective Action Plan (CAP) has been prepared for the Corrective Action Unit (CAU) 261 Area 25 Test Cell A Leachfield System in accordance with the Federal Facility Agreement and Consent Order (Nevada Division of Environmental Protection [NDEP] et al., 1996). This CAP provides the methodology for implementing the approved corrective action alternative as listed in the Corrective Action Decision Document (U.S. Department of Energy, Nevada Operations Office, 1999). Investigation of CAU 261 was conducted from February through May of 1999. There were no Constituents of Concern (COCs) identified at Corrective Action Site (CAS) 25-05-07 Acid Waste Leach Pit (AWLP). COCs identified at CAS 25-05-01 included diesel-range organics and radionuclides. The following closure actions will be implemented under this plan: because COCs were not found at CAS 25-05-07 AWLP, no action is required; septage will be removed from the septic tank (CAS 25-05-01), and the distribution box and the septic tank will be filled with grout; and impacted soils identified near the initial outfall area will be removed. Upon completion of these closure activities and approval of the Closure Report by NDEP, administrative controls, use restrictions, and site postings will be used to prevent intrusive activities at the site.

  18. The specialization problem and the completeness of unfolding

    NARCIS (Netherlands)

    S-H. Nienhuys-Cheng (Shan-Hwei); R. de Wolf

    1996-01-01

    textabstractWe discuss the problem of specializing a definite program with respect to sets of positive and negative examples, following Bostrom and Idestam-Almquist. This problem is very relevant in the field of inductive learning. First we show that there exist sets of examples that have no correct

  19. Corrective Action Decision Document for Corrective Action Unit 240: Area 25 Vehicle Washdown, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    US Department of Energy Nevada Operations Office

    1999-01-01

    This Corrective Action Decision Document identifies and rationalizes the U.S. Department of Energy, Nevada Operations Offices's selection of a recommended corrective action alternative (CAA) appropriate to facilitate the closure of Corrective Action Unit (CAU) 240: Area 25 Vehicle Washdown, Nevada Test Site, Nevada. This corrective action investigation was conducted in accordance with the Corrective Action Investigation Plan for CAU 240 as developed under the Federal Facility Agreement and Consent Order. Located in Area 25 at the Nevada Test Site in Nevada, CAU 240 is comprised of three Corrective Action Sites (CASs): 25-07-01, Vehicle Washdown Area (Propellant Pad); 25-07-02, Vehicle Washdown Area (F and J Roads Pad); and 25-07-03, Vehicle Washdown Station (RADSAFE Pad). In March 1999, the corrective action investigation was performed to detect and evaluate analyte concentrations against preliminary action levels (PALs) to determine contaminants of concern (COCs). There were no COCs identified at CAS 25-07-01 or CAS 25-07-03; therefore, there was no need for corrective action at these two CASs. At CAS 25-07-02, diesel-range organics and radionuclide concentrations in soil samples from F and J Roads Pad exceeded PALs. Based on this result, potential CAAs were identified and evaluated to ensure worker, public, and environmental protection against potential exposure to COCs in accordance with Nevada Administrative Code 445A. Following a review of potential exposure pathways, existing data, and future and current operations in Area 25, two CAAs were identified for CAU 240 (CAS 25-07-02): Alternative 1 - No Further Action and Alternative 2 - Clean Closure by Excavation and Disposal. Alternative 2 was identified as the preferred alternative. This alternative was judged to meet all requirements for the technical components evaluated, compliance with all applicable state and federal regulations for closure of the site, as well as minimizing potential future exposure

  20. Software requirements, design, and verification and validation for the FEHM application - a finite-element heat- and mass-transfer code

    International Nuclear Information System (INIS)

    Dash, Z.V.; Robinson, B.A.; Zyvoloski, G.A.

    1997-07-01

    The requirements, design, and verification and validation of the software used in the FEHM application, a finite-element heat- and mass-transfer computer code that can simulate nonisothermal multiphase multicomponent flow in porous media, are described. The test of the DOE Code Comparison Project, Problem Five, Case A, which verifies that FEHM has correctly implemented heat and mass transfer and phase partitioning, is also covered

  1. Facial volumetric correction with injectable poly-L-lactic acid.

    Science.gov (United States)

    Vleggaar, Danny

    2005-11-01

    Polymers of lactic acid have been widely used for many years in different types of medical devices, such as resorbable sutures, intrabone implants, and soft tissue implants. Injectable poly-L-lactic acid (PLLA; Sculptra), a synthetic, biodegradable polymer, has gained widespread popularity in Europe for the treatment of facial changes associated with aging. To provide background information on injectable PLLA and to describe clinical experience with its use in Europe for facial volume enhancement. Technique varies with site of injection. Generally, the product is implanted subcutaneously or intradermally in a series of treatments. No allergy testing is required. Based on experience in more than 2,500 patients, injectable PLLA has been used successfully for the correction of nasolabial folds, mid- and lower facial volume loss, jawline laxity, and other signs of facial aging. Correction lasts for 18 to 24 months in most patients. Injectable PLLA treatment provides an excellent and prolonged correction of a variety of facial wrinkles, depressions, and laxity with a minimally invasive procedure that does not require allergy testing or a recovery period.

  2. Lecturers’ perception on students’ critical thinking skills development and problems faced by students in developing their critical thinking skills

    OpenAIRE

    Astuti Muh. Amin; Romi Adiansyah

    2018-01-01

    Critical thinking emerges when learners attempt to use their background knowledge to construct meaning through interpreting, analyzing, and manipulating information in responding to a problem or a question that requires more than a single correct answer. Two factors that affect the improvement of the students’ critical thinking skills are lecturers’ activities and students’ activities. This study was a descriptive quantitative study which aimed to investigate (1) how lecturers perceive the de...

  3. CORRECTION OF FORECASTS OF INTERRELATED CURRENCY PAIRS IN TERMS OF SYSTEMS OF BALANCE RATIOS

    Directory of Open Access Journals (Sweden)

    Gertsekovich D. A.

    2015-03-01

    Full Text Available In this paper the problem of exchange rate forecasting is considered (a) traditionally, as a task of forecasting on the basis of «stand-alone» autoregression equations for each currency pair, and (b) as the result of correcting the forecasts of the system of autoregression equations on the basis of the boundary conditions of systems of balance ratios. As a criterion of forecast quality for the constructed empirical models, we take the sum of squared forecast errors estimated for the deductive currency pairs. Practical testing confirmed that the deductive models meet common requirements, provide acceptable precision, are robust to the initial data, and are free from a series of deficiencies of a single index. However, extreme forecast errors indicate that practical application of the proposed approach needs further improvement.
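
    To make the idea of correcting forecasts with balance ratios concrete, the sketch below adjusts three independent log-rate forecasts so that a triangular cross-rate identity holds exactly. The currency pairs, numbers, and equal-weight projection are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

# Illustrative sketch: three log-rate forecasts must satisfy the balance
# (triangular) identity  log(EURJPY) = log(EURUSD) + log(USDJPY).
# Independent AR forecasts are projected onto that constraint by least squares.

def correct_with_balance(log_eurusd, log_usdjpy, log_eurjpy):
    """Adjust three log-rate forecasts so the cross-rate identity holds exactly."""
    # Residual of the balance ratio for the raw forecasts
    r = log_eurjpy - (log_eurusd + log_usdjpy)
    # Minimum-norm correction: least-squares projection onto the hyperplane
    # x3 - x1 - x2 = 0, which spreads the residual equally over the three rates
    return (log_eurusd + r / 3.0,
            log_usdjpy + r / 3.0,
            log_eurjpy - r / 3.0)

corrected = correct_with_balance(np.log(1.08), np.log(155.0), np.log(168.5))
```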

  4. Self-interacting inelastic dark matter: a viable solution to the small scale structure problems

    Energy Technology Data Exchange (ETDEWEB)

    Blennow, Mattias; Clementz, Stefan; Herrero-Garcia, Juan, E-mail: emb@kth.se, E-mail: scl@kth.se, E-mail: juan.herrero-garcia@adelaide.edu.au [Department of Physics, School of Engineering Sciences, KTH Royal Institute of Technology, AlbaNova University Center, 106 91 Stockholm (Sweden)

    2017-03-01

    Self-interacting dark matter has been proposed as a solution to the small-scale structure problems, such as the observed flat cores in dwarf and low surface brightness galaxies. If scattering takes place through light mediators, the scattering cross section relevant to solve these problems may fall into the non-perturbative regime leading to a non-trivial velocity dependence, which allows compatibility with limits stemming from cluster-size objects. However, these models are strongly constrained by different observations, in particular from the requirements that the decay of the light mediator is sufficiently rapid (before Big Bang Nucleosynthesis) and from direct detection. A natural solution to reconcile both requirements are inelastic endothermic interactions, such that scatterings in direct detection experiments are suppressed or even kinematically forbidden if the mass splitting between the two-states is sufficiently large. Using an exact solution when numerically solving the Schrödinger equation, we study such scenarios and find regions in the parameter space of dark matter and mediator masses, and the mass splitting of the states, where the small scale structure problems can be solved, the dark matter has the correct relic abundance and direct detection limits can be evaded.

  5. Coulomb corrections for interferometry analysis of expanding hadron systems

    Energy Technology Data Exchange (ETDEWEB)

    Sinyukov, Yu.M. [Centre National de la Recherche Scientifique, 44 - Nantes (France). Lab. de Physique Subatomique et des Technologies Associees]|[Institute for Theoretical Physics of National Acad. Sci., Kiev (Ukraine); Lednicky, R. [Centre National de la Recherche Scientifique, 44 - Nantes (France). Lab. de Physique Subatomique et des Technologies Associees]|[Institute of Physics, Prague (Czech Republic); Akkelin, S.V. [AN Ukrainskoj SSR, Kiev (Ukraine). Inst. Teoreticheskoj Fiziki; Pluta, J. [Centre National de la Recherche Scientifique, 44 - Nantes (France). Lab. de Physique Subatomique et des Technologies Associees]|[Warsaw Univ. (Poland). Inst. of Physics; Erazmus, B. [Centre National de la Recherche Scientifique, 44 - Nantes (France). Lab. de Physique Subatomique et des Technologies Associees

    1998-10-01

    The problem of the Coulomb corrections to the two-boson correlation functions for the systems formed in ultra-relativistic heavy ion collisions is considered for large effective volumes predicted in the realistic evolution scenarios taking into account the collective flows. A simple modification of the standard zero-distance correction (so called Gamow or Coulomb factor) has been proposed for such a kind of systems. For {pi}{sup +}{pi}{sup +} and K{sup +}K{sup +} correlation functions this approximate analytical approach is compared with the exact numerical results and a good agreement is found for typical conditions at SPS, RHIC and even LHC energies. (author) 21 refs.

  6. Correction and development of psychomotor function of deaf children of midchildhood by facilities of mobile games.

    Directory of Open Access Journals (Sweden)

    Іvahnenko A.A.

    2011-03-01

    Full Text Available The problem of correcting and developing the psychomotor sphere of deaf children of middle childhood by means of physical education, in particular mobile games, is considered. Publications of researchers concerning the development of the psychomotor function of deaf children in the theory and practice of correctional work are analyzed. The value of mobile games as an effective means of developing the psychomotor sphere of deaf children of primary school age is theoretically substantiated. The necessity of applying specially adapted mobile games in correctional-pedagogical work with deaf children of middle childhood is established. Pedagogical observations of the features of the play activity of deaf children in grades 1-4 are presented.

  7. 14 CFR 437.73 - Anomaly recording, reporting and implementation of corrective actions.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Anomaly recording, reporting and implementation of corrective actions. 437.73 Section 437.73 Aeronautics and Space COMMERCIAL SPACE TRANSPORTATION... Requirements § 437.73 Anomaly recording, reporting and implementation of corrective actions. (a) A permittee...

  8. Hemiequilibrium problems

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Noor

    2004-01-01

    Full Text Available We consider a new class of equilibrium problems, known as hemiequilibrium problems. Using the auxiliary principle technique, we suggest and analyze a class of iterative algorithms for solving hemiequilibrium problems, the convergence of which requires either pseudomonotonicity or partially relaxed strong monotonicity. As a special case, we obtain a new method for hemivariational inequalities. Since hemiequilibrium problems include hemivariational inequalities and equilibrium problems as special cases, the results proved in this paper still hold for these problems.

  9. PRA-Code Upgrade to Handle a Generic Problem

    International Nuclear Information System (INIS)

    Wilson, J. R.

    1999-01-01

    During the probabilistic risk assessment (PRA) for the proposed Yucca Mountain nuclear waste repository, a problem came up that could not be handled by most PRA computer codes. This problem deals with dependencies between sequential events in time. Two similar scenarios that illustrate this problem are LOOP nonrecovery and sequential wearout failures with units of time. The purpose of this paper is twofold: To explain the problem generically, and to show how the PRA code at the INEEL, SAPHIRE, has been modified to solve this problem correctly
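    As a toy numerical illustration (hypothetical rates and times, written in Python; this is not the SAPHIRE implementation), the time-sequential dependency described above means that the probability of not recovering offsite power depends on when the preceding failure occurs, so the sequence probability is an integral over that failure time rather than a product of independent basic-event probabilities:

        import math

        # Toy loss-of-offsite-power (LOOP) sequence with hypothetical numbers.
        # After LOOP at t = 0 a diesel generator runs but may fail at a random time t;
        # in this sketch the sequence fails if offsite power is still unrecovered when
        # the batteries deplete, a fixed tau hours after the diesel failure.
        lam = 1.0e-3      # diesel failure rate per hour (hypothetical)
        mu = 0.5          # offsite-power recovery rate per hour (hypothetical)
        mission = 24.0    # mission time, hours
        tau = 4.0         # battery lifetime after diesel failure, hours (hypothetical)

        def nonrecovery(t):
            """Probability that offsite power has not been recovered by time t after the LOOP."""
            return math.exp(-mu * t)

        # Sequential treatment: integrate over the diesel failure time, because the time
        # available for recovery (t + tau from the LOOP) depends on when the diesel fails.
        n = 10000
        dt = mission / n
        p_sequence = sum(lam * math.exp(-lam * t) * nonrecovery(t + tau) * dt
                         for t in (i * dt for i in range(n)))

        # Naive treatment: a time-independent diesel failure probability multiplied by a
        # nonrecovery probability evaluated at a single fixed time.
        p_naive = (1.0 - math.exp(-lam * mission)) * nonrecovery(tau)

        print(f"with time dependence: {p_sequence:.3e}")
        print(f"naive product:        {p_naive:.3e}")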

  10. Attenuation correction for freely moving small animal brain PET studies based on a virtual scanner geometry

    International Nuclear Information System (INIS)

    Angelis, G I; Kyme, A Z; Ryder, W J; Fulton, R R; Meikle, S R

    2014-01-01

    Attenuation correction in positron emission tomography brain imaging of freely moving animals is a very challenging problem since the torso of the animal is often within the field of view and introduces a non negligible attenuating factor that can degrade the quantitative accuracy of the reconstructed images. In the context of unrestrained small animal imaging, estimation of the attenuation correction factors without the need for a transmission scan is highly desirable. An attractive approach that avoids the need for a transmission scan involves the generation of the hull of the animal’s head based on the reconstructed motion corrected emission images. However, this approach ignores the attenuation introduced by the animal’s torso. In this work, we propose a virtual scanner geometry which moves in synchrony with the animal’s head and discriminates between those events that traversed only the animal’s head (and therefore can be accurately compensated for attenuation) and those that might have also traversed the animal’s torso. For each recorded pose of the animal’s head a new virtual scanner geometry is defined and therefore a new system matrix must be calculated leading to a time-varying system matrix. This new approach was evaluated on phantom data acquired on the microPET Focus 220 scanner using a custom-made phantom and step-wise motion. Results showed that when the animal’s torso is within the FOV and not appropriately accounted for during attenuation correction it can lead to bias of up to 10% . Attenuation correction was more accurate when the virtual scanner was employed leading to improved quantitative estimates (bias < 2%), without the need to account for the attenuation introduced by the extraneous compartment. Although the proposed method requires increased computational resources, it can provide a reliable approach towards quantitatively accurate attenuation correction for freely moving animal studies. (paper)

  11. Precise numerical results for limit cycles in the quantum three-body problem

    International Nuclear Information System (INIS)

    Mohr, R.F.; Furnstahl, R.J.; Hammer, H.-W.; Perry, R.J.; Wilson, K.G.

    2006-01-01

    The study of the three-body problem with short-range attractive two-body forces has a rich history going back to the 1930s. Recent applications of effective field theory methods to atomic and nuclear physics have produced a much improved understanding of this problem, and we elucidate some of the issues using renormalization group ideas applied to precise nonperturbative calculations. These calculations provide 11-12 digits of precision for the binding energies in the infinite cutoff limit. The method starts with this limit as an approximation to an effective theory and allows cutoff dependence to be systematically computed as an expansion in powers of inverse cutoffs and logarithms of the cutoff. Renormalization of three-body bound states requires a short range three-body interaction, with a coupling that is governed by a precisely mapped limit cycle of the renormalization group. Additional three-body irrelevant interactions must be determined to control subleading dependence on the cutoff and this control is essential for an effective field theory since the continuum limit is not likely to match physical systems (e.g., few-nucleon bound and scattering states at low energy). Leading order calculations precise to 11-12 digits allow clear identification of subleading corrections, but these corrections have not been computed
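    For context (standard effective-field-theory results, not taken from the record), the limit cycle referred to here is the Efimov one: the three-body coupling H(\Lambda) is a periodic function of the logarithm of the cutoff,

        H(\Lambda\,e^{\pi/s_{0}}) = H(\Lambda),
        \qquad s_{0} \approx 1.00624,

    and in the unitary limit successive three-body binding energies obey B_{n+1}/B_{n} \to e^{-2\pi/s_{0}} \approx 1/515.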

  12. Impact of ageing on problem size and proactive interference in arithmetic facts solving.

    Science.gov (United States)

    Archambeau, Kim; De Visscher, Alice; Noël, Marie-Pascale; Gevers, Wim

    2018-02-01

    Arithmetic facts (AFs) are required when solving problems such as "3 × 4" and refer to calculations for which the correct answer is retrieved from memory. Currently, two important effects that modulate the performance in AFs have been highlighted: the problem size effect and the proactive interference effect. The aim of this study is to investigate possible age-related changes of the problem size effect and the proactive interference effect in AF solving. To this end, the performance of young and older adults was compared in a multiplication production task. Furthermore, an independent measure of proactive interference was assessed to further define the architecture underlying this effect in multiplication solving. The results indicate that both young and older adults were sensitive to the effects of interference and of the problem size. That is, both interference and problem size affected performance negatively: the time needed to solve a multiplication problem increases as the level of interference and the size of the problem increase. Regarding the effect of ageing, the problem size effect remains constant with age, indicating a preserved AF network in older adults. Interestingly, sensitivity to proactive interference in multiplication solving was less pronounced in older than in younger adults suggesting that part of the proactive interference has been overcome with age.

  13. [The crooked nose: correction of dorsal and caudal septal deviations].

    Science.gov (United States)

    Foda, H M T

    2010-09-01

    The deviated nose represents a complex cosmetic and functional problem. Septal surgery plays a central role in the successful management of the externally deviated nose. This study included 800 patients seeking rhinoplasty to correct external nasal deviations; 71% of these suffered from variable degrees of nasal obstruction. Septal surgery was necessary in 736 (92%) patients, not only to improve breathing, but also to achieve a straight, symmetric external nose. A graduated surgical approach was adopted to allow correction of the dorsal and caudal deviations of the nasal septum without weakening its structural support to the nasal dorsum or nasal tip. The approach depended on full mobilization of deviated cartilage, followed by straightening of the cartilage and its fixation in the corrected position by using bony splinting grafts through an external rhinoplasty approach.

  14. A new trajectory correction technique for linacs

    International Nuclear Information System (INIS)

    Raubenheimer, T.O.; Ruth, R.D.

    1990-06-01

    In this paper, we describe a new trajectory correction technique for high energy linear accelerators. Current correction techniques force the beam trajectory to follow misalignments of the Beam Position Monitors. Since the particle bunch has a finite energy spread and particles with different energies are deflected differently, this causes ''chromatic'' dilution of the transverse beam emittance. The algorithm, which we describe in this paper, reduces the chromatic error by minimizing the energy dependence of the trajectory. To test the method we compare the effectiveness of our algorithm with a standard correction technique in simulations on a design linac for a Next Linear Collider. The simulations indicate that chromatic dilution would be debilitating in a future linear collider because of the very small beam sizes required to achieve the necessary luminosity. Thus, we feel that this technique will prove essential for future linear colliders. 3 refs., 6 figs., 2 tabs

  15. Correction of congenital ptosis of the eyelid by frontal muscle transposition

    Directory of Open Access Journals (Sweden)

    Jevtović Dobrica

    2002-01-01

    Full Text Available Congenital ptosis (CP) represents a significant reconstructive problem. Numerous studies have not yet provided full and satisfactory results. In this study, we have presented our experience in the surgical treatment of 108 patients by the use of Son Ye Guang's modified method - frontal muscle transposition. A total of 108 patients with CP were surgically treated at the Clinic for Plastic Surgery and Burns of the Military Medical Academy in the period 1991-2000. Unilateral ptosis was operated in 85 patients, and bilateral in 23 patients. CP was more frequently found in males (58.34%) than in females (41.66%). The youngest patient was only 5.5 years old, and the oldest was 42; the average age was 21.3 years. All patients were operated on by the same surgeon, and were monitored monthly during the first six months and then twice a year for the next 3 years. Postoperative results were evaluated after 6 months: the action of raising the eyelids was compared to the full amplitude of movement of the eye on the healthy side. The closure of the eyelids and the symmetry of the palpebral fissure in a steady horizontal view was also assessed. The action of the opening as well as closure of the eyelids in full amplitude was obtained in all operated patients. Asymmetry of the palpebral fissure in a steady horizontal view up to 1 mm did not require additional correction. In 9 cases, asymmetry of the palpebral fissure greater than 1 mm was subsequently corrected. The advantages of this surgical method compared to the other, previously described techniques, were emphasized in the conclusion. The main advantage was the elimination of postoperative lagophthalmos, which represented the problem in all previously used methods.

  16. Corrected simulations for one-dimensional diffusion processes with naturally occurring boundaries.

    Science.gov (United States)

    Shafiey, Hassan; Gan, Xinjun; Waxman, David

    2017-11-01

    To simulate a diffusion process, a usual approach is to discretize the time in the associated stochastic differential equation. This is the approach used in the Euler method. In the present work we consider a one-dimensional diffusion process where the terms occurring, within the stochastic differential equation, prevent the process entering a region. The outcome is a naturally occurring boundary (which may be absorbing or reflecting). A complication occurs in a simulation of this situation. The term involving a random variable, within the discretized stochastic differential equation, may take a trajectory across the boundary into a "forbidden region." The naive way of dealing with this problem, which we refer to as the "standard" approach, is simply to reset the trajectory to the boundary, based on the argument that crossing the boundary actually signifies achieving the boundary. In this work we show, within the framework of the Euler method, that such resetting introduces a spurious force into the original diffusion process. This force may have a significant influence on trajectories that come close to a boundary. We propose a corrected numerical scheme, for simulating one-dimensional diffusion processes with naturally occurring boundaries. This involves correcting the standard approach, so that an exact property of the diffusion process is precisely respected. As a consequence, the proposed scheme does not introduce a spurious force into the dynamics. We present numerical test cases, based on exactly soluble one-dimensional problems with one or two boundaries, which suggest that, for a given value of the discrete time step, the proposed scheme leads to substantially more accurate results than the standard approach. Alternatively, the standard approach needs considerably more computation time to obtain a comparable level of accuracy to the proposed scheme, because the standard approach requires a significantly smaller time step.
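    A minimal sketch (Python) of the "standard" Euler treatment that the record critiques, in which a step that crosses the natural boundary is simply reset to it; the square-root diffusion below is an arbitrary illustrative choice, not one of the paper's test cases, and the proposed corrected scheme is not reproduced here:

        import math
        import random

        def simulate_standard(x0=0.05, a=1.0, theta=0.2, sigma=0.6, dt=1e-3, steps=5000, seed=1):
            """Euler-Maruyama with the naive reset-to-boundary rule at the natural boundary x = 0."""
            random.seed(seed)
            x = x0
            for _ in range(steps):
                dw = random.gauss(0.0, math.sqrt(dt))
                x += a * (theta - x) * dt + sigma * math.sqrt(max(x, 0.0)) * dw
                if x < 0.0:      # the discretized step crossed into the forbidden region
                    x = 0.0      # "standard" fix: reset to the boundary (introduces a spurious force)
            return x

        print(simulate_standard())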

  17. Correction factors for assessing immersion suits under harsh conditions.

    Science.gov (United States)

    Power, Jonathan; Tikuisis, Peter; Ré, António Simões; Barwood, Martin; Tipton, Michael

    2016-03-01

    Many immersion suit standards require testing of thermal protective properties in calm, circulating water, while these suits are typically used in harsher environments where they often underperform. Yet it can be expensive and logistically challenging to test immersion suits in realistic conditions. The goal of this work was to develop a set of correction factors that would allow suits to be tested in calm water yet ensure they will offer sufficient protection in harsher conditions. Two immersion studies, one dry and the other with 500 mL of water within the suit, were conducted in wind and waves to measure the change in suit insulation. In both studies, wind and waves resulted in a significantly lower immersed insulation value compared to calm water. The minimum required thermal insulation for maintaining heat balance can be calculated for a given mean skin temperature, metabolic heat production, and water temperature. Combining the physiological limits of sustainable cold water immersion and actual suit insulation, correction factors can be deduced for harsh conditions compared to calm. The minimum in-situ suit insulation to maintain thermal balance is 1.553 - 0.0624·TW + 0.00018·TW² for a dry calm condition. Multiplicative correction factors to the above equation are 1.37, 1.25, and 1.72 for wind + waves, 500 mL suit wetness, and both combined, respectively. Calm water certification tests of suit insulation should meet or exceed the minimum in-situ requirements to maintain thermal balance, and correction factors should be applied for a more realistic determination of minimum insulation for harsh conditions. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
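    A small worked example (Python) using only the coefficients and multiplicative factors quoted above; the example water temperature is an assumed value, and units follow the record:

        def min_insulation_calm_dry(tw):
            """Minimum in-situ suit insulation for thermal balance in the dry, calm condition."""
            return 1.553 - 0.0624 * tw + 0.00018 * tw ** 2

        correction = {"wind + waves": 1.37, "500 mL suit wetness": 1.25, "both combined": 1.72}

        tw = 5.0  # example water temperature in degrees C (assumed, not from the record)
        base = min_insulation_calm_dry(tw)
        print(f"calm, dry minimum insulation at TW = {tw}: {base:.3f}")
        for condition, factor in correction.items():
            print(f"  {condition}: {base * factor:.3f}")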

  18. Test-state approach to the quantum search problem

    International Nuclear Information System (INIS)

    Sehrawat, Arun; Nguyen, Le Huy; Englert, Berthold-Georg

    2011-01-01

    The search for 'a quantum needle in a quantum haystack' is a metaphor for the problem of finding out which one of a permissible set of unitary mappings - the oracles - is implemented by a given black box. Grover's algorithm solves this problem with quadratic speedup as compared with the analogous search for 'a classical needle in a classical haystack'. Since the outcome of Grover's algorithm is probabilistic - it gives the correct answer with high probability, not with certainty - the answer requires verification. For this purpose we introduce specific test states, one for each oracle. These test states can also be used to realize 'a classical search for the quantum needle' which is deterministic - it always gives a definite answer after a finite number of steps - and 3.41 times as fast as the purely classical search. Since the test-state search and Grover's algorithm look for the same quantum needle, the average number of oracle queries of the test-state search is the classical benchmark for Grover's algorithm.

  19. Numerical evaluation of virtual corrections to multi-jet production in massless QCD

    DEFF Research Database (Denmark)

    Badger, S.; Yundin, V.; Biedermann, B.

    2013-01-01

    title: NJet. Catalogue identifier: AEPF_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEPF_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License, version 3. No. of lines in distributed program, including test data, etc.: 250047. No. of bytes in distributed program, including test data, etc.: 2138947. Distribution format: tar.gz. Programming language: C++, Python. Computer: PC/Workstation. Operating system: No specific requirements - tested on Scientific Linux 5.2 and Mac OS X 10.7.4. Classification: 11.5. External routines: QCDLoop (http://qcdloop.fnal.gov/), qd (http://crd.lbl.gov/dhbailey/mpdist/), both included in the distribution file. Nature of problem: Evaluation of virtual corrections for multi-jet production in massless QCD. Solution method: Purely numerical approach based on tree...

  20. The Self Attenuation Correction for Holdup Measurements, a Historical Perspective

    International Nuclear Information System (INIS)

    Oberer, R. B.; Gunn, C. A.; Chiang, L. G.

    2006-01-01

    Self attenuation has historically caused both conceptual and measurement problems. The purpose of this paper is to eliminate some of the historical confusion by reviewing the mathematical basis and by comparing several methods of correcting for self attenuation, focusing on transmission as a central concept.

  1. One-step genetic correction of hemoglobin E/beta-thalassemia patient-derived iPSCs by the CRISPR/Cas9 system.

    Science.gov (United States)

    Wattanapanitch, Methichit; Damkham, Nattaya; Potirat, Ponthip; Trakarnsanga, Kongtana; Janan, Montira; U-Pratya, Yaowalak; Kheolamai, Pakpoom; Klincumhom, Nuttha; Issaragrisil, Surapol

    2018-02-26

    Thalassemia is the most common genetic disease worldwide; those with severe disease require lifelong blood transfusion and iron chelation therapy. The definitive cure for thalassemia is allogeneic hematopoietic stem cell transplantation, which is limited due to lack of HLA-matched donors and the risk of post-transplant complications. Induced pluripotent stem cell (iPSC) technology offers prospects for autologous cell-based therapy which could avoid the immunological problems. We now report genetic correction of the beta hemoglobin (HBB) gene in iPSCs derived from a patient with a double heterozygote for hemoglobin E and β-thalassemia (HbE/β-thalassemia), the most common thalassemia syndrome in Thailand and Southeast Asia. We used the CRISPR/Cas9 system to target the hemoglobin E mutation from one allele of the HBB gene by homology-directed repair with a single-stranded DNA oligonucleotide template. DNA sequences of the corrected iPSCs were validated by Sanger sequencing. The corrected clones were differentiated into hematopoietic progenitor and erythroid cells to confirm their multilineage differentiation potential and hemoglobin expression. The hemoglobin E mutation of HbE/β-thalassemia iPSCs was seamlessly corrected by the CRISPR/Cas9 system. The corrected clones were differentiated into hematopoietic progenitor cells under feeder-free and OP9 coculture systems. These progenitor cells were further expanded in erythroid liquid culture system and developed into erythroid cells that expressed mature HBB gene and HBB protein. Our study provides a strategy to correct hemoglobin E mutation in one step and these corrected iPSCs can be differentiated into hematopoietic stem cells to be used for autologous transplantation in patients with HbE/β-thalassemia in the future.

  2. Detector to detector corrections: a comprehensive experimental study of detector specific correction factors for beam output measurements for small radiotherapy beams

    DEFF Research Database (Denmark)

    Azangwe, Godfrey; Grochowska, Paulina; Georg, Dietmar

    2014-01-01

    -doped aluminium oxide (Al2O3:C), organic plastic scintillators, diamond detectors, liquid filled ion chamber, and a range of small volume air filled ionization chambers (volumes ranging from 0.002 cm3 to 0.3 cm3). All detector measurements were corrected for volume averaging effect and compared with dose ratios...... measurements, the authors recommend the use of detectors that require relatively little correction, such as unshielded diodes, diamond detectors or microchambers, and solid state detectors such as alanine, TLD, Al2O3:C, or scintillators....

  3. Publisher Correction: Measuring progress from nationally determined contributions to mid-century strategies

    Science.gov (United States)

    Iyer, Gokul; Ledna, Catherine; Clarke, Leon; Edmonds, James; McJeon, Haewon; Kyle, Page; Williams, James H.

    2018-03-01

    In the version of this Article previously published, technical problems led to the wrong summary appearing on the homepage, and an incorrect Supplementary Information file being uploaded. Both errors have now been corrected.

  4. From Novice to Expert: Problem Solving in ICD-10-PCS Procedural Coding

    Science.gov (United States)

    Rousse, Justin Thomas

    2013-01-01

    The benefits of converting to ICD-10-CM/PCS have been well documented in recent years. One of the greatest challenges in the conversion, however, is how to train the workforce in the code sets. The International Classification of Diseases, Tenth Revision, Procedure Coding System (ICD-10-PCS) has been described as a language requiring higher-level reasoning skills because of the system's increased granularity. Training and problem-solving strategies required for correct procedural coding are unclear. The objective of this article is to propose that the acquisition of rule-based logic will need to be augmented with self-evaluative and critical thinking. Awareness of how this process works is helpful for established coders as well as for a new generation of coders who will master the complexities of the system. PMID:23861674

  5. Employing UMLS for generating hints in a tutoring system for medical problem-based learning.

    Science.gov (United States)

    Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan

    2012-06-01

    While problem-based learning has become widely popular for imparting clinical reasoning skills, the dynamics of medical PBL require close attention to a small group of students, placing a burden on medical faculty, whose time is over taxed. Intelligent tutoring systems (ITSs) offer an attractive means to increase the amount of facilitated PBL training the students receive. But typical intelligent tutoring system architectures make use of a domain model that provides a limited set of approved solutions to problems presented to students. Student solutions that do not match the approved ones, but are otherwise partially correct, receive little acknowledgement as feedback, stifling broader reasoning. Allowing students to creatively explore the space of possible solutions is exactly one of the attractive features of PBL. This paper provides an alternative to the traditional ITS architecture by using a hint generation strategy that leverages a domain ontology to provide effective feedback. The concept hierarchy and co-occurrence between concepts in the domain ontology are drawn upon to ascertain partial correctness of a solution and guide student reasoning towards a correct solution. We describe the strategy incorporated in METEOR, a tutoring system for medical PBL, wherein the widely available UMLS is deployed and represented as the domain ontology. Evaluation of expert agreement with system generated hints on a 5-point likert scale resulted in an average score of 4.44 (Spearman's ρ=0.80, p<0.01). Hints containing partial correctness feedback scored significantly higher than those without it (Mann Whitney, p<0.001). Hints produced by a human expert received an average score of 4.2 (Spearman's ρ=0.80, p<0.01). Copyright © 2012 Elsevier Inc. All rights reserved.

  6. 40 CFR 63.10899 - What are my recordkeeping and reporting requirements?

    Science.gov (United States)

    2010-07-01

    ... years following the date of each occurrence, measurement, maintenance, corrective action, report, or... and maintenance requirements § 63.10896 and the corrective action taken. (d) You must submit written... maintenance plan as required by § 63.10896(a) and records that demonstrate compliance with plan requirements...

  7. ecco: An error correcting comparator theory.

    Science.gov (United States)

    Ghirlanda, Stefano

    2018-03-08

    Building on the work of Ralph Miller and coworkers (Miller and Matzel, 1988; Denniston et al., 2001; Stout and Miller, 2007), I propose a new formalization of the comparator hypothesis that seeks to overcome some shortcomings of existing formalizations. The new model, dubbed ecco for "Error-Correcting COmparisons," retains the comparator process and the learning of CS-CS associations based on contingency. ecco assumes, however, that learning of CS-US associations is driven by total error correction, as first introduced by Rescorla and Wagner (1972). I explore ecco's behavior in acquisition, compound conditioning, blocking, backward blocking, and unovershadowing. In these paradigms, ecco appears capable of avoiding the problems of current comparator models, such as the inability to solve some discriminations and some paradoxical effects of stimulus salience. At the same time, ecco exhibits the retrospective revaluation phenomena that are characteristic of comparator theory. Copyright © 2018 Elsevier B.V. All rights reserved.
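    As a sketch of the total-error-correction rule that the record says ecco adopts for CS-US learning, the classic Rescorla-Wagner update is shown below in Python; ecco's comparator process and CS-CS contingency learning are not reproduced, and the parameter values are arbitrary:

        def rescorla_wagner_trial(V, present, lam, alpha=0.3, beta=1.0):
            """One trial of the Rescorla-Wagner total-error update for the stimuli present."""
            total = sum(V[s] for s in present)   # combined prediction of the US
            error = lam - total                  # a single total error term shared by all CSs
            for s in present:
                V[s] += alpha * beta * error
            return V

        V = {"A": 0.0, "B": 0.0}
        for _ in range(20):                      # phase 1: A alone paired with the US
            rescorla_wagner_trial(V, ["A"], lam=1.0)
        for _ in range(20):                      # phase 2: compound AB paired with the US
            rescorla_wagner_trial(V, ["A", "B"], lam=1.0)
        print(V)                                 # B stays weak: blocking, a total-error effect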

  8. Comparative evaluation of scatter correction techniques in 3D positron emission tomography

    CERN Document Server

    Zaidi, H

    2000-01-01

    Much research and development has been concentrated on the scatter compensation required for quantitative 3D PET. Increasingly sophisticated scatter correction procedures are under investigation, particularly those based on accurate scatter models, and iterative reconstruction-based scatter compensation approaches. The main difference among the correction methods is the way in which the scatter component in the selected energy window is estimated. Monte Carlo methods give further insight and might in themselves offer a possible correction procedure. Methods: Five scatter correction methods are compared in this paper where applicable. The dual-energy window (DEW) technique, the convolution-subtraction (CVS) method, two variants of the Monte Carlo-based scatter correction technique (MCBSC1 and MCBSC2) and our newly developed statistical reconstruction-based scatter correction (SRBSC) method. These scatter correction techniques are evaluated using Monte Carlo simulation studies, experimental phantom measurements...

  9. On-sky Closed-loop Correction of Atmospheric Dispersion for High-contrast Coronagraphy and Astrometry

    Science.gov (United States)

    Pathak, P.; Guyon, O.; Jovanovic, N.; Lozi, J.; Martinache, F.; Minowa, Y.; Kudo, T.; Kotani, T.; Takami, H.

    2018-02-01

    Adaptive optic (AO) systems delivering high levels of wavefront correction are now common at observatories. One of the main limitations to image quality after wavefront correction comes from atmospheric refraction. An atmospheric dispersion compensator (ADC) is employed to correct for atmospheric refraction. The correction is applied based on a look-up table consisting of dispersion values as a function of telescope elevation angle. The look-up table-based correction of atmospheric dispersion results in imperfect compensation leading to the presence of residual dispersion in the point spread function (PSF) and is insufficient when sub-milliarcsecond precision is required. The presence of residual dispersion can limit the achievable contrast while employing high-performance coronagraphs or can compromise high-precision astrometric measurements. In this paper, we present the first on-sky closed-loop correction of atmospheric dispersion by directly using science path images. The concept behind the measurement of dispersion utilizes the chromatic scaling of focal plane speckles. An adaptive speckle grid generated with a deformable mirror (DM) that has a sufficiently large number of actuators is used to accurately measure the residual dispersion and subsequently correct it by driving the ADC. We have demonstrated with the Subaru Coronagraphic Extreme AO (SCExAO) system on-sky closed-loop correction of residual dispersion to instruments which require sub-milliarcsecond correction.

  10. Correction of longitudinal errors in accelerators for heavy-ion fusion

    International Nuclear Information System (INIS)

    Sharp, W.M.; Callahan, D.A.; Barnard, J.J.; Langdon, A.B.; Fessenden, T.J.

    1993-01-01

    Longitudinal space-charge waves develop on a heavy-ion inertial-fusion pulse from initial mismatches or from inappropriately timed or shaped accelerating voltages. Without correction, waves moving backward along the beam can grow due to the interaction with their resistivity retarded image fields, eventually degrading the longitudinal emittance. A simple correction algorithm is presented here that uses a time-dependent axial electric field to reverse the direction of backward-moving waves. The image fields then damp these forward-moving waves. The method is demonstrated by fluid simulations of an idealized inertial-fusion driver, and practical problems in implementing the algorithm are discussed

  11. Work-related well-being of correctional officers in South Africa / Philemon Rampou Mohoje

    OpenAIRE

    Mohoje, Philemon Rampou

    2006-01-01

    Stress among correctional officers is widespread, according to research studies and anecdotal evidence. The threat of inmate violence against correctional officers, actual violence committed by inmates, inmate demands and manipulation and problems with co-workers are conditions that officers have reported in recent years that can cause stress. These factors, combined with understaffing, extensive overtime, rotating shift work, low pay, poor public image, and other sources of st...

  12. Effect of FLR correction on Rayleigh-Taylor instability of quantum and stratified plasma

    International Nuclear Information System (INIS)

    Sharma, P.K.; Tiwari, Anita; Argal, Shraddha; Chhajlani, R.K.

    2013-01-01

    The Rayleigh-Taylor instability of stratified incompressible fluids is studied in the presence of finite Larmor radius (FLR) corrections and quantum effects in a bounded medium. The quantum magnetohydrodynamic equations of the problem are solved using the normal mode analysis method. A dispersion relation is derived for the case where the plasma is bounded by two rigid planes z = 0 and z = h. The dispersion relation is obtained in dimensionless form to discuss the growth rate of the Rayleigh-Taylor instability in the presence of FLR corrections and quantum effects. The stabilizing or destabilizing influence of the quantum effect and the FLR correction on the Rayleigh-Taylor instability is analyzed. (author)

  13. Comments on `A discrete optimal control problem for descriptor systems'

    DEFF Research Database (Denmark)

    Ravn, Hans

    1990-01-01

    In the above-mentioned work (see ibid., vol.34, p.177-81 (1989)), necessary and sufficient optimality conditions are derived for a discrete-time optimal problem, as well as other specific cases of implicit and explicit dynamic systems. The commenter corrects a mistake and demonstrates that there ...

  14. Nearly degenerate neutrinos, supersymmetry and radiative corrections

    International Nuclear Information System (INIS)

    Casas, J.A.; Espinosa, J.R.; Ibarra, A.; Navarro, I.

    2000-01-01

    If neutrinos are to play a relevant cosmological role, they must be essentially degenerate with a mass matrix of the bimaximal mixing type. We study this scenario in the MSSM framework, finding that if neutrino masses are produced by a see-saw mechanism, the radiative corrections give rise to mass splittings and mixing angles that can accommodate the atmospheric and the (large angle MSW) solar neutrino oscillations. This provides a natural origin for the Δm²_sol << Δm²_atm hierarchy. On the other hand, the vacuum oscillation solution to the solar neutrino problem is always excluded. We also discuss, in the SUSY scenario, other possible effects of radiative corrections involving the new neutrino Yukawa couplings, including implications for triviality limits on the Majorana mass, the infrared fixed point value of the top Yukawa coupling, and gauge coupling and bottom-tau unification.

  15. Corrective action investigation plan for the Roller Coaster RADSAFE Area, Corrective Action Unit 407, Tonopah Test Range, Nevada

    International Nuclear Information System (INIS)

    1998-04-01

    This Corrective Action Investigation Plan (CAIP) has been developed in accordance with the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the US Department of Energy, Nevada Operations Office (DOE/NV); the State of Nevada Division of Environmental Protection (NDEP); and the US Department of Defense (FFACO, 1996). The CAIP is a document that provides or references all of the specific information for investigation activities associated with Corrective Action Units (CAUs) or Corrective Action Sites (CASs). According to the FFACO (1996), CASs are sites potentially requiring corrective action(s) and may include solid waste management units or individual disposal or release sites. CAUs consist of one or more CASs grouped together based on geography, technical similarity, or agency responsibility for the purpose of determining corrective actions. This CAIP contains the environmental sample collection objectives and the criteria for conducting site investigation activities at CAU No. 407, the Roller Coaster RADSAFE Area (RCRSA) which is located on the Tonopah Test Range (TTR). The TTR, included in the Nellis Air Force Range Complex, is approximately 255 km (140 mi) northwest of Las Vegas, Nevada. CAU No. 407 is comprised of only one CAS (TA-23-001-TARC). The RCRSA was used during May and June 1963 to decontaminate vehicles, equipment, and personnel from the Clean Slate tests. The surface and subsurface soils are likely to have been impacted by plutonium and other contaminants of potential concern (COPCs) associated with decontamination activities at this site. The purpose of the corrective action investigation described in this CAIP is to: identify the presence and nature of COPCs; determine the vertical and lateral extent of COPCs; and provide sufficient information and data to develop and evaluate appropriate corrective actions for the CAS

  16. From customer satisfaction survey to corrective actions in laboratory services in a university hospital.

    Science.gov (United States)

    Oja, Paula I; Kouri, Timo T; Pakarinen, Arto J

    2006-12-01

    The aims were to find out the satisfaction of clinical units with laboratory services in a university hospital, to point out the most important problems and defects in services, to carry out corrective actions, and thereafter to identify the possible changes in satisfaction. The participants were senior physicians and nurses-in-charge of the clinical units at Oulu University Hospital, Finland. A customer satisfaction survey using a questionnaire was carried out in 2001, indicating the essential aspects of laboratory services. Customer-specific problems were clarified, corrective actions were performed, and the survey was repeated in 2004. In 2001, the highest dissatisfaction rates were recorded for computerized test requesting and reporting, turnaround times of tests, and the schedule of phlebotomy rounds. The old laboratory information system was not amenable to major improvements, and it was renewed in 2004-05. Several clinical units perceived turnaround times to be long, because the tests were ordered as routine despite emergency needs. Instructions about stat requesting were given to these units. However, no changes were evident in the satisfaction level in the 2004 survey. Following negotiations with the clinics, phlebotomy rounds were re-scheduled. This resulted in a distinct increase in satisfaction in 2004. A satisfaction survey is a screening tool that identifies topics of dissatisfaction. Without further clarifications, it is not possible to find out the specific problems of customers and to undertake targeted corrective actions. Customer-specific corrections are rarely seen as improvements in overall satisfaction rates.

  17. On the Design of Error-Correcting Ciphers

    Directory of Open Access Journals (Sweden)

    Mathur Chetan Nanjunda

    2006-01-01

    Full Text Available Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example, 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher and those that use convolutional codes require more data expansion in order to achieve similar error correction as the HD-cipher. The original contributions of this work are (1) design of a new joint error-correction-encryption system, (2) design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes) for use in the HD-cipher, (3) mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) mathematical derivation of the bound on resistance of HD cipher to linear and differential cryptanalysis, (7) experimental comparison

  18. The usefulness and the problems of attenuation correction using simultaneous transmission and emission data acquisition method. Studies on normal volunteers and phantom

    International Nuclear Information System (INIS)

    Kijima, Tetsuji; Kumita, Shin-ichiro; Mizumura, Sunao; Cho, Keiichi; Ishihara, Makiko; Toba, Masahiro; Kumazaki, Tatsuo; Takahashi, Munehiro.

    1997-01-01

    Attenuation correction using a simultaneous transmission data (TCT) and emission data (ECT) acquisition method was applied to 201Tl myocardial SPECT in ten normal adults and a phantom in order to validate the efficacy of attenuation correction with this method. The normal adult study demonstrated improved 201Tl accumulation in the septal wall and the posterior wall of the left ventricle and relatively decreased activities in the lateral wall with attenuation correction; high 201Tl-uptake organs such as the liver and the stomach pushed up the activities in the septal wall and the posterior wall. Cardiac dynamic phantom studies showed that the partial volume effect due to cardiac motion contributed to under-correction of the apex, which might be overcome using gated SPECT. Although simultaneous TCT and ECT acquisition was considered an advantageous method for attenuation correction, mis-correction of specific myocardial segments should be taken into account when assessing attenuation-corrected images. (author)

  19. Self-adjoint extensions and spectral analysis in the Calogero problem

    Energy Technology Data Exchange (ETDEWEB)

    Gitman, D M [Institute of Physics, University of Sao Paulo (Brazil); Tyutin, I V; Voronov, B L [Lebedev Physical Institute, Moscow (Russian Federation)], E-mail: gitman@dfn.if.usp.br, E-mail: tyutin@lpi.ru, E-mail: voronov@lpi.ru

    2010-04-09

    In this paper, we present a mathematically rigorous quantum-mechanical treatment of a one-dimensional motion of a particle in the Calogero potential {alpha}x{sup -2}. Although the problem is quite old and well studied, we believe that our consideration based on a uniform approach to constructing a correct quantum-mechanical description for systems with singular potentials and/or boundaries, proposed in our previous works, adds some new points to its solution. To demonstrate that a consideration of the Calogero problem requires mathematical accuracy, we discuss some 'paradoxes' inherent in the 'naive' quantum-mechanical treatment. Using a self-adjoint extension method, we construct and study all possible self-adjoint operators (self-adjoint Hamiltonians) associated with a formal differential expression for the Calogero Hamiltonian. In particular, we discuss a spontaneous scale-symmetry breaking associated with self-adjoint extensions. A complete spectral analysis of all self-adjoint Hamiltonians is presented.

  20. Corrective action investigation plan for Corrective Action Unit 342: Area 23 Mercury Fire Training Pit, Nevada Test Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    This Corrective Action Investigation Plan (CAIP) has been developed in accordance with the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the US Department of Energy, Nevada Operations Office (DOE/NV); the State of Nevada Division of Environmental Protection (NDEP); and the US Department of Defense (FFACO, 1996). The CAIP is a document that provides or references all of the specific information for investigation activities associated with Corrective Action Units (CAUs) or Corrective Action Sites (CASs). According to the FFACO, CASs are sites potentially requiring corrective action(s) and may include solid waste management units or individual disposal or release sites (FFACO, 1996). Corrective Action Units consist of one or more CASs grouped together based on geography, technical similarity, or agency responsibility for the purpose of determining corrective actions. This CAIP contains the environmental sample collection objectives and the criteria for conducting site investigation activities at CAU 342, the Area 23 Mercury Fire Training Pit (FTP), which is located in Area 23 at the Nevada Test Site (NTS). The NTS is approximately 88 km (55 mi) northwest of Las Vegas, Nevada. Corrective Action Unit 342 is comprised of CAS 23-56-01. The FTP is an area approximately 100 m by 140 m (350 ft by 450 ft) located west of the town of Mercury, Nevada, which was used between approximately 1965 and 1990 to train fire-fighting personnel (REECo, 1991; Jacobson, 1991). The surface and subsurface soils in the FTP have likely been impacted by hydrocarbons and other contaminants of potential concern (COPC) associated with burn activities and training exercises in the area.

  1. Texture analysis by the Schulz reflection method: Defocalization corrections for thin films

    International Nuclear Information System (INIS)

    Chateigner, D.; Germi, P.; Pernet, M.

    1992-01-01

    A new method is described for correcting experimental data obtained from the texture analysis of thin films. The analysis employed for correcting the data usually requires the experimental curves of defocalization for a randomly oriented specimen. In view of difficulties in finding non-oriented films, a theoretical method for these corrections is proposed which uses the defocalization evolution for a bulk sample, the film thickness and the penetration depth of the incident beam in the material. This correction method is applied to a film of YBa2Cu3O7-δ on an SrTiO3 single-crystal substrate. (orig.)

  2. Corrective Action Investigation Plan for Corrective Action Unit No. 423: Building 03-60 Underground Discharge Point, Tonopah Test Range, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    DOE/NV

    1997-10-01

    This Corrective Action Investigation Plan (CAIP) has been developed in accordance with the Federal Facility Agreement and Consent Order (FFACO) that was agreed to by the US Department of Energy, Nevada Operations Office (DOE/NV), the State of Nevada Division of Environmental Protection (NDEP), and the US Department of Defense. The CAIP is a document that provides or references all of the specific information for investigation activities associated with Corrective Action Units (CAUS) or Corrective Action Sites (CASs) (FFACO, 1996). As per the FFACO (1996), CASs are sites potentially requiring corrective action(s) and may include solid waste management units or individual disposal or release sites. Corrective Action Units consist of one or more CASs grouped together based on geography, technical similarity, or agency responsibility for the purpose of determining corrective actions. This CAIP contains the environmental sample collection objectives and the criteria for conducting site investigation activities at CAU No. 423, the Building 03-60 Underground Discharge Point (UDP), which is located in Area 3 at the Tonopah Test Range (TTR). The TTR, part of the Nellis Air Force Range, is approximately 225 kilometers (km) (140 miles [mi]) northwest of Las Vegas, Nevada (Figures 1-1 and 1-2). Corrective Action Unit No. 423 is comprised of only one CAS (No. 03-02-002-0308), which includes the Building 03-60 UDP and an associated discharge line extending from Building 03-60 to a point approximately 73 meters (m) (240 feet [ft]) northwest as shown on Figure 1-3.

  3. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  4. Corrective Action Plan for Corrective Action Unit 261: Area 25 Test Cell A Leachfield System, Nevada Test Site, Nevada; TOPICAL

    International Nuclear Information System (INIS)

    T. M. Fitzmaurice

    2000-01-01

    This Corrective Action Plan (CAP) has been prepared for the Corrective Action Unit (CAU) 261 Area 25 Test Cell A Leachfield System in accordance with the Federal Facility Agreement and Consent Order (Nevada Division of Environmental Protection [NDEP] et al., 1996). This CAP provides the methodology for implementing the approved corrective action alternative as listed in the Corrective Action Decision Document (U.S. Department of Energy, Nevada Operations Office, 1999). Investigation of CAU 261 was conducted from February through May of 1999. There were no Constituents of Concern (COCs) identified at Corrective Action Site (CAS) 25-05-07 Acid Waste Leach Pit (AWLP). COCs identified at CAS 25-05-01 included diesel-range organics and radionuclides. The following closure actions will be implemented under this plan: Because COCs were not found at CAS 25-05-07 AWLP, no action is required; Removal of septage from the septic tank (CAS 25-05-01), after which the distribution box and the septic tank will be filled with grout; Removal of impacted soils identified near the initial outfall area; and Upon completion of this closure activity and approval of the Closure Report by NDEP, administrative controls, use restrictions, and site postings will be used to prevent intrusive activities at the site.

  5. Energy Efficient Error-Correcting Coding for Wireless Systems

    NARCIS (Netherlands)

    Shao, X.

    2010-01-01

    The wireless channel is a hostile environment. The transmitted signal suffers not only from multi-path fading but also from noise and interference from other users of the wireless channel. This causes unreliable communications. To achieve high-quality communications, error correcting coding is required

  6. How Students Circumvent Problem-Solving Strategies that Require Greater Cognitive Complexity.

    Science.gov (United States)

    Niaz, Mansoor

    1996-01-01

    Analyzes the great diversity in problem-solving strategies used by students in solving a chemistry problem and discusses the relationship between these variables and different cognitive variables. Concludes that students try to circumvent certain problem-solving strategies by adapting flexible and stylistic innovations that render the cognitive…

  7. Benefits of visualization in the mammography problem

    DEFF Research Database (Denmark)

    Khan, Azam; Breslav, Simon; Glueck, Michael

    2015-01-01

    Trying to make a decision between two outcomes, when there is some level of uncertainty, is inherently difficult because it involves probabilistic reasoning. Previous studies have shown that most people do not correctly apply Bayesian inference to solve probabilistic problems for decision making under uncertainty. In an effort to improve decision making with Bayesian problems, previous work has studied supplementing the textual description of problems with visualizations, such as graphs and charts. However, results have been varied and generally indicate that visualization...

  8. Integration of a Portfolio-based Approach to Evaluate Aerospace R and D Problem Formulation Into a Parametric Synthesis Tool

    Science.gov (United States)

    Oza, Amit R.

    The focus of this study is to improve R&D effectiveness towards aerospace and defense planning in the early stages of the product development lifecycle. Emphasis is on: correct formulation of a decision problem, with special attention to account for data relationships between the individual design problem and the system capability required to size the aircraft, understanding of the meaning of the acquisition strategy objective and subjective data requirements that are required to arrive at a balanced analysis and/or "correct" mix of technology projects, understanding the meaning of the outputs that can be created from the technology analysis, and methods the researcher can use to effectively support decisions at the acquisition and conceptual design levels through utilization of a research and development portfolio strategy. The primary objectives of this study are to: (1) determine what strategy should be used to initialize conceptual design parametric sizing processes during requirements analysis for the materiel solution analysis stage of the product development lifecycle when utilizing data already constructed in the latter phase when working with a generic database management system synthesis tool integration architecture for aircraft design, and (2) assess how these new data relationships can contribute to innovative decision-making when solving acquisition hardware/technology portfolio problems. As such, an automated composable problem formulation system is developed to consider data interactions for the system architecture that manages acquisition pre-design concept refinement portfolio management, and conceptual design parametric sizing requirements. The research includes a way to: • Formalize the data storage and implement the data relationship structure with a system architecture automated through a database management system. • Allow for composable modeling, in terms of level of hardware abstraction, for the product model, mission model, and

  9. 76 FR 47591 - Agency Information Collection Activities: Proposed Collection; Comment Request; Correction

    Science.gov (United States)

    2011-08-05

    ... State reserve requirements, in the form of Start-up Loans and Solvency Loans. An applicant may apply for...; Correction AGENCY: Centers for Medicare & Medicaid Services, HHS. In compliance with the requirement of... issuer. Solvency Loans are intended to assist loan recipients with meeting the solvency requirements of...

  10. Corrective Action Decision Document for Corrective Action Unit 271: Areas 25, 26, and 27 Septic Systems, Nevada Test Site, Nevada, Rev. 0

    International Nuclear Information System (INIS)

    2002-01-01

    This corrective action decision document (CADD) identifies and rationalizes the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's selection of a recommended corrective action alternative (CAA) appropriate to facilitate the closure of Corrective Action Unit (CAU) 271, Areas 25, 26, and 27 Septic Systems, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order (FFACO). Located on the NTS approximately 65 miles northwest of Las Vegas, CAU 271 consists of fifteen Corrective Action Sites (CASs). The CASs consist of 13 septic systems, a radioactive leachfield, and a contaminated reservoir. The purpose of this CADD is to identify and provide a rationale for the selection of a recommended CAA for each CAS within CAU 271. Corrective action investigation (CAI) activities were performed from October 29, 2001, through February 22, 2002, and April 29, 2002, through June 25, 2002. Analytes detected during the CAI were evaluated against preliminary action levels and regulatory disposal limits to determine contaminants of concern (COC) for each CAS. It was determined that contaminants of concern included hydrocarbon-contaminated media, polychlorinated biphenyls, and radiologically-contaminated media. Three corrective action objectives were identified for these CASs, and subsequently three CAAs developed for consideration based on a review of existing data, future use, and current operations in Areas 25, 26, and 27 of the NTS. These CAAs were: Alternative 1 - No Further Action, Alternative 2 - Clean Closure, and Alternative 3 - Closure in Place with Administrative Controls. Alternative 2, Clean Closure, was chosen as the preferred CAA for all but two of the CASs (25-04-04 and 27-05-02) because Nevada Administrative Control 444.818 requires clean closure of the septic tanks involved with these CASs. Alternative 3, Closure in Place, was chosen for the final two CASs because the short-term risks of

  11. Corrective Action Decision Document for Corrective Action Unit 271: Areas 25, 26, and 27 Septic Systems, Nevada Test Site, Nevada, Rev. 0

    Energy Technology Data Exchange (ETDEWEB)

    NNSA/NV

    2002-09-16

    This corrective action decision document (CADD) identifies and rationalizes the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's selection of a recommended corrective action alternative (CAA) appropriate to facilitate the closure of Corrective Action Unit (CAU) 271, Areas 25, 26, and 27 Septic Systems, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order (FFACO). Located on the NTS approximately 65 miles northwest of Las Vegas, CAU 271 consists of fifteen Corrective Action Sites (CASs). The CASs consist of 13 septic systems, a radioactive leachfield, and a contaminated reservoir. The purpose of this CADD is to identify and provide a rationale for the selection of a recommended CAA for each CAS within CAU 271. Corrective action investigation (CAI) activities were performed from October 29, 2001, through February 22, 2002, and April 29, 2002, through June 25, 2002. Analytes detected during the CAI were evaluated against preliminary action levels and regulatory disposal limits to determine contaminants of concern (COC) for each CAS. It was determined that contaminants of concern included hydrocarbon-contaminated media, polychlorinated biphenyls, and radiologically-contaminated media. Three corrective action objectives were identified for these CASs, and subsequently three CAAs developed for consideration based on a review of existing data, future use, and current operations in Areas 25, 26, and 27 of the NTS. These CAAs were: Alternative 1 - No Further Action, Alternative 2 - Clean Closure, and Alternative 3 - Closure in Place with Administrative Controls. Alternative 2, Clean Closure, was chosen as the preferred CAA for all but two of the CASs (25-04-04 and 27-05-02) because Nevada Administrative Control 444.818 requires clean closure of the septic tanks involved with these CASs. Alternative 3, Closure in Place, was chosen for the final two CASs because the short-term risks of

  12. Academic Corrective Action from a Legal Perspective.

    Science.gov (United States)

    Collura, Frank J.

    1997-01-01

    In cases of cheating, plagiarism, or violations of the law in dental education, a very high level of due process is required. University counsel can help administrators determine whether an accused student is professionally suited to dentistry by characterizing as many corrective actions as possible as academic under the rubric of "suitability to…

  13. Bias correction for rainrate retrievals from satellite passive microwave sensors

    Science.gov (United States)

    Short, David A.

    1990-01-01

    Rainrates retrieved from past and present satellite-borne microwave sensors are affected by a fundamental remote sensing problem. Sensor fields-of-view are typically large enough to encompass substantial rainrate variability, whereas the retrieval algorithms, based on radiative transfer calculations, show a non-linear relationship between rainrate and microwave brightness temperature. Retrieved rainrates are systematically too low. A statistical model of the bias problem shows that bias correction factors depend on the probability distribution of instantaneous rainrate and on the average thickness of the rain layer.
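
    The mechanism described above can be illustrated numerically. The sketch below assumes a toy saturating brightness-temperature relation and a lognormal instantaneous-rainrate distribution (both assumptions, not the paper's radiative transfer model) to show how field-of-view averaging combined with a non-linear retrieval biases rainrates low, and how a multiplicative correction factor could then be estimated from the assumed distribution.

```python
# Toy demonstration of the FOV-averaging bias and a distribution-dependent
# correction factor. All functional forms and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def tb_of_rain(r):
    """Toy non-linear (saturating) rainrate -> brightness temperature relation [K]."""
    return 180.0 + 100.0 * (1.0 - np.exp(-0.15 * r))

def rain_of_tb(tb):
    """Inverse of the toy relation, i.e. the retrieval algorithm."""
    return -np.log(1.0 - (tb - 180.0) / 100.0) / 0.15

true_rain = rng.lognormal(mean=1.0, sigma=0.8, size=(10000, 50))  # 50 'pixels' per FOV
fov_tb = tb_of_rain(true_rain).mean(axis=1)   # the sensor sees the FOV-mean brightness temperature
retrieved = rain_of_tb(fov_tb)                # retrieval applied to the FOV-mean TB
truth = true_rain.mean(axis=1)                # true FOV-mean rainrate

bias_ratio = retrieved.mean() / truth.mean()
print(f"retrieved/true ratio = {bias_ratio:.2f} (systematically below 1)")
print(f"implied multiplicative correction factor = {1.0 / bias_ratio:.2f}")
```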

  14. Application of means of health-improving fitness for correction of weight of girls of the senior school age

    Directory of Open Access Journals (Sweden)

    Inna Pavlenko

    2016-12-01

    Full Text Available Purpose: to carry out the theoretical analysis of the problem of application of health-improving fitness for the correction of weight of girls of the senior school age. Material & Methods: analysis and synthesis of data of scientific and methodical literature. Results: it is established that the problem of excess weight at girls of the senior school age is one of the most urgent in modern science. The reasons of obesity of teenagers are defined and the main directions of the solution of this problem are characterized. Conclusions: it is defined that application of means of health-improving fitness promotes the correction of weight at girls of the senior school age. It causes the necessity of development and deployment of innovative technology of correction of weight at girls of the senior school age on the basis of primary use of means of health-improving fitness.

  15. Using Example Problems to Improve Student Learning in Algebra: Differentiating between Correct and Incorrect Examples

    Science.gov (United States)

    Booth, Julie L.; Lange, Karin E.; Koedinger, Kenneth R.; Newton, Kristie J.

    2013-01-01

    In a series of two "in vivo" experiments, we examine whether correct and incorrect examples with prompts for self-explanation can be effective for improving students' conceptual understanding and procedural skill in Algebra when combined with guided practice. In Experiment 1, students working with the Algebra I Cognitive Tutor were randomly…

  16. Evaluation of corrective action data for reportable events at commercial nuclear power plants

    International Nuclear Information System (INIS)

    Mays, G.T.

    1991-01-01

    The Nuclear Regulatory Commission (NRC) approved the adoption of cause codes for reportable events as a new performance indicator (PI) in March 1989. Corrective action data associated with the causes of events were to be compiled also. The corrective action data were considered supplemental information but not identified formally as a performance indicator. In support of NRC, the Nuclear Operations Analysis Center (NOAC) at the Oak Ridge National Laboratory (ORNL) has been routinely evaluating licensee event reports (LERs) for cause code and corrective action data since 1989. The compilation of corrective action data by NOAC represents the first systematic and comprehensive compilation of this type of data. The thrust of analyzing the corrective action data was to identify areas where licensees allocated resources to solve problems and prevent the recurrence of personnel errors and equipment failures. The predominant areas of corrective action reported by licensees are to be evaluated by NRC to compare with NRC programs designed to improve plant performance. The set of corrective action codes used to correlate with individual cause codes and included in the analyses were: training, procedural modification, corrective discipline, management change, design modification, equipment replacement/adjustment, other, and unknown. 1 fig
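
    A minimal sketch of the kind of cause-code versus corrective-action cross-tabulation described above; the LER records below are hypothetical placeholders, not actual NOAC data.

```python
# Cross-tabulate (cause code, corrective action code) pairs; records are made up.
from collections import Counter

ler_records = [
    ("personnel_error", "training"),
    ("personnel_error", "procedural_modification"),
    ("equipment_failure", "equipment_replacement/adjustment"),
    ("equipment_failure", "design_modification"),
    ("personnel_error", "training"),
]

crosstab = Counter(ler_records)
for (cause, action), n in sorted(crosstab.items()):
    print(f"{cause:20s} -> {action:35s} {n}")
```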

  17. Material motion corrections for implicit Monte Carlo radiation transport

    International Nuclear Information System (INIS)

    Gentile, N.A.; Morel, Jim E.

    2011-01-01

    We describe changes to the Implicit Monte Carlo (IMC) algorithm to include the effects of material motion. These changes assume that the problem can be embedded in a global Lorentz frame. We also assume that the material in each zone can be characterized by a single velocity. With this approximation, we show how to make IMC Lorentz invariant, so that the material motion corrections are correct to all orders of v/c. We develop thermal emission and face sources in moving material and discuss the coupling of IMC to the non-relativistic hydrodynamics equations via operator splitting. We discuss the effect of this coupling on the value of the 'Fleck factor' in IMC. (author)
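
    For reference, the standard special-relativity frequency and angle transforms, exact to all orders of v/c, are the kind of relations needed to move photon attributes between the global (lab) frame and a zone's comoving material frame. The sketch below uses generic variable names and is not the IMC code's actual interface.

```python
# Relativistic Doppler shift and aberration, exact to all orders of v/c.
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def lab_to_comoving(freq_lab, mu_lab, v):
    """Transform photon frequency and direction cosine (measured relative to the
    material velocity) from the lab frame into the frame moving with speed v."""
    beta = v / C
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    freq_cmf = freq_lab * gamma * (1.0 - beta * mu_lab)   # Doppler shift
    mu_cmf = (mu_lab - beta) / (1.0 - beta * mu_lab)      # aberration of the direction
    return freq_cmf, mu_cmf

# Example: photon travelling along the material velocity (mu = 1) in material at 0.1c.
print(lab_to_comoving(1.0e15, 1.0, 0.1 * C))
```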

  18. Corrected ROC analysis for misclassified binary outcomes.

    Science.gov (United States)

    Zawistowski, Matthew; Sussman, Jeremy B; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A; Wiitala, Wyndy L

    2017-06-15

    Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
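
    The bias described above is easy to reproduce in simulation. The sketch below (not the paper's adjusted-ROC estimator) flips case/control labels at assumed false negative and false positive rates and shows the apparent AUC shrinking relative to the AUC against the true labels; sample size, prevalence, and rates are arbitrary illustrative choices.

```python
# Simulation of outcome-label misclassification shrinking the apparent AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n, prevalence = 50_000, 0.10
y_true = rng.random(n) < prevalence
score = rng.normal(loc=np.where(y_true, 1.0, 0.0), scale=1.0)   # risk score, true AUC ~ 0.76

fnr, fpr = 0.15, 0.05                     # assumed misclassification rates of the outcome labels
flip = np.where(y_true, rng.random(n) < fnr, rng.random(n) < fpr)
y_observed = np.where(flip, ~y_true, y_true)

print("AUC against true labels:    ", round(roc_auc_score(y_true, score), 3))
print("AUC against observed labels:", round(roc_auc_score(y_observed, score), 3))
```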

  19. Behavioral and emotional problems in a Kuala Lumpur children's home.

    Science.gov (United States)

    Abd Rahman, Fairuz Nazri; Mohd Daud, Tuti Iryani; Nik Jaafar, Nik Ruzyanei; Shah, Shamsul Azhar; Tan, Susan Mooi Koon; Wan Ismail, Wan Salwina

    2013-08-01

    There is a dearth of studies on behavioral and emotional problems in residential care children in Malaysia. This study describes the behavioral and emotional problems in a sample of children in a government residential care home and compares them with their classmates living with their birth parents. A comparative cross-sectional study was carried out where carers from both groups were asked to fill in the translated Bahasa Melayu version of the Child Behavior Check List. Forms for 53 residential care children and 61 classmates were completed. The residential care children had significantly higher scores on rule-breaking (P = 0.008), DSM conduct problems (P = 0.018) and externalizing scores (P = 0.017). Abuse and neglect cases had higher anxiety and depression scores (P = 0.024). The number of reasons for being in care positively correlated with several subscales, including the total behavioral problem score (P = 0.005). Logistic regression revealed that a greater number of reasons for placement was significantly associated with having externalizing scores in the clinical range (P = 0.016). However, after Bonferroni correction, only the initial findings regarding rule-breaking and DSM conduct problem scores remained significant. Challenges exist in managing residential care children in Malaysia, especially regarding externalizing behavior. More studies are required to describe the Malaysian scene. © 2013 The Authors. Pediatrics International © 2013 Japan Pediatric Society.
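
    As a small illustration of the Bonferroni step mentioned above, the sketch below compares each raw p-value against alpha divided by the number of tests; the p-values are made up for demonstration and are not the study's data.

```python
# Bonferroni correction: a raw p-value is significant only if p < alpha / m.
alpha = 0.05
p_values = {"subscale_A": 0.004, "subscale_B": 0.012, "subscale_C": 0.020, "subscale_D": 0.030}
m = len(p_values)
for name, p in p_values.items():
    print(f"{name:12s} p={p:.3f}  significant after Bonferroni: {p < alpha / m}")
```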

  20. Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept

    Directory of Open Access Journals (Sweden)

    Ahmed Elsaadany

    2014-01-01

    Full Text Available Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one is devoted to range correction (drag ring brake and the second is devoted to drift correction (canard based-correction fuze. The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. The deploying of the drag brake in early stage of trajectory results in large range correction. The correction occasion time can be predefined depending on required correction of range. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as canards reciprocate at the roll motion.
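
    The qualitative range-correction effect of the drag ring brake can be illustrated with a toy point-mass trajectory in which the drag coefficient jumps when the brake deploys: earlier deployment gives a larger range reduction, as the abstract describes. All masses, areas, drag coefficients, and deployment times below are invented illustrative numbers, not data for any real projectile.

```python
# Toy 2-D point-mass trajectory with quadratic drag; drag coefficient increases
# once the drag ring brake deploys. Numbers are illustrative only.
import numpy as np

def range_with_brake(t_deploy, cd0=0.30, cd_brake=0.45, dt=0.01):
    """Integrate the trajectory; drag coefficient jumps from cd0 to cd_brake at
    time t_deploy. Returns the impact range [m]."""
    rho, area, mass, g = 1.225, 0.018, 45.0, 9.81
    x, y = 0.0, 0.0
    vx, vy = 800.0 * np.cos(np.radians(45)), 800.0 * np.sin(np.radians(45))
    t = 0.0
    while y >= 0.0:
        cd = cd_brake if t >= t_deploy else cd0
        v = np.hypot(vx, vy)
        k = 0.5 * rho * cd * area * v / mass   # drag acceleration per unit velocity component
        vx -= k * vx * dt
        vy -= (k * vy + g) * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return x

for t_deploy in (10.0, 30.0, 1e9):             # early, late, never deployed
    print(f"deploy at t={t_deploy:>6}: range = {range_with_brake(t_deploy):8.0f} m")
```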

  1. Accuracy improvement capability of advanced projectile based on course correction fuze concept.

    Science.gov (United States)

    Elsaadany, Ahmed; Wen-jun, Yi

    2014-01-01

    Improvement in terminal accuracy is an important objective for future artillery projectiles. Generally it is often associated with range extension. Various concepts and modifications are proposed to correct the range and drift of artillery projectile like course correction fuze. The course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, the trajectory correction has been obtained using two kinds of course correction modules, one is devoted to range correction (drag ring brake) and the second is devoted to drift correction (canard based-correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects on deflection of the projectile aerodynamic parameters. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction. The deploying of the drag brake in early stage of trajectory results in large range correction. The correction occasion time can be predefined depending on required correction of range. On the other hand, the canard based-correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as canards reciprocate at the roll motion.

  2. A multilevel search algorithm for the maximization of submodular functions applied to the quadratic cost partition problem

    NARCIS (Netherlands)

    Goldengorin, B.; Ghosh, D.

    Maximization of submodular functions on a ground set is a NP-hard combinatorial optimization problem. Data correcting algorithms are among the several algorithms suggested for solving this problem exactly and approximately. From the point of view of Hasse diagrams data correcting algorithms use
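
    For readers unfamiliar with the problem setting, the sketch below shows the standard greedy heuristic for maximizing a submodular set function under a cardinality constraint, using a simple coverage function as the example. It is included only to make the setting concrete; it is not the data-correcting algorithm analyzed in the paper.

```python
# Greedy heuristic for submodular maximization under a cardinality constraint.
def greedy_submodular_max(ground_set, f, k):
    """Pick up to k elements, each time adding the element with the largest marginal gain."""
    chosen = set()
    for _ in range(k):
        gains = {e: f(chosen | {e}) - f(chosen) for e in ground_set - chosen}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        chosen.add(best)
    return chosen

# Example: a coverage function (submodular) over a small universe of items.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}, 4: {"a"}}
coverage = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_submodular_max(set(sets), coverage, k=2))   # e.g. {1, 3}
```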

  3. Corrective Action Decision Document for Corrective Action Unit 568. Area 3 Plutonium Dispersion Sites, Nevada National Security Site, Nevada Revision 0

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, Patrick [Nevada Field Office, Las Vegas, NV (United States). National Nuclear Security Administration

    2015-08-01

    The purpose of this Corrective Action Decision Document is to identify and provide the rationale for the recommendation of corrective action alternatives (CAAs) for the 14 CASs within CAU 568. Corrective action investigation (CAI) activities were performed from April 2014 through May 2015, as set forth in the Corrective Action Investigation Plan for Corrective Action Unit 568: Area 3 Plutonium Dispersion Sites, Nevada National Security Site, Nevada; and in accordance with the Soils Activity Quality Assurance Plan, which establishes requirements, technical planning, and general quality practices. The purpose of the CAI was to fulfill data needs as defined during the DQO process. The CAU 568 dataset of investigation results was evaluated based on a data quality assessment. This assessment demonstrated that the dataset is complete and acceptable for use in fulfilling the DQO data needs. Based on the evaluation of analytical data from the CAI, review of future and current operations at the 14 CASs, and the detailed and comparative analysis of the potential CAAs, the following corrective actions are recommended for CAU 568: • No further action is the preferred corrective action for CASs 03-23-17, 03-23-22, 03-23-26. • Closure in place is the preferred corrective action for CAS 03-23-19; 03-45-01; the SE DCBs at CASs 03-23-20, 03-23-23, 03-23-31, 03-23-32, 03-23-33, and 03-23-34; and the Pascal-BHCA at CAS 03-23-31. • Clean closure is the preferred corrective action for CASs 03-08-04, 03-23-30, and 03-26-04; and the four well head covers at CASs 03-23-20, 03-23-23, 03-23-31, and 03-23-33.

  4. Experience and related research and development in applying corrective measures at the major low-level radioactive waste disposal sites

    International Nuclear Information System (INIS)

    Rose, R.R.; Mahathy, J.M.; Epler, J.S.; Boing, L.E.; Jacobs, D.G.

    1983-07-01

    A review was conducted of experience in responding to problems encountered in shallow land burial of low-level radioactive waste and in research and development related to these problems. The operating histories of eleven major disposal facilities were examined. Based on the review, it was apparent that the most effective corrective measures administered were those developed from an understanding of the site conditions which caused the problems. Accordingly, the information in this document has been organized around the major conditions which have caused problems at existing sites. These include: (1) unstable trench cover, (2) permeable trench cover, (3) subsidence, (4) ground water entering trenches, (5) intrusion by deep-rooted plants, (6) intrusion by burrowing animals, and (7) chemical and physical conditions in trench. Because the burial sites are located in regions that differ in climatologic, geologic, hydrologic, and biologic characteristics, there is variation in the severity of problems among the sites and in the nature of information concerning corrective efforts. Conditions associated with water-related problems have received a great deal of attention. For these, corrective measures have ranged from the creation of diversion systems for reducing the contact of surface water with the trench cover to the installation of seals designed to prevent infiltration from reaching the buried waste. On the other hand, corrective measures for conditions of subsidence or of intrusion by burrowing animals have had limited application and are currently under evaluation or are subjects of research and development activities. 50 references, 20 figures, 10 tables

  5. Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning

    Science.gov (United States)

    Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan

    2009-01-01

    In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…

  6. Corrective Action Decision Document for Corrective Action Unit 516: Septic Systems and Discharge Points, Nevada Test Site, Nevada: Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2004-04-28

    This Corrective Action Decision Document (CADD) identifies and rationalizes the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's selection of a recommended corrective action alternative appropriate to facilitate the closure of Corrective Action Unit (CAU) 516: Septic Systems and Discharge Points, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. Located in Areas 3, 6, and 22 on the NTS, CAU 516 includes six Corrective Action Sites (CASs) consisting of two septic systems, a sump and piping, a clean-out box and piping, dry wells, and a vehicle decontamination area. Corrective action investigation activities were performed from July 22 through August 14, 2003, with supplemental sampling conducted in late 2003 and early 2004. The potential exposure pathways for any contaminants of concern (COCs) identified during the development of the DQOs at CAU 516 gave rise to the following objectives: (1) prevent or mitigate exposure to media containing COCs at concentrations exceeding PALs as defined in the corrective action investigation plan; and (2) prevent the spread of COCs beyond each CAS. The following alternatives have been developed for consideration at CAU 516: Alternative 1 - No Further Action; Alternative 2 - Clean Closure; and Alternative 3 - Closure in Place with Administrative Controls. Alternative 1, No Further Action, is the preferred corrective action for two CASs (06-51-02 and 22-19-04). Alternative 2, Clean Closure, is the preferred corrective action for four CASs (03-59-01, 03-59-02, 06-51-01, and 06-51-03). The selected alternatives were judged to meet all requirements for the technical components evaluated, as well as meeting all applicable state and federal regulations for closure of the site and will further eliminate the contaminated media at CAU 516.

  7. Does correcting astigmatism with toric lenses improve driving performance?

    Science.gov (United States)

    Cox, Daniel J; Banton, Thomas; Record, Steven; Grabman, Jesse H; Hawkins, Ronald J

    2015-04-01

    merits relative to spherical lens correction require further investigation.

  8. Intrafraction Prostate Translations and Rotations During Hypofractionated Robotic Radiation Surgery: Dosimetric Impact of Correction Strategies and Margins

    Energy Technology Data Exchange (ETDEWEB)

    Water, Steven van de, E-mail: s.vandewater@erasmusmc.nl [Erasmus MC Cancer Institute, Department of Radiation Oncology, Rotterdam (Netherlands); Valli, Lorella [Erasmus MC Cancer Institute, Department of Radiation Oncology, Rotterdam (Netherlands); Alma Mater Studiorum, Department of Physics and Astronomy, Bologna University, Bologna (Italy); Aluwini, Shafak [Erasmus MC Cancer Institute, Department of Radiation Oncology, Rotterdam (Netherlands); Lanconelli, Nico [Alma Mater Studiorum, Department of Physics and Astronomy, Bologna University, Bologna (Italy); Heijmen, Ben; Hoogeman, Mischa [Erasmus MC Cancer Institute, Department of Radiation Oncology, Rotterdam (Netherlands)

    2014-04-01

    Purpose: To investigate the dosimetric impact of intrafraction prostate motion and the effect of robot correction strategies for hypofractionated CyberKnife treatments with a simultaneously integrated boost. Methods and Materials: A total of 548 real-time prostate motion tracks from 17 patients were available for dosimetric simulations of CyberKnife treatments, in which various correction strategies were included. Fixed time intervals between imaging/correction (15, 60, 180, and 360 seconds) were simulated, as well as adaptive timing (ie, the time interval reduced from 60 to 15 seconds in case prostate motion exceeded 3 mm or 2° in consecutive images). The simulated extent of robot corrections was also varied: no corrections, translational corrections only, and translational corrections combined with rotational corrections up to 5°, 10°, and perfect rotational correction. The correction strategies were evaluated for treatment plans with a 0-mm or 3-mm margin around the clinical target volume (CTV). We recorded CTV coverage (V_100%) and dose-volume parameters of the peripheral zone (boost), rectum, bladder, and urethra. Results: Planned dose parameters were increasingly preserved with larger extents of robot corrections. A time interval between corrections of 60 to 180 seconds provided optimal preservation of CTV coverage. To achieve 98% CTV coverage in 98% of the treatments, translational and rotational corrections up to 10° were required for the 0-mm margin plans, whereas translational and rotational corrections up to 5° were required for the 3-mm margin plans. Rectum and bladder were spared considerably better in the 0-mm margin plans. Adaptive timing did not improve delivered dose. Conclusions: Intrafraction prostate motion substantially affected the delivered dose but was compensated for effectively by robot corrections using a time interval of 60 to 180 seconds. A 0-mm margin required larger extents of additional rotational corrections than a 3

  9. Intrafraction Prostate Translations and Rotations During Hypofractionated Robotic Radiation Surgery: Dosimetric Impact of Correction Strategies and Margins

    International Nuclear Information System (INIS)

    Water, Steven van de; Valli, Lorella; Aluwini, Shafak; Lanconelli, Nico; Heijmen, Ben; Hoogeman, Mischa

    2014-01-01

    Purpose: To investigate the dosimetric impact of intrafraction prostate motion and the effect of robot correction strategies for hypofractionated CyberKnife treatments with a simultaneously integrated boost. Methods and Materials: A total of 548 real-time prostate motion tracks from 17 patients were available for dosimetric simulations of CyberKnife treatments, in which various correction strategies were included. Fixed time intervals between imaging/correction (15, 60, 180, and 360 seconds) were simulated, as well as adaptive timing (ie, the time interval reduced from 60 to 15 seconds in case prostate motion exceeded 3 mm or 2° in consecutive images). The simulated extent of robot corrections was also varied: no corrections, translational corrections only, and translational corrections combined with rotational corrections up to 5°, 10°, and perfect rotational correction. The correction strategies were evaluated for treatment plans with a 0-mm or 3-mm margin around the clinical target volume (CTV). We recorded CTV coverage (V_100%) and dose-volume parameters of the peripheral zone (boost), rectum, bladder, and urethra. Results: Planned dose parameters were increasingly preserved with larger extents of robot corrections. A time interval between corrections of 60 to 180 seconds provided optimal preservation of CTV coverage. To achieve 98% CTV coverage in 98% of the treatments, translational and rotational corrections up to 10° were required for the 0-mm margin plans, whereas translational and rotational corrections up to 5° were required for the 3-mm margin plans. Rectum and bladder were spared considerably better in the 0-mm margin plans. Adaptive timing did not improve delivered dose. Conclusions: Intrafraction prostate motion substantially affected the delivered dose but was compensated for effectively by robot corrections using a time interval of 60 to 180 seconds. A 0-mm margin required larger extents of additional rotational corrections than a 3-mm

  10. Ultimate intra-wafer critical dimension uniformity control by using lithography and etch tool corrections

    Science.gov (United States)

    Kubis, Michael; Wise, Rich; Reijnen, Liesbeth; Viatkina, Katja; Jaenen, Patrick; Luca, Melisa; Mernier, Guillaume; Chahine, Charlotte; Hellin, David; Kam, Benjamin; Sobieski, Daniel; Vertommen, Johan; Mulkens, Jan; Dusa, Mircea; Dixit, Girish; Shamma, Nader; Leray, Philippe

    2016-03-01

    With shrinking design rules, the overall patterning requirements are getting aggressively tighter. For the 7-nm node and below, allowable CD uniformity variations are entering the Angstrom region (ref [1]). Optimizing inter- and intra-field CD uniformity of the final pattern requires a holistic tuning of all process steps. In previous work, CD control with either litho cluster or etch tool corrections has been discussed. Today, we present a holistic CD control approach, combining the correction capability of the etch tool with the correction capability of the exposure tool. The study is done on 10-nm logic node wafers, processed with a test vehicle stack patterning sequence. We include wafer-to-wafer and lot-to-lot variation and apply optical scatterometry to characterize the fingerprints. Making use of all available correction capabilities (lithography and etch), we investigated single application of exposure tool corrections and of etch tool corrections as well as combinations of both to reach the lowest CD uniformity. Results of the final pattern uniformity based on single and combined corrections are shown. We conclude on the application of this holistic lithography and etch optimization to 7nm High-Volume manufacturing, paving the way to ultimate within-wafer CD uniformity control.
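
    A conceptual sketch of the co-optimization idea: split a measured CD fingerprint between an exposure-tool knob and an etch-tool knob by least squares. The synthetic data and the two actuator models below (an intra-field tilt for the scanner and a radial polynomial for the etch chamber) are invented for illustration and do not represent the actual correction capabilities of either tool.

```python
# Least-squares split of a synthetic CD fingerprint between two correction knobs.
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(0, 150, 400)                   # wafer radius of each measurement site [mm]
fx = rng.uniform(-13, 13, 400)                 # intra-field x position [mm]
cd = 0.8 * (r / 150) ** 2 + 0.05 * fx + rng.normal(0, 0.05, 400)   # synthetic CD error [nm]

# Design matrix: [offset, intra-field tilt (scanner knob), r^2 term (etch radial knob)]
A = np.c_[np.ones_like(r), fx, (r / 150) ** 2]
coef, *_ = np.linalg.lstsq(A, cd, rcond=None)
residual = cd - A @ coef
print("fitted knob settings:", np.round(coef, 3))
print("3-sigma CD residual after corrections: %.3f nm" % (3 * residual.std()))
```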

  11. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  12. Determining spherical lens correction for astronaut training underwater.

    Science.gov (United States)

    Porter, Jason; Gibson, C Robert; Strauss, Samuel

    2011-09-01

    To develop a model that will accurately predict the distance spherical lens correction needed to be worn by National Aeronautics and Space Administration astronauts while training underwater. The replica space suit's helmet contains curved visors that induce refractive power when submerged in water. Anterior surface powers and thicknesses were measured for the helmet's protective and inside visors. The impact of each visor on the helmet's refractive power in water was analyzed using thick lens calculations and Zemax optical design software. Using geometrical optics approximations, a model was developed to determine the optimal distance spherical power needed to be worn underwater based on the helmet's total induced spherical power underwater and the astronaut's manifest spectacle plane correction in air. The validity of the model was tested using data from both eyes of 10 astronauts who trained underwater. The helmet's visors induced a total power of -2.737 D when placed underwater. The required underwater spherical correction (F_W) was linearly related to the spectacle plane spherical correction in air (F_Air): F_W = F_Air + 2.356 D. The mean magnitude of the difference between the actual correction worn underwater and the calculated underwater correction was 0.20 ± 0.11 D. The actual and calculated values were highly correlated (r = 0.971), with 70% of eyes showing only a small difference in magnitude between the two values. The model accurately predicts the actual values worn underwater and can be applied (more generally) to determine a suitable spectacle lens correction to be worn behind other types of masks when submerged underwater.
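
    The linear relation reported above can be written directly as a small helper; it is only a transcription of the published fit (visors inducing -2.737 D underwater) and ignores vertex-distance and cylinder effects.

```python
# F_W = F_Air + 2.356 D, per the model described in the abstract.
def underwater_sphere(f_air_diopters: float) -> float:
    """Distance spherical power to wear underwater given the in-air spectacle correction."""
    return f_air_diopters + 2.356

print(underwater_sphere(-3.00))   # an astronaut wearing -3.00 D in air -> about -0.64 D underwater
```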

  13. Problems pilots face involving wind shear

    Science.gov (United States)

    Melvin, W. W.

    1977-01-01

    Educating pilots and the aviation industry about wind shears presents a major problem associated with this meteorological phenomenon. The pilot's second most pressing problem is the need for a language to discuss wind shear encounters with other pilots so that the reaction of the aircraft to the wind shear encounter can be accurately described. Another problem is the flight director which gives a centered pitch command for a given angular displacement from the glide slope. It was suggested that they should instead be called flight path command and should not center unless the aircraft is actually correcting to the flight path.

  14. THE PROBLEM OF ARCHITECTURE DESIGN IN A CONTEXT OF PARTIALLY KNOWN REQUIREMENTS OF COMPLEX WEB BASED APPLICATION "KSU FEEDBACK"

    Directory of Open Access Journals (Sweden)

    A. Spivakovsky

    2013-03-01

    Full Text Available This paper considers the problem of designing a flexible architecture for critical parts of the “KSU Feedback” application that do not have full requirements or a clearly defined scope. Recommended practices for solving this type of task are investigated, and it is shown how they are applied in the “KSU Feedback” architecture.

  15. Some recoil corrections to the hydrogen hyperfine splitting

    International Nuclear Information System (INIS)

    Bodwin, G.T.; Yennie, D.R.

    1988-01-01

    We compute all of the recoil corrections to the ground-state hyperfine splitting in hydrogen, with the exception of the proton polarizability, that are required to achieve an accuracy of 1 ppm. Our approach includes a unified treatment of the corrections that would arise from a pointlike Dirac proton and the corrections that are due to the proton's non-QED structure. Our principal new results are a calculation of the relative order-α²(m_e/m_p) contributions that arise from the proton's anomalous magnetic moment and a systematic treatment of the relative order-α(m_e/m_p) contributions that arise from form-factor corrections. In the former calculation we introduce some new technical improvements and are able to evaluate all of the expressions analytically. In the latter calculation, which has been the subject of previous investigations by other authors, we express the form-factor corrections in terms of two-dimensional integrals that are convenient for numerical evaluation and present numerical results for the commonly used dipole parametrization of the form factors. Because we use a parametrization of the form factors that differs slightly from the ones used in previous work, our numerical results are shifted from older ones by a small amount

  16. TH-AB-201-08: Ion Chamber Dose Measurements - Problems with the Temperature-Pressure Correction Factor

    Energy Technology Data Exchange (ETDEWEB)

    Bourgouin, A [Carleton University, Ottawa, Ontario (Canada); McEwen, M [National Research Council, Ottawa, ON (Canada)

    2016-06-15

    Purpose: To investigate the behavior of ionization chambers over a wide pressure range. Methods: Three cylindrical and two parallel-plate designs of ion chamber were investigated. The ion chambers were placed in a vessel where the pressure was varied from atmospheric (101 kPa) down to 5 kPa. Measurements were made using 60Co and high-energy electron beams. The pressure was measured to better than 0.1% and multiple data sets were obtained for each chamber at both polarities to investigate pressure cycling and dependency on the sign of the charge collected. Results: For all types of chamber, the ionization current, corrected using the standard P_TP factor, showed a similar behaviour. Deviations from the standard theory were generally small for Co-60 but very significant for electron beams, up to 20% below P = 10 kPa. The effect was found to be always larger when collecting negative charge, suggesting a dependence on free-electron collection. The most likely source of such electrons is low-energy electrons emitted from the electrodes. This signal would be independent of air pressure within the chamber cavity. The data was analyzed to extract this signal and it was found to be a non-negligible component of the ionization current at atmospheric pressure. In the case of the parallel plate chambers, the effect was approximately 0.25%. For the cylindrical chambers the effect was larger - up to 1.2% - and dependent on the chamber type, which would be consistent with electron emission from different wall materials. For the electron beams, the correction factor was dependent on the electron energy and approximately double that observed in 60Co. Conclusion: Measurements have indicated significant deviations of the standard pressure correction that are consistent with electron emission from chamber electrodes. This has implications for both primary standard and reference ion chamber-based dosimetry.
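
    For reference, the standard temperature-pressure correction factor that the abstract examines can be written as below. The reference conditions used here (22 °C, 101.325 kPa) follow a common convention but vary between protocols, so treat the constants as an illustrative assumption.

```python
# Standard temperature-pressure correction factor P_TP for ion chamber readings.
def p_tp(temp_c: float, pressure_kpa: float,
         ref_temp_c: float = 22.0, ref_pressure_kpa: float = 101.325) -> float:
    """Scale measured ionization to reference air density: (273.2+T)/(273.2+T_ref) * P_ref/P."""
    return (273.2 + temp_c) / (273.2 + ref_temp_c) * (ref_pressure_kpa / pressure_kpa)

# At 5 kPa the factor becomes very large -- the regime where the abstract reports
# the standard correction breaking down.
print(p_tp(22.0, 101.325), p_tp(22.0, 5.0))
```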

  17. Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System

    Science.gov (United States)

    Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu

    2017-05-01

    This paper points out that it is wrong to take the outlet flue gas temperature of the low-temperature economizer as the exhaust gas temperature in boiler efficiency calculations based on GB10184-1988. Furthermore, it proposes a new correction method that decomposes the low-temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This method makes the boiler efficiency calculation more concise, with no air heater correction, and provides a useful reference for dealing with this kind of problem correctly.

  18. Quantum algorithms and quantum maps - implementation and error correction

    International Nuclear Information System (INIS)

    Alber, G.; Shepelyansky, D.

    2005-01-01

    Full text: We investigate the dynamics of the quantum tent map under the influence of errors and explore the possibilities of quantum error correcting methods for the purpose of stabilizing this quantum algorithm. It is known that static but uncontrollable inter-qubit couplings between the qubits of a quantum information processor lead to a rapid Gaussian decay of the fidelity of the quantum state. We present a new error correcting method which slows down this fidelity decay to a linear-in-time exponential one. One of its advantages is that it does not require redundancy so that all physical qubits involved can be used for logical purposes. We also study the influence of decoherence due to spontaneous decay processes which can be corrected by quantum jump-codes. It is demonstrated how universal encoding can be performed in these code spaces. For this purpose we discuss a new entanglement gate which can be used for lowest level encoding in concatenated error-correcting architectures. (author)

  19. Ka-Band ARM Zenith Radar Corrections Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Karen [Brookhaven National Lab. (BNL), Upton, NY (United States); Toto, Tami [Brookhaven National Lab. (BNL), Upton, NY (United States); Giangrande, Scott [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2017-10-15

    The KAZRCOR Value-Added Product (VAP) performs several corrections to the ingested KAZR moments and also creates a significant detection mask for each radar mode. The VAP computes gaseous attenuation as a function of time and radial distance from the radar antenna, based on ambient meteorological observations, and corrects observed reflectivities for that effect. KAZRCOR also dealiases mean Doppler velocities to correct velocities whose magnitudes exceed the radar’s Nyquist velocity. Input KAZR data fields are passed through into the KAZRCOR output files, in their native time and range coordinates. Complementary corrected reflectivity and velocity fields are provided, along with a mask of significant detections and a number of data quality flags. This report covers the KAZRCOR VAP as applied to the original KAZR radars and the upgraded KAZR2 radars. Currently there are two separate code bases for the different radar versions, but once KAZR and KAZR2 data formats are harmonized, only a single code base will be required.
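
    A minimal sketch of the kind of Doppler velocity dealiasing the VAP performs: fold the observed velocity back into the correct Nyquist interval by comparing it with a nearby reference value. This is a generic unfolding rule, not the KAZRCOR implementation.

```python
# Unfold an aliased Doppler velocity by adding the multiple of 2*v_nyquist
# that brings it closest to a reference velocity (e.g. the previous gate).
import numpy as np

def dealias(v_obs, v_ref, v_nyquist):
    n = np.round((v_ref - v_obs) / (2.0 * v_nyquist))
    return v_obs + 2.0 * v_nyquist * n

# A true velocity of +7 m/s observed with a 6 m/s Nyquist velocity aliases to -5 m/s;
# with a nearby reference of +6.5 m/s it unfolds back to +7 m/s.
print(dealias(-5.0, 6.5, 6.0))
```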

  20. Corrective Action Investigation Plan for Corrective Action Unit 487: Thunderwell Site, Tonopah Test Range, Nevada (Rev. No.: 0, January 2001)

    Energy Technology Data Exchange (ETDEWEB)

    DOE/NV

    2001-01-02

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, Nevada Operations Office's (DOE/NV's) approach to collect the data necessary to evaluate corrective action alternatives (CAAs) appropriate for the closure of Corrective Action Unit (CAU) 487, Thunderwell Site, Tonopah Test Range (TTR), Nevada, under the Federal Facility Agreement and Consent Order. Corrective Action Unit 487 consists of a single Corrective Action Site (CAS), RG 26-001-RGRV, Thunderwell Site. The site is located in the northwest portion of the TTR, Nevada, approximately five miles northwest of the Area 3 Control Point and closest to the Cactus Flats broad basin. Historically, Sandia National Laboratories in New Mexico used CAU 487 in the early to mid-1960s for a series of high explosive tests detonated at the bottom of large cylindrical steel tubes. Historical photographs indicate that debris from these tests and subsequent operations may have been scattered and buried throughout the site. A March 2000 walk-over survey and a July 2000 geophysical survey indicated evidence of buried and surface debris in dirt mounds and areas throughout the site; however, a radiological drive-over survey also performed in July 2000 indicated that no radiological hazards were identified at this site. Based on site history, the scope of this plan is to resolve the problem statement identified during the Data Quality Objectives process that detonation activities at this CAU site may have resulted in the release of contaminants of concern into the surface/subsurface soil including total volatile and total semivolatile organic compounds, total Resource Conservation and Recovery Act metals, radionuclides, total petroleum hydrocarbons, and high explosives. Therefore, the scope of corrective action field investigation will involve excavation, drilling, and extensive soil sampling and analysis activities to determine the extent (if any) of both the lateral and vertical contamination

  1. Unsolved problems in number theory

    CERN Document Server

    Guy, Richard K

    1994-01-01

    Unsolved Problems in Number Theory contains discussions of hundreds of open questions, organized into 185 different topics. They represent numerous aspects of number theory and are organized into six categories: prime numbers, divisibility, additive number theory, Diophantine equations, sequences of integers, and miscellaneous. To prevent repetition of earlier efforts or duplication of previously known results, an extensive and up-to-date collection of references follows each problem. In the second edition, not only has extensive new material been added, but corrections and additions have been included throughout the book.

  2. Assessing the impact of representational and contextual problem features on student use of right-hand rules

    Science.gov (United States)

    Kustusch, Mary Bridget

    2016-06-01

    Students in introductory physics struggle with vector algebra and these challenges are often associated with contextual and representational features of the problems. Performance on problems about cross product direction is particularly poor and some research suggests that this may be primarily due to misapplied right-hand rules. However, few studies have had the resolution to explore student use of right-hand rules in detail. This study reviews literature in several disciplines, including spatial cognition, to identify ten contextual and representational problem features that are most likely to influence performance on problems requiring a right-hand rule. Two quantitative measures of performance (correctness and response time) and two qualitative measures (methods used and type of errors made) were used to explore the impact of these problem features on student performance. Quantitative results are consistent with expectations from the literature, but reveal that some features (such as the type of reasoning required and the physical awkwardness of using a right-hand rule) have a greater impact than others (such as whether the vectors are placed together or separate). Additional insight is gained by the qualitative analysis, including identifying sources of difficulty not previously discussed in the literature and revealing that the use of supplemental methods, such as physically rotating the paper, can mitigate errors associated with certain features.

  3. Reorganizing Neural Network System for Two Spirals and Linear Low-Density Polyethylene Copolymer Problems

    Directory of Open Access Journals (Sweden)

    G. M. Behery

    2009-01-01

    Full Text Available This paper presents an automatic neural network (NN) system that can simulate and predict many applied problems. If the required performance is not reached, the system architectures are automatically reorganized and the experimental process starts again; this continues until the required performance is obtained. The system is first applied and tested on the two-spiral problem, where it shows excellent generalization performance, classifying all points of the two spirals correctly. After that, it is applied and tested on the shear stress and pressure drop problem across a short orifice die as a function of shear rate at different mean pressures for a linear low-density polyethylene copolymer (LLDPE) at 190°C. The system shows good agreement with the experimental data in both cases: shear stress and pressure drop. The proposed system is also designed to simulate distributions not presented in the training set, and it predicted and matched them effectively.
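
    To make the benchmark concrete, the sketch below generates a two-spirals data set and fits a fixed-architecture multilayer perceptron with scikit-learn; it is a baseline illustration only, not the paper's self-reorganizing system, and the spiral parameters and network size are arbitrary choices.

```python
# Two-spirals benchmark with a fixed-architecture MLP (illustrative baseline only).
import numpy as np
from sklearn.neural_network import MLPClassifier

def two_spirals(n_per_class=500, noise=0.02, rng=np.random.default_rng(0)):
    t = np.linspace(0.25, 3.0 * np.pi, n_per_class)
    x1 = np.c_[t * np.cos(t), t * np.sin(t)] + rng.normal(0, noise, (n_per_class, 2))
    x2 = -x1 + rng.normal(0, noise, (n_per_class, 2))     # second spiral, rotated 180 degrees
    X = np.vstack([x1, x2])
    y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]
    return X, y

X, y = two_spirals()
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))   # close to 1.0 when all points are classified correctly
```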

  4. Correction of Cadastral Error: Either the Right or Obligation of the Person Concerned?

    Directory of Open Access Journals (Sweden)

    Magdenko A. Y.

    2014-07-01

    Full Text Available The article is devoted to the institute of cadastral error. Some questions and problems of cadastral error corrections are considered. The material is based on current legislation and judicial practice.

  5. Corrective Action Investigation Plan for Corrective Action Unit 542: Disposal Holes, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Laura Pastor

    2006-01-01

    locate previously unidentified features at CASs 03-20-07, 03-20-09, 03-20-10, 03-20-11, and 06-20-03. (4) Perform field screening. (5) Collect and submit environmental samples for laboratory analysis to determine whether contaminants of concern (COCs) are present. (6) Collect quality control samples for laboratory analyses to evaluate the performance of measurement systems and controls based on the requirements of the data quality indicators. (7) If COCs are present at the surface/near surface (< 15 feet below ground surface), collect additional step-out samples to define the extent of the contamination. (8) If COCs are present in the subsurface (i.e., base of disposal hole), collect additional samples to define the vertical extent of contamination. A conservative use restriction will be used to encompass the lateral extent of subsurface contamination. (9) Stake or flag sample locations in the field, and record coordinates through global positioning systems surveying. (10) Collect samples of investigation-derived waste, as needed, for waste management and minimization purposes. This Corrective Action Investigation Plan has been developed in accordance with the ''Federal Facility Agreement and Consent Order'' that was agreed to by the State of Nevada, the U.S. Department of Energy, and the U.S. Department of Defense. Under the ''Federal Facility Agreement and Consent Order'', this Corrective Action Investigation Plan will be submitted to the Nevada Division of Environmental Protection for approval. Field work will be conducted following approval of the plan

  6. Attenuation correction for the HRRT PET-scanner using transmission scatter correction and total variation regularization.

    Science.gov (United States)

    Keller, Sune H; Svarer, Claus; Sibomana, Merence

    2013-09-01

    In the standard software for the Siemens high-resolution research tomograph (HRRT) positron emission tomography (PET) scanner, the most commonly used segmentation in the μ-map reconstruction for human brain scans is maximum a posteriori for transmission (MAP-TR). Bias in the lower cerebellum and pons in HRRT brain images has been reported. The two main sources of the problem with MAP-TR are poor bone/soft tissue segmentation below the brain and overestimation of bone mass in the skull. We developed the new transmission processing with total variation (TXTV) method, which introduces scatter correction in the μ-map reconstruction and total variation filtering to the transmission processing. Comparing MAP-TR and the new TXTV with gold standard CT-based attenuation correction, we found that TXTV has less bias than MAP-TR. We also compared images acquired at the HRRT scanner using TXTV to GE Advance scanner images and found high quantitative correspondence. TXTV has been used to reconstruct more than 4000 HRRT scans at seven different sites with no reports of biases. TXTV-based reconstruction is recommended for human brain scans on the HRRT.

  7. An improved correlated sampling method for calculating correction factor of detector

    International Nuclear Information System (INIS)

    Wu Zhen; Li Junli; Cheng Jianping

    2006-01-01

    In the case of a small detector lying inside a bulk medium, two problems arise in calculating the correction factors of the detector. One is that the detector is too small for enough particles to reach it and collide within it; the other is that the ratio of the two quantities cannot be computed accurately enough. The method discussed in this paper, which combines correlated sampling with modified particle-collision auto-importance sampling and has been implemented on the MCNP-4C platform, can solve these two problems. In addition, three other variance reduction techniques are each combined with correlated sampling to calculate the correction factors for a simple model of a detector. The results show that, although all of the variance reduction techniques combined with correlated sampling improve the calculating efficiency, the method combining modified particle-collision auto-importance sampling with correlated sampling is the most efficient one. (authors)

  8. Is radioactive mixed waste packaging and transportation really a problem

    International Nuclear Information System (INIS)

    McCall, D.L.; Calihan, T.W. III.

    1992-01-01

    Recently, there has been significant concern expressed in the nuclear community over the packaging and transportation of radioactive mixed waste under US Department of Transportation regulation. This concern has grown more intense over the last 5 to 10 years. Generators and regulators have realized that much of the waste shipped as "low-level radioactive waste" was in fact "radioactive mixed waste" and that these wastes pose unique transportation and disposal problems. Radioactive mixed wastes must, therefore, be correctly identified and classed for shipment. They must also be packaged, marked, labeled, and otherwise prepared to ensure safe transportation and meet applicable storage and disposal requirements, when established. This paper discusses regulations applicable to the packaging and transportation of radioactive mixed waste and identifies effective methods that waste shippers can adopt to meet the current transportation requirements. The paper includes a characterization and description of the waste, authorized packaging, and hazard communication requirements during transportation. Case studies will be used to assist generators in understanding mixed waste shipment requirements and clarify the requirements necessary to establish a waste shipment program. Although management and disposal of radioactive mixed waste is clearly a critical issue, packaging and transportation of these waste materials is well defined in existing US Department of Transportation hazardous material regulations

  9. NLO corrections to the photon impact factor: Combining real and virtual corrections

    International Nuclear Information System (INIS)

    Bartels, J.; Colferai, D.; Kyrieleis, A.; Gieseke, S.

    2002-08-01

    In this third part of our calculation of the QCD NLO corrections to the photon impact factor we combine our previous results for the real corrections with the singular pieces of the virtual corrections and present finite analytic expressions for the quark-antiquark-gluon intermediate state inside the photon impact factor. We begin with a list of the infrared singular pieces of the virtual correction, obtained in the first step of our program. We then list the complete results for the real corrections (longitudinal and transverse photon polarization). In the next step we defined, for the real corrections, the collinear and soft singular regions and calculate their contributions to the impact factor. We then subtract the contribution due to the central region. Finally, we combine the real corrections with the singular pieces of the virtual corrections and obtain our finite results. (orig.)

  10. Corrective Action Decision Document/Closure Report for Corrective Action Unit 504: 16a-Tunnel Muckpile, Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Environmental Restoration

    2010-03-15

    This Corrective Action Decision Document (CADD)/Closure Report (CR) was prepared by the Defense Threat Reduction Agency (DTRA) for Corrective Action Unit (CAU) 504, 16a-Tunnel Muckpile. This CADD/CR is consistent with the requirements of the Federal Facility Agreement and Consent Order (FFACO) agreed to by the State of Nevada; U.S. Department of Energy (DOE), Environmental Management; U.S. Department of Defense; and DOE, Legacy Management. Corrective Action Unit 504 is comprised of four Corrective Action Sites (CASs): • 16-06-01, Muckpile • 16-23-01, Contaminated Burial Pit • 16-23-02, Contaminated Area • 16-99-01, Concrete Construction Waste Corrective Action Site 16-23-01 is not a burial pit; it is part of CAS 16-06-01. Therefore, there is not a separate data analysis and assessment for CAS 16-23-01; it is included as part of the assessment for CAS 16-06-01. In addition to these CASs, the channel between CAS 16-23-02 (Contaminated Area) and Mid Valley Road was investigated with walk-over radiological surveys and soil sampling using hand tools. The purpose of this CADD/CR is to provide justification and documentation supporting the recommendation for closure in place with use restrictions for CAU 504. A CADD was originally submitted for CAU 504 and approved by the Nevada Division of Environmental Protection (NDEP). However, following an agreement between NDEP, DTRA, and the DOE, National Nuclear Security Administration Nevada Site Office to change to a risk-based approach for assessing the corrective action investigation (CAI) data, NDEP agreed that the CAU could be re-evaluated using the risk-based approach and a CADD/CR prepared to close the site.

  11. Correct-by-construction approaches for SoC design

    CERN Document Server

    Sinha, Roopak; Basu, Samik

    2013-01-01

    This book describes an approach for designing Systems-on-Chip such that the system meets precise mathematical requirements. The methodologies presented enable embedded systems designers to reuse intellectual property (IP) blocks from existing designs in an efficient, reliable manner, automatically generating correct SoCs from multiple, possibly mismatching, components.

  12. Problem for theories with spontaneous CP violation and natural flavor conservation

    International Nuclear Information System (INIS)

    Sanda, A.I.

    1981-01-01

    Using a vacuum-saturation approximation, Vainshtein, Zakharov, and Shifman have shown that L = L_QCD + L_EW can explain the ΔI = 1/2 rule of strange-particle decays. Requiring L_EW to possess spontaneous CP violation and natural flavor conservation, we estimate ε'/ε using a similar approximation. We show that a very crude computation results in a very stringent limit 0.050 > |ε'/ε| > 0.048. This estimate is in conflict with the experimental measurement |ε'/ε| = 0.003 ± 0.015. This is a problem for theories with spontaneous CP violation and natural flavor conservation if the above understanding of the ΔI = 1/2 rule is correct

  13. Evaluation of thermal network correction program using test temperature data

    Science.gov (United States)

    Ishimoto, T.; Fink, L. C.

    1972-01-01

    An evaluation process to determine the accuracy of a computer program for thermal network correction is discussed. The evaluation is required since factors such as inaccuracies of temperatures, insufficient number of temperature points over a specified time period, lack of one-to-one correlation between temperature sensor and nodal locations, and incomplete temperature measurements are not present in the computer-generated information. The mathematical models used in the evaluation are those that describe a physical system composed of both a conventional and a heat pipe platform. A description of the models used, the results of the evaluation of the thermal network correction, and input instructions for the thermal network correction program are presented.

  14. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    Science.gov (United States)

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues that draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  15. Technical fine-tuning problem in renormalized perturbation theory

    International Nuclear Information System (INIS)

    Foda, O.E.

    1983-01-01

    The technical - as opposed to physical - fine tuning problem, i.e. the stability of tree-level gauge hierarchies at higher orders in renormalized perturbation theory, in a number of different models is studied. These include softly-broken supersymmetric models, and non-supersymmetric ones with a hierarchy of spontaneously-broken gauge symmetries. The models are renormalized using the BPHZ prescription, with momentum subtractions. Explicit calculations indicate that the tree-level hierarchy is not upset by the radiative corrections, and consequently no further fine-tuning is required to maintain it. Furthermore, this result is shown to run counter to that obtained via Dimensional Renormalization, (the only scheme used in previous literature on the subject). The discrepancy originates in the inherent local ambiguity in the finite parts of subtracted Feynman integrals. Within fully-renormalized perturbation theory the answer to the technical fine-tuning question (in the sense of whether the radiative corrections will ''readily'' respect the tree level gauge hierarchy or not) is contingent on the renormalization scheme used to define the model at the quantum level, rather than on the model itself. In other words, the need for fine-tuning, when it arises, is an artifact of the application of a certain class of renormalization schemes

  16. Technical fine-tuning problem in renormalized perturbation theory

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.E.

    1983-01-01

    The technical - as opposed to physical - fine tuning problem, i.e. the stability of tree-level gauge hierarchies at higher orders in renormalized perturbation theory, in a number of different models is studied. These include softly-broken supersymmetric models, and non-supersymmetric ones with a hierarchy of spontaneously-broken gauge symmetries. The models are renormalized using the BPHZ prescription, with momentum subtractions. Explicit calculations indicate that the tree-level hierarchy is not upset by the radiative corrections, and consequently no further fine-tuning is required to maintain it. Furthermore, this result is shown to run counter to that obtained via Dimensional Renormalization, (the only scheme used in previous literature on the subject). The discrepancy originates in the inherent local ambiguity in the finite parts of subtracted Feynman integrals. Within fully-renormalized perturbation theory the answer to the technical fine-tuning question (in the sense of whether the radiative corrections will ''readily'' respect the tree level gauge hierarchy or not) is contingent on the renormalization scheme used to define the model at the quantum level, rather than on the model itself. In other words, the need for fine-tuning, when it arises, is an artifact of the application of a certain class of renormalization schemes.

  17. Arbitrary function generator for APS injector synchrotron correction magnets

    International Nuclear Information System (INIS)

    Despe, O.D.

    1991-01-01

    The APS injector synchrotron has eighty correction magnets around its circumference to provide the vernier field changes required for beam orbit correction during acceleration. The arbitrary function generator (AFG) design is based on scanning out encoded data from a semiconductor memory, a first-in-first-out (FIFO) device. The data input consists of a maximum of 20 correction values specified within the acceleration window. Additional points between these values are then linearly interpolated to create a uniformly spaced 1000 data-point function stored in the FIFO. Each point, encoded as a 3-bit value, is scanned out in synchronism with the injection pulse and used to clock the up/down counter driving the DAC. The DAC produces the analog reference voltage used to control the magnet current. 1 ref., 4 figs
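    A minimal sketch of the interpolation step described above, under assumed names (build_afg_table and the normalized breakpoint positions are illustrative, not part of the APS software): a handful of specified correction values is expanded into the uniformly spaced 1000-point table that the abstract says is stored in the FIFO.

```python
import numpy as np

def build_afg_table(breakpoints, values, n_points=1000):
    """Linearly interpolate sparse correction values onto a uniform grid.

    breakpoints : positions of the specified correction values within the
                  acceleration window, normalized to [0, 1] and increasing.
    values      : correction values at those positions (at most ~20).
    n_points    : length of the uniformly spaced output table.
    """
    grid = np.linspace(0.0, 1.0, n_points)
    return np.interp(grid, breakpoints, values)

# Example: five specified correction values expanded to a 1000-point table.
table = build_afg_table([0.0, 0.2, 0.5, 0.8, 1.0], [0.0, 1.5, 2.0, 1.0, 0.0])
```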

  18. Industrial Requirements for Thermodynamics and Transport Properties

    DEFF Research Database (Denmark)

    Hendriks, Eric; Kontogeorgis, Georgios; Dohrn, Ralf

    2010-01-01

    ... addressed to or written by industrial colleagues, are discussed initially. This provides the context of the survey and material with which the results of the survey can be compared. The results of the survey have been divided into the themes: data, models, systems, properties, education, and collaboration ... the direction for future development. The use of new methods, such as SAFT, is increasing, but they are not yet in position to replace traditional methods such as cubic equations of state (especially in oil and gas industry) and the UNIFAC group contribution approach. A common problem with novel methods is lack of standardization, reference data, and correct and transparent implementations, especially in commercially available simulation programs. The survey indicates a great variety of systems where further work is required. For instance, for electrolyte systems better models are needed, capable of describing all types...

  19. Corrective Action Plan for Corrective Action Unit 254: Area 25 R-MAD Decontamination Facility Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Obi, C.M.

    2000-01-01

    The Area 25 Reactor Maintenance, Assembly, and Disassembly Decontamination Facility is identified in the Federal Facility Agreement and Consent Order (FFACO) as Corrective Action Unit (CAU) 254. CAU 254 is located in Area 25 of the Nevada Test Site and consists of a single Corrective Action Site, CAS 25-23-06. CAU 254 will be closed in accordance with the FFACO of 1996. CAU 254 was used primarily to perform radiological decontamination and consists of Building 3126, two outdoor decontamination pads, and surrounding soil within an existing perimeter fence. The site was used to decontaminate nuclear rocket test-car hardware and tooling from the early 1960s through the early 1970s, and to decontaminate a military tank in the early 1980s. The site characterization results indicate that, in places, the surficial soil and building materials exceed clean-up criteria for organic compounds, metals, and radionuclides. Closure activities are expected to generate waste streams consisting of nonhazardous construction waste, petroleum hydrocarbon waste, hazardous waste, low-level radioactive waste, and mixed waste. Some of the wastes exceed land disposal restriction limits and will require off-site treatment before disposal. The recommended corrective action was revised to Alternative 3, "Unrestricted Release Decontamination, Verification Survey, and Dismantle Building 3126," in an addendum to the Corrective Action Decision Document

  20. 75 FR 34527 - Volkswagen Petition for Exemption From the Vehicle Theft Prevention Standard; Correction

    Science.gov (United States)

    2010-06-17

    ... for Exemption From the Vehicle Theft Prevention Standard; Correction AGENCY: National Highway Traffic... the Theft Prevention Standard. This document corrects the model year of the new Volkswagen vehicle... effective in reducing and deterring motor vehicle theft as compliance with the parts marking requirements of...

  1. An Illustration of the Corrective Action Process, The Corrective Action Management Unit at Sandia National Laboratories/New Mexico

    International Nuclear Information System (INIS)

    Irwin, M.; Kwiecinski, D.

    2002-01-01

    Corrective Action Management Units (CAMUs) were established by the Environmental Protection Agency (EPA) to streamline the remediation of hazardous waste sites. Streamlining involved providing cost saving measures for the treatment, storage, and safe containment of the wastes. To expedite cleanup and remove disincentives, EPA designed 40 CFR 264 Subpart S to be flexible. At the heart of this flexibility are the provisions for CAMUs and Temporary Units (TUs). CAMUs and TUs were created to remove cleanup disincentives resulting from other Resource Conservation Recovery Act (RCRA) hazardous waste provisions--specifically, RCRA land disposal restrictions (LDRs) and minimum technology requirements (MTRs). Although LDR and MTR provisions were not intended for remediation activities, LDRs and MTRs apply to corrective actions because hazardous wastes are generated. However, management of RCRA hazardous remediation wastes in a CAMU or TU is not subject to these stringent requirements. The CAMU at Sandia National Laboratories in Albuquerque, New Mexico (SNL/NM) was proposed through an interactive process involving the regulators (EPA and the New Mexico Environment Department), DOE, SNL/NM, and stakeholders. The CAMU at SNL/NM has been accepting waste from the nearby Chemical Waste Landfill remediation since January of 1999. During this time, a number of unique techniques have been implemented to save costs, improve health and safety, and provide the best value and management practices. This presentation will take the audience through the corrective action process implemented at the CAMU facility, from the selection of the CAMU site to permitting and construction, waste management, waste treatment, and final waste placement. The presentation will highlight the key advantages that CAMUs and TUs offer in the corrective action process. These advantages include yielding a practical approach to regulatory compliance, expediting efficient remediation and site closure, and realizing

  2. Development of a Preventive HIV Vaccine Requires Solving Inverse Problems Which Is Unattainable by Rational Vaccine Design

    Directory of Open Access Journals (Sweden)

    Marc H. V. Van Regenmortel

    2018-01-01

    Full Text Available Hypotheses and theories are essential constituents of the scientific method. Many vaccinologists are unaware that the problems they try to solve are mostly inverse problems that consist in imagining what could bring about a desired outcome. An inverse problem starts with the result and tries to guess what are the multiple causes that could have produced it. Compared to the usual direct scientific problems that start with the causes and derive or calculate the results using deductive reasoning and known mechanisms, solving an inverse problem uses a less reliable inductive approach and requires the development of a theoretical model that may have different solutions or none at all. Unsuccessful attempts to solve inverse problems in HIV vaccinology by reductionist methods, systems biology and structure-based reverse vaccinology are described. The popular strategy known as rational vaccine design is unable to solve the multiple inverse problems faced by HIV vaccine developers. The term “rational” is derived from “rational drug design” which uses the 3D structure of a biological target for designing molecules that will selectively bind to it and inhibit its biological activity. In vaccine design, however, the word “rational” simply means that the investigator is concentrating on parts of the system for which molecular information is available. The economist and Nobel laureate Herbert Simon introduced the concept of “bounded rationality” to explain why the complexity of the world economic system makes it impossible, for instance, to predict an event like the financial crash of 2007–2008. Humans always operate under unavoidable constraints such as insufficient information, a limited capacity to process huge amounts of data and a limited amount of time available to reach a decision. Such limitations always prevent us from achieving the complete understanding and optimization of a complex system that would be needed to achieve a truly

  3. Analysis of the Impact on Creative Problem Solving in an Organization

    Directory of Open Access Journals (Sweden)

    Jasmina Žnideršič

    2013-10-01

    Research Question (RQ): What affects creative problem solving in an organization? Purpose: The aim is to obtain a better picture, using statistical analysis, of the effects of workers' creativity in problem solving in an organization. Method: The data was obtained by interviewing employees and using nonparametric tests (χ2 test, Fisher test and χ2 test with Yates correction) for data analysis. Results: The research results showed that fear of failure does not affect creative problem solving, nor do creativity tests encourage workers towards greater creativity, but prior knowledge and experience do influence workers' creative problem solving. Organization: Results of this research study will provide managers in an organization a clearer picture of employees' views, whether there is dominance of routine work, poorly stimulated creativity, and other factors that affect their creativity. Society: The opinion of workers in an organization can encourage other organizations to explore the impact on the creativity of their employees. Originality: Because the data were obtained from a small organization, the results of this research study can only refer to the setting it researched. Limitations/Future Research: To obtain a wider picture of the effects on creativity, a greater number of employees would need to be included, and other factors would need to be analysed. This research study took place in an organization where creativity and problem solving are not required.
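    A minimal sketch of the nonparametric tests named in the Method section, applied to a hypothetical 2×2 contingency table (the counts are invented, not the study's data); in SciPy, chi2_contingency applies Yates' continuity correction when correction=True.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = prior experience (yes/no),
# columns = creative solution reported (yes/no).
table = [[18, 7],
         [9, 16]]

chi2, p_plain, dof, _ = chi2_contingency(table, correction=False)      # plain chi-square
chi2_yates, p_yates, _, _ = chi2_contingency(table, correction=True)   # Yates' continuity correction
odds_ratio, p_fisher = fisher_exact(table)                             # Fisher's exact test

print(f"chi2={chi2:.2f} (p={p_plain:.3f}), "
      f"Yates chi2={chi2_yates:.2f} (p={p_yates:.3f}), Fisher p={p_fisher:.3f}")
```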

  4. Corrective Action Investigation Plan for Corrective Action Unit 487: Thunderwell Site, Tonopah Test Range, Nevada (Rev. No.: 0, January 2001); TOPICAL

    International Nuclear Information System (INIS)

    2001-01-01

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, Nevada Operations Office's (DOE/NV's) approach to collect the data necessary to evaluate corrective action alternatives (CAAs) appropriate for the closure of Corrective Action Unit (CAU) 487, Thunderwell Site, Tonopah Test Range (TTR), Nevada, under the Federal Facility Agreement and Consent Order. Corrective Action Unit 487 consists of a single Corrective Action Site (CAS), RG 26-001-RGRV, Thunderwell Site. The site is located in the northwest portion of the TTR, Nevada, approximately five miles northwest of the Area 3 Control Point and closest to the Cactus Flats broad basin. Historically, Sandia National Laboratories in New Mexico used CAU 487 in the early to mid-1960s for a series of high explosive tests detonated at the bottom of large cylindrical steel tubes. Historical photographs indicate that debris from these tests and subsequent operations may have been scattered and buried throughout the site. A March 2000 walk-over survey and a July 2000 geophysical survey indicated evidence of buried and surface debris in dirt mounds and areas throughout the site; however, a radiological drive-over survey also performed in July 2000 indicated that no radiological hazards were identified at this site. Based on site history, the scope of this plan is to resolve the problem statement identified during the Data Quality Objectives process that detonation activities at this CAU site may have resulted in the release of contaminants of concern into the surface/subsurface soil including total volatile and total semivolatile organic compounds, total Resource Conservation and Recovery Act metals, radionuclides, total petroleum hydrocarbons, and high explosives. Therefore, the scope of corrective action field investigation will involve excavation, drilling, and extensive soil sampling and analysis activities to determine the extent (if any) of both the lateral and vertical contamination and whether

  5. Error Correcting Codes I. Applications of Elementary Algebra to Information Theory. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 346.

    Science.gov (United States)

    Rice, Bart F.; Wilde, Carroll O.

    It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…

  6. Improvement of Nonlinearity Correction for BESIII ETOF Upgrade

    Science.gov (United States)

    Sun, Weijia; Cao, Ping; Ji, Xiaolu; Fan, Huanhuan; Dai, Hongliang; Zhang, Jie; Liu, Shubin; An, Qi

    2015-08-01

    An improved scheme to implement integral non-linearity (INL) correction of time measurements in the Beijing Spectrometer III Endcap Time-of-Flight (BESIII ETOF) upgrade system is presented in this paper. During the upgrade, multi-gap resistive plate chambers (MRPC) are introduced as ETOF detectors, which increases the total number of time measurement channels to 1728. The INL correction method adopted in BESIII TOF proved to be of limited use, because the sharply increased number of electronic channels required for reading out the detector strips severely degrades the system configuration efficiency. Furthermore, once installed into the spectrometer, BESIII TOF electronics do not support the TDCs' nonlinearity evaluation online. In this proposed method, INL data used for the correction algorithm are automatically imported from a non-volatile read-only memory (ROM) instead of from data acquisition software. This guarantees the real-time performance and system efficiency of the INL correction, especially for ETOF upgrades with a massive number of channels. In addition, a signal that is not synchronized to the system 41.65 MHz clock from BEPCII is sent to the frontend electronics (FEE) to simulate pseudo-random test pulses for the purpose of online nonlinearity evaluation. Test results show that the time measuring INL errors in one module with 72 channels can be corrected online and in real time.
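    A minimal sketch, under assumed names and file formats, of the table-lookup idea described above: per-bin INL corrections read from a non-volatile memory image and applied to raw TDC codes. This is an illustration only, not the BESIII ETOF firmware.

```python
import numpy as np

def load_inl_table(rom_path, n_bins=1024):
    """Read per-bin INL corrections (in units of TDC bins) from a ROM image (assumed float32)."""
    return np.fromfile(rom_path, dtype=np.float32, count=n_bins)

def correct_time(raw_bin, inl_table, bin_width_ps=25.0):
    """Subtract the INL correction for this raw bin and convert the result to picoseconds."""
    corrected_bin = raw_bin - inl_table[raw_bin]
    return corrected_bin * bin_width_ps
```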

  7. Area 2 Photo Skid Wastewater Pit corrective action decision document Corrective Action Unit Number 332: Part 1, and Closure report: Part 2

    International Nuclear Information System (INIS)

    1997-01-01

    The Area 2 Photo Skid Wastewater Pit, Corrective Action Site (CAS) Number 02-42-03, the only CAS in Corrective Action Unit (CAU) Number 332, has been identified as a source of unquantified, uncontrolled, and unpermitted wastewater discharge. The Photo Skid was used for photographic processing of film for projects related to weapons testing, using Kodak RA4 and GPX film processing facilities for black and white and color photographs. The CAU is located in Area 2 of the Nevada Test Site, Nye County, Nevada. The CAS consists of one unlined pit which received discharged photographic process wastewater from 1984 to 1991. The Corrective Action Decision Document (CADD) and the Closure Report (CR) have been developed to meet the requirements of the Federal Facility Agreement and Consent Order (FFACO, 1996). The CADD and the CR for this CAS have been combined because sample data collected during the site investigation do not exceed regulatory limits established during the Data Quality Objectives (DQO) process. The purpose of the CADD and the CR is to justify why no corrective action is necessary at the CAU based on process knowledge and the results of the corrective action investigation and to request closure of the CAU. This document contains Part 1 of the CADD and Part 2 of the CR

  8. Single-loop renormalizations and properties of radiative corrections in the Fried-Yennie gauge

    International Nuclear Information System (INIS)

    Karshenboim, S.G.; Shelyuto, V.A.; Eides, M.I.

    1988-01-01

    Single-loop radiative corrections are studied in the Fried-Yennie gauge. It is shown that in this gauge the usual subtraction procedure on the mass shell does not require introduction of an infrared photon mass. The behavior of the diagrams containing radiative corrections near the mass shell is investigated, and it is shown that in the Fried-Yennie gauge this behavior is softer than in any other gauge and softer than the behavior of the corresponding graphs without radiative corrections

  9. MONOTONIC DERIVATIVE CORRECTION FOR CALCULATION OF SUPERSONIC FLOWS WITH SHOCK WAVES

    Directory of Open Access Journals (Sweden)

    P. V. Bulat

    2015-07-01

    Full Text Available Subject of Research. Numerical solution methods of gas dynamics problems based on exact and approximate solution of Riemann problem are considered. We have developed an approach to the solution of Euler equations describing flows of inviscid compressible gas based on finite volume method and finite difference schemes of various order of accuracy. Godunov scheme, Kolgan scheme, Roe scheme, Harten scheme and Chakravarthy-Osher scheme are used in calculations (order of accuracy of finite difference schemes varies from 1st to 3rd. Comparison of accuracy and efficiency of various finite difference schemes is demonstrated on the calculation example of inviscid compressible gas flow in Laval nozzle in the case of continuous acceleration of flow in the nozzle and in the case of nozzle shock wave presence. Conclusions about accuracy of various finite difference schemes and time required for calculations are made. Main Results. Comparative analysis of difference schemes for Euler equations integration has been carried out. These schemes are based on accurate and approximate solution for the problem of an arbitrary discontinuity breakdown. Calculation results show that monotonic derivative correction provides numerical solution uniformity in the breakdown neighbourhood. From the one hand, it prevents formation of new points of extremum, providing the monotonicity property, but from the other hand, causes smoothing of existing minimums and maximums and accuracy loss. Practical Relevance. Developed numerical calculation method gives the possibility to perform high accuracy calculations of flows with strong non-stationary shock and detonation waves. At the same time, there are no non-physical solution oscillations on the shock wave front.
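    As one concrete example of a monotonic derivative (slope) correction of the kind discussed above, the sketch below implements the common minmod limiter for a 1D array of cell averages; it is a generic illustration, not necessarily the exact limiter used in the paper.

```python
import numpy as np

def minmod(a, b):
    """Return the argument of smaller magnitude where a and b share a sign, else 0."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """Cell-wise limited slopes for a 1D array of cell averages u."""
    du_left = np.diff(u, prepend=u[0])    # backward differences
    du_right = np.diff(u, append=u[-1])   # forward differences
    return minmod(du_left, du_right)

# Example: slopes near a jump are flattened, preventing new extrema.
u = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])
print(limited_slopes(u))
```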

  10. Kinetic corrections from analytic non-Maxwellian distribution functions in magnetized plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Izacard, Olivier, E-mail: izacard@llnl.gov [Lawrence Livermore National Laboratory, 7000 East Avenue, L-637, Livermore, California 94550 (United States)

    2016-08-15

    In magnetized plasma physics, almost all developed analytic theories assume a Maxwellian distribution function (MDF) and in some cases small deviations are described using the perturbation theory. The deviations with respect to the Maxwellian equilibrium, called kinetic effects, are required to be taken into account especially for fusion reactor plasmas. Generally, because the perturbation theory is not consistent with observed steady-state non-Maxwellians, these kinetic effects are numerically evaluated by very central processing unit (CPU)-expensive codes, avoiding the analytic complexity of velocity phase space integrals. We develop here a new method based on analytic non-Maxwellian distribution functions constructed from non-orthogonal basis sets in order to (i) use as few parameters as possible, (ii) increase the efficiency to model numerical and experimental non-Maxwellians, (iii) help to understand unsolved problems such as diagnostics discrepancies from the physical interpretation of the parameters, and (iv) obtain analytic corrections due to kinetic effects given by a small number of terms and removing the numerical error of the evaluation of velocity phase space integrals. This work does not attempt to derive new physical effects even if it could be possible to discover one from the better understandings of some unsolved problems, but here we focus on the analytic prediction of kinetic corrections from analytic non-Maxwellians. As applications, examples of analytic kinetic corrections are shown for the secondary electron emission, the Langmuir probe characteristic curve, and the entropy. This is done by using three analytic representations of the distribution function: the Kappa distribution function, the bi-modal or a new interpreted non-Maxwellian distribution function (INMDF). The existence of INMDFs is proved by new understandings of the experimental discrepancy of the measured electron temperature between two diagnostics in JET. As main results, it
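    A minimal sketch of one of the analytic non-Maxwellian forms mentioned, the isotropic kappa distribution, written in unnormalized form (the normalization constant and the INMDF are omitted); it recovers a Maxwellian shape in the limit of large kappa.

```python
import numpy as np

def kappa_distribution(v, kappa=4.0, v_th=1.0):
    """Unnormalized isotropic kappa distribution of speed v.

    Approaches the Maxwellian shape exp(-v**2 / v_th**2) as kappa -> infinity.
    """
    return (1.0 + v**2 / (kappa * v_th**2)) ** (-(kappa + 1.0))

v = np.linspace(0.0, 5.0, 11)
f = kappa_distribution(v, kappa=2.5)
```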

  11. Validation of missed space-group symmetry in X-ray powder diffraction structures with dispersion-corrected density functional theory

    DEFF Research Database (Denmark)

    Hempler, Daniela; Schmidt, Martin U.; Van De Streek, Jacco

    2017-01-01

    More than 600 molecular crystal structures with correct, incorrect and uncertain space-group symmetry were energy-minimized with dispersion-corrected density functional theory (DFT-D, PBE-D3). For the purpose of determining the correct space-group symmetry the required tolerance on the atomic...... with missed symmetry were investigated by dispersion-corrected density functional theory. In 98.5% of the cases the correct space group is found....

  12. Correct Linearization of Einstein's Equations

    Directory of Open Access Journals (Sweden)

    Rabounski D.

    2006-06-01

    Regularly, Einstein's equations can be reduced to a wave form (depending linearly on the second derivatives of the space metric) in the absence of gravitation, the space rotation and Christoffel's symbols. As shown here, the origin of the problem is that one uses the general covariant theory of measurement. Here the wave form of Einstein's equations is obtained in terms of Zelmanov's chronometric invariants (physically observable projections on the observer's time line and spatial section). The obtained equations depend solely on the second derivatives even if gravitation, the space rotation and Christoffel's symbols are non-zero. The correct linearization proves that the Einstein equations are completely compatible with weak waves of the metric.
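    For reference, the familiar textbook weak-field wave form that the abstract contrasts with (standard linearized general relativity in the Lorenz gauge); the chronometrically invariant equations derived in the paper are not reproduced here.

```latex
% Standard linearized form for comparison, assuming g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu},
% \bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\eta_{\mu\nu} h, and the Lorenz gauge
% \partial^{\mu}\bar{h}_{\mu\nu} = 0:
\Box\,\bar{h}_{\mu\nu} = -\frac{16\pi G}{c^{4}}\,T_{\mu\nu}
```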

  13. Correction of dental artifacts within the anatomical surface in PET/MRI using active shape models and k-nearest-neighbors

    DEFF Research Database (Denmark)

    Ladefoged, Claes N.; Andersen, Flemming L.; Keller, Sune H.

    2014-01-01

    In combined PET/MR, attenuation correction (AC) is performed indirectly based on the available MR image information. Metal implant-induced susceptibility artifacts and subsequent signal voids challenge MR-based AC. Several papers acknowledge the problem in PET attenuation correction when dental artifacts are ignored, but none of them attempts to solve the problem. We propose a clinically feasible correction method which combines Active Shape Models (ASM) and k-Nearest-Neighbors (kNN) into a simple approach which finds and corrects the dental artifacts within the surface boundaries of the patient anatomy. ASM is used to locate a number of landmarks in the T1-weighted MR image of a new patient. We calculate a vector of offsets from each voxel within a signal void to each of the landmarks. We then use kNN to classify each voxel as belonging to an artifact or an actual signal void using this offset...
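    A minimal sketch of the classification step as described: each voxel inside an MR signal void is represented by its offsets to the located landmarks and labeled with a k-nearest-neighbors classifier. Landmark detection by the Active Shape Model is not shown, and all arrays below are synthetic placeholders rather than patient data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def offsets_to_landmarks(voxels, landmarks):
    """Feature vector per voxel: concatenated offsets to every landmark."""
    diffs = voxels[:, None, :] - landmarks[None, :, :]   # shape (N, L, 3)
    return diffs.reshape(len(voxels), -1)                # shape (N, 3*L)

rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 100, size=(10, 3))       # landmark positions from the shape model
train_voxels = rng.uniform(0, 100, size=(200, 3))   # void voxels with known labels
train_labels = rng.integers(0, 2, size=200)         # 1 = dental artifact, 0 = genuine void (e.g., air)
new_voxels = rng.uniform(0, 100, size=(50, 3))      # signal-void voxels in a new patient

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(offsets_to_landmarks(train_voxels, landmarks), train_labels)
predicted = knn.predict(offsets_to_landmarks(new_voxels, landmarks))
```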

  14. On the evaluation of the correction factor μ (rho', tau') for the periodic pulse method

    International Nuclear Information System (INIS)

    Mueller, J.W.

    1976-01-01

    The inconveniences associated with the purely numerical approach we have chosen to solve some of the problems which arise in connection with the source-pulser method are twofold. On the one hand, there is the trouble of calculating the tables for μ, requiring several nights of computer time. On the other hand, apart from some simple limiting values such as μ = 1 for τ' = 0 or 1, and μ = 1/(0.5 + |0.5 - τ'|) for ρ' → 0 (and 0 < τ' < 1), no appropriate analytical form for the correction factor μ of sufficient precision is known for the moment. This drawback, we hope, is partly removed by a tabulation which should cover the whole region of practical interest. The computer programs for both the evaluation of μ and the Monte Carlo simulation are available upon request

  15. Equation-Method for correcting clipping errors in OFDM signals.

    Science.gov (United States)

    Bibi, Nargis; Kleerekoper, Anthony; Muhammad, Nazeer; Cheetham, Barry

    2016-01-01

    Orthogonal frequency division multiplexing (OFDM) is the digital modulation technique used by 4G and many other wireless communication systems. OFDM signals have significant amplitude fluctuations resulting in high peak-to-average power ratios, which can make an OFDM transmitter susceptible to non-linear distortion produced by its high power amplifiers (HPA). A simple and popular solution to this problem is to clip the peaks before an OFDM signal is applied to the HPA, but this causes in-band distortion and introduces bit-errors at the receiver. In this paper we discuss a novel technique, which we call the Equation-Method, for correcting these errors. The Equation-Method uses the Fast Fourier Transform to create a set of simultaneous equations which, when solved, return the amplitudes of the peaks before they were clipped. We show analytically and through simulations that this method can correct all clipping errors over a wide range of clipping thresholds. We show that numerical instability can be avoided and new techniques are needed to enable the receiver to differentiate between correctly and incorrectly received frequency-domain constellation symbols.
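    A minimal sketch of the clipping problem the Equation-Method is designed to undo: an OFDM symbol is built by an inverse FFT of QPSK subcarriers, its peaks are clipped while preserving phase, and the resulting in-band constellation error is measured. The recovery equations themselves are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                                       # number of subcarriers
symbols = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=N)  # QPSK constellation
tx = np.fft.ifft(symbols)                                    # time-domain OFDM symbol

threshold = 0.7 * np.max(np.abs(tx))
clipped = np.where(np.abs(tx) > threshold,
                   threshold * tx / np.abs(tx),              # keep phase, limit magnitude
                   tx)

# The receiver's FFT now shows in-band distortion on every subcarrier; the
# Equation-Method reconstructs the clipped peak amplitudes to undo this error.
error = np.fft.fft(clipped) - symbols
print("peak constellation error:", np.max(np.abs(error)))
```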

  16. An FFT-based Method for Attenuation Correction in Fluorescence Confocal Microscopy

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Bakker, M.

    1993-01-01

    A problem in three-dimensional imaging by a confocal scanning laser microscope (CSLM) in the (epi)fluorescence mode is the darkening of the deeper layers due to absorption and scattering of both the excitation and the fluorescence light. In this paper we propose a new method to correct for these

  17. Many-Body Energy Decomposition with Basis Set Superposition Error Corrections.

    Science.gov (United States)

    Mayer, István; Bakó, Imre

    2017-05-09

    The problem of performing many-body decompositions of energy is considered in the case when BSSE corrections are also performed. It is discussed that the two different schemes that have been proposed go back to the two different interpretations of the original Boys-Bernardi counterpoise correction scheme. It is argued that from the physical point of view the "hierarchical" scheme of Valiron and Mayer should be preferred and not the scheme recently discussed by Ouyang and Bettens, because it permits the energy of the individual monomers and all the two-body, three-body, etc. energy components to be free of unphysical dependence on the arrangement (basis functions) of other subsystems in the cluster.
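    For orientation, the original two-body Boys-Bernardi counterpoise expression that both decomposition schemes generalize (textbook form; the many-body hierarchies of Valiron-Mayer and of Ouyang-Bettens are not reproduced here):

```latex
% Counterpoise-corrected interaction energy of a dimer AB: every term on the
% right-hand side is evaluated in the full dimer basis (superscript AB) at the
% dimer geometry.
E_{\mathrm{int}}^{\mathrm{CP}}(AB) = E_{AB}^{AB}(AB) - E_{A}^{AB}(A) - E_{B}^{AB}(B)
```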

  18. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.

    Science.gov (United States)

    Hofmann, Matthias; Pichler, Bernd; Schölkopf, Bernhard; Beyer, Thomas

    2009-03-01

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT attenuation correction, however, is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.

  19. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques

    International Nuclear Information System (INIS)

    Hofmann, Matthias; Pichler, Bernd; Schoelkopf, Bernhard; Beyer, Thomas

    2009-01-01

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT attenuation correction, however, is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data. (orig.)

  20. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hofmann, Matthias [Max Planck Institute for Biological Cybernetics, Tuebingen (Germany); University of Tuebingen, Laboratory for Preclinical Imaging and Imaging Technology of the Werner Siemens-Foundation, Department of Radiology, Tuebingen (Germany); University of Oxford, Wolfson Medical Vision Laboratory, Department of Engineering Science, Oxford (United Kingdom); Pichler, Bernd [University of Tuebingen, Laboratory for Preclinical Imaging and Imaging Technology of the Werner Siemens-Foundation, Department of Radiology, Tuebingen (Germany); Schoelkopf, Bernhard [Max Planck Institute for Biological Cybernetics, Tuebingen (Germany); Beyer, Thomas [University Hospital Duisburg-Essen, Department of Nuclear Medicine, Essen (Germany); Cmi-Experts GmbH, Zurich (Switzerland)

    2009-03-15

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT attenuation correction, however, is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data. (orig.)

  1. Step by Step: Biology Undergraduates’ Problem-Solving Procedures during Multiple-Choice Assessment

    Science.gov (United States)

    Prevost, Luanna B.; Lemons, Paula P.

    2016-01-01

    This study uses the theoretical framework of domain-specific problem solving to explore the procedures students use to solve multiple-choice problems about biology concepts. We designed several multiple-choice problems and administered them on four exams. We trained students to produce written descriptions of how they solved the problem, and this allowed us to systematically investigate their problem-solving procedures. We identified a range of procedures and organized them as domain general, domain specific, or hybrid. We also identified domain-general and domain-specific errors made by students during problem solving. We found that students use domain-general and hybrid procedures more frequently when solving lower-order problems than higher-order problems, while they use domain-specific procedures more frequently when solving higher-order problems. Additionally, the more domain-specific procedures students used, the higher the likelihood that they would answer the problem correctly, up to five procedures. However, if students used just one domain-general procedure, they were as likely to answer the problem correctly as if they had used two to five domain-general procedures. Our findings provide a categorization scheme and framework for additional research on biology problem solving and suggest several important implications for researchers and instructors. PMID:27909021

  2. Corrective Action Decision Document/Corrective Action Plan for the 92-Acre Area and Corrective Action Unit 111: Area 5 WMD Retired Mixed Waste Pits, Nevada Test Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Environmental Restoration

    2009-07-31

    This Corrective Action Decision Document/Corrective Action Plan (CADD/CAP) has been prepared for the 92-Acre Area, the southeast quadrant of the Radioactive Waste Management Site, located in Area 5 of the Nevada Test Site (NTS). The 92-Acre Area includes Corrective Action Unit (CAU) 111, 'Area 5 WMD Retired Mixed Waste Pits.' Data Quality Objectives (DQOs) were developed for the 92-Acre Area, which includes CAU 111. The result of the DQO process was that the 92-Acre Area is sufficiently characterized to provide the input data necessary to evaluate corrective action alternatives (CAAs) without the collection of additional data. The DQOs are included as Appendix A of this document. This CADD/CAP identifies and provides the rationale for the recommended CAA for the 92-Acre Area, provides the plan for implementing the CAA, and details the post-closure plan. When approved, this CADD/CAP will supersede the existing Pit 3 (P03) Closure Plan, which was developed in accordance with Title 40 Code of Federal Regulations (CFR) Part 265, 'Interim Status Standards for Owners and Operators of Hazardous Waste Treatment, Storage, and Disposal Facilities.' This document will also serve as the Closure Plan and the Post-Closure Plan, which are required by 40 CFR 265, for the 92-Acre Area. After closure activities are complete, a request for the modification of the Resource Conservation and Recovery Act Permit that governs waste management activities at the NTS will be submitted to the Nevada Division of Environmental Protection to incorporate the requirements for post-closure monitoring. Four CAAs, ranging from No Further Action to Clean Closure, were evaluated for the 92-Acre Area. The CAAs were evaluated on technical merit focusing on performance, reliability, feasibility, safety, and cost. Based on the evaluation of the data used to develop the conceptual site model; a review of past, current, and future operations at the site; and the detailed and comparative

  3. Corrective Action Decision Document/Corrective Action Plan for the 92-Acre Area and Corrective Action Unit 111: Area 5 WMD Retired Mixed Waste Pits, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    2009-01-01

    This Corrective Action Decision Document/Corrective Action Plan (CADD/CAP) has been prepared for the 92-Acre Area, the southeast quadrant of the Radioactive Waste Management Site, located in Area 5 of the Nevada Test Site (NTS). The 92-Acre Area includes Corrective Action Unit (CAU) 111, 'Area 5 WMD Retired Mixed Waste Pits.' Data Quality Objectives (DQOs) were developed for the 92-Acre Area, which includes CAU 111. The result of the DQO process was that the 92-Acre Area is sufficiently characterized to provide the input data necessary to evaluate corrective action alternatives (CAAs) without the collection of additional data. The DQOs are included as Appendix A of this document. This CADD/CAP identifies and provides the rationale for the recommended CAA for the 92-Acre Area, provides the plan for implementing the CAA, and details the post-closure plan. When approved, this CADD/CAP will supersede the existing Pit 3 (P03) Closure Plan, which was developed in accordance with Title 40 Code of Federal Regulations (CFR) Part 265, 'Interim Status Standards for Owners and Operators of Hazardous Waste Treatment, Storage, and Disposal Facilities.' This document will also serve as the Closure Plan and the Post-Closure Plan, which are required by 40 CFR 265, for the 92-Acre Area. After closure activities are complete, a request for the modification of the Resource Conservation and Recovery Act Permit that governs waste management activities at the NTS will be submitted to the Nevada Division of Environmental Protection to incorporate the requirements for post-closure monitoring. Four CAAs, ranging from No Further Action to Clean Closure, were evaluated for the 92-Acre Area. The CAAs were evaluated on technical merit focusing on performance, reliability, feasibility, safety, and cost. Based on the evaluation of the data used to develop the conceptual site model; a review of past, current, and future operations at the site; and the detailed and comparative analysis of the

  4. Gynaecomastia correction: A review of our experience

    Directory of Open Access Journals (Sweden)

    Arvind Arvind

    2014-01-01

    Introduction: Gynaecomastia is a common problem in the male population with a reported prevalence of up to 36%. Various treatment techniques have been described but none have gained universal acceptance. We reviewed all gynaecomastia patients operated on by one consultant over a 7-year period to assess the morbidity and complication rates associated with the procedure. Materials and Methods: Clinical notes and outpatient records of all patients who underwent gynaecomastia correction at University Hospital North Staffordshire between 01/10/2001 and 01/10/2009 were retrospectively reviewed. A modified version of the Breast Evaluation Questionnaire was used to assess patients' satisfaction with the procedure. Results: Twenty-nine patients and a total of 53 breasts were operated on during the study period. Patients underwent either liposuction alone (6 breasts - 11.3%), excision alone (37 breasts - 69.8%), or both excision and liposuction (10 breasts - 18.9%). Twelve operated breasts (22.6%) experienced some form of complication. Minor complications included seroma (2 patients), superficial wound dehiscence (2 patients), and minor bleeding not requiring theatre (3 patients). Two patients developed haematomas requiring evacuation in theatre. No cases of wound infection, major wound dehiscence or revision surgery were encountered. Twenty-six patients (89.7%) returned the patient satisfaction questionnaire. Patients scored an average of 4.12 with regard to comfort of their chest in different settings, 3.98 with regard to chest appearance in different settings, and 4.22 with regard to satisfaction levels for themselves and their partner/family. The overall complication rate was 22.6%. Grade III patients experienced the highest complication rate (35.7%), followed by grade II (22.7%) and grade I (17.6%). The overall complication rate in the excision-only group was the highest (29.8%), followed by the liposuction-only group (16.7%) and the liposuction and excision group (10.0%). There

  5. A trust region interior point algorithm for optimal power flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Wang Min [Hefei University of Technology (China). Dept. of Electrical Engineering and Automation; Liu Shengsong [Jiangsu Electric Power Dispatching and Telecommunication Company (China). Dept. of Automation

    2005-05-01

    This paper presents a new algorithm that uses the trust region interior point method to solve nonlinear optimal power flow (OPF) problems. The OPF problem is solved by a primal/dual interior point method with multiple centrality corrections as a sequence of linearized trust region sub-problems. It is the trust region that controls the linear step size and ensures the validity of the linear model. The convergence of the algorithm is improved through the modification of the trust region sub-problem. Numerical results of standard IEEE systems and two realistic networks ranging in size from 14 to 662 buses are presented. The computational results show that the proposed algorithm is very effective for optimal power flow applications and compares favorably with the successive linear programming (SLP) method. Comparison with the predictor/corrector primal/dual interior point (PCPDIP) method is also made to demonstrate the superiority of the multiple centrality corrections technique. (author)
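    A minimal, generic sketch of how a trust region controls the linear step size, as described above: the ratio of actual to predicted reduction decides whether a candidate step is accepted and how the radius is updated. This illustrates the mechanism only; it is not the paper's primal/dual interior point algorithm with multiple centrality corrections, and all parameter values are illustrative.

```python
def trust_region_update(x, step, predicted_reduction, actual_reduction, radius,
                        eta=0.25, shrink=0.5, grow=2.0, radius_max=1e3):
    """Accept or reject a candidate step and update the trust-region radius.

    predicted_reduction : reduction predicted by the linearized sub-problem.
    actual_reduction    : reduction actually achieved by the nonlinear objective.
    Returns the (possibly unchanged) iterate and the new radius.
    """
    rho = actual_reduction / predicted_reduction if predicted_reduction > 0 else 0.0
    if rho < eta:                          # poor agreement: reject step, shrink region
        return x, shrink * radius
    x_new = x + step                       # acceptable agreement: take the step
    if rho > 0.75:                         # very good agreement: allow larger steps
        radius = min(grow * radius, radius_max)
    return x_new, radius
```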

  6. Differences in children and adolescents' ability of reporting two CVS-related visual problems.

    Science.gov (United States)

    Hu, Liang; Yan, Zheng; Ye, Tiantian; Lu, Fan; Xu, Peng; Chen, Hao

    2013-01-01

    The present study examined whether children and adolescents can correctly report dry eyes and blurred distance vision, two visual problems associated with computer vision syndrome. Participants are 913 children and adolescents aged 6-17. They were asked to report their visual problems, including dry eyes and blurred distance vision, and received an eye examination, including tear film break-up time (TFBUT) and visual acuity (VA). Inconsistency was found between participants' reports of dry eyes and TFBUT results among all 913 participants as well as for all of four subgroups. In contrast, consistency was found between participants' reports of blurred distance vision and VA results among 873 participants who had never worn glasses as well as for the four subgroups. It was concluded that children and adolescents are unable to report dry eyes correctly; however, they are able to report blurred distance vision correctly. Three practical implications of the findings were discussed. Little is known about children's ability to report their visual problems, an issue critical to diagnosis and treatment of children's computer vision syndrome. This study compared children's self-reports and clinic examination results and found children can correctly report blurred distance vision but not dry eyes.

  7. Using an isomorphic problem pair to learn introductory physics: Transferring from a two-step problem to a three-step problem

    Directory of Open Access Journals (Sweden)

    Shih-Yin Lin

    2013-10-01

    Full Text Available In this study, we examine introductory physics students’ ability to perform analogical reasoning between two isomorphic problems which employ the same underlying physics principles but have different surface features. 382 students from a calculus-based and an algebra-based introductory physics course were administered a quiz in the recitation in which they had to learn from a solved problem provided and take advantage of what they learned from it to solve another isomorphic problem (which we call the quiz problem. The solved problem provided has two subproblems while the quiz problem has three subproblems, which is known from previous research to be challenging for introductory students. In addition to the solved problem, students also received extra scaffolding supports that were intended to help them discern and exploit the underlying similarities of the isomorphic solved and quiz problems. The data analysis suggests that students had great difficulty in transferring what they learned from a two-step problem to a three-step problem. Although most students were able to learn from the solved problem to some extent with the scaffolding provided and invoke the relevant principles in the quiz problem, they were not necessarily able to apply the principles correctly. We also conducted think-aloud interviews with six introductory students in order to understand in depth the difficulties they had and explore strategies to provide better scaffolding. The interviews suggest that students often superficially mapped the principles employed in the solved problem to the quiz problem without necessarily understanding the governing conditions underlying each principle and examining the applicability of the principle in the new situation in an in-depth manner. Findings suggest that more scaffolding is needed to help students in transferring from a two-step problem to a three-step problem and applying the physics principles appropriately. We outline a few

  8. Using an isomorphic problem pair to learn introductory physics: Transferring from a two-step problem to a three-step problem

    Science.gov (United States)

    Lin, Shih-Yin; Singh, Chandralekha

    2013-12-01

    In this study, we examine introductory physics students’ ability to perform analogical reasoning between two isomorphic problems which employ the same underlying physics principles but have different surface features. 382 students from a calculus-based and an algebra-based introductory physics course were administered a quiz in the recitation in which they had to learn from a solved problem provided and take advantage of what they learned from it to solve another isomorphic problem (which we call the quiz problem). The solved problem provided has two subproblems while the quiz problem has three subproblems, which is known from previous research to be challenging for introductory students. In addition to the solved problem, students also received extra scaffolding supports that were intended to help them discern and exploit the underlying similarities of the isomorphic solved and quiz problems. The data analysis suggests that students had great difficulty in transferring what they learned from a two-step problem to a three-step problem. Although most students were able to learn from the solved problem to some extent with the scaffolding provided and invoke the relevant principles in the quiz problem, they were not necessarily able to apply the principles correctly. We also conducted think-aloud interviews with six introductory students in order to understand in depth the difficulties they had and explore strategies to provide better scaffolding. The interviews suggest that students often superficially mapped the principles employed in the solved problem to the quiz problem without necessarily understanding the governing conditions underlying each principle and examining the applicability of the principle in the new situation in an in-depth manner. Findings suggest that more scaffolding is needed to help students in transferring from a two-step problem to a three-step problem and applying the physics principles appropriately. We outline a few possible strategies

  9. One loop electro-weak radiative corrections in the standard model

    International Nuclear Information System (INIS)

    Kalyniak, P.; Sundaresan, M.K.

    1987-01-01

    This paper reports on the effect of radiative corrections in the standard model. A sensitive test of the three gauge boson vertices is expected to come from the work in LEPII in which the reaction e⁺e⁻ → W⁺W⁻ can occur. Two calculations of radiative corrections to the reaction e⁺e⁻ → W⁺W⁻ exist at present. The results of the calculations, although very similar, disagree with one another as to the actual magnitude of the correction. Some of the reasons for the disagreement are understood. However, due to the reasons mentioned below, another look must be taken at these lengthy calculations to resolve the differences between the two previous calculations. This is what is being done in the present work. There are a number of reasons why we must take another look at the calculation of the radiative corrections. The previous calculations were carried out before the UA1, UA2 data on W and Z bosons were obtained. Experimental groups require a computer program which can readily calculate the radiative corrections ab initio for various experimental conditions. The normalization of sin²θ_W in the previous calculations was done in a way which is not convenient for use in the experimental work. It would be desirable to have the analytical expressions for the corrections available so that the renormalization scheme dependence of the corrections could be studied

  10. Problems in equilibrium theory

    CERN Document Server

    Aliprantis, Charalambos D

    1996-01-01

    In studying General Equilibrium Theory the student must master first the theory and then apply it to solve problems. At the graduate level there is no book devoted exclusively to teaching problem solving. This book teaches for the first time the basic methods of proof and problem solving in General Equilibrium Theory. The problems cover the entire spectrum of difficulty; some are routine, some require a good grasp of the material involved, and some are exceptionally challenging. The book presents complete solutions to two hundred problems. In searching for the basic required techniques, the student will find a wealth of new material incorporated into the solutions. The student is challenged to produce solutions which are different from the ones presented in the book.

  11. Spline-Interpolation Solution of One Elasticity Theory Problem

    CERN Document Server

    Shirakova, Elena A

    2011-01-01

    The book presents methods of approximate solution of the basic problem of elasticity for special types of solids. Engineers can apply approximate methods (Finite Element Method, Boundary Element Method) to solve the problems, but the application of these methods may not be correct for solids with certain singularities or asymmetrical boundary conditions. The book is recommended for researchers and professionals working on elasticity modeling. It explains methods of solving elasticity problems for special solids.

  12. nvj 34 4 corrected.cdr

    African Journals Online (AJOL)

    GRAPHICS DEPT

    offer clinicians the required tools to help them in their ... diagnosis and management of patients' problem is ..... Edinburgh: ... reasoning processes in the assessment and management of patients with shoulder pain: a qualitative study. Aust.

  13. Expert systems applied to two problems in nuclear power plants

    International Nuclear Information System (INIS)

    Kim, K.Y.

    1988-01-01

    This dissertation describes two prototype expert systems applied to two problems in nuclear power plants. One problem is spare parts inventory control, and the other is radionuclide release from containment during a severe accident. The expert system for spare parts inventory control can handle spare parts requirements not only in corrective, preventive, or predictive maintenance, but also when failure rates of components or parts are updated by new data. Costs and benefits of spare parts inventory acquisition are evaluated with qualitative attributes such as spare part availability to provide the inventory manager with an improved basis for decision making. The expert system is implemented with Intelligence/Compiler on an IBM-AT. The other expert system, for radionuclide release from containment, can estimate the magnitude, type, location, and time of release of radioactive materials from containment during a severe accident nearly online, based on actual measured physical parameters such as the temperature and pressure inside the containment. The expert system includes a function to validate sensor data. It is implemented with KEE on a Symbolics LISP machine

  14. The generation problem

    International Nuclear Information System (INIS)

    Ecker, G.

    1983-01-01

    Evidence for the generation structure of quarks and leptons is reviewed. The two main aspects of the generation problem are emphasized. The concept and possible problems of horizontal symmetries are discussed. Two different mechanisms for horizontal symmetries are considered leading to a generalized permutation symmetry in SU(2)_L × U(1) in one case. The second mechanism uses the discrete unbroken subgroup of an axial U(1) with hypercolour anomalies in composite models. A concrete realization in the rishon model is investigated. The two different approaches produce almost identical quark mass matrices for three generations. In addition to a correct prediction for the Cabibbo angle the models yield a very small Kobayashi-Maskawa mixing angle θ_3 and thus provide for a natural explanation of the smallness of CP violation. (Author)

  15. High-speed parallel forward error correction for optical transport networks

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    This paper presents a highly parallelized hardware implementation of the standard OTN Reed-Solomon Forward Error Correction algorithm. The proposed circuit is designed to meet the immense throughput required by OTN4, using commercially available FPGA technology....
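
    As a rough software illustration of the code family involved (not the paper's hardware design), the sketch below encodes one RS(255,239) block over GF(2^8), the Reed-Solomon code used by the OTN FEC in ITU-T G.709. The field polynomial 0x11d and the generator roots α^0…α^15 are the commonly quoted G.709 choices but should be checked against the standard before reuse.

```python
# Hedged sketch of a systematic RS(255,239) encoder over GF(2^8); a hardware
# implementation such as the one in the paper parallelizes the same polynomial
# division. Assumed constants: field polynomial x^8+x^4+x^3+x^2+1 (0x11d) and
# generator roots alpha^0..alpha^15 -- verify against ITU-T G.709 before reuse.
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_poly_mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] ^= gf_mul(pi, qj)
    return r

def rs_generator_poly(nsym=16):
    g = [1]
    for i in range(nsym):
        g = gf_poly_mul(g, [1, EXP[i]])   # multiply by (x - alpha^i); minus == XOR
    return g                              # coefficients, highest degree first

def rs_encode(msg, nsym=16):
    """Append nsym parity bytes to msg (len(msg) <= 255 - nsym)."""
    gen, rem = rs_generator_poly(nsym), [0] * nsym
    for byte in msg:
        coef = byte ^ rem[0]              # feedback into the division register
        rem = rem[1:] + [0]
        if coef:
            for j in range(nsym):
                rem[j] ^= gf_mul(gen[j + 1], coef)
    return list(msg) + rem

codeword = rs_encode(list(range(239)))
print(len(codeword))   # 255 bytes: 239 data + 16 parity
```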

  16. Prior-based artifact correction (PBAC) in computed tomography

    International Nuclear Information System (INIS)

    Heußer, Thorsten; Brehm, Marcus; Ritschl, Ludwig; Sawall, Stefan; Kachelrieß, Marc

    2014-01-01

    Purpose: Image quality in computed tomography (CT) often suffers from artifacts which may reduce the diagnostic value of the image. In many cases, these artifacts result from missing or corrupt regions in the projection data, e.g., in the case of metal, truncation, and limited angle artifacts. The authors propose a generalized correction method for different kinds of artifacts resulting from missing or corrupt data by making use of available prior knowledge to perform data completion. Methods: The proposed prior-based artifact correction (PBAC) method requires prior knowledge in form of a planning CT of the same patient or in form of a CT scan of a different patient showing the same body region. In both cases, the prior image is registered to the patient image using a deformable transformation. The registered prior is forward projected and data completion of the patient projections is performed using smooth sinogram inpainting. The obtained projection data are used to reconstruct the corrected image. Results: The authors investigate metal and truncation artifacts in patient data sets acquired with a clinical CT and limited angle artifacts in an anthropomorphic head phantom data set acquired with a gantry-based flat detector CT device. In all cases, the corrected images obtained by PBAC are nearly artifact-free. Compared to conventional correction methods, PBAC achieves better artifact suppression while preserving the patient-specific anatomy at the same time. Further, the authors show that prominent anatomical details in the prior image seem to have only minor impact on the correction result. Conclusions: The results show that PBAC has the potential to effectively correct for metal, truncation, and limited angle artifacts if adequate prior data are available. Since the proposed method makes use of a generalized algorithm, PBAC may also be applicable to other artifacts resulting from missing or corrupt data
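
    To make the data-completion idea concrete, here is a deliberately simplified sketch (not the authors' implementation): corrupt sinogram bins are replaced by the forward projection of an already registered prior image, and the completed sinogram is reconstructed by filtered back-projection. The smooth sinogram inpainting described in the abstract is reduced here to a hard replacement, and scikit-image is assumed to be available.

```python
# Toy illustration of prior-based data completion: fill corrupt sinogram bins
# from a forward-projected, pre-registered prior, then reconstruct with FBP.
import numpy as np
from skimage.transform import radon, iradon

def complete_and_reconstruct(patient_sino, prior_image, corrupt_mask, theta):
    """patient_sino, corrupt_mask: (n_detectors, n_angles); prior_image: registered prior."""
    prior_sino = radon(prior_image, theta=theta)          # forward project the prior
    completed = np.where(corrupt_mask, prior_sino, patient_sino)
    return iradon(completed, theta=theta)                 # reconstruct the completed data
```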

  17. Radial lens distortion correction with sub-pixel accuracy for X-ray micro-tomography.

    Science.gov (United States)

    Vo, Nghia T; Atwood, Robert C; Drakopoulos, Michael

    2015-12-14

    Distortion correction or camera calibration for an imaging system which is highly configurable and requires frequent disassembly for maintenance or replacement of parts needs a speedy method for recalibration. Here we present direct techniques for calculating distortion parameters of a non-linear model based on the correct determination of the center of distortion. These techniques are fast, very easy to implement, and accurate at sub-pixel level. The implementation at the X-ray tomography system of the I12 beamline, Diamond Light Source, which strictly requires sub-pixel accuracy, shows excellent performance in the calibration image and in the reconstructed images.
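
    The exact distortion model used at I12 is not reproduced here, but a common choice for this kind of correction is a polynomial radial model about the estimated center of distortion. The sketch below applies such a model with assumed coefficient names (k1, k2) obtained from a prior calibration step.

```python
# Hedged sketch: undistort pixel coordinates with a polynomial radial model
# r_u = r_d * (1 + k1*r_d^2 + k2*r_d^4) about the center of distortion (xc, yc).
import numpy as np

def undistort_points(xd, yd, xc, yc, k1, k2):
    dx, dy = np.asarray(xd, float) - xc, np.asarray(yd, float) - yc
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2 + k2 * r2 * r2       # radial correction factor
    return xc + dx * scale, yc + dy * scale

# Example: correct a coarse grid of detector coordinates (coefficients are made up)
x, y = np.meshgrid(np.arange(0, 2048, 256), np.arange(0, 2048, 256))
xu, yu = undistort_points(x, y, xc=1023.5, yc=1023.5, k1=1e-9, k2=0.0)
```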

  18. Quantum-corrected transient analysis of plasmonic nanostructures

    KAUST Repository

    Uysal, Ismail Enes

    2017-03-08

    A time domain surface integral equation (TD-SIE) solver is developed for quantum-corrected analysis of transient electromagnetic field interactions on plasmonic nanostructures with sub-nanometer gaps. “Quantum correction” introduces an auxiliary tunnel to support the current path that is generated by electrons tunneled between the nanostructures. The permittivity of the auxiliary tunnel and the nanostructures is obtained from density functional theory (DFT) computations. Electromagnetic field interactions on the combined structure (nanostructures plus auxiliary tunnel connecting them) are computed using a TD-SIE solver. Time domain samples of the permittivity and the Green function required by this solver are obtained from their frequency domain samples (generated from DFT computations) using a semi-analytical method. Accuracy and applicability of the resulting quantum-corrected solver scheme are demonstrated via numerical examples.

  19. On the flavor problem in strongly coupled theories

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, Martin

    2012-11-28

    This thesis is on the flavor problem of Randall Sundrum models and their strongly coupled dual theories. These models are particularly well motivated extensions of the Standard Model, because they simultaneously address the gauge hierarchy problem and the hierarchies in the quark masses and mixings. In order to put this into context, special attention is given to concepts underlying the theories which can explain the hierarchy problem and the flavor structure of the Standard Model (SM). The AdS/CFT duality is introduced and its implications for the Randall Sundrum model with fermions in the bulk and general bulk gauge groups is investigated. It is shown that the different terms in the general 5D propagator of a bulk gauge field can be related to the corresponding diagrams of the strongly coupled dual, which allows for a deeper understanding of the origin of flavor changing neutral currents generated by the exchange of the Kaluza Klein excitations of these bulk fields. In the numerical analysis, different observables which are sensitive to corrections from the tree-level exchange of these resonances will be presented on the basis of updated experimental data from the Tevatron and LHC experiments. This includes electroweak precision observables, namely corrections to the S and T parameters followed by corrections to the Zb anti-b vertex, flavor changing observables with flavor changes at one vertex, viz. B(B_d → μ⁺μ⁻) and B(B_s → μ⁺μ⁻), and two vertices, viz. S_ψφ and |ε_K|, as well as bounds from direct detection experiments. The analysis will show that all of these bounds can be brought in agreement with a new physics scale Λ_NP in the TeV range, except for the CP violating quantity |ε_K|, which requires Λ_NP = O(10) TeV in the absence of fine-tuning. The numerous modifications of the

  20. On the flavor problem in strongly coupled theories

    International Nuclear Information System (INIS)

    Bauer, Martin

    2012-01-01

    This thesis is on the flavor problem of Randall Sundrum models and their strongly coupled dual theories. These models are particularly well motivated extensions of the Standard Model, because they simultaneously address the gauge hierarchy problem and the hierarchies in the quark masses and mixings. In order to put this into context, special attention is given to concepts underlying the theories which can explain the hierarchy problem and the flavor structure of the Standard Model (SM). The AdS/CFT duality is introduced and its implications for the Randall Sundrum model with fermions in the bulk and general bulk gauge groups is investigated. It is shown that the different terms in the general 5D propagator of a bulk gauge field can be related to the corresponding diagrams of the strongly coupled dual, which allows for a deeper understanding of the origin of flavor changing neutral currents generated by the exchange of the Kaluza Klein excitations of these bulk fields. In the numerical analysis, different observables which are sensitive to corrections from the tree-level exchange of these resonances will be presented on the basis of updated experimental data from the Tevatron and LHC experiments. This includes electroweak precision observables, namely corrections to the S and T parameters followed by corrections to the Zb anti-b vertex, flavor changing observables with flavor changes at one vertex, viz. B(B_d → μ⁺μ⁻) and B(B_s → μ⁺μ⁻), and two vertices, viz. S_ψφ and |ε_K|, as well as bounds from direct detection experiments. The analysis will show that all of these bounds can be brought in agreement with a new physics scale Λ_NP in the TeV range, except for the CP violating quantity |ε_K|, which requires Λ_NP = O(10) TeV in the absence of fine-tuning. The numerous modifications of the Randall Sundrum model in the literature, which try to attenuate this bound are reviewed and categorized

  1. Software tool for resolution of inverse problems using artificial intelligence techniques: an application in neutron spectrometry

    International Nuclear Information System (INIS)

    Castaneda M, V. H.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Leon P, A. A.; Hernandez P, C. F.; Espinoza G, J. G.; Ortiz R, J. M.; Vega C, H. R.; Mendez, R.; Gallego, E.; Sousa L, M. A.

    2016-10-01

    The Taguchi methodology has proved to be highly efficient for solving inverse problems, in which the values of some parameters of the model must be obtained from the observed data. There are intrinsic mathematical characteristics that make a problem known as inverse. Inverse problems appear in many branches of science, engineering and mathematics. To solve this type of problem, researchers have used different techniques. Recently, the use of techniques based on Artificial Intelligence technology is being explored by researchers. This paper presents the use of a software tool based on artificial neural networks of generalized regression in the solution of inverse problems with application in high energy physics, specifically in the solution of the problem of neutron spectrometry. To solve this problem we use a software tool developed in the MATLAB programming environment, which employs a friendly, intuitive, and easy-to-use interface. This computational tool solves the inverse problem involved in the reconstruction of the neutron spectrum based on measurements made with a Bonner spheres spectrometric system. Introducing this information, the neural network is able to reconstruct the neutron spectrum with high performance and generalization capability. The tool does not require the end user to have extensive training or technical knowledge in the development and/or use of software, which facilitates its use for the resolution of inverse problems arising in several areas of knowledge. Artificial Intelligence techniques are particularly well suited to solving inverse problems, given the characteristics of artificial neural networks and their network topology; the tool developed has therefore been very useful, since the results generated by the Artificial Neural Network require little time in comparison to other techniques and agree with the actual data of the experiment. (Author)
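
    A generalized regression neural network of the kind mentioned above is, in essence, a kernel-weighted average of stored training targets. The sketch below shows that idea with NumPy; the training pairs (Bonner-sphere count rates mapped to reference spectra) and the smoothing parameter are assumptions for illustration, not the tool's actual calibration data.

```python
# Minimal sketch of a generalized regression neural network (GRNN):
# the output is a Gaussian-kernel-weighted average of the training spectra.
import numpy as np

def grnn_predict(x, X_train, Y_train, sigma=0.2):
    d2 = np.sum((X_train - x) ** 2, axis=1)    # squared distance to each pattern
    w = np.exp(-d2 / (2.0 * sigma ** 2))       # pattern-layer activations
    w /= w.sum()                               # normalization (summation layer)
    return w @ Y_train                         # weighted average of target spectra

# Stand-in data: 50 training pairs of 7 sphere readings -> 31-bin spectra
rng = np.random.default_rng(0)
X_train, Y_train = rng.random((50, 7)), rng.random((50, 31))
spectrum = grnn_predict(rng.random(7), X_train, Y_train)
```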

  2. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with the intensity inhomogeneity problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. The maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, image intensity inhomogeneity can be handled effectively. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Corrective Action Decision Document for Corrective Action Unit 254: Area 25 R-MAD Decontamination Facility, Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    2000-01-01

    Release Decontamination and Verification Survey and Dismantling of Building 3126. These alternatives were evaluated based on four general corrective action standards and five remedy selection decision factors, and the preferred CAA chosen on technical merit was Alternative 2. This CAA was judged to meet all requirements for the technical components evaluated and applicable state and federal regulations for closure of the site, and reduce the potential for future exposure pathways

  4. An Exploratory Analysis of Corrective Maintenance During Extended Surface Ship Deployments

    National Research Council Canada - National Science Library

    Werenskjold, G

    1998-01-01

    This thesis illustrates the use of simulation techniques to evaluate the corrective maintenance requirements, and resulting operational availability on-station, for a ship deployed for an extended period of three years...

  5. Improved Correction of Misclassification Bias With Bootstrap Imputation.

    Science.gov (United States)

    van Walraven, Carl

    2018-07-01

    Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias and requires only code sensitivity and specificity, but it may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. Prevalence of severe renal failure in 86 patient strata and its association with 43 covariates was determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
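
    For readers unfamiliar with QBA, the basic prevalence correction uses only the code sensitivity and specificity, and it becomes invalid (negative) exactly when the observed prevalence drops below 1 − specificity, which is the failure mode reported above. The numbers in the example below are taken from the abstract; the function itself is a generic sketch, not the study's code.

```python
# Basic quantitative bias analysis (QBA) prevalence correction:
# p_true = (p_obs + specificity - 1) / (sensitivity + specificity - 1)
def qba_corrected_prevalence(p_obs, sensitivity, specificity):
    p_true = (p_obs + specificity - 1.0) / (sensitivity + specificity - 1.0)
    if not 0.0 <= p_true <= 1.0:
        raise ValueError("QBA returned an invalid (out-of-range) prevalence")
    return p_true

# With the code accuracy quoted above (sensitivity 71.3%, specificity 96.2%),
# an observed prevalence of ~8.8% corrects back to roughly the true 7.4%.
print(qba_corrected_prevalence(p_obs=0.088, sensitivity=0.713, specificity=0.962))
```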

  6. Non-Uniformity Correction Using Nonlinear Characteristic Performance Curves for Calibration

    Science.gov (United States)

    Lovejoy, McKenna Roberts

    Infrared imaging is an expansive field with many applications. Advances in infrared technology have led to a greater demand from both commercial and military sectors. However, a known problem with infrared imaging is its non-uniformity. This non-uniformity stems from the fact that each pixel in an infrared focal plane array has its own photoresponse. Many factors such as exposure time, temperature, and amplifier choice affect how the pixels respond to incoming illumination and thus impact image uniformity. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration based techniques commonly use a linear model to approximate the nonlinear response. This often leaves unacceptable levels of residual non-uniformity. Calibration techniques often have to be repeated during use to continually correct the image. In this dissertation, alternatives to linear NUC algorithms are investigated. The goal of this dissertation is to determine and compare nonlinear non-uniformity correction algorithms. Ideally the results will provide better NUC performance, resulting in less residual non-uniformity as well as a reduced need for recalibration. This dissertation will consider new approaches to nonlinear NUC such as higher order polynomials and exponentials. More specifically, a new gain equalization algorithm has been developed. The various nonlinear non-uniformity correction algorithms will be compared with common linear non-uniformity correction algorithms. Performance will be compared based on RMS errors, residual non-uniformity, and the impact quantization has on correction. Performance will be improved by identifying and replacing bad pixels prior to correction. Two bad pixel identification and replacement techniques will be investigated and compared. Performance will be presented in the form of simulation results as well as before and after images taken with short wave infrared cameras. The initial results show, using a third order
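
    As a generic illustration of going beyond the usual linear (two-point) calibration, the sketch below fits a per-pixel polynomial mapping raw counts to the known flux of several uniform-source frames and then applies it. This is not the gain-equalization algorithm developed in the dissertation, just the standard polynomial-NUC idea it builds on.

```python
# Hedged sketch of calibration-based NUC with a per-pixel polynomial response.
import numpy as np

def fit_nuc(frames, levels, order=2):
    """Fit, per pixel, a polynomial mapping raw counts -> calibrated flux level."""
    frames = np.asarray(frames, dtype=float)          # shape (n_levels, H, W)
    n, H, W = frames.shape
    flat = frames.reshape(n, -1)
    coeffs = np.empty((order + 1, H * W))
    for p in range(H * W):                            # one small fit per pixel
        coeffs[:, p] = np.polyfit(flat[:, p], levels, order)
    return coeffs.reshape(order + 1, H, W)

def apply_nuc(raw, coeffs):
    out = np.zeros_like(raw, dtype=float)
    for c in coeffs:                                  # Horner evaluation per pixel
        out = out * raw + c
    return out

# Synthetic calibration: four uniform-source frames at known flux levels
rng = np.random.default_rng(1)
gain = 1.0 + 0.1 * rng.standard_normal((8, 8))        # pixel-to-pixel photoresponse
levels = np.array([10.0, 20.0, 30.0, 40.0])
frames = levels[:, None, None] * gain + 2.0           # offset included, noise omitted
co = fit_nuc(frames, levels)
corrected = apply_nuc(frames[2], co)                   # ~30.0 everywhere after NUC
```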

  7. Power Factor Correction for Thyristor Equipment in Glass Industry ...

    African Journals Online (AJOL)

    Thyristor power controllers are now widely used in the glass industry for controlling furnace temperature. While offering a number of operational advantages, they operate at lagging power factors which require correction for minimum power cost. Harmonic resonance with the utility feed, however, complicate the use of ...
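
    As a generic worked example of the correction mentioned here (not taken from the article), the reactive power a capacitor bank must supply to raise a load's power factor is Qc = P·(tan φ1 − tan φ2), where φ1 and φ2 correspond to the existing and target power factors.

```python
# Sizing power-factor-correction capacitors: Qc = P * (tan(phi1) - tan(phi2)).
import math

def correction_kvar(p_kw, pf_initial, pf_target):
    phi1, phi2 = math.acos(pf_initial), math.acos(pf_target)
    return p_kw * (math.tan(phi1) - math.tan(phi2))

# Illustrative numbers: a 500 kW thyristor-controlled load at 0.70 lagging,
# corrected to 0.95, needs roughly 346 kvar of compensation.
print(round(correction_kvar(500.0, 0.70, 0.95), 1), "kvar")
```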

  8. Trajectory Correction and Locomotion Analysis of a Hexapod Walking Robot with Semi-Round Rigid Feet

    Science.gov (United States)

    Zhu, Yaguang; Jin, Bo; Wu, Yongsheng; Guo, Tong; Zhao, Xiangmo

    2016-01-01

    Aimed at solving the misplaced body trajectory problem caused by the rolling of semi-round rigid feet when a robot is walking, a legged kinematic trajectory correction methodology based on the Least Squares Support Vector Machine (LS-SVM) is proposed. The concept of ideal foothold is put forward for the three-dimensional kinematic model modification of a robot leg, and the deviation value between the ideal foothold and real foothold is analyzed. The forward/inverse kinematic solutions between the ideal foothold and joint angular vectors are formulated and the problem of direct/inverse kinematic nonlinear mapping is solved by using the LS-SVM. Compared with the previous approximation method, this correction methodology has better accuracy and faster calculation speed with regards to inverse kinematics solutions. Experiments on a leg platform and a hexapod walking robot are conducted with multi-sensors for the analysis of foot tip trajectory, base joint vibration, contact force impact, direction deviation, and power consumption, respectively. The comparative analysis shows that the trajectory correction methodology can effectively correct the joint trajectory, thus eliminating the contact force influence of semi-round rigid feet, significantly improving the locomotion of the walking robot and reducing the total power consumption of the system. PMID:27589766
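
    The learning machine named above, the LS-SVM, replaces the quadratic program of a standard SVM with a single linear solve. The sketch below shows LS-SVM regression with an RBF kernel on a toy one-dimensional mapping; the kernel and the hyperparameters are illustrative choices, not the values used in the paper.

```python
# Minimal LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y],
# then predict with f(x) = sum_i alpha_i * k(x, x_i) + b.
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=0.2):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma     # regularized kernel block
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                # bias b, dual coefficients alpha

def lssvm_predict(Xnew, X, b, alpha, sigma=0.2):
    return rbf_kernel(Xnew, X, sigma) @ alpha + b

# Toy usage: learn a smooth 1-D mapping and evaluate it at one point
X = np.linspace(0.0, 1.0, 30)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(np.array([[0.25]]), X, b, alpha))   # close to sin(pi/2) = 1
```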

  9. Aiding the search: Examining individual differences in multiply-constrained problem solving.

    Science.gov (United States)

    Ellis, Derek M; Brewer, Gene A

    2018-07-01

    Understanding and resolving complex problems is of vital importance in daily life. Problems can be defined by the limitations they place on the problem solver. Multiply-constrained problems are traditionally examined with the compound remote associates task (CRAT). Performance on the CRAT is partially dependent on an individual's working memory capacity (WMC). These findings suggest that executive processes are critical for problem solving and that there are reliable individual differences in multiply-constrained problem solving abilities. The goals of the current study are to replicate and further elucidate the relation between WMC and CRAT performance. To achieve these goals, we manipulated preexposure to CRAT solutions and measured WMC with complex-span tasks. In Experiment 1, we report evidence that preexposure to CRAT solutions improved problem solving accuracy, WMC was correlated with problem solving accuracy, and that WMC did not moderate the effect of preexposure on problem solving accuracy. In Experiment 2, we preexposed participants to correct and incorrect solutions. We replicated Experiment 1 and found that WMC moderates the effect of exposure to CRAT solutions such that high WMC participants benefit more from preexposure to correct solutions than low WMC (although low WMC participants have preexposure benefits as well). Broadly, these results are consistent with theories of working memory and problem solving that suggest a mediating role of attention control processes. Published by Elsevier Inc.

  10. Quality correction factors of composite IMRT beam deliveries: Theoretical considerations

    International Nuclear Information System (INIS)

    Bouchard, Hugo

    2012-01-01

    Purpose: In the scope of intensity modulated radiation therapy (IMRT) dosimetry using ionization chambers, quality correction factors of plan-class-specific reference (PCSR) fields are theoretically investigated. The symmetry of the problem is studied to provide recommendable criteria for composite beam deliveries where correction factors are minimal and also to establish a theoretical limit for PCSR delivery k_Q factors. Methods: The concept of virtual symmetric collapsed (VSC) beam, being associated to a given modulated composite delivery, is defined in the scope of this investigation. Under symmetrical measurement conditions, any composite delivery has the property of having a k_Q factor identical to its associated VSC beam. Using this concept of VSC, a fundamental property of IMRT k_Q factors is demonstrated in the form of a theorem. The sensitivity to the conditions required by the theorem is thoroughly examined. Results: The theorem states that if a composite modulated beam delivery produces a uniform dose distribution in a volume V_cyl which is symmetric with the cylindrical delivery and all beams fulfill two conditions in V_cyl: (1) the dose modulation function is unchanged along the beam axis, and (2) the dose gradient in the beam direction is constant for a given lateral position; then its associated VSC beam produces no lateral dose gradient in V_cyl, no matter what beam modulation or gantry angles are being used. The examination of the conditions required by the theorem leads to the following results. The effect of the depth-dose gradient not being perfectly constant with depth on the VSC beam lateral dose gradient is found negligible. The effect of the dose modulation function being degraded with depth on the VSC beam lateral dose gradient is found to be only related to scatter and beam hardening, as the theorem holds also for diverging beams. Conclusions: The use of the symmetry of the problem in the present paper leads to a valuable theorem showing

  11. Simulations of the magnet misalignments, field errors and orbit correction for the SLC north arc

    International Nuclear Information System (INIS)

    Kheifets, S.; Chao, A.; Jaeger, J.; Shoaee, H.

    1983-11-01

    Given the intensity of the linac bunches and their repetition rate, the desired SLC luminosity of 1.0 × 10³⁰ cm⁻² sec⁻¹ requires focusing the interacting bunches to a spot size in the micrometer (μm) range. The lattice that achieves this goal is obtained by careful design of both the arcs and the final focus systems. For the micrometer range of the beam spot size both the second order geometric and chromatic aberrations may be completely destructive. The concept of second order achromat proved to be extremely important in this respect and the arcs are built essentially as a sequence of such achromats. Between the end of the linac and the interaction point (IP) there are three special sections in addition to the regular structure: a matching section (MS) designed for matching the phase space from the linac to the arcs, a reverse bend section (RB) which provides the matching when the sign of the curvature is reversed in the arc, and the final focus system (FFS). The second order calculations are done by the program TURTLE. Using the TURTLE histogram in the x-y plane and assuming an identical histogram for the south arc, the corresponding 'luminosity' L is found. The simulation of the misalignments and error effects has to be done simultaneously with the design and simulation of the orbit correction scheme. Even after the orbit is corrected and the beam can be transmitted through the vacuum chamber, the focusing of the beam to the desired size at the IP remains a serious potential problem. It is found, as will be elaborated later, that even for the best achieved orbit correction, additional corrections of the dispersion function and possibly the transfer matrix are needed. This report describes a few of the presently conceived correction schemes and summarizes some results of computer simulations done for the SLC north arc. 8 references, 12 figures, 6 tables

  12. Corrective Action Decision Document/Corrective Action Plan for the 92-Acre Area and Corrective Action Unit 111: Area 5 WMD Retired Mixed Waste Pits, Nevada National Security Site, Nevada

    International Nuclear Information System (INIS)

    2010-01-01

    This Corrective Action Decision Document/Corrective Action Plan (CADD/CAP) has been prepared for the 92-Acre Area, the southeast quadrant of the Radioactive Waste Management Site, located in Area 5 of the Nevada National Security Site (NNSS). The 92-Acre Area includes Corrective Action Unit (CAU) 111, 'Area 5 WMD Retired Mixed Waste Pits.' Data Quality Objectives (DQOs) were developed for the 92-Acre Area, which includes CAU 111. The result of the DQO process was that the 92-Acre Area is sufficiently characterized to provide the input data necessary to evaluate corrective action alternatives (CAAs) without the collection of additional data. The DQOs are included as Appendix A of this document. This CADD/CAP identifies and provides the rationale for the recommended CAA for the 92-Acre Area, provides the plan for implementing the CAA, and details the post-closure plan. When approved, this CADD/CAP will supersede the existing Pit 3 (P03) Closure Plan, which was developed in accordance with Title 40 Code of Federal Regulations (CFR) Part 265, 'Interim Status Standards for Owners and Operators of Hazardous Waste Treatment, Storage, and Disposal Facilities.' This document will also serve as the Closure Plan and the Post-Closure Plan, which are required by 40 CFR 265, for the 92-Acre Area. After closure activities are complete, a request for the modification of the Resource Conservation and Recovery Act Permit that governs waste management activities at the NNSS will be submitted to the Nevada Division of Environmental Protection to incorporate the requirements for post-closure monitoring. Four CAAs, ranging from No Further Action to Clean Closure, were evaluated for the 92-Acre Area. The CAAs were evaluated on technical merit focusing on performance, reliability, feasibility, safety, and cost. Based on the evaluation of the data used to develop the conceptual site model; a review of past, current, and future operations at the site; and the detailed and comparative

  13. Corrective Action Decision Document/Corrective Action Plan for the 92-Acre Area and Corrective Action Unit 111: Area 5 WMD Retired Mixed Waste Pits, Nevada National Security Site, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    NSTec Environmental Restoration

    2010-11-22

    This Corrective Action Decision Document/Corrective Action Plan (CADD/CAP) has been prepared for the 92-Acre Area, the southeast quadrant of the Radioactive Waste Management Site, located in Area 5 of the Nevada National Security Site (NNSS). The 92-Acre Area includes Corrective Action Unit (CAU) 111, 'Area 5 WMD Retired Mixed Waste Pits.' Data Quality Objectives (DQOs) were developed for the 92-Acre Area, which includes CAU 111. The result of the DQO process was that the 92-Acre Area is sufficiently characterized to provide the input data necessary to evaluate corrective action alternatives (CAAs) without the collection of additional data. The DQOs are included as Appendix A of this document. This CADD/CAP identifies and provides the rationale for the recommended CAA for the 92-Acre Area, provides the plan for implementing the CAA, and details the post-closure plan. When approved, this CADD/CAP will supersede the existing Pit 3 (P03) Closure Plan, which was developed in accordance with Title 40 Code of Federal Regulations (CFR) Part 265, 'Interim Status Standards for Owners and Operators of Hazardous Waste Treatment, Storage, and Disposal Facilities.' This document will also serve as the Closure Plan and the Post-Closure Plan, which are required by 40 CFR 265, for the 92-Acre Area. After closure activities are complete, a request for the modification of the Resource Conservation and Recovery Act Permit that governs waste management activities at the NNSS will be submitted to the Nevada Division of Environmental Protection to incorporate the requirements for post-closure monitoring. Four CAAs, ranging from No Further Action to Clean Closure, were evaluated for the 92-Acre Area. The CAAs were evaluated on technical merit focusing on performance, reliability, feasibility, safety, and cost. Based on the evaluation of the data used to develop the conceptual site model; a review of past, current, and future operations at the site; and the detailed

  14. Improving Performance in Quantum Mechanics with Explicit Incentives to Correct Mistakes

    Science.gov (United States)

    Brown, Benjamin R.; Mason, Andrew; Singh, Chandralekha

    2016-01-01

    An earlier investigation found that the performance of advanced students in a quantum mechanics course did not automatically improve from midterm to final exam on identical problems even when they were provided the correct solutions and their own graded exams. Here, we describe a study, which extended over four years, in which upper-level…

  15. The Maryland Division of Correction hospice program.

    Science.gov (United States)

    Boyle, Barbara A

    2002-10-01

    The Maryland Division of Correction houses 24,000 inmates in 27 geographically disparate facilities. The inmate population increasingly includes a frail, elderly component, as well as many inmates with chronic or progressive diseases. The Division houses about 900 human immunodeficiency virus (HIV)-positive detainees, almost one quarter with an acquired immune deficiency syndrome (AIDS) diagnosis. A Ryan White Special Project of National Significance (SPNS) grant and the interest of a community hospice helped transform prison hospice from idea to reality. One site is operational and a second site is due to open in the future. Both facilities serve only male inmates, who comprise more than 95% of Maryland's incarcerated. "Medical parole" is still the preferred course for terminally ill inmates; a number have been sent to various local community inpatient hospices or released to the care of their families. There will always be some who cannot be medically paroled, for whom hospice is appropriate. Maryland's prison hospice program requires a prognosis of 6 months or less to live, a do-not-resuscitate (DNR) order and patient consent. At times, the latter two of these have been problematic. Maintaining the best balance between security requirements and hospice services to dying inmates takes continual communication, coordination and cooperation. Significant complications in some areas remain: visitation to dying inmates by family and fellow prisoners; meeting special dietary requirements; what role, if any, will be played by inmate volunteers. Hospice in Maryland's Division of Correction is a work in progress.

  16. Pollution problems plague Poland

    International Nuclear Information System (INIS)

    Bajsarowicz, J.F.

    1989-01-01

    Poland's environmental problems are said to stem from investments in heavy industries that require enormous quantities of power and from the exploitation of two key natural resources: coal and sulfur. Air and water pollution problems and related public health problems are discussed

  17. New Approach to Analyzing Physics Problems: A Taxonomy of Introductory Physics Problems

    Science.gov (United States)

    Teodorescu, Raluca E.; Bennhold, Cornelius; Feldman, Gerald; Medsker, Larry

    2013-01-01

    This paper describes research on a classification of physics problems in the context of introductory physics courses. This classification, called the Taxonomy of Introductory Physics Problems (TIPP), relates physics problems to the cognitive processes required to solve them. TIPP was created in order to design educational objectives, to develop…

  18. Temporal Gain Correction for X-Ray Calorimeter Spectrometers

    Science.gov (United States)

    Porter, F. S.; Chiao, M. P.; Eckart, M. E.; Fujimoto, R.; Ishisaki, Y.; Kelley, R. L.; Kilbourne, C. A.; Leutenegger, M. A.; McCammon, D.; Mitsuda, K.

    2016-01-01

    Calorimetric X-ray detectors are very sensitive to their environment. The boundary conditions can have a profound effect on the gain, including the heat sink temperature, the local radiation temperature, bias, and the temperature of the readout electronics. Any variation in the boundary conditions can cause temporal variations in the gain of the detector and compromise both the energy scale and the resolving power of the spectrometer. Most production X-ray calorimeter spectrometers, both on the ground and in space, have some means of tracking the gain as a function of time, often using a calibration spectral line. For small gain changes, a linear stretch correction is often sufficient. However, the detectors are intrinsically non-linear and often the event analysis, i.e., shaping, optimal filters etc., adds additional non-linearity. Thus for large gain variations or when the best possible precision is required, a linear stretch correction is not sufficient. Here, we discuss a new correction technique based on non-linear interpolation of the energy-scale functions. Using Astro-H SXS calibration data, we demonstrate that the correction can recover the X-ray energy to better than 1 part in 10⁴ over the entire spectral band to above 12 keV even for large-scale gain variations. This method will be used to correct any temporal drift of the on-orbit per-pixel gain using on-board calibration sources for the SXS instrument on the Astro-H observatory.
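
    A toy version of the idea, interpolating between nonlinear energy-scale curves rather than applying a linear stretch, is sketched below. The two calibration curves, the pulse-height grid, and the single fiducial line are assumptions for illustration; the flight algorithm uses a more careful non-linear interpolation over several detector states.

```python
# Toy sketch of gain-drift correction by interpolating between two measured
# nonlinear energy-scale curves, with the mixing parameter chosen so that a
# fiducial calibration line lands at its known energy.
import numpy as np

def gain_corrected_energy(ph_events, ph_grid, E_lo, E_hi, ph_calline, E_calline):
    e_lo = np.interp(ph_calline, ph_grid, E_lo)   # cal-line energy under each curve
    e_hi = np.interp(ph_calline, ph_grid, E_hi)
    t = (E_calline - e_lo) / (e_hi - e_lo)        # mixing parameter from the cal line
    E_mix = (1.0 - t) * E_lo + t * E_hi           # interpolated energy-scale curve
    return np.interp(ph_events, ph_grid, E_mix)   # apply it to the event pulse heights
```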

  19. eNAL: An Extension of the NAL Setup Correction Protocol for Effective Use of Weekly Follow-up Measurements

    International Nuclear Information System (INIS)

    Boer, Hans C.J. de; Heijmen, Ben J.M.

    2007-01-01

    Purpose: The no action level (NAL) protocol reduces systematic displacements relative to the planning CT scan by using the mean displacement of the first few treatment fractions as a setup correction in all subsequent fractions. This approach may become nonoptimal in case of time trends or transitions in the systematic displacement of a patient. Here, the extended NAL (eNAL) protocol is introduced to cope with this problem. Methods and Materials: The initial setup correction of eNAL is the same as in NAL. However, in eNAL, additional weekly follow-up measurements are performed. The setup correction is updated after each follow-up measurement based on linear regression of the available measured displacements to track and correct systematic time-dependent changes. We investigated the performance of eNAL with Monte Carlo simulations for populations without systematic displacement changes over time, with large gradual changes (time trends), and with large sudden changes (transitions). Weekly follow-up measurements were simulated for 35 treatment fractions. We compared the outcome of eNAL with NAL and optimized shrinking action level (SAL) protocol with weekly measurements. Results: Without time-dependent changes, eNAL, SAL, and NAL performed comparably, but SAL required the largest imaging workload. For time trends and transitions, eNAL performed superiorly to the other protocols and reduced systematic displacements to the same magnitude as in case of no time-dependent changes (SD ∼1 mm). Conclusion: Extended NAL can reduce systematic displacements to a minor level irrespective of the precise nature of the systematic time-dependent changes that may occur in a population
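
    The regression step of eNAL is simple enough to sketch directly: after each follow-up measurement a straight line is fit to all measured displacements versus fraction number, and the value predicted for the next fraction becomes the new setup correction. The per-axis treatment and variable names below are illustrative.

```python
# Sketch of the eNAL update for one displacement axis (mm).
import numpy as np

def enal_correction(fractions, displacements, next_fraction):
    """fractions: fraction numbers with setup measurements; displacements: same length."""
    if len(fractions) < 2:                        # too few points: fall back to the mean (NAL)
        return float(np.mean(displacements))
    slope, intercept = np.polyfit(fractions, displacements, 1)
    return slope * next_fraction + intercept      # predicted systematic displacement

# First three fractions measured daily, then a weekly follow-up at fraction 8;
# the correction applied at fraction 9 tracks the upward trend.
print(enal_correction([1, 2, 3, 8], [2.1, 2.4, 2.0, 3.5], next_fraction=9))
```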

  20. Stray light correction on array spectroradiometers for optical radiation risk assessment in the workplace

    International Nuclear Information System (INIS)

    Barlier-Salsi, A

    2014-01-01

    The European directive 2006/25/EC requires the employer to assess and, if necessary, measure the levels of exposure to optical radiation in the workplace. Array spectroradiometers can measure optical radiation from various types of sources; however poor stray light rejection affects their accuracy. A stray light correction matrix, using a tunable laser, was developed at the National Institute of Standards and Technology (NIST). As tunable lasers are very expensive, the purpose of this study was to implement this method using only nine low power lasers; other elements of the correction matrix being completed by interpolation and extrapolation. The correction efficiency was evaluated by comparing CCD spectroradiometers with and without correction and a scanning double monochromator device as reference. Similar to findings recorded by NIST, these experiments show that it is possible to reduce the spectral stray light by one or two orders of magnitude. In terms of workplace risk assessment, this spectral stray light correction method helps determine exposure levels, with an acceptable degree of uncertainty, for the majority of workplace situations. The level of uncertainty depends upon the model of spectroradiometers used; the best results are obtained with CCD detectors having an enhanced spectral sensitivity in the UV range. Thus corrected spectroradiometers require a validation against a scanning double monochromator spectroradiometer before using them for risk assessment in the workplace. (paper)
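
    Once the stray-light distribution matrix has been assembled from the laser measurements (plus interpolation and extrapolation), applying the correction amounts to one linear solve per spectrum, in the spirit of the NIST method cited above. The sketch below uses a small synthetic matrix; a real matrix would be built from the measured line-spread columns.

```python
# Applying a spectral stray-light correction matrix: the measured spectrum is
# modeled as y_meas = (I + D) @ y_true, so the corrected spectrum is the solution
# of that linear system (equivalent to multiplying by the inverse matrix).
import numpy as np

def correct_stray_light(y_meas, D):
    A = np.eye(len(y_meas)) + D
    return np.linalg.solve(A, y_meas)

# Toy check with a synthetic stray-light matrix (off-diagonal, ~1e-4 level)
n = 256
D = 1e-4 * np.random.default_rng(2).random((n, n))
np.fill_diagonal(D, 0.0)                       # in-band response sits on the identity
y_true = np.exp(-0.5 * ((np.arange(n) - 100) / 8.0) ** 2)
y_meas = (np.eye(n) + D) @ y_true
print(np.allclose(correct_stray_light(y_meas, D), y_true))   # True
```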

  1. Stray light correction on array spectroradiometers for optical radiation risk assessment in the workplace.

    Science.gov (United States)

    Barlier-Salsi, A

    2014-12-01

    The European directive 2006/25/EC requires the employer to assess and, if necessary, measure the levels of exposure to optical radiation in the workplace. Array spectroradiometers can measure optical radiation from various types of sources; however poor stray light rejection affects their accuracy. A stray light correction matrix, using a tunable laser, was developed at the National Institute of Standards and Technology (NIST). As tunable lasers are very expensive, the purpose of this study was to implement this method using only nine low power lasers; other elements of the correction matrix being completed by interpolation and extrapolation. The correction efficiency was evaluated by comparing CCD spectroradiometers with and without correction and a scanning double monochromator device as reference. Similar to findings recorded by NIST, these experiments show that it is possible to reduce the spectral stray light by one or two orders of magnitude. In terms of workplace risk assessment, this spectral stray light correction method helps determine exposure levels, with an acceptable degree of uncertainty, for the majority of workplace situations. The level of uncertainty depends upon the model of spectroradiometers used; the best results are obtained with CCD detectors having an enhanced spectral sensitivity in the UV range. Thus corrected spectroradiometers require a validation against a scanning double monochromator spectroradiometer before using them for risk assessment in the workplace.

  2. 77 FR 59139 - Prompt Corrective Action, Requirements for Insurance, and Promulgation of NCUA Rules and Regulations

    Science.gov (United States)

    2012-09-26

    ... accounting principles and voluntary audits; prompt corrective action for new credit unions; and assistance... in assets accounted for only 18 percent of losses, although accounting for 222, or over 84 percent... to adhere to fundamental federalism principles. This proposed rule and IRPS would not have a...

  3. Determination of velocity correction factors for real-time air velocity monitoring in underground mines

    OpenAIRE

    Zhou, Lihong; Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-01-01

    When air velocity sensors are installed in the mining industry for real-time airflow monitoring, a problem exists in how the air velocity monitored at a fixed location corresponds to the average air velocity, which, together with the entry cross-sectional area, is used to determine the volume flow rate of air. Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction fac...
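
    The role of such a correction factor can be illustrated with a one-line calculation (the numbers below are assumed, not from the study): the fixed-point sensor reading is scaled to an average velocity and multiplied by the entry cross-sectional area to give the volume flow rate.

```python
# Q = k * v_sensor * A, where k converts the fixed-point reading to the average velocity.
def volume_flow(v_sensor_mps, correction_factor, area_m2):
    return correction_factor * v_sensor_mps * area_m2

# Centerline reading of 2.5 m/s, correction factor 0.8, 20 m^2 entry: 40 m^3/s
print(volume_flow(2.5, 0.8, 20.0), "m^3/s")
```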

  4. Vision impairment and corrective considerations of civil airmen.

    Science.gov (United States)

    Nakagawara, V B; Wood, K J; Montgomery, R W

    1995-08-01

    Civil aviation is a major commercial and technological industry in the United States. The Federal Aviation Administration (FAA) is responsible for the regulation and promotion of aviation safety in the National Airspace System. To guide FAA policy changes and educational programs for aviation personnel about vision impairment and the use of corrective ophthalmic devices, the demographics of the civil airman population were reviewed. Demographic data from 1971-1991 were extracted from FAA publications and databases. Approximately 48 percent of the civil airman population is equal to or older than 40 years of age (average age = 39.8 years). Many of these aviators are becoming presbyopic and will need corrective devices for near and intermediate vision. In fact, there has been approximately a 12 percent increase in the number of aviators with near vision restrictions during the past decade. Ophthalmic considerations for prescribing and dispensing eyewear for civil aviators are discussed. The correction of near and intermediate vision conditions for older pilots will be a major challenge for eye care practitioners in the next decade. Knowledge of the unique vision and environmental requirements of the civilian airman can assist clinicians in suggesting alternative vision corrective devices better suited for a particular aviation activity.

  5. High-speed atmospheric correction for spectral image processing

    Science.gov (United States)

    Perkins, Timothy; Adler-Golden, Steven; Cappelaere, Patrice; Mandl, Daniel

    2012-06-01

    Land and ocean data product generation from visible-through-shortwave-infrared multispectral and hyperspectral imagery requires atmospheric correction or compensation, that is, the removal of atmospheric absorption and scattering effects that contaminate the measured spectra. We have recently developed a prototype software system for automated, low-latency, high-accuracy atmospheric correction based on a C++-language version of the Spectral Sciences, Inc. FLAASH™ code. In this system, pre-calculated look-up tables replace on-the-fly MODTRAN® radiative transfer calculations, while the portable C++ code enables parallel processing on multicore/multiprocessor computer systems. The initial software has been installed on the Sensor Web at NASA Goddard Space Flight Center, where it is currently atmospherically correcting new data from the EO-1 Hyperion and ALI sensors. Computation time is around 10 s per data cube per processor. Further development will be conducted to implement the new atmospheric correction software on board the upcoming HyspIRI mission's Intelligent Payload Module, where it would generate data products in near-real time for Direct Broadcast to the ground. The rapid turn-around of data products made possible by this software would benefit a broad range of applications in areas of emergency response, environmental monitoring and national defense.

  6. Correction of spectral and temporal phases for ultra-intense lasers; Correction des phases spectrale et temporelle pour les lasers ultra-intenses

    Energy Technology Data Exchange (ETDEWEB)

    Salmon, E

    2000-12-15

    The discovery of new regimes of interaction between laser and matter requires the production of laser pulses with higher luminous flux density. The only solutions that allow very high powers (about ten petawatts) to be reached imply the correction of non-linear effects before compressing the laser pulse, so that the phase modulation is not transferred to the amplitude modulation. The aim of this work is the correction of the spectral phase through the modulation of the temporal phase. The first chapter is dedicated to a review of the physical phenomena involved in the interaction of an ultra-intense laser pulse with matter. The petawatt laser operating on the LIL (integrated laser line), the prototype line of the Megajoule Laser, is described in the second chapter. The third chapter presents the method used and optimized for obtaining an absolute measurement of the spectral phase in our experimental configuration. The fourth chapter details the analogy between the spatial domain and the temporal domain, particularly between diffraction and dispersion. This analogy has allowed us to benefit from the knowledge accumulated in the spatial domain, particularly the treatment of aberrations and their impact on the focal spot, and to use it in the temporal domain. The principle of the phase correction is exposed in the fifth chapter. We have formalized the correspondence of the phase modulation between the temporal domain and the spectral domain for strongly stretched pulses. In this way a modulation of the temporal phase is turned into a modulation of the spectral phase. All the measurements concerning the phases and the spectral phase modulation correction are presented in the sixth chapter. In the last chapter we propose an extension of the temporal phase correction by correcting non-linear effects directly in the temporal phase. This correction will improve the performance of the petawatt laser. Numerical simulations show that the temporal phase correction can lead to a

  7. A Generalized Cauchy Distribution Framework for Problems Requiring Robust Behavior

    Directory of Open Access Journals (Sweden)

    Carrillo, Rafael E

    2010-01-01

    Full Text Available Statistical modeling is at the heart of many engineering problems. The importance of statistical modeling emanates not only from the desire to accurately characterize stochastic events, but also from the fact that distributions are the central models utilized to derive sample processing theories and methods. The generalized Cauchy distribution (GCD) family has a closed-form pdf expression across the whole family as well as algebraic tails, which makes it suitable for modeling many real-life impulsive processes. This paper develops a GCD theory-based approach that allows challenging problems to be formulated in a robust fashion. Notably, the proposed framework subsumes generalized Gaussian distribution (GGD) family-based developments, thereby guaranteeing performance improvements over traditional GCD-based problem formulation techniques. This robust framework can be adapted to a variety of applications in signal processing. As examples, we formulate four practical applications under this framework: (1) filtering for power line communications, (2) estimation in sensor networks with noisy channels, (3) reconstruction methods for compressed sensing, and (4) fuzzy clustering.

  8. The family mass hierarchy problem in bosonic technicolor

    International Nuclear Information System (INIS)

    Kagan, A.; Samuel, S.

    1990-01-01

    We use a multiple Higgs system to analyze the family mass hierarchy problem in bosonic technicolor. Dependence on a wide range of Yukawa couplings, λ, for quark and lepton mass generation is greatly reduced, i.e., λ ≅ 0.1 to 1. Third and second generation masses are produced at tree-level, the latter via a see-saw mechanism. We use radiative corrections as a source for many mixing angles and first generation masses. A hierarchy of family masses with small off-diagonal Kobayashi-Maskawa entries naturally arises. A higher scale of 1-10 TeV for Higgs masses and supersymmetry breaking is needed to alleviate difficulties with flavor-changing effects. Such a large scale is a feature of bosonic technicolor and no fine-tuning is required to obtain electroweak breaking at ≅ 100 GeV. Bosonic technicolor is therefore a natural framework for multi-Higgs systems. (orig.)

  9. Correcting for particle counting bias error in turbulent flow

    Science.gov (United States)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    Even with an ideal seeding device that generates particles which exactly follow the flow, measurements are still subject to a major source of error: a particle counting bias, wherein the probability of measuring a velocity is a function of the velocity itself. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is no universal agreement as to the acceptability of any one method. In particular it is sometimes difficult to know if the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation is constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.
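
    One widely used correction for this counting bias, residence-time weighting, is easy to state: faster particles cross the measuring volume more often but spend less time in it, so weighting each sample by its transit time counteracts the bias. The sketch below is a generic illustration and not necessarily the scheme favored by the simulator study described here.

```python
# Residence-time-weighted mean velocity as a counting-bias correction.
import numpy as np

def weighted_mean_velocity(velocities, residence_times):
    u = np.asarray(velocities, dtype=float)
    w = np.asarray(residence_times, dtype=float)
    return float(np.sum(w * u) / np.sum(w))

# Idealized example: more fast samples are detected, so the arithmetic mean is
# biased high; transit-time weighting (here ~ 1/|u|) pulls the estimate back down.
u = np.array([1.0, 1.2, 3.0, 3.2, 3.1])
tau = 1.0 / np.abs(u)
print(u.mean(), weighted_mean_velocity(u, tau))
```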

  10. True coincidence summing correction determination for 214Bi principal gamma lines in NORM samples

    International Nuclear Information System (INIS)

    Haddad, Kh.

    2014-01-01

    The gamma lines at 609.3 and 1,120.3 keV are two of the most intense γ emissions of ²¹⁴Bi, but they have serious true coincidence summing (TCS) effects due to the complex decay scheme with multi-cascading transitions. TCS effects cause inaccurate count rates and hence erroneous results. A simple and easy experimental method for determination of the TCS correction of ²¹⁴Bi gamma lines was developed in this work using naturally occurring radioactive material samples. Height efficiency and self-attenuation corrections were determined as well. The developed method has been formulated theoretically and validated experimentally. The correction problems were solved simply with neither an additional standard source nor simulation skills. (author)

  11. Capturing security requirements for software systems.

    Science.gov (United States)

    El-Hadary, Hassan; El-Kassas, Sherif

    2014-07-01

    Security is often an afterthought during software development. Realizing security early, especially in the requirement phase, is important so that security problems can be tackled early enough before going further in the process and avoid rework. A more effective approach for security requirement engineering is needed to provide a more systematic way for eliciting adequate security requirements. This paper proposes a methodology for security requirement elicitation based on problem frames. The methodology aims at early integration of security with software development. The main goal of the methodology is to assist developers elicit adequate security requirements in a more systematic way during the requirement engineering process. A security catalog, based on the problem frames, is constructed in order to help identifying security requirements with the aid of previous security knowledge. Abuse frames are used to model threats while security problem frames are used to model security requirements. We have made use of evaluation criteria to evaluate the resulting security requirements concentrating on conflicts identification among requirements. We have shown that more complete security requirements can be elicited by such methodology in addition to the assistance offered to developers to elicit security requirements in a more systematic way.

  14. Some problems in the interpretation of isotope measurements in British aquifers

    International Nuclear Information System (INIS)

    Evans, G.V.; Otlet, R.L.

    1978-01-01

    Corrections allowing for the dilution of biogenic carbon in the determination of groundwater ages by 14C measurements are discussed. A reappraisal is made of the procedure used in earlier work (on the Lincolnshire Limestone) to allow for dilution during an incongruent stage of dissolution, and a new correction formula is derived for groundwaters that have undergone both the congruent and incongruent stages of dissolution. The significance of such corrections in relation to other limitations imposed by the mixing of groundwaters of different ages is also discussed. Some of the practical problems encountered are illustrated by the dating of groundwaters from the Carboniferous Limestone and the Lower Greensand. In the Lower Greensand, for example, the problem is further complicated by uncertainty in the 'rock' carbonate value of δ13C, which is affected by the presence of both freshwater and marine carbonate cements. (orig.)
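
    The new correction formula itself is not reproduced in the abstract, but the classical δ13C (Pearson-type) dilution correction that such work builds on is simple to state: the fraction q of dissolved carbon that is biogenic is estimated from the mixing of soil-CO2 and rock-carbonate δ13C end members, and the initial 14C activity is scaled by q before the decay equation is applied. The Python fragment below sketches only that textbook correction, with illustrative end-member values; it is not the formula derived in the paper.

```python
import math

T_HALF_14C = 5730.0                     # years (Cambridge half-life)

def pearson_dilution_factor(d13c_sample, d13c_biogenic=-25.0, d13c_carbonate=0.0):
    """Fraction of dissolved carbon of biogenic (soil-CO2) origin,
    estimated from delta-13C mixing between the two end members."""
    return (d13c_sample - d13c_carbonate) / (d13c_biogenic - d13c_carbonate)

def corrected_age(a_measured_pmc, d13c_sample, a_initial_pmc=100.0):
    """Groundwater age (years) from the measured 14C activity (pmC),
    after correcting the initial activity for carbonate dilution."""
    q = pearson_dilution_factor(d13c_sample)
    a0 = q * a_initial_pmc              # diluted initial activity
    return (T_HALF_14C / math.log(2)) * math.log(a0 / a_measured_pmc)

# Example with illustrative numbers: 30 pmC measured, delta-13C of -12 permil.
print(f"apparent age (no dilution correction): {corrected_age(30.0, -25.0):,.0f} a")
print(f"corrected age (q applied):             {corrected_age(30.0, -12.0):,.0f} a")
```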

  15. ADC common noise correction and zero suppression in the PIBETA detector

    International Nuclear Information System (INIS)

    Frlez, E.; Pocanic, D.; Ritt, S.

    2001-01-01

    We describe a simple procedure for reducing Analog-to-Digital Converter (ADC) common noise in modular detectors that requires no additional hardware. The method, based on detector noise groups, should work well for modular particle detectors such as segmented electromagnetic calorimeters, plastic scintillator hodoscopes, cathode-strip wire chambers, segmented active targets, and the like. We demonstrate this 'second pedestal noise correction' method by comparing representative ADC pedestal spectra for various elements of the PIBETA detector before and after the correction is applied.
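
    As a rough illustration of what such a software-only common-mode correction can look like (a generic sketch, not the PIBETA code), the fragment below groups channels that share a noise source, estimates the event-by-event common offset from the channels in each group that carry no physics signal, and subtracts that 'second pedestal' from every channel in the group.

```python
import numpy as np

def common_noise_correct(adc, noise_groups, pedestals, signal_threshold=20.0):
    """Subtract per-event common-mode noise from pedestal-subtracted ADC values.

    adc           : (n_channels,) raw ADC values for one event
    noise_groups  : list of index arrays, one per group of channels that
                    share a common noise source (e.g., one ADC module)
    pedestals     : (n_channels,) per-channel pedestal means
    """
    corrected = adc.astype(float) - pedestals
    for group in noise_groups:
        vals = corrected[group]
        # Channels below threshold are assumed to carry no physics signal;
        # their mean estimates the common-mode shift ("second pedestal").
        quiet = vals[np.abs(vals) < signal_threshold]
        if quiet.size:
            corrected[group] -= quiet.mean()
    return corrected

# Tiny illustrative event: 8 channels in two groups, common shift of +7 in group 0.
pedestals = np.full(8, 100.0)
event = np.array([107, 108, 106, 350, 101, 99, 100, 102], dtype=float)
groups = [np.arange(0, 4), np.arange(4, 8)]
print(common_noise_correct(event, groups, pedestals))
```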

  16. Advances in orbit drift correction in the advanced photon source storage ring

    International Nuclear Information System (INIS)

    Emery, L.; Borland, M.

    1997-01-01

    The Advanced Photon Source storage ring is required to provide X-ray beams of high positional stability, specified as 17 μm rms in the horizontal plane and 4.4 μm rms in the vertical plane. The authors report on the difficult task of stabilizing the slow drift component of the orbit motion down to a few microns rms using workstation-based orbit correction. There are two aspects to consider separately: the correction algorithm and the configuration of the beam position monitors (BPMs) and correctors. Three notable features of the correction algorithm are: low-pass digital filtering of the BPM readbacks; "despiking" of the filtered orbit, which desensitizes the orbit correction to spurious BPM readbacks without requiring a change to the correction matrix; and BPM intensity-dependent offset compensation. The BPM/corrector configuration includes all of the working BPMs but only a small set of correctors distributed around the ring. Thus only those orbit modes that are most likely to represent real beam drift are handled by the correction algorithm.
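
    None of the APS code is given in the abstract, so the sketch below is only a generic Python illustration of the ingredients listed: an exponential low-pass filter on successive BPM readbacks, a median-based 'despiking' step that replaces outlier readings, and a least-squares correction using the pseudo-inverse of an assumed corrector-to-BPM response matrix. All sizes and values are placeholders.

```python
import numpy as np

def lowpass(prev_filtered, readback, alpha=0.1):
    """First-order IIR low-pass filter applied to successive BPM readbacks."""
    return (1.0 - alpha) * prev_filtered + alpha * readback

def despike(orbit, n_sigma=4.0):
    """Replace readings far from the median orbit by the median, so one faulty
    BPM does not drive the correction (the correction matrix stays unchanged)."""
    med = np.median(orbit)
    mad = np.median(np.abs(orbit - med)) + 1e-12
    cleaned = orbit.copy()
    cleaned[np.abs(orbit - med) > n_sigma * 1.4826 * mad] = med
    return cleaned

def corrector_kicks(orbit_error, response_matrix):
    """Least-squares corrector settings that cancel the measured orbit error."""
    return -np.linalg.pinv(response_matrix) @ orbit_error

# Illustrative sizes: 40 BPMs, 6 correctors, placeholder response matrix.
rng = np.random.default_rng(1)
R = rng.normal(size=(40, 6))
true_drift = 5e-6 * rng.normal(size=40)            # slow orbit drift (metres)

filtered = np.zeros(40)
for _ in range(200):                               # successive noisy readbacks
    readback = true_drift + 1e-6 * rng.normal(size=40)
    readback[17] = 2e-3                            # one spurious BPM
    filtered = lowpass(filtered, readback)

kicks = corrector_kicks(despike(filtered), R)
print("corrector kicks:", kicks)
```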

  17. Efficient bias correction for magnetic resonance image denoising.

    Science.gov (United States)

    Mukherjee, Partha Sarathi; Qiu, Peihua

    2013-05-30

    Magnetic resonance imaging (MRI) is a popular radiology technique used for visualizing detailed internal structure of the body. Observed MRI images are generated by inverse Fourier transformation of the frequency signals received by the magnetic resonance scanner. Previous research has demonstrated that the random noise in observed MRI images can be described adequately by the so-called Rician noise model. Under that model, the observed image intensity at a given pixel is a nonlinear function of the true image intensity and of two independent zero-mean random variables with the same normal distribution. Because of this complicated noise structure, images denoised by conventional methods are usually biased, and the bias can reduce image contrast and negatively affect subsequent image analysis. It is therefore important to address the bias issue properly. To this end, several bias-correction procedures have been proposed in the literature. In this paper, we study the Rician noise model and the corresponding bias-correction problem systematically and propose a new, more effective bias-correction formula based on regression analysis and Monte Carlo simulation. Numerical studies show that the proposed method works well in various applications. Copyright © 2012 John Wiley & Sons, Ltd.
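
    The paper's own correction formula is obtained by regression on Monte Carlo data and is not reproduced in the abstract; the Python fragment below only illustrates the Rician observation model and the widely used first-order correction (subtracting 2σ² from the squared denoised intensity), which is the kind of formula the paper improves upon. Everything here is a generic sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def rician_observe(true_intensity, sigma, rng):
    """Rician observation: magnitude of the true signal plus complex Gaussian noise."""
    n1 = rng.normal(0.0, sigma, size=np.shape(true_intensity))
    n2 = rng.normal(0.0, sigma, size=np.shape(true_intensity))
    return np.sqrt((true_intensity + n1) ** 2 + n2 ** 2)

def conventional_bias_correction(denoised, sigma):
    """Classical first-order correction: E[M^2] = A^2 + 2*sigma^2 for Rician data."""
    return np.sqrt(np.maximum(denoised ** 2 - 2.0 * sigma ** 2, 0.0))

# Flat test "image": true intensity 50, noise sigma 20.
A, sigma = 50.0, 20.0
observed = rician_observe(np.full(100_000, A), sigma, rng)
denoised = np.full_like(observed, observed.mean())   # stand-in for any smoother

print(f"true intensity         {A:.1f}")
print(f"denoised (biased)      {denoised.mean():.1f}")
print(f"after 2*sigma^2 corr.  {conventional_bias_correction(denoised, sigma).mean():.1f}")
```

    With this flat test image the simple formula still leaves a residual bias, because the correction is applied to the already smoothed intensities rather than to their squares; shortcomings of this kind are what motivate more refined corrections such as the one proposed in the paper.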

  18. The Unreasonable Destructiveness of Political Correctness in Philosophy

    Directory of Open Access Journals (Sweden)

    Manuel Doria

    2017-08-01

    I submit that epistemic progress in key areas of contemporary academic philosophy has been compromised by politically correct (“PC”) ideology. First, guided by an evolutionary account of ideology, results from social and cognitive psychology, and formal philosophical methods, I expose evidence for political bias in contemporary Western academia and sketch a formalization of the contents of beliefs from the PC worldview taken to be of core importance: the theory of social oppression and the thesis of anthropological mental egalitarianism. Then, aided by discussions from contemporary epistemology on epistemic values, I model the problem of epistemic appraisal using the frameworks of multi-objective optimization theory and multi-criteria decision analysis, and I apply it to politically correct philosophy. I conclude that philosophy guided by politically correct values is bound to produce constructs that are less truth-conducive, and that spurious, ideologically motivated values should be abandoned. Objections to my framework stemming from contextual empiricism, the feminine voice in ethics, and political philosophy are considered. I conclude by prescribing the epistemic value of epistemic adequacy, the contextual value of political diversity, and the moral virtue of moral courage to reverse unwarranted trends in academic philosophy due to PC ideology.

  19. Source brightness fluctuation correction of solar absorption fourier transform mid infrared spectra

    Directory of Open Access Journals (Sweden)

    T. Ridder

    2011-06-01

    The precision and accuracy of trace gas observations using solar absorption Fourier transform infrared spectrometry depend on the stability of the light source. Fluctuations in the source brightness, however, cannot always be avoided. Current correction schemes, which calculate a corrected interferogram as the ratio of the raw DC interferogram to a smoothed DC interferogram, are applicable only to near-infrared measurements. Spectra in the mid-infrared spectral region below 2000 cm−1 are generally considered uncorrectable if they are measured with an MCT detector, because such measurements introduce an unknown offset into the MCT interferograms, which prevents the established source brightness fluctuation correction. This problem can be overcome by determining the offset from the modulation efficiency of the instrument. With known modulation efficiency the offset can be calculated, and the source brightness correction can be performed on the basis of offset-corrected interferograms. We present a source brightness fluctuation correction method that smooths the raw DC interferogram in the interferogram domain by applying a running mean, instead of high-pass filtering the corresponding spectrum after Fourier transformation of the raw DC interferogram. This smoothing can be performed with the onboard software of commercial instruments. The improvement of MCT spectra and of subsequent ozone profile and total column retrievals is demonstrated. Application to InSb interferograms in the near-infrared spectral region proves the equivalence with the established correction scheme.
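
    A back-of-the-envelope version of the ratio correction described above can be written in a few lines of Python (a generic sketch with a synthetic interferogram, not the authors' onboard implementation): an assumed detector offset is removed first, the slowly varying source brightness is estimated with a running mean of the DC interferogram, and the offset-corrected interferogram is divided by that envelope.

```python
import numpy as np

def running_mean(x, width):
    """Simple boxcar smoothing of the DC interferogram."""
    kernel = np.ones(width) / width
    return np.convolve(x, kernel, mode="same")

def brightness_correction(raw_dc, offset, width=501):
    """Ratio-type source brightness fluctuation correction.

    raw_dc : measured DC-coupled interferogram
    offset : detector offset (e.g., derived from the modulation efficiency)
    """
    dc = raw_dc - offset                       # offset-corrected interferogram
    envelope = running_mean(dc, width)         # slowly varying source brightness
    return dc / envelope * np.mean(envelope)   # rescale to the mean brightness

# Synthetic example: modulated interferogram on a drifting source with an offset.
n = np.arange(20000)
source = 1.0 + 0.05 * np.sin(2 * np.pi * n / 15000)          # brightness drift
igram = source * (1.0 + 0.3 * np.cos(2 * np.pi * 0.01 * n))  # DC interferogram
measured = igram + 0.2                                        # unknown detector offset
corrected = brightness_correction(measured, offset=0.2)
```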

  20. Non-uniformity Correction of Infrared Images by Midway Equalization

    Directory of Open Access Journals (Sweden)

    Yohann Tendero

    2012-07-01

    Non-uniformity is a time-dependent noise caused by the lack of sensor equalization. We present here the detailed algorithm and an online demo of the non-uniformity correction method by midway infrared equalization. The method was designed for infrared images, but it can also be applied to images produced, for example, by scanners or by push-broom satellites. The resulting single-image method works on static images, is fully automatic (it has no user parameter), and requires no registration. It needs no camera motion compensation and no closed-aperture sensor equalization, and it is able to correct a fully non-linear non-uniformity.
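
    Midway equalization gives each column the same grey-level histogram without choosing one column as a reference. A simple way to sketch the column-wise idea (an illustrative Python reimplementation, not the authors' code) is to sort each column, average the sorted columns to obtain the 'midway' quantile function, and then assign each pixel the midway value corresponding to its rank inside its own column.

```python
import numpy as np

def midway_column_equalization(img):
    """Give every column the same (midway) histogram.

    img : 2-D float array (rows x columns); columns are assumed to see the
          same scene statistics, so per-column histogram differences are
          attributed to sensor non-uniformity.
    """
    img = np.asarray(img, dtype=float)
    # Quantile function of each column = its sorted values.
    sorted_cols = np.sort(img, axis=0)
    # The midway quantile function is the average of the per-column ones.
    midway = sorted_cols.mean(axis=1)
    # Map each pixel to the midway value at its rank within its column.
    ranks = np.argsort(np.argsort(img, axis=0), axis=0)
    return midway[ranks]

# Toy example: a smooth scene plus strong per-column gain/offset stripes.
rng = np.random.default_rng(0)
rows, cols = 128, 64
scene = np.linspace(0.0, 1.0, rows)[:, None] * np.ones((1, cols))
gain = 1.0 + 0.3 * rng.standard_normal(cols)
offset = 0.2 * rng.standard_normal(cols)
striped = scene * gain + offset
clean = midway_column_equalization(striped)
print("column-mean spread before:", striped.mean(axis=0).std().round(4))
print("column-mean spread after: ", clean.mean(axis=0).std().round(4))
```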