WorldWideScience

Sample records for process requiring constant

  1. Development code for group constant processing

    International Nuclear Information System (INIS)

    Su'ud, Z.

    1997-01-01

    In this paper, the methods, formalisms and algorithms related to the processing of group constants from a basic library such as ENDF/B-VI are described. The problems can basically be grouped as follows: the treatment of resolved resonances using the NR approximation, the treatment of unresolved resonances using statistical methods, the treatment of low-lying resonances using the intermediate resonance approximation, the treatment of the thermal energy region, and the treatment of group transfer matrix cross sections. It is necessary to treat the interference between resonances properly, especially in the unresolved region. In this paper the resonance problems are treated with the Breit-Wigner formalism, and the Doppler function is evaluated using a Padé approximation for calculational efficiency. Finally, some samples of calculated results for selected nuclei, mainly comparisons between the various methods, are discussed.
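    As a point of reference for the resonance treatment described above, the sketch below evaluates a Doppler-broadened single-level Breit-Wigner capture cross section through the ψ broadening function. The abstract mentions a Padé approximation of the Doppler function; here ψ is computed exactly from the Faddeeva function (which a Padé fit would approximate), and the resonance parameters are illustrative rather than taken from ENDF/B-VI.

```python
# Hedged sketch: Doppler-broadened single-level Breit-Wigner capture cross section.
# The psi function is evaluated via the Faddeeva function as a reference; a Pade fit,
# as mentioned in the abstract, would approximate this. Parameters are illustrative.
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def psi(zeta, x):
    """Doppler broadening function psi(zeta, x) = (sqrt(pi)*zeta/2) * Re w((zeta/2)*(x + i))."""
    return 0.5 * np.sqrt(np.pi) * zeta * wofz(0.5 * zeta * (x + 1j)).real

def sigma_capture(E, E0, Gamma, Gamma_g, sigma0, T, A):
    """SLBW capture cross section (barns) Doppler-broadened to temperature T (K)."""
    kB = 8.617333e-5                            # Boltzmann constant, eV/K
    delta = np.sqrt(4.0 * E0 * kB * T / A)      # Doppler width (eV)
    zeta = Gamma / delta
    x = 2.0 * (E - E0) / Gamma
    return sigma0 * (Gamma_g / Gamma) * np.sqrt(E0 / E) * psi(zeta, x)

# Example: an illustrative low-lying resonance near 6.67 eV (U-238-like numbers, approximate)
E = np.linspace(6.0, 7.4, 400)
for T in (300.0, 1200.0):
    sig = sigma_capture(E, E0=6.67, Gamma=0.0275, Gamma_g=0.023, sigma0=2.2e4, T=T, A=238.0)
    print(f"T = {T:6.0f} K   peak capture cross section = {sig.max():8.1f} b  (broadening lowers the peak)")
```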

  2. ℋ∞ constant gain state feedback stabilization of stochastic hybrid systems with Wiener process

    Directory of Open Access Journals (Sweden)

    E. K. Boukas

    2004-01-01

    This paper considers the stabilization problem for the class of continuous-time linear stochastic hybrid systems with a Wiener process. The ℋ∞ state feedback stabilization problem is treated. A state feedback controller with constant gain that does not require access to the system mode is designed. LMI-based conditions are developed to design the constant-gain state feedback controller that stochastically stabilizes the studied class of systems and, at the same time, achieves disturbance rejection at a desired level. The minimum disturbance rejection is also determined. Numerical examples are given to show the usefulness of the proposed results.
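    To make the idea of a single constant gain that ignores the switching mode concrete, the following sketch computes a stabilizing state-feedback gain for a nominal linear model. Note the swap: the paper's synthesis is LMI-based and tailored to stochastic hybrid systems with a Wiener process, whereas this stand-in uses an ordinary continuous-time Riccati (LQR) solution; the plant matrices and weights are illustrative assumptions.

```python
# Hedged sketch: one constant state-feedback gain u = -K x for a nominal linear model.
# This is a Riccati/LQR stand-in for the paper's LMI synthesis, used only to illustrate
# "one constant gain, no knowledge of the system mode"; all numbers are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])          # illustrative unstable nominal dynamics
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                        # state weight (illustrative)
R = np.array([[1.0]])                # input weight (illustrative)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # constant gain, independent of any switching mode
print("K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```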

  3. Investigation of the evaporation process conditions on the optical constants of zirconia films

    International Nuclear Information System (INIS)

    Dobrowolski, J.A.; Grant, P.D.; Simpson, R.; Waldorf, A.

    1989-01-01

    Deposition parameters required for producing zirconia films for use in optical multilayer systems by electron-beam gun evaporation of zirconia and zirconium starting materials were investigated. The optical constants were determined as a function of distance, partial pressure of oxygen, and angle of incidence. The direct and reactive evaporation processes yielded ZrO2 films with refractive indices of 2.08 and 2.14, respectively, for vapor incident on the substrate at normal incidence.

  4. Process heat supply requirements on HTGRs

    International Nuclear Information System (INIS)

    Schad, M.K.

    1989-01-01

    Since it has been claimed that the MHTGR is competitive with coal in producing electricity, the MHTGR must also be competitive in producing process heat. There is a huge process heat market and there are quite a number of processes where the industrial MHTGR (HTRI) could supply the necessary process heat and energy. However, to enhance its introduction on the market and to conquer a reasonable share of the market, the HTRI should fulfill the following major requirements: unlimited, constant and flexible heat supply; no secondary heat transport system at higher temperatures; and a low radioactive contamination level of the primary helium. Unlimited, constant and flexible heat supply could be achieved with smaller HTRIs having heat generation capacities below 100 MW(th). The process heat generated by smaller HTRIs need not be more expensive, since the necessary installed heat supply redundancy is smaller and the excess power density lower. The process heat at elevated temperatures generated by an HTRI with a secondary heat transfer system is much more expensive due to the additional investment and operating cost as well as the reduced helium temperature span available. For some processes the HTRI is not able to cover the total process heat requirement, while other processes can consume only part of the heat offered. These limitations could be reduced by using higher core outlet or inlet temperatures, or both. Due to the considerably lower heat transfer rates and the resulting larger heat transfer areas in process plants, the diffusion of nuclear activity at elevated temperatures may increase, so that a more efficient helium cleaning system may be required. (author). 5 figs, 3 tabs

  5. Thermal time constant: optimising the skin temperature predictive modelling in lower limb prostheses using Gaussian processes.

    Science.gov (United States)

    Mathur, Neha; Glesk, Ivan; Buis, Arjan

    2016-06-01

    Elevated skin temperature at the body/device interface of lower-limb prostheses is one of the major factors that affect tissue health. The heat dissipation in prosthetic sockets is greatly influenced by the thermal conductive properties of the hard socket and liner material employed. However, monitoring of the interface temperature at skin level in lower-limb prostheses is notoriously complicated. This is due to the flexible nature of the interface liners used, which requires consistent positioning of sensors during donning and doffing. Predicting the residual limb temperature by monitoring the temperature between socket and liner, rather than between skin and liner, could be an important step in alleviating complaints of increased temperature and perspiration in prosthetic sockets. To predict the residual limb temperature, a machine learning algorithm, Gaussian processes, is employed, which utilizes the thermal time constant values of commonly used socket and liner materials. This Letter highlights the relevance of the thermal time constant of prosthetic materials in the Gaussian process technique, which would be useful in addressing the challenge of non-invasively monitoring the residual limb skin temperature. With the introduction of the thermal time constant, the model can be optimised and generalised for a given prosthetic setup, thereby making the predictions more reliable.
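    A minimal sketch of the kind of Gaussian-process prediction the Letter describes is shown below: a regressor maps a socket/liner interface temperature reading plus a material thermal time constant to a skin-temperature estimate with an uncertainty band. The training data are synthetic, and the kernel, feature choice and parameter values are assumptions, not taken from the study.

```python
# Hedged sketch: Gaussian-process prediction of residual-limb (skin) temperature from the
# socket/liner interface temperature plus a thermal time constant feature. Synthetic data;
# feature names, kernel and parameter values are assumptions, not the study's settings.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic training set: [socket-liner temperature (deg C), liner thermal time constant (s)]
X = np.column_stack([rng.uniform(28.0, 36.0, 80),                 # measured between socket and liner
                     rng.choice([60.0, 120.0, 240.0], 80)])       # illustrative material constants
# Toy ground truth: skin runs warmer, and slower (larger time constant) liners damp the offset
y = X[:, 0] + 2.5 * np.exp(-X[:, 1] / 300.0) + rng.normal(0.0, 0.1, 80)

kernel = 1.0 * RBF(length_scale=[2.0, 100.0]) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = np.array([[33.0, 120.0]])          # new socket-liner reading + liner time constant
mean, std = gp.predict(X_new, return_std=True)
print(f"predicted skin temperature: {mean[0]:.2f} +/- {std[0]:.2f} deg C")
```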

  6. Heavy oils processing materials requirements crude processing

    Energy Technology Data Exchange (ETDEWEB)

    Sloley, Andrew W. [CH2M Hill, Englewood, CO (United States)

    2012-07-01

    Over time, recommended best practices for crude unit materials selection have evolved to accommodate new operating requirements, feed qualities, and product qualities. The shift to heavier oil processing is one of the major changes in crude feed quality occurring over the last 20 years. The three major types of crude unit corrosion include sulfidation attack, naphthenic acid attack, and corrosion resulting from hydrolyzable chlorides. Heavy oils processing makes all three areas worse. Heavy oils have higher sulfur content; higher naphthenic acid content; and are more difficult to desalt, leading to higher chloride corrosion rates. Materials selection involves two major criteria, meeting required safety standards, and optimizing economics of the overall plant. Proper materials selection is only one component of a plant integrity approach. Materials selection cannot eliminate all corrosion. Proper materials selection requires appropriate support from other elements of an integrity protection program. The elements of integrity preservation include: materials selection (type and corrosion allowance); management limits on operating conditions allowed; feed quality control; chemical additives for corrosion reduction; and preventive maintenance and inspection (PMI). The following discussion must be taken in the context of the application of required supporting work in all the other areas. Within that context, specific materials recommendations are made to minimize corrosion due to the most common causes in the crude unit. (author)

  7. Solid-state fast voltage compensator for pulsed power applications requiring constant AC power consumption

    CERN Document Server

    Magallanes, Francisco Cabaleiro; Viarouge, Philippe; Cros, Jérôme

    2015-01-01

    This paper proposes a novel topological solution for pulsed power converters based on capacitor-discharge topologies, integrating a Fast Voltage Compensator which allows an operation at constant power consumption from the utility grid. This solution has been retained as a possible candidate for the CLIC project under study at CERN, which requires more than a thousand synchronously-operated klystron modulators producing a total pulsed power of almost 40 GW. The proposed Fast Voltage Compensator is integrated in the modulator such that it only has to treat the capacitor charger current and a fraction of the charging voltage, meaning that its dimensioning power and cost are minimized. This topology can be used to improve the AC power quality of any pulsed converters based on capacitor-discharge concept. A prototype has been built and exploited to validate the operating principle and demonstrate the benefits of the proposed solution.

  8. Application of dielectric constant measurement in microwave sludge disintegration and wastewater purification processes.

    Science.gov (United States)

    Kovács, Petra Veszelovszki; Lemmer, Balázs; Keszthelyi-Szabó, Gábor; Hodúr, Cecilia; Beszédes, Sándor

    2018-05-01

    It has been numerously verified that microwave radiation could be advantageous as a pre-treatment for enhanced disintegration of sludge. Very few data related to the dielectric parameters of wastewater of different origins are available; therefore, the objective of our work was to measure the dielectric constant of municipal and meat industrial wastewater during a continuous flow operating microwave process. Determination of the dielectric constant and its change during wastewater and sludge processing make it possible to decide on the applicability of dielectric measurements for detecting the organic matter removal efficiency of a wastewater purification process or the disintegration degree of sludge. With the measurement of the dielectric constant as a function of temperature, total solids (TS) content and microwave-specific process parameters, regression models were developed. Our results verified that in the case of municipal wastewater sludge, the TS content has a significant effect on the dielectric constant and disintegration degree (DD), as does the temperature. The dielectric constant has a decreasing tendency with increasing temperature for wastewater sludge of low TS content, but an adverse effect was found for samples with high TS and organic matter contents. DD of meat processing wastewater sludge was influenced significantly by the volumetric flow rate and power level, as process parameters of continuous-flow microwave pre-treatments. It can be concluded that the disintegration process of food industry sludge can be detected by dielectric constant measurements. For technical purposes, the applicability of dielectric measurements was tested in the purification process of municipal wastewater as well. Determination of dielectric behaviour was a sensitive method to detect the purification degree of municipal wastewater.

  9. Numerical Simulations of Electrokinetic Processes Comparing the Use of a Constant Voltage Difference or a Constant Current as Driving Force

    DEFF Research Database (Denmark)

    Paz-Garcia, Juan Manuel; Johannesson, Björn; Ottosen, Lisbeth M.

    Electrokinetic techniques are characterized by the use of a DC current for the removal of contaminants from porous materials. The method can be applied for several purposes, such as the recuperation of soil contaminated by heavy metals or organic compounds, the desalination of construction materials and the prevention of reinforced concrete corrosion. The electrical energy applied in an electrokinetic process produces electrochemical reactions at the electrodes. Different electrode processes can occur. When considering inert electrodes in aqueous solutions, the reduction of water at the cathode is usually the dominant process. On the other hand, the electrode processes at the anode depend on the ions present in its vicinity. Oxidation of water and chloride are typically assumed to be the most common processes taking place. Electrons produced in the electrode processes...

  10. Process and Microstructure to Achieve Ultra-high Dielectric Constant in Ceramic-Polymer Composites

    Science.gov (United States)

    Zhang, Lin; Shan, Xiaobing; Bass, Patrick; Tong, Yang; Rolin, Terry D.; Hill, Curtis W.; Brewer, Jeffrey C.; Tucker, Dennis S.; Cheng, Z.-Y.

    2016-01-01

    Influences of process conditions on the microstructure and dielectric properties of ceramic-polymer composites are systematically studied using CaCu3Ti4O12 (CCTO) as the filler and P(VDF-TrFE) 55/45 mol.% copolymer as the matrix, by combining solution-cast and hot-pressing processes. It is found that the dielectric constant of the composites can be significantly enhanced, up to about 10 times, by using proper processing conditions. The dielectric constant of the composites can reach more than 1,000 over a wide temperature range with a low loss (tan δ ~ 10(-1)). It is concluded that besides the dense structure of the composites, the uniform distribution of the CCTO particles in the matrix plays a key role in the dielectric enhancement. Due to the influence of the CCTO on the microstructure of the polymer matrix, the composites exhibit a weaker temperature dependence of the dielectric constant than the polymer matrix. Based on the results, it is also found that the loss of the composites at low temperatures, including room temperature, is determined by the real dielectric relaxation processes, including the relaxation process induced by the mixing. PMID:27767184

  11. Contamination aspects in integrating high dielectric constant and ferroelectric materials into CMOS processes

    OpenAIRE

    Boubekeur, Hocine

    2004-01-01

    In memory technology, new materials are being intensively investigated to overcome the integration limits of conventional dielectrics for Gigabit-scale integration, or to be able to produce new types of non-volatile low-power memories such as FeRAM. Perovskite-type high dielectric constant films for use in Gigabit-scale memories, or layered perovskite films for use in non-volatile memories, introduce new materials into semiconductor process flows, which entails a high risk of contamination. The introdu...

  12. Individual variability and mortality required for constant final yield in simulated plant populations

    Czech Academy of Sciences Publication Activity Database

    Fibich, P.; Lepš, Jan; Weiner, J.

    2014-01-01

    Roč. 7, č. 3 (2014), s. 263-271 ISSN 1874-1738 Grant - others:GA ČR(CZ) GA-1317118S; GA MŠk(CZ) LM2010005 Institutional support: RVO:60077344 Keywords : constant final yield * variability * mortality Subject RIV: EH - Ecology, Behaviour Impact factor: 1.553, year: 2014 http://link.springer.com/article/10.1007%2Fs12080-014-0216-x#

  13. Effect of difference between group constants processed by codes TIMS and ETOX on integral quantities

    International Nuclear Information System (INIS)

    Takano, Hideki; Ishiguro, Yukio; Matsui, Yasushi.

    1978-06-01

    Group constants of 235U, 238U, 239Pu, 240Pu and 241Pu have been produced with the processing code TIMS using the evaluated nuclear data of JENDL-1. The temperature- and composition-dependent self-shielding factors have been calculated for the two cases with and without considering mutual interference between resonant nuclei. By using the group constants set produced by the TIMS code, the integral quantities, i.e. multiplication factor, Na-void reactivity effect and Doppler reactivity effect, are calculated and compared with those calculated with the use of the cross section set produced by the ETOX code, to evaluate the accuracy of the approximate calculation method in ETOX. There is much difference in the self-shielding factors in each energy group between the two codes. For the fast reactor assemblies under study, however, the integral quantities calculated with these two sets are in good agreement with each other, because of an eventual cancellation of errors. (auth.)

  14. Consideration of demand rate in overall equipment effectiveness (OEE) on equipment with constant process time

    Directory of Open Access Journals (Sweden)

    Perumal Puvanasvaran

    2013-06-01

    Purpose: The primary purpose of the paper is to introduce a new concept for defining Overall Equipment Effectiveness (OEE) that considers both machine utilization and the customer demand requested. Previous literature concerning the limitations and difficulty of OEE implementation has been investigated in order to track potential opportunities for improvement, since OEE has been widely accepted by most industries regardless of their manufacturing environment. Design/methodology/approach: The study is based on a literature review and computerized data collection. In detail, the novel definition and the method of processing the computerized data are interpreted on the basis of similar studies performed by others and supported by related journals to prove the validity of the output. The computerized data are the product quantities and the total time elapsed for each production run, which are automatically recorded by the system at the manufacturing site. Findings: The first finding of this paper is the exposure of, and emphasis on, a limitation in the current implementation of OEE, which shows that high utilization of the machine is encouraged regardless of customer demand and conflicts with inventory holding cost; this is most obvious as overproduction, especially during periods of low customer demand. The second limitation in the general implementation of OEE is the difficulty of obtaining the ideal cycle time, especially for equipment with constant process time. The subsequent section of this paper proposes a solution to this problem through the definition of a performance ratio and the use of this definition to measure machine utilization over time. Before this, the time available for production is calculated incorporating the availability term of OEE, which is then used to obtain the Takt time. Research limitations/implications: Future
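    As a concrete companion to the OEE and Takt-time discussion above, the sketch below runs through the textbook arithmetic (availability × performance × quality, and Takt time as available time divided by demand) plus a demand-capped performance ratio in the spirit of the proposal. The exact modified definition used by the authors is not reproduced, and all shift figures are illustrative.

```python
# Hedged sketch: textbook OEE arithmetic plus the takt-time step the abstract mentions.
# The authors' modified performance ratio is only gestured at; numbers are illustrative.
planned_time_min = 8 * 60            # one shift
downtime_min     = 45
ideal_cycle_s    = 30.0              # constant process time per unit
units_produced   = 820
defective_units  = 12
customer_demand  = 840               # units requested for the shift

run_time_min = planned_time_min - downtime_min
availability = run_time_min / planned_time_min
performance  = (ideal_cycle_s * units_produced) / (run_time_min * 60)
quality      = (units_produced - defective_units) / units_produced
oee          = availability * performance * quality

takt_time_s = run_time_min * 60 / customer_demand       # pace the customer actually requires
demand_based_performance = (takt_time_s * min(units_produced, customer_demand)) / (run_time_min * 60)

print(f"availability={availability:.3f} performance={performance:.3f} quality={quality:.3f}")
print(f"classic OEE = {oee:.3f}, takt time = {takt_time_s:.1f} s/unit, "
      f"demand-based performance = {demand_based_performance:.3f}")
```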

  15. The suppression of UT TIS using MPO constant-Q split spectrum processing

    International Nuclear Information System (INIS)

    Koo, Kil Mo; Jun, Kye Suk

    1997-01-01

    It is very important for the ultrasonic test method to evaluate the integrity of the Class 1 components in nuclear power plants. However, as the ultrasonic test is affected by the internal structures and configurations of the test materials, backscattering, that is, a time-invariant signal (TIS), is generated in large-grain-size materials. For this reason, the received signal has a low signal-to-noise (S/N) ratio. The split spectrum processing (SSP) technique is effective in suppressing a time-invariant signal such as grain noise. The conventional SSP technique, however, has been applied with a single algorithm. This paper shows that the MPO (minimization and polarity thresholding) algorithm, in which the two algorithms are applied simultaneously, can be utilized, and that the signal processing time is shortened by using the new constant-Q SSP with finite impulse response (FIR) filters whose centre-frequency-to-bandwidth ratio is constant; the optimum parameters were analysed for processing longitudinal-wave and shear-wave signals under the same inspection conditions as on a nuclear power plant site. Moreover, a new ultrasonic test instrument, a reference block of the same product form and material specification, stainless steel test specimens and copper test specimens were designed and fabricated for the application of the new SSP technique. As the result of experimental tests with the new ultrasonic test instrument and test specimens, the signal-to-noise ratio was improved by applying the new SSP technique.
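    The sketch below illustrates the processing chain the abstract describes: a constant-Q bank of FIR bandpass filters splits a broadband A-scan into sub-bands, and the MPO rule keeps the minimum channel magnitude only where all channels agree in polarity. The synthetic signal, Q value and filter-bank settings are illustrative and are not the instrument parameters used in the paper.

```python
# Hedged sketch of constant-Q split-spectrum processing with MPO (minimization + polarity
# thresholding) recombination. The synthetic A-scan and all settings are illustrative.
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 100e6                                    # 100 MHz sampling, illustrative
t = np.arange(0, 20e-6, 1 / fs)
flaw = np.exp(-((t - 10e-6) / 0.2e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)   # 5 MHz flaw echo
rng = np.random.default_rng(1)
grain_noise = 0.8 * rng.normal(size=t.size)   # crude stand-in for backscatter noise
x = flaw + grain_noise

# Constant-Q bank: bandwidth proportional to centre frequency (Q = fc / bw held fixed)
Q = 4.0
centres = 5e6 * 1.15 ** np.arange(-4, 5)      # geometric spacing around 5 MHz
channels = []
for fc in centres:
    bw = fc / Q
    taps = firwin(201, [fc - bw / 2, fc + bw / 2], fs=fs, pass_zero=False)
    channels.append(filtfilt(taps, [1.0], x))
channels = np.array(channels)

# MPO recombination: keep the minimum magnitude only where every channel agrees in sign
same_sign = np.all(channels > 0, axis=0) | np.all(channels < 0, axis=0)
y = np.where(same_sign, np.min(np.abs(channels), axis=0), 0.0)

print("input  peak / RMS noise :", flaw.max() / grain_noise.std())
print("output peak(flaw) / RMS(elsewhere):",
      y[(t > 9e-6) & (t < 11e-6)].max() / (y[(t < 8e-6) | (t > 12e-6)].std() + 1e-12))
```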

  16. Catalytic Reforming: Methodology and Process Development for a Constant Optimisation and Performance Enhancement

    Directory of Open Access Journals (Sweden)

    Avenier Priscilla

    2016-05-01

    The catalytic reforming process has been used to produce high-octane gasoline since the 1940s. It would appear to be an old, well-established process for which nothing new could be done. This is, however, not the case, and constant improvements are proposed at IFP Energies nouvelles. With a global R&D approach using new concepts and forefront methodology, IFPEN is able to: propose a patented new reactor concept, increasing capacity; ensure the efficiency and safety of the mechanical design of the reactor using modelling of the structure; develop new catalysts to increase process performance thanks to a deep comprehension of the catalytic mechanism, using an experimental and innovative analytical approach (119Sn Mössbauer and X-ray absorption spectroscopies) as well as Density Functional Theory (DFT) calculations; and have efficient, reliable and adapted pilots to validate catalyst performance.

  17. Critical Review of NOAA's Observation Requirements Process

    Science.gov (United States)

    LaJoie, M.; Yapur, M.; Vo, T.; Templeton, A.; Bludis, D.

    2017-12-01

    NOAA's Observing Systems Council (NOSC) maintains a comprehensive database of user observation requirements. The requirements collection process engages NOAA subject matter experts to document and effectively communicate the specific environmental observation measurements (parameters and attributes) needed to produce operational products and pursue research objectives. User observation requirements documented using a structured and standardized manner and framework enables NOAA to assess its needs across organizational lines in an impartial, objective, and transparent manner. This structure provides the foundation for: selecting, designing, developing, acquiring observing technologies, systems and architectures; budget and contract formulation and decision-making; and assessing in a repeatable fashion the productivity, efficiency and optimization of NOAA's observing system enterprise. User observation requirements are captured independently from observing technologies. Therefore, they can be addressed by a variety of current or expected observing capabilities and allow flexibility to be remapped to new and evolving technologies. NOAA's current inventory of user observation requirements were collected over a ten-year period, and there have been many changes in policies, mission priorities, and funding levels during this time. In light of these changes, the NOSC initiated a critical, in-depth review to examine all aspects of user observation requirements and associated processes during 2017. This presentation provides background on the NOAA requirements process, major milestones and outcomes of the critical review, and plans for evolving and connecting observing requirements processes in the next year.

  18. A methodology to describe process control requirements

    International Nuclear Information System (INIS)

    Carcagno, R.; Ganni, V.

    1994-01-01

    This paper presents a methodology to describe process control requirements for helium refrigeration plants. The SSC requires a greater level of automation for its refrigeration plants than is common in the cryogenics industry, and traditional methods (e.g., written descriptions) used to describe process control requirements are not sufficient. The methodology presented in this paper employs tabular and graphic representations in addition to written descriptions. The resulting document constitutes a tool for efficient communication among the different people involved in the design, development, operation, and maintenance of the control system. The methodology is not limited to helium refrigeration plants, and can be applied to any process with similar requirements. The paper includes examples

  19. Constant-scale natural boundary mapping to reveal global and cosmic processes

    CERN Document Server

    Clark, Pamela Elizabeth

    2013-01-01

    Whereas conventional maps can be expressed as outward-expanding formulae with well-defined central features and relatively poorly defined edges, Constant Scale Natural Boundary (CSNB) maps have well-defined boundaries that result from natural processes and thus allow spatial and dynamic relationships to be observed in a new way useful to understanding these processes. CSNB mapping presents a new approach to visualization that produces maps markedly different from those produced by conventional cartographic methods. In this approach, any body can be represented by a 3D coordinate system. For a regular body, with its surface relatively smooth on the scale of its size, locations of features can be represented by definite geographic grid (latitude and longitude) and elevation, or deviation from the triaxial ellipsoid defined surface. A continuous surface on this body can be segmented, its distinctive regional terranes enclosed, and their inter-relationships defined, by using selected morphologically identifiable ...

  20. Responsibilities in the Usability Requirements Elicitation Process

    Directory of Open Access Journals (Sweden)

    Marianella Aveledo

    2008-12-01

    Full Text Available Like any other software system quality attribute, usability places requirements on software components. In particular, it has been demonstrated that certain usability features have a direct impact throughout the software process. This paper details an approach that looks at how to deal with certain usability features in the early software development stages. In particular, we consider usability features as functional usability requirements using patterns that have been termed usability patterns to elicit requirements. Additionally, we clearly establish the responsibilities of all the players at the usability requirements elicitation stage.

  1. Specifying process requirements for holistic care.

    Science.gov (United States)

    Poulymenopoulou, M; Malamateniou, F; Vassilacopoulos, G

    2013-09-01

    Holistic (health and social) care aims at providing comprehensive care to the community, especially to elderly people and people with multiple illnesses. In turn, this requires using health and social care resources more efficiently through enhanced collaboration and coordination among the corresponding organizations and delivering care closer to patient needs and preferences. This paper takes a patient-centered, process view of holistic care delivery and focuses on requirements elicitation for supporting holistic care processes and enabling authorized users to access integrated patient information at the point of care when needed. To this end, an approach to holistic care process-support requirements elicitation is presented which is based on business process modeling and places particular emphasis on empowering collaboration, coordination and information sharing among health and social care organizations by actively involving users and by providing insights for alternative process designs. The approach provides a means for integrating diverse legacy applications in a process-oriented environment using a service-oriented architecture as an appropriate solution for supporting and automating holistic care processes. The approach is applied in the context of emergency medical care aiming at streamlining and providing support technology to cross-organizational health and social care processes to address global patient needs.

  2. Deficiency tracking system, conceptual business process requirements

    Energy Technology Data Exchange (ETDEWEB)

    Hermanson, M.L.

    1997-04-18

    The purpose of this document is to describe the conceptual business process requirements of a single, site-wide, consolidated, automated, deficiency management tracking, trending, and reporting system. This description will be used as the basis for the determination of the automated system acquisition strategy including the further definition of specific requirements, a 'make or buy' determination and the development of specific software design details.

  3. Deficiency tracking system, conceptual business process requirements

    International Nuclear Information System (INIS)

    Hermanson, M.L.

    1997-01-01

    The purpose of this document is to describe the conceptual business process requirements of a single, site-wide, consolidated, automated, deficiency management tracking, trending, and reporting system. This description will be used as the basis for the determination of the automated system acquisition strategy including the further definition of specific requirements, a 'make or buy' determination and the development of specific software design details.

  4. Change in requirements during the design process

    DEFF Research Database (Denmark)

    Sudin, Mohd Nizam Bin; Ahmed-Kristensen, Saeema

    2011-01-01

    Specification is an integral part of the product development process. Frequently, more than a single version of a specification is produced due to changes in requirements. These changes are often necessary to ensure the scope of the design problem is as clear as possible. However, the negative... on a pre-defined coding scheme. The results of the study show that changes in requirements were initiated by internal stakeholders through analysis and evaluation activities during the design process, while external stakeholders requested changes during meetings with the consultant. All...

  5. Thermodynamic Modeling and Optimization of the Copper Flash Converting Process Using the Equilibrium Constant Method

    Science.gov (United States)

    Li, Ming-zhou; Zhou, Jie-min; Tong, Chang-ren; Zhang, Wen-hai; Chen, Zhuo; Wang, Jin-liang

    2018-05-01

    Based on the principle of multiphase equilibrium, a mathematical model of the copper flash converting process was established by the equilibrium constant method, and a computational system was developed with the use of MetCal software platform. The mathematical model was validated by comparing simulated outputs, industrial data, and published data. To obtain high-quality blister copper, a low copper content in slag, and increased impurity removal rate, the model was then applied to investigate the effects of the operational parameters [oxygen/feed ratio (R OF), flux rate (R F), and converting temperature (T)] on the product weights, compositions, and the distribution behaviors of impurity elements. The optimized results showed that R OF, R F, and T should be controlled at approximately 156 Nm3/t, within 3.0 pct, and at approximately 1523 K (1250 °C), respectively.

  6. Design of a Solid-State Fast Voltage Compensator for klystron modulators requiring constant AC power consumption

    CERN Document Server

    Aguglia, Davide; Viarouge, Philippe; Cros, Jerome

    2014-01-01

    This paper proposes a novel topological solution for klystron modulators integrating a Fast Voltage Compensator which allows operation at constant power consumption from the utility grid. This kind of solution is mandatory for the CLIC project under study, which requires several hundred synchronously operated klystron modulators for a total pulsed power of 39 GW. The topology is optimized for the challenging CLIC specifications, which require a very precise output voltage flat-top as well as fast rise and fall times (3µs). The Fast Voltage Compensator is integrated in the modulator such that it only has to manage the capacitor charger current and a fraction of the charging voltage. Consequently, its dimensioning power and cost are minimized.

  7. Aerodynamic isotope separation processes for uranium enrichment: process requirements

    International Nuclear Information System (INIS)

    Malling, G.F.; Von Halle, E.

    1976-01-01

    The pressing need for enriched uranium to fuel nuclear power reactors, requiring that as many as ten large uranium isotope separation plants be built during the next twenty years, has inspired an increase of interest in isotope separation processes for uranium enrichment. Aerodynamic isotope separation processes have been prominently mentioned along with the gas centrifuge process and the laser isotope separation methods as alternatives to the gaseous diffusion process, currently in use, for these future plants. Commonly included in the category of aerodynamic isotope separation processes are: (a) the separation nozzle process; (b) opposed gas jets; (c) the gas vortex; (d) the separation probes; (e) interacting molecular beams; (f) jet penetration processes; and (g) time of flight separation processes. A number of these aerodynamic isotope separation processes depend, as does the gas centrifuge process, on pressure diffusion associated with curved streamlines for the basic separation effect. Much can be deduced about the process characteristics and the economic potential of such processes from a simple and elementary process model. In particular, the benefit to be gained from a light carrier gas added to the uranium feed is clearly demonstrated. The model also illustrates the importance of transient effects in this class of processes

  8. Structure disorder degree of polysilicon thin films grown by different processing: Constant C from Raman spectroscopy

    International Nuclear Information System (INIS)

    Wang, Quan; Zhang, Yanmin; Hu, Ran; Ren, Naifei; Ge, Daohan

    2013-01-01

    Flat, low-stress, boron-doped polysilicon thin films were prepared on single crystalline silicon substrates by low pressure chemical vapor deposition. It was found that the polysilicon films with different deposition processing have different microstructure properties. The confinement effect, tensile stresses, defects, and the Fano effect all have a great influence on the line shape of the Raman scattering peak, but their effects are different. The microstructure and the surface layer are two important mechanisms dominating the internal stress in the three types of polysilicon thin films. For the low-stress polysilicon thin film, the tensile stresses are mainly due to the change of microstructure after thermal annealing, but the tensile stresses in the flat polysilicon thin film are induced by the silicon carbide layer at the surface. After the thin film is doped with boron atoms, the increase in tensile stress can be explained by the change of microstructure and the increase in the content of silicon carbide. We also investigated the disorder degree for the three polysilicon thin films by analyzing a constant C. It was found that the disorder degree of the low-stress polysilicon thin film is larger than that of the flat and boron-doped polysilicon thin films due to the phase transformation after annealing. After the flat polysilicon thin film is doped with boron atoms, there is no obvious change in the disorder degree, and the disorder degree in some regions even decreases.

  9. Structure disorder degree of polysilicon thin films grown by different processing: Constant C from Raman spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Quan, E-mail: wangq@mail.ujs.edu.cn [School of mechanical engineering, Jiangsu University, Zhenjiang 212013 (China); State Key Laboratory of Solid Lubrication, Lanzhou Institute of Chemical Physics, Chinese Academy of Sciences, Lanzhou 730000 (China); Zhang, Yanmin; Hu, Ran; Ren, Naifei [School of mechanical engineering, Jiangsu University, Zhenjiang 212013 (China); Ge, Daohan [School of mechanical engineering, Jiangsu University, Zhenjiang 212013 (China); State Key Laboratory of Transducer Technology, Chinese Academy of Sciences, Shanghai 200050 (China)

    2013-11-14

    Flat, low-stress, boron-doped polysilicon thin films were prepared on single crystalline silicon substrates by low pressure chemical vapor deposition. It was found that the polysilicon films with different deposition processing have different microstructure properties. The confinement effect, tensile stresses, defects, and the Fano effect all have a great influence on the line shape of the Raman scattering peak, but their effects are different. The microstructure and the surface layer are two important mechanisms dominating the internal stress in the three types of polysilicon thin films. For the low-stress polysilicon thin film, the tensile stresses are mainly due to the change of microstructure after thermal annealing, but the tensile stresses in the flat polysilicon thin film are induced by the silicon carbide layer at the surface. After the thin film is doped with boron atoms, the increase in tensile stress can be explained by the change of microstructure and the increase in the content of silicon carbide. We also investigated the disorder degree for the three polysilicon thin films by analyzing a constant C. It was found that the disorder degree of the low-stress polysilicon thin film is larger than that of the flat and boron-doped polysilicon thin films due to the phase transformation after annealing. After the flat polysilicon thin film is doped with boron atoms, there is no obvious change in the disorder degree, and the disorder degree in some regions even decreases.

  10. An exclusion process on a tree with constant aggregate hopping rate

    International Nuclear Information System (INIS)

    Mottishaw, Peter; Waclaw, Bartlomiej; Evans, Martin R

    2013-01-01

    We introduce a model of a totally asymmetric simple exclusion process (TASEP) on a tree network where the aggregate hopping rate is constant from level to level. With this choice for hopping rates the model shows the same phase diagram as the one-dimensional case. The potential applications of our model are in the area of distribution networks, where a single large source supplies material to a large number of small sinks via a hierarchical network. We show that mean-field theory (MFT) for our model is identical to that of the one-dimensional TASEP and that this MFT is exact for the TASEP on a tree in the limit of large branching ratio, b (or equivalently large coordination number). We then present an exact solution for the two level tree (or star network) that allows the computation of any correlation function and confirm how mean-field results are recovered as b → ∞. As an example we compute the steady-state current as a function of branching ratio. We present simulation results that confirm these results and indicate that the convergence to MFT with large branching ratio is quite rapid. (paper)
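    As a numerical companion, the sketch below simulates the ordinary one-dimensional open-boundary TASEP, the reference model whose mean-field theory the paper shows is shared by the constant-aggregate-rate tree. The tree (star) geometry itself is not implemented, and the injection/extraction rates and lattice size are illustrative.

```python
# Hedged sketch: random-sequential Monte Carlo of the 1-D open-boundary TASEP (the reference
# case for the tree model's mean-field theory). Rates alpha, beta and size L are illustrative.
import numpy as np

def tasep_current(L=100, alpha=0.3, beta=0.7, sweeps=20_000, burn_in=5_000, seed=0):
    """Measure the steady-state exit current of an open-boundary TASEP."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(L, dtype=np.int8)
    exits = 0
    for sweep in range(sweeps):
        for _ in range(L + 1):                  # one sweep ~ one unit of time
            i = rng.integers(-1, L)             # -1: entry attempt, L-1: exit attempt
            if i == -1:
                if occ[0] == 0 and rng.random() < alpha:
                    occ[0] = 1
            elif i == L - 1:
                if occ[-1] == 1 and rng.random() < beta:
                    occ[-1] = 0
                    if sweep >= burn_in:
                        exits += 1              # particles leaving the lattice
            elif occ[i] == 1 and occ[i + 1] == 0:
                occ[i], occ[i + 1] = 0, 1       # internal hop at unit rate
    return exits / (sweeps - burn_in)

alpha = 0.3
J = tasep_current(alpha=alpha)
print(f"measured current J = {J:.3f}, mean-field low-density value alpha*(1-alpha) = {alpha * (1 - alpha):.3f}")
```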

  11. Optimal Constant-Stress Accelerated Degradation Test Plans Using Nonlinear Generalized Wiener Process

    Directory of Open Access Journals (Sweden)

    Zhen Chen

    2016-01-01

    Full Text Available Accelerated degradation test (ADT has been widely used to assess highly reliable products’ lifetime. To conduct an ADT, an appropriate degradation model and test plan should be determined in advance. Although many historical studies have proposed quite a few models, there is still room for improvement. Hence we propose a Nonlinear Generalized Wiener Process (NGWP model with consideration of the effects of stress level, product-to-product variability, and measurement errors for a higher estimation accuracy and a wider range of use. Then under the constraints of sample size, test duration, and test cost, the plans of constant-stress ADT (CSADT with multiple stress levels based on the NGWP are designed by minimizing the asymptotic variance of the reliability estimation of the products under normal operation conditions. An optimization algorithm is developed to determine the optimal stress levels, the number of units allocated to each level, inspection frequency, and measurement times simultaneously. In addition, a comparison based on degradation data of LEDs is made to show better goodness-of-fit of the NGWP than that of other models. Finally, optimal two-level and three-level CSADT plans under various constraints and a detailed sensitivity analysis are demonstrated through examples in this paper.
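    The sketch below generates nonlinear Wiener-process degradation paths for a constant-stress ADT in the spirit of the NGWP model (a stress-dependent drift acting on a transformed time scale) and reads off pseudo failure times at a threshold. The random-effects and measurement-error terms of the full model are omitted, and every parameter value, the time-scale exponent and the failure threshold are illustrative assumptions.

```python
# Hedged sketch: simulating nonlinear Wiener degradation paths for a constant-stress ADT.
# Random effects and measurement errors of the full NGWP model are omitted; all numbers
# (drift link, exponent, threshold, stress levels) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def simulate_paths(stress, n_units=5, n_meas=50, dt=10.0,
                   a=-4.0, b=0.03, sigma=0.15, gamma=1.2):
    """Degradation X(t) = mu(stress)*t**gamma + sigma*B(t**gamma), sampled every dt hours."""
    mu = np.exp(a + b * stress)                # illustrative log-linear stress/drift link
    t = dt * np.arange(1, n_meas + 1)
    lam = t ** gamma                           # nonlinear (transformed) time scale
    dlam = np.diff(np.concatenate([[0.0], lam]))
    increments = mu * dlam + sigma * np.sqrt(dlam) * rng.normal(size=(n_units, n_meas))
    return t, np.cumsum(increments, axis=1)

threshold = 40.0                               # illustrative failure level
for stress in (55.0, 70.0, 85.0):              # e.g. three accelerated stress levels
    t, paths = simulate_paths(stress)
    crossed = paths >= threshold
    pseudo_fail = [t[row.argmax()] if row.any() else np.inf for row in crossed]
    print(f"stress {stress:4.0f}: median pseudo failure time = {np.median(pseudo_fail):7.1f} h")
```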

  12. Constant versus variable response signal delays in speed accuracy trade-offs : Effects of advance preparation for processing time

    OpenAIRE

    Miller, Jeff; Sproesser, Gudrun; Ulrich, Rolf

    2008-01-01

    In two experiments, we used response signals (RSs) to control processing time and trace out speed accuracy trade-off (SAT) functions in a difficult perceptual discrimination task. Each experiment compared performance in blocks of trials with constant and, hence, temporally predictable RS lags against performance in blocks with variable, unpredictable RS lags. In both experiments, essentially equivalent SAT functions were observed with constant and variable RS lags. We conclude that there is l...

  13. Constant versus variable response signal delays in speed--accuracy trade-offs: effects of advance preparation for processing time.

    Science.gov (United States)

    Miller, Jeff; Sproesser, Gudrun; Ulrich, Rolf

    2008-07-01

    In two experiments, we used response signals (RSs) to control processing time and trace out speed--accuracy trade-off(SAT) functions in a difficult perceptual discrimination task. Each experiment compared performance in blocks of trials with constant and, hence, temporally predictable RS lags against performance in blocks with variable, unpredictable RS lags. In both experiments, essentially equivalent SAT functions were observed with constant and variable RS lags. We conclude that there is little effect of advance preparation for a given processing time, suggesting that the discrimination mechanisms underlying SAT functions are driven solely by bottom-up information processing in perceptual discrimination tasks.

  14. Effects of a constant rate infusion of detomidine on cardiovascular function, isoflurane requirements and recovery quality in horses.

    Science.gov (United States)

    Schauvliege, Stijn; Marcilla, Miguel Gozalo; Verryken, Kirsten; Duchateau, Luc; Devisscher, Lindsey; Gasthuys, Frank

    2011-11-01

    To examine the influence of a detomidine constant rate infusion (CRI) on cardiovascular function, isoflurane requirements and recovery quality in horses undergoing elective surgery. Prospective, randomized, blinded, clinical trial. Twenty adult healthy horses. After sedation (detomidine, 10 μg kg(-1) intravenously [IV]) and induction of anaesthesia (midazolam 0.06 mg kg(-1), ketamine 2.2 mg kg(-1) IV), anaesthesia was maintained with isoflurane in oxygen/air (inspiratory oxygen fraction 55%). When indicated, the lungs were mechanically ventilated. Dobutamine was administered when MAP fell below the target value. Horses received either a detomidine (5 μg kg(-1) hour(-1)) (D) or saline (S) CRI, with the anaesthetist unaware of the treatment. Monitoring included end-tidal isoflurane concentration, arterial pH, PaCO(2), PaO(2), dobutamine administration rate, heart rate (HR), arterial pressure, cardiac index (CI), systemic vascular resistance (SVR), stroke index and oxygen delivery index (ḊO(2)I). For recovery from anaesthesia, all horses received 2.5 μg kg(-1) detomidine IV. Recovery quality and duration were recorded in each horse. For statistical analysis, ANOVA, Pearson chi-square and Wilcoxon rank sum tests were used as relevant. Heart rate (p=0.0176) and ḊO(2)I (p=0.0084) were lower and SVR higher (p=0.0126) in group D, compared to group S. Heart rate (p=0.0011) and pH (p=0.0187) increased over time. Significant differences in isoflurane requirements were not detected. Recovery quality and duration were comparable between treatments. A detomidine CRI produced cardiovascular effects typical for α(2)-agonists, without affecting isoflurane requirements, recovery duration or recovery quality. © 2011 The Authors. Veterinary Anaesthesia and Analgesia. © 2011 Association of Veterinary Anaesthetists and the American College of Veterinary Anesthesiologists.

  15. Effect of detomidine or romifidine constant rate infusion on plasma lactate concentration and inhalant requirements during isoflurane anaesthesia in horses.

    Science.gov (United States)

    Niimura Del Barrio, M C; Bennett, Rachel C; Hughes, J M Lynne

    2017-05-01

    Influence of detomidine or romifidine constant rate infusion (CRI) on plasma lactate concentration and isoflurane requirements in horses undergoing elective surgery. Prospective, randomised, blinded, clinical trial. A total of 24 adult healthy horses. All horses were administered intramuscular acepromazine (0.02 mg kg(-1)) and either intravenous detomidine (0.02 mg kg(-1)) (group D), romifidine (0.08 mg kg(-1)) (group R) or xylazine (1.0 mg kg(-1)) (group C) prior to anaesthesia. Group D was administered a detomidine CRI (10 μg kg(-1) hour(-1)) in lactated Ringer's solution (LRS), group R a romifidine CRI (40 μg kg(-1) hour(-1)) in LRS and group C an equivalent amount of LRS intraoperatively. Anaesthesia was induced with ketamine and diazepam and maintained with isoflurane in oxygen. Plasma lactate samples were taken prior to anaesthesia (baseline), intraoperatively (three samples at 30 minute intervals) and in recovery (at 10 minutes, once standing and 3 hours after the end of anaesthesia). End-tidal isoflurane percentage (Fe'Iso) was analysed by allocating values into three periods: Prep (from 15 minutes after the start of anaesthesia to the start of surgery); Surgery 1 (start of surgery to 30 minutes later); and Surgery 2 (end of Surgery 1 to end of anaesthesia). A linear mixed model was used to analyse the data. A value of p<0.05 was considered significant. Detomidine or romifidine CRI in horses did not result in a clinically significant increase in plasma lactate compared with the control group. Detomidine and romifidine infusions decreased isoflurane requirements during surgery. Copyright © 2017 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. Published by Elsevier Ltd. All rights reserved.

  16. Numerical methods for realizing nonstationary Poisson processes with piecewise-constant instantaneous-rate functions

    DEFF Research Database (Denmark)

    Harrod, Steven; Kelton, W. David

    2006-01-01

    Nonstationary Poisson processes are appropriate in many applications, including disease studies, transportation, finance, and social policy. The authors review the risks of ignoring nonstationarity in Poisson processes and demonstrate three algorithms for generation of Poisson processes...
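    One simple way to realize such a process when the instantaneous rate is piecewise constant is sketched below: within each interval the process is an ordinary Poisson process, so the interval count can be drawn directly and the event times scattered uniformly. This is only one of several valid schemes (thinning and inversion are common alternatives, and the paper's three algorithms are not reproduced here); the break points and rates are illustrative.

```python
# Hedged sketch: generating a nonstationary Poisson process with a piecewise-constant rate.
# Uses the count-then-uniform-order-statistics property per interval; rates are illustrative.
import numpy as np

def nhpp_piecewise_constant(breaks, rates, rng):
    """breaks: [t0, t1, ..., tk]; rates[i] applies on [t_i, t_{i+1}). Returns sorted event times."""
    events = []
    for (t0, t1), lam in zip(zip(breaks[:-1], breaks[1:]), rates):
        n = rng.poisson(lam * (t1 - t0))            # count in the interval
        events.append(rng.uniform(t0, t1, size=n))  # times are i.i.d. uniform given the count
    return np.sort(np.concatenate(events))

rng = np.random.default_rng(7)
breaks = [0.0, 2.0, 5.0, 8.0]        # hours
rates = [1.0, 6.0, 2.5]              # events per hour on each piece
times = nhpp_piecewise_constant(breaks, rates, rng)
expected = sum(lam * (b - a) for (a, b), lam in zip(zip(breaks[:-1], breaks[1:]), rates))
print(len(times), "events generated; expected count =", expected)
```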

  17. Constant current coulometric method for the determination of uranium in active process solutions

    International Nuclear Information System (INIS)

    Chitnis, R.T.; Talnikar, S.G.; Paranjape, A.H.

    1980-01-01

    The determination of uranium in the range of 2.5-5 mg by constant current coulometry is described. The procedure is based on the modified version of the Davies-Gray method, wherein uranium, after the reduction step, is oxidized by adding a known amount of potassium dichromate, and the excess of dichromate is determined by titration with Fe2+ solution. The Fe2+ ions needed for the titration are generated in situ with 100% current efficiency by electrolytic reduction of Fe3+. The method is found to be accurate, with a coefficient of variation better than 0.2%. (author)
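    For orientation, the back-titration arithmetic implied by this procedure is sketched below: Fe2+ is generated at constant current (Faraday's law), the unreacted dichromate follows from the 6:1 Fe2+:dichromate stoichiometry, and the uranium from the 3:1 U(IV):dichromate ratio. All numerical inputs are illustrative, not values from the paper.

```python
# Hedged sketch of the back-titration arithmetic behind the modified Davies-Gray procedure,
# assuming 100 % current efficiency as stated in the abstract. All inputs are illustrative.
F = 96485.3            # C/mol, Faraday constant
M_U = 238.03           # g/mol, uranium molar mass

n_dichromate_added = 8.00e-6        # mol K2Cr2O7 added as the known excess (illustrative)
current_A = 5.00e-3                 # constant generating current (illustrative)
time_s = 278.0                      # time to reach the end point (illustrative)

n_Fe2 = current_A * time_s / F                 # Fe2+ generated: n = I*t/F (one electron each)
n_dichromate_excess = n_Fe2 / 6.0              # Cr2O7(2-) + 6 Fe2+ + 14 H+ -> 2 Cr3+ + 6 Fe3+ + 7 H2O
n_U = 3.0 * (n_dichromate_added - n_dichromate_excess)   # each Cr2O7(2-) oxidises 3 U(IV) to U(VI)
print(f"uranium found: {1e3 * n_U * M_U:.2f} mg")        # ~4 mg, within the 2.5-5 mg range
```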

  18. A Bayesian MCMC method for point process models with intractable normalising constants

    DEFF Research Database (Denmark)

    Berthelsen, Kasper Klitgaard; Møller, Jesper

    2004-01-01

    to simulate from the "unknown distribution", perfect simulation algorithms become useful. We illustrate the method in cases where the likelihood is given by a Markov point process model. Particularly, we consider semi-parametric Bayesian inference in connection with both inhomogeneous Markov point process models and pairwise interaction point processes...

  19. TIMS-1: a processing code for production of group constants of heavy resonant nuclei

    International Nuclear Information System (INIS)

    Takano, Hideki; Ishiguro, Yukio; Matsui, Yasushi.

    1980-09-01

    The TIMS-1 code calculates the infinitely dilute group cross sections and the temperature-dependent self-shielding factors for arbitrary values of σ0 and R, where σ0 is the effective background cross section of potential scattering and R the ratio of the atomic number densities for two resonant nuclei, if any. This code is specifically programmed to use the evaluated nuclear data file of ENDF/B or JENDL as input data. In the unresolved resonance region, the resonance parameters and the level spacings are generated by a Monte Carlo method from the Porter-Thomas and Wigner distributions, respectively. The Doppler-broadened cross sections are calculated on ultra-fine lethargy meshes of about 10(-3) -- 10(-5) using the generated and resolved resonance parameters. The effective group constants are calculated by solving the neutron slowing-down equation with the use of the recurrence formula for the neutron slowing-down source. The output of the calculated results is given in a format consistent with the JAERI-Fast set (JFS) or the Standard Reactor Analysis Code (SRAC) library. Both FACOM 230/75 and M200 versions of TIMS-1 are available. (author)
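    To make the role of the background cross section σ0 concrete, the sketch below evaluates a self-shielding factor f(σ0) for a single 0 K Breit-Wigner resonance on a fine energy mesh, using the narrow-resonance flux weight 1/(σt(E)+σ0). It is only a cartoon of what a code like TIMS-1 tabulates: Doppler broadening, the slowing-down source and real ENDF/JENDL parameters are omitted, and all numbers are illustrative.

```python
# Hedged sketch: sigma_0-dependent self-shielding factor for one 0 K SLBW resonance using a
# narrow-resonance flux weight. Resonance parameters and group limits are illustrative only.
import numpy as np

E0, Gamma, Gamma_g = 6.67, 0.0275, 0.023      # eV, illustrative resonance parameters
sigma_peak, sigma_pot = 2.2e4, 10.0           # barns, peak and potential scattering
E = np.linspace(5.0, 8.5, 200_001)            # fine energy mesh over one group

x = 2.0 * (E - E0) / Gamma
sigma_cap = sigma_peak * (Gamma_g / Gamma) / (1.0 + x ** 2)   # SLBW capture, 0 K
sigma_tot = sigma_peak / (1.0 + x ** 2) + sigma_pot           # crude total cross section

w_inf = 1.0 / E                               # infinitely dilute (1/E) weighting
sigma_inf = np.sum(sigma_cap * w_inf) / np.sum(w_inf)

for sigma_0 in (1e1, 1e2, 1e3, 1e4):
    w = 1.0 / (E * (sigma_tot + sigma_0))     # narrow-resonance flux weight
    sigma_eff = np.sum(sigma_cap * w) / np.sum(w)
    print(f"sigma_0 = {sigma_0:8.0f} b   f = sigma_eff/sigma_inf = {sigma_eff / sigma_inf:.3f}")
```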

  20. Improving the requirements process in Axiomatic Design Theory

    DEFF Research Database (Denmark)

    Thompson, Mary Kathryn

    2013-01-01

    This paper introduces a model to integrate the traditional requirements process into Axiomatic Design Theory and proposes a method to structure the requirements process. The method includes a requirements classification system to ensure that all requirements information can be included in the Axiomatic Design process, a stakeholder classification system to reduce the chances of excluding one or more key stakeholders, and a table to visualize the mapping between the stakeholders and their requirements.

  1. Understanding Modeling Requirements of Unstructured Business Processes

    NARCIS (Netherlands)

    Allah Bukhsh, Zaharah; van Sinderen, Marten J.; Sikkel, Nicolaas; Quartel, Dick

    2017-01-01

    Management of structured business processes is of interest to both academia and industry, where academia focuses on the development of methods and techniques while industry focuses on the development of supporting tools. With the shift from routine to knowledge work, the relevance of management of

  2. Churn in the Aircraft Spares Requirements Process.

    Science.gov (United States)

    1988-04-01

    ...compute the BY and EY (1984 and 1985) requirements. The EY requirement would have been used for the 1985 budget request submitted to... To examine the changes caused by churn, a sample was collected. This sample was not random and cannot be assumed to be representative of... in the FY86 Appropriations Act, and the budget request was reduced by ... million dollars based on a Surveys and Investigations report that...

  3. Evaluation of Chemical Kinetics for a Mathematical Model of Cadmium Reduction: Reaction Rate, Rate Constant and Reaction Order in an Electrochemical Process

    International Nuclear Information System (INIS)

    Prayitno

    2007-01-01

    The experiment studied the rate of cadmium reduction by an electrochemical process as influenced by processing time, concentration, current strength and type of electrode plate. The aim of the experiment was to determine these influences and to derive a mathematical model for the cadmium reduction reaction rate, the reaction rate constant and the reaction order. The results indicate that the required processing time is 30 minutes when using a copper electrode plate and 20 minutes when using an aluminium electrode plate. The effective current strength for the electrochemical process is 0.8 ampere and the effective concentration is 5.23 mg/l. The most effective electrode plate type for reduction from the waste is Al, and the reduction efficiency is 98%. (author)

  4. The necessary presence of constant motivation in the educational process. Reflections and Illustrations

    Directory of Open Access Journals (Sweden)

    Irma de las Nieves Hernández López

    2013-03-01

    In this article, the necessary presence of underlying motivation in the teaching-learning process is consistently exemplified, in this specific case in the delivery of the lecture as a way toward excellence in university teaching, with regard to the topic of accentuation within the spelling course taught in the first year of the School of Accounting. It is based on the use of situations, drama and elements of culture in its broadest sense, highlighting those related to the profession and corroborating their importance, in order to positively affect the students' ownership of the content and their integral formation.

  5. A Precise Method for Processing Data to Determine the Dissociation Constants of Polyhydroxy Carboxylic Acids via Potentiometric Titration.

    Science.gov (United States)

    Huang, Kaixuan; Xu, Yong; Lu, Wen; Yu, Shiyuan

    2017-12-01

    The thermodynamic dissociation constants of xylonic acid and gluconic acid were studied via potentiometric methods, and the results were verified using lactic acid, which has a known pKa value, as a model compound. Solutions of xylonic acid and gluconic acid were titrated with a standard solution of sodium hydroxide. The determined pKa data were processed via the method of derivative plots using computer software, and the accuracy was validated using the Gran method. The dissociation constants associated with the carboxylic acid group of xylonic and gluconic acids were determined to be pKa 1  = 3.56 ± 0.07 and pKa 1  = 3.74 ± 0.06, respectively. Further, the experimental data showed that the second deprotonation constants associated with a hydroxyl group of each of the two acids were pKa 2  = 8.58 ± 0.12 and pKa 2  = 7.06 ± 0.08, respectively. The deprotonation behavior of polyhydroxy carboxylic acids was altered using various ratios with Cu(II) to form complexes in solution, and this led to proposing a hypothesis for further study.
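    The derivative treatment mentioned in the abstract can be illustrated as follows: the equivalence volume is located at the maximum of dpH/dV, and the pH at half that volume estimates pKa1. The titration curve below is synthetic (a generic monoprotic weak acid with an assumed pKa of 3.70), not the xylonic or gluconic acid data of the study, and activity corrections are ignored.

```python
# Hedged sketch: reading pKa1 off a potentiometric titration curve via the derivative plot and
# the half-equivalence point. Synthetic monoprotic acid data; activity corrections ignored.
import numpy as np
from scipy.optimize import brentq

Ka, Ca, Va = 10 ** -3.7, 0.02, 25.0       # assumed "true" pKa 3.70, 0.02 M acid, 25 mL sample
Cb = 0.02                                 # 0.02 M NaOH titrant
Kw = 1e-14

def pH_after(Vb):
    Ct = Ca * Va / (Va + Vb)              # total acid, corrected for dilution
    Na = Cb * Vb / (Va + Vb)
    # charge balance: [H+] + [Na+] = [OH-] + [A-]
    f = lambda h: h + Na - Kw / h - Ct * Ka / (h + Ka)
    return -np.log10(brentq(f, 1e-14, 1.0))

V = np.arange(0.1, 40.0, 0.1)
pH = np.array([pH_after(v) for v in V])
dpH_dV = np.gradient(pH, V)

V_eq = V[np.argmax(dpH_dV)]               # equivalence volume from the derivative plot
pKa_est = pH[np.argmin(np.abs(V - V_eq / 2))]
print(f"V_eq = {V_eq:.1f} mL, estimated pKa1 = {pKa_est:.2f} (assumed true value 3.70)")
```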

  6. Optical Constants of Crystallized TiO2 Coatings Prepared by Sol-Gel Process

    Directory of Open Access Journals (Sweden)

    Jun Shen

    2013-07-01

    Titanium oxide coatings have been deposited by the sol-gel dip-coating method. Crystallization of the titanium oxide coatings was then achieved through thermal annealing at temperatures above 400 °C. The structural properties and surface morphology of the crystallized coatings were studied by micro-Raman spectroscopy and atomic force microscopy, respectively. A characterization technique based on least-squares fitting to the measured reflectance and transmittance spectra is used to determine the refractive indices of the crystallized TiO2 coatings. The stability of the synthesized sol was also investigated by a dynamic light scattering particle size analyzer. The influence of thermal annealing on the optical properties was then discussed. An increase in refractive index with the high-temperature thermal annealing process was observed, with refractive index values ranging from 1.98 to 2.57 at the He-Ne laser wavelength of 633 nm. The Raman spectroscopy and atomic force microscopy studies indicate that the index variation is due to changes in crystalline phase, density, and morphology during thermal annealing.

  7. Shielding requirements for constant-potential diagnostic x-ray beams determined by a Monte Carlo calculation

    International Nuclear Information System (INIS)

    Simpkin, D.J.

    1989-01-01

    A Monte Carlo calculation has been performed to determine the transmission of broad constant-potential x-ray beams through Pb, concrete, gypsum wallboard, steel and plate glass. The EGS4 code system was used with a simple broad-beam geometric model to generate exposure transmission curves for published 70, 100, 120 and 140-kVcp x-ray spectra. These curves are compared to measured three-phase generated x-ray transmission data in the literature and found to be reasonable. For calculation ease the data are fit to an equation previously shown to describe such curves quite well. These calculated transmission data are then used to create three-phase shielding tables for Pb and concrete, as well as other materials not available in Report No. 49 of the NCRP
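    The abstract refers to fitting the transmission data to "an equation previously shown to describe such curves quite well" without naming it; the sketch below assumes the common three-parameter broad-beam model of Archer et al. and fits it to synthetic data. The α, β, γ values used to generate the data are placeholders, not Simpkin's fitted parameters.

```python
# Hedged sketch: fitting broad-beam transmission data with the three-parameter Archer model,
# assumed to be the kind of equation the abstract alludes to. All parameter values and the
# synthetic "measurements" are placeholders, not the paper's Monte Carlo results.
import numpy as np
from scipy.optimize import curve_fit

def archer(x_mm, alpha, beta, gamma):
    """Broad-beam transmission B(x) = [(1 + beta/alpha)*exp(alpha*gamma*x) - beta/alpha]**(-1/gamma)."""
    return ((1.0 + beta / alpha) * np.exp(alpha * gamma * x_mm) - beta / alpha) ** (-1.0 / gamma)

x = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0, 3.0])          # mm Pb, illustrative barrier thicknesses
rng = np.random.default_rng(3)
meas = archer(x, 2.5, 15.3, 0.76) * (1.0 + 0.02 * rng.normal(size=x.size))   # synthetic data

popt, _ = curve_fit(archer, x, meas, p0=[2.0, 10.0, 0.5], bounds=(1e-3, np.inf))
print("fitted (alpha, beta, gamma):", np.round(popt, 2), "  placeholders used: (2.5, 15.3, 0.76)")

xx = np.linspace(0.0, 6.0, 601)
bb = archer(xx, *popt)
print("thickness for transmission 1e-3: about", round(float(xx[np.searchsorted(-bb, -1e-3)]), 2), "mm Pb")
```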

  8. Shielding requirements for constant-potential diagnostic x-ray beams determined by a Monte Carlo calculation.

    Science.gov (United States)

    Simpkin, D J

    1989-02-01

    A Monte Carlo calculation has been performed to determine the transmission of broad constant-potential x-ray beams through Pb, concrete, gypsum wallboard, steel and plate glass. The EGS4 code system was used with a simple broad-beam geometric model to generate exposure transmission curves for published 70, 100, 120 and 140-kVcp x-ray spectra. These curves are compared to measured three-phase generated x-ray transmission data in the literature and found to be reasonable. For calculation ease the data are fit to an equation previously shown to describe such curves quite well. These calculated transmission data are then used to create three-phase shielding tables for Pb and concrete, as well as other materials not available in Report No. 49 of the NCRP.

  9. Aligning Requirements-Driven Software Processes with IT Governance

    OpenAIRE

    Nguyen Huynh Anh, Vu; Kolp, Manuel; Heng, Samedi; Wautelet, Yves

    2017-01-01

    Requirements Engineering is closely intertwined with Information Technology (IT) Governance. Aligning IT Governance principles with requirements-driven software processes makes it possible to propose governance and management rules for software development that cope with stakeholders' requirements and expectations. Typically, the goal of IT Governance in software engineering is to ensure that the results of a software organization's business processes meet the strategic requirements of the organization...

  10. Effects of the amount and schedule of varied practice after constant practice on the adaptive process of motor learning

    Directory of Open Access Journals (Sweden)

    Umberto Cesar Corrêa

    2014-12-01

    Full Text Available This study investigated the effects of different amounts and schedules of varied practice, after constant practice, on the adaptive process of motor learning. Participants were one hundred and seven children with a mean age of 11.1 ± 0.9 years. Three experiments were carried out using a complex anticipatory timing task, manipulating the following components in the varied practice: visual stimulus speed (experiment 1); sequential response pattern (experiment 2); and visual stimulus speed plus sequential response pattern (experiment 3). In all experiments the design involved three amounts (18, 36, and 63 trials) and two schedules (random and blocked) of varied practice. The experiments also involved two learning phases: stabilization and adaptation. The dependent variables were the absolute, variable, and constant errors related to the task goal, and the relative timing of the sequential response. Results showed that the performance of all groups worsened in the adaptation phase, and no difference was observed between them. Altogether, the results of the three experiments allow the conclusion that the amounts of trials manipulated in the random and blocked practices did not promote diversification of the skill, since no adaptation was observed.

  11. Business Process Simulation: Requirements for Business and Resource Models

    Directory of Open Access Journals (Sweden)

    Audrius Rima

    2015-07-01

    Full Text Available The purpose of Business Process Model and Notation (BPMN) is to provide an easily understandable graphical representation of business processes. Thus, BPMN is widely used and applied in various areas, one of them being business process simulation. This paper addresses some BPMN model based business process simulation problems. The paper formulates requirements for business process and resource models to enable their use for business process simulation.

  12. The reactions of neutral iron clusters with D2O: Deconvolution of equilibrium constants from multiphoton processes

    International Nuclear Information System (INIS)

    Weiller, B.H.; Bechthold, P.S.; Parks, E.K.; Pobo, L.G.; Riley, S.J.

    1989-01-01

    The chemical reactions of neutral iron clusters with D2O are studied in a continuous flow tube reactor by molecular beam sampling and time-of-flight mass spectrometry with laser photoionization. Product distributions are invariant to a four-fold change in reaction time, demonstrating that equilibrium is attained between free and adsorbed D2O. The observed negative temperature dependence is consistent with an exothermic, molecular addition reaction at equilibrium. Under our experimental conditions, there is significant photodesorption of D2O (Fe_n(D2O)_m + hν → Fe_n + m D2O) along with ionization due to absorption of multiple photons from the ionizing laser. Using a simple model based on a rate equation analysis, we are able to quantitatively deconvolute this desorption process from the equilibrium constants. 8 refs., 1 fig

  13. A Compositional Knowledge Level Process Model of Requirements Engineering

    NARCIS (Netherlands)

    Herlea, D.E.; Jonker, C.M.; Treur, J.; Wijngaards, W.C.A.

    2002-01-01

    In current literature few detailed process models for Requirements Engineering are presented: usually high-level activities are distinguished, without a more precise specification of each activity. In this paper the process of Requirements Engineering has been analyzed using knowledge-level

  14. Nuclear constants

    International Nuclear Information System (INIS)

    Foos, J.

    1999-01-01

    This paper is written in two tables. The first one describes the different particles (bosons and fermions). The second one gives the nuclear constants of the isotopes of the different elements, for Z = 1 to 56. (A.L.B.)

  15. Nuclear constants

    International Nuclear Information System (INIS)

    Foos, J.

    2000-01-01

    This paper is written in two tables. The first one describes the different particles (bosons and fermions). The second one gives the nuclear constants of the isotopes of the different elements, for Z = 56 to 68. (A.L.B.)

  16. Nuclear constants

    International Nuclear Information System (INIS)

    Foos, J.

    1998-01-01

    This paper is made of two tables. The first table describes the different particles (bosons and fermions) while the second one gives the nuclear constants of isotopes from the different elements with Z = 1 to 25. (J.S.)

  17. Nuclear constants

    International Nuclear Information System (INIS)

    Foos, J.

    1999-01-01

    This paper is written in two tables. The first one describes the different particles (bosons and fermions). The second one gives the nuclear constants of the isotopes of the different elements, for Z = 56 to 68. (A.L.B.)

  18. Cosmological Hubble constant and nuclear Hubble constant

    International Nuclear Information System (INIS)

    Horbuniev, Amelia; Besliu, Calin; Jipa, Alexandru

    2005-01-01

    The evolution of the Universe after the Big Bang and the evolution of the dense and highly excited nuclear matter formed in relativistic nuclear collisions are investigated and compared. Values of the Hubble constant for cosmological and nuclear processes are obtained. For nucleus-nucleus collisions at high energies the nuclear Hubble constant is obtained in the framework of different models involving the hydrodynamic flow of the nuclear matter. A significant difference between the values of the two Hubble constants - cosmological and nuclear - is observed

  19. Are fundamental constants really constant

    International Nuclear Information System (INIS)

    Norman, E.B.

    1986-01-01

    Reasons for suspecting that fundamental constants might change with time are reviewed. Possible consequences of such variations are examined. The present status of experimental tests of these ideas is discussed

  20. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling.

    Science.gov (United States)

    Núñez, M; Robie, T; Vlachos, D G

    2017-10-28

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
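
    A minimal sketch of the rescaling idea, assuming a toy network with one quasi-equilibrated fast reaction pair and one slow step: dividing the fast rate constants by a scale factor leaves the slow product formation essentially unchanged while cutting the number of KMC events by orders of magnitude. The rate constants, populations, and the statistical sampling criteria of the actual method are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def kmc(scale=1.0, n_target=100):
        """Gillespie-type KMC for A <-> B (fast, quasi-equilibrated) and B -> C (slow).
        'scale' divides the fast rate constants, as in rate-constant rescaling; the
        slow production of C is left (approximately) unchanged while the number of
        KMC events drops by orders of magnitude."""
        k1, k2, k3 = 200.0 / scale, 200.0 / scale, 1.0   # illustrative constants
        A, B, C, t, events = 500, 0, 0, 0.0, 0
        while C < n_target:
            a = np.array([k1 * A, k2 * B, k3 * B])       # propensities
            a_tot = a.sum()
            t += rng.exponential(1.0 / a_tot)            # time to next event
            r = rng.choice(3, p=a / a_tot)               # which reaction fires
            if r == 0:
                A, B = A - 1, B + 1
            elif r == 1:
                A, B = A + 1, B - 1
            else:
                B, C = B - 1, C + 1
            events += 1
        return t, events

    for scale in (1.0, 100.0):
        t, ev = kmc(scale)
        print(f"scale = {scale:5.0f}: time to 100 C = {t:6.2f}, KMC events = {ev}")
    ```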

  1. Effect of input data variability on estimations of the equivalent constant temperature time for microbial inactivation by HTST and retort thermal processing.

    Science.gov (United States)

    Salgado, Diana; Torres, J Antonio; Welti-Chanes, Jorge; Velazquez, Gonzalo

    2011-08-01

    Consumer demand for food safety and quality improvements, combined with new regulations, requires determining the processor's confidence level that processes lowering safety risks while retaining quality will meet consumer expectations and regulatory requirements. Monte Carlo calculation procedures incorporate input data variability to obtain the statistical distribution of the output of prediction models. This advantage was used to analyze the survival risk of Mycobacterium avium subspecies paratuberculosis (M. paratuberculosis) and Clostridium botulinum spores in high-temperature short-time (HTST) milk and canned mushrooms, respectively. The results showed an estimated 68.4% probability that the 15 sec HTST process would not achieve at least 5 decimal reductions in M. paratuberculosis counts. Although estimates of the raw milk load of this pathogen are not available to estimate the probability of finding it in pasteurized milk, the wide range of the estimated decimal reductions, reflecting the variability of the experimental data available, should be a concern to dairy processors. Knowledge of the C. botulinum initial load and decimal thermal time variability was used to estimate an 8.5 min thermal process time at 110 °C for canned mushrooms reducing the risk to 10⁻⁹ spores/container with a 95% confidence. This value was substantially higher than the one estimated using average values (6.0 min) with an unacceptable 68.6% probability of missing the desired processing objective. Finally, the benefit of reducing the variability in initial load and decimal thermal time was confirmed, achieving a 26.3% reduction in processing time when standard deviation values were lowered by 90%. In spite of novel technologies, commercialized or under development, thermal processing continues to be the most reliable and cost-effective alternative to deliver safe foods. However, the severity of the process should be assessed to avoid under- and over-processing
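
    The Monte Carlo reasoning in the canned-mushroom example can be sketched in a few lines: sample the D-value and the initial spore load from assumed distributions, compute the process time each sampled "world" would need to reach the 10^-9 spores/container objective, and compare the mean-input answer with the 95th-percentile answer. All distribution parameters below are hypothetical placeholders, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n_sims = 100_000

    # Hypothetical variability (illustrative only): D-value of C. botulinum spores
    # at 110 C (min) and initial contamination (log10 spores/container).
    D110 = rng.normal(loc=1.5, scale=0.2, size=n_sims).clip(min=0.5)
    logN0 = rng.normal(loc=0.0, scale=0.5, size=n_sims)

    target_logN = -9.0   # 1e-9 spores/container survival objective

    # First-order thermal death: log10 N(t) = log10 N0 - t/D, so each simulated
    # "world" needs a process time t = D * (log10 N0 - target).
    t_needed = D110 * (logN0 - target_logN)

    t_mean = D110.mean() * (logN0.mean() - target_logN)
    t_95 = np.quantile(t_needed, 0.95)
    print(f"process time from mean inputs : {t_mean:5.2f} min")
    print(f"process time, 95% confidence  : {t_95:5.2f} min")
    print(f"P(miss target using mean-input time) = {(t_needed > t_mean).mean():.1%}")
    ```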

  2. Academic Freedom Requires Constant Vigilance

    Science.gov (United States)

    Emery, Kim

    2009-01-01

    Traditionally, academic freedom has been understood as an individual right and a negative liberty. As William Tierney and Vincente Lechuga explain, "Academic freedom, although an institutional concept, was vested in the individual professor." The touchstone document on academic freedom, the American Association of University Professors' (AAUP)…

  3. Usage of information safety requirements in improving tube bending process

    Science.gov (United States)

    Livshitz, I. I.; Kunakov, E.; Lontsikh, P. A.

    2018-05-01

    This article is devoted to improving the analysis of a technological process by implementing information security requirements. The aim of this research is to analyze how information technology implementation can increase the competitiveness of aircraft industry enterprises, using the tube bending technological process as an example. The article analyzes the kinds of tube bending and the current technique. In addition, an analysis of potential risks in the tube bending technological process is carried out in terms of information security.

  4. 78 FR 38555 - Importer Permit Requirements for Tobacco Products and Processed Tobacco, and Other Requirements...

    Science.gov (United States)

    2013-06-27

    ..., and Other Requirements for Tobacco Products, Processed Tobacco, and Cigarette Papers and Tubes AGENCY... administration and enforcement of importer permits over the past decade, TTB believes that it can gain... minimum manufacturing and marking requirements for tobacco products and cigarette papers and tubes, and...

  5. Two-stage Lagrangian modeling of ignition processes in ignition quality tester and constant volume combustion chambers

    KAUST Repository

    Alfazazi, Adamu

    2016-08-10

    The ignition characteristics of isooctane and n-heptane in an ignition quality tester (IQT) were simulated using a two-stage Lagrangian (TSL) model, which is a zero-dimensional (0-D) reactor network method. The TSL model was also used to simulate the ignition delay of n-dodecane and n-heptane in a constant volume combustion chamber (CVCC), which is archived in the engine combustion network (ECN) library (http://www.ca.sandia.gov/ecn). A detailed chemical kinetic model for gasoline surrogates from the Lawrence Livermore National Laboratory (LLNL) was utilized for the simulation of n-heptane and isooctane. Additional simulations were performed using an optimized gasoline surrogate mechanism from RWTH Aachen University. Validations of the simulated data were also performed with experimental results from an IQT at KAUST. For simulation of n-dodecane in the CVCC, two n-dodecane kinetic models from the literature were utilized. The primary aim of this study is to test the ability of TSL to replicate ignition timings in the IQT and the CVCC. The agreement between the model and the experiment is acceptable except for isooctane in the IQT and n-heptane and n-dodecane in the CVCC. The ability of the simulations to replicate observable trends in ignition delay times with regard to changes in ambient temperature and pressure allows the model to provide insights into the reactions contributing towards ignition. Thus, the TSL model was further employed to investigate the physical and chemical processes responsible for controlling the overall ignition under various conditions. The effects of exothermicity, ambient pressure, and ambient oxygen concentration on first stage ignition were also studied. Increasing ambient pressure and oxygen concentration was found to shorten the overall ignition delay time, but does not affect the timing of the first stage ignition. Additionally, the temperature at the end of the first stage ignition was found to increase at higher ambient pressure

  6. Room temperature plasma oxidation: A new process for preparation of ultrathin layers of silicon oxide, and high dielectric constant materials

    International Nuclear Information System (INIS)

    Tinoco, J.C.; Estrada, M.; Baez, H.; Cerdeira, A.

    2006-01-01

    In this paper we present the basic features and the oxidation law of room temperature plasma oxidation (RTPO), as a new process for the preparation of less than 2 nm thick layers of SiO2 and high-k layers of TiO2. We show that the oxidation rate follows a power-law dependence on oxidation time. The proportionality constant is a function of pressure, plasma power, reagent gas and plasma density, while the exponent depends only on the reactive gas. These parameters are related to the physical phenomena occurring inside the plasma during oxidation. Metal-Oxide-Semiconductor (MOS) capacitors fabricated with these layers are characterized by capacitance-voltage, current-voltage and current-voltage-temperature measurements. Less than 2.5 nm SiO2 layers with surface roughness similar to thermal oxide films, surface state density below 3 x 10^11 cm^-2 and current density in the expected range for each corresponding thickness were obtained by RTPO in a parallel-plate reactor, at 180 mW/cm^2 and pressures between 9.33 and 66.5 Pa (0.07 and 0.5 Torr), using O2 and N2O as reactive gases. MOS capacitors with TiO2 layers formed by RTPO of sputtered Ti layers are also characterized. Finally, MOS capacitors with stacked layers of TiO2 over SiO2, both layers obtained by RTPO, were prepared and evaluated to determine the feasibility of TiO2 as a candidate for next technology nodes
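
    A power law of the form d = k·t^n is linear in log-log coordinates, so the proportionality constant and exponent can be extracted with a one-line regression; the thickness-time data below are hypothetical, used only to show the arithmetic.

    ```python
    import numpy as np

    # Hypothetical oxide-thickness vs. oxidation-time data (nm, min); illustrative
    # numbers only.  A power law d = k * t**n is linear in log-log coordinates.
    t = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
    d = np.array([0.62, 0.78, 1.05, 1.30, 1.62, 2.05])

    n_exp, log_k = np.polyfit(np.log(t), np.log(d), 1)   # slope = exponent
    k = np.exp(log_k)
    print(f"d(t) = {k:.2f} * t^{n_exp:.2f}  (nm, t in min)")
    ```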

  7. Business Process Quality Computation : Computing Non-Functional Requirements to Improve Business Processes

    NARCIS (Netherlands)

    Heidari, F.

    2015-01-01

    Business process modelling is an important part of system design. When designing or redesigning a business process, stakeholders specify, negotiate, and agree on business requirements to be satisfied, including non-functional requirements that concern the quality of the business process. This thesis

  8. The minimum work required for air conditioning process

    International Nuclear Information System (INIS)

    Alhazmy, Majed M.

    2006-01-01

    This paper presents a theoretical analysis based on the second law of thermodynamics to estimate the minimum work required for the air conditioning process. The air conditioning process for hot and humid climates involves reducing air temperature and humidity. In the present analysis the inlet state is the state of the environment, which has also been chosen as the dead state. The final state is the human thermal comfort condition, fixed at 20 °C dry bulb temperature and 60% relative humidity. The general air conditioning process is represented by an equivalent path consisting of an isothermal dehumidification followed by sensible cooling. An exergy analysis is performed on each process separately. Dehumidification is analyzed as a separation process of an ideal mixture of air and water vapor. The variation of the minimum work required for the air conditioning process with the ambient conditions is estimated, and the ratio of the work needed for dehumidification to the total work needed to perform the entire process is presented. The effect of small variations in the final conditions on the minimum required work is evaluated. Tolerating a warmer or more humid final condition can be an easy way to reduce energy consumption during critical load periods
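
    As a rough companion calculation, the sketch below evaluates the minimum (reversible) work per kilogram of dry air as the flow exergy of the conditioned state relative to an ambient dead state, using a standard ideal-gas moist-air exergy expression and a Magnus-type saturation-pressure correlation. The property correlations and ambient states are assumptions chosen for illustration, not the paper's own formulation or numbers.

    ```python
    import numpy as np

    # Constants for ideal-gas moist air (standard rounded values).
    CPA, CPV = 1.005, 1.86      # kJ/(kg K), dry air and water vapour
    RA = 0.287                  # kJ/(kg K), dry-air gas constant
    P0 = 101.325                # kPa, ambient (dead-state) pressure

    def p_sat(t_c):
        """Magnus-type saturation pressure of water, kPa (t_c in deg C)."""
        return 0.6112 * np.exp(17.62 * t_c / (243.12 + t_c))

    def humidity_ratio(t_c, rh):
        pv = rh * p_sat(t_c)
        return 0.622 * pv / (P0 - pv)

    def moist_air_exergy(t_c, rh, t0_c, rh0):
        """Specific flow exergy of moist air per kg dry air (kJ/kg da), relative to
        a dead state at (t0_c, rh0, P0); Wepfer-type expression (assumed here)."""
        T, T0 = t_c + 273.15, t0_c + 273.15
        w, w0 = humidity_ratio(t_c, rh), humidity_ratio(t0_c, rh0)
        thermal = (CPA + w * CPV) * T0 * (T / T0 - 1.0 - np.log(T / T0))
        # the pressure term vanishes because the process runs at P = P0
        chemical = RA * T0 * ((1 + 1.608 * w) * np.log((1 + 1.608 * w0) / (1 + 1.608 * w))
                              + 1.608 * w * np.log(w / w0))
        return thermal + chemical

    # Minimum work to condition ambient air (the dead state) to 20 C, 60% RH:
    for t_amb, rh_amb in [(35.0, 0.40), (40.0, 0.30), (35.0, 0.70)]:
        w_min = moist_air_exergy(20.0, 0.60, t_amb, rh_amb)
        print(f"ambient {t_amb:4.1f} C / {rh_amb:.0%} RH -> w_min = {w_min:6.2f} kJ per kg dry air")
    ```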

  9. Green Software Engineering Adaption In Requirement Elicitation Process

    Directory of Open Access Journals (Sweden)

    Umma Khatuna Jannat

    2015-08-01

    Full Text Available Green software engineering is a recent approach that investigates the role of environmental concerns in software, that is, green software systems. It is now widely accepted that green software practices can fit all processes of software development, including the requirements elicitation process. Nowadays software companies use requirements elicitation techniques extensively, because this process plays an increasingly important role in software development, and most requirements elicitation processes are improved by using various techniques and tools. The intention of this research is therefore to adapt green software engineering to existing elicitation techniques and to recommend suitable actions for improvement. This research involved qualitative data. A few keywords were used in the search procedure across IEEE, ACM, Springer, Elsevier, Google Scholar, Scopus and Wiley, for articles published from 2010 until 2016. From the literature review, 15 traditional requirements elicitation factors and 23 improvement techniques for adapting them to green engineering were identified. Lastly, the paper includes a short review of the literature, a description of the grounded theory, and some of the identified issues related to the need for requirements elicitation improvement techniques.

  10. Compliance with NRC subsystem requirements in the repository licensing process

    International Nuclear Information System (INIS)

    Minwalla, H.

    1994-01-01

    Section 121 of the Nuclear Waste Policy Act of 1982 requires the Nuclear Regulatory Commission (Commission) to issue technical requirements and criteria, for the use of a system of multiple barriers in the design of the repository, that are not inconsistent with any comparable standard promulgated by the Environmental Protection Agency (EPA). The Administrator of the EPA is required to promulgate generally applicable standards for protection of the general environment from offsite releases from radioactive material in repositories. The Commission's regulations pertaining to geologic repositories are provided in 10 CFR part 60. The Commission has provided in 10 CFR 60.112 the overall post-closure system performance objective which is used to demonstrate compliance with the EPA high-level waste (HLW) disposal standard. In addition, the Commission has provided, in 10 CFR 60.113, subsystem performance requirements for substantially complete containment, fractional release rate, and groundwater travel time; however, none of these subsystem performance requirements have a causal technical nexus with the EPA HLW disposal standard. This paper examines the issue of compliance with the conflicting dual regulatory role of subsystem performance requirements in the repository licensing process and recommends several approaches that would appropriately define the role of subsystem performance requirements in the repository licensing process

  11. An analytical approach to customer requirement information processing

    Science.gov (United States)

    Zhou, Zude; Xiao, Zheng; Liu, Quan; Ai, Qingsong

    2013-11-01

    'Customer requirements' (CRs) management is a key component of customer relationship management (CRM). By processing customer-focused information, CRs management plays an important role in enterprise systems (ESs). Although two main CRs analysis methods, quality function deployment (QFD) and Kano model, have been applied to many fields by many enterprises in the past several decades, the limitations such as complex processes and operations make them unsuitable for online businesses among small- and medium-sized enterprises (SMEs). Currently, most SMEs do not have the resources to implement QFD or Kano model. In this article, we propose a method named customer requirement information (CRI), which provides a simpler and easier way for SMEs to run CRs analysis. The proposed method analyses CRs from the perspective of information and applies mathematical methods to the analysis process. A detailed description of CRI's acquisition, classification and processing is provided.

  12. Integrating reuse measurement practices into the ERP requirements engineering process

    NARCIS (Netherlands)

    Daneva, Maia; Münich, Jürgen; Vierimaa, Matias

    2006-01-01

    The management and deployment of reuse-driven and architecture-centric requirements engineering processes have become common in many organizations adopting Enterprise Resource Planning solutions. Yet, little is known about the variety of reusability aspects in ERP projects at the level of

  13. PI and PID controller tuning rule design for processes with delay, to achieve constant gain and phase margins for all values of delay

    OpenAIRE

    O'Dwyer, Aidan

    2001-01-01

    This paper will discuss the design of PI and PID controller tuning rules to compensate processes with delay that are modelled in a number of ways. The rules allow the achievement of constant gain and phase margins as the delay varies.
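
    One way to see how such rules can hold the margins constant as the delay changes is to solve the margin conditions numerically for each delay. The sketch below does this for a PI controller and a first-order-plus-dead-time process by sweeping the integral time and enforcing the gain-margin and phase-margin conditions at the phase and gain crossover frequencies; the process parameters and target margins are arbitrary examples, and this numerical search is not the closed-form tuning rule of the paper.

    ```python
    import numpy as np
    from scipy.optimize import brentq, minimize_scalar

    def design_pi(K, T, L, Am, phim_deg):
        """Find PI gains (Kc, Ti) giving gain margin Am and phase margin phim for
        a FOPDT process K*exp(-L*s)/(T*s+1), by sweeping Ti and enforcing the two
        margin conditions exactly at each candidate Ti (a numerical sketch)."""
        phim = np.deg2rad(phim_deg)

        def phase(w, Ti):       # open-loop phase of PI * FOPDT (rad)
            return -np.arctan(1.0 / (w * Ti)) - w * L - np.arctan(w * T)

        def mag_no_kc(w, Ti):   # open-loop magnitude with Kc = 1
            return np.sqrt(1.0 + 1.0 / (w * Ti) ** 2) * K / np.sqrt(1.0 + (w * T) ** 2)

        def margins(Ti):
            # phase crossover (phase = -pi) fixes Kc from the gain margin
            wp = brentq(lambda w: phase(w, Ti) + np.pi, 1e-4, 1e4)
            Kc = 1.0 / (Am * mag_no_kc(wp, Ti))
            # gain crossover (|L| = 1) gives the achieved phase margin
            wg = brentq(lambda w: Kc * mag_no_kc(w, Ti) - 1.0, 1e-6, wp)
            pm = np.pi + phase(wg, Ti)
            return (pm - phim) ** 2, Kc

        res = minimize_scalar(lambda Ti: margins(Ti)[0],
                              bounds=(0.05 * T, 20.0 * T), method="bounded")
        Ti = res.x
        return margins(Ti)[1], Ti

    # Example: K=1, T=1 and several delays; the target margins stay Am=3, 60 deg.
    for L in (0.2, 0.5, 1.0, 2.0):
        Kc, Ti = design_pi(1.0, 1.0, L, 3.0, 60.0)
        print(f"L={L:4.1f}  Kc={Kc:6.3f}  Ti={Ti:6.3f}")
    ```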

  14. REQUIREMENTS PROCESSING TOOLS AND THE BUILDING DESIGNERS MOTIVATION ON USE

    Directory of Open Access Journals (Sweden)

    Camila Pegoraro

    2017-04-01

    Full Text Available The successful development of projects requires, among other conditions, the ability to process requirements. In the construction literature, researchers have found that human difficulties were often at the root of Requirements Processing (RP) problems throughout the design phases, and that the employment of tools could be a key factor for RP implementation. To check these outcomes and to look at how current practitioners behave in relation to RP tools, an exploratory case study was conducted with a building design team from a public university. The aim of this paper was to investigate the perception of benefits and the motivation of designers regarding RP tools. The results indicated that 42% of the participants are highly motivated to use new tools and that they have more interest in tools that deal directly with design activities than in those focused on data. Validation tools aroused interest as the most useful tools for designers. 66.7% of the participants mentioned that the tools can make the design process clearer, and that training and adaptation are crucial to promote acceptance of and commitment to RP. The main contribution is the indication of gaps for further research and for tools improvement from the designers' perspective.

  15. Does body size of dairy cows, at constant ratio of maintenance to production requirements, affect productivity in a pasture-based production system?

    Science.gov (United States)

    Hofstetter, P; Steiger Burgos, M; Petermann, R; Münger, A; Blum, J W; Thomet, P; Menzi, H; Kohler, S; Kunz, P

    2011-12-01

    This study compared productivity of dairy cows with different body weight (BW), but a constant ratio of maintenance to production requirements in their first lactation, in a pasture-based production system with spring calving. Two herds, Herd L (13 and 14 large cows in 2003 and 2004 respectively; average BW after calving, 721 kg) and Herd S (16 small cows in both years; 606 kg), all in their second or following lactations, were each allocated 6 ha of pasture and rotationally grazed on 10 parallel paddocks with equal herbage offer and nutritional values. Winter hay, harvested from the same pastures, was offered ad libitum in the indoor periods in a tied stall barn. Each herd received, per lactation and year, approximately 2000 kg dry matter (DM) of concentrates and of fodder beets, equally distributed to every individual. Indoors, the L-cows ingested more DM than the S-cows (18.7 vs. 16.3 kg DM/cow per day), as they did on pasture (17.9 vs. 15.5 kg DM/cow per day); both dairy cow types were equally efficient in utilising pasture-based forage. © 2010 Blackwell Verlag GmbH.

  16. Quality assurance requirements for dedication process in Angra 1

    Energy Technology Data Exchange (ETDEWEB)

    Baliza, Ana Rosa, E-mail: baliza@eletronuclear.gov.br [Eletrobras Termonuclear S.A. (ELETRONUCLEAR), Angra dos Reis, RJ (Brazil). Departamento GQO.G; Morghi, Youssef, E-mail: ymo@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2015-07-01

    In Brazil the regulatory body is CNEN (Comissao Nacional de Energia Nuclear); according to its requirements, when there is no Brazilian standard, the utilities shall follow the requirements of the designer. For Angra 1, the designer is an American company - Westinghouse. So, the requirements for dedication of the U.S. NRC (United States Nuclear Regulatory Commission) shall be applied; these requirements are in 10CFR21 - Reporting of Defects and Noncompliance. According to 10CFR21, when applied to licensed nuclear power plants, dedication is an acceptance process undertaken to provide reasonable assurance that a commercial grade item to be used as a basic component will perform its intended safety function and, in this respect, is deemed equivalent to an item designed and manufactured under a quality assurance program standard for nuclear power plants. This assurance is achieved by identifying the critical characteristics of the item and verifying their acceptability by inspections, tests, or analyses performed by the purchaser or a third-party dedicating entity. (author)

  17. Quality assurance requirements for dedication process in Angra 1

    International Nuclear Information System (INIS)

    Baliza, Ana Rosa

    2015-01-01

    In Brazil the regulatory body is CNEN (Comissao Nacional de Energia Nuclear); according to its requirements, when there is no Brazilian standard, the utilities shall follow the requirements of the designer. For Angra 1, the designer is an American company - Westinghouse. So, the requirements for dedication of the U.S. NRC (United States Nuclear Regulatory Commission) shall be applied; these requirements are in 10CFR21 - Reporting of Defects and Noncompliance. According to 10CFR21, when applied to licensed nuclear power plants, dedication is an acceptance process undertaken to provide reasonable assurance that a commercial grade item to be used as a basic component will perform its intended safety function and, in this respect, is deemed equivalent to an item designed and manufactured under a quality assurance program standard for nuclear power plants. This assurance is achieved by identifying the critical characteristics of the item and verifying their acceptability by inspections, tests, or analyses performed by the purchaser or a third-party dedicating entity. (author)

  18. Project Interface Requirements Process Including Shuttle Lessons Learned

    Science.gov (United States)

    Bauch, Garland T.

    2010-01-01

    Most failures occur at interfaces between organizations and hardware. Processing interface requirements at the start of a project life cycle will reduce the likelihood of costly interface changes/failures later. This can be done by adding Interface Control Documents (ICDs) to the Project top level drawing tree, providing technical direction to the Projects for interface requirements, and by funding the interface requirements function directly from the Project Manager's office. The interface requirements function within the Project Systems Engineering and Integration (SE&I) Office would work in-line with the project element design engineers early in the life cycle to enhance communications and negotiate technical issues between the elements. This function would work as the technical arm of the Project Manager to help ensure that the Project cost, schedule, and risk objectives can be met during the life cycle. Some ICD lessons learned during the Space Shuttle Program (SSP) life cycle include the use of hardware interface photos in the ICD, progressive life cycle design certification by analysis, test, and operations experience, assigning interface design engineers to Element Interface (EI) and Project technical panels, and linking interface design drawings with project build drawings

  19. Determination of constant of chemical reaction rate in the process of steel treatment in the endothermal atmosphere

    International Nuclear Information System (INIS)

    Gyulikhandanov, E.L.; Kislenkov, V.V.

    1978-01-01

    The high-temperature method was applied to measure the relative variation in the electrical resistance of a thin steel foil prepared from the 12KhN3A, 18Kh2N4VA, 20KhGNR, and 20Kh3MVF steels during its carburization and decarburization, and the temperature dependence of the reaction rate for the interaction of endothermal atmospheres of different compositions with the alloyed γ-Fe was determined. A connection has been established between the reaction rate constant and the thermodynamic activity of carbon in the alloyed austenite at a temperature of about 925 °C, corresponding to the cementation temperature. This provides a quantitative estimate of the above value for any alloyed steel and, given numerical values of the diffusion coefficients, enables an accurate calculation of the distribution of carbon throughout the depth of a layer when cementation is carried out in the endothermal atmosphere

  20. Fuel processing requirements and techniques for fuel cell propulsion power

    Science.gov (United States)

    Kumar, R.; Ahmed, S.; Yu, M.

    Fuels for fuel cells in transportation systems are likely to be methanol, natural gas, hydrogen, propane, or ethanol. Fuels other than hydrogen will need to be reformed to hydrogen on-board the vehicle. The fuel reformer must meet stringent requirements for weight and volume, product quality, and transient operation. It must be compact and lightweight, must produce low levels of CO and other byproducts, and must have rapid start-up and good dynamic response. Catalytic steam reforming, catalytic or noncatalytic partial oxidation reforming, or some combination of these processes may be used. This paper discusses salient features of the different kinds of reformers and describes the catalysts and processes being examined for the oxidation reforming of methanol and the steam reforming of ethanol. Effective catalysts and reaction conditions for the former have been identified; promising catalysts and reaction conditions for the latter are being investigated.

  1. 76 FR 36078 - Milk for Manufacturing Purposes and Its Production and Processing; Requirements Recommended for...

    Science.gov (United States)

    2011-06-21

    ...] Milk for Manufacturing Purposes and Its Production and Processing; Requirements Recommended for... to quality and sanitation requirements for the production and processing of manufacturing grade milk... Manufacturing Purposes and Its Production and Processing; Recommended Requirements for Adoption by State...

  2. Possibility of reconstructing the mechanism and rate constants of elementary processes in the gas-discharge plasma of a rapid-flow laser

    International Nuclear Information System (INIS)

    Gontar, V.G.; Pashkin, S.V.; Surguchenko, S.A.

    1982-01-01

    The procedure is given for reconstructing the mechanism of elementary processes in the plasma of a gas-discharge laser on the basis of a statistical analysis of the experimental data. The method of writing the initial equations described here permits automation of the procedure for constructing a mathematical model of the discharge. A new iteration procedure for estimating the rate constants of the elementary processes by the method of least squares is proposed which has a wide region of convergence. The proposed methods are analyzed on test problems
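
    The least-squares estimation step can be illustrated on a toy mechanism with a known analytic solution: for consecutive first-order reactions A -> B -> C, the rate constants are recovered from noisy concentration histories with an iterative (trust-region) least-squares fit. The mechanism, noise level, and starting guesses are hypothetical and far simpler than a real gas-discharge kinetic scheme.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Sketch of least-squares recovery of elementary rate constants from observed
    # species evolution, for a toy mechanism A -k1-> B -k2-> C (analytic solution).
    def concentrations(t, k1, k2, A0=1.0):
        A = A0 * np.exp(-k1 * t)
        B = A0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
        return A, B, A0 - A - B

    t = np.linspace(0.0, 10.0, 40)
    rng = np.random.default_rng(3)
    A_obs, B_obs, C_obs = concentrations(t, 0.8, 0.3)          # "true" constants
    noise = lambda: rng.normal(0.0, 0.01, t.size)
    A_obs, B_obs, C_obs = A_obs + noise(), B_obs + noise(), C_obs + noise()

    def residuals(params):
        A, B, C = concentrations(t, *params)
        return np.concatenate([A - A_obs, B - B_obs, C - C_obs])

    fit = least_squares(residuals, x0=[0.3, 0.9], bounds=(1e-6, 10.0))
    print("estimated k1, k2:", np.round(fit.x, 3))
    ```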

  3. Auto type-selection of constant supporting in nuclear power stations

    International Nuclear Information System (INIS)

    Liu Hu; Wang Fujun; Liu Wei; Li Zhaoqing

    2013-01-01

    To solve the type-selection problem for constant supports in nuclear power stations, the requirements and process for type selection are proposed by combining the characteristics of constant supports, which can be adjusted within a certain range, with the load-displacement rules; the type-selection process is then automated with Visual Basic. After the known parameters are input, the program automatically selects the most economical and reasonable constant support by means of arrays and functions. (authors)
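
    The selection logic can be sketched as a simple filter over a catalog: keep the sizes whose adjustable load range covers the operating load with a reserve margin and whose travel range covers the calculated displacement, then take the cheapest. The catalog entries and margin below are entirely hypothetical placeholders, and the sketch is in Python rather than the Visual Basic used in the work.

    ```python
    # Hypothetical constant-support (hanger) catalog: not vendor data.
    CATALOG = [
        # (size, min_load_kN, max_load_kN, max_travel_mm, relative_cost)
        ("CS-1",  1.0,  5.0, 100, 1.0),
        ("CS-2",  4.0, 12.0, 150, 1.6),
        ("CS-3", 10.0, 30.0, 200, 2.7),
        ("CS-4", 25.0, 80.0, 250, 4.5),
    ]

    def select_constant_support(load_kN, travel_mm, margin=0.1):
        """Return the cheapest catalog size whose adjustable load range covers the
        operating load (with a reserve margin) and whose travel range covers the
        calculated thermal displacement."""
        candidates = [
            row for row in CATALOG
            if row[1] <= load_kN * (1 - margin)
            and load_kN * (1 + margin) <= row[2]
            and travel_mm <= row[3]
        ]
        if not candidates:
            raise ValueError("no suitable constant support in catalog")
        return min(candidates, key=lambda row: row[4])

    print(select_constant_support(load_kN=8.5, travel_mm=120))   # -> CS-2
    ```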

  4. Plug and Process Loads Capacity and Power Requirements Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sheppy, M.; Gentile-Polese, L.

    2014-09-01

    This report addresses gaps in actionable knowledge that would help reduce the plug load capacities designed into buildings. Prospective building occupants and real estate brokers lack accurate references for plug and process load (PPL) capacity requirements, so they often request 5-10 W/ft2 in their lease agreements. Limited initial data, however, suggest that actual PPL densities in leased buildings are substantially lower. Overestimating PPL capacity leads designers to oversize electrical infrastructure and cooling systems. Better guidance will enable improved sizing and design of these systems, decrease upfront capital costs, and allow systems to operate more energy efficiently. The main focus of this report is to provide industry with reliable, objective third-party guidance to address the information gap in typical PPL densities for commercial building tenants. This could drive changes in negotiations about PPL energy demands.

  5. Optimizing the solar photo-Fenton process in the treatment of contaminated water. Determination of intrinsic kinetic constants for scale-up

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, Miguel [Universidad de Los Andes, Escuela Basica de Ingenieria, La Hechicera, Merida (Venezuela); Malato, Sixto [Plataforma Solar de Almeria, Tabernas (PSA) (Spain); Pulgarin, Cesar [Institute of Environmental Engineering, Laboratory for Environmental Biotechnology, Swiss Federal Institute of Technology (EPFL), CH-1015 Lausanne (Switzerland); Contreras, Sandra; Curco, David; Gimenez, Jaime; Esplugas, Santiago [Department d' Enginyeria Quimica i Metallurgia, Universitat de Barcelona, Marti i Franques 1, 08028 Barcelona (Spain)

    2005-10-01

    The elimination of aromatic compounds present in surface water by photo-Fenton with sunlight as the source of radiation was studied. The concentrations of Fe3+ and H2O2 are key factors for this process. A solar simulator and a prototype parabolic collector were used as laboratory-scale reactors to find the parameters of those key factors to be used in the CPC (compound parabolic collector) pilot plant reactor. The initial mineralization rate constant (k_obs) was determined and evaluated at different Fe3+ and H2O2 concentrations to find the best values for maximum efficiency. In all the experiments the mineralization of an aqueous phenol solution was described by assuming a pseudo-first-order reaction. The intrinsic kinetic constants not dependent on the lighting conditions were also estimated for scale-up. (author)
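
    For a pseudo-first-order mineralization model, k_obs is simply the slope of ln(TOC0/TOC) versus time, as in the short sketch below; the TOC data points are hypothetical, not measurements from the study.

    ```python
    import numpy as np

    # Hypothetical TOC (total organic carbon) decay data, mg/L vs. time in min;
    # illustrative numbers only, not from the cited study.
    t   = np.array([0, 10, 20, 30, 45, 60], dtype=float)
    toc = np.array([100.0, 74.0, 55.0, 41.0, 27.0, 18.0])

    # Pseudo-first-order mineralization: TOC(t) = TOC0 * exp(-k_obs * t),
    # so ln(TOC0/TOC) is linear in t and k_obs is the slope.
    y = np.log(toc[0] / toc)
    k_obs, intercept = np.polyfit(t, y, 1)
    print(f"k_obs = {k_obs:.4f} 1/min  (intercept {intercept:.3f}, ideally ~0)")
    ```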

  6. Improvement of Requirement Elicitation Process through Cognitive Psychology

    Directory of Open Access Journals (Sweden)

    Sana Fatima

    2017-06-01

    Full Text Available Proper requirements elicitation is necessary for client satisfaction and overall project success, but requirements engineers face problems in understanding user requirements, and the users of the required system often fail to make the requirements engineering team understand what they actually want. It is then the responsibility of requirements engineers to extract proper requirements. This paper discusses how to use cognitive psychology and learning style models (LSM) to understand the psychology of clients. Moreover, it also discusses the use of the proper elicitation technique according to one's learning style in order to gather the right requirements.

  7. Pulsed Electrochemical Mass Spectrometry for Operando Tracking of Interfacial Processes in Small-Time-Constant Electrochemical Devices such as Supercapacitors.

    Science.gov (United States)

    Batisse, Nicolas; Raymundo-Piñero, Encarnación

    2017-11-29

    A more detailed understanding of the electrode/electrolyte interface degradation during the charging cycle in supercapacitors is of great interest for exploring the voltage stability range and therefore the extractable energy. The evaluation of the gas evolution during the charging, discharging, and aging processes is a powerful tool toward determining the stability and energy capacity of supercapacitors. Here, we attempt to fit the gas analysis resolution to the time response of a low-gas-generation power device by adopting a modified pulsed electrochemical mass spectrometry (PEMS) method. The pertinence of the method is shown using a symmetric carbon/carbon supercapacitor operating in different aqueous electrolytes. The differences observed in the gas levels and compositions as a function of the cell voltage correlate to the evolution of the physicochemical characteristics of the carbon electrodes and to the electrochemical performance, giving a complete picture of the processes taking place at the electrode/electrolyte interface.

  8. MINIMIZE ENERGY AND COSTS REQUIREMENT OF WEEDING AND FERTILIZING PROCESS FOR FIBER CROPS IN SMALL FARMS

    Directory of Open Access Journals (Sweden)

    Tarek FOUDA

    2015-06-01

    Full Text Available The experimental work was carried out during the 2014 agricultural summer season at the experimental farm of Gemmiza Research Station, Gharbiya governorate, to minimize energy and costs in the weeding and fertilizing processes for fiber crops (Kenaf and Roselle) in small farms. The performance of the manufactured multipurpose unit was studied as a function of machine forward speed (2.2, 2.8, 3.4 and 4 km/h) and fertilizing rate (30, 45 and 60 kg N/fed), at a constant soil moisture content of 20% (d.b.) on average. Performance of the manufactured machine was evaluated in terms of fuel consumption, power and energy requirements, effective field capacity, theoretical field capacity, field efficiency, and operational costs. The experimental results revealed that the manufactured machine decreased energy and increased effective field capacity and efficiency under the following conditions: machine forward speed of 2.2 km/h and an average moisture content of 20%.
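
    The field-capacity and energy figures used in such evaluations follow from standard relations (with 1 feddan = 4200 m2); the sketch below shows the arithmetic with hypothetical width, efficiency, and power values that are not measurements from the study.

    ```python
    # Worked example of the field-capacity and energy measures named above.
    # 1 feddan = 4200 m2; the numerical inputs are hypothetical placeholders.
    def theoretical_field_capacity(speed_kmh, width_m):
        """Theoretical field capacity in feddan/h: area swept per hour / 4200 m2."""
        return speed_kmh * 1000.0 * width_m / 4200.0

    def energy_requirement(power_kW, effective_capacity_fed_h):
        """Energy requirement in kWh per feddan."""
        return power_kW / effective_capacity_fed_h

    tfc = theoretical_field_capacity(speed_kmh=2.2, width_m=0.5)   # fed/h
    efc = tfc * 0.80                                               # assume 80% field efficiency
    print(f"TFC = {tfc:.2f} fed/h, EFC = {efc:.2f} fed/h")
    print(f"Energy = {energy_requirement(4.0, efc):.1f} kWh/fed at an assumed 4 kW power")
    ```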

  9. The determinants of response time in a repeated constant-sum game: A robust Bayesian hierarchical dual-process model.

    Science.gov (United States)

    Spiliopoulos, Leonidas

    2018-03-01

    The investigation of response time and behavior has a long tradition in cognitive psychology, particularly for non-strategic decision-making. Recently, experimental economists have also studied response time in strategic interactions, but with an emphasis on either one-shot games or repeated social-dilemmas. I investigate the determinants of response time in a repeated (pure-conflict) game, admitting a unique mixed strategy Nash equilibrium, with fixed partner matching. Response times depend upon the interaction of two decision models embedded in a dual-process framework (Achtziger and Alós-Ferrer, 2014; Alós-Ferrer, 2016). The first decision model is the commonly used win-stay/lose-shift heuristic and the second the pattern-detecting reinforcement learning model in Spiliopoulos (2013b). The former is less complex and can be executed more quickly than the latter. As predicted, conflict between these two models (i.e., each one recommending a different course of action) led to longer response times than cases without conflict. The dual-process framework makes other qualitative response time predictions arising from the interaction between the existence (or not) of conflict and which one of the two decision models the chosen action is consistent with-these were broadly verified by the data. Other determinants of RT were hypothesized on the basis of existing theory and tested empirically. Response times were strongly dependent on the actions chosen by both players in the previous rounds and the resulting outcomes. Specifically, response time was shortest after a win in the previous round where the maximum possible payoff was obtained; response time after losses was significantly longer. Strongly auto-correlated behavior (regardless of its sign) was also associated with longer response times. I conclude that, similar to other tasks, there is a strong coupling in repeated games between behavior and RT, which can be exploited to further our understanding of decision

  10. In-process tool rotational speed variation with constant heat input in friction stir welding of AZ31 sheets with variable thickness

    Science.gov (United States)

    Buffa, Gianluca; Campanella, Davide; Forcellese, Archimede; Fratini, Livan; Simoncini, Michela

    2017-10-01

    In the present work, friction stir welding experiments on AZ31 magnesium alloy sheets, characterized by a variable thickness along the welding line, were carried out. The approach adopted during welding consisted in maintaining a constant heat input to the joint. To this purpose, the rotational speed of the pin tool was increased with decreasing thickness and decreased with increasing thickness in order to obtain the same temperatures during welding. The amount by which the rotational speed was changed as a function of the sheet thickness was defined on the basis of the results given by FEM simulations of the FSW process. Finally, the effect of the in-process variation of the tool rotational speed on the mechanical and microstructural properties of the FSWed joints was analysed by comparing both the nominal stress vs. nominal strain curves and the microstructure of FSWed joints obtained under different process conditions. It was observed that FSW performed by keeping the heat input to the joint constant leads to almost coincident results in terms of curve shape, ultimate tensile strength and ultimate elongation values, and microstructure.

  11. 75 FR 61418 - Milk for Manufacturing Purposes and Its Production and Processing; Requirements Recommended for...

    Science.gov (United States)

    2010-10-05

    ... for Manufacturing Purposes and Its Production and Processing; Requirements Recommended for Adoption by... sanitation requirements for the production and processing of manufacturing grade milk. These Recommended... comments. SUMMARY: This document proposes to amend the recommended manufacturing milk requirements...

  12. 75 FR 6683 - Notice of Proposed Information Collection: Comment Request; Technical Processing Requirements for...

    Science.gov (United States)

    2010-02-10

    ... Information Collection: Comment Request; Technical Processing Requirements for Multifamily Project Mortgage... information: Title of Proposal: Technical Processing Requirements for Multifamily Project Mortgage Insurance... information collection requirement described below will be submitted to the Office of Management and Budget...

  13. 40 CFR 63.2252 - What are the requirements for process units that have no control or work practice requirements?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true What are the requirements for process units that have no control or work practice requirements? 63.2252 Section 63.2252 Protection of... Pollutants: Plywood and Composite Wood Products General Compliance Requirements § 63.2252 What are the...

  14. What do information reuse and automated processing require in engineering design? Semantic process

    Directory of Open Access Journals (Sweden)

    Ossi Nykänen

    2011-12-01

    Full Text Available Purpose: The purpose of this study is to characterize, analyze, and demonstrate a machine-understandable semantic process for validating, integrating, and processing technical design information. This establishes both a vision and tools for information reuse and semi-automatic processing in engineering design projects, including virtual machine laboratory applications with generated components. Design/methodology/approach: The process model has been developed iteratively in terms of action research, constrained by the existing technical design practices and assumptions (design documents, expert feedback), available technologies (pre-studies and experiments with scripting and pipeline tools), benchmarking with other process models and methods (notably the RUP and DITA), and formal requirements (computability and the critical information paths for the generated applications). In practice, the work includes both quantitative and qualitative components. Findings: Technical design processes may be greatly enhanced in terms of semantic process thinking, by enriching design information, and by automating information validation and transformation tasks. Contemporary design information, however, is mainly intended for human consumption, and needs to be explicitly enriched with the currently missing data and interfaces. In practice, this may require acknowledging the role of a technical information or knowledge engineer to lead the development of the semantic design information process in a design organization. There is also a trade-off between machine-readability and system complexity that needs to be studied further, both empirically and in theory. Research limitations/implications: The conceptualization of the semantic process is essentially an abstraction based on the idea of progressive design. While this effectively allows implementing semantic processes with, e.g., pipeline technologies, the abstraction is valid only when technical design is organized into

  15. Dosimetry control for radiation processing - basic requirements and standards

    International Nuclear Information System (INIS)

    Ivanova, M.; Tsrunchev, Ts.

    2004-01-01

    A brief review of the basic international codes and standards for dosimetry control for radiation processing (high-dose dosimetry), the setting up of dosimetry control for radiation processing, and the metrology control of the dosimetry system is given. The present state of dosimetry control for food processing and Bulgaria's long experience in food irradiation (three irradiation facilities are operational at the moment) are presented. The absence of both a national standard for high doses and an accredited laboratory for calibration and audit of radiation processing dosimetry systems is also discussed

  16. Development of plasma cutting process at observation of environmental requirements

    International Nuclear Information System (INIS)

    Czech, J.; Matusiak, J.; Pasek-Siurek, H.

    1997-01-01

    Plasma cutting is one of the basic methods for thermal cutting of metals. It is characterized by high productivity and quality of the cut surface. However, the plasma cutting process is one of the most harmful processes for the environment and human health. This results from many agents posing a potential environmental risk. The large amount of dust and gases emitted during the process, as well as the intensive radiation of the electric arc and excessive noise, are considered the most harmful hazards. The existing ventilation and filtration systems are not able to solve all problems resulting from the process. Plasma cutting under water is worthy of notice, especially in the further development of the plasma cutting process, because of human safety and environmental protection. Such a solution considerably reduces the emission of dust and gases, as well as decreasing the noise level and the ultraviolet radiation. An additional advantage of underwater plasma cutting is a reduction in the width of the material heating zone and a decrease in the strains of the elements being cut. However, the productivity of this process is a little lower, which results in an increase in cutting cost. The paper presents the results of the investigations made at the Institute of Welding in Gliwice in the area of plasma cutting equipment with energy-saving inverter power supplies used in automated underwater plasma cutting, as well as the results of testing of welding environment contamination and safety hazards. (author)

  17. Hanford Tanks Initiative requirements and document management process guide

    International Nuclear Information System (INIS)

    Schaus, P.S.

    1998-01-01

    This revision of the guide provides updated references to project management level Program Management and Assessment Configuration Management activities, and provides working level directions for submitting requirements and project documentation related to the Hanford Tanks Initiative (HTI) project. This includes documents and information created by HTI, as well as non-HTI generated materials submitted to the project

  18. Embedding stakeholder values in the requirements engineering process

    NARCIS (Netherlands)

    Harbers, M.; Detweiler, C.; Neerincx, M.A.

    2015-01-01

    Software has become an integral part of our daily lives and should therefore account for human values such as trust, autonomy and privacy. Human values have received increased attention in the field of Requirements Engineering over the last few years, but existing work offers no systematic way to

  19. 48 CFR 1337.110-70 - Personnel security processing requirements.

    Science.gov (United States)

    2010-10-01

    ... information technology (IT) system, as required by the Department of Commerce Security Manual and Department of Commerce Security Program Policy and Minimum Implementation Standards. (b) Insert clause 1352.237... as National Security Contracts that will be performed on or within a Department of Commerce facility...

  20. Constant physics and characteristics of fundamental constant

    International Nuclear Information System (INIS)

    Tarrach, R.

    1998-01-01

    We present some evidence which supports a surprising physical interpretation of the fundamental constants. First, we relate two of them through the renormalization group. This leaves as many fundamental constants as base units. Second, we introduce an adimensional system of units without fundamental constants. Third, and most important, we find, while interpreting the units of the adimensional system, that in all cases accessible to experimentation the fundamental constants indicate either discretization at small values or boundedness at large values of the corresponding physical quantity. (Author) 12 refs

  1. Defining DSL design principles for enhancing the requirements elicitation process

    Directory of Open Access Journals (Sweden)

    Gustavo Arroyo

    2012-03-01

    Full Text Available Requirements elicitation is concerned with learning and understanding the needs of users with respect to a new software development. Frequently the methods employed for requirements elicitation are adapted from areas like the social sciences and do not provide executable (prototype-based) feedback. As a consequence, it is relatively common to discover that the first release does not fit the requirements defined at the beginning of the project. Using domain-specific languages (DSLs) as an auxiliary tool for requirements elicitation is a commonly accepted idea; unfortunately, there are few works in the literature devoted to the definition of DSL design principles oriented to requirements elicitation. We propose design principles for DSL development (regardless of paradigm) which are sufficient to model the domain in a requirements phase, and we generate test cases in ANTLR, Ruby and Curry. Furthermore, we enunciate a new profile for the requirements analyst and a set of elicitation steps. The use of DSLs not only gives us immediate feedback with

  2. 78 FR 64146 - 30-Day Notice of Proposed Information Collection: Technical Processing Requirements for...

    Science.gov (United States)

    2013-10-25

    ... Information Collection: Technical Processing Requirements for Multifamily Project Mortgage Insurance AGENCY: Office of the Chief Information Officer, HUD. ACTION: Notice. SUMMARY: HUD has submitted the proposed... Information Collection Title of Information Collection: Technical Processing Requirements for Multifamily...

  3. 78 FR 65695 - 30-Day Notice of Proposed Information Collection: Technical Processing Requirements for...

    Science.gov (United States)

    2013-11-01

    ... Information Collection: Technical Processing Requirements for Multifamily Project Mortgage Insurance AGENCY: Office of the Chief Information Officer, HUD. ACTION: Correction, notice. SUMMARY: On October 25, 2013 at... Collection Title of Information Collection: Technical Processing Requirements for Multifamily Project...

  4. 40 CFR 74.17 - Application requirements for process sources. [Reserved

    Science.gov (United States)

    2010-07-01

    ... requirements for process sources. [Reserved] ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Application requirements for process sources. [Reserved] 74.17 Section 74.17 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY...

  5. Mathematical Formulation Requirements and Specifications for the Process Models

    International Nuclear Information System (INIS)

    Steefel, C.; Moulton, D.; Pau, G.; Lipnikov, K.; Meza, J.; Lichtner, P.; Wolery, T.; Bacon, D.; Spycher, N.; Bell, J.; Moridis, G.; Yabusaki, S.; Sonnenthal, E.; Zyvoloski, G.; Andre, B.; Zheng, L.; Davis, J.

    2010-01-01

    The Advanced Simulation Capability for Environmental Management (ASCEM) is intended to be a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM program is aimed at addressing critical EM program needs to better understand and quantify flow and contaminant transport behavior in complex geological systems. It will also address the long-term performance of engineered components including cementitious materials in nuclear waste disposal facilities, in order to reduce uncertainties and risks associated with DOE EM's environmental cleanup and closure activities. Building upon national capabilities developed from decades of Research and Development in subsurface geosciences, computational and computer science, modeling and applied mathematics, and environmental remediation, the ASCEM initiative will develop an integrated, open-source, high-performance computer modeling system for multiphase, multicomponent, multiscale subsurface flow and contaminant transport. This integrated modeling system will incorporate capabilities for predicting releases from various waste forms, identifying exposure pathways and performing dose calculations, and conducting systematic uncertainty quantification. The ASCEM approach will be demonstrated on selected sites, and then applied to support the next generation of performance assessments of nuclear waste disposal and facility decommissioning across the EM complex. The Multi-Process High Performance Computing (HPC) Simulator is one of three thrust areas in ASCEM. The other two are the Platform and Integrated Toolsets (dubbed the Platform) and Site Applications. The primary objective of the HPC Simulator is to provide a flexible and extensible computational engine to simulate the coupled processes and flow scenarios described by the conceptual models developed using the ASCEM Platform. The graded and iterative approach to assessments naturally

  6. Deriving business processes with service level agreements from early requirements

    NARCIS (Netherlands)

    Frankova, Ganna; Seguran, Magali; Gilcher, Florian; Trabelsi, Slim; Doerflinger, Joerg; Aiello, Marco

    When designing a service-based business process employing loosely coupled services, one is not only interested in guaranteeing a certain flow of work, but also in how the work will be performed. This involves the consideration of non-functional properties which go from execution time and costs, to

  7. Model of the process with piecewise-constant extremals to minimize losses of vitamins during the melting of melons and gourds

    Directory of Open Access Journals (Sweden)

    E. V. Inochkina

    2017-01-01

    Full Text Available Extending the storage life of gourd fruits is an urgent task for the processing industry. The most mature and widely available approach is dehydration of the raw material by supplying heat-transfer media. Besides the solid dry matter, the raw material contains 80–90% water, and as moisture is removed its thermophysical, structural-mechanical, and physicochemical characteristics change. The ratio of water to dry matter in the plant material largely determines the drying regimes and the storage conditions of the finished product. Drying is subject to a number of constraints: the drying temperature must not exceed the degradation temperature of the vitamins and proteins, and the allowable moisture content of the product depends on preventing undesirable reactions of the sugars at the critical moisture content. An important problem in industrial drying is quality control at each drying stage, the dynamics of which are difficult to describe with mathematical models. The main optimization objectives for industrial drying processes are preservation of the valuable components of the feedstock, drying time, and energy and resource conservation. The article describes the development of an effective control algorithm for the dehydration process using the drying of melon slices as an example. Based on two-stage drying experiments with melons of the Taman variety, together with the available literature and the authors' own data, a regression model with relaxation is proposed that relates moisture content and vitamin C content to the time-varying temperature and pressure. Optimal control of the drying process then amounts to searching for the thermobaric regime that maximizes the vitamin C content at the end of drying, subject to a specified final moisture content. The main finding is the solution of this problem for the case of piecewise constant temperature and pressure in
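
    The regression model and experimental data behind this record are not reproduced in the abstract, but the control idea it describes can be illustrated with a small sketch: search over piecewise-constant (two-stage) temperature schedules and keep the schedule that maximizes retained vitamin C while meeting a final-moisture constraint. The drying and degradation laws, rate constants, and limits below are hypothetical placeholders, not the paper's model.

        # Illustrative sketch only: the paper's regression model is not reproduced here.
        # A hypothetical first-order vitamin C degradation law and a hypothetical
        # exponential drying law are assumed, just to show how a piecewise-constant
        # (two-stage) temperature schedule can be searched for the best final vitamin C
        # content subject to a final-moisture constraint.
        import math
        from itertools import product

        def simulate(T1, T2, t_switch, t_end, dt=0.01):
            """Integrate hypothetical drying/degradation laws for a two-stage schedule."""
            moisture, vitamin = 0.90, 1.0        # assumed initial moisture fraction, normalized vitamin C
            t = 0.0
            while t < t_end:
                T = T1 if t < t_switch else T2   # piecewise-constant control
                k_dry = 0.05 * math.exp(0.04 * (T - 40.0))   # hypothetical drying rate, 1/min
                k_deg = 1e-3 * math.exp(0.08 * (T - 40.0))   # hypothetical degradation rate, 1/min
                moisture -= k_dry * moisture * dt
                vitamin  -= k_deg * vitamin * dt
                t += dt
            return moisture, vitamin

        best = None
        for T1, T2, ts in product(range(45, 75, 5), range(45, 75, 5), range(30, 150, 30)):
            m, v = simulate(T1, T2, ts, t_end=240.0)
            if m <= 0.12 and (best is None or v > best[0]):   # moisture constraint, then maximize vitamin C
                best = (v, T1, T2, ts)

        if best:
            print("best retained vitamin C fraction %.3f with T1=%d C, T2=%d C, switch at %d min" % best)

    Replacing the placeholder simulate() with the paper's relaxation-based regression model would turn the same search loop into the thermobaric optimization the abstract describes.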

  8. Mathematical Formulation Requirements and Specifications for the Process Models

    Energy Technology Data Exchange (ETDEWEB)

    Steefel, C.; Moulton, D.; Pau, G.; Lipnikov, K.; Meza, J.; Lichtner, P.; Wolery, T.; Bacon, D.; Spycher, N.; Bell, J.; Moridis, G.; Yabusaki, S.; Sonnenthal, E.; Zyvoloski, G.; Andre, B.; Zheng, L.; Davis, J.

    2010-11-01

    The Advanced Simulation Capability for Environmental Management (ASCEM) is intended to be a state-of-the-art scientific tool and approach for understanding and predicting contaminant fate and transport in natural and engineered systems. The ASCEM program is aimed at addressing critical EM program needs to better understand and quantify flow and contaminant transport behavior in complex geological systems. It will also address the long-term performance of engineered components including cementitious materials in nuclear waste disposal facilities, in order to reduce uncertainties and risks associated with DOE EM's environmental cleanup and closure activities. Building upon national capabilities developed from decades of Research and Development in subsurface geosciences, computational and computer science, modeling and applied mathematics, and environmental remediation, the ASCEM initiative will develop an integrated, open-source, high-performance computer modeling system for multiphase, multicomponent, multiscale subsurface flow and contaminant transport. This integrated modeling system will incorporate capabilities for predicting releases from various waste forms, identifying exposure pathways and performing dose calculations, and conducting systematic uncertainty quantification. The ASCEM approach will be demonstrated on selected sites, and then applied to support the next generation of performance assessments of nuclear waste disposal and facility decommissioning across the EM complex. The Multi-Process High Performance Computing (HPC) Simulator is one of three thrust areas in ASCEM. The other two are the Platform and Integrated Toolsets (dubbed the Platform) and Site Applications. The primary objective of the HPC Simulator is to provide a flexible and extensible computational engine to simulate the coupled processes and flow scenarios described by the conceptual models developed using the ASCEM Platform. The graded and iterative approach to assessments

  9. The effect of cation:anion ratio in solution on the mechanism of barite growth at constant supersaturation: Role of the desolvation process on the growth kinetics

    Science.gov (United States)

    Kowacz, M.; Putnis, C. V.; Putnis, A.

    2007-11-01

    The mechanism of barite growth has been investigated in a fluid cell of an Atomic Force Microscope by passing solutions of constant supersaturation (Ω) but variable ion activity ratio (r = a(Ba2+)/a(SO4 2-)) over a barite substrate. The observed dependence of step-spreading velocity on solution stoichiometry can be explained by considering non-equivalent attachment frequency factors for the cation and anion. We show that the potential for two-dimensional nucleation changes under a constant thermodynamic driving force due to the kinetics of barium integration into the surface, and that the growth mode changes from preexisting step advancement to island spreading as the cation/anion activity ratio increases. Scanning electron microscopy studies of crystals grown in bulk solutions support our findings that matching the ion ratio in the fluid to that of the crystal lattice does not result in maximum growth and nucleation rates. Significantly more rapid rates correspond to solution stoichiometries where [Ba2+] is in excess with respect to [SO4 2-]. Experiments performed in dilute aqueous solutions of methanol show that even a 0.02 mole fraction of organic cosolvent in the growth solution significantly accelerates step growth velocity and nucleation rates (while keeping Ω the same as in the reference solution in water). Our observations suggest that the effect of methanol on barite growth results first of all from reduction of the barrier that prevents Ba2+ from reaching the surface and corroborate the hypothesis that desolvation of the cation and of the surface is the rate limiting kinetic process for two-dimensional nucleation and for crystal growth.
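
    As a side note on the solution bookkeeping implied above, the following sketch generates Ba2+/SO4 2- activity pairs that hold the supersaturation Ω = a(Ba2+)·a(SO4 2-)/Ksp fixed while the activity ratio r varies. The barite Ksp value is an assumed literature figure (~10^-9.97 at 25 °C), activity coefficients are neglected, and the numbers do not reproduce the paper's actual growth solutions.

        # Keep the supersaturation Omega = a(Ba2+)*a(SO4 2-)/Ksp fixed while varying
        # the activity ratio r = a(Ba2+)/a(SO4 2-).  The Ksp value is an assumed
        # literature figure and activity coefficients are ignored for brevity.
        import math

        KSP_BARITE = 10**-9.97      # assumed solubility product of BaSO4 at 25 C

        def activities_for(omega, r):
            """Return (a_Ba, a_SO4) giving supersaturation omega at activity ratio r."""
            ion_product = omega * KSP_BARITE        # a_Ba * a_SO4
            a_so4 = math.sqrt(ion_product / r)      # since a_Ba = r * a_SO4
            a_ba = r * a_so4
            return a_ba, a_so4

        omega = 20.0                                # constant thermodynamic driving force
        for r in (0.1, 1.0, 10.0, 100.0):
            a_ba, a_so4 = activities_for(omega, r)
            print("r = %6.1f : a(Ba2+) = %.2e, a(SO4 2-) = %.2e, Omega = %.1f"
                  % (r, a_ba, a_so4, a_ba * a_so4 / KSP_BARITE))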

  10. Process requirements of galactose oxidase catalyzed oxidation of alcohols

    DEFF Research Database (Denmark)

    Pedersen, Asbjørn Toftgaard; R. Birmingham, William; Rehn, Gustav

    2015-01-01

    -electron oxidants to reactivate the enzyme upon loss of the amino acid radical in its active site. In this work, the addition of catalase, single-electron oxidants, and copper ions was investigated systematically in order to find the minimum concentrations required to obtain a fully active GOase. Furthermore....... GOase was shown to be completely stable for 120 h in buffer with stirring at 25 °C, and the activity even increased 30% if the enzyme solution was also aerated in a similar experiment. The high Km for oxygen of GOase (>5 mM) relative to the solubility of oxygen in water reveals a trade-off between...... supplying oxygen at a sufficiently high rate and ensuring a high degree of enzyme utilization (i.e., ensuring the highest possible specific rate of reaction). Nevertheless, the good stability and high activity of GOase bode well for its future application as an industrial biocatalyst....
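
    The trade-off mentioned at the end of this abstract, a Km for oxygen above 5 mM versus the limited solubility of O2 in water, can be made concrete with a simple Michaelis-Menten estimate. The dissolved-oxygen values below (about 0.25 mM for air-saturated water at 25 °C and about 1 mM for O2-sparged water) are assumed typical figures, not data from the paper.

        # Rough illustration of the oxygen trade-off: with a Michaelis-Menten dependence
        # on dissolved O2, a Km above 5 mM means the enzyme runs far below Vmax at
        # realistic dissolved-oxygen levels.  The dissolved-O2 concentrations are
        # assumed typical values, not measurements from the paper.
        KM_O2 = 5.0e-3            # mol/L, lower bound quoted in the abstract

        def fraction_of_vmax(c_o2, km=KM_O2):
            """Michaelis-Menten saturation fraction v/Vmax at dissolved O2 concentration c_o2."""
            return c_o2 / (km + c_o2)

        for label, c in (("air-saturated water (~0.25 mM)", 0.25e-3),
                         ("O2-sparged water (~1.0 mM)", 1.0e-3)):
            print("%s: v/Vmax = %.1f %%" % (label, 100 * fraction_of_vmax(c)))

    Even with pure-oxygen sparging the enzyme would run well below saturation under these assumptions, which is why the rate of oxygen supply, rather than intrinsic enzyme activity, tends to limit such oxidations.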

  11. 40 CFR 63.7506 - Do any boilers or process heaters have limited requirements?

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 13 2010-07-01 2010-07-01 false Do any boilers or process heaters have..., and Institutional Boilers and Process Heaters General Compliance Requirements § 63.7506 Do any boilers or process heaters have limited requirements? (a) New or reconstructed boilers and process heaters in...

  12. Specific requirements of GS-R3 related to process implementation

    International Nuclear Information System (INIS)

    Florescu, N.

    2009-01-01

    The PowerPoint presentation gives: an overview of IAEA management system requirements and topics; the requirements specific to processes and process implementation; and the key practical challenge of using the process approach specified in IAEA SG GS-G3.1 and GS-G3.5. The following items are thoroughly discussed: - Requirements related to specific process implementation and developing processes; - Process management; - Generic management; - System processes: Control of documents; Control of products; Control of records; Purchasing; Communication; Managing organizational change; - Other requirements concerning the process management system; - General management system; - Grading; - Documentation; - Fulfilling the requirements of interested parties; - Management responsibility; - Planning, responsibility and authority for the management system; - Monitoring and measurement; - Independent assessment; - Management system review; - Non-conformances, corrective and preventive actions; - Improvement. Key practical challenge of using the process approach specified in IAEA SG GS-G3.1 and GS-G3.5: - Processes common to all stages; - Phases of process development proposed by IAEA. The following conclusions complete the presentation: GS-R-3 sets basic requirements for a process-based integrated management system; some key generic processes are required, but no specific process model is favoured (namely, there is no reference to management, core and support processes); it is up to the organization to determine the appropriate process model; the requirements are easily applicable to a wide range of facilities and activities, including those of a regulatory body; specific requirements are found in the specific Safety Guides. (author)

  13. Proposal for elicitation and analysis of environmental requirements into the construction design process: a case study

    Directory of Open Access Journals (Sweden)

    Camila Pegoraro

    2010-05-01

    Full Text Available Proposal: As new demands arise from sustainable development, environmental requirements pose another challenge to design process management. It is well known that companies that design buildings are usually exposed to many managerial difficulties. Faced with environmental demands, these companies need new capabilities to align environmental requirements with business goals and to include them properly in the design process. This paper is based on a case study in a construction company, developed through interviews and document analysis. It presents a procedure for eliciting, organizing, and analyzing project environmental requirements, based on requirements engineering (ER) concepts. As a result, it was concluded that ER concepts are useful for integrating environmental requirements into the design process and that strategic planning should give direction for effective adherence to environmental requirements. Moreover, a procedure for environmental requirements modeling is proposed. Key-words: Design process, Requirements management, Environmental requirements, Construction

  14. Is a pre-analytical process for urinalysis required?

    Science.gov (United States)

    Petit, Morgane; Beaudeux, Jean-Louis; Majoux, Sandrine; Hennequin, Carole

    2017-10-01

    For the reliable urinary measurement of calcium, phosphate and uric acid, a pre-analytical process of adding acid or base to urine samples at the laboratory is recommended in order to dissolve precipitated solutes. Several studies on different kinds of samples and analysers have previously shown that such a pre-analytical treatment is unnecessary. The objective was to study the necessity of pre-analytical treatment of urine for samples collected using the V-Monovette® (Sarstedt) system and measured on the Architect C16000 analyser (Abbott Diagnostics). Sixty urine samples from hospitalized patients were selected (n=30 for calcium and phosphate, and n=30 for uric acid). After acidification of urine samples for the measurement of calcium and phosphate, and alkalinisation for the measurement of uric acid, differences between results before and after the pre-analytical treatment were compared with the acceptable limits recommended by the French Society of Clinical Biology (SFBC). No difference in concentration between before and after pre-analytical treatment of urine samples exceeded the SFBC acceptable limits for the measurement of calcium and uric acid. For phosphate, only one sample exceeded these acceptable limits, showing a result paradoxically lower after acidification. In conclusion, in agreement with previous studies, our results show that acidification or alkalinisation of urine samples, whether from 24-h collections or from single voids, is not a pre-analytical necessity for the measurement of calcium, phosphate and uric acid.

  15. 25 CFR 42.6 - When does due process require a formal disciplinary hearing?

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false When does due process require a formal disciplinary... RIGHTS § 42.6 When does due process require a formal disciplinary hearing? Unless local school policies and procedures provide for less, a formal disciplinary hearing is required before a suspension in...

  16. FORMATION CONSTANTS AND THERMODYNAMIC ...

    African Journals Online (AJOL)

    KEY WORDS: Metal complexes, Schiff base ligand, Formation constant, DFT calculation ... best values for the formation constants of the proposed equilibrium model by .... to its positive charge distribution and the ligand deformation geometry.

  17. Ion exchange equilibrium constants

    CERN Document Server

    Marcus, Y

    2013-01-01

    Ion Exchange Equilibrium Constants focuses on the test-compilation of equilibrium constants for ion exchange reactions. The book first underscores the scope of the compilation, equilibrium constants, symbols used, and arrangement of the table. The manuscript then presents the table of equilibrium constants, including polystyrene sulfonate cation exchanger, polyacrylate cation exchanger, polymethacrylate cation exchanger, polystyrene phosphate cation exchanger, and zirconium phosphate cation exchanger. The text highlights zirconium oxide anion exchanger, zeolite type 13Y cation exchanger, and

  18. Non-dimensional characterization of the friction stir/spot welding process using a simple Couette flow model part I: Constant property Bingham plastic solution

    International Nuclear Information System (INIS)

    Buck, Gregory A.; Langerman, Michael

    2004-01-01

    A simplified model for the material flow created during a friction stir/spot welding process has been developed using a boundary driven cylindrical Couette flow model with a specified heat flux at the inner cylinder for a Bingham plastic material. Non-dimensionalization of the constant property governing equations identified three parameters that influence the velocity and temperature fields. Analytic solutions to these equations are presented and some representative results from a parametric study (parameters chosen and varied over ranges expected for the welding of a wide variety of metals) are discussed. The results also provide an expression for the critical radius (location of vanishing material velocity) as functions of the relevant non-dimensional parameters. A final study was conducted in which values for the non-dimensional heat flux parameter were chosen to produce peak dimensional temperatures on the order of 80% of the melting temperature for a typical 2000 series aluminum. Under these conditions it was discovered that the ratio of the maximum rate of shear work within the material (viscous dissipation) to the rate of energy input at the boundary due to frictional heating, ranged from about 0.0005% for the lowest pin tool rotation rate, to about 1.3% for the highest tool rotation rate studied. Curve fits to previous Gleeble data taken for a number of aluminum alloys provide reasonable justification for the Bingham plastic constitutive model, and although these fits indicate a strong temperature dependence for critical flow stress and viscosity, this work provides a simple tool for more sophisticated model validation. Part II of this study will present numerical solutions for velocity and temperature fields resulting from the non-linear coupling of the momentum and energy equations created by temperature dependent transport properties
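
    For readers unfamiliar with the constitutive model named in the title, the standard Bingham plastic law and the classical constant-property result for the yielded region in cylindrical Couette flow are summarized below. The symbols (yield stress τ_y, plastic viscosity μ_p, shear stress τ_i at the inner, pin-tool radius R_i) are generic textbook notation; the paper's own non-dimensional groups are not reproduced here.

        \[
        \tau \;=\; \tau_y + \mu_p\,\dot{\gamma} \quad (|\tau| > \tau_y),
        \qquad
        \dot{\gamma} = 0 \quad (|\tau| \le \tau_y),
        \]
        \[
        \tau(r) \;=\; \tau_i\left(\frac{R_i}{r}\right)^{2}
        \;\;\Longrightarrow\;\;
        r_c \;=\; R_i\,\sqrt{\tau_i/\tau_y}.
        \]

    The second relation gives the radius at which the shear stress falls to the yield stress, i.e. the "critical radius" beyond which the material velocity vanishes in this idealization.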

  19. Assessment of energy requirements in proven and new copper processes. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Pitt, C.H.; Wadsworth, M.E.

    1980-12-31

    Energy requirements are presented for thirteen pyrometallurgical and eight hydrometallurgical processes for the production of copper. Front end processing, mining, mineral processing, gas cleaning, and acid plant as well as mass balances are included. Conventional reverberatory smelting is used as a basis for comparison. Recommendations for needed process research in copper production are presented.

  20. Spacelab Level 4 Programmatic Implementation Assessment Study. Volume 2: Ground Processing requirements

    Science.gov (United States)

    1978-01-01

    Alternate ground processing options are summarized, including installation and test requirements for payloads, space processing, combined astronomy, and life sciences. The level 4 integration resource requirements are also reviewed for: personnel, temporary relocation, transportation, ground support equipment, and Spacelab flight hardware.

  1. Data acquisition and online processing requirements for experimentation at the superconducting super collider

    International Nuclear Information System (INIS)

    Lankford, A.J.; Barsotti, E.; Gaines, I.

    1990-01-01

    Differences in scale between data acquisition and online processing requirements for detectors at the Superconducting Super Collider and systems for existing large detectors will require new architectures and technological advances in these systems. Emerging technologies will be employed for data transfer, processing, and recording. (orig.)

  2. 10 CFR 70.64 - Requirements for new facilities or new processes at existing facilities.

    Science.gov (United States)

    2010-01-01

    ... postulated accidents that could lead to loss of safety functions. (5) Chemical protection. The design must... 10 Energy 2 2010-01-01 2010-01-01 false Requirements for new facilities or new processes at... Critical Mass of Special Nuclear Material § 70.64 Requirements for new facilities or new processes at...

  3. Data acquisition and online processing requirements for experimentation at the Superconducting Super Collider

    International Nuclear Information System (INIS)

    Lankford, A.J.; Barsotti, E.; Gaines, I.

    1989-07-01

    Differences in scale between data acquisition and online processing requirements for detectors at the Superconducting Super Collider and systems for existing large detectors will require new architectures and technological advances in these systems. Emerging technologies will be employed for data transfer, processing, and recording. 9 refs., 3 figs

  4. Characterization of In-Flight Processing of Alumina Powder Using a DC-RF Hybrid Plasma Flow System at Constant Low Operating Power

    Science.gov (United States)

    Nishiyama, H.; Onodera, M.; Igawa, J.; Nakajima, T.

    2009-12-01

    The aim of this study is to provide the optimum operating conditions for enhancing in-flight heating of alumina particles as far as possible, for particle spheroidization and aggregation of melted particles, using a DC-RF hybrid plasma flow system even at constant low operating power, based on thermofluid considerations. It is shown that swirl flow and a higher operating pressure enhance particle melting and the aggregation of melted particles, coupled with a radially uniform increase of the gas temperature downstream of the plasma, at constant electrical discharge conditions.

  5. Process and utility water requirements for cellulosic ethanol production processes via fermentation pathway

    Science.gov (United States)

    The increasing need of additional water resources for energy production is a growing concern for future economic development. In technology development for ethanol production from cellulosic feedstocks, a detailed assessment of the quantity and quality of water required, and the ...

  6. 21 CFR 111.315 - What are the requirements for laboratory control processes?

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What are the requirements for laboratory control... MANUFACTURING, PACKAGING, LABELING, OR HOLDING OPERATIONS FOR DIETARY SUPPLEMENTS Production and Process Control System: Requirements for Laboratory Operations § 111.315 What are the requirements for laboratory control...

  7. Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations

    Science.gov (United States)

    Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad

    2012-11-01

    This research paper is focused on carrying out a pragmatic qualitative analysis of various models and approaches to requirements negotiation (a sub-process of the requirements management plan, which is an output of scope management's Collect Requirements process) and studies stakeholder collaboration methodologies (i.e., from within the communication management knowledge area). The experiential analysis encompasses two tiers: the first tier applies a weighted scoring model, while the second tier develops SWOT matrices on the basis of the weighted scoring results for selecting an appropriate requirements negotiation model. Finally, the results are illustrated with statistical pie charts. On the basis of the results for the prevalent negotiation models and approaches, a unified approach for requirements negotiation and stakeholder collaboration is proposed, in which the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside some external required parameters such as MBTI and opportunity analysis.
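
    The first analysis tier named above is a weighted scoring model; a minimal sketch of that step is given below. The candidate negotiation models, criteria, weights and scores are made-up placeholders for illustration only, not the paper's data.

        # Minimal weighted-scoring sketch.  Candidates, criteria and weights are
        # hypothetical placeholders, not the paper's data.
        criteria_weights = {"stakeholder coverage": 0.30,
                            "tool support": 0.20,
                            "conflict resolution": 0.35,
                            "learning curve": 0.15}

        # Scores on a 1-5 scale for each candidate requirements-negotiation model.
        candidates = {
            "Win-Win":      {"stakeholder coverage": 4, "tool support": 3, "conflict resolution": 5, "learning curve": 2},
            "EasyWinWin":   {"stakeholder coverage": 4, "tool support": 5, "conflict resolution": 4, "learning curve": 3},
            "Ad-hoc talks": {"stakeholder coverage": 2, "tool support": 1, "conflict resolution": 2, "learning curve": 5},
        }

        def weighted_score(scores, weights):
            return sum(weights[c] * scores[c] for c in weights)

        for name, scores in sorted(candidates.items(),
                                   key=lambda kv: weighted_score(kv[1], criteria_weights),
                                   reverse=True):
            print("%-12s %.2f" % (name, weighted_score(scores, criteria_weights)))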

  8. Process of establishing design requirements and selecting alternative configurations for conceptual design of a VLA

    Directory of Open Access Journals (Sweden)

    Bo-Young Bae

    2017-04-01

    Full Text Available In this study, a process for establishing design requirements and selecting alternative configurations for the conceptual phase of aircraft design has been proposed. The proposed process uses system-engineering-based requirement-analysis techniques such as objective tree, analytic hierarchy process, and quality function deployment to establish logical and quantitative standards. Moreover, in order to perform a logical selection of alternative aircraft configurations, it uses advanced decision-making methods such as the morphological matrix and the technique for order preference by similarity to the ideal solution (TOPSIS). In addition, a preliminary sizing tool has been developed to check the feasibility of the established performance requirements and to evaluate the flight performance of the selected configurations. The present process has been applied to a two-seater very light aircraft (VLA), resulting in a set of tentative design requirements and two families of VLA configurations: a high-wing configuration and a low-wing configuration. The resulting set of design requirements consists of three categories: customer requirements, certification requirements, and performance requirements. The performance requirements include two mission requirements, for flight range and endurance, reflecting the customer requirements. The flight performances of the two configuration families were evaluated using the sizing tool developed, and the low-wing configuration with conventional tails was selected as the best baseline configuration for the VLA.
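
    The configuration-ranking step relies on TOPSIS; a generic sketch of that method is shown below. The decision matrix, two configuration families scored against a few hypothetical performance and cost criteria, is invented for illustration and does not reproduce the paper's sizing results.

        # Generic TOPSIS ranking sketch (technique named in the abstract).  The
        # decision matrix below is invented for illustration.
        import math

        alternatives = ["high-wing", "low-wing"]
        criteria = ["range", "stall speed", "empty weight", "cost"]
        benefit = [True, False, False, False]          # higher is better only for range
        weights = [0.35, 0.20, 0.25, 0.20]
        matrix = [
            [1400.0, 83.0, 285.0, 95.0],               # high-wing configuration (hypothetical)
            [1350.0, 80.0, 275.0, 100.0],              # low-wing configuration (hypothetical)
        ]

        # 1. Vector-normalize each criterion column and apply the weights.
        norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(len(criteria))]
        weighted = [[weights[j] * row[j] / norms[j] for j in range(len(criteria))] for row in matrix]

        # 2. Ideal and anti-ideal solutions per criterion.
        ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*weighted))]
        anti  = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*weighted))]

        # 3. Closeness coefficient: distance to anti-ideal over total distance.
        for name, row in zip(alternatives, weighted):
            d_plus  = math.dist(row, ideal)
            d_minus = math.dist(row, anti)
            print("%-10s closeness = %.3f" % (name, d_minus / (d_plus + d_minus)))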

  9. Process qualification and control in electron beams--requirements, methods, new concepts and challenges

    International Nuclear Information System (INIS)

    Mittendorfer, J.; Gratzl, F.; Hanis, D.

    2004-01-01

    In this paper the status of process qualification and control in electron beam irradiation is analyzed in terms of requirements, concepts, methods and challenges for a state-of-the-art process control concept for medical device sterilization. Aspects from process qualification to routine process control are described together with the associated process variables. As a case study, the 10 MeV beams at Mediscan GmbH are considered. Process control concepts such as statistical process control (SPC), as well as a new concept for determining process capability, are briefly discussed

  10. The Fine Structure Constant

    Indian Academy of Sciences (India)

    IAS Admin

    The article discusses the importance of the fine structure constant in quantum mechanics, along with a brief history of how it emerged. Although Sommerfeld's idea of elliptical orbits has been replaced by wave mechanics, the fine structure constant he introduced has remained an important parameter in the field of ...

  11. 15 CFR 713.4 - Advance declaration requirements for additionally planned production, processing, or consumption...

    Science.gov (United States)

    2010-01-01

    ... additionally planned production, processing, or consumption of Schedule 2 chemicals. 713.4 Section 713.4..., processing, or consumption of Schedule 2 chemicals. (a) Declaration requirements for additionally planned activities. (1) You must declare additionally planned production, processing, or consumption of Schedule 2...

  12. UNDERSTANDING THAI CULTURE AND ITS IMPACT ON REQUIREMENTS ENGINEERING PROCESS MANAGEMENT DURING INFORMATION SYSTEMS DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Theerasak Thanasankit

    2002-01-01

    Full Text Available This paper explores the impact of Thai culture on managing the decision-making process in requirements engineering and contributes to a better understanding of its influence on the management of the requirements engineering process. The paper illustrates the interaction of technology and culture and shows that, rather than technology changing culture, culture can change the way technology is used. Thai culture is naturally inherent in Thai daily life, and Thais bring that into their work practices. The concepts of power and uncertainty in Thai culture contribute to hierarchical forms of communication and decision making in Thailand, especially during requirements engineering, where information systems requirements need to be established for further development. The research shows that the decision-making process in Thailand tends to take much longer, as every stage during requirements engineering needs to be reported to management for final decisions. The tall structure of Thai organisations also contributes to a bureaucratic, elongated decision-making process during information systems development. Understanding the influence of Thai culture on requirements engineering and information systems development will assist multinational information systems consulting organisations to select, adapt, better manage, or change requirements engineering processes and information systems development methodologies to work best with Thai organisations.

  13. Cosmological constants and variations

    International Nuclear Information System (INIS)

    Barrow, John D

    2005-01-01

    We review properties of theories for the variation of the gravitation and fine structure 'constants'. We highlight some general features of the cosmological models that exist in these theories with reference to recent quasar data that is consistent with time-variation in the fine structure 'constant' since a redshift of 3.5. The behaviour of a simple class of varying alpha cosmologies is outlined in the light of all the observational constraints. We also discuss some of the consequences of varying 'constants' for oscillating universes and show by means of exact solutions that they appear to evolve monotonically in time even though the scale factor of the universe oscillates

  14. Systematics of constant roll inflation

    Science.gov (United States)

    Anguelova, Lilia; Suranyi, Peter; Wijewardhana, L. C. R.

    2018-02-01

    We study constant roll inflation systematically. This is a regime, in which the slow roll approximation can be violated. It has long been thought that this approximation is necessary for agreement with observations. However, recently it was understood that there can be inflationary models with a constant, and not necessarily small, rate of roll that are both stable and compatible with the observational constraint ns ≈ 1. We investigate systematically the condition for such a constant-roll regime. In the process, we find a whole new class of inflationary models, in addition to the known solutions. We show that the new models are stable under scalar perturbations. Finally, we find a part of their parameter space, in which they produce a nearly scale-invariant scalar power spectrum, as needed for observational viability.
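
    For reference, in one common parametrization used in the constant-roll literature the defining condition is that the rate of roll, rather than being negligible, is held constant (sign and offset conventions differ between authors):

        \[
        \frac{\ddot{\phi}}{H\,\dot{\phi}} \;=\; \beta \;=\; \mathrm{const},
        \]

    with the slow-roll regime recovered in the limit β → 0; the point of the abstract is that viable, stable models with ns ≈ 1 exist even when β is not small.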

  15. 41 CFR 102-38.225 - What are the additional requirements in the bid process?

    Science.gov (United States)

    2010-07-01

    ... OF PERSONAL PROPERTY Bids Acceptance of Bids § 102-38.225 What are the additional requirements in the bid process? All sales except fixed price sales must contain a certification of independent price...

  16. [Precautions of physical performance requirements and test methods during product standard drafting process of medical devices].

    Science.gov (United States)

    Song, Jin-Zi; Wan, Min; Xu, Hui; Yao, Xiu-Jun; Zhang, Bo; Wang, Jin-Hong

    2009-09-01

    The major idea of this article is to discuss standardization and normalization of product standards for medical devices. It analyzes problems related to physical performance requirements and test methods during the product standard drafting process and makes corresponding suggestions.

  17. Waste Receiving and Processing Facility Module 1 Data Management System software requirements specification

    International Nuclear Information System (INIS)

    Rosnick, C.K.

    1996-01-01

    This document provides the software requirements for Waste Receiving and Processing (WRAP) Module 1 Data Management System (DMS). The DMS is one of the plant computer systems for the new WRAP 1 facility (Project W-0126). The DMS will collect, store and report data required to certify the low level waste (LLW) and transuranic (TRU) waste items processed at WRAP 1 as acceptable for shipment, storage, or disposal

  18. Waste Receiving and Processing Facility Module 1 Data Management System Software Requirements Specification

    International Nuclear Information System (INIS)

    Brann, E.C. II.

    1994-01-01

    This document provides the software requirements for Waste Receiving and Processing (WRAP) Module 1 Data Management System (DMS). The DMS is one of the plant computer systems for the new WRAP 1 facility (Project W-026). The DMS will collect, store and report data required to certify the low level waste (LLW) and transuranic (TRU) waste items processed at WRAP 1 as acceptable for shipment, storage, or disposal

  19. Waste Receiving and Processing Facility Module 1 Data Management System Software Requirements Specification

    Energy Technology Data Exchange (ETDEWEB)

    Brann, E.C. II

    1994-09-09

    This document provides the software requirements for Waste Receiving and Processing (WRAP) Module 1 Data Management System (DMS). The DMS is one of the plant computer systems for the new WRAP 1 facility (Project W-026). The DMS will collect, store and report data required to certify the low level waste (LLW) and transuranic (TRU) waste items processed at WRAP 1 as acceptable for shipment, storage, or disposal.

  20. The cosmological constant problem

    International Nuclear Information System (INIS)

    Dolgov, A.D.

    1989-05-01

    A review of the cosmological term problem is presented. Baby universe model and the compensating field model are discussed. The importance of more accurate data on the Hubble constant and the Universe age is stressed. 18 refs

  1. Orbiter data reduction complex data processing requirements for the OFT mission evaluation team (level C)

    Science.gov (United States)

    1979-01-01

    This document addresses requirements for post-test data reduction in support of the Orbital Flight Tests (OFT) mission evaluation team, specifically those which are planned to be implemented in the ODRC (Orbiter Data Reduction Complex). Only those requirements which have been previously baselined by the Data Systems and Analysis Directorate configuration control board are included. This document serves as the control document between Institutional Data Systems Division and the Integration Division for OFT mission evaluation data processing requirements, and shall be the basis for detailed design of ODRC data processing systems.

  2. Impact of Requirements Elicitation Processes on Success of Information System Development Projects

    Directory of Open Access Journals (Sweden)

    Bormane Līga

    2016-12-01

    Full Text Available Requirements articulating user needs and corresponding to enterprise business processes are a key to successful implementation of information system development projects. However, the parties involved in projects frequently are not able to agree on a common development vision and have difficulties expressing their needs. Several industry experts have acknowledged that requirements elicitation is one of the most difficult tasks in development projects. This study investigates the impact of requirements elicitation processes on project outcomes depending on the applied project development methodology.

  3. 77 FR 52692 - NIST Federal Information Processing Standard (FIPS) 140-3 (Second Draft), Security Requirements...

    Science.gov (United States)

    2012-08-30

    ...-03] NIST Federal Information Processing Standard (FIPS) 140-3 (Second Draft), Security Requirements....'' Authority: Federal Information Processing Standards (FIPS) are issued by the National Institute of Standards... Standards and Technology (NIST) seeks additional comments on specific sections of Federal Information...

  4. Order acceptance in food processing systems with random raw material requirements

    NARCIS (Netherlands)

    Kilic, Onur A.; van Donk, Dirk Pieter; Wijngaard, Jacob; Tarim, S. Armagan

    This study considers a food production system that processes a single perishable raw material into several products having stochastic demands. In order to process an order, the amount of raw material delivery from storage needs to meet the raw material requirement of the order. However, the amount

  5. The construction of emotional experience requires the integration of implicit and explicit emotional processes.

    Science.gov (United States)

    Quirin, Markus; Lane, Richard D

    2012-06-01

    Although we agree that a constructivist approach to emotional experience makes sense, we propose that implicit (visceromotor and somatomotor) emotional processes are dissociable from explicit (attention and reflection) emotional processes, and that the conscious experience of emotion requires an integration of the two. Assessments of implicit emotion and emotional awareness can be helpful in the neuroscientific investigation of emotion.

  6. 48 CFR 1352.237-72 - Security processing requirements-national security contracts.

    Science.gov (United States)

    2010-10-01

    ... requirements-national security contracts. 1352.237-72 Section 1352.237-72 Federal Acquisition Regulations... Provisions and Clauses 1352.237-72 Security processing requirements—national security contracts. As prescribed in 48 CFR 1337.110-70(d), use the following clause: Security Processing Requirements—National...

  7. Requirements for the design and implementation of checklists for surgical processes

    NARCIS (Netherlands)

    Verdaasdonk, E.G.G.; Stassen, L.P.S.; Widhiasmara, P.P.; Dankelman, J.

    2008-01-01

    Background- The use of checklists is a promising strategy for improving patient safety in all types of surgical processes inside and outside the operating room. This article aims to provide requirements for the design and implementation of checklists for surgical processes. Methods- The literature on checklist use

  8. A Theory of Information Quality and a Framework for its Implementation in the Requirements Engineering Process

    Science.gov (United States)

    Grenn, Michael W.

    This dissertation introduces a theory of information quality to explain macroscopic behavior observed in the systems engineering process. The theory extends principles of Shannon's mathematical theory of communication [1948] and statistical mechanics to information development processes concerned with the flow, transformation, and meaning of information. The meaning of requirements information in the systems engineering context is estimated or measured in terms of the cumulative requirements quality Q which corresponds to the distribution of the requirements among the available quality levels. The requirements entropy framework (REF) implements the theory to address the requirements engineering problem. The REF defines the relationship between requirements changes, requirements volatility, requirements quality, requirements entropy and uncertainty, and engineering effort. The REF is evaluated via simulation experiments to assess its practical utility as a new method for measuring, monitoring and predicting requirements trends and engineering effort at any given time in the process. The REF treats the requirements engineering process as an open system in which the requirements are discrete information entities that transition from initial states of high entropy, disorder and uncertainty toward the desired state of minimum entropy as engineering effort is input and requirements increase in quality. The distribution of the total number of requirements R among the N discrete quality levels is determined by the number of defined quality attributes accumulated by R at any given time. Quantum statistics are used to estimate the number of possibilities P for arranging R among the available quality levels. The requirements entropy H_R is estimated using R, N and P by extending principles of information theory and statistical mechanics to the requirements engineering process. The information I increases as H_R and uncertainty decrease, and the change in information ΔI needed
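
    The estimator used in the dissertation (via the multiplicity P) is only summarized above. As a rough illustration of the underlying idea, the entropy of the distribution of R requirements over N quality levels falling as requirements accumulate quality attributes, a plain Shannon-entropy sketch with hypothetical level counts is given below.

        # Generic Shannon-entropy sketch of the idea summarized above: R requirements
        # distributed over N quality levels, with entropy falling as the distribution
        # concentrates in the highest level.  The level counts are hypothetical; the
        # dissertation's own estimator (via the multiplicity P) is not reproduced here.
        import math

        def requirements_entropy(level_counts):
            """Shannon entropy (bits) of the distribution of requirements over quality levels."""
            total = sum(level_counts)
            probs = [c / total for c in level_counts if c > 0]
            return -sum(p * math.log2(p) for p in probs)

        early = [40, 30, 20, 10]    # project start: requirements spread over 4 quality levels
        late  = [2, 3, 10, 85]      # later: most requirements have reached the top level
        for label, counts in (("early", early), ("late", late)):
            print("%s snapshot: H = %.3f bits" % (label, requirements_entropy(counts)))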

  9. 40 CFR Table 4 to Subpart Vvvvvv... - Emission Limits and Compliance Requirements for Metal HAP Process Vents

    Science.gov (United States)

    2010-07-01

    ... Requirements for Metal HAP Process Vents 4 Table 4 to Subpart VVVVVV of Part 63 Protection of Environment... of Part 63—Emission Limits and Compliance Requirements for Metal HAP Process Vents As required in § 63.11496(f), you must comply with the requirements for metal HAP process vents as shown in the...

  10. Radiographic constant exposure technique

    DEFF Research Database (Denmark)

    Domanus, Joseph Czeslaw

    1985-01-01

    The constant exposure technique has been applied to assess various industrial radiographic systems. Different X-ray films and radiographic papers of two producers were compared. Special attention was given to fast film and paper used with fluorometallic screens. Radiographic image quality...... was tested by the use of ISO wire IQI's and ASTM penetrameters used on Al and Fe test plates. Relative speed and reduction of kilovoltage obtained with the constant exposure technique were calculated. The advantages of fast radiographic systems are pointed out...

  11. Photodissociation constant of NO2

    International Nuclear Information System (INIS)

    Nootebos, M.A.; Bange, P.

    1992-01-01

    The rate of photodissociation of NO2 into NO and ozone depends mainly on the amount of ultraviolet sunlight and, with that, on cloudiness. A correct value for this reaction constant is important for the accurate modelling of O3 and NO2 concentrations in plumes of electric power plants, in particular when determining the amount of photochemical summer smog. An advanced signal processing method (deconvolution, correlation) was applied to the measurements. The measurements were carried out from aeroplanes
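
    For context (standard textbook atmospheric chemistry, not taken from the report itself), the photodissociation constant J_NO2 enters plume models through the NO-NO2-O3 photostationary state:

        \[
        \mathrm{NO_2} + h\nu \;(\lambda \lesssim 420\,\mathrm{nm}) \;\rightarrow\; \mathrm{NO} + \mathrm{O(^3P)},
        \qquad
        \mathrm{O(^3P)} + \mathrm{O_2} + M \;\rightarrow\; \mathrm{O_3} + M,
        \]
        \[
        [\mathrm{O_3}]_{\mathrm{pss}} \;\approx\; \frac{J_{\mathrm{NO_2}}\,[\mathrm{NO_2}]}{k_{\mathrm{NO+O_3}}\,[\mathrm{NO}]},
        \]

    so an error in J_NO2 propagates directly into the modelled ozone and NO2 levels in a power-plant plume.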

  12. On the cosmical constant

    International Nuclear Information System (INIS)

    Chandra, R.

    1977-01-01

    On the grounds of the two correspondence limits, the Newtonian limit and the special theory limit of Einstein field equations, a modification of the cosmical constant has been proposed which gives realistic results in the case of a homogeneous universe. Also, according to this modification an explanation for the negative pressure in the steady-state model of the universe has been given. (author)

  13. Cosmological constant problem

    International Nuclear Information System (INIS)

    Weinberg, S.

    1989-01-01

    Cosmological constant problem is discussed. History of the problem is briefly considered. Five different approaches to solution of the problem are described: supersymmetry, supergravity, superstring; anthropic approach; mechamism of lagrangian alignment; modification of gravitation theory and quantum cosmology. It is noted that approach, based on quantum cosmology is the most promising one

  14. The Yamabe constant

    International Nuclear Information System (INIS)

    O Murchadha, N.

    1991-01-01

    The set of riemannian three-metrics with positive Yamabe constant defines the space of independent data for the gravitational field. The boundary of this set is investigated, and it is shown that metrics close to the boundary satisfy the positive-energy theorem. (Author) 18 refs
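
    For a closed Riemannian three-manifold (M, g), the Yamabe constant referred to above is, in the usual normalization,

        \[
        Y(M,[g]) \;=\; \inf_{u>0}\;
        \frac{\int_M \bigl(8\,|\nabla u|^{2} + R_g\,u^{2}\bigr)\,dV_g}
             {\bigl(\int_M u^{6}\,dV_g\bigr)^{1/3}},
        \]

    where R_g is the scalar curvature. Positivity of Y(M,[g]) is equivalent to the conformal class containing a metric of positive scalar curvature, which is the sense in which such metrics define admissible data for the gravitational field.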

  15. Analysis of the Requirements Generation Process for the Logistics Analysis and Wargame Support Tool

    Science.gov (United States)

    2017-06-01

    ... impact everything from strategic logistics operations down to the energy demands at the company level. It also looks at the force structure of the ... this requirement. 34. The system shall determine the efficiency of the logistics network with respect to an estimated cost of fuel used to deliver ... Analysis of the Requirements Generation Process for the Logistics Analysis and Wargame Support Tool, by Jonathan M. Swan, June 2017 (thesis).

  16. Developing a survey to collect expertise in agile product line requirements engineering processes

    OpenAIRE

    Feng, Kunwu; Lempert, Meli; Tang, Yan; Tian, Kun; Cooper, Kendra M.L.; Franch Gutiérrez, Javier

    2007-01-01

    Current agile methods are focused on small projects with rapid development and iteration, a people-oriented approach, and little documentation, and applying these methods in large, product line projects is difficult. UTD and GESSI have started a project to develop an expert system that can assist a requirements engineer in selecting a requirements engineering process that is well suited to their project, in particular with respect to the use of agile and product line engineering methods....

  17. Constant exposure technique in industrial radiography

    International Nuclear Information System (INIS)

    Domanus, J.C.

    1983-08-01

    The principles and advantages of the constant exposure technique are explained. Choice of exposure factors is analyzed. Film, paper and intensifying screens used throughout the investigation and film and paper processing are described. Exposure technique and the use of image quality indicators are given. Methods of determining of radiographic image quality are presented. Conclusions about the use of constant exposure vs. constant kilovoltage technique are formulated. (author)

  18. The reaction O((3)P) + HOBr: Temperature dependence of the rate constant and importance of the reaction as an HOBr stratospheric loss process

    Science.gov (United States)

    Nesbitt, F. L.; Monks, P. S.; Payne, W. A.; Stief, L. J.; Toumi, R.

    1995-01-01

    The absolute rate constant for the reaction O(3P) + HOBr has been measured between T = 233 K and 423 K using the discharge-flow kinetic technique coupled to mass spectrometric detection. The value of the rate coefficient at room temperature is (2.5 ± 0.6) × 10^-11 cm^3 molecule^-1 s^-1 and the derived Arrhenius expression is (1.4 ± 0.5) × 10^-10 exp((-430 ± 260)/T) cm^3 molecule^-1 s^-1. From these rate data the atmospheric lifetime of HOBr with respect to reaction with O(3P) is about 0.6 h at z = 25 km, which is comparable to the photolysis lifetime based on recent measurements of the UV cross section for HOBr. Implications for HOBr loss in the stratosphere have been tested using a 1D photochemical box model. With the inclusion of the rate parameters and products for the O + HOBr reaction, calculated concentration profiles of BrO increase by up to 33% around z = 35 km. This result indicates that the inclusion of the O + HOBr reaction in global atmospheric chemistry models may have an impact on bromine partitioning in the middle atmosphere.
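
    As a quick check on the magnitude of the quoted Arrhenius fit, the sketch below evaluates k(T) at a representative lower-stratospheric temperature and converts it into a pseudo-first-order lifetime; the daytime O(3P) number density used is an assumed round figure for ~25 km, not a value taken from the paper.

        # Evaluate the Arrhenius expression quoted above and turn it into a pseudo-
        # first-order lifetime.  The daytime O(3P) number density is a rough assumed
        # value for ~25 km, not a figure from the paper.
        import math

        def k_o_hobr(T):
            """Rate coefficient in cm3 molecule-1 s-1 from the quoted Arrhenius fit."""
            return 1.4e-10 * math.exp(-430.0 / T)

        T = 230.0                      # K, representative lower-stratospheric temperature
        n_O = 2.0e7                    # molecules cm-3, assumed daytime O(3P) at ~25 km
        k = k_o_hobr(T)
        tau_h = 1.0 / (k * n_O) / 3600.0
        print("k(%.0f K) = %.2e cm3 molecule-1 s-1, lifetime ~ %.1f h" % (T, k, tau_h))

    With these assumed values the lifetime comes out near the ~0.6 h quoted above.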

  19. Data requirements for valuing externalities: The role of existing permitting processes

    Energy Technology Data Exchange (ETDEWEB)

    Lee, A.D.; Baechler, M.C.; Callaway, J.M.

    1990-08-01

    While the assessment of externalities, or residual impacts, will place new demands on regulators, utilities, and developers, existing processes already require certain data and information that may fulfill some of the data needs for externality valuation. This paper examines existing siting, permitting, and other processes and highlights similarities and differences between their data requirements and the data required to value environmental externalities. It specifically considers existing requirements for siting new electricity resources in Oregon and compares them with the information and data needed to value externalities for such resources. This paper also presents several observations about how states can take advantage of data acquired through processes already in place as they move into an era when externalities are considered in utility decision-making. It presents other observations on the similarities and differences between the data requirements under existing processes and those for valuing externalities. This paper also briefly discusses the special case of cumulative impacts. And it presents recommendations on what steps to take in future efforts to value externalities. 35 refs., 2 tabs.

  20. A Scenario-Based Process for Requirements Development: Application to Mission Operations Systems

    Science.gov (United States)

    Bindschadler, Duane L.; Boyles, Carole A.

    2008-01-01

    The notion of using operational scenarios as part of requirements development during mission formulation (Phases A & B) is widely accepted as good system engineering practice. In the context of developing a Mission Operations System (MOS), there are numerous practical challenges to translating that notion into the cost-effective development of a useful set of requirements. These challenges can include such issues as a lack of Project-level focus on operations issues, insufficient or improper flowdown of requirements, flowdown of immature or poor-quality requirements from Project level, and MOS resource constraints (personnel expertise and/or dollars). System engineering theory must be translated into a practice that provides enough structure and standards to serve as guidance, but that retains sufficient flexibility to be tailored to the needs and constraints of a particular MOS or Project. We describe a detailed, scenario-based process for requirements development. Identifying a set of attributes for high quality requirements, we show how the portions of the process address many of those attributes. We also find that the basic process steps are robust, and can be effective even in challenging Project environments.

  1. Beyond the Hubble Constant

    Science.gov (United States)

    1995-08-01

    of SN 1995K of about 22.7, but the uncertainty of this value is still so large that this measurement alone cannot be used to determine the value of q0. This will require many more observations of supernovae at least as distant as the present one, a daunting task that may nevertheless be possible within this broad, international programme. It is estimated that a reliable measurement of q0 may become possible when about 20 Type Ia supernovae with accurate peak magnitudes have been measured. According to the discovery predictions, this could be possible within the next couple of years. In this connection, it is of some importance that for this investigation, it is in principle not necessary to know the correct value of the Hubble constant H0 in advance; q0 may still be determined by comparing the relative distance scale of distant supernovae with that of nearby ones. This research is described in more detail in a forthcoming article in the September 1995 issue of the ESO Messenger. Notes: [1] Brian P. Schmidt (Mount Stromlo and Siding Spring Observatories, Australia), Bruno Leibundgut, Jason Spyromilio, Jeremy Walsh (ESO), Mark M. Phillips, Nicholas B. Suntzeff, Mario Hamuy, Robert A. Schommer (Cerro Tololo Inter-American Observatory), Roberto Aviles (formerly Cerro Tololo Inter-American Observatory; now at ESO), Robert P. Kirshner, Adam Riess, Peter Challis, Peter Garnavich (Center for Astrophysics, Cambridge, Massachussetts, U.S.A.), Christopher Stubbs, Craig Hogan (University of Washington, Seattle, U.S.A.), Alan Dressler (Carnegie Observatories, U.S.A.) and Robin Ciardullo (Pennsylvania State University, U.S.A.) [2] In astronomy, the redshift denotes the fraction by which the lines in the spectrum of an object are shifted towards longer wavelengths. The observed redshift of a distant galaxy gives a direct estimate of the apparent recession velocity as caused by the universal expansion. Since the expansion rate increases with the distance, the velocity is itself a
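
    The q0 determination described here rests on the standard low-redshift expansion of the luminosity distance (a textbook relation, not taken from the press release itself):

        \[
        d_L(z) \;=\; \frac{c}{H_0}\Bigl[\,z + \tfrac{1}{2}\,(1-q_0)\,z^{2} + \mathcal{O}(z^{3})\Bigr],
        \qquad
        m - M \;=\; 5\,\log_{10}\!\frac{d_L}{10\,\mathrm{pc}},
        \]

    Because H0 only sets the overall scale of d_L, comparing the relative distance scale of distant Type Ia supernovae with that of nearby ones constrains q0 without requiring a prior value of H0, which is the point made in the text above.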

  2. Requirements on software lifecycle process (RSLP) for KALIMER digital computer-based MMIS design

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jang Soo; Kwon, Kee Choon; Kim, Jang Yeol [Korea Atomic Energy Research Institute, Taejon (Korea)

    1998-04-01

    The digital Man Machine Interface System (MMIS) of the Korea Advanced Liquid Metal Reactor (KALIMER) may share code, data transmission, data, and process equipment to a greater degree than analog systems. Although this sharing is the basis for many of the advantages of digital systems, it also raises a key concern: a design using shared data or code has the potential to propagate a common-cause or common-mode failure via software errors, thus defeating the redundancy achieved by the hardware architectural structure. Greater sharing of process equipment among functions within a channel increases the consequences of the failure of a single hardware module and reduces the amount of diversity available within a single safety channel. The software safety plan describes the safety analysis implementation tasks that are to be carried out during the software life cycle. Documentation should exist that shows that the safety analysis activities have been successfully accomplished for each life cycle activity group. In particular, the documentation should show that the system safety requirements have been adequately addressed for each life cycle activity group, that no new hazards have been introduced, and that the software requirements, design elements, and code elements that can affect safety have been identified. Because the safety of software can be assured both through verification and validation (V and V) of the process itself and through V and V of all the intermediate and final products during the software development lifecycle, a KALIMER Software Safety Framework (KSSF) must be established. As the first activity in establishing the KSSF, we have developed this report, Requirements on Software Lifecycle Process (RSLP) for designing the KALIMER digital MMIS. This report is organized as follows. Section I describes the background, definitions, and references of RSLP. Section II describes KALIMER safety software categorization. In Section III, we define the

  3. Production in constant evolution

    International Nuclear Information System (INIS)

    Lozano, T.

    2009-01-01

    The Cofrentes Nuclear Power Plant now has 25 years of operation behind it: a quarter century adding value and demonstrating the reasons why it is one of the most important energy producing facilities in the Spanish power market. Particularly noteworthy is the enterprising spirit of the plant, which has strived to continuously improve with the large number of modernization projects that it has undertaken over the past 25 years. The plant has constantly evolved thanks to the amount of investments made to improve safety and reliability and the perseverance to stay technologically up to date. Efficiency, training and teamwork have been key to the success of the plant over these 25 years of constant change and progress. (Author)

  4. Is the sun constant

    International Nuclear Information System (INIS)

    Blake, J.B.; Dearborn, D.S.P.

    1979-01-01

    Small fluctuations in the solar constant can occur on timescales much shorter than the Kelvin time. Changes in the ability of convection to transmit energy through the superadiabatic and transition regions of the convection zone cause structure adjustments which can occur on a time scale of days. The bulk of the convection zone reacts to maintain hydrostatic equilibrium (though not thermal equilibrium) and causes a luminosity change. While small radius variations will occur, most of the change will be seen in temperature

  5. Stabilized constant-power supply

    International Nuclear Information System (INIS)

    Roussel, L.

    1968-06-01

    The design and construction of a stabilized power supply, adjustable from 5 to 100 watts, are described. In order to carry out constant-power drifting of lithium-compensated diodes, a regulation precision of 1 per cent and a response time of less than 1 s were sought. Recent components such as Hall-effect multipliers and integrated amplifiers make this possible, and interchangeable circuits are easy to use. (author) [fr

  6. 76 FR 13101 - Requirements for Processing, Clearing, and Transfer of Customer Positions

    Science.gov (United States)

    2011-03-10

    ... for Processing, Clearing, and Transfer of Customer Positions AGENCY: Commodity Futures Trading... (Commission) is proposing regulations to implement Title VII of the Dodd-Frank Wall Street Reform and Consumer...), requiring a DCO, upon customer request, to promptly transfer customer positions and related funds from one...

  7. HIGH RESOLUTION RESISTIVITY LEAK DETECTION DATA PROCESSING AND EVALUATION METHODS AND REQUIREMENTS

    International Nuclear Information System (INIS)

    SCHOFIELD JS

    2007-01-01

    This document has two purposes: (1) describe how data generated by High Resolution Resistivity (HRR) leak detection (LD) systems deployed during single-shell tank (SST) waste retrieval operations are processed and evaluated, and (2) provide the basic review requirements for HRR data when HRR is deployed as a leak detection method during SST waste retrievals.

  8. Isotopic dilution requirements for 233U criticality safety in processing and disposal facilities

    International Nuclear Information System (INIS)

    Elam, K.R.; Forsberg, C.W.; Hopper, C.M.; Wright, R.Q.

    1997-11-01

    The disposal of excess 233U as waste is being considered. Because 233U is a fissile material, one of the key requirements for processing 233U to a final waste form and disposing of it is to avoid nuclear criticality. For many processing and disposal options, isotopic dilution is the most feasible and preferred option to avoid nuclear criticality. Isotopic dilution is dilution of fissile 233U with nonfissile 238U. The use of isotopic dilution removes any need to control nuclear criticality in process or disposal facilities through geometry or chemical composition. Isotopic dilution allows existing waste management facilities that are not designed for significant quantities of fissile materials to be used for processing and disposing of 233U. The amount of isotopic dilution required to reduce criticality concerns to reasonable levels was determined in this study to be ∼0.66 wt% 233U. The numerical calculations used to define this limit considered a homogeneous system of silicon dioxide (SiO2), water (H2O), 233U, and depleted uranium (DU) in which the ratio of each component was varied to determine the conditions of maximum nuclear reactivity. About 188 parts of DU (0.2 wt% 235U) are required to dilute 1 part of 233U to this limit in a water-moderated system with no SiO2 present. Thus, for the US inventory of 233U, several hundred metric tons of DU would be required for isotopic dilution.

  9. Utilization of respiratory energy in higher plants : requirements for 'maintenance' and transport processes

    NARCIS (Netherlands)

    Bouma, T.J.

    1995-01-01

    Quantitative knowledge of both photosynthesis and respiration is required to understand plant growth and the resulting crop yield. However, the nature of the energy-demanding processes that depend on dark respiration in full-grown tissues is still largely unknown. The main objective

  10. 78 FR 52963 - 60-Day Notice of Proposed Information Collection: Technical Processing Requirements for...

    Science.gov (United States)

    2013-08-27

    ... Information Collection: Technical Processing Requirements for Multifamily Project Mortgage Insurance AGENCY...: HUD is seeking approval from the Office of Management and Budget (OMB) for the information collection... interested parties on the proposed collection of information. The purpose of this notice is to allow for 60...

  11. 21 CFR 111.460 - What requirements apply to holding in-process material?

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What requirements apply to holding in-process material? 111.460 Section 111.460 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CURRENT GOOD MANUFACTURING PRACTICE IN...

  12. 10 CFR 51.26 - Requirement to publish notice of intent and conduct scoping process.

    Science.gov (United States)

    2010-01-01

    ... publish notice of intent and conduct scoping process. (a) Whenever the appropriate NRC staff director... 10 Energy 2 2010-01-01 2010-01-01 false Requirement to publish notice of intent and conduct... action, a notice of intent will be prepared as provided in § 51.27, and will be published in the Federal...

  13. 40 CFR 270.24 - Specific part B information requirements for process vents.

    Science.gov (United States)

    2010-07-01

    ... incinerator, flare, boiler, process heater, condenser, or carbon adsorption system to comply with the... each compliance test required by § 264.1033(k). (3) A design analysis, specifications, drawings... texts acceptable to the Regional Administrator that present basic control device information. The design...

  14. The NERV Methodology: Non-Functional Requirements Elicitation, Reasoning and Validation in Agile Processes

    Science.gov (United States)

    Domah, Darshan

    2013-01-01

    Agile software development has become very popular around the world in recent years, with methods such as Scrum and Extreme Programming (XP). Literature suggests that functionality is the primary focus in Agile processes while non-functional requirements (NFR) are either ignored or ill-defined. However, for software to be of good quality both…

  15. Universe of constant

    Science.gov (United States)

    Yongquan, Han

    2016-10-01

    The ideal gas equation of state is not applicable to ordinary gases; it should instead be applied to the electromagnetic 'gas' of radiation, since radiation should be the ultimate (or initial) state into which matter changes and the universe is filled with radiation. That is, the ideal gas equation of state is suitable for the singular point and for the universe. One might object that no vessel can accommodate radiation, but that is only because ordinary containers are too small; if the radius of the container were the distance light travels in an hour, would one still think it cannot accommodate radiation? Modern measurements put the present radius of the universe at about 10^27 m. Assuming the universe is a sphere, its volume is approximately V = 4.19 × 10^81 cubic metres; the radiation temperature of the universe (the cosmic microwave background temperature, which should be closest to the average temperature of the universe) is T = 3.15 K, and the radiation pressure is P = 5 × 10^-6 N/m^2. According to the ideal gas equation of state, PV/T = constant = 6 × 10^75; this constant characterizes the universe, and the singular point should equal the same constant.
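
    A quick numerical check of the figures quoted in this abstract (a sketch only; the inputs are simply the values stated above):

```python
import math

# Values as quoted in the abstract above.
r = 1e27                        # assumed present radius of the universe, m
V = 4.0 / 3.0 * math.pi * r**3  # sphere volume -> ~4.19e81 m^3
T = 3.15                        # K, quoted CMB temperature
P = 5e-6                        # N/m^2, quoted radiation pressure

print(f"V = {V:.3g} m^3")       # ~4.19e+81
print(f"P*V/T = {P*V/T:.3g}")   # ~6.6e+75, the order of the 6e75 constant quoted
```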

  16. Connecting Fundamental Constants

    International Nuclear Information System (INIS)

    Di Mario, D.

    2008-01-01

    A model for a black hole electron is built from three basic constants only: h, c and G. The result is a description of the electron with its mass and charge. The nature of this black hole seems to fit the properties of the Planck particle and new relationships among basic constants are possible. The time dilation factor in a black hole associated with a variable gravitational field would appear to us as a charge; on the other hand the Planck time is acting as a time gap drastically limiting what we are able to measure and its dimension will appear in some quantities. This is why the Planck time is numerically very close to the gravitational/electric force ratio in an electron: its difference, disregarding a π√(2) factor, is only 0.2%. This is not a coincidence, it is always the same particle and the small difference is between a rotating and a non-rotating particle. The determination of its rotational speed yields accurate numbers for many quantities, including the fine structure constant and the electron magnetic moment
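
    A quick numerical cross-check of the coincidence described above, using standard CODATA-style values for the constants (the comparison of a time in SI seconds with a dimensionless force ratio is taken at face value, as in the abstract):

```python
import math

G    = 6.674e-11    # m^3 kg^-1 s^-2
hbar = 1.0546e-34   # J s
c    = 2.9979e8     # m/s
m_e  = 9.109e-31    # kg
e    = 1.602e-19    # C
k_e  = 8.988e9      # N m^2 C^-2 (Coulomb constant)

t_planck = math.sqrt(hbar * G / c**5)       # ~5.39e-44 s
force_ratio = G * m_e**2 / (k_e * e**2)     # gravitational/electric force ratio ~2.40e-43

print(t_planck, force_ratio)
# Their ratio is close to pi*sqrt(2), to within a few tenths of a percent:
print(force_ratio / t_planck, math.pi * math.sqrt(2))
```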

  17. Low power constant fraction discriminator

    International Nuclear Information System (INIS)

    Krishnan, Shanti; Raut, S.M.; Mukhopadhyay, P.K.

    2001-01-01

    This paper describes the design of a low power ultrafast constant fraction discriminator, which significantly reduces the power consumption. A conventional fast discriminator consumes about 1250 mW of power, whereas this low power version consumes about 440 mW. In a multi-detector system, where the number of discriminators is very large, reduction of power is of utmost importance. This low power discriminator is being designed for the GRACE (Gamma Ray Atmospheric Cerenkov Experiments) telescope, where 1000 channels of discriminators are required. A novel method of decreasing power consumption is described. (author)
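
    The abstract is concerned with power consumption rather than the timing algorithm itself, but for context the constant-fraction timing principle such a discriminator implements can be sketched as below (a rough illustration with assumed fraction and delay values, not the circuit described in the paper):

```python
import numpy as np

def cfd_trigger_index(pulse, fraction=0.3, delay_samples=5):
    """Index of the constant-fraction zero crossing of a unipolar pulse.

    The shaped signal is an attenuated copy minus a delayed copy; its zero
    crossing is (ideally) independent of the pulse amplitude.
    """
    delayed = np.roll(pulse, delay_samples)
    delayed[:delay_samples] = 0.0
    shaped = fraction * pulse - delayed
    above = shaped > 0
    crossings = np.where(above[:-1] & ~above[1:])[0]  # positive-to-negative transitions
    return int(crossings[0]) if crossings.size else None

t = np.arange(200)
pulse = np.exp(-0.5 * ((t - 100) / 10.0) ** 2)
# Same trigger index for different amplitudes -- the defining property of a CFD.
print(cfd_trigger_index(pulse), cfd_trigger_index(5.0 * pulse))
```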

  18. Alignment of process compliance and monitoring requirements in dynamic business collaborations

    Science.gov (United States)

    Comuzzi, Marco

    2017-07-01

    Dynamic business collaborations are intrinsically characterised by change because processes can be distributed or outsourced and partners may be substituted by new ones with enhanced or different capabilities. In this context, compliance requirements management becomes particularly challenging. Partners in a collaboration may join and leave dynamically and tasks over which compliance requirements are specified may be consequently distributed or delegated to new partners. This article considers the issue of aligning compliance requirements in a dynamic business collaboration with the monitoring requirements induced on the collaborating partners when change occurs. We first provide a conceptual model of business collaborations and their compliance requirements, introducing the concept of monitoring capabilities induced by compliance requirements. Then, we present a set of mechanisms to ensure consistency between monitoring and compliance requirements in the presence of change, e.g. when tasks are delegated or backsourced in-house. We also discuss a set of metrics to evaluate the status of a collaboration in respect of compliance monitorability. Finally, we discuss a prototype implementation of our framework.

  19. The Hubble Constant

    Directory of Open Access Journals (Sweden)

    Neal Jackson

    2015-09-01

    Full Text Available I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in a combination with other cosmological parameters. Many, but not all, object-based measurements give H_0 values of around 72–74 km s^–1 Mpc^–1, with typical errors of 2–3 km s^–1 Mpc^–1. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67–68 km s^–1 Mpc^–1 and typical errors of 1–2 km s^–1 Mpc^–1. The size of the remaining systematics indicate that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.
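
    As a back-of-the-envelope illustration of the numbers quoted above (representative central values and a naive quadrature combination of the quoted errors; not a rigorous tension analysis):

```python
# Representative values taken from the ranges quoted in the review above.
H0_local, err_local = 73.0, 2.5   # km/s/Mpc, object-based (distance ladder) methods
H0_cmb,   err_cmb   = 67.5, 1.5   # km/s/Mpc, CMB-based (Planck-like) fits

tension = abs(H0_local - H0_cmb) / (err_local**2 + err_cmb**2) ** 0.5
print(f"discrepancy ~ {tension:.1f} sigma with these inputs")   # ~1.9 sigma

# H0 relates recession velocity to distance, v = H0 * d:
d_Mpc = 100.0                                  # a hypothetical galaxy at 100 Mpc
print(f"v ~ {H0_local * d_Mpc:.0f} km/s")      # ~7300 km/s
```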

  20. The inconstant solar constant

    International Nuclear Information System (INIS)

    Willson, R.C.; Hudson, H.

    1984-01-01

    The Active Cavity Radiometer Irradiance Monitor (ACRIM) of the Solar Maximum Mission satellite measures the radiant power emitted by the sun in the direction of the earth and has worked flawlessly since 1980. The main motivation for ACRIM's use to measure the solar constant is the determination of the extent to which this quantity's variations affect earth weather and climate. Data from the solar minimum of 1986-1987 is eagerly anticipated, with a view to the possible presence of a solar cycle variation in addition to that caused directly by sunspots

  1. Information processing requirements for on-board monitoring of automatic landing

    Science.gov (United States)

    Sorensen, J. A.; Karmarkar, J. S.

    1977-01-01

    A systematic procedure is presented for determining the information processing requirements for on-board monitoring of automatic landing systems. The monitoring system detects landing anomalies through use of appropriate statistical tests. The time-to-correct aircraft perturbations is determined from covariance analyses using a sequence of suitable aircraft/autoland/pilot models. The covariance results are used to establish landing safety and a fault recovery operating envelope via an event outcome tree. This procedure is demonstrated with examples using the NASA Terminal Configured Vehicle (B-737 aircraft). The procedure can also be used to define decision height, assess monitoring implementation requirements, and evaluate alternate autoland configurations.

  2. 34 CFR 4.1 - Service of process required to be served on or delivered to Secretary.

    Science.gov (United States)

    2010-07-01

    ... 34 Education 1 2010-07-01 2010-07-01 false Service of process required to be served on or... OF PROCESS § 4.1 Service of process required to be served on or delivered to Secretary. Summons... authorized to accept service of such process. (Authority: 5 U.S.C. 301) [47 FR 16780, Apr. 20, 1982] ...

  3. Glovebox design requirements for molten salt oxidation processing of transuranic waste

    Energy Technology Data Exchange (ETDEWEB)

    Ramsey, K.B.; Acosta, S.V. [Los Alamos National Lab., NM (United States); Wernly, K.D. [Molten Salt Oxidation Corp., Bensalem, PA (United States)

    1998-12-31

    This paper presents an overview of potential technologies for stabilization of 238Pu-contaminated combustible waste. Molten salt oxidation (MSO) provides a method for removing greater than 99.999% of the organic matrix from combustible waste. Implementation of MSO processing at the Los Alamos National Laboratory (LANL) Plutonium Facility will eliminate the combustible matrix from 238Pu-contaminated waste and consequently reduce the cost of TRU waste disposal operations at LANL. The glovebox design requirements for unit operations including size reduction and MSO processing will be presented.

  4. Glovebox design requirements for molten salt oxidation processing of transuranic waste

    International Nuclear Information System (INIS)

    Ramsey, K.B.; Acosta, S.V.; Wernly, K.D.

    1998-01-01

    This paper presents an overview of potential technologies for stabilization of 238Pu-contaminated combustible waste. Molten salt oxidation (MSO) provides a method for removing greater than 99.999% of the organic matrix from combustible waste. Implementation of MSO processing at the Los Alamos National Laboratory (LANL) Plutonium Facility will eliminate the combustible matrix from 238Pu-contaminated waste and consequently reduce the cost of TRU waste disposal operations at LANL. The glovebox design requirements for unit operations including size reduction and MSO processing will be presented

  5. Data requirements for the Ferrocyanide Safety Issue developed through the data quality objectives process

    International Nuclear Information System (INIS)

    Meacham, J.E.; Cash, R.J.; Dukelow, G.T.; Babad, H.; Buck, J.W.; Anderson, C.M.; Pulsipher, B.A.; Toth, J.J.; Turner, P.J.

    1994-08-01

    This document records the data quality objectives (DQO) process applied to the Ferrocyanide Safety Issue at the Hanford Site. Specifically, the major recommendations and findings from this Ferrocyanide DQO process are presented. The decision logic diagrams and decision error tolerances also are provided. The document includes the DQO sample-size formulas for determining specific tank sampling requirements, and many of the justifications for decision thresholds and decision error tolerances are briefly described. More detailed descriptions are presented in other Ferrocyanide Safety Program companion documents referenced in this report. This is a living document, and the assumptions contained within will be refined as more data from sampling and characterization become available

  6. Defining Constellation Suit Helmet Field of View Requirements Employing a Mission Segment Based Reduction Process

    Science.gov (United States)

    McFarland, Shane

    2009-01-01

    Field of view has always been a design feature paramount to helmets, and in particular space suits, where the helmet must provide an adequate field of view for a large range of activities, environments, and body positions. For Project Constellation, a different approach to helmet requirement maturation was utilized, one that was less a direct function of body position and suit pressure and more a function of the mission segment in which the field of view will be required. Through taxonomization of the various parameters that affect suited field of view, as well as consideration of possible nominal and contingency operations during each mission segment, a reduction process was employed to condense the large number of possible outcomes to only six unique field of view angle requirements that still captured all necessary variables while sacrificing minimal fidelity.

  7. The conventional ammunition requirements determination process of the U.S. Navy.

    OpenAIRE

    Mawson, John III

    1985-01-01

    Approved for public release; distribution is unlimited The objective of this thesis is to analyze the Requirements Determination procedures in the Navy's Conventional Gun Ammunition System in an attempt to identify areas for potential improvement. The Conventional Gun Ammunition System involves a logical progression of steps initiated on an annual basis. The Secretary of Defense begins the process by issuing broad guidance for the development of documentation to support b...

  8. Tank waste remediation system privatization infrastructure program requirements and document management process guide

    International Nuclear Information System (INIS)

    ROOT, R.W.

    1999-01-01

    This guide provides the Tank Waste Remediation System Privatization Infrastructure Program management with processes and requirements to appropriately control information and documents in accordance with the Tank Waste Remediation System Configuration Management Plan (Vann 1998b). This includes documents and information created by the program, as well as non-program generated materials submitted to the project. It provides appropriate approval/control, distribution and filing systems

  9. A mechanism for revising accreditation standards: a study of the process, resources required and evaluation outcomes.

    Science.gov (United States)

    Greenfield, David; Civil, Mike; Donnison, Andrew; Hogden, Anne; Hinchcliff, Reece; Westbrook, Johanna; Braithwaite, Jeffrey

    2014-11-21

    The study objective was to identify and describe the process, resources and expertise required for the revision of accreditation standards, and report outcomes arising from such activities. Secondary document analysis of materials from an accreditation standards development agency. The Royal Australian College of General Practitioners' (RACGP) documents, minutes and reports related to the revision of the accreditation standards were examined. The RACGP revision of the accreditation standards was conducted over a 12 month period and comprised six phases with multiple tasks, including: review methodology planning; review of the evidence base and each standard; new material development; constructing field trial methodology; drafting, trialling and refining new standards; and production of new standards. Over 100 individuals participated, with an additional 30 providing periodic input and feedback. Participants were drawn from healthcare professional associations, primary healthcare services, accreditation agencies, government agencies and public health organisations. Their expertise spanned: project management; standards development and writing; primary healthcare practice; quality and safety improvement methodologies; accreditation implementation and surveying; and research. The review and development process was shaped by five issues: project expectations; resource and time requirements; a collaborative approach; stakeholder engagement; and the product produced. The RACGP evaluation was that participants were positive about their experience, the standards produced and considered them relevant for the sector. The revision of accreditation standards requires considerable resources and expertise, drawn from a broad range of stakeholders. Collaborative, inclusive processes that engage key stakeholders helps promote greater industry acceptance of the standards.

  10. Rolling Stock Planning at DSB S-tog - Processes, Cost Structures and Requirements

    DEFF Research Database (Denmark)

    Thorlacius, Per

    A central issue for operators of suburban passenger train transport systems is providing a sufficient number of seats for the passengers while at the same time minimising operating costs. The process of providing this is called rolling stock planning. This technical report documents the terminology, the processes, the cost structures and the requirements for rolling stock planning at DSB S-tog, the suburban passenger train operator of the City of Copenhagen. The focus of the technical report is directed at practical, train operator oriented issues. The technical report is thought to serve as a basis for investigating better methods to perform the rolling stock planning (to be the topic of later papers). This technical report is produced as a part of the current industrial Ph.D. project to improve the rolling stock planning process of DSB S-tog.

  11. Potential constants and centrifugal distortion constants of octahedral hexafluoride molecules

    Energy Technology Data Exchange (ETDEWEB)

    Manivannan, G [Government Thirumagal Mill' s Coll., Gudiyattam, Tamil Nadu (India)

    1981-04-01

    The kinetic constants method outlined by Thirugnanasambandham (1964) based on Wilson's (1955) group theory has been adapted in evaluating the potential constants for SF6, SeF6, WF6, IrF6, UF6, NpF6, and PuF6 using the experimentally observed vibrational frequency data. These constants are used to calculate the centrifugal distortion constants for the first time.

  12. Radiation-resistant requirements analysis of device and control component for advanced spent fuel management process

    Energy Technology Data Exchange (ETDEWEB)

    Song, Tai Gil; Park, G. Y.; Kim, S. Y.; Lee, J. Y.; Kim, S. H.; Yoon, J. S. [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-02-01

    It is known that high levels of radiation can cause significant damage by altering the properties of materials. A practical understanding of the effects of radiation - how radiation affects various types of materials and components - is required to design equipment to operate reliably in a gamma radiation environment. When designing equipment to operate in a high gamma radiation environment, such as will be present in a nuclear spent fuel handling facility, several important steps should be followed. In preparation for the active test of the advanced spent fuel management process, a radiation-resistance analysis of the devices and control components intended for the active test, which will be exposed to this radiation environment, is conducted. The system design process is also analyzed and reviewed. In the foreign literature, 'threshold' values are generally reported; the threshold value is normally the dose required to begin degradation of a particular material property. The radiation effect analysis for the voloxidation and metalization devices, which are the main devices of the advanced spent fuel management process, is performed with the SCALE 4.4 code. 5 refs., 4 figs., 13 tabs. (Author)

  13. Estimating energy requirement in cashew (Anacardium occidentale L.) nut processing operations

    Energy Technology Data Exchange (ETDEWEB)

    Jekayinfa, S.O. [Department of Agricultural Engineering, Ladoke Akintola University of Technology, P.M.B. 4000, Ogbomoso, Oyo State (Nigeria); Bamgboye, A.I. [Department of Agricultural Engineering, University of Ibadan, Ibadan (Nigeria)

    2006-07-15

    This work presents a study on the estimation of energy consumption in eight readily defined unit operations of cashew nut processing. Data for analysis were collected from nine cashew nut mills stratified into small, medium and large categories to represent different mechanization levels. A series of equations was developed to easily compute the requirements of electricity, fuel and labour for each of the unit operations. The computation of energy use was done using a spreadsheet program in Microsoft Excel. The results of the application test of the equations show that the total energy intensity in the cashew nut mills varied between 0.21 and 1.161 MJ/kg. Electrical energy intensity varied between 0.0052 and 0.029 MJ/kg, while thermal energy intensity varied from 0.085 to 1.064 MJ/kg. The two identified energy-intensive operations in cashew nut processing are cashew nut drying and cashew nut roasting, altogether accounting for over 85% of the total energy consumption in all three mill categories. Thermal energy, obtained from diesel fuel, represented about 90% of the unit energy cost for cashew nut processing. The developed equations have therefore proven to be a useful tool for carrying out budgeting, forecasting energy requirements and planning plant expansion. (author)
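
    A minimal sketch of this kind of unit-operation energy accounting (the operation names and numbers below are invented for illustration; only the structure of the calculation - electrical plus thermal energy per kilogram of nuts processed - follows the abstract):

```python
# name: (electrical energy MJ, thermal energy MJ, mass processed kg) per batch
operations = {
    "drying":   (40.0, 5200.0, 8000.0),
    "roasting": (30.0, 3300.0, 8000.0),
    "shelling": (90.0,    0.0, 8000.0),
}

def energy_intensities(ops):
    """Return (electrical, thermal, total) energy intensity in MJ/kg."""
    elec = sum(e for e, _, _ in ops.values())
    therm = sum(t for _, t, _ in ops.values())
    mass = max(m for _, _, m in ops.values())  # the same batch passes through every step
    return elec / mass, therm / mass, (elec + therm) / mass

print(energy_intensities(operations))  # roughly (0.02, 1.06, 1.08) MJ/kg
```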

  14. The Identification and Comparison of the Requirements Placed on Product Managers during the Recruitment Process

    Directory of Open Access Journals (Sweden)

    Wroblowská Zuzana

    2015-09-01

    Full Text Available The submitted paper focuses on personality traits and behavioural competencies of a key role bearer in product oriented marketing management, generally referred to as product management. An interdisciplinary approach was applied while looking into this subject, since both research into theoretical bases and analysis of the current state of the topic and the tendencies of its development required work in several fields of study. Based on research in the field of secondary data, the assumption was set that a product manager is an example of a knowledge worker of the 21st century and that the business practice sees him/her as such, which has an effect on the requirements a candidate for this position is confronted with in the recruitment process. An independent research project was carried out and it confirmed that product managers are considered to be knowledge workers and that independence and analytical thinking skills were among the most common requirements for product managers both in 2007 and 2014. A comparison of results from 2007 and 2014 also showed some differences. The statistical verification confirmed a shift in requirements within the interpersonal competency group. The findings were used to formulate recommendations for the recruitment strategy and realization of selection for positions in product management.

  15. Reactor group constants and benchmark test

    Energy Technology Data Exchange (ETDEWEB)

    Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    The evaluated nuclear data files such as JENDL, ENDF/B-VI and JEF-2 are validated by analyzing critical mock-up experiments for various reactor types and by assessing their applicability to nuclear characteristics such as criticality, reaction rates, reactivities, etc. This is called Benchmark Testing. In the nuclear calculations, the diffusion and transport codes use the group constant library which is generated by processing the nuclear data files. In this paper, the calculation methods of the reactor group constants and the benchmark test are described. Finally, a new group constants scheme is proposed. (author)
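
    At the core of group constant processing is flux-weighted averaging of pointwise cross sections over each energy group. A minimal sketch of that step is given below; the cross section and weighting flux are toy stand-ins, not data from any evaluated file:

```python
import numpy as np

E = np.logspace(-3, 7, 2000)        # energy grid, eV
sigma = 10.0 / np.sqrt(E) + 2.0     # toy cross section: a 1/v term plus a constant, barns
phi = 1.0 / E                       # toy weighting flux (1/E slowing-down spectrum)

def group_constant(E, sigma, phi, e_lo, e_hi):
    """sigma_g = integral(sigma*phi dE) / integral(phi dE) over one group."""
    m = (E >= e_lo) & (E <= e_hi)
    return np.trapz(sigma[m] * phi[m], E[m]) / np.trapz(phi[m], E[m])

print(group_constant(E, sigma, phi, 1.0e-3, 1.0))   # a coarse "thermal" group
print(group_constant(E, sigma, phi, 1.0, 1.0e7))    # a coarse "fast" group
```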

  16. Structural forms of constant force (Formas estructurales de fuerza constante)

    Directory of Open Access Journals (Sweden)

    Zalewski, Waclaw

    1963-05-01

    Full Text Available The author seeks to prove the need to obtain the most essential form in the various types of structures by applying a number of rational principles, of which the constant stress principle is one of the most decisive. The structural form should be a logical consequence of all its functional circumstances, and this requires a clear understanding of the general behaviour of each part of the structure, and also of the main stresses which operate on it, considered as a unitary whole. To complete his theoretical argument, the author gives some examples, in the design of which the criterion of constant stress has been adopted. The author considers the various aspects which are involved in obtaining a structural design that satisfies given functional and aesthetic requirements. In doing so he refers to his personal experience within Poland, and infers technical principles of general validity which should determine the rational design of the form, as an integrated aspect of the structural pattern. The projects which illustrate this paper are Polish designs of undoubted constructive significance, in which the principle of constant stress has been applied. Finally the author condenses his whole theory in a simple and straightforward practical formula, which should be followed if a truly rational form is to be achieved: the constancy of stress in the various structural elements.

  17. Cost accounting in agriculture with particular regard to the requirements of decision-making processes and control

    Directory of Open Access Journals (Sweden)

    Tomasz Kondraszuk

    2009-01-01

    Full Text Available The article presents the applicability of cost accounting in agricultural companies with regard to the general theory of the economics and organisation of enterprises. The main focus is on analysing full unit cost accounting and variable costing and their applicability in three areas: stock valuation and profit measurement, the needs of planning and decision-making processes, and control. It is concluded that cost calculation at the level of the agricultural enterprise should be an inherent element of an integrated information system covering registration, planning, decision making and control.

  18. Parametrised Constants and Replication for Spatial Mobility

    DEFF Research Database (Denmark)

    Hüttel, Hans; Haagensen, Bjørn

    2009-01-01

    Parametrised constants and replication are common ways of expressing infinite computation in process calculi. While parametrised constants can be encoded using replication in the π-calculus, this changes in the presence of spatial mobility as found in e.g. the distributed π-calculus … of the distributed π-calculus with parametrised constants and replication are incomparable. On the other hand, we shall see that there exists a simple encoding of recursion in mobile ambients.
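
    For context, the well-known encoding of a parametrised constant by replication in the (non-distributed) π-calculus can be sketched as follows; the notation is a generic textbook rendering, not taken from the paper:

```latex
% A definition A(\tilde{x}) \stackrel{\mathrm{def}}{=} P used inside a process Q is replaced
% by a replicated input on a fresh name a, and every call A<\tilde{y}> becomes an output on a:
(\nu a)\bigl(\, [\![ Q ]\!] \;\mid\; !\,a(\tilde{x}).[\![ P ]\!] \,\bigr),
\qquad
[\![ A\langle\tilde{y}\rangle ]\!] \;=\; \overline{a}\langle\tilde{y}\rangle .
```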

  19. Association constants of telluronium salts

    International Nuclear Information System (INIS)

    Kovach, N.A.; Rivkin, B.B.; Sadekov, T.D.; Shvajka, O.P.

    1996-01-01

    Association constants in acetonitrile of triphenyl telluronium salts, which are dilute electrolytes, are determined through the conductometry method. Satisfactory correlation dependence of constants of interion association and threshold molar electroconductivity on the Litvinenko-Popov constants for depositing groups is identified. 6 refs

  20. Anisotropic constant-roll inflation

    Energy Technology Data Exchange (ETDEWEB)

    Ito, Asuka; Soda, Jiro [Kobe University, Department of Physics, Kobe (Japan)

    2018-01-15

    We study constant-roll inflation in the presence of a gauge field coupled to an inflaton. By imposing the constant anisotropy condition, we find new exact anisotropic constant-roll inflationary solutions which include anisotropic power-law inflation as a special case. We also numerically show that the new anisotropic solutions are attractors in the phase space. (orig.)

  1. Quintessence and the cosmological constant

    International Nuclear Information System (INIS)

    Doran, M.; Wetterich, C.

    2003-01-01

    Quintessence -- the energy density of a slowly evolving scalar field -- may constitute a dynamical form of the homogeneous dark energy in the universe. We review the basic idea in the light of the cosmological constant problem. Cosmological observations or a time variation of fundamental 'constants' can distinguish quintessence from a cosmological constant

  2. Chandra Independently Determines Hubble Constant

    Science.gov (United States)

    2006-08-01

    A critically important number that specifies the expansion rate of the Universe, the so-called Hubble constant, has been independently determined using NASA's Chandra X-ray Observatory. This new value matches recent measurements using other methods and extends their validity to greater distances, thus allowing astronomers to probe earlier epochs in the evolution of the Universe. "The reason this result is so significant is that we need the Hubble constant to tell us the size of the Universe, its age, and how much matter it contains," said Max Bonamente from the University of Alabama in Huntsville and NASA's Marshall Space Flight Center (MSFC) in Huntsville, Ala., lead author on the paper describing the results. "Astronomers absolutely need to trust this number because we use it for countless calculations." [Illustration: Sunyaev-Zeldovich Effect] The Hubble constant is calculated by measuring the speed at which objects are moving away from us and dividing by their distance. Most of the previous attempts to determine the Hubble constant have involved using a multi-step, or distance ladder, approach in which the distance to nearby galaxies is used as the basis for determining greater distances. The most common approach has been to use a well-studied type of pulsating star known as a Cepheid variable, in conjunction with more distant supernovae to trace distances across the Universe. Scientists using this method and observations from the Hubble Space Telescope were able to measure the Hubble constant to within 10%. However, only independent checks would give them the confidence they desired, considering that much of our understanding of the Universe hangs in the balance. [Chandra X-ray Image of MACS J1149.5+223] By combining X-ray data from Chandra with radio observations of galaxy clusters, the team determined the distances to 38 galaxy clusters ranging from 1.4 billion to 9.3 billion

  3. Hazard Analysis of Software Requirements Specification for Process Module of FPGA-based Controllers in NPP

    Energy Technology Data Exchange (ETDEWEB)

    Jung; Sejin; Kim, Eui-Sub; Yoo, Junbeom [Konkuk University, Seoul (Korea, Republic of); Keum, Jong Yong; Lee, Jang-Soo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    Software in the PLCs and FPGAs used to develop I and C systems should also be analyzed for hazards and risks before use. NUREG/CR-6430 proposes a method for performing software hazard analysis. It suggests analysis techniques for software-affected hazards and states that software hazard analysis should be performed for each phase of the software life cycle, such as requirements analysis, design, detailed design and implementation. It also provides guide phrases for applying software hazard analysis. HAZOP (Hazard and Operability analysis) is one of the analysis techniques introduced in NUREG/CR-6430 and is well suited to the use of guide phrases; HAZOP is sometimes used to analyze the safety of software. The analysis method of NUREG/CR-6430 had previously been used for PLC software development in Korean nuclear power plants, where appropriate guide phrases and analysis processes were selected for efficient application and NUREG/CR-6430 was identified as providing applicable methods for software hazard analysis. We perform software hazard analysis of an FPGA software requirements specification with two approaches, NUREG/CR-6430 and HAZOP using general guide words (GW), and we carry out a comparative analysis of the two. The NUREG/CR-6430 approach has several pros and cons compared with HAZOP using general guide words, and it is sufficiently applicable for analyzing the software requirements specification of an FPGA.

  4. Filament instability under constant loads

    Science.gov (United States)

    Monastra, A. G.; Carusela, M. F.; D’Angelo, M. V.; Bruno, L.

    2018-04-01

    Buckling of semi-flexible filaments appears in different systems and at different scales. Some examples are: fibers in geophysical applications, microtubules in the cytoplasm of eukaryotic cells, and the deformation of polymers freely suspended in a flow. In these examples, instabilities arise when a system parameter exceeds a critical value, the Euler force being the best known. However, the complete time evolution and wavelength of buckling processes are not fully understood. In this work we solve analytically the time evolution of a filament under a constant compressive force in the small amplitude approximation. This gives an insight into the variable force scenario in terms of normal modes. The evolution is highly sensitive to the initial configuration and to the magnitude of the compressive load. This model can be a suitable approach to many different real situations.
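
    For reference, the classical Euler critical load mentioned above, for a pinned-pinned filament of length L and bending stiffness EI, is the standard textbook expression (quoted here for context, not a result of the paper):

```latex
F_c = \frac{\pi^2 E I}{L^2},
\qquad \text{a constant compressive load } F > F_c \text{ drives the buckling instability.}
```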

  5. 21 CFR 111.55 - What are the requirements to implement a production and process control system?

    Science.gov (United States)

    2010-04-01

    ... production and process control system? 111.55 Section 111.55 Food and Drugs FOOD AND DRUG ADMINISTRATION... to Establish a Production and Process Control System § 111.55 What are the requirements to implement a production and process control system? You must implement a system of production and process...

  6. An Observation-based Assessment of Instrument Requirements for a Future Precipitation Process Observing System

    Science.gov (United States)

    Nelson, E.; L'Ecuyer, T. S.; Wood, N.; Smalley, M.; Kulie, M.; Hahn, W.

    2017-12-01

    Global models exhibit substantial biases in the frequency, intensity, duration, and spatial scales of precipitation systems. Much of this uncertainty stems from an inadequate representation of the processes by which water is cycled between the surface and atmosphere and, in particular, those that govern the formation and maintenance of cloud systems and their propensity to form the precipitation. Progress toward improving precipitation process models requires observing systems capable of quantifying the coupling between the ice content, vertical mass fluxes, and precipitation yield of precipitating cloud systems. Spaceborne multi-frequency, Doppler radar offers a unique opportunity to address this need but the effectiveness of such a mission is heavily dependent on its ability to actually observe the processes of interest in the widest possible range of systems. Planning for a next generation precipitation process observing system should, therefore, start with a fundamental evaluation of the trade-offs between sensitivity, resolution, sampling, cost, and the overall potential scientific yield of the mission. Here we provide an initial assessment of the scientific and economic trade-space by evaluating hypothetical spaceborne multi-frequency radars using a combination of current real-world and model-derived synthetic observations. Specifically, we alter the field of view, vertical resolution, and sensitivity of a hypothetical Ka- and W-band radar system and propagate those changes through precipitation detection and intensity retrievals. The results suggest that sampling biases introduced by reducing sensitivity disproportionately affect the light rainfall and frozen precipitation regimes that are critical for warm cloud feedbacks and ice sheet mass balance, respectively. Coarser spatial resolution observations introduce regime-dependent biases in both precipitation occurrence and intensity that depend on cloud regime, with even the sign of the bias varying within a

  7. 21 CFR 111.60 - What are the design requirements for the production and process control system?

    Science.gov (United States)

    2010-04-01

    ... to Establish a Production and Process Control System § 111.60 What are the design requirements for... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What are the design requirements for the production and process control system? 111.60 Section 111.60 Food and Drugs FOOD AND DRUG ADMINISTRATION...

  8. Quality Control and Peer Review of Data Sets: Mapping Data Archiving Processes to Data Publication Requirements

    Science.gov (United States)

    Mayernik, M. S.; Daniels, M.; Eaker, C.; Strand, G.; Williams, S. F.; Worley, S. J.

    2012-12-01

    ? What data set review can be done pre-publication, and what must be done post-publication? What components of the data sets review processes can be automated, and what components will always require human expertise and evaluation?

  9. Some consistency requirements on thermophysical properties used in metal processing simulations

    International Nuclear Information System (INIS)

    Bertran, L.A.

    1992-01-01

    Stresses and deformations during metal processing are being simulated from room temperature to melting. This requires that thermophysical data be obtained at high temperature, an expensive task even when the sampling of elevated temperatures is quite sparse. Use of constitutive models with complex inelastic responses along with sparsely sampled data increases uncertainty as to the physical meaning of simulation results. Identification of dependences among the physical parameters as a function of temperature offers increased confidence and lower experimental costs. In this paper examples for bilinear stress-strain materials are given (e.g., the yield strain, the ratio of yield stress to Young's modulus, should be continuous wherever the coefficient of thermal expansion is continuous in T) and similar constraints are discussed for more complex constitutive responses.
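
    A hypothetical sketch of the kind of consistency check this suggests for a bilinear material model (the property tables and tolerance below are invented for illustration): the yield strain sigma_y(T)/E(T) should vary smoothly wherever the thermal expansion coefficient does.

```python
# Made-up temperature-dependent property tables (deg C, Pa) for the sketch.
temps   = [20.0, 200.0, 400.0, 600.0]
sigma_y = [250e6, 220e6, 180e6, 120e6]   # yield stress
E_mod   = [200e9, 190e9, 175e9, 150e9]   # Young's modulus

yield_strain = [s / e for s, e in zip(sigma_y, E_mod)]
print([round(y, 5) for y in yield_strain])

# Flag suspicious jumps between neighbouring temperature points.
for i in range(1, len(temps)):
    jump = abs(yield_strain[i] - yield_strain[i - 1]) / yield_strain[i - 1]
    if jump > 0.25:  # arbitrary tolerance for the sketch
        print(f"check data between {temps[i-1]} and {temps[i]} C: yield strain jumps {jump:.0%}")
```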

  10. Better Building Alliance, Plug and Process Loads in Commercial Buildings: Capacity and Power Requirement Analysis (Brochure)

    Energy Technology Data Exchange (ETDEWEB)

    2014-09-01

    This brochure addresses gaps in actionable knowledge that can help reduce the plug load capacities designed into buildings. Prospective building occupants and real estate brokers lack accurate references for plug and process load (PPL) capacity requirements, so they often request 5-10 W/ft2 in their lease agreements. This brochure should be used to make these decisions so systems can operate more energy efficiently; upfront capital costs will also decrease. This information can also be used to drive changes in negotiations about PPL energy demands. It should enable brokers and tenants to agree about lower PPL capacities. Owner-occupied buildings will also benefit. Overestimating PPL capacity leads designers to oversize electrical infrastructure and cooling systems.

  11. Methods for calculating energy and current requirements for industrial electron beam processing

    International Nuclear Information System (INIS)

    Cleland, M.R.; Farrell, J.P.

    1976-01-01

    The practical problems of determining electron beam parameters for industrial irradiation processes are discussed. To assist the radiation engineer in this task, the physical aspects of electron beam absorption are briefly described. Formulas are derived for calculating the surface dose in the treated material using the electron energy, beam current and the area thruput rate of the conveyor. For thick absorbers electron transport results are used to obtain the depth-dose distributions. From these the average dose in the material, D-bar, and the beam power utilization efficiency, F_p, can be found by integration over the distributions. These concepts can be used to relate the electron beam power to the mass thruput rate. Qualitatively, the thickness of the material determines the beam energy, the area thruput rate and surface dose determine the beam current while the mass thruput rate and average depth-dose determine the beam power requirements. Graphs are presented showing these relationships as a function of electron energy from 0.2 to 4.0 MeV for polystyrene. With this information, the determination of electron energy and current requirements is a relatively simple procedure
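
    A minimal sketch of the dose/throughput relations summarized above (the symbols and the numeric value of the area-processing coefficient K are assumptions for illustration, not values from the paper): the surface dose scales with beam current over area throughput rate, and the beam power follows from the mass throughput rate, the average dose and the power utilization efficiency F_p.

```python
def surface_dose_kGy(K, beam_current_mA, area_rate_m2_per_min):
    """Surface dose ~ K * I / (area throughput rate); K in kGy*m^2/(mA*min)."""
    return K * beam_current_mA / area_rate_m2_per_min

def required_beam_power_kW(avg_dose_kGy, mass_rate_kg_per_s, power_utilization):
    """Beam power from average dose (1 kGy = 1 kJ/kg), mass throughput and efficiency F_p."""
    return avg_dose_kGy * mass_rate_kg_per_s / power_utilization

# Example with assumed numbers: K = 30 kGy*m^2/(mA*min), 10 mA beam, 20 m^2/min conveyor.
print(surface_dose_kGy(30.0, 10.0, 20.0))       # -> 15 kGy at the surface
print(required_beam_power_kW(15.0, 0.5, 0.6))   # -> 12.5 kW of beam power
```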

  12. A novel explosive process is required for the gamma-ray burst GRB 060614.

    Science.gov (United States)

    Gal-Yam, A; Fox, D B; Price, P A; Ofek, E O; Davis, M R; Leonard, D C; Soderberg, A M; Schmidt, B P; Lewis, K M; Peterson, B A; Kulkarni, S R; Berger, E; Cenko, S B; Sari, R; Sharon, K; Frail, D; Moon, D-S; Brown, P J; Cucchiara, A; Harrison, F; Piran, T; Persson, S E; McCarthy, P J; Penprase, B E; Chevalier, R A; MacFadyen, A I

    2006-12-21

    Over the past decade, our physical understanding of gamma-ray bursts (GRBs) has progressed rapidly, thanks to the discovery and observation of their long-lived afterglow emission. Long-duration (> 2 s) GRBs are associated with the explosive deaths of massive stars ('collapsars', ref. 1), which produce accompanying supernovae; the short-duration (< or = 2 s) GRBs have a different origin, which has been argued to be the merger of two compact objects. Here we report optical observations of GRB 060614 (duration approximately 100 s, ref. 10) that rule out the presence of an associated supernova. This would seem to require a new explosive process: either a massive collapsar that powers a GRB without any associated supernova, or a new type of 'engine', as long-lived as the collapsar but without a massive star. We also show that the properties of the host galaxy (redshift z = 0.125) distinguish it from other long-duration GRB hosts and suggest that an entirely new type of GRB progenitor may be required.

  13. Inflation with a constant rate of roll

    International Nuclear Information System (INIS)

    Motohashi, Hayato; Starobinsky, Alexei A.; Yokoyama, Jun'ichi

    2015-01-01

    We consider an inflationary scenario where the rate of inflaton roll, defined by φ̈/(Hφ̇), remains constant. The rate of roll is small for slow-roll inflation, while a generic rate of roll leads to the interesting case of 'constant-roll' inflation. We find a general exact solution for the inflaton potential required for such inflaton behaviour. In this model, due to the non-slow evolution of the background, the would-be decaying mode of linear scalar (curvature) perturbations may not be neglected. It can even grow for some values of the model parameter, while the other mode always remains constant. However, this always occurs for unstable solutions which are not attractors for the given potential. The most interesting particular cases of constant-roll inflation remaining viable with the most recent observational data are quadratic hilltop inflation (with cutoff) and natural inflation (with an additional negative cosmological constant). In these cases even-order slow-roll parameters approach non-negligible constants while the odd ones are asymptotically vanishing in the quasi-de Sitter regime
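
    In the notation commonly used in the constant-roll literature (the parameter name β below is illustrative, not necessarily the one used in the paper), the defining condition reads:

```latex
\frac{\ddot\phi}{H\dot\phi} \;=\; \beta \;=\; \mathrm{const},
\qquad |\beta| \ll 1 \;\Rightarrow\; \text{ordinary slow-roll inflation as a limiting case}.
```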

  14. The requirements for processing tritium recovered from liquid lithium blankets: The blanket interface

    International Nuclear Information System (INIS)

    Clemmer, R.G.; Finn, P.A.; Greenwood, L.R.; Grimm, T.L.; Sze, D.K.; Bartlit, J.R.; Anderson, J.L.; Yoshida, H.; Naruse.

    1988-03-01

    We have initiated a study to define a blanket processing mockup for the Tritium Systems Test Assembly. Initial evaluation of the requirements of the blanket processing system has been started. The first step of the work is to define the condition of the gaseous tritium stream from the blanket tritium recovery system. This report summarizes this part of the work for one particular blanket concept, i.e., a self-cooled lithium blanket. The total gas throughput, the hydrogen to tritium ratio, the corrosive chemicals, and the radionuclides are defined. The key findings are: the throughput of the blanket gas stream (including the helium carrier gas) is about two orders of magnitude higher than that of the plasma exhaust stream; the protium to tritium ratio is about 1 and the deuterium to tritium ratio is about 0.003; the corrosive chemicals are dominated by halides; the radionuclides are dominated by C-14, P-32, and S-35; and there is a high level of nitrogen contamination in the blanket stream. 77 refs., 6 figs., 13 tabs

  15. Elongational flow of polymer melts at constant strain rate, constant stress and constant force

    Science.gov (United States)

    Wagner, Manfred H.; Rolón-Garrido, Víctor H.

    2013-04-01

    Characterization of polymer melts in elongational flow is typically performed at constant elongational rate or rarely at constant tensile stress conditions. One of the disadvantages of these deformation modes is that they are hampered by the onset of "necking" instabilities according to the Considère criterion. Experiments at constant tensile force have been performed even more rarely, in spite of the fact that this deformation mode is free from necking instabilities and is of considerable industrial relevance as it is the correct analogue of steady fiber spinning. It is the objective of the present contribution to present for the first time a full experimental characterization of a long-chain branched polyethylene melt in elongational flow. Experiments were performed at constant elongation rate, constant tensile stress and constant tensile force by use of a Sentmanat Extensional Rheometer (SER) in combination with an Anton Paar MCR301 rotational rheometer. The accessible experimental window and experimental limitations are discussed. The experimental data are modelled by using the Wagner I model. Predictions of the steady-start elongational viscosity in constant strain rate and creep experiments are found to be identical, albeit only by extrapolation of the experimental data to Hencky strains of the order of 6. For constant stress experiments, a minimum in the strain rate and a corresponding maximum in the elongational viscosity is found at a Hencky strain of the order of 3, which, although larger than the steady-state value, follows roughly the general trend of the steady-state elongational viscosity. The constitutive analysis also reveals that constant tensile force experiments indicate a larger strain hardening potential than seen in constant elongation rate or constant tensile stress experiments. This may be indicative of the effect of necking under constant elongation rate or constant tensile stress conditions according to the Considère criterion.
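
    For orientation, the three deformation modes compared above can be written compactly for incompressible uniaxial extension of a sample with initial length L_0 and cross-section A_0 (standard definitions, not specific to the SER fixture):

```latex
\varepsilon_H = \ln\!\frac{L(t)}{L_0}, \qquad A(t) = A_0\,e^{-\varepsilon_H}, \qquad
\underbrace{\dot\varepsilon_H = \mathrm{const}}_{\text{constant strain rate}} \quad
\underbrace{\sigma = F/A(t) = \mathrm{const}}_{\text{constant stress (creep)}} \quad
\underbrace{F = \sigma(t)\,A_0\,e^{-\varepsilon_H} = \mathrm{const}}_{\text{constant force (fiber-spinning analogue)}}
```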

  16. Fluoxetine requires the endfeet protein aquaporin-4 to enhance plasticity of astrocyte processes

    Directory of Open Access Journals (Sweden)

    Barbara eDi Benedetto

    2016-02-01

    in HAB brains. Thus, we suggest that longer treatment regimes may be needed to properly restore the coverage of BVs or to relocate AQP-4 to astrocyte endfeet. In conclusion, FLX requires AQP-4 to modulate the plasticity of astrocyte processes and this effect might be essential to re-establish a functional glia-vasculature interface necessary for a physiological communication between bloodstream and brain parenchyma.

  17. Spectrophotometric determination of association constant

    DEFF Research Database (Denmark)

    2016-01-01

    Least-squares 'Systematic Trial-and-Error Procedure' (STEP) for spectrophotometric evaluation of association constant (equilibrium constant) K and molar absorption coefficient E for a 1:1 molecular complex, A + B = C, with error analysis according to Conrow et al. (1964). An analysis of the Charge...

  18. Improving the Simplified Acquisition of Base Engineering Requirements (SABER) Delivery Order Award Process: Results of a Process Improvement Plan

    Science.gov (United States)

    1991-09-01

    putting all tasks directed towards achieving an outcome in sequence. The tasks can be viewed as steps in the process (39:2.3). Using this...improvement opportunity is investigated. A plan is developed, root causes are identified, and solutions are tested and implemented. The process is... solutions, check for actual improvement, and integrate the successful improvements into the process. Step 7. Check Improvement Performance. Finally, the

  19. Time constant of logarithmic creep and relaxation

    CSIR Research Space (South Africa)

    Nabarro, FRN

    2001-07-15

    Full Text Available length and hardness which vary logarithmically with time. For dimensional reasons, a logarithmic variation must involve a time constant tau characteristic of the process, so that the deformation is proportional to ln(t/tau). Two distinct mechanisms...

  20. Export Control Requirements for Tritium Processing Design and R&D

    Energy Technology Data Exchange (ETDEWEB)

    Hollis, William Kirk [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Maynard, Sarah-Jane Wadsworth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-05

    This document will address requirements of export control associated with tritium plant design and processes. Los Alamos National Laboratory has been working in the area of tritium plant system design and research and development (R&D) since the early 1970s at the Tritium Systems Test Assembly (TSTA). This work has continued to the current date with projects associated with the ITER project and other Office of Science Fusion Energy Science (OS-FES) funded programs. ITER is currently the highest funding area for the DOE OS-FES. Although export control issues have been integrated into these projects in the past, a general guidance document has not been available for reference in this area. To address concerns with currently funded tritium plant programs and assist future projects for FES, this document will identify the key reference documents and the specific sections within them that relate to tritium research. Guidance as to the application of these sections will be discussed, with specific attention to publications and work with foreign nationals.

  1. Export Control Requirements for Tritium Processing Design and R&D

    Energy Technology Data Exchange (ETDEWEB)

    Hollis, William Kirk [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Maynard, Sarah-Jane Wadsworth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-30

    This document will address requirements of export control associated with tritium plant design and processes. Los Alamos National Laboratory has been working in the area of tritium plant system design and research and development (R&D) since the early 1970s at the Tritium Systems Test Assembly (TSTA). This work has continued to the current date with projects associated with the ITER project and other Office of Science Fusion Energy Science (OS-FES) funded programs. ITER is currently the highest funding area for the DOE OS-FES. Although export control issues have been integrated into these projects in the past, a general guidance document has not been available for reference in this area. To address concerns with currently funded tritium plant programs and assist future projects for FES, this document will identify the key reference documents and the specific sections within them that relate to tritium research. Guidance as to the application of these sections will be discussed, with specific attention to publications and work with foreign nationals.

  2. 21 CFR 212.60 - What requirements apply to the laboratories where I test components, in-process materials, and...

    Science.gov (United States)

    2010-04-01

    ... maintenance. Each laboratory must have and follow written procedures to ensure that equipment is routinely... 21 Food and Drugs 4 2010-04-01 2010-04-01 false What requirements apply to the laboratories where...) Laboratory Controls § 212.60 What requirements apply to the laboratories where I test components, in-process...

  3. Requirements for a modern process control system (PLS); Anforderungen an ein modernes Prozessleitsystem (PLS)

    Energy Technology Data Exchange (ETDEWEB)

    Maurer, Michael [SAR Elektronic GmbH, Dingolfing (Germany)

    2012-11-01

    The process control system is of crucial importance for the process management of process engineering systems. It has to enable the operation and surveillance of processes, to register critical process conditions and to provide process data for the evaluation of processes. The process control system delivers real-time process data to superior process engineering systems (MES, ERP) and implements the control commands of those superior systems on the plant. The market for process control systems is characterized by a wide variety of systems, by many different project-specific and customer-specific configurations, and by different releases. Control systems are in competition with programmable logic controllers and black boxes. The satisfaction of the user with his process control system depends significantly on the quality of execution attained by the supplying company and on the power plant library used; it does not depend on the chosen brand of process control system. The availability of process control systems depends on the chosen system architecture and components, but not on the brand of process control system.

  4. Economic analysis of solar industrial process heat systems: A methodology to determine annual required revenue and internal rate of return

    Science.gov (United States)

    Dickinson, W. C.; Brown, K. C.

    1981-08-01

    An economic evaluation of solar industrial process heat systems is developed to determine the annual required revenue and the internal rate of return. First, a format is provided to estimate the solar system's installed cost, annual operating and maintenance expenses, and net annual solar energy delivered to the industrial process. The annual required revenue and the price of solar energy are then calculated. The economic attractiveness of the potential solar investment can be determined by comparing the price of solar energy with the price of fossil fuel, both expressed in levelized terms. This requires calculation of the internal rate of return on the solar investment or, in certain cases, the growth rate of return.
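
    As a rough sketch of the comparison the report formalizes (annual required revenue converted to a levelized price of solar heat and compared with a levelized fossil-fuel price), the following toy calculation uses invented numbers and a simple fixed-charge-rate treatment, not the report's actual methodology:

      # Simplified, assumption-laden sketch of the levelized-price comparison.
      def annual_required_revenue(installed_cost, fixed_charge_rate, annual_om):
          # Levelized annual revenue needed to cover capital charges plus O&M.
          return installed_cost * fixed_charge_rate + annual_om

      def levelized_price(annual_revenue, annual_energy_gj):
          # Price of delivered solar heat, $/GJ.
          return annual_revenue / annual_energy_gj

      solar_price = levelized_price(
          annual_required_revenue(installed_cost=500_000.0,   # $ (assumed)
                                  fixed_charge_rate=0.15,     # 1/yr (assumed)
                                  annual_om=10_000.0),        # $/yr (assumed)
          annual_energy_gj=4_000.0)                           # GJ/yr delivered (assumed)

      fossil_price_levelized = 12.0                           # $/GJ (assumed)
      print(f"solar: {solar_price:.1f} $/GJ vs fossil: {fossil_price_levelized:.1f} $/GJ")
      # The investment is attractive, in this simplified sense, only when the
      # levelized solar price is at or below the levelized fossil price.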

  5. Multiple constant multiplication optimizations for field programmable gate arrays

    CERN Document Server

    Kumm, Martin

    2016-01-01

    This work covers field programmable gate array (FPGA)-specific optimizations of circuits computing the multiplication of a variable by several constants, commonly denoted as multiple constant multiplication (MCM). These optimizations focus on low resource usage but high performance. They comprise the use of fast carry-chains in adder-based constant multiplications including ternary (3-input) adders as well as the integration of look-up table-based constant multipliers and embedded multipliers to get the optimal mapping to modern FPGAs. The proposed methods can be used for the efficient implementation of digital filters, discrete transforms and many other circuits in the domain of digital signal processing, communication and image processing. Contents: Heuristic and ILP-Based Optimal Solutions for the Pipelined Multiple Constant Multiplication Problem; Methods to Integrate Embedded Multipliers, LUT-Based Constant Multipliers and Ternary (3-Input) Adders; An Optimized Multiple Constant Multiplication Architecture ...
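
    For readers unfamiliar with MCM, the sketch below only shows the basic idea that such circuits build on, namely replacing a constant multiplication by shifts and additions; it is an illustration, not one of the book's FPGA-specific optimization methods:

      # Illustrative only: decompose a constant multiplication into shifts and adds.
      def shift_add_terms(constant):
          """Return the bit positions of the set bits of a positive constant."""
          return [i for i in range(constant.bit_length()) if (constant >> i) & 1]

      def multiply_by_constant(x, constant):
          """Compute x*constant using only shifts and additions."""
          return sum(x << i for i in shift_add_terms(constant))

      # Example: 105 = 2**6 + 2**5 + 2**3 + 2**0 -> 4 shifted copies, 3 adders.
      assert multiply_by_constant(7, 105) == 7 * 105
      print(shift_add_terms(105))   # [0, 3, 5, 6]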

  6. Varying Constants, Gravitation and Cosmology

    Directory of Open Access Journals (Sweden)

    Jean-Philippe Uzan

    2011-03-01

    Full Text Available Fundamental constants are a cornerstone of our physical laws. Any constant varying in space and/or time would reflect the existence of an almost massless field that couples to matter. This will induce a violation of the universality of free fall. Thus, it is of utmost importance for our understanding of gravity and of the domain of validity of general relativity to test for their constancy. We detail the relations between the constants, the tests of the local position invariance and of the universality of free fall. We then review the main experimental and observational constraints that have been obtained from atomic clocks, the Oklo phenomenon, solar system observations, meteorite dating, quasar absorption spectra, stellar physics, pulsar timing, the cosmic microwave background and big bang nucleosynthesis. At each step we describe the basics of each system, its dependence with respect to the constants, the known systematic effects and the most recent constraints that have been obtained. We then describe the main theoretical frameworks in which the low-energy constants may actually be varying and we focus on the unification mechanisms and the relations between the variation of different constants. To finish, we discuss the more speculative possibility of understanding their numerical values and the apparent fine-tuning that they confront us with.

  7. Treated effluent disposal system process control computer software requirements and specification

    International Nuclear Information System (INIS)

    Graf, F.A. Jr.

    1994-01-01

    The software requirements for the monitor and control system that will be associated with the effluent collection pipeline system known as the 200 Area Treated Effluent Disposal System is covered. The control logic for the two pump stations and specific requirements for the graphic displays are detailed

  8. Electromagnetic corrections to pseudoscalar decay constants

    Energy Technology Data Exchange (ETDEWEB)

    Glaessle, Benjamin Simon

    2017-03-06

    First-principles lattice quantum chromodynamics (LQCD) calculations enable the determination of low-energy hadronic amplitudes. Precision LQCD calculations with relative errors smaller than approximately 1% require the inclusion of electromagnetic effects. We demonstrate that including (quenched) quantum electrodynamics effects in the LQCD calculation affects the values obtained for pseudoscalar decay constants at the per mille level. The importance of systematic effects, including finite volume effects and the charge dependence of renormalization and improvement coefficients, is highlighted.

  9. 20 CFR 653.501 - Requirements for accepting and processing clearance orders.

    Science.gov (United States)

    2010-04-01

    ... the job order. (i) No agricultural or food processing order shall be included in job bank listings... clearance any job order seeking workers to perform agricultural or food processing work before reviewing it... order seeking workers to perform agricultural or food processing work into intrastate clearance unless...

  10. Stabilized power constant alimentation; Alimentation regulee a puissance constante

    Energy Technology Data Exchange (ETDEWEB)

    Roussel, L [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1968-06-01

    The study and realization of a stabilized constant-power supply, adjustable from 5 to 100 watts, are described. In order to achieve constant-power drift of lithium-compensated diodes, we aimed for a regulation precision of 1 per cent and a response time of less than 1 second. Recent components such as Hall-effect multipliers and integrated amplifiers make this possible and facilitate the use of interchangeable modules. (author) [French] On decrit l'etude et la realisation d'une alimentation a puissance constante reglable dans une gamme de 5 a 100 watts. Prevue pour le drift a puissance constante des diodes compensees au lithium, l'etude a ete menee en vue d'obtenir une precision de regulation de 1 pour cent et un temps de reponse inferieur a la seconde. Des systemes recents tels que multiplicateurs a effet Hall et circuits integres ont permis d'atteindre ce but tout en facilitant l'emploi de modules interchangeables. (auteur)
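
    For intuition only, here is a toy constant-power regulation loop (a proportional correction keeping V*I at a set point for a resistive load); it is purely illustrative and has nothing to do with the 1968 Hall-multiplier circuit described above:

      # Toy constant-power control loop (all values assumed, not from the report).
      def run_constant_power_loop(p_set, r_load, steps=100, gain=0.05):
          v = 1.0                       # initial output voltage (arbitrary)
          for _ in range(steps):
              i = v / r_load            # load current for a resistive load
              p = v * i                 # delivered power (the multiplier's job)
              v += gain * (p_set - p)   # proportional correction of the voltage
          return v, v * v / r_load

      v, p = run_constant_power_loop(p_set=50.0, r_load=10.0)
      print(f"settled at {v:.2f} V, {p:.2f} W")   # ~22.36 V, ~50 W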

  11. From the Rydberg constant to the fundamental constants metrology

    International Nuclear Information System (INIS)

    Nez, F.

    2005-06-01

    This document reviews the theoretical and experimental achievements of the author since the beginning of his scientific career. This document is dedicated to the spectroscopy of hydrogen, deuterium and helium atoms. The first part is divided into 6 sub-sections: 1) the principles of hydrogen spectroscopy, 2) the measurement of the 2S-nS/nD transitions, 3) other optical frequency measurements, 4) our contribution to the determination of the Rydberg constant, 5) our current experiment on the 1S-3S transition, 6) the spectroscopy of the muonic hydrogen. Our experiments have improved the accuracy of the Rydberg Constant by a factor 25 in 15 years and we have achieved the first absolute optical frequency measurement of a transition in hydrogen. The second part is dedicated to the measurement of the fine structure constant and the last part deals with helium spectroscopy and the search for optical references in the near infrared range. (A.C.)

  12. Learning Read-constant Polynomials of Constant Degree modulo Composites

    DEFF Research Database (Denmark)

    Chattopadhyay, Arkadev; Gavaldá, Richard; Hansen, Kristoffer Arnsfelt

    2011-01-01

    Boolean functions that have constant degree polynomial representation over a fixed finite ring form a natural and strict subclass of the complexity class ACC^0. They are also precisely the functions computable efficiently by programs over fixed and finite nilpotent groups. This class...... is not known to be learnable in any reasonable learning model. In this paper, we provide a deterministic polynomial time algorithm for learning Boolean functions represented by polynomials of constant degree over arbitrary finite rings from membership queries, with the additional constraint that each variable...

  13. Constant-pH molecular dynamics using stochastic titration

    Science.gov (United States)

    Baptista, António M.; Teixeira, Vitor H.; Soares, Cláudio M.

    2002-09-01

    A new method is proposed for performing constant-pH molecular dynamics (MD) simulations, that is, MD simulations where pH is one of the external thermodynamic parameters, like the temperature or the pressure. The protonation state of each titrable site in the solute is allowed to change during a molecular mechanics (MM) MD simulation, the new states being obtained from a combination of continuum electrostatics (CE) calculations and Monte Carlo (MC) simulation of protonation equilibrium. The coupling between the MM/MD and CE/MC algorithms is done in a way that ensures a proper Markov chain, sampling from the intended semigrand canonical distribution. This stochastic titration method is applied to succinic acid, aimed at illustrating the method and examining the choice of its adjustable parameters. The complete titration of succinic acid, using constant-pH MD simulations at different pH values, gives a clear picture of the coupling between the trans/gauche isomerization and the protonation process, making it possible to reconcile some apparently contradictory results of previous studies. The present constant-pH MD method is shown to require a moderate increase of computational cost when compared to the usual MD method.
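
    As a toy illustration of only the protonation-equilibrium half of the scheme (a Metropolis Monte Carlo titration step; the continuum-electrostatics and MM/MD parts are omitted), the sketch below is an assumption-laden simplification and not the published algorithm:

      # Toy single-site Metropolis titration at fixed pH (illustrative only).
      import math, random

      def titrate(pka, ph, n_steps=20000):
          """Fraction of MC steps in which a single titrable site is protonated."""
          ln10 = math.log(10.0)
          protonated, count = True, 0
          for _ in range(n_steps):
              # ... in the full method, a short MM/MD segment would run here with
              # the current protonation state before the next titration update ...
              dG = ln10 * (ph - pka) * (-1.0 if protonated else 1.0)  # flip cost, in kT
              if dG <= 0.0 or random.random() < math.exp(-dG):
                  protonated = not protonated
              count += protonated
          return count / n_steps

      # Recovers Henderson-Hasselbalch behaviour: ~0.5 occupancy at pH == pKa,
      # ~0.91 one pH unit below the pKa.
      print(round(titrate(4.2, 4.2), 2), round(titrate(4.2, 3.2), 2))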

  14. Incorporating functional requirements into the structural design of the Defense Waste Processing Facility

    International Nuclear Information System (INIS)

    Hsiu, F.J.; Ng, C.K.; Almuti, A.M.

    1986-01-01

    Vitrification Building-type structures have unique features and design needs. The structural design requires new concepts and custom detailing. The above special structural designs have demonstrated the importance of the five design considerations listed in the introduction. Innovative ideas and close coordination are required to achieve the design objectives. Many of these innovations have been applied to the DWPF facility which is a first of a kind

  15. Processing of unconventional stimuli requires the recruitment of the non-specialized hemisphere

    Directory of Open Access Journals (Sweden)

    Yoed Nissan Kenett

    2015-02-01

    Full Text Available In the present study we investigate hemispheric processing of conventional and unconventional visual stimuli in the context of visual and verbal creative ability. In Experiment 1, we studied two unconventional visual recognition tasks – Mooney face and objects' silhouette recognition – and found a significant relationship between measures of verbal creativity and unconventional face recognition. In Experiment 2 we used the split visual field paradigm to investigate hemispheric processing of conventional and unconventional faces and its relation to verbal and visual characteristics of creativity. Results showed that while conventional faces were better processed by the specialized right hemisphere, unconventional faces were better processed by the non-specialized left hemisphere. In addition, only unconventional face processing by the non-specialized left hemisphere was related to verbal and visual measures of creative ability. Our findings demonstrate the role of the non-specialized hemisphere in processing unconventional stimuli and how it relates to creativity.

  16. A regulatory adjustment process for the determination of the optimal percentage requirement in an electricity market with Tradable Green Certificates

    International Nuclear Information System (INIS)

    Currier, Kevin M.

    2013-01-01

    A system of Tradable Green Certificates (TGCs) is a market-based subsidy scheme designed to promote electricity generation from renewable energy sources such as wind power. Under a TGC system, the principal policy instrument is the “percentage requirement,” which stipulates the percentage of total electricity production (“green” plus “black”) that must be obtained from renewable sources. In this paper, we propose a regulatory adjustment process that a regulator can employ to determine the socially optimal percentage requirement, explicitly accounting for environmental damages resulting from black electricity generation. - Highlights: • A Tradable Green Certificate (TGC) system promotes energy production from renewable sources. • We consider an electricity oligopoly operated under a TGC system. • Welfare analysis must account for damages from “black” electricity production. • We characterize the welfare maximizing (optimal) “percentage requirement.” • We present a regulatory adjustment process that computes the optimal percentage requirement iteratively

  17. Guidelines for the Deployment of Product-Related Environmental Legislation into Requirements for the Product Development Process

    DEFF Research Database (Denmark)

    Ferraz, Mariana; Pigosso, Daniela Cristina Antelmi; Teixeira, Cláudia Echevenguá

    2013-01-01

    Environmental legislation is increasingly changing its focus from end-of-pipe approaches to a life cycle perspective. Therefore, manufacturing companies are increasingly identifying the need of deploying and incorporating product-related environmental requirements into the product development...... process. This paper presents twelve guidelines, clustered into three groups, to support companies in the identification, analysis and deployment of product requirements from product-related environmental legislation....

  18. Congruence from the operator's point of view: compositionality requirements on process semantics

    NARCIS (Netherlands)

    Gazda, M.; Fokkink, W.J.

    2010-01-01

    One of the basic sanity properties of a behavioural semantics is that it constitutes a congruence with respect to standard process operators. This issue has been traditionally addressed by the development of rule formats for transition system specifications that define process algebras. In this

  19. Congruence from the operator's point of view : compositionality requirements on process semantics

    NARCIS (Netherlands)

    Gazda, M.W.; Fokkink, W.J.; Aceto, L.; Sobocinski, P.

    2010-01-01

    One of the basic sanity properties of a behavioural semantics is that it constitutes a congruence with respect to standard process operators. This issue has been traditionally addressed by the development of rule formats for transition system specifications that define process algebras. In this

  20. Process-based models are required to manage ecological systems in a changing world

    Science.gov (United States)

    K. Cuddington; M.-J. Fortin; L.R. Gerber; A. Hastings; A. Liebhold; M. OConnor; C. Ray

    2013-01-01

    Several modeling approaches can be used to guide management decisions. However, some approaches are better fitted than others to address the problem of prediction under global change. Process-based models, which are based on a theoretical understanding of relevant ecological processes, provide a useful framework to incorporate specific responses to altered...

  1. The governance of green IT The role of processes in reducing data center energy requirements

    CERN Document Server

    Spafford, George

    2008-01-01

    To sustain support, IT must implement processes to ensure proper value creation and protection of organizational goals.  To this end, this book sets forth a Green IT process that will enable value creation and protection in the areas of data center power and cooling.

  2. From the Rydberg constant to the fundamental constants metrology; De la constante de Rydberg a la metrologie des constantes fondamentales

    Energy Technology Data Exchange (ETDEWEB)

    Nez, F

    2005-06-15

    This document reviews the theoretical and experimental achievements of the author since the beginning of his scientific career. This document is dedicated to the spectroscopy of hydrogen, deuterium and helium atoms. The first part is divided into 6 sub-sections: 1) the principles of hydrogen spectroscopy, 2) the measurement of the 2S-nS/nD transitions, 3) other optical frequency measurements, 4) our contribution to the determination of the Rydberg constant, 5) our current experiment on the 1S-3S transition, 6) the spectroscopy of the muonic hydrogen. Our experiments have improved the accuracy of the Rydberg Constant by a factor 25 in 15 years and we have achieved the first absolute optical frequency measurement of a transition in hydrogen. The second part is dedicated to the measurement of the fine structure constant and the last part deals with helium spectroscopy and the search for optical references in the near infrared range. (A.C.)

  3. Multiphoton amplitude in a constant background field

    Science.gov (United States)

    Ahmad, Aftab; Ahmadiniaz, Naser; Corradini, Olindo; Kim, Sang Pyo; Schubert, Christian

    2018-01-01

    In this contribution, we present our recent compact master formulas for the multiphoton amplitudes of a scalar propagator in a constant background field using the worldline formulation of quantum field theory. The constant field has been included nonperturbatively, which is crucial for strong external fields. A possible application is the scattering of photons by electrons in a strong magnetic field, a process that has been a subject of great interest since the discovery of astrophysical objects like radio pulsars, which provide evidence that magnetic fields of the order of 10^12 G are present in nature. The presence of a strong external field leads to a strong deviation from the classical scattering amplitudes. We explicitly work out the Compton scattering amplitude in a magnetic field, which is a process of potential relevance for astrophysics. Our final result is compact and suitable for numerical integration.

  4. Risk-based Strategy to Determine Testing Requirement for the Removal of Residual Process Reagents as Process-related Impurities in Bioprocesses.

    Science.gov (United States)

    Qiu, Jinshu; Li, Kim; Miller, Karen; Raghani, Anil

    2015-01-01

    The purpose of this article is to recommend a risk-based strategy for determining the clearance testing requirements of the process reagents used in manufacturing biopharmaceutical products. The strategy takes account of four risk factors. First, the process reagents are classified into two categories according to their safety profile and history of use: generally recognized as safe (GRAS) and potential safety concern (PSC) reagents. The clearance testing of GRAS reagents can be eliminated because of their historically safe use and the process capability to remove these reagents. An estimated safety margin (Se) value, the ratio of the exposure limit to the estimated maximum reagent amount, is then used to evaluate the necessity of testing the PSC reagents at an early development stage. The Se value is calculated from two risk factors, the starting PSC reagent amount per maximum product dose (Me) and the exposure limit (Le). A worst-case scenario is assumed to estimate the Me value: the PSC reagent of interest is co-purified with the product and no clearance occurs throughout the entire purification process. No clearance testing is required for a PSC reagent if its Se value is ≥1; otherwise clearance testing is needed. Finally, the point at which the process reagent is introduced into the process is also considered in determining the necessity of clearance testing. How to use the measured safety margin as a criterion for determining PSC reagent testing at the process characterization, process validation, and commercial production stages is also described. A large number of process reagents are used in biopharmaceutical manufacturing to control process performance. Clearance testing for all of the process reagents would be an enormous analytical task. In this article, a risk-based strategy is described to eliminate unnecessary clearance testing for the majority of the process reagents using four risk factors. The risk factors included
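
    The decision rule sketched in the abstract (no testing for GRAS reagents; for PSC reagents, test only when the worst-case safety margin Se = Le/Me falls below 1) can be written compactly as below; the reagent category and numbers are invented for illustration:

      # Sketch of the decision logic described above (illustrative values only).
      def clearance_testing_required(category, exposure_limit_le, max_amount_me):
          """Return True if clearance testing is needed for a process reagent."""
          if category == "GRAS":
              return False                        # safe history of use, no testing
          se = exposure_limit_le / max_amount_me  # safety margin, worst case (no clearance)
          return se < 1.0                         # PSC reagent: test only if Se < 1

      # Hypothetical PSC reagent: limit 10 ug/dose, worst-case carry-over 2 ug/dose.
      print(clearance_testing_required("PSC", exposure_limit_le=10.0, max_amount_me=2.0))
      # -> False: Se = 5 >= 1, so no clearance testing is required for this reagent.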

  5. Ferrocyanide Safety Program: Data requirements for the ferrocyanide safety issue developed through the data quality objectives (DQO) process

    International Nuclear Information System (INIS)

    Buck, J.W.; Anderson, C.M.; Pulsipher, B.A.; Toth, J.J.; Turner, P.J.; Cash, R.J.; Dukelow, G.T.; Meacham, J.E.

    1993-12-01

    This document records the data quality objectives (DQO) process applied to the Ferrocyanide Waste Tank Safety Issue at the Hanford Site by the Pacific Northwest Laboratory and Westinghouse Hanford Company. Specifically, the major recommendations and findings from this Ferrocyanide DQO process are presented so that decision makers can determine the type, quantity, and quality of data required for addressing tank safety issues. The decision logic diagrams and error tolerance equations also are provided. Finally, the document includes the DQO sample-size formulas for determining specific tank sampling requirements

  6. Slow Off-rates and Strong Product Binding Are Required for Processivity and Efficient Degradation of Recalcitrant Chitin by Family 18 Chitinases.

    Science.gov (United States)

    Kurašin, Mihhail; Kuusk, Silja; Kuusk, Piret; Sørlie, Morten; Väljamäe, Priit

    2015-11-27

    Processive glycoside hydrolases are the key components of enzymatic machineries that decompose recalcitrant polysaccharides, such as chitin and cellulose. The intrinsic processivity (P(Intr)) of cellulases has been shown to be governed by the rate constant of dissociation from the polymer chain (koff). However, the reported koff values of cellulases are strongly dependent on the method used for their measurement. Here, we developed a new method for determining koff, based on measuring the exchange rate of the enzyme between a non-labeled and a (14)C-labeled polymeric substrate. The method was applied to the study of the processive chitinase ChiA from Serratia marcescens. In parallel, ChiA variants with weaker binding of the N-acetylglucosamine unit either in substrate-binding site -3 (ChiA-W167A) or the product-binding site +1 (ChiA-W275A) were studied. Both ChiA variants showed increased off-rates and lower apparent processivity on α-chitin. The rate of the production of insoluble reducing groups on the reduced α-chitin was an order of magnitude higher than koff, suggesting that the enzyme can initiate several processive runs without leaving the substrate. On crystalline chitin, the general activity of the wild type enzyme was higher, and the difference grew with hydrolysis time. On amorphous chitin, the variants clearly outperformed the wild type. A model is proposed whereby strong interactions with the polymer in the substrate-binding sites (low off-rates) and strong binding of the product in the product-binding sites (high pushing potential) are required for the removal of obstacles, such as the disintegration of chitin microfibrils. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.

  7. Strain fluctuations and elastic constants

    Energy Technology Data Exchange (ETDEWEB)

    Parrinello, M.; Rahman, A.

    1982-03-01

    It is shown that the elastic strain fluctuations are a direct measure of the elastic compliances in a general anisotropic medium; depending on the ensemble in which the fluctuation is measured, either the isothermal or the adiabatic compliances are obtained. These fluctuations can now be calculated in a constant enthalpy and pressure, and hence constant entropy, ensemble due to recent developments in molecular dynamics techniques. A calculation for a Ni single crystal under uniform uniaxial [100] tensile or compressive load is presented as an illustration of the relationships derived between the various strain fluctuations and the elastic moduli. The Born stability criteria and the behavior of strain fluctuations are shown to be related.
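
    Schematically (a generic restatement under the usual zero-mean-strain convention, not a quotation of the paper's exact formula), the fluctuation relation behind this approach can be written as

      \langle \epsilon_{ij}\,\epsilon_{kl} \rangle \;=\; \frac{k_{B}T}{V}\, S_{ijkl}

    where V is the reference volume and S_{ijkl} is the isothermal elastic compliance tensor; evaluating the same average in a constant-entropy ensemble yields the adiabatic compliances instead.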

  8. Business Process Modelling is an Essential Part of a Requirements Analysis. Contribution of EFMI Primary Care Working Group.

    Science.gov (United States)

    de Lusignan, S; Krause, P; Michalakidis, G; Vicente, M Tristan; Thompson, S; McGilchrist, M; Sullivan, F; van Royen, P; Agreus, L; Desombre, T; Taweel, A; Delaney, B

    2012-01-01

    To perform a requirements analysis of the barriers to conducting research linking of primary care, genetic and cancer data. We extended our initial data-centric approach to include socio-cultural and business requirements. We created reference models of core data requirements common to most studies using unified modelling language (UML), dataflow diagrams (DFD) and business process modelling notation (BPMN). We conducted a stakeholder analysis and constructed DFD and UML diagrams for use cases based on simulated research studies. We used research output as a sensitivity analysis. Differences between the reference model and use cases identified study specific data requirements. The stakeholder analysis identified: tensions, changes in specification, some indifference from data providers and enthusiastic informaticians urging inclusion of socio-cultural context. We identified requirements to collect information at three levels: micro- data items, which need to be semantically interoperable, meso- the medical record and data extraction, and macro- the health system and socio-cultural issues. BPMN clarified complex business requirements among data providers and vendors; and additional geographical requirements for patients to be represented in both linked datasets. High quality research output was the norm for most repositories. Reference models provide high-level schemata of the core data requirements. However, business requirements' modelling identifies stakeholder issues and identifies what needs to be addressed to enable participation.

  9. 5 CFR 582.203 - Information minimally required to accompany legal process.

    Science.gov (United States)

    2010-01-01

    ... CIVIL SERVICE REGULATIONS COMMERCIAL GARNISHMENT OF FEDERAL EMPLOYEES' PAY Service of Legal Process... to the court, or other authority, with an explanation of the deficiency. However, prior to returning...

  10. Requirements concerning radiosterilization process organization; Wymagania dotyczace organizacji procesu sterylizacji radiacyjnej

    Energy Technology Data Exchange (ETDEWEB)

    Kaluska, I [Institute of Nuclear Chemistry and Technology, Warsaw (Poland)

    1997-10-01

    The administrative procedure for licensing new materials or consumer products intended for radiosterilization is presented and explained. The organization of the irradiation process required to achieve the proper result is also described in detail. 4 refs, 1 tab.

  11. Effective Electronic Security: Process for the Development and Validation from Requirements to Testing

    Science.gov (United States)

    2013-06-01

    ...is globally recognized for the development and maintenance of standards. ASTM defines a specification as an explicit set of requirements...www.rkb.us/saver/. One of the SAVER reports, titled CCTV Technology Handbook, has a chapter on system design. The report uses terms like functional

  12. Requirements for a systems-based research and development management process in transport infrastructure engineering

    CSIR Research Space (South Africa)

    Rust, FC

    2015-05-01

    Full Text Available are not suitable for the management of such multi-disciplinary projects. This study focuses on determining the key characteristics required for a systems-based approach to the management of R&D projects. The information and data was compiled from literature reviews...

  13. 9 CFR 318.23 - Heat-processing and stabilization requirements for uncured meat patties.

    Science.gov (United States)

    2010-01-01

    ... requirements for uncured meat patties. 318.23 Section 318.23 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE AGENCY ORGANIZATION AND TERMINOLOGY; MANDATORY MEAT AND POULTRY... uncured meat patties. (a) Definitions. For purposes of this section, the following definitions shall apply...

  14. 42 CFR 3.102 - Process and requirements for initial and continued listing of PSOs.

    Science.gov (United States)

    2010-10-01

    ... conduct of patient safety activities, will take appropriate security measures to prevent unauthorized... SERVICES GENERAL PROVISIONS PATIENT SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT PSO Requirements... patient safety reporting system to which health care providers (other than members of the entity's...

  15. 75 FR 41875 - Technical Processing Requirements for Multifamily Project Mortgage Insurance

    Science.gov (United States)

    2010-07-19

    ... is used to determine if key principals are acceptable and have the ability to manage the development... principals are acceptable and have the ability to manage the development, construction, completion, and... Requirements for Multifamily Project Mortgage Insurance AGENCY: Office of the Chief Information Officer, HUD...

  16. 9 CFR 590.544 - Spray process powder; definitions and requirements.

    Science.gov (United States)

    2010-01-01

    ... requirements. 590.544 Section 590.544 Animals and Animal Products FOOD SAFETY AND INSPECTION SERVICE... removed from the primary or main drying chamber while the drying unit is in operation. (2) Secondary... bag collector chamber while the drying unit is in operation. (3) Sweep-down powder is that powder...

  17. Evaluation of the Executive Information Requirements for the Market Research Process.

    Science.gov (United States)

    Lanser, Michael A.

    A study examined the marketing research information required by those executives of Lakeshore Technical College (Wisconsin) whose decisions affect the college's direction. Data were gathered from the following sources: literature review; development of a data dictionary framework; analysis of the college's current information system through…

  18. Expected requirements in support tool for software process improvement in SMEs

    OpenAIRE

    Muñoz Mata, Mirna; Mejía Miranda, Jezreel; Amescua Seco, Antonio; Calvo-Manzano Villalón, José Antonio; Cuevas Agustín, Gonzalo; San Feliu Gilabert, Tomás

    2012-01-01

    Nowadays, being competitive is an important challenge for software development organizations. In order to achieve this, in recent years software process improvement has been an obvious and logical way forward. Unfortunately, even when many organizations are motivated to implement software process initiatives, not all know how best to do so, especially in Small and Medium Enterprises (SMEs) where, due to their special features, they have to be careful in how they manage their resources to assure their ma...

  19. Universal relation between spectroscopic constants

    Indian Academy of Sciences (India)

    (3) The author has used eq. (6) of his paper to calculate De. This relation leads to a large deviation from the correct value depending upon the extent to which experimental values are known. Guided by this fact, in our work, we used experimentally observed De values to derive the relation between spectroscopic constants.

  20. Tachyon constant-roll inflation

    Science.gov (United States)

    Mohammadi, A.; Saaidi, Kh.; Golanbari, T.

    2018-04-01

    The constant-roll inflation is studied where the inflaton is taken as a tachyon field. Based on this approach, the second slow-roll parameter is taken as a constant which leads to a differential equation for the Hubble parameter. Finding an exact solution for the Hubble parameter is difficult and leads us to a numerical solution for the Hubble parameter. On the other hand, since in this formalism the slow-roll parameter η is constant and could not be assumed to be necessarily small, the perturbation parameters should be reconsidered again which, in turn, results in new terms appearing in the amplitude of scalar perturbations and the scalar spectral index. Utilizing the numerical solution for the Hubble parameter, we estimate the perturbation parameter at the horizon exit time and compare it with observational data. The results show that, for specific values of the constant parameter η , we could have an almost scale-invariant amplitude of scalar perturbations. Finally, the attractor behavior for the solution of the model is presented, and we determine that the feature could be properly satisfied.

  1. An Einstein-Cartan Fine Structure Constant Definition

    Directory of Open Access Journals (Sweden)

    Stone R. A. Jr.

    2010-01-01

    Full Text Available The fine structure constant definition given in Stone R.A. Jr. Progress in Physics, 2010, v.1, 11-13 is compared to an Einstein-Cartan fine structure constant definition. It is shown that the Einstein-Cartan definition produces the correct pure theory value, just not the measured value. To produce the measured value, the pure theory Einstein-Cartan fine structure constant requires only the new variables and spin coupling of the fine structure constant definition in [1].

  2. GRUCAL, a computer program for calculating macroscopic group constants

    International Nuclear Information System (INIS)

    Woll, D.

    1975-06-01

    Nuclear reactor calculations require material- and composition-dependent, energy-averaged nuclear data to describe the interaction of neutrons with individual isotopes in the material compositions of reactor zones. The code GRUCAL calculates these macroscopic group constants for given compositions from the material-dependent data of the group constant library GRUBA. The instructions for calculating group constants are not fixed in the program, but are read at execution time from a separate instruction file. This allows GRUCAL to be adapted to various problems or different group constant concepts. (orig.)
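
    The core bookkeeping step ("macroscopic group constants from a composition plus microscopic data") amounts to Sigma_g = sum_i N_i * sigma_(i,g). A minimal sketch with made-up numbers, not GRUCAL's actual data structures or algorithm, is:

      # Illustrative macroscopic cross-section assembly (placeholder data).
      def macroscopic_xs(number_densities, micro_xs_by_group):
          """Sigma_g = sum_i N_i * sigma_{i,g}, per energy group, per composition."""
          n_groups = len(next(iter(micro_xs_by_group.values())))
          return [sum(number_densities[iso] * micro_xs_by_group[iso][g]
                      for iso in number_densities)
                  for g in range(n_groups)]

      # Hypothetical 3-group data: N in atoms/(barn*cm), sigma in barns -> Sigma in 1/cm.
      N = {"U238": 0.020, "O16": 0.040}
      sigma = {"U238": [10.5, 12.0, 15.0], "O16": [3.8, 3.8, 3.9]}
      print(macroscopic_xs(N, sigma))   # total macroscopic cross section per group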

  3. Determination of isotopic purity in heavy water to suit process requirement (Preprint No. CA-15)

    Energy Technology Data Exchange (ETDEWEB)

    Kanthiah, W S.A.; Srinivasan, K; Usuf Ali, M C.M. [Heavy Water Plant, Tuticorin (India)

    1989-04-01

    In hydrogen/ammonia based heavy water plants, a simple specific gravity determination of heavy water without any purification or thermostating has proved to be simple and easy. The accuracy is found to be well within ±0.5% in the isotopic purity (I.P.) range of 30 to 90% W/W. There are three main methods that can be adopted for determination of I.P. in this range: (1) refractometry, (2) infrared spectrophotometry, and (3) pycnometry. Refractometry requires thermostating and the practical accuracy attainable is ±1.5% W/W. Infrared spectrophotometry has a reported accuracy/precision of ±0.4%. Pycnometric analysis is simple, requires much less expertise and is most suited for plant analyses. An accuracy better than ±0.5% is attained without applying any correction for buoyancy, weighing to an accuracy of ±0.1 mg, measuring temperature to ±0.2 degC and with samples having pH up to 3. (author). 8 annexures.
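
    For orientation only, a naive density-to-purity conversion is sketched below; it assumes a linear density-composition relation between nominal literature densities of the pure liquids at 25 degC, which is an approximation and not the plant's calibration procedure:

      # Back-of-the-envelope pycnometric estimate (nominal reference densities).
      RHO_H2O_25C = 0.99705   # g/mL, ordinary water at 25 degC (literature value)
      RHO_D2O_25C = 1.1044    # g/mL, heavy water at 25 degC (literature value)

      def isotopic_purity_wt_percent(measured_density):
          """Approximate D2O weight percent from a measured density at 25 degC."""
          frac = (measured_density - RHO_H2O_25C) / (RHO_D2O_25C - RHO_H2O_25C)
          return 100.0 * frac

      print(round(isotopic_purity_wt_percent(1.060), 1))   # ~58.6 % W/W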

  4. 37 CFR 201.15 - Special handling of pending claims requiring expedited processing for purposes of litigation.

    Science.gov (United States)

    2010-07-01

    ... 37 Patents, Trademarks, and Copyrights 1 2010-07-01 2010-07-01 false Special handling of pending... PROVISIONS § 201.15 Special handling of pending claims requiring expedited processing for purposes of... compelling need for the service exists due to pending or prospective litigation, customs matters, or contract...

  5. Congruence from the Operator's Point of View: Compositionality Requirements on Process Semantics

    Directory of Open Access Journals (Sweden)

    Maciej Gazda

    2010-08-01

    Full Text Available One of the basic sanity properties of a behavioural semantics is that it constitutes a congruence with respect to standard process operators. This issue has been traditionally addressed by the development of rule formats for transition system specifications that define process algebras. In this paper we suggest a novel, orthogonal approach. Namely, we focus on a number of process operators, and for each of them attempt to find the widest possible class of congruences. To this end, we impose restrictions on sublanguages of Hennessy-Milner logic, so that a semantics whose modal characterization satisfies a given criterion is guaranteed to be a congruence with respect to the operator in question. We investigate action prefix, alternative composition, two restriction operators, and parallel composition.

  6. Stabilized power constant alimentation; Alimentation regulee a puissance constante

    Energy Technology Data Exchange (ETDEWEB)

    Roussel, L. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1968-06-01

    The study and realization of a stabilized constant-power supply, adjustable from 5 to 100 watts, are described. In order to achieve constant-power drift of lithium-compensated diodes, we aimed for a regulation precision of 1 per cent and a response time of less than 1 second. Recent components such as Hall-effect multipliers and integrated amplifiers make this possible and facilitate the use of interchangeable modules. (author) [French] On decrit l'etude et la realisation d'une alimentation a puissance constante reglable dans une gamme de 5 a 100 watts. Prevue pour le drift a puissance constante des diodes compensees au lithium, l'etude a ete menee en vue d'obtenir une precision de regulation de 1 pour cent et un temps de reponse inferieur a la seconde. Des systemes recents tels que multiplicateurs a effet Hall et circuits integres ont permis d'atteindre ce but tout en facilitant l'emploi de modules interchangeables. (auteur)

  7. Vega library for processing DICOM data required in Monte Carlo verification of radiotherapy treatment plans

    International Nuclear Information System (INIS)

    Locke, C.; Zavgorodni, S.; British Columbia Cancer Agency, Vancouver Island Center, Victoria BC

    2008-01-01

    Monte Carlo (MC) methods provide the most accurate dose calculations to date in heterogeneous media and complex geometries, and this spawns increasing interest in incorporating MC calculations into the treatment planning quality assurance process. This involves MC dose calculations for clinically produced treatment plans. To perform these calculations, a number of treatment plan parameters specifying radiation beam

  8. 40 CFR Table 3 to Subpart Ooo of... - Batch Process Vent Monitoring Requirements

    Science.gov (United States)

    2010-07-01

    ... records as specified in § 63.1416(d).b Condenser a Exit (product side) temperature Continuous records as....1416(d).b Boiler or process heater with a design heat input capacity less than 44 megawatts and where... inspections were performed as specified in § 63.1416(d). Scrubber, absorber, condenser, and carbon adsorber...

  9. 48 CFR 1352.237-70 - Security processing requirements-high or moderate risk contracts.

    Science.gov (United States)

    2010-10-01

    ... background inquiries pertaining to verification of name, physical description, marital status, present and... undergo security processing by the Department's Office of Security before being eligible to work on the.... citizens must have: (1) Official legal status in the United States; (2) Continuously resided in the United...

  10. Integrating Behavioral-Motive and Experiential-Requirement Perspectives on Psychological Needs: A Two Process Model

    Science.gov (United States)

    Sheldon, Kennon M.

    2011-01-01

    Psychological need theories offer much explanatory potential for behavioral scientists, but there is considerable disagreement and confusion about what needs are and how they work. A 2-process model of psychological needs is outlined, viewing needs as evolved functional systems that provide both (a) innate psychosocial motives that tend to impel…

  11. Requirements for the workflow-based support of release management processes in the automotive sector

    NARCIS (Netherlands)

    Bestfleisch, U.; Herbst, J.; Reichert, M.U.; Abdelmalek, B.

    One of the challenges the automotive industry currently has to master is the complexity of the electrical/electronic system of a car. One key factor for reaching short product development cycles and high quality in this area is well-defined, properly executed test and release processes. In this

  12. Arabidopsis Intracellular NHX-Type Sodium-Proton Antiporters are Required for Seed Storage Protein Processing.

    Science.gov (United States)

    Ashnest, Joanne R; Huynh, Dung L; Dragwidge, Jonathan M; Ford, Brett A; Gendall, Anthony R

    2015-11-01

    The Arabidopsis intracellular sodium-proton exchanger (NHX) proteins AtNHX5 and AtNHX6 have a well-documented role in plant development, and have been used to improve salt tolerance in a variety of species. Despite evidence that intracellular NHX proteins are important in vacuolar trafficking, the mechanism of this role is poorly understood. Here we show that NHX5 and NHX6 are necessary for processing of the predominant seed storage proteins, and also influence the processing and activity of a vacuolar processing enzyme. Furthermore, we show by yeast two-hybrid and bimolecular fluorescence complementation (BiFC) technology that the C-terminal tail of NHX6 interacts with a component of Retromer, another component of the cell sorting machinery, and that this tail is critical for NHX6 activity. These findings demonstrate that NHX5 and NHX6 are important in processing and activity of vacuolar cargo, and suggest a mechanism by which NHX intracellular (IC)-II antiporters may be involved in subcellular trafficking. © The Author 2015. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  13. Some aspects of preparation and testing of group constants group constant system ABBN-90

    International Nuclear Information System (INIS)

    Nikolaev, M.N.; Tsiboulia, A.M.; Manturov, G.N.

    1996-01-01

    This paper presents an overview of activities performed to prepare and test the group constants ABBN-90. The ABBN-90 set is designed for application calculations of fast, intermediate and thermal nuclear reactors. The calculations of subgroup parameters are discussed. The processing code system GRUCON is mentioned in comparison to the NJOY code system. Proposals are made for future activities. (author). Figs, tabs

  14. Modelling health care processes for eliciting user requirements: a way to link a quality paradigm and clinical information system design.

    Science.gov (United States)

    Staccini, P; Joubert, M; Quaranta, J F; Fieschi, D; Fieschi, M

    2001-12-01

    Healthcare institutions are looking at ways to increase their efficiency by reducing costs while providing care services with a high level of safety. Thus, hospital information systems have to support quality improvement objectives. The elicitation of the requirements has to meet users' needs in relation to both the quality (efficacy, safety) and the monitoring of all health care activities (traceability). Information analysts need methods to conceptualise clinical information systems that provide actors with individual benefits and guide behavioural changes. A methodology is proposed to elicit and structure users' requirements using a process-oriented analysis, and it is applied to the blood transfusion process. An object-oriented data model of a process has been defined in order to organise the data dictionary. Although some aspects of activity, such as 'where', 'what else', and 'why' are poorly represented by the data model alone, this method of requirement elicitation fits the dynamic of data input for the process to be traced. A hierarchical representation of hospital activities has to be found for the processes to be interrelated, and for their characteristics to be shared, in order to avoid data redundancy and to fit the gathering of data with the provision of care.

  15. Isoprenylation is required for the processing of the lamin A precursor

    International Nuclear Information System (INIS)

    Beck, L.A.; Hosick, T.J.; Sinensky, M.

    1990-01-01

    The nuclear lamina proteins, prelamin A, lamin B, and a 70-kD lamina-associated protein, are posttranslationally modified by a metabolite derived from mevalonate. This modification can be inhibited by treatment with (3-R,S)-3-fluoromevalonate, demonstrating that it is isoprenoid in nature. We have examined the association between isoprenoid metabolism and processing of the lamin A precursor in human and hamster cells. Inhibition of 3-hydroxy-3-methylglutaryl coenzyme A reductase by mevinolin (lovastatin) specifically depletes endogenous isoprenoid pools and inhibits the conversion of prelamin A to lamin A. Prelamin A processing is also blocked by mevalonate starvation of Mev-1, a CHO cell line auxotrophic for mevalonate. Moreover, inhibition of prelamin A processing by mevinolin treatment is rapidly reversed by the addition of exogenous mevalonate. Processing of prelamin A is, therefore, dependent on isoprenoid metabolism. Analysis of the conversion of prelamin A to lamin A by two independent methods, immunoprecipitation and two-dimensional nonequilibrium pH gel electrophoresis, demonstrates that a precursor-product relationship exists between prelamin A and lamin A. Analysis of R,S-[5-3H(N)]mevalonate-labeled cells shows that the rate of turnover of the isoprenoid group from prelamin A is comparable to the rate of conversion of prelamin A to lamin A. These results suggest that during the proteolytic maturation of prelamin A, the isoprenylated moiety is lost. A significant difference between prelamin A processing, and that of p21ras and the B-type lamins that undergo isoprenylation-dependent proteolytic maturation, is that the mature form of lamin A is no longer isoprenylated

  16. Performing the processing required for automatically get a PDF/A version of the CERN Library documentation

    CERN Document Server

    Molina Garcia-Retamero, Antonio

    2015-01-01

    The aim of the project was to perform the processing required to automatically get a PDF/A version of the CERN Library documentation. For this, it is necessary to extract as much metadata as possible from the source files and to inject the required data into the original source files, creating new ones ready to be compiled with all related dependencies. Besides, I've proposed the creation of an HTML version consistent with the PDF and navigable for easy access; I've been trying to perform some Natural Language Processing for extracting metadata; and I've proposed the injection of the CERN Library documentation into the HTML version of the long writeups where it is referenced (for instance, when a CERN Library function is referenced in a sample code). Finally, I've designed and implemented a Graphical User Interface in order to simplify the process for the user.

  17. Implications of safety requirements for the treatment of THMC processes in geological disposal systems for radioactive waste

    Directory of Open Access Journals (Sweden)

    Frédéric Bernier

    2017-06-01

    Full Text Available The mission of nuclear safety authorities in national radioactive waste disposal programmes is to ensure that people and the environment are protected against the hazards of ionising radiations emitted by the waste. It implies the establishment of safety requirements and the oversight of the activities of the waste management organisation in charge of implementing the programme. In Belgium, the safety requirements for geological disposal rest on the following principles: defence-in-depth, demonstrability and the radiation protection principles elaborated by the International Commission on Radiological Protection (ICRP. Applying these principles requires notably an appropriate identification and characterisation of the processes upon which the safety functions fulfilled by the disposal system rely and of the processes that may affect the system performance. Therefore, research and development (R&D on safety-relevant thermo-hydro-mechanical-chemical (THMC issues is important to build confidence in the safety assessment. This paper points out the key THMC processes that might influence radionuclide transport in a disposal system and its surrounding environment, considering the dynamic nature of these processes. Their nature and significance are expected to change according to prevailing internal and external conditions, which evolve from the repository construction phase to the whole heating–cooling cycle of decaying waste after closure. As these processes have a potential impact on safety, it is essential to identify and to understand them properly when developing a disposal concept to ensure compliance with relevant safety requirements. In particular, the investigation of THMC processes is needed to manage uncertainties. This includes the identification and characterisation of uncertainties as well as for the understanding of their safety-relevance. R&D may also be necessary to reduce uncertainties of which the magnitude does not allow

  18. EUROPEAN INTEGRATION: A MULTILEVEL PROCESS THAT REQUIRES A MULTILEVEL STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Roxana-Otilia-Sonia HRITCU

    2015-11-01

    Full Text Available Involving market regulation and a system of multi-level governance with several supranational, national and subnational levels of decision making, European integration is a multilevel phenomenon. The individual characteristics of citizens, as well as the environment where the integration process takes place, are important. To understand European integration and its consequences it is important to develop and test multi-level theories that consider individual-level characteristics as well as the overall context where individuals act and express their characteristics. A central argument of this paper is that support for European integration is influenced by factors operating at different levels. We review and present theories and related research on the use of multilevel analysis in the European area. This paper draws insights from various aspects and consequences of European integration to take stock of what we know about how and why to use multilevel modeling.

  19. Decision-tree approach to evaluating inactive uranium-processing sites for liner requirements

    International Nuclear Information System (INIS)

    Relyea, J.F.

    1983-03-01

    Recently, concern has been expressed about potential toxic effects of both radon emission and release of toxic elements in leachate from inactive uranium mill tailings piles. Remedial action may be required to meet disposal standards set by the states and the US Environmental Protection Agency (EPA). In some cases, a possible disposal option is the exhumation and reburial (either on site or at a new location) of tailings and reliance on engineered barriers to satisfy the objectives established for remedial actions. Liners under disposal pits are the major engineered barrier for preventing contaminant release to ground and surface water. The purpose of this report is to provide a logical sequence of action, in the form of a decision tree, which could be followed to show whether a selected tailings disposal design meets the objectives for subsurface contaminant release without a liner. This information can be used to determine the need and type of liner for sites exhibiting a potential groundwater problem. The decision tree is based on the capability of hydrologic and mass transport models to predict the movement of water and contaminants with time. The types of modeling capabilities and data needed for those models are described, and the steps required to predict water and contaminant movement are discussed. A demonstration of the decision tree procedure is given to aid the reader in evaluating the need for, and adequacy of, a liner

  20. Determination of isotopic purity in heavy water to suit process requirement (Preprint No. CA-15)

    International Nuclear Information System (INIS)

    Kanthiah, W.S.A.; Srinivasan, K.; Usuf Ali, M.C.M.

    1989-04-01

    In hydrogen/ammonia based heavy water plants, a simple specific gravity determination of heavy water without any purification or thermostating has proved to be simple and easy. The accuracy is found to be well within ± 0.5% in the isotopic purity (I.P.) range of 30 to 90% W/W. There are three main methods that can be adopted for determination of I.P. in this range: (1) refractometry, (2) infrared spectrophotometry, and (3) pycnometry. Refractometry requires thermostating and the practical accuracy attainable is ± 1.5% W/W. Infrared spectrophotometry has a reported accuracy/precision of ± 0.4%. Pycnometric analysis is simple, requires much less expertise and is most suited for plant analyses. An accuracy better than ± 0.5% is attained without applying any correction for buoyancy, weighing to an accuracy of ± 0.1 mg, measuring temperature to ± 0.2 degC and with samples having pH up to 3. (author). 8 annexures

  1. 40 CFR 63.118 - Process vent provisions-periodic reporting and recordkeeping requirements.

    Science.gov (United States)

    2010-07-01

    ... device or other means to achieve and maintain a TRE index value greater than 1.0 but less than 4.0 as... subpart and who elects to demonstrate compliance with the TRE index value greater than 4.0 under § 63.113... § 63.115(e) of this subpart, is made that causes a Group 2 process vent with a TRE greater than 4.0 to...

  2. Reprocessing and disposal of used lubricating and process materials. requirements, problems, and solution methods

    Energy Technology Data Exchange (ETDEWEB)

    Matzke, U D

    1978-02-01

    A discussion covers West German laws concerning used oil disposal and re-refining (316,000 tons were reprocessed in 1976); disposal of sulfuric acid resins or tar and fuller's earth containing mineral oils by solidification (with added lime, alkali ash, clay, etc.) or pyrolysis; disposal of rolling mill scale and sludge containing oil and grease by rolling with a solid carbonaceous material and processing to high-grade sponge iron; and the breaking of oil-water emulsions.

  3. On data processing required to derive mobility patterns from passively-generated mobile phone data

    Science.gov (United States)

    Wang, Feilong; Chen, Cynthia

    2018-01-01

    Passively-generated mobile phone data is emerging as a potential data source for transportation research and applications. Despite the large number of studies based on mobile phone data, only a few have reported the properties of such data and documented how they have processed it. In this paper, we describe two types of common mobile phone data: Call Detail Record (CDR) data and sightings data, and propose a data processing framework and the associated algorithms to address two key issues associated with the sightings data: locational uncertainty and oscillation. We show the effectiveness of our proposed methods in addressing these two issues compared to the state-of-the-art algorithms in the field. We also demonstrate that without proper processing applied to the data, the statistical regularity of human mobility patterns—a key, significant trait identified for human mobility—is over-estimated. We hope this study will stimulate more studies in examining the properties of such data and developing methods to address them. Though not as glamorous as work that directly derives insights on mobility patterns (such as statistical regularity), understanding the properties of such data and developing methods to address them is a fundamental research topic on which important insights about mobility patterns ultimately rest. PMID:29398790
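
    To make the oscillation issue concrete, the sketch below shows one simple style of heuristic often applied to sightings data: collapsing rapid A-B-A switches between neighbouring cell towers that reflect handovers rather than movement. It is only an illustration with assumed names and an assumed time window, not the framework proposed by the authors.

        from datetime import datetime, timedelta

        # Toy oscillation filter: if a trace jumps A -> B -> A within a short
        # window, the middle sighting at B is treated as a tower switch, not a move.
        # The 5-minute window is an assumed, illustrative threshold.

        def suppress_oscillation(sightings, window=timedelta(minutes=5)):
            """sightings: list of (timestamp, cell_id) tuples sorted by time."""
            keep = [True] * len(sightings)
            for i in range(1, len(sightings) - 1):
                t_prev, c_prev = sightings[i - 1]
                t_curr, c_curr = sightings[i]
                t_next, c_next = sightings[i + 1]
                if c_prev == c_next and c_curr != c_prev and (t_next - t_prev) <= window:
                    keep[i] = False
            return [s for s, k in zip(sightings, keep) if k]

        trace = [(datetime(2018, 1, 1, 9, 0), "A"),
                 (datetime(2018, 1, 1, 9, 2), "B"),
                 (datetime(2018, 1, 1, 9, 3), "A"),
                 (datetime(2018, 1, 1, 9, 30), "C")]
        print(suppress_oscillation(trace))  # the 09:02 sighting at B is dropped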

  4. Evolution of the solar constant

    International Nuclear Information System (INIS)

    Newman, M.J.

    1978-01-01

    The ultimate source of the energy utilized by life on Earth is the Sun, and the behavior of the Sun determines to a large extent the conditions under which life originated and continues to thrive. What can be said about the history of the Sun? Has the solar constant, the rate at which energy is received by the Earth from the Sun per unit area per unit time, been constant at its present level since Archean times? Three mechanisms by which it has been suggested that the solar energy output can vary with time are discussed, characterized by long (approx. 10^9 years), intermediate (approx. 10^8 years), and short (approx. years to decades) time scales.

  5. Calculation of magnetic hyperfine constants

    International Nuclear Information System (INIS)

    Bufaical, R.F.; Maffeo, B.; Brandi, H.S.

    1975-01-01

    The magnetic hyperfine constants of the V sub(K) center in CaF₂, SrF₂ and BaF₂ have been calculated assuming a phenomenological model, based on the F₂⁻ 'central molecule', to describe the wavefunction of the defect. Calculations have shown that the introduction of a small degree of covalence between this central molecule and neighboring ions is necessary to improve the description of the electronic structure of the defect. It was also shown that the results for the hyperfine constants are strongly dependent on the relaxations of the ions neighboring the central molecule; these relaxations have been determined by fitting the experimental data. The present results are compared with previous calculations in which similar and different theoretical methods have been used.

  6. Development of high temperature containerless processing equipment and the design and evaluation of associated systems required for microgravity materials processing and property measurements

    Science.gov (United States)

    Rey, Charles A.

    1991-03-01

    The development of high temperature containerless processing equipment and the design and evaluation of associated systems required for microgravity materials processing and property measurements are discussed. Efforts were directed towards the following task areas: design and development of a High Temperature Acoustic Levitator (HAL) for containerless processing and property measurements at high temperatures; testing of the HAL module to establish this technology for use as a positioning device for microgravity uses; construction and evaluation of a brassboard hot wall Acoustic Levitation Furnace; construction and evaluation of a noncontact temperature measurement (NCTM) system based on AGEMA thermal imaging camera; construction of a prototype Division of Amplitude Polarimetric Pyrometer for NCTM of levitated specimens; evaluation of and recommendations for techniques to control contamination in containerless materials processing chambers; and evaluation of techniques for heating specimens to high temperatures for containerless materials experimentation.

  7. Development of high temperature containerless processing equipment and the design and evaluation of associated systems required for microgravity materials processing and property measurements

    Science.gov (United States)

    Rey, Charles A.

    1991-01-01

    The development of high temperature containerless processing equipment and the design and evaluation of associated systems required for microgravity materials processing and property measurements are discussed. Efforts were directed towards the following task areas: design and development of a High Temperature Acoustic Levitator (HAL) for containerless processing and property measurements at high temperatures; testing of the HAL module to establish this technology for use as a positioning device for microgravity uses; construction and evaluation of a brassboard hot wall Acoustic Levitation Furnace; construction and evaluation of a noncontact temperature measurement (NCTM) system based on AGEMA thermal imaging camera; construction of a prototype Division of Amplitude Polarimetric Pyrometer for NCTM of levitated specimens; evaluation of and recommendations for techniques to control contamination in containerless materials processing chambers; and evaluation of techniques for heating specimens to high temperatures for containerless materials experimentation.

  8. On the gravitational constant change

    International Nuclear Information System (INIS)

    Milyukov, V.K.

    1986-01-01

    The present-day viewpoint on the problem of the invariability of the gravitational constant G is presented in brief. The methods and results of checking the dependence of G on the nature of substance (tests of the equivalence principle), the dependence of G on distance (tests of Newton's law of gravity) and on time (cosmological experiments) are presented. It is pointed out that none of the experiments performed so far gives any reason to doubt the constancy of G in space and time or its independence of the nature of the substance.

  9. Dry-grind processing using amylase corn and superior yeast to reduce the exogenous enzyme requirements in bioethanol production.

    Science.gov (United States)

    Kumar, Deepak; Singh, Vijay

    2016-01-01

    The conventional corn dry-grind ethanol production process requires exogenous alpha-amylase and glucoamylase enzymes to break down starch into glucose, which is fermented to ethanol by yeast. This study evaluates the potential use of new genetically engineered corn and yeast, which can eliminate or minimize the use of these external enzymes, improve the economics and process efficiencies, and simplify the process. An approach of in situ ethanol removal during fermentation was also investigated for its potential to improve the efficiency of high-solid fermentation, which can significantly reduce the downstream ethanol and co-product recovery cost. Fermentation of amylase corn (producing endogenous α-amylase) using conventional yeast and no added exogenous α-amylase resulted in an ethanol concentration 4.1% higher than the control treatment (conventional corn using exogenous α-amylase). Conventional corn processed with exogenous α-amylase and superior yeast (producing glucoamylase, or GA) with no exogenous glucoamylase addition resulted in an ethanol concentration similar to the control treatment (conventional yeast with exogenous glucoamylase addition). The combination of amylase corn and superior yeast required only 25% of the recommended glucoamylase dose to complete fermentation and achieve an ethanol concentration and yield similar to the control treatment (conventional corn with exogenous α-amylase, conventional yeast with exogenous glucoamylase). Use of superior yeast with 50% GA addition resulted in similar yield increases for conventional or amylase corn of approximately 7% compared to the control treatment. The combination of amylase corn, superior yeast, and in situ ethanol removal resulted in a process that allowed complete fermentation of 40% slurry solids with only 50% of the exogenous GA enzyme requirement and a 64.6% higher ethanol yield compared to the conventional process. Use of amylase corn and superior yeast in the dry-grind processing industry

  10. Requirements for a systems-based research and development management process in transport infrastructure engineering

    Directory of Open Access Journals (Sweden)

    Rust, Frederik Christoffel

    2015-05-01

    Full Text Available The management of research and development (R&D) in the transport infrastructure field is complex due to the multidisciplinary nature of the work. The literature shows that linear R&D models that progress from idea through to consumer product are not suitable for the management of such multi-disciplinary projects. This study focuses on determining the key characteristics required for a systems-based approach to the management of R&D projects. The information and data were compiled from literature reviews, interviews, and an e-mail survey with responses from 42 significant international R&D programmes. The findings confirmed the need for a systems-based approach to R&D management. The study formulated twelve principles or tenets for a new, systems-based approach.

  11. Astronaut suitability requirements and selection process; Uchu hikoshi tanjo eno michi (shishitsu yokyu)

    Energy Technology Data Exchange (ETDEWEB)

    Miyoshi, H. [National Space Development Agency of Japan, Tokyo (Japan)

    1999-10-05

    Manned space activities at the National Space Development Agency of Japan and the suitability requirements that an astronaut is expected to satisfy are described. In the first phase, candidates had to participate in a manned space experiment utilizing a NASA space shuttle, and in 1985 Mori, Mukai, and Doi were selected to be payload specialists. In the second phase, Astronauts Wakata, Doi, and Mori were sent to the mission specialist training course (mission specialist being one of the jobs aboard a space shuttle), in preparation for the construction and operation of the international space station. In January 1996, Astronaut Wakata performed extravehicular tool manipulation and related tasks, and Astronaut Doi did the same in 1997. The qualities that an astronaut is expected to have include undoubted professionalism, adaptability to fields outside his own, adaptability to a prolonged stay in space, a spirit of teamwork and coordination, and the ability to perform a wide range of duties aboard an international space station. (NEDO)

  12. Emergent gravity in spaces of constant curvature

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez, Orlando; Haddad, Matthew [Department of Physics, University of Miami,1320 Campo Sano Ave, Coral Gables, FL 33146 (United States)

    2017-03-07

    In physical theories where the energy (action) is localized near a submanifold of a constant curvature space, there is a universal expression for the energy (or the action). We derive a multipole expansion for the energy that has a finite number of terms and depends on intrinsic geometric invariants of the submanifold and extrinsic invariants of the embedding of the submanifold. This is the second of a pair of articles in which we try to develop a theory of emergent gravity arising from the embedding of a submanifold into an ambient space equipped with a quantum field theory. Our theoretical method requires a generalization of a formula due to Hermann Weyl. While the first paper discussed the framework in Euclidean (Minkowski) space, here we discuss how this framework generalizes to spaces of constant sectional curvature. We focus primarily on anti de Sitter space. We then discuss how such a theory can give rise to a cosmological constant and Planck mass that are within reasonable bounds of the experimental values.

  13. Radiation balances and the solar constant

    Science.gov (United States)

    Crommelynck, D.

    1981-01-01

    The radiometric concepts are defined in order to consider various types of radiation balances and relate them to the diabatic form of the energy balance. The variability in space and time of the components of the radiation field is presented. A specific sweeping concept tailored to the requirements is proposed. Finally, after establishing the incomplete character of present knowledge of the radiation balance, the results of the latest observations of the solar constant are given. Ground and satellite measurement techniques are discussed.

  14. Antigen processing and remodeling of the endosomal pathway: requirements for antigen cross-presentation.

    Science.gov (United States)

    Compeer, Ewoud Bernardus; Flinsenberg, Thijs Willem Hendrik; van der Grein, Susanna Geertje; Boes, Marianne

    2012-01-01

    Cross-presentation of endocytosed antigen as peptide/class I major histocompatibility complex complexes plays a central role in the elicitation of CD8(+) T cell clones that mediate anti-viral and anti-tumor immune responses. While it has been clear that there are specific subsets of professional antigen presenting cells capable of antigen cross-presentation, identification of the mechanisms involved is still ongoing. Especially amongst dendritic cells (DC), there are specialized subsets that are highly proficient at antigen cross-presentation. We here present a focused survey on the cell biological processes in the endosomal pathway that support antigen cross-presentation. This review highlights DC-intrinsic mechanisms that facilitate the cross-presentation of endocytosed antigen, including receptor-mediated uptake, maturation-induced endosomal sorting of membrane proteins, dynamic remodeling of endosomal structures and cell surface-directed endosomal trafficking. We will conclude with a description of pathogen-induced deviation of endosomal processing, and discuss how immune evasion strategies pertaining to endosomal trafficking may preclude antigen cross-presentation.

  15. Antigen processing and remodeling of the endosomal pathway: requirements for antigen cross-presentation.

    Directory of Open Access Journals (Sweden)

    Ewoud Bernardus Compeer

    2012-03-01

    Full Text Available The cross-presentation of endocytosed antigen as peptide/class I MHC complexes plays a central role in the elicitation of CD8+ T cell clones that mediate anti-viral and anti-tumor immune responses. While it has been clear that there are specific subsets of professional antigen presenting cells (APC) capable of antigen cross-presentation, description of the mechanisms involved is still ongoing. Especially amongst dendritic cells (DC), there are specialized subsets that are highly proficient at antigen cross-presentation. We here present a focused survey on the cell biological processes in the endosomal pathway that support antigen cross-presentation. This review highlights DC-intrinsic mechanisms that facilitate the cross-presentation of endocytosed antigen, including receptor-mediated uptake, recycling and maturation including the sorting of membrane proteins, dynamic remodeling of endosomal structures and cell surface-directed endosomal trafficking. We will conclude with a description of pathogen-induced deviation of endosomal processing, and discuss how immune evasion strategies pertaining to endosomal trafficking may preclude antigen cross-presentation.

  16. Fine-structure constant: Is it really a constant

    International Nuclear Information System (INIS)

    Bekenstein, J.D.

    1982-01-01

    It is often claimed that the fine-structure 'constant' α is shown to be strictly constant in time by a variety of astronomical and geophysical results. These constrain its fractional rate of change α̇/α to at least some orders of magnitude below the Hubble rate H₀. We argue that the conclusion is not as straightforward as claimed since there are good physical reasons to expect α̇/α ≠ 0. We propose to decide the issue by constructing a framework for α variability based on very general assumptions: covariance, gauge invariance, causality, and time-reversal invariance of electromagnetism, as well as the idea that the Planck-Wheeler length (10⁻³³ cm) is the shortest scale allowable in any theory. The framework endows α with well-defined dynamics, and entails a modification of Maxwell electrodynamics. It proves very difficult to rule it out with purely electromagnetic experiments. In a cosmological setting, the framework predicts an α̇/α which can be compatible with the astronomical constraints; hence, these are too insensitive to rule out α variability. There is marginal conflict with the geophysical constraints; however, no firm decision is possible because of uncertainty about various cosmological parameters. By contrast, the framework's predictions for spatial gradients of α are in fatal conflict with the results of the Eötvös-Dicke-Braginsky experiments. Hence these tests of the equivalence principle rule out with confidence spacetime variability of α at any level.

  17. Description of quantum coherence in thermodynamic processes requires constraints beyond free energy

    Science.gov (United States)

    Lostaglio, Matteo; Jennings, David; Rudolph, Terry

    2015-01-01

    Recent studies have developed fundamental limitations on nanoscale thermodynamics, in terms of a set of independent free energy relations. Here we show that free energy relations cannot properly describe quantum coherence in thermodynamic processes. By casting time-asymmetry as a quantifiable, fundamental resource of a quantum state, we arrive at an additional, independent set of thermodynamic constraints that naturally extend the existing ones. These asymmetry relations reveal that the traditional Szilárd engine argument does not extend automatically to quantum coherences, but instead only relational coherences in a multipartite scenario can contribute to thermodynamic work. We find that coherence transformations are always irreversible. Our results also reveal additional structural parallels between thermodynamics and the theory of entanglement. PMID:25754774

  18. Description of quantum coherence in thermodynamic processes requires constraints beyond free energy

    Science.gov (United States)

    Lostaglio, Matteo; Jennings, David; Rudolph, Terry

    2015-03-01

    Recent studies have developed fundamental limitations on nanoscale thermodynamics, in terms of a set of independent free energy relations. Here we show that free energy relations cannot properly describe quantum coherence in thermodynamic processes. By casting time-asymmetry as a quantifiable, fundamental resource of a quantum state, we arrive at an additional, independent set of thermodynamic constraints that naturally extend the existing ones. These asymmetry relations reveal that the traditional Szilárd engine argument does not extend automatically to quantum coherences, but instead only relational coherences in a multipartite scenario can contribute to thermodynamic work. We find that coherence transformations are always irreversible. Our results also reveal additional structural parallels between thermodynamics and the theory of entanglement.

  19. A 3D bioprinting exemplar of the consequences of the regulatory requirements on customized processes.

    Science.gov (United States)

    Hourd, Paul; Medcalf, Nicholas; Segal, Joel; Williams, David J

    2015-01-01

    Computer-aided 3D printing approaches to the industrial production of customized 3D functional living constructs for restoration of tissue and organ function face significant regulatory challenges. Using the manufacture of a customized, 3D-bioprinted nasal implant as a well-informed but hypothetical exemplar, we examine how these products might be regulated. Existing EU and USA regulatory frameworks do not account for the differences between 3D printing and conventional manufacturing methods or the ability to create individual customized products using mechanized rather than craft approaches. Already subject to extensive regulatory control, issues related to control of the computer-aided design to manufacture process and the associated software system chain present additional scientific and regulatory challenges for manufacturers of these complex 3D-bioprinted advanced combination products.

  20. RNA Processing Factor 5 is required for efficient 5' cleavage at a processing site conserved in RNAs of three different mitochondrial genes in Arabidopsis thaliana.

    Science.gov (United States)

    Hauler, Aron; Jonietz, Christian; Stoll, Birgit; Stoll, Katrin; Braun, Hans-Peter; Binder, Stefan

    2013-05-01

    The 5' ends of many mitochondrial transcripts are generated post-transcriptionally. Recently, we identified three RNA PROCESSING FACTORs required for 5' end maturation of different mitochondrial mRNAs in Arabidopsis thaliana. All of these factors are pentatricopeptide repeat proteins (PPRPs), highly similar to the RESTORERS OF FERTILITY (RF) that rescue male fertility in cytoplasmic male-sterile lines from different species. Therefore, we suggested a general role of these RF-like PPRPs in mitochondrial 5' processing. We have now identified RNA PROCESSING FACTOR 5, a PPRP not classified as an RF-like protein, required for the efficient 5' maturation of the nad6 and atp9 mRNAs as well as the 26S rRNA. The precursor molecules of these RNAs share conserved sequence elements, approximately ranging from positions -50 to +9 relative to the mature 5' mRNA termini, suggesting that these sequences are at least part of the cis elements required for processing. The knockout of RPF5 has only a moderate influence on 5' processing of the atp9 mRNA, whereas the generation of the mature nad6 mRNA and 26S rRNA is almost completely abolished in the mutant. The latter leads to a 50% decrease of total 26S rRNA species, resulting in an imbalance between the large rRNA and the 18S rRNA. Despite these severe changes in RNA levels and in the proportion between the 26S and 18S rRNAs, mitochondrial protein levels appear to be unaltered in the mutant, whereas seed germination capacity is markedly reduced. © 2013 The Authors The Plant Journal © 2013 John Wiley & Sons Ltd.

  1. EDF training process: From the training needs to the training requirements

    International Nuclear Information System (INIS)

    Poizat, C.

    2002-01-01

    The Training and Development Division - SFP - is the main EDF actor in strategic skills development. It is the prime contractor designated by the Nuclear Generation Division (DPN). The four main SFP goals for the period 2001-2003 are as follows: to satisfy our customers; to optimize and diversify our offer in reply to the needs of our customers and to adapt our skills; to improve our efficiency (cost-effectiveness ratio); and to reinforce the Quality Management of the SFP. The SFP has an ISO 9001-oriented quality policy. It is based on six commitments: to take into account the needs of each client regarding training; to meet the requirements of the quality approaches followed by our clients; to support the national specifications in the training area; to inform SFP customers about the whole training offer; to give a first answer within 8 days of each complaint; and to provide trainees with a good, well-adapted learning environment. A customer satisfaction survey twice a year and frequent internal and external audits and assessments guarantee these commitments. The main challenge for the Nuclear Generation Division (DPN) is to improve performance in safety and competitiveness and to increase the professionalism of the people involved. That is why the partnership between the DPN and the SFP is now based on skills management rather than on course organisation.

  2. ER-associated degradation is required for vasopressin prohormone processing and systemic water homeostasis

    Science.gov (United States)

    Somlo, Diane R.M.; Kim, Geun Hyang; Prescianotto-Baschong, Cristina; Sun, Shengyi; Beuret, Nicole; Long, Qiaoming; Rutishauser, Jonas

    2017-01-01

    Peptide hormones are crucial regulators of many aspects of human physiology. Mutations that alter these signaling peptides are associated with physiological imbalances that underlie diseases. However, the conformational maturation of peptide hormone precursors (prohormones) in the ER remains largely unexplored. Here, we report that conformational maturation of proAVP, the precursor for the antidiuretic hormone arginine-vasopressin, within the ER requires the ER-associated degradation (ERAD) activity of the Sel1L-Hrd1 protein complex. Serum hyperosmolality induces expression of both ERAD components and proAVP in AVP-producing neurons. Mice with global or AVP neuron–specific ablation of Sel1L-Hrd1 ERAD progressively developed polyuria and polydipsia, characteristics of diabetes insipidus. Mechanistically, we found that ERAD deficiency causes marked ER retention and aggregation of a large proportion of all proAVP protein. Further, we show that proAVP is an endogenous substrate of Sel1L-Hrd1 ERAD. The inability to clear misfolded proAVP with highly reactive cysteine thiols in the absence of Sel1L-Hrd1 ERAD causes proAVP to accumulate and participate in inappropriate intermolecular disulfide–bonded aggregates, promoted by the enzymatic activity of protein disulfide isomerase (PDI). This study highlights a pathway linking ERAD to prohormone conformational maturation in neuroendocrine cells, expanding the role of ERAD in providing a conducive ER environment for nascent proteins to reach proper conformation. PMID:28920920

  3. Seismic procurement requirements at the FPR (Fuel Processing Restoration) facility at INEL (Idaho National Engineering Laboratory)

    International Nuclear Information System (INIS)

    Bingham, G.E.; Hardy, G.S.; Griffin, M.J.

    1989-01-01

    Traditional methods used to seismically qualify equipment for new facilities have been either testing or analysis. Testing programs are generally expensive and their input loadings are conservative. It is also generally recognized that standard seismic analysis techniques produce overly conservative results: seismic loads and response levels are typically calculated for equipment that far exceed the values actually experienced in earthquakes. A more efficient method for demonstrating the seismic adequacy of equipment has been developed, based on conclusions derived from studying the performance of equipment that has been subjected to actual earthquake excitations. The earthquake experience data indicate that damage or malfunction of most types of equipment subjected to earthquakes is far less than that predicted by traditional testing and analysis techniques. The use of conclusions derived from experience data provides a more realistic approach to assessing the seismic ruggedness of equipment. By recognizing the inherently higher capacity that exists in specific classes of equipment, vendors can often supply off-the-shelf equipment without the need to perform expensive modifications to meet requirements imposed by conservative qualification analyses. This paper describes the development of the experience-based method for equipment seismic qualification and its application at the FPR facility.

  4. Hypoxic survival requires a 2-on-2 hemoglobin in a process involving nitric oxide

    Science.gov (United States)

    Hemschemeier, Anja; Düner, Melis; Casero, David; Merchant, Sabeeha S.; Winkler, Martin; Happe, Thomas

    2013-01-01

    Hemoglobins are recognized today as a diverse family of proteins present in all kingdoms of life and performing multiple reactions beyond O2 chemistry. The physiological roles of most hemoglobins remain elusive. Here, we show that a 2-on-2 (“truncated”) hemoglobin, termed THB8, is required for hypoxic growth and the expression of anaerobic genes in Chlamydomonas reinhardtii. THB8 is 1 of 12 2-on-2 hemoglobins in this species. It belongs to a subclass within the 2-on-2 hemoglobin class I family whose members feature a remarkable variety of domain arrangements and lengths. Posttranscriptional silencing of the THB8 gene results in the mis-regulation of several genes and a growth defect under hypoxic conditions. The latter is intensified in the presence of an NO scavenger, which also impairs growth of wild-type cells. As recombinant THB8 furthermore reacts with NO, the results of this study indicate that THB8 is part of an NO-dependent signaling pathway. PMID:23754374

  5. TruMicro Series 2000 sub-400 fs class industrial fiber lasers: adjustment of laser parameters to process requirements

    Science.gov (United States)

    Kanal, Florian; Kahmann, Max; Tan, Chuong; Diekamp, Holger; Jansen, Florian; Scelle, Raphael; Budnicki, Aleksander; Sutter, Dirk

    2017-02-01

    The matchless properties of ultrashort laser pulses, such as the enabling of cold processing and non-linear absorption, pave the way to numerous novel applications. Ultrafast lasers arrived in the last decade at a level of reliability suitable for the industrial environment. Within the next few years many industrial manufacturing processes in several markets will be replaced by laser-based processes due to their well-known benefits: non-contact, wear-free processing, higher process accuracy, increased processing speed, and often improved economic efficiency compared to conventional processes. Furthermore, new processes will arise with novel sources, addressing previously unsolved challenges. One technical requirement for these exciting new applications will be to optimize the large number of available parameters to the requirements of the application. In this work we present an ultrafast laser system distinguished by its capability to combine high flexibility and real-time process-inherent adjustment of the parameters with industry-ready reliability. This industry-ready reliability is ensured by long experience in designing and building ultrashort-pulse lasers in combination with rigorous optimization of the mechanical construction, optical components and the entire laser head for continuous performance. By introducing a new generation of mechanical design in the last few years, TRUMPF enabled its ultrashort-laser platforms to fulfill the very demanding requirements for passively coupling high-energy single-mode radiation into a hollow-core transport fiber. The laser architecture presented here is based on the all-fiber MOPA (master oscillator power amplifier) CPA (chirped pulse amplification) technology. The pulses are generated in a high repetition rate mode-locked fiber oscillator also enabling flexible pulse bursts (groups of multiple pulses) with 20 ns intra-burst pulse separation. An external acousto-optic modulator (XAOM) enables linearization

  6. SEISMIC DESIGN REQUIREMENTS SELECTION METHODOLOGY FOR THE SLUDGE TREATMENT and M-91 SOLID WASTE PROCESSING FACILITIES PROJECTS

    International Nuclear Information System (INIS)

    RYAN GW

    2008-01-01

    In complying with direction from the U.S. Department of Energy (DOE), Richland Operations Office (RL) (07-KBC-0055, 'Direction Associated with Implementation of DOE-STD-1189 for the Sludge Treatment Project,' and 08-SED-0063, 'RL Action on the Safety Design Strategy (SDS) for Obtaining Additional Solid Waste Processing Capabilities (M-91 Project) and Use of Draft DOE-STD-1189-YR'), it has been determined that the seismic design requirements currently in the Project Hanford Management Contract (PHMC) will be modified by DOE-STD-1189, Integration of Safety into the Design Process (March 2007 draft), for these two key PHMC projects. Seismic design requirements for other PHMC facilities and projects will remain unchanged. Considering the current early Critical Decision (CD) phases of both the Sludge Treatment Project (STP) and the Solid Waste Processing Facilities (M-91) Project, and a strong intent to avoid potentially costly rework of both engineering and nuclear safety analyses, this document describes how Fluor Hanford, Inc. (FH) will maintain compliance with the PHMC by considering both the current seismic standards referenced by DOE O 420.1B, Facility Safety, and draft DOE-STD-1189 (i.e., ASCE/SEI 43-05, Seismic Design Criteria for Structures, Systems, and Components in Nuclear Facilities, and ANSI/ANS 2.26-2004, Categorization of Nuclear Facility Structures, Systems and Components for Seismic Design, as modified by draft DOE-STD-1189) to choose the criteria that will result in the most conservative seismic design categorization and engineering design. Following the process described in this document will result in a conservative seismic design categorization and design products. This approach is expected to resolve discrepancies between the existing and new requirements and reduce the risk that project designs and analyses will require revision when the draft DOE-STD-1189 is finalized.

  7. Xenopus laevis Kif18A is a highly processive kinesin required for meiotic spindle integrity

    Directory of Open Access Journals (Sweden)

    Martin M. Möckel

    2017-04-01

    Full Text Available The assembly and functionality of the mitotic spindle depend on the coordinated activities of microtubule-associated motor proteins of the dynein and kinesin superfamily. Our current understanding of the function of motor proteins is significantly shaped by studies using Xenopus laevis egg extract, as its open structure allows complex experimental manipulations hardly feasible in other model systems. Yet, the Kinesin-8 orthologue of human Kif18A has not been described in Xenopus laevis so far. Here, we report the cloning and characterization of Xenopus laevis (Xl) Kif18A. Xenopus Kif18A is expressed during oocyte maturation and its depletion from meiotic egg extract results in severe spindle defects. These defects can be rescued by wild-type Kif18A, but not by Kif18A lacking motor activity or the C-terminus. Single-molecule microscopy assays revealed that Xl_Kif18A possesses high processivity, which depends on an additional C-terminal microtubule-binding site. Human tissue culture cells depleted of endogenous Kif18A display mitotic defects, which can be rescued by wild-type, but not tail-less, Xl_Kif18A. Thus, Xl_Kif18A is the functional orthologue of human Kif18A, whose activity is essential for the correct function of meiotic spindles in Xenopus oocytes.

  8. Bilateral generic working memory circuit requires left-lateralized addition for verbal processing.

    Science.gov (United States)

    Ray, Manaan Kar; Mackay, Clare E; Harmer, Catherine J; Crow, Timothy J

    2008-06-01

    According to the Baddeley-Hitch model, phonological and visuospatial representations are separable components of working memory (WM) linked by a central executive. The traditional view that the separation reflects the relative contribution of the 2 hemispheres (verbal WM--left; spatial WM--right) has been challenged by the position that a common bilateral frontoparietal network subserves both domains. Here, we test the hypothesis that there is a generic WM circuit that recruits additional specialized regions for verbal and spatial processing. We designed a functional magnetic resonance imaging paradigm to elicit activation in the WM circuit for verbal and spatial information using identical stimuli and applied this in 33 healthy controls. We detected left-lateralized quantitative differences in the left frontal and temporal lobe for verbal > spatial WM but no areas of activation for spatial > verbal WM. We speculate that spatial WM is analogous to a "generic" bilateral frontoparietal WM circuit we inherited from our great ape ancestors that evolved, by recruitment of additional left-lateralized frontal and temporal regions, to accommodate language.

  9. Tank Waste Remediation System tank waste pretreatment and vitrification process development testing requirements assessment

    International Nuclear Information System (INIS)

    Howden, G.F.

    1994-01-01

    A multi-faceted study was initiated in November 1993 to provide assurance that needed testing capabilities, facilities, and support infrastructure (sampling systems, casks, transportation systems, permits, etc.) would be available when needed for process and equipment development to support pretreatment and vitrification facility design and construction schedules. This first major report provides a snapshot of the known testing needs for pretreatment, low-level waste (LLW) and high-level waste (HLW) vitrification, and documents the results of a series of preliminary studies and workshops to define the issues needing resolution by cold or hot testing. Identified in this report are more than 140 Hanford Site tank waste pretreatment and LLW/HLW vitrification technology issues that can only be resolved by testing. The report also broadly characterizes the level of testing needed to resolve each issue. A second report will provide a strategy(ies) for ensuring timely test capability. Later reports will assess the capabilities of existing facilities to support needed testing and will recommend siting of the tests together with needed facility and infrastructure upgrades or additions

  10. Cross sections for electron and photon processes required by electron-transport calculations

    International Nuclear Information System (INIS)

    Peek, J.M.

    1979-11-01

    Electron-transport calculations rely on a large collection of electron-atom and photon-atom cross-section data to represent the response characteristics of the target medium. These basic atomic-physics quantities, and certain quantities derived from them that are now in common use, are critically reviewed. Publications appearing after 1978 are not given consideration. Processes involving electron or photon energies less than 1 keV are ignored, while an attempt is made to exhaustively cover the remaining independent parameters and target possibilities. Cases for which data improvements can be made from existing information are identified. Ranges of parameters for which state-of-the-art data are not available are sought out, and recommendations for explicit measurements and/or calculations with presently available tools are presented. An attempt is made to identify the maturity of the atomic-physics data and to predict the possibilities for rapid changes in the quality of the data. Finally, weaknesses in the state-of-the-art atomic-physics data and in the conceptual usage of these data in the context of electron-transport theory are discussed. Brief attempts are made to weigh the various aspects of these questions and to suggest possible remedies.

  11. Tank Waste Remediation System tank waste pretreatment and vitrification process development testing requirements assessment

    Energy Technology Data Exchange (ETDEWEB)

    Howden, G.F.

    1994-10-24

    A multi-faceted study was initiated in November 1993 to provide assurance that needed testing capabilities, facilities, and support infrastructure (sampling systems, casks, transportation systems, permits, etc.) would be available when needed for process and equipment development to support pretreatment and vitrification facility design and construction schedules. This first major report provides a snapshot of the known testing needs for pretreatment, low-level waste (LLW) and high-level waste (HLW) vitrification, and documents the results of a series of preliminary studies and workshops to define the issues needing resolution by cold or hot testing. Identified in this report are more than 140 Hanford Site tank waste pretreatment and LLW/HLW vitrification technology issues that can only be resolved by testing. The report also broadly characterizes the level of testing needed to resolve each issue. A second report will provide a strategy(ies) for ensuring timely test capability. Later reports will assess the capabilities of existing facilities to support needed testing and will recommend siting of the tests together with needed facility and infrastructure upgrades or additions.

  12. Cryptography in constant parallel time

    CERN Document Server

    Applebaum, Benny

    2013-01-01

    Locally computable (NC0) functions are 'simple' functions for which every bit of the output can be computed by reading a small number of bits of their input. The study of locally computable cryptography attempts to construct cryptographic functions that achieve this strong notion of simplicity and simultaneously provide a high level of security. Such constructions are highly parallelizable and they can be realized by Boolean circuits of constant depth. This book establishes, for the first time, the possibility of local implementations for many basic cryptographic primitives such as one-way func
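
    As a concrete illustration of local computability (a toy example, not a construction from the book and not claimed to be secure), the function below maps n input bits to n output bits, where each output bit reads only five fixed input positions and combines them with an XOR-AND style predicate of the kind discussed in this line of work.

        import random

        # Toy NC0-style function: every output bit depends on 5 fixed input bits,
        # combined by the predicate x1 XOR x2 XOR x3 XOR (x4 AND x5).
        # Illustrative only; real constructions choose the wiring with care.

        def local_function(x, wiring):
            """x: list of input bits; wiring: list of 5-tuples of input indices."""
            return [x[a] ^ x[b] ^ x[c] ^ (x[d] & x[e]) for (a, b, c, d, e) in wiring]

        n = 16
        random.seed(0)
        wiring = [tuple(random.sample(range(n), 5)) for _ in range(n)]
        x = [random.randint(0, 1) for _ in range(n)]
        print(local_function(x, wiring))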

  13. Can coupling constants be related

    International Nuclear Information System (INIS)

    Nandi, Satyanarayan; Ng, Wing-Chiu.

    1978-06-01

    We analyze the conditions under which several coupling constants in field theory can be related to each other. When the relation is independent of the renormalization point, the relation between any g and g' must satisfy a differential equation, as follows from the renormalization group equations. Using this differential equation, we investigate the criteria for the feasibility of a power-series relation for various theories, especially the Weinberg-Salam type (including Higgs bosons) with an arbitrary number of quark and lepton flavors. (orig./WL)
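
    The differential equation referred to here follows from requiring that a functional relation g' = F(g) be preserved when the renormalization point μ is changed. A standard way to state this consistency condition (written here for illustration, with β-functions defined by μ dg/dμ = β_g(g)) is:

        \mu \frac{dg'}{d\mu} = \beta_{g'}(g'), \qquad g' = F(g)
        \;\;\Longrightarrow\;\;
        \beta_{g'}\bigl(F(g)\bigr) = \frac{dF}{dg}\,\beta_g(g),

    so a candidate power-series relation F must satisfy this first-order differential equation order by order.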

  14. Exact constants in approximation theory

    CERN Document Server

    Korneichuk, N

    1991-01-01

    This book is intended as a self-contained introduction for non-specialists, or as a reference work for experts, to the particular area of approximation theory that is concerned with exact constants. The results apply mainly to extremal problems in approximation theory, which in turn are closely related to numerical analysis and optimization. The book encompasses a wide range of questions and problems: best approximation by polynomials and splines; linear approximation methods, such as spline-approximation; optimal reconstruction of functions and linear functionals. Many of the results are base

  15. Hydrodynamic constants from cosmic censorship

    International Nuclear Information System (INIS)

    Nakamura, Shin

    2008-01-01

    We study a gravity dual of Bjorken flow of N=4 SYM-theory plasma. We point out that the cosmic censorship hypothesis may explain why the regularity of the dual geometry constrains the hydrodynamic constants. We also investigate the apparent horizon of the dual geometry. We find that the dual geometry constructed on Fefferman-Graham (FG) coordinates is not appropriate for examination of the apparent horizon since the coordinates do not cover the trapped region. However, the preliminary analysis on FG coordinates suggests that the location of the apparent horizon is very sensitive to the hydrodynamic parameters. (author)

  16. The continuous improvement of the Internal Audits Process assurance the effective compliance of ISO 17025:2005 requirements

    Directory of Open Access Journals (Sweden)

    Carina Di Candia

    2011-04-01

    Full Text Available The Continuous Improvement Process started in LATU in 1996. Its impact was so important that it covered the whole organization. Nowadays LATU has almost all of its processes certified and more than 200 tests accredited. The internal audits process began in 1996 with an annual plan covering all the laboratory's areas. For the UKAS accreditation in 1998, LATU improved the internal audit planning by auditing not only the system but also the tests. In 1999 LATU was certified by SQS and its calibrations were accredited by DKD. Since 2004 internal audits have been managed as a process; to that end, the objectives, indicators, goals and necessary resources of the internal audit programme and process were defined. The internal audit programme has a pre-defined three-year plan that includes all the laboratory areas. The results of the measurements obtained so far demonstrate the improvement in the internal audit and in all the laboratory processes. The final auditor staff increased its technical competence. As a consequence of managing internal audits as a process, internal communication has gained an important role in feeding back the continuous improvement of the laboratory. This was evidenced by a decrease in documentary non-conformities, improvement of the calibration and maintenance programmes, optimization of staff training and qualification, common internal training sessions, the creation of a quality assurance team to improve test control, and an improvement in the relationship with the support areas. Most of these requirements are included in ISO 17025:2005, which assures effective compliance with the standard.

  17. Tank 241-C-106 sampling data requirements developed through the data quality objectives (DQO) process

    International Nuclear Information System (INIS)

    Wang, O.S.; Bell, K.E.; Anderson, C.M.; Peffers, M.S.; Pulsipher, B.A.; Scott, J.L.

    1994-01-01

    The rate of heat generation for tank 241-C-106 at the Hanford Site is estimated at more than 100,000 Btu/h. The heat is generated primarily from the radioactive decay of ⁹⁰Sr waste that was inadvertently transferred into the tank in the late 1960s. If proper tank cooling is not maintained for this tank, heat-induced structural damage to the tank's concrete shell could result in the release of nuclear waste to the environment. Because of high-heat concerns, in January 1991 tank 241-C-106 was designated as a Watch List tank and deemed a Priority 1 safety issue. The Waste Tank Safety Program (WTSP) is responsible for the resolution of this safety issue. Although forced cooling is effective in the short term, the long-term resolution for tank cooling is waste retrieval. The Single-Shell Tank Retrieval Project (Retrieval) is responsible for the safe retrieval and transfer of radioactive waste from tank 241-C-106 to a selected double-shell tank. This data quality objective (DQO) study is an effort to determine engineering and design data needs for the WTSP and to assist Retrieval in designing contingency-action retrieval systems. The 7-step DQO process is a tool developed by the Environmental Protection Agency with the goal of identifying needs and reducing costs. This report discusses the results of two DQO efforts, for the WTSP and for Retrieval. The key data needs to support the WTSP are thermal conductivity, permeability, and heat load profile. For Retrieval support, nine and three data needs were identified for retrieval engineering system design and HVAC system design, respectively. The updated schedule to drill two core samples using rotary mode is set for March 1994. The analysis of the samples is expected to be completed by September 1994.

  18. Ventricular fibrillation time constant for swine

    International Nuclear Information System (INIS)

    Wu, Jiun-Yan; Sun, Hongyu; Nimunkar, Amit J; Webster, John G; O'Rourke, Ann; Huebner, Shane; Will, James A

    2008-01-01

    The strength–duration curve for cardiac excitation can be modeled by a parallel resistor–capacitor circuit that has a time constant. Experiments on six pigs were performed by delivering current from the X26 Taser dart at a distance from the heart to cause ventricular fibrillation (VF). The X26 Taser is an electromuscular incapacitation device (EMD), which generates about 50 kV and delivers a pulse train of about 15–19 pulses s⁻¹ with a pulse duration of about 150 µs and a peak current of about 2 A. Similarly, a continuous 60 Hz alternating current of the amplitude required to cause VF was delivered from the same distance. The average current and duration of the current pulse were estimated in both sets of experiments. The strength–duration equation was solved to yield an average time constant of 2.87 ms ± 1.90 (SD). The results obtained may help in the development of safety standards for future electromuscular incapacitation devices (EMDs) without requiring additional animal tests.
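
    For illustration only, the sketch below solves one commonly used form of the strength-duration relation (Lapicque's hyperbolic form, I(t) = I_rh·(1 + τ/t)) for the time constant τ from two (pulse duration, threshold current) pairs. The form of the equation and the numbers are assumptions made for the example; the paper's exact formulation and data may differ.

        # Solve Lapicque's strength-duration relation I(t) = I_rh * (1 + tau/t)
        # for the time constant tau, given two (duration, threshold current) points.
        # The numbers below are hypothetical, chosen only to show the algebra.

        def time_constant(t1, i1, t2, i2):
            """Durations in seconds, currents in amperes; returns (tau, rheobase)."""
            tau = (i2 - i1) / (i1 / t2 - i2 / t1)
            i_rheobase = i1 / (1.0 + tau / t1)
            return tau, i_rheobase

        # Hypothetical thresholds: ~2 A for a 150-us pulse and ~0.1 A for an
        # 8.33-ms half-cycle of 60 Hz current.
        tau, i_rh = time_constant(150e-6, 2.0, 8.33e-3, 0.1)
        print(f"tau = {tau * 1e3:.2f} ms, rheobase = {i_rh * 1e3:.0f} mA")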

  19. Time-dependent 31P saturation transfer in the phosphoglucomutase reaction. Characterization of the spin system for the Cd(II) enzyme and evaluation of rate constants for the transfer process

    International Nuclear Information System (INIS)

    Post, C.B.; Ray, W.J. Jr.; Gorenstein, D.G.

    1989-01-01

    Time-dependent ³¹P saturation-transfer studies were conducted with the Cd²⁺-activated form of muscle phosphoglucomutase to probe the origin of the 100-fold difference between its catalytic efficiency (in terms of k_cat) and that of the more efficient Mg²⁺-activated enzyme. The present paper describes the equilibrium mixture of phosphoglucomutase and its substrate/product pair when the concentration of the Cd²⁺ enzyme approaches that of the substrate and how the nine-spin ³¹P NMR system provided by this mixture was treated. It shows that the presence of abortive complexes is not a significant factor in the reduced activity of the Cd²⁺ enzyme since the complex of the dephosphoenzyme and glucose 1,6-bisphosphate, which accounts for a large majority of the enzyme present at equilibrium, is catalytically competent. It also shows that rate constants for saturation transfer obtained at three different ratios of enzyme to free substrate are mutually compatible. These constants, which were measured at chemical equilibrium, can be used to provide a quantitative kinetic rationale for the reduced steady-state activity elicited by Cd²⁺ relative to Mg²⁺. They also provide minimal estimates of 350 and 150 s⁻¹ for the rate constants describing (PO₃⁻) transfer from the Cd²⁺ phosphoenzyme to the 6-position of bound glucose 1-phosphate and to the 1-position of bound glucose 6-phosphate, respectively. These minimal estimates are compared with analogous estimates for the Mg²⁺ and Li⁺ forms of the enzyme in the accompanying paper.
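
    As background on how exchange rate constants are typically extracted from time-dependent saturation-transfer data (a generic two-site Forsén–Hoffman treatment, not the authors' nine-spin analysis), saturating site B at t = 0 gives M_A(t)/M_A(0) = f + (1 − f)·exp(−t/τ_obs), with f = R1A/(R1A + k) and 1/τ_obs = R1A + k, so the exchange rate constant k and the relaxation rate R1A follow from the steady-state ratio and the observed decay time. A minimal sketch with invented numbers:

        # Generic two-site Forsen-Hoffman analysis of saturation-transfer data
        # (illustrative; the paper's nine-spin treatment is more involved).
        #   M_A(t)/M_A(0) = f + (1 - f) * exp(-t / tau_obs)
        #   f = R1A / (R1A + k),   1 / tau_obs = R1A + k

        def exchange_rate(f_steady_state, tau_obs_s):
            """Return (k, R1A) from the steady-state ratio and observed decay time."""
            k = (1.0 - f_steady_state) / tau_obs_s
            r1a = f_steady_state / tau_obs_s
            return k, r1a

        k, r1a = exchange_rate(f_steady_state=0.25, tau_obs_s=0.003)  # invented values
        print(f"k = {k:.0f} s^-1, R1A = {r1a:.0f} s^-1")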

  20. Cosmological constant and general isocurvature initial conditions

    International Nuclear Information System (INIS)

    Trotta, R.; Riazuelo, A.; Durrer, R.

    2003-01-01

    We investigate in detail the question of whether a nonvanishing cosmological constant is required by the present-day cosmic microwave background and large scale structure data when general isocurvature initial conditions are taken into account. We also discuss the differences between the usual Bayesian and the frequentist approaches in data analysis. We show that the Cosmic Background Explorer (COBE)-normalized matter power spectrum is dominated by the adiabatic mode and therefore breaks the degeneracy between initial conditions which is present in the cosmic microwave background anisotropies. We find that in a flat universe the Bayesian analysis requires Ω_Λ ≠ 0 at more than 3σ, while in the frequentist approach Ω_Λ = 0 is still within 3σ for a value of h ≤ 0.48. Both conclusions hold regardless of the initial conditions.

  1. Relaxing a large cosmological constant

    International Nuclear Information System (INIS)

    Bauer, Florian; Sola, Joan; Stefancic, Hrvoje

    2009-01-01

    The cosmological constant (CC) problem is the biggest enigma of theoretical physics ever. In recent times, it has been rephrased as the dark energy (DE) problem in order to encompass a wider spectrum of possibilities. It is, in any case, a polyhedric puzzle with many faces, including the cosmic coincidence problem, i.e. why the density of matter ρ_m is presently so close to the CC density ρ_Λ. However, the oldest, toughest and most intriguing face of this polyhedron is the big CC problem, namely why the measured value of ρ_Λ at present is so small as compared to any typical density scale existing in high energy physics, especially taking into account the many phase transitions that our Universe has undergone since the early times, including inflation. In this Letter, we propose to extend the field equations of General Relativity by including a class of invariant terms that automatically relax the value of the CC irrespective of the initial size of the vacuum energy in the early epochs. We show that, at late times, the Universe enters an eternal de Sitter stage mimicking a tiny positive cosmological constant. Thus, these models could be able to solve the big CC problem without fine-tuning and have also a bearing on the cosmic coincidence problem. Remarkably, they mimic the ΛCDM model to a large extent, but they still leave some characteristic imprints that should be testable in the next generation of experiments.

  2. WIPP conceptual design report. Addendum M. Computer system and data processing requirements for Waste Isolation Pilot Plant (WIPP)

    International Nuclear Information System (INIS)

    Young, R.

    1977-06-01

    Data-processing requirements for the Waste Isolation Pilot Plant (WIPP) dictate a computing system that can provide a wide spectrum of data-processing needs on a 24-hour-day basis over an indeterminate time. A computer system is defined as a computer or computers complete with all peripheral equipment and extensive software and communications capabilities, including an operating system, compilers, assemblers, loaders, etc., all applicable to real-world problems. The computing system must be extremely reliable and easily expandable in both hardware and software to provide for future capabilities with a minimum impact on the existing applications software and operating system. The computer manufacturer or WIPP operating contractor must provide continuous on-site computer maintenance (maintain an adequate inventory of spare components and parts to guarantee a minimum mean-time-to-repair of any portion of the computer system). The computer operating system or monitor must process a wide mix of application programs and languages, yet be readily changeable to obtain maximum computer usage. The WIPP computing system must handle three general types of data processing requirements: batch, interactive, and real-time. These are discussed. Data bases, data collection systems, scientific and business systems, building and facilities, remote terminals and locations, and cables are also discussed

  3. Water in the Mendoza, Argentina, food processing industry: water requirements and reuse potential of industrial effluents in agriculture

    Directory of Open Access Journals (Sweden)

    Alicia Elena Duek

    2016-04-01

    Full Text Available This paper estimates the volume of water used by the Mendoza food processing industry considering different water efficiency scenarios. The potential for using food processing industry effluents for irrigation is also assessed. The methodology relies upon information collected from interviews with qualified informants from different organizations and food-processing plants in Mendoza selected from a targeted sample. Scenarios were developed using local and international secondary information sources. The results show that food processing plants in Mendoza use 19.65 hm³ of water per year; efficient water management practices would make it possible to reduce water use by 64%, i.e., to 7.11 hm³. At present, 70% of the water is used by the fruit and vegetable processing industry, 16% by wineries, 8% by mineral water bottling plants, and the remaining 6% by olive oil, beer and soft drink plants. The volume of effluents from the food processing plants in Mendoza has been estimated at 16.27 hm³ per year. Despite the seasonal variations of these effluents, and the high sodium concentration and electrical conductivity of some of them, it is possible to use them for irrigation purposes. However, because of these variables and their environmental impact, land treatment is required.
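
    A short arithmetic restatement of the figures quoted above (the sector shares and the 64% reduction are taken from the abstract; the rounded percentages explain the small difference from the reported 7.11 hm³):

        # Re-expressing the reported Mendoza water-use figures.
        total_hm3 = 19.65
        shares = {"fruit and vegetable processing": 0.70, "wineries": 0.16,
                  "mineral water bottling": 0.08, "olive oil, beer and soft drinks": 0.06}

        for sector, share in shares.items():
            print(f"{sector}: {total_hm3 * share:.2f} hm^3/yr")

        efficient_use = total_hm3 * (1 - 0.64)   # 64% reduction scenario
        print(f"efficient-management scenario: {efficient_use:.2f} hm^3/yr (reported: 7.11)")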

  4. Challenges for the registration of vaccines in emerging countries: Differences in dossier requirements, application and evaluation processes.

    Science.gov (United States)

    Dellepiane, Nora; Pagliusi, Sonia

    2018-06-07

    The divergence of regulatory requirements and processes in developing and emerging countries contributes to hampering vaccine registration, and therefore delays access to high-quality, safe and efficacious vaccines for their respective populations. This report focuses on providing insights into the heterogeneity of registration requirements in terms of numbering structure and overall content of dossiers for marketing authorisation applications for vaccines in different areas of the world. While it also illustrates the divergence of regulatory processes in general, as well as the need to avoid redundant reviews, it does not claim to provide a comprehensive view of all processes or existing facilitating mechanisms, nor is it intended to touch upon the differences in assessments made by different regulatory authorities. This report describes the work of regulatory experts from vaccine manufacturing companies, analysed during a meeting held in Geneva in May 2017, in identifying and quantifying differences in the requirements for vaccine registration in three aspects of comparison: the dossier numbering structure and contents, the application forms, and the evaluation procedures, in different countries and regions. Module 1 of the Common Technical Document (CTD) was compared for 10 countries. Modules 2-5 of the CTDs of two regions and three countries were compared to the CTD of the US FDA. The application forms of eight countries were compared, and the registration procedures of 134 importing countries were compared as well. The analysis indicates a high degree of divergence in numbering structure and content requirements. Possible interventions that would lead to significant improvements in registration efficiency include alignment of CTD numbering structure, a standardised model application form, and better convergence of evaluation procedures. Copyright © 2018.

  5. Constant Proportion Debt Obligations (CPDOs)

    DEFF Research Database (Denmark)

    Cont, Rama; Jessen, Cathrine

    2012-01-01

    Constant Proportion Debt Obligations (CPDOs) are structured credit derivatives that generate high coupon payments by dynamically leveraging a position in an underlying portfolio of investment-grade index default swaps. CPDO coupons and principal notes received high initial credit ratings from the major rating agencies, based on complex models for the joint transition of ratings and spreads for all names in the underlying portfolio. We propose a parsimonious model for analysing the performance of CPDO strategies using a top-down approach that captures the essential risk factors of the CPDO. Our analysis shows that the default risk of the CPDO can be made arbitrarily small, and thus the credit rating arbitrarily high, by increasing leverage, but the ratings obtained strongly depend on assumptions on the credit environment (high spread or low spread). More importantly, CPDO loss distributions are found to exhibit a wide range of tail risk measures.

  6. Energy, stability and cosmological constant

    International Nuclear Information System (INIS)

    Deser, S.

    1982-01-01

    The definition of energy and its use in studying stability in general relativity are extended to the case when there is a nonvanishing cosmological constant Λ. Existence of energy is first demonstrated for any model (with arbitrary Λ). It is defined with respect to sets of solutions tending asymptotically to any background space possessing timelike Killing symmetry, and is both conserved and of flux integral form. When Λ > 0, small excitations about de Sitter space are stable inside the event horizon. Outside the horizon, excitations can contribute negatively owing to the flip of the Killing vector there. This is a universal phenomenon associated with the possibility of Hawking radiation. Apart from this effect, the Λ > 0 theory appears to be stable, also at the semi-classical level. (author)

  7. Evolution of the solar 'constant'

    Energy Technology Data Exchange (ETDEWEB)

    Newman, M J

    1980-06-01

    Variations in solar luminosity over geological time are discussed in light of the effect of the solar constant on the evolution of life on earth. Consideration is given to long-term (5 - 7% in a billion years) increases in luminosity due to the conversion of hydrogen into helium in the solar interior, temporary enhancements to solar luminosity due to the accretion of matter from the interstellar medium at intervals on the order of 100 million years, and small-amplitude rapid fluctuations of luminosity due to the stochastic nature of convection on the solar surface. It is noted that encounters with dense interstellar clouds could have had serious consequences for life on earth due to the peaking of the accretion-induced luminosity variation at short wavelengths.

  8. Asymptotics with positive cosmological constant

    Science.gov (United States)

    Bonga, Beatrice; Ashtekar, Abhay; Kesavan, Aruna

    2014-03-01

    Since observations to date imply that our universe has a positive cosmological constant, one needs an extension of the theory of isolated systems and gravitational radiation in full general relativity from asymptotically flat to asymptotically de Sitter space-times. In current definitions, one mimics the boundary conditions used in the asymptotically AdS context to conclude that the asymptotic symmetry group is the de Sitter group. However, these conditions severely restrict radiation and in fact rule out any non-zero flux of energy, momentum and angular momentum carried by gravitational waves. Therefore, these formulations of asymptotically de Sitter space-times are uninteresting beyond non-radiative spacetimes. The situation is compared and contrasted with conserved charges and fluxes at null infinity in asymptotically flat space-times.

  9. The fundamental constants a mystery of physics

    CERN Document Server

    Fritzsch, Harald

    2009-01-01

    The speed of light, the fine structure constant, and Newton's constant of gravity — these are just three among the many physical constants that define our picture of the world. Where do they come from? Are they constant in time and across space? In this book, physicist and author Harald Fritzsch invites the reader to explore the mystery of the fundamental constants of physics in the company of Isaac Newton, Albert Einstein, and a modern-day physicist.

  10. Mathematical modeling of radiation-chemical processes in HNO3 solutions of Pu. 5. Effect of [HNO3] on rate constants of radiation-chemical and chemical reactions of Pu ions

    International Nuclear Information System (INIS)

    Vladimirova, M.V.

    1993-01-01

    Dependences of rate constants on [HNO 3 ] are obtained for the reactions Pu(IV) + OH, Pu(IV) + NO 3 , Pu(V) + NO 2 , Pu(III) + NO 2 , Pu(V) + Pu(III), Pu(IV) + Pu(IV), and Pu(V) + Pu(V). These dependences are obtained for [HNO 3 ] = 0.3-6 M using existing experimental and literature data and the data obtained using mathematical modeling. The correctness of the resulting dependences is checked by comparing the calculated and experimental kinetic laws for the behavior of Pu in 0.3, 0.4, 0.6, and 1.6 M HNO 3 . 17 refs., 15 figs., 2 tabs

  11. Harmonized Constraints in Software Engineering and Acquisition Process Management Requirements are the Clue to Meet Future Performance Goals Successfully in an Environment of Scarce Resources

    National Research Council Canada - National Science Library

    Reich, Holger

    2008-01-01

    This MBA project investigates the importance of correctly deriving requirements from the capability gap and operational environment, and translating them into the processes of contracting, software...

  12. Embedded XML DOM Parser: An Approach for XML Data Processing on Networked Embedded Systems with Real-Time Requirements

    Directory of Open Access Journals (Sweden)

    Cavia Soto MAngeles

    2008-01-01

    Full Text Available Trends in control and automation show an increase in data processing and communication in embedded automation controllers. The eXtensible Markup Language (XML) is emerging as a dominant data syntax, fostering interoperability, yet little is still known about how to provide predictable real-time performance in XML processing, as required in the domain of industrial automation. This paper presents an XML processor that is designed with such real-time performance in mind. The publication discloses insight gained in applying techniques such as object pooling and reuse, and other methods targeted at avoiding dynamic memory allocation and its consequent memory fragmentation. Benchmarking tests are reported in order to illustrate the benefits of the approach.
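
    The object pooling and reuse technique mentioned in this abstract can be illustrated with a minimal sketch; the Node and NodePool names below are hypothetical, and Python is used only for brevity, whereas an embedded implementation would pre-allocate in a systems language.

```python
# Minimal sketch of object pooling to avoid dynamic allocation during parsing.
# Node and NodePool are illustrative names, not the parser described in the paper.

class Node:
    """A reusable DOM-like node; its containers are cleared, not reallocated."""
    __slots__ = ("name", "attributes", "children")

    def __init__(self):
        self.name = ""
        self.attributes = {}
        self.children = []

    def reset(self):
        self.name = ""
        self.attributes.clear()
        self.children.clear()


class NodePool:
    """Pre-allocates all nodes up front so parsing allocates no new objects."""

    def __init__(self, capacity):
        self._free = [Node() for _ in range(capacity)]

    def acquire(self):
        if not self._free:
            raise RuntimeError("pool exhausted; size it for the worst-case document")
        return self._free.pop()

    def release(self, node):
        node.reset()              # clear state so the node can be reused later
        self._free.append(node)


# Usage: acquire nodes while building a document tree, release them afterwards.
pool = NodePool(capacity=1024)
root = pool.acquire()
root.name = "message"
pool.release(root)
```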

  13. Arrhenius Rate: constant volume burn

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-12-06

    A constant volume burn occurs for an idealized initial state in which a large volume of reactants at rest is suddenly raised to a high temperature and begins to burn. Due to the uniform spatial state, there is no fluid motion and no heat conduction. This reduces the time evolution to an ODE for the reaction progress variable. With an Arrhenius reaction rate, two characteristics of thermal ignition are illustrated: induction time and thermal runaway. The Frank-Kamenetskii approximation then leads to a simple expression for the adiabatic induction time. For a first order reaction, the analytic solution is derived and used to illustrate the effect of varying the activation temperature; in particular, on the induction time. In general, the ODE can be solved numerically. This is used to illustrate the effect of varying the reaction order. We note that for a first order reaction, the time evolution of the reaction progress variable has an exponential tail. In contrast, for a reaction order less than one, the reaction completes in a finite time. The reaction order also affects the induction time.
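
    A minimal numerical sketch of such a constant-volume burn, assuming a first-order reaction with an Arrhenius rate and a simple linear coupling between temperature and reaction progress; all parameter values are illustrative, not taken from the report.

```python
# Sketch of a constant-volume burn with an Arrhenius rate for a first-order
# reaction: dlam/dt = k*(1 - lam)*exp(-Ta/T), with T = T0 + Q*lam standing in
# for the temperature rise as the reactant is consumed.  All values illustrative.
import numpy as np

def burn(T0=1000.0, Ta=10000.0, Q=3000.0, k=1.0e5, dt=1.0e-5, t_end=0.1):
    lam, t, history = 0.0, 0.0, []
    while t < t_end and lam < 0.999:
        T = T0 + Q * lam                                # temperature rises with progress
        lam += dt * k * (1.0 - lam) * np.exp(-Ta / T)   # explicit Euler step
        t += dt
        history.append((t, lam))
    return np.array(history)

h = burn()
# Induction time: progress stays small for a while, then thermal runaway sets in.
idx = int(np.searchsorted(h[:, 1], 0.01))
print(f"approximate induction time: {h[min(idx, len(h) - 1), 0]:.4e} s")
```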

  14. Adapting the SpaceCube v2.0 Data Processing System for Mission-Unique Application Requirements

    Science.gov (United States)

    Petrick, David; Gill, Nat; Hasouneh, Munther; Stone, Robert; Winternitz, Luke; Thomas, Luke; Davis, Milton; Sparacino, Pietro; Flatley, Thomas

    2015-01-01

    The SpaceCube™ v2.0 system is a superior high performance, reconfigurable, hybrid data processing system that can be used in a multitude of applications including those that require a radiation hardened and reliable solution. This paper provides an overview of the design architecture, flexibility, and the advantages of the modular SpaceCube v2.0 high performance data processing system for space applications. The current state of the proven SpaceCube technology is based on nine years of engineering and operations. Five systems have been successfully operated in space starting in 2008 with four more to be delivered for launch vehicle integration in 2015. The SpaceCube v2.0 system is also baselined as the avionics solution for five additional flight projects and is always a top consideration as the core avionics for new instruments or spacecraft control. This paper will highlight how this multipurpose system is currently being used to solve design challenges of three independent applications. The SpaceCube hardware adapts to new system requirements by allowing for application-unique interface cards that are utilized by reconfiguring the underlying programmable elements on the core processor card. We will show how this system is being used to improve on a heritage NASA GPS technology, enable a cutting-edge LiDAR instrument, and serve as a typical command and data handling (C&DH) computer for a space robotics technology demonstration.

  15. RNase MRP is required for entry of 35S precursor rRNA into the canonical processing pathway.

    Science.gov (United States)

    Lindahl, Lasse; Bommankanti, Ananth; Li, Xing; Hayden, Lauren; Jones, Adrienne; Khan, Miriam; Oni, Tolulope; Zengel, Janice M

    2009-07-01

    RNase MRP is a nucleolar RNA-protein enzyme that participates in the processing of rRNA during ribosome biogenesis. Previous experiments suggested that RNase MRP makes a nonessential cleavage in the first internal transcribed spacer. Here we report experiments with new temperature-sensitive RNase MRP mutants in Saccharomyces cerevisiae that show that the abundance of all early intermediates in the processing pathway is severely reduced upon inactivation of RNase MRP. Transcription of rRNA continues unabated as determined by RNA polymerase run-on transcription, but the precursor rRNA transcript does not accumulate, and appears to be unstable. Taken together, these observations suggest that inactivation of RNase MRP blocks cleavage at sites A0, A1, A2, and A3, which in turn, prevents precursor rRNA from entering the canonical processing pathway (35S > 20S + 27S > 18S + 25S + 5.8S rRNA). Nevertheless, at least some cleavage at the processing site in the second internal transcribed spacer takes place to form an unusual 24S intermediate, suggesting that cleavage at C2 is not blocked. Furthermore, the long form of 5.8S rRNA is made in the absence of RNase MRP activity, but only in the presence of Xrn1p (exonuclease 1), an enzyme not required for the canonical pathway. We conclude that RNase MRP is a key enzyme for initiating the canonical processing of precursor rRNA transcripts, but alternative pathway(s) might provide a backup for production of small amounts of rRNA.

  16. Capacitive Cells for Dielectric Constant Measurement

    Science.gov (United States)

    Aguilar, Horacio Munguía; Maldonado, Rigoberto Franco

    2015-01-01

    A simple capacitive cell for dielectric constant measurement in liquids is presented. As an illustrative application, the cell is used for measuring the degradation of overheated edible oil through the evaluation of its dielectric constant.

  17. The Dielectric Constant of Lubrication Oils

    National Research Council Canada - National Science Library

    Carey, A

    1998-01-01

    The values of the dielectric constant of simple molecules are discussed first, along with the relationship between the dielectric constant and other physical properties such as boiling point, melting...

  18. Globally Coupled Chaotic Maps with Constant Force

    International Nuclear Information System (INIS)

    Li Jinghui

    2008-01-01

    We investigate the motion of globally coupled maps (logistic maps) with a constant force. It is shown that the constant force can cause multi-synchronization of the globally coupled chaotic maps studied here.

  19. STABILITY CONSTANT OF THE TRISGLYCINATO METAL ...

    African Journals Online (AJOL)

    DR. AMINU

    overall stability constants of the complexes were found to be similar. Keywords: glycinato, titration. ... where Ka = dissociation constant of the amino acid and [H+] = concentration of the hydrogen ion. ... Synthesis and techniques in inorganic chemistry.

  20. Robust control for constant thrust rendezvous under thrust failure

    Directory of Open Access Journals (Sweden)

    Qi Yongqiang

    2015-04-01

    Full Text Available A robust constant-thrust rendezvous approach under thrust failure is proposed based on the relative motion dynamic model. Firstly, the design problem is cast into a convex optimization problem by introducing a Lyapunov function subject to linear matrix inequalities. Secondly, robust controllers satisfying the requirements can be designed by solving this optimization problem. Then, a new constant-thrust fitting algorithm based on impulse compensation is proposed, and the fuel consumption under the theoretical continuous thrust and under the actual constant thrust is calculated and compared using the proposed method. Finally, the fuel-saving advantage of the proposed method is demonstrated, the actual constant-thrust switching control laws are obtained through the isochronous interpolation method, and an illustrative example is provided to show the effectiveness of the proposed control design method.

  1. Clinical Course of Homozygous Hemoglobin Constant Spring in Pediatric Patients.

    Science.gov (United States)

    Komvilaisak, Patcharee; Jetsrisuparb, Arunee; Fucharoen, Goonnapa; Komwilaisak, Ratana; Jirapradittha, Junya; Kiatchoosakun, Pakaphan

    2018-04-17

    Hemoglobin (Hb) Constant Spring is an alpha-globin gene variant due to a mutation of the stop codon resulting in the elongation of the encoded polypeptide from 141 to 172 amino acid residues. Patients with homozygous Hb Constant Spring are usually mildly anemic. We retrospectively describe clinical manifestations, diagnosis, laboratory investigations, treatment, and associated findings in pediatric patients with homozygous Hb Constant Spring followed up at Srinagarind Hospital. Sixteen pediatric cases (5 males and 11 females) were diagnosed in utero (n=6) or postnatally (n=10). Eleven cases were diagnosed with homozygous Hb Constant Spring, 4 with homozygous Hb Constant Spring with heterozygous Hb E, and 1 with homozygous Hb Constant Spring with homozygous Hb E. Three cases were delivered preterm. Six patients had low birth weights. Clinical manifestations included fetal anemia in 6 cases, hepatomegaly in 1 case, hepatosplenomegaly in 2 cases, and splenomegaly in 1 case. Twelve cases exhibited early neonatal jaundice, 9 of which required phototherapy. Six cases received red cell transfusions: one transfusion in 3 cases and more than one in 3 cases. After the first few months of life, almost all patients had mild microcytic hypochromic anemia and an increased reticulocyte count with a wide red cell distribution width (RDW), but no longer required red cell transfusion. At 1 to 2 years of age, some patients still had mild microcytic hypochromic anemia and some had normocytic hypochromic anemia with Hb around 10 g/dL, an increased reticulocyte count and a wide RDW. Associated findings included hypothyroidism (2), congenital heart diseases (4), genitourinary abnormalities (3), gastrointestinal abnormalities (2), and developmental delay (1). Pediatric patients with homozygous Hb Constant Spring developed severe anemia in utero and up to the age of 2 to 3 months postnatal, requiring blood transfusions. Subsequently, their anemia was mild with no evidence of hepatosplenomegaly. Their Hb level was above 9 g/dL with hypochromic

  2. Technical basis and programmatic requirements for large block testing of coupled thermal-mechanical-hydrological-chemical processes

    International Nuclear Information System (INIS)

    Lin, Wunan.

    1993-09-01

    This document contains the technical basis and programmatic requirements for a scientific investigation plan that governs tests on a large block of tuff for understanding the coupled thermal-mechanical-hydrological-chemical processes. This study is part of the field testing described in Section 8.3.4.2.4.4.1 of the Site Characterization Plan (SCP) for the Yucca Mountain Project. The first, and most important, objective is to understand the coupled TMHC processes in order to develop models that will predict the performance of a nuclear waste repository. The block and fracture properties (including hydrology and geochemistry) can be well characterized from at least five exposed surfaces, and the block can be dismantled for post-test examinations. The second objective is to provide preliminary data for development of models that will predict the quality and quantity of water in the near-field environment of a repository over the current 10,000-year regulatory period of radioactive decay. The third objective is to develop and evaluate the various measurement systems and techniques that will later be employed in the Engineered Barrier System Field Tests (EBSFT)

  3. Catalytically Active Guanylyl Cyclase B Requires Endoplasmic Reticulum-mediated Glycosylation, and Mutations That Inhibit This Process Cause Dwarfism.

    Science.gov (United States)

    Dickey, Deborah M; Edmund, Aaron B; Otto, Neil M; Chaffee, Thomas S; Robinson, Jerid W; Potter, Lincoln R

    2016-05-20

    C-type natriuretic peptide activation of guanylyl cyclase B (GC-B), also known as natriuretic peptide receptor B or NPR2, stimulates long bone growth, and missense mutations in GC-B cause dwarfism. Four such mutants (L658F, Y708C, R776W, and G959A) bound (125)I-C-type natriuretic peptide on the surface of cells but failed to synthesize cGMP in membrane GC assays. Immunofluorescence microscopy also indicated that the mutant receptors were on the cell surface. All mutant proteins were dephosphorylated and incompletely glycosylated, but dephosphorylation did not explain the inactivation because the mutations inactivated a "constitutively phosphorylated" enzyme. Tunicamycin inhibition of glycosylation in the endoplasmic reticulum or mutation of the Asn-24 glycosylation site decreased GC activity, but neither inhibition of glycosylation in the Golgi by N-acetylglucosaminyltransferase I gene inactivation nor PNGase F deglycosylation of fully processed GC-B reduced GC activity. We conclude that endoplasmic reticulum-mediated glycosylation is required for the formation of an active catalytic, but not ligand-binding domain, and that mutations that inhibit this process cause dwarfism. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  4. Long Pulse Integrator of Variable Integral Time Constant

    International Nuclear Information System (INIS)

    Wang Yong; Ji Zhenshan; Du Xiaoying; Wu Yichun; Li Shi; Luo Jiarong

    2010-01-01

    A new type of long-pulse integrator was designed, based on a variable integral time constant and on subtracting the integral drift using its slope. The integral time constant can be changed by selecting different integral resistors, in order to improve the signal-to-noise ratio and avoid output saturation; the slope of the integral drift over a certain period of time can be calculated by digital signal processing and used to subtract the drift from the original integral signal in real time. Tests show that this long-pulse integrator is effective at reducing integral drift and also eliminates the effects of changing the integral time constant. In the experiments, the integral time constant could be changed by remote control and manual adjustment of the integral drift was avoided, which greatly improves experimental efficiency; the integrator can be used for electromagnetic measurement in Tokamak experiments. (authors)
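
    A minimal sketch of the drift-correction idea, assuming the drift slope is estimated from a quiescent pre-pulse window and then subtracted; the sampling rate, signal level and offset values are illustrative, not those of the instrument.

```python
# Sketch of drift subtraction for a long-pulse digital integrator: the drift
# slope is estimated from a quiescent pre-pulse window and removed in real time.
import numpy as np

fs = 10_000.0                           # sampling rate (Hz), assumed
t = np.arange(0.0, 10.0, 1.0 / fs)      # a 10 s "long pulse" record
signal = np.where((t > 2.0) & (t < 8.0), 1.0e-3, 0.0)   # flux signal to integrate
offset = 5.0e-6                         # small input offset that causes drift

raw = np.cumsum(signal + offset) / fs   # plain numerical integration (it drifts)

n0 = int(1.0 * fs)                      # first second: the true signal is zero here
slope = np.polyfit(t[:n0], raw[:n0], 1)[0]

corrected = raw - slope * t             # subtract the linear drift component
print(f"estimated drift slope {slope:.3e}, true offset {offset:.3e}")
```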

  5. 39 CFR 230.26 - Do these rules affect the service of process requirements of the Federal Rules of Civil Procedure...

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service (2010-07-01). § 230.26 Do these rules affect the service of process requirements of the Federal Rules of Civil Procedure? ... Federal Rules of Civil Procedure regarding service of process. ...

  6. Process control analysis requirement in NH3-H2 exchange bi-thermal Heavy Water Plant (Talcher) (Paper No. 6.8)

    International Nuclear Information System (INIS)

    Pattnaik, S.P.; Mishra, G.C.

    1992-01-01

    The Heavy Water Plant at Talcher is based on the bithermal NH3-H2 exchange process. Isotopic exchange of deuterium takes place between gaseous hydrogen and liquid ammonia with potassium amide as catalyst. The process control analysis requirements of the NH3-H2 dual-temperature exchange process are described. (author). 4 refs., 4 figs.

  7. Research of digital constant fraction discriminator in PET system

    International Nuclear Information System (INIS)

    Du Yaoyao; Hu Xuanhou; Wu Jianping; Wang Peilin; Li Xiaohui; Li Daowu; Li Ke; Wei Long

    2012-01-01

    Research on a digital constant fraction discriminator (DCFD) for spike pulse signals in a PET detector is presented. Based on an FPGA, the timing information of fast signals is extracted via the DCFD algorithm after high-speed ADC digitization. Experimental results show that the time resolution of the DCFD is 772 ps, which meets the timing-measurement requirements of the PET system. (authors)
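
    The DCFD timing extraction can be sketched as follows, assuming the usual delay-attenuate-subtract shaping and linear interpolation of the zero crossing; the pulse shape, fraction and delay are illustrative rather than the paper's actual parameters.

```python
# Sketch of a digital constant fraction discriminator: subtract an attenuated
# copy of the pulse from a delayed copy and interpolate the zero crossing.
import numpy as np

def dcfd_time(samples, dt, fraction=0.3, delay=4):
    """Return the zero-crossing time (s) of the CFD-shaped pulse, or None."""
    delayed = np.zeros_like(samples)
    delayed[delay:] = samples[:-delay]
    shaped = delayed - fraction * samples           # bipolar CFD signal
    start = int(np.argmin(shaped))                  # crossing follows the minimum
    for i in range(start, len(shaped) - 1):
        if shaped[i] < 0.0 <= shaped[i + 1]:
            frac = -shaped[i] / (shaped[i + 1] - shaped[i])   # linear interpolation
            return (i + frac) * dt
    return None

dt = 1.0e-9                                         # 1 GS/s digitizer, assumed
t = np.arange(0.0, 200e-9, dt)
pulse = (1.0 - np.exp(-t / 3e-9)) * np.exp(-t / 40e-9)   # synthetic fast pulse
print(f"DCFD time stamp: {dcfd_time(pulse, dt):.3e} s")
```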

  8. Constant round group key agreement protocols: A comparative study

    NARCIS (Netherlands)

    Makri, E.; Konstantinou, Elisavet

    2011-01-01

    The scope of this paper is to review and evaluate all constant round Group Key Agreement (GKA) protocols proposed so far in the literature. We have gathered all GKA protocols that require 1, 2, 3, 4 and 5 rounds and examined their efficiency. In particular, we calculated each protocol's computation and

  9. Why Batteries Deliver a Fairly Constant Voltage until Dead

    Science.gov (United States)

    Smith, Garon C.; Hossain, Md. Mainul; MacCarthy, Patrick

    2012-01-01

    Two characteristics of batteries, their delivery of nearly constant voltage and their rapid failure, are explained through a visual examination of the Nernst equation. Two Galvanic cells are described in detail: (1) a wet cell involving iron and copper salts and (2) a mercury oxide dry cell. A complete description of the wet cell requires a…
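
    A hedged sketch of the Nernst-equation argument for the iron-copper wet cell mentioned above, using tabulated standard potentials (Cu2+/Cu = +0.34 V, Fe2+/Fe = -0.44 V) and illustrative concentrations; it shows how the voltage stays near E0 until the Cu2+ is almost exhausted and then drops steeply as the logarithmic term grows.

```python
# Sketch of the Nernst-equation argument for the Fe/Cu wet cell:
# Fe(s) + Cu2+ -> Fe2+ + Cu(s), E0 ~ 0.78 V (Cu2+/Cu +0.34 V, Fe2+/Fe -0.44 V).
# Initial concentrations are illustrative (1 M each of Cu2+ and Fe2+).
import numpy as np

E0, n, F, R, T = 0.78, 2, 96485.0, 8.314, 298.15
c_cu0 = c_fe0 = 1.0

for used in (0.0, 0.5, 0.9, 0.99, 0.999, 0.9999):
    c_cu = c_cu0 * (1.0 - used) + 1e-15     # Cu2+ remaining (tiny floor avoids log(0))
    c_fe = c_fe0 + c_cu0 * used             # Fe2+ produced
    E = E0 - (R * T) / (n * F) * np.log(c_fe / c_cu)
    print(f"{100 * used:8.2f}% of Cu2+ consumed -> E = {E:.3f} V")
```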

  10. Identification of elastic, dielectric, and piezoelectric constants in piezoceramic disks.

    Science.gov (United States)

    Perez, Nicolas; Andrade, Marco A B; Buiochi, Flavio; Adamowski, Julio C

    2010-12-01

    Three-dimensional modeling of piezoelectric devices requires a precise knowledge of piezoelectric material parameters. The commonly used piezoelectric materials belong to the 6mm symmetry class, which have ten independent constants. In this work, a methodology to obtain precise material constants over a wide frequency band through finite element analysis of a piezoceramic disk is presented. Given an experimental electrical impedance curve and a first estimate for the piezoelectric material properties, the objective is to find the material properties that minimize the difference between the electrical impedance calculated by the finite element method and that obtained experimentally by an electrical impedance analyzer. The methodology consists of four basic steps: experimental measurement, identification of vibration modes and their sensitivity to material constants, a preliminary identification algorithm, and final refinement of the material constants using an optimization algorithm. The application of the methodology is exemplified using a hard lead zirconate titanate piezoceramic. The same methodology is applied to a soft piezoceramic. The errors in the identification of each parameter are statistically estimated in both cases, and are less than 0.6% for elastic constants, and less than 6.3% for dielectric and piezoelectric constants.
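
    The identification step can be sketched as a least-squares fit; here the finite-element forward model is replaced by a placeholder simulate_impedance function, so the sketch only illustrates the optimization loop, not the actual FEM computation or the real material constants.

```python
# Sketch of the identification step only: adjust parameters so a simulated
# impedance curve matches the measured one in a least-squares sense.
import numpy as np
from scipy.optimize import minimize

freqs = np.linspace(1.8, 2.2, 200)          # frequency band (MHz), illustrative

def simulate_impedance(params, f):
    """Placeholder forward model: a single resonance whose position and sharpness
    stand in for the dependence of the impedance on the material constants."""
    f_res, q = params
    return 1.0 / np.abs(1.0 - (f / f_res) ** 2 + 1j * f / (q * f_res))

rng = np.random.default_rng(0)
true_params = np.array([2.0, 10.0])
measured = simulate_impedance(true_params, freqs) * (1.0 + 0.01 * rng.standard_normal(freqs.size))

def misfit(params):
    return np.sum((simulate_impedance(params, freqs) - measured) ** 2)

result = minimize(misfit, x0=[1.9, 8.0], method="Nelder-Mead")
print("identified parameters:", result.x)   # with a reasonable first estimate, ~ (2.0, 10.0)
```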

  11. Universal equations and constants of turbulent motion

    International Nuclear Information System (INIS)

    Baumert, H Z

    2013-01-01

    This paper presents a parameter-free theory of shear-generated turbulence at asymptotically high Reynolds numbers in incompressible fluids. It is based on a two-fluids concept. Both components are materially identical and inviscid. The first component is an ensemble of quasi-rigid dipole-vortex tubes (vortex filaments, excitations) as quasi-particles in chaotic motion. The second is a superfluid performing evasive motions between the tubes. The local dipole motions follow Helmholtz' law. The vortex radii scale with the energy-containing length scale. Collisions between quasi-particles lead either to annihilation (likewise rotation, turbulent dissipation) or to scattering (counterrotation, turbulent diffusion). There are analogies with birth and death processes of population dynamics and their master equations and with Landau's two-fluid theory of liquid helium. For free homogeneous decay the theory predicts the turbulent kinetic energy to follow t^-1. With an adiabatic wall condition it predicts the logarithmic law with von Kármán's constant as 1/√(2π) = 0.399. Likewise rotating couples form localized dissipative patches almost at rest (→ intermittency) wherein under local quasi-steady conditions the spectrum evolves into an ‘Apollonian gear’ as discussed first by Herrmann (1990 Correlation and Connectivity (Dordrecht: Kluwer) pp 108–20). Dissipation happens exclusively at scale zero and at finite scales this system is frictionless and reminds of Prigogine's (1947 Etude Thermodynamique des Phenomenes Irreversibles (Liege: Desoer) p 143) law of minimum (here: zero) entropy production. The theory predicts further the prefactor of the 3D-wavenumber spectrum (a Kolmogorov constant) as (1/3)(4π)^(2/3) = 1.802, well within the scatter range of observational, experimental and direct numerical simulation results. (paper)
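
    As a quick arithmetic check, the two predicted constants quoted in this abstract can be reproduced directly:

```python
# Quick arithmetic check of the two constants predicted by the theory.
import math

von_karman = 1.0 / math.sqrt(2.0 * math.pi)
kolmogorov_prefactor = (4.0 * math.pi) ** (2.0 / 3.0) / 3.0
print(f"1/sqrt(2*pi)       = {von_karman:.3f}")            # 0.399
print(f"(1/3)*(4*pi)^(2/3) = {kolmogorov_prefactor:.3f}")  # 1.802
```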

  12. Universal equations and constants of turbulent motion

    Science.gov (United States)

    Baumert, H. Z.

    2013-07-01

    This paper presents a parameter-free theory of shear-generated turbulence at asymptotically high Reynolds numbers in incompressible fluids. It is based on a two-fluids concept. Both components are materially identical and inviscid. The first component is an ensemble of quasi-rigid dipole-vortex tubes (vortex filaments, excitations) as quasi-particles in chaotic motion. The second is a superfluid performing evasive motions between the tubes. The local dipole motions follow Helmholtz' law. The vortex radii scale with the energy-containing length scale. Collisions between quasi-particles lead either to annihilation (likewise rotation, turbulent dissipation) or to scattering (counterrotation, turbulent diffusion). There are analogies with birth and death processes of population dynamics and their master equations and with Landau's two-fluid theory of liquid helium. For free homogeneous decay the theory predicts the turbulent kinetic energy to follow t^-1. With an adiabatic wall condition it predicts the logarithmic law with von Kármán's constant as 1/√(2π) = 0.399. Likewise rotating couples form localized dissipative patches almost at rest (→ intermittency) wherein under local quasi-steady conditions the spectrum evolves into an ‘Apollonian gear’ as discussed first by Herrmann (1990 Correlation and Connectivity (Dordrecht: Kluwer) pp 108-20). Dissipation happens exclusively at scale zero and at finite scales this system is frictionless and reminds of Prigogine's (1947 Etude Thermodynamique des Phenomenes Irreversibles (Liege: Desoer) p 143) law of minimum (here: zero) entropy production. The theory predicts further the prefactor of the 3D-wavenumber spectrum (a Kolmogorov constant) as (1/3)(4π)^(2/3) = 1.802, well within the scatter range of observational, experimental and direct numerical simulation results.

  13. Thermal decay of the cosmological constant into black holes

    International Nuclear Information System (INIS)

    Gomberoff, Andres; Henneaux, Marc; Teitelboim, Claudio; Wilczek, Frank

    2004-01-01

    We show that the cosmological constant may be reduced by thermal production of membranes by the cosmological horizon, analogous to a particle 'going over the top of the potential barrier', rather than tunneling through it. The membranes are endowed with charge associated with the gauge invariance of an antisymmetric gauge potential. In this new process, the membrane collapses into a black hole; thus, the net effect is to produce black holes out of the vacuum energy associated with the cosmological constant. We study here the corresponding Euclidean configurations ('thermalons') and calculate the probability for the process in the leading semiclassical approximation

  14. CODATA recommended values of the fundamental constants

    International Nuclear Information System (INIS)

    Mohr, Peter J.; Taylor, Barry N.

    2000-01-01

    A review is given of the latest Committee on Data for Science and Technology (CODATA) adjustment of the values of the fundamental constants. The new set of constants, referred to as the 1998 values, replaces the values recommended for international use by CODATA in 1986. The values of the constants, and particularly the Rydberg constant, are of relevance to the calculation of precise atomic spectra. The standard uncertainty (estimated standard deviation) of the new recommended value of the Rydberg constant, which is based on precision frequency metrology and a detailed analysis of the theory, is approximately 1/160 times the uncertainty of the 1986 value. The new set of recommended values as well as a searchable bibliographic database that gives citations to the relevant literature is available on the World Wide Web at physics.nist.gov/constants and physics.nist.gov/constantsbib, respectively

  15. Large, but not small, antigens require time- and temperature-dependent processing in accessory cells before they can be recognized by T cells

    DEFF Research Database (Denmark)

    Buus, S; Werdelin, O

    1986-01-01

    We have studied if antigens of different size and structure all require processing in antigen-presenting cells of guinea-pigs before they can be recognized by T cells. The method of mild paraformaldehyde fixation was used to stop antigen-processing in the antigen-presenting cells. As a measure...... of antigen presentation we used the proliferative response of appropriately primed T cells during a co-culture with the paraformaldehyde-fixed and antigen-exposed presenting cells. We demonstrate that the large synthetic polypeptide antigen, dinitrophenyl-poly-L-lysine, requires processing. After an initial......-dependent and consequently energy-requiring. Processing is strongly inhibited by the lysosomotrophic drug, chloroquine, suggesting a lysosomal involvement in antigen processing. The existence of a minor, non-lysosomal pathway is suggested, since small amounts of antigen were processed even at 10 degrees C, at which...

  16. Stability constants of scandium complexes, 1

    International Nuclear Information System (INIS)

    Itoh, Hisako; Itoh, Naomi; Suzuki, Yasuo

    1984-01-01

    The stability constants of scandium complexes with some carboxylate ligands were determined potentiometrically at 25.0 and 40.0 °C and at an ionic strength of 0.10 with potassium nitrate as supporting electrolyte. The constants of the scandium complexes were appreciably greater than those of the corresponding lanthanoid complexes, as expected. The changes in free energy, enthalpy, and entropy for the formation of the scandium complexes were calculated from the stability constants at the two temperatures. (author)

  17. Dose rate constants for new dose quantities

    International Nuclear Information System (INIS)

    Tschurlovits, M.; Daverda, G.; Leitner, A.

    1992-01-01

    Conceptual changes and new quantities have made it necessary to reassess dose rate quantities. Calculations of the dose rate constant were done for air kerma, ambient dose equivalent and directional dose equivalent. More than 200 radionuclides are covered. The threshold energy is selected as 20 keV for the dose equivalent constants. The dose rate constant for the photon equivalent dose, used mainly in German-speaking countries as a temporary quantity, is also included. (Author)

  18. A natural cosmological constant from chameleons

    International Nuclear Information System (INIS)

    Nastase, Horatiu; Weltman, Amanda

    2015-01-01

    We present a simple model where the effective cosmological constant appears from chameleon scalar fields. For a Kachru–Kallosh–Linde–Trivedi (KKLT)-inspired form of the potential and a particular chameleon coupling to the local density, patches of approximately constant scalar field potential cluster around regions of matter with density above a certain value, generating the effect of a cosmological constant on large scales. This construction addresses both the cosmological constant problem (why Λ is so small, yet nonzero) and the coincidence problem (why Λ is comparable to the matter density now)

  19. A natural cosmological constant from chameleons

    Directory of Open Access Journals (Sweden)

    Horatiu Nastase

    2015-07-01

    Full Text Available We present a simple model where the effective cosmological constant appears from chameleon scalar fields. For a Kachru–Kallosh–Linde–Trivedi (KKLT)-inspired form of the potential and a particular chameleon coupling to the local density, patches of approximately constant scalar field potential cluster around regions of matter with density above a certain value, generating the effect of a cosmological constant on large scales. This construction addresses both the cosmological constant problem (why Λ is so small, yet nonzero) and the coincidence problem (why Λ is comparable to the matter density now).

  20. A natural cosmological constant from chameleons

    Energy Technology Data Exchange (ETDEWEB)

    Nastase, Horatiu, E-mail: nastase@ift.unesp.br [Instituto de Física Teórica, UNESP-Universidade Estadual Paulista, R. Dr. Bento T. Ferraz 271, Bl. II, Sao Paulo 01140-070, SP (Brazil); Weltman, Amanda, E-mail: amanda.weltman@uct.ac.za [Astrophysics, Cosmology & Gravity Center, Department of Mathematics and Applied Mathematics, University of Cape Town, Private Bag, Rondebosch 7700 (South Africa)

    2015-07-30

    We present a simple model where the effective cosmological constant appears from chameleon scalar fields. For a Kachru–Kallosh–Linde–Trivedi (KKLT)-inspired form of the potential and a particular chameleon coupling to the local density, patches of approximately constant scalar field potential cluster around regions of matter with density above a certain value, generating the effect of a cosmological constant on large scales. This construction addresses both the cosmological constant problem (why Λ is so small, yet nonzero) and the coincidence problem (why Λ is comparable to the matter density now)

  1. METHOD FOR SECURITY SPECIFICATION SOFTWARE REQUIREMENTS AS A MEANS FOR IMPLEMENTING A SOFTWARE DEVELOPMENT PROCESS SECURE - MERSEC

    Directory of Open Access Journals (Sweden)

    Castro Mecías, L.T.

    2015-06-01

    Full Text Available Security incidents frequently target software, or use it as a means of attack, causing serious damage with legal and economic consequences. Results of a survey by Kaspersky Lab indicate that software vulnerabilities are the main cause of security incidents in enterprises: 85% of the surveyed companies reported security incidents, software vulnerabilities were the main reason, and the resulting losses were estimated at between $50,000 and $649,000 [1]. Accordingly, academic and industry research focuses on proposals for reducing vulnerabilities and technology failures, with a positive influence on how software is developed. A development process with improved security practices should include activities from the earliest phases of the software life cycle, so that security needs are identified, risks are managed, and appropriate measures are implemented. This article discusses a method for the analysis, acquisition, and specification of software security requirements, built on various existing proposals and on deficiencies identified through participant observation in software development teams. Experiments performed using the proposed method yield positive results regarding the reduction of security vulnerabilities and compliance with the security objectives of the software.

  2. Dynamic analysis of the CTAR (constant temperature adsorption refrigeration) cycle

    International Nuclear Information System (INIS)

    Hassan, H.Z.; Mohamad, A.A.; Al-Ansary, H.A.; Alyousef, Y.M.

    2014-01-01

    The basic SAR (solar-driven adsorption refrigeration) machine is an intermittent cold production system. Recently, the CO-SAR (continuous operation solar-powered adsorption refrigeration) system was developed. The CO-SAR machine is based on the theoretical CTAR (constant temperature adsorption refrigeration) cycle, in which the adsorption process takes place at a constant temperature equal to the ambient temperature. Practically, there should be a temperature gradient between the adsorption bed and the surrounding atmosphere to provide a driving potential for heat transfer. In the present study, a dynamic analysis of the CTAR cycle is developed. This analysis provides a comparison between the theoretical and the dynamic operation of the CTAR cycle. The developed dynamic model is based on the D-A adsorption equilibrium equation and the energy and mass balances in the adsorption reactor. Results obtained from the present work demonstrate that the idealization of the constant temperature adsorption process in the theoretical CTAR cycle is not far from the real situation and can be approached. Furthermore, enhancing the heat transfer between the adsorption bed and the ambient during the bed pre-cooling process helps accelerate heat rejection from the adsorption reactor and therefore approach the isothermal process. - Highlights: • The dynamic analysis of the CTAR (constant temperature adsorption refrigeration) cycle is developed. • The CTAR theoretical and dynamic cycles are compared. • The dynamic cycle approaches the ideal one by enhancing the bed precooling
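
    A minimal sketch of the adsorption equilibrium relation referred to above, assuming the D-A (Dubinin-Astakhov) form W = W0·exp[-(A/E)^n] with adsorption potential A = RT·ln(Psat/P); the parameter values are illustrative, not those of the cited study.

```python
# Minimal sketch of the D-A (Dubinin-Astakhov) adsorption equilibrium relation:
#   W = W0 * exp(-(A/E)**n),  with adsorption potential A = R*T*ln(Psat/P).
# Psat is held fixed here for simplicity, although in a full bed model it would
# follow the adsorbate temperature.
import math

R = 8.314  # J/(mol K)

def da_uptake(T, P, Psat, W0=0.3, E=5000.0, n=1.5):
    """Equilibrium uptake (kg adsorbate per kg adsorbent) at temperature T, pressure P."""
    A = R * T * math.log(Psat / P)          # adsorption potential, J/mol
    return W0 * math.exp(-((A / E) ** n))

# Uptake falls as the bed temperature rises at a fixed evaporator pressure.
for T in (300.0, 320.0, 340.0):
    print(f"T = {T:.0f} K -> W = {da_uptake(T, P=1.0e3, Psat=5.0e3):.3f}")
```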

  3. Shifting from constant-voltage to constant-current in Parkinson's disease patients with chronic stimulation.

    Science.gov (United States)

    Amami, P; Mascia, M M; Franzini, A; Saba, F; Albanese, A

    2017-08-01

    The study aimed to evaluate safety and efficacy of shifting stimulation settings from constant-voltage (CV) to constant-current (CC) programming in patients with Parkinson's disease (PD) and chronic subthalamic nucleus deep brain stimulation (STN DBS). Twenty PD patients with chronic STN DBS set in CV programming were shifted to CC and followed for 3 months; the other stimulation settings and the medication regimen remained unchanged. Side effects, motor, non-motor, executive functions, and impedance were assessed at baseline and during follow-up. No adverse events were observed at time of shifting or during CC stimulation. Motor and non-motor measures remained unchanged at follow-up despite impedance decreased. Compared to baseline, inhibition processes improved at follow-up. The shifting strategy was well tolerated and the clinical outcome was maintained with no need to adjust stimulation settings or medications notwithstanding a decrease of impedance. Improvement of inhibition processes is a finding which needed further investigation.

  4. Remote Sensing of Salinity: The Dielectric Constant of Sea Water

    Science.gov (United States)

    LeVine, David M.; Lang, R.; Utku, C.; Tarkocin, Y.

    2011-01-01

    Global monitoring of sea surface salinity from space requires an accurate model for the dielectric constant of sea water as a function of salinity and temperature to characterize the emissivity of the surface. Measurements are being made at 1.413 GHz, the center frequency of the Aquarius radiometers, using a resonant cavity and the perturbation method. The cavity is operated in a transmission mode and immersed in a liquid bath to control temperature. Multiple measurements are made at each temperature and salinity. Error budgets indicate a relative accuracy for both real and imaginary parts of the dielectric constant of about 1%.

  5. Tank waste processing analysis: Database development, tank-by-tank processing requirements, and examples of pretreatment sequences and schedules as applied to Hanford Double-Shell Tank Supernatant Waste - FY 1993

    International Nuclear Information System (INIS)

    Colton, N.G.; Orth, R.J.; Aitken, E.A.

    1994-09-01

    This report gives the results of work conducted in FY 1993 by the Tank Waste Processing Analysis Task for the Underground Storage Tank Integrated Demonstration. The main purpose of this task, led by Pacific Northwest Laboratory, is to demonstrate a methodology to identify processing sequences, i.e., the order in which a tank should be processed. In turn, these sequences may be used to assist in the development of time-phased deployment schedules. Time-phased deployment is implementation of pretreatment technologies over a period of time as technologies are required and/or developed. The work discussed here illustrates how tank-by-tank databases and processing requirements have been used to generate processing sequences and time-phased deployment schedules. The processing sequences take into account requirements such as the amount and types of data available for the tanks, tank waste form and composition, required decontamination factors, and types of compact processing units (CPUS) required and technology availability. These sequences were developed from processing requirements for the tanks, which were determined from spreadsheet analyses. The spreadsheet analysis program was generated by this task in FY 1993. Efforts conducted for this task have focused on the processing requirements for Hanford double-shell tank (DST) supernatant wastes (pumpable liquid) because this waste type is easier to retrieve than the other types (saltcake and sludge), and more tank space would become available for future processing needs. The processing requirements were based on Class A criteria set by the U.S. Nuclear Regulatory Commission and Clean Option goals provided by Pacific Northwest Laboratory

  6. Equilibrium-constant expressions for aqueous plutonium

    International Nuclear Information System (INIS)

    Silver, G.L.

    2010-01-01

    Equilibrium-constant expressions for Pu disproportionation reactions traditionally contain three or four terms representing the concentrations or fractions of the oxidation states. The expressions can be rewritten so that one of the oxidation states is replaced by a term containing the oxidation number of the plutonium. Experimental estimations of the numerical values of the constants can then be checked in several ways. (author)

  7. A null test of the cosmological constant

    International Nuclear Information System (INIS)

    Chiba, Takeshi; Nakamura, Takashi

    2007-01-01

    We provide a consistency relation between cosmological observables in general relativity with the cosmological constant. Breaking of this relation at any redshift would imply the breakdown of the hypothesis of the cosmological constant as an explanation of the current acceleration of the universe. (author)

  8. A stringy nature needs just two constants

    International Nuclear Information System (INIS)

    Veneziano, G.

    1986-01-01

    Dual string theories of everything, being purely geometrical, contain only two fundamental constants: c, for relativistic invariance, and a length lambda, for quantization. Planck's and Newton's constants appear only through Planck's length, a "calculable" fraction of lambda. Only the existence of a light sector breaks a "reciprocity" principle and unification at lambda, which is also the theory's cut-off.

  9. On special relativity with cosmological constant

    International Nuclear Information System (INIS)

    Guo Hanying; Huang Chaoguang; Xu Zhan; Zhou Bin

    2004-01-01

    Based on the principle of relativity and the postulates of an invariant speed and an invariant length, we propose the theory of special relativity with a cosmological constant, SR_{c,R}, in which the cosmological constant is linked with the invariant length. Its relation to doubly special relativity is briefly mentioned.

  10. DETERMINATION OF STABILITY CONSTANTS OF MANGANESE (II ...

    African Journals Online (AJOL)

    DR. AMINU

    Keywords: amino acids, dissociation constant, potentiometry, stability constant. INTRODUCTION: Acid-base titration involves the gradual addition or removal of protons, for example using the diprotic form of glycine. The plot has two distinct stages corresponding to the deprotonation of the two different groups on glycine.

  11. Shapley Value for Constant-sum Games

    NARCIS (Netherlands)

    Khmelnitskaya, A.B.

    2002-01-01

    It is proved that Young's axiomatization for the Shapley value by marginalism, efficiency, and symmetry is still valid for the Shapley value defined on the class of nonnegative constant-sum games and on the entire class of constant-sum games as well. To support an interest to study the class of

  12. Constant Width Planar Computation Characterizes ACC0

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Arnsfelt

    2006-01-01

    We obtain a characterization of ACC0 in terms of a natural class of constant width circuits, namely in terms of constant width polynomial size planar circuits. This is shown via a characterization of the class of acyclic digraphs which can be embedded on a cylinder surface in such a way that all...

  13. Experimental Determination of the Avogadro Constant

    Indian Academy of Sciences (India)

    ...fundamental physical constant such as the charge of an electron or the Boltzmann constant ... ideas was that the number of particles or molecules in a gas of given volume could not ... knowledge of at least one property of a single molecule. Loschmidt ...

  14. The time constant of the somatogravic illusion.

    Science.gov (United States)

    Correia Grácio, B J; de Winkel, K N; Groen, E L; Wentink, M; Bos, J E

    2013-02-01

    Without visual feedback, humans perceive tilt when experiencing a sustained linear acceleration. This tilt illusion is commonly referred to as the somatogravic illusion. Although the physiological basis of the illusion seems to be well understood, the dynamic behavior is still subject to discussion. In this study, the dynamic behavior of the illusion was measured experimentally for three motion profiles with different frequency content. Subjects were exposed to pure centripetal accelerations in the lateral direction and were asked to indicate their tilt percept by means of a joystick. Variable-radius centrifugation during constant angular rotation was used to generate these motion profiles. Two self-motion perception models were fitted to the experimental data and were used to obtain the time constant of the somatogravic illusion. Results showed that the time constant of the somatogravic illusion was on the order of two seconds, in contrast to the higher time constant found in fixed-radius centrifugation studies. Furthermore, the time constant was significantly affected by the frequency content of the motion profiles. Motion profiles with higher frequency content revealed shorter time constants which cannot be explained by self-motion perception models that assume a fixed time constant. Therefore, these models need to be improved with a mechanism that deals with this variable time constant. Apart from the fundamental importance, these results also have practical consequences for the simulation of sustained accelerations in motion simulators.
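
    A commonly used first-order description of this illusion treats perceived tilt as a low-pass-filtered version of the gravito-inertial tilt angle; the sketch below is such an illustration with an assumed step in lateral acceleration, not the specific self-motion perception models fitted in the study.

```python
# Sketch of a first-order description of the somatogravic illusion: perceived
# tilt follows the tilt of the gravito-inertial vector through a low-pass filter
# with time constant tau.  Step input and tau values are illustrative only.
import numpy as np

g, dt = 9.81, 0.01
t = np.arange(0.0, 20.0, dt)
a_lat = np.where(t > 2.0, 0.2 * g, 0.0)          # sustained lateral acceleration step
theta_gif = np.degrees(np.arctan2(a_lat, g))     # tilt of the gravito-inertial vector

def perceived_tilt(theta_in, tau):
    theta = np.zeros_like(theta_in)
    for i in range(1, theta.size):
        # first-order lag: d(theta)/dt = (theta_in - theta) / tau
        theta[i] = theta[i - 1] + dt * (theta_in[i - 1] - theta[i - 1]) / tau
    return theta

for tau in (2.0, 5.0):                           # a short and a long time constant
    final = perceived_tilt(theta_gif, tau)[-1]
    print(f"tau = {tau:.0f} s -> tilt after 20 s: {final:.1f} deg "
          f"(gravito-inertial tilt {theta_gif[-1]:.1f} deg)")
```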

  15. Zero cosmological constant from normalized general relativity

    International Nuclear Information System (INIS)

    Davidson, Aharon; Rubin, Shimon

    2009-01-01

    Normalizing the Einstein-Hilbert action by the volume functional makes the theory invariant under constant shifts in the Lagrangian. The associated field equations then resemble unimodular gravity whose otherwise arbitrary cosmological constant is now determined as a Machian universal average. We prove that an empty space-time is necessarily Ricci tensor flat, and demonstrate the vanishing of the cosmological constant within the scalar field paradigm. The cosmological analysis, carried out at the mini-superspace level, reveals a vanishing cosmological constant for a universe which cannot be closed as long as gravity is attractive. Finally, we give an example of a normalized theory of gravity which does give rise to a non-zero cosmological constant.

  16. Graviton fluctuations erase the cosmological constant

    Science.gov (United States)

    Wetterich, C.

    2017-10-01

    Graviton fluctuations induce strong non-perturbative infrared renormalization effects for the cosmological constant. The functional renormalization flow drives a positive cosmological constant towards zero, solving the cosmological constant problem without the need to tune parameters. We propose a simple computation of the graviton contribution to the flow of the effective potential for scalar fields. Within variable gravity, with effective Planck mass proportional to the scalar field, we find that the potential increases asymptotically at most quadratically with the scalar field. The solutions of the derived cosmological equations lead to an asymptotically vanishing cosmological "constant" in the infinite future, providing for dynamical dark energy in the present cosmological epoch. Beyond a solution of the cosmological constant problem, our simplified computation also entails a sizeable positive graviton-induced anomalous dimension for the quartic Higgs coupling in the ultraviolet regime, substantiating the successful prediction of the Higgs boson mass within the asymptotic safety scenario for quantum gravity.

  17. Solar constant values for estimating solar radiation

    International Nuclear Information System (INIS)

    Li, Huashan; Lian, Yongwang; Wang, Xianlong; Ma, Weibin; Zhao, Liang

    2011-01-01

    There are many solar constant values given and adopted by researchers, leading to confusion in estimating solar radiation. In this study, some solar constant values collected from literature for estimating solar radiation with the Angstroem-Prescott correlation are tested in China using the measured data between 1971 and 2000. According to the ranking method based on the t-statistic, a strategy to select the best solar constant value for estimating the monthly average daily global solar radiation with the Angstroem-Prescott correlation is proposed. -- Research highlights: → The effect of the solar constant on estimating solar radiation is investigated. → The investigation covers a diverse range of climate and geography in China. → A strategy to select the best solar constant for estimating radiation is proposed.
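
    The role of the adopted solar constant can be sketched as follows: in the Angstroem-Prescott correlation H = H0(a + b·n/N), the extraterrestrial radiation H0 is proportional to the solar constant, so the estimated radiation scales with the value chosen. The coefficients, geometry factor and sunshine fraction below are illustrative.

```python
# Sketch of how the adopted solar constant enters the Angstroem-Prescott
# estimate H = H0 * (a + b * n/N): H0 is proportional to the solar constant,
# so the estimated global radiation scales with it.
def daily_extraterrestrial(solar_constant, geometry_factor):
    """H0 in MJ m^-2 day^-1; geometry_factor lumps day-of-year and latitude terms."""
    return solar_constant * geometry_factor

def angstrom_prescott(h0, a=0.25, b=0.50, sunshine_fraction=0.6):
    return h0 * (a + b * sunshine_fraction)

geometry_factor = 0.0255                # MJ m^-2 day^-1 per (W m^-2), illustrative
for g_sc in (1353.0, 1367.0):           # two solar constant values used in the literature
    h0 = daily_extraterrestrial(g_sc, geometry_factor)
    print(f"G_sc = {g_sc:6.1f} W/m2 -> H0 = {h0:5.1f}, "
          f"H = {angstrom_prescott(h0):5.1f} MJ m-2 day-1")
```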

  18. Dynamical black rings with a positive cosmological constant

    International Nuclear Information System (INIS)

    Kimura, Masashi

    2009-01-01

    We construct dynamical black ring solutions in the five-dimensional Einstein-Maxwell system with a positive cosmological constant and investigate their geometrical structure. The solutions describe a physical process in which a thin black ring at early times shrinks and changes into a single black hole as time increases. We also discuss multi-black-ring solutions and their coalescence.

  19. Engineering Analysis of Intermediate Loop and Process Heat Exchanger Requirements to Include Configuration Analysis and Materials Needs

    Energy Technology Data Exchange (ETDEWEB)

    T.M. Lillo; R.L. Williamson; T.R. Reed; C.B. Davis; D.M. Ginosar

    2005-09-01

    The need to locate advanced hydrogen production facilities a finite distance away from a nuclear power source necessitates an intermediate heat transport loop (IHTL). This IHTL must not only transport energy efficiently over distances of up to 500 meters but must also be capable of operating at high temperatures (>850 °C) for many years. High-temperature, long-term operation raises concerns about material strength, creep resistance and general material stability (corrosion resistance). IHTL design is currently in the initial stages, and many questions remain to be answered before intelligent design can begin. This report begins to examine some of the issues surrounding the main components of an IHTL. Specifically, a stress analysis of a compact heat exchanger design under expected operating conditions is reported. The results of a thermal analysis performed on two IHTL pipe configurations for different heat transport fluids are also presented. The configurations consist of separate hot supply and cold return legs, as well as an annular design in which the hot fluid is carried in an inner pipe and the cold return fluid travels in the opposite direction in the annular space around the hot pipe. The effects of insulation configurations on pipe performance are also reported. Finally, two different process heat exchanger designs, one a tube-in-shell type and the other a compact or microchannel reactor, are evaluated in light of catalyst requirements. Important insights into the critical areas of research and development are gained from these analyses, guiding the direction of future research.

  20. Europium (III) and americium (III) stability constants with humic acid

    International Nuclear Information System (INIS)

    Torres, R.A.; Choppin, G.R.

    1984-01-01

    The stability constants for tracer concentrations of Eu(III) and Am(III) complexes with a humic acid extracted from a lake-bottom sediment were measured using a solvent extraction system. The organic extractant was di(2-ethylhexyl)-phosphoric acid in toluene, while the humate aqueous phase had a constant ionic strength of 0.1 M (NaClO4). Aqueous humic acid concentrations were monitored by measuring uv-visible absorbances at approximately 380 nm. The total carboxylate capacity of the humic acid was determined by direct potentiometric titration to be 3.86 ± 0.03 meq/g. The humic acid displayed typical characteristics of a polyelectrolyte: the apparent pK_a, as well as the calculated metal-ion stability constants, increased as the degree of ionization (α) increased. The binding data required a fit of two stability constants, β1 and β2, such that for Eu, log β1 = 8.86α + 4.39 and log β2 = 3.55α + 11.06, while for Am, log β1 = 10.58α + 3.84 and log β2 = 5.32α + 10.42. With hydroxide, carbonate, and humate as competing ligands, the humate complex associated with the β1 constant is calculated to be the dominant species for the trivalent actinides and lanthanides under conditions present in natural waters. (orig.)
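
    For illustration, the fitted relations quoted above can be evaluated at an assumed degree of ionization:

```python
# Evaluating the fitted humate relations quoted above at an assumed degree of
# ionization alpha (the value of alpha is illustrative).
def humate_constants(alpha, metal):
    if metal == "Eu":
        return 8.86 * alpha + 4.39, 3.55 * alpha + 11.06    # log beta1, log beta2
    if metal == "Am":
        return 10.58 * alpha + 3.84, 5.32 * alpha + 10.42
    raise ValueError("only Eu(III) and Am(III) were fitted in the cited work")

alpha = 0.5
for metal in ("Eu", "Am"):
    b1, b2 = humate_constants(alpha, metal)
    print(f"{metal}(III): log beta1 = {b1:.2f}, log beta2 = {b2:.2f} at alpha = {alpha}")
```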

  1. Sterilization of health care products - Ethylene oxide - Part 1: Requirements for development, validation and routine control of a sterilization process for medical devices

    International Nuclear Information System (INIS)

    2007-01-01

    This part of ISO 11135 describes requirements that, if met, will provide an ethylene oxide sterilization process intended to sterilize medical devices, which has appropriate microbicidal activity. Furthermore, compliance with the requirements ensures that this activity is both reliable and reproducible so that it can be predicted, with reasonable confidence, that there is a low level of probability of there being a viable microorganism present on product after sterilization. Specification of this probability is a matter for regulatory authorities and may vary from country to country. The paper provides information on scope, normative references, terms and definitions, quality management systems, sterilizing agent characterization, process and equipment characterization, product definition, process definition, validation, routine monitoring and control, product release from sterilization and maintaining process effectiveness, followed by Annex A (Determination of lethal rate of the sterilization process - Biological indicator/bioburden approach), Annex B (Conservative determination of lethal rate of the sterilization process - Overkill approach), Annex C (General guidance) and a bibliography.

  2. The Requirement for Acquisition and Logistics Integration: An Examination of Reliability Management Within the Marine Corps Acquisition Process

    National Research Council Canada - National Science Library

    Norcross, Marvin

    2002-01-01

    Combat system reliability is central to creating combat power, determining logistics supportability requirements and determining systems' total ownership costs, yet the Marine Corps typically monitors...

  3. Impact of the "faster better cheaper" requirements for satellites components/subsystems on SEP organisation and processes.

    Science.gov (United States)

    Pages, X.

    2000-03-01

    In the early 90's, SEP's environment in the satellites business quickly evolved from agency-funded programs (ESA, CNES, government) to a situation in which SEP has numerous private customers and where agencies behave as private companies, i.e. opening world-wide competition and requesting high involvement of SEP in non-recurring funding. SEP reacted quickly to face this challenge by improving not only their products but also the way these products are developed and produced. A new organization of the SEP/DPES unit (around 200 people) was set up at the end of 1994, with project-oriented guidelines such as streamlining the hierarchical levels in order to increase personnel involvement and motivation, favoring flexible project organizations over the previous somewhat rigid matrix organization, and reinforcing the commercial/marketing structure towards the new customers. Highly motivated lean teams were constituted around each project, drawing on expert partners inside the SEP/DPES departments. Project partners proved to plead in an efficient manner with their own management on behalf of the projects they were involved in. Eventually, this organization contributed, together with other progress actions, to a global performance improvement of SEP/DPES. Improved development processes were put into practice in 1995, among them design to cost, carefully selected internal preliminary studies, and long-term agreements with preferred subcontractors. The SEP/DPES ISO 9001 certification (mid-1998), which gives evidence of the satisfactory status of the SEP/DPES PA system, already helps to avoid costly compliance with numerous project-tailored PA requirements. New products have been developed and qualified since the mid-90's, on SEP funding (at least partial, sometimes total), following the processes and organization described above. Among SEP/DPES newly developed products, three examples are discussed more thoroughly. In the field of electrical propulsion, where SEP/DPES has gained expertise since the 60's, new developments started

  4. Statistical orientation fluctuations: constant angular momentum versus constant rotational frequency constraints

    Energy Technology Data Exchange (ETDEWEB)

    Goodman, A L [Tulane Univ., New Orleans, LA (United States)

    1992-08-01

    Statistical orientation fluctuations are calculated under two alternative assumptions: that the rotational frequency remains constant as the shape orientation fluctuates, and that the average angular momentum remains constant as the shape orientation fluctuates. (author). 2 refs., 3 figs.

  5. Identifying extensions required by RUP (Rational Unified Process) to comply with CMM (Capability Maturity Model) levels 2 and 3

    OpenAIRE

    Manzoni, Lisandra Vielmo; Price, Roberto Tom

    2003-01-01

    This paper describes an assessment of the Rational Unified Process (RUP) based on the Capability Maturity Model (CMM). For each key practice (KP) identified in each key process area (KPA) of CMM levels 2 and 3, the Rational Unified Process was assessed to determine whether it satisfied the KP or not. For each KPA, the percentage of the key practices supported was calculated, and the results were tabulated. The report includes considerations about the coverage of each key process area, describ...

  6. On the constants for some Sobolev imbeddings

    Directory of Open Access Journals (Sweden)

    Pizzocchero Livio

    2001-01-01

    We consider the imbedding inequality for the Sobolev space (or Bessel potential space) of given type and (integer or fractional) order. We write down upper bounds for the constants, using an argument previously applied in the literature in particular cases. We prove that the upper bounds computed in this way are in fact the sharp constants in certain cases, and exhibit the maximising functions. Furthermore, using convenient trial functions, we derive lower bounds on the constants; in many cases these are close to the previous upper bounds, as illustrated by a number of examples, thus characterizing the sharp constants with little uncertainty.

  7. On the constant-roll inflation

    Science.gov (United States)

    Yi, Zhu; Gong, Yungui

    2018-03-01

    The primordial power spectra of scalar and tensor perturbations during slow-roll inflation are usually calculated with the method of Bessel function approximation. For constant-roll or ultra-slow-roll inflation, the method of Bessel function approximation may be invalid. We compare the numerical results with the analytical results derived from the Bessel function approximation, and we find that they differ significantly on super-horizon scales if the constant slow-roll parameter η_H is not small. A more accurate method is needed for calculating the primordial power spectrum for constant-roll inflation.
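
    For readers unfamiliar with the terminology, the constant-roll condition is usually stated through the second Hubble slow-roll parameter; a standard form (background material, not quoted from the abstract) is

        \eta_H \equiv -\frac{\ddot{\phi}}{H\dot{\phi}} = \text{constant},

    with slow roll corresponding to |\eta_H| << 1 and ultra slow roll to \eta_H \simeq 3.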

  8. Scalar-tensor cosmology with cosmological constant

    International Nuclear Information System (INIS)

    Maslanka, K.

    1983-01-01

    The equations of the scalar-tensor theory of gravitation with a cosmological constant, in the case of a homogeneous and isotropic cosmological model, can be reduced to a dynamical system of three differential equations with unknown functions H = Ṙ/R, Θ = φ̇/φ, S = ε/φ. When new variables are introduced, the system becomes more symmetrical and cosmological solutions R(t), φ(t), ε(t) are found. It is shown that when a cosmological constant is introduced, a large class of solutions which depend also on the Dicke-Brans parameter can be obtained. Investigations of these solutions give general limits for the cosmological constant and the mean density of matter in the flat model. (author)

  9. Cosmological constant and advanced gravitational wave detectors

    International Nuclear Information System (INIS)

    Wang, Y.; Turner, E.L.

    1997-01-01

    Interferometric gravitational wave detectors could measure the frequency sweep of a binary inspiral (characterized by its chirp mass) to high accuracy. The observed chirp mass is the intrinsic chirp mass of the binary source multiplied by (1+z), where z is the redshift of the source. Assuming a nonzero cosmological constant, we compute the expected redshift distribution of observed events for an advanced LIGO detector. We find that the redshift distribution has a robust and sizable dependence on the cosmological constant; the data from advanced LIGO detectors could provide an independent measurement of the cosmological constant. copyright 1997 The American Physical Society

  10. Constant strength fuel-fuel cell

    International Nuclear Information System (INIS)

    Vaseen, V.A.

    1980-01-01

    A fuel cell is an electrochemical apparatus composed of a nonconsumable anode and cathode, an electrolyte, fuel, oxidant and controls. This invention guarantees the constant transfer of hydrogen atoms and their respective electrons, and thus a constant flow of power, by submergence of the negative electrode in a hydrogen-furnishing fuel of constant strength, where said fuel is an aqueous absorbed hydrocarbon such as ethanol or methanol. The objective is accomplished by recirculating the liquid fuel, as it is depleted in the cell, through membranes of a specific type which pass water molecules and reject the fuel molecules, thus concentrating the fuel for recycle use

  11. Authenticity in Teaching: A Constant Process of Becoming

    Science.gov (United States)

    Ramezanzadeh, Akram; Zareian, Gholamreza; Adel, Seyyed Mohammad Reza; Ramezanzadeh, Ramin

    2017-01-01

    This study probed the conceptualization of (in)authenticity in teaching and the way it could be enacted in pedagogical practices. The participants were a purposive sample of 20 Iranian university teachers. Data were collected using in-depth interviews, field notes, and observation. The collected data were analyzed through the lens of hermeneutic…

  12. Grain boundary cavitation under reversed constant stress

    International Nuclear Information System (INIS)

    Hales, R.

    1978-06-01

    The growth of grain boundary cavities by diffusion processes has been examined for cyclic stresses. It is found that the time required to grow a void by a predetermined amount (t_t) is always longer than the time required to shrink the same defect back to its original size (t_c) under reversed stress. The ratio t_c/t_t is a function of the magnitude of the applied stress and the tensile hold time. Similar calculations have been performed for gas-filled bubbles. Results similar to those for voids are found at long hold times, but a significantly different ratio of t_c/t_t is obtained at short times. (author)

  13. Sensitivity of molecular vibrational dynamics to energy exchange rate constants

    International Nuclear Information System (INIS)

    Billing, G D; Coletti, C; Kurnosov, A K; Napartovich, A P

    2003-01-01

    The sensitivity of molecular vibrational population dynamics, governing the CO laser operated in fundamental and overtone transitions, to vibration-to-vibration rate constants is investigated. With this aim, three rate constant sets have been used, differing in their completeness (i.e. accounting for single-quantum exchange only; for multi-quantum exchange with a limited number of rate constants obtained by semiclassical calculations; and, finally, for an exhaustive set of rate constants including asymmetric exchange processes as well) and in the employed interaction potential. The most complete set among these three is introduced in this paper. An earlier kinetic model was updated to include the new data. Comparison of the data produced by kinetic modelling with the above mentioned sets of rate constants shows that the vibrational distribution function, and, in particular, the CO overtone laser characteristics, are very sensitive to the choice of the model. The most complete model predicts slower evolution of the vibrational distribution, in qualitative agreement with experiments

  14. Software requirements

    CERN Document Server

    Wiegers, Karl E

    2003-01-01

    Without formal, verifiable software requirements, and an effective system for managing them, the programs that developers think they've agreed to build often will not be the same products their customers are expecting. In SOFTWARE REQUIREMENTS, Second Edition, requirements engineering authority Karl Wiegers amplifies the best practices presented in his original award-winning text, now a mainstay for anyone participating in the software development process. In this book, you'll discover effective techniques for managing the requirements engineering process all the way through the development cy

  15. Relationship between electrophilicity index, Hammett constant and ...

    Indian Academy of Sciences (India)

    Unknown

    Inter-relationships between the electrophilicity index (ω), Hammett constant (σp) and nucleus-independent chemical ... cess of DFT is that it provides simple working equations to elucidate ... compasses both the ability of an electrophile to ac-.

  16. Canonoid transformations and constants of motion

    International Nuclear Information System (INIS)

    Negri, L.J.; Oliveira, L.C.; Teixeira, J.M.

    1986-01-01

    The necessary and sufficient conditions for a canonoid transformation with respect to a given Hamiltonian are obtained in terms of the Lagrange brackets of the transformation. The relation of these conditions to the constants of motion is discussed. (Author) [pt

  17. An improved dosimeter having constant flow pump

    International Nuclear Information System (INIS)

    Baker, W.B.

    1980-01-01

    A dosimeter designed for individual use, which can be used to monitor toxic radon gas and the related toxic products of radon gas in mines, and which incorporates a constant air stream flowing through the dosimeter, is described. (U.K.)

  18. Interacting universes and the cosmological constant

    International Nuclear Information System (INIS)

    Alonso-Serrano, A.; Bastos, C.; Bertolami, O.; Robles-Pérez, S.

    2013-01-01

    In this Letter, the effects that an interaction scheme among universes can have on the values of their cosmological constants are studied. In the case of two interacting universes, the value of the cosmological constant of one of the universes becomes very close to zero at the expense of an increasing value of the cosmological constant of the partner universe. In the more general case of a chain of N interacting universes with periodic boundary conditions, the spectrum of the Hamiltonian splits into a large number of levels, each of them associated with a particular value of the cosmological constant, that can be occupied by single universes, revealing a collective behavior that plainly shows that the multiverse is much more than the mere sum of its parts

  19. Interacting universes and the cosmological constant

    Energy Technology Data Exchange (ETDEWEB)

    Alonso-Serrano, A. [Centro de Física “Miguel Catalán”, Instituto de Física Fundamental, Consejo Superior de Investigaciones Científicas, Serrano 121, 28006 Madrid (Spain); Estación Ecológica de Biocosmología, Pedro de Alvarado 14, 06411 Medellín (Spain); Bastos, C. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal); Bertolami, O. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Avenida Rovisco Pais 1, 1049-001 Lisboa (Portugal); Departamento de Física e Astronomia, Faculdade de Ciências da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Robles-Pérez, S., E-mail: salvarp@imaff.cfmac.csic.es [Centro de Física “Miguel Catalán”, Instituto de Física Fundamental, Consejo Superior de Investigaciones Científicas, Serrano 121, 28006 Madrid (Spain); Estación Ecológica de Biocosmología, Pedro de Alvarado 14, 06411 Medellín (Spain); Física Teórica, Universidad del País Vasco, Apartado 644, 48080 Bilbao (Spain)

    2013-02-12

    In this Letter, the effects that an interaction scheme among universes can have on the values of their cosmological constants are studied. In the case of two interacting universes, the value of the cosmological constant of one of the universes becomes very close to zero at the expense of an increasing value of the cosmological constant of the partner universe. In the more general case of a chain of N interacting universes with periodic boundary conditions, the spectrum of the Hamiltonian splits into a large number of levels, each of them associated with a particular value of the cosmological constant, that can be occupied by single universes, revealing a collective behavior that plainly shows that the multiverse is much more than the mere sum of its parts.

  20. Constant conditional entropy and related hypotheses

    International Nuclear Information System (INIS)

    Ferrer-i-Cancho, Ramon; Dębowski, Łukasz; Moscoso del Prado Martín, Fermín

    2013-01-01

    Constant entropy rate (conditional entropies must remain constant as the sequence length increases) and uniform information density (conditional probabilities must remain constant as the sequence length increases) are two information theoretic principles that are argued to underlie a wide range of linguistic phenomena. Here we revise the predictions of these principles in the light of Hilberg’s law on the scaling of conditional entropy in language and related laws. We show that constant entropy rate (CER) and two interpretations for uniform information density (UID), full UID and strong UID, are inconsistent with these laws. Strong UID implies CER but the reverse is not true. Full UID, a particular case of UID, leads to costly uncorrelated sequences that are totally unrealistic. We conclude that CER and its particular cases are incomplete hypotheses about the scaling of conditional entropies. (letter)

  1. New perspectives on constant-roll inflation

    Science.gov (United States)

    Cicciarella, Francesco; Mabillard, Joel; Pieroni, Mauro

    2018-01-01

    We study constant-roll inflation using the β-function formalism. We show that the constant rate of the inflaton roll is translated into a first order differential equation for the β-function which can be solved easily. The solutions to this equation correspond to the usual constant-roll models. We then construct, by perturbing these exact solutions, more general classes of models that satisfy the constant-roll equation asymptotically. In the case of an asymptotic power law solution, these corrections naturally provide an end to the inflationary phase. Interestingly, while from a theoretical point of view (in particular in terms of the holographic interpretation) these models are intrinsically different from standard slow-roll inflation, they may have phenomenological predictions in good agreement with present cosmological data.

  2. Hydrolysis and formation constants at 25°C

    International Nuclear Information System (INIS)

    Phillips, S.L.

    1982-05-01

    A database consisting of hydrolysis and formation constants for about 20 metals associated with the disposal of nuclear waste is given. Complexing ligands for the various ionic species of these metals include OH, F, Cl, SO4, PO4 and CO3. Table 1 consists of tabulated calculated and experimental values of log Kxy, mainly at 25°C and various ionic strengths, together with references to the origin of the data. Table 2 consists of a column of recommended stability constants at 25°C and zero ionic strength, tabulated in the column headed log Kxy(0); other columns contain coefficients for an extended Debye-Huckel equation to permit calculation of stability constants up to an ionic strength of 3, and up to an ionic strength of 0.7 using the Davies equation. Selected stability constants calculated with these coefficients for various ionic strengths agree to an average of ±2% when compared with published experimental and calculated values
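
    Since the abstract mentions the Davies equation for extrapolating constants to finite ionic strength, a minimal sketch of that correction may be useful. The Debye-Huckel parameter A = 0.509 and the example charges and log K below are assumptions for illustration, not values taken from the tables described.

        import math

        A_DH = 0.509  # Debye-Huckel A parameter for water at 25 C (assumed value)

        def davies_log_gamma(z, ionic_strength):
            """Davies equation: log10 activity coefficient of an ion of charge z."""
            s = math.sqrt(ionic_strength)
            return -A_DH * z * z * (s / (1.0 + s) - 0.3 * ionic_strength)

        def log_k_at_i(log_k0, reactant_charges, product_charges, ionic_strength):
            """Convert log K at zero ionic strength to a conditional constant at I."""
            corr = sum(davies_log_gamma(z, ionic_strength) for z in reactant_charges)
            corr -= sum(davies_log_gamma(z, ionic_strength) for z in product_charges)
            return log_k0 + corr

        # Hypothetical example: M2+ + OH- = MOH+ with log K(0) = 6.0, at I = 0.1
        print(log_k_at_i(6.0, [2, -1], [1], 0.1))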

  3. Wormholes and the cosmological constant problem.

    Science.gov (United States)

    Klebanov, I.

    The author reviews the cosmological constant problem and the recently proposed wormhole mechanism for its solution. Summation over wormholes in the Euclidean path integral for gravity turns all the coupling parameters into dynamical variables, sampled from a probability distribution. A formal saddle point analysis results in a distribution with a sharp peak at the cosmological constant equal to zero, which appears to solve the cosmological constant problem. He discusses the instabilities of the gravitational Euclidean path integral and the difficulties with its interpretation. He presents an alternate formalism for baby universes, based on the "third quantization" of the Wheeler-De Witt equation. This approach is analyzed in a minisuperspace model for quantum gravity, where it reduces to simple quantum mechanics. Once again, the coupling parameters become dynamical. Unfortunately, the a priori probability distribution for the cosmological constant and other parameters is typically a smooth function, with no sharp peaks.

  4. Building evolutionary architectures support constant change

    CERN Document Server

    Ford, Neal; Kua, Patrick

    2017-01-01

    The software development ecosystem is constantly changing, providing a constant stream of new tools, frameworks, techniques, and paradigms. Over the past few years, incremental developments in core engineering practices for software development have created the foundations for rethinking how architecture changes over time, along with ways to protect important architectural characteristics as it evolves. This practical guide ties those parts together with a new way to think about architecture and time.

  5. Nuclei quadrupole coupling constants in diatomic molecule

    International Nuclear Information System (INIS)

    Ivanov, A.I.; Rebane, T.K.

    1993-01-01

    An approximate relationship between the quadrupole interaction constants of the nuclei in a diatomic molecule is found. It makes it possible to establish the proportionality of the vibrational-rotational corrections to these constants for both nuclei in the molecule. Similar results were obtained for the factors of electric dipole-quadrupole screening of the nuclei. The applicability of these relationships is demonstrated for the example of the lithium deuteride molecule. 4 refs., 1 tab

  6. A model for solar constant secular changes

    Science.gov (United States)

    Schatten, Kenneth H.

    1988-01-01

    In this paper, contrast models for solar active region and global photospheric features are used to reproduce the observed Active Cavity Radiometer and Earth Radiation Budget secular trends reasonably well. A prediction for the next decade of solar constant variations is made using the model. Secular trends in the solar constant obtained from the present model support the view that the Maunder Minimum may be related to the Little Ice Age of the 17th century.

  7. A quadri-constant fraction discriminator

    International Nuclear Information System (INIS)

    Wang Wei; Gu Zhongdao

    1992-01-01

    A quad constant fraction (amplitude and rise time compensation) discriminator circuit is described, based on the ECL high-speed dual comparator AD9687. The CFD (ARCD) is of the constant fraction timing type (the amplitude and rise time compensation timing type) and employs a leading edge discriminator to eliminate false triggers caused by noise. A timing walk measurement indicates a timing walk of less than ±150 ps from -50 mV to -5 V

  8. Renormalization group equations with multiple coupling constants

    International Nuclear Information System (INIS)

    Ghika, G.; Visinescu, M.

    1975-01-01

    The main purpose of this paper is to study the renormalization group equations of a renormalizable field theory with multiple coupling constants. A method for the investigation of the asymptotic stability is presented. This method is applied to a gauge theory with Yukawa and self-quartic couplings of scalar mesons in order to find the domains of asymptotic freedom. An asymptotic expansion for the solutions which tend to the origin of the coupling constants is given

  9. Influence of the Gilbert damping constant on the flux rise time of write head fields

    International Nuclear Information System (INIS)

    Ertl, Othmar; Schrefl, Thomas; Suess, Dieter; Schabes, Manfred E.

    2005-01-01

    Magnetic recording at fast data rates requires write heads with rapid rise times of the magnetic flux during the write process. We present three-dimensional (3D) micromagnetic finite element calculations of an entire ring head, including the 3D coil geometry, during the writing of magnetic bits in granular media. The simulations demonstrate how input current profiles translate into magnetization processes in the head, which in turn generate the write head field. The flux rise time depends significantly on the Gilbert damping constant of the head material. Low damping causes incoherent magnetization processes, leading to long rise times and low head fields. High damping leads to coherent reversal of the magnetization in the head. As a consequence, the gap region can be quickly saturated, which causes high head fields with short rise times
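
    The role of the damping constant can be made explicit through the equation of motion integrated in such micromagnetic simulations; the standard Landau-Lifshitz-Gilbert form (background material, not quoted from the abstract) is

        \frac{d\mathbf{M}}{dt} = -\gamma\,\mathbf{M}\times\mathbf{H}_{\mathrm{eff}} + \frac{\alpha}{M_s}\,\mathbf{M}\times\frac{d\mathbf{M}}{dt},

    where a larger Gilbert damping constant α pulls the magnetization toward the effective field more quickly, consistent with the shorter rise times reported above for high damping.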

  10. Linear free energy relationships between aqueous phase hydroxyl radical reaction rate constants and free energy of activation.

    Science.gov (United States)

    Minakata, Daisuke; Crittenden, John

    2011-04-15

    The hydroxyl radical (HO•) is a strong oxidant that reacts with electron-rich sites on organic compounds and initiates complex radical chain reactions in aqueous phase advanced oxidation processes (AOPs). Computer-based kinetic modeling requires a reaction pathway generator and predictions of the associated reaction rate constants. Previously, we reported a reaction pathway generator that can enumerate the most important elementary reactions for aliphatic compounds. For the reaction rate constant predictor, we develop linear free energy relationships (LFERs) between aqueous phase literature-reported HO• reaction rate constants and theoretically calculated free energies of activation for H-atom abstraction from a C-H bond and HO• addition to alkenes. The theoretical method uses ab initio quantum mechanical calculations, Gaussian 1-3, for gas phase reactions and a solvation method, COSMO-RS theory, to estimate the impact of water. Theoretically calculated free energies of activation are found to be within approximately ±3 kcal/mol of experimental values. Considering the errors that arise from quantum mechanical calculations and experiments, this is within acceptable error. The established LFERs are used to predict the HO• reaction rate constants to within a factor of 5 of the experimental values. This approach may be applied to other reaction mechanisms to establish a library of rate constant predictions for kinetic modeling of AOPs.
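
    As a rough illustration of how such an LFER is applied (the slope and intercept below are placeholders, not the published fit), a predicted rate constant and the reported factor-of-5 uncertainty band can be computed as follows.

        def lfer_log_k(delta_g_act, slope=-0.5, intercept=10.0):
            """Generic LFER: log10(k) as a linear function of the calculated free
            energy of activation (kcal/mol). Coefficients are placeholders."""
            return slope * delta_g_act + intercept

        dg = 4.0                      # hypothetical free energy of activation, kcal/mol
        k = 10 ** lfer_log_k(dg)
        # The paper reports predictions within roughly a factor of 5 of experiment
        print(f"k ~ {k:.2e} M^-1 s^-1, plausible range {k/5:.2e} to {k*5:.2e}")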

  11. RNA structure and scalar coupling constants

    Energy Technology Data Exchange (ETDEWEB)

    Tinoco, I. Jr.; Cai, Z.; Hines, J.V.; Landry, S.M.; SantaLucia, J. Jr.; Shen, L.X.; Varani, G. [Univ. of California, Berkeley, CA (United States)

    1994-12-01

    Signs and magnitudes of scalar coupling constants (spin-spin splittings) comprise a very large amount of data that can be used to establish the conformations of RNA molecules. Proton-proton and proton-phosphorus splittings have been used the most, but the availability of ¹³C- and ¹⁵N-labeled molecules allows many more coupling constants to be used for determining conformation. We will systematically consider the torsion angles that characterize a nucleotide unit and the coupling constants that depend on the values of these torsion angles. Karplus-type equations have been established relating many three-bond coupling constants to torsion angles. However, one- and two-bond coupling constants can also depend on conformation. Serianni and coworkers measured carbon-proton coupling constants in ribonucleosides and have calculated their values as a function of conformation. The signs of two-bond couplings can be very useful because it is easier to measure a sign than an accurate magnitude.
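
    The torsion-angle dependence mentioned here is usually captured by Karplus-type equations; the sketch below evaluates one commonly used H-C-C-H parameterization (the A, B, C coefficients are an example set from the general literature, not taken from this article).

        import math

        def karplus_3j(theta_deg, a=7.76, b=-1.10, c=1.40):
            """Three-bond coupling 3J (Hz) versus torsion angle, Karplus-type form."""
            th = math.radians(theta_deg)
            return a * math.cos(th) ** 2 + b * math.cos(th) + c

        for angle in (0, 60, 90, 180):
            print(angle, round(karplus_3j(angle), 2))   # 8.06, 2.79, 1.40, 10.26 Hz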

  12. FOREWORD: International determination of the Avogadro constant International determination of the Avogadro constant

    Science.gov (United States)

    Massa, Enrico; Nicolaus, Arnold

    2011-04-01

    This issue of Metrologia collects papers about the results of an international research project aimed at the determination of the Avogadro constant, NA, by counting the atoms in a silicon crystal highly enriched with the isotope 28Si. Fifty years ago, Egidi [1] thought about realizing an atomic mass standard. In 1965, Bonse and Hart [2] operated the first x-ray interferometer, thus paving the way to the achievement of Egidi's dream, and soon Deslattes et al [3] completed the first counting of the atoms in a natural silicon crystal. The present project, outlined by Zosi [4] in 1983, began in 2004 by combining the experiences and capabilities of the BIPM, INRIM, IRMM, NIST, NPL, NMIA, NMIJ and PTB. The start signal, ratified by a memorandum of understanding, was a contract for the production of a silicon crystal highly enriched with 28Si. The enrichment process was undertaken by the Central Design Bureau of Machine Building in St Petersburg. Subsequently, a polycrystal was grown in the Institute of Chemistry of High-Purity Substances of the Russian Academy of Sciences in Nizhny Novgorod and a 28Si boule was grown and purified by the Leibniz-Institut für Kristallzüchtung in Berlin. Isotope enrichment made it possible to apply isotope dilution mass spectroscopy, to determine the Avogadro constant with unprecedented accuracy, and to fulfil Egidi's dream. To convey Egidi's 'fantasy' into practice, two 28Si kilogram prototypes shaped as quasi-perfect spheres were manufactured by the Australian Centre for Precision Optics; their isotopic composition, molar mass, mass, volume, density and lattice parameter were accurately determined and their surfaces were chemically and physically characterized at the atomic scale. The paper by Andreas et al reviews the work carried out; it collates all the findings and illustrates how Avogadro's constant was obtained. Impurity concentration and gradients in the enriched crystal were measured by infrared spectroscopy and taken into

  13. Compactification over coset spaces with torsion and vanishing cosmological constant

    International Nuclear Information System (INIS)

    Batakis, N.A.

    1989-01-01

    We consider the compactification of ten-dimensional Einstein-Yang-Mills theories over non-symmetric, six-dimensional homogeneous coset spaces with torsion. We examine the Einstein-Yang-Mills equations of motion, requiring a vanishing cosmological constant in ten and four dimensions, and we present examples of compactifying solutions. It appears that the introduction of more than one radius in the coset space, when possible, may be mandatory for the existence of compactifying solutions. (orig.)

  14. Compactification over coset spaces with torsion and vanishing cosmological constant

    Energy Technology Data Exchange (ETDEWEB)

    Batakis, N.A.; Farakos, K.; Koutsoumbas, G.; Zoupanos, G.; Kapetanakis, D.

    1989-04-13

    We consider the compactification of ten-dimensional Einstein-Yang-Mills theories over non-symmetric, six-dimensional homogeneous coset spaces with torsion. We examine the Einstein-Yang-Mills equations of motion, requiring a vanishing cosmological constant in ten and four dimensions, and we present examples of compactifying solutions. It appears that the introduction of more than one radius in the coset space, when possible, may be mandatory for the existence of compactifying solutions.

  15. Sulfur-oxidizing autotrophic and mixotrophic denitrification processes for drinking water treatment: elimination of excess sulfate production and alkalinity requirement.

    Science.gov (United States)

    Sahinkaya, Erkan; Dursun, Nesrin

    2012-09-01

    This study evaluated the elimination of the alkalinity requirement and excess sulfate generation of the sulfur-based autotrophic denitrification process by stimulating a simultaneous autotrophic and heterotrophic (mixotrophic) denitrification process in a column bioreactor through methanol supplementation. The denitrification performances of the sulfur-based autotrophic and mixotrophic processes were also compared. In the autotrophic process, the acidity produced by denitrifying sulfur-oxidizing bacteria was neutralized by external NaHCO3 supplementation. After stimulating the mixotrophic denitrification process, the alkalinity need of the autotrophic process was satisfied by the alkalinity produced by the heterotrophic denitrifiers. Decreasing and finally eliminating the external alkalinity supplementation did not adversely affect the process performance. Complete denitrification of 75 mg L⁻¹ NO₃-N under mixotrophic conditions at a 4 h hydraulic retention time was achieved without external alkalinity supplementation and with an effluent sulfate concentration lower than the drinking water guideline value of 250 mg L⁻¹. The denitrification rate of the mixotrophic process (0.45 g NO₃-N L⁻¹ d⁻¹) was higher than that of the autotrophic one (0.3 g NO₃-N L⁻¹ d⁻¹). Batch studies showed that the sulfur-based autotrophic nitrate reduction rate increased with increasing initial nitrate concentration, and transient accumulation of nitrite was observed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Tests at constant extension velocity (CERT) for the evaluation of environmentally assisted cracking

    International Nuclear Information System (INIS)

    Arganis J, C.R.

    1994-01-01

    The constant extension rate test (CERT) is firmly established as a technique for the study of environmental cracking (stress corrosion and hydrogen embrittlement) and is widely used, mainly in mechanistic studies. In the CERT test, an increasing load is applied to a specimen while the extension rate is held constant so that corrosion can interact with the deformation process. The type of cracking and the ductility measured under the test conditions are compared with those of a specimen strained in an inert medium. Required equipment: 1) A loading mechanism capable of controlling the elongation of test specimens in the range of 1×10⁻⁵ to 1×10⁻⁷ inch/inch per second and of holding such an elongation rate constant. 2) A suitable standard specimen (ASTM standard A-370). 3) A chamber or cell for the environment in which the chemical composition of the solution, the gas composition, the pressure, the temperature and the electrochemical potential can be controlled in order to simulate the anticipated service conditions. The cell must allow mechanical access of the test specimen to the load train of the machine. (Author)

  17. Optimal design of constant-stress accelerated degradation tests using the M-optimality criterion

    International Nuclear Information System (INIS)

    Wang, Han; Zhao, Yu; Ma, Xiaobing; Wang, Hongyu

    2017-01-01

    In this paper, we propose the M-optimality criterion for designing constant-stress accelerated degradation tests (ADTs). The newly proposed criterion concentrates on the degradation mechanism equivalence rather than evaluation precision or prediction accuracy which is usually considered in traditional optimization criteria. Subject to the constraints of total sample number, test termination time as well as the stress region, an optimum constant-stress ADT plan is derived by determining the combination of stress levels and the number of samples allocated to each stress level, when the degradation path comes from inverse Gaussian (IG) process model with covariates and random effects. A numerical example is presented to verify the robustness of our proposed optimum plan and compare its efficiency with other test plans. Results show that, with a slightly relaxed requirement of evaluation precision and prediction accuracy, our proposed optimum plan reduces the dispersion of the estimated acceleration factor between the usage stress level and a higher accelerated stress level, which makes an important contribution to reliability demonstration and assessment tests. - Highlights: • We establish the necessary conditions for degradation mechanism equivalence of ADTs. • We propose the M-optimality criterion for designing constant-stress ADT plans. • The M-optimality plan reduces the dispersion of the estimated accelerated factors. • An electrical connector with its stress relaxation data is used for illustration.

  18. Method for Determining the Time Constants Characterizing the Intensity of Steel Mixing in Continuous Casting Tundish

    Directory of Open Access Journals (Sweden)

    Pieprzyca J.

    2015-04-01

    A common method used to identify the hydrodynamic phenomena occurring in the tundish of a continuous casting (CC) device is to determine residence time distribution (RTD) curves. These curves allow the way the liquid steel flows and mixes in the tundish to be determined. They can be identified either as the result of numerical simulation or experimentally, by studying physical models. A particular problem is to make this identification objective when conducting physical research. It is necessary to precisely determine the time constants which characterize the investigated phenomena, based on the measured change of the tracer concentration in the model liquid volume. The mathematical description of the determined curves is based on approximate differential equations formulated in the theory of fluid mechanics. Solving these equations to calculate the time constants requires special software and is very time-consuming. To improve the process, a method was created to calculate the time constants with the use of automation elements. It allows the problem to be solved using an algebraic method, which improves the interpretation of the results of physical modeling research.
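
    One way time constants can be extracted from a measured tracer curve is through the standard RTD moments; the sketch below (illustrative only, not the authors' algebraic method) computes the mean residence time and variance from sampled concentration data.

        import numpy as np

        def rtd_moments(t, c):
            """Mean residence time and variance from a tracer curve C(t)."""
            t = np.asarray(t, dtype=float)
            c = np.asarray(c, dtype=float)
            e = c / np.trapz(c, t)                 # normalized RTD, E(t)
            t_mean = np.trapz(t * e, t)            # first moment
            variance = np.trapz((t - t_mean) ** 2 * e, t)
            return t_mean, variance

        # Hypothetical tracer response sampled every 2 s at the tundish outlet
        t = np.arange(0.0, 120.0, 2.0)
        c = np.exp(-((t - 30.0) / 15.0) ** 2)
        print(rtd_moments(t, c))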

  19. Innovative E/E developments with enhanced quality requirements along the process chain; Innovative E/E Entwicklungen mit hohen Qualitaetsanforderungen entlang der Prozesskette

    Energy Technology Data Exchange (ETDEWEB)

    Tappe, Robert [Audi AG, Ingolstadt (Germany)

    2010-07-01

    The proportion of electronics in motor vehicles is still steadily increasing and is constantly creating new challenges for the Technical Development departments. They must generate innovative products while the product cycles are getting shorter and shorter. The new developments, in turn, are also making great demands on the production - the construction process must be efficient, the cycle times are getting shorter and shorter and the quality standards are rising. In addition, our after-sales service staff must increasingly face the challenges of servicing vehicles from very different generations of electronics quickly and reliably while at the same time keeping the costs low for our customers. And above all this, we are all facing the challenge that we must inspire customer confidence in our products for many years and all over the world - by offering reliability, functionality and top-level design. (orig.)

  20. Molecular dynamics simulations of solutions at constant chemical potential

    Science.gov (United States)

    Perego, C.; Salvalaglio, M.; Parrinello, M.

    2015-04-01

    Molecular dynamics studies of chemical processes in solution are of great value in a wide spectrum of applications, which range from nano-technology to pharmaceutical chemistry. However, these calculations are affected by severe finite-size effects, such as the solution being depleted as the chemical process proceeds, which influence the outcome of the simulations. To overcome these limitations, one must allow the system to exchange molecules with a macroscopic reservoir, thus sampling a grand-canonical ensemble. Despite the fact that different remedies have been proposed, this still represents a key challenge in molecular simulations. In the present work, we propose the Constant Chemical Potential Molecular Dynamics (CμMD) method, which introduces an external force that controls the environment of the chemical process of interest. This external force, drawing molecules from a finite reservoir, maintains the chemical potential constant in the region where the process takes place. We have applied the CμMD method to the paradigmatic case of urea crystallization in aqueous solution. As a result, we have been able to study crystal growth dynamics under constant supersaturation conditions and to extract growth rates and free-energy barriers.
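
    As a purely schematic picture of the feedback idea described above (the shape function, parameter names and values are assumptions for illustration, not the paper's exact expressions), the applied force can be thought of as a concentration-error term localized near the boundary of the reservoir.

        import math

        def feedback_force(z, n_instant, n_target, k=5.0, z_force=4.0, sigma=0.2):
            """Schematic concentration-feedback force: proportional to the deviation
            of the instantaneous density in the control region from its target,
            and applied only near the force region at z = z_force."""
            g = math.exp(-((z - z_force) ** 2) / (2.0 * sigma ** 2))
            return k * (n_instant - n_target) * g

        # Density 10% above target, molecule located right at the force region
        print(feedback_force(4.0, 1.1, 1.0))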

  1. QCD Axion Dark Matter with a Small Decay Constant.

    Science.gov (United States)

    Co, Raymond T; Hall, Lawrence J; Harigaya, Keisuke

    2018-05-25

    The QCD axion is a good dark matter candidate. The observed dark matter abundance can arise from misalignment or defect mechanisms, which generically require an axion decay constant f_{a}∼O(10^{11})  GeV (or higher). We introduce a new cosmological origin for axion dark matter, parametric resonance from oscillations of the Peccei-Quinn symmetry breaking field, that requires f_{a}∼(10^{8}-10^{11})  GeV. The axions may be warm enough to give deviations from cold dark matter in large scale structure.

  2. QCD Axion Dark Matter with a Small Decay Constant

    Science.gov (United States)

    Co, Raymond T.; Hall, Lawrence J.; Harigaya, Keisuke

    2018-05-01

    The QCD axion is a good dark matter candidate. The observed dark matter abundance can arise from misalignment or defect mechanisms, which generically require an axion decay constant f_a ∼ O(10^11) GeV (or higher). We introduce a new cosmological origin for axion dark matter, parametric resonance from oscillations of the Peccei-Quinn symmetry breaking field, that requires f_a ∼ (10^8 - 10^11) GeV. The axions may be warm enough to give deviations from cold dark matter in large scale structure.

  3. Towards elicitation of users requirements for hospital information system: from a care process modelling technique to a web based collaborative tool.

    OpenAIRE

    Staccini, Pascal M.; Joubert, Michel; Quaranta, Jean-Francois; Fieschi, Marius

    2002-01-01

    Growing attention is being given to the use of process modeling methodology for user requirements elicitation. In the analysis phase of hospital information systems, the usefulness of care-process models has been investigated to evaluate their conceptual applicability and practical understandability by clinical staff and members of user teams. Nevertheless, there still remains a gap between users and analysts in their mutual ability to share conceptual views and vocabulary, keeping the meaning...

  4. Online feedback-controlled renal constant infusion clearances in rats.

    Science.gov (United States)

    Schock-Kusch, Daniel; Shulhevich, Yury; Xie, Qing; Hesser, Juergen; Stsepankou, Dzmitry; Neudecker, Sabine; Friedemann, Jochen; Koenig, Stefan; Heinrich, Ralf; Hoecklin, Friederike; Pill, Johannes; Gretz, Norbert

    2012-08-01

    Constant infusion clearance techniques using exogenous renal markers are considered the gold standard for assessing the glomerular filtration rate. Here we describe a constant infusion clearance method in rats allowing the real-time monitoring of steady-state conditions using an automated closed-loop approach based on the transcutaneous measurement of the renal marker FITC-sinistrin. In order to optimize parameters to reach steady-state conditions as fast as possible, a Matlab-based simulation tool was established. Based on this, a real-time feedback-regulated approach for constant infusion clearance monitoring was developed. This was validated by determining hourly FITC-sinistrin plasma concentrations and the glomerular filtration rate in healthy and unilaterally nephrectomized rats. The transcutaneously assessed FITC-sinistrin fluorescence signal was found to reflect the plasma concentration. Our method allows the precise determination of the onset of steady-state marker concentration. Moreover, the steady state can be monitored and controlled in real time for several hours. This procedure is simple to perform since no urine samples and only one blood sample are required. Thus, we developed a real-time feedback-based system for optimal regulation and monitoring of a constant infusion clearance technique.
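
    The clearance calculation behind a constant infusion protocol is simple once steady state is reached: marker elimination balances infusion, so clearance equals the infusion rate divided by the steady-state plasma concentration. The numbers in the sketch below are hypothetical.

        def steady_state_clearance(infusion_rate_ug_min, plasma_conc_ug_ml):
            """Constant-infusion clearance (ml/min) at steady state."""
            return infusion_rate_ug_min / plasma_conc_ug_ml

        # Hypothetical: 20 ug/min FITC-sinistrin, steady-state plasma level 8 ug/ml
        print(steady_state_clearance(20.0, 8.0), "ml/min")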

  5. Constant-roll (quasi-)linear inflation

    Science.gov (United States)

    Karam, A.; Marzola, L.; Pappas, T.; Racioppi, A.; Tamvakis, K.

    2018-05-01

    In constant-roll inflation, the scalar field that drives the accelerated expansion of the Universe is rolling down its potential at a constant rate. Within this framework, we highlight the relations between the Hubble slow-roll parameters and the potential ones, studying in detail the case of a single-field Coleman-Weinberg model characterised by a non-minimal coupling of the inflaton to gravity. With respect to the exact constant-roll predictions, we find that assuming an approximate slow-roll behaviour yields a difference of Δr = 0.001 in the tensor-to-scalar ratio prediction. Such a discrepancy is in principle testable by future satellite missions. As for the scalar spectral index n_s, we find that the existing 2-σ bound constrains the value of the non-minimal coupling to ξ_φ ~ 0.29–0.31 in the model under consideration.

  6. Cosmological constant is a conserved charge

    Science.gov (United States)

    Chernyavsky, Dmitry; Hajian, Kamal

    2018-06-01

    The cosmological constant can always be considered as the on-shell value of a top form in gravitational theories. The top form is the field strength of a gauge field, and the theory enjoys a gauge symmetry. We show that the cosmological constant is the charge of the global part of the gauge symmetry, and is conserved irrespective of the dynamics of the metric and other fields. In addition, we introduce its conjugate chemical potential, and prove the generalized first law of thermodynamics which includes variation of the cosmological constant as a conserved charge. We discuss how our new term in the first law is related to the volume-pressure term. In parallel with the seminal Wald entropy, this analysis suggests that pressure can also be considered as a conserved charge.

  7. Fast optimization algorithms and the cosmological constant

    Science.gov (United States)

    Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad

    2017-11-01

    Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10⁻¹²⁰ in a randomly generated 10⁹-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.

  8. Conformally invariant braneworld and the cosmological constant

    International Nuclear Information System (INIS)

    Guendelman, E.I.

    2004-01-01

    A six-dimensional braneworld scenario based on a model describing the interaction of gravity, gauge fields and 3+1 branes in a conformally invariant way is described. The action of the model is defined using a measure of integration built of degrees of freedom independent of the metric. There is no need to fine tune any bulk cosmological constant or the tension of the two (in the scenario described here) parallel branes to obtain zero cosmological constant, the only solutions are those with zero 4D cosmological constant. The two extra dimensions are compactified in a 'football' fashion and the branes lie on the two opposite poles of the compact 'football-shaped' sphere

  9. Vanishing cosmological constant in elementary particles theory

    International Nuclear Information System (INIS)

    Pisano, F.; Tonasse, M.D.

    1997-01-01

    The quest for a vanishing cosmological constant is considered in the simplest anomaly-free chiral gauge extension of the electroweak standard model, where the new physics is limited to a well defined additional flavordynamics above the Fermi scale, namely up to a few TeV by matching the gauge coupling constants at the electroweak scale, and with an extended Higgs structure. In contrast to the electroweak standard model, it is shown how the extended scalar sector of the theory allows a vanishing or a very small cosmological constant. The details of the cancellation mechanism are presented. At accessible energies the theory is indistinguishable from the standard model of elementary particles and it is in agreement with all existing data. (author). 32 refs.

  10. Stability constants for silicate adsorbed to ferrihydrite

    DEFF Research Database (Denmark)

    Hansen, Hans Christian Bruun; Wetche, T.P.; Raulund-Rasmussen, Karsten

    1994-01-01

    Intrinsic surface acidity constants (K(a1)intr, K(a2)intr) and the surface complexation constant for adsorption of orthosilicate onto synthetic ferrihydrite (K(Si) for the complex ≡FeOSi(OH)3) have been determined from acid/base titrations in 0.001-0.1 m NaClO4 electrolytes and silicate adsorption experiments in 0.01 m NaNO3 electrolyte (pH 3-6). The surface equilibrium constants were calculated according to the two-layer model by Dzombak & Morel (1990). Near equilibrium between protons/hydroxyls in solution and the ferrihydrite surface was obtained within minutes while equilibration with silicate

  11. Derivation of the optical constants of anisotropic

    Science.gov (United States)

    Aronson, J. R.; Emslie, A. G.; Smith, E. M.; Strong, P. F.

    1985-07-01

    This report concerns the development of methods for obtaining the optical constants of anisotropic crystals of the triclinic and monoclinic systems. The principal method used, classical dispersion theory, is adapted to these crystal systems by extending the Lorentz line parameters to include the angles characterizing the individual resonances, and by replacing the dielectric constant by a dielectric tensor. The sample crystals are gypsum, orthoclase and chalcanthite. The derived optical constants are shown to be suitable for modeling the optical properties of particulate media in the infrared spectral region. For those materials where single crystals of suitable size are not available, an extension of a previously used method is applied to alabaster, a polycrystalline material of the monoclinic crystal system.

  12. Effects of quantum entropy on bag constant

    International Nuclear Information System (INIS)

    Miller, D.E.; Tawfik, A.

    2012-01-01

    The effects of quantum entropy on the bag constant are studied at low temperatures and for small chemical potentials. The inclusion of the quantum entropy of the quarks in the equation of state provides the hadronic bag with an additional heat which causes a decrease in the effective latent heat inside the bag. We have considered two types of baryonic bags, Δ and Ω⁻. In both cases we have found that the bag constant without the quantum entropy hardly changes with temperature and quark chemical potential. The contribution from the quantum entropy to the equation of state clearly decreases the value of the bag constant. Furthermore, we construct state densities for quarks using the 'Thomas-Fermi model' and take into consideration a thermal potential for the interaction. (author)

  13. Performance evaluation of wideband bio-impedance spectroscopy using constant voltage source and constant current source

    International Nuclear Information System (INIS)

    Mohamadou, Youssoufa; Oh, Tong In; Wi, Hun; Sohal, Harsh; Farooq, Adnan; Woo, Eung Je; McEwan, Alistair Lee

    2012-01-01

    Current sources are widely used in bio-impedance spectroscopy (BIS) measurement systems to maximize current injection for increased signal to noise while keeping within medical safety specifications. High-performance current sources based on the Howland current pump with optimized impedance converters are able to minimize stray capacitance of the cables and setup. This approach is limited at high frequencies primarily due to the deteriorated output impedance of the constant current source when situated in a real measurement system. For this reason, voltage sources have been suggested, but they require a current sensing resistor, and the SNR reduces at low impedance loads due to the lower current required to maintain constant voltage. In this paper, we compare the performance of a current source-based BIS and a voltage source-based BIS, which use common components. The current source BIS is based on a Howland current pump and generalized impedance converters to maintain a high output impedance of more than 1 MΩ at 2 MHz. The voltage source BIS is based on voltage division between an internal current sensing resistor (R_s) and an external sample. To maintain high SNR, R_s is varied so that the source voltage is divided more or less equally. In order to calibrate the systems, we measured the transfer function of the BIS systems with several known resistor and capacitor loads. From this we may estimate the resistance and capacitance of biological tissues using the least-squares method to minimize error between the measured transimpedance excluding the system transfer function and that from an impedance model. When tested on realistic loads including discrete resistors and capacitors, and saline and agar phantoms, the voltage source-based BIS system had a wider bandwidth of 10 Hz to 2.2 MHz with less than 1% deviation from the expected spectra compared to more than 10% with the current source. The voltage source also showed an SNR of at least 60 dB up to 2.2 MHz
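
    The voltage-source measurement described here amounts to a complex voltage divider between R_s and the sample; a minimal sketch of that arithmetic (variable names and values are illustrative) is shown below.

        def sample_impedance(v_source, v_sample, r_sense):
            """Sample impedance from the divider formed by R_s and the load.
            Complex inputs carry magnitude and phase."""
            i = (v_source - v_sample) / r_sense   # current through the series path
            return v_sample / i

        # 1 V source, 0.45 V measured across the sample, R_s = 1 kOhm (illustrative)
        print(sample_impedance(1.0 + 0j, 0.45 + 0j, 1000.0))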

  14. Using IT to improve quality at NewYork-Presbyterian Hospital: a requirements-driven strategic planning process.

    Science.gov (United States)

    Kuperman, Gilad J; Boyer, Aurelia; Cole, Curt; Forman, Bruce; Stetson, Peter D; Cooper, Mary

    2006-01-01

    At NewYork-Presbyterian Hospital, we are committed to the delivery of high quality care. We have implemented a strategic planning process to determine the information technology initiatives that will best help us improve quality. The process began with the creation of a Clinical Quality and IT Committee. The Committee identified 2 high priority goals that would enable demonstrably high quality care: 1) excellence at data warehousing, and 2) optimal use of automated clinical documentation to capture encounter-related quality and safety data. For each high priority goal, a working group was created to develop specific recommendations. The Data Warehousing subgroup has recommended the implementation of an architecture management process and an improved ability for users to get access to aggregate data. The Structured Documentation subgroup is establishing recommendations for a documentation template creation process. The strategic planning process at times is slow, but assures that the organization is focusing on the information technology activities most likely to lead to improved quality.

  15. Effect of process parameters on power requirements of vacuum swing adsorption technology for CO2 capture from flue gas

    International Nuclear Information System (INIS)

    Zhang, Jun; Webley, Paul A.; Xiao, Penny

    2008-01-01

    This study focuses on the effects of process and operating parameters - feed gas temperature, evacuation pressure and feed concentration - on the performance of carbon dioxide vacuum swing adsorption (CO2 VSA) processes for CO2 capture from flue gas, especially as they affect power consumption. To obtain reliable data on the VSA process, experimental work was conducted on a purpose-built three-bed CO2 VSA pilot plant using commercial 13X zeolite. Both 6-step and 9-step cycles were used to determine the influence of temperature, evacuation pressure and feed concentration on process performance (recovery, purity, power and corresponding capture cost). A simple economic model for CO2 capture was developed and employed herein. Through experiments and analysis, it is found that the feed gas temperature, evacuation pressure and feed concentration have significant effects on power consumption and CO2 capture cost. Our data demonstrate that the CO2 VSA process has good recovery (>70%), purity (>90%) and low power cost (4-10 kW/TPDc) when operating with 40°C feed gas, provided a relatively deep vacuum is used. Enhanced performance is obtained when a higher feed gas concentration is fed to the plant, as expected. Our data indicate large potential for application of CO2 VSA to CO2 capture from flue gas. (author)

  16. A conceptual design for an integrated data base management system for remote sensing data. [user requirements and data processing

    Science.gov (United States)

    Maresca, P. A.; Lefler, R. M.

    1978-01-01

    The requirements of potential users were considered in the design of an integrated data base management system, developed to be independent of any specific computer or operating system, and to be used to support investigations in weather and climate. Ultimately, the system would expand to include data from the agriculture, hydrology, and related Earth resources disciplines. An overview of the system and its capabilities is presented. Aspects discussed cover the proposed interactive command language; the application program command language; storage and tabular data maintained by the regional data base management system; the handling of data files and the use of system standard formats; various control structures required to support the internal architecture of the system; and the actual system architecture with the various modules needed to implement the system. The concepts on which the relational data model is based; data integrity, consistency, and quality; and provisions for supporting concurrent access to data within the system are covered in the appendices.

  17. 5 CFR 581.307 - Compliance with legal process requiring the payment of attorney fees, interest, and/or court costs.

    Science.gov (United States)

    2010-01-01

    ... the payment of attorney fees, interest, and/or court costs. 581.307 Section 581.307 Administrative... payment of attorney fees, interest, and/or court costs. Before complying with legal process that requires withholding for the payment of attorney fees, interest, and/or court costs, the governmental entity must...

  18. The Cosmological Constant Problem (1/2)

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review the cosmological constant problem as a serious challenge to our notion of naturalness in Physics. Weinberg’s no go theorem is worked through in detail. I review a number of proposals possibly including Linde's universe multiplication, Coleman's wormholes, the fat graviton, and SLED, to name a few. Large distance modifications of gravity are also discussed, with causality considerations pointing towards a global modification as being the most sensible option. The global nature of the cosmological constant problem is also emphasized, and as a result, the sequestering scenario is reviewed in some detail, demonstrating the cancellation of the Standard Model vacuum energy through a global modification of General Relativity.

  19. The Cosmological Constant Problem (2/2)

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review the cosmological constant problem as a serious challenge to our notion of naturalness in Physics. Weinberg’s no go theorem is worked through in detail. I review a number of proposals possibly including Linde's universe multiplication, Coleman's wormholes, the fat graviton, and SLED, to name a few. Large distance modifications of gravity are also discussed, with causality considerations pointing towards a global modification as being the most sensible option. The global nature of the cosmological constant problem is also emphasized, and as a result, the sequestering scenario is reviewed in some detail, demonstrating the cancellation of the Standard Model vacuum energy through a global modification of General Relativity.

  20. Atomic weights: no longer constants of nature

    Science.gov (United States)

    Coplen, Tyler B.; Holden, Norman E.

    2011-01-01

    Many of us were taught that the standard atomic weights we found in the back of our chemistry textbooks or on the Periodic Table of the Chemical Elements hanging on the wall of our chemistry classroom are constants of nature. This was common knowledge for more than a century and a half, but not anymore. The following text explains how advances in chemical instrumentation and isotopic analysis have changed the way we view atomic weights and why they are no longer constants of nature

  1. Constant-work-space algorithms for geometric problems

    Directory of Open Access Journals (Sweden)

    Tetsuo Asano

    2011-07-01

    Constant-work-space algorithms may use only constantly many cells of storage in addition to their input, which is provided as a read-only array. We show how to construct several geometric structures efficiently in the constant-work-space model. Traditional algorithms process the input into a suitable data structure (like a doubly-connected edge list) that allows efficient traversal of the structure at hand. In the constant-work-space setting, however, we cannot afford to do this. Instead, we provide operations that compute the desired features on the fly by accessing the input with no extra space. The whole geometric structure can be obtained by using these operations to enumerate all the features. Of course, we must pay for the space savings by slower running times. While the standard data structure allows us to implement traversal operations in constant time, our schemes typically take linear time to read the input data in each step. We begin with two simple problems: triangulating a planar point set and finding the trapezoidal decomposition of a simple polygon. In both cases adjacent features can be enumerated in linear time per step, resulting in total quadratic running time to output the whole structure. Actually, we show that the former result carries over to the Delaunay triangulation, and hence the Voronoi diagram. This also means that we can compute the largest empty circle of a planar point set in quadratic time and constant work-space. As another application, we demonstrate how to enumerate the features of a Euclidean minimum spanning tree (EMST) in quadratic time per step, so that the whole EMST can be found in cubic time using constant work-space. Finally, we describe how to compute a shortest geodesic path between two points in a simple polygon. Although the shortest path problem in general graphs is NL-complete (Jakoby and Tantau 2003), this constrained problem can be solved in quadratic time using only constant work-space.
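
    As a flavour of the constant-work-space model described above, the sketch below enumerates the convex hull of a read-only point array with gift wrapping, which uses only a constant number of index variables and pays linear time per reported feature. It is an illustrative analogue only, not one of the algorithms of this record.

    ```python
    # Illustrative sketch (not the paper's algorithm): gift wrapping enumerates the
    # convex hull of a read-only point array using only O(1) extra storage, paying
    # linear time per reported feature -- the same trade-off described above.

    def convex_hull_constant_space(points):
        """Yield hull vertices of `points` (list of (x, y)) in counter-clockwise order."""
        n = len(points)
        if n == 0:
            return
        # Start from the lowest point (ties broken by x); found with one linear scan.
        start = min(range(n), key=lambda i: (points[i][1], points[i][0]))
        current = start
        while True:
            yield points[current]
            # Pick the point that is most clockwise from `current`; O(n) per hull vertex.
            candidate = (current + 1) % n
            for i in range(n):
                ax, ay = points[current]
                bx, by = points[candidate]
                cx, cy = points[i]
                cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
                if i != current and cross < 0:
                    candidate = i
            current = candidate
            if current == start:
                break

    print(list(convex_hull_constant_space([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)])))
    ```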

  2. Determination of the stability constants for the complexes of rare-earth elements and tetracycline

    International Nuclear Information System (INIS)

    Saiki, M.; Lima, F.W.

    1977-01-01

    Stability constants for the complexes of the lanthanide elements with tetracycline were determined by the average-number-of-ligands method, the two-parameter method, and weighted least squares. The solvent extraction technique was applied to obtain the values of the parameters required for the determination of the constants.

  3. A lattice QCD calculation of the transverse decay constant of the b1(1235) meson

    International Nuclear Information System (INIS)

    Jansen, K.; McNeile, C.; Michael, C.; Urbach, C.

    2009-10-01

    We review various B meson decays that require knowledge of the transverse decay constant of the b1(1235) meson. We report on an exploratory lattice QCD calculation of the transverse decay constant of the b1 meson. The lattice QCD calculations used unquenched gauge configurations, at two lattice spacings, generated with two flavours of sea quarks. The twisted mass formalism is used. (orig.)

  4. Biosynthesis of intestinal microvillar proteins. Processing of N-linked carbohydrate is not required for surface expression

    DEFF Research Database (Denmark)

    Danielsen, Erik Michael; Cowell, G M

    1986-01-01

    Castanospermine, an inhibitor of glucosidase I, the initial enzyme in the trimming of N-linked carbohydrate, was used to study the importance of carbohydrate processing in the biosynthesis of microvillar enzymes in organ-cultured pig intestinal explants. For aminopeptidase N (EC 3.4.11.2), aminop...

  5. Mimicking the cosmological constant: Constant curvature spherical solutions in a nonminimally coupled model

    International Nuclear Information System (INIS)

    Bertolami, Orfeu; Paramos, Jorge

    2011-01-01

    The purpose of this study is to describe a perfect fluid matter distribution that leads to a constant curvature region, thanks to the effect of a nonminimal coupling. This distribution exhibits a density profile within the range found in the interstellar medium and an adequate matching of the metric components at its boundary. By identifying this constant curvature with the value of the cosmological constant and superimposing the spherical distributions arising from different matter sources throughout the universe, one is able to mimic a large-scale homogeneous cosmological constant solution.

  6. Quantum mechanical methods for calculation of force constants

    International Nuclear Information System (INIS)

    Mullally, D.J.

    1985-01-01

    The focus of this thesis is upon the calculation of force constants; i.e., the second derivatives of the potential energy with respect to nuclear displacements. This information is useful for the calculation of molecular vibrational modes and frequencies. In addition, it may be used for the location and characterization of equilibrium and transition state geometries. The methods presented may also be applied to the calculation of electric polarizabilities and infrared and Raman vibrational intensities. Two approaches to this problem are studied and evaluated: finite difference methods and analytical techniques. The most suitable method depends on the type and level of theory used to calculate the electronic wave function. Double point displacement finite differencing is often required for accurate calculation of the force constant matrix. These calculations require energy and gradient calculations on both sides of the geometry of interest. In order to speed up these calculations, a novel method is presented that uses geometry dependent information about the wavefunction. A detailed derivation for the analytical evaluation of force constants with a complete active space multiconfiguration self consistent field wave function is presented
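
    The double point displacement scheme mentioned above amounts to a central difference of analytic gradients; with g the gradient vector and h the step along nuclear coordinate x_j, the force-constant matrix elements are approximated as follows.

    ```latex
    % Central-difference (double point displacement) approximation to the force
    % constants, using analytic gradients g evaluated at geometries displaced by
    % \pm h along nuclear coordinate x_j:
    F_{ij} = \frac{\partial^{2} E}{\partial x_i\,\partial x_j}
    \approx \frac{g_i(\mathbf{x} + h\,\mathbf{e}_j) - g_i(\mathbf{x} - h\,\mathbf{e}_j)}{2h}
    ```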

  7. Accelerating Sequential Gaussian Simulation with a constant path

    Science.gov (United States)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
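
    The computational saving at the heart of the constant-path approach is that the kriging systems depend only on the neighbourhood configurations, so they can be solved once and re-used for every realization. The sketch below illustrates this in a deliberately simplified one-dimensional, unconditional setting with a fixed exponential covariance; it is not the authors' implementation.

    ```python
    # Minimal 1-D illustration of the constant-path idea: with the same visiting order
    # and the same neighbourhoods in every realization, the kriging weights are solved
    # once and then re-used, so each extra realization costs only vector products.
    # Simplified sketch (exponential covariance, unconditional simulation), not the
    # authors' implementation.
    import numpy as np

    def covariance(h, range_=10.0):
        return np.exp(-np.abs(h) / range_)

    def precompute_weights(path, coords, n_neigh=4):
        """For each node on the path, store its neighbours, kriging weights and std dev."""
        plan = []
        for step, node in enumerate(path):
            known = path[:step][-n_neigh:]                 # most recently simulated nodes
            if not known:
                plan.append((node, [], None, 1.0))
                continue
            C = covariance(coords[known][:, None] - coords[known][None, :])
            c = covariance(coords[known] - coords[node])
            w = np.linalg.solve(C, c)                      # simple kriging weights
            var = max(1.0 - w @ c, 1e-12)
            plan.append((node, known, w, np.sqrt(var)))
        return plan

    def simulate(plan, n_nodes, rng):
        z = np.empty(n_nodes)
        for node, known, w, sd in plan:
            mean = 0.0 if w is None else w @ z[known]
            z[node] = mean + sd * rng.standard_normal()
        return z

    coords = np.arange(50, dtype=float)
    path = list(np.random.default_rng(0).permutation(50))
    plan = precompute_weights(path, coords)               # solved once
    rng = np.random.default_rng(1)
    realizations = [simulate(plan, 50, rng) for _ in range(100)]  # weights re-used
    ```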

  8. 17 CFR 160.14 - Exceptions to notice and opt out requirements for processing and servicing transactions.

    Science.gov (United States)

    2010-04-01

    ....13 in connection with service providers and joint marketing, do not apply if you disclose nonpublic... or authorizes, or in connection with: (1) Processing or servicing a financial product or service that... persons engaged in carrying out the financial transaction or providing the product or service; or (2...

  9. 40 CFR Table 6 to Subpart Ppp of... - Process Vents From Continuous Unit Operations-Monitoring, Recordkeeping, and Reporting Requirements

    Science.gov (United States)

    2010-07-01

    ... when insufficient monitoring data are collected. e Boiler or Process Heater with a design heat input... operating value established in the NCS or operating—PR. d,e Condenser f Exit (product side) temperature 1... operating permit—PR. d,e Absorber, Condenser, and Carbon Adsorber (as an alternative to the above...

  10. 40 CFR Table 5 to Subpart Ppp of... - Process Vents From Batch Unit Operations-Monitoring, Recordkeeping, and Reporting Requirements

    Science.gov (United States)

    2010-07-01

    ... all instances when monitoring data are not collected—PR. d,e If a base absorbent is used, report all p... all instances when monitoring data are not collected—PR. d,e Catalytic Incinerator Temperature... instances when monitoring data are not collected. e Boiler or Process Heater with a design heat input...

  11. REQUIREMENTS TO AUTOMATIZATION PROCESSING IN THE PROGRAMMING INFORMATION SYSTEM OF SCIENTIFIC RESEARCHES IN ACADEMY OF PEDAGOGICAL SCIENCES OF UKRAINE

    Directory of Open Access Journals (Sweden)

    Alla V. Kilchenko

    2010-08-01

    The construction and introduction of information systems in education management is a pressing task in the formation of a modern information society. This article presents the results of research on the automation of the processing of financial documents, conducted within the project «Scientific-methodical providing of the informative system of programming of scientific researches in Academy of Pedagogical Sciences of Ukraine based on the Internet» No. 0109U002139. The article sets out methodical principles for automating the processing of programming and financial documents, as well as requirements for the information system that will form the basis of the subsequent project stages.

  12. The Nature of the Cosmological Constant Problem

    Science.gov (United States)

    Maia, M. D.; Capistrano, A. J. S.; Monte, E. M.

    General relativity postulates the Minkowski space-time as the standard (flat) geometry against which we compare all curved space-times and also as the gravitational ground state where particles, quantum fields and their vacua are defined. On the other hand, experimental evidence tells us that there exists a non-zero cosmological constant, which implies a de Sitter ground state, which is not compatible with the assumed Minkowski structure. Such inconsistency is evidence of the missing standard of curvature in Riemann's geometry, which in general relativity manifests itself in the form of the cosmological constant problem. We show how the lack of a curvature standard in Riemann's geometry can be fixed by Nash's theorem on metric perturbations. The resulting higher dimensional gravitational theory is more general than general relativity, similar to brane-world gravity, but where the propagation of the gravitational field along the extra dimensions is a mathematical necessity, rather than a postulate. After a brief introduction to Nash's theorem, we show that the vacuum energy density must remain confined to four-dimensional space-times, but the cosmological constant resulting from the contracted Bianchi identity represents a gravitational term which is not confined. In this case, the comparison between the vacuum energy and the cosmological constant in general relativity does not make sense. Instead, the geometrical fix provided by Nash's theorem suggests that the vacuum energy density contributes to the perturbations of the gravitational field.

  13. A Memorandum Report: Physical Constants of MCE

    Science.gov (United States)

    2016-08-01

    the density and surface tension. In effect, this constant is a corrected molar volume: P = M S^(1/4) / ρ, where P = parachor, M = molecular weight, S = surface tension and ρ = density (M/ρ being the molar volume). [Table 3: Vapor Pressure of MCE Calculated from the Experimental Data by Method of Least Squares — values were obtained by averaging the determinations for each sample separately, and then averaging those values; a footnote reads "No average was calculated due to ...".]

  14. Study on electromagnetic constants of rotational bands

    International Nuclear Information System (INIS)

    Abdurazakov, A.A.; Adib, Yu.Sh.; Karakhodzhaev, A.K.

    1991-01-01

    Values of the electromagnetic constant S for rotational bands of odd nuclei with Z=64-70 in the mass-number interval A=153-173 are determined. Values of the mixing parameter for γ-transitions of M1+E2 multipolarity are presented. The dependence of the ρ parameter on the mass number A is discussed.

  15. On the determination of the Hubble constant

    International Nuclear Information System (INIS)

    Gurzadyan, V.G.; Harutyunyan, V.V.; Kocharyan, A.A.

    1990-10-01

    The possibility of an alternative determination of the distance scale of the Universe and the Hubble constant based on the numerical analysis of the hierarchical nature of the large scale Universe (galaxies, clusters and superclusters) is proposed. The results of computer experiments performed by means of special numerical algorithms are represented. (author). 9 refs, 7 figs

  16. Dissociative electron attachment to ozone: rate constant

    International Nuclear Information System (INIS)

    Skalny, J.D.; Cicman, P.; Maerk, T.D.

    2002-01-01

    The rate constant for dissociative electron attachment to ozone has been derived over the energy range of 0-10 eV by using previously measured cross section data, revisited here with regard to the discrimination effect occurring during the extraction of ions. The data obtained for both possible channels exhibit a maximum at mean electron energies close to 1 eV. (author)

  17. Running coupling constants of the Luttinger liquid

    International Nuclear Information System (INIS)

    Boose, D.; Jacquot, J.L.; Polonyi, J.

    2005-01-01

    We compute the one-loop expressions of two running coupling constants of the Luttinger model. The obtained expressions have a nontrivial momentum dependence with Landau poles. The reason for the discrepancy between our results and those of other studies, which find that the scaling laws are trivial, is explained

  18. Constant force linear permanent magnet actuators

    NARCIS (Netherlands)

    Paulides, J.J.H.; Encica, L.; Meessen, K.J.; Lomonova, E.A.

    2009-01-01

    In applications such as vibration isolation, gravity compensation, pick-and-place machines, etc., there is a need for (long-stroke) passive constant force actuators combined with tubular permanent magnet actuators to minimize the power consumption and, hence, passively counteract the gravitational force.

  19. Lifetime of titanium filament at constant current

    International Nuclear Information System (INIS)

    Chou, T.S.; Lanni, C.

    1981-01-01

    Titanium Sublimation Pump (TSP) represents the most efficient and the least expensive method to produce Ultra High Vacuum (UHV) in storage rings. In ISABELLE, a proton storage accelerator under construction at Brookhaven National Laboratory, for example, TSP provides a pumping speed for hydrogen of > 2 × 10⁶ l/s. Due to the finite life of titanium filaments, new filaments have to be switched in before the end of filament burn-out, to ensure smooth operation of the accelerator. Therefore, several operational modes that can be used to activate the TSP were studied. The constant current mode is a convenient way of maintaining a constant evaporation rate by increasing the power input as the filament diameter decreases while titanium evaporates. The filaments used in this experiment were standard Varian 916-0024 filaments made of Ti 85%, Mo 15% alloy. During their lifetime at a constant current of 48 amperes, the evaporation rate rose to a maximum at about 10% of their life and then flattened out to a constant value, 0.25 g/hr. The maximum evaporation rate occurs coincidentally with the recrystallization of the 74% Ti, 26% Mo alloy from a microstructure crystalline at higher titanium concentration to a macrostructure crystalline at lower titanium concentration. As the macrocrystal grows, a slip plane develops at the grain boundary, resulting in high resistance at the slip plane which will eventually cause the filament to burn out due to local heating.

  20. Derivation of the fine-structure constant

    International Nuclear Information System (INIS)

    Samec, A.

    1980-01-01

    The fine-structure constant is derived as a dynamical property of quantum electrodynamics. Single-particle solutions of the coupled Maxwell and Dirac equations have a physical charge spectrum. The solutions are used to construct lepton-and quark-like particles. The strong, weak, electromagnetic, and gravitational forces are described as the interactions of complex charges in multiple combinations

  1. GRUCAL: a program system for the calculation of macroscopic group constants

    International Nuclear Information System (INIS)

    Woll, D.

    1984-01-01

    Nuclear reactor calculations require material- and composition-dependent, energy-averaged neutron physics data in order to describe the interaction between neutrons and isotopes. The multigroup cross section code GRUCAL calculates these macroscopic group constants for given material compositions from the material-dependent data of the group constant library GRUBA. The instructions for calculating group constants are not fixed in the program, but are read in from an instruction file. This makes it possible to adapt GRUCAL to various problems or different group constant concepts.
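
    The core bookkeeping behind macroscopic group constants is the standard mixing of microscopic group cross sections with isotope number densities, Sigma_g = sum_i N_i * sigma_(i,g). The sketch below shows only this generic step; GRUCAL's instruction files and the GRUBA library format are not reproduced.

    ```python
    # Generic sketch of the bookkeeping behind macroscopic group constants
    # (not GRUCAL's instruction-file mechanism): for each energy group g,
    #   Sigma_g = sum_i  N_i * sigma_{i,g}
    # where N_i is the number density of isotope i in the composition and
    # sigma_{i,g} its microscopic group cross section from the data library.

    def macroscopic_group_constants(composition, library):
        """composition: {isotope: number density [1/(barn*cm)]}
        library:        {isotope: [sigma_g in barns, one entry per group]}
        returns         [Sigma_g in 1/cm, one entry per group]"""
        n_groups = len(next(iter(library.values())))
        sigma_macro = [0.0] * n_groups
        for isotope, density in composition.items():
            for g, sigma_micro in enumerate(library[isotope]):
                sigma_macro[g] += density * sigma_micro
        return sigma_macro

    # Hypothetical 3-group illustration (numbers are placeholders, not library data).
    library = {"U238": [0.5, 2.0, 9.0], "O16": [0.1, 0.3, 3.8]}
    composition = {"U238": 0.022, "O16": 0.044}
    print(macroscopic_group_constants(composition, library))
    ```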

  2. A constant-density Gurney approach to the Cylinder test

    Energy Technology Data Exchange (ETDEWEB)

    Reaugh, John E.; Souers, P. Clark [Energetic Materials Center, Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)

    2004-04-01

    The previous analysis of the Cylinder test required the treatment of different wall thicknesses and wall materials separately. To fix this, the Gurney analysis is used, but this results in low values for full-wall standard, ideal explosives relative to CHEETAH analyses. A new constant metal-density model is suggested, which takes account of the thinning metal wall as the cylinder expands. With this model, the inner radius of the metal cylinder moves faster than the measured outer radius. Additional small corrections occur in all cylinders because of energy trapped in the copper walls. Also, the half-wall cylinders have a small correction because the relative volumes of the gas products are smaller at a given outside wall displacement. The Fabry-Perot and streak camera measurements are compared. The Fabry method is shown to equate to the constant density model. (Abstract Copyright [2004], Wiley Periodicals, Inc.)
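
    For reference, the textbook Gurney relation for a cylindrical charge, which the constant-metal-density model corrects, gives the final wall velocity in terms of the metal-to-charge mass ratio M/C and the Gurney velocity sqrt(2E):

    ```latex
    % Standard Gurney relation for a metal cylinder driven by an explosive charge
    % (the textbook form that the constant-density correction modifies):
    %   v   : final wall velocity
    %   M/C : metal-to-charge mass ratio
    %   \sqrt{2E} : Gurney velocity of the explosive
    v = \sqrt{2E}\,\left(\frac{M}{C} + \frac{1}{2}\right)^{-1/2}
    ```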

  3. Estimation of the effective distribution coefficient from the solubility constant

    International Nuclear Information System (INIS)

    Wang, Yug-Yea; Yu, C.

    1994-01-01

    An updated version of RESRAD has been developed by Argonne National Laboratory for the US Department of Energy to derive site-specific soil guidelines for residual radioactive material. In this updated version, many new features have been added to the RESRAD code. One of the options is that a user can input a solubility constant to limit the leaching of contaminants. The leaching model used in the code requires the input of an empirical distribution coefficient, Kd, which represents the ratio of the solute concentration in soil to that in solution under equilibrium conditions. This paper describes the methodology developed to estimate an effective distribution coefficient, Kd, from the user-input solubility constant and the use of the effective Kd for predicting the leaching of contaminants
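
    The definition quoted above, combined with a solubility cap on the solution concentration, suggests the simple relation sketched below. This is an assumption consistent with that definition, not necessarily the exact conversion implemented in RESRAD.

    ```latex
    % Distribution coefficient as defined above, and the solubility-capped
    % effective value assumed here (S is the user-input solubility limit on
    % the solution concentration); a sketch of the idea, not RESRAD's
    % documented conversion.
    K_d = \frac{C_{\mathrm{soil}}}{C_{\mathrm{solution}}},
    \qquad
    K_{d,\mathrm{eff}} \approx \frac{C_{\mathrm{soil}}}{S}
    ```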

  4. On-board Data Processing to Lower Bandwidth Requirements on an Infrared Astronomy Satellite: Case of Herschel-PACS Camera

    Directory of Open Access Journals (Sweden)

    Christian Reimers

    2005-09-01

    Full Text Available This paper presents a new data compression concept, “on-board processing,” for infrared astronomy, where space observatories have limited processing resources. The proposed approach has been developed and tested for the PACS camera from the European Space Agency (ESA mission, Herschel. Using lossy and lossless compression, the presented method offers high compression ratio with a minimal loss of potentially useful scientific data. It also provides higher signal-to-noise ratio than that for standard compression techniques. Furthermore, the proposed approach presents low algorithmic complexity such that it is implementable on the resource-limited hardware. The various modules of the data compression concept are discussed in detail.

  5. A mature and fusogenic form of the Nipah virus fusion protein requires proteolytic processing by cathepsin L

    International Nuclear Information System (INIS)

    Pager, Cara Theresia; Craft, Willie Warren; Patch, Jared; Dutch, Rebecca Ellis

    2006-01-01

    The Nipah virus fusion (F) protein is proteolytically processed to F 1 + F 2 subunits. We demonstrate here that cathepsin L is involved in this important maturation event. Cathepsin inhibitors ablated cleavage of Nipah F. Proteolytic processing of Nipah F and fusion activity was dramatically reduced in cathepsin L shRNA-expressing Vero cells. Additionally, Nipah virus F-mediated fusion was inhibited in cathepsin L-deficient cells, but coexpression of cathepsin L restored fusion activity. Both purified cathepsin L and B could cleave immunopurified Nipah F protein, but only cathepsin L produced products of the correct size. Our results suggest that endosomal cathepsins can cleave Nipah F, but that cathepsin L specifically converts Nipah F to a mature and fusogenic form

  6. SATISFACTION OF QUALIFICATION REQUIREMENTS OF EMPLOYERS APPLIED TO SOFTWARE ENGINEERS IN THE PROCESS OF TRAINING AT HIGHER EDUCATIONAL INSTITUTIONS

    Directory of Open Access Journals (Sweden)

    Vladislav Kruhlyk

    2017-03-01

    Based on an analysis of the problems of the professional training of software engineers in higher educational institutions, the article shows that the contents of the curricula for training software engineers in basic IT specialties generally meet the requirements of the labor market. It is noted that the job market is currently changing, not only through increasing demand for IT professionals but also through the requirements set for future specialists. In the opinion of researchers, there is at present a gap between the level of expectation of employers and the level of education of graduates of IT specialties. Due to the extremely fast pace of IT development, students' knowledge may already be obsolete by the end of their studies. At issue is whether the complex of competencies offered by the university during the training of specialists remains relevant and competitive in the labor market. At the same time, the practical training of students does not fully correspond to the current state of information technology. Therefore, it is necessary to update the contents of the academic disciplines with the aim of providing quality training of specialists.

  7. 40 CFR 63.117 - Process vent provisions-reporting and recordkeeping requirements for group and TRE determinations...

    Science.gov (United States)

    2010-07-01

    ... provisions for Group 2 process vents with a TRE index value greater than 1.0 but less than or equal to 4.0 in... report the following when achieving and maintaining a TRE index value greater than 1.0 but less than 4.0... than 4.0 as specified in § 63.113(e) of this subpart, shall maintain records and submit as part of the...

  8. Towards the understanding of the requirements of a communication language to support process interoperation in cross-disciplinary supply chains

    OpenAIRE

    DAS, BISHNU PADA; Young, R I M; Case, K; Rahimifard, S; Anumba, C; Bouchlaghem, N; Cutting Decelle, Anne-Francoise

    2007-01-01

    Many manufacturing organisations, while doing business either directly or indirectly with other industrial sectors, often encounter interoperability problems amongst software systems. This increases the business cost and reduces efficiency. Research communities are exploring ways to reduce this cost. Incompatibility amongst the syntaxes and the semantics of the languages of application systems is the most common cause of this problem. The Process Specification Language (...

  9. Disturbed holistic processing in autism spectrum disorders verified by two cognitive tasks requiring perception of complex visual stimuli.

    Science.gov (United States)

    Nakahachi, Takayuki; Yamashita, Ko; Iwase, Masao; Ishigami, Wataru; Tanaka, Chitaru; Toyonaga, Koji; Maeda, Shizuyo; Hirotsune, Hideto; Tei, Yosyo; Yokoi, Koichi; Okajima, Shoji; Shimizu, Akira; Takeda, Masatoshi

    2008-06-30

    Central coherence is a key concept in research on autism spectrum disorders (ASD). It refers to the process in which diverse information is integrated and higher meaning is constructed in context. A malfunction in this process could result in abnormal attention to partial information in preference to the whole. To verify this hypothesis, we studied the performance of two visual tasks by 10 patients with autistic disorder or Asperger's disorder and by 26 (experiment 1) or 25 (experiment 2) normal subjects. In experiment 1, the subjects memorized pictures, some with a change related to the main theme (D1) and others with a change not related to the main theme (D2); then the same pictures were randomly presented to the subjects, who were asked to find the change. In experiment 2, the subjects were presented pictures of a normal (N) or a Thatcherized (T) face arranged side by side inversely (I) or uprightly (U) and were asked to judge them as the same or different. In experiment 1, ASD subjects exhibited significantly lower rates of correct responses in D1 but not in D2. In experiment 2, ASD subjects exhibited significantly longer response times in NT-U but not in TN-I. These results showed a deficit in holistic processing, which is consistent with weak central coherence in ASD.

  10. Sterilization of health care products - Radiation. Part 1: Requirements for development, validation and routine control of a sterilization process for medical devices

    International Nuclear Information System (INIS)

    2006-01-01

    A sterile medical device is one that is free of viable microorganisms. International Standards, which specify requirements for validation and routine control of sterilization processes, require, when it is necessary to supply a sterile medical device, that adventitious microbiological contamination of a medical device prior to sterilization be minimized. Even so, medical devices produced under standard manufacturing conditions in accordance with the requirements for quality management systems (see, for example, ISO 13485) may, prior to sterilization, have microorganisms on them, albeit in low numbers. Such medical devices are non-sterile. The purpose of sterilization is to inactivate the microbiological contaminants and thereby transform the nonsterile medical devices into sterile ones. The kinetics of inactivation of a pure culture of microorganisms by physical and/or chemical agents used to sterilize medical devices can generally best be described by an exponential relationship between the numbers of microorganisms surviving and the extent of treatment with the sterilizing agent; inevitably this means that there is always a finite probability that a microorganism may survive regardless of the extent of treatment applied. For a given treatment, the probability of survival is determined by the number and resistance of microorganisms and by the environment in which the organisms exist during treatment. It follows that the sterility of any one medical device in a population subjected to sterilization processing cannot be guaranteed and the sterility of a processed population is defined in terms of the probability of there being a viable microorganism present on a medical device. This part of ISO 11137 describes requirements that, if met, will provide a radiation sterilization process intended to sterilize medical devices, that has appropriate microbicidal activity. Furthermore, compliance with the requirements ensures that this activity is both reliable and
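
    One common way to write the exponential survivor relationship described above uses the decimal-reduction dose D10; the sterility assurance level (SAL) is then the accepted value of the surviving probability N(D).

    ```latex
    % One common way to write the exponential inactivation described above:
    % N_0 is the initial bioburden, D the absorbed dose, and D_{10} the dose
    % that reduces the surviving population tenfold.  The sterility assurance
    % level (SAL) is the value of N(D) accepted as the probability of a
    % surviving microorganism on a processed device.
    N(D) = N_0 \cdot 10^{-D/D_{10}}
    ```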

  11. Construction of Lines of Constant Density and Constant Refractive Index for Ternary Liquid Mixtures.

    Science.gov (United States)

    Tasic, Aleksandar Z.; Djordjevic, Bojan D.

    1983-01-01

    Demonstrates construction of density constant and refractive index constant lines in triangular coordinate system on basis of systematic experimental determinations of density and refractive index for both homogeneous (single-phase) ternary liquid mixtures (of known composition) and the corresponding binary compositions. Background information,…

  12. Mobility-based correction for accurate determination of binding constants by capillary electrophoresis-frontal analysis.

    Science.gov (United States)

    Qian, Cheng; Kovalchik, Kevin A; MacLennan, Matthew S; Huang, Xiaohua; Chen, David D Y

    2017-06-01

    Capillary electrophoresis frontal analysis (CE-FA) can be used to determine the binding affinity of molecular interactions. However, its current data processing method mandates specific requirements on the mobilities of the binding pair in order to obtain accurate binding constants. This work shows that significant errors result when the mobilities of the interacting species do not meet these requirements. Therefore, the applicability of CE-FA in many real-world applications becomes questionable. An electrophoretic mobility-based correction method is developed in this work based on the flux of each species. A simulation program and a pair of model compounds are used to verify the new equations and evaluate the effectiveness of this method. Ibuprofen and hydroxypropyl-β-cyclodextrin are used to demonstrate the differences in the obtained binding constant by CE-FA when different calculation methods are used, and the results are compared with those obtained by affinity capillary electrophoresis (ACE). The results suggest that CE-FA, with the mobility-based correction method, can be a generally applicable method for a much wider range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Knowledge representation requirements for model sharing between model-based reasoning and simulation in process flow domains

    Science.gov (United States)

    Throop, David R.

    1992-01-01

    The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.

  14. Use of the Kalman filter in signal processing to reduce beam requirements for alpha-particle diagnostics

    International Nuclear Information System (INIS)

    Cooper, W.S.

    1986-01-01

    Several techniques proposed for diagnosing the velocity distribution of fast alpha-particles in a burning plasma require the injection of a beam of fast neutral atoms as probes. The author discusses how improving signal detection techniques is a high leverage factor in reducing the cost of the diagnostic beam. Optimal estimation theory provides a computational algorithm, the Kalman filter, that can optimally estimate the amplitude of a signal with arbitrary (but known) time dependence in the presence of noise. In one example presented, based on a square-wave signal and assumed noise levels, the Kalman filter achieves an enhancement of signal detection efficiency of about a factor of 10 (as compared with the straightforward observation of the signal superimposed on noise) with an observation time of 100 signal periods
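
    A minimal scalar version of the idea is sketched below: a Kalman filter estimating the (assumed constant) amplitude of a signal with known time dependence, here a square wave, buried in noise. The modulation pattern, noise level and variances are placeholders, not the parameters of the actual diagnostic.

    ```python
    # Minimal scalar sketch of the idea: estimate the (assumed constant) amplitude of a
    # signal with known time dependence (here a square wave) buried in noise.
    # Modulation pattern, noise level and variances are placeholders, not the
    # actual diagnostic parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    n_steps = 2000
    true_amplitude = 1.0
    s = np.sign(np.sin(2 * np.pi * np.arange(n_steps) / 100))   # known square-wave shape s_k
    noise_std = 5.0
    y = true_amplitude * s + noise_std * rng.standard_normal(n_steps)  # measurements

    # Kalman filter for the state x = amplitude (constant), measurement y_k = s_k * x + v_k
    x_hat, P = 0.0, 10.0          # initial estimate and its variance
    q, r = 1e-8, noise_std**2     # tiny process noise, measurement noise variance
    for k in range(n_steps):
        P += q                                   # predict (state is constant)
        K = P * s[k] / (s[k]**2 * P + r)         # Kalman gain
        x_hat += K * (y[k] - s[k] * x_hat)       # update with the innovation
        P *= (1 - K * s[k])

    print(f"estimated amplitude: {x_hat:.3f} (true {true_amplitude})")
    ```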

  15. Impacts of oil sands process water on fen plants: Implications for plant selection in required reclamation projects

    International Nuclear Information System (INIS)

    Pouliot, Rémy; Rochefort, Line; Graf, Martha D.

    2012-01-01

    Fen plant growth in peat contaminated with groundwater discharges of oil sands process water (OSPW) was assessed in a greenhouse over two growing seasons. Three treatments (non-diluted OSPW, diluted OSPW and rainwater) were tested on five vascular plants and four mosses. All vascular plants tested can grow in salinity and naphthenic acids levels currently produced by oil sands activity in northwestern Canada. No stress sign was observed after both seasons. Because of plant characteristics, Carex species (C. atherodes and C. utriculata) and Triglochin maritima would be more useful for rapidly restoring vegetation and creating a new peat-accumulating system. Groundwater discharge of OSPW proved detrimental to mosses under dry conditions and ensuring adequate water levels would be crucial in fen creation following oil sands exploitation. Campylium stellatum would be the best choice to grow in contaminated areas and Bryum pseudotriquetrum might be interesting as it has spontaneously regenerated in all treatments. - Highlights: ► Fen plant growth was assessed under groundwater discharges of oil sands process water. ► Sedge and grass species were not stressed after two growing seasons in greenhouse. ► Carex species and Triglochin maritima would be helpful in created contaminated fens. ► In dry conditions, contaminated groundwater discharge was detrimental for mosses. ► Campylium stellatum would be the best choice in created fens with contaminated water. - Sedges and grasses tolerated the contact with oil sands process water and could probably grow well in contaminated created fens, but mosses were particularly affected under dry conditions.

  16. Molecular equilibrium structures from experimental rotational constants and calculated vibration-rotation interaction constants

    DEFF Research Database (Denmark)

    Pawlowski, F; Jorgensen, P; Olsen, Jeppe

    2002-01-01

    A detailed study is carried out of the accuracy of molecular equilibrium geometries obtained from least-squares fits involving experimental rotational constants B(0) and sums of ab initio vibration-rotation interaction constants alpha(r)(B). The vibration-rotation interaction constants have been calculated for 18 single-configuration dominated molecules containing hydrogen and first-row atoms at various standard levels of ab initio theory. Comparisons with the experimental data and tests for the internal consistency of the calculations show that the equilibrium structures generated using Hartree-Fock vibration-rotation interaction constants have an accuracy similar to that obtained by a direct minimization of the CCSD(T) energy. The most accurate vibration-rotation interaction constants are those calculated at the CCSD(T)/cc-pVQZ level. The equilibrium bond distances determined from these interaction...
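
    The relation underlying such fits, with the usual sign convention, expresses each equilibrium rotational constant in terms of the ground-state constant and the vibration-rotation interaction constants summed over normal modes r (with degeneracy factors d_r):

    ```latex
    % Relation underlying the fits described above, for each rotational constant B,
    % with the usual sign convention for the vibration-rotation interaction
    % constants \alpha_r^B and degeneracy factors d_r:
    B_e = B_0 + \frac{1}{2}\sum_r d_r\,\alpha_r^{B}
    ```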

  17. New method for evaluating the kinetic constant of thermal protection materials

    International Nuclear Information System (INIS)

    Bae, Ji Yeul; Yi, Jong Ju; Park, Sul Ki; Cho, Hyung Hee; Bae, Ju Chan; Ham, Hee Cheol

    2013-01-01

    Thermal protection material (TPM) is used to protect rocket structures from extreme conditions created by the hot exhaust of the rocket. Designing TPM is an important step in the rocket design process. Considering that an increase in the system weight decreases the overall performance of a rocket, the amount of TPM is carefully determined during the design process. Therefore, the precise properties of TPM guarantee an accurate thermal analysis and the successful design of the rocket. Among the many properties of TPM, the kinetic constant and activation energy, which govern the thermochemical reaction of the TPM, are the most important. Thus, an experiment to measure the kinetic constant and activation energy is conducted as part of this research. A theoretical approach to deduce the properties from measured data is discussed, and a method to apply the theory to experimental data, termed the R² method, is developed. Compared to a previous method which was difficult to apply, the R² method reduces unclear selections of the reaction time and does not require intervention by an interpreter. The properties deduced by the R² method show good agreement with the other method despite the limited number of experimental results.

  18. New method for evaluating the kinetic constant of thermal protection materials

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Ji Yeul; Yi, Jong Ju; Park, Sul Ki; Cho, Hyung Hee [Yonsei University, Seoul (Korea, Republic of); Bae, Ju Chan; Ham, Hee Cheol [Agency for Defense Development, Daegu (Korea, Republic of)

    2013-06-15

    Thermal protection material (TPM) is used to protect rocket structures from extreme conditions created by the hot exhaust of the rocket. Designing TPM is an important step in the rocket design process. Considering that an increase in the system weight decreases the overall performance of a rocket, the amount of TPM is carefully determined during the design process. Therefore, the precise properties of TPM guarantee an accurate thermal analysis and the successful design of the rocket. Among the many properties of TPM, the kinetic constant and activation energy, which govern the thermochemical reaction of the TPM, are the most important. Thus, an experiment to measure the kinetic constant and activation energy is conducted as part of this research. A theoretical approach to deduce the properties from measured data is discussed, and a method to apply the theory to experimental data, termed the R² method, is developed. Compared to a previous method which was difficult to apply, the R² method reduces unclear selections of the reaction time and does not require intervention by an interpreter. The properties deduced by the R² method show good agreement with the other method despite the limited number of experimental results.
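
    The abstract does not spell out the R² method itself; the sketch below shows only the standard Arrhenius relationship, k = A exp(-Ea/RT), and the usual way a kinetic constant and activation energy are extracted from rate data by a linear fit of ln k against 1/T. The data are placeholders.

    ```python
    # Standard Arrhenius treatment (not the R^2 method itself):
    #   k = A * exp(-Ea / (R*T))  =>  ln k = ln A - (Ea/R) * (1/T)
    # so a linear fit of ln k against 1/T yields the activation energy Ea and
    # pre-exponential factor A.  The data below are placeholders.
    import numpy as np

    R = 8.314  # J/(mol K)

    # Hypothetical measured rate constants at several temperatures.
    T = np.array([600.0, 650.0, 700.0, 750.0, 800.0])       # K
    k = np.array([2.1e-4, 1.6e-3, 9.0e-3, 4.0e-2, 1.5e-1])  # 1/s

    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)    # ln k = ln A - (Ea/R)(1/T)
    Ea = -slope * R
    A = np.exp(intercept)
    print(f"Ea ≈ {Ea/1000:.0f} kJ/mol, A ≈ {A:.3g} 1/s")
    ```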

  19. A framework for cost-aware process management: cost reporting and cost prediction

    NARCIS (Netherlands)

    Wynn, M.T.; Low, W.Z.; Hofstede, ter A.H.M.; Nauta, W.E.

    2014-01-01

    Organisations are constantly seeking efficiency gains for their business processes in terms of time and cost. Management accounting enables detailed cost reporting of business operations for decision making purposes, although significant effort is required to gather accurate operational data.

  20. Towards Competence-based Practices in Vocational Education – What Will the Process Require from Teacher Education and Teacher Identities?

    Directory of Open Access Journals (Sweden)

    Säde-Pirkko Nissilä

    2015-06-01

    Full Text Available Competence-based education refers to the integration of knowledge, skills, attitudes and interactivity as the intended outcomes of learning. It makes use of lifelong learning and lifelike tasks in realistic settings and requires the cooperation of teachers. This research was prompted by the desire to explain why collegial cooperation often seems to be problematic in schools and universities. Are there certain social structures or behavioural patterns that influence the cooperative culture in teacher communities? The research material was collected in 2013 and 2014 in Oulu, Finland. The target groups were both newly qualified and experienced vocational teachers at all educational levels (N=30. The data collection methods were open questions in interviews and questionnaires. The research approach and analysis methods were qualitative. The theoretical background is in humanistic-cognitive and experiential learning as well as in dynamic epistemic conceptions. The findings show that the prevailing model in teacher communities is individualistic, discipline-divided and course-based, especially among older teachers. The obstacles refer to teachers’ self-image and a deeply rooted fear of criticism or revelation of incompetence. The promoters of cooperation were connected to the changing practices and desire of sharing with colleagues.

  1. Distinct requirements for signal peptidase processing and function in the stable signal peptide subunit of the Junin virus envelope glycoprotein

    International Nuclear Information System (INIS)

    York, Joanne; Nunberg, Jack H.

    2007-01-01

    The arenavirus envelope glycoprotein (GP-C) retains a cleaved and stable signal peptide (SSP) as an essential subunit of the mature complex. This 58-amino-acid residue peptide serves as a signal sequence and is additionally required to enable transit of the assembled GP-C complex to the Golgi, and for pH-dependent membrane fusion activity. We have investigated the C-terminal region of the Junin virus SSP to study the role of the cellular signal peptidase (SPase) in generating SSP. Site-directed mutagenesis at the cleavage site (positions - 1 and - 3) reveals a pattern of side-chain preferences consistent with those of SPase. Although position - 2 is degenerate for SPase cleavage, this residue in the arenavirus SSP is invariably a cysteine. In the Junin virus, this cysteine is not involved in disulfide bonding. We show that replacement with alanine or serine is tolerated for SPase cleavage but prevents the mutant SSP from associating with GP-C and enabling transport to the cell surface. Conversely, an arginine mutation at position - 1 that prevents SPase cleavage is fully compatible with GP-C-mediated membrane fusion activity when the mutant SSP is provided in trans. These results point to distinct roles of SSP sequences in SPase cleavage and GP-C biogenesis. Further studies of the unique structural organization of the GP-C complex will be important in identifying novel opportunities for antiviral intervention against arenaviral hemorrhagic disease

  2. Information dissemination model for social media with constant updates

    Science.gov (United States)

    Zhu, Hui; Wu, Heng; Cao, Jin; Fu, Gang; Li, Hui

    2018-07-01

    With the development of social media tools and the pervasiveness of smart terminals, social media has become a significant source of information for many individuals. However, false information can spread rapidly, which may result in negative social impacts and serious economic losses. Thus, reducing the unfavorable effects of false information has become an urgent challenge. In this paper, a new competitive model called DMCU is proposed to describe the dissemination of information with constant updates in social media. In the model, we focus on the competitive relationship between the original false information and updated information, and then propose the priority of related information. To more effectively evaluate the effectiveness of the proposed model, data sets containing actual social media activity are utilized in experiments. Simulation results demonstrate that the DMCU model can precisely describe the process of information dissemination with constant updates, and that it can be used to forecast information dissemination trends on social media.

  3. Constant force extensional rheometry of polymer solutions

    DEFF Research Database (Denmark)

    Szabo, Peter; McKinley, Gareth H.; Clasen, Christian

    2012-01-01

    We revisit the rapid stretching of a liquid filament under the action of a constant imposed tensile force, a problem which was first considered by Matta and Tytus [J. Non-Newton. Fluid Mech. 35 (1990) 215–229]. A liquid bridge formed from a viscous Newtonian fluid or from a dilute polymer solution...... is first established between two cylindrical disks. The upper disk is held fixed and may be connected to a force transducer while the lower cylinder falls due to gravity. By varying the mass of the falling cylinder and measuring its resulting acceleration, the viscoelastic nature of the elongating fluid...... filament can be probed. In particular, we show that with this constant force pull (CFP) technique it is possible to readily impose very large material strains and strain rates so that the maximum extensibility of the polymer molecules may be quantified. This unique characteristic of the experiment...

  4. f(R) constant-roll inflation

    Energy Technology Data Exchange (ETDEWEB)

    Motohashi, Hayato [Universidad de Valencia-CSIC, Instituto de Fisica Corpuscular (IFIC), Valencia (Spain); Starobinsky, Alexei A. [L.D. Landau Institute for Theoretical Physics, RAS, Moscow (Russian Federation); National Research University Higher School of Economics, Moscow (Russian Federation)

    2017-08-15

    The previously introduced class of two-parametric phenomenological inflationary models in general relativity in which the slow-roll assumption is replaced by the more general, constant-roll condition is generalized to the case of f(R) gravity. A simple constant-roll condition is defined in the original Jordan frame, and exact expressions for a scalaron potential in the Einstein frame, for a function f(R) (in the parametric form) and for inflationary dynamics are obtained. The region of the model parameters permitted by the latest observational constraints on the scalar spectral index and the tensor-to-scalar ratio of primordial metric perturbations generated during inflation is determined. (orig.)
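
    In the scalar-field case the constant-roll condition is usually written as a fixed ratio of the field acceleration to the Hubble friction term; the Jordan-frame analogue on F = df/dR shown alongside is quoted here as an assumption of the same form, not necessarily the paper's exact definition.

    ```latex
    % Scalar-field constant-roll condition and its natural f(R)/Jordan-frame
    % analogue on F = df/dR (the latter written here as an assumption of the
    % same form, not necessarily the paper's exact definition):
    \ddot{\phi} = \beta H \dot{\phi},
    \qquad
    \ddot{F} = \beta H \dot{F}, \quad F \equiv \frac{\mathrm{d}f}{\mathrm{d}R}
    ```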

  5. Benjamin Constant. Libertad, democracia y pluralismo

    Directory of Open Access Journals (Sweden)

    Claudia Patricia Fonnegra Osorio

    2015-12-01

    From an interpretive approach, this article addresses why, for Benjamin Constant, democracy can only exist where there is a necessary relationship between liberty understood as the defense of individual rights (liberty as independence, or negative liberty) and liberty conceived as the principle of public participation (liberty as autonomy, or positive liberty). It also presents the importance the author attributes to the traditions that give life to the configuration of a people's cultural universe. It concludes that Constant's work contains a clear defense of the rule of law and of pluralism, which can illuminate the understanding of contemporary political problems.

  6. Varying constants, black holes, and quantum gravity

    International Nuclear Information System (INIS)

    Carlip, S.

    2003-01-01

    Tentative observations and theoretical considerations have recently led to renewed interest in models of fundamental physics in which certain 'constants' vary in time. Assuming fixed black hole mass and the standard form of the Bekenstein-Hawking entropy, Davies, Davis and Lineweaver have argued that the laws of black hole thermodynamics disfavor models in which the fundamental electric charge e changes. I show that with these assumptions, similar considerations severely constrain 'varying speed of light' models, unless we are prepared to abandon cherished assumptions about quantum gravity. Relaxation of these assumptions permits sensible theories of quantum gravity with ''varying constants,'' but also eliminates the thermodynamic constraints, though the black hole mass spectrum may still provide some restrictions on the range of allowable models

  7. Cosmological constant in the quantum multiverse

    International Nuclear Information System (INIS)

    Larsen, Grant; Nomura, Yasunori; Roberts, Hannes L. L.

    2011-01-01

    Recently, a new framework for describing the multiverse has been proposed which is based on the principles of quantum mechanics. The framework allows for well-defined predictions, both regarding global properties of the universe and outcomes of particular experiments, according to a single probability formula. This provides complete unification of the eternally inflating multiverse and many worlds in quantum mechanics. In this paper, we elucidate how cosmological parameters can be calculated in this framework, and study the probability distribution for the value of the cosmological constant. We consider both positive and negative values, and find that the observed value is consistent with the calculated distribution at an order of magnitude level. In particular, in contrast to the case of earlier measure proposals, our framework prefers a positive cosmological constant over a negative one. These results depend only moderately on how we model galaxy formation and life evolution therein.

  8. On determining dose rate constants spectroscopically

    International Nuclear Information System (INIS)

    Rodriguez, M.; Rogers, D. W. O.

    2013-01-01

    Purpose: To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of 125I and 103Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089–6104 (2010)], including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated 125I and 103Pd sources. Methods: Spectra generated by 14 125I and 6 103Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm³ voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council on Radiation Protection and Measurements) Report 58 for the 125I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for 103Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ⩽0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. Results: The ratio of the intensity of the 31 keV line relative to that of the main peak in 125I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum vs that calculated with the TG-43U1 initial spectrum. The 103Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. The measured values from three different
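
    For reference, the TG-43U1 definition of the dose rate constant used above is the dose rate to water at the reference point (1 cm on the transverse axis) per unit air-kerma strength:

    ```latex
    % TG-43U1 definition of the dose rate constant referenced above: dose rate to
    % water at the reference point (r_0 = 1 cm, \theta_0 = \pi/2 on the transverse
    % axis) per unit air-kerma strength S_K.
    \Lambda = \frac{\dot{D}(r_0, \theta_0)}{S_K}, \qquad r_0 = 1\,\mathrm{cm}, \quad \theta_0 = \pi/2
    ```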

  9. Some Dynamical Effects of the Cosmological Constant

    Science.gov (United States)

    Axenides, M.; Floratos, E. G.; Perivolaropoulos, L.

    Newton's law is modified in the presence of a cosmological constant by a small repulsive term (antigravity) that is proportional to the distance. Assuming a value of the cosmological constant consistent with the recent SnIa data (Λ ≈ 10⁻⁵² m⁻²), we investigate the significance of this term on various astrophysical scales. We find that on galactic scales or smaller (less than a few tens of kpc), the dynamical effects of the vacuum energy are negligible by several orders of magnitude. On scales of 1 Mpc or larger, however, we find that the vacuum energy can significantly affect the dynamics. For example, we show that the velocity data in the local group of galaxies correspond to galactic masses increased by 35% in the presence of vacuum energy. The effect is even more important on larger low-density systems like clusters of galaxies or superclusters.
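
    The modification referred to above is the standard weak-field (Newtonian) limit of general relativity with a cosmological constant, in which the radial acceleration around a mass M acquires a repulsive term proportional to the distance:

    ```latex
    % Weak-field (Newtonian) limit with a cosmological constant: the radial
    % acceleration around a mass M acquires a repulsive term proportional to r.
    g(r) = -\frac{G M}{r^{2}} + \frac{\Lambda c^{2}}{3}\,r
    ```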

  10. Daylight calculations using constant luminance curves

    Energy Technology Data Exchange (ETDEWEB)

    Betman, E. [CRICYT, Mendoza (Argentina). Laboratorio de Ambiente Humano y Vivienda

    2005-02-01

    This paper presents a simple method to manually estimate daylight availability and to make daylight calculations using constant luminance curves calculated with local illuminance and irradiance data and the all-weather model for sky luminance distribution developed in the Atmospheric Science Research Center of the University of New York (ARSC) by Richard Perez et al. Work with constant luminance curves has the advantage that daylight calculations include the problem's directionality and preserve the information of the luminous climate of the place. This permits accurate knowledge of the resource and a strong basis to establish conclusions concerning topics related to the energy efficiency and comfort in buildings. The characteristics of the proposed method are compared with the method that uses the daylight factor. (author)

  11. Understanding fine structure constants and three generations

    International Nuclear Information System (INIS)

    Bennett, D.L.; Nielsen, H.B.

    1988-02-01

    We put forward a model inspired by random dynamics that relates the smallness of the gauge coupling constants to the number of generations being 'large'. The new element in the present version of our model is the appearance of a free parameter χ that is a measure of the (presumably relatively minor) importance of a term in the plaquette action proportional to the trace in the (1/6, 2, 3) representation of the Standard Model. Calling N_gen the number of generations, the sets of allowed (N_gen, χN_gen) pairs obtained by imposing the three measured coupling constant values of the Standard Model form three lines. In addition to finding that these lines cross at a single point (as needed for a consistent fit), the intersection occurs with surprising accuracy at the integer N_gen = 3 (thereby predicting exactly three generations). It is also encouraging that the parameter χ turns out to be small and positive as expected. (orig.)

  12. Bardeen-Cooper-Schrieffer universal constants generalized

    International Nuclear Information System (INIS)

    Hazaimeh, A.H.

    1992-01-01

    Weak- and moderate-coupling BCS superconductivity theory is shown to admit a more general T_c formula, wherein T_c approaches zero somewhat faster than with the familiar BCS T_c formula. This theory leads to a departure from the universal behavior of the gap-to-T_c ratio and is consistent with some recent empirical values for exotic superconductors. This ratio is smaller than the universal BCS value of 3.53 in a way that is consistent with weak electron-boson coupling. Similarly, other universal constants related to the specific heat and the critical magnetic field are modified. In this dissertation, the author investigates the latter constants for weak and moderate coupling and carries out detailed comparisons with experimental data for the cuprates and with the corresponding predictions of strong-coupling theory, in an effort to elucidate the nature of these superconductors with regard to coupling strength within an electron-boson mechanism.
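
    For reference, the weak-coupling BCS universal numbers that the generalized theory departs from can be written down directly. A minimal sketch with the standard textbook values (not the modified constants derived in the dissertation):

      import math

      EULER_GAMMA = 0.5772156649   # Euler–Mascheroni constant
      ZETA_3 = 1.2020569           # Riemann zeta(3)

      gap_ratio = 2.0 * math.pi / math.exp(EULER_GAMMA)   # 2*Delta(0) / (k_B * T_c)
      specific_heat_jump = 12.0 / (7.0 * ZETA_3)           # Delta C / C_n at T_c

      print(round(gap_ratio, 3))           # 3.528, the universal 3.53 quoted above
      print(round(specific_heat_jump, 3))  # 1.426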

  13. Piezooptical constants of Rochelle salt crystals

    OpenAIRE

    V.Yo. Stadnyk; M.O. Romanyuk; V.Yu. Kurlyak; V.F. Vachulovych

    2000-01-01

    The influence of uniaxial mechanical pressure applied along the principal axes and the corresponding bisectors on the birefringent properties of Rochelle salt (RS) crystals is studied. The temperature (77-300 K) and spectral (300-700 nm) dependences of the effective and absolute piezooptical constants of the RS crystals are calculated. An intercept of the dispersion curves is revealed in the region of the birefringence sign inversion. This testifies that the anisotropy of the piezooptical ...

  14. Simulated annealing with constant thermodynamic speed

    International Nuclear Information System (INIS)

    Salamon, P.; Ruppeiner, G.; Liao, L.; Pedersen, J.

    1987-01-01

    Arguments are presented to the effect that the optimal annealing schedule for simulated annealing proceeds with constant thermodynamic speed, i.e., with dT/dt = -vT/(ε√C), where T is the temperature, ε is the relaxation time, C is the heat capacity, t is the time, and v is the thermodynamic speed. Experimental results consistent with this conjecture are presented from simulated annealing on graph partitioning problems. (orig.)
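
    A minimal sketch of such a schedule, assuming the heat capacity is estimated on the fly from energy fluctuations and the relaxation time is taken as a crude constant (the estimators and the toy objective are illustrative, not from the paper):

      import math
      import random

      def anneal_constant_speed(energy, neighbor, x0, T0=1.0, v=0.05, steps=20000):
          """Simulated annealing with the schedule dT/dt = -v*T/(eps*sqrt(C))."""
          x, T = x0, T0
          window = []                      # recent energies for fluctuation estimates
          for _ in range(steps):
              y = neighbor(x)
              dE = energy(y) - energy(x)
              if dE <= 0 or random.random() < math.exp(-dE / T):
                  x = y
              window.append(energy(x))
              if len(window) == 50:        # update T once per "time unit" of 50 sweeps
                  mean = sum(window) / len(window)
                  var = sum((e - mean) ** 2 for e in window) / len(window)
                  C = max(var / T**2, 1e-12)          # heat capacity from fluctuations
                  eps = 50.0                          # crude constant relaxation time
                  T = max(T - v * T / (eps * math.sqrt(C)), 1e-6)   # Euler step
                  window.clear()
          return x, T

      # Toy usage: minimize a one-dimensional double well.
      best, T_final = anneal_constant_speed(
          energy=lambda z: (z * z - 1.0) ** 2 + 0.2 * z,
          neighbor=lambda z: z + random.uniform(-0.3, 0.3),
          x0=3.0)
      print(best, T_final)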

  15. A noteworthy dimensionless constant in gravitation theory

    International Nuclear Information System (INIS)

    Fayos, F.; Lobo, J.A.; Llanta, E.

    1986-01-01

    A simple problem of gravitation is studied classically and in the Schwarzschild framework. A relationship is found between the parameters that define the trajectories of two particles (the first in radial motion and the second in a circular orbit) which are initially together and meet again after one revolution of particle 2. Dimensional analysis is the key to explaining the appearance of a dimensionless constant in the Newtonian case. (author)

  16. Cosmological Constant and the Final Anthropic Hypothesis

    OpenAIRE

    Cirkovic, Milan M.; Bostrom, Nick

    1999-01-01

    The influence of recent detections of a finite vacuum energy ("cosmological constant") on our formulation of anthropic conjectures, particularly the so-called Final Anthropic Principle, is investigated. It is shown that non-zero vacuum energy implies the onset of a quasi-exponential expansion of our causally connected domain ("the universe") at some point in the future, a stage similar to the inflationary expansion at the very beginning of time. The transition to this future inflationary phase...

  17. Singlet axial constant from QCD sum rules

    International Nuclear Information System (INIS)

    Belitskij, A.V.; Teryaev, O.V.

    1995-01-01

    We analyze the singlet axial form factor of the proton at small momentum transfer in the framework of QCD sum rules, using an interpolating nucleon current that explicitly accounts for the gluonic degrees of freedom. As a result, we arrive at a quantitative prediction for the singlet axial constant. It is shown that the bilocal power corrections play the most important role in the analysis. 21 refs., 3 figs

  18. Lattice Paths and the Constant Term

    International Nuclear Information System (INIS)

    Brak, R; Essam, J; Osborn, J; Owczarek, A L; Rechnitzer, A

    2006-01-01

    We first review the constant term method (CTM), illustrating its combinatorial connections, and show how it can be used to solve a certain class of lattice path problems. We show the connection between the CTM, the transfer matrix method (eigenvectors and eigenvalues), partial difference equations, the Bethe Ansatz, and orthogonal polynomials. Second, we solve a lattice path problem first posed in 1971, which at that time was solved only for a special case; here we solve the full model.
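
    As a small illustration of the CTM in the lattice-path setting, the number of Dyck paths with 2n steps (the Catalan number C_n) can be written as the constant term CT[(x + 1/x)^(2n) (1 - x^2)], each factor (x + 1/x) encoding an up or down step. A minimal sketch using exponent dictionaries for Laurent polynomials (a generic textbook-style example, not the 1971 model solved in the paper):

      from collections import defaultdict
      from math import comb

      def laurent_mul(p, q):
          """Multiply Laurent polynomials stored as {exponent: coefficient} dicts."""
          out = defaultdict(int)
          for e1, c1 in p.items():
              for e2, c2 in q.items():
                  out[e1 + e2] += c1 * c2
          return dict(out)

      def catalan_via_constant_term(n):
          """C_n = CT[(x + 1/x)^(2n) * (1 - x^2)]."""
          step = {1: 1, -1: 1}             # x + 1/x
          walk = {0: 1}
          for _ in range(2 * n):
              walk = laurent_mul(walk, step)
          product = laurent_mul(walk, {0: 1, 2: -1})    # multiply by 1 - x^2
          return product.get(0, 0)                      # extract the constant term

      print([catalan_via_constant_term(n) for n in range(7)])   # 1, 1, 2, 5, 14, 42, 132
      assert all(catalan_via_constant_term(n) == comb(2 * n, n) // (n + 1) for n in range(7))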

  19. Strong coupling constant extraction from high-multiplicity Z+jets observables

    Science.gov (United States)

    Johnson, Mark; Maître, Daniel

    2018-03-01

    We present a strong coupling constant extraction at next-to-leading-order QCD accuracy using ATLAS Z + 2, 3, 4 jets data. This is the first extraction using processes with a dependence on high powers of the coupling constant. We obtain values of the strong coupling constant at the Z mass compatible with the world average and with uncertainties commensurate with other next-to-leading-order extractions at hadron colliders. Our most conservative result for the strong coupling constant is α_S(M_Z) = 0.1178 +0.0051 −0.0043.

  20. Elastic constants from microscopic strain fluctuations

    Science.gov (United States)

    Sengupta; Nielaba; Rao; Binder

    2000-02-01

    Fluctuations of the instantaneous local Lagrangian strain ε_ij(r, t), measured with respect to a static "reference" lattice, are used to obtain accurate estimates of the elastic constants of model solids from atomistic computer simulations. The measured strains are systematically coarse-grained by averaging them within subsystems (of size L_b) of a system (of total size L) in the canonical ensemble. Using a simple finite-size scaling theory, we predict the behavior of the fluctuations as a function of L_b/L and extract elastic constants of the system in the thermodynamic limit at nonzero temperature. Our method is simple to implement, efficient, and general enough to handle a wide class of model systems, including those with singular potentials, without any essential modification. We illustrate the technique by computing isothermal elastic constants of "hard" and "soft" disk triangular solids in two dimensions from Monte Carlo and molecular dynamics simulations. We compare our results with those from earlier simulations and theory.
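
    The fluctuation relation underlying this approach can be sketched without the coarse-graining and finite-size scaling machinery: at zero external stress, the covariance of sampled strain components estimates the compliance matrix, and its inverse gives the stiffness (elastic-constant) matrix. A minimal sketch assuming a series of 2D Voigt-notation strain samples is already available (the input here is synthetic):

      import numpy as np

      def stiffness_from_strain_fluctuations(strains, volume, kT):
          """Estimate the 2D stiffness matrix C from strain fluctuations.

          strains: (n_samples, 3) array of Voigt strains (e_xx, e_yy, 2*e_xy).
          Uses the basic relation S = (V/kT) * cov(strain) and C = S^-1; this is
          not the coarse-graining/finite-size analysis of the paper above.
          """
          d = strains - strains.mean(axis=0)
          compliance = (volume / kT) * (d.T @ d) / len(strains)
          return np.linalg.inv(compliance)

      # Synthetic check: draw strains consistent with a known stiffness and recover it.
      rng = np.random.default_rng(0)
      true_C = np.array([[80.0, 30.0, 0.0],
                         [30.0, 80.0, 0.0],
                         [0.0,  0.0, 25.0]])
      V, kT = 400.0, 1.0
      samples = rng.multivariate_normal(np.zeros(3), (kT / V) * np.linalg.inv(true_C),
                                        size=100000)
      print(np.round(stiffness_from_strain_fluctuations(samples, V, kT), 1))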