WorldWideScience

Sample records for methods typically require

  1. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children

    Science.gov (United States)

    Ameel, Eef; Storms, Gert

    2016-01-01

    An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness in concept development at younger ages. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality and which requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measure in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children’s category knowledge and to evaluate how this knowledge evolves over time. Contrary to assumptions held in earlier studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided: the two variables are often significantly correlated in older children and even in adults. PMID:27322371

  2. Method for calculating required shielding in medical x-ray rooms

    International Nuclear Information System (INIS)

    Karppinen, J.

    1997-10-01

    The new annual radiation dose limits - 20 mSv (previously 50 mSv) for radiation workers and 1 mSv (previously 5 mSv) for other persons - imply that the adequacy of existing radiation shielding must be re-evaluated. In principle, one could assume that the thicknesses of old radiation shields should be increased by about one or two half-value layers in order to comply with the new dose limits. However, the assumptions made in the earlier shielding calculations are highly conservative; the required shielding was often determined by applying the maximum high voltage of the x-ray tube to the whole workload. A more realistic calculation shows that increased shielding is typically not necessary if more practical x-ray tube voltages are used in the evaluation. We have developed a PC-based method for calculating x-ray shielding that is more realistic than the highly conservative method formerly used. The method may be used to evaluate an existing shield for compliance with the new regulations. Typical x-ray rooms are considered as examples of these calculations. The lead and concrete thickness requirements as a function of x-ray tube voltage and workload are also given in tables. (author)
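
The half-value-layer arithmetic behind such re-evaluations can be sketched as follows. The dose values and the 0.25 mm lead half-value layer below are illustrative assumptions, not figures from the paper:

```python
import math

def barrier_thickness_mm(unshielded_mSv_per_wk, limit_mSv_per_wk, hvl_mm):
    """Shield thickness needed so the transmitted dose meets the design limit.

    Required transmission factor: B = limit / unshielded dose.
    Each half-value layer halves the dose, B = 2**(-n), so the barrier
    needs n = log2(1/B) half-value layers.
    """
    B = limit_mSv_per_wk / unshielded_mSv_per_wk
    n_hvl = math.log2(1.0 / B)
    return n_hvl * hvl_mm

# Illustrative numbers: 40 mSv/week unshielded at the occupied point,
# 0.02 mSv/week design target, lead HVL ~0.25 mm (diagnostic kV range)
t = barrier_thickness_mm(40.0, 0.02, 0.25)   # about 2.7 mm of lead
```

Note that tightening a dose limit by a factor of five only multiplies the required attenuation by five, i.e. adds log2(5) ≈ 2.3 half-value layers, which is consistent with the abstract's "one or two half-value layers" estimate before the workload assumptions are revisited.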

  3. A Hybrid Method for Generation of Typical Meteorological Years for Different Climates of China

    Directory of Open Access Journals (Sweden)

    Haixiang Zang

    2016-12-01

    Full Text Available Since a representative dataset of the climatological features of a location is important for calculations in many fields, such as solar energy systems, agriculture, meteorology and architecture, there is a need to investigate methodology for generating a typical meteorological year (TMY). In this paper, a hybrid method combining selected results from the Danish method, the Festa-Ratto method, and the modified typical meteorological year method is proposed to determine typical meteorological years for 35 locations in six different climatic zones of China (Tropical Zone, Subtropical Zone, Warm Temperate Zone, Mid Temperate Zone, Cold Temperate Zone and Tibetan Plateau Zone). Measured weather data (air dry-bulb temperature, air relative humidity, wind speed, pressure, sunshine duration and global solar radiation) covering the period 1994–2015 are obtained and applied in the process of forming the TMY. The TMY data and typical solar radiation data are investigated and analyzed in this study. The results of the hybrid method are found to reproduce the long-term average measured data over the year better than the other investigated methods. Moreover, the Gaussian process regression (GPR) model is recommended for forecasting the monthly mean solar radiation using the last 22 years (1994–2015) of measured data.

  4. Proposal for Requirement Validation Criteria and Method Based on Actor Interaction

    Science.gov (United States)

    Hattori, Noboru; Yamamoto, Shuichiro; Ajisaka, Tsuneo; Kitani, Tsuyoshi

    We propose requirement validation criteria and a method based on the interaction between actors in an information system. We focus on the cyclical transitions of one actor's situation against another and clarify observable stimuli and responses based on these transitions. Both actors' situations can be listed in a state transition table, which describes the observable stimuli or responses they send or receive. Examining the interaction between both actors in the state transition tables enables us to detect missing or defective observable stimuli or responses. Typically, this method can be applied to examining the interaction between a resource managed by the information system and its user. As a case study, we analyzed 332 requirement defect reports from an actual system development project in Japan. We found a considerable number of defects involving missing or defective stimuli and responses, which could have been detected with our proposed method had it been used in the requirement definition phase. This means that we can reach a more complete requirement definition with our proposed method.

  5. Comparative analysis on flexibility requirements of typical Cryogenic Transfer lines

    Science.gov (United States)

    Jadon, Mohit; Kumar, Uday; Choukekar, Ketan; Shah, Nitin; Sarkar, Biswanath

    2017-04-01

    Cryogenic systems and their applications, primarily in large fusion devices, utilize multiple cryogen transfer lines of various sizes and complexities to transfer cryogenic fluids from the plant to the various users/applications. These transfer lines are composed of various critical sections, i.e. tee sections, elbows, flexible components, etc. The mechanical sustainability of these transfer lines (under failure circumstances) is a primary requirement for safe operation of the system and applications. The transfer lines need to be designed for multiple design constraints such as line layout, support locations and space restrictions. They are subjected to single loads and multiple load combinations, such as operational loads, seismic loads, and loads due to a leak in the insulation vacuum [1]. Analytical calculations and flexibility analysis using professional software were performed for typical transfer lines without any flexible component, and the results were analysed for functional and mechanical load conditions. The failure modes were identified along the critical sections. The same transfer line was then refurbished with flexible components and analysed again for failure modes; the flexible components provide additional flexibility to the transfer-line system and make it safe. The results obtained from the analytical calculations were compared with those obtained from the flexibility analysis software. Finally, the size and selection of the flexible components were optimized to meet the design requirements as per code.

  6. Typical Complexity Numbers

    Indian Academy of Sciences (India)

    Typical complexity numbers: say 1000 tones, 100 users, and a transmission every 10 msec. Full crosstalk cancellation requires a matrix multiplication of order 100*100 for all the tones, i.e. 1000*100*100*100 operations every second for the ...
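
The arithmetic behind these numbers can be checked directly:

```python
tones = 1000              # DMT tones
users = 100               # lines whose crosstalk must be cancelled
tx_per_second = 100       # one transmission every 10 msec

# Full crosstalk cancellation: a users x users matrix-vector multiply
# per tone (users*users multiplications), for every tone, repeated
# for every transmission
ops_per_second = tones * users * users * tx_per_second
print(ops_per_second)     # a billion multiplications per second
```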

  7. How typical are 'typical' tremor characteristics? : Sensitivity and specificity of five tremor phenomena

    NARCIS (Netherlands)

    van der Stouwe, A. M. M.; Elting, J. W.; van der Hoeven, J. H.; van Laar, T.; Leenders, K. L.; Maurits, N. M.; Tijssen, M. Aj.

    Introduction: Distinguishing between different tremor disorders can be challenging. Some tremor disorders are thought to have typical tremor characteristics: the current study aims to provide sensitivity and specificity for five 'typical' tremor phenomena. Methods: Retrospectively, we examined 210

  8. Generation of typical meteorological year for different climates of China

    International Nuclear Information System (INIS)

    Jiang, Yingni

    2010-01-01

    Accurate prediction of building energy performance requires precise information about the local climate. Typical weather year files such as the typical meteorological year (TMY) are commonly used in building simulation, and are also essential for numerical analysis of sustainable and renewable energy systems. The present paper describes the generation of a typical meteorological year (TMY) for eight cities representing the major climate zones of China. The data set, which includes global solar radiation data and other meteorological parameters (dry-bulb temperature, relative humidity, and wind speed), has been analyzed. The typical meteorological year is generated from the available meteorological data recorded during the period 1995-2004, using the Finkelstein-Schafer statistical method. The cumulative distribution function (CDF) for each year is compared with the CDF for the long-term composite of all the years in the period. Typical months for each of the 12 calendar months are selected by choosing, from the period of years, the one with the smallest deviation from the long-term CDF. The 12 typical months selected from the different years are used to formulate a TMY.
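
The Finkelstein-Schafer comparison of a candidate month's CDF with the long-term CDF can be sketched minimally as follows; this shows only the statistic itself, omitting the per-parameter weighting and persistence criteria used in full TMY procedures:

```python
from bisect import bisect_right

def fs_statistic(candidate, long_term):
    """Finkelstein-Schafer statistic: mean absolute difference between
    the empirical CDF of one candidate month's daily values and the
    long-term empirical CDF, evaluated at the candidate's values.
    Smaller FS means the month is closer to the long-term climate."""
    cand = sorted(candidate)
    long = sorted(long_term)
    n, m = len(cand), len(long)
    total = 0.0
    for i, x in enumerate(cand, start=1):
        cdf_cand = (i - 0.5) / n              # candidate-month empirical CDF
        cdf_long = bisect_right(long, x) / m  # long-term empirical CDF at x
        total += abs(cdf_cand - cdf_long)
    return total / n
```

For each calendar month, the candidate year minimizing FS (possibly as a weighted sum over several weather parameters) is selected into the TMY.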

  9. METHODS OF THE APPROXIMATE ESTIMATIONS OF FATIGUE DURABILITY OF COMPOSITE AIRFRAME COMPONENT TYPICAL ELEMENTS

    Directory of Open Access Journals (Sweden)

    V. E. Strizhius

    2015-01-01

    Methods for approximate estimation of the fatigue durability of typical composite airframe components, which can be recommended for application at the preliminary design stage of an airplane, are developed and presented.

  10. LBB evaluation for a typical Japanese PWR primary loop by using the US NRC approved methods

    Energy Technology Data Exchange (ETDEWEB)

    Swamy, S.A.; Bhowmick, D.C.; Prager, D.E. [Westinghouse Nuclear Technology Division, Pittsburgh, PA (United States)

    1997-04-01

    The regulatory requirements for postulated pipe ruptures have changed significantly since the first nuclear plants were designed. The Leak-Before-Break (LBB) methodology is now accepted as a technically justifiable approach for eliminating the postulation of double-ended guillotine breaks (DEGB) in high energy piping systems. The previous pipe rupture design requirements for nuclear power plant applications are responsible for the numerous and massive pipe whip restraints and jet shields installed in each plant. This results in significant plant congestion, increased labor costs and radiation dosage for normal maintenance and inspection. The restraints also increase the probability of interference between the piping and supporting structures during plant heatup, thereby potentially impacting overall plant reliability. The LBB approach to eliminating postulated ruptures in high energy piping systems is a significant improvement over former regulatory methodologies, and the LBB approach to design is therefore gaining worldwide acceptance. However, the methods and criteria for LBB evaluation depend upon the policies of individual countries, and significant effort continues toward accomplishing uniformity on a global basis. In this paper the historical development of the U.S. LBB criteria is traced and the results of an LBB evaluation for a typical Japanese PWR primary loop applying U.S. NRC approved methods are presented. In addition, another approach using the Japanese LBB criteria is shown and compared with the U.S. criteria, and the comparison is highlighted with detailed discussion.

  11. Inverse operator method for solutions of nonlinear dynamical equations and some typical applications

    International Nuclear Information System (INIS)

    Fang Jinqing; Yao Weiguang

    1993-01-01

    The inverse operator method (IOM) is described briefly. We have implemented the IOM for the solution of nonlinear dynamical equations by mathematics-mechanization (MM) with computers, yielding a new and powerful method applicable to many areas of physics. We have applied it successfully to study the chaotic behaviors of some nonlinear dynamical equations. As typical examples, the well-known Lorenz equation, the generalized Duffing equation and two coupled generalized Duffing equations are investigated using the IOM and the MM. The results are in good agreement with those given by the Runge-Kutta method. The IOM realized by the MM is thus of potential value for application in nonlinear physics and many other fields.

  12. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    Science.gov (United States)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that “requirement is the origin, design is the basis”. So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method of the requirements are introduced in detail, in the hope of providing experience for other civil jet product designs.

  13. Research on release rate of volatile organic compounds in typical vessel cabin

    Directory of Open Access Journals (Sweden)

    ZHANG Jinlan

    2018-02-01

    [Objectives] Volatile organic compounds (VOCs) should be efficiently controlled in vessel cabins to ensure the crew's health and navigation safety. As an important parameter, the release rate of VOCs in cabins requires investigation. [Methods] This paper develops a method to investigate this parameter for a ship's cabin based on methods used in other closed indoor environments. A typical vessel cabin is sampled with Tenax TA tubes and analyzed by automated thermal desorption-gas chromatography-mass spectrometry (ATD-GC/MS). A lumped model is used, and the release rates of benzene, toluene, ethylbenzene and xylene (BTEX), the typical representatives of VOCs, are obtained in both closed and ventilated conditions. [Results] The results show that the contents of xylene and total volatile organic compounds (TVOC) exceed the indoor environment standards in ventilated conditions. The BTEX release rates are similar in both conditions, except for benzene. [Conclusions] This research establishes a method to measure the release rate of VOCs, providing a reference for pollution characterization and for ventilation and purification system design.
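
A lumped (well-mixed) cabin mass balance of the kind used for such release-rate estimates can be sketched as follows. The model form V·dC/dt = E − Q·C is standard for closed indoor environments, but the numbers below are illustrative, not the paper's data:

```python
def release_rate_closed(volume_m3, c_start_mg_m3, c_end_mg_m3, dt_h):
    """Closed cabin (no ventilation, Q = 0): V*dC/dt = E,
    so E = V*(C_end - C_start)/dt, in mg/h."""
    return volume_m3 * (c_end_mg_m3 - c_start_mg_m3) / dt_h

def release_rate_ventilated(flow_m3_h, c_steady_mg_m3, c_supply_mg_m3=0.0):
    """Ventilated steady state (dC/dt = 0): E = Q*(C_ss - C_supply), in mg/h."""
    return flow_m3_h * (c_steady_mg_m3 - c_supply_mg_m3)

# Illustrative: a 50 m3 cabin with toluene rising from 0.05 to
# 0.35 mg/m3 over 2 h while the cabin is sealed
E_closed = release_rate_closed(50.0, 0.05, 0.35, 2.0)   # 7.5 mg/h
```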

  14. Metric-based method of software requirements correctness improvement

    Directory of Open Access Journals (Sweden)

    Yaremchuk Svitlana

    2017-01-01

    The work highlights the most important principles of software reliability management (SRM). The SRM concept provides a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. It applies a new metric to evaluate requirements complexity, and a double sorting technique evaluating the priority and complexity of each requirement. The method improves requirements correctness through identification of a higher number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a tangible technical and economic effect.

  15. A Fluorine-18 Radiolabeling Method Enabled by Rhenium(I) Complexation Circumvents the Requirement of Anhydrous Conditions.

    Science.gov (United States)

    Klenner, Mitchell A; Pascali, Giancarlo; Zhang, Bo; Sia, Tiffany R; Spare, Lawson K; Krause-Heuer, Anwen M; Aldrich-Wright, Janice R; Greguric, Ivan; Guastella, Adam J; Massi, Massimiliano; Fraser, Benjamin H

    2017-05-11

    Azeotropic distillation is typically required to achieve fluorine-18 radiolabeling during the production of positron emission tomography (PET) imaging agents. However, this time-consuming process also limits fluorine-18 incorporation, due to radioactive decay of the isotope and its adsorption to the drying vessel. In addressing these limitations, the fluorine-18 radiolabeling of one model rhenium(I) complex is reported here, which is significantly improved under conditions that do not require azeotropic drying. This work could open a route towards the investigation of a simplified metal-mediated late-stage radiofluorination method, which would expand upon the accessibility of new PET and PET-optical probes. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Comparing the methods plot and point-centered quarter to describe a woody community from typical Cerrado

    Directory of Open Access Journals (Sweden)

    Firmino Cardoso Pereira

    2015-05-01

    This article evaluates the effectiveness of the fixed-area plot (AP) and point-centered quarter (PQ) methods for describing a woody community of typical Cerrado. We used 10 APs and 140 PQs, distributed over 5 transects. We compared the density of individuals, floristic composition, richness of families, genera, and species, and vertical and horizontal vegetation structure. The AP method was more effective for sampling the density of individuals. The PQ method was more effective for characterizing species richness and vertical vegetation structure and for recording species with low abundance. The composition of families, genera, and species, as well as the species with the highest importance value index in the community, were determined similarly by the two methods. The methods compared are complementary; we suggest that the choice of AP, PQ, or both be guided by the vegetation parameter under study.
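
The core of the PQ method is the Cottam-Curtis density estimator, sketched minimally below; the distances are illustrative, not the study's data:

```python
def pcq_density(distances_m):
    """Point-centered quarter density estimate (Cottam & Curtis):
    absolute density = 1 / (mean point-to-plant distance)^2,
    in stems per m^2 when distances are in metres."""
    mean_d = sum(distances_m) / len(distances_m)
    return 1.0 / mean_d ** 2

# One nearest-neighbour distance per quarter at each sample point,
# pooled over all points (illustrative numbers, in metres)
d = [2.1, 3.4, 1.8, 2.9, 2.5, 3.1, 2.2, 2.7]
stems_per_ha = pcq_density(d) * 10_000   # convert stems/m2 to stems/ha
```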

  17. Material characteristics and construction methods for a typical research reactor concrete containment in Iran

    International Nuclear Information System (INIS)

    Ebrahimi, Mahsa; Suh, Kune Y.; Eghbali, Rahman; Jahan, Farzaneh Asadi Malek

    2012-01-01

    Selecting an appropriate material and construction style for a concrete containment, given its function and special geometry, plays an important role in the applicability of a research reactor (RR) project and in reducing its construction cost and duration. The reactor containment enclosing the reactor vessel comprises physical barriers reflecting the safety design and construction codes, regulations and standards, so as to protect the community and the environment from uncontrolled release of radioactive materials. It is the third and last barrier against radioactivity release. It also protects the reactor vessel from such external events as earthquakes and aircraft crashes. Thus, it should be designed and constructed to withstand dead and live loads, ground and seismic loads, missile and aircraft loads, and thermal and shrinkage loads. This study presents a construction method for the concrete containment of a typical RR in Iran. The work also presents acceptable characteristics for the concrete and reinforcing rebar of a typical concrete containment. The study evaluated the various types of RR containments, and the most suitable type was selected in accordance with the current knowledge and technology of Iran.

  18. Material characteristics and construction methods for a typical research reactor concrete containment in Iran

    Energy Technology Data Exchange (ETDEWEB)

    Ebrahimi, Mahsa; Suh, Kune Y. [Seoul National Univ., Seoul (Korea, Republic of); Eghbali, Rahman; Jahan, Farzaneh Asadi Malek [School of Architecture and Urbanism, Qazvin (Iran, Islamic Republic of)

    2012-10-15

    Selecting an appropriate material and construction style for a concrete containment, given its function and special geometry, plays an important role in the applicability of a research reactor (RR) project and in reducing its construction cost and duration. The reactor containment enclosing the reactor vessel comprises physical barriers reflecting the safety design and construction codes, regulations and standards, so as to protect the community and the environment from uncontrolled release of radioactive materials. It is the third and last barrier against radioactivity release. It also protects the reactor vessel from such external events as earthquakes and aircraft crashes. Thus, it should be designed and constructed to withstand dead and live loads, ground and seismic loads, missile and aircraft loads, and thermal and shrinkage loads. This study presents a construction method for the concrete containment of a typical RR in Iran. The work also presents acceptable characteristics for the concrete and reinforcing rebar of a typical concrete containment. The study evaluated the various types of RR containments, and the most suitable type was selected in accordance with the current knowledge and technology of Iran.

  19. A new method to predict the metadynamic recrystallization behavior in a typical nickel-based superalloy

    International Nuclear Information System (INIS)

    Lin, Y.C.; Chen, Xiao-Min; Chen, Ming-Song; Wen, Dong-Xu; Zhou, Ying; He, Dao-Guang

    2016-01-01

    The metadynamic recrystallization (MDRX) behaviors of a typical nickel-based superalloy are investigated by two-pass hot compression tests and four conventional stress-based approaches (the offset stress, back-extrapolation stress, peak stress, and mean stress methods). It is found that the conventional stress-based methods are not suitable for evaluating the MDRX softening fraction of the studied superalloy. Therefore, a new approach, the 'maximum stress method', is proposed to evaluate the MDRX softening fraction. Based on the proposed method, the effects of deformation temperature, strain rate, initial average grain size, and interpass time on MDRX behaviors are discussed in detail. Results show that the MDRX softening fraction is sensitive to the deformation parameters: it rapidly increases with increasing deformation temperature, strain rate, and interpass time, and is lower in the coarse-grain material than in the fine-grain material. Moreover, the observed microstructures indicate that the initial coarse grains can be effectively refined by MDRX. Based on the experimental results, kinetics equations are established and validated to describe the MDRX behaviors of the studied superalloy. (orig.)
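
For context, the conventional stress-based approaches mentioned above all reduce to a fractional-softening formula of the form below; they differ only in how the yield stresses are read off the two flow curves. The stress values are illustrative, and the paper's new 'maximum stress method' is not reproduced here:

```python
def softening_fraction(sigma_m, sigma_1, sigma_2):
    """Conventional fractional-softening estimate for two-pass tests:

        X = (sigma_m - sigma_2) / (sigma_m - sigma_1)

    sigma_m : flow stress at interruption of the first pass
    sigma_1 : (offset) yield stress of the first pass
    sigma_2 : (offset) yield stress on reloading (second pass)
    X = 1 means fully softened (recrystallized); X = 0, no softening."""
    return (sigma_m - sigma_2) / (sigma_m - sigma_1)

# Illustrative stresses in MPa
X = softening_fraction(sigma_m=300.0, sigma_1=180.0, sigma_2=210.0)  # 0.75
```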

  20. Typical entanglement

    Science.gov (United States)

    Deelan Cunden, Fabio; Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio

    2013-05-01

    Let a pure state |ψ⟩ be chosen randomly in an NM-dimensional Hilbert space, and consider the reduced density matrix ρ_A of an N-dimensional subsystem. The bipartite entanglement properties of |ψ⟩ are encoded in the spectrum of ρ_A. By means of a saddle point method and using a "Coulomb gas" model for the eigenvalues, we obtain the typical spectrum of reduced density matrices. We consider the cases of an unbiased ensemble of pure states and of a fixed value of the purity. We finally obtain the eigenvalue distribution by using a statistical mechanics approach based on the introduction of a partition function.
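
The "Coulomb gas" picture referred to here can be made concrete. For an unbiased (unitarily invariant) random pure state of an N × M bipartite system, the joint density of the Schmidt eigenvalues of ρ_A is known to be, up to normalization,

```latex
P(\lambda_1,\dots,\lambda_N) \;\propto\;
\delta\!\Big(1-\sum_{i=1}^{N}\lambda_i\Big)\,
\prod_{1\le i<j\le N}(\lambda_i-\lambda_j)^2\,
\prod_{i=1}^{N}\lambda_i^{\,M-N}.
```

Writing the products as the exponential of a logarithmic pairwise repulsion plus a confining potential turns the eigenvalues into a two-dimensional Coulomb gas on the simplex, to which the saddle-point method applies in the large-N limit.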

  1. Comparison of methods for generating typical meteorological year using meteorological data from a tropical environment

    Energy Technology Data Exchange (ETDEWEB)

    Janjai, S.; Deeyai, P. [Laboratory of Tropical Atmospheric Physics, Department of Physics, Faculty of Science, Silpakorn University, Nakhon Pathom 73000 (Thailand)

    2009-04-15

    This paper presents a comparison of methods for generating a typical meteorological year (TMY) data set using a 10-year period of meteorological data from four stations in a tropical environment of Thailand. These methods are the Sandia National Laboratories method, the Danish method and the Festa and Ratto method. To investigate their performance, these methods were employed to generate TMYs for each station. For all parameters of the TMYs and all stations, statistical tests indicate no significant difference between the 10-year average values of these parameters and the corresponding averages from the TMY generated by each method. The TMY obtained from each method was also used as input data to simulate two solar water heating systems and two photovoltaic systems of different sizes at the four stations using the TRNSYS simulation program. Solar fractions and electrical output calculated using the TMYs are in good agreement with those computed from the 10-year period of hourly meteorological data. It is concluded that the performance of the three methods shows no significant difference for all stations under this investigation. Due to its simplicity, the method of Sandia National Laboratories is recommended for the generation of TMYs for this tropical environment. The TMYs developed in this work can be used for solar energy and energy conservation applications at the four locations in Thailand. (author)

  2. Studies of fuel loading pattern optimization for a typical pressurized water reactor (PWR) using improved pivot particle swarm method

    International Nuclear Information System (INIS)

    Liu, Shichang; Cai, Jiejin

    2012-01-01

    Highlights: ► The mathematical model of loading pattern problems for PWRs has been established. ► IPPSO was integrated with ‘donjon’ and ‘dragon’ into a fuel arrangement optimization code. ► The novel method is highly efficient for LP problems. ► The core effective multiplication factor increases by about 10% in the simulated cases. ► The power peaking factor decreases by about 0.6% in the simulated cases. -- Abstract: An in-core fuel reload design tool using the improved pivot particle swarm method was developed for loading pattern optimization problems in a typical PWR, such as the Daya Bay Nuclear Power Plant. The discrete, multi-objective improved pivot particle swarm optimization (IPPSO) was integrated with the in-core physics calculation code ‘donjon’, based on the finite element method, and the assembly group-constant calculation code ‘dragon’, composing an optimization code for fuel arrangement. Both ‘donjon’ and ‘dragon’ were developed by the Institute of Nuclear Engineering of Polytechnique Montréal, Canada. The optimization aims to maximize the core effective multiplication factor (Keff) while keeping the local power peaking factor (Ppf) below a predetermined value to maintain fuel integrity. Finally, the code was applied to the first-cycle loading of the Daya Bay Nuclear Power Plant. Compared with the reference loading pattern design, the core effective multiplication factor increased by 9.6% while the power peaking factor decreased by 0.6%, meeting the safety requirement.
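
The structure of such a search can be illustrated with a toy discrete particle-swarm sketch over loading patterns encoded as permutations. The physics evaluation (the role of ‘donjon’/‘dragon’) is replaced by a caller-supplied stub, the pivot details of the actual IPPSO are simplified to swap-based pulls toward the personal and global bests, and the Ppf limit is an illustrative number:

```python
import random

def optimize_loading(evaluate, n_pos, iters=100, n_particles=10,
                     ppf_limit=1.8, seed=0):
    """Toy discrete-PSO-style search over loading patterns (permutations).

    evaluate(pattern) -> (keff, ppf). Fitness maximizes keff and
    penalizes patterns whose power peaking factor exceeds ppf_limit.
    """
    rng = random.Random(seed)

    def fitness(p):
        keff, ppf = evaluate(p)
        return keff - 10.0 * max(0.0, ppf - ppf_limit)  # constraint penalty

    def pull_towards(p, guide):
        # one swap that makes p agree with the guide at a random position
        p = p[:]
        i = rng.randrange(n_pos)
        if p[i] != guide[i]:
            j = p.index(guide[i])
            p[i], p[j] = p[j], p[i]
        return p

    particles = [rng.sample(range(n_pos), n_pos) for _ in range(n_particles)]
    pbest = [p[:] for p in particles]          # personal bests
    gbest = max(pbest, key=fitness)            # global best
    for _ in range(iters):
        for k in range(n_particles):
            p = pull_towards(particles[k], pbest[k])   # "cognitive" pull
            p = pull_towards(p, gbest)                 # "social" pull
            i, j = rng.randrange(n_pos), rng.randrange(n_pos)
            p[i], p[j] = p[j], p[i]                    # random exploration swap
            if fitness(p) > fitness(pbest[k]):
                pbest[k] = p[:]
            particles[k] = p
        gbest = max(pbest, key=fitness)
    return gbest
```

In the real tool each fitness call would run the lattice/core physics codes, which is why the population size and iteration count are the dominant cost drivers.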

  3. Typical horticultural products between tradition and innovation

    Directory of Open Access Journals (Sweden)

    Innocenza Chessa

    Recent EU and national policies for agriculture and rural development are mainly focused on fostering the production of high-quality products, in response to the increasing demand for food safety, typical foods and traditional processing methods. Another word very often used to describe foods these days is “typicality”, which pools together the concepts of “food connected with a specific place”, “historical memory and tradition” and “culture”. The importance of this kind of food for the EU and the national administrations is demonstrated, among other things, by the high number of PDO, PGI and TSG certified products in Italy. In this period of global markets and economic crisis, farmers are realizing how “typical products” can be an opportunity to maintain their market share and to improve the economy of local areas. At the same time, new tools and strategies are needed to reach these goals. A lack of knowledge has also been recognized on how new technologies and results from recent research can help in exploiting traditional products and in maintaining biodiversity. Taking into account the great variety and richness of typical products, landscapes and biodiversity, this report describes and analyzes the relationships among typicality, innovation and research in horticulture. At the beginning, “typicality” and “innovation” are defined, also through some statistical features, which rank Italy in first place in terms of the number of typical labelled products; then it is highlighted how typical products of high quality, connected with the tradition and culture of specific production areas, are in a strict relationship with the value of agro-biodiversity. Several different examples are used to explain successful methods and/or strategies for exploiting and fostering typical Italian vegetables, fruits and flowers. Finally, as a conclusion, since it is thought that

  4. Research on the recycling industry development model for typical exterior plastic components of end-of-life passenger vehicle based on the SWOT method.

    Science.gov (United States)

    Zhang, Hongshen; Chen, Ming

    2013-11-01

    In-depth studies on the recycling of typical automotive exterior plastic parts are significant and beneficial for environmental protection, energy conservation, and the sustainable development of China. In the current study, several methods were used to analyze the recycling industry model for typical exterior parts of passenger vehicles in China. The strengths, weaknesses, opportunities, and threats of the current recycling industry for typical exterior parts of passenger vehicles were analyzed comprehensively based on the SWOT method. The internal factor evaluation matrix and external factor evaluation matrix were used to evaluate the internal and external factors of the recycling industry. The industry was found to respond well to all the factors and to face good development opportunities. Cross-linked strategy analyses for the typical exterior parts of the passenger car industry of China were then conducted based on the SWOT analysis strategies and the established SWOT matrix. Finally, based on this research, a recycling industry model led by automobile manufacturers is proposed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. The Sources and Methods of Engineering Design Requirement

    DEFF Research Database (Denmark)

    Li, Xuemeng; Zhang, Zhinan; Ahmed-Kristensen, Saeema

    2014-01-01

    to be defined in a new context. This paper focuses on understanding the sources of design requirements at the requirement elicitation phase. It aims at proposing an improved classification of design requirement sources that considers emerging markets, and at presenting current methods for eliciting requirements from each source...

  6. Aeroelastic Calculations Using CFD for a Typical Business Jet Model

    Science.gov (United States)

    Gibbons, Michael D.

    1996-01-01

Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center, where experimental flutter data were obtained from M(sub infinity) = 0.628 to M(sub infinity) = 0.888. The computational results were obtained using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between the two sets of computational results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects, while the TSD and Euler methods used here provide good results at the lower Mach numbers.

  7. [Typical atrial flutter: Diagnosis and therapy].

    Science.gov (United States)

    Thomas, Dierk; Eckardt, Lars; Estner, Heidi L; Kuniss, Malte; Meyer, Christian; Neuberger, Hans-Ruprecht; Sommer, Philipp; Steven, Daniel; Voss, Frederik; Bonnemeier, Hendrik

    2016-03-01

Typical, cavotricuspid-dependent atrial flutter is the most common atrial macroreentry tachycardia. The incidence of atrial flutter (typical and atypical forms) is age-dependent, with 5/100,000 in patients less than 50 years and approximately 600/100,000 in subjects > 80 years of age. Concomitant heart failure or pulmonary disease further increases the risk of typical atrial flutter. Patients with atrial flutter may present with symptoms of palpitations, reduced exercise capacity, chest pain, or dyspnea. The risk of thromboembolism is probably similar to atrial fibrillation; therefore, the same antithrombotic prophylaxis is required in atrial flutter patients. Acutely symptomatic cases may be subjected to cardioversion or pharmacologic rate control to relieve symptoms. Catheter ablation of the cavotricuspid isthmus represents the primary choice in long-term therapy, associated with high procedural success (> 97 %) and low complication rates (0.5 %). This article represents the third part of a manuscript series designed to improve professional education in the field of cardiac electrophysiology. Mechanistic and clinical characteristics as well as management of isthmus-dependent atrial flutter are described in detail. Electrophysiological findings and catheter ablation of the arrhythmia are highlighted.

  8. A Survey of Various Object Oriented Requirement Engineering Methods

    OpenAIRE

    Anandi Mahajan; Dr. Anurag Dixit

    2013-01-01

In recent years many industries have been moving to the use of object-oriented methods for the development of large-scale information systems. The requirement for an object-oriented approach in the development of software systems is increasing day by day. This paper is a survey of various object-oriented requirement engineering methods. It contains a summary of the available object-oriented requirement engineering methods with their relative advantages and disadvantages...

  9. Basic requirements to the methods of personnel monitoring

    International Nuclear Information System (INIS)

    Keirim-Markus, I.B.

    1981-01-01

Requirements for methods of personnel monitoring (PMM), depending on irradiation conditions, are given. The irradiation conditions determine the types of irradiation subject to monitoring, the measurement ranges, the periodicity of monitoring, the speed with which results must be obtained, and the required accuracy. PMM based on the photographic effect of ionizing radiation is the main method of mass monitoring [ru]

  10. Foods Inducing Typical Gastroesophageal Reflux Disease Symptoms in Korea

    OpenAIRE

    Choe, Jung Wan; Joo, Moon Kyung; Kim, Hyo Jung; Lee, Beom Jae; Kim, Ji Hoon; Yeon, Jong Eun; Park, Jong-Jae; Kim, Jae Seon; Byun, Kwan Soo; Bak, Young-Tae

    2017-01-01

    Background/Aims Several specific foods are known to precipitate gastroesophageal reflux disease (GERD) symptoms and GERD patients are usually advised to avoid such foods. However, foods consumed daily are quite variable according to regions, cultures, etc. This study was done to elucidate the food items which induce typical GERD symptoms in Korean patients. Methods One hundred and twenty-six Korean patients with weekly typical GERD symptoms were asked to mark all food items that induced typic...

  11. A Method for Software Requirement Volatility Analysis Using QFD

    Directory of Open Access Journals (Sweden)

    Yunarso Anang

    2016-10-01

    Full Text Available Changes of software requirements are inevitable during the development life cycle. Rather than avoiding the circumstance, it is easier to just accept it and find a way to anticipate those changes. This paper proposes a method to analyze the volatility of requirement by using the Quality Function Deployment (QFD method and the introduced degree of volatility. Customer requirements are deployed to software functions and subsequently to architectural design elements. And then, after determining the potential for changes of the design elements, the degree of volatility of the software requirements is calculated. In this paper the method is described using a flow diagram and illustrated using a simple example, and is evaluated using a case study.
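The abstract above does not reproduce the paper's exact formula for the degree of volatility, so the following is only a plausible sketch of the idea: customer requirements are linked to software functions, and functions to design elements, through QFD relationship matrices (conventional 0/1/3/9 strengths), and each design element's change likelihood is propagated back to the requirements as a weighted average.

```python
# Minimal sketch (not the paper's exact formulation) of propagating design-
# element change likelihoods back to requirements through QFD matrices.

def degree_of_volatility(req_to_func, func_to_design, design_volatility):
    """req_to_func: R x F relationship matrix (0/1/3/9 strengths)
    func_to_design: F x D relationship matrix (0/1/3/9 strengths)
    design_volatility: D change likelihoods in [0, 1]
    Returns one normalized volatility score per requirement."""
    scores = []
    for row in req_to_func:
        raw = 0.0
        weight = 0.0
        for f, strength_rf in enumerate(row):
            for d, strength_fd in enumerate(func_to_design[f]):
                raw += strength_rf * strength_fd * design_volatility[d]
                weight += strength_rf * strength_fd
        scores.append(raw / weight if weight else 0.0)
    return scores

# Two requirements, two functions, three design elements (illustrative values).
R2F = [[9, 3],
       [1, 9]]
F2D = [[9, 1, 0],
       [0, 3, 9]]
vol = [0.2, 0.5, 0.9]
print([round(s, 2) for s in degree_of_volatility(R2F, F2D, vol)])  # [0.39, 0.75]
```

The second requirement scores higher because it depends mostly on the most change-prone design element, which is the kind of insight the proposed method is meant to surface.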

  12. Herpes zoster - typical and atypical presentations.

    Science.gov (United States)

    Dayan, Roy Rafael; Peleg, Roni

    2017-08-01

Varicella-zoster virus infection is an intriguing medical entity that involves many medical specialties including infectious diseases, immunology, dermatology, and neurology. It can affect patients from early childhood to old age. Its treatment requires expertise in pain management and psychological support. While varicella is caused by acute viremia, herpes zoster occurs after the dormant viral infection, involving the cranial nerve or sensory root ganglia, is re-activated and spreads orthodromically from the ganglion, via the sensory nerve root, to the innervated target tissue (skin, cornea, auditory canal, etc.). Typically, a single dermatome is involved, although two or three adjacent dermatomes may be affected. The lesions usually do not cross the midline. Herpes zoster can also present with unique or atypical clinical manifestations, such as glioma, zoster sine herpete and bilateral herpes zoster, which can be a challenging diagnosis even for experienced physicians. We discuss the epidemiology, pathophysiology, diagnosis and management of herpes zoster, typical and atypical presentations.

  13. How to Compare the Security Quality Requirements Engineering (SQUARE) Method with Other Methods

    National Research Council Canada - National Science Library

    Mead, Nancy R

    2007-01-01

    The Security Quality Requirements Engineering (SQUARE) method, developed at the Carnegie Mellon Software Engineering Institute, provides a systematic way to identify security requirements in a software development project...

  14. Method for developing cost estimates for generic regulatory requirements

    International Nuclear Information System (INIS)

    1985-01-01

The NRC has established a practice of performing regulatory analyses, reflecting costs as well as benefits, of proposed new or revised generic requirements. A method has been developed to assist the NRC in preparing the types of cost estimates required for this purpose and for assigning priorities in the resolution of generic safety issues. The cost of a generic requirement is defined as the net present value of the total lifetime cost incurred by the public, industry, and government in implementing the requirement for all affected plants. The method described here is for commercial light-water-reactor power plants. Estimating the cost for a generic requirement involves several steps: (1) identifying the activities that must be carried out to fully implement the requirement, (2) defining the work packages associated with the major activities, (3) identifying the individual elements of cost for each work package, (4) estimating the magnitude of each cost element, (5) aggregating individual plant costs over the plant lifetime, and (6) aggregating all plant costs and generic costs to produce a total, national, present value of lifetime cost for the requirement. The method developed addresses all six steps. In this paper, we discuss the first three.

  15. Nuclear data for fusion: Validation of typical pre-processing methods for radiation transport calculations

    International Nuclear Information System (INIS)

    Hutton, T.; Sublet, J.C.; Morgan, L.; Leadbeater, T.W.

    2015-01-01

Highlights: • We quantify the effect of processing nuclear data from ENDF to ACE format. • We consider the differences between fission and fusion angular distributions. • C-nat(n,el) at 2.0 MeV has a 0.6% deviation between original and processed data. • Fe-56(n,el) at 14.1 MeV has an 11.0% deviation between original and processed data. • Processed data do not accurately depict ENDF distributions for fusion energies. - Abstract: Nuclear data form the basis of the radiation transport codes used to design and simulate the behaviour of nuclear facilities, such as the ITER and DEMO fusion reactors. Typically these data and codes are biased towards fission and high-energy physics applications yet are still applied to fusion problems. With increasing interest in fusion applications, the lack of fusion-specific codes and relevant data libraries is becoming increasingly apparent. Industry-standard radiation transport codes require pre-processing of the evaluated data libraries prior to use in simulation. Historically these methods focus on speed of simulation at the cost of accurate data representation. For legacy applications this has not been a major concern, but current fusion needs differ significantly. Pre-processing reconstructs the differential and double differential interaction cross sections with a coarse binned structure, or more recently as a tabulated cumulative distribution function. This work looks at the validity of applying these processing methods to data used in fusion-specific calculations in comparison to fission. The relative effects of applying this pre-processing mechanism to both fission- and fusion-relevant reaction channels are demonstrated, showing the poor representation of these distributions in the fusion energy regime. For the natC(n,el) reaction at 2.0 MeV, the binned differential cross section deviates from the original data by 0.6% on average. For the 56Fe(n,el) reaction at 14.1 MeV, the deviation increases to 11.0%.

  16. Classical Methods and Calculation Algorithms for Determining Lime Requirements

    Directory of Open Access Journals (Sweden)

    André Guarçoni

Full Text Available ABSTRACT The methods developed for determination of lime requirements (LR) are based on widely accepted principles. However, the formulas used for calculation have evolved little over recent decades, and in some cases there are indications of their inadequacy. The aim of this study was to compare the lime requirements calculated by three classic formulas and three algorithms, defining those most appropriate for supplying Ca and Mg to coffee plants with the smallest possibility of causing overliming. The database used contained 600 soil samples, which were collected in coffee plantings. The LR was estimated by the methods of base saturation, neutralization of Al3+, and elevation of Ca2+ and Mg2+ contents (two formulas), and by the three calculation algorithms. Averages of the lime requirements were compared, determining the frequency distribution of the 600 lime requirements (LR) estimated through each calculation method. In soils with low cation exchange capacity at pH 7, the base saturation method may fail to adequately supply the plants with Ca and Mg in many situations, while the method of Al3+ neutralization and elevation of Ca2+ and Mg2+ contents can result in the calculation of application rates that will increase the pH above the suitable range. Among the methods studied for calculating lime requirements, the algorithm that predicts reaching a defined base saturation, with adequate Ca and Mg supply and the maximum application rate limited to the H+Al value, proved to be the most efficient calculation method, and it can be recommended for use in numerous crop conditions.
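As one concrete example of the classical formulas compared, the widely used base-saturation calculation estimates lime requirement from the gap between current and target base saturation. The function below is a sketch of that textbook formula, not necessarily the exact variant evaluated in the study.

```python
def lime_requirement_base_saturation(cec, v_current, v_target, prnt=100.0):
    """Textbook base-saturation formula (t/ha, assuming a 0-20 cm
    incorporation layer): LR = CEC(pH 7) * (V_target - V_current) / 100,
    corrected for the limestone's total relative neutralizing power (PRNT, %).
    cec is in cmolc/dm3; the saturations are percentages."""
    lr = cec * (v_target - v_current) / 100.0
    lr *= 100.0 / prnt
    return max(lr, 0.0)  # a soil already above target saturation needs no lime

# Illustrative values: CEC = 8 cmolc/dm3, raising base saturation from 35% to 60%
print(lime_requirement_base_saturation(8.0, 35.0, 60.0))        # 2.0 t/ha
print(lime_requirement_base_saturation(8.0, 35.0, 60.0, 80.0))  # 2.5 t/ha
```

The PRNT correction shows why lower-quality limestone raises the calculated rate, one of the practical factors behind the overliming risk the study discusses.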

  17. Handbook of methods for risk-based analysis of technical specification requirements

    International Nuclear Information System (INIS)

    Samanta, P.K.; Vesely, W.E.

    1994-01-01

Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements were based on deterministic analysis and engineering judgments. Experience with plant operation indicates that some elements of the requirements are unnecessarily restrictive, while others may not be conducive to safety. Improvements in these requirements are facilitated by the availability of plant-specific Probabilistic Safety Assessments (PSAs). The use of risk- and reliability-based methods to improve TS requirements has gained wide interest because these methods can: quantitatively evaluate the risk and justify changes based on objective risk arguments; and provide a defensible basis for these requirements in regulatory applications. The US NRC Office of Research is sponsoring research to develop systematic risk-based methods to improve various aspects of TS requirements. The handbook of methods, which is being prepared, summarizes such risk-based methods. The scope of the handbook includes reliability- and risk-based methods for evaluating allowed outage times (AOTs), action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), defenses against common-cause failures, managing plant configurations, and scheduling maintenance. For each topic, the handbook summarizes methods of analysis and data needs, outlines the insights to be gained, lists additional references, and presents examples of evaluations.

  18. Handbook of methods for risk-based analysis of Technical Specification requirements

    International Nuclear Information System (INIS)

    Samanta, P.K.; Vesely, W.E.

    1993-01-01

Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements were based on deterministic analysis and engineering judgments. Experience with plant operation indicates that some elements of the requirements are unnecessarily restrictive, while others may not be conducive to safety. Improvements in these requirements are facilitated by the availability of plant-specific Probabilistic Safety Assessments (PSAs). The use of risk- and reliability-based methods to improve TS requirements has gained wide interest because these methods can: quantitatively evaluate the risk impact and justify changes based on objective risk arguments; and provide a defensible basis for these requirements in regulatory applications. The United States Nuclear Regulatory Commission (USNRC) Office of Research is sponsoring research to develop systematic risk-based methods to improve various aspects of TS requirements. The handbook of methods, which is being prepared, summarizes such risk-based methods. The scope of the handbook includes reliability- and risk-based methods for evaluating allowed outage times (AOTs), action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), defenses against common-cause failures, managing plant configurations, and scheduling maintenance. For each topic, the handbook summarizes methods of analysis and data needs, outlines the insights to be gained, lists additional references, and presents examples of evaluations.

  19. What is typical is good: The influence of face typicality on perceived trustworthiness

    NARCIS (Netherlands)

    Sofer, C.; Dotsch, R.; Wigboldus, D.H.J.; Todorov, A.T.

    2015-01-01

    The role of face typicality in face recognition is well established, but it is unclear whether face typicality is important for face evaluation. Prior studies have focused mainly on typicality's influence on attractiveness, although recent studies have cast doubt on its importance for attractiveness

  20. The Videographic Requirements Gathering Method for Adolescent-Focused Interaction Design

    Directory of Open Access Journals (Sweden)

    Tamara Peyton

    2014-08-01

    Full Text Available We present a novel method for conducting requirements gathering with adolescent populations. Called videographic requirements gathering, this technique makes use of mobile phone data capture and participant creation of media images. The videographic requirements gathering method can help researchers and designers gain intimate insight into adolescent lives while simultaneously reducing power imbalances. We provide rationale for this approach, pragmatics of using the method, and advice on overcoming common challenges facing researchers and designers relying on this technique.

  1. Runge-Kutta methods with minimum storage implementations

    KAUST Repository

    Ketcheson, David I.

    2010-01-01

    Solution of partial differential equations by the method of lines requires the integration of large numbers of ordinary differential equations (ODEs). In such computations, storage requirements are typically one of the main considerations

  2. Exploration of Rice Husk Compost as an Alternate Organic Manure to Enhance the Productivity of Blackgram in Typic Haplustalf and Typic Rhodustalf

    Directory of Open Access Journals (Sweden)

    Subramanium Thiyageshwari

    2018-02-01

Full Text Available The present study aimed to use the cellulolytic bacterium Enhydrobacter and the fungus Aspergillus sp. for preparing compost from rice husk (RH). The prepared compost was then tested for its effect on blackgram growth promotion along with different levels of the recommended dose of fertilizer (RDF) in black soil (typic Haplustalf) and red soil (typic Rhodustalf). The results revealed that inoculation with the lignocellulolytic fungus (LCF) Aspergillus sp. @ 2% was the most efficient method of composting within a short period. The composted rice husk (CRH) was characterized by scanning electron microscopy (SEM) to identify significant structural changes. At the end of composting, N, P and K content increased with decreases in CO2 evolution and the C:N and C:P ratios. In comparison to inorganic fertilization, the integrated application of CRH @ 5 t ha−1 with 50% RDF and biofertilizers increased grain yield by 16% in typic Haplustalf and 17% in typic Rhodustalf soil over 100% RDF. The crude protein content was highest with the combined application of CRH, 50% RDF and biofertilizers: 20% and 21% in typic Haplustalf and typic Rhodustalf soils, respectively. Nutrient-rich CRH has proved its efficiency for crop growth and soil fertility.

  3. Optimal probabilistic energy management in a typical micro-grid based-on robust optimization and point estimate method

    International Nuclear Information System (INIS)

    Alavi, Seyed Arash; Ahmadian, Ali; Aliakbar-Golkar, Masoud

    2015-01-01

Highlights: • Energy management is necessary in the active distribution network to reduce operation costs. • Uncertainty modeling is essential in energy management studies in active distribution networks. • The point estimate method is suitable for uncertainty modeling due to its lower computation time and acceptable accuracy. • In the absence of a Probability Distribution Function (PDF), robust optimization has a good ability for uncertainty modeling. - Abstract: Uncertainty can be defined as the probability of a difference between the forecasted value and the real value. The smaller this probability, the lower the operation cost of the power system. This necessitates modeling the system's random variables (such as the output power of renewable resources and the load demand) with appropriate and practicable methods. In this paper, a procedure is proposed for optimal energy management of a typical micro-grid with regard to the relevant uncertainties. The point estimate method is applied to model the wind power and solar power uncertainties, and a robust optimization technique is utilized to model load demand uncertainty. Finally, a comparison is made between deterministic and probabilistic management in different scenarios, and their results are analyzed and evaluated.
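The point estimate method approximates the output moments of a function of random inputs from a few deterministic evaluations. The sketch below implements Hong's 2m scheme specialized to zero-skewness inputs (each variable evaluated at mu ± sqrt(m)·sigma with weight 1/(2m)); the cost function and input statistics are invented for illustration and are not the paper's micro-grid model.

```python
import math

def two_point_estimate(f, means, stds):
    """Hong's 2m point estimate method, specialized to inputs with zero
    skewness: each uncertain variable k is evaluated at mu_k +/- sqrt(m)*sigma_k
    (weight 1/(2m)) with all other variables held at their means. Returns the
    approximate mean and standard deviation of f(x)."""
    m = len(means)
    e1 = e2 = 0.0  # first and second raw moments of the output
    for k in range(m):
        for sign in (1.0, -1.0):
            x = list(means)
            x[k] = means[k] + sign * math.sqrt(m) * stds[k]
            y = f(x)
            w = 1.0 / (2 * m)
            e1 += w * y
            e2 += w * y * y
    return e1, math.sqrt(max(e2 - e1 * e1, 0.0))

# Invented operating-cost model: x = [wind power, solar power, load demand].
cost = lambda x: 10.0 + 2.0 * x[2] - 0.5 * x[0] - 0.3 * x[1]
mean, std = two_point_estimate(cost, [50.0, 30.0, 100.0], [10.0, 8.0, 15.0])
print(round(mean, 1), round(std, 1))  # for a linear model the moments are exact
```

With 2m = 6 evaluations instead of thousands of Monte Carlo samples, this is the computation-time advantage the highlights refer to.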

  4. Analogical Reasoning Ability in Autistic and Typically Developing Children

    Science.gov (United States)

    Morsanyi, Kinga; Holyoak, Keith J.

    2010-01-01

    Recent studies (e.g. Dawson et al., 2007) have reported that autistic people perform in the normal range on the Raven Progressive Matrices test, a formal reasoning test that requires integration of relations as well as the ability to infer rules and form high-level abstractions. Here we compared autistic and typically developing children, matched…

  5. Lack of Effect of Typical Rapid-Weight-Loss Practices on Balance and Anaerobic Performance in Apprentice Jockeys.

    Science.gov (United States)

    Cullen, SarahJane; Dolan, Eimear; O Brien, Kate; McGoldrick, Adrian; Warrington, Giles

    2015-11-01

Balance and anaerobic performance are key attributes related to horse-racing performance, but the impact of making weight for racing on these parameters remains unknown. The purpose of this study was to investigate the effects of rapid weight loss in preparation for racing on balance and anaerobic performance in a group of jockeys. Twelve apprentice male jockeys and 12 age- and gender-matched controls completed 2 trials separated by 48 h. In both trials, body mass, hydration status, balance, and anaerobic performance were assessed. Between the trials, the jockeys reduced body mass by 4% using weight-loss methods typically adopted in preparation for racing, while controls maintained body mass through typical daily dietary and physical activity habits. Apprentice jockeys decreased mean body mass by 4.2% ± 0.3%. No differences in balance, on the left or right side, or in peak power, mean power, or fatigue index were reported between the trials in either group. Results from this study indicate that a 4% reduction in body mass in 48 h through the typical methods employed for racing, in association with an increase in dehydration, resulted in no impairments in balance or anaerobic performance. Further research is required to evaluate performance in a sport-specific setting and to investigate the specific physiological mechanisms involved.

  6. Advanced software tool for the creation of a typical meteorological year

    International Nuclear Information System (INIS)

    Skeiker, Kamal; Ghani, Bashar Abdul

    2008-01-01

The generation of a typical meteorological year is of great importance for calculations concerning many applications in the field of thermal engineering. In this context, the method proposed by Hall et al. was selected for generating typical data, and an improved criterion for the final selection of the typical meteorological month (TMM) was demonstrated. The final selection of the most representative year was done by examining a composite score S, calculated as the weighted sum of the scores of the four meteorological parameters used: air dry bulb temperature, relative humidity, wind velocity and global solar radiation intensity. Moreover, a new software tool developed in Delphi 6.0 utilizes the Finkelstein-Schafer statistical method for the creation of a typical meteorological year for any site of concern, employing the improved criterion for final selection of the typical meteorological month. Such a tool allows the user to perform this task without an intimate knowledge of all of the computational details. The final alphanumeric and graphical results are presented on screen, and can be saved to a file or printed as a hard copy. Using this software tool, a typical meteorological year was generated for Damascus, the capital of Syria, as a test-run example. The data used were obtained from the Department of Meteorology and cover a period of 10 years (1991-2000).
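The Finkelstein-Schafer selection described above can be sketched as follows: for each candidate month, compute the mean absolute difference between its empirical CDF and the long-term CDF of the same calendar month, then combine the four per-parameter statistics into the composite score S. The equal weights below are illustrative, not the ones used for Damascus in the paper.

```python
# Empirical CDF of a sample, evaluated at the given points.
def empirical_cdf(sample, points):
    n = len(sample)
    return [sum(1 for v in sample if v <= p) / n for p in points]

def fs_statistic(candidate_month, long_term):
    """Finkelstein-Schafer statistic: mean absolute difference between the
    candidate month's CDF and the long-term CDF, evaluated at the long-term
    observations."""
    pts = sorted(long_term)
    c = empirical_cdf(candidate_month, pts)
    l = empirical_cdf(long_term, pts)
    return sum(abs(a - b) for a, b in zip(c, l)) / len(pts)

def composite_score(fs_values, weights=(0.25, 0.25, 0.25, 0.25)):
    """Weighted sum S over the four indices (dry bulb temperature, relative
    humidity, wind velocity, global solar radiation); weights illustrative."""
    return sum(w * fs for w, fs in zip(weights, fs_values))

# A month identical to the long-term record scores 0 (perfectly typical);
# the candidate month with the smallest composite S becomes the TMM.
print(fs_statistic([14.0, 15.5, 16.2], [14.0, 15.5, 16.2]))  # 0.0
```

The 12 TMMs chosen this way, one per calendar month, are then concatenated into the typical meteorological year.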

  7. Study on the knowledge base system for the identification of typical target

    International Nuclear Information System (INIS)

    Qin Kai; Zhao Yingjun

    2008-01-01

Based on research on the target knowledge base, target database, texture analysis and shape analysis, this paper proposes a new knowledge-based method for typical target identification from remote sensing images. By extracting texture and shape characters, combining them with spatial analysis in GIS, and reasoning according to the prior knowledge in the knowledge base, this method can identify and extract typical targets from remote sensing images. (authors)

  8. Comparison of mammographic and sonographic findings in typical and atypical medullary carcinomas of the breast

    International Nuclear Information System (INIS)

    Yilmaz, E.; Lebe, B.; Balci, P.; Sal, S.; Canda, T.

    2002-01-01

    AIM: The aim of this study was to describe the contribution of mammographic and sonographic findings to the discrimination of typical and atypical histopathologic groups of medullary carcinomas of the breast. MATERIALS AND METHODS: Imaging findings were retrospectively assessed in 33 women with medullary carcinomas (15 typical medullary carcinomas and 18 atypical medullary carcinomas) identified during pre-operative mammography. Twenty-nine of these women also had ultrasound and these findings were reviewed. RESULTS: Mammography showed a well circumscribed mass in 10 of the 15 (67%) typical medullary carcinomas and in four of the 17 (24%) atypical medullary carcinomas (P < 0.02). One small tumour in a woman with atypical medullary carcinoma was missed on mammography and was shown only on sonography. Sonographically, an irregular margin surrounding the whole mass or part of it was seen in three out of 14 (21%) patients with typical medullary carcinoma and in nine out of 15 (60%) patients with atypical medullary carcinomas (P < 0.05). Posterior acoustic shadowing was more often observed in the typical medullary carcinoma group than in atypical medullary carcinoma and the difference was found to be statistically significant (P < 0.05). None of the other mammographic and sonographic findings were sufficiently characteristic to allow for a differentiation between two groups. CONCLUSION: When typical medullary carcinomas were compared with atypical medullary carcinomas according to imaging features, they tended to be well circumscribed masses on both mammography and sonography, and a posterior acoustic shadow was not found on sonography. However, the imaging findings in these two subgroups often resembled each other and histopathology will always be required to confirm the diagnosis. Yilmaz, E. et al. (2002)

  9. Comparison of mammographic and sonographic findings in typical and atypical medullary carcinomas of the breast

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, E.; Lebe, B.; Balci, P.; Sal, S.; Canda, T

    2002-07-01

    AIM: The aim of this study was to describe the contribution of mammographic and sonographic findings to the discrimination of typical and atypical histopathologic groups of medullary carcinomas of the breast. MATERIALS AND METHODS: Imaging findings were retrospectively assessed in 33 women with medullary carcinomas (15 typical medullary carcinomas and 18 atypical medullary carcinomas) identified during pre-operative mammography. Twenty-nine of these women also had ultrasound and these findings were reviewed. RESULTS: Mammography showed a well circumscribed mass in 10 of the 15 (67%) typical medullary carcinomas and in four of the 17 (24%) atypical medullary carcinomas (P < 0.02). One small tumour in a woman with atypical medullary carcinoma was missed on mammography and was shown only on sonography. Sonographically, an irregular margin surrounding the whole mass or part of it was seen in three out of 14 (21%) patients with typical medullary carcinoma and in nine out of 15 (60%) patients with atypical medullary carcinomas (P < 0.05). Posterior acoustic shadowing was more often observed in the typical medullary carcinoma group than in atypical medullary carcinoma and the difference was found to be statistically significant (P < 0.05). None of the other mammographic and sonographic findings were sufficiently characteristic to allow for a differentiation between two groups. CONCLUSION: When typical medullary carcinomas were compared with atypical medullary carcinomas according to imaging features, they tended to be well circumscribed masses on both mammography and sonography, and a posterior acoustic shadow was not found on sonography. However, the imaging findings in these two subgroups often resembled each other and histopathology will always be required to confirm the diagnosis. Yilmaz, E. et al. (2002)

  10. Quantitative methods for developing C2 system requirement

    Energy Technology Data Exchange (ETDEWEB)

    Tyler, K.K.

    1992-06-01

The US Army established the Army Tactical Command and Control System (ATCCS) Experimentation Site (AES) to provide a place where materiel and combat developers could experiment with command and control systems. The AES conducts fundamental and applied research involving command and control issues using a number of research methods, ranging from large force-level experiments, to controlled laboratory experiments, to studies and analyses. The work summarized in this paper was done by Pacific Northwest Laboratory under task order from the Army Tactical Command and Control System Experimentation Site. The purpose of the task was to develop the functional requirements for army engineer automation and support software, including MCS-ENG. A client, such as an army engineer, has certain needs and requirements of his or her software; these needs must be presented in ways that are readily understandable to the software developer. A requirements analysis, such as the one described in this paper, is simply the means of communication between those who would use a piece of software and those who would develop it. The analysis from which this paper was derived attempted to bridge the "communications gap" between army combat engineers and software engineers. It sought to derive and state the software needs of army engineers in ways that are meaningful to software engineers. In doing this, it followed a natural sequence of investigation: (1) what does an army engineer do, (2) with which tasks can software help, (3) how much will it cost, and (4) where is the highest payoff? This paper demonstrates how each of these questions was addressed during an analysis of the functional requirements of engineer support software. Systems engineering methods were used in the task analysis, and a quantitative scoring method was developed to score responses regarding the feasibility of task automation. The paper discusses the methods used to perform utility and cost-benefit estimates.

  11. Generation of a typical meteorological year for Hong Kong

    International Nuclear Information System (INIS)

    Chan, Apple L.S.; Chow, T.T.; Fong, Square K.F.; Lin, John Z.

    2006-01-01

    Weather data can vary significantly from year to year. There is a need to derive typical meteorological year (TMY) data to represent the long-term typical weather condition over a year, which is one of the crucial factors for successful building energy simulation. In this paper, various types of typical weather data sets including the TMY, TMY2, WYEC, WYEC2, WYEC2W, WYEC2T and IWEC were reviewed. The Finkelstein-Schafer statistical method was applied to analyze the hourly measured weather data of a 25-year period (1979-2003) in Hong Kong and select representative typical meteorological months (TMMs). The cumulative distribution function (CDF) for each year was compared with the CDF for the long-term composite of all the years in the period for four major weather indices including dry bulb temperature, dew point temperature, wind speed and solar radiation. Typical months for each of the 12 calendar months from the period of years were selected by choosing the one with the smallest deviation from the long-term CDF. The 12 TMMs selected from the different years were used for formulation of a TMY for Hong Kong

  12. Quality functions for requirements engineering in system development methods.

    Science.gov (United States)

    Johansson, M; Timpka, T

    1996-01-01

    Based on a grounded theory framework, this paper analyses the quality characteristics of methods to be used for requirements engineering in the development of medical decision support systems (MDSS). The results from a Quality Function Deployment (QFD) used to rank functions connected to user value and a focus group study were presented to a validation focus group. The focus group studies take advantage of a group process to collect data for further analyses. The results describe factors considered by the participants as important in the development of methods for requirements engineering in health care. Based on the findings, the content that, according to the users, a method for MDSS requirements engineering should support is established.

  13. What Is Typical Is Good: The Influence of Face Typicality on Perceived Trustworthiness

    NARCIS (Netherlands)

    Sofer, Carmel; Dotsch, Ron; Wigboldus, Daniel H J; Todorov, Alexander

    2015-01-01

    The role of face typicality in face recognition is well established, but it is unclear whether face typicality is important for face evaluation. Prior studies have focused mainly on typicality’s influence on attractiveness, although recent studies have cast doubt on its importance for attractiveness

  14. Runge-Kutta methods with minimum storage implementations

    KAUST Repository

    Ketcheson, David I.

    2010-03-01

    Solution of partial differential equations by the method of lines requires the integration of large numbers of ordinary differential equations (ODEs). In such computations, storage requirements are typically one of the main considerations, especially if a high order ODE solver is required. We investigate Runge-Kutta methods that require only two storage locations per ODE. Existing methods of this type require additional memory if an error estimate or the ability to restart a step is required. We present a new, more general class of methods that provide error estimates and/or the ability to restart a step while still employing the minimum possible number of memory registers. Examples of such methods are found to have good properties. © 2009 Elsevier Inc. All rights reserved.
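
    The storage constraint can be illustrated with Williamson's classic 2N-storage scheme, a three-stage, third-order method that keeps only two registers per ODE: the solution and one running stage register. This is an older method of the kind the paper generalizes, not one of its new schemes:

```python
import numpy as np

# Williamson's 2N-storage RK3 coefficients (autonomous form).
A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]

def rk3_2n_step(f, u, dt):
    """Advance u by one step of an autonomous ODE u' = f(u) using
    only two storage registers per ODE: the solution u and a stage
    register s that is overwritten at every stage."""
    s = np.zeros_like(u)
    for a, b in zip(A, B):
        s = a * s + dt * f(u)   # overwrite the stage register
        u = u + b * s           # overwrite the solution register
    return u
```

    Expanding the stages for a linear test problem u' = zu reproduces the third-order stability polynomial 1 + z + z²/2 + z³/6, confirming the order while never holding more than two arrays.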

  15. Energy-Performance-Based Design-Build Process: Strategies for Procuring High-Performance Buildings on Typical Construction Budgets: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Scheib, J.; Pless, S.; Torcellini, P.

    2014-08-01

    NREL experienced a significant increase in employees and facilities on our 327-acre main campus in Golden, Colorado over the past five years. To support this growth, researchers developed and demonstrated a new building acquisition method that successfully integrates energy efficiency requirements into the design-build requests for proposals and contracts. We piloted this energy performance based design-build process with our first new construction project in 2008. We have since replicated and evolved the process for large office buildings, a smart grid research laboratory, a supercomputer, a parking structure, and a cafeteria. Each project incorporated aggressive efficiency strategies using contractual energy use requirements in the design-build contracts, all on typical construction budgets. We have found that when energy efficiency is a core project requirement as defined at the beginning of a project, innovative design-build teams can integrate the most cost effective and high performance efficiency strategies on typical construction budgets. When the design-build contract includes measurable energy requirements and is set up to incentivize design-build teams to focus on achieving high performance in actual operations, owners can now expect their facilities to perform. As NREL completed the new construction in 2013, we have documented our best practices in training materials and a how-to guide so that other owners and owner's representatives can replicate our successes and learn from our experiences in attaining market viable, world-class energy performance in the built environment.

  16. Theory of Mind experience sampling in typical adults.

    Science.gov (United States)

    Bryant, Lauren; Coffey, Anna; Povinelli, Daniel J; Pruett, John R

    2013-09-01

    We explored the frequency with which typical adults make Theory of Mind (ToM) attributions, and under what circumstances these attributions occur. We used an experience sampling method to query 30 typical adults about their everyday thoughts. Participants carried a Personal Data Assistant (PDA) that prompted them to categorize their thoughts as Action, Mental State, or Miscellaneous at approximately 30 pseudo-random times during a continuous 10-h period. Additionally, participants noted the direction of their thought (self versus other) and degree of socializing (with people versus alone) at the time of inquiry. We were interested in the relative frequency of ToM (mental state attributions) and how prominent they were in immediate social exchanges. Analyses of multiple choice answers suggest that typical adults: (1) spend more time thinking about actions than mental states and miscellaneous things, (2) exhibit a higher degree of own- versus other-directed thought when alone, and (3) make mental state attributions more frequently when not interacting (offline) than while interacting with others (online). A significant 3-way interaction between thought type, direction of thought, and socializing emerged because action but not mental state thoughts about others occurred more frequently when participants were interacting with people versus when alone; whereas there was an increase in the frequency of both action and mental state attributions about the self when participants were alone as opposed to socializing. A secondary analysis of coded free text responses supports findings 1-3. The results of this study help to create a more naturalistic picture of ToM use in everyday life and the method shows promise for future study of typical and atypical thought processes. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. Quantitative methods for developing C2 system requirement

    Energy Technology Data Exchange (ETDEWEB)

    Tyler, K.K.

    1992-06-01

    The US Army established the Army Tactical Command and Control System (ATCCS) Experimentation Site (AES) to provide a place where material and combat developers could experiment with command and control systems. The AES conducts fundamental and applied research involving command and control issues using a number of research methods, ranging from large force-level experiments, to controlled laboratory experiments, to studies and analyses. The work summarized in this paper was done by Pacific Northwest Laboratory under task order from the Army Tactical Command and Control System Experimentation Site. The purpose of the task was to develop the functional requirements for army engineer automation and support software, including MCS-ENG. A client, such as an army engineer, has certain needs and requirements of his or her software; these needs must be presented in ways that are readily understandable to the software developer. A requirements analysis then, such as the one described in this paper, is simply the means of communication between those who would use a piece of software and those who would develop it. The analysis from which this paper was derived attempted to bridge the "communications gap" between army combat engineers and software engineers. It sought to derive and state the software needs of army engineers in ways that are meaningful to software engineers. In doing this, it followed a natural sequence of investigation: (1) what does an army engineer do, (2) with which tasks can software help, (3) how much will it cost, and (4) where is the highest payoff? This paper demonstrates how each of these questions was addressed during an analysis of the functional requirements of engineer support software. Systems engineering methods were used in the task analysis, and a quantitative scoring method was developed to score responses regarding the feasibility of task automation. The paper discusses the methods used to perform utility and cost-benefits estimates.

  18. Effects of Biofeedback on Control and Generalization of Nasalization in Typical Speakers

    Science.gov (United States)

    Murray, Elizabeth S. Heller; Mendoza, Joseph O.; Gill, Simone V.; Perkell, Joseph S.; Stepp, Cara E.

    2016-01-01

    Purpose: The purpose of this study was to determine the effects of biofeedback on control of nasalization in individuals with typical speech. Method: Forty-eight individuals with typical speech attempted to increase and decrease vowel nasalization. During training, stimuli consisted of consonant-vowel-consonant (CVC) tokens with the center vowels…

  19. Towards the accurate electronic structure descriptions of typical high-constant dielectrics

    Science.gov (United States)

    Jiang, Ting-Ting; Sun, Qing-Qing; Li, Ye; Guo, Jiao-Jiao; Zhou, Peng; Ding, Shi-Jin; Zhang, David Wei

    2011-05-01

    High-constant dielectrics have gained considerable attention due to their wide applications in advanced devices, such as gate oxides in metal-oxide-semiconductor devices and insulators in high-density metal-insulator-metal capacitors. However, theoretical investigations of these materials have not kept pace with the requirements of experimental development, especially the requirement for an accurate description of band structures. We performed first-principles calculations based on hybrid density functional theory to investigate several typical high-k dielectrics: Al2O3, HfO2, ZrSiO4, HfSiO4, La2O3 and ZrO2. The band structures of these materials are well described within the framework of hybrid density functional theory. The band gaps of Al2O3, HfO2, ZrSiO4, HfSiO4, La2O3 and ZrO2 are calculated to be 8.0 eV, 5.6 eV, 6.2 eV, 7.1 eV, 5.3 eV and 5.0 eV, respectively, which are very close to the experimental values and far more accurate than those obtained by the traditional generalized gradient approximation method.

  20. Is our Universe typical?

    International Nuclear Information System (INIS)

    Gurzadyan, V.G.

    1988-01-01

    The problem of the typicalness of the Universe, regarded as a dynamical system possessing both regular and chaotic regions of positive measure in phase space, is raised and discussed. Two dynamical systems are considered: (1) the observed Universe as a hierarchy of systems of N gravitating bodies; (2) a (3+1)-manifold with matter evolving according to the Wheeler-DeWitt equation in superspace with the Hawking boundary condition of compact metrics. It is shown that the observed Universe is typical. There is no unambiguous answer for the second system yet. If it is typical too, then the same present state of the Universe could have originated from an infinite number of different initial conditions, the restoration of which is practically impossible at present. 35 refs.

  1. Probabilistic Harmonic Analysis on Distributed Photovoltaic Integration Considering Typical Weather Scenarios

    Science.gov (United States)

    Bin, Che; Ruoying, Yu; Dongsheng, Dang; Xiangyan, Wang

    2017-05-01

    Distributed Generation (DG) integrated into the network causes harmonic pollution, which can damage electrical devices and affect the normal operation of the power system. On the other hand, due to the randomness of wind and solar irradiation, the output of DG is also random, which makes the harmonics generated by the DG uncertain. Thus, probabilistic methods are needed to analyse the impacts of DG integration. In this work we studied the probabilistic distribution of harmonic voltage and the harmonic distortion in the distribution network after integration of a distributed photovoltaic (DPV) system under different weather conditions, mainly sunny, cloudy, rainy and snowy days. The probabilistic distribution function of the DPV output power in the different typical weather conditions was acquired via the parameter identification method of maximum likelihood estimation. The Monte-Carlo simulation method was adopted to calculate the probabilistic distribution of harmonic voltage content at different frequency orders as well as the total harmonic distortion (THD) in the typical weather conditions. The case study was based on the IEEE 33-bus system, and the resulting probabilistic distributions of harmonic voltage content and THD in the typical weather conditions were compared.
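
    The structure of such a Monte-Carlo THD study can be sketched as follows. Everything numerical here is an assumption for illustration only: the Beta law for PV output, the inverter harmonic current spectrum, and the crude linear network impedance model do not come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-unit harmonic current spectrum of a PV inverter at
# rated output (orders 5, 7, 11, 13) and a crude impedance model
# Z_h ~ h * Z_1 -- illustrative values, not from the paper.
HARMONIC_ORDERS = np.array([5, 7, 11, 13])
I_H_RATED = np.array([0.03, 0.02, 0.01, 0.008])   # p.u. of rated current
Z_1 = 0.05                                         # p.u. fundamental impedance

def simulate_thd(alpha, beta, n=10_000):
    """Monte-Carlo estimate of the THD distribution when PV output
    follows a Beta(alpha, beta) law for a given weather type."""
    p = rng.beta(alpha, beta, size=n)              # sampled PV output, p.u.
    v_h = p[:, None] * I_H_RATED * HARMONIC_ORDERS * Z_1
    return np.sqrt((v_h ** 2).sum(axis=1))         # fundamental taken as 1 p.u.

# Sunny days (high, steady output) vs. cloudy days (low, variable output).
thd_sunny = simulate_thd(8.0, 2.0)
thd_cloudy = simulate_thd(2.0, 5.0)
```

    From the sampled `thd_*` arrays one can then form the empirical distributions and exceedance probabilities compared across weather types in the case study.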

  2. Typicals/Típicos

    Directory of Open Access Journals (Sweden)

    Silvia Vélez

    2004-01-01

    Full Text Available Typicals is a series of 12 colour photographs digitally created from photojournalistic images from Colombia combined with "typical" craft textiles and text from guest writers. Typicals was first exhibited as photographs 50cm x 75cm in size, each with their own magnifying glass, at the Contemporary Art Space at Gorman House in Canberra, Australia, in 2000. It was then exhibited in "Feedback: Art Social Consciousness and Resistance" at Monash University Museum of Art in Melbourne, Australia, from March to May 2003. From May to June 2003 it was exhibited at the Museo de Arte de la Universidad Nacional de Colombia Santa Fé Bogotá, Colombia. In its current manifestation the artwork has been adapted from the catalogue of the museum exhibitions. It is broken up into eight pieces corresponding to the contributions of the writers. The introduction by Sylvia Vélez is the PDF file accessible via a link below this abstract. The other seven PDF files are accessible via the 'Supplementary Files' section to the left of your screen. Please note that these files are around 4 megabytes each, so it may be difficult to access them from a dial-up connection.

  3. Typical load shapes for six categories of Swedish commercial buildings

    Energy Technology Data Exchange (ETDEWEB)

    Noren, C.

    1997-01-01

    In co-operation with several Swedish electricity suppliers, typical load shapes have been developed for six categories of commercial buildings located in the south of Sweden. The categories included in the study are: hotels, warehouses/grocery stores, schools with no kitchen, schools with kitchen, office buildings, and health care buildings. Load shapes are developed for different mean daily outdoor temperatures and for different day types, normally standard weekdays and standard weekends. The load shapes are presented as non-dimensional normalized 1-hour loads: all measured loads for an object are divided by the object's mean load during the measuring period, and typical load shapes are developed for each category of buildings. This keeps errors lower than working in W/m² terms would. Typical daytime (9 a.m. - 5 p.m.) standard deviations are 7-10% of the mean values for standard weekdays, but during very cold or warm weather conditions single objects can deviate from the typical load shape. On weekends errors are higher, and with very different activity levels in the buildings it is difficult to develop weekend load shapes with good accuracy. The method presented is very easy to use for similar studies and no building simulation programs are needed. If more load data is available, a good way to lower the errors is to make sure that every category consists only of objects with the same activity level, both on weekdays and weekends. To make the load shapes easier to use, Excel load shape workbooks have been developed, in which it is even possible to compare typical load shapes with measured data. 23 refs, 53 figs, 20 tabs
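
    The normalization described above can be sketched as follows (a minimal illustration, not the study's code; `typical_load_shape` is a hypothetical helper name):

```python
import numpy as np

def typical_load_shape(hourly_loads):
    """hourly_loads: array of shape (n_objects, n_hours) of measured
    1-hour loads for one building category and day type. Each object is
    divided by its own mean load over the measuring period, so objects
    of different sizes can be pooled without resorting to W/m2 terms."""
    normalized = hourly_loads / hourly_loads.mean(axis=1, keepdims=True)
    shape = normalized.mean(axis=0)    # typical non-dimensional load shape
    spread = normalized.std(axis=0)    # hour-by-hour standard deviation
    return shape, spread
```

    Two buildings whose loads are exact scaled copies of each other collapse onto the same normalized curve, which is what lets a single shape represent a whole category.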

  4. Prediction of the Electromagnetic Field Distribution in a Typical Aircraft Using the Statistical Energy Analysis

    Science.gov (United States)

    Kovalevsky, Louis; Langley, Robin S.; Caro, Stephane

    2016-05-01

    Due to the high cost of experimental EMI measurements, significant attention has been focused on numerical simulation. Classical methods such as the Method of Moments or the Finite-Difference Time-Domain method are not well suited for this type of problem, as they require a fine discretisation of space and fail to take uncertainties into account. In this paper, the authors show that Statistical Energy Analysis (SEA) is well suited for this type of application. SEA is a statistical approach employed to solve high-frequency problems of electromagnetically reverberant cavities at a reduced computational cost. The key aspects of this approach are (i) to consider an ensemble of systems that share the same gross parameters, and (ii) to avoid solving Maxwell's equations inside the cavity, using the power balance principle. The output is an estimate of the field magnitude distribution in each cavity. The method is applied to a typical aircraft structure.

  5. An approach of requirements tracing in formal refinement

    DEFF Research Database (Denmark)

    Jastram, Michael; Hallerstede, Stefan; Leuschel, Michael

    2010-01-01

    Formal modeling of computing systems yields models that are intended to be correct with respect to the requirements that have been formalized. The complexity of typical computing systems can be addressed by formal refinement introducing all the necessary details piecemeal. We report on preliminar...... changes, making use of corresponding techniques already built into the Event-B method....

  6. Determination of fuel irradiation parameters. Required accuracies and available methods

    International Nuclear Information System (INIS)

    Mas, P.

    1977-01-01

    This paper reports on the present status of the main methods used to determine the nuclear parameters of fuel irradiation in testing reactors (nuclear power, burn-up, ...). The different methods (theoretical or experimental) are reviewed: neutron measurements and calculations, gamma scanning, heat balance, ... The required accuracies are reviewed: they are 3-5% on flux, fluence, nuclear power, burn-up and conversion factor. These required accuracies are compared with the accuracies actually available, which at the present time are of the order of 5-20% on these parameters.

  7. Determination of material irradiation parameters. Required accuracies and available methods

    International Nuclear Information System (INIS)

    Cerles, J.M.; Mas, P.

    1978-01-01

    In this paper, the authors report on the main methods used to determine the nuclear parameters of material irradiation in testing reactors (nuclear power, burn-up, fluxes, fluences, ...). The different methods (theoretical or experimental) are reviewed: neutronics measurements and calculations, gamma scanning, thermal balance, ... The required accuracies are reviewed: they are 3-5% on flux, fluence, nuclear power, burn-up and conversion factor. These required accuracies are compared with the accuracies actually available, which at the present time are of the order of 5-20% on these parameters.

  8. Functional Mobility Testing: A Novel Method to Create Suit Design Requirements

    Science.gov (United States)

    England, Scott A.; Benson, Elizabeth A.; Rajulu, Sudhakar L.

    2008-01-01

    This study was performed to aid in the creation of design requirements for the next generation of space suits that more accurately describe the level of mobility necessary for a suited crewmember through the use of an innovative methodology utilizing functional mobility. A novel method was utilized involving the collection of kinematic data while 20 subjects (10 male, 10 female) performed pertinent functional tasks that will be required of a suited crewmember during various phases of a lunar mission. These tasks were selected based on relevance and criticality from a larger list of tasks that may be carried out by the crew. Kinematic data was processed through Vicon BodyBuilder software to calculate joint angles for the ankle, knee, hip, torso, shoulder, elbow, and wrist. Maximum functional mobility was consistently lower than maximum isolated mobility. This study suggests that conventional methods for establishing design requirements for human-systems interfaces based on maximal isolated joint capabilities may overestimate the required mobility. Additionally, this method provides a valuable means of evaluating systems created from these requirements by comparing the mobility available in a new spacesuit, or the mobility required to use a new piece of hardware, to this newly established database of functional mobility.
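
    The kind of included joint angle extracted from three motion-capture markers can be sketched as follows. This is a generic illustration of the underlying geometry (the study itself used Vicon BodyBuilder); `joint_angle` is a hypothetical helper name:

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Included angle (degrees) at `joint` given three 3-D marker
    positions, e.g. hip-knee-ankle markers for the knee angle.
    Computed from the arc-cosine of the normalized dot product of the
    two limb-segment vectors meeting at the joint."""
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point values just outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

    Tracking this angle over a recorded functional task, rather than asking a subject to flex a joint in isolation, is what yields the lower "functional" ranges of motion reported above.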

  9. Characterization of typical irradiation channels of CNESTEN'S TRIGA Mark II reactor (Rabat, Morocco) using NAA K0-method

    International Nuclear Information System (INIS)

    Embarch, K.; Bounouira, H.; Bounakhla, M.; Amsil, H.; Jacimovic, R.

    2010-01-01

    The aim of this work is the use of neutron activation analysis with the k0-standardization method to characterize some typical irradiation channels of the Moroccan TRIGA Mark II research reactor. The two neutron flux parameters of the selected irradiation channels used for elemental concentration calculation, f (thermal-to-epithermal ratio) and α (deviation from the 1/E distribution), have been determined both in the pneumatic tube (PT) and in the carousel facility (CR1) using the zirconium bare triple method. Results obtained for f and α in the two irradiation channels show that the f parameter determined in this way differs between the carousel facility (CR1) and the PT channel. This can be explained by the fact that the CR1 channel is situated in a graphite reflector and is relatively far from the reactor core, while the PT is in the core. Parameter α in the CR1 has a positive value, as expected, indicating that the neutron spectrum is relatively well thermalized. Parameter α in the PT has a negative value, which is very small and cannot significantly influence the final results obtained by the k0-method. The method was validated in our laboratory by analyzing the elemental concentrations of the IAEA Standard Reference Material (Soil-7). All calculations were performed using Kay Win Software. (author)

  10. Software Safety Analysis of Digital Protection System Requirements Using a Qualitative Formal Method

    International Nuclear Information System (INIS)

    Lee, Jang-Soo; Kwon, Kee-Choon; Cha, Sung-Deok

    2004-01-01

    The safety analysis of requirements is a key problem area in the development of software for the digital protection systems of a nuclear power plant. When specifying requirements for software of the digital protection systems and conducting safety analysis, engineers find that requirements are often known only in qualitative terms and that existing fault-tree analysis techniques provide little guidance on formulating and evaluating potential failure modes. A framework for the requirements engineering process is proposed that consists of a qualitative method for requirements specification, called the qualitative formal method (QFM), and a safety analysis method for the requirements based on causality information, called the causal requirements safety analysis (CRSA). CRSA is a technique that qualitatively evaluates causal relationships between software faults and physical hazards. This technique, extending the qualitative formal method process and utilizing information captured in the state trajectory, provides specific guidelines on how to identify failure modes and the relationship among them. The QFM and CRSA processes are described using shutdown system 2 of the Wolsong nuclear power plants as the digital protection system example

  11. Testing typicality in multiverse cosmology

    Science.gov (United States)

    Azhar, Feraz

    2015-05-01

    In extracting predictions from theories that describe a multiverse, we face the difficulty that we must assess probability distributions over possible observations prescribed not just by an underlying theory, but by a theory together with a conditionalization scheme that allows for (anthropic) selection effects. This means we usually need to compare distributions that are consistent with a broad range of possible observations with actual experimental data. One controversial means of making this comparison is by invoking the "principle of mediocrity": that is, the principle that we are typical of the reference class implicit in the conjunction of the theory and the conditionalization scheme. In this paper, we quantitatively assess the principle of mediocrity in a range of cosmological settings, employing "xerographic distributions" to impose a variety of assumptions regarding typicality. We find that for a fixed theory, the assumption that we are typical gives rise to higher likelihoods for our observations. If, however, one allows both the underlying theory and the assumption of typicality to vary, then the assumption of typicality does not always provide the highest likelihoods. Interpreted from a Bayesian perspective, these results support the claim that when one has the freedom to consider different combinations of theories and xerographic distributions (or different "frameworks"), one should favor the framework that has the highest posterior probability; and then from this framework one can infer, in particular, how typical we are. In this way, the invocation of the principle of mediocrity is more questionable than has been recently claimed.

  12. 5 CFR 610.404 - Requirement for time-accounting method.

    Science.gov (United States)

    2010-01-01

    ... REGULATIONS HOURS OF DUTY Flexible and Compressed Work Schedules § 610.404 Requirement for time-accounting method. An agency that authorizes a flexible work schedule or a compressed work schedule under this...

  13. Effective teaching methods in higher education: requirements and barriers

    Directory of Open Access Journals (Sweden)

    NAHID SHIRANI BIDABADI

    2016-10-01

    Full Text Available Introduction: Teaching is one of the main components of educational planning, which is a key factor in conducting educational plans. Despite the importance of good teaching, the outcomes are far from ideal. The present qualitative study aimed to investigate effective teaching in higher education in Iran based on the experiences of the best professors in the country and the best local professors of Isfahan University of Technology. Methods: This qualitative content analysis study was conducted through purposeful sampling. Semi-structured interviews were conducted with ten faculty members (3 of them from the best professors in the country and 7 from the best local professors). Content analysis was performed with MAXQDA software. The codes, categories and themes were explored through an inductive process that moved from semantic units or direct quotations to general themes. Results: According to the results of this study, the best teaching approach is the mixed method (student-centered together with teacher-centered) plus educational planning and previous readiness. But teachers who want to teach using this method confront certain barriers and requirements; some of these requirements concern professors' behavior and some concern professors' outlook. There are also some major barriers, some of which are associated with professors' practice and others with laws and regulations. Implications of these findings for teachers' preparation in education are discussed. Conclusion: The present study illustrated that a good teaching method helps the students to question their preconceptions and motivates them to learn, by putting them in a situation in which they come to see themselves as the authors of answers and as the agents of responsibility for change. But training through this method has some barriers and requirements. To have effective teaching; the faculty members of the universities

  14. TYPICAL SAFETY MANAGEMENT SYSTEM OF AN OPERATOR IN THE RUSSIAN FEDERATION

    Directory of Open Access Journals (Sweden)

    Alexander Michaylovich Lushkin

    2017-01-01

    Full Text Available In order to implement the concept of acceptable risk, all airlines were required to have a Safety Management System (SMS) from 01.01.2009 at the request of ICAO and from 01.01.2010 at the request of the Federal Air Transport Agency. State requirements for SMS have not been formulated clearly. Leading airlines, in an effort to meet international standards, develop and implement SMS on their own, so the implemented SMS differ in control settings (level of safety), procedures and methodological support of the safety management processes. A summary of the best experience in the development, implementation and improvement of SMS in leading airlines allows the creation of a typical SMS for an airline, in which the basic procedures required by the standards are systematized. The typical SMS is based on experience in the design, implementation and development of corporate SMS in three leading Russian airlines, in which the author worked in 2006-2015, and can be the basis of an SMS for airlines operating airplanes and helicopters. The requirements of international and national standards, research results, and the developed and implemented methodical support of safety level management procedures taken into account in the typical SMS contributed to the successful passage of periodic IATA audits of the IOSA operational safety standards by the member airlines and to achieving one of the best levels of safety not only in Russia but also in the world.

  15. Rapid methods for jugular bleeding of dogs requiring one technician.

    Science.gov (United States)

    Frisk, C S; Richardson, M R

    1979-06-01

    Two methods were used to collect blood from the jugular vein of dogs. In both techniques, only one technician was required. A rope with a slip knot was placed around the base of the neck to assist in restraint and act as a tourniquet for the vein. The technician used one hand to restrain the dog by the muzzle and position the head. The other hand was used for collecting the sample. One of the methods could be accomplished with the dog in its cage. The bleeding techniques were rapid, requiring approximately 1 minute per dog.

  16. Robust Requirements Tracing via Internet Search Technology: Improving an IV and V Technique. Phase 2

    Science.gov (United States)

    Hayes, Jane; Dekhtyar, Alex

    2004-01-01

    There are three major objectives to this phase of the work. (1) Improvement of Information Retrieval (IR) methods for Independent Verification and Validation (IV&V) requirements tracing. Information Retrieval methods are typically developed for very large document collections (on the order of millions to tens of millions of documents or more) and therefore most successfully used methods somewhat sacrifice precision and recall in order to achieve efficiency. At the same time, typical IR systems treat all user queries as independent of each other and assume that relevance of documents to queries is subjective for each user. The IV&V requirements tracing problem has a much smaller data set to operate on, even for large software development projects; the set of queries is predetermined by the high-level specification document, and individual requirements considered as query input to IR methods are not necessarily independent of each other. Namely, knowledge about the links for one requirement may be helpful in determining the links of another requirement. Finally, while the final decision on the exact form of the traceability matrix still belongs to the IV&V analyst, his/her decisions are much less arbitrary than those of an Internet search engine user. All this suggests that the information available to us in the framework of the IV&V tracing problem can be successfully leveraged to enhance standard IR techniques, which in turn would lead to increased recall and precision. We developed several new methods during Phase II. (2) IV&V requirements tracing IR toolkit. Based on the methods developed in Phase I and their improvements developed in Phase II, we built a toolkit of IR methods for IV&V requirements tracing. The toolkit has been integrated, at the data level, with SAIC's SuperTracePlus (STP) tool. (3) Toolkit testing. We tested the methods included in the IV&V requirements tracing IR toolkit on a number of projects.
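
    The vector-space baseline that such a tracing toolkit builds on can be sketched as follows. This is a generic TF-IDF/cosine candidate-link generator; the function names and the threshold are illustrative assumptions, not the toolkit's actual API or settings:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def trace(high_level, low_level, threshold=0.1):
    """For each high-level requirement (the query), rank low-level
    requirements by TF-IDF-weighted cosine similarity and return the
    indices of candidate trace links above the threshold."""
    docs = [Counter(tokenize(t)) for t in low_level]
    n = len(docs)
    idf = {t: math.log(n / sum(1 for d in docs if t in d))
           for d in docs for t in d}

    def vec(counts):
        return {t: c * idf.get(t, 0.0) for t, c in counts.items()}

    def cosine(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(w * w for w in a.values()))
        nb = math.sqrt(sum(w * w for w in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    links = {}
    for hi in high_level:
        q = vec(Counter(tokenize(hi)))
        scored = [(cosine(q, vec(d)), j) for j, d in enumerate(docs)]
        links[hi] = [j for s, j in sorted(scored, reverse=True) if s >= threshold]
    return links
```

    The candidate list still goes to the analyst for a final keep/discard decision; the Phase II improvements described above aim to raise the recall and precision of exactly this candidate-generation step.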

  17. Assessing User Needs and Requirements for Assistive Robots at Home.

    Science.gov (United States)

    Werner, Katharina; Werner, Franz

    2015-01-01

    'Robots in healthcare' is a trending topic. This paper gives an overview of currently and commonly used methods for gathering user needs and requirements in research projects in the field of assistive robotics. Common strategies between authors are presented, as well as examples of exceptions, which can help future researchers find methods suitable for their own work. Typical problems of the field are discussed and partial solutions are proposed.

  18. 40 CFR 63.344 - Performance test requirements and test methods.

    Science.gov (United States)

    2010-07-01

    ... electroplating tanks or chromium anodizing tanks. The sampling time and sample volume for each run of Methods 306... Chromium Anodizing Tanks § 63.344 Performance test requirements and test methods. (a) Performance test... Emissions From Decorative and Hard Chromium Electroplating and Anodizing Operations,” appendix A of this...

  19. Determination of neutron energy spectrum at a pneumatic rabbit station of a typical swimming pool type material test research reactor

    International Nuclear Information System (INIS)

    Malkawi, S.R.; Ahmad, N.

    2002-01-01

    The method of multiple foil activation was used to measure the neutron energy spectrum experimentally at a rabbit station of Pakistan Research Reactor-1 (PARR-1), which is a typical swimming pool type material test research reactor. The computer codes MSITER and SANDBP were used to adjust the spectrum. The pre-information required by the adjustment codes was obtained by modelling the core and its surroundings in three dimensions using the one-dimensional transport theory code WIMS-D/4 and the multidimensional finite-difference diffusion theory code CITATION. The input spectrum covariance information required by the MSITER code was also calculated from the CITATION output. A comparison between calculated and adjusted spectra shows a good agreement.

  20. Generation of typical solar radiation data for different climates of China

    International Nuclear Information System (INIS)

    Zang, Haixiang; Xu, Qingshan; Bian, Haihong

    2012-01-01

    In this study, typical solar radiation data are generated from both measured data and synthetic generation for 35 stations in six different climatic zones of China. (1) By applying measured weather data covering at least 10 years between 1994 and 2009, typical meteorological years (TMYs) for 35 cities are generated using the Finkelstein–Schafer statistical method. The cumulative distribution function (CDF) of daily global solar radiation (DGSR) for each year is compared with the CDF of DGSR for the long-term record at six stations in different climates (Sanya, Shanghai, Zhengzhou, Harbin, Mohe and Lhasa). The daily global solar radiation data obtained from the TMYs are presented in tabular form. (2) Based on recorded global radiation data from at least 10 years, a new daily global solar radiation model is developed with a sine and cosine wave (SCW) equation. The results of the proposed model and other empirical regression models are compared with measured data using different statistical indicators. It is found that solar radiation data calculated by the new model are superior to those from other empirical models in six typical climatic zones. In addition, the novel SCW model is tested and applied for 35 stations in China. -- Highlights: ► Both the TMY method and synthetic generation are used to generate solar radiation data. ► The latest and accurate long-term weather data in six different climates are applied. ► TMYs using new weighting factors of 8 weather indices for 35 regions are obtained. ► A new sine and cosine wave model is proposed and utilized for 35 major stations. ► Both the TMY method and the proposed regression model perform well on a monthly basis.
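    The Finkelstein–Schafer selection step can be sketched as follows: the statistic is the mean absolute difference between a candidate year's CDF and the long-term CDF, and the candidate with the smallest value is chosen. The radiation figures below are synthetic, not the Chinese station data.

```python
import numpy as np

def empirical_cdf(sample, grid):
    """Empirical CDF of `sample`, evaluated at each point of `grid`."""
    s = np.sort(np.asarray(sample))
    return np.searchsorted(s, grid, side="right") / len(s)

def fs_statistic(year_values, longterm_values):
    """Finkelstein-Schafer statistic: mean absolute difference between a
    candidate year's CDF and the long-term CDF of daily global radiation."""
    grid = np.sort(np.asarray(longterm_values))
    return float(np.mean(np.abs(empirical_cdf(year_values, grid)
                                - empirical_cdf(longterm_values, grid))))

# Synthetic daily global solar radiation (MJ/m^2): choose the candidate year
# whose CDF best matches the long-term record.
rng = np.random.default_rng(0)
longterm = rng.normal(15, 4, 3650)  # ~10 years of daily values
years = [rng.normal(15 + shift, 4, 365) for shift in (-3.0, 0.1, 2.0)]
best = min(range(len(years)), key=lambda i: fs_statistic(years[i], longterm))
print(best)  # the nearly unshifted year (index 1) matches best
```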

  1. Prediction and typicality in multiverse cosmology

    International Nuclear Information System (INIS)

    Azhar, Feraz

    2014-01-01

    In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue contrary to recent claims that it is not clear one can either dispense with notions of typicality altogether or presume typicality, in comparing resulting probability distributions with observations. We show in a concrete, top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios. (paper)

  3. [Comparison of efficacy of tests for differentiation of typical and atypical strains of Yersinia pestis and Yersinia pseudotuberculosis].

    Science.gov (United States)

    Arsen'eva, T E; Lebedeva, S A; Trukhachev, A L; Vasil'eva, E A; Ivanova, V S; Bozhko, N V

    2010-01-01

    To characterize the species specificity of officially recommended tests for differentiation of Yersinia pestis and Yersinia pseudotuberculosis and to propose additional tests allowing more accurate identification. Natural, laboratory and typical strains of the two Yersinia species were studied using microbiological, molecular and biochemical methods. For PCR, species-specific primers complementary to certain fragments of the chromosomal DNA of each species, as well as to several plasmid genes of Y. pestis, were used. It was shown that such attributes of Y. pestis as colony form, fermentation of rhamnose, melibiose and urea, susceptibility to diagnostic phages, and nutritional requirements could be lost in Y. pestis strains or detected in Y. pseudotuberculosis strains. Attributes such as mobility, as well as a positive CoA-reaction for fraction V antigen, are more reliable. Guaranteed differentiation of typical strains and of strains altered in their differential test characteristics is provided only by PCR analysis with primers vlml2for/ISrev216 and JS, respectively, which are homologous to certain chromosome fragments of one of the two Yersinia species.

  4. Effective Teaching Methods in Higher Education: Requirements and Barriers.

    Science.gov (United States)

    Shirani Bidabadi, Nahid; Nasr Isfahani, Ahmmadreza; Rouhollahi, Amir; Khalili, Roya

    2016-10-01

    Teaching is one of the main components of educational planning, which is a key factor in conducting educational plans. Despite the importance of good teaching, the outcomes are far from ideal. The present qualitative study aimed to investigate effective teaching in higher education in Iran based on the experiences of the best professors in the country and the best local professors of Isfahan University of Technology. This qualitative content analysis study was conducted through purposeful sampling. Semi-structured interviews were conducted with ten faculty members (3 of them from the best professors in the country and 7 from the best local professors). Content analysis was performed with MAXQDA software. The codes, categories and themes were explored through an inductive process that began from semantic units or direct quotations and moved to general themes. According to the results of this study, the best teaching approach is the mixed method (student-centered together with teacher-centered) plus educational planning and prior readiness. But teachers who want to teach with this method confront certain barriers and requirements: some of these requirements concern professors' behavior and some their outlook. Also, there are some major barriers, some of which are associated with professors' practice and others with laws and regulations. Implications of these findings for teachers' preparation in education are discussed. In the present study, it was illustrated that a good teaching method helps the students to question their preconceptions and motivates them to learn, by putting them in a situation in which they come to see themselves as the authors of answers and as the agents of responsibility for change. But training through this method has some barriers and requirements, and to teach effectively, the faculty members of universities should be aware of them.

  5. Advanced methods of solid oxide fuel cell modeling

    CERN Document Server

    Milewski, Jaroslaw; Santarelli, Massimo; Leone, Pierluigi

    2011-01-01

    Fuel cells are widely regarded as the future of the power and transportation industries. Intensive research in this area now requires new methods of fuel cell operation modeling and cell design. Typical mathematical models are based on the physical process description of fuel cells and require a detailed knowledge of the microscopic properties that govern both chemical and electrochemical reactions. "Advanced Methods of Solid Oxide Fuel Cell Modeling" proposes the alternative methodology of generalized artificial neural network (ANN) modeling of solid oxide fuel cells (SOFC).

  6. Typicality and reasoning fallacies.

    Science.gov (United States)

    Shafir, E B; Smith, E E; Osherson, D N

    1990-05-01

    The work of Tversky and Kahneman on intuitive probability judgment leads to the following prediction: The judged probability that an instance belongs to a category is an increasing function of the typicality of the instance in the category. To test this prediction, subjects in Experiment 1 read a description of a person (e.g., "Linda is 31, bright, ... outspoken") followed by a category. Some subjects rated how typical the person was of the category, while others rated the probability that the person belonged to that category. For categories like bank teller and feminist bank teller: (1) subjects rated the person as more typical of the conjunctive category (a conjunction effect); (2) subjects rated it more probable that the person belonged to the conjunctive category (a conjunction fallacy); and (3) the magnitudes of the conjunction effect and fallacy were highly correlated. Experiment 2 documents an inclusion fallacy, wherein subjects judge, for example, "All bank tellers are conservative" to be more probable than "All feminist bank tellers are conservative." In Experiment 3, results parallel to those of Experiment 1 were obtained with respect to the inclusion fallacy.

  7. Typicality and misinformation: Two sources of distortion

    Directory of Open Access Journals (Sweden)

    Malen Migueles

    2008-01-01

    Full Text Available This study examined the effect of two sources of memory error: exposure to post-event information and extracting typical contents from schemata. Participants were shown a video of a bank robbery and presented with high- and low-typicality misinformation extracted from two normative studies. The misleading suggestions consisted of either changes to the original video information or additions of completely new contents. In the subsequent recognition task the post-event misinformation produced memory impairment. The participants used the underlying schema of the event to extract high-typicality information which had become integrated with episodic information, thus giving rise to more hits and false alarms for these items. However, the effect of exposure to misinformation was greater on low-typicality items. There were no differences between changed and added information, but there were more false alarms when a low-typicality item was changed to a high-typicality item.

  8. Inverse operator theory method mathematics-mechanization for the solutions of nonlinear equations and some typical applications in nonlinear physics

    International Nuclear Information System (INIS)

    Fang Jinqing; Yao Weiguang

    1992-12-01

    Inverse operator theory method (IOTM) has developed rapidly in the last few years. It is an effective and useful procedure for the quantitative solution of nonlinear or stochastic continuous dynamical systems. Solutions are obtained in series form for deterministic equations, and in the case of stochastic equations it gives statistical measures of the solution process. A very important advantage of the IOTM is that it eliminates a number of restrictive assumptions on the nature of stochastic processes, therefore providing more realistic solutions. The IOTM and its mathematics-mechanization (MM) are briefly introduced. They are used successfully to study the chaotic behaviors of nonlinear dynamical systems. As typical examples, the Lorenz equation, the generalized Duffing equation, and two coupled generalized Duffing equations are investigated by use of the IOTM and the MM. The results are in good agreement with those obtained by the Runge-Kutta method (RKM), with higher accuracy and faster convergence. The IOTM realized by the MM is thus of potential value for applications in nonlinear science.
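    The Runge-Kutta baseline against which the IOTM is compared can be sketched for the Lorenz system; this is a standard fourth-order RK4 integrator with the classical chaotic parameter values, not the IOTM itself.

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h*ki for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Lorenz system with the classical chaotic parameters."""
    x, y, z = s
    return [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]

state = [1.0, 1.0, 1.0]
for _ in range(2000):          # integrate to t = 20 with step h = 0.01
    state = rk4_step(lorenz, 0.0, state, 0.01)
print(all(abs(v) < 100 for v in state))  # chaotic but bounded attractor
```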

  9. Identifying and prioritizing customer requirements from tractor production by QFD method

    Directory of Open Access Journals (Sweden)

    H Taghizadeh

    2017-05-01

    Full Text Available Introduction: Discovering and understanding customer needs and expectations are important factors in customer satisfaction and play a vital role in maintaining a company's position among its competitors; meeting customer needs, including the quality of products or services, is critical to designing a successful product. Quality Function Deployment (QFD) is a technique for studying the demands and needs of customers that gives greater emphasis to the customer's interests. In general, the QFD method employs various tools for reaching qualitative goals, but its most important tool is the house of quality diagram. The Analytic Hierarchy Process (AHP) is a well-known MADM method based on pairwise comparisons, used for determining the priority of factors under study. Considering the effectiveness of the QFD method in explicating customers' demands and obtaining customer satisfaction, the researchers pursued a scientific answer to the following question: how can QFD explicate the real demands and requirements of customers for the final tractor product, and how are these demands and requirements prioritized from the customers' point of view? Accordingly, the aim of this study was to identify and prioritize the customer requirements of Massey Ferguson (MF 285) tractor production in the Iran Tractor Manufacturing Company using Student's t-test, AHP and QFD methods. Materials and Methods: The research method was descriptive and the statistical population included all tractor customers of the Tractor Manufacturing Company in Iran from March 2011 to March 2015. The statistical sample size, determined with the Cochran index, was 171. Moreover, the opinions of 20 experts were considered for determining the product's technical requirements.
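    The AHP prioritization step can be sketched with the common geometric-mean approximation to the principal-eigenvector weights; the requirement names and comparison values below are invented for illustration, not the study's data.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Approximate AHP priority weights via the geometric-mean method:
    w_i is proportional to the geometric mean of row i of the
    pairwise comparison matrix."""
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[1])
    return gm / gm.sum()

# Hypothetical customer requirements for a tractor, compared pairwise on
# Saaty's 1-9 scale (A[i][j] = importance of requirement i relative to j):
reqs = ["fuel economy", "reliability", "ease of service"]
A = [[1,   1/3, 2],
     [3,   1,   5],
     [1/2, 1/5, 1]]
w = ahp_priorities(A)
print([round(float(x), 2) for x in w])  # reliability gets the largest weight
```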

  10. A Fast Implementation for the Typical Testor Property Identification Based on an Accumulative Binary Tuple

    Directory of Open Access Journals (Sweden)

    Guillermo Sanchez-Diaz

    2012-11-01

    Full Text Available In this paper, we introduce a fast implementation of the CT-EXT algorithm for testor property identification that is based on an accumulative binary tuple. The fast implementation of the CT-EXT algorithm (one of the fastest algorithms reported) is designed to generate all the typical testors from a training matrix, requiring a reduced number of operations. Experimental results using this fast implementation, and a comparison with other state-of-the-art algorithms that generate typical testors, are presented.
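    For readers unfamiliar with the concept, the brute-force sketch below shows what CT-EXT computes far more efficiently: a testor is a column subset on which no two objects from different classes coincide, and a typical testor is one with no proper subset that is also a testor. The toy matrix is hypothetical.

```python
from itertools import combinations

def is_testor(matrix, labels, cols):
    """True if no two objects from different classes agree on every column in cols."""
    for i in range(len(matrix)):
        for j in range(i + 1, len(matrix)):
            if labels[i] != labels[j] and all(matrix[i][c] == matrix[j][c] for c in cols):
                return False
    return True

def typical_testors(matrix, labels):
    """Exponential brute-force enumeration (CT-EXT exists to avoid this):
    keep testors none of whose proper subsets is itself a testor."""
    n = len(matrix[0])
    testors = [set(c) for k in range(1, n + 1)
               for c in combinations(range(n), k)
               if is_testor(matrix, labels, c)]
    return [t for t in testors if not any(s < t for s in testors)]

# Toy binary training matrix with two classes:
M = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1],
     [0, 0, 0]]
y = ["a", "a", "b", "b"]
print(sorted(sorted(t) for t in typical_testors(M, y)))  # [[0], [1, 2]]
```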

  11. [Research on developing the spectral dataset for Dunhuang typical colors based on color constancy].

    Science.gov (United States)

    Liu, Qiang; Wan, Xiao-Xia; Liu, Zhen; Li, Chan; Liang, Jin-Xing

    2013-11-01

    The present paper aims at developing a method to reasonably set up typical spectral color datasets for different kinds of Chinese cultural heritage in the color rendering process. The world-famous wall paintings in the Dunhuang Mogao Grottoes, dating from more than 1700 years ago, were taken as the typical case in this research. In order to maintain color constancy during the color rendering workflow for Dunhuang cultural relics, a chromatic-adaptation-based method for developing the spectral dataset of typical colors for those wall paintings was proposed from the viewpoint of human visual perception. With the help and guidance of researchers in the art-research and protection-research institutions of the Dunhuang Academy, and according to the existing research achievements of Dunhuang studies in past years, 48 typical known Dunhuang pigments were chosen and 240 representative color samples were made, whose reflective spectra ranging from 360 to 750 nm were acquired with a spectrometer. In order to find the typical colors among the above-mentioned color samples, the original dataset was divided into several subgroups by clustering analysis. The grouping number, together with the most typical samples for each subgroup, which made up the first version of the typical color dataset, was determined by the Wilcoxon signed-rank test according to the color inconstancy index comprehensively calculated under 6 typical illuminating conditions. Considering the completeness of the gamut of the Dunhuang wall paintings, 8 complementary colors were determined, and finally the typical spectral color dataset was built up, containing 100 representative spectral colors. The analytical results show that the median color inconstancy index of the built dataset, at the 99% confidence level by the Wilcoxon signed-rank test, was 3.28, and the 100 colors are distributed uniformly in the whole gamut, which ensures that this dataset can provide a reasonable reference for color selection.
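    The cluster-then-pick-representatives step can be sketched generically: group the measured spectra, then keep each group's medoid (the measured spectrum closest to the group mean). This is a deterministic k-means illustration with made-up reflectance values, not the actual Dunhuang procedure or data.

```python
import numpy as np

def typical_colors(spectra, k, iters=20):
    """Group reflectance spectra with k-means (deterministically seeded with
    the first k samples) and return, per group, the index of the medoid."""
    X = np.asarray(spectra, dtype=float)
    centers = X[:k].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    reps = []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        d = ((X[idx] - centers[j]) ** 2).sum(axis=1)
        reps.append(int(idx[np.argmin(d)]))
    return reps

# Hypothetical 5-band reflectance samples from two pigment families:
samples = [[0.20, 0.22, 0.21, 0.20, 0.23],   # family A
           [0.80, 0.78, 0.79, 0.81, 0.80],   # family B
           [0.21, 0.20, 0.22, 0.21, 0.22],   # family A
           [0.79, 0.80, 0.78, 0.80, 0.81],   # family B
           [0.25, 0.24, 0.26, 0.25, 0.24],   # family A
           [0.82, 0.83, 0.81, 0.82, 0.83]]   # family B
print(typical_colors(samples, 2))  # [2, 3]: one medoid per family
```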

  12. Quality assurance requirements and methods for high level waste package acceptability

    International Nuclear Information System (INIS)

    1992-12-01

    This document should serve as guidance for assigning the necessary items to control the conditioning process in such a way that waste packages are produced in compliance with the waste acceptance requirements. It is also provided to promote the exchange of information on quality assurance requirements and on the application of quality assurance methods associated with the production of high level waste packages, to ensure that these waste packages comply with the requirements for transportation, interim storage and waste disposal in deep geological formations. The document is intended to assist both the operators of conditioning facilities and repositories as well as national authorities and regulatory bodies, involved in the licensing of the conditioning of high level radioactive wastes or in the development of deep underground disposal systems. The document recommends the quality assurance requirements and methods which are necessary to generate data for these parameters identified in IAEA-TECDOC-560 on qualitative acceptance criteria, and indicates where and when the control methods can be applied, e.g. in the operation or commissioning of a process or in the development of a waste package design. Emphasis is on the control of the process and little reliance is placed on non-destructive or destructive testing. Qualitative criteria, relevant to disposal of high level waste, are repository dependent and are not addressed here. 37 refs, 3 figs, 2 tabs

  13. Time to discontinuation of atypical versus typical antipsychotics in the naturalistic treatment of schizophrenia

    Directory of Open Access Journals (Sweden)

    Swartz Marvin

    2006-02-01

    Full Text Available Abstract. Background: There is an ongoing debate over whether atypical antipsychotics are more effective than typical antipsychotics in the treatment of schizophrenia. This naturalistic study compares atypical and typical antipsychotics on time to all-cause medication discontinuation, a recognized index of medication effectiveness in the treatment of schizophrenia. Methods: We used data from a large, 3-year, observational, non-randomized, multisite study of schizophrenia, conducted in the U.S. between 7/1997 and 9/2003. Patients who were initiated on oral atypical antipsychotics (clozapine, olanzapine, risperidone, quetiapine, or ziprasidone) or oral typical antipsychotics (low, medium, or high potency) were compared on time to all-cause medication discontinuation for 1 year following initiation. Treatment group comparisons were based on treatment episodes using 3 statistical approaches (Kaplan-Meier survival analysis, Cox proportional hazards regression, and propensity score-adjusted bootstrap resampling methods). To further assess the robustness of the findings, sensitivity analyses were performed, including the use of (a) only 1 medication episode for each patient, the one with which the patient was treated first, and (b) all medication episodes, including those simultaneously initiated on more than 1 antipsychotic. Results: Mean time to all-cause medication discontinuation was longer on atypical (N = 1132, 256.3 days) compared to typical antipsychotics (N = 534, 197.2 days). Conclusion: In the usual care of schizophrenia patients, time to medication discontinuation for any cause appears significantly longer for atypical than typical antipsychotics regardless of the typical antipsychotic potency level. Findings were primarily driven by clozapine and olanzapine, and to a lesser extent by risperidone. Furthermore, only clozapine and olanzapine therapy showed consistently and significantly longer treatment duration compared to perphenazine, a medium-potency typical antipsychotic.
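    The first of the three statistical approaches, Kaplan-Meier survival analysis, can be sketched for time-to-discontinuation data as follows. The episode durations are invented; this minimal version processes observations one at a time, which matches the standard estimator when event times are distinct (production implementations also group tied event times).

```python
def kaplan_meier(durations, discontinued):
    """Kaplan-Meier estimate of the probability of remaining on medication.
    `durations`: observed days; `discontinued`: True if the patient stopped
    (event), False if censored (e.g., still on medication at study end)."""
    at_risk = len(durations)
    surv, curve = 1.0, []
    for t, d in sorted(zip(durations, discontinued)):
        if d:  # an event reduces the survival estimate; censoring does not
            surv *= (at_risk - 1) / at_risk
        curve.append((t, round(surv, 3)))
        at_risk -= 1
    return curve

# Hypothetical treatment episodes (days until discontinuation; False = censored):
days = [30, 90, 120, 200, 365, 365]
event = [True, True, False, True, False, False]
print(kaplan_meier(days, event))
```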

  14. ECOLOGICAL AND TECHNICAL REQUIREMENTS OF RADIOACTIVE WASTE UTILISATION

    Directory of Open Access Journals (Sweden)

    Gabriel Borowski

    2013-01-01

    Full Text Available The paper presents a survey of radioactive waste disposal technologies used worldwide in terms of their influence upon the natural environment. Typical sources of radioactive waste from medicine and industry are presented. In addition, various types of radioactive waste, both liquid and solid, are described. Requirements and conditions of waste storage are characterised. Selected liquid and solid waste processing technologies are shown. It is stipulated that contemporary methods of radioactive waste utilisation enable their successful neutralisation. The implementation of these methods ought to be mandated first by ecological factors and only then by economic ones.

  15. Neural Interfaces for Intracortical Recording: Requirements, Fabrication Methods, and Characteristics.

    Science.gov (United States)

    Szostak, Katarzyna M; Grand, Laszlo; Constandinou, Timothy G

    2017-01-01

    Implantable neural interfaces for central nervous system research have been designed with wire, polymer, or micromachining technologies over the past 70 years. Research on biocompatible materials, ideal probe shapes, and insertion methods has resulted in building more and more capable neural interfaces. Although the trend is promising, the long-term reliability of such devices has not yet met the required criteria for chronic human application. The performance of neural interfaces in chronic settings often degrades due to the foreign body response to the implant, which is initiated by the surgical procedure and related to the probe structure and the material properties used in fabricating the neural interface. In this review, we identify the key requirements for neural interfaces for intracortical recording; describe the three different types of probes (microwire, micromachined, and polymer-based), including their materials and fabrication methods; and discuss their characteristics and related challenges.

  16. Estimation methods of eco-environmental water requirements: Case study

    Institute of Scientific and Technical Information of China (English)

    YANG Zhifeng; CUI Baoshan; LIU Jingling

    2005-01-01

    Supplying water to the ecological environment in adequate quantity and quality is significant for the protection of diversity and the realization of sustainable development. The conception and connotation of eco-environmental water requirements, including the definition of the conception and the composition and characteristics of eco-environmental water requirements, are evaluated in this paper. The classification and estimation methods of eco-environmental water requirements are then proposed. On the basis of the study on the Huang-Huai-Hai Area, the present water use and the minimum and suitable water requirements are estimated, and the corresponding water shortage is also calculated. According to the interrelated programs, the eco-environmental water requirements in the coming years (2010, 2030, 2050) are estimated. The results indicate that the minimum and suitable eco-environmental water requirements fluctuate with differences in function setting and the referential standard of water resources, as does the water shortage. Moreover, the study indicates that the minimum eco-environmental water requirement of the study area ranges from 2.84×10^10 m^3 to 1.02×10^11 m^3, the suitable water requirement ranges from 6.45×10^10 m^3 to 1.78×10^11 m^3, and the water shortage ranges from 9.1×10^9 m^3 to 2.16×10^10 m^3 under the minimum water requirement and from 3.07×10^10 m^3 to 7.53×10^10 m^3 under the suitable water requirement. According to the different values of the water shortage, water priority can be allocated. The ranges of the eco-environmental water requirements in the three coming years (2010, 2030, 2050) are 4.49×10^10 m^3 to 1.73×10^11 m^3, 5.99×10^10 m^3 to 2.09×10^11 m^3, and 7.44×10^10 m^3 to 2.52×10^11 m^3, respectively.

  17. Concept typicality responses in the semantic memory network.

    Science.gov (United States)

    Santi, Andrea; Raposo, Ana; Frade, Sofia; Marques, J Frederico

    2016-12-01

    For decades concept typicality has been recognized as critical to structuring conceptual knowledge, but only recently has typicality been applied to better understanding the processes engaged by the neurological network underlying semantic memory. This previous work has focused on one region within the network, the Anterior Temporal Lobe (ATL). The ATL responds negatively to concept typicality (i.e., the more atypical the item, the greater the activation in the ATL). To better understand the role of typicality in the entire network, we ran an fMRI study using a category verification task in which concept typicality was manipulated parametrically. We argue that typicality is relevant to both amodal feature integration centers as well as category-specific regions. Both the Inferior Frontal Gyrus (IFG) and ATL demonstrated a negative correlation with typicality, whereas inferior parietal regions showed positive effects. We interpret this in light of functional theories of these regions. Interactions between category and typicality were not observed in regions classically recognized as category-specific, thus providing an argument against category-specific regions, at least with fMRI. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Applying Knowledge of Species-Typical Scavenging Behavior to the Search and Recovery of Mammalian Skeletal Remains.

    Science.gov (United States)

    Young, Alexandria; Stillman, Richard; Smith, Martin J; Korstjens, Amanda H

    2016-03-01

    Forensic investigations involving animal scavenging of human remains require a physical search of the scene and surrounding areas. However, there is currently no standard procedure in the U.K. for physical searches for scavenged human remains. The Winthrop and grid search methods used by police specialist searchers for scavenged remains were examined through the use of mock red fox (Vulpes vulpes) scatter scenes. Forty-two police specialist searchers from two different regions within the U.K. were divided between those briefed and those not briefed with fox-typical scavenging information. Briefing searchers with scavenging information significantly affected the recovery of scattered bones (χ(2) = 11.45, df = 1, p = 0.001). Searchers briefed with scavenging information were 2.05 times more likely to recover bones. Adaptations to the search methods used by searchers were evident on a regional level, such that searchers more accustomed to a peri-urban to rural region recovered a higher percentage of scattered bones (58.33%, n = 84). © 2015 American Academy of Forensic Sciences.

  19. Lipoma arborescens: Comparison of typical and atypical disease presentations

    International Nuclear Information System (INIS)

    Howe, B.M.; Wenger, D.E.

    2013-01-01

    Aim: To determine whether the aetiology differed between typical cases of lipoma arborescens with unilateral knee involvement and atypical cases involving joints other than the knee, polyarticular disease, and disease outside of the knee joint. Materials and methods: Cases of lipoma arborescens involving the knee joint were evaluated for the distribution of the disease and the severity of degenerative arthritis. Joints other than the knee were evaluated for the presence and severity of degenerative arthritis, and the distribution was classified as either intra-articular, extra-articular, or both. Clinical history was reviewed for patient age at presentation, a history of inflammatory arthritis, diabetes mellitus, and known steroid use. Fisher's exact test was used to determine whether there was a statistically significant difference between typical and atypical presentations of the disease. Results: Lipoma arborescens was identified in 45 joints in 39 patients. Twenty-eight patients were classified as “typical” and 11 patients had “atypical” disease. There was no significant difference in age at presentation, presence of degenerative arthritis, or known inflammatory arthritis when comparing typical and atypical presentations of the disease. Conclusion: Twenty-eight percent of patients in the present study had an atypical presentation of lipoma arborescens, with multifocal lipoma arborescens or disease in joints other than the knee. There was no significant difference in age at presentation, presence of degenerative arthritis, or known inflammatory arthritis when comparing typical and atypical presentations of the disease. Of the 39 patients, only three had no evidence of degenerative arthritis, which suggests that many cases of lipoma arborescens are secondary to chronic reactive change in association with degenerative arthritis.

  20. The non-typical MRI findings of the branchial cleft cysts

    International Nuclear Information System (INIS)

    Hu Chunhong; Wu Qingde; Yao Xuanjun; Chen Jie; Zhu Wei; Chen Jianhua; Xing Jianming; Ding Yi; Ge Zili

    2006-01-01

    Objective: To investigate the non-typical MRI findings of the branchial cleft cysts in order to improve their diagnosis. Methods: 10 cases with branchial cleft cysts proven by surgery and pathology were collected and their MRI features were analyzed. There were 6 males and 4 females, aged 15 to 70, with an average age of 37. All patients underwent plain MR scan, 6 patients underwent enhanced scan, and 4 patients underwent magnetic resonance angiography. Results: All 10 cases were second branchial cleft cysts, including 4 of Bailey type I and 6 of type II. The non-typical MRI findings comprised haematocele (2 cases), extraordinarily thick cyst wall (4 cases), solidified cystic fluid (2 cases), and concomitant canceration (2 cases), which made the diagnoses more difficult. Conclusion: The diagnosis of branchial cleft cysts with non-typical MRI features should be combined with their characteristic location at the lateral portion of the neck, adjacent to the anterior border of the sternocleidomastoid muscle at the mandibular angle. Findings such as a thickened wall, ill-defined margin, and vascular involvement or jugular lymphadenectasis strongly suggest a cancerous tendency. (authors)

  1. Neural Interfaces for Intracortical Recording: Requirements, Fabrication Methods, and Characteristics

    Directory of Open Access Journals (Sweden)

    Katarzyna M. Szostak

    2017-12-01

    Full Text Available Implantable neural interfaces for central nervous system research have been designed with wire, polymer, or micromachining technologies over the past 70 years. Research on biocompatible materials, ideal probe shapes, and insertion methods has resulted in building more and more capable neural interfaces. Although the trend is promising, the long-term reliability of such devices has not yet met the required criteria for chronic human application. The performance of neural interfaces in chronic settings often degrades due to foreign body response to the implant, which is initiated by the surgical procedure and related to the probe structure and the material properties used in fabricating the neural interface. In this review, we identify the key requirements for neural interfaces for intracortical recording, describe three different types of probes (microwire, micromachined, and polymer-based), their materials and fabrication methods, and discuss their characteristics and related challenges.

  2. Lipreading Ability and Its Cognitive Correlates in Typically Developing Children and Children with Specific Language Impairment

    Science.gov (United States)

    Heikkilä, Jenni; Lonka, Eila; Ahola, Sanna; Meronen, Auli; Tiippana, Kaisa

    2017-01-01

    Purpose: Lipreading and its cognitive correlates were studied in school-age children with typical language development and delayed language development due to specific language impairment (SLI). Method: Forty-two children with typical language development and 20 children with SLI were tested by using a word-level lipreading test and an extensive…

  3. Impact of Typical Aging and Parkinson's Disease on the Relationship among Breath Pausing, Syntax, and Punctuation

    Science.gov (United States)

    Huber, Jessica E.; Darling, Meghan; Francis, Elaine J.; Zhang, Dabao

    2012-01-01

    Purpose: The present study examines the impact of typical aging and Parkinson's disease (PD) on the relationship among breath pausing, syntax, and punctuation. Method: Thirty young adults, 25 typically aging older adults, and 15 individuals with PD participated. Fifteen participants were age- and sex-matched to the individuals with PD.…

  4. 42 CFR 137.294 - What is the typical IHS environmental review process for construction projects?

    Science.gov (United States)

    2010-10-01

    ... SELF-GOVERNANCE Construction Nepa Process § 137.294 What is the typical IHS environmental review... impact on the environment, and therefore do not require environmental impact statements (EIS). Under current IHS procedures, an environmental review is performed on all construction projects. During the IHS...

  5. Typical Periods for Two-Stage Synthesis by Time-Series Aggregation with Bounded Error in Objective Function

    Energy Technology Data Exchange (ETDEWEB)

    Bahl, Björn; Söhler, Theo; Hennen, Maike; Bardow, André, E-mail: andre.bardow@ltt.rwth-aachen.de [Institute of Technical Thermodynamics, RWTH Aachen University, Aachen (Germany)

    2018-01-08

    Two-stage synthesis problems simultaneously consider here-and-now decisions (e.g., optimal investment) and wait-and-see decisions (e.g., optimal operation). The optimal synthesis of energy systems reveals such a two-stage character. The synthesis of energy systems involves multiple large time series such as energy demands and energy prices. Since problem size increases with the size of the time series, synthesis of energy systems leads to complex optimization problems. To reduce the problem size without losing solution quality, we propose a method for time-series aggregation to identify typical periods. Typical periods retain the chronology of time steps, which enables modeling of energy systems, e.g., with storage units or start-up cost. The aim of the proposed method is to obtain few typical periods with few time steps per period, while accurately representing the objective function of the full time series, e.g., cost. Thus, we determine the error of time-series aggregation as the cost difference between operating the optimal design for the aggregated time series and for the full time series. Thereby, we rigorously bound the maximum performance loss of the optimal energy system design. In an initial step, the proposed method identifies the best length of typical periods by autocorrelation analysis. Subsequently, an adaptive procedure determines aggregated typical periods employing the clustering algorithm k-medoids, which groups similar periods into clusters and selects one representative period per cluster. Moreover, the number of time steps per period is aggregated by a novel clustering algorithm maintaining chronology of the time steps in the periods. The method is iteratively repeated until the error falls below a threshold value. A case study based on a real-world synthesis problem of an energy system shows that time-series aggregation from 8,760 time steps to 2 typical periods with 2 time steps each results in an error smaller than the optimality gap of
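
    The clustering step described above (k-medoids: group similar periods, keep one representative per cluster) can be sketched in a few lines of pure Python. This is an illustrative toy, not the authors' adaptive aggregation method; it assumes distinct, equal-length periods and naive initialization:

```python
def dist(p, q):
    """Squared Euclidean distance between two periods."""
    return sum((x - y) ** 2 for x, y in zip(p, q))

def k_medoids(periods, k, iters=20):
    """Toy k-medoids: returns indices of representative 'typical' periods."""
    medoids = list(range(k))                 # naive initial medoid indices
    for _ in range(iters):
        # assignment step: attach each period to its nearest medoid
        clusters = {m: [] for m in medoids}
        for i, p in enumerate(periods):
            best = min(medoids, key=lambda m: dist(p, periods[m]))
            clusters[best].append(i)
        # update step: medoid = member minimizing total in-cluster distance
        new = sorted(
            min(idxs, key=lambda i: sum(dist(periods[i], periods[j]) for j in idxs))
            for idxs in clusters.values())
        if new == medoids:                   # converged
            break
        medoids = new
    return medoids
```

    On a toy series with two obvious groups, the two cluster representatives are recovered; in the paper's setting each representative period then stands in for its whole cluster in the optimization.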

  6. Conducting organizational safety reviews - requirements, methods and experience

    International Nuclear Information System (INIS)

    Reiman, T.; Oedewald, P.; Wahlstroem, B.; Rollenhagen, C.; Kahlbom, U.

    2008-03-01

    Organizational safety reviews are part of the safety management process of power plants. They are typically performed after major reorganizations, significant incidents or according to specified review programs. Organizational reviews can also be a part of a benchmarking between organizations that aims to improve work practices. Thus, they are important instruments in proactive safety management and safety culture. Most methods that have been used for organizational reviews are based more on practical considerations than a sound scientific theory of how various organizational or technical issues influence safety. Review practices and methods also vary considerably. The objective of this research is to promote understanding on approaches used in organizational safety reviews as well as to initiate discussion on criteria and methods of organizational assessment. The research identified a set of issues that need to be taken into account when planning and conducting organizational safety reviews. Examples of the issues are definition of appropriate criteria for evaluation, the expertise needed in the assessment and the organizational motivation for conducting the assessment. The study indicates that organizational safety assessments involve plenty of issues and situations where choices have to be made regarding what is considered valid information and a balance has to be struck between focus on various organizational phenomena. It is very important that these choices are based on a sound theoretical framework and that these choices can later be evaluated together with the assessment findings. The research concludes that at its best, the organizational safety reviews can be utilised as a source of information concerning the changing vulnerabilities and the actual safety performance of the organization. In order to do this, certain basic organizational phenomena and assessment issues have to be acknowledged and considered. 
The research concludes with recommendations on

  7. Conducting organizational safety reviews - requirements, methods and experience

    Energy Technology Data Exchange (ETDEWEB)

    Reiman, T.; Oedewald, P.; Wahlstroem, B. [Technical Research Centre of Finland, VTT (Finland); Rollenhagen, C. [Royal Institute of Technology, KTH, (Sweden); Kahlbom, U. [RiskPilot (Sweden)

    2008-03-15

    Organizational safety reviews are part of the safety management process of power plants. They are typically performed after major reorganizations, significant incidents or according to specified review programs. Organizational reviews can also be a part of a benchmarking between organizations that aims to improve work practices. Thus, they are important instruments in proactive safety management and safety culture. Most methods that have been used for organizational reviews are based more on practical considerations than a sound scientific theory of how various organizational or technical issues influence safety. Review practices and methods also vary considerably. The objective of this research is to promote understanding on approaches used in organizational safety reviews as well as to initiate discussion on criteria and methods of organizational assessment. The research identified a set of issues that need to be taken into account when planning and conducting organizational safety reviews. Examples of the issues are definition of appropriate criteria for evaluation, the expertise needed in the assessment and the organizational motivation for conducting the assessment. The study indicates that organizational safety assessments involve plenty of issues and situations where choices have to be made regarding what is considered valid information and a balance has to be struck between focus on various organizational phenomena. It is very important that these choices are based on a sound theoretical framework and that these choices can later be evaluated together with the assessment findings. The research concludes that at its best, the organizational safety reviews can be utilised as a source of information concerning the changing vulnerabilities and the actual safety performance of the organization. In order to do this, certain basic organizational phenomena and assessment issues have to be acknowledged and considered. 
The research concludes with recommendations on

  8. Examining the Language Phenotype in Children with Typical Development, Specific Language Impairment, and Fragile X Syndrome

    Science.gov (United States)

    Haebig, Eileen; Sterling, Audra; Hoover, Jill

    2016-01-01

    Purpose: One aspect of morphosyntax, finiteness marking, was compared in children with fragile X syndrome (FXS), specific language impairment (SLI), and typical development matched on mean length of utterance (MLU). Method: Nineteen children with typical development (mean age = 3.3 years), 20 children with SLI (mean age = 4.9 years), and 17 boys…

  9. Narrative versus style: Effect of genre-typical events versus genre-typical filmic realizations on film viewers’ genre recognition

    NARCIS (Netherlands)

    Visch, V.; Tan, E.

    2008-01-01

    This study investigated whether film viewers recognize four basic genres (comic, drama, action and nonfiction) on the basis of genre-typical event cues or of genre-typical filmic realization cues of events. Event cues are similar to the narrative content of a film sequence, while filmic realization

  10. Sensitivity Analysis of Hydraulic Methods Regarding Hydromorphologic Data Derivation Methods to Determine Environmental Water Requirements

    Directory of Open Access Journals (Sweden)

    Alireza Shokoohi

    2015-07-01

    Full Text Available This paper studies the accuracy of hydraulic methods in determining environmental flow requirements. Despite the vital importance of deriving river cross sectional data for hydraulic methods, few studies have focused on the criteria for deriving this data. The present study shows that the depth of cross section has a meaningful effect on the results obtained from hydraulic methods and that, considering fish as the index species for river habitat analysis, an optimum depth of 1 m should be assumed for deriving information from cross sections. The second important parameter required for extracting the geometric and hydraulic properties of rivers is the selection of an appropriate depth increment; ∆y. In the present research, this parameter was found to be equal to 1 cm. The uncertainty of the environmental discharge evaluation, when allocating water in areas with water scarcity, should be kept as low as possible. The Manning friction coefficient (n is an important factor in river discharge calculation. Using a range of "n" equal to 3 times the standard deviation for the study area, it is shown that the influence of friction coefficient on the estimation of environmental flow is much less than that on the calculation of river discharge.
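
    The role of the Manning coefficient n in the discharge calculation, whose sensitivity the study quantifies, follows from Manning's equation. A small sketch (SI units; the numbers in the usage note are illustrative, not the study's data):

```python
def manning_discharge(n, area, hydraulic_radius, slope):
    """Manning's equation for open-channel flow (SI units):
    Q = (1/n) * A * R^(2/3) * S^(1/2), returned in m^3/s.
    Discharge scales as 1/n, so errors in n map directly into Q."""
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5
```

    For instance, with n = 0.05, A = 10 m^2, R = 1 m and S = 0.0001, the discharge is 2.0 m^3/s; doubling n halves it.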

  11. Proposed New Method of Interpretation of Infrared Ship Signature Requirements

    NARCIS (Netherlands)

    Neele, F.P.; Wilson, M.T.; Youern, K.

    2005-01-01

    A new method of deriving and defining requirements for the infrared signature of new ships is presented. The current approach is to specify the maximum allowed temperature or radiance contrast of the ship with respect to its background. At present, in most NATO countries, it is the contractor’s

  12. Perfect-use and typical-use Pearl Index of a contraceptive mobile app.

    Science.gov (United States)

    Berglund Scherwitzl, E; Lundberg, O; Kopp Kallner, H; Gemzell Danielsson, K; Trussell, J; Scherwitzl, R

    2017-12-01

    The Natural Cycles application is a fertility awareness-based contraceptive method that uses dates of menstruation and basal body temperature to inform couples whether protected intercourse is needed to prevent pregnancies. Our purpose with this study is to investigate the contraceptive efficacy of the mobile application by evaluating the perfect- and typical-use Pearl Index. In this prospective observational study, 22,785 users of the application logged a total of 18,548 woman-years of data into the application. We used these data to calculate typical- and perfect-use Pearl Indexes, as well as 13-cycle pregnancy rates using life-table analysis. We found a typical-use Pearl Index of 6.9 pregnancies per 100 woman-years [95% confidence interval (CI): 6.5-7.2], corrected to 6.8 (95% CI: 6.4-7.2) when truncating users after 12 months. We estimated a 13-cycle typical-use failure rate of 8.3% (95% CI: 7.8-8.9). We found that the perfect-use Pearl Index was 1.0 pregnancy per 100 woman-years (95% CI: 0.5-1.5). Finally, we estimated that the rate of pregnancies from cycles where the application erroneously flagged a fertile day as infertile was 0.5 (95% CI: 0.4-0.7) per 100 woman-years. We estimated a discontinuation rate over 12 months of 54%. This study shows that the efficacy of a contraceptive mobile application is higher than usually reported for traditional fertility awareness-based methods. The application may contribute to reducing the unmet need for contraception. The measured typical- and perfect-use efficacies of the mobile application Natural Cycles are important parameters for women considering their contraceptive options as well as for the clinicians advising them. The large available data set in this paper allows for future studies on acceptability, for example, by studying the efficacy for different cohorts and geographic regions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
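
    The Pearl Index used above is simply pregnancies per 100 woman-years of exposure, and the 13-cycle rate comes from a life-table product of per-cycle continuation probabilities. A minimal sketch (the counts in the usage note are illustrative, not the study's raw data):

```python
def pearl_index(pregnancies, woman_years):
    """Pearl Index: unintended pregnancies per 100 woman-years of exposure."""
    return 100.0 * pregnancies / woman_years

def thirteen_cycle_failure_rate(per_cycle_rates):
    """Life-table cumulative failure over a sequence of cycles:
    1 minus the product of per-cycle continuation probabilities."""
    surviving = 1.0
    for r in per_cycle_rates:
        surviving *= (1.0 - r)
    return 1.0 - surviving
```

    For example, 69 pregnancies over 1,000 woman-years gives a Pearl Index of 6.9; a constant 1% failure rate per cycle accumulates to about 12.3% over 13 cycles.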

  13. Extended abstract: Partial row projection methods

    Energy Technology Data Exchange (ETDEWEB)

    Bramley, R.; Lee, Y. [Indiana Univ., Bloomington, IN (United States)

    1996-12-31

    Accelerated row projection (RP) algorithms for solving linear systems Ax = b are a class of iterative methods which in theory converge for any nonsingular matrix. RP methods are by definition ones that require finding the orthogonal projection of vectors onto the null space of block rows of the matrix. The Kaczmarz form, considered here because it has a better spectrum for iterative methods, has an iteration matrix that is the product of such projectors. Because the straightforward Kaczmarz method converges slowly for practical problems, an outer CG acceleration is typically applied. Definiteness, symmetry, or localization of the eigenvalues of the coefficient matrix is not required. In spite of this robustness, work has generally been limited to structured systems such as block tridiagonal matrices because, unlike many iterative solvers, RP methods cannot be implemented by simply supplying a matrix-vector multiplication routine. Finding the orthogonal projection of vectors onto the null space of block rows of the matrix in practice requires accessing the actual entries in the matrix. This report introduces a new partial RP algorithm which retains advantages of the RP methods.
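
    The basic (unaccelerated) Kaczmarz sweep that the abstract builds on projects the current iterate onto the hyperplane defined by one row at a time. A pure-Python toy sketch, not the report's accelerated partial-RP algorithm:

```python
def kaczmarz(A, b, sweeps=200):
    """Classical Kaczmarz iteration for Ax = b (A as list of rows).
    Each step projects x onto the hyperplane {x : a_i . x = b_i}."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for row, bi in zip(A, b):
            dot = sum(r * xi for r, xi in zip(row, x))
            norm2 = sum(r * r for r in row)          # ||a_i||^2
            scale = (bi - dot) / norm2
            x = [xi + scale * r for xi, r in zip(x, row)]
    return x
```

    On a system with orthogonal rows the sweep converges in one pass; for general matrices convergence is slow, which is why an outer CG acceleration is applied in practice.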

  14. Typical IAEA operations at a fuel fabrication plant

    International Nuclear Information System (INIS)

    Morsy, S.

    1984-01-01

    The IAEA operations performed at a typical Fuel Fabrication Plant are explained. To make the analysis less general the case of Low Enriched Uranium (LEU) Fuel Fabrication Plants is considered. Many of the conclusions drawn from this analysis could be extended to other types of fabrication plants. The safeguards objectives and goals at LEU Fuel Fabrication Plants are defined followed by a brief description of the fabrication process. The basic philosophy behind nuclear material stratification and the concept of Material Balance Areas (MBA's) and Key Measurement Points (KMP's) is explained. The Agency operations and verification methods used during physical inventory verifications are illustrated

  15. Latency in Visionic Systems: Test Methods and Requirements

    Science.gov (United States)

    Bailey, Randall E.; Arthur, J. J., III; Williams, Steven P.; Kramer, Lynda J.

    2005-01-01

    A visionics device creates a pictorial representation of the external scene for the pilot. The ultimate objective of these systems may be to electronically generate a form of Visual Meteorological Conditions (VMC) to eliminate weather or time-of-day as an operational constraint and provide enhancement over actual visual conditions where eye-limiting resolution may be a limiting factor. Empirical evidence has shown that the total system delays or latencies including the imaging sensors and display systems, can critically degrade their utility, usability, and acceptability. Definitions and measurement techniques are offered herein as common test and evaluation methods for latency testing in visionics device applications. Based upon available data, very different latency requirements are indicated based upon the piloting task, the role in which the visionics device is used in this task, and the characteristics of the visionics cockpit display device including its resolution, field-of-regard, and field-of-view. The least stringent latency requirements will involve Head-Up Display (HUD) applications, where the visionics imagery provides situational information as a supplement to symbology guidance and command information. Conversely, the visionics system latency requirement for a large field-of-view Head-Worn Display application, providing a Virtual-VMC capability from which the pilot will derive visual guidance, will be the most stringent, having a value as low as 20 msec.

  16. Evaluation of methods to estimate the essential amino acids requirements of fish from the muscle amino acid profile

    Directory of Open Access Journals (Sweden)

    Álvaro José de Almeida Bicudo

    2014-03-01

    Full Text Available Many methods to estimate amino acid requirements based on the amino acid profile of fish have been proposed. This study evaluates the methodologies proposed by Meyer & Fracalossi (2005) and by Tacon (1989) to estimate the amino acid requirements of fish, which do not require prior knowledge of the nutritional requirement for a reference amino acid. Data on the amino acid requirements of pacu, Piaractus mesopotamicus, were used to validate the accuracy of those methods. Meyer & Fracalossi's and Tacon's methodologies estimated the lysine requirement of pacu at 13% and 23%, respectively, above the requirement determined using the dose-response method. The values estimated by both methods lie within the range of requirements determined for other omnivorous fish species, with the Meyer & Fracalossi (2005) method showing better accuracy.

  17. A Typical Verification Challenge for the GRID

    NARCIS (Netherlands)

    van de Pol, Jan Cornelis; Bal, H. E.; Brim, L.; Leucker, M.

    2008-01-01

    A typical verification challenge for the GRID community is presented. The concrete challenge is to implement a simple recursive algorithm for finding the strongly connected components in a graph. The graph is typically stored in the collective memory of a number of computers, so a distributed
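
    As a sequential baseline for the challenge, the recursive SCC computation can be sketched with Tarjan's algorithm. This toy runs on a single machine with the graph as an adjacency dict; distributing it over the collective memory of several computers is precisely the open problem the challenge poses:

```python
def tarjan_scc(graph):
    """Tarjan's recursive algorithm: returns the strongly connected
    components of a directed graph given as {node: [successors]}."""
    index, low = {}, {}
    stack, on_stack = [], set()
    sccs = []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:           # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(sorted(comp))

    for v in list(graph):
        if v not in index:
            visit(v)
    return sccs
```

    The depth-first recursion is the distribution obstacle: each DFS step depends on the previous one, so the algorithm does not parallelize naively.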

  18. KidReporter : a method for engaging children in making a newspaper to gather user requirements

    NARCIS (Netherlands)

    Bekker, M.M.; Beusmans, J.; Keyson, D.V.; Lloyd, P.A.; Bekker, M.M.; Markopoulos, P.; Tsikalkina, M.

    2002-01-01

    We describe a design method, called the KidReporter method, for gathering user requirements from children. Two school classes participated in making a newspaper about a zoo, to gather requirements for the design process of an interactive educational game. The educational game was developed to

  19. A simple eigenfunction convergence acceleration method for Monte Carlo

    International Nuclear Information System (INIS)

    Booth, Thomas E.

    2011-01-01

    Monte Carlo transport codes typically use a power iteration method to obtain the fundamental eigenfunction. The standard convergence rate for the power iteration method is the ratio of the first two eigenvalues, that is, k_2/k_1. Modifications to the power method have accelerated the convergence by explicitly calculating the subdominant eigenfunctions as well as the fundamental. Calculating the subdominant eigenfunctions requires using particles of negative and positive weights and appropriately canceling the negative and positive weight particles. Incorporating both negative weights and a ± weight cancellation requires a significant change to current transport codes. This paper presents an alternative convergence acceleration method that does not require modifying the transport codes to deal with the problems associated with tracking and cancelling particles of ± weights. Instead, only positive weights are used in the acceleration method. (author)
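
    The power iteration whose convergence rate the abstract cites (the ratio k_2/k_1 of the two largest eigenvalues) can be sketched for a generic matrix. This toy is illustrative only, not a Monte Carlo transport implementation:

```python
def power_iteration(A, iters=100):
    """Basic power iteration: returns (dominant eigenvalue estimate,
    normalized eigenvector estimate) for a square matrix A given as
    a list of rows. Converges at rate |k2/k1| per iteration."""
    n = len(A)
    x = [1.0] * n                        # initial guess
    k = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        k = max(abs(v) for v in y)       # eigenvalue estimate (inf-norm)
        x = [v / k for v in y]
    return k, x
```

    When k2 is close to k1 the per-iteration error shrinks only by k2/k1, which is exactly the slow-convergence problem the acceleration methods in the abstract address.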

  20. Generation of a typical meteorological year for north–east, Nigeria

    International Nuclear Information System (INIS)

    Ohunakin, Olayinka S.; Adaramola, Muyiwa S.; Oyewola, Olanrewaju M.; Fagbenle, Richard O.

    2013-01-01

    Highlights: • TMY for sites in north–east Nigeria was produced using the Finkelstein–Schafer method. • The TMY was found to represent the long-term weather parameters well. • The generated TMY can be used in the design and evaluation of solar energy systems. • A handy database for the estimation of building heating loads in north–east Nigeria. - Abstract: The Finkelstein–Schafer statistical method was applied to analyze a 34-year period (1975–2008) of hourly measured weather data, including global solar radiation, dry bulb temperature, precipitation, relative humidity and wind speed, in order to generate a typical meteorological year (TMY) for five locations spread across the north–east zone of Nigeria. The selection criteria are based on solar radiation together with dry bulb temperature values, and representative typical meteorological months (TMMs) were selected by choosing the one with the smallest deviation from the long-term cumulative distribution function. A close agreement is observed between the generated TMY and long-term averages. The TMY generated will be very useful for the optimal design and performance evaluation of solar energy conversion systems, heating, ventilation, and air conditioning (HVAC) and other solar-energy-dependent systems to be located in this part of Nigeria.
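
    The Finkelstein–Schafer statistic at the heart of the method compares a candidate month's empirical CDF with the long-term CDF of the same calendar month; the candidate with the smallest FS value becomes the typical meteorological month. A minimal sketch with illustrative data (not the study's records):

```python
def empirical_cdf(sample, x):
    """Fraction of observations in `sample` that are <= x."""
    return sum(1 for s in sample if s <= x) / len(sample)

def fs_statistic(candidate_month, long_term):
    """Finkelstein-Schafer statistic: mean absolute difference between the
    candidate month's empirical CDF and the long-term CDF, evaluated at
    the candidate's own data points. Smaller = more 'typical'."""
    return sum(abs(empirical_cdf(candidate_month, x) - empirical_cdf(long_term, x))
               for x in candidate_month) / len(candidate_month)
```

    A month identical to the long-term record scores 0; months whose distribution drifts away from the long-term one score higher and are rejected as TMM candidates.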

  1. Fault tree construction of hybrid system requirements using qualitative formal method

    International Nuclear Information System (INIS)

    Lee, Jang-Soo; Cha, Sung-Deok

    2005-01-01

    When specifying requirements for software controlling hybrid systems and conducting safety analysis, engineers experience that requirements are often known only in qualitative terms and that existing fault tree analysis techniques provide little guidance on formulating and evaluating potential failure modes. In this paper, we propose Causal Requirements Safety Analysis (CRSA) as a technique to qualitatively evaluate causal relationship between software faults and physical hazards. This technique, extending qualitative formal method process and utilizing information captured in the state trajectory, provides specific guidelines on how to identify failure modes and relationship among them. Using a simplified electrical power system as an example, we describe step-by-step procedures of conducting CRSA. Our experience of applying CRSA to perform fault tree analysis on requirements for the Wolsong nuclear power plant shutdown system indicates that CRSA is an effective technique in assisting safety engineers

  2. 沼泽湿地生态储水量及生态需水量计算方法%Method for calculating ecological water storage and ecological water requirement of marsh

    Institute of Scientific and Technical Information of China (English)

    李丽娟; 李九一; 粱丽乔; 柳玉梅

    2009-01-01

    As one of the most typical wetlands, marsh plays an important role in hydrological and economic aspects, especially in maintaining biological diversity. In this study, the definition and connotation of the ecological water storage of marsh are discussed for the first time, and its distinction from and relationship with ecological water requirement are also analyzed. Furthermore, the basis and method of calculating ecological water storage and ecological water requirement are provided, and the Momoge wetland is given as an example of the calculation of the two variables. The ecological water use of marsh can be ascertained according to ecological water storage and ecological water requirement. To achieve a reasonable spatial and temporal variation of water storage and rational water resources planning, the suitable quantity of water supply to the marsh can be calculated according to the hydrological conditions, ecological demand and actual water resources.

  3. Methods for ensuring compliance with regulatory requirements: regulators and operators

    International Nuclear Information System (INIS)

    Fleischmann, A.W.

    1989-01-01

    Some of the methods of ensuring compliance with regulatory requirements contained in various radiation protection documents such as Regulations, ICRP Recommendations etc. are considered. These include radiation safety officers and radiation safety committees, personnel monitoring services, dissemination of information, inspection services and legislative power of enforcement. Difficulties in ensuring compliance include outmoded legislation, financial and personnel constraints

  4. Language Learning of Children with Typical Development Using a Deductive Metalinguistic Procedure

    Science.gov (United States)

    Finestack, Lizbeth H.

    2014-01-01

    Purpose: In the current study, the author aimed to determine whether 4- to 6-year-old typically developing children possess requisite problem-solving and language abilities to produce, generalize, and retain a novel verb inflection when taught using an explicit, deductive teaching procedure. Method: Study participants included a cross-sectional…

  5. Evaluation of four methods for estimating leaf area of isolated trees

    Science.gov (United States)

    P.J. Peper; E.G. McPherson

    2003-01-01

    The accurate modeling of the physiological and functional processes of urban forests requires information on the leaf area of urban tree species. Several non-destructive, indirect leaf area sampling methods have shown good performance for homogenous canopies. These methods have not been evaluated for use in urban settings where trees are typically isolated and...

  6. 42 CFR 84.146 - Method of measuring the power and torque required to operate blowers.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Method of measuring the power and torque required... RESPIRATORY PROTECTIVE DEVICES Supplied-Air Respirators § 84.146 Method of measuring the power and torque.... These are used to facilitate timing. To determine the torque or horsepower required to operate the...

  7. Benefit and cost curves for typical pollination mutualisms.

    Science.gov (United States)

    Morris, William F; Vázquez, Diego P; Chacoff, Natacha P

    2010-05-01

    Mutualisms provide benefits to interacting species, but they also involve costs. If costs come to exceed benefits as population density or the frequency of encounters between species increases, the interaction will no longer be mutualistic. Thus curves that represent benefits and costs as functions of interaction frequency are important tools for predicting when a mutualism will tip over into antagonism. Currently, most of what we know about benefit and cost curves in pollination mutualisms comes from highly specialized pollinating seed-consumer mutualisms, such as the yucca moth-yucca interaction. There, benefits to female reproduction saturate as the number of visits to a flower increases (because the amount of pollen needed to fertilize all the flower's ovules is finite), but costs continue to increase (because pollinator offspring consume developing seeds), leading to a peak in seed production at an intermediate number of visits. But for most plant-pollinator mutualisms, costs to the plant are more subtle than consumption of seeds, and how such costs scale with interaction frequency remains largely unknown. Here, we present reasonable benefit and cost curves that are appropriate for typical pollinator-plant interactions, and we show how they can result in a wide diversity of relationships between net benefit (benefit minus cost) and interaction frequency. We then use maximum-likelihood methods to fit net-benefit curves to measures of female reproductive success for three typical pollination mutualisms from two continents, and for each system we chose the most parsimonious model using information-criterion statistics. We discuss the implications of the shape of the net-benefit curve for the ecology and evolution of plant-pollinator mutualisms, as well as the challenges that lie ahead for disentangling the underlying benefit and cost curves for typical pollination mutualisms.
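
    The shape argument above (saturating benefit, steadily increasing cost, net benefit peaking at an intermediate visitation rate) can be illustrated with a toy net-benefit curve. The functional forms and parameter values below are illustrative assumptions, not the paper's fitted models:

```python
def net_benefit(visits, b_max=1.0, half_sat=2.0, cost_per_visit=0.05):
    """Toy net-benefit curve for a plant: a saturating (Michaelis-Menten)
    benefit of pollinator visits minus a linear per-visit cost."""
    benefit = b_max * visits / (half_sat + visits)   # saturates at b_max
    cost = cost_per_visit * visits                    # grows without bound
    return benefit - cost

# The curve rises, peaks at an intermediate visit number, then declines:
# beyond the peak, extra visits cost more than they contribute.
best = max(range(50), key=net_benefit)
```

    With these parameters the peak falls at 4 visits; past it the interaction drifts from mutualism toward antagonism, which is exactly the tipping point the benefit and cost curves are meant to locate.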

  8. High performance sealing - meeting nuclear and aerospace requirements

    International Nuclear Information System (INIS)

    Wensel, R.; Metcalfe, R.

    1994-11-01

    Although high performance sealing is required in many places, two industries lead all others in terms of their demands: nuclear and aerospace. The factors that govern the high reliability and integrity of seals, particularly elastomer seals, for both industries are discussed. Aerospace requirements include low structural weight and a broad range of conditions, from the cold vacuum of space to the hot, high pressures of rocket motors. It is shown, by example, how a seal can be made an integral part of a structure in order to improve performance, rather than using a conventional handbook design. Typical processes are then described for selection, specification and procurement of suitable elastomers, functional and accelerated performance testing, database development and service-life prediction. Methods for quality assurance of elastomer seals are summarized. Potentially catastrophic internal defects are a particular problem for conventional non-destructive inspection techniques. A new method of elastodynamic testing for these is described. (author)

  9. Stored object knowledge and the production of referring expressions: The case of color typicality

    Directory of Open Access Journals (Sweden)

    Hans eWesterbeek

    2015-07-01

    Full Text Available When speakers describe objects with atypical properties, do they include these properties in their referring expressions, even when that is not strictly required for unique referent identification? Based on previous work, we predict that speakers mention the color of a target object more often when the object is atypically colored, compared to when it is typical. Taking literature from object recognition and visual attention into account, we further hypothesize that this behavior is proportional to the degree to which a color is atypical, and whether color is a highly diagnostic feature in the referred-to object's identity. We investigate these expectations in two language production experiments, in which participants referred to target objects in visual contexts. In Experiment 1, we find a strong effect of color typicality: less typical colors for target objects predict higher proportions of referring expressions that include color. In Experiment 2 we manipulated objects with more complex shapes, for which color is less diagnostic, and we find that the color typicality effect is moderated by color diagnosticity: it is strongest for high-color-diagnostic objects (i.e., objects with a simple shape). These results suggest that the production of atypical color attributes results from a contrast with stored knowledge, an effect which is stronger when color is more central to object identification. Our findings offer evidence for models of reference production that incorporate general object knowledge, in order to be able to capture these effects of typicality on determining the content of referring expressions.

  10. Effects of stress typicality during speeded grammatical classification.

    Science.gov (United States)

    Arciuli, Joanne; Cupples, Linda

    2003-01-01

    The experiments reported here were designed to investigate the influence of stress typicality during speeded grammatical classification of disyllabic English words by native and non-native speakers. Trochaic nouns and iambic verbs were considered to be typically stressed, whereas iambic nouns and trochaic verbs were considered to be atypically stressed. Experiments 1a and 2a showed that while native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading, there were no overall effects during classification of spoken stimuli. However, a subgroup of native speakers with high error rates did show a significant effect during classification of spoken stimuli. Experiments 1b and 2b showed that non-native speakers classified typically stressed words more quickly and more accurately than atypically stressed words during reading. Typically stressed words were classified more accurately than atypically stressed words when the stimuli were spoken. Importantly, there was a significant relationship between error rates, vocabulary size and the size of the stress typicality effect in each experiment. We conclude that participants use information about lexical stress to help them distinguish between disyllabic nouns and verbs during speeded grammatical classification. This is especially so for individuals with a limited vocabulary who lack other knowledge (e.g., semantic knowledge) about the differences between these grammatical categories.

  11. An improvement of estimation method of source term to the environment for interfacing system LOCA for typical PWR using MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seok Jung; Kim, Tae Woon; Ahn, Kwang Il [Risk and Environmental Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2017-06-15

    An interfacing-system loss-of-coolant accident (ISLOCA) has been identified as the most hazardous accident scenario in typical PWR plants. The present study, as an effort to improve knowledge of the source term to the environment during an ISLOCA, focuses on an improvement of the estimation method. The improvement takes into account the effects of the broken pipeline and the auxiliary building structures relevant to an ISLOCA. The source term to the environment was estimated for the OPR-1000 plants using the MELCOR code, version 1.8.6. The key features of the source term showed that a massive amount of fission products was released from the beginning of core degradation until the vessel breach. The release amount of fission products may be affected by the broken pipeline and the auxiliary building structure associated with the release pathway.

  12. Reduction in requirements for allogeneic blood products: nonpharmacologic methods.

    Science.gov (United States)

    Hardy, J F; Bélisle, S; Janvier, G; Samama, M

    1996-12-01

    Various strategies have been proposed to decrease bleeding and allogeneic transfusion requirements during and after cardiac operations. This article attempts to document the usefulness, or lack thereof, of the nonpharmacologic methods available in clinical practice. Blood conservation methods were reviewed in chronologic order, as they become available to patients during the perioperative period. The literature in support of or against each strategy was reexamined critically. Avoidance of preoperative anemia and adherence to published guidelines for the practice of transfusion are of paramount importance. Intraoperatively, tolerance of low hemoglobin concentrations and use of autologous blood (predonated or harvested before bypass) will reduce allogeneic transfusions. The usefulness of plateletpheresis and retransfusion of shed mediastinal fluid remains controversial. Intraoperatively and postoperatively, maintenance of normothermia contributes to improved hemostasis. Several approaches have been shown to be effective. An efficient combination of methods can reduce, and sometimes abolish, the need for allogeneic blood products after cardiac operations, provided that all those involved in the care of cardiac surgical patients adhere thoughtfully to existing transfusion guidelines.

  13. Portion distortion: typical portion sizes selected by young adults.

    Science.gov (United States)

    Schwartz, Jaime; Byrd-Bredbenner, Carol

    2006-09-01

    The incidence of obesity has increased in parallel with increasing portion sizes of individually packaged and ready-to-eat prepared foods as well as foods served at restaurants. Portion distortion (perceiving large portion sizes as appropriate amounts to eat at a single eating occasion) may contribute to increasing energy intakes and expanding waistlines. The purpose of this study was to determine typical portion sizes that young adults select, how typical portion sizes compare with reference portion sizes (based in this study on the Nutrition Labeling and Education Act's quantities of food customarily eaten per eating occasion), and whether the size of typical portions has changed over time. Young adults (n=177, 75% female, age range 16 to 26 years) at a major northeastern university participated. Participants served themselves typical portion sizes of eight foods at breakfast (n=63) or six foods at lunch or dinner (n=62, n=52, respectively). Typical portion-size selections were unobtrusively weighed. A unit score was calculated by awarding 1 point for each food with a typical portion size that was within 25% larger or smaller than the reference portion; larger or smaller portions were given 0 points. Thus, each participant's unit score could range from 0 to 8 at breakfast or 0 to 6 at lunch and dinner. Analysis of variance or t tests were used to determine whether typical and reference portion sizes differed, and whether typical portion sizes changed over time. Mean unit scores (+/-standard deviation) were 3.63+/-1.27 and 1.89+/-1.14, for breakfast and lunch/dinner, respectively, indicating little agreement between typical and reference portion sizes. Typical portion sizes in this study tended to be significantly different from those selected by young adults in a similar study conducted 2 decades ago. Portion distortion seems to affect the portion sizes selected by young adults for some foods. 
This phenomenon has the potential to hinder weight loss, weight maintenance, and
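
The unit-score calculation described above can be sketched as follows; the foods and gram values are invented for illustration and are not taken from the study.

```python
def unit_score(typical, reference):
    """Award 1 point for each food whose typical portion falls within
    +/-25% of the reference portion (an illustrative reading of the
    scoring rule described in the abstract)."""
    score = 0
    for food, grams in typical.items():
        ref = reference[food]
        if 0.75 * ref <= grams <= 1.25 * ref:
            score += 1
    return score

# Hypothetical breakfast portions in grams
reference = {"cereal": 30, "milk": 244, "juice": 186, "bagel": 55}
typical   = {"cereal": 55, "milk": 250, "juice": 180, "bagel": 110}
print(unit_score(typical, reference))  # cereal and bagel oversized -> 2
```

A participant serving every food near the reference amount would score the maximum (here 4), while the oversized cereal and bagel portions above drop the score to 2.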

  14. 41 CFR 102-36.35 - What is the typical process for disposing of excess personal property?

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What is the typical... agency property or by obtaining excess property from other federal agencies in lieu of new procurements... eligible non-federal activities. Title 40 of the United States Code requires that surplus personal property...

  15. A study on prioritizing typical women’s entrepreneur characteristics

    Directory of Open Access Journals (Sweden)

    Ebrahim Ramezani

    2014-07-01

    Full Text Available Entrepreneurship is one of the main pivots of progress and growth of every country. The spread of entrepreneurship, particularly the role of women in this category, has sped up today more than at any other time. Many researchers believe that attention to women's entrepreneurship plays a remarkable role in the soundness and safety of a nation's economy. Perhaps in Iran less attention has been paid to this matter than in other countries and, for various reasons, there are not many entrepreneur women. However, employing typical entrepreneur women in various productive, industrial, commercial, social and cultural fields, and even beyond these, in the country's political issues, proves that women's role is magnificent and that in many cases they enjoy higher abilities than men. In this paper, using additive ratio assessment (ARAS) as a prioritizing method, eleven entrepreneur women were chosen to prioritize the criteria for measuring a typical woman entrepreneur's characteristics. The results show that, among the criteria, the balance between work and family carries the highest weight and fulfilling different jobs simultaneously the lowest.
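
The ARAS step can be sketched roughly as follows; the scores, weights, and number of alternatives below are invented for illustration and are not the study's data.

```python
import numpy as np

# Minimal ARAS (Additive Ratio Assessment) sketch with made-up data:
# rows = alternatives, columns = benefit criteria.
X = np.array([
    [9.0, 7.0, 8.0],
    [6.0, 8.0, 5.0],
    [7.0, 6.0, 9.0],
])
w = np.array([0.5, 0.3, 0.2])          # criteria weights, summing to 1

X0 = np.vstack([X.max(axis=0), X])     # prepend the ideal alternative
norm = X0 / X0.sum(axis=0)             # sum-normalize each column
S = (norm * w).sum(axis=1)             # weighted additive score per row
K = S[1:] / S[0]                       # utility relative to the ideal
ranking = np.argsort(-K)               # best alternative first
print(ranking)
```

The utility degree K (between 0 and 1) expresses how close each alternative comes to the ideal, which is how ARAS orders criteria or alternatives by weight.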

  16. Thermoluminescence dating method

    International Nuclear Information System (INIS)

    Zink, A.

    2004-01-01

    A crystal that is submitted to radiation stores energy and releases this energy in the form of light whenever it is heated. These 2 properties, the ability to store energy and the ability to reset the stored energy, are the pillars on which dating methods like thermoluminescence are based. A typical accuracy of the thermoluminescence method is between 5 and 7%, but an accuracy of 3% can be reached with a sufficient number of measurements. This article describes the application of thermoluminescence to the dating of a series of old terra-cotta statues. This time measurement is absolute and does not require any calibration; it represents the time elapsed since the last heating of the artifact. (A.C.)
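
The gain in accuracy with repeated measurements is consistent with simple averaging of independent errors, sketched below. This is an illustrative 1/sqrt(n) model, not the article's error analysis.

```python
import math

def combined_uncertainty(single_pct, n):
    """Uncertainty of the mean of n independent measurements,
    assuming uncorrelated errors (illustrative simplification)."""
    return single_pct / math.sqrt(n)

# A ~6% single-measurement accuracy reaches ~3% with 4 measurements
print(round(combined_uncertainty(6.0, 4), 1))
```

Under this assumption, quadrupling the number of aliquots halves the uncertainty, which is one plausible route from the typical 5-7% down to about 3%.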

  17. Modeling of requirement specification for safety critical real time computer system using formal mathematical specifications

    International Nuclear Information System (INIS)

    Sankar, Bindu; Sasidhar Rao, B.; Ilango Sambasivam, S.; Swaminathan, P.

    2002-01-01

    Full text: Real time computer systems are increasingly used for safety critical supervision and control of nuclear reactors. Typical application areas are supervision of the reactor core against coolant flow blockage, supervision of clad hot spots, supervision of undesirable power excursions, power control and control logic for fuel handling systems. The most frequent cause of faults in safety critical real time computer systems is traced to fuzziness in the requirement specification. To ensure the specified safety, it is necessary to model the requirement specification of safety critical real time computer systems using formal mathematical methods. Modeling eliminates the fuzziness in the requirement specification and also helps to prepare the verification and validation schemes. Test data can be easily designed from the model of the requirement specification. Z and B are the popular languages used for modeling the requirement specification. A typical safety critical real time computer system for supervising the reactor core of the prototype fast breeder reactor (PFBR) against flow blockage is taken as a case study. Modeling techniques and the actual model are explained in detail. The advantages of modeling for ensuring safety are summarized.

  18. Investigating the probability of detection of typical cavity shapes through modelling and comparison of geophysical techniques

    Science.gov (United States)

    James, P.

    2011-12-01

    assessed. The density of survey points required to achieve a required probability of detection can be calculated. The software aids informed choice of technique, improves survey design, and increases the likelihood of survey success, all factors sought in the engineering industry. As a simple example, the response from magnetometry, gravimetry, and gravity gradient techniques above an example 3 m deep, 1 m cube air cavity in limestone across a 15 m grid was calculated. The maximum responses above the cavity are small (amplitudes of 0.018 nT, 0.0013 mGal, and 8.3 eotvos, respectively), but at typical site noise levels the detection reliability is over 50% for the gravity gradient method on a single survey line. Increasing the number of survey points across the site increases the reliability of detection of the anomaly by the addition of probabilities. We can calculate the probability of detection at different profile spacings to assess the best possible survey design. At 1 m spacing the overall probability of detection by the gravity gradient method is over 90%, and over 60% for magnetometry (at 3 m spacing the probability drops to 32%). The use of modelling in near surface surveys is a useful tool to assess the feasibility of a range of techniques to detect subtle signals. Future work will integrate this work with borehole measured parameters.
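
The "addition of probabilities" across survey lines can be sketched as follows, assuming independent lines; this is a simplification of the modelling described above, with the numbers chosen only to echo the example.

```python
def combined_detection(p_single, n_lines):
    """Probability that at least one of n independent survey lines
    detects the anomaly: 1 - (probability that all lines miss it)."""
    return 1.0 - (1.0 - p_single) ** n_lines

# A >50% chance on one gravity-gradient line rises quickly with coverage
print(round(combined_detection(0.5, 3), 3))
```

This is why tightening the profile spacing (more lines crossing the anomaly) pushes the overall probability of detection from just over 50% on a single line toward the 90%+ quoted for 1 m spacing.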

  19. Toddlers' categorization of typical and scrambled dolls and cars.

    Science.gov (United States)

    Heron, Michelle; Slaughter, Virginia

    2008-09-01

    Previous research has demonstrated discrimination of scrambled from typical human body shapes at 15-18 months of age [Slaughter, V., & Heron, M. (2004). Origins and early development of human body knowledge. Monographs of the Society for Research in Child Development, 69]. In the current study 18-, 24- and 30-month-old infants were presented with four typical and four scrambled dolls in a sequential touching procedure, to assess the development of explicit categorization of human body shapes. Infants were also presented with typical and scrambled cars, allowing comparison of infants' categorization of scrambled and typical exemplars in a different domain. Spontaneous comments regarding category membership were recorded. Girls categorized dolls and cars as typical or scrambled at 30 months, whereas boys only categorized the cars. Earliest categorization was for typical and scrambled cars, at 24 months, but only for boys. Language-based knowledge, coded from infants' comments, followed the same pattern. This suggests that human body knowledge does not have privileged status in infancy. Gender differences in performance are discussed.

  20. Comparison of different methods to extract the required coefficient of friction for level walking.

    Science.gov (United States)

    Chang, Wen-Ruey; Chang, Chien-Chi; Matz, Simon

    2012-01-01

    The required coefficient of friction (RCOF) is an important predictor for slip incidents. Despite the wide use of the RCOF there is no standardised method for identifying the RCOF from ground reaction forces. This article presents a comparison of the outcomes from seven different methods, derived from those reported in the literature, for identifying the RCOF from the same data. While commonly used methods are based on a normal force threshold, percentage of stance phase or time from heel contact, a newly introduced hybrid method is based on a combination of normal force, time and direction of increase in coefficient of friction. Although no major differences were found with these methods in more than half the strikes, significant differences were found in a significant portion of strikes. Potential problems with some of these methods were identified and discussed and they appear to be overcome by the hybrid method. No standard method exists for determining the required coefficient of friction (RCOF), an important predictor for slipping. In this study, RCOF values from a single data set, using various methods from the literature, differed considerably for a significant portion of strikes. A hybrid method may yield improved results.
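
A minimal sketch of one normal-force-threshold style of RCOF extraction is given below; the threshold value and the force samples are invented for illustration, and this is not the article's exact algorithm.

```python
import numpy as np

def rcof_threshold(fx, fy, fz, fz_min=100.0):
    """Peak required coefficient of friction during stance, computed
    only where the normal force exceeds a threshold (fz_min, in
    newtons, is an assumed cutoff to exclude low-load samples)."""
    fx, fy, fz = map(np.asarray, (fx, fy, fz))
    shear = np.hypot(fx, fy)          # resultant horizontal force
    mask = fz > fz_min                # ignore samples near contact/toe-off
    cof = shear[mask] / fz[mask]      # instantaneous required COF
    return cof.max()

# Hypothetical ground-reaction samples (N) across one stance phase
fz = [50, 200, 600, 700, 650, 80]
fx = [10, 30, 90, 70, 50, 20]
fy = [0, 10, 30, 20, 10, 5]
print(round(rcof_threshold(fx, fy, fz), 3))
```

The hybrid method described in the abstract would add further conditions (time from heel contact and the direction of change of the COF ratio) on top of this basic thresholding.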

  1. Bioclim Deliverable D6b: application of statistical down-scaling within the BIOCLIM hierarchical strategy: methods, data requirements and underlying assumptions

    International Nuclear Information System (INIS)

    2004-01-01

    The overall aim of BIOCLIM is to assess the possible long term impacts due to climate change on the safety of radioactive waste repositories in deep formations. The coarse spatial scale of the Earth-system Models of Intermediate Complexity (EMICs) used in BIOCLIM compared with the BIOCLIM study regions and the needs of performance assessment creates a need for down-scaling. Most of the developmental work on down-scaling methodologies undertaken by the international research community has focused on down-scaling from the general circulation model (GCM) scale (with a typical spatial resolution of 400 km by 400 km over Europe in the current generation of models) using dynamical down-scaling (i.e., regional climate models (RCMs), which typically have a spatial resolution of 50 km by 50 km for models whose domain covers the European region) or statistical methods (which can provide information at the point or station scale) in order to construct scenarios of anthropogenic climate change up to 2100. Dynamical down-scaling (with the MAR RCM) is used in BIOCLIM WP2 to down-scale from the GCM (i.e., IPSL_CM4_D) scale. In the original BIOCLIM description of work, it was proposed that UEA would apply statistical down-scaling to IPSL_CM4_D output in WP2 as part of the hierarchical strategy. Statistical down-scaling requires the identification of statistical relationships between the observed large-scale and regional/local climate, which are then applied to large-scale GCM output, on the assumption that these relationships remain valid in the future (the assumption of stationarity). Thus it was proposed that UEA would investigate the extent to which it is possible to apply relationships between the present-day large-scale and regional/local climate to the relatively extreme conditions of the BIOCLIM WP2 snapshot simulations. Potential statistical down-scaling methodologies were identified from previous work performed at UEA. Appropriate station data from the case

  2. A generalized window energy rating system for typical office buildings

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Cheng; Chen, Tingyao; Yang, Hongxing; Chung, Tse-ming [Research Center for Building Environmental Engineering, Department of Building Services Engineering, The Hong Kong Polytechnic University, Hong Kong (China)

    2010-07-15

    Detailed computer simulation programs require lengthy inputs and cannot directly provide an insight into the relationship between window energy performance and the key window design parameters. Hence, several window energy rating systems (WERS) for residential houses and small buildings have been developed in different countries. Many studies showed that utilization of daylight through elaborate design and operation of windows leads to significant energy savings in both cooling and lighting in office buildings. However, the current WERSs do not consider the daylighting effect, while most daylighting analyses do not take into account the influence of convective and infiltration heat gains. Therefore, a generalized WERS for typical office buildings is presented, which takes all primary influence factors into account. The model includes embodied and operational energy uses and savings by a window to fully reflect interactions among the influence parameters. Reference locations selected for artificial lighting and glare control in current common simulation practice may cause uncompromised conflicts, which could result in over- or under-estimated energy performance. The widely used computer programs DOE2 and ADELINE, for hourly daylighting and cooling simulations, have their own weaknesses, which may result in unrealistic or inaccurate results. An approach is also presented for taking advantage of both programs while avoiding their weaknesses. The model and approach have been applied to a typical office building in Hong Kong as an example to demonstrate how a WERS for a particular location can be established and how well the model can work. The energy effects of window properties, window-to-wall ratio (WWR), building orientation and lighting control strategies have been analyzed, and can be indicated by the localized WERS. 
An application example also demonstrates that the algebraic WERS derived from simulation results can be easily used for the optimal design of

  3. Idiopathic intracranial hypertension: Atypical presentation

    International Nuclear Information System (INIS)

    Algahtani, Hussein A.; Obeid, Tahir H.; Abuzinadah, Ahmad R.; Baeesa, Saleh S.

    2007-01-01

    The objective was to describe the clinical features of 5 patients with a rare atypical presentation of idiopathic intracranial hypertension (IIH), and to propose a possible mechanism for this atypical presentation. We carried out a retrospective study of 5 patients admitted with IIH at King Khalid National Guard Hospital, Jeddah, Kingdom of Saudi Arabia, during the period from January 2001 to December 2005. All were females, with ages ranging from 24 to 40 years. The clinical presentations and the laboratory and imaging studies were analyzed. The opening pressures of the lumbar puncture tests were documented. All patients presented with headache. One had typical pain of trigeminal neuralgia and one had neck pain and radiculopathy. Facial diplegia was present in one patient, and two patients had bilateral 6th cranial neuropathy. Papilledema was present in all patients except one. Imaging studies were normal in all patients, and all but one had a very high opening pressure during lumbar puncture. All patients achieved full recovery with medical therapy in 6 to 12 weeks, with no relapse during the mean follow-up of 2 years. Atypical findings in IIH are rare and require a high index of suspicion for early diagnosis. (author)

  4. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    Science.gov (United States)

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  5. SSI response of a typical shear wall structure. Volume 1

    International Nuclear Information System (INIS)

    Johnson, J.J.; Schewe, E.C.; Maslenikov, O.R.

    1984-04-01

    The Simplified Methods project of the US NRC-funded Seismic Safety Margins Research Program (SSMRP) has as its goal the development of a methodology to perform routine seismic probabilistic risk assessments of commercial nuclear power plants. The study reported here develops calibration factors to relate best estimate response to design values accounting for approximations and simplifications in SSI analysis procedures. Nineteen cases were analyzed and in-structure response compared. The structure of interest was a typical shear wall structure. 6 references, 44 figures, 22 tables

  6. Integrated SNG Production in a Typical Nordic Sawmill

    Directory of Open Access Journals (Sweden)

    Sennai Mesfun

    2016-04-01

    Full Text Available Advanced biomass-based motor fuels and chemicals are becoming increasingly important to replace fossil energy sources within the coming decades. It is likely that the new biorefineries will evolve mainly from existing forest industry sites, as they already have the required biomass handling infrastructure in place. The main objective of this work is to assess the potential for increasing the profit margin from sawmill byproducts by integrating innovative downstream processes. The focus is on the techno-economic evaluation of an integrated site for biomass-based synthetic natural gas (bio-SNG production. The option of using the syngas in a biomass-integrated gasification combined cycle (b-IGCC for the production of electricity (instead of SNG is also considered for comparison. The process flowsheets that are used to analyze the energy and material balances are modelled in MATLAB and Simulink. A mathematical process integration model of a typical Nordic sawmill is used to analyze the effects on the energy flows in the overall site, as well as to evaluate the site economics. Different plant sizes have been considered in order to assess the economy-of-scale effect. The technical data required as input are collected from the literature and, in some cases, from experiments. The investment cost is evaluated on the basis of conducted studies, third party supplier budget quotations and in-house database information. This paper presents complete material and energy balances of the considered processes and the resulting process economics. Results show that in order for the integrated SNG production to be favored, depending on the sawmill size, a biofuel subsidy in the order of 28–52 €/MWh SNG is required.

  7. Sampling methods

    International Nuclear Information System (INIS)

    Loughran, R.J.; Wallbrink, P.J.; Walling, D.E.; Appleby, P.G.

    2002-01-01

    Methods for the collection of soil samples to determine levels of 137 Cs and other fallout radionuclides, such as excess 210 Pb and 7 Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137 Cs in erosion studies has been widely developed, while the application of fallout 210 Pb and 7 Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137 Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with that of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137 Cs. However, fallout 210 Pb and 7 Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth-distribution of fallout nuclides on slopes and depositional sites as well as their total inventories

  8. Accurate and Less-Disturbing Active Anti-Islanding Method based on PLL for Grid-Connected PV Inverters

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2008-01-01

    Islanding prediction is a necessary feature of inverter-based photovoltaic (PV) system in order to meet stringent standard requirements for interconnection with the electrical grid. Both passive and active anti-islanding methods exist. Typically, active methods modify a given parameter, which also...... extracted from the voltage at PCC moves outside of a preset threshold value. This new active anti-islanding method meets both standard requirements IEEE 929-2000, IEEE 1547.1 and VDE 0126.1.1. The disturbance used by this method is small compared to other active anti-islanding methods, such as active...

  9. Partitioning Evapotranspiration for Three Typical Ecosystems in the Heihe River Basin, Northwestern China

    Science.gov (United States)

    Zhou, S.; Yu, B.; Zhang, Y.; Huang, Y.; Wang, G.

    2017-12-01

    It is crucial to improve water use efficiency (WUE) and the transpiration fraction of evapotranspiration (T/ET) for water conservation in arid regions. As a link between carbon and water cycling, WUE is defined as the ratio of gross primary productivity (GPP) and ET at the ecosystem scale. By incorporating the effect of vapor pressure deficit (VPD), two underlying WUE (uWUE) formulations, i.e. a potential uWUE (uWUEp = GPP·VPD^0.5/T) and an apparent uWUE (uWUEa = GPP·VPD^0.5/ET), were proposed. uWUEp is nearly constant for a given vegetation type, while uWUEa varies with T/ET. The ratio of uWUEa to uWUEp was then used to estimate T/ET. This new method for ET partitioning was applied to three typical ecosystems in the Heihe River Basin. Growing season T/ET at the Daman site (0.63) was higher than that at the Arou and Huyanglin sites (0.55) due to the application of plastic film mulching. The effect of leaf area index (LAI) on seasonal variations in T/ET was strong for the Arou (R2=0.74) and Daman (R2=0.76) sites, but weak for the Huyanglin (R2=0.44) site. Daily T/ET derived using the uWUE method agreed with that using the isotope and lysimeter/eddy covariance methods during the peak growth season at the Daman site. The estimated T using the uWUE method showed consistent seasonal and diurnal patterns and magnitudes with that using the sap flow method at the Huyanglin site. In addition, the uWUE method is scale-independent, and can effectively capture T/ET variations in relation to LAI changes and the abrupt T/ET changes in response to individual irrigation events. These advantages make the uWUE method more effective for ET partitioning at the ecosystem scale, and it can be used for water resources management by predicting the seasonal pattern of irrigation water requirements in arid regions.
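
The uWUE partitioning above reduces to a one-line ratio, sketched below; the flux values and the uWUEp constant are invented for illustration, not the Heihe sites' data.

```python
import numpy as np

def transpiration_fraction(gpp, vpd, et, uwue_p):
    """T/ET from the uWUE ratio: uWUEa = GPP*sqrt(VPD)/ET, and
    T/ET = uWUEa / uWUEp, where uwue_p is the site's potential
    uWUE (here treated as a known constant)."""
    uwue_a = gpp * np.sqrt(vpd) / et
    return uwue_a / uwue_p

# Hypothetical daily fluxes: GPP (gC m-2 d-1), VPD (hPa), ET (mm d-1)
gpp = np.array([8.0, 9.0, 7.5])
vpd = np.array([9.0, 16.0, 4.0])
et  = np.array([4.0, 5.0, 3.0])
t_over_et = transpiration_fraction(gpp, vpd, et, uwue_p=10.0)
print(np.round(t_over_et, 2))
```

Because uWUEp is approximately constant for a vegetation type, all the day-to-day variation in the ratio is attributed to the transpiration fraction, which is what lets the method track responses to LAI changes and irrigation events.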

  10. A novel calibration method for phase-locked loops

    DEFF Research Database (Denmark)

    Cassia, Marco; Shah, Peter Jivan; Bruun, Erik

    2005-01-01

A novel method to calibrate the frequency response of a Phase-Locked Loop is presented. The method requires just an additional digital counter to measure the natural frequency of the PLL; moreover, it is capable of estimating the static phase offset. The measured value can be used to tune the PLL response to the desired value. The method is demonstrated mathematically on a typical PLL topology and is extended to SigmaDelta fractional-N PLLs. A set of simulations performed with two different simulators is used to verify the applicability of the method.
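
The abstract does not specify how the counter operates; as a hedged illustration of the underlying idea, the ringing (damped natural) frequency of an underdamped loop can be estimated by counting zero crossings of the phase error, which is exactly what a cheap digital counter can do. All signal parameters below are made up:

```python
import numpy as np

def estimate_ringing_freq(err, fs):
    """Estimate the ringing frequency of a PLL error signal by counting
    zero crossings, as a digital counter could.
    err: sampled phase-error signal; fs: sample rate in Hz."""
    signs = np.signbit(err)
    crossings = np.count_nonzero(signs[1:] != signs[:-1])
    duration = len(err) / fs
    # Two zero crossings occur per oscillation period
    return crossings / (2.0 * duration)

# Synthetic underdamped step response: 1 kHz ringing, light damping
fs = 1e6
t = np.arange(0, 0.02, 1 / fs)
err = np.exp(-200 * t) * np.cos(2 * np.pi * 1000 * t)
print(estimate_ringing_freq(err, fs))  # close to 1000 Hz
```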

  11. A novel method of 18F radiolabeling for PET.

    NARCIS (Netherlands)

    McBride, W.J.; Sharkey, R.M.; Karacay, H.; D'Souza, C.A.; Rossi, E.A.; Laverman, P.; Chang, C.H.; Boerman, O.C.; Goldenberg, D.M.

    2009-01-01

    Small biomolecules are typically radiolabeled with (18)F by binding it to a carbon atom, a process that usually is designed uniquely for each new molecule and requires several steps and hours to produce. We report a facile method wherein (18)F is first attached to aluminum as Al(18)F, which is then

  12. State analysis requirements database for engineering complex embedded systems

    Science.gov (United States)

    Bennett, Matthew B.; Rasmussen, Robert D.; Ingham, Michel D.

    2004-01-01

It has become clear that spacecraft system complexity is reaching a threshold where customary methods of control are no longer affordable or sufficiently reliable. At the heart of this problem are the conventional approaches to systems and software engineering based on subsystem-level functional decomposition, which fail to scale in the tangled web of interactions typically encountered in complex spacecraft designs. Furthermore, there is a fundamental gap between the requirements on software specified by systems engineers and the implementation of these requirements by software engineers. Software engineers must perform the translation of requirements into software code, hoping to accurately capture the systems engineer's understanding of the system behavior, which is not always explicitly specified. This gap opens up the possibility for misinterpretation of the systems engineer's intent, potentially leading to software errors. This problem is addressed by a systems engineering tool called the State Analysis Database, which captures system and software requirements in the form of explicit models. This paper describes how requirements for complex aerospace systems can be developed using the State Analysis Database.

  13. Theoretical overview of heating power and necessary heating supply temperatures in typical Danish single-family houses from the 1900s

    DEFF Research Database (Denmark)

    Østergaard, Dorte Skaarup; Svendsen, Svend

    2016-01-01

in typical Danish single-family houses constructed in the 1900s. The study provides a simplified theoretical overview of typical building constructions and standards for the calculation of design heat loss and design heating power in Denmark in the 1900s. The heating power and heating demand in six typical Danish single-family houses constructed in the 1900s were estimated based on simple steady-state calculations. We found that the radiators in existing single-family houses should not necessarily be expected to be over-dimensioned compared to current design heat loss. However, there is considerable potential for using low-temperature space heating in existing single-family houses under typical operating conditions. Older houses were not always found to require higher heating system temperatures than newer houses. We found that when these houses have gone through reasonable energy renovations, most…

  14. Anchorage of equipment - requirements and verification methods with emphasis on equipment of existing and constructed VVER-type nuclear power plants

    International Nuclear Information System (INIS)

    Masopust, R.

    1999-01-01

Criteria and verification methods which are recommended for use in the capacity evaluation of anchorage of safety-related equipment at WWER-type nuclear power plants are presented. Developed in compliance with the relevant basic standards documents specifically for anchorage of WWER-type equipment components, the criteria and methods cover different types of anchor bolts and other anchorage elements which are typical of existing, constructed, or reconstructed WWER-type nuclear power plants.

  15. Plutonium-239 production rate study using a typical fusion reactor

    International Nuclear Information System (INIS)

    Faghihi, F.; Havasi, H.; Amin-Mozafari, M.

    2008-01-01

The purpose of the present paper is to compute the fissile ²³⁹Pu produced by a supposed typical fusion reactor in operation, to meet the fuel requirement for other purposes (e.g. MOX fissile fuel). The fusion reactor is assumed to have a cylindrical geometry and to use uniformly distributed deuterium-tritium fuel, with a neutron wall load of 10 MW/m². Moreover, the reactor core is surrounded by six suggested blankets to make the best use of the physical conditions described herein. We determined the neutron flux in each considered blanket as well as tritium self-sufficiency using a two-group neutron energy treatment; the computation is carried out with the MCNP-4C code. Finally, material depletion according to a set of coupled dynamical differential equations is solved to estimate the ²³⁹Pu production rate. The produced ²³⁹Pu is compared with two typical fission reactors to assess the plutonium breeding performance of the fusion reactor. We found that 0.92% of the initial U is converted into fissile Pu by our suggested fusion reactor with a thermal power of 3000 MW. For comparison, the ²³⁹Pu yield of a suggested large-scale PWR is about 0.65%, and for an LMFBR it is close to 1.7%. The results show that the fusion reactor has an acceptable efficiency for Pu production compared with a large-scale PWR fission reactor type.
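
The depletion step can be illustrated with a deliberately simplified two-nuclide chain; this is a sketch of the idea of coupled depletion equations, not the paper's actual equation set, and all cross sections, flux and times below are invented:

```python
# Minimal depletion-chain sketch: U-238 captures a neutron and, with the
# short-lived intermediates lumped away, becomes Pu-239, which is itself
# removed by neutron absorption:
#   dN_U/dt  = -sigma_c_U * phi * N_U
#   dN_Pu/dt =  sigma_c_U * phi * N_U - sigma_a_Pu * phi * N_Pu

def deplete(n_u0, sigma_c_u, sigma_a_pu, phi, t, steps=100000):
    """Forward-Euler integration of the two coupled depletion equations."""
    dt = t / steps
    n_u, n_pu = n_u0, 0.0
    for _ in range(steps):
        dn_u = -sigma_c_u * phi * n_u
        dn_pu = sigma_c_u * phi * n_u - sigma_a_pu * phi * n_pu
        n_u += dn_u * dt
        n_pu += dn_pu * dt
    return n_u, n_pu

phi = 1e14            # neutron flux (n/cm^2/s), hypothetical
sigma_c_u = 2.7e-24   # U-238 capture cross section (cm^2), hypothetical
sigma_a_pu = 1.0e-21  # Pu-239 absorption cross section (cm^2), hypothetical
n_u, n_pu = deplete(1.0, sigma_c_u, sigma_a_pu, phi, t=3.15e7)  # ~1 year
print(f"Pu-239 per initial U atom: {n_pu:.4f}")
```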

  16. Definition of areas requiring criticality alarm annunciation and emergency control

    International Nuclear Information System (INIS)

    Hobson, J.M.

    1988-01-01

The design of fissile material handling at British Nuclear Fuels plc requires the provision of a criticality incident detection system unless a specific case for omission can be formally made. Where such systems are provided, the 100 mSv contour resulting from a reference criticality incident must be restricted to an area of administrative control within which it is reasonably practicable to provide alarm annunciation and for which emergency arrangements can be defined. For typical reprocessing plant applications, the definition of these areas, and their restriction by provision of shielding where necessary, potentially requires a very large number of three-dimensional neutron transport calculations in complex geometries. However, by considering the requirements and nature of this assessment, simple generic methods have been developed and justified. Consequently, rapid and inexpensive assessments of control areas can be carried out.
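
As a hedged illustration of the kind of simple generic method mentioned, a point-kernel model can bound the radius of the 100 mSv contour around a reference incident. The source term and attenuation coefficient below are placeholders, not values from the assessment:

```python
import math

def dose_radius(dose_limit, source_term, mu):
    """Solve D(r) = S * exp(-mu*r) / (4*pi*r^2) = dose_limit for r by
    bisection. D(r) is monotone decreasing in r, so a single root exists
    in the bracket. Generic point-kernel sketch; S (Sv*m^2) and mu (1/m)
    are hypothetical, not assessed values."""
    f = lambda r: source_term * math.exp(-mu * r) / (4 * math.pi * r**2) - dose_limit
    lo, hi = 0.1, 1000.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:      # still above the dose limit: move outward
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical source term; dose_limit = 0.1 Sv = 100 mSv
r100 = dose_radius(dose_limit=0.1, source_term=5.0e4, mu=0.01)
print(f"100 mSv contour at ~{r100:.1f} m")
```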

  17. Adjust the method of the FMEA to the requirements of the aviation industry

    Directory of Open Access Journals (Sweden)

    Andrzej FELLNER

    2015-12-01

Full Text Available The article presents a summary of current methods used in aviation and rail transport. It also contains a proposal to adjust the FMEA method to the latest requirements of the airline industry. The authors suggest tables of the indicators Zn, Pr and Dt necessary to implement the FMEA method of risk analysis, taking into account current achievements in aerospace and rail safety. They also propose acceptable limits for the RPN number, which allow threats to be classified.
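
The RPN computation underlying such a proposal can be sketched as follows; the 1-10 rating scales and the acceptable limit of 120 are illustrative placeholders, not the article's actual tables:

```python
def rpn(zn, pr, dt):
    """Risk Priority Number from the three FMEA indicators named in the
    article: significance (Zn), probability (Pr) and detectability (Dt),
    each assumed here to be rated on a 1-10 scale."""
    return zn * pr * dt

def classify(value, limit=120):
    """Classify a hazard against an acceptable RPN limit. The limit of
    120 is a placeholder; the article proposes its own thresholds."""
    return "unacceptable" if value > limit else "acceptable"

score = rpn(zn=7, pr=4, dt=5)
print(score, classify(score))  # 140 unacceptable
```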

  18. 12 CFR 408.6 - Typical classes of action.

    Science.gov (United States)

    2010-01-01

    ... 12 Banks and Banking 4 2010-01-01 2010-01-01 false Typical classes of action. 408.6 Section 408.6 Banks and Banking EXPORT-IMPORT BANK OF THE UNITED STATES PROCEDURES FOR COMPLIANCE WITH THE NATIONAL ENVIRONMENTAL POLICY ACT Eximbank Implementing Procedures § 408.6 Typical classes of action. (a) Section 1507.3...

  19. Chapter 15: Commercial New Construction Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Keates, Steven [ADM Associates, Inc., Atlanta, GA (United States)

    2017-10-09

    This protocol is intended to describe the recommended method when evaluating the whole-building performance of new construction projects in the commercial sector. The protocol focuses on energy conservation measures (ECMs) or packages of measures where evaluators can analyze impacts using building simulation. These ECMs typically require the use of calibrated building simulations under Option D of the International Performance Measurement and Verification Protocol (IPMVP).

  20. Study on the three-station typical network deployments of workspace Measurement and Positioning System

    Science.gov (United States)

    Xiong, Zhi; Zhu, J. G.; Xue, B.; Ye, Sh. H.; Xiong, Y.

    2013-10-01

As a novel network coordinate measurement system based on multi-directional positioning, the workspace Measurement and Positioning System (wMPS) has the outstanding advantages of good parallelism, wide measurement range and high measurement accuracy, which make it a research hotspot and an important development direction in the field of large-scale measurement. Since station deployment has a significant impact on measurement range and accuracy, and also restricts the cost of use, the optimization of station deployment was researched in this paper. Firstly, a positioning error model was established. Then, focusing on the small network consisting of three stations, the typical deployments and error distribution characteristics were studied. Finally, by measuring a simulated fuselage using typical deployments at an industrial site and comparing the results with a Laser Tracker, some conclusions were obtained. The comparison results show that under existing prototype conditions, the I_3 typical deployment, in which three stations are distributed in a straight line, has an average error of 0.30 mm and a maximum error of 0.50 mm over a range of 12 m. Meanwhile, the C_3 typical deployment, in which three stations are uniformly distributed over the half-circumference of a circle, has an average error of 0.17 mm and a maximum error of 0.28 mm. Obviously, the C_3 typical deployment controls precision better than the I_3 type. The research work provides effective theoretical support for global measurement network optimization in future work.
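
The effect of station layout on positioning can be conveyed with a simplified 2-D bearing-intersection model. The real wMPS measures with rotating laser planes in 3-D; this sketch only illustrates the least-squares positioning idea, with made-up station coordinates:

```python
import numpy as np

def locate(stations, bearings):
    """Least-squares intersection of bearing lines from known stations,
    a simplified 2-D stand-in for wMPS multi-directional positioning.
    stations: (N, 2) positions; bearings: N angles in radians."""
    stations = np.asarray(stations, float)
    bearings = np.asarray(bearings, float)
    # A point on the line through station i with direction (cos b, sin b)
    # satisfies: -sin(b)*(x - xi) + cos(b)*(y - yi) = 0
    a = np.column_stack([-np.sin(bearings), np.cos(bearings)])
    rhs = np.sum(a * stations, axis=1)
    pos, *_ = np.linalg.lstsq(a, rhs, rcond=None)
    return pos

# Three stations in a straight line (an I_3-like layout) vs. a target
target = np.array([4.0, 3.0])
stations = [(0.0, 0.0), (6.0, 0.0), (12.0, 0.0)]
bearings = [np.arctan2(target[1] - y, target[0] - x) for x, y in stations]
print(np.round(locate(stations, bearings), 6))  # recovers [4. 3.]
```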

  1. ONE TYPICAL EXTREMUM IN ELECTRICAL PROBLEMS

    Directory of Open Access Journals (Sweden)

    V. I. Goroshko

    2014-01-01

Full Text Available The aim of this work is to draw the attention of teachers, scientific personnel, engineers and students to one peculiarity of extremum seeking in various electrical problems. This peculiarity lies in the fact that in many areas of electrical engineering, extremum seeking comes down to the analysis of one and the same mathematical structure (the T-structure); the differences lie only in the symbols (designations). In some problems this structure appears in its final, simplest form, while in others the T-structure is "veiled" and, as a rule, elementary algebraic transformations are needed to detect it. Taking into account how frequently this structure appears in electrical problems, the first part of the article investigates the extremum characteristics of the T-structure and presents the results as simple algorithms. To illustrate the typical T-structure, five example problems of extremum seeking were taken from different areas of electrical engineering. The first and second examples belong to the theory of electrical circuits. In the first example, the problem of obtaining maximum active load power is considered; in the second, the adjustment of inductively coupled circuits to obtain the peak current. In the third example, a band-pass active filter built on an operational amplifier is analyzed; using the methods from the first part of the article, the frequency at which the amplifier provides the maximum amplification factor is determined. The fourth example deals with the efficiency of a transformer: following the algorithms, the optimal transformer load and an equation for its maximum efficiency are determined. In the fifth example, the mechanical characteristics of an induction motor are analyzed, and it is shown how the article's algorithms yield equations for the critical slip and motor torque, as well as a simple derivation of the Kloss formula. The methods of
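
The first example (maximum active load power) is the standard maximum power transfer result and can be verified numerically, shown here as a quick check with arbitrary source values:

```python
import numpy as np

# Active power delivered to a load resistance R from a source with EMF V
# and internal resistance Rs:
#   P(R) = V**2 * R / (Rs + R)**2,
# which is maximized at R = Rs, giving P_max = V**2 / (4 * Rs).
V, Rs = 10.0, 2.0
R = np.linspace(0.1, 20.0, 20000)
P = V**2 * R / (Rs + R) ** 2
print(round(R[np.argmax(P)], 2), round(P.max(), 3))  # 2.0 12.5
```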

  2. A New Method of On-line Grid Impedance Estimation for PV Inverter

    DEFF Research Database (Denmark)

    Teodorescu, Remus; Asiminoaei, Lucian; Blaabjerg, Frede

    2004-01-01

for on-line measurement of the grid impedance is presented. The presented method requires no extra hardware beyond what is already accommodated by typical PV inverters (sensors and CPU), providing a fast and low-cost approach to on-line impedance measurement. By injecting a non-characteristic harmonic current and measuring...
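
The injection principle can be sketched as follows: extract the voltage and current responses at the injected non-characteristic frequency and take their ratio. This is a generic illustration, not the paper's implementation; the grid parameters are synthetic:

```python
import numpy as np

def impedance_at(f_inj, v, i, fs):
    """Estimate the complex grid impedance at the injected frequency from
    sampled voltage v and current i: Z = V(f)/I(f), using a single-bin
    DFT over an integer number of injection cycles."""
    n = len(v)
    t = np.arange(n) / fs
    phasor = np.exp(-2j * np.pi * f_inj * t)
    return np.dot(v, phasor) / np.dot(i, phasor)

# Synthetic grid: R = 0.5 ohm, L = 1 mH; 75 Hz injected current of 1 A
fs, f_inj, cycles = 10000, 75, 30
t = np.arange(int(fs * cycles / f_inj)) / fs
i = np.cos(2 * np.pi * f_inj * t)
z_true = 0.5 + 2j * np.pi * f_inj * 1e-3
v = np.real(z_true * np.exp(2j * np.pi * f_inj * t))  # steady-state response
z_est = impedance_at(f_inj, v, i, fs)
print(round(z_est.real, 3), round(z_est.imag, 3))  # 0.5 0.471
```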

  3. Design requirements, criteria and methods for seismic qualification of CANDU power plants

    International Nuclear Information System (INIS)

    Singh, N.; Duff, C.G.

    1979-10-01

    This report describes the requirements and criteria for the seismic design and qualification of systems and equipment in CANDU nuclear power plants. Acceptable methods and techniques for seismic qualification of CANDU nuclear power plants to mitigate the effects or the consequences of earthquakes are also described. (auth)

  4. Economy of typical food: technical restrictions and organizative challenges

    Directory of Open Access Journals (Sweden)

    Elena Viganò

    2009-10-01

Full Text Available The economic analysis of typical agri-food products requires a focus on the following issues: (i) the specific features of the offering system; (ii) the technical restrictions established by the EU regulations on Protected Designation of Origin (PDO) and Protected Geographical Indication (PGI); and (iii) the strategies aimed at product differentiation and value creation for the consumer. Considering this last aspect, it is important to notice that the specificity of the agricultural raw materials, the use of traditional production techniques rooted in the place, and certification represent only a prerequisite for the differentiation of the product on the market against standard products. The problem is that the specificity of a local product comes from attributes of quality (tangible and intangible) which are neither directly accessible nor verifiable by the consumer when making purchasing choices. This situation persists despite the greater propensity of the modern consumer to invest in information, and his or her greater attention to, and larger background for, distinguishing different offers based on quality. This paper develops an analysis, on a theoretical and operative basis, of the open strategies that can be implemented at the level of the enterprise, of the agro-food chain and of the territorial system in order to promote the quality of products to consumers. In particular, the work addresses the problems connected to the establishment of competitive advantages for Protected Designation of Origin (PDO) and Protected Geographical Indication (PGI) products, highlighting that in order to achieve those advantages, firms offering typical products need to differentiate their offering on both material and immaterial grounds, acting on intrinsic and extrinsic attributes of product quality, on the specific features (natural, historical, cultural, etc.) of the territory, on the efficiency of the offering organizational structure, and finally on the

  5. Economy of typical food: technical restrictions and organizative challenges

    Directory of Open Access Journals (Sweden)

    Elena Viganò

    2011-02-01

Full Text Available The economic analysis of typical agri-food products requires a focus on the following issues: (i) the specific features of the offering system; (ii) the technical restrictions established by the EU regulations on Protected Designation of Origin (PDO) and Protected Geographical Indication (PGI); and (iii) the strategies aimed at product differentiation and value creation for the consumer. Considering this last aspect, it is important to notice that the specificity of the agricultural raw materials, the use of traditional production techniques rooted in the place, and certification represent only a prerequisite for the differentiation of the product on the market against standard products. The problem is that the specificity of a local product comes from attributes of quality (tangible and intangible) which are neither directly accessible nor verifiable by the consumer when making purchasing choices. This situation persists despite the greater propensity of the modern consumer to invest in information, and his or her greater attention to, and larger background for, distinguishing different offers based on quality. This paper develops an analysis, on a theoretical and operative basis, of the open strategies that can be implemented at the level of the enterprise, of the agro-food chain and of the territorial system in order to promote the quality of products to consumers. In particular, the work addresses the problems connected to the establishment of competitive advantages for Protected Designation of Origin (PDO) and Protected Geographical Indication (PGI) products, highlighting that in order to achieve those advantages, firms offering typical products need to differentiate their offering on both material and immaterial grounds, acting on intrinsic and extrinsic attributes of product quality, on the specific features (natural, historical, cultural, etc.) of the territory, on the efficiency of the offering organizational structure, and finally on the

  6. Anthropic reasoning and typicality in multiverse cosmology and string theory

    International Nuclear Information System (INIS)

    Weinstein, Steven

    2006-01-01

Anthropic arguments in multiverse cosmology and string theory rely on the weak anthropic principle (WAP). We show that the principle is fundamentally ambiguous. It can be formulated in one of two ways, which we refer to as WAP₁ and WAP₂. We show that WAP₂, the version most commonly used in anthropic reasoning, makes no physical predictions unless supplemented by a further assumption of 'typicality', and we argue that this assumption is both misguided and unjustified. WAP₁, however, requires no such supplementation; it directly implies that any theory that assigns a non-zero probability to our universe predicts that we will observe our universe with probability one. We argue, therefore, that WAP₁ is preferable, and note that it has the benefit of avoiding the inductive overreach characteristic of much anthropic reasoning.

  7. An adaptation of Krylov subspace methods to path following

    Energy Technology Data Exchange (ETDEWEB)

    Walker, H.F. [Utah State Univ., Logan, UT (United States)

    1996-12-31

Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the underdetermined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
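
The predictor-corrector scheme with the tangent-orthogonality condition can be sketched on a toy problem: tracing the solution curve of F(x, λ) = x² + λ² − 1 = 0 (the unit circle). Here the augmented system is solved with dense linear algebra rather than a Krylov method, so this illustrates only the formulation from the abstract, not the paper's solver:

```python
import numpy as np

def trace_circle(steps=200, h=0.05):
    """Predictor-corrector path following on F(x, lam) = x^2 + lam^2 - 1.
    The corrector solves the underdetermined Newton equation augmented
    with the condition that steps be orthogonal to the approximate
    tangent, as described in the abstract."""
    u = np.array([1.0, 0.0])          # start on the curve: (x, lam)
    tangent = np.array([0.0, 1.0])    # tangent to the circle at (1, 0)
    path = [u.copy()]
    for _ in range(steps):
        v = u + h * tangent           # predictor: Euler step along tangent
        for _ in range(10):           # Newton-like corrector iterations
            f = v[0] ** 2 + v[1] ** 2 - 1.0
            jac = np.array([2.0 * v[0], 2.0 * v[1]])  # underdetermined row
            a = np.vstack([jac, tangent])  # augment with orthogonality
            v = v + np.linalg.solve(a, np.array([-f, 0.0]))
        # new tangent: nullspace of the Jacobian, oriented along progress
        new_t = np.array([-v[1], v[0]])
        new_t /= np.linalg.norm(new_t)
        tangent = new_t if np.dot(new_t, tangent) >= 0 else -new_t
        u = v
        path.append(u.copy())
    return np.array(path)

path = trace_circle()
radii = np.linalg.norm(path, axis=1)
print(round(radii.max(), 6), round(radii.min(), 6))  # stays on unit circle
```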

  8. Early Freezing of Gait: Atypical versus Typical Parkinson Disorders

    Directory of Open Access Journals (Sweden)

    Abraham Lieberman

    2015-01-01

    Full Text Available In 18 months, 850 patients were referred to Muhammad Ali Parkinson Center (MAPC. Among them, 810 patients had typical Parkinson disease (PD and 212 had PD for ≤5 years. Among the 212 patients with early PD, 27 (12.7% had freezing of gait (FOG. Forty of the 850 had atypical parkinsonism. Among these 40 patients, all of whom had symptoms for ≤5 years, 12 (30.0% had FOG. FOG improved with levodopa in 21/27 patients with typical PD but did not improve in the 12 patients with atypical parkinsonism. FOG was associated with falls in both groups of patients. We believe that FOG unresponsive to levodopa in typical PD resembles FOG in atypical parkinsonism. We thus compared the 6 typical PD patients with FOG unresponsive to levodopa plus the 12 patients with atypical parkinsonism with the 21 patients with typical PD responsive to levodopa. We compared them by tests of locomotion and postural stability. Among the patients with FOG unresponsive to levodopa, postural stability was more impaired than locomotion. This finding leads us to believe that, in these patients, postural stability, not locomotion, is the principal problem underlying FOG.

  9. Aligning Requirements-Driven Software Processes with IT Governance

    OpenAIRE

    Nguyen Huynh Anh, Vu; Kolp, Manuel; Heng, Samedi; Wautelet, Yves

    2017-01-01

Requirements Engineering is closely intertwined with Information Technology (IT) Governance. Aligning IT Governance principles with Requirements-Driven Software Processes allows them to propose governance and management rules for software development to cope with stakeholders' requirements and expectations. Typically, the goal of IT Governance in software engineering is to ensure that the results of a software organization's business processes meet the strategic requirements of the organization...

  10. Social maturity and theory of mind in typically developing children and those on the autism spectrum.

    Science.gov (United States)

    Peterson, Candida C; Slaughter, Virginia P; Paynter, Jessica

    2007-12-01

    Results of several studies using the Vineland scale to explore links between social behavior and theory of mind (ToM) have produced mixed results, especially for children on the autism spectrum. The present pair of studies developed a psychometrically sound, age-referenced measure of social maturity to explore these issues further. In Study 1, 37 typically developing preschoolers took a battery of standard false belief tests of ToM and were rated by their teachers on a newly developed age-referenced social maturity scale with 7 items. In Study 2, a further group of 43 children aged 4 to 12 years (13 with autism, 14 with Asperger's disorder and 16 with typical development) took part in the same procedure. In Study 1, ToM was found to predict typical preschoolers' social maturity independently of age and verbal maturity. In Study 2, children with autism scored below age-matched and younger typical developers in both ToM and social maturity. Those with Asperger's disorder did well on ToM but poorly on social maturity. Study 2 replicated Study 1's finding (for typical children and for the full sample) that ToM was linked with social maturity independently of age and verbal ability, although the link was not independent of autism diagnosis. Teachers are capable of rating children's social behavior with peers as advanced, on-time or delayed for their age. Suggestive links between these ratings and ToM require further investigation, especially among children on the autism spectrum.

  11. Computed tomography derived fractional flow reserve testing in stable patients with typical angina pectoris

    DEFF Research Database (Denmark)

    Møller Jensen, Jesper; Erik Bøtker, Hans; Norling Mathiassen, Ole

    2017-01-01

Aims: To assess the use of downstream coronary angiography (ICA) and the short-term safety of frontline coronary CT angiography (CTA) with selective CT-derived fractional flow reserve (FFRCT) testing in stable patients with typical angina pectoris. Methods and results: Between 1 January 2016 and 30 J… of safe cancellation of planned ICAs.

  12. Hydrogen deflagration simulations under typical containment conditions for nuclear safety

    Energy Technology Data Exchange (ETDEWEB)

    Yanez, J., E-mail: jorge.yanez@kit.edu [Institute for Energy and Nuclear Technology, Karlsruhe Institute of Technology, Kaiserstrasse 12, 76131 Karlsruhe (Germany); Kotchourko, A.; Lelyakin, A. [Institute for Energy and Nuclear Technology, Karlsruhe Institute of Technology, Kaiserstrasse 12, 76131 Karlsruhe (Germany)

    2012-09-15

Highlights: • Lean H₂-air combustion experiments highly relevant to typical NPPs simulated. • Effects of temperature, H₂ concentration and steam concentration analyzed. • Similar conditions and H₂ concentrations yielded different combustion regimes. • Flame instabilities (FIs) were the effect driving the divergences. • A model for acoustic FI was developed; agreement with experiments obtained. - Abstract: This paper presents the modeling of low-concentration hydrogen deflagrations performed with the recently developed KYLCOM model, specially created to perform calculations in large-scale domains. Three experiments carried out in the THAI facility (performed in the framework of the international OECD THAI experimental program) were selected for analysis. The tests allow the study of lean-mixture hydrogen combustion at normal ambient, elevated-temperature, and superheated and saturated conditions. The experimental conditions considered, together with the facility size and shape, grant a high degree of relevance to typical NPP containment conditions. The results of the simulations were thoroughly compared with the experimental data, and the comparison was supplemented by an analysis of the combustion regimes taking place in the considered tests. The analysis demonstrated that despite the comparatively small differences in mixture properties, three different combustion regimes can be definitely identified. The simulation of one of the cases required the modeling of the acoustic-parametric instability, which was carefully undertaken.

  13. Plutonium-239 production rate study using a typical fusion reactor

    Energy Technology Data Exchange (ETDEWEB)

    Faghihi, F. [Research Center for Radiation Protection, Shiraz University, Shiraz (Iran, Islamic Republic of)], E-mail: faghihif@shirazu.ac.ir; Havasi, H.; Amin-Mozafari, M. [Department of Nuclear Engineering, School of Engineering, Shiraz University, 71348-51154 Shiraz (Iran, Islamic Republic of)

    2008-05-15

The purpose of the present paper is to compute the fissile ²³⁹Pu produced by a supposed typical fusion reactor in operation, to meet the fuel requirement for other purposes (e.g. MOX fissile fuel). The fusion reactor is assumed to have a cylindrical geometry and to use uniformly distributed deuterium-tritium fuel, with a neutron wall load of 10 MW/m². Moreover, the reactor core is surrounded by six suggested blankets to make the best use of the physical conditions described herein. We determined the neutron flux in each considered blanket as well as tritium self-sufficiency using a two-group neutron energy treatment; the computation is carried out with the MCNP-4C code. Finally, material depletion according to a set of coupled dynamical differential equations is solved to estimate the ²³⁹Pu production rate. The produced ²³⁹Pu is compared with two typical fission reactors to assess the plutonium breeding performance of the fusion reactor. We found that 0.92% of the initial U is converted into fissile Pu by our suggested fusion reactor with a thermal power of 3000 MW. For comparison, the ²³⁹Pu yield of a suggested large-scale PWR is about 0.65%, and for an LMFBR it is close to 1.7%. The results show that the fusion reactor has an acceptable efficiency for Pu production compared with a large-scale PWR fission reactor type.

  14. Predictors and consequences of gender typicality: the mediating role of communality.

    Science.gov (United States)

    DiDonato, Matthew D; Berenbaum, Sheri A

    2013-04-01

    Considerable work has shown the benefits for psychological health of being gender typed (i.e., perceiving oneself in ways that are consistent with one's sex). Nevertheless, little is known about the reasons for the link. In two studies of young adults (total N = 673), we studied (1) the ways in which gender typing is predicted from gender-related interests and personal qualities, and (2) links between gender typing and adjustment (self-esteem and negative emotionality). In the first study, gender typicality was positively predicted by a variety of gender-related characteristics and by communal traits, a female-typed characteristic; gender typicality was also positively associated with adjustment. To clarify the role of communality in predicting gender typicality and its link with adjustment, we conducted a follow-up study examining both gender typicality and "university typicality." Gender typicality was again predicted by gender-related characteristics and communality, and associated with adjustment. Further, university typicality was also predicted by communality and associated with adjustment. Mediation analyses showed that feelings of communality were partly responsible for the links between gender/university typicality and adjustment. Thus, the psychological benefits suggested to accrue from gender typicality may not be specific to gender, but rather may reflect the benefits of normativity in general. These findings were discussed in relation to the broader literature on the relation between identity and adjustment.

  15. New concepts, requirements and methods concerning the periodic inspection of the CANDU fuel channels

    International Nuclear Information System (INIS)

    Denis, J.R.

    1995-01-01

    Periodic inspection of fuel channels is essential for a proper assessment of the structural integrity of these vital components of the reactor. The development of wet channel technologies for non-destructive examination (NDE) of pressure tubes and the high technical performance and reliability of the CIGAR equipment have led, in less than 10 years, to the accumulation of a very significant volume of data concerning the flaw mechanisms and structural behaviour of the CANDU fuel channels. On this basis, a new form of the CAN/CSA-N285.4 Standard for Periodic Inspection of CANDU Nuclear Power Plant components was elaborated, introducing new concepts and requirements, in accord with the powerful NDE methods now available. This paper presents these concepts and requirements, and discusses the NDE methods, presently used or under development, to satisfy these requirements. Specific features regarding the fuel channel inspections of Cernavoda NGS Unit 1 are also discussed. (author)

  16. Frequency of celiac disease in adult patients with typical or atypical malabsorption symptoms in isfahan, iran.

    Science.gov (United States)

    Emami, Mohammad Hassan; Kouhestani, Soheila; Karimi, Somayeh; Baghaei, Abdolmahdi; Janghorbani, Mohsen; Jamali, Nahid; Gholamrezaei, Ali

    2012-01-01

    Aim. Atypical presentations of celiac disease (CD) have now been shown to be much more common than classical (typical) form. We evaluated the frequency of CD among adult patients with typical or atypical symptoms of CD. Materials and Methods. Patients referred to two outpatient gastroenterology clinics in Isfahan (IRAN) were categorized into those with typical or atypical symptoms of CD. IgA antitissue transglutaminase antibody was assessed and followed by duodenal biopsy. In patients for whom endoscopy was indicated (independent of the serology), duodenal biopsy was taken. Histopathological changes were assessed according to the Marsh classification. Results. During the study period, 151 and 173 patients with typical and atypical symptoms were evaluated (mean age = 32.8 ± 12.6 and 35.8 ± 14.8 years, 47.0% and 56.0% female, resp.). Frequency of CD in patients with typical and atypical symptoms was calculated, respectively, as 5.9% (9/151) and 1.25% (3/173) based on positive serology and pathology. The overall frequency was estimated as at least 9.2% (14/151) and 4.0% (7/173) when data of seronegative patients were also considered. Conclusions. CD is more frequent among patients with typical symptoms of malabsorption and these patients should undergo duodenal biopsy, irrespective of the serology. In patients with atypical symptoms, serological tests should be performed followed by endoscopic biopsy, and routine duodenal biopsy is recommended when endoscopic evaluation is indicated because of symptoms.

  17. Frequency of Celiac Disease in Adult Patients with Typical or Atypical Malabsorption Symptoms in Isfahan, Iran

    Directory of Open Access Journals (Sweden)

    Mohammad Hassan Emami

    2012-01-01

    Full Text Available Aim. Atypical presentations of celiac disease (CD) have now been shown to be much more common than the classical (typical) form. We evaluated the frequency of CD among adult patients with typical or atypical symptoms of CD. Materials and Methods. Patients referred to two outpatient gastroenterology clinics in Isfahan (IRAN) were categorized into those with typical or atypical symptoms of CD. IgA antitissue transglutaminase antibody was assessed and followed by duodenal biopsy. In patients for whom endoscopy was indicated (independent of the serology), duodenal biopsy was taken. Histopathological changes were assessed according to the Marsh classification. Results. During the study period, 151 and 173 patients with typical and atypical symptoms were evaluated (mean age = 32.8±12.6 and 35.8±14.8 years, 47.0% and 56.0% female, resp.). Frequency of CD in patients with typical and atypical symptoms was calculated, respectively, as 5.9% (9/151) and 1.25% (3/173) based on positive serology and pathology. The overall frequency was estimated as at least 9.2% (14/151) and 4.0% (7/173) when data of seronegative patients were also considered. Conclusions. CD is more frequent among patients with typical symptoms of malabsorption and these patients should undergo duodenal biopsy, irrespective of the serology. In patients with atypical symptoms, serological tests should be performed followed by endoscopic biopsy, and routine duodenal biopsy is recommended when endoscopic evaluation is indicated because of symptoms.

  18. Typical Features of Amelogenesis Imperfecta in Two Patients with Bartter’s Syndrome

    Directory of Open Access Journals (Sweden)

    Hercílio Martelli-Júnior

    2012-12-01

    Full Text Available Background/Aims: Amelogenesis imperfecta (AI) is due to many inherited defects of enamel formation that affect the quantity and quality of enamel, leading to delay in tooth eruption and cosmetic consequences. AI has been described in association with nephrocalcinosis, which is called the enamel-renal syndrome. The aim of this case report is to describe typical features of AI in 2 patients with Bartter’s syndrome (BS) for the first time. Methods: Eight patients with confirmed BS were systematically screened for dental abnormalities as part of the protocol. Those with suggestive clinical features of AI were submitted to panoramic X-ray and decayed teeth were analyzed by scanning electron microscopy. Results: Typical features of AI were detected in 2 girls with BS. These 2 patients showed nephrocalcinosis, and diagnosis and adequate clinical control were delayed. Genetic analysis detected the mutation responsible for BS in 1 of these patients. In this case, BS was due to a homozygous mutation of exon 5 of the KCNJ1 gene resulting in a substitution of valine for alanine at codon 214 (A214V). Conclusions: The finding of typical features of AI in BS might constitute preliminary evidence that abnormalities of the biomineralization process found in patients with renal tubular disorders might also affect calcium deposition in dental tissues.

  19. Anthropic reasoning and typicality in multiverse cosmology and string theory

    Energy Technology Data Exchange (ETDEWEB)

    Weinstein, Steven [Perimeter Institute for Theoretical Physics, 31 Caroline St, Waterloo, ON N2L 2Y5 (Canada); Department of Philosophy, University of Waterloo, Waterloo, ON N2L 3G1 (Canada); Department of Physics, University of Waterloo, Waterloo, ON N2L 3G1 (Canada)

    2006-06-21

    Anthropic arguments in multiverse cosmology and string theory rely on the weak anthropic principle (WAP). We show that the principle is fundamentally ambiguous. It can be formulated in one of two ways, which we refer to as WAP{sub 1} and WAP{sub 2}. We show that WAP{sub 2}, the version most commonly used in anthropic reasoning, makes no physical predictions unless supplemented by a further assumption of 'typicality', and we argue that this assumption is both misguided and unjustified. WAP{sub 1}, however, requires no such supplementation; it directly implies that any theory that assigns a non-zero probability to our universe predicts that we will observe our universe with probability one. We argue, therefore, that WAP{sub 1} is preferable, and note that it has the benefit of avoiding the inductive overreach characteristic of much anthropic reasoning.

  20. What do students do when asked to diagnose their mistakes? Does it help them? II. A more typical quiz context

    Directory of Open Access Journals (Sweden)

    Edit Yerushalmi

    2012-09-01

    Full Text Available “Self-diagnosis tasks” aim at fostering students’ learning in an examination context by requiring students to present diagnoses of their solutions to quiz problems. We examined the relationship between students’ learning from self-diagnosis and the typicality of the problem situation. Four recitation groups in an introductory physics class (∼200 students) were divided into a control group and three intervention groups in which different levels of guidance were provided to aid students in their performance of self-diagnosis activities. The self-diagnosis task was administered twice, first in an atypical problem situation and then in a typical one. In a companion paper we reported our findings in the context of an atypical problem situation. Here we report our findings in the context of a typical problem situation and discuss the effect of problem typicality on students’ self-diagnosis performance and subsequent success in solving transfer problems. We show that the self-diagnosis score was correlated with subsequent problem-solving performance only in the context of a typical problem situation, and only when textbooks and notebooks were the sole means of guidance available to the students for assisting them with diagnosis.

  1. DEVELOPMENT OF METHODOLOGY FOR TRAFFIC ACCIDENT FORECASTING AT VARIOUS TYPICAL URBAN AREAS

    OpenAIRE

    D. V. Kapsky

    2012-01-01

    The paper provides investigation results pertaining to the development of a methodology for forecasting traffic accidents using a “conflict zone” method that considers potential danger for two typical urban areas, namely signalized crossings and speed bumps located at zebra crossings, and it also considers various types and kinds of conflicts. The investigations have made it possible to obtain various indices of threshold sensitivity in respect of potential risks and in relation to tra...

  2. A simple method for the measurement of reflective foil emissivity

    International Nuclear Information System (INIS)

    Ballico, M. J.; Ham, E. W. M. van der

    2013-01-01

    Reflective metal foil is widely used to reduce radiative heat transfer within the roof space of buildings. Such foils are typically mass-produced by vapor-deposition of a thin metallic coating onto a variety of substrates, ranging from plastic-coated reinforced paper to 'bubble-wrap'. Although the emissivity of such surfaces is almost negligible in the thermal infrared, typically less than 0.03, an insufficiently thick metal coating, or organic contamination of the surface, can significantly increase this value. To ensure that the quality of the installed insulation is satisfactory, Australian building code AS/NZS 4201.5:1994 requires a practical agreed method for measurement of the emissivity, and the standard ASTM-E408 is implied. Unfortunately this standard is not a 'primary method' and requires the use of specified expensive apparatus and calibrated reference materials. At NMIA we have developed a simple primary technique, based on an apparatus to thermally modulate the sample and record the apparent modulation in infra-red radiance with commercially available radiation thermometers. The method achieves an absolute accuracy in the emissivity of approximately 0.004 (k=2). This paper theoretically analyses the equivalence between the thermal emissivity measured in this manner, the effective thermal emissivity in application, and the apparent emissivity measured in accordance with ASTM-E408

  3. A simple method for the measurement of reflective foil emissivity

    Science.gov (United States)

    Ballico, M. J.; van der Ham, E. W. M.

    2013-09-01

    Reflective metal foil is widely used to reduce radiative heat transfer within the roof space of buildings. Such foils are typically mass-produced by vapor-deposition of a thin metallic coating onto a variety of substrates, ranging from plastic-coated reinforced paper to "bubble-wrap". Although the emissivity of such surfaces is almost negligible in the thermal infrared, typically less than 0.03, an insufficiently thick metal coating, or organic contamination of the surface, can significantly increase this value. To ensure that the quality of the installed insulation is satisfactory, Australian building code AS/NZS 4201.5:1994 requires a practical agreed method for measurement of the emissivity, and the standard ASTM-E408 is implied. Unfortunately this standard is not a "primary method" and requires the use of specified expensive apparatus and calibrated reference materials. At NMIA we have developed a simple primary technique, based on an apparatus to thermally modulate the sample and record the apparent modulation in infra-red radiance with commercially available radiation thermometers. The method achieves an absolute accuracy in the emissivity of approximately 0.004 (k=2). This paper theoretically analyses the equivalence between the thermal emissivity measured in this manner, the effective thermal emissivity in application, and the apparent emissivity measured in accordance with ASTM-E408.
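The modulation principle can be sketched with a grey-body model: while the sample temperature is cycled, the reflected ambient radiance stays (to first order) constant and cancels in the difference of the two readings, leaving the emissivity as a ratio of radiance swings. This is a hedged sketch of the idea only, not the NMIA apparatus; all numbers are invented, and total (Stefan-Boltzmann) exitance stands in for the band-limited radiance a real radiation thermometer measures.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def bb_exitance(t_kelvin):
    """Total blackbody exitance (total-radiation approximation)."""
    return SIGMA * t_kelvin**4

def emissivity_from_modulation(s_lo, s_hi, t_lo, t_hi):
    """Emissivity as the ratio of the measured radiometric swing to the
    blackbody swing; the constant reflected-ambient term cancels out."""
    return (s_hi - s_lo) / (bb_exitance(t_hi) - bb_exitance(t_lo))

# Synthetic check: a foil of true emissivity 0.03 plus a constant
# reflected-ambient background of 400 W/m^2 in the measured signal.
eps_true, background = 0.03, 400.0
t_lo, t_hi = 300.0, 320.0
s_lo = eps_true * bb_exitance(t_lo) + background
s_hi = eps_true * bb_exitance(t_hi) + background
print(emissivity_from_modulation(s_lo, s_hi, t_lo, t_hi))  # recovers ~0.03
```

The paper's own analysis works with the finite spectral band of the commercial radiation thermometers, which is what its equivalence argument against ASTM-E408 addresses.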

  4. Recall Memory in Children with Down Syndrome and Typically Developing Peers Matched on Developmental Age

    Science.gov (United States)

    Milojevich, H.; Lukowski, A.

    2016-01-01

    Background: Whereas research has indicated that children with Down syndrome (DS) imitate demonstrated actions over short delays, it is presently unknown whether children with DS recall information over lengthy delays at levels comparable with typically developing (TD) children matched on developmental age. Method: In the present research, 10…

  5. Assessment of produced water contaminated soils to determine remediation requirements

    International Nuclear Information System (INIS)

    Clodfelter, C.

    1995-01-01

    Produced water and drilling fluids can impact the agricultural properties of soil and result in potential regulatory and legal liabilities. Produced water typically is classified as saline or a brine and affects surface soils by increasing the sodium and chloride content. Sources of produced water which can lead to problems include spills from flowlines and tank batteries, permitted surface water discharges and pit areas, particularly the larger pits including reserve pits, emergency pits and saltwater disposal pits. Methods to assess produced water spills include soil sampling with various chemical analyses and surface geophysical methods. A variety of laboratory analytical methods are available for soil assessment which include electrical conductivity, sodium adsorption ratio, cation exchange capacity, exchangeable sodium percent and others. Limiting the list of analytical parameters to reduce cost and still obtain the data necessary to assess the extent of contamination and determine remediation requirements can be difficult. The advantage to using analytical techniques is that often regulatory remediation standards are tied to soil properties determined from laboratory analysis. Surface geophysical techniques can be an inexpensive method to rapidly determine the extent and relative magnitude of saline soils. Data interpretations can also provide an indication of the horizontal as well as the vertical extent of impacted soils. The following discussion focuses on produced water spills on soil and assessment of the impacted soil. Produced water typically contains dissolved hydrocarbons which are not addressed in this discussion
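One of the laboratory parameters listed above, the sodium adsorption ratio (SAR), is straightforward to compute from cation concentrations using the standard agronomic formula; the example values below are invented for illustration.

```python
import math

def sodium_adsorption_ratio(na_meq_l, ca_meq_l, mg_meq_l):
    """SAR from Na+, Ca2+ and Mg2+ concentrations in meq/L,
    using the standard formula SAR = Na / sqrt((Ca + Mg) / 2)."""
    return na_meq_l / math.sqrt((ca_meq_l + mg_meq_l) / 2.0)

# Hypothetical brine-impacted soil extract: 20 meq/L Na, 4 meq/L each Ca, Mg.
print(sodium_adsorption_ratio(20.0, 4.0, 4.0))  # -> 10.0
```

Regulatory thresholds for SAR vary by jurisdiction, so the computed value is only meaningful against the applicable remediation standard.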

  6. Identifying Typical Movements Among Indoor Objects

    DEFF Research Database (Denmark)

    Radaelli, Laura; Sabonis, Dovydas; Lu, Hua

    2013-01-01

    With the proliferation of mobile computing, positioning systems are becoming available that enable indoor location-based services. As a result, indoor tracking data is also becoming available. This paper puts focus on one use of such data, namely the identification of typical movement patterns...

  7. Interpreting Space-Mission LET Requirements for SEGR in Power MOSFETs

    Science.gov (United States)

    Lauenstein, J. M.; Ladbury, R. L.; Batchelor, D. A.; Goldsman, N.; Kim, H. S.; Phan, A. M.

    2010-01-01

    A Technology Computer Aided Design (TCAD) simulation-based method is developed to evaluate whether derating of high-energy heavy-ion accelerator test data bounds the risk for single-event gate rupture (SEGR) from much higher energy on-orbit ions for a mission linear energy transfer (LET) requirement. It is shown that a typical derating factor of 0.75 applied to a single-event effect (SEE) response curve defined by high-energy accelerator SEGR test data provides reasonable on-orbit hardness assurance, although in a high-voltage power MOSFET, it did not bound the risk of failure.
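How a derating factor is applied can be illustrated as follows: the passing drain-source voltage from the accelerator SEGR response curve at the mission LET is scaled by the factor to set the allowed operating voltage. The curve points below are invented for illustration, not test data from the paper.

```python
DERATING = 0.75  # typical derating factor cited in the abstract

# Hypothetical SEGR response curve: (LET in MeV*cm^2/mg, passing V_DS in V).
RESPONSE_CURVE = [(20.0, 160.0), (40.0, 120.0), (60.0, 80.0)]

def derated_vds_limit(let_value, curve=RESPONSE_CURVE, factor=DERATING):
    """Linearly interpolate the tested response curve, then derate."""
    pts = sorted(curve)
    for (l0, v0), (l1, v1) in zip(pts, pts[1:]):
        if l0 <= let_value <= l1:
            v_pass = v0 + (v1 - v0) * (let_value - l0) / (l1 - l0)
            return factor * v_pass
    raise ValueError("LET outside the tested range")

print(derated_vds_limit(30.0))  # halfway between 160 V and 120 V, then x0.75
```

The paper's point is precisely that this simple scaling, derived from high-energy accelerator data, may or may not bound the on-orbit risk from still higher-energy ions; the TCAD simulations are what test that assumption.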

  8. Quantitative Nuclear Medicine Imaging: Concepts, Requirements and Methods

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-01-15

    The absolute quantification of radionuclide distribution has been a goal since the early days of nuclear medicine. Nevertheless, the apparent complexity and sometimes limited accuracy of these methods have prevented them from being widely used in important applications such as targeted radionuclide therapy or kinetic analysis. The intricacy of the effects degrading nuclear medicine images and the lack of availability of adequate methods to compensate for these effects have frequently been seen as insurmountable obstacles in the use of quantitative nuclear medicine in clinical institutions. In the last few decades, several research groups have consistently devoted their efforts to the filling of these gaps. As a result, many efficient methods are now available that make quantification a clinical reality, provided appropriate compensation tools are used. Despite these efforts, many clinical institutions still lack the knowledge and tools to adequately measure and estimate the accumulated activities in the human body, thereby using potentially outdated protocols and procedures. The purpose of the present publication is to review the current state of the art of image quantification and to provide medical physicists and other related professionals facing quantification tasks with a solid background of tools and methods. It describes and analyses the physical effects that degrade image quality and affect the accuracy of quantification, and describes methods to compensate for them in planar, single photon emission computed tomography (SPECT) and positron emission tomography (PET) images. The fast paced development of the computational infrastructure, both hardware and software, has made drastic changes in the ways image quantification is now performed. The measuring equipment has evolved from the simple blind probes to planar and three dimensional imaging, supported by SPECT, PET and hybrid equipment. Methods of iterative reconstruction have been developed to allow for

  9. Use of quality planning methods in optimizing welding wire quality characteristics

    Directory of Open Access Journals (Sweden)

    D. Vykydal

    2013-10-01

    Full Text Available The quality of a product is given by the extent to which the product meets customer requirements. It is generally accepted that the extent to which the product meets such customer requirements, and, consequently, the resulting quality of the product itself, substantially depend on the early stages of the product lifecycle, i.e. on the design and development stages. Appropriate means for effective product quality planning can be found among quality management methods and tools. These methods are typically employed in engineering production and the automotive industry. This paper focuses on exploring the potential of the Quality Function Deployment (QFD) and Failure Mode and Effect Analysis (FMEA) methods for use in metallurgical production, an industrial branch where they have not been commonly employed as yet.
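The core QFD computation is a weighted propagation of customer-requirement importance through a relationship matrix to rank technical characteristics. A minimal sketch follows; the requirement names, weights, and the conventional 1/3/9 relationship strengths are all invented for illustration, not taken from the welding-wire study.

```python
# Minimal QFD ("house of quality") sketch with invented data: customer
# requirement weights are propagated through a relationship matrix to
# rank technical characteristics of a hypothetical welding wire.
weights = {"no surface defects": 0.5, "stable feeding": 0.3, "low spatter": 0.2}

# Relationship strengths (conventional 1/3/9 scale) between each technical
# characteristic and each customer requirement.
relations = {
    "wire diameter tolerance": {"no surface defects": 3, "stable feeding": 9, "low spatter": 1},
    "surface cleanliness": {"no surface defects": 9, "stable feeding": 3, "low spatter": 3},
}

def technical_importance(weights, relations):
    """Weighted column sums of the relationship matrix."""
    return {
        tech: sum(weights[req] * strength for req, strength in rel.items())
        for tech, rel in relations.items()
    }

scores = technical_importance(weights, relations)
print(scores)  # higher score = higher design priority
```

The ranked scores then feed the downstream planning steps (and, in the paper's pairing, FMEA is applied to the failure modes of the prioritized characteristics).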

  10. Trading Robustness Requirements in Mars Entry Trajectory Design

    Science.gov (United States)

    Lafleur, Jarret M.

    2009-01-01

    One of the most important metrics characterizing an atmospheric entry trajectory in preliminary design is the size of its predicted landing ellipse. Often, requirements for this ellipse are set early in design and significantly influence both the expected scientific return from a particular mission and the cost of development. Requirements typically specify a certain probability level (σ-level) for the prescribed ellipse, and frequently this latter requirement is taken at 3σ. However, searches for the justification of 3σ as a robustness requirement suggest it is an empirical rule of thumb borrowed from non-aerospace fields. This paper presents an investigation into the sensitivity of trajectory performance to varying robustness (σ-level) requirements. The treatment of robustness as a distinct objective is discussed, and an analysis framework is presented involving the manipulation of design variables to effect trades between performance and robustness objectives. The scenario for which this method is illustrated is the ballistic entry of an MSL-class Mars entry vehicle. Here, the design variable is entry flight path angle, and objectives are parachute deploy altitude performance and error ellipse robustness. Resulting plots show the sensitivities between these objectives and trends in the entry flight path angles required to design to these objectives. Relevance to the trajectory designer is discussed, as are potential steps for further development and use of this type of analysis.
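The probability content behind a sigma-level requirement for a two-dimensional landing ellipse can be checked directly: for a bivariate Gaussian dispersion, the k-sigma ellipse contains 1 − exp(−k²/2) of the landings, so a 3-sigma ellipse in two dimensions covers about 98.9%, not the one-dimensional 99.7%. A small sketch:

```python
import math

def ellipse_containment(k):
    """Probability mass inside the k-sigma ellipse of a bivariate Gaussian:
    P = 1 - exp(-k^2 / 2)."""
    return 1.0 - math.exp(-k * k / 2.0)

for k in (1.0, 2.0, 3.0):
    print(f"{k:.0f}-sigma ellipse contains {ellipse_containment(k):.4f} of landings")
```

This distinction between 1-D and 2-D containment is one reason the justification of any particular sigma-level deserves the scrutiny the paper gives it.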

  11. A method for determining customer requirement weights based on TFMF and TLR

    Science.gov (United States)

    Ai, Qingsong; Shu, Ting; Liu, Quan; Zhou, Zude; Xiao, Zheng

    2013-11-01

    'Customer requirements' (CRs) management plays an important role in enterprise systems (ESs) by processing customer-focused information. Quality function deployment (QFD) is one of the main CRs analysis methods. Because CR weights are crucial for the input of QFD, we developed a method for determining CR weights based on trapezoidal fuzzy membership function (TFMF) and 2-tuple linguistic representation (TLR). To improve the accuracy of CR weights, we propose to apply TFMF to describe CR weights so that they can be appropriately represented. Because the fuzzy logic is not capable of aggregating information without loss, TLR model is adopted as well. We first describe the basic concepts of TFMF and TLR and then introduce an approach to compute CR weights. Finally, an example is provided to explain and verify the proposed method.
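A trapezoidal fuzzy membership function is simple to write down. The sketch below is illustrative only: the parameter values are invented, and the average-of-vertices defuzzification shown is one simple option, not the paper's TLR-based aggregation.

```python
def trapezoidal_membership(x, a, b, c, d):
    """Membership degree of x in the trapezoid (a, b, c, d), a < b <= c < d:
    rises linearly on [a, b], equals 1 on [b, c], falls on [c, d], else 0."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def crisp_weight(a, b, c, d):
    """One simple defuzzification: the average of the four vertices."""
    return (a + b + c + d) / 4.0

# A fuzzy "important" requirement weight roughly spanning [0.6, 1.0]:
print(trapezoidal_membership(0.85, 0.6, 0.8, 0.9, 1.0))  # -> 1.0
print(crisp_weight(0.6, 0.8, 0.9, 1.0))  # -> 0.825
```

The paper pairs the trapezoidal representation with the 2-tuple linguistic model precisely to avoid the information loss that a crisp collapse like `crisp_weight` introduces.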

  12. Critical requirements of the SSTR method

    International Nuclear Information System (INIS)

    Gold, R.

    1975-08-01

    Discrepancies have been reported in absolute fission rate measurements observed with Solid State Track Recorders (SSTR) and fission chambers which lie well outside experimental error. As a result of these comparisons, the reliability of the SSTR method has been seriously questioned, and the fission chamber method has been advanced for sole use in absolute fission rate determinations. In view of the absolute accuracy already reported and well documented for the SSTR method, this conclusion is both surprising and unfortunate. Two independent methods are highly desirable. Moreover, these two methods more than complement one another, since certain in-core experiments may be amenable to either but not both techniques. Consequently, one cannot abandon the SSTR method without sacrificing crucial advantages. A critical reappraisal of certain aspects of the SSTR method is offered in the hope that the source of the current controversy can be uncovered and a long term beneficial agreement between these two methods can therefore be established. (WHK)

  13. Far-infrared irradiation drying behavior of typical biomass briquettes

    International Nuclear Information System (INIS)

    Chen, N.N.; Chen, M.Q.; Fu, B.A.; Song, J.J.

    2017-01-01

    Infrared radiation drying behaviors of four typical biomass briquettes (populus tomentosa leaves, cotton stalk, spent coffee grounds and eucalyptus bark) were investigated based on a lab-scale setup. The effect of radiation source temperatures (100–200 °C) on the far-infrared drying kinetics and heat transfer of the samples was addressed. As the temperature went up from 100 °C to 200 °C, the time required for drying the four biomass briquettes decreased by about 59–66%, and the average temperatures of the four biomass briquettes increased by about 33–39 °C, while the average radiation heat transfer fluxes increased by about 3.3 times (3.7 times only for the leaves). The specific energy consumptions were 0.622–0.849 kW h kg⁻¹. The modified Midilli model best represented the moisture-ratio change of the briquettes. The values of the activation energy for the briquettes in the first falling rate stage were between 20.35 and 24.83 kJ mol⁻¹, while those in the second falling rate stage were between 17.89 and 21.93 kJ mol⁻¹. The activation energy for the eucalyptus bark briquette in the two falling rate stages was the lowest, and that for the cotton stalk briquette was lower than those for the remaining two briquettes. - Highlights: • Far-infrared drying behaviors of four typical biomass briquettes were addressed. • The effect of radiation source temperatures on IR drying kinetics was assessed. • Radiation heat transfer flux between the sample and heater was evaluated. • The Midilli model best represented the drying process of the samples.
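The Midilli-type thin-layer model named above has the closed form MR(t) = a·exp(−k·tⁿ) + b·t, and the temperature dependence of the rate constant follows an Arrhenius law from which the activation energies are extracted. A sketch with invented coefficients (not the paper's fitted values):

```python
import math

def midilli_moisture_ratio(t_min, a=1.0, k=0.05, n=1.1, b=-1e-4):
    """Modified Midilli thin-layer drying model, MR(t) = a*exp(-k*t**n) + b*t.
    The coefficients here are illustrative, not the paper's fitted values."""
    return a * math.exp(-k * t_min**n) + b * t_min

def arrhenius_k(k0, ea_j_per_mol, temp_k, r=8.314):
    """Arrhenius rate constant, k = k0 * exp(-Ea / (R*T)), with Ea in J/mol."""
    return k0 * math.exp(-ea_j_per_mol / (r * temp_k))

print(midilli_moisture_ratio(0.0))    # drying starts at MR = 1
print(midilli_moisture_ratio(30.0))   # MR decays as drying proceeds
print(arrhenius_k(1.0, 22_000.0, 473.15))  # higher T gives a larger k
```

Fitting k at several source temperatures and regressing ln k against 1/T is the usual route to activation energies in the 18-25 kJ/mol range reported here.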

  14. From document to database: modernizing requirements management

    International Nuclear Information System (INIS)

    Giajnorio, J.; Hamilton, S.

    2007-01-01

    The creation, communication, and management of design requirements are central to the successful completion of any large engineering project, both technically and commercially. Design requirements in the Canadian nuclear industry are typically numbered lists in multiple documents created using word processing software. As an alternative, GE Nuclear Products implemented a central requirements management database for a major project at Bruce Power. The database configured the off-the-shelf software product, Telelogic Doors, to GE's requirements structure. This paper describes the advantages realized by this scheme. Examples include traceability from customer requirements through to test procedures, concurrent engineering, and automated change history. (author)

  15. Ensemble perception of emotions in autistic and typical children and adolescents

    Directory of Open Access Journals (Sweden)

    Themelis Karaminis

    2017-04-01

    Full Text Available Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: (a) an ‘ensemble’ emotion discrimination task; (b) a baseline (single-face) emotion discrimination task; and (c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average.

  16. A new method for fatigue life prediction based on the Thick Level Set approach

    NARCIS (Netherlands)

    Voormeeren, L.O.; van der Meer, F.P.; Maljaars, J.; Sluys, L.J.

    2017-01-01

    The last decade has seen a growing interest in cohesive zone models for fatigue applications. These cohesive zone models often suffer from a lack of generality and applying them typically requires calibrating a large number of model-specific parameters. To improve on these issues a new method has

  17. A new method for fatigue life prediction based on the Thick Level set approach

    NARCIS (Netherlands)

    Voormeeren, L.O.; Meer, F.P. van der; Maljaars, J.; Sluys, L.J.

    2017-01-01

    The last decade has seen a growing interest in cohesive zone models for fatigue applications. These cohesive zone models often suffer from a lack of generality and applying them typically requires calibrating a large number of model-specific parameters. To improve on these issues a new method has

  18. For Your Local Eyes Only: Culture-Specific Face Typicality Influences Perceptions of Trustworthiness.

    Science.gov (United States)

    Sofer, Carmel; Dotsch, Ron; Oikawa, Masanori; Oikawa, Haruka; Wigboldus, Daniel H J; Todorov, Alexander

    2017-08-01

    Recent findings show that typical faces are judged as more trustworthy than atypical faces. However, it is not clear whether employment of typicality cues in trustworthiness judgment happens across cultures and if these cues are culture specific. In two studies, conducted in Japan and Israel, participants judged trustworthiness and attractiveness of faces. In Study 1, faces varied along a cross-cultural dimension ranging from a Japanese to an Israeli typical face. Own-culture typical faces were perceived as more trustworthy than other-culture typical faces, suggesting that people in both cultures employ typicality cues when judging trustworthiness, but that the cues, indicative of typicality, are culture dependent. Because perceivers may be less familiar with other-culture typicality cues, Study 2 tested the extent to which they rely on available facial information other than typicality, when judging other-culture faces. In Study 2, Japanese and Israeli faces varied from either Japanese or Israeli attractive to unattractive with the respective typical face at the midpoint. For own-culture faces, trustworthiness judgments peaked around own-culture typical face. However, when judging other-culture faces, both cultures also employed attractiveness cues, but this effect was more apparent for Japanese participants. Our findings highlight the importance of culture when considering the effect of typicality on trustworthiness judgments.

  19. From Requirements via Colored Workflow Nets to an Implementation in Several Workflow Systems

    DEFF Research Database (Denmark)

    Mans, Ronny S.; van der Aalst, Willibrordus Martinus Pancratius; Molemann, A.J.

    2007-01-01

    Care organizations, such as hospitals, need to support complex and dynamic workflows. Moreover, many disciplines are involved. This makes it important to avoid the typical disconnect between requirements and the actual implementation of the system. This paper proposes an approach where an Executable Use Case (EUC) and Colored Workflow Nets...

  20. Development of a segmentation method for analysis of Campos basin typical reservoir rocks

    Energy Technology Data Exchange (ETDEWEB)

    Rego, Eneida Arendt; Bueno, Andre Duarte [Universidade Estadual do Norte Fluminense Darcy Ribeiro (UENF), Macae, RJ (Brazil). Lab. de Engenharia e Exploracao de Petroleo (LENEP)]. E-mails: eneida@lenep.uenf.br; bueno@lenep.uenf.br

    2008-07-01

    This paper presents a master's thesis proposal in Exploration and Reservoir Engineering whose objective is to develop a specific segmentation method for digital images of reservoir rocks that produces better results than the global methods available in the literature for determining physical properties of rocks, such as porosity and permeability. (author)

  1. Investigating hydrogen peroxide in rainwater of a typical midsized city in tropical Brazil using a novel application of a fluorometric method

    Science.gov (United States)

    Scaramboni, C.; Crispim, C. P.; Toledo, J. C.; Campos, M. L. A. M.

    2018-03-01

    This work investigates the effect of public policies related to vehicle emissions on the lower tropospheric concentrations of H2O2 in a typical midsized city in tropical Brazil. The concentrations of H2O2, SO42-, and NO3- in rainwater samples were determined from 2014 to 2017 in the municipality of Ribeirão Preto in São Paulo State. A fluorometric method, based on the formation of a highly fluorescent product (2′,7′-dichlorofluorescein, DCF), was adapted and optimized for the measurement of H2O2 in natural water samples, including seawater. The method was highly specific, accurate, and sensitive (LOD = 2 nmol L-1). Its main advantage compared to other methods was that the fluorophore remained stable for at least 48 h, offering a longer time interval in which to perform the analysis and therefore facilitating fieldwork. Concentrations of H2O2 in rainwater ranged from 5.8 to 96 μmol L-1, with a VWM of 28.6 ± 1.4 μmol L-1 (n = 77). Solar radiation appeared to have a greater impact on the production than on the consumption of H2O2. The annual VWM concentrations of H2O2 in rainwater were negatively correlated with sulfate (at pH […] important factors affecting the concentration of H2O2 in the atmosphere. This work expands the current records available for the Southern Hemisphere, where there is a considerable paucity of information regarding the temporal production and loss of atmospheric H2O2.

  2. Study on safety evaluation method for impact protection structures of spent nuclear fuel carriers

    International Nuclear Information System (INIS)

    Endo, Hisayoshi; Yamada, Yasuhira; Hashizume, Yutaka

    2004-01-01

    From a safety assessment viewpoint, ships transporting spent nuclear fuels, plutonium-bearing fuels such as MOX (mixed oxide) fuel, and high-level radioactive wastes are required to have protective structures against collision accidents. This requirement is now being reviewed in keeping with present-day conditions. Here, as a typical scenario, the probabilistic safety of a collision with a VLCC (very large crude carrier) was examined. FEM (finite element method) simulation analyses, and new simplified analyses developed in place of the experience-based Minorsky method, were used to analyze the collision strength, and their validity was examined. (A. Hishinuma)

  3. Mission from Mars - a method for exploring user requirements for children in a narrative space

    DEFF Research Database (Denmark)

    Dindler, Christian; Ludvigsen, Martin; Lykke-Olesen, Andreas

    2005-01-01

    In this paper a particular design method is proposed as a supplement to existing descriptive approaches to current-practice studies, especially suitable for gathering requirements for the design of children's technology. The Mission from Mars method was applied during the design of an electronic school bag (eBag). The three-hour collaborative session provides first-hand insight into children's practice in a fun and intriguing way. The method is proposed as a supplement to existing descriptive design methods for interaction design and children.

  4. Spectra of conditionalization and typicality in the multiverse

    Science.gov (United States)

    Azhar, Feraz

    2016-02-01

    An approach to testing theories describing a multiverse, which has gained interest of late, involves comparing theory-generated probability distributions over observables with their experimentally measured values. It is likely that such distributions, were we indeed able to calculate them unambiguously, would assign low probabilities to any such experimental measurements. An alternative to thereby rejecting these theories is to conditionalize the distributions involved by restricting attention to domains of the multiverse in which we might arise. In order to elicit a crisp prediction, however, one needs to make a further assumption about how typical we are of the chosen domains. In this paper, we investigate interactions between the spectra of available assumptions regarding both conditionalization and typicality, and draw out the effects of these interactions in a concrete setting; namely, on predictions of the total number of species that contribute significantly to dark matter. In particular, for each conditionalization scheme studied, we analyze how correlations between densities of different dark matter species affect the prediction, and explicate the effects of assumptions regarding typicality. We find that the effects of correlations can depend on the conditionalization scheme, and that in each case atypicality can significantly change the prediction. In doing so, we demonstrate the existence of overlaps in the predictions of different "frameworks" consisting of conjunctions of theory, conditionalization scheme and typicality assumption. This conclusion highlights the acute challenges involved in using such tests to identify a preferred framework that aims to describe our observational situation in a multiverse.

  5. Unsupervised Segmentation Methods of TV Contents

    Directory of Open Access Journals (Sweden)

    Elie El-Khoury

    2010-01-01

    Full Text Available We present a generic algorithm to address various temporal segmentation topics in audiovisual content, such as speaker diarization and shot or program segmentation. Based on a GLR approach involving the ΔBIC criterion, this algorithm requires the values of only a few parameters to produce segmentation results at a desired scale, and it works on most typical low-level features used in the field of content-based indexing. Results obtained on various corpora are of the same quality as those obtained by dedicated state-of-the-art methods.
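The ΔBIC decision at the heart of such GLR segmentation can be illustrated with a minimal single-feature sketch. This is not the authors' implementation: it assumes standard Gaussian BIC modeling, a 1-D feature, and function names of my choosing.

```python
import math

def delta_bic(window, t, lam=1.0):
    """ΔBIC for splitting a 1-D feature sequence at index t.

    Positive values favor placing a segment boundary at t: two
    Gaussian models fit the data better than one, even after the
    BIC complexity penalty (weighted by lam).
    """
    def n_log_var(x):
        m = sum(x) / len(x)
        v = sum((xi - m) ** 2 for xi in x) / len(x)
        return len(x) * math.log(v + 1e-12)

    n = len(window)
    # d=1 feature dimension: the penalty counts one mean + one variance.
    penalty = lam * 0.5 * 2.0 * math.log(n)
    return 0.5 * (n_log_var(window)
                  - n_log_var(window[:t])
                  - n_log_var(window[t:])) - penalty

def best_boundary(window, min_seg=5):
    """Index with the largest positive ΔBIC, or None if no split wins."""
    best = max(range(min_seg, len(window) - min_seg),
               key=lambda t: delta_bic(window, t))
    return best if delta_bic(window, best) > 0 else None
```

On a sequence formed by concatenating samples from two well-separated Gaussians, `best_boundary` peaks near the true change point; on homogeneous data the penalty keeps ΔBIC negative and no boundary is reported.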

  6. Organisational reviews - requirements, methods and experience. Progress report 2006

    Energy Technology Data Exchange (ETDEWEB)

    Reiman, T.; Oedewald, P.; Wahlstroem, B. [VTT, Technical Research Centre of Finland (Finland); Rollenhagen, C.; Kahlbom, U. [Maelardalen University (FI)

    2007-04-15

    Organisational reviews are important instruments in the continuous quest for improved performance. In the nuclear field there has been increasing regulatory interest in organisational performance, because incidents and accidents often point to organisational deficiencies as one of the major precursors. Many methods for organisational reviews have been proposed, but they are mostly based on ad hoc approaches to specific problems. The absence of well-established techniques for organisational reviews has already been shown to cause discussions and controversies on different levels. The aim of the OrRe project is to collect the experience from organisational reviews carried out so far and to reflect it in a theoretical model of organisational performance. Furthermore, the project aims to reflect on the criteria for the definition of the scope and content of organisational reviews. Finally, recommendations will be made for guidance for people participating in organisational reviews. This progress report describes regulatory practices in Finland and Sweden, together with some case examples of organisational reviews and assessments in both countries. Some issues of concern are raised and an outline for next year's work is proposed. Issues of concern include the sufficient depth of the assessment, the competence required in assessments, data and criteria problems, the definition of the boundaries of the system to be assessed, and the internal support and organisational maturity required for successful assessments. Finally, plans for next year's work are outlined. (au)

  7. Organisational reviews - requirements, methods and experience. Progress report 2006

    International Nuclear Information System (INIS)

    Reiman, T.; Oedewald, P.; Wahlstroem, B.; Rollenhagen, C.; Kahlbom, U.

    2007-04-01

    Organisational reviews are important instruments in the continuous quest for improved performance. In the nuclear field there has been increasing regulatory interest in organisational performance, because incidents and accidents often point to organisational deficiencies as one of the major precursors. Many methods for organisational reviews have been proposed, but they are mostly based on ad hoc approaches to specific problems. The absence of well-established techniques for organisational reviews has already been shown to cause discussions and controversies on different levels. The aim of the OrRe project is to collect the experience from organisational reviews carried out so far and to reflect it in a theoretical model of organisational performance. Furthermore, the project aims to reflect on the criteria for the definition of the scope and content of organisational reviews. Finally, recommendations will be made for guidance for people participating in organisational reviews. This progress report describes regulatory practices in Finland and Sweden, together with some case examples of organisational reviews and assessments in both countries. Some issues of concern are raised and an outline for next year's work is proposed. Issues of concern include the sufficient depth of the assessment, the competence required in assessments, data and criteria problems, the definition of the boundaries of the system to be assessed, and the internal support and organisational maturity required for successful assessments. Finally, plans for next year's work are outlined. (au)

  8. Food and Wine Tourism: an Analysis of Italian Typical Products

    Directory of Open Access Journals (Sweden)

    Francesco Maria Olivieri

    2015-06-01

    Full Text Available The aim of this work is to examine the specific role of local food productions in relation to the tourism sector, for the valorization and promotion of the territorial cultural heritage. Modern agriculture has evolved and, in recent years, several specific features have emerged in different territorial areas. Tourists would like to have a complete consumption experience of a destination, specifically of its natural and cultural heritage and genuine food. This contribution addresses topics connected to the relationship between the typical productions system and the tourism sector, to underline the competitive advantages for local development. The typical productions are Designation of Protected Origin (Italian DOP, with the wine certifications DOCG and DOC) and Typical Geographical Indication (IGP, with the wine IGT). The aim is an analysis of the specialization of these kinds of production at the Italian regional scale. The implication of the work is connected with defining the necessary and appropriate value strategies, based on marketing principles, in order to translate the benefit of typical productions into additional value for the local system. Thus, the final part of the paper describes the potential dynamics between the suitable accommodation typology of agriturismo and the typical production system of the Italian Administrative Regions.

  9. PTL: A Propositional Typicality Logic

    CSIR Research Space (South Africa)

    Booth, R

    2012-09-01

    Full Text Available consequence relations first studied by Lehmann and colleagues in the '90s play a central role in nonmonotonic reasoning [13, 14]. This has been the case due to at least three main reasons. Firstly, they are based on semantic constructions that are elegant… The semantics of (propositional) rational consequence is given in terms of ranked models. These are partially ordered structures in which the ordering is modular. Definition 1. Given a set S…

  10. The synchronization method for distributed small satellite SAR

    Science.gov (United States)

    Xing, Lei; Gong, Xiaochun; Qiu, Wenxun; Sun, Zhaowei

    2007-11-01

    One critical requirement for distributed small-satellite SAR is the precision of the trigger time when all satellites turn on their radar payloads. This trigger operation is controlled by a dedicated communication tool or a GPS system. In this paper a hardware platform is proposed which integrates the navigation, attitude control, and data handling systems. Based on it, a probabilistic synchronization method with a ring architecture is proposed to meet the SAR time-precision requirement. To simplify the design of the transceiver, half-duplex communication is used in this method. Research shows that the achievable time precision depends on the relative frequency drift rate, the number of satellites, the number of retries, the read error, and the round-trip delay length. Equipped with a crystal oscillator of 10^-11 short-term stability, this platform can achieve and maintain a nanosecond-order time error in a typical three-satellite formation experiment during the whole operating process.

  11. Performance of methods for estimation of table beet water requirement in Alagoas

    Directory of Open Access Journals (Sweden)

    Daniella P. dos Santos

    Full Text Available ABSTRACT Optimization of water use in agriculture is fundamental, particularly in regions where water scarcity is intense, requiring the adoption of technologies that promote increased irrigation efficiency. The objective of this study was to evaluate evapotranspiration models and to estimate the crop coefficients of beet grown in a drainage lysimeter in the Agreste region of Alagoas. The experiment was conducted at the Campus of the Federal University of Alagoas - UFAL, in the municipality of Arapiraca, AL, between March and April 2014. Crop evapotranspiration (ETc was estimated in drainage lysimeters and reference evapotranspiration (ETo by Penman-Monteith-FAO 56 and Hargreaves-Samani methods. The Hargreaves-Samani method presented a good performance index for ETo estimation compared with the Penman-Monteith-FAO method, indicating that it is adequate for the study area. Beet ETc showed a cumulative demand of 202.11 mm for a cumulative reference evapotranspiration of 152.00 mm. Kc values determined using the Penman-Monteith-FAO 56 and Hargreaves-Samani methods were overestimated, in comparison to the Kc values of the FAO-56 standard method. With the obtained results, it is possible to correct the equations of the methods for the region, allowing for adequate irrigation management.
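For reference, the Hargreaves-Samani estimate compared above has a simple closed form that needs only temperature extremes and extraterrestrial radiation. The sketch below is the standard 1985 form of the equation, not code from the study; the variable names are mine.

```python
def hargreaves_samani(tmax, tmin, ra_mm):
    """Reference evapotranspiration ETo (mm/day), Hargreaves-Samani (1985).

    tmax, tmin: daily maximum/minimum air temperature (deg C)
    ra_mm: extraterrestrial radiation expressed as equivalent
           evaporation (mm/day); it depends only on latitude and
           day of year, so no measured radiation data are needed.
    """
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * (tmean + 17.8) * (tmax - tmin) ** 0.5 * ra_mm
```

Because the only inputs are temperatures and an astronomically determined radiation term, the method suits stations without full meteorological records, which is why it is often benchmarked against Penman-Monteith-FAO 56 as in the study above.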

  12. Mother-Child Play: Children with Down Syndrome and Typical Development

    Science.gov (United States)

    Venuti, P.; de Falco, S.; Esposito, G.; Bornstein, Marc H.

    2009-01-01

    Children's solitary play and collaborative mother-child play were compared for 21 children with Down syndrome and 33 mental-age-matched typically developing children. In solitary play, children with Down syndrome showed less exploratory but similar symbolic play compared to typically developing children. From solitary to collaborative play, children with…

  13. Applying formal method to design of nuclear power plant embedded protection system

    International Nuclear Information System (INIS)

    Kim, Jin Hyun; Kim, Il Gon; Sung, Chang Hoon; Choi, Jin Young; Lee, Na Young

    2001-01-01

    A nuclear power plant embedded protection system is a typical safety-critical system: it detects failures and shuts down the operation of the nuclear reactor. Because failures of such systems are very dangerous, they absolutely require safety and reliability. Therefore, a nuclear power embedded protection system should be completely verified and validated from the design stage onward. Various V and V methods have been provided for developing embedded systems, and design using formal methods in particular is being studied in advanced countries. In this paper, we introduce design methods for nuclear power embedded protection systems using various formal methods, in various respects, following the nuclear power plant software development guideline.

  14. Increasing the pump-up rate to polarize 3He gas using spin-exchange optical pumping method

    International Nuclear Information System (INIS)

    Lee, W.T.; Tong Xin; Rich, Dennis; Liu Yun; Fleenor, Michael; Ismaili, Akbar; Pierce, Joshua; Hagen, Mark; Dadras, Jonny; Robertson, J. Lee

    2009-01-01

    In recent years, polarized 3He gas has increasingly been used in neutron polarizers and polarization analyzers. Two of the leading methods to polarize 3He gas are the spin-exchange optical pumping (SEOP) method and the metastable-exchange optical pumping (MEOP) method. At present, the SEOP setup is comparatively compact, because it does not require the sophisticated compressor system used in the MEOP method. The temperature and the available laser power determine the speed at which the SEOP method polarizes the 3He gas. For the quantity of gas typically used in neutron scattering work, this speed is independent of the quantity of gas required, whereas the polarizing time using the MEOP method is proportional to the quantity of gas required. Currently, using the SEOP method to polarize several bar-liters of 3He to 70% polarization would require 20-40 h. This is an order of magnitude longer than the MEOP method for the same quantity of gas and polarization. It would therefore be advantageous to speed up the SEOP process. In this article, we analyze the requirements for temperature, laser power, and the type of alkali used in order to shorten the time required to polarize 3He gas using the SEOP method.
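The pump-up times discussed above follow from the usual exponential polarization build-up in SEOP. The sketch below encodes the standard rate equation P(t) = P_A * γse/(γse + Γ) * (1 - exp(-(γse + Γ)t)); the numerical rates are illustrative placeholders of my choosing, not values measured in the article.

```python
import math

def he3_polarization(t_hours, gamma_se=1 / 8.0, gamma_relax=1 / 50.0,
                     p_alkali=1.0):
    """3He polarization after t_hours of spin-exchange optical pumping.

    gamma_se:    spin-exchange rate (per hour) -- placeholder value
    gamma_relax: 3He relaxation rate (per hour) -- placeholder value
    p_alkali:    alkali-metal polarization (dimensionless)

    Standard SEOP build-up toward the saturation level
    P_A * gamma_se / (gamma_se + gamma_relax).
    """
    rate = gamma_se + gamma_relax
    return p_alkali * (gamma_se / rate) * (1.0 - math.exp(-rate * t_hours))
```

The model makes the trade-off in the abstract explicit: raising the spin-exchange rate (via temperature and laser power) both lifts the saturation polarization and shortens the exponential time constant, which is exactly the lever the authors analyze.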

  15. Narrative versus Style : Effect of Genre Typical Events versus Genre Typical Filmic Realizations on Film Viewers' Genre Recognition

    NARCIS (Netherlands)

    Visch, V.; Tan, E.

    2008-01-01

    This study investigated whether film viewers recognize four basic genres (comic, drama, action and nonfiction) on the basis of genre-typical event cues or of genre-typical filmic realization cues of events. Event cues are similar to the narrative content of a film sequence, while filmic realization

  16. Typical electric bills, January 1, 1981

    International Nuclear Information System (INIS)

    1981-01-01

    The Typical Electric Bills report is prepared by the Electric Power Division; Office of Coal, Nuclear, Electric and Alternate Fuels; Energy Information Administration; Department of Energy. The publication is geared to a variety of applications by electric utilities, industry, consumers, educational institutions, and government, in recognition of the growing importance of energy planning in contemporary society. 19 figs., 18 tabs

  17. Everyday social and conversation applications of theory-of-mind understanding by children with autism-spectrum disorders or typical development.

    Science.gov (United States)

    Peterson, Candida C; Garnett, Michelle; Kelly, Adrian; Attwood, Tony

    2009-02-01

    Children with autism-spectrum disorders (ASD) often fail laboratory false-belief tests of theory of mind (ToM). Yet how this impacts on their everyday social behavior is less clear, partly owing to uncertainty over which specific everyday conversational and social skills require ToM understanding. A new caregiver-report scale of these everyday applications of ToM was developed and validated in two studies. Study 1 obtained parent ratings of 339 children (85 with autism; 230 with Asperger's; 24 typically-developing) on the new scale and results revealed (a) that the scale had good psychometric properties and (b) that children with ASD had significantly more everyday mindreading difficulties than typical developers. In Study 2, we directly tested links between laboratory ToM and everyday mindreading using teacher ratings on the new scale. The sample of 25 children included 15 with autism and 10 typical developers aged 5-12 years. Children in both groups who passed laboratory ToM tests had fewer everyday mindreading difficulties than those of the same diagnosis who failed. Yet, intriguingly, autistic ToM-passers still had more problems with everyday mindreading than younger typically-developing ToM-failers. The possible roles of family conversation and peer interaction, along with ToM, in everyday social functioning were considered.

  18. A Typical Synergy

    Science.gov (United States)

    van Noort, Thomas; Achten, Peter; Plasmeijer, Rinus

    We present a typical synergy between dynamic types (dynamics) and generalised algebraic datatypes (GADTs). The former provides a clean approach to integrating dynamic typing in a statically typed language. It allows values to be wrapped together with their type in a uniform package, deferring type unification until run time using a pattern match annotated with the desired type. The latter allows for the explicit specification of constructor types, as to enforce their structural validity. In contrast to ADTs, GADTs are heterogeneous structures since each constructor type is implicitly universally quantified. Unfortunately, pattern matching only enforces structural validity and does not provide instantiation information on polymorphic types. Consequently, functions that manipulate such values, such as a type-safe update function, are cumbersome due to boilerplate type representation administration. In this paper we focus on improving such functions by providing a new GADT annotation via a natural synergy with dynamics. We formally define the semantics of the annotation and touch on novel other applications of this technique such as type dispatching and enforcing type equality invariants on GADT values.

  19. Impact of typical rather than nutrient-dense food choices in the US Department of Agriculture Food Patterns.

    Science.gov (United States)

    Britten, Patricia; Cleveland, Linda E; Koegel, Kristin L; Kuczynski, Kevin J; Nickols-Richardson, Sharon M

    2012-10-01

    The US Department of Agriculture (USDA) Food Patterns, released as part of the 2010 Dietary Guidelines for Americans, are designed to meet nutrient needs without exceeding energy requirements. They identify amounts to consume from each food group and recommend that nutrient-dense forms (lean or low-fat, without added sugars or salt) be consumed. Americans fall short of most food group intake targets and do not consume foods in nutrient-dense forms. Intakes of calories from solid fats and added sugars exceed maximum limits by large margins. Our aim was to determine the potential effect on meeting USDA Food Pattern nutrient adequacy and moderation goals if Americans consumed the recommended quantities from each food group but did not implement the advice to select nutrient-dense forms of food, and instead made more typical food choices. Food-pattern modeling analysis using the USDA Food Patterns, which are structured to allow modifications in one or more aspects of the patterns, was used. Nutrient profiles for each food group were modified by replacing each nutrient-dense representative food with a similar but typical choice. Typical nutrient profiles were used to determine the energy and nutrient content of the food patterns. Moderation goals are not met when amounts of food in the USDA Food Patterns are followed and typical rather than nutrient-dense food choices are made. Energy, total fat, saturated fat, and sodium exceed limits in all patterns, often by substantial margins. With typical choices, calories were 15% to 30% (ie, 350 to 450 kcal) above the target calorie level for each pattern. Adequacy goals were not substantially affected by the use of typical food choices. If consumers consume the recommended quantities from each food group and subgroup but fail to choose foods in low-fat, no-added-sugars, and low-sodium forms, they will not meet the USDA Food Patterns moderation goals or the 2010 Dietary Guidelines for Americans.

  20. Shared temporoparietal dysfunction in dyslexia and typical readers with discrepantly high IQ.

    Science.gov (United States)

    Hancock, Roeland; Gabrieli, John D E; Hoeft, Fumiko

    2016-12-01

    It is currently believed that reading disability (RD) should be defined by reading level without regard to broader aptitude (IQ). There is debate, however, about how to classify individuals who read in the typical range but less well than would be expected by their higher IQ. We used functional magnetic resonance imaging (fMRI) in 49 children to examine whether those with typical, but discrepantly low reading ability relative to IQ, show dyslexia-like activation patterns during reading. Children who were typical readers with high-IQ discrepancy showed reduced activation in left temporoparietal neocortex relative to two control groups of typical readers without IQ discrepancy. This pattern was consistent and spatially overlapping with results in children with RD compared to typically reading children. The results suggest a shared neurological atypicality in regions associated with phonological processing between children with dyslexia and children with typical reading ability that is substantially below their IQ.

  1. A simple method to downscale daily wind statistics to hourly wind data

    OpenAIRE

    Guo, Zhongling

    2013-01-01

    Wind is the principal driver in wind erosion models, and hourly wind speed data are generally required for precise wind erosion modeling. In this study, a simple method to generate hourly wind speed data from daily wind statistics (daily average and maximum wind speeds together, or daily average wind speed only) was established. A typical windy location with 3285 days (9 years) of measured hourly wind speed data was used to validate the downscaling method. The results showed that the over…
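The downscaling idea, spreading a daily mean and maximum over 24 hourly values, can be sketched with a simple cosine diurnal profile. This is only one plausible scheme under my own assumptions (a single afternoon peak, mean preserved exactly), not the specific method established in the thesis.

```python
import math

def hourly_from_daily(mean_ws, max_ws, peak_hour=15):
    """Distribute a daily mean/max wind speed (m/s) over 24 hours.

    Illustrative cosine diurnal cycle: the speed peaks at
    `peak_hour` with the daily maximum and averages exactly to
    `mean_ws` over the day. In practice, values below zero would
    need clamping when max_ws > 2 * mean_ws.
    """
    amp = max_ws - mean_ws
    return [mean_ws + amp * math.cos(2 * math.pi * (h - peak_hour) / 24)
            for h in range(24)]
```

Because the cosine averages to zero over a full period, the daily mean is conserved by construction, which is the minimum consistency requirement any such downscaling scheme must satisfy before validation against measured hourly data.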

  2. Advancing Continence in Typically Developing Children: Adapting the Procedures of Foxx and Azrin for Primary Care.

    Science.gov (United States)

    Warzak, William J; Forcino, Stacy S; Sanberg, Sela Ann; Gross, Amy C

    2016-01-01

    To (1) identify and summarize procedures of Foxx and Azrin's classic toilet training protocol that continue to be used in training typically developing children and (2) adapt recent findings with the original Foxx and Azrin procedures to inform practical suggestions for the rapid toilet training of typically developing children in the primary care setting. Literature searches of PsychINFO and MEDLINE databases used the search terms "(toilet* OR potty* AND train*)." Selection criteria were only peer-reviewed experimental articles that evaluated intensive toilet training with typically developing children. Exclusion criteria were (1) nonpeer reviewed research, (2) studies addressing encopresis and/or enuresis, (3) studies excluding typically developing children, and (4) studies evaluating toilet training during infancy. In addition to the study of Foxx and Azrin, only 4 publications met the above criteria. Toilet training procedures from each article were reviewed to determine which toilet training methods were similar to components described by Foxx and Azrin. Common training elements include increasing the frequency of learning opportunities through fluid loading and having differential consequences for being dry versus being wet and for voiding in the toilet versus elsewhere. There is little research on intensive toilet training of typically developing children. Practice sits and positive reinforcement for voids in the toilet are commonplace, consistent with the Foxx and Azrin protocol, whereas positive practice as a corrective procedure for wetting accidents often is omitted. Fluid loading and differential consequences for being dry versus being wet and for voiding in the toilet also are suggested procedures, consistent with the Foxx and Azrin protocol.

  3. A quantitative evaluation of seismic margin of typical sodium piping

    International Nuclear Information System (INIS)

    Morishita, Masaki

    1999-05-01

    It is widely recognized that current seismic design methods for piping involve a large safety margin. From this viewpoint, a series of seismic analyses and evaluations using various design codes were made on typical LMFBR main sodium piping systems. The actual capability of the piping systems against seismic loads was also estimated. Margins contained in the current codes were quantified based on these results, and the potential benefits and impacts of a possible mitigation of the current code allowables on piping seismic design were assessed. From the study, the following points were clarified: 1) A combination of inelastic time-history analysis and true (without margin) strength capability allows seismic loads several to twenty times as large as the allowable load under the current methods. 2) The new rule of the ASME is relatively compatible with the results of the inelastic analysis evaluation. Hence, this new rule might be a goal for the mitigation of the seismic design rule. 3) With this mitigation, seismic design accommodations such as equipping systems with a large number of seismic supports may become unnecessary. (author)

  4. Effects of circumferential rigid wrist orthoses in rehabilitation of patients with radius fracture at typical site

    Directory of Open Access Journals (Sweden)

    Đurović Aleksandar

    2005-01-01

    Full Text Available Background. The use of orthoses is a questionable rehabilitation method for patients with distal radius fracture at the typical site. The aim of this study was to compare the effects of rehabilitation on patients with radius fracture at the typical site who wore circumferential static wrist orthoses with those who did not wear them. Methods. Thirty patients were divided into 3 equal groups: 2 experimental groups and 1 control group. The patients in the experimental groups were given a rehabilitation program that included wearing serially manufactured (off-the-shelf) as well as custom-fit orthoses. Those in the control group did not wear wrist orthoses. Evaluation parameters were pain, edema, the range of wrist motion, the quality of cylindrical, spherical, and pinch-spherical grasp, the strength of pinch and hand grasp, and the patient's assessment of the effects of rehabilitation. Results. No significant difference was found in the effects of rehabilitation between the experimental groups and the control group. Patients in the first experimental group and in the control group were more satisfied with the effects of rehabilitation than the patients in the second experimental group (p < 0.05). Conclusion. The effects of circumferential static wrist orthoses in the rehabilitation of patients with distal radius fracture at the typical site were not clinically significant. There was no significant difference between the custom-fit and off-the-shelf orthoses.

  5. Contribution of milk production to global greenhouse gas emissions. An estimation based on typical farms.

    Science.gov (United States)

    Hagemann, Martin; Ndambi, Asaah; Hemme, Torsten; Latacz-Lohmann, Uwe

    2012-02-01

    Studies on the contribution of milk production to global greenhouse gas (GHG) emissions are rare (FAO 2010) and are often based on crude data that do not appropriately reflect the heterogeneity of farming systems. This article estimates GHG emissions from milk production in different dairy regions of the world based on harmonised farm data and assesses the contribution of milk production to global GHG emissions. The methodology comprises three elements: (1) the International Farm Comparison Network (IFCN) concept of typical farms and the related globally standardised dairy model farms representing 45 dairy regions in 38 countries; (2) a partial life cycle assessment model for estimating GHG emissions of the typical dairy farms; and (3) standard regression analysis to estimate GHG emissions from milk production in countries for which no typical farms are available in the IFCN database. Across the 117 typical farms in the 38 countries analysed, the average emission rate is 1.50 kg CO(2) equivalents (CO(2)-eq.)/kg milk. The contribution of milk production to global anthropogenic emissions is estimated at 1.3 Gt CO(2)-eq./year, accounting for 2.65% of total global anthropogenic emissions (49 Gt; IPCC, Synthesis Report for Policy Makers, Valencia, Spain, 2007). We emphasise that our estimates of the contribution of milk production to global GHG emissions are subject to uncertainty. Part of the uncertainty stems from the choice of the appropriate methods for estimating emissions at the level of the individual animal.
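The headline share quoted above follows directly from the two totals given in the abstract; a back-of-envelope check:

```python
# Totals as quoted in the abstract.
milk_emissions_gt = 1.3     # Gt CO2-eq/year attributed to milk production
global_emissions_gt = 49.0  # Gt CO2-eq/year, total anthropogenic (IPCC 2007)

# 1.3 / 49 * 100 reproduces the 2.65% figure reported in the study.
milk_share_pct = milk_emissions_gt / global_emissions_gt * 100
```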

  6. Response to dynamic language tasks among typically developing Latino preschool children with bilingual experience.

    Science.gov (United States)

    Patterson, Janet L; Rodríguez, Barbara L; Dale, Philip S

    2013-02-01

    The purpose of this study was to determine whether typically developing preschool children with bilingual experience show evidence of learning within brief dynamic assessment language tasks administered in a graduated prompting framework. Dynamic assessment has shown promise for accurate identification of language impairment in bilingual children, and a graduated prompting approach may be well-suited to screening for language impairment. Three dynamic language tasks with graduated prompting were presented to 32 typically developing 4-year-olds in the language to which the child had the most exposure (16 Spanish, 16 English). The tasks were a novel word learning task, a semantic task, and a phonological awareness task. Children's performance was significantly higher on the last 2 items compared with the first 2 items for the semantic and the novel word learning tasks among children who required a prompt on the 1st item. There was no significant difference between the 1st and last items on the phonological awareness task. Within-task improvements in children's performance for some tasks administered within a brief, graduated prompting framework were observed. Thus, children's responses to graduated prompting may be an indicator of modifiability, depending on the task type and level of difficulty.

  7. Maturation of social attribution skills in typically developing children: an investigation using the social attribution task

    Directory of Open Access Journals (Sweden)

    Chan Raymond CK

    2010-02-01

    Full Text Available Abstract Background The assessment of social attribution skills in children can potentially identify and quantify developmental difficulties related to autism spectrum disorders and related conditions. However, relatively little is known about how these skills develop in typically developing children. Therefore, the present study aimed to map the trajectory of social attribution skill acquisition in typically developing children from a young age. Methods In the conventional social attribution task (SAT), participants ascribe feelings to moving shapes and describe their interaction in social terms. However, this format requires that participants understand both that an inanimate shape is symbolic and that its action is social in nature. This may be challenging for young children and may be a potential confounder in studies of children with developmental disorders. Therefore, we developed a modified SAT (mSAT) using animate figures (e.g. animals) to simplify the task. We used the SAT and mSAT to examine social attribution skill development in 154 healthy children (76 boys, 78 girls), ranging in age from 6 to 13 years, and investigated the relationship between social attribution ability and executive function. Results The mSAT revealed a steady improvement in social attribution skills from the age of 6 years, and a significant advantage for girls compared to boys. In contrast, children under the age of 9 years performed at baseline on the conventional format, and there were no gender differences apparent. Performance on neither task correlated with executive function after controlling for age and verbal IQ, suggesting that social attribution ability is independent of cognitive functioning. The present findings indicate that the mSAT is a sensitive measure of social attribution skills from a young age. This should be carefully considered when choosing assessments for young children and those with developmental disorders.

  8. Delay generation methods with reduced memory requirements

    DEFF Research Database (Denmark)

    Tomov, Borislav Gueorguiev; Jensen, Jørgen Arendt

    2003-01-01

    Modern diagnostic ultrasound beamformers require delay information for each sample along the image lines. In order to avoid storing large amounts of focusing data, delay generation techniques have to be used. In connection with developing a compact beamformer architecture, recursive algorithms were......) For the best parametric approach, the gate count was 2095, the maximum operation speed was 131.9 MHz, the power consumption at 40 MHz was 10.6 mW, and it requires 4 12-bit words for each image line and channel. 2) For the piecewise-linear approximation, the corresponding numbers are 1125 gates, 184.9 MHz, 7...
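The memory trade-off behind such schemes can be illustrated with the focusing-delay geometry itself. The sketch below (all parameters hypothetical, and not the paper's recursive architecture) compares exact dynamic-focusing delays against a piecewise-linear table that stores only a few breakpoints per image line and channel:

```python
import math

def focus_delay(x_elem, depth, c=1540.0):
    # Exact dynamic-focusing delay (seconds) for an element x_elem metres
    # off-axis and an on-axis focal point at the given depth.
    return (math.sqrt(x_elem ** 2 + depth ** 2) - depth) / c

def piecewise_linear(depths, breakpoints, x_elem):
    # Store delays only at the breakpoints and interpolate linearly in
    # between, trading focusing accuracy for table memory.
    out = []
    for d in depths:
        for d0, d1 in zip(breakpoints, breakpoints[1:]):
            if d0 <= d <= d1:
                t0, t1 = focus_delay(x_elem, d0), focus_delay(x_elem, d1)
                out.append(t0 + (t1 - t0) * (d - d0) / (d1 - d0))
                break
        else:  # rounding pushed d just past the last breakpoint
            out.append(focus_delay(x_elem, breakpoints[-1]))
    return out

depths = [0.005 + i * 0.045 / 999 for i in range(1000)]   # 5-50 mm
breaks = [0.005 + i * 0.045 / 7 for i in range(8)]        # 8 stored samples
exact = [focus_delay(0.005, d) for d in depths]
approx = piecewise_linear(depths, breaks, 0.005)
max_err = max(abs(a - b) for a, b in zip(exact, approx))
```

Eight stored samples per channel replace a 1000-entry delay table here; `max_err` quantifies the focusing error the approximation introduces.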

  9. The contribution of diffusion-weighted MR imaging to distinguishing typical from atypical meningiomas

    Energy Technology Data Exchange (ETDEWEB)

    Hakyemez, Bahattin [Uludag University School of Medicine, Department of Radiology, Gorukle, Bursa (Turkey); Bursa State Hospital, Department of Radiology, Bursa (Turkey); Yildirim, Nalan; Gokalp, Gokhan; Erdogan, Cuneyt; Parlak, Mufit [Uludag University School of Medicine, Department of Radiology, Gorukle, Bursa (Turkey)

    2006-08-15

    Atypical/malignant meningiomas recur more frequently than typical meningiomas. In this study, the contribution of diffusion-weighted MR imaging to the differentiation of atypical/malignant from typical meningiomas, and to the determination of the histological subtypes of typical meningiomas, was investigated. The study was performed prospectively on 39 patients. The signal intensity of the lesions was evaluated on trace and apparent diffusion coefficient (ADC) images. ADC values were measured in the lesions and in the peritumoral edema. Student's t-test was used for statistical analysis; P<0.05 was considered statistically significant. Mean ADC values in atypical/malignant and typical meningiomas were 0.75±0.21 and 1.17±0.21, respectively. Mean ADC values for the subtypes of typical meningiomas were as follows: meningothelial, 1.09±0.20; transitional, 1.19±0.07; fibroblastic, 1.29±0.28; and angiomatous, 1.48±0.10. Normal white matter was 0.91±0.10. The ADC values of typical and atypical/malignant meningiomas differed significantly (P<0.001), whereas the difference between the peritumoral edema ADC values was not significant (P>0.05). Furthermore, the difference between the subtypes of typical meningiomas and atypical/malignant meningiomas was significant (P<0.001). Diffusion-weighted MR imaging findings of atypical/malignant and typical meningiomas differ: atypical/malignant meningiomas have lower intratumoral ADC values than typical meningiomas, while mean ADC values for peritumoral edema do not differ between the two. (orig.)
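The group comparison reported above rests on a two-sample Student's t-test. A minimal sketch of that test on synthetic data drawn to match the reported group means and SDs (the group sizes and all values are illustrative, not the study data):

```python
import math, random

random.seed(1)

# Synthetic ADC values drawn to match the reported group statistics
# (0.75 +/- 0.21 vs. 1.17 +/- 0.21); hypothetical sample sizes.
atypical = [random.gauss(0.75, 0.21) for _ in range(13)]
typical = [random.gauss(1.17, 0.21) for _ in range(26)]

def t_statistic(a, b):
    # Two-sample Student's t with pooled variance.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

t = t_statistic(atypical, typical)
```

A strongly negative `t` reflects the lower intratumoral ADC values in the atypical/malignant group; in practice the statistic is compared against the t-distribution to obtain the quoted P-values.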

  10. Requirements of, and operating experience with, gas analyses on high temperature reactors

    International Nuclear Information System (INIS)

    Nieder, R.

    1982-06-01

    Impurities in the helium coolant of the primary circuit of HTGRs are mainly due to ingress of air or water, and occasionally oil. Typical concentrations of H₂O, H₂, CO₂, CO, N₂, CH₄ and Ar are given for the AVR, Dragon, Peach Bottom and Fort St. Vrain reactors. The characteristics of devices for measuring non-active impurities in helium are presented; measuring methods are described and a list is given of required and actual detection limits. Concentrations of solid fission and activation products and of tritium in the primary circuit of the AVR reactor are also given

  11. Memory for sequences of events impaired in typical aging

    Science.gov (United States)

    Allen, Timothy A.; Morris, Andrea M.; Stark, Shauna M.; Fortin, Norbert J.

    2015-01-01

    Typical aging is associated with diminished episodic memory performance. To improve our understanding of the fundamental mechanisms underlying this age-related memory deficit, we previously developed an integrated, cross-species approach to link converging evidence from human and animal research. This novel approach focuses on the ability to remember sequences of events, an important feature of episodic memory. Unlike existing paradigms, this task is nonspatial, nonverbal, and can be used to isolate different cognitive processes that may be differentially affected in aging. Here, we used this task to make a comprehensive comparison of sequence memory performance between younger (18–22 yr) and older adults (62–86 yr). Specifically, participants viewed repeated sequences of six colored, fractal images and indicated whether each item was presented “in sequence” or “out of sequence.” Several out of sequence probe trials were used to provide a detailed assessment of sequence memory, including: (i) repeating an item from earlier in the sequence (“Repeats”; e.g., ABADEF), (ii) skipping ahead in the sequence (“Skips”; e.g., ABDDEF), and (iii) inserting an item from a different sequence into the same ordinal position (“Ordinal Transfers”; e.g., AB3DEF). We found that older adults performed as well as younger controls when tested on well-known and predictable sequences, but were severely impaired when tested using novel sequences. Importantly, overall sequence memory performance in older adults steadily declined with age, a decline not detected with other measures (RAVLT or BPS-O). We further characterized this deficit by showing that performance of older adults was severely impaired on specific probe trials that required detailed knowledge of the sequence (Skips and Ordinal Transfers), and was associated with a shift in their underlying mnemonic representation of the sequences. Collectively, these findings provide unambiguous evidence that the

  12. 21 CFR 111.320 - What requirements apply to laboratory methods for testing and examination?

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false What requirements apply to laboratory methods for testing and examination? 111.320 Section 111.320 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION CURRENT GOOD MANUFACTURING...

  13. Contamination profile on typical printed circuit board assemblies vs soldering process

    DEFF Research Database (Denmark)

    Conseil, Helene; Jellesen, Morten Stendahl; Ambat, Rajan

    2014-01-01

    Purpose – The purpose of this paper was to analyse typical printed circuit board assemblies (PCBAs) processed by reflow, wave or selective wave soldering for typical levels of process-related residues, resulting from a specific or combination of soldering processes. Typical solder flux residue...... structure was identified by Fourier transform infrared spectroscopy, while the concentration was measured using ion chromatography, and the electrical properties of the extracts were determined by measuring the leak current using a twin platinum electrode set-up. Localized extraction of residue was carried...

  14. GENERATION OF A TYPICAL METEOROLOGICAL YEAR FOR PORT HARCOURT ZONE

    Directory of Open Access Journals (Sweden)

    OGOLOMA O.B.

    2011-04-01

    Full Text Available This paper presents data for the typical meteorological year (TMY) for the Port Harcourt climatic zone based on the hourly meteorological data recorded during the period 1983–2002, using the Finkelstein-Schafer statistical method. The data are the global solar radiation, wind velocity, dry bulb temperature, relative humidity, and others. The HVAC outside design conditions for the Port Harcourt climatic zone (latitude 4.44°N, longitude 7.1°E, elevation 20 m) were found to be 26.7°C, 78.6% and 3.5 m/s for the dry bulb temperature, relative humidity and wind speed, respectively, and 13.5 MJ/m²/day for the global solar radiation. The TMY data for the zone are shown to be sufficiently reliable for engineering practice.
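The Finkelstein-Schafer method ranks each candidate month by how closely its empirical distribution tracks the long-term record. A hedged sketch of the core statistic on synthetic temperatures (all values hypothetical, not the Port Harcourt data):

```python
import bisect, random

def empirical_cdf(sorted_vals, x):
    # Fraction of observations <= x.
    return bisect.bisect_right(sorted_vals, x) / len(sorted_vals)

def fs_statistic(long_term, candidate):
    # Finkelstein-Schafer statistic: mean absolute difference between the
    # long-term CDF and a candidate month's CDF, evaluated at the
    # candidate's own observations. Lower means more "typical".
    lt, cand = sorted(long_term), sorted(candidate)
    return sum(abs(empirical_cdf(lt, x) - empirical_cdf(cand, x))
               for x in cand) / len(cand)

random.seed(0)
# Hypothetical daily mean dry-bulb temperatures: a pooled multi-year
# record vs. two candidate months, one typical and one unusually hot.
pooled = [random.gauss(26.7, 2.0) for _ in range(600)]
typical_month = [random.gauss(26.7, 2.0) for _ in range(30)]
hot_month = [random.gauss(29.5, 2.0) for _ in range(30)]

fs_typical = fs_statistic(pooled, typical_month)
fs_hot = fs_statistic(pooled, hot_month)
```

The month with the lower FS value is the one selected into the TMY for that calendar position; in the full method the statistic is a weighted sum over several meteorological variables.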

  15. Review of data requirements for groundwater flow and solute transport modelling and the ability of site investigation methods to meet these requirements

    International Nuclear Information System (INIS)

    McEwen, T.J.; Chapman, N.A.; Robinson, P.C.

    1990-08-01

    This report describes the data requirements for the codes that may be used in the modelling of groundwater flow and radionuclide transport during the assessment of a Nirex site for the deep disposal of low and intermediate level radioactive waste and also the site investigation methods that exist to supply the data for these codes. The data requirements for eight codes are reviewed, with most emphasis on three of the more significant codes, VANDAL, NAMMU and CHEMTARD. The largest part of the report describes and discusses the site investigation techniques and each technique is considered in terms of its ability to provide the data necessary to characterise the geological and hydrogeological environment around a potential repository. (author)

  16. A method of formal requirements analysis for NPP I and C systems based on object-oriented visual modeling with SCR

    International Nuclear Information System (INIS)

    Koo, S. R.; Seong, P. H.

    1999-01-01

    In this work, a formal requirements analysis method for Nuclear Power Plant (NPP) I and C systems is suggested. This method uses the Unified Modeling Language (UML) for modeling systems visually and the Software Cost Reduction (SCR) formalism for checking the system models. Since an object-oriented method can analyze a document in terms of the objects of the real system, UML models are useful for understanding problems and communicating with everyone involved in the project. In order to analyze the requirements more formally, SCR tabular notation is derived from the UML models. To support the transition from UML models to SCR specifications, additional syntactic extensions to the UML notation and a converting procedure are defined. The combined method has been applied to the Dynamic Safety System (DSS). From this application, three kinds of errors were detected in the existing DSS requirements

  17. Applications of hybrid time-frequency methods in nonlinear structural dynamics

    International Nuclear Information System (INIS)

    Politopoulos, I.; Piteau, Ph.; Borsoi, L.; Antunes, J.

    2014-01-01

    This paper presents a study on methods which may be used to compute the nonlinear response of systems whose linear properties are determined in the frequency or Laplace domain. Typically, this kind of situation may arise in soil-structure and fluid-structure interaction problems. In particular three methods are investigated: (a) the hybrid time-frequency method, (b) the computation of the convolution integral which requires an inverse Fourier or Laplace transform of the system's transfer function, and (c) the identification of an equivalent system defined in the time domain which may be solved with classical time integration methods. These methods are illustrated by their application to some simple, one degree of freedom, non-linear systems and their advantages and drawbacks are highlighted. (authors)
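Method (b), computing the response from an inverse transform of the transfer function, can be sketched for a linear single-degree-of-freedom oscillator. The parameters below are hypothetical, and the naive DFT stands in for an FFT; the result is the periodic steady-state response, which illustrates the frequency-domain route the abstract describes:

```python
import cmath, math

def dft(x, inverse=False):
    # Naive O(n^2) discrete Fourier transform; adequate for a short demo.
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[t] * cmath.exp(s * 2j * math.pi * u * t / n)
               for t in range(n)) for u in range(n)]
    return [v / n for v in out] if inverse else out

# Linear SDOF oscillator m*x'' + c*x' + k*x = f(t), characterised in the
# frequency domain by H(w) = 1 / (k - m*w^2 + i*c*w).
m, k = 1.0, (2 * math.pi) ** 2              # natural frequency 1 Hz
c = 2 * 0.05 * math.sqrt(k * m)             # 5 % damping
n, dt = 256, 0.05
f = [math.sin(2 * math.pi * 0.3 * i * dt) for i in range(n)]  # 0.3 Hz load

F = dft(f)
freqs = [(u if u <= n // 2 else u - n) / (n * dt) for u in range(n)]
H = [1 / complex(k - m * (2 * math.pi * fr) ** 2, c * 2 * math.pi * fr)
     for fr in freqs]
# Response: multiply spectra, transform back to the time domain.
x = [v.real for v in dft([hu * fu for hu, fu in zip(H, F)], inverse=True)]
amp = max(abs(v) for v in x)
```

For a nonlinear system this product no longer closes the problem, which is exactly why the hybrid time-frequency iterations discussed in the paper are needed.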

  18. A Study on a Control Method with a Ventilation Requirement of a VAV System in Multi-Zone

    Directory of Open Access Journals (Sweden)

    Hyo-Jun Kim

    2017-11-01

    Full Text Available The objective of this study was to propose a control method with a ventilation requirement for a variable air volume (VAV) system in multi-zone. In order to control a VAV system in multi-zone, it is essential to control the terminal unit installed in each zone. A VAV terminal unit under the conventional control method, which uses a fixed minimum air flow, can cause indoor air quality (IAQ) issues depending on the variation in the number of occupants. This research proposes a control method with a ventilation requirement for the VAV terminal unit and AHU in multi-zone. The integrated control method, with an air-flow increase model for the VAV terminal unit and an outdoor-air intake increase model for the AHU, was based on the indoor CO2 concentration. The conventional and proposed control algorithms were compared in the TRNSYS simulation program. The proposed VAV terminal unit control method satisfies all the conditions of indoor temperature, IAQ, and stratification. An energy comparison with the conventional control method showed that the proposed method not only satisfies indoor thermal comfort, IAQ, and stratification requirements but also reduces energy consumption.
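The core idea, raising supply air above a fixed minimum when zone CO2 climbs, can be sketched with a well-mixed zone model. All numbers below (zone volume, generation rate, gains, setpoints) are hypothetical stand-ins, not the paper's TRNSYS model:

```python
def step_co2(c, occupants, airflow, dt=60.0, vol=250.0, c_out=400.0):
    # Well-mixed zone CO2 balance in ppm; roughly 5e-6 m^3/s CO2 per person.
    gen_ppm = occupants * 5e-6 / vol * 1e6
    return c + dt * (gen_ppm - airflow / vol * (c - c_out))

def controller(c, min_flow=0.05, max_flow=0.5, setpoint=1000.0, gain=0.002):
    # Air-flow increase model: proportional boost of supply air (m^3/s)
    # above the fixed minimum once zone CO2 exceeds the setpoint.
    return min(max_flow, min_flow + gain * max(0.0, c - setpoint))

c_fixed, c_dcv = 400.0, 400.0
for _ in range(240):  # four hours at one-minute steps, 20 occupants
    c_fixed = step_co2(c_fixed, 20, 0.05)           # fixed minimum air flow
    c_dcv = step_co2(c_dcv, 20, controller(c_dcv))  # CO2-based increase
```

With the fixed minimum the zone drifts far above the setpoint, while the CO2-based controller settles near it, which is the IAQ failure mode the abstract attributes to conventional terminal-unit control.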

  19. Test Characteristics of Neck Fullness and Witnessed Neck Pulsations in the Diagnosis of Typical AV Nodal Reentrant Tachycardia

    Science.gov (United States)

    Sakhuja, Rahul; Smith, Lisa M; Tseng, Zian H; Badhwar, Nitish; Lee, Byron K; Lee, Randall J; Scheinman, Melvin M; Olgin, Jeffrey E; Marcus, Gregory M

    2011-01-01

    Summary Background Claims in the medical literature suggest that neck fullness and witnessed neck pulsations are useful in the diagnosis of typical AV nodal reentrant tachycardia (AVNRT). Hypothesis Neck fullness and witnessed neck pulsations have a high positive predictive value in the diagnosis of typical AVNRT. Methods We performed a cross-sectional study of consecutive patients with palpitations presenting to a single electrophysiology (EP) laboratory over a 1-year period. Each patient underwent a standard questionnaire regarding neck fullness and/or witnessed neck pulsations during their palpitations. The reference standard for diagnosis was determined by electrocardiogram and invasive EP studies. Results Comparing typical AVNRT to atrial fibrillation (AF) or atrial flutter (AFL) patients, the proportions with neck fullness and witnessed neck pulsations did not significantly differ: in the best-case scenario (using the upper end of the 95% confidence interval [CI]), none of the positive or negative predictive values exceeded 79%. After restricting the population to those with supraventricular tachycardia other than AF or AFL (SVT), neck fullness again exhibited poor test characteristics; however, witnessed neck pulsations exhibited a specificity of 97% (95% CI 90–100%) and a positive predictive value of 83% (95% CI 52–98%). After adjustment for potential confounders, SVT patients with witnessed neck pulsations had a 7-fold greater odds of having typical AVNRT, p=0.029. Conclusions Although neither neck fullness nor witnessed neck pulsations are useful in distinguishing typical AVNRT from AF or AFL, witnessed neck pulsations are specific for the presence of typical AVNRT among those with SVT. PMID:19479968

  20. Examples of in-service inspections and typical maintenance schedule for low-power research reactors

    International Nuclear Information System (INIS)

    Boeck, H.

    1997-01-01

    In-service inspection methods for low-power research reactors are described, as developed during the past 37 years of operation of the TRIGA reactor Vienna. Special tools have been developed during this period, and their application to maintenance and in-service inspection is discussed. Two practical in-service inspections, at a TRIGA reactor and at an MTR reactor, are presented. Further, a typical maintenance plan for a TRIGA reactor is listed in the annex. (author)

  1. COMPARISON OF SPATIAL INTERPOLATION METHODS FOR WHEAT WATER REQUIREMENT AND ITS TEMPORAL DISTRIBUTION IN HAMEDAN PROVINCE (IRAN)

    Directory of Open Access Journals (Sweden)

    M. H. Nazarifar

    2014-01-01

    Full Text Available Water is the main constraint on the production of agricultural crops. The temporal and spatial variations in the water requirement of agricultural products are limiting factors in the study of the optimum use of water resources in regional planning and management. However, due to the unfavorable distribution and density of meteorological stations, it is not possible to monitor the regional variations precisely. Therefore, there is a need to estimate the evapotranspiration of crops at places where meteorological data are not available and then extend the findings from the points of measurement to the regional scale. Geostatistical methods are among those that can be used for estimating evapotranspiration at the regional scale. The present study investigates different geostatistical methods for the temporal and spatial estimation of the water requirement of the wheat crop in different periods. The study employs data provided by 16 synoptic and climatology meteorological stations in Hamadan province in Iran. Evapotranspiration for each month and for the growth period was determined using the Penman-Monteith and Thornthwaite methods for different water periods based on the Standardized Precipitation Index (SPI). Among the available geostatistical methods, three were selected and analyzed using GS+ software: kriging, cokriging, and inverse distance weighting. Analysis and selection of the most suitable geostatistical method were performed based on two measures, namely the Mean Absolute Error (MAE) and the Mean Bias Error (MBE). The findings suggest that, in general, during the drought period, kriging is the proper method for estimating water requirements for six months: January, February, April, May, August, and December. However, the weighted moving average is a better estimation method for March, June, September, and October. In addition, kriging is the best method for July.
In normal conditions, Kriging is suitable for April, August, December
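The simplest of the compared interpolators, inverse distance weighting, together with the MAE/MBE scoring used for method selection, can be sketched with leave-one-out cross-validation. The station coordinates and seasonal evapotranspiration values below are hypothetical, not the Hamadan data:

```python
import math

def idw(x, y, stations, power=2.0):
    # Inverse distance weighting: a weighted mean of station values with
    # weights proportional to 1 / distance^power.
    num = den = 0.0
    for sx, sy, val in stations:
        d = math.hypot(x - sx, y - sy)
        if d < 1e-9:          # query point coincides with a station
            return val
        w = d ** -power
        num += w * val
        den += w
    return num / den

def mae(obs, est):
    return sum(abs(o - e) for o, e in zip(obs, est)) / len(obs)

def mbe(obs, est):
    return sum(e - o for o, e in zip(obs, est)) / len(obs)

# Hypothetical stations: (x, y, seasonal ET in mm). Leave-one-out
# cross-validation scores the interpolator the way the study scores
# candidate geostatistical methods.
stations = [(0, 0, 610), (10, 2, 640), (3, 9, 590), (8, 8, 655), (5, 4, 625)]
obs, est = [], []
for i, (x, y, v) in enumerate(stations):
    others = stations[:i] + stations[i + 1:]
    obs.append(v)
    est.append(idw(x, y, others))
```

Kriging replaces the fixed distance weights with weights fitted from a variogram, but the MAE/MBE comparison over held-out stations proceeds identically.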

  2. Calculating Electrical Requirements for Direct Current Electric Actuators

    Science.gov (United States)

    2017-11-29

    equation 1. The moment of inertia must be a composite value of all rotating masses including the load, actuator components, and the motor rotor. Both...to the torque required to accelerate, there is a load torque, TL. The load torque is a composite value representing the torque required to overcome...values can typically be incorporated into a conservative composite efficiency value that provides reasonably accurate results. Since this report

  3. Change in convergence and accommodation after two weeks of eye exercises in typical young adults

    OpenAIRE

    Horwood, Anna M.; Toor, Sonia S.; Riddell, Patricia M.

    2014-01-01

    Abstract: Introduction: Although eye exercises appear to help heterophoria, convergence insufficiency and intermittent strabismus, true treatment effects can be confounded by placebo, practice and encouragement factors. This study assessed objective changes in vergence and accommodation responses in typical naïve young adults after two weeks of exercises, compared to control conditions, to assess the extent to which treatment effects occur above these other factors. Methods: 156 asymptomatic young a...

  4. [Inventory and environmental impact of VOCs emission from the typical anthropogenic sources in Sichuan province].

    Science.gov (United States)

    Han, Li; Wang, Xing-Rui; He, Min; Guo, Wei-Guang

    2013-12-01

    Based on Sichuan province environmental statistical survey data and other relevant activity data, volatile organic compound (VOC) emissions from typical anthropogenic sources in Sichuan province were calculated for the year 2011 by applying the emission factor method. In addition, the ozone and secondary organic aerosol formation potentials of these typical anthropogenic sources were discussed. The total VOC emission from these sources was about 482 kt; biomass burning, solvent utilization, industrial processes, storage and distribution of fuel, and fossil fuel combustion contributed 174 kt, 153 kt, 121 kt, 21 kt and 13 kt, respectively. Architectural wall painting, furniture coating, wood decoration painting and artificial board manufacturing were the major emission sectors within solvent utilization, while for industrial processes, 19.4% of the VOC emission came from the wine industry. Chengdu was the largest contributor among the cities of Sichuan, with 112 kt of VOC emissions from these typical anthropogenic sources in 2011. The total ozone formation potential (OFP) of these sources was 1,930 kt. Solvent utilization contributed 50.5% of the total SOA formation potential, biomass burning and industrial processes each contributed about 23%, and storage and distribution of fuel and fossil fuel combustion accounted for 1% and 1.4%, respectively.
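The emission factor method the abstract applies reduces to E_i = A_i × EF_i summed over sources. A minimal sketch with entirely hypothetical activity levels and factors (not the study's data):

```python
# Hypothetical activity data (kt of product handled or fuel burned).
activity = {
    "biomass_burning": 30000.0,
    "solvent_use": 900.0,
    "wine_industry": 260.0,
}
# Hypothetical emission factors (kt VOC per kt of activity).
emission_factor = {
    "biomass_burning": 0.0058,
    "solvent_use": 0.17,
    "wine_industry": 0.09,
}

def voc_inventory(activity, ef):
    # Emission-factor method: E_i = A_i * EF_i, then sum over sources.
    per_source = {k: activity[k] * ef[k] for k in activity}
    return per_source, sum(per_source.values())

per_source, total = voc_inventory(activity, emission_factor)
```

Real inventories of this kind differ only in scale: many more source categories, locally measured or literature emission factors, and activity data taken from the environmental statistical survey.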

  5. A new spinning reserve requirement forecast method for deregulated electricity markets

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    Ancillary services are necessary for maintaining the security and reliability of power systems and constitute an important part of trade in competitive electricity markets. Spinning Reserve (SR) is one of the most important ancillary services for preserving power system stability and integrity in response to the contingencies and disturbances that continuously occur in power systems. Hence, an accurate day-ahead forecast of the SR requirement helps the Independent System Operator (ISO) to conduct a reliable and economic operation of the power system. However, the SR signal has complex, non-stationary and volatile behavior along the time domain and depends greatly on system load. In this paper, a new hybrid forecast engine is proposed for SR requirement prediction. The proposed forecast engine has an iterative training mechanism composed of the Levenberg-Marquardt (LM) learning algorithm and a Real Coded Genetic Algorithm (RCGA), implemented on a Multi-Layer Perceptron (MLP) neural network. The proposed forecast methodology is examined by means of real data of the Pennsylvania-New Jersey-Maryland (PJM) electricity market and the California ISO (CAISO) controlled grid. The obtained forecast results are presented and compared with those of other SR forecast methods. (author)

  6. A new spinning reserve requirement forecast method for deregulated electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Amjady, Nima; Keynia, Farshid [Department of Electrical Engineering, Semnan University, Semnan (Iran)

    2010-06-15

    Ancillary services are necessary for maintaining the security and reliability of power systems and constitute an important part of trade in competitive electricity markets. Spinning Reserve (SR) is one of the most important ancillary services for preserving power system stability and integrity in response to the contingencies and disturbances that continuously occur in power systems. Hence, an accurate day-ahead forecast of the SR requirement helps the Independent System Operator (ISO) to conduct a reliable and economic operation of the power system. However, the SR signal has complex, non-stationary and volatile behavior along the time domain and depends greatly on system load. In this paper, a new hybrid forecast engine is proposed for SR requirement prediction. The proposed forecast engine has an iterative training mechanism composed of the Levenberg-Marquardt (LM) learning algorithm and a Real Coded Genetic Algorithm (RCGA), implemented on a Multi-Layer Perceptron (MLP) neural network. The proposed forecast methodology is examined by means of real data of the Pennsylvania-New Jersey-Maryland (PJM) electricity market and the California ISO (CAISO) controlled grid. The obtained forecast results are presented and compared with those of other SR forecast methods. (author)
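The MLP at the core of such a forecast engine can be sketched in miniature. The LM + genetic hybrid training is beyond a sketch, so plain gradient descent stands in, and the load-to-SR mapping below is a hypothetical toy target, not market data:

```python
import math, random

random.seed(0)

HID = 6  # hidden units of a one-hidden-layer tanh MLP
w = {"w1": [random.uniform(-1, 1) for _ in range(HID)],
     "b1": [0.0] * HID,
     "w2": [random.uniform(-1, 1) for _ in range(HID)],
     "b2": 0.0}

def forward(x):
    h = [math.tanh(w["w1"][j] * x + w["b1"][j]) for j in range(HID)]
    return h, sum(a * b for a, b in zip(w["w2"], h)) + w["b2"]

# Toy training set: normalised system load -> SR requirement (hypothetical).
data = [(x / 20, 0.05 + 0.1 * (x / 20) ** 2) for x in range(21)]

def epoch(lr=0.1):
    # One pass of online gradient descent; returns the mean squared error.
    err = 0.0
    for x, t in data:
        h, y = forward(x)
        e = y - t
        err += e * e
        for j in range(HID):
            grad_h = e * w["w2"][j] * (1 - h[j] ** 2)  # backprop through tanh
            w["w2"][j] -= lr * e * h[j]
            w["w1"][j] -= lr * grad_h * x
            w["b1"][j] -= lr * grad_h
        w["b2"] -= lr * e
    return err / len(data)

first = epoch()
for _ in range(500):
    last = epoch()
```

The paper's engine replaces this inner loop with LM steps and lets the genetic algorithm escape poor local minima; the network structure being trained is the same kind of MLP.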

  7. Variability in Classroom Social Communication: Performance of Children with Fetal Alcohol Spectrum Disorders and Typically Developing Peers

    Science.gov (United States)

    Kjellmer, Liselotte; Olswang, Lesley B.

    2013-01-01

    Purpose: In this study, the authors examined how variability in classroom social communication performance differed between children with fetal alcohol spectrum disorders (FASD) and pair-matched, typically developing peers. Method: Twelve pairs of children were observed in their classrooms, 40 min per day (20 min per child) for 4 days over a…

  8. Typically Female Features in Hungarian Shopping Tourism

    Directory of Open Access Journals (Sweden)

    Gábor Michalkó

    2006-06-01

    Full Text Available Although shopping has long been acknowledged as a major tourist activity, the extent and characteristics of shopping tourism have only recently become the subject of academic research and discussion. As a contribution to this field, the paper presents the characteristics of shopping tourism in Hungary and discusses the typically female features of outbound Hungarian shopping tourism. The research is based on a survey of 2473 Hungarian tourists carried out in 2005. As the findings indicate, while female respondents were altogether more likely to be involved in tourist shopping than male travellers, no significant difference was found between the genders in the share of shopping expenses in their total travel budget. In their shopping behaviour, women were typically influenced by price levels, and they proved to be both more selfish and more altruistic than men, purchasing more products both for themselves and for their family members. The most significant differences between men and women were found in product preferences, as female tourists were more likely to purchase typically feminine goods such as clothes, shoes, bags and accessories; in the timing of shopping activities while abroad; and in the information sources used, since interpersonal influences such as friends', guides' and fellow travellers' recommendations played a greater role in female travellers' decisions.

  9. Sharpening methods for images captured through Bayer matrix

    Science.gov (United States)

    Kalevo, Ossi; Rantanen, Henry, Jr.

    2003-05-01

    Image resolution and sharpness are essential criteria for a human observer when estimating image quality. Typical cheap, small-sized, low-resolution CMOS camera sensors do not provide sufficiently sharp images, at least compared to high-end digital cameras. A sharpening function can be used to increase the subjective sharpness seen by the observer. In this paper, a few methods for sharpening images captured by CMOS imaging sensors through a color filter array (CFA) are compared. Sharpening also easily increases the visibility of noise, pixel cross-talk and interpolation artifacts; the arrangements necessary to avoid amplifying these unwanted phenomena are discussed. By applying the sharpening only to the green component, the processing power requirements can be clearly reduced. By adjusting the red and blue component sharpness according to the green component sharpening, the creation of false colors is greatly reduced. A direction-search sharpening method can be used to reduce the amplification of the artifacts caused by the CFA interpolation (CFAI). The comparison of the presented methods is based mainly on subjective image quality; the processing power and memory requirements are also considered.
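The green-only scheme can be illustrated with unsharp masking, a standard sharpening formulation (the paper compares several methods; this one and all parameters here are an illustrative assumption). Only the green channel receives the high-pass boost, which is where the processing saving comes from:

```python
def sharpen_green(img, amount=1.0):
    # Unsharp masking on the green channel only:
    #   g_sharp = g + amount * (g - blurred(g)),
    # clamped to [0, 255]. Red and blue are passed through unchanged.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            g = img[y][x][1]
            blur = sum(img[y + dy][x + dx][1]          # 3x3 box blur
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
            out[y][x] = (img[y][x][0],
                         min(255.0, max(0.0, g + amount * (g - blur))),
                         img[y][x][2])
    return out

# Tiny synthetic RGB image with a vertical edge in every channel.
img = [[(50, 50, 50) if x < 3 else (200, 200, 200) for x in range(6)]
       for y in range(5)]
sharp = sharpen_green(img, amount=1.5)
```

Across the edge the green values overshoot in both directions (here clamping at 0 and 255), which is the perceived sharpness increase; tying the red and blue adjustment to this green term is what keeps false colors in check.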

  10. A computer program for uncertainty analysis integrating regression and Bayesian methods

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Hill, Mary C.; Poeter, Eileen P.; Curtis, Gary

    2014-01-01

    This work develops a new functionality in UCODE_2014 to evaluate Bayesian credible intervals using the Markov Chain Monte Carlo (MCMC) method. The MCMC capability in UCODE_2014 is based on the FORTRAN version of the differential evolution adaptive Metropolis (DREAM) algorithm of Vrugt et al. (2009), which estimates the posterior probability density function of model parameters in high-dimensional and multimodal sampling problems. The UCODE MCMC capability provides eleven prior probability distributions and three ways to initialize the sampling process. It evaluates parametric and predictive uncertainties and it has parallel computing capability based on multiple chains to accelerate the sampling process. This paper tests and demonstrates the MCMC capability using a 10-dimensional multimodal mathematical function, a 100-dimensional Gaussian function, and a groundwater reactive transport model. The use of the MCMC capability is made straightforward and flexible by adopting the JUPITER API protocol. With the new MCMC capability, UCODE_2014 can be used to calculate three types of uncertainty intervals, which all can account for prior information: (1) linear confidence intervals which require linearity and Gaussian error assumptions and typically 10s–100s of highly parallelizable model runs after optimization, (2) nonlinear confidence intervals which require a smooth objective function surface and Gaussian observation error assumptions and typically 100s–1,000s of partially parallelizable model runs after optimization, and (3) MCMC Bayesian credible intervals which require few assumptions and commonly 10,000s–100,000s or more partially parallelizable model runs. Ready access allows users to select methods best suited to their work, and to compare methods in many circumstances.
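The third interval type can be illustrated end to end with a toy posterior. A plain random-walk Metropolis sampler stands in for DREAM's adaptive multi-chain scheme (which is well beyond a sketch), and all data here are synthetic; the credible-interval arithmetic on the samples is the same:

```python
import math, random

random.seed(42)

# Synthetic observations; sigma is known (= 1) and the prior on mu is
# flat, so the posterior of mu is Gaussian and easy to sanity-check.
data = [random.gauss(2.0, 1.0) for _ in range(50)]

def log_post(mu):
    # Log posterior up to a constant: Gaussian likelihood, flat prior.
    return -0.5 * sum((x - mu) ** 2 for x in data)

mu, lp, samples = 0.0, log_post(0.0), []
for i in range(20000):
    prop = mu + random.gauss(0.0, 0.3)        # random-walk proposal
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        mu, lp = prop, lp_prop                # accept
    if i >= 2000:                             # discard burn-in
        samples.append(mu)

samples.sort()
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]       # central 95% credible interval
```

The interval endpoints are simply empirical percentiles of the retained chain, which is why the MCMC route needs no linearity or Gaussian-error assumptions, at the cost of the far larger run counts the abstract quotes.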

  11. RAPID NAMING IN CHILDREN WITH SPECIFIC LANGUAGE IMPAIRMENT AND IN CHILDREN WITH TYPICAL LANGUAGE DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Neda MILOSHEVIĆ

    2017-03-01

    To gain detailed insight into the phonological abilities of Serbian-speaking preschool children with and without language impairment, the ability of rapid naming was examined. Method: The goal was operationalized using the Test for evaluating reading and writing pre-skills. Methods of descriptive and inferential statistics were used to describe and analyze the obtained data. The sample included 120 subjects of both genders: 40 children diagnosed with specific language impairment (SLI), aged 5;11 to 7 years, and 80 children with typical language development (TLD), aged 5;11 to 7 years, with no statistically significant differences between the groups in age or gender. Results: Summing up the overall results and achievements, we concluded that there are statistically significant differences in rapid naming between children with specific language impairment and children with typical language development. Conclusions: Since the global trend is to work on preventing disorders, and since phonological skills at this age are a timely indicator of the development of reading and writing, the examined children with SLI are at risk for impairments and disorders in reading and writing abilities.

  12. Identifying typical patterns of vulnerability: A 5-step approach based on cluster analysis

    Science.gov (United States)

    Sietz, Diana; Lüdeke, Matthias; Kok, Marcel; Lucas, Paul; Carsten, Walther; Janssen, Peter

    2013-04-01

    Specific processes that shape the vulnerability of socio-ecological systems to climate, market and other stresses derive from diverse background conditions. Within the multitude of vulnerability-creating mechanisms, distinct processes recur in various regions, inspiring research on typical patterns of vulnerability. The vulnerability patterns display typical combinations of the natural and socio-economic properties that shape a system's vulnerability to particular stresses. Based on the identification of a limited number of vulnerability patterns, pattern analysis provides an efficient approach to improving our understanding of vulnerability and decision-making for vulnerability reduction. However, current pattern analyses often miss explicit descriptions of their methods and pay insufficient attention to the validity of their groupings. Therefore, the question arises as to how we can identify typical vulnerability patterns in order to enhance our understanding of a system's vulnerability to stresses. A cluster-based pattern recognition applied at global and local levels is scrutinised with a focus on an applicable methodology and practicable insights. Taking the example of drylands, this presentation demonstrates the conditions necessary to identify typical vulnerability patterns. They are summarised in five methodological steps comprising the elicitation of relevant cause-effect hypotheses and the quantitative indication of mechanisms as well as an evaluation of robustness, a validation and a ranking of the identified patterns. Reflecting scale-dependent opportunities, a global study is able to support decision-making with insights into the up-scaling of interventions when available funds are limited. In contrast, local investigations encourage an outcome-based validation. This constitutes a crucial step in establishing the credibility of the patterns and hence their suitability for informing extension services and individual decisions.
In this respect, working at
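The grouping step at the heart of such a cluster-based pattern analysis can be sketched with a plain k-means routine over indicator profiles. This is a generic illustration, not the authors' specific algorithm; the indicator values and cluster count in the usage below are hypothetical:

```python
import random

def dist2(p, q):
    # squared Euclidean distance between two indicator profiles
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, n_iter=100, seed=0):
    """Group observations into k clusters of similar indicator profiles."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(n_iter):
        # assignment step: each point joins its nearest center
        labels = [min(range(k), key=lambda j: dist2(p, centers[j]))
                  for p in points]
        # update step: each center becomes the mean of its members
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = [sum(c) / len(members) for c in zip(*members)]
    return centers, labels
```

The robustness evaluation the abstract calls for can then be approximated by repeating the run with different seeds and cluster counts and comparing the resulting groupings.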

  13. Required doses for projection methods in X-ray diagnosis

    International Nuclear Information System (INIS)

    Hagemann, G.

    1992-01-01

    The ideal dose requirement was stated by Cohen et al. (1981) in a formula based on parallel beam geometry, maximum quantum yield, and the Bucky grid effect, depending on the signal-to-noise ratio and object contrast. This was checked by means of contrast-detail diagrams measured with a hole phantom and was additionally compared with measurement results obtained with acrylic glass phantoms. The optimal dose requirement is obtained by the closest technically possible approach to the ideal requirement level. Examples are given for x-ray equipment with Gd2O2S screen-film systems, for grid-screen mammography, and for new thoracic examination systems for mass screenings. Finally, a few values concerning the dose requirement, or the analogous time required for fluorescent screening, are stated for angiography and interventional radiology, as well as for dentistry and paediatric x-ray diagnostics. (orig./HP) [de

  14. A Power-Efficient Propulsion Method for Magnetic Microrobots

    Directory of Open Access Journals (Sweden)

    Gioia Lucarini

    2014-07-01

    Current magnetic systems for microrobotic navigation consist of assemblies of electromagnets, which allow for the wireless, accurate steering and propulsion of sub-millimetric bodies. However, large numbers of windings and/or high currents are needed in order to generate suitable magnetic fields and gradients. This means that magnetic navigation systems are typically cumbersome and require a lot of power, which limits their fields of application. In this paper, we propose a novel propulsion method that dramatically reduces the power demand of such systems. The method was conceived for navigation systems that achieve propulsion by pulling microrobots with magnetic gradients. We compare this power-efficient propulsion method with traditional pulling propulsion for a microrobot swimming in a micro-structured confined liquid environment. Results show that both methods are equivalent in terms of accuracy and velocity of microrobot motion, while the new approach requires only one ninth of the power needed to generate the magnetic gradients. Substantial equivalence is also demonstrated in terms of the manoeuvrability of user-controlled microrobots along a complex path.

  15. Typical and Atypical Development of Basic Numerical Skills in Elementary School

    Science.gov (United States)

    Landerl, Karin; Kolle, Christina

    2009-01-01

    Deficits in basic numerical processing have been identified as a central and potentially causal problem in developmental dyscalculia; however, so far not much is known about the typical and atypical development of such skills. This study assessed basic number skills cross-sectionally in 262 typically developing and 51 dyscalculic children in…

  16. TYPICAL FORMS OF LIVER PATHOLOGY IN CHILDREN

    Directory of Open Access Journals (Sweden)

    Peter F. Litvitskiy

    2018-01-01

    This lecture for the system of postgraduate medical education analyzes the causes, types, key pathogenetic mechanisms, and manifestations of the main typical forms of liver pathology in children: liver failure, hepatic coma, jaundice, cholemia, acholia, cholelithiasis, and their complications. To assess retention of the lecture material, case problems and multiple-choice tests are given.

  17. Evaluation of Irrigation Methods for Highbush Blueberry. I. Growth and Water Requirements of Young Plants

    Science.gov (United States)

    A study was conducted in a new field of northern highbush blueberry (Vaccinium corymbosum L. 'Elliott') to determine the effects of different irrigation methods on growth and water requirements of uncropped plants during the first 2 years after planting. The plants were grown on mulched, raised beds...

  18. HTGR Industrial Application Functional and Operational Requirements

    International Nuclear Information System (INIS)

    Demick, L.E.

    2010-01-01

    This document specifies the functional and performance requirements to be used in the development of the conceptual design of a high temperature gas-cooled reactor (HTGR) based plant supplying energy to a typical industrial facility. These requirements were developed from collaboration with industry and HTGR suppliers over the preceding three years to identify the energy needs of industrial processes for which the HTGR technology is technically and economically viable. The functional and performance requirements specified herein are an effective representation of the industrial sector energy needs and an effective basis for developing a conceptual design of the plant that will serve the broadest range of industrial applications.

  19. SSI response of a typical shear wall structure

    International Nuclear Information System (INIS)

    Johnson, J.J.; Maslenikov, O.R.; Schewe, E.C.

    1985-01-01

    The seismic response of a typical shear wall structure in a commercial nuclear power plant was investigated for a series of site and foundation conditions using best-estimate and design procedures. The structure selected is part of the Zion auxiliary-fuel handling turbine building (AFT) complex, a connected group of reinforced concrete shear wall buildings typical of nuclear power plant structures. Comparisons between best-estimate responses quantified the effects of placing the structure on different sites and founding it in different manners. Calibration factors were developed by comparing simplified SSI design procedure responses to responses calculated by best-estimate procedures. Nineteen basic cases were analyzed; each case was analyzed for ten earthquakes targeted to the NRC R.G. 1.60 design response spectra. (orig./HP)

  20. Development of risk benefit structural design method for innovative reactor plants

    International Nuclear Information System (INIS)

    Yoshio Kamishima; Tai Asayama; Yukio Takahashi; Masanori Tashimo; Hideo Machida; Yomomi Otani; Yasuharu Chuman

    2005-01-01

    The development of innovative nuclear plants to supply energy in the future is being carried out in Japan. A structural design method based on risk benefit, combining risk mitigation with improved economy, is called for in order to realize these innovative nuclear plants. The main key technologies of the risk-benefit structural design method are crack propagation evaluation and structural reliability evaluation. This research aims at raising these two technologies to a practical engineering level. In this paper, requirements from the design of typical innovative nuclear plants and the research plan are shown. (authors)

  1. Spatial Resolution of the ECE for JET Typical Parameters

    International Nuclear Information System (INIS)

    Tribaldos, V.

    2000-01-01

    The purpose of this report is to estimate the spatial resolution of electron cyclotron emission (ECE) for the typical plasmas found in the JET tokamak. The analysis of the spatial resolution of the ECE is based on the underlying physical process of emission, and a working definition is presented and discussed. In making these estimations, a typical JET pulse is analysed, taking into account the magnetic configuration and the density and temperature profiles obtained with the EFIT code and from the LIDAR diagnostic. Ray tracing simulations are performed for a Maxwellian plasma taking into account the antenna pattern. (Author) 5 refs

  2. The Source Equivalence Acceleration Method

    International Nuclear Information System (INIS)

    Everson, Matthew S.; Forget, Benoit

    2015-01-01

    Highlights: • We present a new acceleration method, the Source Equivalence Acceleration Method. • SEAM forms an equivalent coarse group problem for any spatial method. • Equivalence is also formed across different spatial methods and angular quadratures. • Testing is conducted using OpenMOC and performance is compared with CMFD. • Results show that SEAM is preferable for very expensive transport calculations. - Abstract: Fine-group whole-core reactor analysis remains one of the long sought goals of the reactor physics community. Such a detailed analysis is typically too computationally expensive to be realized on anything except the largest of supercomputers. Recondensation using the Discrete Generalized Multigroup (DGM) method, though, offers a relatively cheap alternative to solving the fine group transport problem. DGM, however, suffered from inconsistencies when applied to high-order spatial methods. While an exact spatial recondensation method was developed and provided full spatial consistency with the fine group problem, this approach substantially increased memory requirements for realistic problems. The method described in this paper, called the Source Equivalence Acceleration Method (SEAM), forms a coarse-group problem which preserves the fine-group problem even when using higher order spatial methods. SEAM allows recondensation to converge to the fine-group solution with minimal memory requirements and little additional overhead. This method also provides for consistency when using different spatial methods and angular quadratures between the coarse group and fine group problems. SEAM was implemented in OpenMOC, a 2D MOC code developed at MIT, and its performance tested against Coarse Mesh Finite Difference (CMFD) acceleration on the C5G7 benchmark problem and on a 361 group version of the problem. For extremely expensive transport calculations, SEAM was able to outperform CMFD, resulting in speed-ups of 20–45 relative to the normal power
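The coarse-group construction that SEAM and DGM-style recondensation build on starts from standard flux-weighted condensation, which collapses fine-group cross sections so that coarse-group reaction rates are preserved. A minimal sketch of that bookkeeping (generic multigroup practice, not SEAM itself; the numbers in the usage below are hypothetical):

```python
def condense(sigma_fine, flux_fine, group_map):
    """Collapse fine-group cross sections into coarse groups.

    group_map[g] gives the coarse-group index of fine group g.
    Flux weighting preserves reaction rates:
        Sigma_G * Phi_G == sum of sigma_g * phi_g over fine groups g in G.
    """
    n_coarse = max(group_map) + 1
    phi_coarse = [0.0] * n_coarse
    rate_coarse = [0.0] * n_coarse
    for g, G in enumerate(group_map):
        phi_coarse[G] += flux_fine[g]
        rate_coarse[G] += sigma_fine[g] * flux_fine[g]
    sigma_coarse = [r / p for r, p in zip(rate_coarse, phi_coarse)]
    return sigma_coarse, phi_coarse
```

The inconsistency the abstract alludes to arises because the true weighting flux is not known until the fine-group problem is solved; recondensation iterates between the coarse- and fine-group problems to recover it.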

  3. A Method for and Issues Associated with the Determination of Space Suit Joint Requirements

    Science.gov (United States)

    Matty, Jennifer E.; Aitchison, Lindsay

    2009-01-01

    In the design of a new space suit it is necessary to have requirements that define what mobility the space suit joints should be capable of achieving, both as a system and at the component level. NASA elected to divide mobility into its constituent parts, range of motion (ROM) and torque, in an effort to develop clean design requirements that limit subject performance bias and are easily verified. Unfortunately, measurements of mobility can be difficult to obtain. Current technologies, such as the Vicon motion capture system, allow for relatively easy benchmarking of ROM for a wide array of space suit systems; these evaluations require suited subjects to accurately determine the ranges humans can achieve in the suit. When it comes to torque, however, there are significant challenges both for benchmarking current performance and for writing requirements for future suits. This is reflected in the fact that torque definitions have been applied to very few types of space suits, and with limited success in defining all the joints accurately. This paper discusses the advantages and disadvantages of historical joint torque evaluation methods, describes more recent efforts directed at benchmarking joint torques of prototype space suits, and provides an outline for how NASA intends to address joint torque in design requirements for the Constellation Space Suit System (CSSS).

  4. Variety in emotional life: within-category typicality of emotional experiences is associated with neural activity in large-scale brain networks.

    Science.gov (United States)

    Wilson-Mendenhall, Christine D; Barrett, Lisa Feldman; Barsalou, Lawrence W

    2015-01-01

    The tremendous variability within categories of human emotional experience receives little empirical attention. We hypothesized that atypical instances of emotion categories (e.g. pleasant fear of thrill-seeking) would be processed less efficiently than typical instances of emotion categories (e.g. unpleasant fear of violent threat) in large-scale brain networks. During a novel fMRI paradigm, participants immersed themselves in scenarios designed to induce atypical and typical experiences of fear, sadness or happiness (scenario immersion), and then focused on and rated the pleasant or unpleasant feeling that emerged (valence focus) in most trials. As predicted, reliably greater activity in the 'default mode' network (including medial prefrontal cortex and posterior cingulate) was observed for atypical (vs typical) emotional experiences during scenario immersion, suggesting atypical instances require greater conceptual processing to situate the socio-emotional experience. During valence focus, reliably greater activity was observed for atypical (vs typical) emotional experiences in the 'salience' network (including anterior insula and anterior cingulate), suggesting atypical instances place greater demands on integrating shifting body signals with the sensory and social context. Consistent with emerging psychological construction approaches to emotion, these findings demonstrate that it is important to study the variability within common categories of emotional experience. © The Author (2014). Published by Oxford University Press.

  5. Variability of Self-Regulatory Strategies in Children with Intellectual Disability and Typically Developing Children in Pretend Play Situations

    Science.gov (United States)

    Nader-Grosbois, N.; Vieillevoye, S.

    2012-01-01

    Objective: This study has examined whether or not self-regulatory strategies vary depending on pretend play situations in 40 children with intellectual disability and 40 typically developing children. Method: Their cognitive, linguistic and individual symbolic play levels were assessed in order to match the children of the two groups. During two…

  6. The BioFIND study: Characteristics of a clinically typical Parkinson's disease biomarker cohort

    Science.gov (United States)

    Goldman, Jennifer G.; Alcalay, Roy N.; Xie, Tao; Tuite, Paul; Henchcliffe, Claire; Hogarth, Penelope; Amara, Amy W.; Frank, Samuel; Rudolph, Alice; Casaceli, Cynthia; Andrews, Howard; Gwinn, Katrina; Sutherland, Margaret; Kopil, Catherine; Vincent, Lona; Frasier, Mark

    2016-01-01

    Background: Identifying PD-specific biomarkers in biofluids will greatly aid in diagnosis, monitoring progression, and therapeutic interventions. PD biomarkers have been limited by poor discriminatory power, partly driven by heterogeneity of the disease, variability of collection protocols, and focus on de novo, unmedicated patients. Thus, a platform for biomarker discovery and validation in well-characterized, clinically typical, moderate to advanced PD cohorts is critically needed. Methods: BioFIND (Fox Investigation for New Discovery of Biomarkers in Parkinson's Disease) is a cross-sectional, multicenter biomarker study that established a repository of clinical data, blood, DNA, RNA, CSF, saliva, and urine samples from 118 moderate to advanced PD and 88 healthy control subjects. Inclusion criteria were designed to maximize diagnostic specificity by selecting participants with clinically typical PD symptoms, and clinical data and biospecimen collection utilized standardized procedures to minimize variability across sites. Results: We present the study methodology and data on the cohort's clinical characteristics. Motor scores and biospecimen samples including plasma are available for practically defined off and on states, and thus enable testing the effects of PD medications on biomarkers. Other biospecimens are available from off-state PD assessments and from controls. Conclusion: Our cohort provides a valuable resource for biomarker discovery and validation in PD. Clinical data and biospecimens, available through The Michael J. Fox Foundation for Parkinson's Research and the National Institute of Neurological Disorders and Stroke, can serve as a platform for discovering biomarkers in clinically typical PD and comparisons across PD's broad and heterogeneous spectrum. © 2016 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:27113479

  7. Two-level method with coarse space size independent convergence

    Energy Technology Data Exchange (ETDEWEB)

    Vanek, P.; Brezina, M. [Univ. of Colorado, Denver, CO (United States); Tezaur, R.; Krizkova, J. [UWB, Plzen (Czech Republic)

    1996-12-31

    The basic disadvantage of the standard two-level method is the strong dependence of its convergence rate on the size of the coarse-level problem. In order to obtain the optimal convergence result, one is limited to using a coarse space which is only a few times smaller than the fine-level one. Consequently, the asymptotic cost of the resulting method is the same as that of using a coarse-level solver for the original problem. Today's two-level domain decomposition methods typically offer an improvement by yielding a rate of convergence which depends on the ratio of fine and coarse levels only polylogarithmically. However, these methods require the use of local subdomain solvers, for which straightforward application of iterative methods is problematic, while the usual application of direct solvers is expensive. We suggest a method that significantly diminishes these difficulties.
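The basic two-level structure being discussed can be seen in a minimal two-grid scheme for the 1-D Poisson equation: smooth on the fine grid, correct on a coarse grid, smooth again. This is a textbook sketch with an exact tridiagonal coarse solve, not the authors' method, and the grid sizes in the usage below are hypothetical:

```python
import math

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Weighted Jacobi smoothing for -u'' = f with zero Dirichlet BCs."""
    n = len(u) - 1
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n):
            new[i] = (1 - omega) * u[i] + omega * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

def solve_tridiag(f, h):
    """Direct (Thomas) solve of -u'' = f on the coarse grid."""
    n = len(f) - 1
    diag, off = 2.0 / (h * h), -1.0 / (h * h)
    c = [0.0] * (n + 1)
    d = [0.0] * (n + 1)
    c[1], d[1] = off / diag, f[1] / diag
    for i in range(2, n):
        m = diag - off * c[i - 1]
        c[i] = off / m
        d[i] = (f[i] - off * d[i - 1]) / m
    u = [0.0] * (n + 1)
    u[n - 1] = d[n - 1]
    for i in range(n - 2, 0, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u

def two_grid_cycle(u, f, h):
    """Pre-smooth, coarse-grid correction, post-smooth."""
    n = len(u) - 1
    u = jacobi(u, f, h, sweeps=3)
    r = [0.0] * (n + 1)
    for i in range(1, n):                      # fine-grid residual
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    nc = n // 2
    rc = [0.0] * (nc + 1)
    for i in range(1, nc):                     # full-weighting restriction
        rc[i] = 0.25 * r[2 * i - 1] + 0.5 * r[2 * i] + 0.25 * r[2 * i + 1]
    ec = solve_tridiag(rc, 2 * h)              # exact coarse-level solve
    for i in range(1, nc):                     # linear-interpolation prolongation
        u[2 * i] += ec[i]
    for i in range(nc):
        u[2 * i + 1] += 0.5 * (ec[i] + ec[i + 1])
    return jacobi(u, f, h, sweeps=3)
```

With an exact coarse solve the cycle converges fast, but the coarse problem here is half the fine size; the size-independence question in the abstract is precisely what happens when the coarse space is made much smaller than that.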

  8. A novel method for creating custom shaped ballistic gelatin trainers using plaster molds.

    Science.gov (United States)

    Doctor, Michael; Katz, Anne; McNamara, Shannon O; Leifer, Jessica H; Bambrick-Santoyo, Gabriela; Saul, Turandot; Rose, Keith M

    2018-03-01

    Simulation-based procedural training is an effective and frequently used method for teaching vascular access techniques, and it often requires commercial trainers. These can be prohibitively expensive, making homemade gelatin trainers a more cost-effective and attractive option. Previously described trainers are often rectangular with a flat surface that is dissimilar to human anatomy. We describe a novel method to create a more anatomically realistic trainer using ballistic gelatin, household items, and supplies commonly found in an emergency department, such as the plaster wrap typically used to make splints.

  9. Application of nuclear analytical methods to heavy metal pollution studies of estuaries

    International Nuclear Information System (INIS)

    Anders, B.; Junge, W.; Knoth, J.; Michaelis, W.; Pepelnik, R.; Schwenke, H.

    1984-01-01

    Important objectives of heavy metal pollution studies of estuaries are the understanding of the transport phenomena in these complex ecosystems and the discovery of the pollution history and the geochemical background. Such studies require high precision and accuracy of the analytical methods. Moreover, pronounced spatial heterogeneities and temporal variabilities that are typical for estuaries necessitate the analysis of a great number of samples if relevant results are to be obtained. Both requirements can economically be fulfilled by a proper combination of analytical methods. Applications of energy-dispersive X-ray fluorescence analysis with total reflection of the exciting beam at the sample support and of neutron activation analysis with both thermal and fast neutrons are reported in the light of pollution studies performed in the Lower Elbe River. (orig.)

  10. TRACER - TRACING AND CONTROL OF ENGINEERING REQUIREMENTS

    Science.gov (United States)

    Turner, P. R.

    1994-01-01

    TRACER (Tracing and Control of Engineering Requirements) is a database/word processing system created to document and maintain the order of both requirements and descriptive material associated with an engineering project. A set of hierarchical documents are normally generated for a project whereby the requirements of the higher level documents levy requirements on the same level or lower level documents. Traditionally, the requirements are handled almost entirely by manual paper methods. The problem with a typical paper system, however, is that requirements written and changed continuously in different areas lead to misunderstandings and noncompliance. The purpose of TRACER is to automate the capture, tracing, reviewing, and managing of requirements for an engineering project. The engineering project still requires communications, negotiations, interactions, and iterations among people and organizations, but TRACER promotes succinct and precise identification and treatment of real requirements separate from the descriptive prose in a document. TRACER permits the documentation of an engineering project's requirements and progress in a logical, controllable, traceable manner. TRACER's attributes include the presentation of current requirements and status from any linked computer terminal and the ability to differentiate headers and descriptive material from the requirements. Related requirements can be linked and traced. The program also enables portions of documents to be printed, individual approval and release of requirements, and the tracing of requirements down into the equipment specification. Requirement "links" can be made "pending" and invisible to others until the pending link is made "binding". Individuals affected by linked requirements can be notified of significant changes with acknowledgement of the changes required. 
An unlimited number of documents can be created for a project and an ASCII import feature permits existing documents to be incorporated
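The pending-versus-binding link mechanism described above can be sketched as a small data structure. This is a hypothetical reconstruction for illustration only; TRACER's actual implementation is not given in this record, and all requirement IDs below are made up:

```python
class Requirement:
    def __init__(self, req_id, text):
        self.req_id = req_id
        self.text = text
        self.links = {}            # child req_id -> "pending" or "binding"

    def link(self, child, pending=True):
        # a pending link stays invisible to others until made binding
        self.links[child.req_id] = "pending" if pending else "binding"

    def make_binding(self, child_id):
        self.links[child_id] = "binding"

    def visible_links(self):
        """Only binding links are traceable by other users."""
        return [cid for cid, state in self.links.items()
                if state == "binding"]

def trace_down(root, registry):
    """Follow binding links from a requirement down toward equipment level."""
    found, stack = [], [root.req_id]
    while stack:
        rid = stack.pop()
        found.append(rid)
        stack.extend(registry[rid].visible_links())
    return found
```

The point of the split is visible in use: a link can be drafted and negotiated (pending) without affecting anyone tracing the released requirement tree, then switched to binding in one step.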

  11. Assessing thermochromatography as a separation method for nuclear forensics. Current capability vis-a-vis forensic requirements

    International Nuclear Information System (INIS)

    Hanson, D.E.; Garrison, J.R.; Hall, H.L.

    2011-01-01

    Nuclear forensic science has become increasingly important for global nuclear security. However, many current laboratory analysis techniques are based on methods developed without the imperative for timely analysis that underlies the post-detonation forensics mission requirements. Current analysis of actinides, fission products, and fuel-specific materials requires time-consuming chemical separation coupled with nuclear counting or mass spectrometry. High-temperature gas-phase separations have been used in the past for the rapid separation of newly created elements/isotopes and as a basis for chemical classification of those elements. We are assessing the utility of this method of rapid gas-phase separation to accelerate the separations of radioisotopes germane to post-detonation nuclear forensic investigations. The existing state of the art for thermochromatographic separations, and its applicability to nuclear forensics, is reviewed. (author)

  12. Characterization of Developer Application Methods Used in Fluorescent Penetrant Inspection

    Science.gov (United States)

    Brasche, L. J. H.; Lopez, R.; Eisenmann, D.

    2006-03-01

    Fluorescent penetrant inspection (FPI) is the most widely used inspection method for aviation components, seeing use in production as well as in-service inspection applications. FPI is a multi-step process requiring attention to the process parameters of each step in order to enable a successful inspection. A multiyear program is underway to evaluate the most important factors affecting the performance of FPI, to determine whether existing industry specifications adequately address control of the process parameters, and to provide the needed engineering data to the public domain. The final step prior to the inspection is the application of developer, with typical aviation inspections using dry powder (form d) applied with either a pressure wand or a dust storm chamber. Results from several typical dust storm chambers and wand applications have shown less than optimal performance. Measurements of indication brightness, recording of the UVA image, and in some cases formal probability of detection (POD) studies were used to assess the developer application methods. Key conclusions and initial recommendations are provided.

  13. Research on new methods in transport theory

    International Nuclear Information System (INIS)

    Stefanovicj, D.

    1975-01-01

    Neutron transport theory is the basis for the development of reactor theory and reactor calculational methods. It has to be acknowledged that recent applications of these disciplines have influenced considerably the development of power reactor concepts and technology. However, these achievements were implemented in a rather heuristic way, since the satisfaction of design demands was of utmost importance. Often this kind of approach turns out to be very restrictive and not even adequate for rather typical reactor applications. Many aspects and techniques of reactor theory and calculations ought to be re-evaluated and/or reformulated on sounder physical and mathematical foundations. At the same time, new reactor concepts and operational demands give rise to more sophisticated and complex design requirements. These new requirements can be met only by the development of new design techniques, which in the case of reactor neutronic calculations lead directly to advanced transport theory methods. In addition, the rapid development of computer technology opens new opportunities for applications of advanced transport theory in practical calculations.

  14. Typicality effects in artificial categories: is there a hemisphere difference?

    Science.gov (United States)

    Richards, L G; Chiarello, C

    1990-07-01

    In category classification tasks, typicality effects are usually found: accuracy and reaction time depend upon distance from a prototype. In this study, subjects learned either verbal or nonverbal dot pattern categories, followed by a lateralized classification task. Comparable typicality effects were found in both reaction time and accuracy across visual fields for both verbal and nonverbal categories. Both hemispheres appeared to use a similarity-to-prototype matching strategy in classification. This indicates that merely having a verbal label does not differentiate classification in the two hemispheres.
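A similarity-to-prototype strategy of the kind inferred here can be sketched in a few lines. This is a generic illustration with made-up dot-pattern coordinates, not the study's stimuli:

```python
def prototype(exemplars):
    """Category prototype: the dimension-wise mean of its exemplars."""
    return [sum(v) / len(exemplars) for v in zip(*exemplars)]

def typicality(item, proto):
    """Graded typicality: negative squared distance to the prototype."""
    return -sum((a - b) ** 2 for a, b in zip(item, proto))

def classify(item, prototypes):
    """Assign the category whose prototype the item is most similar to."""
    return max(prototypes, key=lambda name: typicality(item, prototypes[name]))
```

Under this strategy, accuracy and speed fall off with distance from the prototype, which is exactly the typicality effect the study observed in both visual fields.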

  15. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Verification

    Science.gov (United States)

    Hanson, John M.; Beard, Bernard B.

    2010-01-01

    This paper is focused on applying Monte Carlo simulation to probabilistic launch vehicle design and requirements verification. The approaches developed in this paper can be applied to other complex design efforts as well. Typically the verification must show that requirement "x" is met for at least "y" % of cases, with, say, 10% consumer risk or 90% confidence. Two particular aspects of making these runs for requirements verification will be explored in this paper. First, there are several types of uncertainties that should be handled in different ways, depending on when they become known (or not). The paper describes how to handle different types of uncertainties and how to develop vehicle models that can be used to examine their characteristics. This includes items that are not known exactly during the design phase but that will be known for each assembled vehicle (can be used to determine the payload capability and overall behavior of that vehicle), other items that become known before or on flight day (can be used for flight day trajectory design and go/no go decision), and items that remain unknown on flight day. Second, this paper explains a method (order statistics) for determining whether certain probabilistic requirements are met or not and enables the user to determine how many Monte Carlo samples are required. Order statistics is not new, but may not be known in general to the GN&C community. The methods also apply to determining the design values of parameters of interest in driving the vehicle design. The paper briefly discusses when it is desirable to fit a distribution to the experimental Monte Carlo results rather than using order statistics.
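The zero-failure case of the order-statistics argument reduces to a short calculation: if the true compliance probability were only y, the chance that all n independent Monte Carlo samples pass is y^n, so demonstrating "y% of cases with confidence c" needs the smallest n with y^n <= 1 - c. A sketch that also allows k observed failures via the binomial tail (an illustrative reconstruction, not the paper's code):

```python
import math

def pass_probability(n, k, p):
    """Chance of observing at most k failures in n runs when the true
    per-run success probability is exactly p (binomial tail)."""
    return sum(math.comb(n, i) * (1 - p) ** i * p ** (n - i)
               for i in range(k + 1))

def min_samples(p, c, k=0):
    """Smallest n such that 'at most k failures in n runs' demonstrates
    success probability >= p with confidence c."""
    n = k + 1
    while pass_probability(n, k, p) > 1 - c:
        n += 1
    return n
```

This reproduces the familiar rules of thumb: 22 zero-failure runs for 90% compliance at 90% confidence, and 45 for 95% compliance at 90% confidence.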

  16. The relationship between motor skills and cognitive skills in 4-16 year old typically developing children : A systematic review

    NARCIS (Netherlands)

    van der Fels, Irene M. J.; te Wierike, Sanne C. M.; Hartman, Esther; Elferink-Gemser, Marije T.; Smith, Joanne; Visscher, Chris

    2015-01-01

Objectives: This review aims to give an overview of studies providing evidence for a relationship between motor and cognitive skills in typically developing children. Design: A systematic review. Methods: PubMed, Web of Science, and PsycINFO were searched for relevant articles. A total of 21

  17. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    Science.gov (United States)

    2012-10-01

A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base, with details on aggregate and test methods employed, along with agency and co...

  18. Beam heating requirements for a tokamak experimental power reactor

    International Nuclear Information System (INIS)

    Bertoncini, P.J.; Brooks, J.N.; Fasolo, J.A.; Stacey, W.M. Jr.

    1976-01-01

Typical beam heating requirements for effective tokamak experimental power reactor (TEPR) operation have been studied in connection with the Argonne preliminary conceptual TEPR design. For an ignition level plasma (approximately 100 MWt fusion power) for the nominal case envisioned, the neutral beam is only used to heat the plasma to ignition. This typically requires a beam power output of 40 MW at 180 keV for about 3 sec with a total energy of 114 MJ supplied to the plasma. The beam requirements for an ignition device are not very sensitive to changes in wall-sputtered impurity levels or plasma resistivity. For a plasma that must be driven due to poor confinement, the beam must remain on for most of the burn cycle. For representative cases, beam powers of approximately 23 MW are required for a total on-time of 20 to 50 sec. Requirements on power level, beam energy, on-time, and beam-generation efficiency all represent considerable advances over present technology. For the Argonne TEPR design, a total of 16 to 32 beam injectors is envisioned. For a 40-MW, 180-keV, one-component beam, each injector supplies about 7 to 14 A of neutrals to the plasma. For positive ion sources, about 50 to 100 A of ions are required per injector, and some form of particle and/or energy recycling appears to be essential in order to meet the power and efficiency requirements.

  19. [Precautions of physical performance requirements and test methods during product standard drafting process of medical devices].

    Science.gov (United States)

    Song, Jin-Zi; Wan, Min; Xu, Hui; Yao, Xiu-Jun; Zhang, Bo; Wang, Jin-Hong

    2009-09-01

The major idea of this article is to discuss standardization and normalization of product standards for medical devices. We analyze problems related to physical performance requirements and test methods during the product standard drafting process and make corresponding suggestions.

  20. Rural Tourism and Local Development: Typical Productions of Lazio

    Directory of Open Access Journals (Sweden)

    Francesco Maria Olivieri

    2014-12-01

Full Text Available Local development is based on the integration of the tourism sector with the whole economy. Rural tourism offers a good occasion to analyse local development: the consumption of "tourist products" located in specific local contexts. Starting from the food and wine supply chain and the localization of typical productions, the aim of the present work is to analyse the relationship between local development, rural tourism sustainability and the accommodation system, with reference to Lazio. Which findings support the creation of a local tourism system based on the relationship between tourism and the food and wine supply chain? Italian tourism is based on the accommodation system, hence the broader notion of Italian cultural tourism: tourism made in Italy. The touristic added value of a specific local context benefits from synergy with the food and wine supply chain: the made in Italy of typical productions. Agritourism may be the accommodation typology best suited to rural tourism and to the exclusive consumption of typical productions. The reciprocity between the food and wine supply chain and tourism provides new insights on key topics related to tourism development and to the organization of geographical space, considering its important contribution to economic competitiveness today.

  1. Energy-efficient houses built according to the energy performance requirements introduced in Denmark in 2006

    DEFF Research Database (Denmark)

    Tommerup, Henrik M.; Rose, Jørgen; Svendsen, Svend

    2007-01-01

In order to meet new tighter building energy requirements introduced in Denmark in 2006 and prepare the way for future buildings with even lower energy consumption, single-family houses were built with the purpose to demonstrate that it is possible to build typical single-family houses with an energy consumption that meets the demands without problems concerning building technology or economy. The paper gives a brief presentation of the houses and the applied energy-saving measures. The paper also presents results from measurements of the overall energy use, indoor climate and air tightness. The measured energy use was …% of the required level, almost at the level of typical passive houses.

  2. A Photometric Machine-Learning Method to Infer Stellar Metallicity

    Science.gov (United States)

    Miller, Adam A.

    2015-01-01

Following its formation, a star's metal content is one of the few factors that can significantly alter its evolution. Measurements of stellar metallicity ([Fe/H]) typically require a spectrum, but spectroscopic surveys are limited to a few × 10^6 targets; photometric surveys, on the other hand, have detected > 10^9 stars. I present a new machine-learning method to predict [Fe/H] from photometric colors measured by the Sloan Digital Sky Survey (SDSS). The training set consists of approx. 120,000 stars with SDSS photometry and reliable [Fe/H] measurements from the SEGUE Stellar Parameters Pipeline (SSPP). For bright stars (g' learning method is similar to the scatter in [Fe/H] measurements from low-resolution spectra.

  3. Breast Metastases from Extramammary Malignancies: Typical and Atypical Ultrasound Features

    Energy Technology Data Exchange (ETDEWEB)

    Mun, Sung Hee [Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 135-710 (Korea, Republic of); Department of Radiology, Catholic University of Daegu College of Medicine, Daegu 712-702 (Korea, Republic of); Ko, Eun Young; Han, Boo-Kyung; Shin, Jung Hee [Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 135-710 (Korea, Republic of); Kim, Suk Jung [Department of Radiology, Inje University College of Medicine, Busan Paik Hospital, Busan 614-735 (Korea, Republic of); Cho, Eun Yoon [Department of Pathology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 135-710 (Korea, Republic of)

    2014-07-01

    Breast metastases from extramammary malignancies are uncommon. The most common sources are lymphomas/leukemias and melanomas. Some of the less common sources include carcinomas of the lung, ovary, and stomach, and infrequently, carcinoid tumors, hypernephromas, carcinomas of the liver, tonsil, pleura, pancreas, cervix, perineum, endometrium and bladder. Breast metastases from extramammary malignancies have both hematogenous and lymphatic routes. According to their routes, there are common radiological features of metastatic diseases of the breast, but the features are not specific for metastases. Typical ultrasound (US) features of hematogenous metastases include single or multiple, round to oval shaped, well-circumscribed hypoechoic masses without spiculations, calcifications, or architectural distortion; these masses are commonly located superficially in subcutaneous tissue or immediately adjacent to the breast parenchyma that is relatively rich in blood supply. Typical US features of lymphatic breast metastases include diffusely and heterogeneously increased echogenicities in subcutaneous fat and glandular tissue and a thick trabecular pattern with secondary skin thickening, lymphedema, and lymph node enlargement. However, lesions show variable US features in some cases, and differentiation of these lesions from primary breast cancer or from benign lesions is difficult. In this review, we demonstrate various US appearances of breast metastases from extramammary malignancies as typical and atypical features, based on the results of US and other imaging studies performed at our institution. Awareness of the typical and atypical imaging features of these lesions may be helpful to diagnose metastatic lesions of the breast.

  4. Breast Metastases from Extramammary Malignancies: Typical and Atypical Ultrasound Features

    International Nuclear Information System (INIS)

    Mun, Sung Hee; Ko, Eun Young; Han, Boo-Kyung; Shin, Jung Hee; Kim, Suk Jung; Cho, Eun Yoon

    2014-01-01

    Breast metastases from extramammary malignancies are uncommon. The most common sources are lymphomas/leukemias and melanomas. Some of the less common sources include carcinomas of the lung, ovary, and stomach, and infrequently, carcinoid tumors, hypernephromas, carcinomas of the liver, tonsil, pleura, pancreas, cervix, perineum, endometrium and bladder. Breast metastases from extramammary malignancies have both hematogenous and lymphatic routes. According to their routes, there are common radiological features of metastatic diseases of the breast, but the features are not specific for metastases. Typical ultrasound (US) features of hematogenous metastases include single or multiple, round to oval shaped, well-circumscribed hypoechoic masses without spiculations, calcifications, or architectural distortion; these masses are commonly located superficially in subcutaneous tissue or immediately adjacent to the breast parenchyma that is relatively rich in blood supply. Typical US features of lymphatic breast metastases include diffusely and heterogeneously increased echogenicities in subcutaneous fat and glandular tissue and a thick trabecular pattern with secondary skin thickening, lymphedema, and lymph node enlargement. However, lesions show variable US features in some cases, and differentiation of these lesions from primary breast cancer or from benign lesions is difficult. In this review, we demonstrate various US appearances of breast metastases from extramammary malignancies as typical and atypical features, based on the results of US and other imaging studies performed at our institution. Awareness of the typical and atypical imaging features of these lesions may be helpful to diagnose metastatic lesions of the breast

  5. Total reflection X-ray spectroscopy as a rapid analytical method for uranium determination in drainage water

    International Nuclear Information System (INIS)

    Matsuyama, Tsugufumi; Sakai, Yasuhiro; Izumoto, Yukie; Imaseki, Hitoshi; Hamano, Tsuyoshi; Yoshii, Hiroshi

    2017-01-01

Uranium concentrations in drainage water are typically determined by α-spectrometry. However, due to the low specific radioactivity of uranium, the evaporation of large volumes of drainage water, followed by several hours of measurements, is required. Thus, the development of a rapid and simple detection method for uranium in drainage water would enhance the operation efficiency of radiation control workers. We herein propose a novel methodology based on total reflection X-ray fluorescence (TXRF) for the measurement of uranium in contaminated water. TXRF is a particularly desirable method for the rapid and simple evaluation of uranium in contaminated water, as chemical pretreatment of the sample solution is not necessary, measurement times are typically several seconds, and the required sample volume is low. We employed sample solutions containing several different concentrations of uranyl acetate with yttrium as an internal standard. The solutions were placed onto sample holders and were dried prior to TXRF measurements. The relative intensity, defined as the net intensity ratio of the Lα peak of uranium to the Kα peak of yttrium, was directly proportional to the uranium concentration. Using this method, a TXRF detection limit for uranium in contaminated water of 0.30 μg/g was achieved. (author)
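Because the relative intensity is directly proportional to concentration, quantification reduces to a straight-line calibration. The sketch below uses entirely hypothetical calibration data (not the paper's measurements) to show how an unknown uranium concentration could be back-calculated from a measured U Lα / Y Kα intensity ratio.

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Hypothetical calibration standards: U concentration (ug/g) vs the
# U La / Y Ka net intensity ratio (yttrium is the internal standard).
conc  = [0.5, 1.0, 2.0, 4.0, 8.0]
ratio = [0.026, 0.051, 0.099, 0.202, 0.399]
a, b = fit_line(conc, ratio)

# Back-calculate the concentration of an unknown from its measured ratio:
unknown_ratio = 0.150
print(round((unknown_ratio - b) / a, 2))  # 2.99 (ug/g)
```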

  6. Defining Requirements and Related Methods for Designing Sensorized Garments

    Directory of Open Access Journals (Sweden)

    Giuseppe Andreoni

    2016-05-01

    Full Text Available Designing smart garments has strong interdisciplinary implications, specifically related to user and technical requirements, but also because of the very different applications they have: medicine, sport and fitness, lifestyle monitoring, workplace and job conditions analysis, etc. This paper aims to discuss some user, textile, and technical issues to be faced in sensorized clothes development. In relation to the user, the main requirements are anthropometric, gender-related, and aesthetical. In terms of these requirements, the user’s age, the target application, and fashion trends cannot be ignored, because they determine the compliance with the wearable system. Regarding textile requirements, functional factors—also influencing user comfort—are elasticity and washability, while more technical properties are the stability of the chemical agents’ effects for preserving the sensors’ efficacy and reliability, and assuring the proper duration of the product for the complete life cycle. From the technical side, the physiological issues are the most important: skin conductance, tolerance, irritation, and the effect of sweat and perspiration are key factors for reliable sensing. Other technical features such as battery size and duration, and the form factor of the sensor collector, should be considered, as they affect aesthetical requirements, which have proven to be crucial, as well as comfort and wearability.

  7. Method of fabricating a uranium-bearing foil

    Science.gov (United States)

    Gooch, Jackie G [Seymour, TN; DeMint, Amy L [Kingston, TN

    2012-04-24

    Methods of fabricating a uranium-bearing foil are described. The foil may be substantially pure uranium, or may be a uranium alloy such as a uranium-molybdenum alloy. The method typically includes a series of hot rolling operations on a cast plate material to form a thin sheet. These hot rolling operations are typically performed using a process where each pass reduces the thickness of the plate by a substantially constant percentage. The sheet is typically then annealed and then cooled. The process typically concludes with a series of cold rolling passes where each pass reduces the thickness of the plate by a substantially constant thickness amount to form the foil.
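The two reduction schedules described (constant-percentage hot passes, then constant-thickness cold passes) can be sketched numerically. The starting thickness, reduction percentage, and step size below are hypothetical illustrations, not values from the patent.

```python
def hot_pass_schedule(t0, reduction, n_passes):
    """Thickness after each hot pass; every pass removes a constant
    percentage of the current thickness."""
    ts = [t0]
    for _ in range(n_passes):
        ts.append(ts[-1] * (1.0 - reduction))
    return ts

def cold_pass_schedule(t0, step, n_passes):
    """Thickness after each cold pass; every pass removes a constant
    absolute amount."""
    return [t0 - i * step for i in range(n_passes + 1)]

# Hypothetical numbers: a 5 mm cast plate, 20% reduction per hot pass,
# then 0.02 mm removed per cold pass toward foil thickness.
print(round(hot_pass_schedule(5.0, 0.20, 5)[-1], 4))   # 1.6384 (mm)
print(round(cold_pass_schedule(0.20, 0.02, 5)[-1], 4))  # 0.1 (mm)
```

Note the qualitative difference: percentage reductions shrink the absolute bite as the sheet thins (gentler on thin stock), while fixed-thickness cold passes keep the bite constant.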

  8. Vapor generation methods for explosives detection research

    Energy Technology Data Exchange (ETDEWEB)

    Grate, Jay W.; Ewing, Robert G.; Atkinson, David A.

    2012-12-01

The generation of calibrated vapor samples of explosives compounds remains a challenge due to the low vapor pressures of the explosives, adsorption of explosives on container and tubing walls, and the requirement to manage (typically) multiple temperature zones as the vapor is generated, diluted, and delivered. Methods that have been described to generate vapors can be classified as continuous or pulsed flow vapor generators. Vapor sources for continuous flow generators are typically explosives compounds supported on a solid support, or compounds contained in a permeation or diffusion device. Sources are held at elevated isothermal temperatures. Similar sources can be used for pulsed vapor generators; however, pulsed systems may also use injection of solutions onto heated surfaces with generation of both solvent and explosives vapors, transient peaks from a gas chromatograph, or vapors generated by a programmed thermal desorption. This article reviews vapor generator approaches with emphasis on the method of generating the vapors and on practical aspects of vapor dilution and handling. In addition, a gas chromatographic system with two ovens that is configurable with up to four heating ropes is proposed that could serve as a single integrated platform for explosives vapor generation and device testing. Issues related to standards, calibration, and safety are also discussed.

  9. Process qualification and control in electron beams--requirements, methods, new concepts and challenges

    International Nuclear Information System (INIS)

    Mittendorfer, J.; Gratzl, F.; Hanis, D.

    2004-01-01

In this paper the status of process qualification and control in electron beam irradiation is analyzed in terms of requirements, concepts, methods and challenges for a state-of-the-art process control concept for medical device sterilization. Aspects from process qualification to routine process control are described together with the associated process variables. As a case study, the 10 MeV beams at Mediscan GmbH are considered. Process control concepts such as statistical process control (SPC) and a new concept to determine process capability are briefly discussed.
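As a generic illustration of the SPC side (not the authors' specific capability concept), the snippet below computes the standard Cp and Cpk process capability indices for a measurement series against specification limits; the dose readings and limits are made up.

```python
import statistics

def process_capability(samples, lsl, usl):
    """Cp and Cpk of a measurement series against lower/upper spec limits."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)            # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)               # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # capability incl. centering
    return cp, cpk

# Made-up dose measurements (kGy) against a 24-26 kGy specification window:
cp, cpk = process_capability([24.8, 25.1, 25.0, 24.9, 25.2], lsl=24.0, usl=26.0)
print(round(cp, 2), round(cpk, 2))  # 2.11 2.11
```

Cp measures the spread of the process relative to the tolerance band; Cpk additionally penalizes a process mean that drifts off center (here the two coincide because the sample mean sits mid-window).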

  10. Flood control design requirements and flood evaluation methods of inland nuclear power plant

    International Nuclear Information System (INIS)

    Zhang Ailing; Wang Ping; Zhu Jingxing

    2011-01-01

The effect of flooding is one of the key safety and environmental factors in inland nuclear power plant siting. To date, laws and standard systems have been established for the selection of nuclear power plant locations and for flood control requirements in China. In this paper, the flood control standards of China and other countries are introduced. Several inland nuclear power plants are taken as examples to discuss the related flood evaluation methods in depth. Suggestions are also put forward. (authors)

  11. 252Cf-source-driven neutron noise analysis method

    International Nuclear Information System (INIS)

    Mihalczo, J.T.; King, W.T.; Blakeman, E.D.

    1985-01-01

The 252 Cf-source-driven neutron noise analysis method has been tested in a wide variety of experiments that have indicated the broad range of applicability of the method. The neutron multiplication factor, k/sub eff/, has been satisfactorily determined for a variety of materials including uranium metal, light water reactor fuel pins, fissile solutions, fuel plates in water, and interacting cylinders. For a uranyl nitrate solution tank which is typical of a fuel processing or reprocessing plant, the k/sub eff/ values were satisfactorily determined for values between 0.92 and 0.5 using a simple point kinetics interpretation of the experimental data. The short measurement times, in several cases as low as 1 min, have shown that the development of this method can lead to a practical subcriticality monitor for many in-plant applications. The further development of the method will require experiments and the development of theoretical methods to predict the experimental observables.

  12. 252Cf-source-driven neutron noise analysis method

    International Nuclear Information System (INIS)

    Mihalczo, J.T.; King, W.T.; Blakeman, E.D.

    1985-01-01

The 252 Cf-source-driven neutron noise analysis method has been tested in a wide variety of experiments that have indicated the broad range of applicability of the method. The neutron multiplication factor k/sub eff/ has been satisfactorily determined for a variety of materials including uranium metal, light water reactor fuel pins, fissile solutions, fuel plates in water, and interacting cylinders. For a uranyl nitrate solution tank which is typical of a fuel processing or reprocessing plant, the k/sub eff/ values were satisfactorily determined for values between 0.92 and 0.5 using a simple point kinetics interpretation of the experimental data. The short measurement times, in several cases as low as 1 min, have shown that the development of this method can lead to a practical subcriticality monitor for many in-plant applications. The further development of the method will require experiments oriented toward particular applications, including dynamic experiments, and the development of theoretical methods to predict the experimental observables.

  13. Key Design Requirements for Long-Reach Manipulators

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, D.S.

    2001-01-01

    Long-reach manipulators differ from industrial robots and teleoperators typically used in the nuclear industry in that the aspect ratio (length to diameter) of links is much greater and link flexibility, as well as joint or drive train flexibility, is likely to be significant. Long-reach manipulators will be required for a variety of applications in the Environmental Restoration and Waste Management Program. While each application will present specific functional, kinematic, and performance requirements, an approach for determining the kinematic applicability and performance characteristics is presented, with a focus on waste storage tank remediation. Requirements are identified, kinematic configurations are considered, and a parametric study of link design parameters and their effects on performance characteristics is presented.

  14. Key design requirements for long-reach manipulators

    International Nuclear Information System (INIS)

    Kwon, D.S.; March-Leuba, S.; Babcock, S.M.; Hamel, W.R.

    1993-09-01

Long-reach manipulators differ from industrial robots and teleoperators typically used in the nuclear industry in that the aspect ratio (length to diameter) of links is much greater and link flexibility, as well as joint or drive train flexibility, is likely to be significant. Long-reach manipulators will be required for a variety of applications in the Environmental Restoration and Waste Management Program. While each application will present specific functional, kinematic, and performance requirements, an approach for determining the kinematic applicability and performance characteristics is presented, with a focus on waste storage tank remediation. Requirements are identified, kinematic configurations are considered, and a parametric study of link design parameters and their effects on performance characteristics is presented.

  15. Key Design Requirements for Long-Reach Manipulators

    International Nuclear Information System (INIS)

    Kwon, D.S.

    2001-01-01

    Long-reach manipulators differ from industrial robots and teleoperators typically used in the nuclear industry in that the aspect ratio (length to diameter) of links is much greater and link flexibility, as well as joint or drive train flexibility, is likely to be significant. Long-reach manipulators will be required for a variety of applications in the Environmental Restoration and Waste Management Program. While each application will present specific functional, kinematic, and performance requirements, an approach for determining the kinematic applicability and performance characteristics is presented, with a focus on waste storage tank remediation. Requirements are identified, kinematic configurations are considered, and a parametric study of link design parameters and their effects on performance characteristics is presented

  16. BLOND, a building-level office environment dataset of typical electrical appliances

    Science.gov (United States)

    Kriechbaumer, Thomas; Jacobsen, Hans-Arno

    2018-03-01

    Energy metering has gained popularity as conventional meters are replaced by electronic smart meters that promise energy savings and higher comfort levels for occupants. Achieving these goals requires a deeper understanding of consumption patterns to reduce the energy footprint: load profile forecasting, power disaggregation, appliance identification, startup event detection, etc. Publicly available datasets are used to test, verify, and benchmark possible solutions to these problems. For this purpose, we present the BLOND dataset: continuous energy measurements of a typical office environment at high sampling rates with common appliances and load profiles. We provide voltage and current readings for aggregated circuits and matching fully-labeled ground truth data (individual appliance measurements). The dataset contains 53 appliances (16 classes) in a 3-phase power grid. BLOND-50 contains 213 days of measurements sampled at 50kSps (aggregate) and 6.4kSps (individual appliances). BLOND-250 consists of the same setup: 50 days, 250kSps (aggregate), 50kSps (individual appliances). These are the longest continuous measurements at such high sampling rates and fully-labeled ground truth we are aware of.

  17. BLOND, a building-level office environment dataset of typical electrical appliances.

    Science.gov (United States)

    Kriechbaumer, Thomas; Jacobsen, Hans-Arno

    2018-03-27

    Energy metering has gained popularity as conventional meters are replaced by electronic smart meters that promise energy savings and higher comfort levels for occupants. Achieving these goals requires a deeper understanding of consumption patterns to reduce the energy footprint: load profile forecasting, power disaggregation, appliance identification, startup event detection, etc. Publicly available datasets are used to test, verify, and benchmark possible solutions to these problems. For this purpose, we present the BLOND dataset: continuous energy measurements of a typical office environment at high sampling rates with common appliances and load profiles. We provide voltage and current readings for aggregated circuits and matching fully-labeled ground truth data (individual appliance measurements). The dataset contains 53 appliances (16 classes) in a 3-phase power grid. BLOND-50 contains 213 days of measurements sampled at 50kSps (aggregate) and 6.4kSps (individual appliances). BLOND-250 consists of the same setup: 50 days, 250kSps (aggregate), 50kSps (individual appliances). These are the longest continuous measurements at such high sampling rates and fully-labeled ground truth we are aware of.

  18. A method enabling simultaneous pressure and temperature measurement using a single piezoresistive MEMS pressure sensor

    International Nuclear Information System (INIS)

    Frantlović, Miloš; Stanković, Srđan; Jokić, Ivana; Lazić, Žarko; Smiljanić, Milče; Obradov, Marko; Vukelić, Branko; Jakšić, Zoran

    2016-01-01

    In this paper we present a high-performance, simple and low-cost method for simultaneous measurement of pressure and temperature using a single piezoresistive MEMS pressure sensor. The proposed measurement method utilizes the parasitic temperature sensitivity of the sensing element for both pressure measurement correction and temperature measurement. A parametric mathematical model of the sensor was established and its parameters were calculated using the obtained characterization data. Based on the model, a real-time sensor correction for both pressure and temperature measurements was implemented in a target measurement system. The proposed method was verified experimentally on a group of typical industrial-grade piezoresistive sensors. The obtained results indicate that the method enables the pressure measurement performance to exceed that of typical digital industrial pressure transmitters, achieving at the same time the temperature measurement performance comparable to industrial-grade platinum resistance temperature sensors. The presented work is directly applicable in industrial instrumentation, where it can add temperature measurement capability to the existing pressure measurement instruments, requiring little or no additional hardware, and without adverse effects on pressure measurement performance. (paper)
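The core idea of exploiting the parasitic temperature sensitivity can be sketched with a toy model. The linear form and all coefficients below are hypothetical (the paper's actual parametric model is obtained from sensor characterization and is not specified here); the point is that measuring two bridge quantities yields a 2x2 system solvable for pressure and temperature simultaneously.

```python
# Assumed linear parametric model (hypothetical coefficients a..f):
#   v = a*p + b*T + c   # bridge output: mostly pressure, some temperature
#   r = d*T + e*p + f   # bridge resistance: mostly temperature
# Measuring both v and r gives two equations in the two unknowns p and T.

def solve_p_t(v, r, a, b, c, d, e, f):
    """Solve the 2x2 linear system for pressure p and temperature T (Cramer's rule)."""
    det = a * d - b * e
    p = ((v - c) * d - b * (r - f)) / det
    t = (a * (r - f) - e * (v - c)) / det
    return p, t

# Round-trip check with made-up coefficients and operating point:
a, b, c, d, e, f = 2.0, 0.05, 0.1, 3.0, 0.01, 100.0
p_true, t_true = 5.0, 25.0
v = a * p_true + b * t_true + c
r = d * t_true + e * p_true + f
p, t = solve_p_t(v, r, a, b, c, d, e, f)
print(round(p, 6), round(t, 6))  # 5.0 25.0
```

A real implementation would replace the linear model with the characterized (typically polynomial) sensor model and solve it numerically, but the correction-plus-readout structure stays the same.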

  19. Typical event horizons in AdS/CFT

    Energy Technology Data Exchange (ETDEWEB)

    Avery, Steven G.; Lowe, David A. [Department of Physics, Brown University,Providence, RI 02912 (United States)

    2016-01-14

    We consider the construction of local bulk operators in a black hole background dual to a pure state in conformal field theory. The properties of these operators in a microcanonical ensemble are studied. It has been argued in the literature that typical states in such an ensemble contain firewalls, or otherwise singular horizons. We argue this conclusion can be avoided with a proper definition of the interior operators.

  20. Typical event horizons in AdS/CFT

    Science.gov (United States)

    Avery, Steven G.; Lowe, David A.

    2016-01-01

    We consider the construction of local bulk operators in a black hole background dual to a pure state in conformal field theory. The properties of these operators in a microcanonical ensemble are studied. It has been argued in the literature that typical states in such an ensemble contain firewalls, or otherwise singular horizons. We argue this conclusion can be avoided with a proper definition of the interior operators.

  1. Typical performance of regular low-density parity-check codes over general symmetric channels

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Toshiyuki [Department of Electronics and Information Engineering, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397 (Japan); Saad, David [Neural Computing Research Group, Aston University, Aston Triangle, Birmingham B4 7ET (United Kingdom)

    2003-10-31

Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models.

  2. Typical performance of regular low-density parity-check codes over general symmetric channels

    International Nuclear Information System (INIS)

    Tanaka, Toshiyuki; Saad, David

    2003-01-01

Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models

  3. 29 CFR 780.210 - The typical hatchery operations constitute “agriculture.”

    Science.gov (United States)

    2010-07-01

    ... EXEMPTIONS APPLICABLE TO AGRICULTURE, PROCESSING OF AGRICULTURAL COMMODITIES, AND RELATED SUBJECTS UNDER THE FAIR LABOR STANDARDS ACT Agriculture as It Relates to Specific Situations Hatchery Operations § 780.210 The typical hatchery operations constitute “agriculture.” As stated in § 780.127, the typical hatchery...

  4. Using Typical Infant Development to Inform Music Therapy with Children with Disabilities

    Science.gov (United States)

    Wheeler, Barbara L.; Stultz, Sylvia

    2008-01-01

    This article illustrates some ways in which observations of typically-developing infants can inform music therapy and other work with children with disabilities. The research project that is described examines typical infant development with special attention to musical relatedness and communication. Videotapes of sessions centering on musical…

  5. Integral method for the calculation of Hawking radiation in dispersive media. II. Asymmetric asymptotics.

    Science.gov (United States)

    Robertson, Scott

    2014-11-01

    Analog gravity experiments make feasible the realization of black hole space-times in a laboratory setting and the observational verification of Hawking radiation. Since such analog systems are typically dominated by dispersion, efficient techniques for calculating the predicted Hawking spectrum in the presence of strong dispersion are required. In the preceding paper, an integral method in Fourier space is proposed for stationary 1+1-dimensional backgrounds which are asymptotically symmetric. Here, this method is generalized to backgrounds which are different in the asymptotic regions to the left and right of the scattering region.

  6. Sample diversity and premise typicality in inductive reasoning: evidence for developmental change.

    Science.gov (United States)

    Rhodes, Marjorie; Brickman, Daniel; Gelman, Susan A

    2008-08-01

Evaluating whether a limited sample of evidence provides a good basis for induction is a critical cognitive task. We hypothesized that whereas adults evaluate the inductive strength of samples containing multiple pieces of evidence by attending to the relations among the exemplars (e.g., sample diversity), six-year-olds would attend to the degree to which each individual exemplar in a sample independently appears informative (e.g., premise typicality). To test these hypotheses, participants were asked to select between diverse and non-diverse samples to help them learn about basic-level animal categories. Across various between-subjects conditions (N=133), we varied the typicality present in the diverse and non-diverse samples. We found that adults reliably chose to examine diverse over non-diverse samples regardless of exemplar typicality, whereas six-year-olds preferred to examine samples containing typical exemplars regardless of sample diversity; nine-year-olds were in the midst of this developmental transition.

  7. Typical balance exercises or exergames for balance improvement?

    Science.gov (United States)

    Gioftsidou, Asimenia; Vernadakis, Nikolaos; Malliou, Paraskevi; Batzios, Stavros; Sofokleous, Polina; Antoniou, Panagiotis; Kouli, Olga; Tsapralis, Kyriakos; Godolias, George

    2013-01-01

Balance training is an effective intervention to improve static postural sway and balance. The purpose of the present study was to investigate the effectiveness of the Nintendo Wii Fit Plus exercises for improving balance ability in healthy collegiate students in comparison with a typical balance training program. Forty students were randomly divided into two groups, a traditional group (T group) and a Nintendo Wii group (W group), each of which performed an 8-week balance program. The W group used the interactive games as a training method, while the T group used an exercise program with a mini trampoline and inflatable discs (BOSU). Pre- and post-training, participants completed balance assessments. Two-way repeated-measures analyses of variance (ANOVAs) were conducted to determine the effect of the training programs. Analysis of the data illustrated that both training groups demonstrated an improvement in Total, Anterior-Posterior and Medial-Lateral Stability Index scores for both limbs. Only in the test performed on the balance board with anterior-posterior motion was the improvement in balance ability greater in the T group than in the W group post-training (p=0.023). Findings support the effectiveness of using the Nintendo Wii gaming console as a balance training intervention tool.

  8. Prospective memory deficits in illicit polydrug users are associated with the average long-term typical dose of ecstasy typically consumed in a single session.

    Science.gov (United States)

    Gallagher, Denis T; Hadjiefthyvoulou, Florentia; Fisk, John E; Montgomery, Catharine; Robinson, Sarita J; Judge, Jeannie

    2014-01-01

Neuroimaging evidence suggests that ecstasy-related reductions in SERT densities relate more closely to the number of tablets typically consumed per session than to estimated total lifetime use. To better understand the basis of drug-related deficits in prospective memory (p.m.), we explored the association between p.m. and average long-term typical dose and long-term frequency of use. Study 1: Sixty-five ecstasy/polydrug users and 85 nonecstasy users completed an event-based, a short-term and a long-term time-based p.m. task. Study 2: Study 1 data were merged with outcomes on the same p.m. measures from a previous study, creating a combined sample of 103 ecstasy/polydrug users, 38 cannabis-only users, and 65 nonusers of illicit drugs. Study 1: Ecstasy/polydrug users had significant impairments on all p.m. outcomes compared with nonecstasy users. Study 2: Ecstasy/polydrug users were impaired in event-based p.m. compared with both other groups and in long-term time-based p.m. compared with nonillicit drug users. Both drug-using groups did worse on the short-term time-based p.m. task compared with nonusers. Higher long-term average typical dose of ecstasy was associated with poorer performance on the event and short-term time-based p.m. tasks and accounted for unique variance in the two p.m. measures over and above the variance associated with cannabis and cocaine use. The typical ecstasy dose consumed in a single session is an important predictor of p.m. impairment, with higher doses, reflecting increasing tolerance, giving rise to greater impairment.

  9. Dysphonia Severity Index in Typically Developing Indian Children.

    Science.gov (United States)

    Pebbili, Gopi Kishore; Kidwai, Juhi; Shabnam, Srushti

    2017-01-01

    Dysphonia is a variation in an individual's quality, pitch, or loudness from the voice characteristics typical of a speaker of similar age, gender, cultural background, and geographic location. Dysphonia Severity Index (DSI) is a recognized assessment tool based on a weighted combination of maximum phonation time, highest frequency, lowest intensity, and jitter (%) of an individual. Although dysphonia in adults is accurately evaluated using DSI, standard reference values for school-age children have not been studied. This study aims to document the DSI scores in typically developing children (8-12 years). A total of 42 typically developing children (8-12 years) without complaint of voice problem on the day of testing participated in the study. DSI was computed by substituting the raw scores of substituent parameters: maximum phonation time, highest frequency, lowest intensity, and jitter% using various modules of CSL 4500 software. The average DSI values obtained in children were 2.9 (1.23) and 3.8 (1.29) for males and females, respectively. DSI values are found to be significantly higher (P = 0.027) for females than those for males in Indian children. This could be attributed to the anatomical and behavioral differences among females and males. Further, pubertal changes set in earlier for females approximating an adult-like physiology, thereby leading to higher DSI values in them. The mean DSI value obtained for male and female Indian children can be used as a preliminary reference data against which the DSI values of school-age children with dysphonia can be compared. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
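The DSI described above is a weighted combination of its four substituent measures. As an illustrative sketch, it can be computed with the weights reported by Wuyts et al. (2000); those weights and the raw scores below are assumptions for illustration, not values taken from this study:

```python
def dysphonia_severity_index(mpt_s, f0_high_hz, i_low_db, jitter_pct):
    """DSI as a weighted sum of maximum phonation time (s), highest
    frequency (Hz), lowest intensity (dB) and jitter (%).  The weights
    are the conventional ones from Wuyts et al. (2000), assumed here."""
    return (0.13 * mpt_s
            + 0.0053 * f0_high_hz
            - 0.26 * i_low_db
            - 1.18 * jitter_pct
            + 12.4)

# Hypothetical raw scores for a healthy child (not study data):
dsi = dysphonia_severity_index(mpt_s=12.0, f0_high_hz=700.0,
                               i_low_db=55.0, jitter_pct=0.8)
```

Lower (or negative) DSI values indicate more severe dysphonia, which is why a normative reference for children, as collected in this study, is needed for comparison.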

  10. Mercury monitoring in fish using a non-lethal tissue biopsy method

    Science.gov (United States)

    Ackerson, J; Schmitt, Christopher J.; McKee, J; Brumbaugh, W. G.

    2010-01-01

The occurrence of mercury in fish is well known, and concentrations often reach levels that warrant restricted consumption by sensitive human populations. Because of this, local wildlife and health agencies have developed monitoring programs to identify the magnitude of fish contamination and changes through time. Monitoring mercury levels in fish typically requires killing the fish to remove a fillet. Recently, researchers have proposed the use of a non-lethal tissue biopsy plug method as a surrogate for analysis of the entire fillet.

  11. Modelling object typicality in description logics

    CSIR Research Space (South Africa)

    Britz, K

    2009-12-01

Full Text Available in the context under consideration, than those lower down. For any given class C, we assume that all objects in the application domain that are in (the interpretation of) C are more typical of C than those not in C. This is a technical construction which... to be modular partial orders, i.e. reflexive, transitive, antisymmetric relations such that, for all a, b, c in ∆I, if a and b are incomparable and a is strictly below c, then b is also strictly below c. Modular partial orders have the effect...
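The modularity condition quoted in the excerpt can be checked mechanically on a finite domain. A minimal sketch (the relation encoding and helper names are hypothetical, not from the cited work):

```python
from itertools import product

def is_modular_partial_order(domain, leq):
    """Check that `leq` (a set of (a, b) pairs meaning a <= b) is a
    reflexive, transitive, antisymmetric relation on `domain` that is
    also modular: if a and b are incomparable and a is strictly below
    c, then b is also strictly below c."""
    def le(a, b): return (a, b) in leq
    def lt(a, b): return le(a, b) and a != b
    for a in domain:
        if not le(a, a):                              # reflexivity
            return False
    for a, b in product(domain, repeat=2):
        if le(a, b) and le(b, a) and a != b:          # antisymmetry
            return False
    for a, b, c in product(domain, repeat=3):
        if le(a, b) and le(b, c) and not le(a, c):    # transitivity
            return False
        incomparable = not le(a, b) and not le(b, a)
        if incomparable and lt(a, c) and not lt(b, c):  # modularity
            return False
    return True

# Two incomparable atoms below a common top element: a modular order.
dom = {"a", "b", "top"}
order = {(x, x) for x in dom} | {("a", "top"), ("b", "top")}
```

Dropping the pair `("b", "top")` from `order` breaks modularity: `a` and `b` remain incomparable, `a` is strictly below `top`, but `b` no longer is.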

  12. Typicality Mediates Performance during Category Verification in Both Ad-Hoc and Well-Defined Categories

    Science.gov (United States)

    Sandberg, Chaleece; Sebastian, Rajani; Kiran, Swathi

    2012-01-01

    Background: The typicality effect is present in neurologically intact populations for natural, ad-hoc, and well-defined categories. Although sparse, there is evidence of typicality effects in persons with chronic stroke aphasia for natural and ad-hoc categories. However, it is unknown exactly what influences the typicality effect in this…

  13. Narrative versus Style: Effect of Genre Typical Events versus Genre Typical Filmic Realizations on Film Viewers' Genre Recognition

    OpenAIRE

    Visch, V.; Tan, E.

    2008-01-01

This study investigated whether film viewers recognize four basic genres (comic, drama, action and nonfiction) on the basis of genre-typical event cues or of genre-typical filmic realization cues of events. Event cues are similar to the narrative content of a film sequence, while filmic realization cues are similar to stylistic surface cues of a film sequence. It was predicted that genre recognition of short film fragments is cued more by filmic realization cues than by event cues. The results...

  14. Typical Werner states satisfying all linear Bell inequalities with dichotomic measurements

    Science.gov (United States)

    Luo, Ming-Xing

    2018-04-01

Quantum entanglement as a special resource inspires various distinct applications in quantum information processing. Unfortunately, it is NP-hard to detect general quantum entanglement using Bell testing. Our goal is to investigate quantum entanglement with white noises that appear frequently in experiment and quantum simulations. Surprisingly, for almost all multipartite generalized Greenberger-Horne-Zeilinger states there are entangled noisy states that satisfy all linear Bell inequalities consisting of full correlations with dichotomic inputs and outputs of each local observer. This result shows generic undetectability of mixed entangled states in contrast to Gisin's theorem of pure bipartite entangled states in terms of Bell nonlocality. We further provide an accessible method to show a nontrivial set of noisy entanglement with a small number of parties satisfying all general linear Bell inequalities. These results imply typical incompleteness of special Bell theory in explaining entanglement.

  15. Instrumentation requirements for the ESF thermomechanical experiments

    International Nuclear Information System (INIS)

    Pott, J.; Brechtel, C.E.

    1992-01-01

In situ thermomechanical experiments are planned as part of the Yucca Mountain Site Characterization Project that require instruments to measure stress and displacement at temperatures that exceed the typical specifications of existing geotechnical instruments. A high degree of instrument reliability will also be required to satisfy the objectives of the experiments; therefore, a study was undertaken to identify areas where improvement in instrument performance was required. A preliminary list of instruments required for the experiments was developed, based on existing test planning and analysis. Projected temperature requirements were compared to specifications of existing instruments to identify instrumentation development needs. Different instrument technologies, not currently employed in geotechnical instrumentation, were reviewed to identify potential improvements of existing designs for the high-temperature environment. Technologies with strong potential to improve instrument performance with relatively high reliability include graphite fiber composite materials, fiber optics, and video imagery.

  16. Ecosystem responses to warming and watering in typical and desert steppes

    OpenAIRE

    Zhenzhu Xu; Yanhui Hou; Lihua Zhang; Tao Liu; Guangsheng Zhou

    2016-01-01

    Global warming is projected to continue, leading to intense fluctuations in precipitation and heat waves and thereby affecting the productivity and the relevant biological processes of grassland ecosystems. Here, we determined the functional responses to warming and altered precipitation in both typical and desert steppes. The results showed that watering markedly increased the aboveground net primary productivity (ANPP) in a typical steppe during a drier year and in a desert steppe over two ...

  17. Typical exposure of children to EMF: exposimetry and dosimetry

    International Nuclear Information System (INIS)

    Valic, Blaz; Kos, Bor; Gajsek, Peter

    2015-01-01

A survey study with portable exposimeters, worn by 21 children under the age of 17, and detailed measurements in an apartment above a transformer substation were carried out to determine the typical individual exposure of children to extremely low- and radio-frequency (RF) electromagnetic fields. In total, portable exposimeters were worn for >2400 h. Based on the typical individual exposure, the in situ electric field and specific absorption rate (SAR) values were calculated for an 11-y-old female human model. The average exposure was determined to be low compared with ICNIRP reference levels: 0.29 μT for an extremely low frequency (ELF) magnetic field and 0.09 V m⁻¹ for GSM base stations, 0.11 V m⁻¹ for DECT and 0.10 V m⁻¹ for WiFi; other contributions could be neglected. However, some of the volunteers were more exposed: the highest realistic exposure, to which children could be exposed for a prolonged period of time, was 1.35 μT for the ELF magnetic field and 0.38 V m⁻¹ for DECT, 0.13 V m⁻¹ for WiFi and 0.26 V m⁻¹ for GSM base stations. Numerical calculations of the in situ electric field and SAR values for the typical and the worst-case situation show that, compared with ICNIRP basic restrictions, the average exposure is low. In the typical exposure scenario, the extremely low frequency exposure is <0.03 % and the RF exposure <0.001 % of the corresponding basic restriction. In the worst-case situation, the extremely low frequency exposure is <0.11 % and the RF exposure <0.007 % of the corresponding basic restrictions. Analysis of the exposures and the individual's perception of being exposed/unexposed to an ELF magnetic field showed that it is impossible to estimate the individual exposure to an ELF magnetic field based only on the information provided by the individuals, as they do not have enough knowledge and information to properly identify the sources in their vicinity. (authors)

  18. The use of the case study method in radiation worker continuing training

    International Nuclear Information System (INIS)

    Stevens, R.D.

    1990-01-01

    Typical methods of continuing training are often viewed by employees as boring, redundant and unnecessary. It is hoped that the operating experience lesson in the required course, Radiation Worker Requalification, will be well received by employees because actual RFP events will be presented as case studies. The interactive learning atmosphere created by the case study method stimulates discussion, develops analytical abilities, and motivates employees to use lessons learned in the workplace. This problem solving approach to continuing training incorporates cause and effect analysis, a technique which is also used at RFP to investigate events. A method of designing the operating experience lesson in the Radiation Worker Requalification course is described in this paper. 7 refs., 2 figs

  19. Feasibility Exploration of Electrodermal Response to Food in Children with ASD Compared to Typically Developing Children

    Directory of Open Access Journals (Sweden)

    Michelle A. Suarez

    2018-01-01

Full Text Available Background: Children with Autism Spectrum Disorder (ASD) frequently have food selectivity that causes additional health and quality-of-life stressors for the child and the family. The causes of food selectivity are currently unknown but may be linked, at least in part, to sensory processing problems. Method: The purpose of this study was to test the feasibility of using electrodermal activity (EDA) measurement in response to food to gain insight into the physiological processes associated with eating for children with ASD compared to typically developing children. In addition, differences in food acceptance and the relationship between food acceptance and sensory over-responsivity (SOR) were explored. Results: Children with ASD had significantly different EDA during food presentation compared to typically developing controls. In addition, children with ASD accepted significantly fewer foods as part of their regular diet, and the number of foods accepted was significantly related to a measure of SOR. Discussion: This information has the potential to inform research and treatment for food selectivity.

  20. South American Youth and Integration : Typical Situations and Youth ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    South American Youth and Integration : Typical Situations and Youth ... IDRC partner the World Economic Forum is building a hub for inclusive growth ... Brazil, Paraguay and Uruguay) and their perception of rights, democracy and regional.

  1. An efficient strongly coupled immersed boundary method for deforming bodies

    Science.gov (United States)

    Goza, Andres; Colonius, Tim

    2016-11-01

    Immersed boundary methods treat the fluid and immersed solid with separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and Air Force Office of Scientific Research.
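The convergence contrast described above can be illustrated on a toy scalar "coupled" system m·x = cos(x), where m loosely plays the role of the solid-to-fluid mass ratio: the fixed-point (Gauss-Seidel style) iteration slows dramatically as m approaches its stability limit, while Newton-Raphson, using a linearization of the residual, converges in a handful of iterations. This is only a schematic analogy, not the paper's immersed boundary formulation:

```python
import math

def gauss_seidel(m, x0=1.0, tol=1e-10, max_iter=10_000):
    """Fixed-point (block Gauss-Seidel style) iteration for the toy
    coupled system m*x = cos(x): repeatedly set x <- cos(x)/m."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = math.cos(x) / m
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

def newton(m, x0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson on the monolithic residual h(x) = m*x - cos(x),
    using the exact (here scalar) Jacobian of the linearized system."""
    x = x0
    for k in range(1, max_iter + 1):
        h = m * x - math.cos(x)
        dh = m + math.sin(x)
        x_new = x - h / dh
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("Newton iteration did not converge")

# Near the fixed-point stability limit (m -> 1), Gauss-Seidel needs
# many iterations while Newton is essentially unaffected.
x_gs, iters_gs = gauss_seidel(1.1)
x_nr, iters_nr = newton(1.1)
```

In the full immersed boundary setting the residual is a large coupled system rather than a scalar, which is why the paper's block LU factorization of the linearized equations matters: it retains Newton-like convergence without assembling large Jacobian matrices.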

  2. Application of the Oslo method to high resolution gamma spectra

    Science.gov (United States)

    Simon, A.; Guttormsen, M.; Larsen, A. C.; Beausang, C. W.; Humby, P.

    2015-10-01

The Hauser-Feshbach (HF) statistical model is a widely used tool for calculating reaction cross sections, in particular for astrophysical processes. The HF model requires as input an optical potential, a gamma-strength function (GSF) and a level density (LD) to properly model the statistical properties of the nucleus. The Oslo method is a well-established technique to extract GSFs and LDs from experimental data, typically used for gamma spectra obtained with scintillation detectors. Here, the first application of the Oslo method to high-resolution data obtained using the Ge detectors of the STARLITER setup at TAMU is discussed. The GSFs and LDs extracted from (p,d) and (p,t) reactions on 152,154Sm targets will be presented.

  3. Dysfunctional metacognition and drive for thinness in typical and atypical anorexia nervosa.

    Science.gov (United States)

    Davenport, Emily; Rushford, Nola; Soon, Siew; McDermott, Cressida

    2015-01-01

    Anorexia nervosa is complex and difficult to treat. In cognitive therapies the focus has been on cognitive content rather than process. Process-oriented therapies may modify the higher level cognitive processes of metacognition, reported as dysfunctional in adult anorexia nervosa. Their association with clinical features of anorexia nervosa, however, is unclear. With reclassification of anorexia nervosa by DSM-5 into typical and atypical groups, comparability of metacognition and drive for thinness across groups and relationships within groups is also unclear. Main objectives were to determine whether metacognitive factors differ across typical and atypical anorexia nervosa and a non-clinical community sample, and to explore a process model by determining whether drive for thinness is concurrently predicted by metacognitive factors. Women receiving treatment for anorexia nervosa (n = 119) and non-clinical community participants (n = 100), aged between 18 and 46 years, completed the Eating Disorders Inventory (3(rd) Edition) and Metacognitions Questionnaire (Brief Version). Body Mass Index (BMI) of 18.5 kg/m(2) differentiated between typical (n = 75) and atypical (n = 44) anorexia nervosa. Multivariate analyses of variance and regression analyses were conducted. Metacognitive profiles were similar in both typical and atypical anorexia nervosa and confirmed as more dysfunctional than in the non-clinical group. Drive for thinness was concurrently predicted in the typical patients by the metacognitive factors, positive beliefs about worry, and need to control thoughts; in the atypical patients by negative beliefs about worry and, inversely, by cognitive self-consciousness, and in the non-clinical group by cognitive self-consciousness. Despite having a healthier weight, the atypical group was as severely affected by dysfunctional metacognitions and drive for thinness as the typical group. Because metacognition concurrently predicted drive for thinness

  4. Estimating long-term uranium resource availability and discovery requirements. A Canadian case study

    International Nuclear Information System (INIS)

    Martin, H.L.; Azis, A.; Williams, R.M.

    1979-01-01

    Well-founded estimates of the rate at which a country's resources might be made available are a prime requisite for energy planners and policy makers at the national level. To meet this need, a method is discussed that can aid in the analysis of future supply patterns of uranium and other metals. Known sources are first appraised, on a mine-by-mine basis, in relation to projected domestic needs and expectable export levels. The gap between (a) production from current and anticipated mines, and (b) production levels needed to meet both domestic needs and export opportunities, would have to be met by new sources. Using as measuring sticks the resources and production capabilities of typical uranium deposits, a measure can be obtained of the required timing and magnitude of discovery needs. The new discoveries, when developed into mines, would need to be sufficient to meet not only any shortfalls in production capability, but also any special reserve requirements as stipulated, for example, under Canada's uranium export guidelines. Since the method can be followed simply and quickly, it can serve as a valuable tool for long-term supply assessments of any mineral commodity from a nation's mines. (author)
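The gap analysis described above can be sketched as a year-by-year comparison of projected requirements against output from current and anticipated mines, with the shortfall expressed in units of a typical deposit's production. All figures below are hypothetical, not values from the study:

```python
import math

def discovery_requirements(demand, existing_capacity, typical_mine_output):
    """Year-by-year shortfall between projected requirements (domestic
    needs plus exports) and output from current/anticipated mines,
    expressed also as the number of typical new mines needed."""
    needs = []
    for d, c in zip(demand, existing_capacity):
        shortfall = max(0.0, d - c)
        new_mines = math.ceil(shortfall / typical_mine_output)
        needs.append((shortfall, new_mines))
    return needs

# Hypothetical figures in tonnes of uranium per year:
demand = [8000, 9000, 11000, 14000]      # domestic needs + exports
capacity = [9000, 9000, 8500, 7000]      # current and committed mines
needs = discovery_requirements(demand, capacity, typical_mine_output=1000)
```

Since exploration and mine development have long lead times, the year in which each shortfall appears fixes when the corresponding discoveries must be made, which is the timing dimension the paper emphasizes.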

  5. Metabolic disorders with typical alterations in MRI

    International Nuclear Information System (INIS)

    Warmuth-Metz, M.

    2010-01-01

    The classification of metabolic disorders according to the etiology is not practical for neuroradiological purposes because the underlying defect does not uniformly transform into morphological characteristics. Therefore typical MR and clinical features of some easily identifiable metabolic disorders are presented. Canavan disease, Pelizaeus-Merzbacher disease, Alexander disease, X-chromosomal adrenoleukodystrophy and adrenomyeloneuropathy, mitochondrial disorders, such as MELAS (mitochondrial encephalopathy, lactic acidosis, and stroke-like episodes) and Leigh syndrome as well as L-2-hydroxyglutaric aciduria are presented. (orig.) [de

  6. A fast dose calculation method based on table lookup for IMRT optimization

    International Nuclear Information System (INIS)

    Wu Qiuwen; Djajaputra, David; Lauterbach, Marc; Wu Yan; Mohan, Radhe

    2003-01-01

    This note describes a fast dose calculation method that can be used to speed up the optimization process in intensity-modulated radiotherapy (IMRT). Most iterative optimization algorithms in IMRT require a large number of dose calculations to achieve convergence and therefore the total amount of time needed for the IMRT planning can be substantially reduced by using a faster dose calculation method. The method that is described in this note relies on an accurate dose calculation engine that is used to calculate an approximate dose kernel for each beam used in the treatment plan. Once the kernel is computed and saved, subsequent dose calculations can be done rapidly by looking up this kernel. Inaccuracies due to the approximate nature of the kernel in this method can be reduced by performing scheduled kernel updates. This fast dose calculation method can be performed more than two orders of magnitude faster than the typical superposition/convolution methods and therefore is suitable for applications in which speed is critical, e.g., in an IMRT optimization that requires a simulated annealing optimization algorithm or in a practical IMRT beam-angle optimization system. (note)
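The caching idea in the note can be sketched as follows: an accurate engine fills a kernel matrix once per beam, after which each dose calculation inside the optimization loop reduces to a matrix-vector lookup. The Gaussian pencil-beam profiles and geometry below are synthetic stand-ins, not the authors' dose engine:

```python
import numpy as np

def precompute_kernel(n_voxels, n_beamlets, seed=0):
    """One accurate-but-slow calculation per beam, cached once.
    Synthetic Gaussian pencil-beam profiles stand in for the dose
    engine's kernel; geometry and widths are hypothetical."""
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0, n_voxels, size=n_beamlets)
    voxels = np.arange(n_voxels, dtype=float)[:, None]
    return np.exp(-(((voxels - centers[None, :]) / 30.0) ** 2))

def dose_by_lookup(kernel, weights):
    """Fast path inside the optimization loop: dose becomes a
    matrix-vector product with the cached kernel instead of a fresh
    superposition/convolution calculation."""
    return kernel @ weights

kernel = precompute_kernel(n_voxels=1000, n_beamlets=50)  # done once
dose = dose_by_lookup(kernel, np.ones(50))                # done per iteration
```

Because the cached kernel is only an approximation of the true dose operator, the scheduled kernel updates mentioned in the note correspond to periodically recomputing `kernel` with the accurate engine during optimization.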

  7. Improving allowed outage time and surveillance test interval requirements: a study of their interactions using probabilistic methods

    International Nuclear Information System (INIS)

    Martorell, S.A.; Serradell, V.G.; Samanta, P.K.

    1995-01-01

    Technical Specifications (TS) define the limits and conditions for operating nuclear plants safely. We selected the Limiting Conditions for Operations (LCO) and Surveillance Requirements (SR), both within TS, as the main items to be evaluated using probabilistic methods. In particular, we focused on the Allowed Outage Time (AOT) and Surveillance Test Interval (STI) requirements in LCO and SR, respectively. Already, significant operating and design experience has accumulated revealing several problems which require modifications in some TS rules. Developments in Probabilistic Safety Assessment (PSA) allow the evaluation of effects due to such modifications in AOT and STI from a risk point of view. Thus, some changes have already been adopted in some plants. However, the combined effect of several changes in AOT and STI, i.e. through their interactions, is not addressed. This paper presents a methodology which encompasses, along with the definition of AOT and STI interactions, the quantification of interactions in terms of risk using PSA methods, an approach for evaluating simultaneous AOT and STI modifications, and an assessment of strategies for giving flexibility to plant operation through simultaneous changes on AOT and STI using trade-off-based risk criteria
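A first-order illustration of why AOT and STI interact in risk terms (a textbook approximation, not the paper's PSA methodology): the surveillance test interval T contributes an average undetected-failure unavailability of about λT/2, while outages bounded by the AOT contribute their frequency times mean duration. Changing either requirement shifts the balance between the two terms, so simultaneous modifications must be evaluated jointly:

```python
def standby_unavailability(fail_rate, sti, outage_rate, aot):
    """First-order PSA estimate of time-averaged unavailability of a
    standby component: fail_rate*STI/2 from failures undetected
    between surveillance tests, plus outage_rate*AOT from downtime
    periods bounded by the allowed outage time.  Illustrative only."""
    return fail_rate * sti / 2.0 + outage_rate * aot

# Hypothetical component: monthly test (720 h), 24 h allowed outage
# time, one outage roughly every six months (all assumed figures).
q = standby_unavailability(fail_rate=1e-5, sti=720.0,
                           outage_rate=1.0 / 4380.0, aot=24.0)
```

For instance, shortening the STI reduces the first term but tends to increase the frequency of test-induced outages feeding the second, which is one of the AOT-STI interactions the paper quantifies with PSA methods.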

  8. Laterality of Temporoparietal Causal Connectivity during the Prestimulus Period Correlates with Phonological Decoding Task Performance in Dyslexic and Typical Readers

    OpenAIRE

    Frye, Richard E.; Liederman, Jacqueline; McGraw Fisher, Janet; Wu, Meng-Hung

    2011-01-01

    We examined how effective connectivity into and out of the left and right temporoparietal areas (TPAs) to/from other key cortical areas affected phonological decoding in 7 dyslexic readers (DRs) and 10 typical readers (TRs) who were young adults. Granger causality was used to compute the effective connectivity of the preparatory network 500 ms prior to presentation of nonwords that required phonological decoding. Neuromagnetic activity was analyzed within the low, medium, and high beta and ga...

  9. Determination of illuminants representing typical white light emitting diodes sources

    DEFF Research Database (Denmark)

    Jost, S.; Ngo, M.; Ferrero, A.

    2017-01-01

Solid-state lighting (SSL) products are already in use by consumers and are rapidly gaining the lighting market. Especially, white Light Emitting Diode (LED) sources are replacing banned incandescent lamps and other lighting technologies in most general lighting applications. The aim of this work is to develop LED-based illuminants that describe typical white LED products based on their Spectral Power Distributions (SPDs). Some of these new illuminants will be recommended in the update of the CIE publication 15 on colorimetry with the other typical illuminants, and among them, some could be used to complement the CIE standard illuminant A for calibration use in photometry.

  10. Long-time integration methods for mesoscopic models of pattern-forming systems

    International Nuclear Information System (INIS)

    Abukhdeir, Nasser Mohieddin; Vlachos, Dionisios G.; Katsoulakis, Markos; Plexousakis, Michael

    2011-01-01

    Spectral methods for simulation of a mesoscopic diffusion model of surface pattern formation are evaluated for long simulation times. Backwards-differencing time-integration, coupled with an underlying Newton-Krylov nonlinear solver (SUNDIALS-CVODE), is found to substantially accelerate simulations, without the typical requirement of preconditioning. Quasi-equilibrium simulations of patterned phases predicted by the model are shown to agree well with linear stability analysis. Simulation results of the effect of repulsive particle-particle interactions on pattern relaxation time and short/long-range order are discussed.
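The advantage of implicit (backward-differencing) time integration for a spectral semi-discretization can be sketched on the linear heat equation, where the Fourier-space operator is diagonal, so the implicit solve needs no preconditioning at all. This toy example is not the SUNDIALS-CVODE setup used in the paper, only an illustration of the stability gain:

```python
import numpy as np

def spectral_heat_step(u_hat, k2, dt, nu, scheme="be"):
    """One time step of u_t = nu * u_xx in Fourier space, where the
    diffusion operator is diagonal.  Backward Euler ("be") is implicit
    yet trivial to solve here; forward Euler ("fe") is stable only for
    dt below an explicit limit that shrinks with the finest mode."""
    if scheme == "be":
        return u_hat / (1.0 + dt * nu * k2)
    return u_hat * (1.0 - dt * nu * k2)

n, nu, dt = 64, 1.0, 0.5          # dt far above the explicit limit
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers
k2 = k ** 2
u_hat = np.fft.fft(np.sin(x) + 0.1 * np.sin(10 * x))
for _ in range(100):
    u_hat = spectral_heat_step(u_hat, k2, dt, nu, "be")
u = np.fft.ifft(u_hat).real       # smoothly decayed solution
```

With the same oversized `dt`, the explicit scheme amplifies the k=10 mode by |1 − dt·ν·k²| = 49 per step and blows up within a few steps, which is the stiffness that motivates backward differencing in the mesoscopic model.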

  11. A Photometric Machine-Learning Method to Infer Stellar Metallicity

    Science.gov (United States)

    Miller, Adam A.

    2015-01-01

Following its formation, a star's metal content is one of the few factors that can significantly alter its evolution. Measurements of stellar metallicity ([Fe/H]) typically require a spectrum, but spectroscopic surveys are limited to a few x 10(exp 6) targets; photometric surveys, on the other hand, have detected > 10(exp 9) stars. I present a new machine-learning method to predict [Fe/H] from photometric colors measured by the Sloan Digital Sky Survey (SDSS). The training set consists of approx. 120,000 stars with SDSS photometry and reliable [Fe/H] measurements from the SEGUE Stellar Parameters Pipeline (SSPP). For bright stars (g' machine-learning method is similar to the scatter in [Fe/H] measurements from low-resolution spectra.
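A minimal sketch of the photometric approach: learn a colour-to-[Fe/H] mapping from a labelled training set, then apply it to stars with photometry only. Nearest-neighbour regression on synthetic data stands in here for the paper's actual machine-learning model and for real SDSS/SSPP data:

```python
import numpy as np

def knn_predict(train_colors, train_feh, query_colors, k=5):
    """Nearest-neighbour regression in colour space: average the
    [Fe/H] labels of the k photometrically closest training stars.
    A stand-in for the paper's model, not its actual algorithm."""
    d = np.linalg.norm(train_colors[None, :, :]
                       - query_colors[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return train_feh[idx].mean(axis=1)

# Synthetic demonstration: one colour correlates linearly with [Fe/H],
# a second colour is uninformative, plus small measurement noise.
rng = np.random.default_rng(1)
colors = rng.uniform(0, 1, size=(500, 2))
feh = -2.5 + 2.0 * colors[:, 0] + 0.05 * rng.normal(size=500)
pred = knn_predict(colors[:400], feh[:400], colors[400:], k=5)
```

The appeal of the method is exactly this asymmetry: the expensive labels ([Fe/H] from spectra) are needed only for the training set, while predictions cost a cheap colour-space lookup per star.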

  12. Minimum Colour Differences Required To Recognise Small Objects On A Colour CRT

    Science.gov (United States)

    Phillips, Peter L.

    1985-05-01

    Data is required to assist in the assessment, evaluation and optimisation of colour and other displays for both military and general use. A general aim is to develop a mathematical technique to aid optimisation and reduce the amount of expensive hardware development and trials necessary when introducing new displays. The present standards and methods available for evaluating colour differences are known not to apply to the perception of typical objects on a display. Data is required for irregular objects viewed at small angular subtense (<1°) and relating to the recognition of form rather than colour matching. Laboratory experiments have therefore been carried out using a computer-controlled CRT to measure the threshold colour difference that an observer requires between object and background in order to discriminate a variety of similar objects. Measurements are included for a variety of background and object colourings. The results are presented in the CIE colorimetric system, similar to current standards used by the display engineer. Apart from the characteristic small-field tritanopia, the results show that larger colour differences are required for object recognition than those assumed from conventional colour discrimination data. A simple relationship accounting for object size and background colour is suggested to aid visual performance assessments and modelling.

  13. Adaptive finite element methods for differential equations

    CERN Document Server

    Bangerth, Wolfgang

    2003-01-01

    These Lecture Notes discuss concepts of `self-adaptivity' in the numerical solution of differential equations, with emphasis on Galerkin finite element methods. The key issues are a posteriori error estimation and automatic mesh adaptation. Besides the traditional approach of energy-norm error control, a new duality-based technique, the Dual Weighted Residual method for goal-oriented error estimation, is discussed in detail. This method aims at economical computation of arbitrary quantities of physical interest by properly adapting the computational mesh. This is typically required in the design cycles of technical applications. For example, the drag coefficient of a body immersed in a viscous flow is computed, then it is minimized by varying certain control parameters, and finally the stability of the resulting flow is investigated by solving an eigenvalue problem. `Goal-oriented' adaptivity is designed to achieve these tasks with minimal cost. At the end of each chapter some exercises are posed in order ...

  14. Economic method for helical gear flank surface characterisation

    Science.gov (United States)

    Koulin, G.; Reavie, T.; Frazer, R. C.; Shaw, B. A.

    2018-03-01

    Typically the quality of a gear pair is assessed using simplified geometric tolerances, which do not always correlate with functional performance. In order to identify and quantify functional performance based parameters, further development of the gear measurement approach is required. A methodology for interpolation of the full active helical gear flank surface from sparse line measurements is presented. The method seeks to identify the minimum number of line measurements required to sufficiently characterise an active gear flank. In the form-ground gear example presented, a single helix and three profile line measurements were considered acceptable. The resulting surfaces can be used to simulate the meshing engagement of a gear pair and therefore provide insight into functional performance based parameters. The assessment of quality can therefore be based on the predicted performance in the context of an application.
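
    The idea of characterising a full flank from sparse line measurements can be sketched with a least-squares surface fit. The bilinear deviation surface and the line layout below are invented stand-ins for measured flank data; the actual method interpolates measured helix and profile traces:

```python
# A minimal sketch of the idea above: fit a smooth surface to sparse line
# measurements (three profile lines plus one helix line) by least squares.
# The bilinear deviation surface and line layout are invented stand-ins for
# measured gear flank data.
import numpy as np

def flank(x, y):
    # Synthetic flank deviation surface (arbitrary units), exactly bilinear.
    return 1.0 + 0.5 * x - 0.3 * y + 0.2 * x * y

xs, ys = [], []
for x0 in (0.1, 0.5, 0.9):                 # three "profile" lines (fixed x)
    y_line = np.linspace(0.0, 1.0, 20)
    xs.append(np.full_like(y_line, x0))
    ys.append(y_line)
t = np.linspace(0.0, 1.0, 20)              # one "helix" line across the flank
xs.append(t)
ys.append(t)
x = np.concatenate(xs)
y = np.concatenate(ys)
z = flank(x, y)

# Bilinear least-squares fit z ~ a + b*x + c*y + d*x*y over all measured points.
A = np.column_stack([np.ones_like(x), x, y, x * y])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
print(np.round(coef, 3))   # recovers [1.0, 0.5, -0.3, 0.2] (data is noise-free)
```

    Because the synthetic surface lies exactly in the span of the fitted basis, four sparse lines suffice to recover it; real flank data would need a richer basis and a residual check.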

  15. Biomedical Requirements for High Productivity Computing Systems

    Science.gov (United States)

    2005-04-01

    differences in heart muscle structure between normal and brittle-boned mice suffering from osteogenesis imperfecta (OI) because of a deficiency in the protein... reached. In a typical comparative modeling exercise one would use a heuristic algorithm to determine possible sequences of interest, then the Smith... example exercise, require a description of the cellular events that create demands for oxygen. Having cellular level equations together with

  16. Methods of viscosity measurements in sealed ampoules

    Science.gov (United States)

    Mazuruk, Konstantin

    1999-07-01

    The viscosity of semiconductor and metallic melts is usually measured by the oscillating-cup method. This method contains the melt in a vacuum-sealed silica ampoule, so that problems related to volatility, contamination, and high temperature and pressure can be alleviated. In a typical design, the time required for a single measurement is of the order of one hour. In order to reduce this time to the minute range, a high-resolution angular detection system is implemented in our design of the viscometer. Furthermore, an electromagnet generating a rotating magnetic field (RMF) is incorporated into the apparatus. This magnetic field can be used to remotely and nonintrusively measure the electrical conductivity of the melt. It can also be used to induce a well-controlled rotational flow in the system. The transient behavior of this flow can potentially yield the viscosity of the fluid. Based on the RMF implementation, two novel viscometry methods are proposed in this work: (a) the transient torque method and (b) the resonance method. A unified theoretical approach to the three methods is presented along with initial test results from the constructed apparatus. Advantages of each method are discussed.

  17. Effects of temperature and mass conservation on the typical chemical sequences of hydrogen oxidation

    Science.gov (United States)

    Nicholson, Schuyler B.; Alaghemandi, Mohammad; Green, Jason R.

    2018-01-01

    Macroscopic properties of reacting mixtures are necessary to design synthetic strategies, determine yield, and improve the energy and atom efficiency of many chemical processes. The set of time-ordered sequences of chemical species are one representation of the evolution from reactants to products. However, only a fraction of the possible sequences is typical, having the majority of the joint probability and characterizing the succession of chemical nonequilibrium states. Here, we extend a variational measure of typicality and apply it to atomistic simulations of a model for hydrogen oxidation over a range of temperatures. We demonstrate an information-theoretic methodology to identify typical sequences under the constraints of mass conservation. Including these constraints leads to an improved ability to learn the chemical sequence mechanism from experimentally accessible data. From these typical sequences, we show that two quantities defining the variational typical set of sequences—the joint entropy rate and the topological entropy rate—increase linearly with temperature. These results suggest that, away from explosion limits, data over a narrow range of thermodynamic parameters could be sufficient to extrapolate these typical features of combustion chemistry to other conditions.

  18. Identifying Similarities in Cognitive Subtest Functional Requirements: An Empirical Approach

    Science.gov (United States)

    Frisby, Craig L.; Parkin, Jason R.

    2007-01-01

    In the cognitive test interpretation literature, a Rational/Intuitive, Indirect Empirical, or Combined approach is typically used to construct conceptual taxonomies of the functional (behavioral) similarities between subtests. To address shortcomings of these approaches, the functional requirements for 49 subtests from six individually…

  19. Carbon balance of the typical grain crop rotation in Moscow region assessed by eddy covariance method

    Science.gov (United States)

    Meshalkina, Joulia; Yaroslavtsev, Alexis; Vassenev, Ivan

    2017-04-01

    Croplands could have equal or even greater net ecosystem production than several natural ecosystems (Hollinger et al., 2004), so agriculture plays a substantial role in mitigation strategies for the reduction of carbon dioxide emissions. In Central Russia, where agricultural soil carbon losses are 9 times higher than those of natural (forest) soils (Stolbovoi, 2002), the reduction of carbon dioxide emissions in agroecosystems must be a central focus of scientific efforts. Although the CO2 balance is mostly attributed to management practices, limited information exists regarding the potential of the crop rotation as a whole for C sequestration. In this study, we present data on the carbon balance of a typical grain crop rotation in the Moscow region, followed for 4 years by measuring CO2 fluxes with paired eddy covariance (EC) stations. The study was conducted at the Precision Farming Experimental Fields of the Russian Timiryazev State Agricultural University, Moscow, Russia. The experimental site has a temperate continental climate and is situated in the south taiga zone on arable Sod-Podzoluvisols (Albeluvisols Umbric). Two fields of the four-course rotation were studied in 2013-2016. The crop rotation included winter wheat (Triticum sativum L.), barley (Hordeum vulgare L.), potato (Solanum tuberosum L.) and a cereal-legume mixture (Vicia sativa L. and Avena sativa L.). Sowing occurred from mid-April to mid-May depending on weather conditions. Winter wheat was sown at the very beginning of September and emerged from under the snow the next year in the tillering phase. White mustard (Sinapis alba) was sown for green manure after harvesting winter wheat in mid-July. Barley was harvested in mid-August, and potato was harvested in September. The cereal-legume mixture was harvested for herbage, depending on the weather, from early July to mid-August. Carbon uptake (negative NEE values) was registered only for the fields with winter wheat and white

  20. A laser sheet self-calibration method for scanning PIV

    Science.gov (United States)

    Knutsen, Anna N.; Lawson, John M.; Dawson, James R.; Worth, Nicholas A.

    2017-10-01

    Knowledge of laser sheet position, orientation, and thickness is a fundamental requirement of scanning PIV and other laser-scanning methods. This paper describes the development and evaluation of a new laser sheet self-calibration method for stereoscopic scanning PIV, which allows the measurement of these properties from the particle images themselves. The approach is to fit a laser sheet model by treating particles as randomly distributed probes of the laser sheet profile, whose position is obtained via a triangulation procedure enhanced by matching particle images according to their variation in brightness over a scan. Numerical simulations and tests with experimental data were used to quantify the sensitivity of the method to typical experimental error sources and validate its performance in practice. The numerical simulations demonstrate the accurate recovery of the laser sheet parameters over a range of different seeding densities and sheet thicknesses. Furthermore, they show that the method is robust to significant image noise and camera misalignment. Tests with experimental data confirm that the laser sheet model can be accurately reconstructed with no impairment to PIV measurement accuracy. The new method is more efficient and robust in comparison with the standard (self-)calibration approach, which requires an involved, separate calibration step that is sensitive to experimental misalignments. The method significantly improves the practicality of making accurate scanning PIV measurements and broadens its potential applicability to scanning systems with significant vibrations.

  1. Flow velocity calculation to avoid instability in a typical research reactor core

    International Nuclear Information System (INIS)

    Oliveira, Carlos Alberto de; Mattar Neto, Miguel

    2011-01-01

    Flow velocity through a research reactor core composed of MTR-type fuel elements is investigated. Core cooling capacity must be maintained while fuel-plate collapse is avoided. Fuel plates do not rupture during plate collapse, but their lateral deflections can close flow channels and lead to plate overheating. The critical flow velocity is the speed at which the plates collapse through a static-instability type of failure. In this paper, the critical velocity and coolant velocity are evaluated for a typical MTR-type flat-plate fuel element. Miller's method is used for prediction of the critical velocity. The coolant velocity is limited to 2/3 of the critical velocity, which is a currently used criterion. Fuel plate characteristics are based on the open-pool Australian light water reactor. (author)
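
    The evaluation described above can be sketched with Miller's classical critical-velocity formula for parallel flat plates, as it is commonly quoted, together with the 2/3 limit. All plate and coolant values below are illustrative assumptions, not the actual reactor data:

```python
# Hedged sketch of the criterion above: Miller's critical-velocity formula for
# parallel flat plates, as it is commonly quoted, with the 2/3 design limit.
# All material and geometry values are illustrative assumptions.
import math

E = 69e9          # Pa, Young's modulus of aluminium cladding (assumed)
nu = 0.33         # Poisson's ratio (assumed)
rho = 998.0       # kg/m^3, light-water coolant density
t_plate = 1.27e-3 # m, fuel plate thickness (assumed)
t_chan = 2.5e-3   # m, coolant channel gap (assumed)
span = 66.0e-3    # m, unsupported plate width (assumed)

v_crit = math.sqrt(15.0 * E * t_plate**3 * t_chan
                   / (rho * span**4 * (1.0 - nu**2)))
v_allow = (2.0 / 3.0) * v_crit   # the currently used 2/3 criterion
print(f"critical {v_crit:.1f} m/s, allowable {v_allow:.1f} m/s")
```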

  2. Beyond Euler's Method: Implicit Finite Differences in an Introductory ODE Course

    Science.gov (United States)

    Kull, Trent C.

    2011-01-01

    A typical introductory course in ordinary differential equations (ODEs) exposes students to exact solution methods. However, many differential equations must be approximated with numerical methods. Textbooks commonly include explicit methods such as Euler's and Improved Euler's. Implicit methods are typically introduced in more advanced courses…
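
    The implicit (backward) Euler scheme mentioned above can be illustrated with the standard stiff test problem; this is a generic textbook sketch, not taken from the article. For y' = -k*y the implicit update is linear, so no root-finding is needed:

```python
# Generic textbook sketch (not from the article): backward Euler on the stiff
# test problem y' = -k*y. The implicit update is linear, so it can be solved
# in closed form without a root-finder.
import math

k, h = 50.0, 0.1   # stiff decay rate and a step size far too big for explicit Euler
y, t = 1.0, 0.0
for _ in range(10):
    # Backward Euler: y_new = y + h*f(t_new, y_new); with f = -k*y this gives
    y = y / (1.0 + h * k)
    t += h
print(f"t={t:.1f}  backward Euler y={y:.3e}  exact y={math.exp(-k * t):.3e}")
```

    With h*k = 5, the explicit Euler factor 1 - h*k = -4 makes the iteration diverge, while the implicit factor 1/(1 + h*k) = 1/6 keeps it stable; this is the usual motivation for introducing implicit methods.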

  3. Development of Gender Discrimination: Effect of Sex-Typical and Sex-Atypical Toys.

    Science.gov (United States)

    Etaugh, Claire; Duits, Terri L.

    Toddlers (41 girls and 35 boys) between 18 and 37 months of age were given four gender discrimination tasks each consisting of 6 pairs of color drawings. Three of the tasks employed color drawings of preschool girls and boys holding either a sex-typical toy, a sex-atypical toy, or no toy. The fourth employed pictures of sex-typical masculine and…

  4. Digital Resonant Controller based on Modified Tustin Discretization Method

    Directory of Open Access Journals (Sweden)

    STOJIC, D.

    2016-11-01

    Full Text Available Resonant controllers are used in power converter voltage and current control due to their simplicity and accuracy. However, digital implementation of resonant controllers introduces problems related to zero and pole mapping from the continuous to the discrete time domain. Namely, some discretization methods introduce significant errors in the digital controller resonant frequency, resulting in the loss of the asymptotic AC reference tracking, especially at high resonant frequencies. The delay compensation typical for resonant controllers can also be compromised. Based on the existing analysis, it can be concluded that the Tustin discretization with frequency prewarping represents a preferable choice from the point of view of the resonant frequency accuracy. However, this discretization method has a shortcoming in applications that require real-time frequency adaptation, since complex trigonometric evaluation is required for each frequency change. In order to overcome this problem, in this paper the modified Tustin discretization method is proposed based on the Taylor series approximation of the frequency prewarping function. By comparing the novel discretization method with commonly used two-integrator-based proportional-resonant (PR digital controllers, it is shown that the resulting digital controller resonant frequency and time delay compensation errors are significantly reduced for the novel controller.
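
    The prewarping issue discussed above can be sketched numerically. The snippet below compares the exact Tustin prewarping function (2/T)*tan(w0*T/2) with a truncated Taylor series of tan, in the spirit of, but not necessarily identical to, the paper's approximation; the sampling rate, resonant frequency and series order are illustrative assumptions:

```python
# Numerical sketch of the prewarping discussed above: the exact Tustin
# prewarping function (2/T)*tan(w0*T/2) versus a truncated Taylor series of
# tan, avoiding run-time trigonometry. The sampling rate, resonant frequency
# and series order are illustrative assumptions, not the paper's exact design.
import math

T = 1.0 / 10000.0            # 100 us sampling period (assumed)
w0 = 2.0 * math.pi * 350.0   # resonant frequency, rad/s (e.g. a 7th harmonic)

x = w0 * T / 2.0
exact = (2.0 / T) * math.tan(x)
approx = (2.0 / T) * (x + x**3 / 3.0 + 2.0 * x**5 / 15.0)  # Taylor series of tan

rel_err = abs(approx - exact) / exact
print(f"exact {exact:.4f}  series {approx:.4f}  rel. err {rel_err:.2e}")
```

    Updating the series for a new w0 needs only multiplications and additions, which is the point of the modified discretization when the resonant frequency must adapt in real time.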

  5. METHODS FOR DETERMINING AGITATOR MIXING REQUIREMENTS FOR A MIXING & SAMPLING FACILITY TO FEED WTP (WASTE TREATMENT PLANT)

    Energy Technology Data Exchange (ETDEWEB)

    GRIFFIN PW

    2009-08-27

    The following report is a summary of work conducted to evaluate the ability of existing correlative techniques and alternative methods to accurately estimate impeller speed and power requirements for mechanical mixers proposed for use in a mixing and sampling facility (MSF). The proposed facility would accept high level waste sludges from Hanford double-shell tanks and feed uniformly mixed high level waste to the Waste Treatment Plant. Numerous methods are evaluated and discussed, and resulting recommendations provided.

  6. 7 CFR 632.52 - Identifying typical classes of action.

    Science.gov (United States)

    2010-01-01

    ... § 632.52 Identifying typical classes of action. (a) The RFO will analyze the environmental assessment of....12. These actions are determined by a limited environmental assessment that reasonably identifies the... 632.52 Agriculture Regulations of the Department of Agriculture (Continued) NATURAL RESOURCES...

  7. Parallel Fast Multipole Boundary Element Method for crustal dynamics

    International Nuclear Information System (INIS)

    Quevedo, Leonardo; Morra, Gabriele; Mueller, R Dietmar

    2010-01-01

    Crustal faults and sharp material transitions in the crust are usually represented as triangulated surfaces in structural geological models. The complex range of volumes separating such surfaces is typically meshed in three dimensions in order to solve equations that describe crustal deformation with the finite-difference (FD) or finite-element (FEM) methods. We show here how the Boundary Element Method, combined with the Multipole approach, can revolutionise the calculation of stress and strain, solving the problem of computational scalability from reservoir to basin scales. The Fast Multipole Boundary Element Method (Fast BEM) tackles the difficulty of handling the intricate volume meshes and high resolution of crustal data that has put classical 3D finite approaches in a performance crisis. The two main performance enhancements of this method, the reduction of required mesh elements from cubic to quadratic scaling with linear size and the linear-logarithmic runtime, reduce memory and runtime requirements and allow the treatment of a new scale of geodynamic models. This approach was recently tested and applied in a series of papers [1, 2, 3] for regional and global geodynamics, using k-d trees for fast identification of near- and far-field interacting elements and MPI-parallelised code on distributed-memory architectures, and is now in active development for crustal dynamics. As the method is based on free surfaces, it allows easy data transfer to geological visualisation tools, where only changes in boundaries and material properties are required as input parameters. In addition, easy volume-mesh sampling of physical quantities enables direct integration with existing FD/FEM codes.

  8. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    Science.gov (United States)

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta‐analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between‐study variability, which is typically modelled using a between‐study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently widely used by default to estimate the between‐study variance, has been long challenged. Our aim is to identify known methods for estimation of the between‐study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between‐study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel and for continuous data the restricted maximum likelihood estimator are better alternatives to estimate the between‐study variance. Based on the scenarios and results presented in the published studies, we recommend the Q‐profile method and the alternative approach based on a ‘generalised Cochran between‐study variance statistic’ to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence‐based recommendations require an extensive simulation study where all methods would be compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
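
    For reference, the DerSimonian and Laird estimator discussed above is short enough to sketch directly; the effect estimates and within-study variances below are invented example data, not from any study in the review:

```python
# A minimal sketch of the widely used (and, per the review above, long
# challenged) DerSimonian-Laird estimator of the between-study variance tau^2.
# The effect estimates and within-study variances are invented example data.
import numpy as np

y = np.array([0.10, 0.50, -0.20, 0.80, 0.30])      # study effect estimates
v = np.array([0.010, 0.020, 0.015, 0.010, 0.025])  # within-study variances

w = 1.0 / v                             # fixed-effect (inverse-variance) weights
mu_fe = np.sum(w * y) / np.sum(w)       # fixed-effect pooled mean
Q = np.sum(w * (y - mu_fe) ** 2)        # Cochran's Q statistic
k = len(y)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)      # DL estimate, truncated at zero
print(f"Q = {Q:.2f}, tau^2 = {tau2:.4f}")
```

    The truncation at zero is one of the criticisms levelled at this estimator; the alternatives named in the abstract (Paule-Mandel, REML) avoid some of its shortcomings.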

  9. For your local eyes only: Culture-specific face typicality influences perceptions of trustworthiness

    NARCIS (Netherlands)

    Sofer, C.; Dotsch, R.; Oikawa, M.; Oikawa, H.; Wigboldus, D.H.J.; Todorov, A.T.

    2017-01-01

    Recent findings show that typical faces are judged as more trustworthy than atypical faces. However, it is not clear whether employment of typicality cues in trustworthiness judgment happens across cultures and if these cues are culture specific. In two studies, conducted in Japan and Israel,

  10. Intercomparison study of sampling methods for the determination of polychlorinated biphenyl (PCB) in seawater

    International Nuclear Information System (INIS)

    Schulz-Bull, D.E.

    1999-01-01

    The determination of organic pollutants in seawater is a serious problem, as their concentrations in the water column are typically in the fg-ng/L range. Available methods therefore include extensive sampling and laboratory work. The development of simple sampling techniques for organochlorines (e.g. passive sampling with semipermeable membrane devices (SPMDs), mussel watch) is required. Three methods for the measurement of trace organochlorines in seawater were investigated: (1) filtration (GF/F) and extraction (XAD-2 resin) of seawater with an in-situ pumping system, (2) bio-accumulation by mussels (Mytilus edulis) and (3) passive sampling with SPMDs

  11. Short-term cognitive improvement in schizophrenics treated with typical and atypical neuroleptics.

    Science.gov (United States)

    Rollnik, Jens D; Borsutzky, Marthias; Huber, Thomas J; Mogk, Hannu; Seifert, Jürgen; Emrich, Hinderk M; Schneider, Udo

    2002-01-01

    Atypical neuroleptics seem to be more beneficial than typical ones with respect to long-term neuropsychological functioning. Thus, most studies focus on the long-term effects of neuroleptics. We were interested in whether atypical neuroleptic treatment is also superior to typical drugs over relatively short periods of time. We studied 20 schizophrenic patients [10 males, mean age 35.5 years, mean Brief Psychiatric Rating Scale (BPRS) score at entry 58.9] admitted to our hospital with acute psychotic exacerbation. Nine of them were treated with typical and 11 with atypical neuroleptics. In addition, 14 healthy drug-free subjects (6 males, mean age 31.2 years) were enrolled in the study and compared to the patients. As neuropsychological tools, a divided attention test, the Vienna reaction time test, the Benton visual retention test, digit span and a Multiple Choice Word Fluency Test (MWT-B) were used during the first week after admission, within the third week and before discharge (approximately 3 months). Patients scored significantly worse than healthy controls on nearly all tests (except Vienna reaction time). Clinical ratings [BPRS and Positive and Negative Symptom Scale for Schizophrenia (PANSS)] improved markedly, and clinical improvement correlated with performance on the divided attention task (r = 0.705, p = 0.034). Neuropsychological functioning (explicit memory and divided attention, p < 0.05) moderately improved for both groups under treatment, but without a significant difference between atypical and typical antipsychotic drugs. Over short periods of time (3 months), neuropsychological disturbances in schizophrenia seem to be moderately responsive to both typical and atypical neuroleptics. Copyright 2002 S. Karger AG, Basel

  12. Typical skeletal changes due to metastasising neuroblastomas

    International Nuclear Information System (INIS)

    Eggerath, A.; Persigehl, M.; Mertens, R.; Technische Hochschule Aachen

    1983-01-01

    Compared with other solid tumours in childhood, neuroblastomas show a marked tendency to metastasise to the skeleton. The differentiation of these lesions from inflammatory and other malignant bone lesions in this age group is often difficult. The radiological findings in ten patients with metastasising, histologically confirmed neuroblastomas have been reviewed, and the typical appearances in the skeleton are described. The most important features in the differential diagnosis are discussed, and the significance of bone changes in the diagnosis of neuroblastoma has been evaluated. (orig.) [de

  13. Chapter 17: Residential Behavior Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Stewart, James [Cadmus, Waltham, MA (United States); Todd, Annika [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-11-01

    Residential behavior-based (BB) programs use strategies grounded in the behavioral and social sciences to influence household energy use. These may include providing households with real-time or delayed feedback about their energy use; supplying energy efficiency education and tips; rewarding households for reducing their energy use; comparing households to their peers; and establishing games, tournaments, and competitions. BB programs often target multiple energy end uses and encourage energy savings, demand savings, or both. Savings from BB programs are usually a small percentage of energy use, typically less than 5 percent. Utilities will continue to implement residential BB programs as large-scale, randomized control trials (RCTs); however, some are now experimenting with alternative program designs that are smaller scale; involve new communication channels such as the web, social media, and text messaging; or employ novel strategies for encouraging behavior change (for example, Facebook competitions). These programs will create new evaluation challenges and may require different evaluation methods, such as quasi-experimental designs, than those currently employed to verify any savings they generate. Quasi-experimental methods, however, require stronger assumptions to yield valid savings estimates and may not measure savings with the same degree of validity and accuracy as randomized experiments.
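
    The basic RCT savings estimate underlying such evaluations (the difference in mean post-period consumption between randomly assigned treatment and control homes) can be sketched on synthetic data; all numbers below are invented for illustration:

```python
# Illustrative sketch (all numbers invented): the basic RCT savings estimate is
# the difference in mean post-period consumption between randomly assigned
# treatment and control homes, with a standard error from the two group means.
import numpy as np

rng = np.random.default_rng(1)
control = rng.normal(900.0, 120.0, 5000)            # kWh/month, control homes
treatment = rng.normal(900.0 * 0.98, 120.0, 5000)   # ~2% behavioural savings

savings = control.mean() - treatment.mean()
se = np.sqrt(control.var(ddof=1) / control.size
             + treatment.var(ddof=1) / treatment.size)
print(f"estimated savings {savings:.1f} kWh/month (s.e. {se:.1f})")
```

    Because assignment is random, the simple difference in means is unbiased; the quasi-experimental designs mentioned above must instead adjust for non-random selection, which is where the stronger assumptions enter.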

  14. A review of typical thermal fatigue failure models for solder joints of electronic components

    Science.gov (United States)

    Li, Xiaoyan; Sun, Ruifeng; Wang, Yongdong

    2017-09-01

    For electronic components, cyclic plastic strain makes it easier to accumulate fatigue damage than elastic strain. When the solder joints undertake thermal expansion or cold contraction, different thermal strain of the electronic component and its corresponding substrate is caused by the different coefficient of thermal expansion of the electronic component and its corresponding substrate, leading to the phenomenon of stress concentration. So repeatedly, cracks began to sprout and gradually extend [1]. In this paper, the typical thermal fatigue failure models of solder joints of electronic components are classified and the methods of obtaining the parameters in the model are summarized based on domestic and foreign literature research.

  15. Detection of the Typical Pulse Condition on Cun-Guan-Chi Based on Image Sensor

    Directory of Open Access Journals (Sweden)

    Aihua ZHANG

    2014-02-01

    Full Text Available In order to simulate diagnosis by pulse-feeling in Traditional Chinese Medicine, a CCD-based device was designed to detect the pulse image at the Cun-Guan-Chi positions. Using the MM-3 pulse model as the experimental subject, synchronous pulse image data for several typical pulse conditions were collected by this device at Cun-Guan-Chi. The typical pulses include the normal pulse, the slippery pulse, the slow pulse and the soft pulse. According to the lens imaging principle, the pulse waves were extracted using the area method; the 3D pulse condition image was then reconstructed and features were extracted, including the period, the frequency, the width, and the length. The slippery-pulse data of pregnant women were collected by this device, and the pulse images were analyzed. Comparing the features of the slippery pulse model with the slippery pulses of pregnant women shows consistent results. This study overcomes shortcomings of existing detection devices, such as the small number of detection sites and the limited information obtained, and more comprehensive 3D pulse condition information can be acquired. This work lays a foundation for realizing objective diagnosis and revealing comprehensive pulse information.

  16. Time accuracy requirements for fusion experiments: A case study at ASDEX Upgrade

    International Nuclear Information System (INIS)

    Raupp, Gerhard; Behler, Karl; Eixenberger, Horst; Fitzek, Michael; Kollotzek, Horst; Lohs, Andreas; Lueddecke, Klaus; Mueller, Peter; Merkel, Roland; Neu, Gregor; Schacht, Joerg; Schramm, Gerold; Treutterer, Wolfgang; Zasche, Dieter; Zehetbauer, Thomas

    2010-01-01

    To manage and operate a fusion device and measure meaningful data, an accurate and stable time reference is needed. As a benchmark, we suggest considering time accuracy sufficient if it is better than typical data errors or process timescales. This allows application domains to be distinguished and appropriate time distribution methods to be chosen. For ASDEX Upgrade, a standard NTP method provides Unix time for project and operation management tasks, and a dedicated time system generates and distributes a precise experiment time for physics applications. Applying the benchmark to ASDEX Upgrade shows that physics measurements tagged with experiment time meet the requirements, while correlation of NTP-tagged operation data with physics data tagged with experiment time remains problematic. Closer coupling of the two initially free-running time systems with daily resets was an efficient and satisfactory improvement. For ultimate accuracy and seamless integration, however, continuous adjustment of the experiment time clock frequency to NTP is needed, within frequency variation limits given by the benchmark.

  17. Human Behavior, Learning, and the Developing Brain: Typical Development

    Science.gov (United States)

    Coch, Donna, Ed.; Fischer, Kurt W., Ed.; Dawson, Geraldine, Ed.

    2010-01-01

    This volume brings together leading authorities from multiple disciplines to examine the relationship between brain development and behavior in typically developing children. Presented are innovative cross-sectional and longitudinal studies that shed light on brain-behavior connections in infancy and toddlerhood through adolescence. Chapters…

  18. Displacement sensing system and method

    Science.gov (United States)

    VunKannon, Jr., Robert S

    2006-08-08

    A displacement sensing system and method addresses demanding requirements for high-precision sensing of the displacement of a shaft, typically in a linear electro-dynamic machine, with low failure rates over multi-year unattended operation in hostile environments. Applications include outer-space travel by spacecraft having high-temperature, sealed environments without opportunity for servicing over many years of operation. The displacement sensing system uses a three-coil sensor configuration, including reference and sense coils, to provide a pair of ratio-metric signals, which are input to a synchronous comparison circuit and synchronously processed for a resultant displacement determination. The two ratio-metric signals are similarly affected by environmental conditions, so the comparison circuit is able to subtract or nullify environmental effects that would otherwise degrade accuracy.
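
    A toy numeric sketch of the ratio-metric principle described above: a common environmental gain factor scales the sense and reference signals equally and therefore cancels in their ratio. The linear coil response and the helper names below are invented for illustration; this is not the patented circuit:

```python
# Toy numeric sketch of the ratio-metric principle described above: a common
# environmental gain factor scales the sense and reference signals equally and
# so cancels in their ratio. The linear coil response and the helper names are
# invented for illustration; this is not the patented circuit.

def coil_signals(displacement, env_gain):
    """Sense coil varies with displacement; reference coil does not."""
    sense = env_gain * (1.0 + 0.5 * displacement)   # assumed linear response
    ref = env_gain * 1.0
    return sense, ref

def estimate(displacement, env_gain):
    sense, ref = coil_signals(displacement, env_gain)
    return (sense / ref - 1.0) / 0.5                # invert the assumed model

# The same displacement is recovered under very different environmental gains:
print(estimate(0.2, 1.00), estimate(0.2, 0.63))
```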

  19. Anticipating requirements changes-using futurology in requirements elicitation

    OpenAIRE

    Pimentel, João Henrique; Santos, Emanuel; Castro, Jaelson; Franch Gutiérrez, Javier

    2012-01-01

    It is well known that requirements changes in the later phases of software development are a major source of software defects and costs. Hence the need for techniques to control or reduce the amount of change during software development projects. The authors advocate the use of foresight methods as a valuable input to requirements elicitation, with the potential to decrease the number of changes that would be required after deployment by anticipating them. In this paper, the authors define a pr...

  20. The Paradox of "Structured" Methods for Software Requirements Management: A Case Study of an e-Government Development Project

    Science.gov (United States)

    Conboy, Kieran; Lang, Michael

    This chapter outlines the alternative perspectives of "rationalism" and "improvisation" within information systems development and describes the major shortcomings of each. It then discusses how these shortcomings manifested themselves within an e-government case study where a "structured" requirements management method was employed. Although this method was very prescriptive and firmly rooted in the "rational" paradigm, it was observed that users often resorted to improvised behaviour, such as privately making decisions on how certain aspects of the method should or should not be implemented.

  1. A Framework for the Application of Robust Design Methods and Tools

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Howard, Thomas J.

    2014-01-01

    can deliver are not always clear. Expectations to the output are sometimes misleading and imply the incorrect utilization of tools. A categorization of tools, methods and techniques typically associated with robust design methodology in the literature is provided in this paper in terms of purpose… and deliverables of the individual tool or method. The majority of tools aims for optimizing an existing design solution or gives an indication of how robust a design is, which requires a somewhat settled design. Furthermore, the categorization presented in this paper shows a lack in the methodology for tools… of the existing tools. When to apply what tool or method, for which purpose, can be concluded. The paper also contributes with a framework for researchers to derive a generic landscape or database for RDM built upon the main premises and deliverables of each method…

  2. Occurrence and risk assessment of four typical fluoroquinolone antibiotics in raw and treated sewage and in receiving waters in Hangzhou, China.

    Science.gov (United States)

    Tong, Changlun; Zhuo, Xiajun; Guo, Yun

    2011-07-13

    A sensitive liquid chromatography-fluorescence detection method, combined with one-step solid-phase extraction, was established for detecting the residual levels of four typical fluoroquinolone antibiotics (ofloxacin, norfloxacin, ciprofloxacin, and enrofloxacin) in influent, effluent, and surface waters from Hangzhou, China. For the various environmental water matrices, the overall recoveries were from 76.8 to 122%, and no obvious matrix-effect interferences were observed. The limit of quantitation of this method was estimated to be 17 ng/L for ciprofloxacin and norfloxacin, 20 ng/L for ofloxacin, and 27 ng/L for enrofloxacin. All four fluoroquinolone antibiotics were found in the wastewaters and surface waters. The residual contents in influent, effluent, and surface water samples were 108-1405, 54-429, and 7.0-51.6 ng/L, respectively. The removal rates of the selected fluoroquinolone antibiotics were 69.5% (ofloxacin), 61.3% (norfloxacin), and 50% (enrofloxacin), indicating that activated sludge treatment is effective, except for ciprofloxacin, and necessary to remove these fluoroquinolone antibiotics from municipal sewage. The risk to the aquatic environment was estimated by the ratio of the measured environmental concentration to the predicted no-effect concentration. At the concentrations found in influent, effluent, and surface waters, these fluoroquinolone antibiotics should not pose a risk to the aquatic environment.
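    The two quantities behind the abstract's conclusions can be reproduced with back-of-envelope arithmetic. The removal-rate inputs are the influent/effluent maxima quoted above; the PNEC value is a placeholder for illustration, not a figure from the paper:

```python
# Removal rate across the treatment plant, and the risk quotient
# RQ = MEC / PNEC (measured environmental concentration over predicted
# no-effect concentration). RQ >= 1 would indicate a potential risk.
# The PNEC below is an assumed placeholder value.

def removal_rate(influent_ng_l, effluent_ng_l):
    return 100.0 * (influent_ng_l - effluent_ng_l) / influent_ng_l

def risk_quotient(mec_ng_l, pnec_ng_l):
    return mec_ng_l / pnec_ng_l

print(round(removal_rate(1405, 429), 1))   # -> 69.5, the ofloxacin figure
print(risk_quotient(51.6, 1000) < 1)       # -> True: surface-water max vs assumed PNEC
```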

  3. Adsorption Properties of Typical Lung Cancer Breath Gases on Ni-SWCNTs through Density Functional Theory

    Directory of Open Access Journals (Sweden)

    Qianqian Wan

    2017-01-01

    A lot of useful information is contained in human breath, which makes detecting typical breath gases an effective way to diagnose diseases. This work investigated the adsorption of typical lung cancer breath gases (benzene, styrene, isoprene, and 1-hexene) onto the surface of intrinsic and Ni-doped single-wall carbon nanotubes through density functional theory. Calculation results show that the typical lung cancer breath gases adsorb on the intrinsic single-wall carbon nanotube surface by weak physisorption. Besides, the density of states changes little before and after adsorption. Compared with intrinsic single-wall carbon nanotubes, single Ni-atom doping significantly improves the adsorption properties towards typical lung cancer breath gases by decreasing the adsorption distance and increasing the adsorption energy and charge transfer. The density of states presents different degrees of variation during adsorption of the gases, resulting in specific changes in the conductivity of the gas-sensing material. Based on the different adsorption properties of Ni-SWCNTs towards typical lung cancer breath gases, this provides an effective way to build a portable noninvasive device to evaluate and diagnose lung cancer at an early stage.
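    The adsorption energy compared in such DFT studies is typically computed as E_ads = E(tube + gas) - E(tube) - E(gas). The total energies below are invented placeholders (in eV); only the bookkeeping is real:

```python
# Adsorption-energy bookkeeping: a more negative E_ads means stronger
# binding, as reported for Ni-doped versus intrinsic tubes. The DFT
# total energies here are invented placeholders, not the study's data.

def adsorption_energy(e_complex, e_surface, e_molecule):
    return e_complex - e_surface - e_molecule

intrinsic = adsorption_energy(-1000.35, -900.10, -100.05)  # weak physisorption
ni_doped  = adsorption_energy(-1001.10, -900.25, -100.05)  # stronger binding
print(round(intrinsic, 2), round(ni_doped, 2))   # -> -0.2 -0.8
print(ni_doped < intrinsic)                      # -> True: doping strengthens adsorption
```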

  4. Benchmark calculation programme concerning typical LMFBR structures

    International Nuclear Information System (INIS)

    Donea, J.; Ferrari, G.; Grossetie, J.C.; Terzaghi, A.

    1982-01-01

    This programme, which is part of a comprehensive activity aimed at resolving difficulties encountered in using design procedures based on ASME Code Case N-47, should allow to get confidence in computer codes which are supposed to provide a realistic prediction of the LMFBR component behaviour. The calculations started on static analysis of typical structures made of non linear materials stressed by cyclic loads. The fluid structure interaction analysis is also being considered. Reasons and details of the different benchmark calculations are described, results obtained are commented and future computational exercise indicated

  5. Group typicality, group loyalty and cognitive development.

    Science.gov (United States)

    Patterson, Meagan M

    2014-09-01

    Over the course of childhood, children's thinking about social groups changes in a variety of ways. Developmental Subjective Group Dynamics (DSGD) theory emphasizes children's understanding of the importance of conforming to group norms. Abrams et al.'s study, which uses DSGD theory as a framework, demonstrates the social cognitive skills underlying young elementary school children's thinking about group norms. Future research on children's thinking about groups and group norms should explore additional elements of this topic, including aspects of typicality beyond loyalty. © 2014 The British Psychological Society.

  6. Enhanced air dispersion modelling at a typical Chinese nuclear power plant site: Coupling RIMPUFF with two advanced diagnostic wind models.

    Science.gov (United States)

    Liu, Yun; Li, Hong; Sun, Sida; Fang, Sheng

    2017-09-01

    An enhanced air dispersion modelling scheme is proposed to cope with the building layout and complex terrain of a typical Chinese nuclear power plant (NPP) site. In this modelling, the California Meteorological Model (CALMET) and the Stationary Wind Fit and Turbulence (SWIFT) are coupled with the Risø Mesoscale PUFF model (RIMPUFF) for refined wind field calculation. The near-field diffusion coefficient correction scheme of the Atmospheric Relative Concentrations in the Building Wakes Computer Code (ARCON96) is adopted to characterize dispersion in building arrays. The proposed method is evaluated by a wind tunnel experiment that replicates the typical Chinese NPP site. For both wind speed/direction and air concentration, the enhanced modelling predictions agree well with the observations. The fraction of the predictions within a factor of 2 and 5 of observations exceeds 55% and 82% respectively in the building area and the complex terrain area. This demonstrates the feasibility of the new enhanced modelling for typical Chinese NPP sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
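    The evaluation statistic quoted in the abstract, the fraction of predictions within a factor of 2 or 5 of observations, is the standard FACn metric for dispersion models. A minimal sketch with made-up paired data:

```python
# FACn metric: fraction of prediction/observation pairs whose ratio lies
# in [1/n, n]. The paired values below are made up for illustration.

def fac_n(predicted, observed, n):
    pairs = [(p, o) for p, o in zip(predicted, observed) if o > 0]
    hits = sum(1 for p, o in pairs if 1.0 / n <= p / o <= n)
    return hits / len(pairs)

pred = [1.0, 2.5, 0.4, 9.0, 0.05]
obs  = [1.2, 1.0, 1.0, 1.5, 1.0]

print(fac_n(pred, obs, 2))   # -> 0.2, within a factor of 2
print(fac_n(pred, obs, 5))   # -> 0.6, within a factor of 5
```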

  7. Robust design requirements specification: a quantitative method for requirements development using quality loss functions

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard; Christensen, Martin Ebro; Howard, Thomas J.

    2016-01-01

    Product requirements serve many purposes in the product development process. Most importantly, they are meant to capture and facilitate product goals and acceptance criteria, as defined by stakeholders. Accurately communicating stakeholder goals and acceptance criteria can be challenging and more...

  8. Physico-chemical properties and fertility status of some typic ...

    African Journals Online (AJOL)

    Physico-chemical properties and fertility status of some typic plinthaquults in Bauchi Local Government Area of Bauchi State, Nigeria. S Mustapha. Abstract: No abstract. IJOTAFS Vol. 1 (2) 2007: pp. 120-124.

  9. The influence of thematic congruency, typicality and divided attention on memory for radio advertisements.

    Science.gov (United States)

    Martín-Luengo, Beatriz; Luna, Karlos; Migueles, Malen

    2014-01-01

    We examined the effects of the thematic congruence between ads and the programme in which they are embedded. We also studied the typicality of the to-be-remembered information (high- and low-typicality elements), and the effect of divided attention in the memory for radio ad contents. Participants listened to four radio programmes with thematically congruent and incongruent ads embedded, and completed a true/false recognition test indicating the level of confidence in their answer. Half of the sample performed an additional task (divided attention group) while listening to the radio excerpts. In general, recognition memory was better for incongruent ads and low-typicality statements. Confidence in hits was higher in the undivided attention group, although there were no differences in performance. Our results suggest that the widespread idea of embedding ads into thematic-congruent programmes negatively affects memory for ads. In addition, low-typicality features that are usually highlighted by advertisers were better remembered than typical contents. Finally, metamemory evaluations were influenced by the inference that memory should be worse if we do several things at the same time.

  10. Risk assessment of influence factors on occupational hearing loss in noise – exposed workers in typical metal industry

    OpenAIRE

    Farhadian Maryam; Aliabadi Mohsen; Shahidi Reza

    2014-01-01

    Background & Objectives: Worker exposure conditions such as noise level, exposure duration, use of hearing protection devices, and health behaviors are commonly related to noise-induced hearing loss. The objective of this study was risk assessment of the factors influencing occupational hearing loss in noise-exposed workers in a typical noisy process. Methods: Information about the occupational exposure of seventy workers employed in a noisy press workshop was gathered using the standard quest...

  11. Emotion, gender, and gender typical identity in autobiographical memory.

    Science.gov (United States)

    Grysman, Azriel; Merrill, Natalie; Fivush, Robyn

    2017-03-01

    Gender differences in the emotional intensity and content of autobiographical memory (AM) are inconsistent across studies, and may be influenced as much by gender identity as by categorical gender. To explore this question, data were collected from 196 participants (age 18-40), split evenly between men and women. Participants narrated four memories, a neutral event, high point event, low point event, and self-defining memory, completed ratings of emotional intensity for each event, and completed four measures of gender typical identity. For self-reported emotional intensity, gender differences in AM were mediated by identification with stereotypical feminine gender norms. For narrative use of affect terms, both gender and gender typical identity predicted affective expression. The results confirm contextual models of gender identity (e.g., Diamond, 2012 . The desire disorder in research on sexual orientation in women: Contributions of dynamical systems theory. Archives of Sexual Behavior, 41, 73-83) and underscore the dynamic interplay between gender and gender identity in the emotional expression of autobiographical memories.

  12. Typical Vine or International Taste: Wine Consumers' Dilemma Between Beliefs and Preferences.

    Science.gov (United States)

    Scozzafava, Gabriele; Boncinelli, Fabio; Contini, Caterina; Romano, Caterina; Gerini, Francesca; Casini, Leonardo

    2016-01-01

    The wine-growing sector is probably one of the agricultural areas where the ties between product quality and territory are most evident. Geographical indication is a key element in this context, and previous literature has focused on demonstrating how certification of origin influences the wine purchaser's behavior. However, less attention has been devoted to understanding how the value of a given name of origin may or may not be determined by the various elements that characterize the typicality of the wine product on that territory: vines, production techniques, etc. It thus seems interesting, in this framework, to evaluate the impacts of several characteristic attributes on the preferences of consumers. This paper will analyze, in particular, the role of the presence of autochthonous vines in consumers' choices. The connection between name of origin and autochthonous vines appears to be particularly important in achieving product "recognisability", while introducing "international" vines in considerable measure into blends might result in the loss of the peculiarity of certain characteristic and typical local productions. A standardization of taste could thus risk compromising the reputation of traditional production areas. The objective of this study is to estimate, through an experimental auction on the case study of Chianti, the differences in willingness to pay for wines produced with different shares of typical vines. The results show that consumers have a willingness to pay for wine produced with typical blends 34% greater than for wines with international blends. However, this difference is not confirmed by blind tasting, raising the issue of the relationship between ex-ante expectations about vine typicality and real wine sensorial characteristics. Finally, some recent patents related to wine testing and wine packaging are reviewed.

  13. Matrix method for acoustic levitation simulation.

    Science.gov (United States)

    Andrade, Marco A B; Perez, Nicolas; Buiochi, Flavio; Adamowski, Julio C

    2011-08-01

    A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
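    The matrix method itself (a Rayleigh-integral computation with multiple reflections) is beyond a short sketch, but the control principle in the closing sentences, that the phase difference between the two transducers sets the horizontal particle position, can be illustrated with an idealized two-source standing wave: a relative phase shift phi moves every pressure node by -phi*lambda/(4*pi). This 1D picture is our simplification, not the paper's model:

```python
import math

# Idealised 1D picture of phase control: two counter-propagating waves of
# wavelength lam form a standing wave; shifting their relative phase phi
# moves every node by -phi * lam / (4*pi). A levitated particle sits near
# a node, so the phase difference sets its position.

def node_shift(phi, lam):
    return -phi * lam / (4 * math.pi)

lam = 343.0 / 37_900                 # wavelength in air at the paper's 37.9 kHz
shift = node_shift(math.pi, lam)     # 180 degree phase change
print(round(1000 * shift, 3))        # -> -2.263 (mm), a quarter wavelength
```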

  14. One for all: The effect of extinction stimulus typicality on return of fear.

    Science.gov (United States)

    Scheveneels, Sara; Boddez, Yannick; Bennett, Marc Patrick; Hermans, Dirk

    2017-12-01

    During exposure therapy, patients are encouraged to approach the feared stimulus, so they can experience that this stimulus is not followed by the anticipated aversive outcome. However, patients might treat the absence of the aversive outcome as an 'exception to the rule'. This could hamper the generalization of fear reduction when the patient is confronted with similar stimuli not used in therapy. We examined the effect of providing information about the typicality of the extinction stimulus on the generalization of extinction to a new but similar stimulus. In a differential fear conditioning procedure, an animal-like figure was paired with a brief electric shock to the wrist. In a subsequent extinction phase, a different but perceptually similar animal-like figure was presented without the shock. Before testing the generalization of extinction with a third animal-like figure, participants were either instructed that the extinction stimulus was a typical or an atypical member of the animal family. The typicality instruction effectively impacted the generalization of extinction; the third animal-like figure elicited lower shock expectancies in the typical relative to the atypical group. Skin conductance data mirrored these results, but did not reach significance. These findings suggest that verbal information about stimulus typicality can be a promising adjunctive to standard exposure treatments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. The impact of conventional dietary intake data coding methods on foods typically consumed by low-income African-American and White urban populations.

    Science.gov (United States)

    Mason, Marc A; Fanelli Kuczmarski, Marie; Allegro, Deanne; Zonderman, Alan B; Evans, Michele K

    2015-08-01

    Analysing dietary data to capture how individuals typically consume foods is dependent on the coding variables used. Individual foods consumed simultaneously, like coffee with milk, are given codes to identify these combinations. Our literature review revealed a lack of discussion about using combination codes in analysis. The present study identified foods consumed at mealtimes and by race when combination codes were or were not utilized. Duplicate analysis methods were performed on separate data sets. The original data set consisted of all foods reported; each food was coded as if it was consumed individually. The revised data set was derived from the original data set by first isolating coded foods consumed as individual items from those foods consumed simultaneously and assigning a code to designate a combination. Foods assigned a combination code, like pancakes with syrup, were aggregated and associated with a food group, defined by the major food component (i.e. pancakes), and then appended to the isolated coded foods. Healthy Aging in Neighborhoods of Diversity across the Life Span study. African-American and White adults with two dietary recalls (n 2177). Differences existed in lists of foods most frequently consumed by mealtime and race when comparing results based on original and revised data sets. African Americans reported consumption of sausage/luncheon meat and poultry, while ready-to-eat cereals and cakes/doughnuts/pastries were reported by Whites on recalls. Use of combination codes provided more accurate representation of how foods were consumed by populations. This information is beneficial when creating interventions and exploring diet-health relationships.

  16. Photonic arbitrary waveform generator based on Taylor synthesis method

    DEFF Research Database (Denmark)

    Liao, Shasha; Ding, Yunhong; Dong, Jianji

    2016-01-01

    Arbitrary waveform generation has been widely used in optical communication, radar systems and many other applications. We propose and experimentally demonstrate a silicon-on-insulator (SOI) on-chip optical arbitrary waveform generator based on the Taylor synthesis method. In our scheme, a Gaussian pulse is launched into cascaded microrings to obtain first-, second- and third-order differentiations. By controlling the amplitude and phase of the initial pulse and its successive differentiations, we can realize an arbitrary waveform generator according to the Taylor expansion. We obtain several typical waveforms such as square, triangular, flat-top, sawtooth and Gaussian waveforms. Unlike other schemes based on Fourier synthesis or frequency-to-time mapping, our scheme is based on the Taylor synthesis method. Our scheme does not require any spectral disperser or large…
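    The Taylor-synthesis principle can be checked numerically: a weighted sum of a pulse and its successive derivatives reproduces a time-shifted copy of the pulse, since g(t + tau) = sum_n tau^n/n! g^(n)(t). Below, finite-difference derivatives stand in for the first- to third-order photonic differentiators mentioned in the abstract; this is a numerical illustration, not the chip's implementation:

```python
import math

# Taylor synthesis sketch: combine a Gaussian pulse and its first three
# derivatives with Taylor weights tau^n/n! to approximate the pulse
# shifted by tau. Derivatives are taken by central finite differences.

def gaussian(t, sigma=1.0):
    return math.exp(-t * t / (2 * sigma * sigma))

def derivative(f, t, h=1e-4):
    return (f(t + h) - f(t - h)) / (2 * h)

def taylor_shifted(t, tau):
    g0 = gaussian(t)
    g1 = derivative(gaussian, t)
    g2 = derivative(lambda x: derivative(gaussian, x), t)
    g3 = derivative(lambda x: derivative(lambda y: derivative(gaussian, y), x), t)
    return g0 + tau * g1 + tau**2 / 2 * g2 + tau**3 / 6 * g3

tau, t = 0.3, 0.5
print(round(taylor_shifted(t, tau), 4))   # -> 0.7258, third-order synthesis
print(round(gaussian(t + tau), 4))        # -> 0.7261, exact shifted pulse
```

    With only three differentiation orders the small-shift approximation is already within about 0.05% of the exact shifted pulse.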

  17. Characteristics of typical Pierce guns for PPM focused TWTs

    International Nuclear Information System (INIS)

    Harper, R.; Puri, M.P.

    1989-01-01

    The performance of typical moderate-perveance Pierce-type electron guns used in periodic-permanent-magnet-focused traveling wave tubes is described with regard to adaptation for use in electron beam ion sources. The results of detailed electron trajectory computations for one particular gun design are presented.

  18. A method for reduction of Acoustic Emission (AE) data with application in machine failure detection and diagnosis

    Science.gov (United States)

    Vicuña, Cristián Molina; Höweler, Christoph

    2017-12-01

    The use of AE in machine failure diagnosis has increased over the last years. Most AE-based failure diagnosis strategies use digital signal processing and thus require the sampling of AE signals. High sampling rates are required for this purpose (e.g. 2 MHz or higher), leading to streams of large amounts of data. This situation is aggravated if fine resolution and/or multiple sensors are required. These facts combine to produce bulky data, typically in the range of GBytes, for which sufficient storage space and efficient signal processing algorithms are required. This situation probably explains why, in practice, AE-based methods consist mostly of the calculation of scalar quantities such as RMS and Kurtosis, and the analysis of their evolution in time. While the scalar-based approach offers the advantage of maximum data reduction, it has the disadvantage that most of the information contained in the raw AE signal is lost unrecoverably. This work presents a method offering large data reduction while keeping the most important information conveyed by the raw AE signal, useful for failure detection and diagnosis. The proposed method consists of the construction of a synthetic, unevenly sampled signal which envelopes the AE bursts present in the raw AE signal in a triangular shape. The constructed signal, which we call TriSignal, also permits the estimation of most scalar quantities typically used for failure detection. But more importantly, it contains the times of occurrence of the bursts, which is key for failure diagnosis. The Lomb-Scargle normalized periodogram is used to construct the TriSignal spectrum, which reveals the frequency content of the TriSignal and provides the same information as the classic AE envelope. The paper includes application examples on a planetary gearbox and a low-speed rolling element bearing.

  19. Unified Description of the Mechanical Properties of Typical Marine Soil and Its Application

    Directory of Open Access Journals (Sweden)

    Yongqiang Li

    2017-01-01

    This study employed a modified elastoplastic constitutive model that can systematically describe the monotonic and cyclic mechanical behaviors of typical marine soils, combining the subloading, normal, and superloading yield surfaces, in the seismic response analysis of a three-dimensional (3D) marine site. New evolution equations for stress-induced anisotropy development and the change in the overconsolidation of soils were proposed. This model can describe the unified behaviour of unsaturated and saturated soil using independent state variables and can uniquely describe the multiple mechanical properties of soils under general stress states, without changing the parameter values, using the transform stress method. An effective stress-based, fully coupled, explicit finite element–finite difference method was established based on this model and three-phase field theory. A finite deformation analysis was presented by introducing the Green-Naghdi rate tensor. The simulation and analysis indicated that the proposed method was sufficient for simulating the seismic disaster process of 3D marine sites. The results suggested that the ground motion intensity would increase due to the locally uneven complex topography and site effect, and also provided the temporal and spatial distribution of landslide and collapse at specific locations of the marine site.

  20. "Hunting with a knife and … fork": Examining central coherence in autism, attention deficit/hyperactivity disorder, and typical development with a linguistic task

    OpenAIRE

    Booth, Rhonda; Happé, Francesca

    2010-01-01

    A local processing bias, referred to as "weak central coherence," has been postulated to underlie key aspects of autism spectrum disorder (ASD). Little research has examined whether individual differences in this cognitive style can be found in typical development, independent of intelligence, and how local processing relates to executive control. We present a brief and easy-to-administer test of coherence requiring global sentence completions. We report results from three studies assessing (...

  1. Gestures in Prelinguistic Turkish Children with Autism, Down Syndrome, and Typically Developing Children

    Science.gov (United States)

    Toret, Gokhan; Acarlar, Funda

    2011-01-01

    The purpose of this study was to examine gesture use in Turkish children with autism, Down syndrome, and typically developing children. Participants included 30 children in three groups: Ten children with Down syndrome, ten children with autism between 24-60 months of age, and ten typically developing children between 12-18 months of age.…

  2. Face-to-Face Interference in Typical and Atypical Development

    Science.gov (United States)

    Riby, Deborah M.; Doherty-Sneddon, Gwyneth; Whittle, Lisa

    2012-01-01

    Visual communication cues facilitate interpersonal communication. It is important that we look at faces to retrieve and subsequently process such cues. It is also important that we sometimes look away from faces as they increase cognitive load that may interfere with online processing. Indeed, when typically developing individuals hold face gaze…

  3. Waste minimization in analytical methods

    International Nuclear Information System (INIS)

    Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.

    1995-01-01

    The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision

  4. Value-Based Requirements Traceability: Lessons Learned

    Science.gov (United States)

    Egyed, Alexander; Grünbacher, Paul; Heindl, Matthias; Biffl, Stefan

    Traceability from requirements to code is mandated by numerous software development standards. These standards, however, are not explicit about the appropriate level of quality of trace links. From a technical perspective, trace quality should meet the needs of the intended trace utilizations. Unfortunately, long-term trace utilizations are typically unknown at the time of trace acquisition which represents a dilemma for many companies. This chapter suggests ways to balance the cost and benefits of requirements traceability. We present data from three case studies demonstrating that trace acquisition requires broad coverage but can tolerate imprecision. With this trade-off our lessons learned suggest a traceability strategy that (1) provides trace links more quickly, (2) refines trace links according to user-defined value considerations, and (3) supports the later refinement of trace links in case the initial value consideration has changed over time. The scope of our work considers the entire life cycle of traceability instead of just the creation of trace links.

  5. 'ISL pattern reserve requirements for today's spot price,' or 'how many in-place pounds are needed for a mining pattern to be profitable in today's market'

    International Nuclear Information System (INIS)

    Anthony, H.L.

    2000-01-01

    Recent uranium spot market values place additional burdens on the geologist and project manager to identify mineralized ore that will yield a profitable return on investment to the mining venture and its investors. The author reviews the various cost components that comprise the total work effort required to produce uranium via ISL methods, to arrive at a suitable ore grade that will guarantee profitability. Amortization of costs based on recent expenditures for typical ISL operations is used in conjunction with wellfield development, operating and restoration costs to determine the ore value required to show a positive return on investment. (author)
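    The profitability test the author describes reduces to comparing recoverable pounds times spot price against amortised pattern costs. All dollar figures below are invented placeholders, not the author's data:

```python
# Back-of-envelope ISL pattern economics: a pattern is profitable when
# recoverable pounds times spot price exceed the amortised wellfield,
# operating and restoration costs. All figures are assumed placeholders.

def pattern_economics(costs_per_pattern, recovery, lbs_in_place, spot_price):
    """Return (revenue, profitable?) for one ISL mining pattern."""
    revenue = lbs_in_place * recovery * spot_price
    return revenue, revenue > costs_per_pattern

costs = 250_000 + 150_000 + 50_000   # wellfield + operating + restoration (assumed)
revenue, ok = pattern_economics(costs, recovery=0.7, lbs_in_place=80_000,
                                spot_price=10.0)
print(revenue, ok)   # -> 560000.0 True
```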

  6. Comparing service use and costs among adolescents with autism spectrum disorders, special needs and typical development.

    Science.gov (United States)

    Barrett, Barbara; Mosweu, Iris; Jones, Catherine Rg; Charman, Tony; Baird, Gillian; Simonoff, Emily; Pickles, Andrew; Happé, Francesca; Byford, Sarah

    2015-07-01

    Autism spectrum disorder is a complex condition that requires specialised care. Knowledge of the costs of autism spectrum disorder, especially in comparison with other conditions, may be useful to galvanise policymakers and leverage investment in education and intervention to mitigate aspects of autism spectrum disorder that negatively impact individuals with the disorder and their families. This article describes the services and associated costs for four groups of individuals: adolescents with autistic disorder, adolescents with other autism spectrum disorders, adolescents with other special educational needs and typically developing adolescents using data from a large, well-characterised cohort assessed as part of the UK Special Needs and Autism Project at the age of 12 years. Average total costs per participant over 6 months were highest in the autistic disorder group (£11,029), followed by the special educational needs group (£9268), the broader autism spectrum disorder group (£8968) and the typically developing group (£2954). Specialised day or residential schooling accounted for the vast majority of costs. In regression analysis, lower age and lower adaptive functioning were associated with higher costs in the groups with an autism spectrum disorder. Sex, ethnicity, number of International Classification of Diseases (10th revision) symptoms, autism spectrum disorder symptom scores and levels of mental health difficulties were not associated with cost. © The Author(s) 2014.

  7. Water requirements of the iron and steel industry

    Science.gov (United States)

    Walling, Faulkner B.; Otts, Louis Ethelbert

    1967-01-01

    concentrate. Water use in concentration plants is related to the physical state of the ore. The data in this report indicate that grain size of the ore is the most important factor; the very fine grained taconite and jasper required the greatest amount of water. Reuse was not widely practiced in the iron ore industry.Consumption of water by integrated steel plants ranged from 0 to 2,010 gallons per ton of ingot steel and by steel processing plants from 120 to 3,420 gallons per ton. Consumption by a typical integrated steel plant was 681 gallons per ton of ingot steel, about 1.8 percent of the intake and about 1 percent of the gross water use. Consumption by a typical steel processing plant was 646 gallons per ton, 18 percent of the intake, and 3.2 percent of the gross water use. The quality of available water was found not to be a critical factor in choosing the location of steel plants, although changes in equipment and in operating procedures are necessary when poor-quality water is used. The use of saline water having a concentration of dissolved solids as much as 10,400 ppm (parts per million) was reported. This very saline water was used for cooling furnaces and for quenching slag. In operations such as rolling steel in which the water comes into contact with the steel being processed, better quality water is used, although water containing as much as 3,410 ppm dissolved solids has been used for this purpose. Treatment of water for use in the iron and steel industry was not widely practiced. Disinfection and treatment for scale and corrosion control were the most frequently used treatment methods.

  8. A simple headspace equilibration method for measuring dissolved methane

    Science.gov (United States)

    Magen, C; Lapham, L.L.; Pohlman, John W.; Marshall, Kristin N.; Bosman, S.; Casso, Michael; Chanton, J.P.

    2014-01-01

    Dissolved methane concentrations in the ocean are close to equilibrium with the atmosphere. Because methane is only sparingly soluble in seawater, measuring it without contamination is challenging for samples collected and processed in the presence of air. Several methods for analyzing dissolved methane are described in the literature, yet none has included a thorough assessment of method yield, contamination issues during collection, transport and storage, or the effects of temperature changes and preservatives. Previous extraction methods transfer methane from water to gas by either a "sparge and trap" or a "headspace equilibration" technique. The gas is then analyzed for methane by gas chromatography. Here, we revisit the headspace equilibration technique and describe a simple, inexpensive, and reliable method to measure methane in fresh and seawater, regardless of concentration. Within the range of concentrations typically found in surface seawaters (2-1000 nmol L-1), the yield of the method nears 100% of what is expected from solubility calculations following the addition of a known amount of methane. In addition to being sensitive (detection limit of 0.1 ppmv, or 0.74 nmol L-1), this method requires less than 10 min per sample and does not use highly toxic chemicals. It can be conducted with minimal materials and does not require the use of a gas chromatograph at the collection site. It can therefore be used in various remote working environments and conditions.
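The yield claim above rests on a simple two-phase mass balance. A minimal sketch, assuming an illustrative dimensionless gas/water partition coefficient of roughly 30 for methane near room temperature (not a value taken from the paper), shows why a single headspace equilibration moves most of the methane into the gas phase:

```python
def headspace_partition(c_water_nmol_l, v_water_l, v_gas_l, k_gw=30.0):
    """Return (C_gas, C_water) in nmol/L after equilibration.

    k_gw is the dimensionless gas/water partition coefficient
    C_gas / C_water; ~30 is a rough room-temperature value for
    methane, used here only for illustration.
    """
    total = c_water_nmol_l * v_water_l           # nmol of methane in the vial
    # Conservation: total = C_w * V_w + (k_gw * C_w) * V_g
    c_w = total / (v_water_l + k_gw * v_gas_l)
    return k_gw * c_w, c_w

# 100 mL of water at 10 nmol/L equilibrated against a 40 mL headspace:
c_gas, c_w = headspace_partition(10.0, 0.10, 0.04)
fraction_in_gas = c_gas * 0.04 / (10.0 * 0.10)
```

With these illustrative volumes, over 90% of the methane ends up in the headspace, which is why the measured yield can approach the solubility-based expectation.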

  9. Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows

    Science.gov (United States)

    Chen, Z.; Shu, C.; Tan, D.

    2018-05-01

    An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires lower virtual memories, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.

  10. Application of nuclear analytical methods to heavy metal pollution studies of estuaries

    International Nuclear Information System (INIS)

    Anders, B.; Junge, W.; Knoth, J.; Michaelis, W.; Pepelnik, R.; Schwenke, H.

    1984-01-01

    Important objectives of heavy metal pollution studies of estuaries are the understanding of the transport phenomena in these complex ecosystems and the discovery of the pollution history and the geochemical background. Such studies require high precision and accuracy of the analytical methods. Moreover, pronounced spatial heterogeneities and temporal variabilities that are typical for estuaries necessitate the analysis of a great number of samples if relevant results are to be obtained. Both requirements can economically be fulfilled by a proper combination of analytical methods. Applications of energy-dispersive X-ray fluorescence analysis with total reflection of the exciting beam at the sample support and of neutron activation analysis with both thermal and fast neutrons are reported in the light of pollution studies performed in the Lower Elbe River. Profiles are presented for the total heavy metal content determined from particulate matter and sediment. They include V, Mn, Fe, Ni, Cu, Zn, As, Pb, and Cd. 16 references, 10 figures, 1 table.

  11. Attention and Word Learning in Autistic, Language Delayed and Typically Developing Children

    Directory of Open Access Journals (Sweden)

    Elena eTenenbaum

    2014-05-01

    Previous work has demonstrated that patterns of social attention hold predictive value for language development in typically developing infants. The goal of this research was to explore how patterns of attention in autistic, language delayed, and typically developing children relate to early word learning and language abilities. We tracked patterns of eye movements to faces and objects while children watched videos of a woman teaching them a series of new words. Subsequent test trials measured participants’ recognition of these novel word-object pairings. Results indicated that greater attention to the speaker’s mouth was related to higher scores on standardized measures of language development for autistic and typically developing children (but not for language delayed children). This effect was mediated by age for typically developing, but not autistic, children. When effects of age were controlled for, attention to the mouth among language delayed participants was negatively correlated with standardized measures of language learning. Attention to the speaker’s mouth and eyes while she was teaching the new words was also predictive of faster recognition of the newly learned words among autistic children. These results suggest that language delays among children with autism may be driven in part by aberrant social attention, and that the mechanisms underlying these delays may differ from those in language delayed participants without autism.

  12. Benchmarking of Typical Meteorological Year datasets dedicated to Concentrated-PV systems

    Science.gov (United States)

    Realpe, Ana Maria; Vernay, Christophe; Pitaval, Sébastien; Blanc, Philippe; Wald, Lucien; Lenoir, Camille

    2016-04-01

    Accurate analysis of meteorological and pyranometric data for long-term analysis is the basis of decision-making for banks and investors, regarding solar energy conversion systems. This has led to the development of methodologies for the generation of Typical Meteorological Year (TMY) datasets. The most used method for solar energy conversion systems was proposed in 1978 by the Sandia Laboratory (Hall et al., 1978), considering a specific weighted combination of different meteorological variables, notably global and diffuse horizontal and direct normal irradiances, air temperature, wind speed and relative humidity. In 2012, a new approach was proposed in the framework of the European project FP7 ENDORSE. It introduced the concept of "driver", defined by the user as an explicit function of the relevant pyranometric and meteorological variables, to improve the representativeness of the TMY datasets with respect to the specific solar energy conversion system of interest. The present study aims at comparing and benchmarking different TMY datasets considering a specific Concentrated-PV (CPV) system as the solar energy conversion system of interest. Using long-term (15+ years) time series of high-quality meteorological and pyranometric ground measurements, three types of TMY datasets were generated: one by the Sandia method, one by a simplified driver with DNI as the only representative variable, and one by a more sophisticated driver. The latter takes into account the sensitivities of the CPV system with respect to the spectral distribution of the solar irradiance and wind speed. Different TMY datasets from the three methods have been generated considering different numbers of years in the historical dataset, ranging from 5 to 15 years. The comparisons and benchmarking of these TMY datasets are conducted considering the long-term time series of simulated CPV electric production as a reference. The results of this benchmarking clearly show that the Sandia method is not
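Sandia-style TMY selection is built on Finkelstein-Schafer (FS) statistics: for each candidate month, the empirical CDF of each weighted variable is compared with the long-term CDF for that calendar month, and months with the smallest weighted FS scores are selected. A minimal sketch of the FS statistic for a single variable (the data values below are invented for illustration):

```python
def empirical_cdf(sample, points):
    """Fraction of sample values <= p, for each evaluation point p."""
    s = sorted(sample)
    return [sum(1 for v in s if v <= p) / len(s) for p in points]

def fs_statistic(candidate, long_term):
    """Mean absolute difference between the candidate month's CDF and
    the long-term CDF, evaluated at the long-term data points."""
    pts = sorted(long_term)
    diffs = zip(empirical_cdf(candidate, pts), empirical_cdf(long_term, pts))
    return sum(abs(a - b) for a, b in diffs) / len(pts)

long_term = [3.1, 4.0, 4.8, 5.2, 5.9, 6.4]   # invented daily DNI values
identical = fs_statistic(long_term, long_term)            # a perfect match
shifted = fs_statistic([v + 1.0 for v in long_term], long_term)
```

A candidate month identical to the long-term record scores 0; a systematically shifted month scores higher, so it would be penalized in the weighted selection.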

  13. Standalone Photovoltaic System Sizing using Peak Sun Hour Method and Evaluation by TRNSYS Simulation

    OpenAIRE

    Riza, Dimas Firmanda Al; Gilani, Syed Ihtshamul-Haq

    2016-01-01

    This paper presents the sizing and evaluation of a standalone photovoltaic system for a residential load. The Peak Sun Hour method is used to determine photovoltaic panel and battery capacity; the sizing results are then tested and evaluated using an hourly time-step transient simulation model in TRNSYS 16.0. The results show that for a typical Malaysian terraced house with about 6 kWh of daily electricity load, the photovoltaic system requirement consists of a 1.9 kWp photovoltaic panel and a 2200 Ah battery...
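Peak Sun Hour sizing amounts to simple energy arithmetic: array capacity is the daily load divided by the available peak sun hours and a derating factor, and battery capacity follows from the required days of autonomy. The sketch below is hedged: the PSH value, derating factor, autonomy days, system voltage and depth of discharge are illustrative assumptions, not figures from the paper.

```python
def size_pv_array_kwp(daily_load_kwh, peak_sun_hours, system_efficiency=0.7):
    """Array size (kWp) = daily energy / (PSH * derating factor)."""
    return daily_load_kwh / (peak_sun_hours * system_efficiency)

def size_battery_ah(daily_load_kwh, autonomy_days=3, system_voltage=12,
                    depth_of_discharge=0.5):
    """Battery capacity (Ah) covering the load for the autonomy period
    without exceeding the allowed depth of discharge."""
    wh_needed = daily_load_kwh * 1000 * autonomy_days
    return wh_needed / (system_voltage * depth_of_discharge)

# 6 kWh/day load (as in the abstract), assuming ~4.5 PSH:
panel_kwp = size_pv_array_kwp(6.0, 4.5)
battery_ah = size_battery_ah(6.0)
```

With these assumed inputs the array size lands near the paper's 1.9 kWp figure; the battery result differs from the paper's 2200 Ah because the autonomy and discharge assumptions here are invented.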

  14. 30 CFR 48.3 - Training plans; time of submission; where filed; information required; time for approval; method...

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Training plans; time of submission; where filed....3 Training plans; time of submission; where filed; information required; time for approval; method... training plan shall be filed with the District Manager for the area in which the mine is located. (c) Each...

  15. A pattern recognition methodology for evaluation of load profiles and typical days of large electricity customers

    International Nuclear Information System (INIS)

    Tsekouras, G.J.; Kotoulas, P.B.; Tsirekis, C.D.; Dialynas, E.N.; Hatziargyriou, N.D.

    2008-01-01

    This paper describes a pattern recognition methodology for the classification of the daily chronological load curves of each large electricity customer, in order to estimate his typical days and his respective representative daily load profiles. It is based on pattern recognition methods, such as k-means, self-organized maps (SOM), fuzzy k-means and hierarchical clustering, which are theoretically described and properly adapted. The parameters of each clustering method are properly selected by an optimization process, which is separately applied for each one of six adequacy measures. The results can be used for the short-term and mid-term load forecasting of each consumer, for the choice of the proper tariffs and the feasibility studies of demand side management programs. This methodology is analytically applied for one medium voltage industrial customer and synoptically for a set of medium voltage customers of the Greek power system. The results of the clustering methods are presented and discussed. (author)
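The clustering step at the core of such a methodology can be illustrated with a plain k-means over daily chronological load curves. The sketch below uses synthetic 24-hour curves and a stdlib-only k-means; the paper's adequacy measures, parameter optimization, and the SOM/fuzzy/hierarchical alternatives are omitted.

```python
import math
import random

def kmeans(curves, k, iters=50):
    """Plain k-means over daily load curves (equal-length lists of
    hourly demand values); centers start from evenly spaced curves."""
    centers = [list(curves[i * len(curves) // k]) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for c in curves:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2
                                      for a, b in zip(c, centers[i])))
            clusters[j].append(c)
        centers = [[sum(v) / len(m) for v in zip(*m)] if m else centers[i]
                   for i, m in enumerate(clusters)]
    return centers, clusters

rng = random.Random(1)
# Synthetic "typical day" shapes: 20 weekdays with a midday peak and
# 20 flat weekend days, each with small random noise.
weekdays = [[5 + 10 * math.exp(-(h - 12) ** 2 / 8) + rng.random()
             for h in range(24)] for _ in range(20)]
weekends = [[6 + rng.random() for h in range(24)] for _ in range(20)]

centers, clusters = kmeans(weekdays + weekends, k=2)
sizes = sorted(len(m) for m in clusters)
```

The two resulting cluster centers are the "representative daily load profiles" in the paper's terminology, one per typical day type.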

  16. The Monte Carlo method the method of statistical trials

    CERN Document Server

    Shreider, YuA

    1966-01-01

    The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, problems in neutron physics, and the investigation of servicing processes. The volume comprises seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio
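The first of those applications, computing a definite integral, is easy to sketch: sample the integrand at uniformly random points and scale the sample mean by the interval width. A minimal illustration (not from the book itself):

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=42):
    """Estimate the integral of f on [a, b] as (b - a) times the mean
    of f over n uniformly sampled points (the method of statistical
    trials in its simplest form)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(rng.uniform(a, b))
    return (b - a) * total / n

# Integral of x^2 on [0, 1]; the exact value is 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
```

The error of such an estimate shrinks like 1/sqrt(n), independently of dimension, which is why the method scales to the multi-dimensional integrals discussed next.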

  17. A case of typical pulmonary carcinoid tumor treated with bronchoscopic therapy followed by lobectomy

    Directory of Open Access Journals (Sweden)

    Porpodis K

    2012-02-01

    Konstantinos Porpodis(1), Michael Karanikas(2), Paul Zarogoulidis(1), Theodoros Kontakiotis(1), Alexandros Mitrakas(2), Agisilaos Esebidis(2), Maria Konoglou(3), Kalliopi Domvri(1), Alkis Iordanidis(4), Nikolaos Katsikogiannis(5), Nikolaos Courcoutsakis(4), Konstantinos Zarogoulidis(1). (1) Pulmonary Department, "G Papanikolaou" General Hospital, Aristotle University of Thessaloniki, Greece; (2) 1st University Surgery Department, University General Hospital of Alexandroupolis, Democritus University of Thrace, Greece; (3) 1st Pulmonary Department, "G Papanikolaou" General Hospital, Thessaloniki, Greece; (4) Radiology Department, University General Hospital of Alexandroupolis, Democritus University of Thrace, Greece; (5) Surgery Department (NHS), University General Hospital of Alexandroupolis, Greece. Abstract: Carcinoid bronchopulmonary tumors represent approximately 25% of all carcinoid tumors and 1%–2% of all lung neoplasms. The most common symptoms are persistent cough, asthma-like wheezing, chest pain, dyspnea, hemoptysis and obstructive pneumonitis. We present a case of a young adult diagnosed with a typical carcinoid tumor. The diagnosis was established on the basis of imaging examination and bronchoscopic biopsy. The patient was treated with bronchoscopic electrocautery therapy to relieve the obstructed airway, followed by surgical lobectomy in order to entirely remove the exophytic lesion. This approach was not only a palliative management of the bronchial obstruction but also avoided pneumonectomy. Recent studies support the use of such interventional resection methods, as they may result in a more conservative surgical resection. Keywords: carcinoid tumor, typical lung carcinoid, therapeutic bronchoscopy, surgical resection

  18. Non-linear thermal and structural analysis of a typical spent fuel silo

    International Nuclear Information System (INIS)

    Alvarez, L.M.; Mancini, G.R.; Spina, O.A.F.; Sala, G.; Paglia, F.

    1993-01-01

    A numerical method for the non-linear structural analysis of a typical reinforced concrete spent fuel silo under thermal loads is proposed. The numerical time integration was performed by means of a time-explicit axisymmetric finite-difference numerical operator. The influence of heat, viscoelasticity and cracking on the behaviour of the concrete was analysed for the period between the concrete pouring stage and the first period of the silo's normal operation. The following parameters were considered for the heat generation and transmission process: heat generated during the concrete's hardening stage, solar radiation effects, natural convection, and spent-fuel heat generation. For the modelling of the reinforced concrete behaviour, use was made of a simplified formulation of visco-elastic effects, thermal cracking and steel reinforcement. A comparison between characteristic temperature values obtained from the numerical integration process and empirical data obtained from a 1:1 scale prototype was also carried out. (author)
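The time-explicit integration idea can be illustrated with a one-dimensional explicit heat-conduction step. This toy is not the paper's axisymmetric non-linear model; the diffusivity, grid, and initial hot spot below are invented, and the scheme obeys the usual explicit stability limit r = alpha*dt/dx^2 <= 1/2.

```python
# Explicit finite-difference step for 1D heat conduction:
#   T_i(t+dt) = T_i + r * (T_{i-1} - 2*T_i + T_{i+1}),  r = alpha*dt/dx^2
alpha = 1.0e-6        # thermal diffusivity, m^2/s (illustrative)
dx, dt = 0.01, 40.0   # grid spacing (m) and time step (s)
r = alpha * dt / dx ** 2
assert r <= 0.5       # explicit-scheme stability limit

T = [20.0] * 21       # initial temperature profile, deg C
T[10] = 60.0          # local hot spot (e.g. hydration heat)

for _ in range(500):  # march forward in time, ends held at 20 deg C
    T = [T[0]] + [T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
                  for i in range(1, 20)] + [T[20]]

peak = max(T)
```

The hot spot diffuses toward the fixed-temperature boundaries, the discrete analogue of the heat-dissipation behaviour the silo analysis tracks over time.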

  19. Gender-typical olfactory regulation of sexual behavior in goldfish

    Directory of Open Access Journals (Sweden)

    Makito eKobayashi

    2014-04-01

    It is known that olfaction is essential for the occurrence of sexual behavior in male goldfish. Sex pheromones from ovulatory females elicit male sexual behavior (chasing and the sperm-releasing act). In female goldfish, ovarian prostaglandin F2α (PGF) elicits female sexual behavior (the egg-releasing act). It has been considered that olfaction does not affect sexual behavior in female goldfish. In the present study, we reexamined the involvement of olfaction in the sexual behavior of female goldfish. Olfaction was blocked in male and female goldfish by two methods: nasal occlusion (NO), which blocks the reception of olfactants, and olfactory tract section (OTX), which blocks transmission of olfactory information from the olfactory bulb to the telencephalon. Sexual behavior of goldfish was induced by administration of PGF to females, an established method for inducing goldfish sexual behavior in both sexes. Sexual behavior in males was suppressed by NO and OTX, as previously reported, because of the lack of pheromone stimulation. In females, NO suppressed sexual behavior but OTX did not affect its occurrence, and females treated with both NO and OTX performed sexual behavior normally. These results indicate that olfaction is essential for female goldfish to perform sexual behavior, as in males, but in a different manner. The lack of olfaction in males removes pheromonal stimulation, so no behavior is elicited, whereas in females the results suggest that lack of olfaction causes a strong inhibition of sexual behavior mediated by the olfactory pathway. Olfactory tract section is considered to block this pathway and remove the inhibition, resulting in the resumption of the behavior. By sectioning subdivisions of the olfactory tract, it was found that this inhibition was mediated by the medial olfactory tracts, not the lateral olfactory tracts. Thus, it is concluded that goldfish has gender-typical olfactory regulation for sexual

  20. Domain decomposed preconditioners with Krylov subspace methods as subdomain solvers

    Energy Technology Data Exchange (ETDEWEB)

    Pernice, M. [Univ. of Utah, Salt Lake City, UT (United States)

    1994-12-31

    Domain decomposed preconditioners for nonsymmetric partial differential equations typically require the solution of problems on the subdomains. Most implementations employ exact solvers to obtain these solutions. Consequently work and storage requirements for the subdomain problems grow rapidly with the size of the subdomain problems. Subdomain solves constitute the single largest computational cost of a domain decomposed preconditioner, and improving the efficiency of this phase of the computation will have a significant impact on the performance of the overall method. The small local memory available on the nodes of most message-passing multicomputers motivates consideration of the use of an iterative method for solving subdomain problems. For large-scale systems of equations that are derived from three-dimensional problems, memory considerations alone may dictate the need for using iterative methods for the subdomain problems. In addition to reduced storage requirements, use of an iterative solver on the subdomains allows flexibility in specifying the accuracy of the subdomain solutions. Substantial savings in solution time is possible if the quality of the domain decomposed preconditioner is not degraded too much by relaxing the accuracy of the subdomain solutions. While some work in this direction has been conducted for symmetric problems, similar studies for nonsymmetric problems appear not to have been pursued. This work represents a first step in this direction, and explores the effectiveness of performing subdomain solves using several transpose-free Krylov subspace methods, GMRES, transpose-free QMR, CGS, and a smoothed version of CGS. Depending on the difficulty of the subdomain problem and the convergence tolerance used, a reduction in solution time is possible in addition to the reduced memory requirements. The domain decomposed preconditioner is a Schur complement method in which the interface operators are approximated using interface probing.
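The trade-off described above, relaxing subdomain solve accuracy while keeping overall convergence, can be illustrated with a toy example. The sketch below is not the paper's Schur-complement preconditioner with transpose-free Krylov solvers: it is a small 1D Laplace system whose two subdomain blocks are each "solved" by only a few Jacobi sweeps, standing in for an inexact iterative subdomain solver.

```python
N = 16            # interior grid points of a 1D Laplace problem
f = [1.0] * N     # right-hand side
u = [0.0] * N     # initial guess

def apply_A(v):
    """Tridiagonal 1D Laplacian with Dirichlet ends, unit spacing."""
    return [2 * v[i]
            - (v[i - 1] if i > 0 else 0.0)
            - (v[i + 1] if i < N - 1 else 0.0)
            for i in range(N)]

def inexact_block_solve(lo, hi, sweeps=5):
    """Approximately solve rows lo..hi-1 for u, holding the rest of u
    fixed: a cheap iterative (Jacobi) subdomain solver in place of an
    exact factorization."""
    for _ in range(sweeps):
        new = u[:]
        for i in range(lo, hi):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < N - 1 else 0.0
            new[i] = (f[i] + left + right) / 2.0
        u[:] = new

for _ in range(200):              # outer iteration over the subdomains
    inexact_block_solve(0, N // 2)
    inexact_block_solve(N // 2, N)

res = max(abs(fi - ai) for fi, ai in zip(f, apply_A(u)))
```

Even though each subdomain solve is deliberately loose, the outer iteration drives the global residual down, which is the qualitative point of using inexact Krylov subdomain solvers.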

  1. On the Rankin-Selberg method for higher genus string amplitudes

    CERN Document Server

    Florakis, Ioannis

    2017-01-01

    Closed string amplitudes at genus $h\leq 3$ are given by integrals of Siegel modular functions on a fundamental domain of the Siegel upper half-plane. When the integrand is of rapid decay near the cusps, the integral can be computed by the Rankin-Selberg method, which consists of inserting an Eisenstein series $E_h(s)$ in the integrand, computing the integral by the orbit method, and finally extracting the residue at a suitable value of $s$. String amplitudes, however, typically involve integrands with polynomial or even exponential growth at the cusps, and a renormalization scheme is required to treat infrared divergences. Generalizing Zagier's extension of the Rankin-Selberg method at genus one, we develop the Rankin-Selberg method for Siegel modular functions of degree 2 and 3 with polynomial growth near the cusps. In particular, we show that the renormalized modular integral of the Siegel-Narain partition function of an even self-dual lattice of signature $(d,d)$ is proportional to a residue of the Langla...

  2. METHODS FOR DETERMINING AGITATOR MIXING REQUIREMENTS FOR A MIXING and SAMPLING FACILITY TO FEED WTP (WASTE TREATMENT PLANT)

    International Nuclear Information System (INIS)

    Griffin, P.W.

    2009-01-01

    The following report is a summary of work conducted to evaluate the ability of existing correlative techniques and alternative methods to accurately estimate impeller speed and power requirements for mechanical mixers proposed for use in a mixing and sampling facility (MSF). The proposed facility would accept high level waste sludges from Hanford double-shell tanks and feed uniformly mixed high level waste to the Waste Treatment Plant. Numerous methods are evaluated and discussed, and resulting recommendations provided.

  3. Blast casting requires fresh assessment of methods

    Energy Technology Data Exchange (ETDEWEB)

    Pilshaw, S.R.

    1987-08-01

    The article discusses the reasons why conventional blasting operations (principally the choice of explosive products, drilling and initiation methods) are inefficient, and suggests new methods and materials to overcome the problems of conventional operations. The author suggests that the use of bulk ANFO for casting, instead of high-energy, high-density explosives with high-velocity detonation, is more effective in producing heave action. Similarly, drilling smaller blast holes than is conventional allows better distribution of the explosive load in the rock mass. The author also suggests that casting would be more efficient if the shot rows were loaded differently, to produce a variable-burden blasting pattern.

  4. GENERAL REQUIREMENTS FOR SIMULATION MODELS IN WASTE MANAGEMENT

    International Nuclear Information System (INIS)

    Miller, Ian; Kossik, Rick; Voss, Charlie

    2003-01-01

    Most waste management activities are decided upon and carried out in a public or semi-public arena, typically involving the waste management organization, one or more regulators, and often other stakeholders and members of the public. In these environments, simulation modeling can be a powerful tool in reaching a consensus on the best path forward, but only if the models that are developed are understood and accepted by all of the parties involved. These requirements for understanding and acceptance of the models constrain the appropriate software and model development procedures that are employed. This paper discusses requirements for both simulation software and for the models that are developed using the software. Requirements for the software include transparency, accessibility, flexibility, extensibility, quality assurance, ability to do discrete and/or continuous simulation, and efficiency. Requirements for the models that are developed include traceability, transparency, credibility/validity, and quality control. The paper discusses these requirements with specific reference to the requirements for performance assessment models that are used for predicting the long-term safety of waste disposal facilities, such as the proposed Yucca Mountain repository.

  5. Rapid assessment methods in eye care: An overview

    Directory of Open Access Journals (Sweden)

    Srinivas Marmamula

    2012-01-01

    Reliable information is required for the planning and management of eye care services. While classical research methods provide reliable estimates, they are prohibitively expensive and resource intensive. Rapid assessment (RA) methods are indispensable tools in situations where data are needed quickly and where time- or cost-related factors prohibit the use of classical epidemiological surveys. These methods have been developed and field tested, and can be applied across almost the entire gamut of health care. The 1990s witnessed the emergence of RA methods in eye care for cataract, onchocerciasis, and trachoma and, more recently, the main causes of avoidable blindness and visual impairment. The important features of RA methods include the use of local resources, simplified sampling methodology, and a simple examination protocol/data collection method that can be performed by locally available personnel. The analysis is quick and easy to interpret. The entire process is inexpensive, so the survey may be repeated once every 5-10 years to assess the changing trends in disease burden. RA survey methods are typically linked with an intervention. This article provides an overview of the RA methods commonly used in eye care, and emphasizes the selection of appropriate methods based on the local need and context.

  6. 16 CFR Figure 5 to Part 1610 - An Example of a Typical Gas Shield

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false An Example of a Typical Gas Shield 5 Figure 5 to Part 1610 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT... Example of a Typical Gas Shield ER25MR08.004 ...

  7. 16 CFR Figure 4 to Part 1610 - An Example of a Typical Indicator Finger

    Science.gov (United States)

    2010-01-01

    ... 16 Commercial Practices 2 2010-01-01 2010-01-01 false An Example of a Typical Indicator Finger 4 Figure 4 to Part 1610 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FLAMMABLE FABRICS ACT... Example of a Typical Indicator Finger ER25MR08.003 ...

  8. The effects of typical and atypical antipsychotics on the electrical activity of the brain in a rat model

    Directory of Open Access Journals (Sweden)

    Oytun Erbaş

    2013-09-01

    Objective: Antipsychotic drugs are known to have a strong effect on the bioelectric activity of the brain. However, some studies addressing the changes on electroencephalography (EEG) caused by typical and atypical antipsychotic drugs are conflicting. We aimed to compare the effects of typical and atypical antipsychotics on the electrical activity of the brain via EEG recordings in a rat model. Methods: Thirty-two Sprague Dawley adult male rats were used in the study. The rats were randomly divided into five groups (n=7 for each group). The first group was used as the control group and was administered 1 ml/kg saline intraperitoneally (IP). Haloperidol (1 mg/kg, group 2), chlorpromazine (5 mg/kg, group 3), olanzapine (1 mg/kg, group 4) and ziprasidone (1 mg/kg, group 5) were injected IP for five consecutive days. Then, EEG recordings of each group were taken for 30 minutes. Results: The percentages of delta and theta waves in the haloperidol, chlorpromazine, olanzapine and ziprasidone groups showed a highly significant difference compared with the saline group (p<0.001). The theta waves in the olanzapine and ziprasidone groups were increased compared with the haloperidol and chlorpromazine groups (p<0.05). Conclusion: Typical and atypical antipsychotic drugs may be a risk factor for EEG abnormalities. This study shows that antipsychotic drugs should be used with caution. J Clin Exp Invest 2013; 4(3): 279-284. Key words: haloperidol, chlorpromazine, olanzapine, ziprasidone, EEG, rat

  9. The Roots of Disillusioned American Dream in Typical American

    Institute of Scientific and Technical Information of China (English)

    古冬华

    2016-01-01

    Typical American is one of Gish Jen’s notable novels catching attention of the American literary circle. The motif of disillusioned American dream can be seen clearly through the experiences of three main characters. From perspectives of the consumer culture and cultural conflicts, this paper analyzes the roots of the disillusioned American dream in the novel.

  10. Typical Versus Atypical Anorexia Nervosa Among Adolescents: Clinical Characteristics and Implications for ICD-11.

    Science.gov (United States)

    Silén, Yasmina; Raevuori, Anu; Jüriloo, Elisabeth; Tainio, Veli-Matti; Marttunen, Mauri; Keski-Rahkonen, Anna

    2015-09-01

    There is scant research on the clinical utility of differentiating International Classification of Diseases (ICD) 10 diagnoses F50.0 anorexia nervosa (typical AN) and F50.1 atypical anorexia. We reviewed systematically records of 47 adolescents who fulfilled criteria for ICD-10 F50.0 (n = 34) or F50.1 (n = 13), assessing the impact of diagnostic subtype, comorbidity, background factors and treatment choices on recovery. Atypical AN patients were significantly older (p = 0.03), heavier (minimum body mass index 16.7 vs 15.1 kg/m², p = 0.003) and less prone to comorbidities (38% vs 71%, p = 0.04) and had shorter, less intensive and less costly treatments than typical AN patients. The diagnosis of typical versus atypical AN was the sole significant predictor of treatment success: recovery from atypical AN was 4.3 times (95% confidence interval [1.1, 17.5]) as likely as recovery from typical AN. Overall, our findings indicate that a broader definition of AN may dilute the prognostic value of the diagnosis, and therefore, ICD-11 should retain its distinction between typical and atypical AN. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.

  11. The importance of being 'well-placed': the influence of context on perceived typicality and esthetic appraisal of product appearance.

    Science.gov (United States)

    Blijlevens, Janneke; Gemser, Gerda; Mugge, Ruth

    2012-01-01

    Earlier findings have suggested that esthetic appraisal of product appearances is influenced by perceived typicality. However, prior empirical research on typicality and esthetic appraisal of product appearances has not explicitly taken context effects into account. In this paper, we investigate how a specific context influences perceived typicality and thus the esthetic appraisal of product appearances by manipulating the degree of typicality of a product's appearance and its context. The findings of two studies demonstrate that the perceived typicality of a product appearance and consequently its esthetic appraisal vary depending on the typicality of the context in which the product is presented. Specifically, contrast effects occur for product appearances that are perceived as typical. Typical product appearances are perceived as more typical and are more esthetically appealing when presented in an atypical context compared to when presented in a typical context. No differences in perceived typicality and esthetic appraisal were found for product appearances that are perceived as atypical. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Comparison of Cognitive Flexibility and Adjustment of Students with Developmental Coordination Disorder (DCD) and Typically Developing Students

    Directory of Open Access Journals (Sweden)

    Hasan Sadeghi

    2012-10-01

    Objectives: The aim of this research is to compare cognitive flexibility and adjustment between two groups of students: those with Developmental Coordination Disorder (DCD) and typically developing students. Methods: For this purpose, 50 students with DCD and 50 typically developing students were chosen from 12 primary schools. The Developmental Coordination Disorder Questionnaire (DCD-Q), the Adjustment Inventory for School Students (AISS) and the Wisconsin Card Sorting Test (WCST) were used to measure the research variables. Results: A multivariate analysis of variance (MANOVA) showed that the mean scores of cognitive flexibility and of emotional, educational and social adjustment differed significantly between the students with developmental coordination disorder and their typically developing peers (P<0.001). Multivariate regression analysis also showed that cognitive flexibility and adjustment explained 25% of the variance of developmental coordination disorder (P<0.001). Discussion: The present study provides further evidence of low cognitive flexibility and adjustment in students with DCD.

  13. 305 Building K basin mockup facility functions and requirements

    International Nuclear Information System (INIS)

    Steele, R.M.

    1994-01-01

    This document develops functions and requirements for installation and operation of a cold mockup test facility within the 305 Building. The test facility will emulate a portion of a typical spent nuclear fuel storage basin (e.g., 105-KE Basin) to support evaluation of equipment and processes for safe storage and disposition of the spent nuclear fuel currently within the K Basins

  14. Quantitative numerical method for analysing slip traces observed by AFM

    International Nuclear Information System (INIS)

    Veselý, J; Cieslar, M; Coupeau, C; Bonneville, J

    2013-01-01

    Atomic force microscopy (AFM) is used more and more routinely to study, at the nanometre scale, the slip traces produced on the surface of deformed crystalline materials. Taking full advantage of the quantitative height data of the slip traces, which can be extracted from these observations, requires however an adequate and robust processing of the images. In this paper an original method is presented, which allows the fitting of AFM scan-lines with a specific parameterized step function without any averaging treatment of the original data. This yields a quantitative and full description of the changes in step shape along the slip trace. The strength of the proposed method is established on several typical examples met in plasticity by analysing nano-scale structures formed on the sample surface by emerging dislocations. (paper)
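A minimal sketch of the kind of scan-line fitting described, using a generic smoothed-step model (an error-function step on a linear background) rather than the authors' specific parameterized step function, fitted to synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def step_profile(x, base, slope, height, x0, width):
    """Smoothed step on a linear background: models one slip-trace step in a scan-line."""
    return base + slope * x + 0.5 * height * (1.0 + erf((x - x0) / (width * np.sqrt(2.0))))

# Synthetic scan-line: a 2 nm step at x = 500 nm on a tilted background, plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1000, 512)          # lateral position, nm
y = step_profile(x, 1.0, 0.001, 2.0, 500.0, 10.0) + rng.normal(0, 0.05, x.size)

p0 = [y.min(), 0.0, y.max() - y.min(), x.mean(), 20.0]   # rough initial guess
popt, _ = curve_fit(step_profile, x, y, p0=p0)
base, slope, height, x0, width = popt
print(f"step height = {height:.2f} nm at x0 = {x0:.1f} nm")
```

Fitting each scan-line independently, as here, avoids the averaging the authors argue against and yields a step height and position per line.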

  15. The Inverse System Method Applied to the Derivation of Power System Non—linear Control Laws

    Institute of Scientific and Technical Information of China (English)

    Donghai LI; Xuezhi JIANG; et al.

    1997-01-01

    The differential geometric method has been applied effectively to a series of power system non-linear control problems. However, a set of differential equations must be solved to obtain the required diffeomorphic transformation, so the derivation of control laws is very complicated. In fact, because of the specific structure of power system models, the required diffeomorphic transformation may be obtained directly, making it unnecessary to solve a set of differential equations. In addition, the inverse system method is in essence equivalent to the differential geometric method and is not limited to affine non-linear systems; its physical meaning can be seen directly, and its deduction requires only algebraic operations and differentiation, so control laws can be obtained easily and applied conveniently in engineering. The authors take steam valving control of a power system as a typical case study. It is demonstrated that the control law deduced by the inverse system method is the same as the one obtained by the differential geometric method. This conclusion simplifies the derivation of control laws for steam valving, excitation, converters and static var compensators by the differential geometric method, and may suit similar control problems in other areas.

  16. Simulation of temporal and spatial distribution of required irrigation water by crop models and the pan evaporation coefficient method

    Science.gov (United States)

    Yang, Yan-min; Yang, Yonghui; Han, Shu-min; Hu, Yu-kun

    2009-07-01

    Hebei Plain is the most important agricultural belt in North China. Intensive irrigation, low and uneven precipitation have led to severe water shortage on the plain. This study is an attempt to resolve this crucial issue of water shortage for sustainable agricultural production and water resources management. The paper models distributed regional irrigation requirement for a range of cultivated crops on the plain. Classic crop models like DSSAT-wheat/maize and COTTON2K are used in combination with the pan-evaporation coefficient method to estimate water requirements for wheat, corn, cotton, fruit trees and vegetables. The approach is more accurate than the static approach adopted in previous studies. This is because the combined use of crop models and the pan-evaporation coefficient method dynamically accounts for irrigation requirement at different growth stages of crops, agronomic practices, and field and climatic conditions. The simulation results show increasing Required Irrigation Amount (RIA) with time. RIA ranges from 5.08×10⁹ m³ to 14.42×10⁹ m³ for the period 1986-2006, with an annual average of 10.6×10⁹ m³. Percent average water use by wheat, fruit trees, vegetable, corn and cotton is 41%, 12%, 12%, 11%, 7% and 17% respectively. RIA for April and May (the period with the highest irrigation water use) is 1.78×10⁹ m³ and 2.41×10⁹ m³ respectively. The counties in the piedmont regions of Mount Taihang have high RIA while the central and eastern regions/counties have low irrigation requirement.
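The pan-evaporation coefficient calculation referred to above can be sketched in a few lines: crop evapotranspiration is estimated as crop coefficient × pan coefficient × pan evaporation, and the net irrigation requirement is what rainfall does not cover. All coefficients and monthly values below are illustrative, not the study's data:

```python
# Monthly pan evaporation (mm) and rainfall (mm) for a hypothetical wheat plot;
# Kp (pan coefficient) and Kc (crop coefficient) values are illustrative only.
pan_evap = [90, 120, 160, 180]      # four growth periods, mm
rainfall = [25, 30, 60, 110]        # mm
kc       = [0.7, 1.05, 1.15, 0.6]   # crop coefficient by growth stage
KP = 0.75                           # pan coefficient

def required_irrigation(pan_evap, rainfall, kc, kp):
    """Net irrigation requirement per period: crop ET minus rain, never negative."""
    ria = []
    for e, r, k in zip(pan_evap, rainfall, kc):
        etc = k * kp * e                  # crop evapotranspiration, mm
        ria.append(max(etc - r, 0.0))
    return ria

ria_mm = required_irrigation(pan_evap, rainfall, kc, KP)
total_m3_per_ha = sum(ria_mm) * 10        # 1 mm depth over 1 ha = 10 m^3
print(ria_mm, total_m3_per_ha)
```

Summing such per-period, per-crop requirements over the cropped area of each county gives a distributed RIA of the kind the study maps.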

  17. Identifying typical physical activity on smartphone with varying positions and orientations.

    Science.gov (United States)

    Miao, Fen; He, Yi; Liu, Jinlei; Li, Ye; Ayoola, Idowu

    2015-04-13

    Traditional activity recognition solutions are not widely applicable due to a high cost and inconvenience to use with numerous sensors. This paper aims to automatically recognize physical activity with the help of the built-in sensors of the widespread smartphone without any limitation of firm attachment to the human body. By introducing a method to judge whether the phone is in a pocket, we investigated the data collected from six positions of seven subjects and chose five signals that are insensitive to orientation for activity classification. Decision trees (J48), Naive Bayes and Sequential minimal optimization (SMO) were employed to recognize five activities: static, walking, running, walking upstairs and walking downstairs. The experimental results based on 8,097 activity data demonstrated that the J48 classifier produced the best performance among the three classifiers, with an average recognition accuracy of 89.6%, and thus would serve as the optimal online classifier. The utilization of the built-in sensors of the smartphone to recognize typical physical activities without any limitation of firm attachment is feasible.
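A toy sketch of the orientation-insensitive feature idea with scikit-learn. A CART decision tree stands in for the paper's J48 (C4.5) classifier, and the features and data are synthetic, not the paper's:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for orientation-insensitive features (e.g. mean and std of
# the acceleration magnitude |a| over a window); real data would come from the
# phone's accelerometer.
rng = np.random.default_rng(42)

def make_activity(mean_mag, std_mag, label, n=200):
    feats = np.column_stack([
        rng.normal(mean_mag, 0.1, n),   # mean |a| over a window
        rng.normal(std_mag, 0.05, n),   # std of |a| over a window
    ])
    return feats, np.full(n, label)

X_parts, y_parts = zip(*[
    make_activity(9.8, 0.05, 0),   # static
    make_activity(10.5, 1.0, 1),   # walking
    make_activity(12.0, 3.0, 2),   # running
])
X, y = np.vstack(X_parts), np.concatenate(y_parts)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Using the magnitude of the acceleration vector rather than its components is what makes such features insensitive to how the phone is oriented in the pocket.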

  18. RELAP-7 Software Verification and Validation Plan: Requirements Traceability Matrix (RTM) Part 1 – Physics and numerical methods

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Yong Joon [Idaho National Lab. (INL), Idaho Falls, ID (United States); Yoo, Jun Soo [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    This INL plan comprehensively describes the Requirements Traceability Matrix (RTM) on main physics and numerical method of the RELAP-7. The plan also describes the testing-based software verification and validation (SV&V) process—a set of specially designed software models used to test RELAP-7.

  19. Comparing Levels of Mastery Motivation in Children with Cerebral Palsy (CP) and Typically Developing Children.

    Science.gov (United States)

    Salavati, Mahyar; Vameghi, Roshanak; Hosseini, Seyed Ali; Saeedi, Ahmad; Gharib, Masoud

    2018-02-01

    The present study aimed to compare mastery motivation in school-age children with CP and typically developing children. 229 parents of children with cerebral palsy and 212 parents of typically developing children participated in the present cross-sectional study and completed demographic and DMQ18 forms. The remaining information was measured by an occupational therapist. Average age was 127.12±24.56 months for children with cerebral palsy (CP) and 128.08±15.90 months for typically developing children. An independent t-test was used to compare the two groups, and Pearson correlation coefficients (computed in SPSS) were used to study correlations with other factors. The CP and typically developing groups differed on all DMQ subscales. Mastery motivation was negatively correlated with Manual Ability Classification System level (r=-0.782, P<0.001) and with cognitive impairment (r=-0.161, P<0.05). Children with CP had lower mastery motivation than typically developing children. Rehabilitation efforts should be made to enhance motivation, so that children feel empowered to perform tasks and practices.

  20. Ecosystem responses to warming and watering in typical and desert steppes

    Science.gov (United States)

    Xu, Zhenzhu; Hou, Yanhui; Zhang, Lihua; Liu, Tao; Zhou, Guangsheng

    2016-10-01

    Global warming is projected to continue, leading to intense fluctuations in precipitation and heat waves and thereby affecting the productivity and the relevant biological processes of grassland ecosystems. Here, we determined the functional responses to warming and altered precipitation in both typical and desert steppes. The results showed that watering markedly increased the aboveground net primary productivity (ANPP) in a typical steppe during a drier year and in a desert steppe over two years, whereas warming manipulation had no significant effect. The soil microbial biomass carbon (MBC) and the soil respiration (SR) were increased by watering in both steppes, but the SR was significantly decreased by warming in the desert steppe only. The inorganic nitrogen components varied irregularly, with generally lower levels in the desert steppe. The belowground traits of soil total organic carbon (TOC) and the MBC were more closely associated with the ANPP in the desert than in the typical steppes. The results showed that the desert steppe with lower productivity may respond strongly to precipitation changes, particularly with warming, highlighting the positive effect of adding water with warming. Our study implies that the habitat- and year-specific responses to warming and watering should be considered when predicting an ecosystem’s functional responses under climate change scenarios.

  1. Analysis of typical meteorological years in different climates of China

    International Nuclear Information System (INIS)

    Yang, Liu; Lam, Joseph C.; Liu, Jiaping

    2007-01-01

    Typical meteorological years (TMYs) for 60 cities in the five major climatic zones (severe cold, cold, hot summer and cold winter, hot summer and warm winter, mild) in China were investigated. Long term (1971-2000) measured weather data such as dry bulb and dew point temperatures, wind speed and global solar radiation were gathered and analysed. A total of seven climatic indices were used to select the 12 typical meteorological months (TMMs) that made up the TMY for each city. In general, the cumulative distribution functions of the TMMs selected tended to follow their long term counterparts quite well. There was no persistent trend in any particular years being more representative than the others, though 1978 and 1982 tended to be picked most often. This paper presents the work and its findings. Future work on the assessment of TMYs in building energy simulation is also discussed
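TMM selection of this kind is commonly scored with a Finkelstein-Schafer (FS) style statistic: each candidate month is ranked by how closely its empirical CDF tracks the long-term CDF of the same calendar month. A rough sketch on synthetic temperature data; the index and data are illustrative, not the paper's exact seven-index procedure:

```python
import numpy as np

def fs_statistic(sample, long_term):
    """Finkelstein-Schafer style statistic: mean absolute difference between
    the empirical CDF of one candidate month and the long-term CDF."""
    grid = np.sort(long_term)
    cdf_lt = np.arange(1, grid.size + 1) / grid.size
    cdf_m = np.searchsorted(np.sort(sample), grid, side="right") / sample.size
    return np.mean(np.abs(cdf_m - cdf_lt))

# Hypothetical daily-mean January temperatures: 3 candidate years vs a 30-year record.
rng = np.random.default_rng(1)
long_term = rng.normal(-5.0, 3.0, 30 * 31)
candidates = {
    1971: rng.normal(-5.1, 3.0, 31),   # close to the long-term distribution
    1972: rng.normal(-9.0, 3.0, 31),   # unusually cold January
    1973: rng.normal(-1.5, 3.0, 31),   # unusually warm January
}
scores = {yr: fs_statistic(days, long_term) for yr, days in candidates.items()}
typical_year = min(scores, key=scores.get)
print(typical_year, scores)
```

Repeating this for each calendar month and each climatic index, then choosing the lowest-scoring candidate month, assembles the 12 TMMs that make up a TMY.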

  2. Deep Borehole Field Test Requirements and Controlled Assumptions.

    Energy Technology Data Exchange (ETDEWEB)

    Hardin, Ernest [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-01

    This document presents design requirements and controlled assumptions intended for use in the engineering development and testing of: 1) prototype packages for radioactive waste disposal in deep boreholes; 2) a waste package surface handling system; and 3) a subsurface system for emplacing and retrieving packages in deep boreholes. Engineering development and testing is being performed as part of the Deep Borehole Field Test (DBFT; SNL 2014a). This document presents parallel sets of requirements for a waste disposal system and for the DBFT, showing the close relationship. In addition to design, it will also inform planning for drilling, construction, and scientific characterization activities for the DBFT. The information presented here follows typical preparations for engineering design. It includes functional and operating requirements for handling and emplacement/retrieval equipment, waste package design and emplacement requirements, borehole construction requirements, sealing requirements, and performance criteria. Assumptions are included where they could impact engineering design. Design solutions are avoided in the requirements discussion.

  3. The emergence of typical entanglement in two-party random processes

    International Nuclear Information System (INIS)

    Dahlsten, O C O; Oliveira, R; Plenio, M B

    2007-01-01

    We investigate the entanglement within a system undergoing a random, local process. We find that there is initially a phase of very fast generation and spread of entanglement. At the end of this phase the entanglement is typically maximal. In Oliveira et al (2007 Phys. Rev. Lett. 98 130502) we proved that the maximal entanglement is reached to a fixed arbitrary accuracy within O(N³) steps, where N is the total number of qubits. Here we provide a detailed and more pedagogical proof. We demonstrate that one can use the so-called stabilizer gates to simulate this process efficiently on a classical computer. Furthermore, we discuss three ways of identifying the transition from the phase of rapid spread of entanglement to the stationary phase: (i) the time when saturation of the maximal entanglement is achieved, (ii) the cutoff moment, when the entanglement probability distribution is practically stationary, and (iii) the moment block entanglement exhibits volume scaling. We furthermore investigate the mixed state and multipartite setting. Numerically, we find that the mutual information appears to behave similarly to the quantum correlations and that there is a well-behaved phase-space flow of entanglement properties towards an equilibrium. We describe how the emergence of typical entanglement can be used to create a much simpler tripartite entanglement description. The results form a bridge between certain abstract results concerning typical (also known as generic) entanglement relative to an unbiased distribution on pure states and the more physical picture of distributions emerging from random local interactions

  4. Review: typically-developing students' views and experiences of inclusive education.

    Science.gov (United States)

    Bates, Helen; McCafferty, Aileen; Quayle, Ethel; McKenzie, Karen

    2015-01-01

    The present review aimed to summarize and critique existing qualitative studies that have examined typically-developing students' views of inclusive education (i.e. the policy of teaching students with special educational needs in mainstream settings). Guidelines from the Centre for Reviews and Dissemination were followed, outlining the criteria by which journal articles were identified and critically appraised. Narrative Synthesis was used to summarize findings across studies. Fourteen studies met the review's inclusion criteria and were subjected to quality assessment. Analysis revealed that studies were of variable quality: three were of "good" methodological quality, seven of "medium" quality, and four of "poor" quality. With respect to findings, three overarching themes emerged: students expressed mostly negative attitudes towards peers with disabilities; were confused by the principles and practices of inclusive education; and made a number of recommendations for improving its future provision. A vital determinant of the success of inclusive education is the extent to which it is embraced by typically-developing students. Of concern, this review highlights that students tend not to understand inclusive education, and that this can breed hostility towards it. More qualitative research of high methodological quality is needed in this area. Implications for Rehabilitation Typically-developing students are key to the successful implementation of inclusive education. This review shows that most tend not to understand it, and can react by engaging in avoidance and/or targeted bullying of peers who receive additional support. Schools urgently need to provide teaching about inclusive education, and increase opportunities for contact between students who do and do not receive support (e.g. cooperative learning).

  5. Local Approximation and Hierarchical Methods for Stochastic Optimization

    Science.gov (United States)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision processes problem class, we are motivated by an application where we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computation bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the
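The low-rank value function approximation mentioned above can be sketched with a simple hard-impute style matrix completion: alternate between filling missing entries with the current estimate and projecting onto the best rank-r approximation. This is an illustrative stand-in, not the thesis's exact algorithm:

```python
import numpy as np

def low_rank_complete(M, mask, rank, iters=500):
    """Hard-impute style completion: alternate between filling missing entries
    with the current low-rank estimate and re-projecting to rank r via SVD."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X = np.where(mask, M, Xr)                   # keep observed values, fill the rest
    return X

# Hypothetical rank-2 "value function" over a 40x30 grid of states,
# with exact values computed for only ~30% of the states.
rng = np.random.default_rng(7)
V = rng.normal(size=(40, 2)) @ rng.normal(size=(2, 30))
mask = rng.random(V.shape) < 0.3
V_hat = low_rank_complete(V, mask, rank=2)
err = np.linalg.norm(V_hat - V) / np.linalg.norm(V)
print(f"relative completion error: {err:.3f}")
```

The payoff is exactly the one the abstract describes: the exact value function is computed only on a small subset of states, and the rest is recovered from the low-rank structure.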

  6. Numerical methods for hydrodynamic stability problems

    International Nuclear Information System (INIS)

    Fujimura, Kaoru

    1985-11-01

    Numerical methods for solving the Orr-Sommerfeld equation, which is the fundamental equation of the hydrodynamic stability theory for various shear flows, are reviewed and typical numerical results are presented. The methods of asymptotic solution, finite difference methods, initial value methods and expansions in orthogonal functions are compared. (author)

  7. Expert Elicitation Methods in Quantifying the Consequences of Acoustic Disturbance from Offshore Renewable Energy Developments.

    Science.gov (United States)

    Donovan, Carl; Harwood, John; King, Stephanie; Booth, Cormac; Caneco, Bruno; Walker, Cameron

    2016-01-01

    There are many developments for offshore renewable energy around the United Kingdom whose installation typically produces large amounts of far-reaching noise, potentially disturbing many marine mammals. The potential to affect the favorable conservation status of many species means extensive environmental impact assessment requirements for the licensing of such installation activities. Quantification of such complex risk problems is difficult and much of the key information is not readily available. Expert elicitation methods can be employed in such pressing cases. We describe the methodology used in an expert elicitation study conducted in the United Kingdom for combining expert opinions based on statistical distributions and copula-like methods.
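For the combination step, the simplest pooling rule is the linear opinion pool: the combined distribution is a weighted mixture of the experts' distributions. It is shown here only as a stand-in for the study's copula-like method, and all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each expert's judgment encoded as a normal distribution over some consequence
# quantity (e.g. number of animals disturbed); means/sds are purely illustrative.
experts = [(500.0, 150.0), (800.0, 300.0), (650.0, 100.0)]
weights = np.array([0.5, 0.2, 0.3])     # e.g. calibration-based weights

# Linear opinion pool: sample from the weighted mixture of expert distributions.
n = 100_000
choice = rng.choice(len(experts), size=n, p=weights)
means = np.array([m for m, s in experts])
sds = np.array([s for m, s in experts])
samples = rng.normal(means[choice], sds[choice])
combined_mean = samples.mean()
print(f"pooled mean ~ {combined_mean:.0f}")
```

Copula-based combination goes further than this mixture by also modelling dependence between the quantities the experts judge, which is why it suits multivariate risk problems like this one.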

  8. Suicide ideation and attempts in children with psychiatric disorders and typical development.

    Science.gov (United States)

    Dickerson Mayes, Susan; Calhoun, Susan L; Baweja, Raman; Mahr, Fauzia

    2015-01-01

    Children and adolescents with psychiatric disorders are at increased risk for suicide behavior. This is the first study to compare frequencies of suicide ideation and attempts in children and adolescents with specific psychiatric disorders and typical children while controlling for comorbidity and demographics. Mothers rated the frequency of suicide ideation and attempts in 1,706 children and adolescents with psychiatric disorders and typical development, 6-18 years of age. For the typical group, 0.5% had suicide behavior (ideation or attempts), versus 24% across the psychiatric groups (bulimia 48%, depression or anxiety disorder 34%, oppositional defiant disorder 33%, ADHD-combined type 22%, anorexia 22%, autism 18%, intellectual disability 17%, and ADHD-inattentive type 8%). Most alarming, 29% of adolescents with bulimia often or very often had suicide attempts, compared with 0-4% of patients in the other psychiatric groups. It is important for professionals to routinely screen all children and adolescents who have psychiatric disorders for suicide ideation and attempts and to treat the underlying psychiatric disorders that increase suicide risk.

  9. High gain requirements and high field Tokamak experiments

    International Nuclear Information System (INIS)

    Cohn, D.R.

    1994-01-01

    Operation at sufficiently high gain (ratio of fusion power to external heating power) is a fundamental requirement for tokamak power reactors. For typical reactor concepts, the gain is greater than 25. Self-heating from alpha particles in deuterium-tritium plasmas can greatly reduce nτ/temperature requirements for high gain. A range of high gain operating conditions is possible with different values of alpha-particle efficiency (fraction of alpha-particle power that actually heats the plasma) and with different ratios of self heating to external heating. At one extreme, there is ignited operation, where all of the required plasma heating is provided by alpha particles and the alpha-particle efficiency is 100%. At the other extreme, there is the case of no heating contribution from alpha particles. nτ/temperature requirements for high gain are determined as a function of alpha-particle heating efficiency. Possibilities for high gain experiments in deuterium-tritium, deuterium, and hydrogen plasmas are discussed
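The gain arithmetic can be illustrated with a steady-state power balance: external heating must supply whatever the alpha particles do not, and alphas carry one fifth of the D-T fusion power. A rough sketch with illustrative numbers (not from the record):

```python
def external_heating(p_fus, p_loss, eta_alpha):
    """External heating needed in steady state, given fusion power, total plasma
    loss power, and the fraction of alpha power that actually heats the plasma.
    Alpha particles carry 1/5 of the D-T fusion power."""
    p_alpha = p_fus / 5.0
    return max(p_loss - eta_alpha * p_alpha, 0.0)

p_fus, p_loss = 2500.0, 600.0          # MW, illustrative reactor-scale numbers
for eta in (0.0, 0.5, 1.0):
    p_ext = external_heating(p_fus, p_loss, eta)
    q = p_fus / p_ext if p_ext > 0 else float("inf")
    print(f"eta_alpha={eta:.1f}: P_ext={p_ext:.0f} MW, gain Q={q:.1f}")
```

With these numbers, full alpha-particle heating efficiency takes the gain from about 4 to 25, matching the reactor-level gains quoted above; ignition is the limit where the loss power is covered entirely by alpha heating.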

  10. The general anti-avoidance rule enshrined in the National Tax Code: the confrontation between the principle of ability to pay and the rule of closed typicality, and the alleged taxation by analogy

    Directory of Open Access Journals (Sweden)

    Willian Robert Nahra Filho

    2012-04-01

    This article examines the possibility of applying a general anti-avoidance rule in domestic law, that is, of taxing by analogy an atypical legal fact whose economic effects are equivalent to those of the typical legal fact, based on the abuse-of-rights doctrine and on the principle of ability to pay. The question is analysed against the principles of strict legality and closed typicality, both developments of the principle of legal certainty. The article concludes that taxation by analogy is impossible: it offends the principle of legal certainty, which demands certainty and predictability in the relations between the State and taxpayers; it breaches the principle of strict legality, which requires a specific and qualified statute for the institution of taxes; and it violates the principle of closed typicality, which prevents taxing a legal fact not described by law in all its details. These principles are not even subject to limitation, since they are immutable clauses. The principle of ability to pay therefore cannot restrict the full effectiveness of the rule of typicality. Taxation through the integrative method of analogy is also impossible, because the method inherently carries a certain, even if relative, degree of creativity and depends on the existence of gaps to fill in matters relevant to the institution of taxes, whereas in reality those are spaces free of law.

  11. Finite element method for neutron diffusion problems in hexagonal geometry

    International Nuclear Information System (INIS)

    Wei, T.Y.C.; Hansen, K.F.

    1975-06-01

    The use of the finite element method for solving two-dimensional static neutron diffusion problems in hexagonal reactor configurations is considered. It is investigated as a possible alternative to the low-order finite difference method. Various piecewise polynomial spaces are examined for their use in hexagonal problems. The central questions which arise in the design of these spaces are the degree of incompleteness permissible and the advantages of using a low-order space fine-mesh approach over that of a high-order space coarse-mesh one. There is also the question of the degree of smoothness required. Two schemes for the construction of spaces are described and a number of specific spaces, constructed with the questions outlined above in mind, are presented. They range from a complete non-Lagrangian, non-Hermite quadratic space to an incomplete ninth order space. Results are presented for two-dimensional problems typical of a small high temperature gas-cooled reactor. From the results it is concluded that the space used should at least include the complete linear one. Complete spaces are to be preferred to totally incomplete ones. Once function continuity is imposed any additional degree of smoothness is of secondary importance. For flux shapes typical of the small high temperature gas-cooled reactor the linear space fine-mesh alternative is to be preferred to the perturbation quadratic space coarse-mesh one and the low-order finite difference method is to be preferred over both finite element schemes

  12. Receptor imaging of schizophrenic patients under treatment with typical and atypical neuroleptics

    International Nuclear Information System (INIS)

    Dresel, S.; Tatsch, K.; Meisenzahl, E.; Scherer, J.

    2002-01-01

    Schizophrenic psychosis is treated with typical and atypical neuroleptics. The two groups of drugs differ with regard to the induction of extrapyramidal side effects. The occupancy of postsynaptic dopaminergic D2 receptors is considered an essential aspect of their antipsychotic properties. Dopamine D2 receptor status can be assessed by means of [I-123]IBZM SPECT. Studies on the typical neuroleptic haloperidol revealed an exponential dose-response relationship as measured by IBZM. With one exception, all patients whose specific IBZM binding fell below a threshold of 0.4 (normal value: >0.95) presented extrapyramidal side effects. Under treatment with the atypical neuroleptic clozapine an exponential dose-response relationship was also found; however, none of these patients showed extrapyramidal side effects. The recently introduced atypical neuroleptics risperidone and olanzapine again presented an exponential relationship between daily dose and IBZM binding, with curves lying between those of haloperidol and clozapine. Extrapyramidal side effects were documented in fewer patients treated with risperidone than with haloperidol; for olanzapine, only one patient in our own group revealed these findings. The pharmacological profile of atypical neuroleptics shows, in addition to their binding to dopamine receptors, high affinities to the receptors of other neurotransmitter systems, particularly the serotonergic system. Therefore, the lower incidence of extrapyramidal side effects seen with atypical compared to typical neuroleptics is most likely due, at least in part, to a complex interaction with a variety of neurotransmitter systems. (orig.)

  13. Structural MRI-based discrimination between autistic and typically developing brain

    Energy Technology Data Exchange (ETDEWEB)

    Fahmi, R; Hassan, H; Farag, A A [CVIP Lab., Univ. of Louisville, KY (United States); Elbaz, A [Dept. of Bioengineering, Univ. of Louisville, KY (United States); Casanova, M F [Dept. of Psychiatry and Behavioral science, Univ. of Louisville, KY (United States)

    2007-06-15

    Autism is a neurodevelopmental disorder characterized by marked deficits in communication, social interaction, and interests. Various studies of autism have suggested abnormalities in several brain regions, with an increasing agreement on the abnormal anatomy of the white matter (WM) and on deficits in the size of the corpus callosum (CC) and its sub-regions in autism. In this paper, we aim at using these abnormalities in order to devise robust classification methods of autistic vs. typically developing brains by analyzing their respective MRIs. Our analysis is based on shape descriptions and geometric models. We compute the 3D distance map to describe the shape of the WM, and use it as a statistical feature to discriminate between the two groups. We also use our recently proposed non-rigid registration technique to devise another classification approach by statistically analyzing and comparing the deformation fields generated from registering segmented CC's onto each other. The proposed techniques are tested on postmortem and on in-vivo brain MR data. At the 85% confidence level the WM-based classification algorithm correctly classified 14/14 postmortem-autistics and 12/12 in-vivo autistics, a 100% accuracy rate, and 13/15 postmortem controls (86% accuracy rate) and 30/30 in-vivo controls (100% accuracy rate). The technique based on the analysis of the CC was applied only on the in vivo data. At the 85% confidence level, this technique correctly classified 10/15 autistics, a 0.66 accuracy rate, and 29/30 controls, a 0.96 accuracy rate. These results are very promising and show that, contrary to traditional methods, the proposed techniques are less sensitive to age and volume effects. (orig.)
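The distance-map feature can be sketched in a couple of lines with SciPy's Euclidean distance transform; here a toy 2D mask stands in for the segmented 3D white-matter volume:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy stand-in for a segmented white-matter mask (True = WM voxel); a real
# study would use the 3D MRI segmentation.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 12:52] = True
mask[28:36, 28:36] = False          # a "hole" to make the shape non-trivial

# Distance map: for every WM voxel, distance to the nearest non-WM voxel.
dist = distance_transform_edt(mask)

# Simple shape descriptors derived from the distance map, usable as
# statistical features for group discrimination.
features = {
    "mean_depth": float(dist[mask].mean()),
    "max_depth": float(dist.max()),
    "p90_depth": float(np.percentile(dist[mask], 90)),
}
print(features)
```

Because such depth statistics describe shape rather than overall size, they are less sensitive to age and volume effects, which is the property the abstract highlights.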

  14. Structural MRI-based discrimination between autistic and typically developing brain

    International Nuclear Information System (INIS)

    Fahmi, R.; Hassan, H.; Farag, A.A.; Elbaz, A.; Casanova, M.F.

    2007-01-01

    Autism is a neurodevelopmental disorder characterized by marked deficits in communication, social interaction, and interests. Various studies of autism have suggested abnormalities in several brain regions, with increasing agreement on the abnormal anatomy of the white matter (WM) and on deficits in the size of the corpus callosum (CC) and its sub-regions. In this paper, we use these abnormalities to devise robust methods for classifying autistic vs. typically developing brains by analyzing their respective MRIs. Our analysis is based on shape descriptions and geometric models. We compute the 3D distance map to describe the shape of the WM and use it as a statistical feature to discriminate between the two groups. We also use our recently proposed non-rigid registration technique to devise a second classification approach, statistically analyzing and comparing the deformation fields generated by registering segmented CCs onto each other. The proposed techniques are tested on postmortem and in vivo brain MR data. At the 85% confidence level, the WM-based classification algorithm correctly classified 14/14 postmortem autistics and 12/12 in vivo autistics (100% accuracy), as well as 13/15 postmortem controls (86% accuracy) and 30/30 in vivo controls (100% accuracy). The CC-based technique was applied only to the in vivo data; at the same confidence level, it correctly classified 10/15 autistics (66% accuracy) and 29/30 controls (96% accuracy). These results are very promising and show that, contrary to traditional methods, the proposed techniques are less sensitive to age and volume effects. (orig.)

  16. Simulation of single-phase flow in typical centrifugal pumps of the oil industry; Simulacao do escoamento monofasico em bombas centrifugas tipicas da industria do petroleo

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Ana Carla Costa; Silva, Aldrey Luis Morais da; Maitelli, Carla Wilza Souza de Paula [Universidade Federal do Rio Grande do Norte (UFRN), RN (Brazil)

    2012-07-01

    Among the various techniques applied in oil exploration and production processes, artificial lift equipment is used to increase the flow rate of oil and gas wells. Choosing the most appropriate lift method depends on several factors, among them the initial installation costs, maintenance, and conditions in the producing field, and therefore requires careful project analysis. Although other methods offer low cost and easy maintenance, the BCS method (electrical submersible pumping, ESP) is quite effective when larger liquid flow rates must be pumped from onshore or offshore wells under adverse conditions of temperature, free gas in the mixture, and viscous fluids. The method is applied mostly where reservoir pressure is low and the fluid cannot reach the surface without an artificial means to lift it, as happens at the end of the productive life of a naturally flowing well, or when a well's flow rate is far below its expected production and natural energy must be complemented through artificial lift. By definition, BCS is an artificial lift method in which a subsurface electric motor converts electrical energy into mechanical energy, and a multistage centrifugal pump coupled to the motor converts that mechanical energy into kinetic energy, bringing the fluid to the surface. In this study we performed computer simulations using the commercial program ANSYS® CFX® on 3D geometry previously obtained in CAD format, with the objective of evaluating the single-phase flow inside a typical submersible centrifugal pump used in the oil industry. The variable measured was the pump head, and the working fluids were oil and water. (author)

  17. Typicality aids search for an unspecified target, but only in identification and not in attentional guidance.

    Science.gov (United States)

    Castelhano, Monica S; Pollatsek, Alexander; Cave, Kyle R

    2008-08-01

    Participants searched for a picture of an object, and the object was either a typical or an atypical category member. The object was cued by either the picture or its basic-level category name. Of greatest interest was whether it would be easier to search for typical objects than for atypical objects. The answer was "yes," but only in a qualified sense: there was a large typicality effect on response time only for name cues, and almost none of the effect was found in the time to locate (i.e., first fixate) the target. Instead, typicality influenced verification time, the time to respond to the target once it was fixated. Typicality is thus apparently irrelevant when the target is well specified by a picture cue; even when the target is underspecified (as with a name cue), it does not aid attentional guidance, but only facilitates categorization.

  18. Health-Related Quality of Life in Children Attending Special and Typical Education Greek Schools

    Science.gov (United States)

    Papadopoulou, D.; Malliou, P.; Kofotolis, N.; Vlachopoulos, S. P.; Kellis, E.

    2017-01-01

    The purpose of this study was to examine parental perceptions about Health Related Quality of Life (HRQoL) of typical education and special education students in Greece. The Pediatric Quality of Life Inventory (PedsQL) was administered to the parents of 251 children from typical schools, 46 students attending integration classes (IC) within a…

  19. Comparison of 2 electrophoretic methods and a wet-chemistry method in the analysis of canine lipoproteins.

    Science.gov (United States)

    Behling-Kelly, Erica

    2016-03-01

    The evaluation of lipoprotein metabolism in small animal medicine is hindered by the lack of a gold-standard method and a paucity of validation data to support the use of the automated chemistry methods available in the typical veterinary clinical pathology laboratory. The physical and chemical differences between canine and human lipoproteins call into question whether transferring some of these human methodologies to the study of canine lipoproteins is valid. Validation of methodology must go hand in hand with exploratory studies of the diagnostic or prognostic utility of measuring specific lipoproteins in veterinary medicine. The goal of this study was to compare one commercially available wet-chemistry method to manual and automated lipoprotein electrophoresis in the analysis of canine lipoproteins. Canine lipoproteins from 50 dogs were prospectively analyzed by 2 electrophoretic methods, one automated and one manual, and one wet-chemistry method. The electrophoretic methods identified a higher proportion of low-density lipoproteins than the wet-chemistry method, and automated electrophoresis occasionally failed to identify very low-density lipoproteins. Wet-chemistry methods designed for the evaluation of human lipoproteins are insensitive to canine low-density lipoproteins and may not be applicable to the study of canine lipoproteins, and automated electrophoretic methods will likely require significant modifications if they are to be used for this purpose. Studies aimed at determining the impact of a disease state on lipoproteins should thoroughly investigate the selected methodology prior to the onset of the study. © 2016 American Society for Veterinary Clinical Pathology.

  20. Adolescent alcohol exposure and persistence of adolescent-typical phenotypes into adulthood: a mini-review

    Science.gov (United States)

    Spear, Linda Patia; Swartzwelder, H. Scott

    2014-01-01

    Alcohol use is typically initiated during adolescence, which, along with young adulthood, is a vulnerable period for the onset of high-risk drinking and alcohol abuse. Given across-species commonalities in certain fundamental neurobehavioral characteristics of adolescence, studies in laboratory animals such as the rat have proved useful for assessing the persisting consequences of repeated alcohol exposure. Despite limited research to date, reports of long-lasting effects of adolescent ethanol exposure are emerging, along with certain common themes. One repeated finding is that adolescent exposure to ethanol sometimes results in the persistence of adolescent-typical phenotypes into adulthood. Instances of adolescent-like persistence have been seen in terms of baseline behavioral, cognitive, electrophysiological, and neuroanatomical characteristics, along with the retention of adolescent-typical sensitivities to acute ethanol challenge. These effects are generally not observed after comparable ethanol exposure in adulthood. Persistence of adolescent-typical phenotypes is not always evident, and may be related to regionally specific ethanol influences on the interplay between CNS excitation and inhibition critical for the timing of neuroplasticity. PMID:24813805

  1. Statistical Requirements For Pass-Fail Testing Of Contraband Detection Systems

    International Nuclear Information System (INIS)

    Gilliam, David M.

    2011-01-01

    Contraband detection systems for homeland security applications are typically tested for probability of detection (PD) and probability of false alarm (PFA) using pass-fail testing protocols. Test protocols usually require specified values for PD and PFA to be demonstrated at a specified level of statistical confidence CL. Based on a recent more theoretical treatment of this subject [1], this summary reviews the definition of CL and provides formulas and spreadsheet functions for constructing tables of general test requirements and for determining the minimum number of tests required. The formulas and tables in this article may be generally applied to many other applications of pass-fail testing, in addition to testing of contraband detection systems.
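
    The zero-failure case of such a pass-fail protocol can be sketched in a few lines. Under the binomial model, demonstrating PD >= p at confidence CL with n consecutive detections requires 1 - p^n >= CL, so n = ceil(ln(1-CL)/ln(p)); the general case with a number of allowed misses uses the binomial sum. A minimal sketch (function names are ours, not from the cited treatment):

```python
from math import ceil, log, comb

def min_tests_zero_failure(pd_spec: float, cl: float) -> int:
    """Minimum number of consecutive successful detections needed to
    demonstrate PD >= pd_spec at confidence level cl, assuming the
    standard binomial model with zero failures allowed."""
    return ceil(log(1.0 - cl) / log(pd_spec))

def confidence(n: int, failures: int, pd_spec: float) -> float:
    """Confidence demonstrated when at most `failures` misses are seen
    in n trials: 1 - P(<= failures misses | PD exactly pd_spec)."""
    q = 1.0 - pd_spec
    return 1.0 - sum(comb(n, k) * q**k * pd_spec**(n - k)
                     for k in range(failures + 1))
```

    For example, demonstrating PD >= 0.90 at 95% confidence with no allowed misses requires 29 consecutive detections, since 1 - 0.9^29 ≈ 0.953 while 28 trials fall just short.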

  2. Profitability of labour factor in the typical dairy farms in the world

    Directory of Open Access Journals (Sweden)

    Andrzej Parzonko

    2009-01-01

    The main purpose of the article was to analyse the productivity and profitability of the labour factor and to present the asset endowments of the typical dairy farms distinguished within IFCN (International Farm Comparison Network). Among the 103 analysed typical dairy farms from 34 countries, the highest net dairy farm profit was achieved by large farms from the USA, Australia, and New Zealand. Those farms also generated significantly higher profit per working hour than the potential wages that could be earned outside the farm. The highest asset value per 100 kg of produced milk was found in European farms (especially those with a low production scale).

  3. Memory for radio advertisements: the effect of program and typicality.

    Science.gov (United States)

    Martín-Luengo, Beatriz; Luna, Karlos; Migueles, Malen

    2013-01-01

    We examined the influence of the type of radio program on the memory for radio advertisements. We also investigated the role in memory of the typicality (high or low) of the elements of the products advertised. Participants listened to three types of programs (interesting, boring, enjoyable) with two advertisements embedded in each. After completing a filler task, the participants performed a true/false recognition test. Hits and false alarm rates were higher for the interesting and enjoyable programs than for the boring one. There were also more hits and false alarms for the high-typicality elements. The response criterion for the advertisements embedded in the boring program was stricter than for the advertisements in other types of programs. We conclude that the type of program in which an advertisement is inserted and the nature of the elements of the advertisement affect both the number of hits and false alarms and the response criterion, but not the accuracy of the memory.

  4. Daily intakes of naturally occurring radioisotopes in typical Korean foods

    International Nuclear Information System (INIS)

    Choi, Min-Seok; Lin Xiujing; Lee, Sun Ah; Kim, Wan; Kang, Hee-Dong; Doh, Sih-Hong; Kim, Do-Sung; Lee, Dong-Myung

    2008-01-01

    The concentrations of naturally occurring radioisotopes (232Th, 228Th, 230Th, 228Ra, 226Ra, and 40K) in typical Korean foods were evaluated. The daily intakes of these radioisotopes were calculated by combining their concentrations in typical Korean foods with the daily consumption rates of those foods. Daily intakes were as follows: 232Th, 0.00-0.23; 228Th, 0.00-2.04; 230Th, 0.00-0.26; 228Ra, 0.02-2.73; 226Ra, 0.01-4.37 mBq/day; and 40K, 0.01-5.71 Bq/day. The total daily intake of the naturally occurring radioisotopes measured in this study from food was 39.46 Bq/day. The total annual internal dose resulting from ingestion of radioisotopes in food was 109.83 μSv/y, and the radioisotope with the highest daily intake was 40K. These values are at the same level as those compiled in other countries.
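
    The conversion from daily intake to annual internal dose implied above is a simple product of intake rate, time, and a nuclide-specific ingestion dose coefficient. A minimal sketch follows; the coefficient used (6.2e-9 Sv/Bq, the ICRP-72 adult ingestion value for 40K) and the simplification of treating the whole 39.46 Bq/day as 40K are our illustrative assumptions, whereas the study's 109.83 µSv/y total comes from per-nuclide coefficients:

```python
def annual_committed_dose_uSv(daily_intake_Bq: float,
                              dose_coeff_Sv_per_Bq: float) -> float:
    """Committed effective dose in µSv per year from a constant daily
    ingestion rate: intake (Bq/day) x 365 days x coefficient (Sv/Bq)."""
    return daily_intake_Bq * 365.0 * dose_coeff_Sv_per_Bq * 1.0e6

# Treat the whole reported intake (39.46 Bq/day) as 40K and apply the
# ICRP-72 adult ingestion coefficient for 40K -- a deliberate
# simplification for demonstration only.
dose = annual_committed_dose_uSv(39.46, 6.2e-9)
```

    This one-nuclide shortcut gives about 89 µSv/y, the same order as (but below) the study's per-nuclide total of 109.83 µSv/y.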

  5. Verbal communication skills in typical language development: a case series.

    Science.gov (United States)

    Abe, Camila Mayumi; Bretanha, Andreza Carolina; Bozza, Amanda; Ferraro, Gyovanna Junya Klinke; Lopes-Herrera, Simone Aparecida

    2013-01-01

    The aim of the current study was to investigate verbal communication skills in children with typical language development aged between 6 and 8 years. Participants were 10 children of both genders in this age range without language alterations. A 30-minute video of each child's interaction with an adult (father and/or mother) was recorded, fully transcribed, and analyzed by two trained researchers in order to determine reliability. The recordings were analyzed according to a protocol that categorizes verbal communicative abilities, including dialogic, regulatory, narrative-discursive, and non-interactive skills. The frequency of use of each category of verbal communicative ability was analyzed (in percentage) for each subject. All subjects used dialogic and regulatory skills most, followed by narrative-discursive and non-interactive skills. This suggests that children in this age range are committed to continuing dialog, and shows that children with typical language development have more dialogic interactions during spontaneous interactions with a familiar adult.

  6. Application of Finite Layer Method in Pavement Structural Analysis

    Directory of Open Access Journals (Sweden)

    Pengfei Liu

    2017-06-01

    The finite element (FE) method has been widely used in predicting the structural responses of asphalt pavements. However, three-dimensional (3D) modeling in general-purpose FE software systems such as ABAQUS requires extensive computation and is relatively time-consuming. To address this issue, a specific computational code, EasyFEM, was developed based on the finite layer method (FLM) for analyzing the structural responses of asphalt pavements under a static load. Essentially, it is a 3D FE code that requires only a one-dimensional (1D) mesh, incorporating analytical methods and Fourier series in the other two dimensions; this significantly reduces the computational time and resources required, aided by the easy implementation of parallel computing technology. Moreover, a newly developed Element Energy Projection (EEP) method for super-convergent calculations was implemented in EasyFEM to improve the accuracy of the strain and stress solutions over the whole pavement model. The accuracy of the program is verified by comparison with results from BISAR and ABAQUS for a typical asphalt pavement structure; the predicted responses from ABAQUS and EasyFEM are in good agreement. EasyFEM with the EEP post-processing technique converges faster than ordinary EasyFEM applications, which shows that the EEP technique improves the accuracy of strains and stresses from EasyFEM. In summary, EasyFEM has the potential to provide a flexible and robust platform for the numerical simulation of asphalt pavements, and it can easily be post-processed with the EEP technique to enhance its advantages.

  7. Clinical correlates of parenting stress in children with Tourette syndrome and in typically developing children.

    Science.gov (United States)

    Stewart, Stephanie B; Greene, Deanna J; Lessov-Schlaggar, Christina N; Church, Jessica A; Schlaggar, Bradley L

    2015-05-01

    To determine the impact of tic severity in children with Tourette syndrome on parenting stress and the impact of comorbid attention-deficit hyperactivity disorder (ADHD) and obsessive-compulsive disorder (OCD) symptomatology on parenting stress in both children with Tourette syndrome and typically developing children. Children with diagnosed Tourette syndrome (n=74) and tic-free typically developing control subjects (n=48) were enrolled in a cross-sectional study. Parenting stress was greater in the group with Tourette syndrome than the typically developing group. Increased levels of parenting stress were related to increased ADHD symptomatology in both children with Tourette syndrome and typically developing children. Symptomatology of OCD was correlated with parenting stress in Tourette syndrome. Parenting stress was independent of tic severity in patients with Tourette syndrome. For parents of children with Tourette syndrome, parenting stress appears to be related to the child's ADHD and OCD comorbidity and not to the severity of the child's tic. Subthreshold ADHD symptomatology also appears to be related to parenting stress in parents of typically developing children. These findings demonstrate that ADHD symptomatology impacts parental stress both in children with and without a chronic tic disorder. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. A trajectory design method via target practice for air-breathing hypersonic vehicle

    Science.gov (United States)

    Kong, Xue; Yang, Ming; Ning, Guodong; Wang, Songyan; Chao, Tao

    2017-11-01

    There are strong coupling interactions between the airframe aerodynamics and the scramjet of an air-breathing hypersonic vehicle, and this kind of aircraft is also subject to multiple constraints, such as the allowable ranges of dynamic pressure, airflow, and fuel. On the one hand, the maneuverability of the vehicle must be balanced against the stable operation of the scramjet; on the other hand, changes in altitude and velocity must be coordinated. After describing the aircraft's indices for climbing capability, acceleration capability, and the degree of aerodynamic-propulsion coupling, this paper proposes a rapid trajectory design method based on target practice. The method aims at reducing the coupling degree: it weakens the coupling between airframe and engine during the navigation phase and satisfies the multiple constraints so as to leave a control margin and create good conditions for control implementation. Simulations show that the method can be used for several typical flight missions, such as climbing, acceleration, or both.

  9. Developing a computational tool for predicting physical parameters of a typical VVER-1000 core based on artificial neural network

    International Nuclear Information System (INIS)

    Mirvakili, S.M.; Faghihi, F.; Khalafi, H.

    2012-01-01

    Highlights: ► Thermal-hydraulic parameters of a VVER-1000 core are predicted using an artificial neural network (ANN). ► The data required for ANN training are generated with a modified COBRA-EN code and linked using MATLAB. ► Based on the ANN, the average and maximum temperatures of fuel and clad, as well as the MDNBR of each FA, are predicted. -- Abstract: The main goal of the present article is to design a computational tool to predict physical parameters of the VVER-1000 nuclear reactor core based on an artificial neural network (ANN), taking into account a detailed physical model of the fuel rods and coolant channels in a fuel assembly. Thermal characteristics of fuel, clad, and coolant are predicted using a cascade feed-forward ANN based on the linear fission power distribution, the power peaking factors of the FAs, and the hot channel factors (found in our previous neutronic calculations). A software package has been developed to prepare the data required for ANN training; it applies a modified COBRA-EN code for sub-channel analysis and links the codes using MATLAB. Based on the current estimation system, five main core thermal-hydraulic (TH) parameters are predicted: the average and maximum temperatures of fuel and clad, as well as the minimum departure from nucleate boiling ratio (MDNBR) for each FA. To obtain the best training conditions for the ANNs, a comprehensive sensitivity study examined the effects of varying the hidden neurons, hidden layers, transfer functions, and learning algorithms on the training and simulation results. Performance evaluation shows that the developed ANN can be trained to estimate the core TH parameters of a typical VVER-1000 reactor quickly and without loss of accuracy.
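
    The cascade feed-forward architecture and the COBRA-EN training data are specific to the paper; the underlying regression idea can be sketched generically. Below, a plain single-hidden-layer network (a simplification of the cascade architecture) is trained by gradient descent on synthetic stand-in data - the input/target relation is invented for illustration and carries no VVER-1000 physics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two normalized inputs, one smooth target.
# The relation below is invented for illustration only -- it is not
# the COBRA-EN training set.
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (0.7 * X[:, 0] + 0.3 * np.sin(np.pi * X[:, 1]))[:, None]

# One hidden tanh layer, trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)          # hidden activations
    err = (H @ W2 + b2) - y           # prediction error
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)  # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2) - y) ** 2))
```

    Once trained on code-generated data, such a surrogate evaluates in microseconds, which is the appeal of replacing repeated sub-channel analysis with an ANN.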

  10. CRASH SAFETY OF A TYPICAL BAY TABLE IN A RAILWAY VEHICLE

    Directory of Open Access Journals (Sweden)

    Emmanuel MATSIKA

    2015-12-01

    Increasingly, urban and high-speed trains incorporate tables (workstations) as common railway vehicle interior furniture, because passengers prefer seating at bay tables. Among table design characteristics, the most challenging is meeting crashworthiness requirements. Past accident data and sled test results have shown that, in the event of a railway vehicle frontal impact, occupants located in bay seating are exposed to chest and abdominal injuries upon contact with tables during the secondary collision. In some cases tables have tended to be structurally weak; they easily detach from the side walls and/or floor mounting and subsequently become unguided missiles that strike occupants, resulting in injuries. This paper presents an analysis of the crash performance of a typical bay table. The results provide some understanding of the table's crash safety, giving an indication of its impact aggression. Table materials are characterised using quasi-static compressive tests. In addition, experimental dynamic (impact) tests are conducted using a pendulum representing a body block (mass). The results provide information about the possible loading of the table on the occupant in the event of a crash. Contact forces are compared with chest and abdominal injury tolerance thresholds to infer the collision injury potential. Recommendations are then made on the design of bay tables to meet the “functional-strength-and-safety balance”.

  11. Lip colour affects perceived sex typicality and attractiveness of human faces.

    Science.gov (United States)

    Stephen, Ian D; McKeegan, Angela M

    2010-01-01

    The luminance contrast between facial features and facial skin is greater in women than in men, and women's use of make-up enhances this contrast. In black-and-white photographs, increased luminance contrast enhances femininity and attractiveness in women's faces, but reduces masculinity and attractiveness in men's faces. In Caucasians, much of the contrast between the lips and facial skin is in redness. Red lips have been considered attractive in women in geographically and temporally diverse cultures, possibly because they mimic vasodilation associated with sexual arousal. Here, we investigate the effects of lip luminance and colour contrast on the attractiveness and sex typicality (masculinity/femininity) of human faces. In a Caucasian sample, we allowed participants to manipulate the colour of the lips in colour-calibrated face photographs along CIELab L* (light-dark), a* (red-green), and b* (yellow-blue) axes to enhance apparent attractiveness and sex typicality. Participants increased redness contrast to enhance femininity and attractiveness of female faces, but reduced redness contrast to enhance masculinity of men's faces. Lip blueness was reduced more in female than male faces. Increased lightness contrast enhanced the attractiveness of both sexes, and had little effect on perceptions of sex typicality. The association between lip colour contrast and attractiveness in women's faces may be attributable to its association with oxygenated blood perfusion indicating oestrogen levels, sexual arousal, and cardiac and respiratory health.

  12. A possible method to produce a polarized antiproton beam at intermediate energies

    International Nuclear Information System (INIS)

    Spinka, H.; Vaandering, E.W.; Hofmann, J.S.

    1994-01-01

    A feasible and conservative design for a medium-energy polarized antiproton beam has been presented. The design requires an intense beam of unpolarized antiprotons (≥ 10^7/sec) from a typical secondary beam line in order to achieve reasonable anti-pp elastic scattering count rates. All three beam spin directions can be achieved. Methods were discussed to reverse the spin directions in modest times, and to change to a polarized proton beam if desired. It is expected that experiments with such a beam would have a profound effect on the understanding of the anti-NN interaction at intermediate energies

  13. [Simultaneous determination of 22 typical pharmaceuticals and personal care products in environmental water using ultra performance liquid chromatography- triple quadrupole mass spectrometry].

    Science.gov (United States)

    Wu, Chunying; Gu, Feng; Bai, Lu; Lu, Wenlong

    2015-08-01

    An analytical method for the simultaneous determination of 22 typical pharmaceuticals and personal care products (PPCPs) in environmental water samples was developed using ultra performance liquid chromatography-triple quadrupole mass spectrometry (UPLC-MS/MS). An Oasis HLB solid-phase extraction cartridge, methanol as the washing solution, and water containing 0.1% formic acid-methanol (7:3, v/v) as the mobile phases were selected for sample pretreatment and chromatographic separation. With the optimized sample pretreatment procedures and separation conditions, target recoveries in water ranged from 73% to 125% with relative standard deviations (RSDs) from 8.8% to 17.5%, and the linear ranges were from 2 to 2000 µg/L with correlation coefficients (R²) not less than 0.997. The method can be applied to the simultaneous determination of the 22 typical PPCPs in environmental water samples because of its low detection limits and high recoveries, and can support related research on water environmental risk assessment and control of micro-organic pollutants.
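
    The recovery and RSD figures quoted above are standard spike-recovery statistics, each a one-line calculation. A minimal sketch (the replicate values below are hypothetical, for illustration only):

```python
import statistics

def recovery_percent(measured: float, spiked: float) -> float:
    """Spike recovery: measured concentration over the known spiked amount."""
    return 100.0 * measured / spiked

def rsd_percent(values) -> float:
    """Relative standard deviation: sample std over the mean, in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate results (µg/L) for one analyte spiked at 100 µg/L.
replicates = [92.0, 88.0, 95.0, 90.0, 85.0]
```

    These replicates would report a 90% mean recovery (within the 73-125% range above) with an RSD of about 4.2%.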

  14. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    Science.gov (United States)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

    The extraction of fault features and diagnostic techniques for reciprocating compressors are currently hot research topics in the field of reciprocating machinery fault diagnosis. A large number of feature extraction and classification methods have been widely applied in related research, but practical fault alarming and diagnostic accuracy have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarming and automatic diagnosis in practical engineering is therefore an urgent task. The typical mechanical faults of reciprocating compressors are presented in this paper, and existing data from an online monitoring system are used to extract 15 types of fault feature parameters in total; the sensitive connections between faults and feature parameters are clarified using the distance evaluation technique, and the characteristic parameters sensitive to different faults are obtained. On this basis, a method based on fault feature parameters and a support vector machine (SVM) is developed and applied to practical fault diagnosis. Improved early fault warning capability is demonstrated by experiments and practical fault cases, and automatic classification of fault alarm data by the SVM achieves better diagnostic accuracy.
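
    The "distance evaluation technique" used to rank sensitive feature parameters can be sketched as a per-feature ratio of between-class distance to within-class scatter. This is our reading of the technique, not the paper's exact formulation, and the SVM classification step is omitted:

```python
import numpy as np

def distance_eval_factor(X, labels):
    """Per-feature sensitivity score: average distance between class
    means divided by average within-class scatter. A higher score means
    the feature separates the fault classes better."""
    classes = np.unique(labels)
    means = np.array([X[labels == c].mean(axis=0) for c in classes])
    # Within-class scatter: mean absolute deviation from each class mean.
    within = np.mean([np.abs(X[labels == c] - means[i]).mean(axis=0)
                      for i, c in enumerate(classes)], axis=0)
    # Between-class distance: mean pairwise gap between class means.
    k = len(classes)
    between = np.zeros(X.shape[1])
    for i in range(k):
        for j in range(i + 1, k):
            between += np.abs(means[i] - means[j])
    between /= k * (k - 1) / 2.0
    return between / within

# Synthetic demo: feature 0 shifts between the two condition classes,
# feature 1 is pure noise, so feature 0 should score higher.
rng = np.random.default_rng(1)
healthy = np.column_stack([rng.normal(0.0, 1.0, 50), rng.normal(0.0, 1.0, 50)])
faulty = np.column_stack([rng.normal(5.0, 1.0, 50), rng.normal(0.0, 1.0, 50)])
X = np.vstack([healthy, faulty])
labels = np.array([0] * 50 + [1] * 50)
scores = distance_eval_factor(X, labels)
```

    Features ranked this way can then be fed to any classifier; the paper uses an SVM for the final fault classification.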

  15. Typical and Atypical Dementia Family Caregivers: Systematic and Objective Comparisons

    Science.gov (United States)

    Nichols, Linda O.; Martindale-Adams, Jennifer; Burns, Robert; Graney, Marshall J.; Zuber, Jeffrey

    2011-01-01

    This systematic, objective comparison of typical (spouse, children) and atypical (in-law, sibling, nephew/niece, grandchild) dementia family caregivers examined demographic, caregiving and clinical variables. Analysis was of 1,476 caregivers, of whom 125 were atypical, from the Resources for Enhancing Alzheimer's Caregivers Health (REACH I and II)…

  16. Geophysical methods for monitoring soil stabilization processes

    Science.gov (United States)

    Saneiyan, Sina; Ntarlagiannis, Dimitrios; Werkema, D. Dale; Ustra, Andréa

    2018-01-01

    Soil stabilization involves methods used to turn unconsolidated and unstable soil into a stiffer, consolidated medium that can support engineered structures, alter permeability, change subsurface flow, or immobilize contamination through mineral precipitation. Among the variety of available methods, carbonate precipitation is a very promising one, especially when it is induced through common soil-borne microbes (MICP - microbially induced carbonate precipitation). Such microbially mediated precipitation has the added benefit of not harming the environment, whereas other methods can be environmentally detrimental. Carbonate precipitation, typically in the form of calcite, is a naturally occurring process that can be manipulated to deliver the expected soil strengthening results or permeability changes. This study investigates the ability of spectral induced polarization (SIP) and shear-wave velocity measurements to monitor calcite-driven soil strengthening processes. The results support the use of these geophysical methods as soil strengthening characterization and long-term monitoring tools, a requirement for viable soil stabilization projects. Both tested methods are sensitive to calcite precipitation, with SIP offering additional information related to the long-term stability of the precipitated carbonate. Carbonate precipitation was confirmed with direct methods, such as direct sampling and scanning electron microscopy (SEM). This study advances our understanding of soil strengthening processes and permeability alterations, and is a crucial step toward the use of geophysical methods as monitoring tools in microbially induced soil alteration through carbonate precipitation.

  17. Hybrid SN Laplace Transform Method For Slab Lattice Calculations

    International Nuclear Information System (INIS)

    Segatto, Cynthia F.; Vilhena, Marco T.; Zani, Jose H.; Barros, Ricardo C.

    2008-01-01

In typical lattice cells where a highly absorbing, small fuel element is embedded in the moderator, a large weakly absorbing medium, high-order transport methods become unnecessary. In this paper we describe a hybrid discrete ordinates (SN) method for slab lattice calculations. This hybrid SN method combines the convenience of a low-order SN method in the moderator with a high-order SN method in the fuel. We use special fuel-moderator interface conditions based on an approximate angular flux interpolation analytical method and the Laplace transform (LTSN) numerical method to calculate the neutron flux distribution and the thermal disadvantage factor. We present numerical results for a range of typical model problems. (authors)

  18. Low-level waste management in the South. Task 4.2 - long-term care requirements

    International Nuclear Information System (INIS)

    1983-01-01

This paper provides an analysis of the long-term care requirements of low-level radioactive waste disposal facilities. Among the topics considered are the technical requirements for long-term care, the experiences of the three inactive and three active commercial disposal facilities concerning perpetual care and maintenance, and the financial management of a perpetual care fund. In addition, certain recommendations for the establishment of a perpetual care fund are provided. The predominant method of disposing of low-level radioactive wastes is shallow land burial. After studying alternative methods of disposal, the U.S. Nuclear Regulatory Commission (NRC) concluded that there are no compelling reasons for abandoning this disposal method. Of the 22 shallow land burial facilities in the U.S., the federal government maintains 14 active and two inactive disposal sites. There are three active (Barnwell, South Carolina; Hanford, Washington; and Beatty, Nevada) and three inactive commercial disposal facilities (Maxey Flats, Kentucky; Sheffield, Illinois; and West Valley, New York). The life of a typical facility can be broken into five phases: preoperational, operational, closure, postclosure observation and maintenance, and institutional control. Long-term care of a shallow land burial facility will begin with the disposal site closure phase and continue through the postclosure observation and maintenance and institutional control phases. Since the postclosure observation and maintenance phase will last about five years and the institutional control phase about 100 years, the importance of a well planned long-term care program is apparent. 26 references, 1 table

  19. A Survey of Requirements Engineering Methods for Pervasive Services

    NARCIS (Netherlands)

    Kolos, L.; van Eck, Pascal; Wieringa, Roelf J.

    Designing and deploying ubiquitous computing systems, such as those delivering large-scale mobile services, still requires large-scale investments in both development effort as well as infrastructure costs. Therefore, in order to develop the right system, the design process merits a thorough

  20. A two-dimensionally coincident second difference cosmic ray spike removal method for the fully automated processing of Raman spectra.

    Science.gov (United States)

    Schulze, H Georg; Turner, Robin F B

    2014-01-01

Charge-coupled device detectors are vulnerable to cosmic rays that can contaminate Raman spectra with positive-going spikes. Because spikes can adversely affect spectral processing and data analyses, they must be removed. Although both hardware-based and software-based spike removal methods exist, they typically require parameter and threshold specification dependent on well-considered user input. Here, we present a fully automated spike removal algorithm that proceeds without requiring user input. It is minimally dependent on sample attributes, and those that are required (e.g., the standard deviation of spectral noise) can be determined with other fully automated procedures. At the core of the method is the identification and location of spikes with coincident second derivatives along both the spectral and spatiotemporal dimensions of two-dimensional datasets. The method can be applied to spectra that are relatively inhomogeneous because it provides fairly effective and selective targeting of spikes resulting in minimal distortion of spectra. Relatively effective spike removal obtained with full automation could provide substantial benefits to users where large numbers of spectra must be processed.
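The coincident second-difference idea in this abstract can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' published algorithm: the function name, the threshold parameter z, and the neighbour-interpolation replacement rule are assumptions.

```python
import numpy as np

def remove_spikes(spectra, noise_sd, z=5.0):
    """Flag points whose second difference is strongly negative along BOTH
    the spectral axis and the repeat (spatiotemporal) axis, then replace
    them by interpolating their spectral neighbours.

    spectra  : 2-D array, shape (n_repeats, n_channels)
    noise_sd : estimated standard deviation of the spectral noise
    z        : detection threshold in noise-sd units (assumed value)
    """
    d2_spec = np.zeros_like(spectra)
    d2_time = np.zeros_like(spectra)
    # second differences along each dimension (interior points only)
    d2_spec[:, 1:-1] = spectra[:, :-2] - 2 * spectra[:, 1:-1] + spectra[:, 2:]
    d2_time[1:-1, :] = spectra[:-2, :] - 2 * spectra[1:-1, :] + spectra[2:, :]
    # a positive-going spike produces a strongly negative second difference;
    # requiring coincidence in both dimensions gives selectivity
    thresh = -z * noise_sd
    mask = (d2_spec < thresh) & (d2_time < thresh)
    cleaned = spectra.copy()
    for r, c in zip(*np.nonzero(mask)):
        lo, hi = max(c - 1, 0), min(c + 1, spectra.shape[1] - 1)
        cleaned[r, c] = 0.5 * (spectra[r, lo] + spectra[r, hi])
    return cleaned, mask
```

The two-dimensional coincidence is what distinguishes a cosmic-ray spike from a narrow Raman band: a real band is sharp along the spectral axis but reproduced across repeated acquisitions, so it fails the spatiotemporal test.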

  1. Adjunctive Treatment of Acute Mania with Risperidone versus Typical Antipsychotics: A Retrospective Study

    Directory of Open Access Journals (Sweden)

    Jui-Hsiu Tsai

    2005-12-01

Few studies have directly compared atypical antipsychotics (e.g. risperidone) with typical antipsychotics as adjunctive therapy in patients hospitalized for acute mania, especially during a lengthy hospital stay. Our retrospective, case-controlled study is a chart review of 64 patients with Diagnostic and Statistical Manual of Mental Disorders, 4th edition, defined bipolar I disorder (current episode, mania). Patients were divided into two groups according to the adjunctive medications used: the risperidone group (mood stabilizers plus risperidone) and the control group (mood stabilizers plus typical antipsychotics). Outcome at discharge, medications, adverse drug effects, and length of hospital stay were compared between groups, controlling for gender, age, number of prior admissions, and duration of illness. Results indicated no statistically significant differences between groups in the controlled factors, Global Assessment of Functioning and Clinical Global Impression-Improvement scores, and adverse drug events. Patients in the risperidone group used significantly lower doses of trihexyphenidyl than those in the control group (p < 0.05). Patients treated with risperidone had a shorter hospital stay than those treated with typical antipsychotics (p < 0.01). In conclusion, antipsychotics are effective as adjunctive agents in the treatment of acute mania. The use of risperidone, in particular, decreases the need for anticholinergics and may lead to a shorter hospital stay compared with typical antipsychotics.

  2. Typical and atypical presentations of aspergilloma

    International Nuclear Information System (INIS)

    Villajos, M.; Darnell, A.; Gallardo, X.; Castaner, E.; Mata, J. M.; Paedavila, E.

    1999-01-01

To show the different forms of radiological presentation of aspergilloma, emphasizing the importance of recognizing the atypical forms. The explorations of 11 patients with aspergilloma performed between 1993 and 1997 were reviewed retrospectively. These patients were studied using conventional X-rays and computed tomography (CT); typical and atypical radiological findings were observed. In two patients who presented recurrent hemoptysis, percutaneous instillation of amphotericin B was carried out under CT guidance. Of the 11 patients, two were female and nine male. In eight of the cases the radiological findings showed an intracavitary lesion at different evolutionary stages, while in three of the cases there was progressive pleural thickening. In the two patients treated percutaneously, no significant radiological changes were observed; however, neither of them showed hemoptysis again. Pleural thickening adjacent to the cavity and/or thickening of the cavity wall are atypical radiological presentations of aspergilloma that can accompany or precede the appearance of this illness. (Author) 7 refs

  3. The influence of gender and gender typicality on autobiographical memory across event types and age groups.

    Science.gov (United States)

    Grysman, Azriel; Fivush, Robyn; Merrill, Natalie A; Graci, Matthew

    2016-08-01

    Gender differences in autobiographical memory emerge in some data collection paradigms and not others. The present study included an extensive analysis of gender differences in autobiographical narratives. Data were collected from 196 participants, evenly split by gender and by age group (emerging adults, ages 18-29, and young adults, ages 30-40). Each participant reported four narratives, including an event that had occurred in the last 2 years, a high point, a low point, and a self-defining memory. Additionally, all participants completed self-report measures of masculine and feminine gender typicality. The narratives were coded along six dimensions-namely coherence, connectedness, agency, affect, factual elaboration, and interpretive elaboration. The results indicated that females expressed more affect, connection, and factual elaboration than males across all narratives, and that feminine typicality predicted increased connectedness in narratives. Masculine typicality predicted higher agency, lower connectedness, and lower affect, but only for some narratives and not others. These findings support an approach that views autobiographical reminiscing as a feminine-typed activity and that identifies gender differences as being linked to categorical gender, but also to one's feminine gender typicality, whereas the influences of masculine gender typicality were more context-dependent. We suggest that implicit gendered socialization and more explicit gender typicality each contribute to gendered autobiographies.

  4. Cross-modal integration in the brain is related to phonological awareness only in typical readers, not in those with reading difficulty

    Directory of Open Access Journals (Sweden)

Chris McNorgan

    2013-07-01

Fluent reading requires successfully mapping between visual orthographic and auditory phonological representations and is thus an intrinsically cross-modal process, though reading difficulty has often been characterized as a phonological deficit. However, recent evidence suggests that orthographic information influences phonological processing in typically developing (TD) readers, but that this effect may be blunted in those with reading difficulty (RD), suggesting that the core deficit underlying reading difficulty may be a failure to integrate orthographic and phonological information. Twenty-six (13 TD and 13 RD) children between 8 and 13 years of age participated in a functional magnetic resonance imaging (fMRI) experiment designed to assess the role of phonemic awareness in cross-modal processing. Participants completed a rhyme judgment task for word pairs presented unimodally (auditory only) and cross-modally (auditory followed by visual). For typically developing children, activations in a network of regions associated with processing and integrating phonology and orthography (i.e. superior temporal sulcus and fusiform gyrus) were correlated with elision, a task that is particularly sensitive to phonemic awareness, but this correlation was found only in the cross-modal task. Elision was not correlated with activation for children with reading difficulty or for either group in the unimodal task. The results suggest that elision taps both phonemic awareness and cross-modal integration in typically developing readers, and that these processes are decoupled in children with reading difficulty.

  5. Aging Management Plan for a Typical Research Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Ebrahimi, Mahsa; Nazififard, Mohammad; Suh, Kune Y. [Seoul National University, Seoul (Korea, Republic of)

    2012-05-15

Development of an aging management plan (AMP) is a crucial contributor to maintaining reactor safety and controlling the risk of degradation of the concrete reactor building of a nuclear power plant. The design, operation and utilization of a research reactor (RR) fundamentally differ from those of power reactors. An AMP should nonetheless be in place on account of the radioactive materials and radiation risks involved, mainly because the RR is deemed to be used as an experiment itself or to conduct separate experiments during its operation. The AMP aims to determine the requisites for specific structural concrete components of the reactor building that entail regular inspections and maintenance to ensure safe and reliable operation of the plant. The safety of a RR necessitates provision being made in its design to facilitate aging management. Aging management of an RR's structures is vital to safety, to ensure continued adequacy of the safety level, reliable operation of the reactor, and compliance with the operational limits and conditions. Moreover, engineering systems should be qualified to meet the functional requirements for which they were designed, with aging and environmental conditions taken into account for all situations and at all times. This study aims to present an integrated methodology for the application of an AMP for the concrete of the reactor building of a typical RR. For the purpose of safety analysis, geometry and ambient conditions were taken from a 5 MW pool-type, light-water moderated, heterogeneous, solid-fuel RR in which the water is also used for cooling and shielding (Fig. 1). The reactor core is immersed in either section of a two-section concrete pool filled with water. This paper makes available background information regarding the document and the strategy developed to manage potential degradation of the reactor building concrete as well as specific programs and preventive and corrective

  6. Forming a method mindset : The role of knowledge and preference in facilitating heuristic method usage in design

    NARCIS (Netherlands)

    Daalhuizen, J.J.; Person, F.E.O.K.; Gattol, V.

    2013-01-01

    Both systematic and heuristic methods are common practice when designing. Yet, in teaching students how to design, heuristic methods are typically only granted a secondary role. So, how do designers and students develop a mindset for using heuristic methods? In this paper, we study how prior

  7. Automatic examination of nuclear reactor vessels with focused search units. Status and typical application to inspections performed in accordance with ASME code

    International Nuclear Information System (INIS)

    Verger, B.; Saglio, R.

    1981-05-01

The use of focused search units in nuclear reactor vessel examinations has significantly increased the capability of flaw indication detection and characterization. These search units in particular allow more accurate sizing of indications and more efficient follow-up of their history. In this respect, they are a unique tool in the area of safety and reliability of installations. It was this type of search unit which was adopted to perform the examinations required within the scope of in-service inspections of all P.W.R. reactors of the French nuclear program. This paper summarizes the results gathered through the 41 examinations performed over the last five years. A typical application of focused search units in automated inspections performed in accordance with ASME code requirements on P.W.R. nuclear reactor vessels is then described

  8. Maternal Perceptions of Mealtimes: Comparison of Children with Autism Spectrum Disorder and Children with Typical Development

    Directory of Open Access Journals (Sweden)

    Terry K. Crowe

    2017-03-01

Background: This study examined mealtime techniques reported by mothers of preschool children with Autism Spectrum Disorder (ASD) and mothers of children with typical development (TD). The mothers' perceived levels of success and sources of information for mealtime techniques were also reported. Method: The participants were 24 mothers of children with ASD (ASD group) and 24 mothers of children with typical development (TD group) between 3 and 6 years of age. The Background Information Survey and the Mealtime Techniques Interview were administered. Results: The ASD group used significantly more techniques in the categories of food appearances, restrictive diets, and vitamin/supplement therapy. The TD group used significantly more techniques in the categories of etiquette and negative consequences. Both groups rated techniques similarly, with no significant difference between the perceived rates of success for each category. Finally, 91% of mealtime techniques for both groups were parent-generated, with few from professionals. Conclusion: The results showed that many of the mothers in both groups used similar mealtime techniques and most implemented techniques that were self-generated, with generally moderate perception of success. Occupational therapists should collaboratively work with families to increase mealtime success by recommending interventions that are individualized and family centered.

  9. Genetic structure of typical and atypical populations of Candida albicans from Africa.

    Science.gov (United States)

    Forche, A; Schönian, G; Gräser, Y; Vilgalys, R; Mitchell, T G

    1999-11-01

    Atypical isolates of the pathogenic yeast Candida albicans have been reported with increasing frequency. To investigate the origin of a set of atypical isolates and their relationship to typical isolates, we employed a combination of molecular phylogenetic and population genetic analyses using rDNA sequencing, PCR fingerprinting, and analysis of co-dominant DNA nucleotide polymorphisms to characterize the population structure of one typical and two atypical populations of C. albicans from Angola and Madagascar. The extent of clonality and recombination was assessed in each population. The analyses revealed that the structure of all three populations of C. albicans was predominantly clonal but, as in previous studies, there was also evidence for recombination. Allele frequencies differed significantly between the typical and the atypical populations, suggesting very low levels of gene flow between them. However, allele frequencies were quite similar in the two atypical C. albicans populations, suggesting that they are closely related. Phylogenetic analysis of partial sequences encoding the nuclear 26S rDNA demonstrated that all three populations belong to a single monophyletic group, which includes the type strain of C. albicans. Copyright 1999 Academic Press.

  10. Probabilistic Requirements (Partial) Verification Methods Best Practices Improvement. Variables Acceptance Sampling Calculators: Empirical Testing. Volume 2

    Science.gov (United States)

    Johnson, Kenneth L.; White, K. Preston, Jr.

    2012-01-01

    The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
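The variables-sampling idea evaluated in this assessment can be illustrated with the standard one-sided k-method criterion. This is a generic textbook form, not the NESC calculators themselves: the function name and the demo numbers are invented, and in practice the acceptability constant k would come from the chosen sampling plan (e.g. published variables-sampling tables), assuming approximately normal measurements.

```python
from statistics import mean, stdev

def accept_lot(sample, usl, k):
    """One-sided acceptance sampling by variables (k-method):
    accept the lot if the margin between the upper specification limit
    and the sample mean is at least k sample standard deviations.
    """
    xbar = mean(sample)
    s = stdev(sample)  # sample standard deviation (n - 1 denominator)
    return (usl - xbar) / s >= k

# Example with an illustrative acceptability constant k = 1.5:
# mean = 10.0, s ~ 0.158, margin ~ 12.6 sd against USL = 12.0 -> accept
accept_lot([9.8, 10.1, 9.9, 10.0, 10.2], usl=12.0, k=1.5)  # → True
```

The appeal over attributes sampling is that each measured value carries distributional information, so far fewer samples are needed for the same protection, which is exactly the resource argument the abstract makes.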

  11. New computational methods used in the lattice code DRAGON

    International Nuclear Information System (INIS)

    Marleau, G.; Hebert, A.; Roy, R.

    1992-01-01

The lattice code DRAGON is used to perform transport calculations inside cells and assemblies for multidimensional geometry using the collision probability method, including the interface current and J± techniques. Typical geometries that can be treated using this code include CANDU 2-dimensional clusters, CANDU 3-dimensional assemblies, and pressurized water reactor (PWR) rectangular and hexagonal assemblies. It contains a self-shielding module for the treatment of microscopic cross section libraries and a depletion module for burnup calculations. DRAGON was written in a modular form so as to easily accept new collision probability options and make them readily available to all the modules that require collision probability matrices, such as the self-shielding module, the flux solution module and the homogenization module. In this paper the authors present an overview of DRAGON and discuss some of the methods that were implemented in DRAGON in order to improve its performance

  12. Requirements for a geometry programming language for CFD applications

    Science.gov (United States)

    Gentry, Arvel E.

    1992-01-01

    A number of typical problems faced by the aerodynamicist in using computational fluid dynamics are presented to illustrate the need for a geometry programming language. The overall requirements for such a language are illustrated by examples from the Boeing Aero Grid and Paneling System (AGPS). Some of the problems in building such a system are also reviewed along with suggestions as to what to look for when evaluating new software problems.

  13. A novel STXBP1 mutation causes typical Rett syndrome in a Japanese girl.

    Science.gov (United States)

    Yuge, Kotaro; Iwama, Kazuhiro; Yonee, Chihiro; Matsufuji, Mayumi; Sano, Nozomi; Saikusa, Tomoko; Yae, Yukako; Yamashita, Yushiro; Mizuguchi, Takeshi; Matsumoto, Naomichi; Matsuishi, Toyojiro

    2018-06-01

    Rett syndrome (RTT) is a neurodevelopmental disorder mostly caused by mutations in Methyl-CpG-binding protein 2 (MECP2); however, mutations in various other genes may lead to RTT-like phenotypes. Here, we report the first case of a Japanese girl with RTT caused by a novel syntaxin-binding protein 1 (STXBP1) frameshift mutation (c.60delG, p.Lys21Argfs*16). She showed epilepsy at one year of age, regression of acquired psychomotor abilities thereafter, and exhibited stereotypic hand and limb movements at 3 years of age. Her epilepsy onset was earlier than is typical for RTT patients. However, she fully met the 2010 diagnostic criteria of typical RTT. STXBP1 mutations cause early infantile epileptic encephalopathy (EIEE), various intractable epilepsies, and neurodevelopmental disorders. However, the case described here presented a unique clinical presentation of typical RTT without EIEE and a novel STXBP1 mutation. Copyright © 2018 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  14. Guiding role of typical cases in clinical training for ophthalmology professional degree graduate students

    Directory of Open Access Journals (Sweden)

    Zhe Wang

    2014-05-01

With the change in the concept of graduate enrollment, the proportion of clinical medicine professional degree graduate students being recruited is growing, and the training of professional degree graduate students increasingly focuses on practice. In our experience of clinical training for ophthalmology professional degree graduate students, increasing ward clinical practice time is important. Particular emphasis is placed on the guiding role of typical cases: each professional group selects typical cases reflecting its professional specialty to instruct the graduate students, training their clinical diagnosis and treatment abilities and their microsurgical techniques. Through clinical medical writing, record summaries, and literature reviews, professional degree graduate students can expand their knowledge structure and practice their thesis writing skills. Based on typical cases and expanded knowledge coverage, they can improve their ability to diagnose and treat special disease cases. In this rigorous training system, professional degree graduate students can learn by analogy, focusing on typical cases to gain the most intuitive panoramic understanding of the diseases, master the most clinical knowledge in the minimum time, enrich their clinical experience, and lay the foundation for future work and assessment.

  15. Gender Norm Salience Across Middle Schools: Contextual Variations in Associations Between Gender Typicality and Socioemotional Distress.

    Science.gov (United States)

    Smith, Danielle Sayre; Schacter, Hannah L; Enders, Craig; Juvonen, Jaana

    2018-05-01

Youth who feel they do not fit with gender norms frequently experience peer victimization and socioemotional distress. To gauge differences between schools, the current study examined the longitudinal effects of school-level gender norm salience-a within-school association between gender typicality and peer victimization-on socioemotional distress across 26 ethnically diverse middle schools (boys: n = 2607; girls: n = 2805). Boys (but not girls) reporting lower gender typicality experienced more loneliness and social anxiety in schools with more salient gender norms, even when accounting for both individual and school level victimization. Greater gender norm salience also predicted increased depressed mood among boys regardless of gender typicality. These findings suggest particular sensitivity among boys to environments in which low gender typicality is sanctioned.

  16. Comparative evaluation of three cognitive error analysis methods through an application to accident management tasks in NPPs

    International Nuclear Information System (INIS)

    Jung, Won Dea; Kim, Jae Whan; Ha, Jae Joo; Yoon, Wan C.

    1999-01-01

This study was performed to comparatively evaluate selected Human Reliability Analysis (HRA) methods which mainly focus on cognitive error analysis, and to derive the requirements of a new human error analysis (HEA) framework for Accident Management (AM) in nuclear power plants (NPPs). In order to achieve this goal, we carried out a case study of human error analysis on an AM task in NPPs. In the study we evaluated three cognitive HEA methods, HRMS, CREAM and PHECA, which were selected through a review of the seven currently available cognitive HEA methods. The task of reactor cavity flooding was chosen for the application study as one of the typical AM tasks in NPPs. From the study, we derived seven requirement items for a new HEA method for AM in NPPs. We could also evaluate the applicability of the three cognitive HEA methods to AM tasks. CREAM is considered more appropriate than the others for the analysis of AM tasks, whereas PHECA is regarded as less appropriate both as a predictive HEA technique and for the analysis of AM tasks. In addition, the advantages and disadvantages of each method are described. (author)

  17. Reduced power processor requirements for the 30-cm diameter HG ion thruster

    Science.gov (United States)

    Rawlin, V. K.

    1979-01-01

    The characteristics of power processors strongly impact the overall performance and cost of electric propulsion systems. A program was initiated to evaluate simplifications of the thruster-power processor interface requirements. The power processor requirements are mission dependent with major differences arising for those missions which require a nearly constant thruster operating point (typical of geocentric and some inbound planetary missions) and those requiring operation over a large range of input power (such as outbound planetary missions). This paper describes the results of tests which have indicated that as many as seven of the twelve power supplies may be eliminated from the present Functional Model Power Processor used with 30-cm diameter Hg ion thrusters.

  18. Use of a preconditioned Bi-conjugate gradient method for hybrid plasma stability analysis

    International Nuclear Information System (INIS)

    Mikic, Z.; Morse, E.C.

    1985-01-01

The numerical stability analysis of compact toroidal plasmas using implicit time differencing requires the solution of a set of coupled, 2-dimensional, elliptic partial differential equations for the field quantities at every timestep. When the equations are spatially finite-differenced and written in matrix form, the resulting matrix is large, sparse, complex, non-Hermitian, and indefinite. The use of the preconditioned bi-conjugate gradient method for solving these equations is discussed. The effect of block-diagonal preconditioning and incomplete block-LU preconditioning on the convergence of the method is investigated. For typical matrices arising in our studies, the eigenvalue spectra of the original and preconditioned matrices are calculated as an illustration of the effectiveness of the preconditioning. We show that the preconditioned bi-conjugate gradient method converges more rapidly than the conjugate gradient method applied to the normal equations, and that it is an effective iterative method for the class of non-Hermitian, indefinite problems of interest
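A compact sketch may make the structure of the preconditioned bi-conjugate gradient iteration concrete. This is the classical textbook recursion in its transpose (bilinear) complex variant, with a simple Jacobi diagonal preconditioner standing in for the block-diagonal and incomplete block-LU preconditioners studied in the paper; the function name, defaults, and test matrix are invented for illustration.

```python
import numpy as np

def bicg_jacobi(A, b, maxiter=200, rtol=1e-12):
    """Bi-conjugate gradient iteration (transpose/bilinear variant) with a
    Jacobi preconditioner M = diag(A). A may be complex and non-Hermitian;
    the shadow sequence uses A^T and an unconjugated inner product.
    """
    d = np.diag(A)                       # Jacobi preconditioner
    x = np.zeros_like(b, dtype=complex)
    r = b - A @ x                        # residual of the primary system
    rt = r.copy()                        # shadow residual for the A^T system
    p = pt = None
    rho_prev = 1.0
    bnrm = np.linalg.norm(b)
    for _ in range(maxiter):
        if np.linalg.norm(r) <= rtol * bnrm:
            break
        z = r / d                        # solve M z  = r
        zt = rt / d                      # solve M^T z~ = r~ (M is diagonal)
        rho = np.dot(rt, z)              # bilinear form, no conjugation
        if p is None:
            p, pt = z.copy(), zt.copy()
        else:
            beta = rho / rho_prev
            p = z + beta * p
            pt = zt + beta * pt
        q = A @ p
        qt = A.T @ pt                    # shadow system uses the transpose
        alpha = rho / np.dot(pt, q)
        x += alpha * p
        r -= alpha * q
        rt -= alpha * qt
        rho_prev = rho
    return x
```

Unlike conjugate gradients on the normal equations, the work per step is one product with A and one with its transpose, and the iteration acts on the original operator rather than on A^H A, which is the efficiency argument the abstract makes for indefinite non-Hermitian systems.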

  19. Stability analysis and time-step limits for a Monte Carlo Compton-scattering method

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.

    2010-01-01

    A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.

  20. Models for the estimation of diffuse solar radiation for typical cities in Turkey

    International Nuclear Information System (INIS)

    Bakirci, Kadir

    2015-01-01

In solar energy applications, the diffuse solar radiation component is required. Solar radiation data, particularly the diffuse component, are not readily available because of the high cost of measurements as well as difficulties in their maintenance and calibration. In this study, new empirical models for predicting the monthly mean diffuse solar radiation on a horizontal surface for typical cities in Turkey are established. To this end, fifteen empirical models from studies in the literature are used. In addition, eighteen diffuse solar radiation models are developed using long-term sunshine duration and global solar radiation data. The accuracy of the developed models is evaluated in terms of different statistical indicators. It is found that the best performance is achieved by the third-order polynomial model based on sunshine duration and clearness index. - Highlights: • Diffuse radiation is given as a function of clearness index and sunshine fraction. • The diffuse radiation is an important parameter in solar energy applications. • The diffuse radiation measurement is for limited periods and it is very rare. • The new models can be used to estimate monthly average diffuse solar radiation. • The accuracy of the models is evaluated on the basis of statistical indicators
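The kind of model the study selects, a third-order polynomial in clearness index and sunshine fraction, can be sketched as an ordinary least-squares fit. All numeric values below are synthetic placeholders, not the Turkish station data, and the coefficients are invented so the fit can be checked; with real data the diffuse fraction would come from measured diffuse-to-global radiation ratios.

```python
import numpy as np

# Synthetic monthly means (illustrative values only): clearness index
# K_t = H/H0 and sunshine fraction s = n/N for twelve months.
Kt = np.array([0.38, 0.42, 0.47, 0.51, 0.55, 0.60,
               0.63, 0.61, 0.56, 0.49, 0.42, 0.37])
s  = np.array([0.28, 0.37, 0.40, 0.53, 0.56, 0.68,
               0.71, 0.69, 0.62, 0.45, 0.39, 0.27])

# Third-order polynomial model in both predictors:
#   K_d = a0 + a1*Kt + a2*Kt^2 + a3*Kt^3 + b1*s + b2*s^2 + b3*s^3
X = np.column_stack([np.ones_like(Kt), Kt, Kt**2, Kt**3, s, s**2, s**3])

# Generate K_d from assumed coefficients so the fit is verifiable exactly.
c_true = np.array([1.0, -0.6, -0.5, 0.2, -0.3, 0.1, -0.05])
Kd = X @ c_true

# Least-squares fit and in-sample prediction of the diffuse fraction.
coef, *_ = np.linalg.lstsq(X, Kd, rcond=None)
Kd_pred = X @ coef
```

Model comparison in the study then reduces to computing statistical indicators (e.g. RMSE, MBE) between `Kd_pred` and the measured diffuse fractions for each candidate model form.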

  1. Summary of typical routine maintenance activities at Tokai Reprocessing Plant. Supplement (March, 2002)

    International Nuclear Information System (INIS)

    2002-03-01

Typical maintenance activities, such as replacement of worn-out parts and cleaning of filter elements, routinely performed during steady operation are summarized. The Summary of Typical Routine Maintenance Activities at Tokai Reprocessing Plant (JNC TN 8450 2001-006) was previously prepared in September 2001. The purpose of this summary is to give an elementary understanding of these activities to people who are responsible for explaining them to the public. The present summary of the same kind is prepared as a supplement to the previous one. (author)

  2. Why don't we ask? A complementary method for assessing the status of great apes.

    Directory of Open Access Journals (Sweden)

    Erik Meijaard

    Full Text Available Species conservation is difficult. Threats to species are typically high and immediate. Effective solutions for counteracting these threats, however, require synthesis of high quality evidence, appropriately targeted activities, typically costly implementation, and rapid re-evaluation and adaptation. Conservation management can be ineffective if there is insufficient understanding of the complex ecological, political, socio-cultural, and economic factors that underlie conservation threats. When information about these factors is incomplete, conservation managers may be unaware of the most urgent threats or unable to envision all consequences of potential management strategies. Conservation research aims to address the gap between what is known and what knowledge is needed for effective conservation. Such research, however, generally addresses a subset of the factors that underlie conservation threats, producing a limited, simplistic, and often biased view of complex, real world situations. A combination of approaches is required to provide the complete picture necessary to engage in effective conservation. Orangutan conservation (Pongo spp.) offers an example: standard conservation assessments employ survey methods that focus on ecological variables, but do not usually address the socio-cultural factors that underlie threats. Here, we evaluate a complementary survey method based on interviews of nearly 7,000 people in 687 villages in Kalimantan, Indonesia. We address areas of potential methodological weakness in such surveys, including sampling and questionnaire design, respondent biases, statistical analyses, and sensitivity of resultant inferences. We show that interview-based surveys can provide cost-effective and statistically robust methods to better understand poorly known populations of species that are relatively easily identified by local people. Such surveys provide reasonably reliable estimates of relative presence and relative

  3. 40 CFR 136.6 - Method modifications and analytical requirements.

    Science.gov (United States)

    2010-07-01

    ... modifications and analytical requirements. (a) Definitions of terms used in this section. (1) Analyst means the..., oil and grease, total suspended solids, total phenolics, turbidity, chemical oxygen demand, and.... Except as set forth in paragraph (b)(3) of this section, an analyst may modify an approved test procedure...

  4. Initial circulatory response to active standing in Parkinson's disease without typical orthostatic hypotension

    Directory of Open Access Journals (Sweden)

    Guillermo Delgado

    2014-03-01

    Full Text Available While the circulatory response to orthostatic stress has been already evaluated in Parkinson's disease patients without typical orthostatic hypotension (PD-TOH), there is an initial response to the upright position which is uniquely associated with active standing (AS). We sought to assess this response and to compare it to that seen in young healthy controls (YHC). Method: In 10 PD-TOH patients (8 males, 60±7 years, Hoehn and Yahr ≤3) the changes in systolic blood pressure (SBP) and heart rate that occur in the first 30 seconds (sec) of standing were examined. Both parameters were non-invasively and continuously monitored using the volume-clamp method by Peñáz and the Physiocal criteria by Wesseling. The choice of sample points was prompted by the results of previous studies. These sample points were compared to those of 10 YHC (8 males, 32±8 years). Results: The main finding of the present investigation was an increased time between the AS onset and SBP overshoot in the PD-TOH group (24±4 vs. 19±3 sec; p<0.05). Conclusion: This delay might reflect a prolonged latency in the baroreflex-mediated vascular resistance response, but more studies are needed to confirm this preliminary hypothesis.

  5. Some basic requirements for the application of electrokinetic methods for the reconstruction of masonry with rising humidity

    Energy Technology Data Exchange (ETDEWEB)

    Friese, P; Jacobasch, H J; Boerner, M

    1987-12-01

    Based on theoretical considerations concerning electro-osmosis, the most important requirements for the application of electrokinetic methods for drying masonry with rising humidity are described. Samples of brick masonry (brick and mortar) were examined by means of an electrokinetic measuring system (EKM) with different electrolytes (CaSO/sub 4/ and KCl) being used at different concentrations. It was found for all samples that the zeta potential has a negative sign and that its absolute value approaches zero with increasing electrolyte concentration. Based on these measurements, an upper limit of the electrolyte concentration of 0.1 Mol/liter is established for the application of electrokinetic methods for drying masonry.

  6. An improved UHPLC-UV method for separation and quantification of carotenoids in vegetable crops.

    Science.gov (United States)

    Maurer, Megan M; Mein, Jonathan R; Chaudhuri, Swapan K; Constant, Howard L

    2014-12-15

    Carotenoid identification and quantitation are critical for the development of plant varieties with improved nutrition. Industrial analysis of carotenoids is typically carried out on multiple crops with potentially thousands of samples per crop, placing critical demands on the speed and broad utility of the analytical methods. Current chromatographic methods for carotenoid analysis have had limited industrial application due to their low throughput, requiring up to 60 min for complete separation of all compounds. We have developed an improved UHPLC-UV method that resolves all major carotenoids found in broccoli (Brassica oleracea L. var. italica), carrot (Daucus carota), corn (Zea mays), and tomato (Solanum lycopersicum). The chromatographic method is completed in 13.5 min, allowing for the resolution of the 11 carotenoids of interest, including the structural isomers lutein/zeaxanthin and α-/β-carotene. Additional minor carotenoids have also been separated and identified, demonstrating the utility of this method across major commercial food crops. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Effectiveness and cost of atypical versus typical antipsychotic treatment for schizophrenia in routine care.

    Science.gov (United States)

    Stargardt, Tom; Weinbrenner, Susanne; Busse, Reinhard; Juckel, Georg; Gericke, Christian A

    2008-06-01

    In two recent randomised clinical trials, a meta-analysis, and an effectiveness study analysing routine data from the U.S. Veterans Administration, the superiority of the newer atypical drugs over typical antipsychotic drugs, concerning both their efficacy and their side-effect profile, has been questioned. To analyse the effectiveness and cost of atypical versus typical antipsychotic treatment for schizophrenia in routine care. Cohort study using routine care data from a statutory sickness fund with 5.4 million insured in Germany. To be included, patients had to be discharged with a diagnosis of schizophrenia in 2003 and fulfil membership criteria. Main outcome measures were rehospitalisation rates, mean hospital bed days, mean length of stay, cost of inpatient and pharmaceutical care to the sickness fund during follow-up, and medication used to treat side-effects. 3121 patients were included in the study. There were no statistically significant differences in the effectiveness of atypical and typical antipsychotics on rehospitalisation during follow-up (rehospitalisation rate ratio 1.07, 95% confidence interval 0.86 to 1.33). However, there were consistent observations of atypical antipsychotics being more effective for severe cases of schizophrenia (14.6% of study population; >61 prior bed days per year in 2000-2002) in the follow-up period, whereas for the other severity strata typical antipsychotics seemed more effective in reducing various rehospitalisation outcomes. Patients treated with atypical antipsychotics received significantly fewer prescriptions for anticholinergics or tiaprid (relative risk 0.26, 95% confidence interval 0.18 to 0.38). The effectiveness of atypical antipsychotics for schizophrenia on rehospitalisation measures appeared similar to that of typical antipsychotics. With the exception of severe cases, the higher costs for atypical antipsychotics were not offset by savings from reduced inpatient care. Major limitations include the lack of

  8. 40 CFR 141.74 - Analytical and monitoring requirements.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical and monitoring requirements... Analytical and monitoring requirements. (a) Analytical requirements. Only the analytical method(s) specified... as set forth in the article “National Field Evaluation of a Defined Substrate Method for the...

  9. Development of an optimal velocity selection method with velocity obstacle

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Geuk; Oh, Jun Ho [KAIST, Daejeon (Korea, Republic of)

    2015-08-15

    The Velocity obstacle (VO) method is one of the most well-known methods for local path planning, allowing consideration of dynamic obstacles and unexpected obstacles. Typical VO methods separate a velocity map into a collision area and a collision-free area. A robot can avoid collisions by selecting its velocity from within the collision-free area. However, if there are numerous obstacles near a robot, the robot will have very few velocity candidates. In this paper, a method for choosing optimal velocity components using the concepts of pass-time and vertical clearance is proposed for the efficient movement of a robot. The pass-time is the time required for a robot to pass by an obstacle. By generating a latticized available velocity map for a robot, each velocity component can be evaluated using a cost function that considers the pass-time and other aspects. From the output of the cost function, even a velocity component that will cause a collision in the future can be chosen as a final velocity if the pass-time is sufficiently long.
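    The idea of scoring a latticized velocity map with a pass-time-aware cost can be sketched as follows. The cost function, weights, and circular-obstacle model are illustrative assumptions, not the authors' implementation; the key point is that a velocity whose collision lies far in the future incurs only a small penalty.

```python
import math

# Minimal sketch of cost-based velocity selection over a latticized velocity
# map. Each candidate velocity is scored by its deviation from the preferred
# velocity plus a penalty that shrinks as the time until collision grows, so a
# velocity that collides only far in the future can still be selected.

def time_to_collision(vx, vy, ox, oy, ovx, ovy, r):
    """Time until the robot (a point at the origin) comes within r of a
    moving circular obstacle; math.inf if it never does."""
    rx, ry = ox, oy                  # obstacle position relative to robot
    rvx, rvy = ovx - vx, ovy - vy    # obstacle velocity relative to robot
    a = rvx**2 + rvy**2
    b = 2 * (rx * rvx + ry * rvy)
    c = rx**2 + ry**2 - r**2
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return math.inf              # paths never intersect
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else math.inf

def select_velocity(candidates, v_pref, obstacle, w_dev=1.0, w_time=1.0):
    """Pick the candidate velocity with the lowest combined cost."""
    def cost(v):
        ttc = time_to_collision(v[0], v[1], *obstacle)
        deviation = math.hypot(v[0] - v_pref[0], v[1] - v_pref[1])
        return w_dev * deviation + w_time / (1.0 + ttc)  # ttc=inf -> no penalty
    return min(candidates, key=cost)

# Lattice of candidate velocities; obstacle 3 m ahead moving toward the robot.
lattice = [(vx * 0.5, vy * 0.5) for vx in range(-2, 3) for vy in range(-2, 3)]
best = select_velocity(lattice, v_pref=(1.0, 0.0),
                       obstacle=(3.0, 0.0, -1.0, 0.0, 0.5), w_time=2.0)
```

    With these weights the preferred head-on velocity is penalised enough that a slightly deflected, collision-free candidate wins.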

  10. Cooperative Problem-Based Learning (CPBL): A Practical PBL Model for a Typical Course

    Directory of Open Access Journals (Sweden)

    Khairiyah Mohd-Yusof

    2011-09-01

    Full Text Available Problem-Based Learning (PBL) is an inductive learning approach that uses a realistic problem as the starting point of learning. Unlike in medical education, which is more easily adaptable to PBL, implementing PBL in engineering courses in the traditional semester system set-up is challenging. While PBL is normally implemented in small groups of up to ten students with a dedicated tutor during PBL sessions in medical education, this is not plausible in engineering education because of the high enrolment and large class sizes. In a typical course, implementation of PBL consisting of students in small groups in medium to large classes is more practical. However, this type of implementation is more difficult to monitor, and thus requires good support and guidance in ensuring commitment and accountability of each student towards learning in his/her group. To provide the required support, Cooperative Learning (CL) is identified to have the much needed elements to develop the small student groups into functional learning teams. Combining both CL and PBL results in a Cooperative Problem-Based Learning (CPBL) model that provides a step-by-step guide for students to go through the PBL cycle in their teams, according to CL principles. Suitable for implementation in medium to large classes (approximately 40-60 students for one floating facilitator, with small groups consisting of 3-5 students), the CPBL model is designed to develop the students in the whole class into a learning community. This paper provides a detailed description of the CPBL model. A sample implementation in a third year Chemical Engineering course, Process Control and Dynamics, is also described.

  11. Spent fuel storage requirements

    International Nuclear Information System (INIS)

    Fletcher, J.

    1982-06-01

    Spent fuel storage requirements, as projected through the year 2000 for U.S. LWRs, were calculated using information supplied by the utilities reflecting plant status as of December 31, 1981. Projections through the year 2000 combined fuel discharge projections of the utilities with the assumed discharges of typical reactors required to meet the nuclear capacity of 165 GWe projected by the Energy Information Administration (EIA) for the year 2000. Three cases were developed and are summarized. A reference case, or maximum at-reactor (AR) capacity case, assumes that all reactor storage pools are increased to their maximum capacities as estimated by the utilities for spent fuel storage utilizing currently licensed technologies. The reference case assumes no transshipments between pools except as currently licensed by the Nuclear Regulatory Commission (NRC). This case identifies an initial requirement for 13 MTU of additional storage in 1984, and a cumulative requirement for 14,490 MTU of additional storage in the year 2000. The reference case is bounded by two alternative cases. One, a current capacity case, assumes that only those pool storage capacity increases currently planned by the operating utilities will occur. The second, or maximum capacity with transshipment case, assumes maximum development of pool storage capacity as described above and also assumes no constraints on transshipment of spent fuel among pools of reactors of like type (BWR, PWR) within a given utility. In all cases, a full core discharge capability (full core reserve or FCR) is assumed to be maintained for each reactor, except that only one FCR is maintained when two reactors share a common pool. For the current AR capacity case the storage requirements in the year 2000 are indicated to be 18,190 MTU; for the maximum capacity with transshipment case they are 11,320 MTU

  12. A perturbation-based substep method for coupled depletion Monte-Carlo codes

    International Nuclear Information System (INIS)

    Kotlyar, Dan; Aufiero, Manuele; Shwageraus, Eugene; Fratoni, Massimiliano

    2017-01-01

    Highlights: • The GPT method allows calculation of the sensitivity coefficients to any perturbation. • The full Jacobian of sensitivities, cross sections (XS) to concentrations, may be obtained. • The time-dependent XS is obtained by combining the GPT and substep methods. • The proposed GPT substep method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. - Abstract: Coupled Monte Carlo (MC) methods are becoming widely used in reactor physics analysis and design. Many research groups have therefore developed their own coupled MC depletion codes. Typically, in such coupled code systems, neutron fluxes and cross sections are provided to the depletion module by solving a static neutron transport problem. These fluxes and cross sections are representative only of a specific time-point. In reality, however, both quantities change through the depletion time interval. Recently, a Generalized Perturbation Theory (GPT) equivalent method that relies on a collision history approach was implemented in the Serpent MC code. This method was used here to calculate the sensitivity of each nuclide and reaction cross section to the change in concentration of every isotope in the system. The coupling method proposed in this study also uses the substep approach, which incorporates these sensitivity coefficients to account for temporal changes in cross sections. As a result, a notable improvement in time-dependent cross-section behavior was obtained. The method was implemented in a wrapper script that couples Serpent with an external depletion solver. The performance of this method was compared with other existing methods. The results indicate that the proposed method requires substantially fewer MC transport solutions to achieve the same accuracy.
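    The substep idea can be sketched under strong simplifying assumptions (one-group cross sections, absorption and decay only, explicit Euler substeps): within one transport step, precomputed sensitivity coefficients correct the cross sections for the drifting nuclide densities, so no extra MC solutions are needed. The matrix S stands in for the GPT coefficients, and all numbers are illustrative rather than Serpent output.

```python
import numpy as np

# Toy sketch of sensitivity-corrected depletion substeps. S[i][j] is the
# relative change of cross section i per relative change of nuclide density j
# (d ln sigma_i / d ln n_j), precomputed once per transport step.

def deplete_with_substeps(n0, sigma0, S, flux, lam, dt, substeps=10):
    """Deplete nuclide densities n0 over dt using sensitivity-corrected XS.

    n0:     initial densities (length k)
    sigma0: one-group cross sections at the beginning of the step (length k)
    S:      k x k sensitivity matrix
    flux:   scalar flux, assumed constant over the step
    lam:    decay constants (length k)
    """
    n0 = np.asarray(n0, dtype=float)
    n = n0.copy()
    h = dt / substeps
    for _ in range(substeps):
        # First-order (GPT-style) correction of XS for the drifted densities.
        rel = (n - n0) / n0
        sigma = sigma0 * (1.0 + S @ rel)
        # Simple explicit Euler depletion substep (absorption + decay only).
        n = n - h * (sigma * flux + lam) * n
    return n

# Illustrative two-nuclide system (units: cm^2, n/cm^2/s, 1/s, s).
n_end = deplete_with_substeps(
    n0=[1.0e22, 1.0e21],
    sigma0=np.array([5.0e-24, 1.0e-24]),
    S=np.array([[0.0, 0.1], [0.05, 0.0]]),
    flux=1.0e14,
    lam=np.array([0.0, 1.0e-9]),
    dt=1.0e5,
)
```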

  13. 46 CFR 160.010-4 - General requirements for buoyant apparatus.

    Science.gov (United States)

    2010-10-01

    ... light twine. (h) Each peripheral body type buoyant apparatus without a net or platform on the inside... pigmented in a dark color. A typical method of securing lifelines and pendants to straps of webbing is shown...

  14. Typical School Day Experiences of Indian Children in Different Contexts.

    Science.gov (United States)

    Jaya, N.; Malar, G.

    2003-01-01

    Notes that India has experienced conditions that have led to significant illiteracy, but that commitment to education can be found in lesser-known parts of India today. Profiles three schools in Tamil Nadu and describes a typical school day for a student with special needs, a student in a tribal setting, and a student in a rural setting. (TJQ)

  15. Assembling Typical Meteorological Year Data Sets for Building Energy Performance Using Reanalysis and Satellite-Based Data

    Directory of Open Access Journals (Sweden)

    Thomas Huld

    2018-02-01

    Full Text Available We present a method to generate Typical Meteorological Year (TMY) data sets for use in calculations of the energy performance of buildings, based on satellite derived solar radiation data and other meteorological parameters obtained from reanalysis products. The great advantage of this method is the availability of data over large geographical regions, giving global coverage for the reanalysis and continental-scale coverage for the solar radiation data, making it possible to generate TMY data for nearly any location, independent of the availability of meteorological measurement stations in the area. The TMY data generated with this method have been validated against 487 meteorological stations in Europe, by calculating heating and cooling degree days, and by running building energy performance simulations using EnergyPlus. Results show that the generated data sets using a long time series perform better than the TMY data generated from station measurements for building heating calculations and nearly as well for cooling calculations, with relative standard deviations remaining below 6% for heating calculations. TMY data constructed using the proposed method yield somewhat larger deviations compared to TMY data constructed from station data. We outline a number of possibilities for further improvement using data sets that will become available in the near future.
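    The typical-month selection step that underlies TMY construction can be sketched with the Finkelstein-Schafer statistic: for each calendar month, pick the year whose daily values best match the long-term distribution. This single-variable version is a simplification (real TMY procedures weight several weather variables), and the data below are invented.

```python
# Sketch of typical-month selection via the Finkelstein-Schafer statistic:
# the mean absolute difference between a candidate month's empirical CDF and
# the long-term empirical CDF of the same variable.

def fs_statistic(candidate_days, long_term_days):
    """Mean absolute difference between the two empirical CDFs."""
    all_values = sorted(candidate_days + long_term_days)
    def cdf(sample, x):
        return sum(1 for v in sample if v <= x) / len(sample)
    return sum(abs(cdf(candidate_days, x) - cdf(long_term_days, x))
               for x in all_values) / len(all_values)

def pick_typical_year(daily_by_year, long_term):
    """Return the year whose month best matches the long-term distribution."""
    return min(daily_by_year,
               key=lambda y: fs_statistic(daily_by_year[y], long_term))

# Illustrative daily global-radiation values (kWh/m^2) for one calendar month.
years = {
    2016: [3.1, 4.2, 5.0, 4.8, 2.9],
    2017: [4.0, 4.1, 4.3, 4.2, 4.0],
    2018: [1.0, 1.5, 6.5, 6.8, 6.9],
}
long_term = [v for days in years.values() for v in days]
typical = pick_typical_year(years, long_term)
```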

  16. Assessing Requirements Quality through Requirements Coverage

    Science.gov (United States)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that is, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software

  17. National Land Imaging Requirements (NLIR) Pilot Project summary report: summary of moderate resolution imaging user requirements

    Science.gov (United States)

    Vadnais, Carolyn; Stensaas, Gregory

    2014-01-01

    Under the National Land Imaging Requirements (NLIR) Project, the U.S. Geological Survey (USGS) is developing a functional capability to obtain, characterize, manage, maintain, and prioritize all Earth observing (EO) land remote sensing user requirements. The goal is a better understanding of community needs that can be supported with land remote sensing resources, and a means to match needs with appropriate solutions in an effective and efficient way. The NLIR Project is composed of two components. The first component is focused on the development of the Earth Observation Requirements Evaluation System (EORES) to capture, store, and analyze user requirements, whereas the second component is the mechanism and processes to elicit and document the user requirements that will populate the EORES. To develop the second component, the requirements elicitation methodology was exercised and refined through a pilot project conducted from June to September 2013. The pilot project focused specifically on applications and user requirements for moderate resolution imagery (5–120 meter resolution) as the test case for requirements development. The purpose of this summary report is to provide a high-level overview of the requirements elicitation process that was exercised through the pilot project and an early analysis of the moderate resolution imaging user requirements acquired to date to support ongoing USGS sustainable land imaging study needs. The pilot project engaged a limited set of Federal Government users from the operational and research communities and therefore the information captured represents only a subset of all land imaging user requirements. However, based on a comparison of results, trends, and analysis, the pilot captured a strong baseline of typical application areas and user needs for moderate resolution imagery. Because these results are preliminary and represent only a sample of users and application areas, the information from this report should only

  18. Childhood apraxia of speech: A survey of praxis and typical speech characteristics.

    Science.gov (United States)

    Malmenholt, Ann; Lohmander, Anette; McAllister, Anita

    2017-07-01

    The purpose of this study was to investigate current knowledge of the diagnosis childhood apraxia of speech (CAS) in Sweden and compare speech characteristics and symptoms to those of earlier survey findings in mainly English-speakers. In a web-based questionnaire, 178 Swedish speech-language pathologists (SLPs) anonymously answered questions about their perception of typical speech characteristics for CAS. They graded their own assessment skills and estimated clinical occurrence. The seven top speech characteristics reported as typical for children with CAS were: inconsistent speech production (85%), sequencing difficulties (71%), oro-motor deficits (63%), vowel errors (62%), voicing errors (61%), consonant cluster deletions (54%), and prosodic disturbance (53%). Motor-programming deficits, described as lack of automatization of speech movements, were perceived by 82%. All listed characteristics were consistent with the American Speech-Language-Hearing Association (ASHA) consensus-based features, Strand's 10-point checklist, and the diagnostic model proposed by Ozanne. The mode for clinical occurrence was 5%. The number of suspected cases of CAS in the clinical caseload was approximately one new patient per SLP per year. The results support and add to findings from studies of CAS in English-speaking children with similar speech characteristics regarded as typical. Possibly, these findings could contribute to cross-linguistic consensus on CAS characteristics.

  19. Hemolytic porcine intestinal Escherichia coli without virulence-associated genes typical of intestinal pathogenic E. coli.

    Science.gov (United States)

    Schierack, Peter; Weinreich, Joerg; Ewers, Christa; Tachu, Babila; Nicholson, Bryon; Barth, Stefanie

    2011-12-01

    Testing 1,666 fecal or intestinal samples from healthy and diarrheic pigs, we obtained hemolytic Escherichia coli isolates from 593 samples. Focusing on hemolytic E. coli isolates without virulence-associated genes (VAGs) typical of enteropathogens, we found that such isolates carried a broad variety of VAGs typical of extraintestinal pathogenic E. coli.

  20. Designer's requirements for evaluation of sustainability

    DEFF Research Database (Denmark)

    Bey, Niki; Lenau, Torben Anker

    1998-01-01

    Today, sustainability of products is often evaluated on the basis of assessments of their environmental performance. Established means for this purpose are formal Life Cycle Assessment (LCA) methods. Designers have an essential influence on product design and are therefore one target group for life cycle-based evaluation methods. However, the application of LCA in the design process, when for example different materials and manufacturing processes have to be selected, is difficult. This is, among other things, because only a few designers have a deeper background in this area and even simplified LCAs involve calculations with a relatively high accuracy. Most LCA methods do therefore not qualify as hands-on tools for utilisation by typical designers. In this context, the authors raise the question whether a largely simplified LCA method which is exclusively based on energy considerations can