Daudelin, Denise H; Selker, Harry P; Leslie, Laurel K
2015-12-01
There is growing appreciation that process improvement holds promise for improving quality and efficiency across the translational research continuum but frameworks for such programs are not often described. The purpose of this paper is to present a framework and case examples of a Research Process Improvement Program implemented at Tufts CTSI. To promote research process improvement, we developed online training seminars, workshops, and in-person consultation models to describe core process improvement principles and methods, demonstrate the use of improvement tools, and illustrate the application of these methods in case examples. We implemented these methods, as well as relational coordination theory, with junior researchers, pilot funding awardees, our CTRC, and CTSI resource and service providers. The program focuses on capacity building to address common process problems and quality gaps that threaten the efficient, timely and successful completion of clinical and translational studies. © 2015 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc.
International Nuclear Information System (INIS)
Poupeau, G.; Soliani Junior, E.
1988-01-01
This article discusses some applications of the 'nuclear tracks method' in geochronology, geochemistry and geophysics. In geochronology, after a brief presentation of the principles of 'fission track' dating and of the kinds of geological events measurable by this method, some applications in metallogeny and in petroleum geology are shown. In geochemistry, the 'fission tracks' method is used in mining and uranium prospecting. In geophysics, an important application is earthquake prediction, through continuous monitoring of Ra-222 emanations. (author) [pt
Studying the properties of Variational Data Assimilation Methods by Applying a Set of Test-Examples
DEFF Research Database (Denmark)
Thomsen, Per Grove; Zlatev, Zahari
2007-01-01
The variational data assimilation methods can successfully be used in different fields of science and engineering. An attempt to utilize available sets of observations in the efforts to improve (i) the models used to study different phenomena and (ii) the model results is systematically carried out when ... forward and backward computations are carried out by using the model under consideration and its adjoint equations (both the model and its adjoint are defined by systems of differential equations). The major difficulty is caused by the huge increase of the computational load (normally by a factor of more than 100) ... The components of the data assimilation method (numerical algorithms for solving differential equations, splitting procedures and optimization algorithms) have been studied by using these tests. The presentation will include results from the testing carried out in the study.
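The forward and backward (adjoint) computations described above can be sketched for a toy scalar model. This is an illustrative reconstruction, not the authors' code: the model x_{k+1} = a·x_k, the step size, and the iteration count are invented for the example.

```python
import numpy as np

def forward(x0, a, n):
    """Forward model run: x_{k+1} = a * x_k, returning the full trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1])
    return np.array(xs)

def cost_and_grad(x0, a, y):
    """Cost J(x0) = 0.5 * sum_k (x_k - y_k)^2; dJ/dx0 via the adjoint sweep."""
    r = forward(x0, a, len(y) - 1) - y   # innovations (model minus observation)
    g = 0.0
    for k in range(len(y) - 1, -1, -1):  # backward (adjoint) recursion
        g = r[k] + a * g                 # accumulates sum_k r_k * a**k
    return 0.5 * np.sum(r ** 2), g

def assimilate(a, y, x0=0.0, lr=0.2, iters=100):
    """Steepest-descent minimisation of J over the uncertain initial state."""
    for _ in range(iters):
        _, g = cost_and_grad(x0, a, y)
        x0 -= lr * g
    return x0
```

With exact observations generated by the model itself, the descent recovers the true initial state; real assimilation adds observation noise and far larger state spaces, which is where the hundredfold computational load mentioned above arises.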
Projector Method: theory and examples
International Nuclear Information System (INIS)
Dahl, E.D.
1985-01-01
The Projector Method technique for numerically analyzing lattice gauge theories was developed to take advantage of certain simplifying features of gauge theory models. Starting from a very general notion of what the Projector Method is, the techniques are applied to several model problems. After these examples have traced the development of the actual algorithm from the general principles of the Projector Method, a direct comparison between the Projector and the Euclidean Monte Carlo is made, followed by a discussion of the application to Periodic Quantum Electrodynamics in two and three spatial dimensions. Some methods for improving the efficiency of the Projector in various circumstances are outlined. 10 refs., 7 figs
Applied Bayesian hierarchical methods
National Research Council Canada - National Science Library
Congdon, P
2010-01-01
.... It also incorporates BayesX code, which is particularly useful in nonlinear regression. To demonstrate MCMC sampling from first principles, the author includes worked examples using the R package...
SOME EXAMPLES OF APPLIED SYSTEMS WITH SPEECH INTERFACE
Directory of Open Access Journals (Sweden)
V. A. Zhitko
2017-01-01
Full Text Available Three examples of applied systems with a speech interface are considered in the article. The first two provide the end user with the opportunity to ask a question verbally and to hear the response from the system, complementing traditional I/O via the keyboard and computer screen. The third example, the «IntonTrainer» system, provides the user with voice interaction and is designed for in-depth self-learning of the intonation of oral speech.
Applied Bayesian hierarchical methods
National Research Council Canada - National Science Library
Congdon, P
2010-01-01
Contents include: 1.2 Posterior Inference from Bayes Formula; 1.3 Markov Chain Monte Carlo Sampling in Relation to Monte Carlo Methods: Obtaining Posterior...
Methods of applied mathematics
Hildebrand, Francis B
1992-01-01
This invaluable book offers engineers and physicists working knowledge of a number of mathematical facts and techniques not commonly treated in courses in advanced calculus, but nevertheless extremely useful when applied to typical problems in many different fields. It deals principally with linear algebraic equations, quadratic and Hermitian forms, operations with vectors and matrices, the calculus of variations, and the formulations and theory of linear integral equations. Annotated problems and exercises accompany each chapter.
Uranium prospection methods illustrated with examples
International Nuclear Information System (INIS)
Valsardieu, C.
1985-01-01
Uranium exploration methods are briefly reviewed: aerial (radiometric, spectrometric), surface (mapping, radiometric, geophysical, geochemical), sub-surface (well logging, boring) and mining methods, in the different steps of a mine project: preliminary studies, general prospecting, detailed prospecting of the deposit area, and deposit estimation. The choice of methods depends strongly on the geographic and geologic environment. Three examples are given: an intragranitic deposit (Limousin, France), a deposit spatially related to an unconformity (Athabasca, Canada) and a sedimentary deposit (Manyingee, Western Australia) [fr
Eliciting expert opinion for economic models: an applied example.
Leal, José; Wordsworth, Sarah; Legood, Rosa; Blair, Edward
2007-01-01
Expert opinion is considered as a legitimate source of information for decision-analytic modeling where required data are unavailable. Our objective was to develop a practical computer-based tool for eliciting expert opinion about the shape of the uncertainty distribution around individual model parameters. We first developed a prepilot survey with departmental colleagues to test a number of alternative approaches to eliciting opinions on the shape of the uncertainty distribution around individual parameters. This information was used to develop a survey instrument for an applied clinical example. This involved eliciting opinions from experts to inform a number of parameters involving Bernoulli processes in an economic model evaluating DNA testing for families with a genetic disease, hypertrophic cardiomyopathy. The experts were cardiologists, clinical geneticists, and laboratory scientists working with cardiomyopathy patient populations and DNA testing. Our initial prepilot work suggested that the more complex elicitation techniques advocated in the literature were difficult to use in practice. In contrast, our approach achieved a reasonable response rate (50%), provided logical answers, and was generally rated as easy to use by respondents. The computer software user interface permitted graphical feedback throughout the elicitation process. The distributions obtained were incorporated into the model, enabling the use of probabilistic sensitivity analysis. There is clearly a gap in the literature between theoretical elicitation techniques and tools that can be used in applied decision-analytic models. The results of this methodological study are potentially valuable for other decision analysts deriving expert opinion.
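The parameters elicited in the study follow Bernoulli processes; a common way to turn such opinions into model inputs is to fit Beta distributions and sample them in probabilistic sensitivity analysis. The sketch below is a generic illustration under assumed numbers (an elicited mean of 0.9 with an "equivalent sample size" of 50, and toy costs), not the authors' actual instrument or model.

```python
import numpy as np

def beta_from_mean_ess(mean, ess):
    """Beta(alpha, beta) parameters from an elicited mean and an
    'equivalent sample size' expressing the expert's certainty."""
    return mean * ess, (1.0 - mean) * ess

rng = np.random.default_rng(0)
alpha, beta = beta_from_mean_ess(mean=0.9, ess=50)

# The draws feed a probabilistic sensitivity analysis of a toy decision
# model: expected cost = p * cost_if_detected + (1 - p) * cost_if_missed.
p = rng.beta(alpha, beta, size=10_000)
expected_cost = p * 100.0 + (1.0 - p) * 500.0
```

A larger equivalent sample size narrows the Beta distribution, which is one simple way to encode how confident the expert said they were.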
Applied Formal Methods for Elections
DEFF Research Database (Denmark)
Wang, Jian
Information technology is changing the way elections are organized. Technology renders the electoral process more efficient, but things could also go wrong: voting software is complex, consisting of thousands of lines of code, which makes it error-prone. Technical problems may cause delays at polling stations, or even delay the announcement of the final result. This thesis describes a set of methods to be used, for example, by system developers, administrators, or decision makers to examine election technologies, social choice algorithms and voter experience. Technology: Verifiability refers ... bounded model-checking and satisfiability modulo theories (SMT) solvers can be used to check these criteria. Voter Experience: Technology profoundly affects the voter experience. These effects need to be measured and the data should be used to make decisions regarding the implementation of the electoral ...
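The bounded-checking idea mentioned above can be illustrated without an SMT solver: exhaustively verify a social-choice criterion over all profiles up to a small bound. The thesis itself uses bounded model-checking and SMT solvers; the plurality rule, the alphabetical tie-breaking, and the three-candidate bound below are invented for this sketch.

```python
from itertools import permutations, product

CANDIDATES = "ABC"

def plurality_winner(profile):
    """Winner under plurality with deterministic alphabetical tie-breaking."""
    tally = {c: 0 for c in CANDIDATES}
    for ballot in profile:           # a ballot is a preference order; count tops
        tally[ballot[0]] += 1
    best = max(tally.values())
    return min(c for c in CANDIDATES if tally[c] == best)

def check_majority_criterion(n_voters):
    """Bounded check: a candidate ranked first by a strict majority must win.
    Returns a counterexample profile, or None if the property holds."""
    ballots = list(permutations(CANDIDATES))
    for profile in product(ballots, repeat=n_voters):
        for c in CANDIDATES:
            if sum(b[0] == c for b in profile) * 2 > n_voters:
                if plurality_winner(profile) != c:
                    return profile
    return None
```

For 3 voters over 3 candidates this enumerates all 216 profiles; an SMT encoding scales the same exhaustive idea to bounds where enumeration is infeasible.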
Applying remote sensing to invasive species science—A tamarisk example
Morisette, Jeffrey T.
2011-01-01
The Invasive Species Science Branch of the Fort Collins Science Center provides research and technical assistance relating to management concerns for invasive species, including understanding how these species are introduced, identifying areas vulnerable to invasion, forecasting invasions, and developing control methods. This fact sheet considers the invasive plant species tamarisk (Tamarix spp), addressing three fundamental questions: *Where is it now? *What are the potential or realized ecological impacts of invasion? *Where can it survive and thrive if introduced? It provides peer-review examples of how the U.S. Geological Survey, working with other federal agencies and university partners, are applying remote-sensing technologies to address these key questions.
Applying homotopy analysis method for solving differential-difference equation
International Nuclear Information System (INIS)
Wang Zhen; Zou Li; Zhang Hongqing
2007-01-01
In this Letter, we apply the homotopy analysis method to solve differential-difference equations. A simple but typical example illustrates the validity and the great potential of the generalized homotopy analysis method for differential-difference equations. Comparisons are made between the results of the proposed method and exact solutions, and show that the homotopy analysis method is an attractive approach to solving differential-difference equations.
Mixed Methods Sampling: A Typology with Examples
Teddlie, Charles; Yu, Fen
2007-01-01
This article presents a discussion of mixed methods (MM) sampling techniques. MM sampling involves combining well-established qualitative and quantitative techniques in creative ways to answer research questions posed by MM research designs. Several issues germane to MM sampling are presented including the differences between probability and…
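One concrete MM design the typology covers is a stratified probability sample for the quantitative strand followed by purposive, criterion-based selection of extreme cases for qualitative follow-up. The population, strata, scores, and selection criterion below are all invented for illustration.

```python
import random

random.seed(42)
# Invented population: 300 respondents in 3 strata with a survey score.
population = [{"id": i, "stratum": i % 3, "score": random.random()}
              for i in range(300)]

def stratified_sample(pop, per_stratum):
    """Probability strand: equal-size random sample from every stratum."""
    out = []
    for s in sorted({p["stratum"] for p in pop}):
        members = [p for p in pop if p["stratum"] == s]
        out += random.sample(members, per_stratum)
    return out

quant = stratified_sample(population, per_stratum=20)   # QUAN strand (n = 60)
# Purposive strand: the five lowest scorers, chosen for qualitative interviews.
qual = sorted(quant, key=lambda p: p["score"])[:5]
```

Nesting the purposive sample inside the probability sample is only one of the combinations discussed in such typologies; the two strands can also be drawn independently.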
Applied Formal Methods for Elections
DEFF Research Database (Denmark)
Wang, Jian
... development time, or second dynamically, i.e. monitoring while an implementation is in use during an election, or after the election is over, for forensic analysis. This thesis contains two chapters on this subject: the chapter Analyzing Implementations of Election Technologies describes a technique ... The chapter Measuring Voter Lines describes an automated data collection method for measuring voters' waiting time, and discusses statistical models designed to provide an understanding of voter behavior in polling stations.
Carlsen, Lars; Bruggemann, Rainer
2018-06-03
In chemistry there is a long tradition in classification. Usually methods are adopted from the wide field of cluster analysis. Here, based on the example of 21 alkyl anilines we show that also concepts taken out from the mathematical discipline of partially ordered sets may also be applied. The chemical compounds are described by a multi-indicator system. For the present study four indicators, mainly taken from the field of environmental chemistry were applied and a Hasse diagram was constructed. A Hasse diagram is an acyclic, transitively reduced, triangle free graph that may have several components. The crucial question is, whether or not the Hasse diagram can be interpreted from a structural chemical point of view. This is indeed the case, but it must be clearly stated that a guarantee for meaningful results in general cannot be given. For that further theoretical work is needed. Two cluster analysis methods are applied (K-means and a hierarchical cluster method). In both cases the partitioning of the set of 21 compounds by the component structure of the Hasse diagram appears to be better interpretable. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
DEFF Research Database (Denmark)
Blekhman, I. I.; Sorokin, V. S.
2016-01-01
A general approach to study effects produced by oscillations applied to nonlinear dynamic systems is developed. It implies a transition from the initial governing equations of motion to much simpler equations describing only the main slow component of motions (the vibro-transformed dynamics equations). The approach is named oscillatory strobodynamics, since motions are perceived as if under a stroboscopic light. The vibro-transformed dynamics equations comprise terms that capture the averaged effect of oscillations. The method of direct separation of motions appears to be an efficient ... e.g., the requirement for the involved nonlinearities to be weak. The approach is illustrated by several relevant examples from various fields of science, e.g., mechanics, physics, chemistry and biophysics.
DEFF Research Database (Denmark)
Nielsen, Erland Hejn
2000-01-01
During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging to this ...
International Nuclear Information System (INIS)
Broc, J.S.
2006-12-01
Energy end-use efficiency (EE) is a priority for energy policies, to face resource exhaustion and to reduce pollutant emissions. At the same time, in France, the local level is increasingly involved in the implementation of EE activities, whose framework is changing (energy market liberalization, new policy instruments). Needs for ex-post evaluation of local EE activities are thus increasing, both for regulatory requirements and to support a necessary change of scale. Our thesis focuses on the original issue of the ex-post evaluation of local EE operations in France. The state of the art, through an analysis of the American and European experiences and of the reference guidebooks, yields substantial methodological material and emphasises the key evaluation issues. Concurrently, local EE operations in France are characterized by an analysis of their environment and work on their segmentation criteria. The combination of these criteria with the key evaluation issues provides an analysis framework used as the basis for composing evaluation methods. This also highlights the specific evaluation needs of local operations. A methodology is then developed to complete and adapt the existing material so as to design evaluation methods for local operations that stakeholders can easily appropriate. Evaluation results thus feed a know-how building process with experience feedback. These methods are to meet two main goals: to determine the operation results, and to detect success/failure factors. The methodology was validated on concrete cases, where these objectives were reached. (author)
[Montessori method applied to dementia - literature review].
Brandão, Daniela Filipa Soares; Martín, José Ignacio
2012-06-01
The Montessori method was initially applied to children, but it has now also been applied to people with dementia. The purpose of this study is to systematically review the research on the effectiveness of this method using Medical Literature Analysis and Retrieval System Online (Medline) with the keywords dementia and Montessori method. We selected 10 studies, in which there were significant improvements in participation and constructive engagement, and reductions in negative affect and passive engagement. Nevertheless, systematic reviews of this non-pharmacological intervention for dementia rate the method as weak in terms of effectiveness. This apparent discrepancy can be explained either because the Montessori method may, in fact, have only a small influence on dimensions such as behavioral problems, or because there is no research on this method with high levels of control, such as the presence of several control groups or a double-blind design.
Geostatistical methods applied to field model residuals
DEFF Research Database (Denmark)
Maule, Fox; Mosegaard, K.; Olsen, Nils
... consists of measurement errors and unmodelled signal), and is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based ...
Institute of Scientific and Technical Information of China (English)
Zhang Yu
2016-01-01
Traditional Japanese grammar teaching often pays attention only to the interpretation of syntax and the integrity of grammar structure. This works against the cultivation of communicative competence and does not conform to society's requirements for applied foreign-language talent. Cognitive linguistics theory, which links language form with semantic concepts, reveals the internal relation between human thinking and language. If we can subtly apply cognitive linguistic theory to Japanese grammar teaching, exploring the cognitive process in the speaker's brain while expressing, we can gain a good understanding of difficult points and "special cases". This paper explores the introductory methods and efficacy of cognitive linguistics theory applied to Japanese grammar teaching, taking causative sentences as an example.
Applied mathematical methods in nuclear thermal hydraulics
International Nuclear Information System (INIS)
Ransom, V.H.; Trapp, J.A.
1983-01-01
Applied mathematical methods are used extensively in modeling of nuclear reactor thermal-hydraulic behavior. This application has required significant extension to the state-of-the-art. The problems encountered in modeling of two-phase fluid transients and the development of associated numerical solution methods are reviewed and quantified using results from a numerical study of an analogous linear system of differential equations. In particular, some possible approaches for formulating a well-posed numerical problem for an ill-posed differential model are investigated and discussed. The need for closer attention to numerical fidelity is indicated
Entropy viscosity method applied to Euler equations
International Nuclear Information System (INIS)
Delchini, M. O.; Ragusa, J. C.; Berry, R. A.
2013-01-01
The entropy viscosity method [4] has been successfully applied to hyperbolic systems of equations such as the Burgers equation and the Euler equations. The method consists in adding dissipative terms to the governing equations, where a viscosity coefficient modulates the amount of dissipation. The entropy viscosity method has been applied to the 1-D Euler equations with variable area using a continuous finite element discretization in the MOOSE framework, and our results show that it has the ability to efficiently smooth out oscillations and accurately resolve shocks. Two equations of state are considered: the ideal gas and stiffened gas equations of state. Results are provided for a second-order implicit time scheme (BDF2). Some typical Riemann problems are run with the entropy viscosity method to demonstrate some of its features. Then, a 1-D convergent-divergent nozzle is considered with open boundary conditions. The correct steady state is reached for the liquid and gas phases with a time-implicit scheme. The entropy viscosity method behaves correctly in every problem run. For each test problem, results are shown for both equations of state considered here. (authors)
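A minimal 1-D finite-difference sketch of the idea can be given for Burgers' equation with entropy pair (u²/2, u³/3): the viscosity is driven by the entropy residual and capped by a first-order value. The explicit time stepping, the grid, and the tuning constants c_max and c_e here are invented for illustration; the paper's actual continuous finite element MOOSE implementation with BDF2 differs.

```python
import numpy as np

N = 200
h = 1.0 / N
x = np.arange(N) * h
u = 1.5 + np.sin(2 * np.pi * x)          # smooth initial data, periodic domain
u_old = u.copy()
dt = 0.2 * h / np.max(np.abs(u))
c_max, c_e = 0.5, 1.0                    # invented tuning constants

def ddx(f):
    """Centred first derivative on the periodic grid."""
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

for step in range(400):
    eta, q = 0.5 * u**2, u**3 / 3.0      # entropy pair for Burgers' equation
    R = (eta - 0.5 * u_old**2) / dt + ddx(q)           # entropy residual
    norm = np.max(np.abs(eta - eta.mean())) + 1e-12
    # Viscosity: residual-based value, capped by the first-order bound.
    nu = np.minimum(c_max * h * np.abs(u), c_e * h**2 * np.abs(R) / norm)
    u = u + dt * (-ddx(0.5 * u**2) + ddx(nu * ddx(u)))
    u_old = eta * 0 + u if False else u_old  # placeholder, replaced next line
    u_old, u = u, u
```

The key behaviour is that R nearly vanishes where the solution is smooth (so almost no dissipation is added there) and spikes at the forming shock, where the viscosity saturates at its first-order cap.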
Analytical methods applied to water pollution
International Nuclear Information System (INIS)
Baudin, G.
1977-01-01
A comparison of different methods applied to water analysis is given. The discussion is limited to the problems presented by inorganic elements accessible to nuclear activation analysis methods. The following methods were compared: activation analysis with gamma-ray spectrometry, atomic absorption spectrometry, fluorimetry, emission spectrometry, colorimetry or spectrophotometry, X-ray fluorescence, mass spectrometry, voltammetry, polarography or other electrochemical methods, and activation analysis with beta measurements. Drinking water, irrigation waters, sea waters, industrial wastes and very pure waters are the subjects of the investigations. The comparative evaluation is made on the basis of storage of samples, in situ analysis, treatment and concentration, specificity and interference, monoelement or multielement analysis, analysis time and accuracy. The significance of the neutron analysis is shown. (T.G.)
Multiple predictor smoothing methods for sensitivity analysis: Example results
International Nuclear Information System (INIS)
Storlie, Curtis B.; Helton, Jon C.
2008-01-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described in the first part of this presentation: (i) locally weighted regression (LOESS), (ii) additive models, (iii) projection pursuit regression, and (iv) recursive partitioning regression. In this, the second and concluding part of the presentation, the indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present
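The first technique named above, locally weighted regression (LOESS), can be sketched as a sensitivity tool: smooth the output on each input separately and score inputs by the variance the smooth explains. The data, span, and scoring below are invented for illustration, and production LOESS implementations differ in detail (robustness iterations, degree, bandwidth selection).

```python
import numpy as np

def loess(x, y, frac=0.2):
    """Locally weighted linear regression with tricube weights at each point."""
    n = len(x)
    k = max(2, int(frac * n))
    yhat = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                               # k nearest points
        w = (1 - (d[idx] / (d[idx].max() + 1e-12)) ** 3) ** 3  # tricube weights
        sw = np.sqrt(w)
        A = np.column_stack([np.ones(k), x[idx]]) * sw[:, None]
        coef, *_ = np.linalg.lstsq(A, sw * y[idx], rcond=None)
        yhat[i] = coef[0] + coef[1] * x[i]
    return yhat

def smooth_r2(x, y):
    """Share of output variance explained by smoothing on a single input."""
    yhat = loess(x, y)
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

rng = np.random.default_rng(2)
x1 = rng.uniform(0, 1, 150)     # influential input, nonlinear effect
x2 = rng.uniform(0, 1, 150)     # inert input
y = np.sin(2 * np.pi * x1) + rng.normal(0, 0.1, 150)
```

Because the relationship here is nonlinear, a rank or linear regression would understate x1's importance, which is exactly the situation where the abstract argues smoothing-based procedures are more informative.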
Dating method by fission tracks: some Brazilian examples
International Nuclear Information System (INIS)
Fonseca, Ariadne do Carmo
1996-01-01
The fission track (FT) method complements other radiometric methods by dating an interval of tectonic events that occurred at low temperatures not detected by those methods. In the southern part of the São Francisco Craton, dating of apatites from Archaean rocks produced FT ages between 900 and 500 Ma, reflecting the progressive action of the Brasiliano marginal mobile belts on the Archaean craton areas. Apatites from some igneous and metamorphic rocks of Brasiliano age, in the Faixa Ribeira (Ribeira Belt) segment between the cities of Rio de Janeiro and Salvador, produced FT ages between 140 and 80 Ma. The basaltic and alkaline volcanism related to the opening of the Atlantic Ocean dates from this interval, and FT dating of apatites from the continental margin rocks allowed this event to be dated. In the Cabo Frio region (southeastern Rio de Janeiro State), titanite and apatite from the Transamazonian orthogneisses produced FT dates between 190 and 80 to 40 Ma. The age of around 190 Ma dates the period just prior to the formation of the rift that preceded the opening of the South Atlantic Ocean, while the ages between 80 and 40 Ma are related to the intrusion of alkaline rocks. These examples demonstrate the diversity of events which may be dated by the fission track method, mainly in the study of craton areas and marginal belts.
A CNN-Based Method of Vehicle Detection from Aerial Images Using Hard Example Mining
Directory of Open Access Journals (Sweden)
Yohei Koga
2018-01-01
Full Text Available Recently, deep learning techniques have had a practical role in vehicle detection. While much effort has been spent on applying deep learning to vehicle detection, the effective use of training data has not been thoroughly studied, although it has great potential for improving training results, especially when the training data are sparse. In this paper, we propose using hard example mining (HEM) in the training process of a convolutional neural network (CNN) for vehicle detection in aerial images. We apply HEM to stochastic gradient descent (SGD) to choose the most informative training data, by calculating the loss values in each batch and employing the examples with the largest losses. We picked 100 out of both 500 and 1000 examples for training in one iteration, and we tested different ratios of positive to negative examples in the training data to evaluate how this balance would affect performance. In every case, our method outperformed plain SGD. The experimental results for images from New York showed improved performance over a CNN trained with plain SGD, with the F1 score of our method 0.02 higher.
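The loss-ranking step the abstract describes can be sketched independently of the CNN. Below it drives a plain logistic-regression classifier on synthetic data: the pool size of 500 and selection of the 100 largest-loss examples follow the abstract, while the data, learning rate, and iteration count are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2))
y = (X @ np.array([2.0, -1.0]) + 0.5 > 0).astype(float)    # separable labels

def example_losses(w, b, Xb, yb):
    """Per-example logistic (cross-entropy) loss, numerically clipped."""
    z = np.clip(Xb @ w + b, -30, 30)
    return np.logaddexp(0.0, z) - yb * z

w, b = np.zeros(2), 0.0
for _ in range(300):
    pool = rng.choice(len(X), size=500, replace=False)      # candidate batch
    loss = example_losses(w, b, X[pool], y[pool])
    hard = pool[np.argsort(loss)[-100:]]                    # 100 hardest (HEM)
    z = np.clip(X[hard] @ w + b, -30, 30)
    g = 1.0 / (1.0 + np.exp(-z)) - y[hard]                  # dLoss/dz
    w -= 0.5 * X[hard].T @ g / len(hard)                    # SGD on hard set
    b -= 0.5 * g.mean()

accuracy = np.mean(((X @ w + b) > 0) == (y > 0.5))
```

As training progresses, the hardest examples concentrate near the decision boundary, so the updates spend their effort where the classifier is still wrong, the same intuition as in the detection setting.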
Ferrer-Dufol, Ana; Menao-Guillen, Sebastian
2009-04-10
The relationship between basic research and its potential clinical applications is often a difficult subject. Clinical toxicology has always been very dependent on experimental research, whose usefulness has been impaired by huge inter- and intra-species differences in the toxicity expression of different substances, which make it difficult to predict clinical effects in humans. The new methods in molecular biology developed in recent decades are furnishing very useful tools to study some of the more relevant molecules implicated in toxicokinetic and toxicodynamic processes. We aim to show some meaningful examples of how recent research developments with genes and proteins have clear applications for understanding significant clinical matters, such as inter-individual variations in susceptibility to chemicals, and other phenomena related to the way some substances act to induce variations in the expression and functionality of these targets.
Examples of New Models Applied in Selected Simulation Systems with Respect to Database
Directory of Open Access Journals (Sweden)
Ignaszak Z.
2013-03-01
Full Text Available The tolerance-of-damage rule is progressively meeting with approval in casting part design procedures. This has created new challenges and expectations for the permanent development of process virtualization in the mechanical engineering industry. Virtualization is increasingly developed at the stage of product design and materials technology optimization, and the expectations of design and process engineers regarding the practical effectiveness of applied simulation systems with newly proposed upgrade modules are increasing. The purpose is to obtain simulation tools allowing the most realistic possible prognosis of the casting structure, including indication, with the highest possible probability, of places in the casting endangered by the formation of shrinkage and gas porosity. This 3D map of discontinuities, and the structure transformed into local mechanical characteristics, are used to calculate the local stresses and safety factors. The need for tolerance of damage and a new approach to evaluating the quality of such prognoses must be defined. These problems of validating new models/modules used to predict shrinkage and gas porosity, including the chosen structure parameters, are discussed in the paper using the example of an AlSi7 alloy.
International Nuclear Information System (INIS)
Brux, H.
1993-01-01
The conflict between wind power plants and the appearance of the landscape is explained, and the legal regulations that require taking it into account are pointed out. After an introduction to the theoretical basis, methods for arriving at aesthetic judgments of the landscape are introduced through examples from planning practice. Finally, the frequently unused possibilities of site optimisation with the aid of applied biology and landscape planning are pointed out. (orig.) [de
A Method for Snow Reanalysis: The Sierra Nevada (USA) Example
Girotto, Manuela; Margulis, Steven; Cortes, Gonzalo; Durand, Michael
2017-01-01
This work presents a state-of-the-art methodology for constructing a snow water equivalent (SWE) reanalysis. The method comprises two main components: (1) a coupled land surface model and snow depletion curve model, which is used to generate an ensemble of predictions of SWE and snow cover area for a given set of (uncertain) inputs, and (2) a reanalysis step, which updates the estimation variables to be consistent with the satellite-observed depletion of the fractional snow cover time series. The method was applied over the Sierra Nevada (USA) based on the assimilation of remotely sensed fractional snow-covered area data from the Landsat 5-8 record (1985-2016). The verified dataset (based on a comparison with over 9000 station-years of in situ data) exhibited mean and root-mean-square errors of less than 3 and 13 cm, respectively, and correlation greater than 0.95 with in situ SWE observations. The method (fully Bayesian), resolution (daily, 90-meter), temporal extent (31 years), and accuracy provide a unique dataset for investigating snow processes. This presentation illustrates how the reanalysis dataset was used to provide a basic accounting of the stored snowpack water in the Sierra Nevada over the last 31 years and, ultimately, to improve real-time streamflow predictions.
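The two components named above (an ensemble of model predictions, then a reanalysis step that reweights members against the observed fractional snow-cover depletion) can be sketched as a toy Bayesian particle reweighting. The melt rate, depletion curve, and error levels below are invented, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(60)
melt_rate = 0.02                       # m/day, assumed constant for the sketch

def fsca(peak_swe):
    """Toy depletion curve: fractional snow-covered area from remaining SWE."""
    swe = np.maximum(peak_swe - melt_rate * days, 0.0)
    return np.minimum(swe / 0.3, 1.0)  # full cover above 0.3 m of SWE

true_peak = 0.8                                           # synthetic truth (m)
obs = fsca(true_peak) + rng.normal(0.0, 0.05, days.size)  # satellite-like fSCA

# Component 1: ensemble of predictions from uncertain peak-SWE inputs.
prior = rng.uniform(0.3, 1.5, size=500)

# Component 2: reanalysis step — Bayesian reweighting against observations.
logw = np.array([-0.5 * np.sum((fsca(s) - obs) ** 2) / 0.05 ** 2
                 for s in prior])
w = np.exp(logw - logw.max())          # subtract max to avoid underflow
w /= w.sum()
posterior_mean = float(np.sum(w * prior))
```

The timing of the simulated melt-out is what makes the depletion record informative about peak SWE: members that disappear too early or too late receive negligible weight.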
Applied Mathematical Methods in Theoretical Physics
Masujima, Michio
2005-04-01
All there is to know about functional analysis, integral equations and calculus of variations in a single volume. This advanced textbook is divided into two parts: the first on integral equations and the second on the calculus of variations. It begins with a short introduction to functional analysis, including a short review of complex analysis, before continuing with a systematic discussion of different types of equations, such as Volterra integral equations, singular integral equations of Cauchy type, and integral equations of the Fredholm type, with a special emphasis on Wiener-Hopf integral equations and Wiener-Hopf sum equations. After a few remarks on the historical development, the second part starts with an introduction to the calculus of variations and the relationship between integral equations and applications of the calculus of variations. It further covers applications of the calculus of variations developed in the second half of the 20th century in the fields of quantum mechanics, quantum statistical mechanics and quantum field theory. Throughout the book, the author presents over 150 problems and exercises -- many from such branches of physics as quantum mechanics, quantum statistical mechanics, and quantum field theory -- together with outlines of the solutions in each case. Detailed solutions are given, supplementing the materials discussed in the main text, allowing problems to be solved making direct use of the method illustrated. The original references are given for difficult problems. The result is complete coverage of the mathematical tools and techniques used by physicists and applied mathematicians. Intended for senior undergraduates and first-year graduates in science and engineering, this is equally useful as a reference and self-study guide.
Using crowdsourcing to evaluate published scientific literature: methods and example.
Directory of Open Access Journals (Sweden)
Andrew W Brown
Full Text Available Systematically evaluating scientific literature is a time consuming endeavor that requires hours of coding and rating. Here, we describe a method to distribute these tasks across a large group through online crowdsourcing. Using Amazon's Mechanical Turk, crowdsourced workers (microworkers) completed four groups of tasks to evaluate the question, "Do nutrition-obesity studies with conclusions concordant with popular opinion receive more attention in the scientific community than do those that are discordant?" 1) Microworkers who passed a qualification test (19% passed) evaluated abstracts to determine if they were about human studies investigating nutrition and obesity. Agreement between the first two raters' conclusions was moderate (κ = 0.586), with consensus being reached in 96% of abstracts. 2) Microworkers iteratively synthesized free-text answers describing the studied foods into one coherent term. Approximately 84% of foods were agreed upon, with only 4 and 8% of ratings failing manual review in different steps. 3) Microworkers were asked to rate the perceived obesogenicity of the synthesized food terms. Over 99% of responses were complete and usable, and opinions of the microworkers qualitatively matched the authors' expert expectations (e.g., sugar-sweetened beverages were thought to cause obesity and fruits and vegetables were thought to prevent obesity). 4) Microworkers extracted citation counts for each paper through Google Scholar. Microworkers reached consensus or unanimous agreement for all successful searches. To answer the example question, data were aggregated and analyzed, and showed no significant association between popular opinion and attention the paper received as measured by Scimago Journal Rank and citation counts. Direct microworker costs totaled $221.75 (estimated cost at minimum wage: $312.61). We discuss important points to consider to ensure good quality control and appropriate pay for microworkers. With good
Using Crowdsourcing to Evaluate Published Scientific Literature: Methods and Example
Brown, Andrew W.; Allison, David B.
2014-01-01
Systematically evaluating scientific literature is a time consuming endeavor that requires hours of coding and rating. Here, we describe a method to distribute these tasks across a large group through online crowdsourcing. Using Amazon's Mechanical Turk, crowdsourced workers (microworkers) completed four groups of tasks to evaluate the question, “Do nutrition-obesity studies with conclusions concordant with popular opinion receive more attention in the scientific community than do those that are discordant?” 1) Microworkers who passed a qualification test (19% passed) evaluated abstracts to determine if they were about human studies investigating nutrition and obesity. Agreement between the first two raters' conclusions was moderate (κ = 0.586), with consensus being reached in 96% of abstracts. 2) Microworkers iteratively synthesized free-text answers describing the studied foods into one coherent term. Approximately 84% of foods were agreed upon, with only 4 and 8% of ratings failing manual review in different steps. 3) Microworkers were asked to rate the perceived obesogenicity of the synthesized food terms. Over 99% of responses were complete and usable, and opinions of the microworkers qualitatively matched the authors' expert expectations (e.g., sugar-sweetened beverages were thought to cause obesity and fruits and vegetables were thought to prevent obesity). 4) Microworkers extracted citation counts for each paper through Google Scholar. Microworkers reached consensus or unanimous agreement for all successful searches. To answer the example question, data were aggregated and analyzed, and showed no significant association between popular opinion and attention the paper received as measured by Scimago Journal Rank and citation counts. Direct microworker costs totaled $221.75, (estimated cost at minimum wage: $312.61). We discuss important points to consider to ensure good quality control and appropriate pay for microworkers. With good reliability and
Method and data analysis example of fatigue tests
International Nuclear Information System (INIS)
Nogami, Shuhei
2015-01-01
In the design and operation of a nuclear fusion reactor, it is important to accurately assess the fatigue life. Fatigue life is evaluated by preparing a database on the relationship between the applied stress/strain amplitude and the number of cycles to failure, based on fatigue tests of standard specimens, and by comparing this relationship with the stresses/strains generated in the actual structures. This paper focuses mainly on low-cycle fatigue and explains standard test methods, the fatigue limit, life prediction formulas, and the like. Using reduced-activation ferritic steel F82H as a material, strain-controlled low-cycle fatigue tests were performed at room temperature in air. From these results, the relationship between strain and the number of cycles to failure was analyzed. It was found that the relationship is asymptotic to the Coffin-Manson law under high-strain (low-cycle) conditions, and asymptotic to the Basquin law under low-strain (high-cycle) conditions. For F82H to be used for the blanket of a nuclear fusion prototype reactor, compiling fatigue life data up to about 700°C and establishing optimal fatigue design curves are urgent tasks. As for fusion reactor structural materials, the evaluation of the effect of neutron irradiation on fatigue damage behavior and life is indispensable. For this purpose, it is necessary to establish standardized testing techniques applicable to small specimens. (A.O.)
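The two asymptotic regimes mentioned above are conventionally combined in the strain-life relation Δε/2 = (σ'_f/E)(2N_f)^b + ε'_f(2N_f)^c, where the first (Basquin, elastic) term dominates at high cycle counts and the second (Coffin-Manson, plastic) term at low ones. A minimal sketch, with illustrative parameter values that are not F82H data:

```python
# Hypothetical strain-life parameters (illustrative only, not F82H values)
E = 200e3        # Young's modulus, MPa
sigma_f = 900.0  # fatigue strength coefficient, MPa (Basquin)
b = -0.09        # fatigue strength exponent (Basquin)
eps_f = 0.35     # fatigue ductility coefficient (Coffin-Manson)
c = -0.55        # fatigue ductility exponent (Coffin-Manson)

def strain_amplitude(n_f):
    """Total strain amplitude: elastic (Basquin) + plastic (Coffin-Manson)."""
    elastic = (sigma_f / E) * (2 * n_f) ** b
    plastic = eps_f * (2 * n_f) ** c
    return elastic + plastic

for n_f in (1e2, 1e4, 1e6):
    print(f"N_f = {n_f:9.0f}  strain amplitude = {strain_amplitude(n_f):.5f}")
```

With these exponents the plastic term dominates at 100 cycles and the elastic term at a million cycles, reproducing the asymptotic behaviour described in the abstract.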
Methods of gas hydrate concentration estimation with field examples
Digital Repository Service at National Institute of Oceanography (India)
Kumar, D.; Dash, R.; Dewangan, P.
International Nuclear Information System (INIS)
Curado, E.M.F.; Hassouni, Y.; Rego-Monteiro, M.A.; Rodrigues, Ligia M.C.S.
2008-01-01
We discuss the role of generalized Heisenberg algebras (GHA) in obtaining an algebraic method to describe physical systems. The method consists of finding the GHA associated to a physical system and the relations between its generators and the physical observables. We choose as an example the infinite square-well potential, for which we discuss the representations of the corresponding GHA. We suggest a way of constructing a physical realization of the generators of some GHA and apply it to the square-well potential. An expression for the position operator x in terms of the generators of the algebra is given, and we compute its matrix elements.
Approaches to qualitative research in mathematics education examples of methodology and methods
Bikner-Ahsbahs, Angelika; Presmeg, Norma
2014-01-01
This volume documents a range of qualitative research approaches that have emerged within mathematics education over the last three decades, whilst at the same time revealing their underlying methodologies. Continuing the discussion begun in the two 2003 ZDM issues dedicated to qualitative empirical methods, this book presents a state-of-the-art overview of qualitative research in mathematics education and beyond. The structure of the book allows the reader to use it as a practical guide for the selection of an appropriate methodology, on the basis of both theoretical depth and practical implications. The methods and examples illustrate how different methodologies come to life when applied to a specific question in a specific context. Many of the methodologies described are also applicable outside mathematics education, but the examples provided are chosen so as to situate the approach in a mathematical context.
Li, Mingyang; Zheng, Ang; Duan, Wenjuan; Mu, Xin; Liu, Chunli; Yang, Yang; Wang, Xin
2018-06-01
System of Health Accounts 2011 (SHA 2011) is a new health care accounts system, revised from SHA 1.0 by the Organisation for Economic Co-operation and Development (OECD), the World Health Organization (WHO) and Eurostat. It keeps the former tri-axial relationship and develops three analytical interfaces, in order to fix the existing shortcomings and make it more convenient for analysis and comparison across countries. SHA 2011 was introduced in China in 2014, and little about its application in China has been reported. This study takes children as an example to study how to apply SHA 2011 at the subnational level in the practical situation of China's health system. A multistage random sampling method was applied, and 3,532,517 samples from 252 institutions were included in the study. Official yearbooks and account reports supported the estimation of provincial data. The formula to calculate Current Health Expenditure (CHE) is introduced step by step. STATA 10.0 was used for statistics. Under the framework of SHA 2011, the CHE for children in Liaoning was calculated as US$ 0.74 billion in 2014; 98.56% of the expenditure was spent in hospitals, and the allocation to primary health care institutions was insufficient. Infection, maternal and prenatal diseases cost the most in terms of Global Burden of Disease (GBD), and respiratory system diseases took the leading place in terms of the International Classification of Diseases, Tenth Revision (ICD-10). In addition, medical income contributed most to the health financing. The method of applying SHA 2011 at the subnational level is feasible in China. It makes health accounts more adaptable to rapidly developing health systems and makes the financing data more readily available for analytical use. SHA 2011 is a better health expenditure accounts system to reveal the actual burden on residents and deserves further promotion in China as well as around the world.
Nonlinear time series theory, methods and applications with R examples
Douc, Randal; Stoffer, David
2014-01-01
FOUNDATIONS: Linear Models; Stochastic Processes; The Covariance World; Linear Processes; The Multivariate Cases; Numerical Examples; Exercises. Linear Gaussian State Space Models: Model Basics; Filtering, Smoothing, and Forecasting; Maximum Likelihood Estimation; Smoothing Splines and the Kalman Smoother; Asymptotic Distribution of the MLE; Missing Data Modifications; Structural Component Models; State-Space Models with Correlated Errors; Exercises. Beyond Linear Models: Nonlinear Non-Gaussian Data; Volterra Series Expansion; Cumulants and Higher-Order Spectra; Bilinear Models; Conditionally Heteroscedastic Models; Thre
Applying scrum methods to ITS projects.
2017-08-01
The introduction of new technology generally brings new challenges and new methods to help with deployments. Agile methodologies have been introduced in the information technology industry to potentially speed up development. The Federal Highway Admi...
Applying Fuzzy Possibilistic Methods on Critical Objects
DEFF Research Database (Denmark)
Yazdani, Hossein; Ortiz-Arroyo, Daniel; Choros, Kazimierz
2016-01-01
Providing a flexible environment to process data objects is a desirable goal of machine learning algorithms. In fuzzy and possibilistic methods, the relevance of data objects is evaluated and a membership degree is assigned. However, some critical objects have the potential to affect the performance of clustering algorithms depending on whether they remain in a specific cluster or are moved into another. In this paper we analyze and compare how critical objects affect the behaviour of fuzzy possibilistic methods on several data sets. The comparison is based on the accuracy and the ability of the learning methods to provide a proper search space for data objects. The membership functions used by each method when dealing with critical objects are also evaluated. Our results show that relaxing the conditions so that data objects can participate in as many partitions as they can is beneficial.
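The contrast between the two membership notions can be made concrete. In fuzzy (probabilistic) clustering, memberships across clusters are constrained to sum to one, so even a distant critical object receives substantial membership somewhere; possibilistic typicalities have no such constraint and simply decay with distance. The sketch below uses fixed cluster centres and an assumed possibilistic scale parameter η, purely for illustration:

```python
import numpy as np

centers = np.array([[0.0, 0.0], [10.0, 0.0]])  # two fixed cluster centres
eta = 4.0                                       # assumed possibilistic scale

def fuzzy_memberships(x):
    """Fuzzy c-means-style memberships (fuzzifier m = 2): sum to 1."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    inv = 1.0 / d2
    return inv / inv.sum()

def possibilistic_typicality(x):
    """Possibilistic typicalities (m = 2): no sum-to-one constraint."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return 1.0 / (1.0 + d2 / eta)

outlier = np.array([50.0, 50.0])
fm = fuzzy_memberships(outlier)
pt = possibilistic_typicality(outlier)
print("fuzzy memberships:", fm)          # still sum to 1 despite the distance
print("possibilistic typicalities:", pt) # both near zero for a far outlier
```

This is why critical objects behave so differently under the two schemes: the fuzzy constraint forces a far-away object into the partition structure, while the possibilistic view lets it be atypical of every cluster.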
Quality assurance and applied statistics. Method 3
International Nuclear Information System (INIS)
1992-01-01
This German-Industry-Standards-paperback contains the International Standards from the Series ISO 9000 (or, as the case may be, the European Standards from the Series EN 29000) concerning quality assurance and including the already completed supplementary guidelines with ISO 9000- and ISO 9004-section numbers, which have been adopted as German Industry Standards and which are observed and applied world-wide to a great extent. It also includes the German-Industry-Standards ISO 10011 parts 1, 2 and 3 concerning the auditing of quality-assurance systems and the German-Industry-Standard ISO 10012 part 1 concerning quality-assurance demands (confirmation system) for measuring devices. The standards also include English and French versions. They are applicable independent of the user's line of industry and thus constitute basic standards. (orig.) [de
Lavine method applied to three body problems
International Nuclear Information System (INIS)
Mourre, Eric.
1975-09-01
The methods presently proposed for the three-body problem in quantum mechanics, using the Faddeev approach for proving asymptotic completeness, come up against new singularities when the two-particle interaction potentials v(α)(x(α)) decay less rapidly than |x(α)|^(-2), and also when one attempts to solve the problem in a representation space whose dimension per particle is lower than three. A method is given that allows the mathematical approach to be extended to the three-body problem in spite of these singularities. Applications are given [fr
Applying Human Computation Methods to Information Science
Harris, Christopher Glenn
2013-01-01
Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…
Applying Mixed Methods Techniques in Strategic Planning
Voorhees, Richard A.
2008-01-01
In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…
[The diagnostic methods applied in mycology].
Kurnatowska, Alicja; Kurnatowski, Piotr
2008-01-01
Systemic fungal invasions are recognized with increasing frequency and constitute a primary cause of morbidity and mortality, especially in immunocompromised patients. Early diagnosis improves prognosis, but remains a problem because, on the one hand, there is a lack of sensitive tests to aid in the diagnosis of systemic mycoses and, on the other, patients present only nonspecific signs and symptoms, thus delaying early diagnosis. The diagnosis depends upon a combination of clinical observation and laboratory investigation. Successful laboratory diagnosis of fungal infection depends in large part on the collection of appropriate clinical specimens for investigation and on the selection of appropriate microbiological test procedures. These problems (collection of specimens, direct techniques, staining methods, cultures on different media, and non-culture-based methods) are presented in this article.
Monte Carlo method applied to medical physics
International Nuclear Information System (INIS)
Oliveira, C.; Goncalves, I.F.; Chaves, A.; Lopes, M.C.; Teixeira, N.; Matos, B.; Goncalves, I.C.; Ramalho, A.; Salgado, J.
2000-01-01
The main application of the Monte Carlo method in medical physics is dose calculation. This paper shows some results of two dose calculation studies and two other, different applications: optimization of the neutron field for Boron Neutron Capture Therapy, and optimization of a filter for a beam tube for several purposes. The long computation time of Monte Carlo calculations, the main obstacle to their intensive use, is being overcome by faster and cheaper computers. (author)
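The core of such a dose calculation is sampling photon interaction sites from the exponential free-path distribution and tallying where energy is deposited. The sketch below is deliberately crude (all energy is scored at the first interaction, with an assumed attenuation coefficient; production codes track scatter and secondary particles), but it shows the basic Monte Carlo depth-dose idea:

```python
import numpy as np

rng = np.random.default_rng(42)

mu = 0.07        # assumed linear attenuation coefficient, 1/cm (illustrative)
depth = 30.0     # water phantom thickness, cm
n_photons = 100_000

# Sample first-interaction depths from the exponential free-path distribution
interaction_depth = rng.exponential(scale=1.0 / mu, size=n_photons)

# Tally interactions per 1 cm depth bin (crude dose surrogate: all energy
# deposited at the first interaction site, no scatter or secondaries)
bins = np.arange(0.0, depth + 1.0, 1.0)
dose, _ = np.histogram(interaction_depth[interaction_depth < depth], bins=bins)

frac_interacting = (interaction_depth < depth).mean()
print(f"fraction interacting in phantom: {frac_interacting:.3f}")
print("tallies in first three depth bins:", dose[:3])
```

Even this toy tally shows the exponential fall-off of primary interactions with depth; the expense of realistic transport (scatter, secondaries, voxelized geometry) is what makes run time the limiting factor the abstract mentions.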
Proteomics methods applied to malaria: Plasmodium falciparum
International Nuclear Information System (INIS)
Cuesta Astroz, Yesid; Segura Latorre, Cesar
2012-01-01
Malaria is a parasitic disease that has a high impact on public health in developing countries. The sequencing of the Plasmodium falciparum genome and the development of proteomics have enabled a breakthrough in understanding the biology of the parasite. Proteomics has allowed the parasite's protein expression to be characterized qualitatively and quantitatively, and has provided information on protein expression under conditions of stress induced by antimalarial drugs. Given the complexity of the parasite's life cycle, which takes place in both the vertebrate host and the mosquito vector, it has proven difficult to characterize protein expression during each stage of the infection process in order to determine the proteome that mediates several metabolic, physiological and energetic processes. Two-dimensional electrophoresis, liquid chromatography and mass spectrometry have been useful for assessing the effects of antimalarial drugs on parasite protein expression and for characterizing the proteomic profile of different P. falciparum stages and organelles. The purpose of this review is to present state-of-the-art tools and advances in proteomics applied to the study of malaria, and to present the different experimental strategies used to study the parasite's proteome, showing the advantages and disadvantages of each one.
Directory of Open Access Journals (Sweden)
Yu Xiang Zeng
2013-12-01
Full Text Available The q-difference equations are a class of important models both in q-calculus and applied sciences. The variational iteration method is extended to approximately solve the initial value problems of q-difference equations. A q-analogue of the Lagrange multiplier is presented and three examples are illustrated to show the method's efficiency.
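The correction functional at the heart of the method can be illustrated in the classical (q → 1) limit. The sketch below applies the standard variational iteration scheme, with Lagrange multiplier λ = -1, to the test problem y'(t) = y(t), y(0) = 1, whose exact solution is exp(t); the q-analogue in the paper replaces the ordinary derivative and integral with their q-counterparts:

```python
import sympy as sp

t, s = sp.symbols('t s')

# Variational iteration for y'(t) = y(t), y(0) = 1, exact solution exp(t).
# Correction functional with Lagrange multiplier lambda = -1:
#   y_{n+1}(t) = y_n(t) - Integral_0^t [y_n'(s) - y_n(s)] ds
y = sp.Integer(1)  # initial guess y_0 = y(0) = 1
for _ in range(5):
    integrand = sp.diff(y, t).subs(t, s) - y.subs(t, s)
    y = sp.expand(y - sp.integrate(integrand, (s, 0, t)))

print(y)  # partial sum of the exponential series: 1 + t + t**2/2 + ...
```

Each iteration reproduces one more term of the Taylor series of exp(t), which is the hallmark convergence behaviour of the method on this problem.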
Methods for model selection in applied science and engineering.
Energy Technology Data Exchange (ETDEWEB)
Field, Richard V., Jr.
2004-10-01
Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be
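As a concrete, if simplified, instance of selecting among candidate models of varying complexity, the sketch below scores polynomial fits of increasing order with the Akaike information criterion. The data, noise level, and candidate class are illustrative assumptions; the decision-theoretic approach described above would further weight such a score by a use-specific utility function:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from a quadratic model with Gaussian noise
x = np.linspace(-1.0, 1.0, 60)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0.0, 0.3, x.size)

def aic(order):
    """AIC for a least-squares polynomial fit of the given order."""
    coeffs = np.polyfit(x, y, order)
    resid = y - np.polyval(coeffs, x)
    k = order + 2  # polynomial coefficients + noise variance
    return x.size * np.log(np.mean(resid**2)) + 2 * k

scores = {order: aic(order) for order in range(6)}
best = min(scores, key=scores.get)
print("AIC by order:", {k: round(v, 1) for k, v in scores.items()})
print("selected order:", best)
```

Underfitting (orders 0-1) is penalized through the residual term and overfitting through the parameter-count term, mirroring the trade-off that classical likelihood-based selection formalizes.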
METHOD OF APPLYING NICKEL COATINGS ON URANIUM
Gray, A.G.
1959-07-14
A method is presented for protectively coating uranium which comprises etching the uranium in an aqueous etching solution containing chloride ions, electroplating a coating of nickel on the etched uranium and heating the nickel plated uranium by immersion thereof in a molten bath composed of a material selected from the group consisting of sodium chloride, potassium chloride, lithium chloride, and mixtures thereof, maintained at a temperature of between 700 and 800 deg C, for a time sufficient to alloy the nickel and uranium and form an integral protective coating of corrosion-resistant uranium-nickel alloy.
Versatile Formal Methods Applied to Quantum Information.
Energy Technology Data Exchange (ETDEWEB)
Witzel, Wayne [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Rudinger, Kenneth Michael [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Sarovar, Mohan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-11-01
Using a novel formal methods approach, we have generated computer-verified proofs of major theorems pertinent to the quantum phase estimation algorithm. This was accomplished using our Prove-It software package in Python. While many formal methods tools are available, their practical utility is limited. Translating a problem of interest into these systems and working through the steps of a proof is an art form that requires much expertise. One must surrender to the preferences and restrictions of the tool regarding how mathematical notions are expressed and what deductions are allowed. Automation is a major driver that forces restrictions. Our focus, on the other hand, is to produce a tool that allows users the ability to confirm proofs that are essentially known already. This goal is valuable in itself. We demonstrate the viability of our approach, which allows the user great flexibility in expressing statements and composing derivations. There were no major obstacles in following a textbook proof of the quantum phase estimation algorithm. There were tedious details of algebraic manipulations that we needed to implement (and a few that we did not have time to enter into our system) and some basic components that we needed to rethink, but there were no serious roadblocks. In the process, we made a number of convenient additions to our Prove-It package that will make certain algebraic manipulations easier to perform in the future. In fact, our intent is for our system to build upon itself in this manner.
Matjasko, Jennifer L; Cawley, John H; Baker-Goering, Madeleine M; Yokum, David V
2016-05-01
Behavioral economics provides an empirically informed perspective on how individuals make decisions, including the important realization that even subtle features of the environment can have meaningful impacts on behavior. This commentary provides examples from the literature and recent government initiatives that incorporate concepts from behavioral economics in order to improve health, decision making, and government efficiency. The examples highlight the potential for behavioral economics to improve the effectiveness of public health policy at low cost. Although incorporating insights from behavioral economics into public health policy has the potential to improve population health, its integration into government public health programs and policies requires careful design and continual evaluation of such interventions. Limitations and drawbacks of the approach are discussed. Copyright © 2016 American Journal of Preventive Medicine. All rights reserved.
Horner, Robert H; Sugai, George
2015-05-01
School-wide Positive Behavioral Interventions and Supports (PBIS) is an example of applied behavior analysis implemented at a scale of social importance. In this paper, PBIS is defined and the contributions of behavior analysis in shaping both the content and implementation of PBIS are reviewed. Specific lessons learned from implementation of PBIS over the past 20 years are summarized.
The Economics of Adaptation: Concepts, Methods and Examples
DEFF Research Database (Denmark)
Callaway, John MacIntosh; Naswa, Prakriti; Trærup, Sara Lærke Meltofte
and sectoral level strategies, plans and policies. Furthermore, we see it at the local level, where people are already adapting to the early impacts of climate change that affect livelihoods through, for example, changing rainfall patterns, drought, and the frequency and intensity of extreme events. Analyses of the costs and benefits of climate change impacts and adaptation measures are important to inform future action. Despite the growth in the volume of research and studies on the economics of climate change adaptation over the past 10 years, there are still important gaps and weaknesses in the existing knowledge that limit effective and efficient decision-making and implementation of adaptation measures. Much of the literature to date has focussed on aggregate (national, regional and global) estimates of the economic costs of climate change impacts. There has been much less attention to the economics
Multi-example feature-constrained back-projection method for image super-resolution
Institute of Scientific and Technical Information of China (English)
Junlei Zhang; Dianguang Gai; Xin Zhang; Xuemei Li
2017-01-01
Example-based super-resolution algorithms, which predict unknown high-resolution image information using a relationship model learnt from known high- and low-resolution image pairs, have attracted considerable interest in the field of image processing. In this paper, we propose a multi-example feature-constrained back-projection method for image super-resolution. Firstly, we take advantage of a feature-constrained polynomial interpolation method to enlarge the low-resolution image. Next, we consider low-frequency images of different resolutions to provide an example pair. Then, we use adaptive kNN search to find similar patches in the low-resolution image for every image patch in the high-resolution low-frequency image, leading to a regression model between similar patches being learnt. The learnt model is applied to the low-resolution high-frequency image to produce high-resolution high-frequency information. An iterative back-projection algorithm is used as the final step to determine the final high-resolution image. Experimental results demonstrate that our method improves the visual quality of the high-resolution image.
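The final back-projection step can be sketched in isolation. The code below is a minimal iterative back-projection loop that assumes plain average pooling as the imaging model, a far simpler degradation operator than the learnt, feature-constrained model in the paper:

```python
import numpy as np

def iterative_back_projection(lr, scale=2, n_iter=20, step=0.2):
    """Refine an HR estimate until its simulated LR version matches the input."""
    hr = np.zeros((lr.shape[0] * scale, lr.shape[1] * scale))
    for _ in range(n_iter):
        # Simulated imaging process (assumed here: plain average pooling)
        sim = hr.reshape(lr.shape[0], scale, lr.shape[1], scale).mean(axis=(1, 3))
        err = lr - sim
        # Back-project the reconstruction error into the HR grid
        hr += step * np.kron(err, np.ones((scale, scale)))
    return hr

lr = np.array([[0.0, 1.0], [1.0, 0.0]])
hr = iterative_back_projection(lr)
residual = np.abs(hr.reshape(2, 2, 2, 2).mean(axis=(1, 3)) - lr).max()
print(f"max LR reconstruction residual: {residual:.4f}")
```

Each pass shrinks the discrepancy between the observed low-resolution image and the downsampled estimate, which is exactly the consistency property the authors rely on to finalize the high-resolution output.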
Optimization methods applied to hybrid vehicle design
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one year period. Fourth, care must be taken in designing the cost and constraint expressions which are used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the principal conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.
2018-01-01
Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…
Dynamical Systems Method and Applications Theoretical Developments and Numerical Examples
Ramm, Alexander G
2012-01-01
Demonstrates the application of DSM to solve a broad range of operator equations. The dynamical systems method (DSM) is a powerful computational method for solving operator equations. With this book as their guide, readers will master the application of DSM to solve a variety of linear and nonlinear problems, as well as ill-posed and well-posed problems. The authors offer a clear, step-by-step, systematic development of DSM that enables readers to grasp the method's underlying logic and its numerous applications. Dynamical Systems Method and Applications begins with a general introduction and
Applying the Socratic Method to Physics Education
Corcoran, Ed
2005-04-01
We have restructured University Physics I and II in accordance with methods that PER has shown to be effective, including a more interactive discussion- and activity-based curriculum based on the premise that developing understanding requires an interactive process in which students have the opportunity to talk through and think through ideas with both other students and the teacher. Studies have shown that in classes implementing this approach to teaching as compared to classes using a traditional approach, students have significantly higher gains on the Force Concept Inventory (FCI). This has been true in UPI. However, UPI FCI results seem to suggest that there is a significant conceptual hole in students' understanding of Newton's Second Law. Two labs in UPI which teach Newton's Second Law will be redesigned replacing more activity with students as a group talking through, thinking through, and answering conceptual questions asked by the TA. The results will be measured by comparing FCI results to those from previous semesters, coupled with interviews. The results will be analyzed, and we will attempt to understand why gains were or were not made.
Scanning probe methods applied to molecular electronics
Energy Technology Data Exchange (ETDEWEB)
Pavlicek, Niko
2013-08-01
Scanning probe methods on insulating films offer a rich toolbox to study the electronic, structural and spin properties of individual molecules. This work discusses three issues in the field of molecular and organic electronics. An STM head to be operated in high magnetic fields has been designed and built. The STM head is very compact and rigid, relying on a robust coarse-approach mechanism. This will facilitate investigations of the spin properties of individual molecules in the future. Combined STM/AFM studies revealed a reversible molecular switch based on two stable configurations of DBTH molecules on ultrathin NaCl films. AFM experiments visualize the molecular structure in both states. Our experiments allowed us to unambiguously determine the pathway of the switch. Finally, tunneling into and out of the frontier molecular orbitals of pentacene molecules has been investigated on different insulating films. These experiments show that the local symmetries of the initial and final electron wave functions are decisive for the ratio between elastic and vibration-assisted tunneling. The results can be generalized to electron transport in organic materials.
Climate Action Gaming Experiment: Methods and Example Results
Directory of Open Access Journals (Sweden)
Clifford Singer
2015-09-01
An exercise has been prepared and executed to simulate international interactions on policies related to greenhouse gases and global albedo management. Simulation participants are each assigned one of six regions that together contain all of the countries in the world. Participants make quinquennial policy decisions on greenhouse gas emissions, recapture of CO2 from the atmosphere, and/or modification of the global albedo. Costs of climate change and of implementing policy decisions impact each region's gross domestic product. Participants are tasked with maximizing economic benefits to their region while nearly stabilizing atmospheric CO2 concentrations by the end of the simulation in Julian year 2195. Results are shown in which the regions most adversely affected by the effects of greenhouse gas emissions resort to increases in the earth's albedo to reduce net solar insolation. These actions induce temperate-region countries to reduce net greenhouse gas emissions. An example outcome is a trajectory to the year 2195 of atmospheric greenhouse gas emissions and concentrations, sea level, and global average temperature.
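The quinquennial decision loop described above can be caricatured in a few lines. Everything numeric here (emission and sink rates, the abatement fraction) is invented for illustration and is not taken from the actual exercise:

```python
def run_simulation(initial_co2_ppm=400.0, start_year=2020, end_year=2195,
                   step=5, abatement_fraction=0.5):
    """Advance atmospheric CO2 in five-year (quinquennial) policy steps.
    All coefficients are invented placeholders, not calibrated values."""
    co2 = initial_co2_ppm
    baseline_emission_ppm_per_step = 10.0  # assumed business-as-usual rise
    natural_sink_ppm_per_step = 5.0        # assumed net uptake by sinks
    trajectory = []
    for year in range(start_year, end_year + 1, step):
        emitted = baseline_emission_ppm_per_step * (1.0 - abatement_fraction)
        co2 = max(co2 + emitted - natural_sink_ppm_per_step, 0.0)
        trajectory.append((year, co2))
    return trajectory

# With 50% abatement the toy model exactly stabilizes CO2 through 2195:
traj = run_simulation()
```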
Visual momentum: an example of cognitive models applied to interface design
International Nuclear Information System (INIS)
Woods, D.D.
1982-01-01
The growth of computer applications has radically changed the nature of the man-machine interface. Through increased automation, the nature of the human's task has shifted from an emphasis on perceptual-motor skills to an emphasis on cognitive activities (e.g., problem solving and decision making). The result is a need to improve the cognitive coupling of person and machine. The goal of this paper is to describe how knowledge from cognitive psychology can be used to provide guidance to display system designers and to solve human performance problems in person-machine systems. The mechanism is to explore one example of a principle of man-machine interaction - visual momentum - that was developed on the basis of a general model of human front-end cognitive processing
Examples of Applications of Vortex Methods to Wind Energy
DEFF Research Database (Denmark)
Branlard, Emmanuel Simon Pierre
2017-01-01
The current chapter presents wind-energy simulations obtained with the vortex code OmniVor (described in Chap. 44) and compared to BEM, CFD and measurements. The chapter begins by comparing rotor loads obtained with vortex methods, BEM and actuator-line simulations of wind turbines under uniform...... and yawed inflows. The second section compares wakes and flow fields obtained by actuator-disk simulations and a free-wake vortex code that uses vortex segments and vortex particles. The third section compares different implementations of viscous diffusion models and investigates their effects......
C/X-band SAR interferometry applied to ground monitoring: examples and new potential
Nutricato, Raffaele; Nitti, Davide O.; Bovenga, Fabio; Refice, Alberto; Wasowski, Janusz; Chiaradia, Maria T.
2013-10-01
Classical applications of MTInSAR techniques have in the past been carried out on medium-resolution data acquired by the ERS, Envisat (ENV) and Radarsat sensors. The new generation of high-resolution X-band SAR sensors, such as TerraSAR-X (TSX) and the COSMO-SkyMed (CSK) constellation, allows acquiring data with spatial resolution reaching metric/submetric values. Thanks to the finer spatial resolution with respect to C-band data, X-band InSAR applications appear very promising for monitoring single man-made structures (buildings, bridges, railways and highways), as well as landslides. This is particularly relevant where C-band data show a low density of coherent scatterers. Moreover, thanks again to the higher resolution, it is possible to infer reliable estimates of the displacement rates with a number of SAR scenes significantly lower than in C-band within the same time span, or by using more images acquired in a narrower time span. We present examples of the application of a Persistent Scatterers Interferometry technique, namely the SPINUA algorithm, to data acquired by ENV, TSX and CSK on a selected number of sites. Different cases are considered, concerning the monitoring of both unstable slopes and infrastructure. Results are compared and discussed, with particular attention paid to the advantages provided by the new generation of X-band high-resolution space-borne SAR sensors.
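The displacement estimates discussed above ultimately rest on the standard interferometric conversion from unwrapped phase to line-of-sight motion, d = lambda * delta_phi / (4*pi). A minimal sketch follows; sign conventions differ between processors, and the wavelengths are nominal values assumed for illustration:

```python
import math

def los_displacement(delta_phase_rad, wavelength_m):
    """Convert an unwrapped interferometric phase change to line-of-sight
    displacement via d = (lambda / (4*pi)) * delta_phi.  The two-way path
    of the radar signal is what produces the factor of 4*pi."""
    return wavelength_m / (4.0 * math.pi) * delta_phase_rad

# Nominal radar wavelengths (approximate, for illustration only):
C_BAND_M = 0.056   # ERS / Envisat, ~5.6 cm
X_BAND_M = 0.031   # TerraSAR-X / COSMO-SkyMed, ~3.1 cm
```

One full fringe (a 2*pi phase cycle) thus corresponds to half a wavelength of line-of-sight motion, which is why X-band fringes are denser than C-band ones for the same deformation.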
Reflections on Mixing Methods in Applied Linguistics Research
Hashemi, Mohammad R.
2012-01-01
This commentary advocates the use of mixed methods research--that is the integration of qualitative and quantitative methods in a single study--in applied linguistics. Based on preliminary findings from a research project in progress, some reflections on the current practice of mixing methods as a new trend in applied linguistics are put forward.…
On the methods and examples of aircraft impact analysis
International Nuclear Information System (INIS)
Arros, J.
2012-01-01
Conclusions: Aircraft impact analysis can be performed today within feasible run times using PCs and available advanced commercial finite element software tools. Adequate element and material model technologies exist. Explicit time integration enables analysis of very-large-deformation missile/target impacts. Meshless/particle-based methods may be beneficial for large-deformation concrete "punching shear" analysis, potentially solving the "element erosion" problem associated with finite elements, but they are not yet generally implemented in major commercial software. Verification of the complicated modeling technologies continues to be a challenge. Not much work has been done yet on ACI shock loading; redundant and physically separated safety trains are key to success. The analysis approach and detail should be "balanced", commensurate with the significant uncertainties: do not "over-do" details of some parts of the model (e.g., the plane) and the analysis.
Liability for oil spill damages: issues, methods, and examples
International Nuclear Information System (INIS)
Grigalunas, T.A.; Opaluch, J.J.; Diamantides, J.; Mazzotta, M.
1998-01-01
Liability is an important incentive-based instrument for preventing oil spills and provides a sustainable approach for restoring coastal resources injured by spills. However, the use of liability for environmental damages raises many challenges, including quantification of money measures of damages. In this article, case studies are used to illustrate the issues, methods, and challenges associated with assessing a range of damages, from those that can be measured relatively easily using market information to more 'esoteric', and much more difficult, cases involving non-market-valued losses. Also discussed are issues raised by the new national and international regulatory focus on restoration and by the simplified, compensatory formula used by some states. (author)
The agency problem and medical acting: an example of applying economic theory to medical ethics.
Langer, Andreas; Schröder-Bäck, Peter; Brink, Alexander; Eurich, Johannes
2009-03-01
In this article, the authors attempt to build a bridge between economic theory and medical ethics to offer a new perspective to tackle ethical challenges in the physician-patient encounter. They apply elements of new institutional economics to the ethically relevant dimensions of the physician-patient relationship in a descriptive heuristic sense. The principal-agent theory can be used to analytically grasp existing action problems in the physician-patient relationship and as a basis for shaping recommendations at the institutional level. Furthermore, the patients' increased self-determination and modern opportunities for the medical laity to inform themselves lead to a less asymmetrical distribution of information between physician and patient and therefore require new interaction models. Based on the analysis presented here, the authors recommend that, apart from the physician's necessary individual ethics, greater consideration should be given to approaches of institutional ethics and hence to incentive systems within medical ethics.
The Vroom and Yetton Normative Leadership Model Applied to Public School Case Examples.
Sample, John
This paper seeks to familiarize school administrators with the Vroom and Yetton Normative Leadership model by presenting its essential components and providing original case studies for its application to school settings. The five decision-making methods of the Vroom and Yetton model, including two "autocratic," two…
Is response-guided therapy being applied in the clinical setting? The hepatitis C example.
Harris, Jennifer B; Ward, Melea A; Schwab, Phil
2015-02-01
Response-guided therapy (RGT) is a treatment model that bases adjustments to therapeutic regimens on individualized patient physiologic response. This approach is applied to patients with chronic hepatitis C virus (HCV) infection who are treated with a triple therapy regimen of boceprevir or telaprevir in combination with pegylated interferon and ribavirin. As RGT expands in other pharmacologic regimens, including the treatment of breast cancer and acute myeloid leukemia, a measurement of how this approach is applied in clinical practice is important to determine whether the benefits of RGT are being optimized. To measure adherence to the RGT guidelines and to the treatment futility rules based on the drug labeling information for boceprevir and for telaprevir in the treatment of patients with chronic HCV infection. A retrospective observational cohort study was conducted using the large Humana research database, which includes pharmacy, medical, and laboratory claims, as well as enrollment data for more than 1.5 million fully insured commercial members, 1.9 million Medicare Advantage members, and 2.4 million Medicare Part D members from all 50 states. The study population included patients aged ≥18 years to <90 years who were fully insured with commercial or Medicare Advantage coverage. A pharmacy claim for boceprevir or telaprevir was used to identify patients receiving triple therapy for HCV infection. Medical, pharmacy, and laboratory claims were reviewed from the date of the first boceprevir or telaprevir pharmacy claim between May 2011 and February 2012 through a 32-week follow-up period, during which patients were required to have continuous health plan enrollment eligibility. This time period allowed for the occurrence of the required HCV RNA laboratory monitoring and the assessment of treatment patterns. The use of RGT for boceprevir and telaprevir includes the monitoring of HCV RNA levels at routine intervals to determine how to proceed with therapy.
Ziemann, Alexandra; Fouillet, Anne; Brand, Helmut; Krafft, Thomas
2016-01-01
Syndromic surveillance aims at augmenting traditional public health surveillance with timely information. To gain a head start, it mainly analyses existing data, such as web searches or patient records. Despite the setup of many syndromic surveillance systems, there is still much doubt about the benefit of the approach. There are diverse interactions between performance indicators such as timeliness and various system characteristics. This makes the performance assessment of syndromic surveillance systems a complex endeavour. We assessed whether the comparison of several syndromic surveillance systems through Qualitative Comparative Analysis helps to evaluate performance and identify key success factors. We compiled case-based, mixed data on the performance and characteristics of 19 syndromic surveillance systems in Europe from scientific and grey literature and from site visits. We identified success factors by applying crisp-set Qualitative Comparative Analysis. We focused on two main areas of syndromic surveillance application: seasonal influenza surveillance and situational awareness during different types of potentially health-threatening events. We found that syndromic surveillance systems might detect the onset or peak of seasonal influenza earlier if they analyse non-clinical data sources. Timely situational awareness during different types of events is supported by an automated syndromic surveillance system capable of analysing multiple syndromes. To our surprise, the analysis of multiple data sources was not a key success factor for situational awareness. We suggest considering these key success factors when designing or further developing syndromic surveillance systems. Qualitative Comparative Analysis helped interpret complex, mixed data on small-N cases and resulted in concrete and practically relevant findings.
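At its core, the crisp-set Qualitative Comparative Analysis used above reduces to consistency and coverage scores computed over a truth table. A minimal sketch with made-up cases (the condition/outcome pairs are illustrative, not the study's actual data):

```python
def consistency(cases):
    """cases: list of (condition, outcome) booleans.
    Consistency = |X AND Y| / |X|: how reliably the condition is
    followed by the outcome (sufficiency)."""
    x = sum(1 for c, o in cases if c)
    xy = sum(1 for c, o in cases if c and o)
    return xy / x if x else float('nan')

def coverage(cases):
    """Coverage = |X AND Y| / |Y|: how much of the outcome the
    condition accounts for (empirical relevance)."""
    y = sum(1 for c, o in cases if o)
    xy = sum(1 for c, o in cases if c and o)
    return xy / y if y else float('nan')

# Hypothetical rows: condition = "analyses non-clinical data sources",
# outcome = "detected influenza onset early" (invented cases):
cases = [(True, True), (True, True), (True, False),
         (False, False), (False, True)]
```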
Directory of Open Access Journals (Sweden)
Bertrand R.
2006-11-01
This paper presents a new method to be used in the preparation of dispersed organic matter for microscopic studies. The organic matter is spread on a glass slide in order to permit all types of microscopic observation (transmitted, reflected or fluorescent light) on a single mount. An example of its application, taken from the Ordovician and Siluro-Devonian sequence of Anticosti Island and the Eastern Appalachians of Quebec, shows the advantages of this new method over the traditional plug method.
Colorimetry applied to the field of cultural heritage: examples of study cases
Directory of Open Access Journals (Sweden)
Salvatore Lorusso
2007-07-01
cohesion, anisotropy of the material and different exposure conditions of the works. Such researches may contribute to applying colorimetry in the field of cultural heritage.
Printing method and printer used for applying this method
2006-01-01
The invention pertains to a method for transferring ink to a receiving material using an inkjet printer having an ink chamber (10) with a nozzle (8) and an electromechanical transducer (16) in cooperative connection with the ink chamber, comprising actuating the transducer to generate a pressure
Splendor and misery of the distorted wave method applied to heavy ions transfer reactions
International Nuclear Information System (INIS)
Mermaz, M.C.
1979-01-01
The success and failure of the Distorted Wave Method (DWM) applied to heavy ion transfer reactions are illustrated by a few examples: one- and multi-nucleon transfer reactions induced by 15N and 18O on a 28Si target nucleus, performed in the vicinity of the Coulomb barrier at 44 and 56 MeV incident energy, respectively.
Discrimination symbol applying method for sintered nuclear fuel product
International Nuclear Information System (INIS)
Ishizaki, Jin
1998-01-01
The present invention provides a method for applying discrimination information, such as enrichment degree, to the end face of a sintered nuclear fuel product. Namely, discrimination symbols conveying powder information are applied with a sintering aid to the end face of a molded member formed by pressing nuclear fuel powders. Then, the molded member is sintered. The sintering aid comprises aluminum oxide, a mixture of aluminum oxide and silicon dioxide, aluminum hydride or aluminum stearate, alone or in admixture. The sintering aid may be applied in several ways: the discrimination symbols are drawn in isostearic acid on the end face of the molded member and the sintering aid is sprayed onto them; the sintering aid is applied directly; or the sintering aid is suspended in isostearic acid and the suspension is applied with a brush. As a result, visible discrimination information can be applied to the sintered member easily. (N.H.)
Building "Applied Linguistic Historiography": Rationale, Scope, and Methods
Smith, Richard
2016-01-01
In this article I argue for the establishment of "Applied Linguistic Historiography" (ALH), that is, a new domain of enquiry within applied linguistics involving a rigorous, scholarly, and self-reflexive approach to historical research. Considering issues of rationale, scope, and methods in turn, I provide reasons why ALH is needed and…
Applying Mixed Methods Research at the Synthesis Level: An Overview
Heyvaert, Mieke; Maes, Bea; Onghena, Patrick
2011-01-01
Historically, qualitative and quantitative approaches have been applied relatively separately in synthesizing qualitative and quantitative evidence, respectively, in several research domains. However, mixed methods approaches are becoming increasingly popular nowadays, and practices of combining qualitative and quantitative research components at…
Quantitative EEG Applying the Statistical Recognition Pattern Method
DEFF Research Database (Denmark)
Engedal, Knut; Snaedal, Jon; Hoegh, Peter
2015-01-01
BACKGROUND/AIM: The aim of this study was to examine the discriminatory power of quantitative EEG (qEEG) applying the statistical pattern recognition (SPR) method to separate Alzheimer's disease (AD) patients from elderly individuals without dementia and from other dementia patients. METHODS...
International Nuclear Information System (INIS)
Arndt, T.E.
1994-06-01
This paper presents an example of a component replacement (electric heater) when installed in an older ventilation system that was constructed before the issuance of ASME N509 and N510. Many of the existing ventilation systems at the Hanford Site were designed, fabricated, and installed before the issuance of ASME N509 and N510. Requiring the application of these codes to existing ventilation systems presents challenges to the engineer when design changes are needed. Although it may seem that the application of ASME N509 or N510 may be a hindrance at times, this does not need to occur. Proper preparation at the start of project or design modifications can minimize frustration to the engineer when it is judged that portions of ASME N509 and N510 do not apply in a particular application
Energy Technology Data Exchange (ETDEWEB)
Arndt, T.E. [Westinghouse Hanford Company, Richland, WA (United States)
1995-02-01
This paper presents an example of a component replacement (electric heater) when installed in an older ventilation system that was constructed before the issuance of ASME N509 and N510. Many of the existing ventilation systems at the Hanford Site were designed, fabricated, and installed before the issuance of ASME N509 and N510. Requiring the application of these codes to existing ventilation systems presents challenges to the engineer when design changes are needed. Although it may seem that the application of ASME N509 or N510 may be a hindrance at times, this does not need to occur. Proper preparation at the start of project or design modifications can minimize frustration to the engineer when it is judged that portions of ASME N509 and N510 do not apply in a particular application.
Theoretical Coalescence: A Method to Develop Qualitative Theory: The Example of Enduring.
Morse, Janice M
Qualitative research is frequently context bound, lacks generalizability, and is limited in scope. The purpose of this article was to describe a method, theoretical coalescence, that provides a strategy for analyzing complex, high-level concepts and for developing generalizable theory. Theoretical coalescence is a method of theoretical expansion and inductive theory development that uses data (rather than themes, categories, and published extracts of data) as the primary source for analysis. Here, using the development of the lay concept of enduring as an example, I explore the scientific development of the concept in multiple settings over many projects and link it within the Praxis Theory of Suffering. As comprehension emerges when conducting theoretical coalescence, it is essential that raw data from various different situations be available for reinterpretation/reanalysis and comparison to identify the essential features of the concept. The concept is then reconstructed through additional inquiry that builds description and evidence, conceptualized to create a more expansive concept and theory. By utilizing apparently diverse data sets from different contexts that are linked by certain characteristics, the essential features of the concept emerge. Such inquiry is divergent and less bound by context, yet purposeful, logical, and with significant pragmatic implications for practice in nursing and beyond our discipline. Theoretical coalescence is a means by which qualitative inquiry is broadened to make an impact, to accommodate new theoretical shifts and concepts, and to make qualitative research applied and accessible in new ways.
Electronic-projecting Moire method applying CBR-technology
Kuzyakov, O. N.; Lapteva, U. V.; Andreeva, M. A.
2018-01-01
An electronic-projecting method based on the Moire effect for examining surface topology is suggested. Conditions for forming Moire fringes and the dependence of their parameters on the reference parameters of the object and virtual grids are analyzed. The control system structure and decision-making subsystem are elaborated. The subsystem is implemented with CBR technology (case-based reasoning), built on a case base. The approach analyses and forms a decision for each separate local area, with subsequent formation of a common topology map.
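The beat-fringe geometry underlying Moire methods can be sketched from the spatial-frequency relation 1/p = |1/p1 - 1/p2| for two parallel line grids; this is a textbook result, not the paper's full model:

```python
def moire_pitch(p1, p2):
    """Beat (Moire) fringe pitch of two parallel line grids with
    pitches p1 and p2: the spatial frequencies subtract, so
    1/p = |1/p1 - 1/p2|, i.e. p = p1*p2 / |p1 - p2|."""
    if p1 == p2:
        return float('inf')  # identical grids produce no beat fringes
    return (p1 * p2) / abs(p1 - p2)
```

The closer the two pitches, the wider the fringes, which is what makes small surface-height deviations visible as large fringe displacements.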
A Lagrangian meshfree method applied to linear and nonlinear elasticity.
Walker, Wade A
2017-01-01
The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.
Directory of Open Access Journals (Sweden)
Thomas B. McGuckian
2018-02-01
For an athlete to make an appropriate decision and successfully perform a skill, they need to perceive opportunities for action by visually exploring their environment. The head movements that support visual exploration can easily and accurately be recorded using Micro-Electro-Mechanical Systems (MEMS) Inertial Measurement Units (IMUs) in research and applied settings. However, for IMU technology to be effective in applied settings, practitioners need to be able to communicate data to coaches and players. This paper presents methods of visualising and communicating exploratory head movement data, with the aim of giving a better understanding of (a) individual differences in exploratory action, and (b) how IMUs can be used in applied settings to assess and monitor visual exploratory action.
Applying the Taguchi method for optimized fabrication of bovine ...
African Journals Online (AJOL)
SERVER
2008-02-19
Feb 19, 2008 ... Nanobiotechnology Research Lab., School of Chemical Engineering, Babol University of Technology, P.O. Box 484, ... nanoparticle by applying the Taguchi method with characterization of the ... of BSA/ethanol and organic solvent adding rate. ... Sodium azide and all other chemicals were purchased from.
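Taguchi optimization of the kind referenced above ranks factor levels by signal-to-noise ratios; a minimal sketch of the two standard S/N formulas (the response values used in the example are invented):

```python
import math

def sn_larger_is_better(values):
    """Taguchi signal-to-noise ratio (dB) when larger responses are
    better: S/N = -10 * log10( mean(1 / y_i^2) )."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in values) / len(values))

def sn_smaller_is_better(values):
    """S/N (dB) when smaller responses are better (e.g. particle size):
    S/N = -10 * log10( mean(y_i^2) )."""
    return -10.0 * math.log10(sum(y ** 2 for y in values) / len(values))
```

For each factor in the orthogonal array, the level with the highest mean S/N is selected as (near-)optimal.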
Using Mixed Methods to Analyze Video Data: A Mathematics Teacher Professional Development Example
DeCuir-Gunby, Jessica T.; Marshall, Patricia L.; McCulloch, Allison W.
2012-01-01
This article uses data from 65 teachers participating in a K-2 mathematics professional development research project as an example of how to analyze video recordings of teachers' classroom lessons using mixed methods. Through their discussion, the authors demonstrate how using a mixed methods approach to classroom video analysis allows researchers…
Aircraft operability methods applied to space launch vehicles
Young, Douglas
1997-01-01
The commercial space launch market requirement for low vehicle operations costs necessitates the application of methods and technologies developed and proven for complex aircraft systems. The "building in" of reliability and maintainability, which is applied extensively in the aircraft industry, has yet to be applied to the maximum extent possible on launch vehicles. Use of vehicle system and structural health monitoring, automated ground systems and diagnostic design methods derived from aircraft applications supports the goal of achieving low-cost launch vehicle operations. Transforming these operability techniques to space applications where diagnostic effectiveness has significantly different metrics is critical to the success of future launch systems. These concepts will be discussed with reference to broad launch vehicle applicability. Lessons learned and techniques used in the adaptation of these methods will be outlined drawing from recent aircraft programs and implementation on phase 1 of the X-33/RLV technology development program.
Magnetic stirring welding method applied to nuclear power plant
International Nuclear Information System (INIS)
Hirano, Kenji; Watando, Masayuki; Morishige, Norio; Enoo, Kazuhide; Yasuda, Yuuji
2002-01-01
In the construction of a new nuclear power plant, carbon steel and stainless steel are used as base materials for the bottom liner plate of the Reinforced Concrete Containment Vessel (RCCV) to achieve maintenance-free requirements while securing sufficient structural strength. However, welding such different metals is difficult by ordinary methods. To overcome the difficulty, the automated Magnetic Stirring Welding (MSW) method, which can demonstrate good welding performance, was studied for practical use, and weldability tests showed good results. Based on the study, a new welding device for the MSW method was developed to apply it to weld joints of different materials, and it was put to practical use in part of a nuclear power plant. (author)
Linear algebraic methods applied to intensity modulated radiation therapy.
Crooks, S M; Xing, L
2001-10-01
Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.
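The matrix formulation described above can be sketched for a toy two-beam problem: choose weights w minimizing ||Dw - d||^2 via the normal equations. The dose-deposition matrix and prescription below are invented for illustration and are unrelated to the commercial planning system mentioned:

```python
def solve_2x2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def least_squares_weights(D, d):
    """Beam weights w minimising ||D w - d||^2 via the normal equations
    (D^T D) w = D^T d, for a toy two-beam problem.  Real IMRT adds
    non-negativity and clinical constraints omitted here."""
    m = len(D)
    DtD = [[sum(D[k][i] * D[k][j] for k in range(m)) for j in range(2)]
           for i in range(2)]
    Dtd = [sum(D[k][i] * d[k] for k in range(m)) for i in range(2)]
    return solve_2x2(DtD, Dtd)

# Assumed dose per unit weight from each beam to 3 voxels, and targets:
D_toy = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
d_toy = [1.0, 2.0, 3.0]
w = least_squares_weights(D_toy, d_toy)
```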
Methods of applied mathematics with a software overview
Davis, Jon H
2016-01-01
This textbook, now in its second edition, provides students with a firm grasp of the fundamental notions and techniques of applied mathematics as well as the software skills to implement them. The text emphasizes the computational aspects of problem solving as well as the limitations and implicit assumptions inherent in the formal methods. Readers are also given a sense of the wide variety of problems in which the presented techniques are useful. Broadly organized around the theme of applied Fourier analysis, the treatment covers classical applications in partial differential equations and boundary value problems, and a substantial number of topics associated with Laplace, Fourier, and discrete transform theories. Some advanced topics are explored in the final chapters such as short-time Fourier analysis and geometrically based transforms applicable to boundary value problems. The topics covered are useful in a variety of applied fields such as continuum mechanics, mathematical physics, control theory, and si...
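As a pointer to the discrete transform theory the book covers, the discrete Fourier transform can be written directly from its definition (a naive O(N^2) sketch, not an FFT):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform:
    X_k = sum_n x_n * exp(-2*pi*i*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]
```

A unit impulse transforms to a flat spectrum, and a constant signal concentrates all energy in the zero-frequency bin, which is a quick sanity check on any implementation.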
Which DTW Method Applied to Marine Univariate Time Series Imputation
Phan , Thi-Thu-Hong; Caillault , Émilie; Lefebvre , Alain; Bigand , André
2017-01-01
Missing data are ubiquitous in all domains of applied science. Processing datasets containing missing values can lead to a loss of efficiency and unreliable results, especially for large missing sub-sequence(s). Therefore, the aim of this paper is to build a framework for filling missing values in univariate time series and to perform a comparison of different similarity metrics used for the imputation task. This allows us to suggest the most suitable methods for the imp...
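The DTW similarity at the heart of the comparison above is the classical dynamic-programming recurrence; a minimal sketch with absolute difference as the local cost (window constraints and normalization, which the paper may use, are omitted):

```python
def dtw_distance(a, b):
    """Classical dynamic time warping distance between two univariate
    series, filling the (n+1) x (m+1) cumulative-cost table."""
    inf = float('inf')
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```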
Applying Qualitative Research Methods to Narrative Knowledge Engineering
O'Neill, Brian; Riedl, Mark
2014-01-01
We propose a methodology for knowledge engineering for narrative intelligence systems, based on techniques used to elicit themes in qualitative methods research. Our methodology uses coding techniques to identify actions in natural language corpora, and uses these actions to create planning operators and procedural knowledge, such as scripts. In an iterative process, coders create a taxonomy of codes relevant to the corpus, and apply those codes to each element of that corpus. These codes can...
APPLYING SPECTROSCOPIC METHODS ON ANALYSES OF HAZARDOUS WASTE
Dobrinić, Julijan; Kunić, Marija; Ciganj, Zlatko
2000-01-01
The paper presents results of measuring the content of heavy and other metals in waste samples from the hazardous waste disposal site of Sovjak near Rijeka. The preliminary design elaboration and the choice of the waste disposal sanification technology were preceded by the sampling and physico-chemical analyses of the disposed waste, enabling its categorization. The following spectroscopic methods were applied to the metal content analysis: atomic absorption spectroscopy (AAS) and plas...
A new method of AHP applied to personal credit evaluation
Institute of Scientific and Technical Information of China (English)
JIANG Ming-hui; XIONG Qi; CAO Jing
2006-01-01
This paper presents a new negative judgment matrix that combines the advantages of the reciprocal judgment matrix and the fuzzy complementary judgment matrix, and then puts forth the properties of this new matrix. In view of these properties, this paper derives a clear sequencing formula for the new negative judgment matrix, which improves the sequencing principle of AHP. Finally, this new method is applied to personal credit evaluation to show its advantages of conciseness and swiftness.
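For context, the prioritisation step that such sequencing formulas refine can be sketched with the standard geometric-mean approximation on a reciprocal judgment matrix; the paper's negative judgment matrix and its own sequencing formula are not reproduced here:

```python
def ahp_priorities(M):
    """Priority weights from a reciprocal pairwise comparison matrix
    using the geometric-mean (row product) approximation, the common
    textbook AHP prioritisation step."""
    n = len(M)
    geo = []
    for row in M:
        p = 1.0
        for v in row:
            p *= v
        geo.append(p ** (1.0 / n))
    total = sum(geo)
    return [g / total for g in geo]

# "Criterion 1 is twice as important as criterion 2" (illustrative):
weights = ahp_priorities([[1.0, 2.0], [0.5, 1.0]])
```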
Directory of Open Access Journals (Sweden)
Pegah Kassraian Fard
2016-12-01
Most psychiatric disorders are associated with subtle alterations in brain function and are subject to large inter-individual differences. Typically the diagnosis of these disorders requires time-consuming behavioral assessments administered by a multi-disciplinary team with extensive experience. Whilst the application of machine learning classification methods (ML classifiers) to neuroimaging data has the potential to speed and simplify diagnosis of psychiatric disorders, the methods, assumptions, and analytical steps are currently opaque and not readily accessible to researchers and clinicians outside the field. In this paper, we describe potential classification pipelines for Autism Spectrum Disorder, as an example of a psychiatric disorder. The analyses are based on resting-state fMRI data derived from a multi-site data repository (ABIDE). We compare several popular ML classifiers such as support vector machines, neural networks and regression approaches, among others. In a tutorial style, written to be equally accessible to researchers and clinicians, we explain the rationale of each classification approach, clarify the underlying assumptions, and discuss possible pitfalls and challenges. We also provide the data as well as the MATLAB code we used to achieve our results. We show that out-of-the-box ML classifiers can yield classification accuracies of about 60-70%. Finally, we discuss how classification accuracy can be further improved, and we mention methodological developments that are needed to pave the way for the use of ML classifiers in clinical practice.
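The train/predict pipeline such tutorials walk through can be reduced to a dependency-free toy: a nearest-centroid rule standing in for the SVMs and neural networks compared in the paper, with invented two-feature "connectivity" vectors in place of real fMRI features:

```python
def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid_fit(X, y):
    """One centroid per class label."""
    labels = sorted(set(y))
    return {lab: centroid([x for x, t in zip(X, y) if t == lab])
            for lab in labels}

def nearest_centroid_predict(model, x):
    """Assign x to the class with the closest centroid."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda lab: sqdist(model[lab], x))

# Made-up 2-feature "connectivity" vectors for two diagnostic groups:
X_train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y_train = ["control", "control", "ASD", "ASD"]
model = nearest_centroid_fit(X_train, y_train)
```

Real pipelines add feature selection and cross-validation on held-out sites, which is where most of the 60-70% accuracy figures quoted above are won or lost.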
Novel biodosimetry methods applied to victims of the Goiania accident
International Nuclear Information System (INIS)
Straume, T.; Langlois, R.G.; Lucas, J.; Jensen, R.H.; Bigbee, W.L.; Ramalho, A.T.; Brandao-Mello, C.E.
1991-01-01
Two biodosimetric methods under development at the Lawrence Livermore National Laboratory were applied to five persons accidentally exposed to a 137Cs source in Goiania, Brazil. The methods used were somatic null mutations at the glycophorin A locus, detected as missing proteins on the surface of blood erythrocytes, and chromosome translocations in blood lymphocytes, detected using fluorescence in-situ hybridization. Biodosimetric results obtained approximately 1 y after the accident using these new and largely unvalidated methods are in general agreement with results obtained immediately after the accident using dicentric chromosome aberrations. Additional follow-up of the Goiania accident victims will (1) help provide the information needed to validate these new methods for use in biodosimetry and (2) provide independent estimates of dose.
Newton-Krylov methods applied to nonequilibrium radiation diffusion
International Nuclear Information System (INIS)
Knoll, D.A.; Rider, W.J.; Olsen, G.L.
1998-01-01
The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus alleviating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, compared to an operator-split approach in which nonlinearities are not converged within a time step.
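The matrix-free idea can be sketched on a toy 1-D nonlinear problem (not radiation diffusion): the Jacobian-vector products needed by the inner Krylov solver are approximated by finite differences of the residual, so the Jacobian is never formed or stored. Plain conjugate gradients stands in for the Krylov method here because the toy Jacobian happens to be symmetric positive definite; the paper's Picard-based preconditioning is not reproduced.

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)

def F(u):
    """Residual of the toy nonlinear problem -u'' + u**3 = 1, u(0)=u(1)=0."""
    up = np.concatenate(([0.0], u, [0.0]))            # apply boundary values
    d2u = (up[:-2] - 2.0 * up[1:-1] + up[2:]) / h**2  # second difference
    return -d2u + u**3 - 1.0

def jfnk(F, u, newton_iters=20, cg_iters=100):
    """Jacobian-free Newton-Krylov sketch: J(u)@v is approximated by a
    finite difference of F, and each Newton step J dx = -F(u) is solved
    with (unpreconditioned) conjugate gradients."""
    for _ in range(newton_iters):
        r0 = F(u)
        if np.linalg.norm(r0) < 1e-8:     # nonlinear residual monitors convergence
            break

        def Jv(v):
            nv = np.linalg.norm(v)
            if nv == 0.0:
                return np.zeros_like(v)
            eps = 1e-8 * (1.0 + np.linalg.norm(u)) / nv
            return (F(u + eps * v) - r0) / eps   # matrix-free J(u) @ v

        dx = np.zeros_like(u)
        r = -r0.copy()
        p = r.copy()
        rs = r @ r
        for _ in range(cg_iters):
            Ap = Jv(p)
            alpha = rs / (p @ Ap)
            dx += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < 1e-10 * (1.0 + np.linalg.norm(r0)):
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        u = u + dx
    return u

u = jfnk(F, np.zeros(n))
```

Only residual evaluations of F are ever required, which is the property that makes the approach attractive for large implicit time steps.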
DEFF Research Database (Denmark)
Thorson, James T.; Kristensen, Kasper
2016-01-01
Statistical models play an important role in fisheries science when reconciling ecological theory with available data for wild populations or experimental studies. Ecological models increasingly include both fixed and random effects, and are often estimated using maximum likelihood techniques...... configurations of an age-structured population dynamics model. This simulation experiment shows that the epsilon-method and the existing bias-correction method perform equally well in data-rich contexts, but the epsilon-method is slightly less biased in data-poor contexts. We then apply the epsilon......-method to a spatial regression model when estimating an index of population abundance, and compare results with an alternative bias-correction algorithm that involves Markov-chain Monte Carlo sampling. This example shows that the epsilon-method leads to a biologically significant difference in estimates of average...
Methods of noxious insects control by radiation on example of 'Stegobium paniceum L.'
International Nuclear Information System (INIS)
Krajewski, A.
1997-01-01
The radiation method of disinfestation is described using the example of 'Stegobium paniceum L.'. The different stages of insect growth were irradiated, and their radiosensitivity was estimated on the basis of dose-response relationships. Biological radiation effects were observed as a limitation of insect reproduction. 26 refs, 4 figs, 1 tab
Innovative Public Procurement Methods: Examples Of Selected Country And Lessons For Turkey
Directory of Open Access Journals (Sweden)
Elif Ayşe ŞAHİN İPEK
2016-12-01
Full Text Available Innovative public procurement is regarded as a demand-side policy aimed at economic competitiveness, growth and development through the stimulation of private-sector innovation supply. This study examines the methods of innovative procurement policy and country examples, and draws out obstacles and solutions from the results of this examination.
Collecting and analyzing qualitative data: Hermeneutic principles, methods and case examples
Michael E. Patterson; Daniel R. Williams
2002-01-01
Over the past three decades, the use of qualitative research methods has become commonplace in social science as a whole and increasingly represented in tourism and recreation research. In tourism, for example, Markwell and Basche (1998) recently noted the emergence of a pluralistic perspective on science and the growth of research employing qualitative frameworks....
Coates, Peter S; Casazza, Michael L; Ricca, Mark A; Brussee, Brianne E; Blomberg, Erik J; Gustafson, K Benjamin; Overton, Cory T; Davis, Dawn M; Niell, Lara E; Espinosa, Shawn P; Gardner, Scott C; Delehanty, David J
2016-02-01
Predictive species distributional models are a cornerstone of wildlife conservation planning. Constructing such models requires robust underpinning science that integrates formerly disparate data types to achieve effective species management. Greater sage-grouse (Centrocercus urophasianus, hereafter 'sage-grouse') populations are declining throughout sagebrush-steppe ecosystems in North America, particularly within the Great Basin, which heightens the need for novel management tools that maximize the use of available information. Herein, we improve upon existing species distribution models by combining information about sage-grouse habitat quality, distribution and abundance from multiple data sources. To measure habitat, we created spatially explicit maps depicting habitat selection indices (HSI) informed by >35 500 independent telemetry locations from >1600 sage-grouse collected over 15 years across much of the Great Basin. These indices were derived from models that accounted for selection at different spatial scales and seasons. A region-wide HSI was calculated using the HSI surfaces modelled for 12 independent subregions and then demarcated into distinct habitat quality classes. We also employed a novel index to describe landscape patterns of sage-grouse abundance and space use (AUI). The AUI is a probabilistic composite of the following: (i) breeding density patterns based on the spatial configuration of breeding leks and associated trends in male attendance; and (ii) year-round patterns of space use indexed by the decreasing probability of use with increasing distance to leks. The continuous AUI surface was then reclassified into two classes representing high and low/no use and abundance. Synthesis and applications. Using the example of sage-grouse, we demonstrate how the joint application of indices of habitat selection, abundance and space use derived from multiple data sources yields a composite map that can guide effective allocation of management
Coates, Peter S.; Casazza, Michael L.; Ricca, Mark A.; Brussee, Brianne E.; Blomberg, Erik J.; Gustafson, K. Benjamin; Overton, Cory T.; Davis, Dawn M.; Niell, Lara E.; Espinosa, Shawn P.; Gardner, Scott C.; Delehanty, David J.
2016-01-01
Predictive species distributional models are a cornerstone of wildlife conservation planning. Constructing such models requires robust underpinning science that integrates formerly disparate data types to achieve effective species management. Greater sage-grouse Centrocercus urophasianus, hereafter “sage-grouse” populations are declining throughout sagebrush-steppe ecosystems in North America, particularly within the Great Basin, which heightens the need for novel management tools that maximize use of available information. Herein, we improve upon existing species distribution models by combining information about sage-grouse habitat quality, distribution, and abundance from multiple data sources. To measure habitat, we created spatially explicit maps depicting habitat selection indices (HSI) informed by > 35 500 independent telemetry locations from > 1600 sage-grouse collected over 15 years across much of the Great Basin. These indices were derived from models that accounted for selection at different spatial scales and seasons. A region-wide HSI was calculated using the HSI surfaces modelled for 12 independent subregions and then demarcated into distinct habitat quality classes. We also employed a novel index to describe landscape patterns of sage-grouse abundance and space use (AUI). The AUI is a probabilistic composite of: (i) breeding density patterns based on the spatial configuration of breeding leks and associated trends in male attendance; and (ii) year-round patterns of space use indexed by the decreasing probability of use with increasing distance to leks. The continuous AUI surface was then reclassified into two classes representing high and low/no use and abundance. Synthesis and applications. Using the example of sage-grouse, we demonstrate how the joint application of indices of habitat selection, abundance, and space use derived from multiple data sources yields a composite map that can guide effective allocation of management intensity across
Formal methods applied to industrial complex systems implementation of the B method
Boulanger, Jean-Louis
2014-01-01
This book presents real-world examples of formal techniques in an industrial context. It covers formal methods such as SCADE and/or the B Method, in various fields such as railways, aeronautics, and the automotive industry. The purpose of this book is to present a summary of experience on the use of "formal methods" (based on formal techniques such as proof, abstract interpretation and model-checking) in industrial examples of complex systems, based on the experience of people currently involved in the creation and assessment of safety critical system software. The involvement of people from
GPS surveying method applied to terminal area navigation flight experiments
Energy Technology Data Exchange (ETDEWEB)
Murata, M; Shingu, H; Satsushima, K; Tsuji, T; Ishikawa, K; Miyazawa, Y; Uchida, T [National Aerospace Laboratory, Tokyo (Japan)
1993-03-01
With the objective of evaluating the accuracy of new landing and navigation systems, such as the microwave landing guidance system and the global positioning satellite (GPS) system, flight experiments are being carried out using an experimental aircraft. This aircraft carries a GPS receiver, whose accuracy is evaluated by comparing the navigation results with reference trajectories estimated by a Kalman filter from laser tracking data of the aircraft. The GPS outputs position and velocity information in an earth-centered, earth-fixed system called the World Geodetic System 1984 (WGS84). However, in order to compare the navigation results with the output of a reference trajectory sensor or another navigation sensor, it is necessary to construct a high-precision reference coordinate system based on WGS84. A method that applies GPS phase-interference measurement to this problem was proposed and was actually used in analyzing flight experiment data. When applied to the evaluation of independent navigation accuracy, the method proved sufficiently effective and reliable, not only for navigation analysis but also for navigational operations. 12 refs., 10 figs., 5 tabs.
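The reference-frame construction itself is not detailed in the abstract, but the WGS84 conversion it rests on is standard. The sketch below implements the usual geodetic-to-ECEF (earth-centered, earth-fixed) formulas for the WGS-84 ellipsoid:

```python
import math

# WGS-84 ellipsoid constants (standard defining values).
A  = 6378137.0                 # semi-major axis [m]
F  = 1.0 / 298.257223563       # flattening
E2 = F * (2.0 - F)             # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic latitude, longitude [deg] and height [m] to ECEF."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)  # prime-vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z

# A point on the equator at the prime meridian lies on the semi-major axis.
x, y, z = geodetic_to_ecef(0.0, 0.0, 0.0)   # -> (6378137.0, 0.0, 0.0)
```

Comparing a navigation solution against a laser-tracked reference requires both to be expressed consistently in a frame such as this one.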
Applied statistical methods in agriculture, health and life sciences
Lawal, Bayo
2014-01-01
This textbook teaches crucial statistical methods to answer research questions using a unique range of statistical software programs, including MINITAB and R. It is developed for undergraduate students in agriculture, nursing, biology and biomedical research. Graduate students will also find it a useful way to refresh their statistics skills and to reference software options. Examples are worked in both MINITAB and R to draw on each package's individual strengths. Subjects covered include, among others, data description, probability distributions, experimental design, regression analysis, randomized design and biological assay. Unlike other biostatistics textbooks, this text also covers outliers, influential observations in regression, and an introduction to survival analysis. Material is taken from the author's extensive teaching and research in Africa, the USA and the UK. Sample problems, references and electronic supplementary material accompany each chapter.
Analysis of concrete beams using applied element method
Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen
2018-03-01
The Applied Element Method (AEM) is a displacement-based method of structural analysis. Some of its features are similar to those of the Finite Element Method (FEM). In AEM, the structure is analysed by dividing it into several elements, as in FEM; but in AEM the elements are connected by springs instead of nodes. In this paper, the background to AEM is discussed and the necessary equations are derived. To illustrate its application, AEM is used to analyse a plain concrete beam with fixed supports. The analysis is limited to 2-dimensional structures. It was found that the number of springs does not have much influence on the results, and AEM could predict deflections and reactions with a reasonable degree of accuracy.
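The paper's equations are not reproduced in the abstract, but in the standard AEM formulation the connecting-spring stiffnesses are commonly taken as k = E·d·T/a for normal springs and k = G·d·T/a for shear springs. A small sketch with assumed (illustrative, not the paper's) concrete properties:

```python
# AEM connecting-spring stiffnesses between two adjacent square elements.
# All numerical values below are assumed for illustration.
E = 30e9                 # Young's modulus of concrete [Pa] (assumed)
nu = 0.2                 # Poisson's ratio (assumed)
G = E / (2 * (1 + nu))   # shear modulus from isotropic elasticity

a = 0.05                 # element size [m]
T = 0.20                 # beam thickness [m]
n_springs = 10           # springs per element face
d = a / n_springs        # tributary width represented by each spring

k_normal = E * d * T / a   # axial (normal) spring stiffness [N/m]
k_shear  = G * d * T / a   # shear spring stiffness [N/m]
```

Because each spring represents only a tributary strip of the interface, refining the number of springs changes d and k together, which is consistent with the paper's observation that the spring count has little influence on the results.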
The Lattice Boltzmann Method applied to neutron transport
International Nuclear Information System (INIS)
Erasmus, B.; Van Heerden, F. A.
2013-01-01
In this paper the applicability of the Lattice Boltzmann Method to neutron transport is investigated. One of the main features of the Lattice Boltzmann Method is the simultaneous discretization of the phase space of the problem, whereby particles are restricted to move on a lattice. An iterative solution of the operator form of the neutron transport equation is presented here, with the first collision source as the starting point of the iteration scheme. A full description of the discretization scheme is given, along with the quadrature set used for the angular discretization. An angular refinement scheme is introduced to increase the angular coverage of the problem phase space and to mitigate lattice ray effects. The method is applied to a model problem to investigate its applicability to neutron transport, and the results are compared to a reference solution calculated using MCNP. (authors)
Advanced methods for image registration applied to JET videos
Energy Technology Data Exchange (ETDEWEB)
Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Murari, Andrea [Consorzio RFX, Associazione EURATOM-ENEA per la Fusione, Padova (Italy); Gelfusa, Michela [Associazione EURATOM-ENEA – University of Rome “Tor Vergata”, Roma (Italy); Tiseanu, Ion; Zoita, Vasile [EURATOM-MEdC Association, NILPRP, Bucharest (Romania); Arnoux, Gilles [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon (United Kingdom)
2015-10-15
Graphical abstract: - Highlights: • Development of an image registration method for JET IR and fast visible cameras. • Method based on SIFT descriptors and the coherent point drift point-set registration technique. • Method able to deal with extremely noisy and very low-luminosity images. • Computation time compatible with inter-shot analysis. - Abstract: Recent years have witnessed a significant increase in the use of digital cameras on JET. They are routinely applied for imaging in the IR and visible spectral regions. One of the main technical difficulties in interpreting the data of camera-based diagnostics is the presence of movements of the field of view. Small movements occur due to machine shaking during normal pulses, while large ones may arise during disruptions. Some cameras show a correlation of image movement with change of magnetic field strength. For deriving unaltered information from the videos and for allowing correct interpretation, an image registration method, based on highly distinctive scale invariant feature transform (SIFT) descriptors and on the coherent point drift (CPD) point-set registration technique, has been developed. The algorithm incorporates a complex procedure for rejecting outliers. The method has been applied for vibration correction to videos collected by the JET wide-angle infrared camera, and for the correction of spurious rotations in the case of the JET fast visible camera (which is equipped with an image intensifier). The method has proved able to deal with the images provided by this camera, which are frequently characterized by low contrast and a high level of blurring and noise.
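The full SIFT + coherent point drift pipeline is too involved to reproduce here, but its final step, estimating a rigid transform from matched feature points, can be sketched with the closed-form Kabsch/Procrustes solution. The point coordinates and the 3° spurious rotation below are synthetic, not JET data:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid alignment (Kabsch): find R, t with dst ≈ src@R.T + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(40, 2))          # feature points in frame 1
theta = np.deg2rad(3.0)                          # small spurious rotation
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = pts @ R_true.T + np.array([2.0, -1.5])   # shifted/rotated frame 2
R, t = rigid_register(pts, moved)                # recovers R_true and the shift
```

CPD generalizes this idea to noisy point sets with unknown correspondences and outliers, which is why it suits the low-contrast, blurred JET frames.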
Classification of Specialized Farms Applying Multivariate Statistical Methods
Directory of Open Access Journals (Sweden)
Zuzana Hloušková
2017-01-01
Full Text Available The paper applies advanced multivariate statistical methods to classify cattle-breeding farming enterprises by their economic size. An advantage of the model is its ability to use a few selected indicators, compared with the complex methodology of the current classification model, which requires knowledge of the detailed structure of herd turnover and of the cultivated crops. The output of the paper is intended to be applied within farm-structure research focused on the future development of Czech agriculture. As the data source, the farming-enterprise database for 2014 from the FADN CZ system has been used. The predictive model proposed exploits knowledge of the actual size classes of the farms tested. Outcomes of the linear discriminant analysis multifactor classification method supported correct classification of Small farms (98% filed correctly) and of the Large and Very Large enterprises (100% filed correctly); Medium-size farms were correctly classified in only 58.11% of cases. Partial shortcomings of the presented procedure were found when discriminating between Medium and Small farms.
Balancing of linkages and robot manipulators advanced methods with illustrative examples
Arakelian, Vigen
2015-01-01
This book presents advanced balancing methods for planar and spatial linkages, and for hand-operated and automatic robot manipulators. It is organized into three main parts and eight chapters: an introduction to balancing, the balancing of linkages, and the balancing of robot manipulators. The review of state-of-the-art literature, including more than 500 references, discloses particularities of shaking force/moment balancing and gravity compensation methods. New methods for the balancing of linkages are then considered. The methods provided in the second part of the book deal with the partial and complete shaking force/moment balancing of various linkages. A new field of application for balancing methods is the design of mechanical systems for fast manipulation. Special attention is given to the shaking force/moment balancing of robot manipulators. Gravity balancing methods are also discussed. The suggested balancing methods are illustrated by numerous examples.
Metrological evaluation of characterization methods applied to nuclear fuels
International Nuclear Information System (INIS)
Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho
2010-01-01
In manufacturing nuclear fuel, characterizations are performed in order to assure the minimization of harmful effects. Uranium dioxide is the substance most used as nuclear reactor fuel because of its many advantages, such as high stability even in contact with water at high temperatures, a high melting point, and a high capacity to retain fission products. Several methods are used for the characterization of nuclear fuels: thermogravimetric analysis for the O/U ratio; the penetration-immersion method, helium pycnometry and mercury porosimetry for density and porosity; the BET method for specific surface area; chemical analyses for relevant impurities; and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary ion mass spectrometry for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel, and sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO2 that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial for the thermal-hydraulic codes used to study design-basis accidents. The work focuses on the thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) for UO2 samples. The thermal characterization of UO2 samples was determined by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo method was used to obtain the endpoints of the
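The adaptive Monte Carlo method mentioned above (as in GUM Supplement 1) propagates input uncertainties through the measurement model by sampling. A minimal, non-adaptive sketch of the same idea for thermal conductivity, k = α·ρ·c_p, using assumed (illustrative, not measured) input estimates and uncertainties:

```python
import numpy as np

rng = np.random.default_rng(42)
M = 200_000   # number of Monte Carlo trials

# Assumed input estimates and standard uncertainties for UO2 near room
# temperature -- illustrative values only, NOT results from the paper.
alpha = rng.normal(2.9e-6, 0.1e-6, M)   # thermal diffusivity [m^2/s]
rho   = rng.normal(10.4e3, 0.05e3, M)   # density [kg/m^3]
cp    = rng.normal(235.0, 5.0, M)       # specific heat [J/(kg K)]

k = alpha * rho * cp                    # thermal conductivity [W/(m K)]

k_mean = k.mean()                       # best estimate
u_k = k.std(ddof=1)                     # standard uncertainty
lo, hi = np.percentile(k, [2.5, 97.5])  # 95 % coverage interval
```

The adaptive variant differs only in that it increases M until the estimate, uncertainty and coverage-interval endpoints are numerically stable to the required tolerance.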
Nuclear and nuclear related analytical methods applied in environmental research
International Nuclear Information System (INIS)
Popescu, Ion V.; Gheboianu, Anca; Bancuta, Iulian; Cimpoca, G. V; Stihi, Claudia; Radulescu, Cristiana; Oros Calin; Frontasyeva, Marina; Petre, Marian; Dulama, Ioana; Vlaicu, G.
2010-01-01
Nuclear analytical methods can be used for research activities in environmental studies such as water quality assessment, pesticide residues, global climatic change (transboundary), pollution and remediation. Heavy metal pollution is a problem associated with areas of intensive industrial activity. In this work, the moss biomonitoring technique was employed to study atmospheric deposition in Dambovita County, Romania. Complementary nuclear and atomic analytical methods were also used: Neutron Activation Analysis (NAA), Atomic Absorption Spectrometry (AAS) and Inductively Coupled Plasma Atomic Emission Spectrometry (ICP-AES). These high-sensitivity analytical methods were used to determine the chemical composition of moss samples placed in areas with different industrial pollution sources. The concentrations of Cr, Fe, Mn, Ni and Zn were determined. The concentration of Fe in the same samples was determined using all these methods with very good agreement, within statistical limits, which demonstrates the capability of these analytical methods to be applied to a large spectrum of environmental samples with consistent results. (authors)
Applied systems ecology: models, data, and statistical methods
Energy Technology Data Exchange (ETDEWEB)
Eberhardt, L L
1976-01-01
In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.
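The report's point that the best predictive devices may be simple regression equations, often non-linear, extracted from more detailed models can be illustrated with a toy example: fitting y = a·e^(−bt) to model output by log-linearization. All numbers below are hypothetical:

```python
import numpy as np

# Hypothetical output of a detailed simulation model: a contaminant
# burden that decays exponentially, y = a * exp(-b * t).
t = np.linspace(0.0, 10.0, 25)
a_true, b_true = 5.0, 0.35
y = a_true * np.exp(-b_true * t)

# Extract a compact predictive regression equation by log-linearization:
# ln y = ln a - b t is a straight line, which np.polyfit recovers.
slope, intercept = np.polyfit(t, np.log(y), 1)
a_fit, b_fit = np.exp(intercept), -slope
```

The fitted pair (a_fit, b_fit) is the kind of compact "predictive device" the report describes: far cheaper to evaluate than the detailed model it summarizes.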
Analysis of Brick Masonry Wall using Applied Element Method
Lincy Christy, D.; Madhavan Pillai, T. M.; Nagarajan, Praveen
2018-03-01
The Applied Element Method (AEM) is a versatile tool for structural analysis. Analysis is done by discretising the structure, as in the case of the Finite Element Method (FEM); in AEM, however, elements are connected by a set of normal and shear springs instead of nodes. AEM is extensively used for the analysis of brittle materials. A brick masonry wall can be effectively analysed within the framework of AEM, since the composite nature of masonry is easily modelled using springs: the brick springs and mortar springs are assumed to be connected in series. The brick masonry wall is analysed and the failure load is determined for different loading cases. The results were used to find the brick aspect ratio that best strengthens a masonry wall.
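The series connection of brick and mortar springs mentioned above implies that their compliances add. A minimal sketch with assumed (illustrative, not the paper's) material values:

```python
# Equivalent stiffness of a brick spring and a mortar spring in series,
# as assumed for the composite masonry model. Values are illustrative.
E_brick, E_mortar = 10e9, 1e9   # Young's moduli [Pa] (assumed)
d, T = 0.01, 0.10               # spring spacing and wall thickness [m]
L_brick, L_mortar = 0.105, 0.01 # lengths each spring represents [m]

k_brick = E_brick * d * T / L_brick    # stiffness of the brick segment
k_mortar = E_mortar * d * T / L_mortar # stiffness of the mortar joint

# Springs in series carry the same force, so compliances (1/k) add.
k_eq = 1.0 / (1.0 / k_brick + 1.0 / k_mortar)
```

The equivalent stiffness is always below that of either constituent, which is why the soft mortar joints dominate the wall's deformation response.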
Thermally stimulated current method applied to highly irradiated silicon diodes
Pintilie, I; Pintilie, I; Moll, Michael; Fretwurst, E; Lindström, G
2002-01-01
We propose an improved method for the analysis of thermally stimulated currents (TSC) measured on highly irradiated silicon diodes. The proposed TSC formula for the evaluation of a set of TSC spectra obtained with different reverse biases leads not only to the concentrations of the electron and hole traps visible in the spectra, but also gives an estimate of the concentration of defects which do not give rise to a peak in the 30-220 K TSC temperature range (very shallow or very deep levels). The method is applied to a diode irradiated with a neutron fluence of Φn = 1.82×10^13 n/cm^2.
Hybrid electrokinetic method applied to mix contaminated soil
Energy Technology Data Exchange (ETDEWEB)
Mansour, H.; Maria, E. [Dept. of Building Civil and Environmental Engineering, Concordia Univ., Montreal (Canada)
2001-07-01
Several industrial and municipal areas in North America are contaminated with heavy metals and petroleum products. This mixed contamination presents a particularly difficult remediation task when it occurs in clayey soil. The objective of this research was to find a method to clean up mixed-contaminated clayey soils; a multifunctional hybrid electrokinetic method was investigated. Clayey soil was contaminated with lead and nickel (heavy metals) at the level of 1000 ppm and with phenanthrene (a PAH) at 600 ppm. An electrokinetic surfactant supply system was applied for the mobilization, transport and removal of phenanthrene. A chelating agent (EDTA) was also supplied electrokinetically to mobilize heavy metals. The studies were performed on 8 lab-scale electrokinetic cells. The mixed-contaminated clayey soil was subjected to a DC total voltage gradient of 0.3 V/cm. The supplied liquids (surfactant and EDTA) were introduced over different periods of time (22 days, 42 days) in order to maximize removal of contaminants. The pH, electrical parameters, volume supplied and volume discharged were monitored continuously during each experiment. At the end of these tests, soil and catholyte were subjected to physico-chemical analysis. The paper discusses the results of the experiments, including optimal energy use, removal efficiency of phenanthrene, and the transport and removal of heavy metals. The results of this study can be applied to in-situ hybrid electrokinetic technology to remediate clayey sites contaminated with petroleum products mixed with heavy metals (e.g. manufactured gas plant sites). (orig.)
A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy
Directory of Open Access Journals (Sweden)
Oktay Büyükaşık
2010-12-01
Full Text Available Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomatology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy due to gastric cancer in the 2nd Clinic of General Surgery, SSK Ankara Training Hospital. Six different reconstruction methods were used and analyzed in terms of age, sex and postoperative complications. One biopsy specimen from the esophagus and two from the jejunum were taken through upper gastrointestinal endoscopy from all cases, and late-period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea and anorexia were assessed. Results: Of the 31 patients, 18 were male and 13 female; the youngest was 33 years old and the oldest 69. Reconstruction without a pouch was performed in 22 cases and with a pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea and anemia were found most commonly in cases reconstructed without a pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was most common with omega esophagojejunostomy (EJ) and least common with Roux-en-Y, Tooley and Tanner 19 EJ. Conclusion: Reconstruction with a pouch after total gastrectomy remains a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31)
Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan
2017-01-01
The concept of a composite road safety index is popular and relatively new among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), there is a need to choose an adequate method that makes the comparison fair to all compared units. Comparisons using one specific indicator (a parameter describing safety or unsafety) can end up with totally different rankings of the compared units, which makes it quite complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant, since road safety is a complex system for which more and more indicators are constantly being developed. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used the efficiencies (composite indexes) obtained by different DEA- and TOPSIS-based models to present a PROMETHEE-RS model for selection of the optimal method for a composite index. The method for selection of the optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Vujičić Momčilo D.
2017-01-01
Full Text Available This paper presents a comparative analysis of two objective techniques for criteria weighting, Entropy and CRITIC, and two MCDM methods, MOORA and SAW, on the example of air-conditioner selection. Six variants were used for calculation of the normalized performance ratings. Results showed that the choice of the best air conditioner was essentially independent of the MCDM method used, regardless of the technique applied for determination of the criteria weights. The complete ranking over all combinations of methods and techniques with the diverse ratio-calculation variants showed that the best-ranked air conditioner was A7, while the worst were A5 and A9. Significant positive correlation was obtained for almost all pairs of variants in all combinations except the MOORA-CRITIC combination, with the SAW-Entropy combination having the highest correlations between variants (p < 0.01).
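Of the two MCDM methods compared, SAW (simple additive weighting) is the easier to sketch: normalize the decision matrix, then rank alternatives by the weighted sum of the normalized ratings. The decision matrix, criteria and weights below are made up for illustration and are not the paper's data; linear max/min normalization is just one of the several ratio-calculation variants the paper compares:

```python
import numpy as np

# Illustrative decision matrix: 4 air conditioners x 3 criteria
# (cooling capacity [benefit], noise [cost], price [cost]).
X = np.array([[3.5, 45.0, 600.0],
              [2.8, 38.0, 450.0],
              [4.0, 50.0, 800.0],
              [3.2, 40.0, 500.0]])
benefit = np.array([True, False, False])   # criterion direction
w = np.array([0.5, 0.2, 0.3])              # assumed criteria weights

# Linear normalization: x/max for benefit criteria, min/x for cost criteria.
R = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
scores = R @ w                      # SAW: weighted sum of normalized ratings
ranking = np.argsort(-scores)       # alternatives ordered best-first
```

Swapping in another normalization variant or weighting technique changes only the construction of R and w, which is exactly the kind of sensitivity the paper's six-variant comparison probes.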
Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective
Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith
2010-01-01
The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…
Analytical methods applied to diverse types of Brazilian propolis
Directory of Open Access Journals (Sweden)
Marcucci Maria
2011-06-01
Full Text Available Abstract Propolis is a bee product, composed mainly of plant resins and beeswax; its chemical composition therefore varies with the geographic and plant origins of these resins, as well as with the bee species. Brazil is an important supplier of propolis on the world market and, although the green propolis from the southeast is the best known and most studied, several other types of propolis from Apis mellifera and native stingless bees (also called cerumen) can be found. Propolis is usually consumed as an extract, so the type of solvent and the extraction procedures employed further affect its composition. Methods used for extraction; analysis of the percentages of resin, wax, and insoluble material in crude propolis; and determination of phenolic, flavonoid, amino acid, and heavy metal contents are reviewed herein. Different chromatographic methods applied to the separation, identification, and quantification of Brazilian propolis components, and their relative strengths, are discussed, as well as direct-insertion mass spectrometry fingerprinting. Propolis has been used as a popular remedy for several centuries for a wide array of ailments. Its antimicrobial properties, present in propolis from different origins, have been extensively studied. More recently, the anti-parasitic, anti-viral/immune-stimulating, healing, anti-tumor, anti-inflammatory, antioxidant, and analgesic activities of diverse types of Brazilian propolis have also been evaluated. The most common methods employed and overviews of their relative results are presented.
Pedagogies in Action: A Community Resource Linking Teaching Methods to Examples of their Use
Manduca, C. A.; Fox, S. P.; Iverson, E. A.; Kirk, K.; Ormand, C. J.
2009-12-01
The Pedagogies in Action portal (http://serc.carleton.edu/sp) provides access to information on more than 40 teaching methods with examples of their use in geoscience and beyond. Each method is described with pages addressing what the method is, why or when it is useful, and how it can be implemented. New methods added this year include Teaching with Google Earth, Jigsaw, Teaching the Process of Science, Guided Discovery Problems, Teaching Urban Students, and Using ConceptTests. Examples then show specifically how the method has been used to teach concepts in a variety of disciplines. The example collection now includes 775 teaching activities of which more than 550 are drawn from the geosciences. Geoscience faculty are invited to add their own examples to this collection or to test examples in the collection and provide a review. Evaluation results show that the combination of modules and activities inspires teachers at all levels to use a new pedagogy and increases their confidence that they can use it successfully. In addition, submitting activities to the collection, including writing summary information for other instructors, helps them think more carefully about the design of their activity. The activity collections are used both for ready-to-use activities and to find ideas for new activities. The portal provides overarching access to materials developed by a wide variety of collaborating partners, each of which uses the service to create a customized pedagogic portal addressing a more specific audience. Of interest to AGU members are pedagogic portals on Starting Point: Teaching Introductory Geoscience (http://serc.carleton.edu/introgeo); On the Cutting Edge (http://serc.carleton.edu/NAGTWorkshops); Enduring Resources for Earth System Education (http://earthref.org/ERESE); Microbial Life Educational Resources (http://serc.carleton.edu/microbe_life); the National Numeracy Network (http://serc.carleton.edu/nnn/index.html); CAUSE: The Consortium for
Teaching organization theory for healthcare management: three applied learning methods.
Olden, Peter C
2006-01-01
Organization theory (OT) provides a way of seeing, describing, analyzing, understanding, and improving organizations based on patterns of organizational design and behavior (Daft 2004). It gives managers models, principles, and methods with which to diagnose and fix problems of organization structure, design, and process. Health care organizations (HCOs) face serious problems such as fatal medical errors, harmful treatment delays, misuse of scarce nurses, costly inefficiency, and service failures. Some of health care managers' most critical work involves designing and structuring their organizations so that their missions, visions, and goals can be achieved, and in some cases so that their organizations can survive. Thus, it is imperative that graduate healthcare management programs develop effective approaches for teaching OT to students who will manage HCOs. Guided by principles of education, three applied teaching/learning activities/assignments were created to teach OT in a graduate healthcare management program. These educational methods develop students' competency with OT applied to HCOs. The teaching techniques in this article may be useful to faculty teaching graduate courses in organization theory and related subjects such as leadership, quality, and operations management.
Six Sigma methods applied to cryogenic coolers assembly line
Ventre, Jean-Marc; Germain-Lacour, Michel; Martin, Jean-Yves; Cauquil, Jean-Marc; Benschop, Tonny; Griot, René
2009-05-01
The Six Sigma method has been applied to the manufacturing process of a rotary Stirling cooler, the RM2. The project is named NoVa, as the main goal of the Six Sigma approach is to reduce variability (No Variability). The project followed the DMAIC guideline in five stages: Define, Measure, Analyse, Improve, Control. The objective was set on the rate of coolers passing the performance test at the first attempt, with a goal value of 95%. A team was gathered involving the people and skills acting on the RM2 manufacturing line. Measurement System Analysis (MSA) was applied to the test bench, and the results of an R&R gage study showed that measurement was one of the root causes of variability in the RM2 process. Two more root causes were identified by the team after process-mapping analysis: the regenerator filling factor and the cleaning procedure. The causes of measurement variability were identified and eradicated, as confirmed by new R&R gage results. Experimental results show that the regenerator filling factor impacts process variability and affects yield. An improved process was established after a new calibration process for the test bench, a new filling procedure for the regenerator, and an additional cleaning stage were implemented. The objective of 95% of coolers passing the performance test at the first attempt has been reached and maintained for a significant period. The RM2 manufacturing process is now managed according to Statistical Process Control based on control charts. The improvement in process capability has enabled the introduction of a sample-testing procedure before delivery.
Metrological evaluation of characterization methods applied to nuclear fuels
Energy Technology Data Exchange (ETDEWEB)
Faeda, Kelly Cristina Martins; Lameiras, Fernando Soares; Camarano, Denise das Merces; Ferreira, Ricardo Alberto Neto; Migliorini, Fabricio Lima; Carneiro, Luciana Capanema Silva; Silva, Egonn Hendrigo Carvalho, E-mail: kellyfisica@gmail.co, E-mail: fernando.lameiras@pq.cnpq.b, E-mail: dmc@cdtn.b, E-mail: ranf@cdtn.b, E-mail: flmigliorini@hotmail.co, E-mail: lucsc@hotmail.co, E-mail: egonn@ufmg.b [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2010-07-01
In manufacturing nuclear fuel, characterizations are performed in order to assure the minimization of harmful effects. Uranium dioxide is the most widely used nuclear reactor fuel because of its many advantages, such as high stability even in contact with water at high temperatures, a high melting point, and a high capacity to retain fission products. Several methods are used for characterization of nuclear fuels, such as thermogravimetric analysis for the O/U ratio, the penetration-immersion method, helium pycnometry and mercury porosimetry for density and porosity, the BET method for specific surface, chemical analyses for relevant impurities, and the laser flash method for thermophysical properties. Specific tools are needed to control the diameter and sphericity of the microspheres and the properties of the coating layers (thickness, density, and degree of anisotropy). Other methods can also give information, such as scanning and transmission electron microscopy, X-ray diffraction, microanalysis, and secondary ion mass spectrometry for chemical analysis. The accuracy of measurement and the level of uncertainty of the resulting data are important. This work describes a general metrological characterization of some techniques applied to the characterization of nuclear fuel. Sources of measurement uncertainty were analyzed. The purpose is to summarize selected properties of UO{sub 2} that have been studied by CDTN in a program of fuel development for Pressurized Water Reactors (PWR). The selected properties are crucial for thermal-hydraulic codes used to study basic design accidents. The work focused on the thermal characterization (thermal diffusivity and thermal conductivity) and the penetration-immersion method (density and open porosity) for UO{sub 2} samples. The thermal characterization of UO{sub 2} samples was determined by the laser flash method between room temperature and 448 K. The adaptive Monte Carlo Method was used to obtain the endpoints of
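The laser flash evaluation mentioned above conventionally relies on Parker's relation for the ideal adiabatic case, turning a measured half-rise time into thermal diffusivity and then conductivity. The sketch below shows that step; the thickness, half-rise time, density, and specific heat are illustrative values, not CDTN measurement data.

```python
# Laser flash evaluation sketch: Parker's formula for thermal diffusivity,
# then conductivity from k = alpha * rho * c_p. Inputs are illustrative.

def thermal_diffusivity(thickness_m, half_rise_time_s):
    """Parker's relation for the ideal adiabatic case:
    alpha = 0.1388 * L^2 / t_half."""
    return 0.1388 * thickness_m ** 2 / half_rise_time_s

def thermal_conductivity(alpha, density, specific_heat):
    """k = alpha * rho * c_p."""
    return alpha * density * specific_heat

alpha = thermal_diffusivity(thickness_m=1.0e-3, half_rise_time_s=0.12)  # m^2/s
k = thermal_conductivity(alpha, density=10400.0, specific_heat=280.0)   # W/(m K)
```

The uncertainty analysis the abstract describes would then propagate the uncertainties of L, t_half, rho, and c_p through these two relations, e.g. with the adaptive Monte Carlo method mentioned in the text.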
Caro, Daniel H.; Kyriakides, Leonidas; Televantou, Ioulia
2018-01-01
Omitted prior achievement bias is pervasive in international assessment studies and precludes causal inference. For example, reported negative associations between student-oriented teaching strategies and student performance are against expectations and might actually reflect omitted prior achievement bias. Namely, that these teaching strategies…
Applying systems ergonomics methods in sport: A systematic review.
Hulme, Adam; Thompson, Jason; Plant, Katherine L; Read, Gemma J M; Mclean, Scott; Clacy, Amanda; Salmon, Paul M
2018-04-16
As sports systems become increasingly more complex, competitive, and technology-centric, there is a greater need for systems ergonomics methods to consider the performance, health, and safety of athletes in context with the wider settings in which they operate. Therefore, the purpose of this systematic review was to identify and critically evaluate studies which have applied a systems ergonomics research approach in the context of sports performance and injury management. Five databases (PubMed, Scopus, ScienceDirect, Web of Science, and SPORTDiscus) were searched for the dates 01 January 1990 to 01 August 2017, inclusive, for original peer-reviewed journal articles and conference papers. Reported analyses were underpinned by a recognised systems ergonomics method, and study aims were related to the optimisation of sports performance (e.g. communication, playing style, technique, tactics, or equipment), and/or the management of sports injury (i.e. identification, prevention, or treatment). A total of seven articles were identified. Two articles were focussed on understanding and optimising sports performance, whereas five examined sports injury management. The methods used were the Event Analysis of Systemic Teamwork, Cognitive Work Analysis (the Work Domain Analysis Abstraction Hierarchy), Rasmussen's Risk Management Framework, and the Systems Theoretic Accident Model and Processes method. The individual sport application was distance running, whereas the team sports contexts examined were cycling, football, Australian Football League, and rugby union. The included systems ergonomics applications were highly flexible, covering both amateur and elite sports contexts. The studies were rated as valuable, providing descriptions of injury controls and causation, the factors influencing injury management, the allocation of responsibilities for injury prevention, as well as the factors and their interactions underpinning sports performance. Implications and future
The virtual fields method applied to spalling tests on concrete
Directory of Open Access Journals (Sweden)
Forquin P.
2012-08-01
Full Text Available For a decade, spalling techniques based on a metallic Hopkinson bar placed in contact with a concrete sample have been widely employed to characterize the dynamic tensile strength of concrete at strain rates ranging from a few tens to two hundred s−1. However, the processing method, mainly based on the velocity profile measured on the rear free surface of the sample (Novikov formula), remains quite basic, and an identification of the whole softening behaviour of the concrete is out of reach. In the present paper a new processing method is proposed based on the Virtual Fields Method (VFM). First, a digital high-speed camera is used to record pictures of a grid glued on the specimen. Next, full-field measurements are used to obtain the axial displacement field at the surface of the specimen. Finally, a specific virtual field is defined in the VFM equation to use the acceleration map as an alternative 'load cell'. Applied to three spalling tests, this method made it possible to identify Young's modulus during the test. It was shown that this modulus is constant during the initial compressive part of the test and decreases in the tensile part when micro-damage exists. It was also shown that in such a simple inertial test it is possible to reconstruct average axial stress profiles using only the acceleration data. It was then possible to construct local stress-strain curves and derive a tensile strength value.
An alternative method for determination of oscillator strengths: The example of Sc II
International Nuclear Information System (INIS)
Ruczkowski, J.; Elantkowska, M.; Dembczyński, J.
2014-01-01
We describe our method for determining oscillator strengths and hyperfine structure splittings that is an alternative to the commonly used, purely theoretical calculations, or to the semi-empirical approach combined with theoretically calculated transition integrals. We have developed our own computer programs that allow us to determine all attributes of the structure of complex atoms starting from the measured frequencies emitted by the atoms. As an example, we present the results of the calculation of the structure, electric dipole transitions, and hyperfine splittings of Sc II. The angular coefficients of the transition matrix in pure SL coupling were found from straightforward Racah algebra. The transition matrix was transformed into the actual intermediate coupling by the fine structure eigenvectors obtained from the semi-empirical approach. The transition integrals were treated as free parameters in the least squares fit to experimental gf values. For most transitions, the experimental and the calculated gf-values are consistent with the accuracy claimed in the NIST compilation. - Highlights: • The method of simultaneous determination of all the attributes of atomic structure. • The semi-empirical method of parameterization of oscillator strengths. • Illustration of the method application for the example of Sc II data
Flood Hazard Mapping by Applying Fuzzy TOPSIS Method
Han, K. Y.; Lee, J. Y.; Keum, H.; Kim, B. J.; Kim, T. H.
2017-12-01
There are many technical methods for integrating various factors into flood hazard mapping. The purpose of this study is to suggest a methodology for integrated flood hazard mapping using MCDM (Multi-Criteria Decision Making). MCDM problems involve a set of alternatives that are evaluated on the basis of conflicting and incommensurate criteria. In this study, to apply MCDM to assessing flood risk, maximum flood depth, maximum velocity, and maximum travel time are considered as criteria, and each element to which they are applied is considered as an alternative. Finding the efficient alternative closest to an ideal value is an appropriate way to assess the flood risk of many element units (alternatives) based on various flood indices. Therefore TOPSIS, the most commonly used MCDM scheme, is adopted to create the flood hazard map. The indices for flood hazard mapping (maximum flood depth, maximum velocity, and maximum travel time) carry uncertainty, since simulation results vary with the flood scenario and topographical conditions. This ambiguity in the indices can cause uncertainty in the flood hazard map. To handle the ambiguity and uncertainty of the criteria, fuzzy logic, which can represent ambiguous expressions, is introduced. In this paper, we produced a flood hazard map for levee-breach overflow using the fuzzy TOPSIS technique. We identified the areas with the highest hazard grade in the resulting integrated flood hazard map, which can then be compared with the areas indicated in existing flood risk maps. We also expect that applying the flood hazard mapping methodology suggested here to the production of current flood risk maps would yield a new flood hazard map that also considers priorities among hazard areas and includes more varied and important information than before. Keywords: flood hazard map; levee-breach analysis; 2D analysis; MCDM; fuzzy TOPSIS
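A crisp TOPSIS pass over grid cells scored on the three flood indices might look like the sketch below. The fuzzy extension actually used in the study (fuzzy numbers for ambiguous criteria) is omitted for brevity, and the cell values, weights, and criterion directions are assumptions made for illustration.

```python
import math

# Crisp TOPSIS sketch: rank grid cells by closeness to the ideal (most
# hazardous) combination of flood indices. Values are hypothetical.

def topsis(matrix, weights, benefit):
    """Closeness coefficients in [0, 1]; higher = closer to the ideal."""
    cols = list(zip(*matrix))
    norms = [math.sqrt(sum(x * x for x in col)) for col in cols]  # vector norm
    v = [[w * x / n for x, w, n in zip(row, weights, norms)] for row in matrix]
    vcols = list(zip(*v))
    ideal = [max(c) if b else min(c) for c, b in zip(vcols, benefit)]
    anti = [min(c) if b else max(c) for c, b in zip(vcols, benefit)]
    return [math.dist(row, anti) / (math.dist(row, ideal) + math.dist(row, anti))
            for row in v]

# Columns: max flood depth (m), max velocity (m/s), max travel time (h).
# Depth and velocity raise the hazard score; longer travel time means more
# warning, so it is treated as a cost criterion here.
cells = [[2.1, 1.4, 0.5],
         [0.6, 0.3, 3.0],
         [1.5, 0.9, 1.2]]
hazard = topsis(cells, weights=[0.4, 0.3, 0.3], benefit=[True, True, False])
```

Each closeness coefficient then maps directly to a hazard grade for its cell; the first cell dominates on every criterion and so scores 1.0, the second is anti-ideal on every criterion and scores 0.0.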
Applying the partitioned multiobjective risk method (PMRM) to portfolio selection.
Reyes Santos, Joost; Haimes, Yacov Y
2004-06-01
The analysis of risk-return tradeoffs and their practical applications to portfolio analysis paved the way for Modern Portfolio Theory (MPT), which won Harry Markowitz the 1990 Nobel Memorial Prize in Economics. A typical approach to measuring a portfolio's expected return is based on the historical returns of the assets included in the portfolio. Portfolio risk, on the other hand, is usually measured using volatility, which is derived from the historical variance-covariance relationships among the portfolio assets. This article focuses on assessing portfolio risk, with emphasis on extreme risks. To date, volatility has been the dominant measure of risk owing to its simplicity and validity for relatively small asset price fluctuations. Volatility is a justified measure for stable market performance, but it is weak in addressing portfolio risk under aberrant market fluctuations. Extreme market crashes such as that of October 19, 1987 ("Black Monday") and catastrophic events such as the terrorist attack of September 11, 2001, which led to a four-day suspension of trading on the New York Stock Exchange (NYSE), are a few examples where measuring risk via volatility can lead to inaccurate predictions. Thus, there is a need for a more robust metric of risk. By invoking the principles of the extreme-risk-analysis method through the partitioned multiobjective risk method (PMRM), this article contributes to the modeling of extreme risks in portfolio performance. A measure of extreme portfolio risk, denoted by f(4), is defined as the conditional expectation over a lower-tail region of the distribution of possible portfolio returns. This article presents a multiobjective problem formulation consisting of optimizing expected return and f(4), whose solution is determined using Evolver, a software package that implements a genetic algorithm. Under business-as-usual market scenarios, the results of the proposed PMRM portfolio selection model are found to be compatible with those of the volatility-based model
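The extreme-risk measure f(4), the conditional expectation over the lower tail of the return distribution, can be sketched as follows on simulated returns. The 5% tail probability and the return parameters are illustrative choices, not values from the article, and the empirical-quantile partitioning is a simplification of the PMRM's partitioning scheme.

```python
import random
import statistics

# Sketch of the lower-tail conditional expectation f(4): the expected
# portfolio return given that the return falls at or below the partitioning
# point (here the empirical 5th percentile). Returns are simulated.

def f4(returns, tail_prob=0.05):
    """Conditional expectation of returns in the lower tail."""
    cutoff = sorted(returns)[max(0, int(tail_prob * len(returns)) - 1)]
    tail = [r for r in returns if r <= cutoff]
    return sum(tail) / len(tail)

random.seed(7)
returns = [random.gauss(0.08, 0.15) for _ in range(1000)]

expected_return = statistics.fmean(returns)  # the usual mean-return objective
extreme_risk = f4(returns, tail_prob=0.05)   # the f(4) objective (more negative = worse)
```

The multiobjective formulation in the article would then trade `expected_return` off against `extreme_risk` across candidate portfolios; note how far below the mean the tail expectation sits, which is exactly the information volatility alone obscures.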
A Systematic Method For Tracer Test Analysis: An Example Using Beowawe Tracer Data
Energy Technology Data Exchange (ETDEWEB)
G. Michael Shook
2005-01-01
Quantitative analysis of tracer data using moment analysis requires a strict adherence to a set of rules which include data normalization, correction for thermal decay, deconvolution, extrapolation, and integration. If done correctly, the method yields specific information on swept pore volume, flow geometry and fluid velocity, and an understanding of the nature of reservoir boundaries. All calculations required for the interpretation can be done in a spreadsheet. The steps required for moment analysis are reviewed in this paper. Data taken from the literature is used in an example calculation.
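The core of the moment analysis, normalizing the tracer return curve and taking its first temporal moment to obtain the mean residence time and swept pore volume, can be sketched as below. The corrections for thermal decay, deconvolution, and extrapolation listed in the text are omitted, and the breakthrough data and injection rate are invented for illustration.

```python
# Moment-analysis sketch for a tracer return curve: zeroth and first
# temporal moments via the trapezoidal rule, then mean residence time
# and swept pore volume. Data are invented.

def first_moment(times, conc):
    """Mean residence time t_bar = m1 / m0 from the return curve."""
    def trapz(y):
        return sum((y[i] + y[i + 1]) / 2 * (times[i + 1] - times[i])
                   for i in range(len(times) - 1))
    m0 = trapz(conc)                                  # zeroth moment (area)
    m1 = trapz([t * c for t, c in zip(times, conc)])  # first moment
    return m1 / m0

times = [0, 5, 10, 15, 20, 30, 40, 60]                  # days
conc = [0.0, 0.2, 0.9, 1.0, 0.7, 0.3, 0.1, 0.0]         # normalised concentration
t_mean = first_moment(times, conc)                      # days
swept_pore_volume = 500.0 * t_mean                      # m^3, at 500 m^3/day injection
```

As the abstract notes, all of these steps fit comfortably in a spreadsheet; the sketch simply makes the trapezoidal integration explicit.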
Applying sociodramatic methods in teaching transition to palliative care.
Baile, Walter F; Walters, Rebecca
2013-03-01
We introduce the technique of sociodrama, describe its key components, and illustrate how this simulation method was applied in a workshop format to address the challenge of discussing transition to palliative care. We describe how warm-up exercises prepared 15 learners who provide direct clinical care to patients with cancer for a dramatic portrayal of this dilemma. We then show how small-group brainstorming led to the creation of a challenging scenario wherein highly optimistic family members of a 20-year-old young man with terminal acute lymphocytic leukemia responded to information about the lack of further anticancer treatment with anger and blame toward the staff. We illustrate how the facilitators, using sociodramatic techniques of doubling and role reversal, helped learners to understand and articulate the hidden feelings of fear and loss behind the family's emotional reactions. By modeling effective communication skills, the facilitators demonstrated how key communication skills, such as empathic responses to anger and blame and using "wish" statements, could transform the conversation from one of conflict to one of problem solving with the family. We also describe how we set up practice dyads to give the learners an opportunity to try out new skills with each other. An evaluation of the workshop and similar workshops we conducted is presented. Copyright © 2013 U.S. Cancer Pain Relief Committee. Published by Elsevier Inc. All rights reserved.
Fuchs, Helmut V
2013-01-01
The author gives a comprehensive overview of materials and components for noise control and acoustical comfort. Sound absorbers must meet acoustical and architectural requirements, which fibrous or porous material alone can meet. Basics and applications are demonstrated, with representative examples for spatial acoustics, free-field test facilities and canal linings. Acoustic engineers and construction professionals will find some new basic concepts and tools for developments in order to improve acoustical comfort. Interference absorbers, active resonators and micro-perforated absorbers of different materials and designs complete the list of applications.
Applying multi-resolution numerical methods to geodynamics
Davies, David Rhodri
Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled
Analytic methods in applied probability in memory of Fridrikh Karpelevich
Suhov, Yu M
2002-01-01
This volume is dedicated to F. I. Karpelevich, an outstanding Russian mathematician who made important contributions to applied probability theory. The book contains original papers focusing on several areas of applied probability and its uses in modern industrial processes, telecommunications, computing, mathematical economics, and finance. It opens with a review of Karpelevich's contributions to applied probability theory and includes a bibliography of his works. Other articles discuss queueing network theory, in particular, in heavy traffic approximation (fluid models). The book is suitable
Reactor calculation in coarse mesh by finite element method applied to matrix response method
International Nuclear Information System (INIS)
Nakata, H.
1982-01-01
The finite element method is applied to the solution of the modified formulation of the matrix-response method, aiming at reactor calculations on a coarse mesh. Good results are obtained with short running times. The method is applicable to problems where heterogeneity is predominant and to evolution problems on coarse meshes where burnup varies within a single coarse mesh, making the cross sections vary spatially with the evolution. (E.G.) [pt
Couraud, S; Chan, S; Avrillon, V; Horn, K; Try, S; Gérinière, L; Perrot, É; Guichon, C; Souquet, P-J; Ny, C
2013-10-01
According to the UN, Cambodia is one of the poorest countries in the world. Respiratory diseases are a current public-health priority. In this context, a new bronchoscopy unit (BSU) was created in the respiratory medicine department of Preah Kossamak hospital (PKH) through close cooperation between a French and a Cambodian team. The aim of this study was to describe the conditions under which this equipment was introduced. Two practice guidelines are available, edited respectively by the French and British pulmonology societies. These guidelines were reviewed and compared with the conditions under which bronchoscopy was introduced at PKH. Each guideline item was assigned a categorical value: "applied", "adapted", or "not applied". In 2009, 54 bronchoscopies were performed at PKH, mainly for suspected infectious or tumour disease. In total, 52% and 46% of the French and British guideline items, respectively, were followed in this Cambodian unit. Patient-safety items were those most closely followed; by contrast, staff-safety items were those least applied. Implementation of bronchoscopy in developing countries appears feasible under good conditions of quality and safety for patients. However, some recommendations cannot be applied owing to local conditions. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation
Hatten, Noble; Russell, Ryan P.
2017-12-01
A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
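The one-stage member of the Gauss-Legendre IRK family is the implicit midpoint rule; the sketch below solves its single implicit stage by fixed-point iteration on the scalar test problem y' = -y. This is a toy stand-in chosen to show the stage structure, not the authors' variable-step, multi-stage, parallel propagator.

```python
import math

# One-stage Gauss-Legendre IRK (implicit midpoint rule), s = 1:
#   k = f(t + h/2, y + (h/2) k),  y_{n+1} = y_n + h k.
# The implicit stage equation is solved by fixed-point iteration.

def implicit_midpoint_step(f, t, y, h, iters=20):
    k = f(t, y)                          # explicit-Euler guess for the stage slope
    for _ in range(iters):               # contraction for small enough h
        k = f(t + h / 2, y + h / 2 * k)
    return y + h * k

f = lambda t, y: -y                      # test problem, exact solution exp(-t)
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):                      # integrate to t = 1
    y = implicit_midpoint_step(f, t, y, h)
    t += h

error = abs(y - math.exp(-1.0))          # small: the rule is second order
```

Higher-stage GLIRK methods replace the single stage equation with a coupled system of s stage equations; since each iteration sweep evaluates the stage dynamics independently, those evaluations are exactly what the paper distributes across threads.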
Damm, Bodo; Klose, Martin
2014-05-01
This contribution presents an initiative to develop a national landslide database for the Federal Republic of Germany. It highlights the structure and contents of the landslide database, outlines its major data sources, and describes the strategy of information retrieval. Furthermore, the contribution illustrates the database's potential in applied landslide impact research, including statistics of landslide damage, repair, and mitigation. Thanks to systematic regional data compilation, the landslide database offers a differentiated data pool of more than 5,000 data sets and over 13,000 single data files. It dates back to 1137 AD and covers landslide sites throughout Germany. In seven main data blocks, the landslide database stores, besides information on landslide types, dimensions, and processes, additional data on soil and bedrock properties, geomorphometry, and climatic or other major triggering events. A peculiarity of this landslide database is its storage of data sets on land-use effects, damage impacts, hazard mitigation, and landslide costs. Compilation of landslide data is based on a two-tier strategy of data collection. The first step of information retrieval includes systematic web content mining and exploration of online archives of emergency agencies, fire and police departments, and news organizations. Using web and RSS feeds, and soon a focused web crawler as well, this enables effective nationwide data collection for recent landslides. On the basis of this information, in-depth data mining is performed to deepen and diversify the data pool in key landslide areas. This makes it possible to gather detailed landslide information from, among other sources, agency records, geotechnical reports, climate statistics, maps, and satellite imagery. Landslide data are extracted from these information sources using a mix of methods, including statistical techniques, imagery analysis, and qualitative text interpretation. The landslide database is currently being migrated to a spatial database system.
International Nuclear Information System (INIS)
Ainsbury, Elizabeth A.; Lloyd, David C.; Rothkamm, Kai; Vinnikov, Volodymyr A.; Maznyk, Nataliya A.; Puig, Pedro; Higueras, Manuel
2014-01-01
Classical methods of assessing the uncertainty associated with radiation doses estimated using cytogenetic techniques are now extremely well defined. However, several authors have suggested that a Bayesian approach to uncertainty estimation may be more suitable for cytogenetic data, which are inherently stochastic in nature. The Bayesian analysis framework focuses on identification of probability distributions (for yield of aberrations or estimated dose), which also means that uncertainty is an intrinsic part of the analysis, rather than an 'afterthought'. In this paper Bayesian, as well as some more advanced classical, data analysis methods for radiation cytogenetics are reviewed that have been proposed in the literature. A practical overview of Bayesian cytogenetic dose estimation is also presented, with worked examples from the literature. (authors)
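A minimal Bayesian dose-estimation sketch in the spirit of the review: dicentric yield follows a linear-quadratic calibration curve, the observed aberration count is Poisson, and a flat prior over a dose grid gives a posterior distribution for dose, of which uncertainty is an intrinsic part. The calibration coefficients and counts below are illustrative, not taken from the paper.

```python
import math

# Grid-based Bayesian dose estimation for cytogenetic data.
# Yield curve: Y(D) = c + a*D + b*D^2 (aberrations per cell).
# Likelihood: observed aberration count ~ Poisson(n_cells * Y(D)).
# Prior: flat over the dose grid.

def dose_posterior(n_aberrations, n_cells, c, a, b, grid):
    """Normalised posterior over the dose grid (log-space for stability)."""
    log_post = []
    for d in grid:
        lam = n_cells * (c + a * d + b * d * d)   # expected aberration count
        log_post.append(n_aberrations * math.log(lam) - lam)
    m = max(log_post)
    post = [math.exp(p - m) for p in log_post]
    s = sum(post)
    return [p / s for p in post]

grid = [i * 0.01 for i in range(1, 501)]          # 0.01 .. 5.00 Gy
post = dose_posterior(n_aberrations=25, n_cells=500,
                      c=0.001, a=0.03, b=0.06, grid=grid)

dose_mean = sum(d * p for d, p in zip(grid, post))            # posterior mean
dose_mode = grid[max(range(len(post)), key=post.__getitem__)]  # posterior mode
```

Credible intervals follow directly by accumulating the posterior over the grid, which is the sense in which uncertainty comes "for free" in the Bayesian framework rather than as an afterthought.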
Dunham, Jason B.; Gallo, Kirsten
2008-01-01
In a species conservation context, translocations can be an important tool, but they frequently fail to successfully establish new populations. We consider the case of reintroductions for bull trout (Salvelinus confluentus), a federally-listed threatened species with a widespread but declining distribution in western North America. Our specific objectives in this work were to: 1) develop a general framework for assessing the feasibility of reintroduction for bull trout, 2) provide a detailed example of implementing this framework to assess the feasibility of reintroducing bull trout in the Clackamas River, Oregon, and 3) discuss the implications of this effort in the more general context of fish reintroductions as a conservation tool. Review of several case histories and our assessment of the Clackamas River suggest that an attempt to reintroduce bull trout could be successful, assuming adequate resources are committed to the subsequent stages of implementation, monitoring, and evaluation.
Directory of Open Access Journals (Sweden)
A. Mreła
2015-05-01
The paper discusses the use of mathematical functions to help academic teachers verify students' acquisition of learning outcomes, using the degree programme in geodesy and cartography as an example. It is relatively easy to build a fuzzy relation describing the levels at which learning outcomes are realized and validated during subject examinations, and the fuzzy relation based on students' grades is already built by teachers; the problem is to combine these two relations into one that describes the level at which students have acquired the learning outcomes. There are two main requirements facing this combination, and the paper shows that the best combination according to these requirements is the algebraic composition. Keywords: learning outcome, fuzzy relation, algebraic composition.
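The algebraic (max-product) composition the abstract refers to can be sketched as follows; the relation sizes and all membership values are invented for illustration, not taken from the paper:

```python
def max_product_composition(R, S):
    """Algebraic (max-product) composition of fuzzy relations:
    (R o S)[i][j] = max_k R[i][k] * S[k][j], with memberships in [0, 1]."""
    m, n, p = len(R), len(S), len(S[0])
    return [[max(R[i][k] * S[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

# Hypothetical data: R relates 2 students to 3 examination tasks
# (normalized grades); S relates the 3 tasks to 2 learning outcomes
# (degree to which each task validates the outcome).
R = [[0.8, 0.6, 1.0],
     [0.4, 0.9, 0.5]]
S = [[1.0, 0.2],
     [0.5, 0.9],
     [0.3, 0.7]]
T = max_product_composition(R, S)   # students x learning outcomes
```

Each entry of `T` reads as the strongest evidence chain from a student through any examination task to a learning outcome, which is the combination property at stake in the paper.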
Benaouda, D.; Wadge, G.; Whitmarsh, R. B.; Rothwell, R. G.; MacLeod, C.
1999-02-01
In boreholes with partial or no core recovery, interpretations of lithology in the remainder of the hole are routinely attempted using data from downhole geophysical sensors. We present a practical neural net-based technique that greatly enhances lithological interpretation in holes with partial core recovery by using downhole data to train classifiers to give a global classification scheme for those parts of the borehole for which no core was retrieved. We describe the system and its underlying methods of data exploration, selection and classification, and present a typical example of the system in use. Although the technique is equally applicable to oil industry boreholes, we apply it here to an Ocean Drilling Program (ODP) borehole (Hole 792E, Izu-Bonin forearc, a mixture of volcaniclastic sandstones, conglomerates and claystones). The quantitative benefits of quality-control measures and different subsampling strategies are shown. Direct comparisons between a number of discriminant analysis methods and the use of neural networks with back-propagation of error are presented. The neural networks perform better than the discriminant analysis techniques both in terms of performance rates with test data sets (2-3 per cent better) and in qualitative correlation with non-depth-matched core. We illustrate with the Hole 792E data how vital it is to have a system that permits the number and membership of training classes to be changed as analysis proceeds. The initial classification for Hole 792E evolved from a five-class to a three-class and then to a four-class scheme with resultant classification performance rates for the back-propagation neural network method of 83, 84 and 93 per cent respectively.
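The classification step can be illustrated with a deliberately tiny back-propagation network in pure Python. The two-feature, two-class toy data below stand in for real downhole log measurements and are purely invented; this sketch is far simpler than the networks used in the study:

```python
import math
import random

random.seed(0)

# Toy stand-in for core-calibrated training data: two synthetic "measurements"
# per sample (imagine scaled density and gamma-ray values) and two lithology
# classes; all numbers here are invented for illustration.
data = ([([0.10 + 0.05 * random.random(), 0.20 + 0.05 * random.random()], 0)
         for _ in range(20)] +
        [([0.80 + 0.05 * random.random(), 0.70 + 0.05 * random.random()], 1)
         for _ in range(20)])

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A 2-2-1 network trained by back-propagation of error
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
lr = 1.0

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    return h, sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)

for epoch in range(300):
    for x, y in data:
        h, out = forward(x)
        d_out = out - y                     # cross-entropy gradient at the output
        for j in range(2):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])   # propagate error backwards
            w2[j] -= lr * d_out * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

correct = sum((forward(x)[1] > 0.5) == (y == 1) for x, y in data)
```

In practice each sample would carry several log measurements and there would be one output per lithology class, with the number and membership of classes re-definable as the analysis proceeds, as the abstract emphasizes.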
Bayesian Methods for the Physical Sciences. Learning from Examples in Astronomy and Physics.
Andreon, Stefano; Weaver, Brian
2015-05-01
Chapter 1: This chapter presents some basic steps for performing a good statistical analysis, all summarized in about one page. Chapter 2: This short chapter introduces the basics of probability theory in an intuitive fashion using simple examples. It also illustrates, again with examples, how to propagate errors and the difference between marginal and profile likelihoods. Chapter 3: This chapter introduces the computational tools and methods that we use for sampling from the posterior distribution. Since all numerical computations, and Bayesian ones are no exception, may end in errors, we also provide a few tips to check that the numerical computation is sampling from the posterior distribution. Chapter 4: Many of the concepts of building, running, and summarizing the results of a Bayesian analysis are described with this step-by-step guide using a basic (Gaussian) model. The chapter also introduces examples using Poisson and Binomial likelihoods, and how to combine repeated independent measurements. Chapter 5: All statistical analyses make assumptions, and Bayesian analyses are no exception. This chapter emphasizes that results depend on data and priors (assumptions). We illustrate this concept with examples where the prior plays greatly different roles, from major to negligible. We also provide some advice on how to look for information useful for sculpting the prior. Chapter 6: In this chapter we consider examples for which we want to estimate more than a single parameter. These common problems include estimating location and spread. We also consider examples that require the modeling of two populations (one we are interested in and a nuisance population) or averaging incompatible measurements. We also introduce quite complex examples dealing with upper limits and with a larger-than-expected scatter. Chapter 7: Rarely is a sample randomly selected from the population we wish to study. Often, samples are affected by selection effects, e.g., easier
Rock, Adam J.; Coventry, William L.; Morgan, Methuen I.; Loi, Natasha M.
2016-01-01
Generally, academic psychologists are mindful of the fact that, for many students, the study of research methods and statistics is anxiety provoking (Gal et al., 1997). Given the ubiquitous and distributed nature of eLearning systems (Nof et al., 2015), teachers of research methods and statistics need to cultivate an understanding of how to effectively use eLearning tools to inspire psychology students to learn. Consequently, the aim of the present paper is to discuss critically how using eLearning systems might engage psychology students in research methods and statistics. First, we critically appraise definitions of eLearning. Second, we examine numerous important pedagogical principles associated with effectively teaching research methods and statistics using eLearning systems. Subsequently, we provide practical examples of our own eLearning-based class activities designed to engage psychology students to learn statistical concepts such as Factor Analysis and Discriminant Function Analysis. Finally, we discuss general trends in eLearning and possible futures that are pertinent to teachers of research methods and statistics in psychology. PMID:27014147
Energy Technology Data Exchange (ETDEWEB)
Broc, J S
2006-12-15
Energy end-use efficiency (EE) is a priority for energy policies, both to address resource depletion and to reduce pollutant emissions. At the same time, in France the local level is increasingly involved in the implementation of EE activities, whose framework is changing (energy market liberalization, new policy instruments). Needs for ex-post evaluation of local EE activities are thus increasing, both to meet regulatory requirements and to support a necessary change of scale. Our thesis focuses on the original issue of the ex-post evaluation of local EE operations in France. The state of the art, through an analysis of American and European experience and of the reference guidebooks, provides substantial methodological material and highlights the key evaluation issues. Concurrently, local EE operations in France are characterized through an analysis of their environment and work on their segmentation criteria. The combination of these criteria with the key evaluation issues provides an analysis framework used as the basis for composing evaluation methods. This also highlights the specific evaluation needs of local operations. A methodology is then developed to complete and adapt the existing material so as to design evaluation methods for local operations that stakeholders can easily appropriate. Evaluation results thus feed a know-how building process with experience feedback. These methods are to meet two main goals: to determine the operation results, and to detect the success/failure factors. The methodology was validated on concrete cases, where these objectives were reached. (author)
Directory of Open Access Journals (Sweden)
Lucy Badalian
2011-04-01
In this work we outline a bio-ecological approach to studying history. We show that human societies, from the first civilizations to our days, are techno-ecosystems and do not differ much from the natural ecosystems of a lake or a forest, which are also restricted by their supplies of food. Below we call them coenoses (sing. coenosis; this word, from Greek, is used in biology to denote a mutually dependent community of life-forms). Historically, a succession of distinctive nested geo-climatic zones was domesticated as the older ones became exhausted under growing demographic pressures. In this context, evolution is not synonymous with competition. Cooperation of mutually dependent species is crucial for domesticating a new ecosystem; at specific moments in its lifecycle, competition intensifies, leading to speciation. The dominant technology of each growing society serves as its unique adaptation to its geo-climatic zone. Using it, a particular society, just like a biological species, gains an evolutionary advantage over its neighbors by opening access to a new, previously inaccessible resource or, in plain English, a new source of food. For example, the thermoregulation of warm-blooded animals opened up colder habitats, and the use of canals in the uninhabited swamps of Mesopotamia paved the way to the irrigation agriculture of the great rivers' deltas circa the V millennium BC, enormously increasing both grain yields and population densities. The feeding chains that grew around the abundant grain evolved into the ancient egalitarian society, perfectly attuned to using mass labor. The 20th century, quite dissimilar in its technologies, customs, etc., unfolded according to the same master design. Oil deposits that for millennia sat around the world idly turned into the foundation of the affluent consumer society, based on democracy. The car, along with highways, suburbia and supermarkets, became the symbol of modernity. Today, the
Valuing national effects of digital health investments: an applied method.
Hagens, Simon; Zelmer, Jennifer; Frazer, Cassandra; Gheorghiu, Bobby; Leaver, Chad
2015-01-01
This paper describes an approach that has been applied to value national outcomes of investments in digital health by federal, provincial and territorial governments, clinicians and healthcare organizations. Hypotheses are used to develop a model, which is revised and populated based upon the available evidence. Quantitative national estimates and qualitative findings are produced and validated through structured peer review processes. This methodology has been applied in four studies since 2008.
Chapelle, Frank H.; Robertson, John F.; Landmeyer, James E.; Bradley, Paul M.
2000-01-01
Natural attenuation processes such as dispersion, advection, and biodegradation serve to decrease concentrations of dissolved contaminants as they are transported in all ground-water systems. However, the efficiency of these natural attenuation processes, and the degree to which they help attain remediation goals, varies considerably from site to site. This report provides a methodology for quantifying various natural attenuation mechanisms. This methodology incorporates information on (1) concentrations of contaminants in space and/or time; (2) ambient reduction/oxidation (redox) conditions; (3) rates and directions of ground-water flow; (4) rates of contaminant biodegradation; and (5) demographic considerations, such as the presence of nearby receptor exposure points or property boundaries. This document outlines the hydrologic, geochemical, and biologic data needed to assess the efficiency of natural attenuation, provides a screening tool for making preliminary assessments, and provides examples of how to determine when natural attenuation can be a useful component of site remediation at leaking underground storage tank sites.
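One common quantitative step in such assessments, estimating a first-order attenuation rate constant from concentrations along the flow path, can be sketched as follows. The well spacing, concentrations, and seepage velocity below are invented for illustration; the report itself covers a much broader set of mechanisms and data requirements:

```python
import math

# Hypothetical monitoring-well data along the ground-water flow path:
# distance downgradient (m) and dissolved contaminant concentration (mg/L).
distance = [0.0, 50.0, 100.0, 150.0]
conc = [10.0, 4.1, 1.7, 0.7]
velocity = 25.0                     # assumed seepage velocity, m/yr

# First-order model C(x) = C0 * exp(-k * x / v): ln C is linear in travel time.
t = [x / velocity for x in distance]            # travel time, yr
lnc = [math.log(cw) for cw in conc]

# Least-squares slope of ln C versus travel time estimates -k.
n = len(t)
tbar = sum(t) / n
ybar = sum(lnc) / n
slope = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, lnc))
         / sum((ti - tbar) ** 2 for ti in t))
k = -slope                          # first-order attenuation rate, 1/yr
half_life = math.log(2) / k         # corresponding half-life, yr
```

A rate estimated this way lumps dispersion, advection, and biodegradation together; separating those mechanisms requires the redox and biodegradation data the report describes.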
Dose rate reduction method for NMCA applied BWR plants
International Nuclear Information System (INIS)
Nagase, Makoto; Aizawa, Motohiro; Ito, Tsuyoshi; Hosokawa, Hideyuki; Varela, Juan; Caine, Thomas
2012-09-01
BRAC (BWR Radiation Assessment and Control) dose rate is used as an indicator of the incorporation of activated corrosion by-products into BWR recirculation piping, which is known to be a significant contributor to the dose rate received by workers during refueling outages. In order to reduce the radiation exposure of workers during an outage, it is desirable to keep BRAC dose rates as low as possible. After HWC was adopted to reduce IGSCC, a BRAC dose rate increase was observed in many plants. As a countermeasure to these rapid dose rate increases under HWC conditions, Zn injection was widely adopted in the United States and Europe, resulting in a reduction of BRAC dose rates. However, BRAC dose rates in several plants remain high, prompting the industry to continue to investigate methods to achieve further reductions. In recent years a large portion of the BWR fleet has adopted NMCA (NobleChem™) to enhance the hydrogen injection effect that suppresses SCC. After NMCA, and especially OLNC (On-Line NobleChem™), BRAC dose rates were observed to decrease. In some OLNC-applied BWR plants this reduction was observed year after year, reaching a new, lower equilibrium level. These dose rate reduction trends suggest that further dose reduction might be obtained by the combination of Pt and Zn injection. Laboratory experiments and in-plant tests were therefore carried out to evaluate the effect of Pt and Zn on Co-60 deposition behaviour. First, laboratory experiments were conducted to study the effect of noble metal deposition on Co deposition on stainless steel surfaces. Polished type 316 stainless steel coupons were prepared, and some of them were OLNC treated in the test loop before the Co deposition test. Water chemistry conditions to simulate HWC were as follows: dissolved oxygen, hydrogen and hydrogen peroxide were below 5 ppb, 100 ppb and 0 ppb (no addition), respectively. Zn was injected to target a concentration of 5 ppb. The test was conducted up to 1500 hours at 553 K. Test
Directory of Open Access Journals (Sweden)
Josep Maria Blanco Pont
2017-02-01
We present a case of designing a conceptual transformation process and of applied research. We have moved from the design and discussion of the Metacube concept to its conversion into software whose results can be applied to information management in many fields; indeed, it has already found one of its conversions, or mutations, in applications and educational games for tablets such as Zong-Ji Kids, which stimulates mental work in children aged 6 to 9 years. In 2012 the game underwent observational testing in a laboratory with 80 children aged 5 to 13 years. The results indicate that the symbolic simplification of the design elements and an easily understandable interface help children understand the application and assimilate concepts linked with learning and reinforcing calculation and geometry, among them addition and subtraction, positive and negative values, increment and decrease, increasing or decreasing rotation, contiguous and opposing facets, and the representation of a polygon at different levels of complexity. Five years later, big-data analysis of the online application shows that the design decisions made were appropriate.
Simulation methods to estimate design power: an overview for applied research.
Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E
2011-06-20
Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
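The simple end of the approach the authors describe can be sketched as Monte Carlo power estimation for a two-arm individual-randomized design with a normal outcome, analyzed with a plain z-test. This is an illustrative stand-in; the article's own examples and its R and Stata code extend the same idea to cluster-randomized and other complex designs:

```python
import random
import statistics

random.seed(1)

def simulated_power(n_per_arm, effect, sd, n_sims=1000, z_crit=1.96):
    """Monte Carlo power for a two-arm trial with a normal outcome,
    analyzed with a two-sided z-test on the difference in means."""
    hits = 0
    for _ in range(n_sims):
        # simulate one trial under the alternative hypothesis
        a = [random.gauss(0.0, sd) for _ in range(n_per_arm)]
        b = [random.gauss(effect, sd) for _ in range(n_per_arm)]
        se = ((statistics.variance(a) + statistics.variance(b)) / n_per_arm) ** 0.5
        z = (statistics.mean(b) - statistics.mean(a)) / se
        if abs(z) > z_crit:          # reject H0 of no difference
            hits += 1
    return hits / n_sims             # fraction of rejections = estimated power

# 0.5 SD effect with 64 participants per arm: analytic power is roughly 0.80
power = simulated_power(n_per_arm=64, effect=0.5, sd=1.0)
```

More complex designs are handled by replacing only the data-generating step (e.g., adding a cluster-level random effect) while the reject/count loop stays the same, which is what makes the simulation approach universally applicable.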
Simplified inelastic analysis methods applied to fast breeder reactor core design
International Nuclear Information System (INIS)
Abo-El-Ata, M.M.
1978-01-01
The paper starts with a review of some currently available simplified inelastic analysis methods used in elevated temperature design for evaluating plastic and thermal creep strains. The primary purpose of the paper is to investigate how these simplified methods may be applied to fast breeder reactor core design, where neutron irradiation effects are significant. One of the problems discussed is irradiation-induced creep and its effect on shakedown, ratcheting, and plastic cycling. Another problem is the development of swelling-induced stress, which is an additional loading mechanism and must be taken into account. In this respect an expression for swelling-induced stress in the presence of irradiation creep is derived and a model for simplifying the stress analysis under these conditions is proposed. As an example, the effects of irradiation creep and swelling-induced stress on the analysis of a thin-walled tube under constant internal pressure and intermittent heat fluxes, simulating a fuel pin, are presented
Use of helicity methods in evaluating loop integrals: a QCD example
International Nuclear Information System (INIS)
Koerner, J.G.; Sieben, P.
1991-01-01
We discuss the use of helicity methods in evaluating loop diagrams by analyzing a specific example: the one-loop contribution to e + e - → q anti-q g in massless QCD. By using covariant helicity representations for the spinor and vector wave functions we obtain the helicity amplitudes directly from the Feynman loop diagrams by covariant contraction. The necessary loop integrations are considerably simplified since one encounters only scalar loop integrals after contraction. We discuss crossing relations that allow one to obtain the corresponding one-loop helicity amplitudes for the crossed processes, e.g. q anti-q → (W, Z, γ * ) + g, including the real photon cases. As we treat the spin degrees of freedom in four dimensions and only continue momenta to n dimensions (dimensional reduction scheme), we explicate how our results are related to the usual dimensional regularization results. (orig.)
Method of applying a coating to a steel plate
Energy Technology Data Exchange (ETDEWEB)
Masuda, T; Murakami, S; Chihara, Y; Iijima, K
1968-07-19
An application of a coating material containing a radically or ionically polymerizable monomer that can be changed into a high molecular compound by irradiation with ionizing radiation is provided to protect steel from corrosion and from the adhesion of organic material. In this irradiation, the radiation doses are not more than 30 Mrad. The coating material is at least one kind of vehicle selected from the group consisting of a radically or ionically polymerizable monomer, polymer, copolymer, or compound of this monomer. Examples are styrene, acrylate, methacrylate, vinyl pyridine and their derivatives, acrylonitrile, acrylamide, and other vinyl compounds. The absorbed doses may be 30 Mrad or less, but are preferably in the range of 10 to 1 Mrad. Advantages are that the auxiliary heating can be performed below 100°C, and that hardening can be carried out below 50°C. Furthermore, the irradiation time is shorter than 30 seconds; many kinds of vehicles can be used; and solvent is unnecessary. In one example, 15 parts of acrylamide, 40 parts of styrene and 45 parts of ethyl acrylate are copolymerized. This copolymer is dissolved in 100 parts of styrene and is mixed with 50 parts of rutile and 50 parts of yellow lead. The obtained vehicle is hardened with 10 Mrad. The coated film, 30 μm thick, shows no defects due to weathering after 3 months. In another example, a mixture of 80 parts of unsaturated polyester and 20 parts of ethylene dimethacrylate gives a hardness of 3H by irradiation with 6 Mrad in inert gas.
Artificial Neural Network methods applied to sentiment analysis
Ebert, Sebastian
2017-01-01
Sentiment Analysis (SA) is the study of opinions and emotions that are conveyed by text. This field of study has commercial applications for example in market research (e.g., “What do customers like and dislike about a product?”) and consumer behavior (e.g., “Which book will a customer buy next when he wrote a positive review about book X?”). A private person can benefit from SA by automatic movie or restaurant recommendations, or from applications on the computer or smart phone that adapt to...
Schonrock-Adema, Johanna; Heijne-Penninga, Marjolein; van Hell, Elisabeth A.; Cohen-Schotanus, Janke
2009-01-01
Background: The validation of educational instruments, in particular the employment of factor analysis, can be improved in many instances. Aims: To demonstrate the superiority of a sophisticated method of factor analysis, implying an integration of recommendations described in the factor analysis
Directory of Open Access Journals (Sweden)
Żarczyński Piotr
2017-01-01
Research on new and innovative solutions, technologies and products carried out on an industrial scale is the most reliable method of verifying the validity of their implementation. The results obtained with this research method give almost one hundred percent certainty, although industrial-scale research also requires the greatest expenditure. Therefore, this method is not commonly applied in industrial practice. When deciding to implement new and innovative technologies, it is reasonable to carry out industrial research, both for its cognitive value and for its economic efficiency. Research on an industrial scale may prevent investment failure as well as lead to improvements of technologies, which are a source of economic efficiency. In this paper, a model for evaluating the economic efficiency of industrial-scale research is presented. The model is based on the discount method and the decision tree model. A practical application of the proposed evaluation model is presented for the example of coal charge pre-drying technology applied before coke making in a coke oven battery, whose implementation may be preceded by industrial-scale research on a new type of coal charge dryer.
Levels of reduction in van Manen's phenomenological hermeneutic method: an empirical example.
Heinonen, Kristiina
2015-05-01
To describe reduction as a method within van Manen's phenomenological hermeneutic research approach. Reduction involves several levels that can be distinguished for their methodological usefulness, and researchers can use reduction in different ways and dimensions according to their methodological needs. A study of Finnish multiple-birth families was conducted in which public health nurses, family care workers and parents of twins took part in open interviews (n=38). A systematic literature and knowledge review showed that no previous articles on multiple-birth families had used van Manen's method. The phenomena of the 'lifeworlds' of multiple-birth families consist of three core essential themes as told by parents: 'a state of constant vigilance', 'ensuring that they can continue to cope' and 'opportunities to share with other people'. Reduction provides the opportunity to carry out in-depth phenomenological hermeneutic research and to understand people's lives. It helps to keep research stages separate but also enables a consolidated view. Social care and healthcare professionals have to hear parents' voices better to understand their situation comprehensively; they need further tools and training to be able to empower parents of twins. This paper adds an empirical example to the discussion of phenomenology, hermeneutic study and reduction as a method, and opens up reduction for researchers to exploit.
LANDSCAPE ECOLOGICAL METHOD TO STUDY AGRICULTURAL VEGETATION: SOME EXAMPLES FROM THE PO VALLEY
Directory of Open Access Journals (Sweden)
E. GIGLIO
2006-01-01
Vegetation is the most important landscape component as regards its ability to catch solar energy and to transform it, but also to shape the landscape, to structure space, to create a fit environment for different animal species, to contribute to the maintenance of a correct metastability level for the landscape, and so on. It is a biological system which acts under the constraints of the principles of systems theory and shares the same properties as any other living system: it is a complex adaptive, hierarchical, dynamic, dissipative, self-organizing, self-transcendent, autocatalytic, self-maintaining system and follows non-equilibrium thermodynamics. Its ecological state can be investigated through the comparison between “gathered data” (pathology) and “normal data” (physiology) for analogous types of vegetation. The Biological Integrated School of Landscape Ecology provides an integrated methodology to define ecological threshold limits for the different agricultural landscape types and applies to agricultural vegetation the specific part of the new methodology already tested in studying forests (the Landscape Biological Survey of Vegetation). Ecological quality, best and worst parameters, and the biological territorial capacity of vegetated corridors, agricultural fields, poplar groves, orchards and woody remnant patches are investigated. Some examples from diverse agricultural landscapes of the Po Valley will be discussed. KEY WORDS: agricultural landscape, vegetation, landscape ecology, landscape health, Biological Integrated Landscape Ecology, Landscape Biological Survey of vegetation.
The harmonics detection method based on neural network applied ...
African Journals Online (AJOL)
Several different methods have been used to sense load currents and extract its ... in order to produce a reference current in shunt active power filters (SAPF), and ... technique compared to other similar methods are found quite satisfactory by ...
Muon radiography method for fundamental and applied research
Alexandrov, A. B.; Vladymyrov, M. S.; Galkin, V. I.; Goncharova, L. A.; Grachev, V. M.; Vasina, S. G.; Konovalova, N. S.; Malovichko, A. A.; Managadze, A. K.; Okat'eva, N. M.; Polukhina, N. G.; Roganova, T. M.; Starkov, N. I.; Tioukov, V. E.; Chernyavsky, M. M.; Shchedrina, T. V.
2017-12-01
This paper focuses on the basic principles of the muon radiography method, reviews the major muon radiography experiments, and presents the first results in Russia obtained by the authors using this method based on emulsion track detectors.
Classical and modular methods applied to Diophantine equations
Dahmen, S.R.
2008-01-01
Deep methods from the theory of elliptic curves and modular forms have been used to prove Fermat's last theorem and solve other Diophantine equations. These so-called modular methods can often benefit from information obtained by other, classical, methods from number theory; and vice versa. In our
Slavnov, E. V.; Petrov, I. A.
2014-07-01
A method of determining the change in the filtration properties of oil-bearing crops in the process of their pressing by repeated dynamic loading is proposed. The use of this method is demonstrated by the example of rape-oil extrusion. It was established that a change in the mass concentration of the oil in a rape mix from 0.45 to 0.23 leads to a decrease in the permeability of the mix by a factor of 10^1.5 to 10^2, depending on the pressure applied to it. It is shown that the dependence of the permeability of this mix on the applied pressure is nonmonotone in character.
The pseudo-harmonics method applied to depletion calculation
International Nuclear Information System (INIS)
Silva, F.C. da; Amaral, J.A.C.; Thome, Z.D.
1989-01-01
In this paper, a new method for performing depletion calculations, based on the use of the Pseudo-Harmonics perturbation method, was developed. The fuel burnup was considered as a global perturbation, and the multigroup diffusion equations were rewritten in such a way as to treat the soluble boron concentration as the eigenvalue. By doing this, the critical boron concentration can be obtained by a perturbation method. A test of the new method was performed for an H2O-cooled, D2O-moderated reactor. Comparison with direct calculation showed that this method is very accurate and efficient. (author) [pt
Pauer, Frédéric; Schmidt, Katharina; Babac, Ana; Damm, Kathrin; Frank, Martin; von der Schulenburg, J-Matthias Graf
2016-09-09
The Analytic Hierarchy Process (AHP) is increasingly used to measure patient priorities. Studies have shown that there are several different approaches to data acquisition and data aggregation. The aim of this study was to measure the information needs of patients having a rare disease and to analyze the effects of these different AHP approaches. The ranking of information needs is then used to display information categories on a web-based information portal about rare diseases according to the patient's priorities. The information needs of patients suffering from rare diseases were identified by an Internet research study and a preliminary qualitative study. Hence, we designed a three-level hierarchy containing 13 criteria. For data acquisition, the differences in outcomes were investigated using individual versus group judgements separately. Furthermore, we analyzed the different effects when using the median and arithmetic and geometric means for data aggregation. A consistency ratio ≤0.2 was determined to represent an acceptable consistency level. Forty individual and three group judgements were collected from patients suffering from a rare disease and their close relatives. The consistency ratio of 31 individual and three group judgements was acceptable and thus these judgements were included in the study. To a large extent, the local ranks for individual and group judgements were similar. Interestingly, group judgements were in a significantly smaller range than individual judgements. According to our data, the ranks of the criteria differed slightly according to the data aggregation method used. It is important to explain and justify the choice of an appropriate method for data acquisition because response behaviors differ according to the method. We conclude that researchers should select a suitable method based on the thematic perspective or investigated topics in the study. Because the arithmetic mean is very vulnerable to outliers, the geometric mean
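The data-aggregation choices discussed above (individual versus group judgements; median, arithmetic and geometric means) can be sketched in a few lines. The comparison matrices, weights, and three-criterion setup below are hypothetical and only illustrate the mechanics; the actual study used a 13-criterion hierarchy:

```python
import numpy as np

def ahp_priorities(M):
    """Derive priority weights from a pairwise comparison matrix
    via the row geometric mean, plus a consistency ratio."""
    n = M.shape[0]
    g = np.prod(M, axis=1) ** (1.0 / n)   # row geometric means
    w = g / g.sum()                       # normalized priority vector
    lam = np.mean((M @ w) / w)            # principal-eigenvalue estimate
    ci = (lam - n) / (n - 1)              # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random indices
    cr = ci / ri if ri > 0 else 0.0
    return w, cr

def aggregate_group(matrices):
    """Aggregate individual judgements by the element-wise geometric mean,
    which preserves the reciprocal property M[i, j] == 1 / M[j, i]."""
    return np.exp(np.log(np.stack(matrices)).mean(axis=0))

# Two hypothetical individual judgements over three criteria
m1 = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])
m2 = np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])
w, cr = ahp_priorities(aggregate_group([m1, m2]))
print(w, cr)  # weights sum to 1; cr <= 0.2 is acceptable per the study's criterion
```

The geometric mean is the natural choice for group aggregation here because, unlike the arithmetic mean, it keeps the pooled matrix reciprocal.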
Espelt, Albert; Marí-Dell'Olmo, Marc; Penelo, Eva; Bosque-Prous, Marina
2016-06-14
To examine the differences between the Prevalence Ratio (PR) and the Odds Ratio (OR) in a cross-sectional study, and to provide tools to calculate the PR using two statistical packages widely used in substance use research (STATA and R). We used cross-sectional data from 41,263 participants from 16 European countries participating in the Survey of Health, Ageing and Retirement in Europe (SHARE). The dependent variable, hazardous drinking, was calculated using the Alcohol Use Disorders Identification Test - Consumption (AUDIT-C). The main independent variable was gender. Other variables used were age, educational level and country of residence. The PR of hazardous drinking in men relative to women was estimated using the Mantel-Haenszel method, log-binomial regression models and Poisson regression models with robust variance. These estimates were compared to the OR calculated using logistic regression models. The prevalence of hazardous drinkers varied among countries. In general, men had a higher prevalence of hazardous drinking than women [PR=1.43 (1.38-1.47)]. The estimated PR was identical regardless of the method and the statistical package used. However, the OR overestimated the PR, depending on the prevalence of hazardous drinking in the country. In cross-sectional studies where comparisons are made between countries with differences in the prevalence of the disease or condition, it is advisable to use the PR instead of the OR.
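A minimal sketch of the contrast described above, the Mantel-Haenszel prevalence ratio versus the crude odds ratio, using a single hypothetical 2x2 table (the counts below are invented and are not SHARE data):

```python
def mantel_haenszel_pr(strata):
    """Mantel-Haenszel prevalence (risk) ratio pooled over strata.
    Each stratum: (cases_exposed, n_exposed, cases_unexposed, n_unexposed)."""
    num = den = 0.0
    for a, n1, c, n0 in strata:
        t = n1 + n0
        num += a * n0 / t
        den += c * n1 / t
    return num / den

def odds_ratio(a, n1, c, n0):
    """Crude odds ratio for one 2x2 table, for comparison with the PR."""
    return (a * (n0 - c)) / (c * (n1 - a))

# Hypothetical single-country stratum: hazardous drinking by gender
stratum = (300, 1000, 210, 1000)   # men: 300/1000, women: 210/1000
pr = mantel_haenszel_pr([stratum])
or_ = odds_ratio(*stratum)
print(round(pr, 2), round(or_, 2))  # → 1.43 1.61
```

With a common outcome (prevalence 20-30 percent here), the OR of 1.61 visibly overstates the PR of 1.43, which is the paper's central point; the two only converge when the outcome is rare.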
Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.
Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy
2010-02-01
This paper illustrates the advantages that a fuzzy-based aggregation method can bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for the validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and for decision-making. Fuzzy logic allows the information obtained by independent validation statistics to be summarised into one synthetic indicator of overall method performance. The microarray technology, introduced for the simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements, and conclusions are drawn from the analytical results. The fuzzy-logic based rules were shown to be applicable for improving the interpretation of results and facilitating the overall evaluation of the multiplex method.
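As an illustration only, not the expert system validated in the paper, the idea of collapsing several validation statistics into one synthetic indicator can be sketched with simple trapezoidal memberships and weighted aggregation; all statistics, thresholds, and weights below are invented:

```python
def favorable(x, lo, hi):
    """Membership of x in the 'favorable' fuzzy set: 1 at or below lo,
    0 at or above hi, linear ramp in between (a simple trapezoidal shape)."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def overall_indicator(memberships, weights):
    """Aggregate per-statistic memberships into one synthetic score in [0, 1]."""
    total = sum(weights.values())
    return sum(weights[k] * m for k, m in memberships.items()) / total

# Hypothetical validation statistics for one GM element, oriented so 0 is best:
# false-positive rate, false-negative rate, coefficient of variation of replicates
memberships = {
    "fpr": favorable(0.02, 0.01, 0.10),
    "fnr": favorable(0.05, 0.01, 0.10),
    "cv":  favorable(0.15, 0.10, 0.30),
}
weights = {"fpr": 1.0, "fnr": 2.0, "cv": 1.0}  # assumed expert weights
score = overall_indicator(memberships, weights)
print(round(score, 3))  # single indicator of overall method performance
```

The value of the approach is that a laboratory can read one score per GM element instead of juggling several statistics with different scales and directions.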
Waste classification and methods applied to specific disposal sites
International Nuclear Information System (INIS)
Rogers, V.C.
1979-01-01
An adequate definition of the classes of radioactive wastes is necessary for regulating the disposal of radioactive wastes. A classification system is proposed in which wastes are classified according to characteristics relating to their disposal. Several specific sites are analyzed with this methodology in order to gain insights into the classification of radioactive wastes. Also presented is an analysis of ocean dumping as it applies to waste classification. 5 refs
Directory of Open Access Journals (Sweden)
Igor J. Epler
2013-12-01
maintenance, preventive maintenance, and combined maintenance. Condition-based maintenance models can be classified into two groups: models of technical state change detected by condition inspection, and models of technical state change detected by condition diagnostics. The developed condition-based maintenance models include a model with parameter control and a model with reliability-level control. The model with parameter control can use periodic diagnostic controls (at a "constant date"), economically determined intervals between diagnostic controls, or continuous diagnostic controls. The essence of the model with reliability-level control is to use the resource between two repairs without limitation, carrying out only the maintenance activities needed to fix failures as they occur, as long as the actual reliability level stays within the permissible norms; if deviations from these norms occur, their causes are analysed and measures are taken to increase the reliability of individual components and of the system. Applied to tank weapons, the condition-based maintenance model with parameter control is hard to realize, except for the tank cannon barrel: there, the stress intensity in the material at the critical barrel sections can be measured during operation using electrical strain gauges, and the measured stress intensity can be used to determine
Directory of Open Access Journals (Sweden)
Stamatin Oleksandr V.
2014-03-01
The goal of the article is to present the results of a study of the factors influencing the quality of labour life of industrial employees and to justify a scorecard for its assessment at the micro-economic level using statistical methods. The article shows that the quality of labour life rests on enterprise capabilities, which depend on economic results, identified through the use of financial, material and human resources and the effectiveness of innovation and investment activity. Using the example of engineering enterprises, the article identifies the main factors that influence the quality of labour life of industrial employees: labour remuneration, social provisions, opportunities for personnel development, the state of fixed assets, the financial sustainability of the enterprise, and the effectiveness of investment in innovation activity. The article demonstrates the expediency of statistical methods for assessing the quality of labour life of employees, namely multi-dimensional factor analysis, neural networks and a folded additive technique. Their use helped to reveal the indicators that are most sensitive to managerial impact in ensuring quality of labour life. The article sets out the stages of a methodical approach to assessing the quality of labour life of industrial employees, which was applied at engineering enterprises, confirming its significance and theoretical substantiation.
Resonating group method as applied to the spectroscopy of α-transfer reactions
Subbotin, V. B.; Semjonov, V. M.; Gridnev, K. A.; Hefter, E. F.
1983-10-01
In the conventional approach to α-transfer reactions the finite- and/or zero-range distorted-wave Born approximation is used in liaison with a macroscopic description of the captured α particle in the residual nucleus. Here the specific example of 16O(6Li,d)20Ne reactions at different projectile energies is taken to present a microscopic resonating group method analysis of the α particle in the final nucleus (for the reaction part the simple zero-range distorted-wave Born approximation is employed). In the discussion of suitable nucleon-nucleon interactions, force number one of the effective interactions presented by Volkov is shown to be the most appropriate for the system considered. Application of the continuous analog of Newton's method to the evaluation of the resonating group method equations yields increased accuracy with respect to traditional methods. The resonating group method description induces only minor changes in the structures of the angular distributions, but it does serve its purpose in yielding reliable and consistent spectroscopic information. NUCLEAR STRUCTURE 16O(6Li,d)20Ne; E=20 to 32 MeV; calculated B(E2); reduced widths, dσ/dΩ; extracted α-spectroscopic factors. ZRDWBA with microscopic RGM description of residual α particle in 20Ne; application of continuous analog of Newton's method; tested and applied Volkov force No. 1; direct mechanism.
Nuclear and atomic methods applied in the determination of some
African Journals Online (AJOL)
NAA is a quantitative and qualitative method for the precise determination of a number of major, minor and trace elements in different types of geological, environmental and biological samples. It is based on nuclear reaction between neutron and target nuclei of a sample material. It is a useful method for the simultaneous.
Instructions for applying inverse method for reactivity measurement
International Nuclear Information System (INIS)
Milosevic, M.
1988-11-01
This report is a brief description of the completed method for reactivity measurement. It contains a description of the experimental procedure, the needed instrumentation, and the computer code IM for determining reactivity. The objective of this instruction manual is to enable experimenters to perform reactivity measurements on any critical system according to the methods adopted at the RB reactor
The spectral volume method as applied to transport problems
International Nuclear Information System (INIS)
McClarren, Ryan G.
2011-01-01
We present a new spatial discretization for transport problems: the spectral volume method. This method, first developed by Wang for computational fluid dynamics, divides each computational cell into several sub-cells and enforces particle balance on each of these sub-cells. These sub-cells are also used to build a polynomial reconstruction in the cell. The idea of dividing cells into sub-cells is a generalization of the simple corner balance and other similar schemes. The spectral volume method preserves particle conservation and the asymptotic diffusion limit. We present results from the method on two transport problems in slab geometry using discrete ordinates and second- through sixth-order spectral volume schemes. The numerical results demonstrate the accuracy of the method and its preservation of the diffusion limit. Future work will explore possible benefits of the scheme for high-performance computing and for resolving diffusive boundary layers. (author)
Literature Review of Applying Visual Method to Understand Mathematics
Directory of Open Access Journals (Sweden)
Yu Xiaojuan
2015-01-01
As a new method to understand mathematics, visualization offers a new way of understanding mathematical principles and phenomena via image thinking and geometric explanation. It aims to deepen the understanding of the nature of concepts or phenomena and enhance the cognitive ability of learners. This paper collates and summarizes the application of this visual method in the understanding of mathematics. It also makes a literature review of the existing research, especially with a visual demonstration of Euler’s formula, introduces the application of this method in solving relevant mathematical problems, and points out the differences and similarities between the visualization method and the numerical-graphic combination method, as well as matters needing attention for its application.
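The visual demonstration of Euler's formula mentioned above rests on the identity e^{iθ} = cos θ + i sin θ tracing the unit circle; a quick numerical check of that identity (independent of the paper) is:

```python
import cmath
import math

# Verify Euler's formula e^{i*theta} = cos(theta) + i*sin(theta) at sample
# angles -- the identity behind the unit-circle visualization.
for k in range(8):
    theta = k * math.pi / 4
    lhs = cmath.exp(1j * theta)
    rhs = complex(math.cos(theta), math.sin(theta))
    assert abs(lhs - rhs) < 1e-12
    assert abs(abs(lhs) - 1.0) < 1e-12   # every point lies on the unit circle

print(cmath.exp(1j * math.pi))  # real part -1, imaginary part ~1e-16: e^{i*pi} = -1
```

Plotting `cmath.exp(1j * theta)` for theta in [0, 2π] reproduces the unit-circle picture that the visual method uses to explain the formula geometrically.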
Methodical Aspects of Applying Strategy Map in an Organization
Directory of Open Access Journals (Sweden)
Piotr Markiewicz
2013-06-01
One of the important aspects of strategic management is the instrumental aspect, comprising a rich set of methods and techniques used at particular stages of the strategic management process. The object of interest in this study is the development of views on, and the implementation of, strategy as an element of strategic management, and instruments in the form of methods and techniques. A commonly used method for strategy implementation and measuring progress is the Balanced Scorecard (BSC). The method was created as a result of the project “Measuring Performance in the Organization of the Future” of 1990, completed by a team under the supervision of David Norton (Kaplan, Norton 2002). The method was used first of all to evaluate performance by decomposing a strategy into four perspectives and identifying measures of achievement. In the mid-1990s the method was improved by enriching it, above all, with a strategy map, which reflects the process of transforming intangible assets into tangible financial effects (Kaplan, Norton 2001). A strategy map enables the illustration of cause-and-effect relationships between processes in all four perspectives and performance indicators at the level of the organization. The purpose of this study is to present the methodical conditions of using strategy maps in the strategy implementation process in organizations of different natures.
Applying a life cycle approach to project management methods
Biggins, David; Trollsund, F.; Høiby, A.L.
2016-01-01
Project management is increasingly important to organisations because projects are the method by which organisations respond to their environment. A key element within project management is the standards and methods that are used to control and conduct projects, collectively known as project management methods (PMMs) and exemplified by PRINCE2, the Project Management Institute’s and the Association for Project Management’s Bodies of Knowledge (PMBOK and APMBOK). The purpose of t...
Method for curing alkyd resin compositions by applying ionizing radiation
International Nuclear Information System (INIS)
Watanabe, T.; Murata, K.; Maruyama, T.
1975-01-01
An alkyd resin composition is prepared by dissolving a polymerizable alkyd resin having an oil length of 10 to 50 percent in a vinyl monomer. The polymerizable alkyd resin is obtained by a half-esterification reaction between an acid anhydride having a polymerizable unsaturated group and an alkyd resin modified with conjugated unsaturated oil having at least one reactive hydroxyl group per molecule. The alkyd resin composition thus obtained is coated on an article, and ionizing radiation is applied to the article to cure the coated film thereon. (U.S.)
Roetzheim, Richard G; Freund, Karen M; Corle, Don K; Murray, David M; Snyder, Frederick R; Kronman, Andrea C; Jean-Pierre, Pascal; Raich, Peter C; Holden, Alan Ec; Darnell, Julie S; Warren-Mears, Victoria; Patierno, Steven
2012-04-01
The Patient Navigation Research Program (PNRP) is a cooperative effort of nine research projects, with similar clinical criteria but with different study designs. To evaluate projects such as PNRP, it is desirable to perform a pooled analysis to increase power relative to the individual projects. There is no agreed-upon prospective methodology, however, for analyzing combined data arising from different study designs. Expert opinions were thus solicited from the members of the PNRP Design and Analysis Committee. To review possible methodologies for analyzing combined data arising from heterogeneous study designs. The Design and Analysis Committee critically reviewed the pros and cons of five potential methods for analyzing combined PNRP project data. The conclusions were based on simple consensus. The five approaches reviewed included the following: (1) analyzing and reporting each project separately, (2) combining data from all projects and performing an individual-level analysis, (3) pooling data from projects having similar study designs, (4) analyzing pooled data using a prospective meta-analytic technique, and (5) analyzing pooled data utilizing a novel simulated group-randomized design. Methodologies varied in their ability to incorporate data from all PNRP projects, to appropriately account for differing study designs, and to accommodate differing project sample sizes. The conclusions reached were based on expert opinion and not derived from actual analyses performed. The ability to analyze pooled data arising from differing study designs may provide pertinent information to inform programmatic, budgetary, and policy perspectives. Multisite community-based research may not lend itself well to the more stringent explanatory and pragmatic standards of a randomized controlled trial design. Given our growing interest in community-based population research, the challenges inherent in the analysis of heterogeneous study design are likely to become more salient
The integral equation method applied to eddy currents
International Nuclear Information System (INIS)
Biddlecombe, C.S.; Collie, C.J.; Simkin, J.; Trowbridge, C.W.
1976-04-01
An algorithm for the numerical solution of eddy current problems is described, based on the direct solution of the integral equation for the potentials. In this method only the conducting and iron regions need to be divided into elements, and there are no boundary conditions. Results from two computer programs using this method for iron free problems for various two-dimensional geometries are presented and compared with analytic solutions. (author)
Wakabayashi, Hideaki; Asai, Masamitsu; Matsumoto, Keiji; Yamakita, Jiro
2016-11-01
Nakayama's shadow theory first discussed diffraction by a perfectly conducting grating in planar mounting. In the theory, a new formulation making use of a scattering factor was proposed. This paper focuses on the middle regions of a multilayered dielectric grating placed in conical mounting. Applying the shadow theory to the matrix eigenvalue method, we compose new transformation and improved propagation matrices of the shadow theory for conical mounting. Using these matrices and scattering factors, the basic quantities of diffraction amplitudes, we formulate a new description of three-dimensional scattering fields which is available even for cases where the eigenvalues are degenerate in any region. Some numerical examples are given for cases where the eigenvalues are degenerate in the middle regions.
Applying the torque method to the rationalization of work
Directory of Open Access Journals (Sweden)
Bandurová Miriam
2001-03-01
The aim of the study was to analyse the time consumption of the cylinder-grinder profession using the torque method. The torque observation method is used to detect the kinds and magnitudes of time losses, the share of the individual kinds of time consumption, and the causes of the losses. In this way it is possible to determine the coefficients of employment and recovery of workers in an organizational unit. The advantages of a torque survey are the low cost of acquiring information and its low demands on the worker and on the observer, who is easily trained; it is a method that the subjects of the survey find mentally acceptable. The torque surveys found and quantified reserves in the activity of the cylinder grinders: time losses amount to 8% of working time. With a 5-shift service and an average shift staffing of 4.4 grinders (from the statistical information of the service), the losses among the cylinder grinders correspond to 1.48 workers for the whole centre. On the basis of this information it was recommended to eliminate one cylinder-grinder position and reduce the staff by one grinder. A further position cannot be eliminated, because the cylinder grindery must adapt to the grinding line in the number of polished cylinders per shift, and the stock of semi-finished polished cylinders cannot be kept high because of frequent changes in the grinding area and in the assortment. This contribution confirmed the usefulness of the torque method as one of the methods applicable to job rationalization.
Thermoluminescence as a dating method applied to the Morocco Neolithic
International Nuclear Information System (INIS)
Ousmoi, M.
1989-09-01
Thermoluminescence is an absolute dating method well adapted to the study of burnt clays and thus of the prehistoric ceramics belonging to the Neolithic period. The purpose of this study is to establish a first absolute chronology of the northern Moroccan Neolithic between 3000 and 7000 years before present, along with some improvements to TL dating. The first part of the thesis presents some hypotheses about the Moroccan Neolithic and some problems to be solved. We then describe the TL dating method along with new procedures to improve the quality of the results, such as the shift of quartz TL peaks or the crushing of samples. The methods employed, using 24 samples belonging to various civilisations, were the quartz inclusion method and the fine-grain technique. For the dosimetry, several methods were used: determination of the K2O content, alpha counting, and site dosimetry using TL dosimeters and a scintillation counter. The results bring some interesting answers to the archaeological questions and improve the chronological schema of the northern Moroccan Neolithic: development of the old Cardial Neolithic in the north, and perhaps in the centre of Morocco (the region of Rabat), between 5500 and 7000 years before present; development of the recent middle Neolithic around 4000-5000 years before present, with a proto-Campaniform (Skhirat) slightly older than the Campaniform recognized in the south of Spain; and development of the Bronze Age around 2000-4000 years before present [fr
A Review of Auditing Methods Applied to the Content of Controlled Biomedical Terminologies
Zhu, Xinxin; Fan, Jung-Wei; Baorto, David M.; Weng, Chunhua; Cimino, James J.
2012-01-01
Although controlled biomedical terminologies have been with us for centuries, it is only in the last couple of decades that close attention has been paid to the quality of these terminologies. The result of this attention has been the development of auditing methods that apply formal methods to assessing whether terminologies are complete and accurate. We have performed an extensive literature review to identify published descriptions of these methods and have created a framework for characterizing them. The framework considers manual, systematic and heuristic methods that use knowledge (within or external to the terminology) to measure quality factors of different aspects of the terminology content (terms, semantic classification, and semantic relationships). The quality factors examined included concept orientation, consistency, non-redundancy, soundness and comprehensive coverage. We reviewed 130 studies that were retrieved based on keyword search on publications in PubMed, and present our assessment of how they fit into our framework. We also identify which terminologies have been audited with the methods and provide examples to illustrate each part of the framework. PMID:19285571
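Of the quality factors listed above, non-redundancy is the easiest to illustrate in code. The heuristic below is a toy stand-in, not one of the reviewed auditing methods, and the concept identifiers and terms are invented: it flags concepts whose normalized term strings collide.

```python
def audit_redundancy(terms):
    """Heuristic non-redundancy audit: flag pairs of concepts whose term
    strings collide after normalization (lowercasing and word reordering)."""
    seen = {}     # normalized term -> first concept id that used it
    dupes = []    # (earlier concept id, later concept id) collisions
    for cid, term in terms:
        key = " ".join(sorted(term.lower().split()))
        if key in seen:
            dupes.append((seen[key], cid))
        else:
            seen[key] = cid
    return dupes

# Hypothetical terminology fragment with one word-order duplicate
terms = [
    ("C1", "Myocardial Infarction"),
    ("C2", "infarction myocardial"),
    ("C3", "Angina"),
]
print(audit_redundancy(terms))  # → [('C1', 'C2')]
```

Real auditing methods in the reviewed literature go much further (semantic classification, relationship checks, external knowledge sources), but most share this shape: a systematic pass over the terminology content that emits candidate defects for human review.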
Modal method for crack identification applied to reactor recirculation pump
International Nuclear Information System (INIS)
Miller, W.H.; Brook, R.
1991-01-01
Nuclear reactors have been operating and producing useful electricity for many years. Within the last few years, several plants have found cracks in the reactor coolant pump shaft near the thermal barrier. The results described herein show the use of a modal analysis test method to determine the presence, size, and location of a shaft crack. The authors have previously demonstrated that the test method can analytically and experimentally identify shaft cracks as small as five percent (5%) of the shaft diameter. Due to small differences in material property distribution, attempts to identify cracks smaller than 3% of the shaft diameter have been shown to be impractical. The rotor dynamics model includes a detailed motor rotor, external weights and inertias, and realistic total support stiffness. Results of the rotor dynamics model have been verified through a comparison with on-site vibration test data.
Boron autoradiography method applied to the study of steels
International Nuclear Information System (INIS)
Gugelmeier, R.; Barcelo, G.N.; Boado, J.H.; Fernandez, C.
1986-01-01
The state of the boron contained in the steel microstructure is determined. Neutron autoradiography is used, permitting boron distribution images to be obtained and providing additional information that is difficult to acquire by other methods. The application of the method is described, based on the neutron irradiation of a polished steel sample over which a cellulose nitrate sheet, or another appropriate material, is fixed to constitute the detector. The particles generated by the neutron-boron interaction affect the detector sheet, which is subsequently revealed by a chemical treatment and can be observed under the optical microscope. In the case of materials used for the construction of nuclear reactors, special attention must be given to the presence of boron since, owing to its exceptionally high capacity for neutron absorption, even the smallest quantities of boron are important. The adaptation of the method to metallurgical problems allows a correlation to be obtained between the boron distribution images and the material's microstructure. (M.E.L.) [es
Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model
Directory of Open Access Journals (Sweden)
Oluwaseun Egbelowo
2017-05-01
We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating the complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system and validates the efficiency of the nonstandard finite difference scheme as the method of choice.
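For the simplest linear PK case, one-compartment elimination dC/dt = -kC, a Mickens-type NSFD scheme replaces the step size h in the denominator with φ(h) = (1 - e^{-kh})/k, which makes the update exact for this equation. The sketch below, with invented dose and rate values, contrasts it with forward Euler; it illustrates the idea rather than the paper's full PK/PD system:

```python
import math

def nsfd_decay(c0, k, h, steps):
    """Mickens-type NSFD scheme for dC/dt = -k*C:
    (C[n+1] - C[n]) / phi(h) = -k * C[n], with phi(h) = (1 - exp(-k*h)) / k.
    This denominator choice makes the scheme exact for linear elimination."""
    phi = (1.0 - math.exp(-k * h)) / k
    c = c0
    for _ in range(steps):
        c = c - phi * k * c     # equivalent to c *= exp(-k*h)
    return c

def euler_decay(c0, k, h, steps):
    """Standard forward Euler, for comparison (only conditionally stable)."""
    c = c0
    for _ in range(steps):
        c = c - h * k * c
    return c

c0, k, h, n = 100.0, 0.5, 0.1, 50        # hypothetical dose and elimination rate
exact = c0 * math.exp(-k * h * n)
print(nsfd_decay(c0, k, h, n) - exact)   # ~0: NSFD reproduces the exact decay
print(euler_decay(c0, k, h, n) - exact)  # nonzero truncation error
```

Because the NSFD update multiplies by exactly exp(-k*h) per step, it stays positive and qualitatively correct for any h, which is the "dynamic consistency" property the abstract refers to; forward Euler loses positivity once h > 2/k.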
Applying Nyquist's method for stability determination to solar wind observations
Klein, Kristopher G.; Kasper, Justin C.; Korreck, K. E.; Stevens, Michael L.
2017-10-01
The role instabilities play in governing the evolution of solar and astrophysical plasmas is a matter of considerable scientific interest. The large number of sources of free energy accessible to such nearly collisionless plasmas makes general modeling of unstable behavior, accounting for the temperatures, densities, anisotropies, and relative drifts of a large number of populations, analytically difficult. We therefore seek a general method of stability determination that may be automated for future analysis of solar wind observations. This work describes an efficient application of the Nyquist instability method to the Vlasov dispersion relation appropriate for hot, collisionless, magnetized plasmas, including the solar wind. The algorithm recovers the familiar proton temperature anisotropy instabilities, as well as instabilities that had been previously identified using fits extracted from in situ observations in Gary et al. (2016). Future proposed applications of this method are discussed.
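The core of the Nyquist method described above is the argument principle: map a closed contour enclosing the upper half of the complex frequency plane through the dispersion function D(ω) and count encirclements of the origin, which equals the number of unstable roots. The sketch below uses toy rational functions in place of the actual Vlasov dispersion relation (which would require plasma dispersion functions):

```python
import numpy as np

def unstable_root_count(D, radius=50.0, n=20000):
    """Nyquist-style count of unstable roots: the winding number of D around
    the origin, along a contour enclosing the upper half plane, equals the
    number of zeros with Im(omega) > 0 (argument principle, assuming D is
    analytic and pole-free inside the contour)."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    line = -radius + 2.0 * radius * t              # real axis, -R to R
    arc = radius * np.exp(1j * np.pi * t)          # closing semicircle, Im > 0
    contour = np.concatenate([line, arc])
    vals = D(contour)
    vals = np.append(vals, vals[0])                # close the loop
    dphase = np.angle(vals[1:] / vals[:-1])        # stepwise phase changes
    return int(round(dphase.sum() / (2.0 * np.pi)))

# Toy dispersion functions standing in for the Vlasov dispersion relation
stable = lambda w: (w + 1j) * (w - 2.0 + 3j)       # both zeros in Im(w) < 0
unstable = lambda w: (w - 1j) * (w - 2.0 + 3j)     # one zero in Im(w) > 0
print(unstable_root_count(stable), unstable_root_count(unstable))  # → 0 1
```

The appeal for automated solar wind analysis is exactly this: the count needs only evaluations of D along a contour, not a root search, so it can be run wholesale over fitted velocity-distribution parameters.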
Efficient electronic structure methods applied to metal nanoparticles
DEFF Research Database (Denmark)
Larsen, Ask Hjorth
…of efficient approaches to density functional theory and the application of these methods to metal nanoparticles. We describe the formalism and implementation of localized atom-centered basis sets within the projector augmented wave method. Basis sets allow for a dramatic increase in performance compared… The basis set method is used to study the electronic effects for the contiguous range of clusters up to several hundred atoms. The s-electrons hybridize to form electronic shells consistent with the jellium model, leading to electronic magic numbers for clusters with full shells. Large electronic gaps… and jumps in Fermi level near magic numbers can lead to alkali-like or halogen-like behaviour when main-group atoms adsorb onto gold clusters. A non-self-consistent Newns-Anderson model is used to more closely study the chemisorption of main-group atoms on magic-number Au clusters. The behaviour at magic…
Variance reduction methods applied to deep-penetration problems
International Nuclear Information System (INIS)
Cramer, S.N.
1984-01-01
All deep-penetration Monte Carlo calculations require variance reduction methods. Before beginning with a detailed approach to these methods, several general comments concerning deep-penetration calculations by Monte Carlo, the associated variance reduction, and the similarities and differences of these with regard to non-deep-penetration problems will be addressed. The experienced practitioner of Monte Carlo methods will easily find exceptions to any of these generalities, but it is felt that these comments will aid the novice in understanding some of the basic ideas and nomenclature. Also, from a practical point of view, the discussions and developments presented are oriented toward the use of the computer codes presented in segments of this Monte Carlo course.
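One staple variance-reduction idea for deep penetration is importance sampling: stretch the sampling distribution so that rare, deeply penetrating histories become common, and carry a likelihood-ratio weight to keep the estimator unbiased. A hedged toy, not one of the course's codes: exponential path lengths through a purely absorbing slab of 10 mean free paths, where the analog estimator almost never scores.

```python
import math
import random

def analog_estimate(t, n, rng):
    """Analog Monte Carlo: score 1 only if the sampled path exceeds t."""
    hits = sum(1 for _ in range(n) if rng.expovariate(1.0) > t)
    return hits / n

def importance_estimate(t, n, rng, lam=0.1):
    """Importance sampling: sample path lengths from a stretched
    exponential (rate lam < 1) so deep penetrations are common, and
    weight each score by f(x)/g(x) to keep the estimator unbiased."""
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)
        if x > t:
            total += math.exp(-x) / (lam * math.exp(-lam * x))
    return total / n

rng = random.Random(42)
exact = math.exp(-10)                      # true transmission, ~4.54e-05
est = importance_estimate(10.0, 200_000, rng)
```

With 200,000 histories the analog estimator sees only a handful of transmissions, while the importance-sampled estimate lands within about one percent of exp(-10), illustrating why such methods are mandatory for deep-penetration problems.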
Morfeld, M; Wirtz, M
2006-02-01
According to the established definition of Pfaff, health services research analyses patients' paths through the institutions of the health care system. The focus is on the development, evaluation and implementation of innovative measures of health care. By increasing its quality, health services research strives for an improvement in the efficacy and efficiency of the health care system. In order to allow for an appropriate evaluation, it is essential to differentiate between structure, process and outcome quality, referring to (1) the health care system in its entirety, (2) specific health care units, as well as (3) processes of communication in different settings. Health services research comprises a large array of scientific disciplines such as public health, medicine, social sciences and social care. For the purpose of managing its tasks adequately, a special combination of instruments and methodological procedures is needed. Thus, diverse techniques of evaluation research as well as special requirements for study designs and assessment procedures are of vital importance. The example of the German disease management programmes illustrates the methodological requirements for a scientific evaluation.
Non-perturbative methods applied to multiphoton ionization
International Nuclear Information System (INIS)
Brandi, H.S.; Davidovich, L.; Zagury, N.
1982-09-01
The use of non-perturbative methods in the treatment of atomic ionization is discussed. Particular attention is given to schemes of the type proposed by Keldysh, where multiphoton ionization and tunnel auto-ionization occur for high-intensity fields. These methods are shown to correspond to a certain type of expansion of the T-matrix in the intra-atomic potential; in this manner a criterion concerning the range of application of these non-perturbative schemes is suggested. A brief comparison between the ionization rate of atoms in the presence of linearly and circularly polarized light is presented. (Author) [pt
On second quantization methods applied to classical statistical mechanics
International Nuclear Information System (INIS)
Matos Neto, A.; Vianna, J.D.M.
1984-01-01
A method of expressing classical statistical results in terms of mathematical entities usually associated with the quantum field theoretical treatment of many-particle systems (Fock space, commutators, field operators, state vector) is discussed. A linear response theory is developed using the 'second quantized' Liouville equation introduced by Schonberg. The relationship of this method to that of Prigogine et al. is briefly analyzed. The chain of equations and the spectral representations for the new classical Green's functions are presented. Generalized operators defined on Fock space are discussed. It is shown that the correlation functions can be obtained from Green's functions defined with generalized operators. (Author) [pt
Review of PCMS and heat transfer enhancement methods applied ...
African Journals Online (AJOL)
Most available PCMs have low thermal conductivity, making heat transfer enhancement necessary for power applications. The various methods of heat transfer enhancement in latent heat storage systems were also reviewed systematically. The review showed that three commercially available PCMs are suitable in the …
E-LEARNING METHOD APPLIED TO TECHNICAL GRAPHICS SUBJECTS
Directory of Open Access Journals (Sweden)
GOANTA Adrian Mihai
2011-11-01
The paper presents some of the author's endeavors in creating video courses on subjects involving technical graphics for students of the Faculty of Engineering in Braila. It also describes the steps taken in completing the method and how feedback was obtained on the rate at which students access these types of courses.
Solar cells elaborated by chemical methods: examples of research and development at CIE-UNAM
International Nuclear Information System (INIS)
Rincon, Marina E.
2008-01-01
At the Energy Research Center (CIE-UNAM, Mexico), the major areas of renewable energy research are solar thermal energy, photovoltaic energy, geothermal energy, hydrogen energy, materials for renewable energy, and energy planning. Among the efforts to develop solar cells, both physical and chemical methods are in progress at CIE-UNAM. In this contribution we focus on the advancement in efficiency, stability, and cost of photovoltaic junctions based on chemically deposited films. Examples of early research are: a composite thin-film electrode comprising SnO2/Bi2S3 nanocrystallites (5 nm), prepared by sequential deposition of SnO2 and Bi2S3 films onto an optically transparent electrode; the co-deposition of pyrrole and Bi2S3 nanoparticles on chemically deposited bismuth sulfide substrates, to explore new approaches to improving light-collection efficiency in polymer photovoltaics; and the sensitization of titanium dioxide coatings by chemically deposited cadmium selenide and bismuth sulfide thin films, where the good photostability of the coatings was promising for the use of the sensitized films in photocatalytic as well as photovoltaic applications. More recently, chemically deposited cadmium sulfide thin films have been explored in planar hybrid heterojunctions with chemically synthesized poly(3-octylthiophene), as well as in all-chemically-deposited photovoltaic structures. Examples of the latter are: chemically deposited thin films of CdS (80 nm), Sb2S3 (450 nm), and Ag2Se (150 nm), annealed at 300 °C and integrated into a p-i-n structure glass/SnO2:F/n-CdS/Sb2S3/p-AgSbSe2/Ag, showing Voc ∼ 550 mV and Jsc ∼ 2.3 mA/cm2 at 1 kW/m2 (tungsten halogen) intensity; and, similarly, chemically deposited SnS (450 nm) and CuS (80 nm) thin films integrated into a photovoltaic structure SnO2:F/CdS/SnS/CuS/Ag, showing Voc > 300 mV and Jsc up to 5 mA/cm2 under 850 W/m2 tungsten halogen illumination. These photovoltaic structures have been found to be stable over a period extending over …
Current Human Reliability Analysis Methods Applied to Computerized Procedures
Energy Technology Data Exchange (ETDEWEB)
Ronald L. Boring
2012-06-01
Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management, eliminating the need to update hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.
Probabilistic methods applied to electric source problems in nuclear safety
International Nuclear Information System (INIS)
Carnino, A.; Llory, M.
1979-01-01
Nuclear safety analysts are frequently asked to quantify safety margins and evaluate hazards, and for this purpose probabilistic methods have proved to be the most promising. Without completely replacing deterministic safety analysis, they are now commonly used at the reliability or availability stages of systems, as well as for determining the likely accidental sequences. In this paper an application linked to the problem of electric sources is described, while at the same time indicating the methods used: the calculation of the probability of losing all the electric sources of a pressurized water nuclear power station, the evaluation of the reliability of the diesels by event trees of failures, and the determination of accidental sequences which could be brought about by the 'total electric source loss' initiator and affect the installation or the environment [fr
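The point-estimate arithmetic behind a "total loss of electric sources" event tree can be sketched with a beta-factor common-cause model for the diesels. All numbers below are illustrative assumptions, not values from the study:

```python
def total_loss_probability(p_grid, p_diesel, n_diesels, beta):
    """Point estimate for loss of all electric sources on demand.

    p_grid   : probability that offsite power is lost (the initiator)
    p_diesel : failure-on-demand probability of one diesel generator
    beta     : fraction of diesel failures that are common-cause,
               i.e. disable every diesel at once (beta-factor model)

    The independent part requires each diesel to fail on its own with
    probability (1 - beta) * p_diesel; the common-cause part fails all
    diesels together with probability beta * p_diesel.
    """
    independent = ((1.0 - beta) * p_diesel) ** n_diesels
    common_cause = beta * p_diesel
    return p_grid * (independent + common_cause)

# Illustrative numbers only: 0.1 per demand for grid loss, 5e-2 diesel
# failure-on-demand, two diesels, 10% common-cause fraction.
p = total_loss_probability(0.1, 0.05, 2, 0.1)
```

Note how the common-cause term (5e-3) dominates the independent term (about 2e-3): with redundant trains, the common-cause fraction, not the single-unit reliability, usually controls the result.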
Theoretical and applied aerodynamics and related numerical methods
Chattot, J J
2015-01-01
This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: - The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h…
Applying probabilistic methods for assessments and calculations for accident prevention
International Nuclear Information System (INIS)
Anon.
1984-01-01
The guidelines for the prevention of accidents require plant design-specific and radioecological calculations to be made in order to show that maximum acceptable exposure values will not be exceeded in case of an accident. For this purpose, the main parameters affecting the accident scenario have to be determined by probabilistic methods. This offers the advantage that parameters can be quantified on the basis of unambiguous and realistic criteria, and final results can be defined in terms of conservativity. (DG) [de
Applying flow chemistry: methods, materials, and multistep synthesis.
McQuade, D Tyler; Seeberger, Peter H
2013-07-05
The synthesis of complex molecules requires control over both chemical reactivity and reaction conditions. While reactivity drives the majority of chemical discovery, advances in reaction condition control have accelerated method development/discovery. Recent tools include automated synthesizers and flow reactors. In this Synopsis, we describe how flow reactors have enabled chemical advances in our groups in the areas of single-stage reactions, materials synthesis, and multistep reactions. In each section, we detail the lessons learned and propose future directions.
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general, data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of 0.91 to 0.92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about 0.99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
The colour analysis method applied to homogeneous rocks
Directory of Open Access Journals (Sweden)
Halász Amadé
2015-12-01
Computer-aided colour analysis can facilitate cyclostratigraphic studies. Here we report on a case study involving the development of a digital colour analysis method for examination of the Boda Claystone Formation which is the most suitable in Hungary for the disposal of high-level radioactive waste. Rock type colours are reddish brown or brownish red, or any shade between brown and red. The method presented here could be used to differentiate similar colours and to identify gradual transitions between these; the latter are of great importance in a cyclostratigraphic analysis of the succession. Geophysical well-logging has demonstrated the existence of characteristic cyclic units, as detected by colour and natural gamma. Based on our research, colour, natural gamma and lithology correlate well. For core Ib-4, these features reveal the presence of orderly cycles with thicknesses of roughly 0.64 to 13 metres. Once the core has been scanned, this is a time- and cost-effective method.
Comparison Study of Subspace Identification Methods Applied to Flexible Structures
Abdelghani, M.; Verhaegen, M.; Van Overschee, P.; De Moor, B.
1998-09-01
In the past few years, various time domain methods for identifying dynamic models of mechanical structures from modal experimental data have appeared. Much attention has been given recently to so-called subspace methods for identifying state space models. This paper presents a detailed comparison study of these subspace identification methods: the eigensystem realisation algorithm with observer/Kalman filter Markov parameters computed from input/output data (ERA/OM), the robust version of the numerical algorithm for subspace system identification (N4SID), and a refined version of the past outputs scheme of the multiple-output error state space (MOESP) family of algorithms. The comparison is performed by simulating experimental data using the five mode reduced model of the NASA Mini-Mast structure. The general conclusion is that for the case of white noise excitations as well as coloured noise excitations, the N4SID/MOESP algorithms perform equally well but give better results (improved transfer function estimates, improved estimates of the output) compared to the ERA/OM algorithm. The key computational step in the three algorithms is the approximation of the extended observability matrix of the system to be identified, for N4SID/MOESP, or of the observer for the system to be identified, for the ERA/OM. Furthermore, the three algorithms only require the specification of one dimensioning parameter.
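The key computational step the abstract names, approximating the extended observability matrix by a truncated SVD of a block-Hankel matrix, can be illustrated with a minimal single-output ERA on noise-free Markov parameters. The ERA/OM, N4SID and MOESP variants compared in the paper are considerably richer; this is only the shared core:

```python
import numpy as np

def era_poles(h, order, rows=20, cols=20):
    """Eigensystem Realisation Algorithm on Markov parameters h[0], h[1], ...

    Builds the block-Hankel matrix H0 and its shifted version H1,
    approximates the extended observability/controllability factors by
    a rank-`order` SVD, and recovers the discrete-time state matrix
    A = S^(-1/2) U^T H1 V S^(-1/2), whose eigenvalues are the poles.
    """
    H0 = np.array([[h[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[h[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, sr, Vr = U[:, :order], np.sqrt(s[:order]), Vt[:order, :]
    A = (Ur / sr).T @ H1 @ (Vr.T / sr)
    # Real poles assumed in this demo; discard the ~1e-15 imaginary parts.
    return np.sort(np.linalg.eigvals(A).real)

# Noise-free impulse response of a two-mode system with poles 0.9 and 0.5.
h = [0.9 ** k + 0.5 ** k for k in range(64)]
poles = era_poles(h, order=2)
```

On noise-free data the two poles are recovered to machine precision; the differences among the algorithms in the paper only show up once process and measurement noise enter.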
Applying Hierarchical Task Analysis Method to Discovery Layer Evaluation
Directory of Open Access Journals (Sweden)
Marlen Promann
2015-03-01
Libraries are implementing discovery layers to offer better user experiences. While usability tests have been helpful in evaluating the success or failure of implementing discovery layers in the library context, the focus has remained on their relative interface benefits over the traditional federated search. The informal site- and context-specific usability tests have offered little to test the rigor of the discovery layers against the user goals, motivations and workflows they have been designed to support. This study proposes hierarchical task analysis (HTA) as an important complementary evaluation method to usability testing of discovery layers. Relevant literature is reviewed for discovery layers and the HTA method. As no previous application of HTA to the evaluation of discovery layers was found, this paper presents the application of HTA as an expert-based and workflow-centered (e.g. retrieving a relevant book or a journal article) method for evaluating discovery layers. Purdue University's Primo by Ex Libris was used to map eleven use cases as HTA charts. Nielsen's Goal Composition theory was used as an analytical framework to evaluate the goal charts from two perspectives: (a) users' physical interactions (i.e. clicks), and (b) users' cognitive steps (i.e. decision points for what to do next). A brief comparison of HTA and usability test findings is offered by way of conclusion.
Evaluation of Slow Release Fertilizer Applying Chemical and Spectroscopic methods
International Nuclear Information System (INIS)
AbdEl-Kader, A.A.; Al-Ashkar, E.A.
2005-01-01
Controlled-release fertilizer offers a number of advantages in relation to crop production in newly reclaimed soils. Butadiene styrene latex emulsion is a promising polymer for such purposes. In this work, a laboratory evaluation of butadiene styrene latex emulsion 24/76 polymer loaded with a mixed fertilizer was carried out. Macro-nutrients (N, P and K) and micro-nutrients (Zn, Fe, and Cu) were extracted from the polymer-fertilizer mixtures by basic extraction. A micro-sampling technique was investigated and applied to measure Zn, Fe, and Cu using flame atomic absorption spectrometry, in order to overcome the nebulization difficulties due to the high salt content of the samples. The cumulative releases of macro- and micro-nutrients were assessed. From the obtained results, it is clear that the release depends on both the nutrient and the polymer concentration in the mixture. Macro-nutrients are released more efficiently, as a fraction of the total added, than micro-nutrients. The system can therefore be used to minimize micro-nutrient hazards in soils.
The lumped heat capacity method applied to target heating
Rickards, J.
2013-01-01
The temperature of metal samples was measured while they were bombarded by the ion beam of the Pelletron accelerator at the Instituto de Física. The evolution of the temperature with time can be explained using the lumped heat capacity method of heat transfer. A strong dependence on the type of mounting was found.
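The lumped heat capacity model treats the target as spatially isothermal, which is valid when the Biot number hL/k is below about 0.1. A sketch with a constant beam power term; the material and beam values in the usage below are illustrative assumptions, not the experiment's parameters:

```python
import math

def biot_number(h, L, k):
    """Bi = h*L/k; the lumped (isothermal-body) model needs Bi <~ 0.1."""
    return h * L / k

def beam_heated_temperature(t, T_inf, T0, P, h, A, rho, V, c):
    """Temperature of a lumped target under constant beam power P [W]
    with a linearized loss term h*A*(T - T_inf):

        rho*V*c * dT/dt = P - h*A*(T - T_inf)

    The solution relaxes exponentially to T_inf + P/(h*A)."""
    tau = rho * V * c / (h * A)          # thermal time constant [s]
    T_ss = T_inf + P / (h * A)           # steady-state temperature
    return T_ss + (T0 - T_ss) * math.exp(-t / tau)
```

With copper-like illustrative numbers (rho = 8900 kg/m3, c = 385 J/kg K, V = 1e-7 m3, h*A = 5e-3 W/K) the time constant is about 70 s. Different mountings change the effective h*A, shifting both the time constant and the steady-state temperature, consistent with the strong mounting dependence reported.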
Directory of Open Access Journals (Sweden)
V. I. Freyman
2015-11-01
Subject of Research. Features of representing education results for competence-based educational programs are analyzed. The importance of decoding and estimating proficiency in the elements and components of discipline-level competences is shown, and the purpose and objectives of the research are formulated. Methods. The paper draws on mathematical logic, Boolean algebra, and parametric analysis of the results of complex diagnostic tests that control proficiency in selected competence elements of a discipline. Results. A method of logical-condition analysis is created. It makes it possible to formulate a logical condition for the proficiency of each discipline competence element controlled by a complex diagnostic test. The normalized test result is divided into non-overlapping zones, and for each zone a logical condition about the proficiency of the controlled elements is formulated. Summarized characteristics of the test-result zones are given, and an example of forming logical conditions for a diagnostic test with preset features is provided. Practical Relevance. The proposed method of logical-condition analysis is applied in the algorithm for decoding proficiency test diagnoses for discipline competence elements. It makes it possible to automate the search for elements with insufficient proficiency, and it is also usable for estimating the education results of a discipline or a component of a competence-based educational program.
Modern analytic methods applied to the art and archaeology
International Nuclear Information System (INIS)
Tenorio C, M. D.; Longoria G, L. C.
2010-01-01
The interaction of diverse areas such as analytical chemistry, art history and archaeology has allowed the development of a variety of techniques used in archaeology, conservation and restoration. These methods have been used to date objects, to determine the origin of old materials, to reconstruct their use, and to identify the degradation processes that affect the integrity of works of art. The objective of this chapter is to offer an overview of the research carried out at the Instituto Nacional de Investigaciones Nucleares (ININ) in the field of cultural heritage. A series of studies carried out in collaboration with researchers from Mexico and abroad, with substantial support from undergraduate and master's students in archaeology at the National School of Anthropology and History, is briefly described. One of our goals is to disseminate knowledge of these techniques among young archaeologists, so that they have a broader vision of what they could use in the immediate future and can test hypotheses with scientific methods. (Author)
Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations
Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen
2016-01-01
Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.
Artificial Intelligence Methods Applied to Parameter Detection of Atrial Fibrillation
Arotaritei, D.; Rotariu, C.
2015-09-01
In this paper we present a novel method to develop an atrial fibrillation (AF) detector based on statistical descriptors and a hybrid neuro-fuzzy and crisp system. The inference system produces rules of if-then-else type that are extracted to construct a binary decision system: normal or atrial fibrillation. We use TPR (Turning Point Ratio), SE (Shannon Entropy) and RMSSD (Root Mean Square of Successive Differences), along with a new descriptor, Teager-Kaiser energy, in order to improve the accuracy of detection. The descriptors are calculated over a sliding window, which produces a very large number of vectors (a massive dataset) used by the classifier. The length of the window is a crisp descriptor, while the rest of the descriptors are interval-valued. The parameters of the hybrid system are adapted using a Genetic Algorithm (GA) with a single-objective fitness target: the highest values of sensitivity and specificity. The rules are extracted and form part of the decision system. The proposed method was tested using the Physionet MIT-BIH Atrial Fibrillation Database and the experimental results revealed a good accuracy of AF detection in terms of sensitivity and specificity (above 90%).
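Two of the named descriptors are standard and easy to state precisely. A sketch of RMSSD and Shannon entropy computed over a window of RR intervals; the window length and histogram bin count here are arbitrary choices, not the paper's settings:

```python
import math

def rmssd(rr):
    """Root mean square of successive RR-interval differences.
    Near zero for a regular rhythm; elevated in atrial fibrillation."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def shannon_entropy(rr, bins=16):
    """Shannon entropy of the RR-interval histogram, in bits.
    0 for a perfectly regular rhythm; larger for erratic (AF-like) ones."""
    lo, hi = min(rr), max(rr)
    if hi == lo:
        return 0.0
    counts = [0] * bins
    for x in rr:
        i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        counts[i] += 1
    n = len(rr)
    return -sum(c / n * math.log2(c / n) for c in counts if c)
```

Sliding either function along the RR series yields one feature vector per window position, which is how a massive dataset arises from a single recording.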
Frequency domain methods applied to forecasting electricity markets
International Nuclear Information System (INIS)
Trapero, Juan R.; Pedregal, Diego J.
2009-01-01
The changes taking place in electricity markets during the last two decades have produced an increased interest in the problem of forecasting, either load demand or prices. Many forecasting methodologies are available in the literature nowadays with mixed conclusions about which method is most convenient. This paper focuses on the modeling of electricity market time series sampled hourly in order to produce short-term (1 to 24 h ahead) forecasts. The main features of the system are that (1) models are of an Unobserved Component class that allow for signal extraction of trend, diurnal, weekly and irregular components; (2) its application is automatic, in the sense that there is no need for human intervention via any sort of identification stage; (3) the models are estimated in the frequency domain; and (4) the robustness of the method makes possible its direct use on both load demand and price time series. The approach is thoroughly tested on the PJM interconnection market and the results improve on classical ARIMA models. (author)
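The signal-extraction idea can be illustrated in miniature: isolate the diurnal component of an hourly series by keeping only the Fourier coefficients near the 24 h period. This crude FFT filter is only a stand-in for the paper's unobserved-components estimator, and the data below are synthetic:

```python
import numpy as np

def extract_periodic(y, period, keep_harmonics=2):
    """Keep only the Fourier coefficients at the given period and its
    first few harmonics: a crude frequency-domain signal extraction."""
    n = len(y)
    Y = np.fft.rfft(y)
    mask = np.zeros_like(Y)
    f0 = n / period                       # fundamental bin index
    for k in range(1, keep_harmonics + 1):
        b = int(round(k * f0))
        if 0 < b < len(Y):
            mask[b] = Y[b]
    return np.fft.irfft(mask, n)

# Synthetic hourly "load": level + daily cycle + noise, 60 days.
rng = np.random.default_rng(1)
t = np.arange(24 * 60)
diurnal = 10.0 * np.sin(2 * np.pi * t / 24)
y = 100.0 + diurnal + rng.normal(0, 2.0, t.size)
est = extract_periodic(y - y.mean(), 24)
```

Because the diurnal line concentrates in a single Fourier bin while the noise spreads across all of them, the extracted component tracks the true cycle almost perfectly, which is the intuition behind estimating such models in the frequency domain.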
Interesting Developments in Testing Methods Applied to Foundation Piles
Sobala, Dariusz; Tkaczyński, Grzegorz
2017-10-01
Both piling technologies and pile testing methods are subjects of ongoing development. New technologies, providing larger diameters or using in-situ materials, are very demanding in terms of the quality of execution of the works. That concerns the material quality and continuity, which define the integrity and strength of the pile. On the other side is the capacity of the ground around the pile and its ability to carry the loads transferred by the shaft and the pile base. The inhomogeneous nature of soils and the relatively small number of tested piles demand a very good understanding of a small set of results. In some special cases the capacity test itself forms a significant cost in the piling contract. This work presents a brief description of selected testing methods, with the authors' remarks based on cooperation with universities constantly developing new ideas. The paper offers some experience-based remarks on integrity testing by means of low-energy impact (low strain) and introduces selected Polish developments in the testing of closed-end pipe piles based on bi-directional loading, similar to the Osterberg idea but without a sacrificial hydraulic jack. Such a test is especially suitable when steel piles are used for temporary support in rivers, where constructing a conventional testing arrangement with anchor piles or kentledge meets technical problems. In the authors' experience, such tests have not yet been used on a building site, but they hold real potential, especially when displacement control can be provided from the river bank using surveying techniques.
Applying Simulation Method in Formulation of Gluten-Free Cookies
Directory of Open Access Journals (Sweden)
Nikitina Marina
2017-01-01
At present, a priority direction in the development of new food products is the development of products for special purposes. Among these are gluten-free confectionery products intended for people with celiac disease. Gluten-free products are in demand among consumers, which calls for expanding the assortment and improving quality indicators. This article presents the results of studies on the development of gluten-free pastry products based on amaranth flour. The study rests on a method of simulating gluten-free confectionery recipes with a functional orientation, in order to optimize their chemical composition. The resulting products will make it possible to diversify the diet of people with gluten intolerance, as well as of those who follow a gluten-free diet, and to supplement it with necessary nutrients.
Nuclear method applied in archaeological sites at the Amazon basin
International Nuclear Information System (INIS)
Nicoli, Ieda Gomes; Bernedo, Alfredo Victor Bellido; Latini, Rose Mary
2002-01-01
The aim of this work was to use nuclear methods to characterize pottery discovered inside archaeological sites with circular earthen structures in Acre State, Brazil, which may contribute to the reconstruction of part of the pre-history of the Amazon Basin. The sites are located mainly in the hydrographic basin of the upper Purus River. Three of them were strategically chosen for collecting the ceramics: Lobao, in Sena Madureira County, to the north; Alto Alegre, in Rio Branco County, to the east; and Xipamanu I, in Xapuri County, to the south. Neutron activation analysis in conjunction with multivariate statistical methods was used for ceramic characterization and classification. All the sherds collected from Alto Alegre formed a homogeneous group, distinct from the other two groups analyzed. Some of the sherds collected from Xipamanu I appeared in Lobao's urns, probably because they had the same fabrication process. (author)
Applying Multi-Criteria Analysis Methods for Fire Risk Assessment
Directory of Open Access Journals (Sweden)
Pushkina Julia
2015-11-01
The aim of this paper is to demonstrate the application of multi-criteria analysis methods for optimising the fire risk identification and assessment process. The object of this research is fire risk and risk assessment; the subject is the application of the analytic hierarchy process for modelling and assessing the influence of various fire risk factors. The results of the research can be used by insurance companies to perform detailed assessments of fire risks at an object and to calculate a risk surcharge on an insurance premium; by state supervisory institutions to determine the compliance of an object's condition with regulatory requirements; and by real estate owners and investors to carry out actions to decrease the degree of fire risk and minimise possible losses.
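The analytic hierarchy process step can be sketched: priority weights are the principal eigenvector of the pairwise comparison matrix, and Saaty's consistency ratio flags incoherent judgments. The comparison matrix below is an illustrative assumption, not a set of judgments from the study:

```python
def ahp_weights(M, iters=100):
    """Principal-eigenvector weights of a pairwise comparison matrix
    via power iteration, plus Saaty's consistency ratio."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    # lambda_max from M w = lambda_max * w, averaged componentwise.
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
    CI = (lam - n) / (n - 1)
    cr = CI / RI if RI else 0.0
    return w, cr

# Perfectly consistent judgments derived from weights 0.6 : 0.3 : 0.1,
# e.g. for three hypothetical fire risk factors.
ws = [0.6, 0.3, 0.1]
M = [[a / b for b in ws] for a in ws]
w, cr = ahp_weights(M)
```

A consistency ratio above about 0.1 is conventionally taken to mean the judgments should be revisited; a perfectly consistent matrix, as here, gives a ratio of zero and returns the generating weights exactly.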
A new deconvolution method applied to ultrasonic images
International Nuclear Information System (INIS)
Sallard, J.
1999-01-01
This dissertation presents the development of a new method for the restoration of ultrasonic signals. Our goal is to remove the perturbations induced by the ultrasonic probe and to help characterize defects due to a strong local discontinuity of the acoustic impedance. The point of view adopted consists in taking the physical properties into account in the signal processing, so as to develop an algorithm that gives good results even on experimental data. The received ultrasonic signal is modeled as a convolution between a function that represents the waveform emitted by the transducer and a function that is loosely called the 'defect impulse response'. It is established that, in numerous cases, the ultrasonic signal can be expressed as a sum of weighted, phase-shifted replicas of a reference signal. Deconvolution is an ill-posed problem, so a priori information must be taken into account to solve it; this a priori information reflects the physical properties of ultrasonic signals. The defect impulse response is modeled as a Double-Bernoulli-Gaussian sequence, and deconvolution becomes the problem of detecting the optimal Bernoulli sequence and estimating the associated complex amplitudes. The optimal parameters of the sequence are those which maximize a likelihood function. We develop a new estimation procedure based on an optimization process; an adapted initialization procedure and an iterative algorithm make it possible to process a large volume of data quickly. Many experimental ultrasonic data sets reflecting usual inspection configurations have been processed, and the results demonstrate the robustness of the method. Our algorithm not only removes the waveform emitted by the transducer but also estimates the phase, a parameter useful for defect characterization. Finally, the algorithm eases data interpretation by concentrating the information, so automatic characterization should be possible in the future. (author)
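Since the abstract models the echo as a sum of weighted, shifted replicas of a reference waveform, a greedy matching-pursuit detector gives a feel for the detection/estimation step. This is a simplified stand-in for, not a reproduction of, the authors' Double-Bernoulli-Gaussian likelihood maximisation; the wavelet and reflector positions are invented.

```python
def matching_pursuit(y, w, n_spikes):
    """Greedily detect spike positions and amplitudes in y,
    assuming y is a sum of shifted, scaled copies of wavelet w."""
    r = list(y)
    norm2 = sum(v * v for v in w)
    spikes = []
    for _ in range(n_spikes):
        best_k, best_a = 0, 0.0
        for k in range(len(y) - len(w) + 1):
            # least-squares amplitude of a replica placed at shift k
            a = sum(r[k + i] * w[i] for i in range(len(w))) / norm2
            if abs(a) > abs(best_a):
                best_k, best_a = k, a
        for i in range(len(w)):          # subtract the detected replica
            r[best_k + i] -= best_a * w[i]
        spikes.append((best_k, best_a))
    return sorted(spikes), r

# Synthetic example: two non-overlapping reflectors.
wavelet = [1.0, 2.0, 1.0]
y = [0.0] * 40
for pos, amp in ((10, 2.0), (25, -1.5)):
    for i, v in enumerate(wavelet):
        y[pos + i] += amp * v

spikes, residual = matching_pursuit(y, wavelet, 2)
```

With non-overlapping reflectors the two spikes are recovered exactly and the residual vanishes; overlapping echoes are where the full probabilistic model earns its keep.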
Applying Human-Centered Design Methods to Scientific Communication Products
Burkett, E. R.; Jayanty, N. K.; DeGroot, R. M.
2016-12-01
Knowing your users is a critical part of developing anything to be used or experienced by a human being. User interviews, journey maps, and personas are all techniques commonly employed in human-centered design practices because they have proven effective for informing the design of products and services that meet the needs of users. Many non-designers are unaware of the usefulness of personas and journey maps. Scientists who are interested in developing more effective products and communication can adopt and employ user-centered design approaches to better reach intended audiences. Journey mapping is a qualitative data-collection method that captures the story of a user's experience over time as related to the situation or product that requires development or improvement. Journey maps help define user expectations, where they are coming from, what they want to achieve, what questions they have, their challenges, and the gaps and opportunities that can be addressed by designing for them. A persona is a tool used to describe the goals and behavioral patterns of a subset of potential users or customers. The persona is a qualitative data model that takes the form of a character profile, built upon data about the behaviors and needs of multiple users. Gathering data directly from users avoids the risk of basing models on assumptions, which are often limited by misconceptions or gaps in understanding. Journey maps and user interviews together provide the data necessary to build the composite character that is the persona. Because a persona models the behaviors and needs of the target audience, it can then be used to make informed product design decisions. We share the methods and advantages of developing and using personas and journey maps to create more effective science communication products.
Simplified Methods Applied to Nonlinear Motion of Spar Platforms
Energy Technology Data Exchange (ETDEWEB)
Haslum, Herbjoern Alf
2000-07-01
Simplified methods for predicting the motion response of spar platforms are presented. The methods are based on first- and second-order potential theory. Nonlinear drag loads and the effect of the pumping motion in a moon-pool are also considered. Large-amplitude pitch motions coupled to extreme-amplitude heave motions may arise when spar platforms are exposed to long-period swell. The phenomenon is investigated theoretically and explained as a Mathieu instability caused by nonlinear coupling effects between heave, surge, and pitch. It is shown that for a critical wave period, the envelope of the heave motion makes the pitch motion unstable. For the same wave period, a higher-order pitch/heave coupling excites resonant heave response. This mutual interaction greatly amplifies both the pitch and the heave response. As a result, the pitch/heave instability revealed in this work is more critical than the previously well-known Mathieu instability in pitch, which occurs if the wave period (or the natural heave period) is half the natural pitch period. The Mathieu instability is demonstrated both by numerical simulations with a newly developed calculation tool and in model experiments. In order to learn more about the conditions under which this instability occurs, and how it may be controlled, different damping configurations (heave damping disks and pitch/surge damping fins) are evaluated both in model experiments and by numerical simulations. With increased drag damping, larger wave amplitudes and more time are needed to trigger the instability. The pitch/heave instability is a low-probability phenomenon: extreme wave periods, about 20 seconds for a typical 200 m draft spar, are needed for it to be triggered. However, it may be important to consider the phenomenon in design, since the pitch/heave instability is very critical. It is also seen that when classical spar platforms (constant cylindrical cross section and about 200m draft
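The Mathieu instability invoked above can be demonstrated on the standard Mathieu equation x'' + (a - 2q cos 2t)x = 0, used here as a toy stand-in for the coupled heave/pitch dynamics rather than the authors' spar model; the parameter values are illustrative.

```python
import math

def mathieu_max_amplitude(a, q, t_end=50.0, dt=0.001):
    """Integrate x'' + (a - 2 q cos 2t) x = 0 with RK4 and return the
    largest |x| reached; unbounded growth signals parametric instability."""
    def acc(x, t):
        return -(a - 2.0 * q * math.cos(2.0 * t)) * x

    x, v, t = 1.0, 0.0, 0.0
    xmax = abs(x)
    for _ in range(int(round(t_end / dt))):
        # classic RK4 step for the first-order system (x, v)
        k1x, k1v = v, acc(x, t)
        k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x, t + 0.5 * dt)
        k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x, t + 0.5 * dt)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x, t + dt)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        t += dt
        xmax = max(xmax, abs(x))
    return xmax
```

With a = 1, q = 0.5 (inside the first instability tongue) the amplitude grows by orders of magnitude over 50 time units, while a = 2.5, q = 0.1 stays bounded.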
Variational methods applied to problems of diffusion and reaction
Strieder, William
1973-01-01
This monograph is an account of some problems involving diffusion, or diffusion with simultaneous reaction, that can be illuminated by the use of variational principles. It was written during a period that included sabbatical leaves of one of us (W.S.) at the University of Minnesota and the other (R.A.) at the University of Cambridge, and we are grateful to the Petroleum Research Fund for helping to support the former and to the Guggenheim Foundation for making possible the latter. We would also like to thank Stephen Prager for getting us together in the first place and for showing how interesting and useful these methods can be. We have also benefitted from correspondence with Dr. A. M. Arthurs of the University of York and from the counsel of Dr. B. D. Coleman, the general editor of this series. Table of contents (excerpt): Chapter 1, Introduction and Preliminaries: 1.1 General Survey; 1.2 Phenomenological Descriptions of Diffusion and Reaction; 1.3 Correlation Functions for Random Suspensions; 1.4 Mean Free ...
Nondestructive methods of analysis applied to oriental swords
Directory of Open Access Journals (Sweden)
Edge, David
2015-12-01
Full Text Available Various neutron techniques were employed at the Budapest Nuclear Centre in an attempt to find the most useful method for analysing the high-carbon steels found in Oriental arms and armour, such as those in the Wallace Collection, London. Neutron diffraction was found to be the most useful in terms of identifying such steels and also indicating the presence of hidden patterns.
Perturbation Method of Analysis Applied to Substitution Measurements of Buckling
Energy Technology Data Exchange (ETDEWEB)
Persson, Rolf
1966-11-15
Calculations with two-group perturbation theory on substitution experiments with homogenized regions show that a condensation of the results into a one-group formula is possible, provided that a transition region is introduced in a proper way. In heterogeneous cores the transition region comes in as a consequence of a new cell concept. By making use of progressive substitutions the properties of the transition region can be regarded as fitting parameters in the evaluation procedure. The thickness of the region is approximately equal to the sum of 1/(1/τ + 1/L²)^(1/2) for the test and reference regions. Consequently a region where L² >> τ, e.g. D₂O, contributes √τ to the thickness. In cores where τ >> L², e.g. H₂O assemblies, the thickness of the transition region is determined by L. Experiments on rod lattices in D₂O and on test regions of D₂O alone (where B² = -1/L²) are analysed. The lattice measurements, where the pitches differed by a factor of √2, gave excellent results, whereas the determination of the diffusion length in D₂O by this method was not quite successful. Even regions containing only one test element can be used in a meaningful way in the analysis.
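The transition-region thickness formula quoted above is easy to evaluate numerically. In the sketch below the τ and L² values are illustrative orders of magnitude for D₂O-like and H₂O-like media, not data from the report:

```python
import math

def region_contribution(tau, L2):
    """Per-region contribution to the transition-region thickness:
    1 / sqrt(1/tau + 1/L^2), with tau and L^2 in cm^2."""
    return 1.0 / math.sqrt(1.0 / tau + 1.0 / L2)

def transition_thickness(test_region, reference_region):
    """Thickness ~ sum of the contributions of test and reference regions."""
    return region_contribution(*test_region) + region_contribution(*reference_region)

# Illustrative values (cm^2). A D2O-like medium has L^2 >> tau,
# so its contribution approaches sqrt(tau):
d2o = region_contribution(125.0, 1.0e4)
# An H2O-like medium has tau >> L^2, so its contribution approaches L:
h2o = region_contribution(27.0, 7.3)
```

In the D₂O-like limit the contribution is within a percent of √τ, while in the H₂O-like limit it approaches L, matching the two limiting cases quoted in the abstract.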
Energy Technology Data Exchange (ETDEWEB)
Cojazzi, G.G.M.; Renda, G. [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, TP 210, Via E. Fermi 2749, I-21027, Ispra - Va (Italy); Hassberger, J. [Lawrence Livermore National Laboratory (United States)
2009-06-15
The Generation IV International Forum (GIF) Proliferation Resistance and Physical Protection (PR and PP) Working Group has developed a methodology for the PR and PP evaluation of advanced nuclear energy systems. The methodology is organised as a progressive approach, applying alternative methods at different levels of thoroughness as more design information becomes available and research improves the depth of technical knowledge. The Working Group developed a notional sodium-cooled fast neutron nuclear reactor, named the Example Sodium Fast Reactor (ESFR), for use in developing and testing the methodology. The ESFR is a hypothetical nuclear energy system consisting of four sodium-cooled fast reactors of medium size, co-located with an on-site dry fuel storage facility and a Fuel Cycle Facility with pyrochemical processing of the spent fuel and re-fabrication of new ESFR fuel elements. The baseline design is an actinide burner, with LWR spent fuel elements as feed material processed on site. In 2007 and 2008 the GIF PR and PP Working Group performed a case study designed both to test the methodology and to demonstrate how it can provide useful feedback to designers even during pre-conceptual design. The study analysed the response of the entire ESFR system to different proliferation and theft strategies. Three proliferation threats were considered: concealed diversion, concealed misuse and abrogation. An overt theft threat was also studied. One of the objectives of the case study is to confirm the capability of the methodology to capture PR and PP differences among varied design configurations. To this aim, Design Variations (DV) have also been defined, corresponding respectively to a) a small variation of the baseline design (DV0), b) a deep-burner configuration (DV1), c) a self-sufficient core (DV2), and d) a breeder configuration (DV3). This paper builds on the approach followed for the
Complexity methods applied to turbulence in plasma astrophysics
Vlahos, L.; Isliker, H.
2016-09-01
In this review, many of the well-known tools for the analysis of complex systems are used to study the global coupling of the turbulent convection zone with the solar atmosphere, where the magnetic energy is dissipated explosively. Several well-documented observations are not easy to interpret with the use of magnetohydrodynamic (MHD) and/or kinetic numerical codes. Such observations are: (1) the size distribution of the active regions (AR) on the solar surface, (2) the fractal and multifractal characteristics of the observed magnetograms, (3) the self-organised characteristics of the explosive magnetic energy release, and (4) the very efficient acceleration of particles during the flaring periods in the solar corona. We briefly review the work published over the last twenty-five years on the above issues and propose solutions using methods borrowed from the analysis of complex systems. The scenario which emerged is as follows: (a) The fully developed turbulence in the convection zone generates and transports magnetic flux tubes to the solar surface. Using probabilistic percolation models we were able to reproduce the size distribution and the fractal properties of the emerged and randomly moving magnetic flux tubes. (b) Using a Nonlinear Force-Free (NLFF) magnetic extrapolation numerical code we can explore how the emerged magnetic flux tubes interact nonlinearly and form thin and Unstable Current Sheets (UCS) inside the coronal part of the AR. (c) The fragmentation of the UCS and the local redistribution of the magnetic field, when the local current exceeds a critical threshold, is a key process which drives avalanches and forms coherent structures. This local reorganization of the magnetic field enhances the energy dissipation and influences the global evolution of the complex magnetic topology. Using a cellular automaton and following the simple rules of Self-Organized Criticality (SOC), we were able to reproduce the statistical characteristics of the
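The "simple rules of Self-Organized Criticality" mentioned above are commonly illustrated with the Bak-Tang-Wiesenfeld sandpile. The sketch below is that generic toy model, not the authors' solar cellular automaton; the grid size and drop count are arbitrary.

```python
import random

def sandpile_avalanches(n=20, drops=4000, seed=1):
    """Bak-Tang-Wiesenfeld sandpile: drop grains at random sites; a site
    holding >= 4 grains topples, sending one grain to each neighbour
    (grains fall off the edges). Returns the avalanche size (number of
    topplings) triggered by each drop."""
    random.seed(seed)
    grid = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(drops):
        i, j = random.randrange(n), random.randrange(n)
        grid[i][j] += 1
        size = 0
        unstable = [(i, j)]
        while unstable:
            a, b = unstable.pop()
            if grid[a][b] < 4:
                continue
            grid[a][b] -= 4
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                x, y = a + da, b + db
                if 0 <= x < n and 0 <= y < n:
                    grid[x][y] += 1
                    unstable.append((x, y))
        sizes.append(size)
    return sizes
```

After the pile reaches its critical state, avalanche sizes follow a heavy-tailed distribution: most drops cause little or no toppling, while occasional avalanches span much of the grid.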
IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju
International Nuclear Information System (INIS)
Watanabe, Norio; Hirano, Masashi
1997-08-01
The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology (this method identifies occurrences such as component failures and operator errors, identifies their respective direct/root causes, and determines corrective actions) to the analysis of the sodium leakage incident at Monju, based on the reports published mainly by the Science and Technology Agency, aiming at the systematic identification of direct/root causes and corrective actions, and discussed the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and showed the direct/root causes and contributing factors for each: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of leakage, misjudgment of the leak rate, a non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in the emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and operator training programs. The corrective actions already proposed in the published reports were reviewed, and issues to be further studied were identified. Possible corrective actions were discussed for these issues. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some problems, for example in delineating causal relations among occurrences, in applying it to the detailed and systematic analysis of event direct/root causes and the determination of concrete measures. (J.P.N.)
Bamberger, Katharine T
2016-03-01
The use of intensive longitudinal methods (ILM), rapid in situ assessment at micro timescales, can be overlaid on RCTs and other study designs in applied family research. Particularly when done as part of a multiple timescale design, in bursts over macro timescales, ILM can advance the study of the mechanisms and effects of family interventions and processes of family change. ILM confers measurement benefits in accurately assessing momentary and variable experiences and captures fine-grained dynamic pictures of time-ordered processes. Thus, ILM allows opportunities to investigate new research questions about intervention effects on within-subject (i.e., within-person, within-family) variability (i.e., dynamic constructs) and about the time-ordered change process that interventions induce in families and family members, beginning with the first intervention session. This paper discusses the need and rationale for applying ILM to family intervention evaluation, new research questions that can be addressed with ILM, and example research using ILM in the related fields of basic family research and the evaluation of individual-based interventions. Finally, the paper touches on practical challenges and considerations associated with ILM and points readers to resources for its application.
IAEA-ASSET's root cause analysis method applied to sodium leakage incident at Monju
Energy Technology Data Exchange (ETDEWEB)
Watanabe, Norio; Hirano, Masashi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1997-08-01
The present study applied the ASSET (Analysis and Screening of Safety Events Team) methodology (this method identifies occurrences such as component failures and operator errors, identifies their respective direct/root causes, and determines corrective actions) to the analysis of the sodium leakage incident at Monju, based on the reports published mainly by the Science and Technology Agency, aiming at the systematic identification of direct/root causes and corrective actions, and discussed the effectiveness and problems of the ASSET methodology. The results revealed the following seven occurrences and showed the direct/root causes and contributing factors for each: failure of the thermometer well tube, delayed reactor manual trip, inadequate continuous monitoring of leakage, misjudgment of the leak rate, a non-required operator action (turbine trip), retarded emergency sodium drainage, and retarded securing of the ventilation system. Most of the occurrences stemmed from deficiencies in the emergency operating procedures (EOPs), which were mainly caused by defects in the EOP preparation process and operator training programs. The corrective actions already proposed in the published reports were reviewed, and issues to be further studied were identified. Possible corrective actions were discussed for these issues. The present study also demonstrated the effectiveness of the ASSET methodology and pointed out some problems, for example in delineating causal relations among occurrences, in applying it to the detailed and systematic analysis of event direct/root causes and the determination of concrete measures. (J.P.N.)
Statistical Methods for Unusual Count Data: Examples From Studies of Microchimerism
Guthrie, Katherine A.; Gammill, Hilary S.; Kamper-Jørgensen, Mads; Tjønneland, Anne; Gadi, Vijayakrishna K.; Nelson, J. Lee; Leisenring, Wendy
2016-01-01
Natural acquisition of small amounts of foreign cells or DNA, referred to as microchimerism, occurs primarily through maternal-fetal exchange during pregnancy. Microchimerism can persist long-term and has been associated with both beneficial and adverse human health outcomes. Quantitative microchimerism data present challenges for statistical analysis, including a skewed distribution, excess zero values, and occasional large values. Methods for comparing microchimerism levels across groups while controlling for covariates are not well established. We compared statistical models for quantitative microchimerism values, applied to simulated data sets and 2 observed data sets, to make recommendations for analytic practice. Modeling the level of quantitative microchimerism as a rate via Poisson or negative binomial model with the rate of detection defined as a count of microchimerism genome equivalents per total cell equivalents tested utilizes all available data and facilitates a comparison of rates between groups. We found that both the marginalized zero-inflated Poisson model and the negative binomial model can provide unbiased and consistent estimates of the overall association of exposure or study group with microchimerism detection rates. The negative binomial model remains the more accessible of these 2 approaches; thus, we conclude that the negative binomial model may be most appropriate for analyzing quantitative microchimerism data. PMID:27769989
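To make the rate parameterization concrete (genome equivalents counted per total cell equivalents tested), here is the closed-form Poisson MLE with an exposure offset. The counts below are fabricated for illustration; a real analysis would prefer the negative binomial model the authors recommend, which keeps this rate interpretation but adds an overdispersion parameter.

```python
def poisson_rate(counts, exposures):
    """MLE of a common Poisson rate with exposure offsets:
    rate = total events / total exposure."""
    return sum(counts) / sum(exposures)

# Hypothetical data: microchimerism genome equivalents detected,
# with the number of cell equivalents tested as the exposure.
exposed = poisson_rate([0, 2, 0, 5, 1], [1e5] * 5)   # "exposed" group
control = poisson_rate([0, 0, 1, 0, 1], [1e5] * 5)   # "control" group
rate_ratio = exposed / control
```

Here the exposed group's detection rate is four times the control rate; with overdispersed data the negative binomial would return a similar point estimate but wider uncertainty.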
Bayesian methods for the physical sciences learning from examples in astronomy and physics
Andreon, Stefano
2015-01-01
Statistical literacy is critical for the modern researcher in Physics and Astronomy. This book empowers researchers in these disciplines by providing the tools they will need to analyze their own data. Chapters in this book provide a statistical base from which to approach new problems, including numerical advice and a profusion of examples. The examples are engaging analyses of real-world problems taken from modern astronomical research. The examples are intended to be starting points for readers as they learn to approach their own data and research questions. Acknowledging that scientific progress now hinges on the availability of data and the possibility to improve previous analyses, data and code are distributed throughout the book. The JAGS symbolic language used throughout the book makes it easy to perform Bayesian analysis and is particularly valuable as readers may use it in a myriad of scenarios through slight modifications.
Scientific method, analyzed by means of examples from weak interaction physics
International Nuclear Information System (INIS)
Pietschmann, H.
1975-01-01
Following K. Popper, the logic of science is briefly reviewed. The decay of the long-lived K meson into muon pairs is considered and limits for its branching ratio are computed. This result, together with the discovery of weak neutral currents, is used to demonstrate the logic of scientific discovery on specific examples. (Auth.)
Morris, Michael Lane; Storberg-Walker, Julia; McMillan, Heather S.
2009-01-01
This article presents a new model, generated through applied theory-building research methods, that helps human resource development (HRD) practitioners evaluate the return on investment (ROI) of organization development (OD) interventions. This model, called organization development human-capital accounting system (ODHCAS), identifies…
Vanhamäki, Heikki; Amm, Olaf; Fujii, Ryo; Yoshikawa, Aki; Ieda, Aki
2013-04-01
The Cowling mechanism is characterized by the generation of polarization space charges in the ionosphere as a consequence of a partial or total blockage of the field-aligned currents (FAC) flowing between the ionosphere and the magnetosphere. Thus a secondary polarization electric field builds up in the ionosphere, which guarantees that the whole (primary + secondary) ionospheric current system is again in balance with the FAC. In the Earth's ionosphere the Cowling mechanism has long been known to operate in the equatorial electrojet, and several studies indicate that it is important in auroral current systems as well. We present a general method for calculating the secondary polarization electric field when the ionospheric conductances, the primary (modeled) or the total (measured) electric field, and the Cowling efficiency are given. Here the Cowling efficiency is defined as the fraction of the divergent Hall current canceled by the secondary Pedersen current. In contrast to previous studies, our approach is a general solution which is not limited to specific geometrical setups (like an auroral arc), and all parameters may have any kind of spatial dependence. The solution technique is based on spherical elementary current (vector) systems (SECS). This way, we avoid the need to specify explicit boundary conditions for the sought polarization electric field or its potential, which would be required if the problem were solved in a differential-equation approach. Instead, we solve an algebraic matrix equation, for which the implicit boundary condition that the divergence of the polarization electric field vanishes outside our analysis area is sufficient. In order to illustrate the effect of the Cowling mechanism on ionospheric current systems, we apply our method to two simple models of auroral electrodynamic situations: 1) a mesoscale strong conductance enhancement in the early morning sector within a relatively weak southward primary electric field, and 2) a morning sector auroral arc with only a weak conductance
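The computational move described above, replacing a differential-equation formulation by an algebraic matrix equation for elementary-system amplitudes, can be mimicked in one dimension: expand a "field" in a few basis functions, then solve the normal equations for the amplitudes. The Gaussian basis, centers, and amplitudes below are invented for illustration; real SECS are spherical vector basis functions, not scalar bumps.

```python
import math

def gauss_jordan_solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def basis(x, center, width=1.5):
    # scalar stand-in for an elementary-system basis function
    return math.exp(-((x - center) / width) ** 2)

centers = [2.0, 5.0, 8.0]            # "elementary system" poles (invented)
true_amp = [2.0, -1.0, 0.5]
xs = [i * 0.5 for i in range(23)]    # measurement points on [0, 11]

# Synthetic "measured" field generated from the known amplitudes.
field = [sum(a * basis(x, c) for a, c in zip(true_amp, centers)) for x in xs]

# Normal equations (G^T G) a = G^T d for the amplitude vector a.
G = [[basis(x, c) for c in centers] for x in xs]
GtG = [[sum(G[i][p] * G[i][q] for i in range(len(xs)))
        for q in range(3)] for p in range(3)]
Gtd = [sum(G[i][p] * field[i] for i in range(len(xs))) for p in range(3)]
amp = gauss_jordan_solve(GtG, Gtd)
```

Because the synthetic field is noise-free and the basis functions are well separated, the normal equations return the generating amplitudes to machine precision.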
Near-infrared radiation curable multilayer coating systems and methods for applying same
Bowman, Mark P; Verdun, Shelley D; Post, Gordon L
2015-04-28
Multilayer coating systems, methods of applying them, and related substrates are disclosed. The coating system may comprise a first coating comprising a near-IR absorber and a second coating deposited on at least a portion of the first coating. Methods of applying a multilayer coating composition to a substrate may comprise applying a first coating comprising a near-IR absorber, applying a second coating over at least a portion of the first coating, and curing the coating with near-infrared radiation.
International Nuclear Information System (INIS)
Okuno, Hiroshi; Fujine, Yukio; Asakura, Toshihide; Murazaki, Minoru; Koyama, Tomozo; Sakakibara, Tetsuro; Shibata, Atsuhiro
1999-03-01
The crystallization method is proposed for recovering uranium from dissolution liquid, making it possible to reduce the amount of material handled in the later stages of reprocessing used fast breeder reactor (FBR) fuels. This report studies possible safety problems that accompany the proposed method. The crystallization process was first defined within the whole reprocessing process, and the quantity and kind of treated fuel were specified. Possible problems, such as criticality, shielding, fire/explosion, and confinement, were then investigated, and the events that might induce accidents were discussed. Criticality, above all, was further studied by considering example criticality control of the crystallization process. For the crystallization equipment in particular, evaluation models were set up for normal and accidental operating conditions. Related data were selected from the nuclear criticality safety handbooks. The theoretical densities of plutonium nitrates, which give basic and important information, were estimated in this report based on crystal structure data. The criticality limit of the crystallization equipment was calculated based on the above information. (author)
Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti
2010-01-01
In their seminal paper, Edwards and Parry (1993) presented polynomial regression as a better alternative to difference scores in the study of congruence. Although the method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…
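The polynomial regression approach to congruence regresses the outcome on both component measures and their quadratic terms, Z = b0 + b1*X + b2*Y + b3*X^2 + b4*X*Y + b5*Y^2, instead of collapsing X and Y into a difference score. Below is a minimal, noise-free ordinary-least-squares sketch with invented coefficients, chosen so the quadratic part equals -0.25*(X - Y)^2, a congruence-shaped ridge along X = Y.

```python
def gauss_jordan_solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_quadratic_surface(data):
    """OLS fit of z = b0 + b1*x + b2*y + b3*x^2 + b4*x*y + b5*y^2
    via the normal equations."""
    rows = [[1.0, x, y, x * x, x * y, y * y] for x, y, _ in data]
    zs = [z for _, _, z in data]
    n = 6
    XtX = [[sum(r[p] * r[q] for r in rows) for q in range(n)] for p in range(n)]
    Xtz = [sum(r[p] * z for r, z in zip(rows, zs)) for p in range(n)]
    return gauss_jordan_solve(XtX, Xtz)

# Invented coefficients; quadratic part is -0.25*(x - y)^2.
true_b = [0.5, 1.0, -1.0, -0.25, 0.5, -0.25]
data = [(x, y,
         true_b[0] + true_b[1] * x + true_b[2] * y
         + true_b[3] * x * x + true_b[4] * x * y + true_b[5] * y * y)
        for x in range(-2, 3) for y in range(-2, 3)]
b_hat = fit_quadratic_surface(data)
```

On noise-free data the normal equations recover the generating coefficients; in applied work it is the b3, b4, b5 estimates that distinguish genuine congruence effects from difference-score artifacts.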
International Nuclear Information System (INIS)
Arvieu, R.
The assumptions and principles of the spectral distribution method are reviewed. The aim of the method is to deduce information on nuclear spectra by constructing a frequency function which has the same first few moments as the exact frequency function, these moments being exactly calculable. The method is applied to subspaces containing a large number of quasi-particles. [fr]
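The moment-matching idea, replacing the exact level density by a smooth function with the same low moments, can be sketched with a Gaussian matched to the first two moments of a synthetic spectrum. The level list is invented; a real application would use exactly computed moments of the Hamiltonian in the chosen subspace.

```python
import math

def first_two_moments(levels):
    """Centroid and variance (width squared) of a level list."""
    n = len(levels)
    m1 = sum(levels) / n
    var = sum((e - m1) ** 2 for e in levels) / n
    return m1, var

def gaussian_level_density(e, m1, var, total):
    """Smooth density with the same first two moments as the spectrum."""
    return total * math.exp(-(e - m1) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

levels = [-3.2, -1.5, -0.4, 0.0, 0.4, 1.5, 3.2]   # synthetic, symmetric
m1, var = first_two_moments(levels)

# Integrating the density should recover the total level count.
lo, hi, steps = m1 - 8 * math.sqrt(var), m1 + 8 * math.sqrt(var), 4000
h = (hi - lo) / steps
integral = sum(gaussian_level_density(lo + k * h, m1, var, len(levels)) * h
               for k in range(steps + 1))
```

Integrating the moment-matched Gaussian recovers the total level count, which is the sense in which the smooth frequency function stands in for the exact one.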
Volcanic Hazard Assessments for Nuclear Installations: Methods and Examples in Site Evaluation
International Nuclear Information System (INIS)
2016-07-01
To provide guidance on the protection of nuclear installations against the effects of volcanoes, the IAEA published in 2012 IAEA Safety Standards Series No. SSG-21, Volcanic Hazards in Site Evaluation for Nuclear Installations. SSG-21 addresses hazards relating to volcanic phenomena, and provides recommendations and general guidance for evaluation of these hazards. Unlike seismic hazard assessments, models for volcanic hazard assessment have not undergone decades of review, evaluation and testing for suitability in evaluating hazards at proposed nuclear installations. Currently in volcanology, scientific developments and detailed methodologies to model volcanic phenomena are evolving rapidly. This publication provides information on detailed methodologies and examples in the application of volcanic hazard assessment to site evaluation for nuclear installations, thereby addressing the recommendations in SSG-21. Although SSG-21 develops a logical framework for conducting a volcanic hazard assessment, this publication demonstrates the practicability of evaluating the recommendations in SSG-21 through a systematic volcanic hazard assessment and examples from Member States. The results of this hazard assessment can be used to derive the appropriate design bases and operational considerations for specific nuclear installations.
Using a Three-Step Method in a Calculus Class: Extending the Worked Example
Miller, David
2010-01-01
This article discusses a three-step method that was used in a college calculus course. The three-step method was developed to help students understand the course material and transition to be more independent learners. In addition, the method helped students to transfer concepts from short-term to long-term memory while lowering cognitive load.…
Commissioning methods applied to the Hunterston 'B' AGR operator training simulator
International Nuclear Information System (INIS)
Hacking, D.
1985-01-01
The Hunterston 'B' full-scope AGR simulator, built for the South of Scotland Electricity Board by Marconi Instruments, encompasses all systems under direct and indirect control of the Hunterston central control room operators. The resulting breadth and depth of simulation, together with the specification for the real-time implementation of a large number of highly interactive detailed plant models, lead to the classic problem of identifying acceptance and acceptability criteria. For example, whilst the ultimate criterion for acceptability must clearly be that, within the context of the training requirement, the simulator should be indistinguishable from the actual plant, far more measurable (i.e. less subjective) statements are required if a formal contractual acceptance condition is to be achieved. Within this framework, individual models and processes can have radically different acceptance requirements, which in turn shape the commissioning approach applied. This paper discusses the application of a combination of quality assurance methods, design code results, plant data, theoretical analysis and operator 'feel' in the commissioning of the Hunterston 'B' AGR Operator Training Simulator. (author)
Using mixed methods effectively in prevention science: designs, procedures, and examples.
Zhang, Wanqing; Watanabe-Galloway, Shinobu
2014-10-01
There is growing interest in using a combination of quantitative and qualitative methods to generate evidence about the effectiveness of health prevention, services, and intervention programs. With the emerging importance of mixed methods research across the social and health sciences, there has been increased recognition of the value of using mixed methods for addressing research questions in different disciplines. We illustrate the mixed methods approach in prevention research, showing design procedures used in several published research articles. In this paper, we focus on two commonly used mixed methods designs: concurrent and sequential. We discuss the types of mixed methods designs, the reasons for and advantages of using a particular type of design, and the procedures for qualitative and quantitative data collection and integration. The studies reviewed in this paper show that the essence of qualitative research is to explore complex dynamic phenomena in prevention science, and the advantage of using mixed methods is that quantitative data can yield generalizable results while qualitative data can provide extensive insights. However, the emphasis on methodological rigor in a mixed methods application also requires considerable expertise in both qualitative and quantitative methods. Besides the necessary skills and effective interdisciplinary collaboration, this combined approach also requires open-mindedness and reflection from the involved researchers.
2012-01-01
Background Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. Methods We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Results Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Conclusions Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research. PMID:22545681
Directory of Open Access Journals (Sweden)
Szczygielska Agnieszka
2014-12-01
Full Text Available This work refers to one of the hierarchical organizations, which is the police. The example of the Municipal Crowd Management (Stołeczne Stanowisko Kierowania; SSK) has been chosen as the basis for the detailed analysis presented in this article. The Department, as the place where the work of many individuals, departments and services is integrated, must demonstrate a high level of knowledge, competence, and coordination of activities. Innovative technologies appear to be indispensable support here; they should primarily serve the needs of managing knowledge about efficient action in relation to socio-market requirements. In the case of this organization, these solutions play a vital part in the creation of an intelligent organization, supporting activities focused on delivering effective public services and thus contributing to building the knowledge economy in Poland. This study attempts to present these issues.
Chen, Yan-Yan; Shek, Daniel T L; Bu, Fei-Fei
2011-01-01
This paper attempts to give a brief introduction to interpretivism, constructionism and constructivism. Similarities and differences between interpretivism and constructionism in terms of their histories and branches, ontological and epistemological stances, as well as research applications are highlighted. This review shows that whereas interpretivism can be viewed as a relatively mature orientation that contains various traditions, constructionism is a looser trend in adolescent research, and in the narrow sense denotes the "pure" relativist position, which refers to a discursive approach of theory and research. Both positions call for the importance of clearly identifying what type of knowledge and knowledge process the researcher is going to create, and correctly choosing methodology matching with the epistemological stance. Examples of adolescent research adopting interpretivist and constructionist orientations are presented.
Development and standardization of methods for promoting products on the example of bakery products
Directory of Open Access Journals (Sweden)
O. A. Orlovtseva
2018-01-01
Full Text Available The popularity of a product depends not only on its quality, but also on the activities undertaken to promote it in the market. The media plan developed for this purpose should be based on scientific approaches, since the success of an advertising campaign depends directly on the correct choice of promotion channels and on the quality of the advertising and information materials. At the same time, the media plan needs to be optimized so that advertising is effective: attracting consumers, outpacing competitors, and making rational use of resources, including material ones. The article gives an example of an advertising campaign developed for the promotion of bakery products: magazine advertisements, a radio commercial in shopping centers, an advertising stand and the distribution of flyers were chosen as promotion channels. The general concept of this advertising is the promotion of various types of fresh hot products, so the main character is Red Riding Hood. The article gives examples of layouts of printed materials, an approximate scenario for the radio commercial and a description of the magazine advertisement layout. To assess the adequacy of the developed advertising campaign, the media plan and the expenses for creating and running the campaign are calculated. On this basis, a methodology is formulated and an algorithm for performing these marketing activities is constructed. An important step in applying this methodology is its standardization - the creation of an organization standard. A standardization document containing the relevant rules, regulations and requirements will allow production processes to be optimized and the competitiveness of the enterprise's products to be increased, and also contributes to a common understanding of marketing concepts and advertising policy within the enterprise.
International Nuclear Information System (INIS)
Tabatabai, A.S.; Simonen, F.A.
1985-12-01
This paper describes work recently completed at Pacific Northwest Laboratory (PNL) to use value-impact (V/I) analysis methods to help guide research to improve the effectiveness of inservice inspection (ISI) procedures at nuclear power plants. The example developed at PNL uses the results of probabilistic fracture mechanics and probabilistic risk analysis (PRA) studies to compare three generic categories of non-destructive examination (NDE) methods. These NDE methods are used to detect possible pipe cracks, such as those induced by intergranular stress corrosion cracking (IGSCC). The results of the analysis of this example include (1) quantification of the effectiveness of ISI in increasing plant safety in terms of the reduction in core-melt frequency, (2) estimates of the industry cost of performing ISI, (3) estimates of radiation exposures to plant personnel as a result of performing ISI, and (4) potential areas of improvement in the NDE and ISI process.
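The abstract lists the quantities a value-impact comparison combines (core-melt frequency reduction, industry cost, occupational exposure) without giving formulas; the sketch below shows one common way such a V/I ratio can be formed. Every number, the NDE category names, and the $1000/person-rem conversion are hypothetical illustrations, not values from the PNL study.

```python
# Hypothetical value-impact comparison of three NDE method categories.
# All figures are illustrative, not results from the PNL analysis.

def value_impact(delta_cmf_per_ry, person_rem_per_cmf, plants, years,
                 cost_usd, exposure_person_rem):
    """Value: averted public dose from the reduction in core-melt
    frequency; impact: industry cost plus occupational exposure,
    with dose monetized at a conventional $/person-rem rate."""
    averted_dose = delta_cmf_per_ry * person_rem_per_cmf * plants * years
    dose_value = 1000.0  # $/person-rem, an assumed conversion rate
    value = averted_dose * dose_value
    impact = cost_usd + exposure_person_rem * dose_value
    return value / impact

for name, dcmf, cost, exp in [
    ("manual UT",    2e-6, 5.0e6, 40.0),   # (reduction/reactor-yr, $, person-rem)
    ("improved UT",  6e-6, 9.0e6, 55.0),
    ("automated UT", 8e-6, 2.0e7, 25.0),
]:
    ratio = value_impact(dcmf, 1e6, 30, 10, cost, exp)
    print(f"{name}: V/I = {ratio:.3f}")
```

Under these assumptions the mid-cost "improved UT" category scores best, illustrating how the ratio trades averted risk against inspection cost and dose.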
Atkins, Salla; Launiala, Annika; Kagaha, Alexander; Smith, Helen
2012-04-30
Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research.
Jacek Łuczak; Radoslaw Wolniak
2015-01-01
The knowledge of quality management methods and techniques, together with their effective use, can be regarded as an indication of high organisational culture. Using such methods and techniques effectively can be attributed to a certain level of maturity of the quality management system in an organisation. The paper analyses problem-solving methods and techniques of quality management in the automotive sector in Poland. The survey was given to companies operating in Poland that had quality management systems certified against ISO/TS 16949.
Comparison of methods used to estimate conventional undiscovered petroleum resources: World examples
Ahlbrandt, T.S.; Klett, T.R.
2005-01-01
Various methods for assessing undiscovered oil, natural gas, and natural gas liquid resources were compared in support of the USGS World Petroleum Assessment 2000. Discovery process, linear fractal, parabolic fractal, engineering estimates, PETRIMES, Delphi, and the USGS 2000 methods were compared. Three comparisons of these methods were made in: (1) the Neuquen Basin province, Argentina (different assessors, same input data); (2) provinces in North Africa, Oman, and Yemen (same assessors, different methods); and (3) the Arabian Peninsula, Arabian (Persian) Gulf, and North Sea (different assessors, different methods). A fourth comparison (same assessors, same assessment methods, but different geologic models), between results from structural and stratigraphic assessment units in the North Sea, used only the USGS 2000 method, and hence compared the type of assessment unit rather than the method. In comparing methods, differences arise from inherent differences in assumptions regarding: (1) the underlying distribution of the parent field population (all fields, discovered and undiscovered), (2) the population of fields being estimated, that is, the entire parent distribution or the undiscovered resource distribution, (3) inclusion or exclusion of large outlier fields, (4) inclusion or exclusion of field (reserve) growth, (5) deterministic or probabilistic models, (6) data requirements, and (7) the scale and time frame of the assessment. Discovery process, Delphi subjective consensus, and the USGS 2000 methods yield comparable results because similar procedures are employed. In mature areas such as the Neuquen Basin province in Argentina, the linear and parabolic fractal and engineering methods were conservative compared to the other five methods and relative to new reserve additions there since 1995. The PETRIMES method gave the most optimistic estimates in the Neuquen Basin. In less mature areas, the linear fractal method yielded larger estimates relative to other methods.
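As a sketch of one of the compared techniques, the parabolic fractal method fits a quadratic in log rank to log field size and extrapolates the fitted law to higher (undiscovered) ranks. The field sizes below are hypothetical, generated to roughly follow such a law; none of this is data from the assessment.

```python
import numpy as np

# Hypothetical field sizes (largest to smallest, e.g. MMBO); invented
# to follow a parabolic fractal law approximately.
sizes = np.array([891., 556., 376., 270., 203., 158., 126., 102., 84., 71.])
rank = np.arange(1, len(sizes) + 1)

# Parabolic fractal law: log10(size) = a + b*log10(rank) + c*log10(rank)^2
logr = np.log10(rank)
coeffs = np.polyfit(logr, np.log10(sizes), 2)  # returns [c, b, a]

def predicted_size(r):
    """Extrapolate the fitted law to rank r (e.g. undiscovered fields)."""
    return 10 ** np.polyval(coeffs, np.log10(r))

# Predicted sizes for ranks beyond the discovered fields decline smoothly.
print([round(float(predicted_size(r)), 1) for r in (11, 12, 15)])
```

Summing `predicted_size` over ranks beyond the discovered population is one way such methods turn a size-rank fit into an undiscovered-resource estimate.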
Directory of Open Access Journals (Sweden)
Orczyk Malgorzata
2017-01-01
Full Text Available The article presents a method of selecting diagnostic parameters which map the process of damaging the object. The method consists of calculating, during observation, the correlation coefficient between the intensity of damage and each diagnostic parameter, and discarding those parameters whose correlation coefficient values fall outside the narrowest confidence interval of the correlation coefficient. The characteristic feature of this method is that the parameters are reduced during the diagnostic experiment. The essence of the proposed method is illustrated by the vibration diagnosis of an internal combustion engine.
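A minimal sketch of the selection idea, with hypothetical vibration data: correlate each candidate parameter with damage intensity, attach a Fisher-z confidence interval, and discard parameters whose interval does not stay in the strongly correlated range. The data, parameter names, and the 0.5 cut-off are assumptions for illustration, not the paper's values.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def fisher_ci(r, n, z=1.96):
    """Approximate 95% confidence interval for r via the Fisher z-transform."""
    zr = math.atanh(r)
    half = z / math.sqrt(n - 3)
    return math.tanh(zr - half), math.tanh(zr + half)

# Damage intensity over 8 inspection sessions (hypothetical), plus
# candidate diagnostic parameters observed at the same sessions.
damage = [0.1, 0.2, 0.35, 0.5, 0.62, 0.75, 0.9, 1.0]
params = {
    "rms_vibration": [1.0, 1.3, 1.8, 2.2, 2.7, 3.0, 3.6, 4.0],  # tracks damage
    "kurtosis":      [3.1, 3.0, 3.4, 3.2, 3.9, 3.7, 4.4, 4.6],  # noisier
    "oil_temp":      [80, 82, 79, 81, 80, 83, 78, 81],          # unrelated
}
for name, series in params.items():
    r = pearson_r(damage, series)
    lo, hi = fisher_ci(r, len(series))
    keep = lo > 0.5  # retain only parameters strongly tied to damage
    print(f"{name}: r={r:.3f} CI=({lo:.2f},{hi:.2f}) keep={keep}")
```

Run on these invented series, the unrelated oil-temperature parameter is discarded while the vibration parameters survive, mirroring the reduction of parameters during the experiment.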
Mendoza Beltran, A.; Heijungs, R.; Guinée, J.; Tukker, A.
2016-01-01
Purpose: Despite efforts to treat uncertainty due to methodological choices in life cycle assessment (LCA) such as standardization, one-at-a-time (OAT) sensitivity analysis, and analytical and statistical methods, no method exists that propagate this source of uncertainty for all relevant processes
Hales, Patrick Dean
2016-01-01
Mixed methods research becomes more utilized in education research every year. As this pluralist paradigm begins to take hold, it becomes more and more necessary to take a critical eye to studies making use of different mixed methods approaches. An area of education research that has so far struggled to find a foothold with mixed methodology is…
Schonfeld, Irvin Sam; Farrell, Edwin
2010-01-01
The chapter examines the ways in which qualitative and quantitative methods support each other in research on occupational stress. Qualitative methods include eliciting from workers unconstrained descriptions of work experiences, careful first-hand observations of the workplace, and participant-observers describing "from the inside" a…
An efficient and rapid method for protein detection with an example ...
African Journals Online (AJOL)
AJL
2012-05-15
May 15, 2012 ... protein expressed in Escherichia coli by staining and destaining in under 30 min. The CMW method .... the saturated solutions reached a state of dynamic ... M, protein molecular marker; 1, the control vector; 2, the SQR protein.
Directory of Open Access Journals (Sweden)
Grégory Caignard
2014-09-01
Full Text Available Infectious diseases are responsible for over 25% of deaths globally, but many more individuals are exposed to deadly pathogens. The outcome of infection results from a set of diverse factors including pathogen virulence factors, the environment, and the genetic make-up of the host. The completion of the human reference genome sequence in 2004, along with technological advances, has tremendously accelerated and transformed the tools used to study the genetic etiology of infectious diseases in humans and in their best characterized mammalian model, the mouse. Advancements in mouse genomic resources have accelerated genome-wide functional approaches, such as gene-driven and phenotype-driven mutagenesis, bringing to the fore the use of mouse models that accurately reproduce many aspects of the pathogenesis of human infectious diseases. Treatment with the mutagen N-ethyl-N-nitrosourea (ENU) has become the most popular phenotype-driven approach. Our team and others have employed mouse ENU mutagenesis to identify host genes that directly impact susceptibility to pathogens of global significance. In this review, we first describe the strategies and tools used in mouse genetics to understand immunity to infection, with special emphasis on chemical mutagenesis of the mouse germ-line together with current strategies to efficiently identify functional mutations using next generation sequencing. Then, we highlight illustrative examples of genes, proteins, and cellular signatures that have been revealed by ENU screens and shown to be involved in susceptibility or resistance to infectious diseases caused by parasites, bacteria, and viruses.
Directory of Open Access Journals (Sweden)
Aleksandar Ž. Drenovac
2012-07-01
Full Text Available This paper shows that the PROMETHEE method, one of the better known methods of multicriteria optimization, with its generalized preference criteria, can be applied to many different situations and diverse problems at all levels of the military organization. The decision-making process can thus be simplified, resulting in more reliable decisions of the Ministry of Defence and the higher levels of command of the Serbian Armed Forces when solving important multicriteria problems.
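PROMETHEE ranks alternatives by pairwise preference flows; the sketch below is a minimal PROMETHEE II with the "usual" (step) generalized criterion. The alternatives, criteria values, and weights are invented for illustration, not taken from the paper.

```python
# Minimal PROMETHEE II sketch with the "usual" preference function.
# Alternatives, criteria and weights are hypothetical.

alts = ["A", "B", "C"]
criteria = [(0.5, True), (0.3, True), (0.2, False)]  # (weight, maximize?)
scores = {
    "A": [8.0, 6.0, 100.0],   # last criterion is a cost (minimized)
    "B": [7.0, 9.0, 120.0],
    "C": [6.0, 7.0, 90.0],
}

def preference(a, b):
    """Weighted preference of a over b: 1 per criterion where a is better."""
    p = 0.0
    for (w, maximize), va, vb in zip(criteria, scores[a], scores[b]):
        d = (va - vb) if maximize else (vb - va)
        p += w * (1.0 if d > 0 else 0.0)
    return p

n = len(alts)
phi = {}
for a in alts:
    plus = sum(preference(a, b) for b in alts if b != a) / (n - 1)   # outflow
    minus = sum(preference(b, a) for b in alts if b != a) / (n - 1)  # inflow
    phi[a] = plus - minus                                            # net flow

ranking = sorted(alts, key=phi.get, reverse=True)
print(ranking, phi)
```

Swapping the step function for one of PROMETHEE's other generalized criteria (linear, Gaussian, with indifference and preference thresholds) changes only `preference`, which is what makes the method adaptable to diverse problems.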
Rock, Adam J.; Coventry, William L.; Morgan, Methuen I.; Loi, Natasha M.
2016-01-01
Generally, academic psychologists are mindful of the fact that, for many students, the study of research methods and statistics is anxiety provoking (Gal, Ginsburg, & Schau, 1997). Given the ubiquitous and distributed nature of eLearning systems (Nof, Ceroni, Jeong, & Moghaddam, 2015), teachers of research methods and statistics need to cultivate an understanding of how to effectively use eLearning tools to inspire psychology students to learn. Consequently, the aim of the present paper is to...
Directory of Open Access Journals (Sweden)
Jacek Łuczak
2015-12-01
Full Text Available The knowledge about methods and techniques of quality management together with their effective use can definitely be regarded as an indication of high organisational culture. Using such methods and techniques in an effective way can be attributed to a certain level of maturity, as far as the quality management system in an organisation is concerned. The paper presents an analysis of problem-solving methods and techniques of quality management in the automotive sector in Poland. The survey was given to the general population, which in the case of this study consisted of companies operating in Poland that had certified quality management systems against ISO/TS 16949. The results of the conducted survey and the conclusions of the author can show actual and potential OEM suppliers (both 1st and 2nd tier) in which direction their strategies for development and improvement of quality management systems should go in order to be effective. When the universal character of the methods and techniques used in the surveyed population of companies is taken into consideration, it can be assumed that the results of the survey are also universal for all organisations realising the TQM strategy. The results of the research confirmed that the most relevant methods are those which also form the basis for creating key system documents, i.e. flowcharts and FMEA, together with process monitoring tools (SPC) and problem-solving methods, above all 8D.
Sherman, Recinda L; Henry, Kevin A; Tannenbaum, Stacey L; Feaster, Daniel J; Kobetz, Erin; Lee, David J
2014-03-20
Epidemiologists are gradually incorporating spatial analysis into health-related research as geocoded cases of disease become widely available and health-focused geospatial computer applications are developed. One health-focused application of spatial analysis is cluster detection. Using cluster detection to identify geographic areas with high-risk populations and then screening those populations for disease can improve cancer control. SaTScan is a free cluster-detection software application used by epidemiologists around the world to describe spatial clusters of infectious and chronic disease, as well as disease vectors and risk factors. The objectives of this article are to describe how spatial analysis can be used in cancer control to detect geographic areas in need of colorectal cancer screening intervention, identify issues commonly encountered by SaTScan users, detail how to select the appropriate methods for using SaTScan, and explain how method selection can affect results. As an example, we used various methods to detect areas in Florida where the population is at high risk for late-stage diagnosis of colorectal cancer. We found that much of our analysis was underpowered and that no single method detected all clusters of statistical or public health significance. However, all methods detected 1 area as high risk; this area is potentially a priority area for a screening intervention. Cluster detection can be incorporated into routine public health operations, but the challenge is to identify areas in which the burden of disease can be alleviated through public health intervention. Reliance on SaTScan's default settings does not always produce pertinent results.
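The cluster statistic SaTScan computes for its Poisson model can be sketched compactly: for each candidate window, a log-likelihood ratio compares the observed case count with the count expected under constant risk. The toy areas and case counts below are hypothetical, and a real scan evaluates many circular windows over centres and radii and obtains p-values by Monte Carlo simulation.

```python
import math

# Hypothetical areas: (name, population, late-stage cases)
areas = [("A", 1000, 4), ("B", 1200, 5), ("C", 900, 14), ("D", 1100, 5)]
C = sum(c for _, _, c in areas)   # total cases
P = sum(p for _, p, _ in areas)   # total population

def llr(cases, pop):
    """Kulldorff's Poisson log-likelihood ratio for one candidate window;
    mu is the count expected under the null (constant risk everywhere).
    Nonzero only for windows with an excess of cases."""
    mu = C * pop / P
    if cases <= mu:
        return 0.0
    return (cases * math.log(cases / mu)
            + (C - cases) * math.log((C - cases) / (C - mu)))

# Scan each single area as a candidate window and keep the most likely one.
best = max(areas, key=lambda a: llr(a[2], a[1]))
print(best[0], round(llr(best[2], best[1]), 3))
```

Area C, with 14 cases against 6 expected, dominates the scan here; in practice the window with the maximum ratio is then tested for significance by permutation.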
A practical implicit finite-difference method: examples from seismic modelling
International Nuclear Information System (INIS)
Liu, Yang; Sen, Mrinal K
2009-01-01
We derive explicit and new implicit finite-difference formulae for derivatives of arbitrary order with any order of accuracy by the plane wave theory where the finite-difference coefficients are obtained from the Taylor series expansion. The implicit finite-difference formulae are derived from fractional expansion of derivatives which form tridiagonal matrix equations. Our results demonstrate that the accuracy of a (2N + 2)th-order implicit formula is nearly equivalent to that of a (6N + 2)th-order explicit formula for the first-order derivative, and (2N + 2)th-order implicit formula is nearly equivalent to (4N + 2)th-order explicit formula for the second-order derivative. In general, an implicit method is computationally more expensive than an explicit method, due to the requirement of solving large matrix equations. However, the new implicit method only involves solving tridiagonal matrix equations, which is fairly inexpensive. Furthermore, taking advantage of the fact that many repeated calculations of derivatives are performed by the same difference formula, several parts can be precomputed resulting in a fast algorithm. We further demonstrate that a (2N + 2)th-order implicit formulation requires nearly the same memory and computation as a (2N + 4)th-order explicit formulation but attains the accuracy achieved by a (6N + 2)th-order explicit formulation for the first-order derivative and that of a (4N + 2)th-order explicit method for the second-order derivative when additional cost of visiting arrays is not considered. This means that a high-order explicit method may be replaced by an implicit method of the same order resulting in a much improved performance. Our analysis of efficiency and numerical modelling results for acoustic and elastic wave propagation validates the effectiveness and practicality of the implicit finite-difference method
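The tridiagonal structure the abstract exploits can be illustrated with the classical fourth-order compact (Padé) scheme for the first derivative, solved with the Thomas algorithm. The scheme, test function, and boundary treatment below are standard textbook choices, not the authors' formulae.

```python
import numpy as np

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by forward elimination and back
    substitution. sub/sup have length n-1; diag/rhs have length n."""
    n = len(diag)
    cp = np.zeros(n - 1)
    dp = np.zeros(n)
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i - 1] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Fourth-order compact (Pade) scheme for the first derivative:
#   (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/2)(f_{i+1} - f_{i-1})/(2h)
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N + 1)
h = x[1] - x[0]
f, exact = np.sin(x), np.cos(x)

n = N - 1                                 # interior unknowns
rhs = 1.5 * (f[2:] - f[:-2]) / (2.0 * h)
rhs[0] -= 0.25 * exact[0]                 # known end derivatives moved to RHS
rhs[-1] -= 0.25 * exact[-1]
dfdx = thomas(np.full(n - 1, 0.25), np.ones(n), np.full(n - 1, 0.25), rhs)

err_implicit = np.max(np.abs(dfdx - exact[1:-1]))
err_explicit = np.max(np.abs((f[2:] - f[:-2]) / (2.0 * h) - exact[1:-1]))
print(err_implicit, err_explicit)         # implicit is far more accurate
```

At the same stencil width and nearly the same cost per point, the implicit scheme beats the second-order explicit central difference by orders of magnitude, which is the efficiency argument the paper generalizes to arbitrary order.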
Ahn, SangNam; Smith, Matthew Lee; Altpeter, Mary; Belza, Basia; Post, Lindsey; Ory, Marcia G
2014-01-01
Maintaining intervention fidelity should be part of any programmatic quality assurance (QA) plan and is often a licensure requirement. However, fidelity checklists designed by original program developers are often lengthy, which makes compliance difficult once programs become widely disseminated in the field. As a case example, we used Stanford's original Chronic Disease Self-Management Program (CDSMP) fidelity checklist of 157 items to demonstrate heuristic procedures for generating shorter fidelity checklists. Using an expert consensus approach, we sought feedback from active master trainers registered with the Stanford University Patient Education Research Center about which items were most essential to, and also feasible for, assessing fidelity. We conducted three sequential surveys and one expert group-teleconference call. Three versions of the fidelity checklist were created using different statistical and methodological criteria. In a final group-teleconference call with seven national experts, there was unanimous agreement that all three final versions (e.g., a 34-item version, a 20-item version, and a 12-item version) should be made available because the purpose and resources for administering a checklist might vary from one setting to another. This study highlights the methodology used to generate shorter versions of a fidelity checklist, which has potential to inform future QA efforts for this and other evidence-based programs (EBP) for older adults delivered in community settings. With CDSMP and other EBP, it is important to differentiate between program fidelity as mandated by program developers for licensure, and intervention fidelity tools for providing an "at-a-glance" snapshot of the level of compliance to selected program indicators.
26 CFR 1.482-8T - Examples of the best method rule (temporary).
2010-04-01
...'s wholly-owned subsidiary, enter into a CSA to develop a new oncological drug, Oncol. Immediately prior to entering into the CSA, USP acquires Company X, an unrelated U.S. pharmaceutical company.... Preference for market capitalization method. (i) Company X is a publicly traded U.S. company solely engaged...
Hermeneutic interviewing: an example of its development and use as research method.
Geanellos, R
1999-06-01
In a study exploring the practice knowledge of nursing on adolescent mental health units, I chose hermeneutic philosophy to guide the conduct of the research. Immediately, I encountered the problem that hermeneutics is essentially unconcerned with its use as a research method. The need for congruence between the study's hermeneutic foundations and the methodological processes of the research led me to develop a style of hermeneutic interviewing for the purpose of information gathering. I did this using Gadamer's (1979) fundamental principles of: (1) tradition, (2) the dialectics of interpretation, and (3) the dialectic of question and answer. These principles are examined and discussed. The actualization of hermeneutic interviewing, as a means of information gathering, proved challenging. Using interview excerpts, I demonstrate my use of hermeneutic interviewing as a research method, and critique my interviewing skills in relation to the fundamental principles from which this style of interviewing was developed.
Directory of Open Access Journals (Sweden)
Taylor Mac Intyer Fonseca Junior
2013-12-01
Full Text Available This work evaluates seven methods for estimating fatigue properties, applied to stainless steels and aluminum alloys. Experimental strain-life curves are compared to the estimates obtained by each method. After applying the seven estimation methods to 14 material conditions, it was found that fatigue life can be estimated with good accuracy only by the Bäumel-Seeger method, and only for the martensitic stainless steel tempered between 300°C and 500°C. The differences between mechanical behavior during monotonic and cyclic loading are probably the reason for the absence of a reliable method for estimating fatigue behavior from monotonic properties for a group of materials.
Directory of Open Access Journals (Sweden)
Predrag Pejovic
2013-12-01
Full Text Available The application of a single-phase rectifier as an example in teaching circuit modeling, normalization, operating modes of nonlinear circuits, and circuit analysis methods is proposed. The rectifier, supplied from a voltage source through an inductive impedance, is analyzed in the discontinuous as well as in the continuous conduction mode. A completely analytical solution for the continuous conduction mode is derived. Appropriate numerical methods are proposed to obtain the circuit waveforms in both operating modes and to compute the performance parameters. Source code of the program that performs these computations is provided.
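The discontinuous conduction mode mentioned above can be illustrated with the standard textbook half-wave rectifier feeding an R-L load, a simpler circuit than the paper's, where a numerical method finds the extinction angle at which conduction ceases. The load angle chosen below is an arbitrary example.

```python
import math

# Half-wave rectifier with an R-L load: a textbook case of discontinuous
# conduction (illustrative; not the paper's exact circuit). While the
# diode conducts, the normalized current with phi = atan(wL/R) is
#   i(theta) ~ sin(theta - phi) + sin(phi) * exp(-theta / tan(phi))
phi = math.pi / 4  # assumed 45-degree load angle

def i_norm(theta):
    return math.sin(theta - phi) + math.sin(phi) * math.exp(-theta / math.tan(phi))

# The extinction angle beta (end of conduction) solves i(beta) = 0; the
# current is monotonically decreasing on (pi, 3*pi/2), so bisect there.
lo, hi = math.pi, 1.5 * math.pi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if i_norm(mid) > 0:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
print(math.degrees(beta))  # conduction ends a bit past 225 degrees
```

Because the load inductance keeps the diode conducting past the voltage zero-crossing, beta exceeds 180 degrees; between beta and the next cycle the current is zero, which is exactly the discontinuous mode.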
Directory of Open Access Journals (Sweden)
Dunstan Debra A
2012-05-01
Full Text Available Abstract Background: Children living in socioeconomic disadvantage are at risk of poor mental health outcomes. In order to focus and evaluate population health programs to facilitate children's resilience, it is important to accurately assess baseline levels of functioning. With this end in mind, the aim of this study was to test the utility of (1) a voluntary random sampling method and (2) quantitative measures of adaptation (with national normative data) for assessing the resilience of children in an identified community. Methods: This cross-sectional study utilized a sample of participants (N = 309), including parents (n = 169), teachers (n = 20) and children (n = 170; age range = 5-16 years), recruited from the schools in Tenterfield, a socioeconomically disadvantaged community in New South Wales, Australia. The Strengths and Difficulties Questionnaire (SDQ; including parent, teacher and youth versions) was used to measure psychological well-being and pro-social functioning, and NAPLAN results (individual children's and whole-school performance in literacy and numeracy) were used to measure level of academic achievement. Results: The community's disadvantage was evident in the whole-school NAPLAN performance but not in the sample's NAPLAN or SDQ results. The teacher SDQ ratings appeared to be more reliable than the parents' ratings. The voluntary random sampling method (requiring parental consent) led to sampling bias. Conclusions: The key indicators of resilience - psychological well-being, pro-social functioning and academic achievement - can be measured in whole communities using the teacher version of the SDQ and whole-school results on a national test of literacy and numeracy (e.g., Australia's NAPLAN). A voluntary random sample (dependent upon parental consent) appears to have limited value due to the likelihood of sampling bias.
Azis, Moh. Ivan; Kasbawati; Haddade, Amiruddin; Astuti Thamrin, Sri
2018-03-01
A boundary element method (BEM) is obtained for solving a boundary value problem of homogeneous anisotropic media governed by the diffusion-convection equation. The application of the BEM is shown for two particular pollutant transport problems, for the Tello river and Unhas lake in Makassar, Indonesia. For these two problems, a variety of diffusion coefficients and velocity components are considered. The results show that the solutions vary as the parameters change, which suggests that care must be taken in measuring or determining the values of the parameters.
An Aural Learning Project: Assimilating Jazz Education Methods for Traditional Applied Pedagogy
Gamso, Nancy M.
2011-01-01
The Aural Learning Project (ALP) was developed to incorporate jazz method components into the author's classical practice and her applied woodwind lesson curriculum. The primary objective was to place a more focused pedagogical emphasis on listening and hearing than is traditionally used in the classical applied curriculum. The components of the…
Cost-effectiveness thresholds: methods for setting and examples from around the world.
Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano
2018-06-01
Cost-effectiveness thresholds (CETs) are used to judge whether an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of HTA, where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect, followed by a complementary search of the references of included studies, Google Scholar and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: willingness-to-pay, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity cost method, which links the threshold to the volume of health displaced. Most places have never formally adopted an explicit threshold. Some countries have defined thresholds, with some flexibility to consider other factors. An implicit threshold could be determined by examining previously funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient Health Technology Assessment system.
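The threshold rule itself is simple arithmetic: compute the incremental cost-effectiveness ratio of the new intervention against its comparator and compare it with the threshold. All figures below are hypothetical.

```python
# Illustrative adoption decision against an explicit cost-effectiveness
# threshold; every figure is hypothetical (costs in $, effects in QALYs).
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return delta_cost / delta_effect

threshold = 30000.0                          # assumed willingness-to-pay/QALY
ratio = icer(12000.0 - 4000.0, 1.6 - 1.2)    # about 20000 per QALY
adopt = ratio <= threshold
print(round(ratio), adopt)
```

The three approaches the abstract lists differ only in where `threshold` comes from (stated willingness-to-pay, a precedent technology, or the health displaced elsewhere in the budget); the decision rule stays the same.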
An implementation of the diagnosis method DYANA, applied to a combined heat-power device
Energy Technology Data Exchange (ETDEWEB)
Van der Neut, F.
1993-10-01
The development and implementation of the monitor-and-diagnosis method DYANA is presented. This implementation is applied to and tested on a combined heat and power generating device (CHP). The steps taken in realizing this implementation are evaluated in detail. In chapter two the theory behind DYANA is recapitulated. Attention is paid to the basic theory of diagnosis, and the steps on the path from this theory to the algorithm DYANA are revealed. These steps include the hierarchical approach, and explain the following features of DYANA: a) the use of best-first dynamic model zooming based on heuristics with respect to parsimony of the number of components within the diagnoses, b) the use of consistency of fault models with observations to focus on the most likely diagnoses, and c) the use of online diagnosis: the current set of diagnoses is incrementally updated after each new observation of the system. In chapter three the relevant aspects of the system to be diagnosed, the CHP, are dealt with in detail. The general working of the CHP is explained, its hierarchical structure and mathematical representation are given, observation of the CHP is discussed, and some possible forms of fault models are stated. In chapter four the pseudocode of the implementation developed for DYANA is presented. The pseudocode consists of two parts: the monitoring process (using numerical simulation) and the diagnostic process. The differences between the pseudocode and the actual implementation are mentioned. The CHP is then monitored and diagnosed with this algorithm, and the results of this test are given in chapter five. An actual implementation of DYANA can be found in a separately supplied appendix, the Programme Appendix. The implementation of the monitoring process is meant only for this example of the CHP. The code for the diagnostic process can easily be adjusted for diagnosing other devices, such as electronic circuits. The implementation language is Pascal.
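DYANA itself is written in Pascal and built around hierarchical model zooming, but the parsimony idea it uses, preferring diagnoses with the fewest faulty components, can be sketched generically as finding minimal hitting sets of conflict sets. The component names below are invented illustrations, not taken from the CHP model:

```python
from itertools import combinations

def minimal_diagnoses(components, conflicts):
    """A diagnosis is a minimal set of components that intersects ('hits')
    every conflict set, i.e., assuming those components faulty restores
    consistency between the model and all observations."""
    diagnoses = []
    for size in range(len(components) + 1):
        for cand in combinations(components, size):
            s = set(cand)
            if all(s & c for c in conflicts):           # hits every conflict
                if not any(d <= s for d in diagnoses):  # keep only minimal sets
                    diagnoses.append(s)
    return diagnoses

# Hypothetical CHP-flavored symptoms:
# low output temperature implicates {pump, heat_exchanger},
# low flow implicates {pump, valve}.
conflicts = [{"pump", "heat_exchanger"}, {"pump", "valve"}]
result = minimal_diagnoses(["pump", "heat_exchanger", "valve"], conflicts)
```

A single faulty pump explains both symptoms, so {pump} is the most parsimonious diagnosis; {heat_exchanger, valve} is the other minimal candidate.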
1987-06-26
[Scanned report front matter; text largely illegible from OCR. Recoverable references: an introduction to the use of mixture models in clustering (Cornell University Biometrics Unit Technical Report BU-920-M and Mathematical Sciences Institute); a comparison of the mixture method with two comparable methods from SAS (Cornell University Biometrics Unit Technical Report BU-921-M and Mathematical Sciences Institute).]
Energy Technology Data Exchange (ETDEWEB)
Sehgal, A K; Gupta, S C [Punjabi Univ., Patiala (India). Dept. of Physics
1982-12-14
The complementary variational principles method (CVP) is applied to the thermal conductivities of a plasma in a uniform magnetic field. The results of computations show that the CVP derived results are very useful.
Searching for Innovations and Methods of Using the Cultural Heritage on the Example of Upper Silesia
Wagner, Tomasz
2017-10-01
The basic subject of this paper is the historical and cultural heritage of parts of Upper Silesia that are bound by a common history and face similar problems today. The paper presents selected historical phenomena that have influenced the contemporary space, as well as contemporary issues of heritage protection in Upper Silesia. Since 1989, the interpretation of Silesian architecture has been strongly colored by ideological and national ideas. The last 25 years constitute a further stage of development, marked by rapid transformation of space driven by successive economic transformations. In this period we can observe landscape transformations, the liquidation of objects and historical structures, the loss of regional features, spontaneous processes of adapting objects, and many approaches to implementing forms of protection and to using cultural resources. Upheavals linked to changes of state borders, political system, economy and ethnic composition have meant that the former Upper Silesian border area concentrates phenomena found in other similar European areas where cultures and traditions meet. The latest period in the history of Upper Silesia allows reflection on the character of changes in the architecture and city planning of the area and an appraisal of the efficiency of the practices connected with cultural heritage preservation. The phenomena of the last decades include the loss of regional features, the elimination of objects that were key features of the regional cultural heritage, the deformation of forms shaped over history, and attempts to use those elements of cultural heritage that are widely recognized as cultural values. In this situation it is important to seek creative solutions that will neutralize harmful processes resulting from bad law and practice. The most important task for contemporary space is the search for innovative fields and methods for the use of cultural resources. An important part of the article is
Directory of Open Access Journals (Sweden)
Samira Maerrawi Haddad
2014-01-01
Objective. To assess the quality of care of women with severe maternal morbidity and to identify associated factors. Method. This is a national multicenter cross-sectional study performing surveillance for severe maternal morbidity, using the World Health Organization criteria. The expected number of maternal deaths was calculated with the maternal severity index (MSI), based on the severity of complications, and the standardized mortality ratio (SMR) for each center was estimated. Analyses of the adequacy of care were performed. Results. 17 hospitals were classified as providing adequate care and 10 as providing inadequate care. Besides an almost twofold increase in the maternal mortality ratio, the main factors associated with inadequate performance were geographic difficulty in accessing health services (P<0.001), delays related to quality of medical care (P=0.012), absence of blood derivatives (P=0.013), difficulties of communication between health services (P=0.004), and any delay during the whole process (P=0.039). Conclusions. This is an example of how evaluation of the performance of health services is possible, using a benchmarking tool specific to obstetrics. In this study the MSI was a useful tool for identifying differences in maternal mortality ratios and factors associated with inadequate performance of care.
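The benchmarking arithmetic behind the SMR is straightforward: observed deaths divided by the number expected from case-mix severity. A minimal sketch with invented centers (the counts below are not from the study):

```python
def smr(observed_deaths, expected_deaths):
    """Standardized mortality ratio: observed deaths divided by the number
    expected from case severity (here, MSI-predicted risks summed over cases).
    SMR > 1 means more deaths than the case mix predicts."""
    return observed_deaths / expected_deaths

# Hypothetical centers with the same severity-adjusted expectation:
center_a = smr(observed_deaths=8, expected_deaths=10.0)   # below expectation
center_b = smr(observed_deaths=15, expected_deaths=10.0)  # above expectation
```

Centers with SMR well above 1 are candidates for the adequacy-of-care analysis described in the abstract.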
Risk-based security cost-benefit analysis: method and example applications - 59381
International Nuclear Information System (INIS)
Wyss, Gregory; Hinton, John; Clem, John; Silva, Consuelo; Duran, Felicia A.
2012-01-01
Document available in abstract form only. Full text of publication follows: Decision makers wish to use risk-based cost-benefit analysis to prioritize security investments. However, understanding security risk requires estimating the likelihood of attack, which is extremely uncertain and depends on unquantifiable psychological factors like dissuasion and deterrence. In addition, the most common performance metric for physical security systems, probability of effectiveness at the design basis threat [P(E)], performs poorly in cost-benefit analysis. It is extremely sensitive to small changes in adversary characteristics when the threat is near a system's breaking point, but very insensitive to those changes under other conditions. This makes it difficult to prioritize investment options on the basis of P(E), especially across multiple targets or facilities. To overcome these obstacles, a Sandia National Laboratories Laboratory Directed Research and Development project has developed a risk-based security cost-benefit analysis method. This approach characterizes targets by how difficult it would be for adversaries to exploit each target's vulnerabilities to induce consequences. Adversaries generally have success criteria (e.g., adequate or desired consequences and thresholds for likelihood of success), and choose among alternative strategies that meet these criteria while considering their degree of difficulty in achieving a successful outcome. Investments reduce security risk as they reduce the severity of consequences available and/or increase the difficulty for an adversary to successfully accomplish their most advantageous attack
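The sensitivity problem described for P(E) can be illustrated with a toy model. Here a logistic curve (our assumption for illustration, not Sandia's model) stands in for system performance versus adversary capability: the same small capability change swings P(E) sharply near the breaking point but barely moves it elsewhere.

```python
import math

def prob_effectiveness(defense_level, adversary_capability, steepness=5.0):
    """Toy logistic stand-in for P(E): probability the security system defeats
    the adversary. All numbers are illustrative, not from the paper."""
    return 1.0 / (1.0 + math.exp(-steepness * (defense_level - adversary_capability)))

# Identical capability increase of 0.4, near vs. far from the breaking point:
near = prob_effectiveness(5.0, 5.0) - prob_effectiveness(5.0, 5.4)
far = prob_effectiveness(5.0, 2.0) - prob_effectiveness(5.0, 2.4)
```

The near-breaking-point change moves P(E) by tens of percentage points while the far-from-breaking-point change is negligible, which is exactly why P(E) alone ranks investment options poorly.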
Using Qualitative Methods to Explore Non-Disclosure: The Example of Self-Injury
Directory of Open Access Journals (Sweden)
Jo Borrill PhD
2012-09-01
Attempts to investigate non-disclosure are hampered by the very aspect being examined, namely an unwillingness to disclose non-disclosure. Although qualitative interviews may be considered an appropriate method for in-depth exploration of personal experiences, a lack of anonymity and the desire to conform to what is perceived to be socially acceptable limit their application in sensitive research. The current study, using a qualitative approach, addresses non-disclosure in the context of non-suicidal self-injury. Twenty-five young adults from diverse cultural backgrounds were interviewed in depth about their perceptions of self-injury, without the researchers asking directly whether the participants had ever self-harmed. Two techniques were used to enhance discussion within the qualitative interview: participants were invited to (a) discuss three hypothetical scenarios and (b) explore alternative interpretations of statistical data on patterns of self-harm. Key themes emerged regarding disclosure, gender issues, and culturally shaped concerns about the consequences of disclosure. The contributions of each element of the interview to understanding participants' perceptions are highlighted, and alternative methodological approaches for examining disclosure are discussed.
Wielandt method applied to the diffusion equations discretized by finite element nodal methods
International Nuclear Information System (INIS)
Mugica R, A.; Valle G, E. del
2003-01-01
Nowadays, numerical solution of the diffusion equation by means of algorithms and computer programs is demanding because of the great number of routines and calculations that must be carried out; this translates directly into the execution times of these programs, with results obtained in relatively long times. This work shows the application of a method for accelerating the convergence of the classical power method that notably reduces the number of iterations necessary to obtain reliable results, which means that computing times are greatly reduced. This method, known in the literature as the Wielandt method, has been incorporated into a computer program based on the discretization of the neutron diffusion equations in plate geometry and stationary state by polynomial nodal methods. In this work the neutron diffusion equations are described for several energy groups, together with their discretization by means of the so-called physical nodal methods, the quadratic case being illustrated in particular. A model problem widely described in the literature is solved for the physical nodal schemes of degree 1, 2, 3 and 4 in three different ways: a) with the classical power method, b) with the power method with Wielandt acceleration, and c) with the power method with a modified Wielandt acceleration. Results for the model problem as well as for two additional problems known as benchmark problems are reported. This acceleration method can also be implemented for problems with geometries other than the one proposed in this work, and its application can be extended to problems in 2 or 3 dimensions. (Author)
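The acceleration can be sketched generically, in Python rather than the program described, and for a small symmetric test matrix rather than a nodal diffusion operator: Wielandt (shifted inverse) iteration applies the power method to the shifted, inverted operator and converges to the eigenvalue nearest the shift in far fewer iterations than plain power iteration.

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=10000):
    """Classical power iteration for the dominant eigenvalue of A."""
    x = np.ones(A.shape[0])
    lam_old = 0.0
    for k in range(1, max_iter + 1):
        x = A @ x
        lam = np.linalg.norm(x)   # eigenvalue magnitude estimate
        x = x / lam
        if abs(lam - lam_old) < tol:
            return lam, k
        lam_old = lam
    return lam, max_iter

def wielandt_method(A, shift, tol=1e-10, max_iter=10000):
    """Wielandt (shifted inverse) iteration: power iteration on (A - shift*I)^-1.
    Converges to the eigenvalue of A closest to the shift; the closer the shift,
    the larger the ratio between the dominant and next eigenvalue of the
    inverted operator, hence the faster the convergence."""
    M = A - shift * np.eye(A.shape[0])
    x = np.ones(A.shape[0])
    mu_old = 0.0
    for k in range(1, max_iter + 1):
        x = np.linalg.solve(M, x)   # apply (A - shift*I)^-1 without forming it
        mu = np.linalg.norm(x)
        x = x / mu
        if abs(mu - mu_old) < tol:
            return shift + 1.0 / mu, k   # map back to an eigenvalue of A
        mu_old = mu
    return shift + 1.0 / mu, max_iter

A = np.array([[4.0, 1.0], [1.0, 3.0]])        # eigenvalues (7 ± sqrt(5)) / 2
lam_plain, iters_plain = power_method(A)
lam_wielandt, iters_wielandt = wielandt_method(A, shift=4.5)
```

Both routines find the dominant eigenvalue (7 + √5)/2 ≈ 4.618, but the shifted iteration needs noticeably fewer sweeps, which is the effect the abstract reports for the nodal diffusion program.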
Utilising a collective case study system theory mixed methods approach: a rural health example.
Adams, Robyn; Jones, Anne; Lefmann, Sophie; Sheppard, Lorraine
2014-07-28
Insight into local health service provision in rural communities is limited in the literature. The dominant workforce focus in the rural health literature, while revealing issues of shortage and maldistribution, does not describe service provision in rural towns. Similarly, aggregation of data tends to render local health service provision virtually invisible. This paper describes a methodology to explore specific aspects of rural health service provision, with an initial focus on understanding rurality as it pertains to rural physiotherapy service provision. A system theory-case study heuristic was combined with a sequential mixed methods approach to provide a framework for both quantitative and qualitative exploration across sites. Stakeholder perspectives were obtained through surveys and in-depth interviews. The investigation site was a large area of one Australian state with a mix of rural, regional and remote communities. 39 surveys were received from 11 locations within the investigation site and 19 in-depth interviews were conducted. Stakeholder perspectives of rurality and workforce numbers informed the development of six case types relevant to the exploration of rural physiotherapy service provision. Participants' perspectives of rurality often differed from the geographical classification of their location. The number of onsite colleagues and local access to health services contributed to participants' perceptions of rurality. The complexity of the concept of rurality was revealed by interview participants when providing their perspectives on rural physiotherapy service provision. Dual measures, such as rurality and workforce numbers, provide more relevant differentiation of sites for exploring specific services, such as rural physiotherapy service provision, than a single measure of rurality defined by geographic classification. The system theory-case study heuristic supports both qualitative and quantitative exploration in rural health services
What is the method in applying formal methods to PLC applications?
Mader, Angelika H.; Engel, S.; Wupper, Hanno; Kowalewski, S.; Zaytoon, J.
2000-01-01
The question we investigate is how to obtain PLC applications with confidence in their proper functioning. Especially, we are interested in the contribution that formal methods can provide for their development. Our maxim is that the place of a particular formal method in the total picture of system
A simple method to predict regional fish abundance: an example in the McKenzie River Basin, Oregon
D.J. McGarvey; J.M. Johnston
2011-01-01
Regional assessments of fisheries resources are increasingly called for, but tools with which to perform them are limited. We present a simple method that can be used to estimate regional carrying capacity and apply it to the McKenzie River Basin, Oregon. First, we use a macroecological model to predict trout densities within small, medium, and large streams in the...
Non-parametric order statistics method applied to uncertainty propagation in fuel rod calculations
International Nuclear Information System (INIS)
Arimescu, V.E.; Heins, L.
2001-01-01
A method, which is computationally efficient, is presented for the evaluation of the global statement. It is proved that r, the expected fraction of fuel rods exceeding a certain limit, is equal to the (1-r)-quantile of the overall distribution of all possible values from all fuel rods. In this way, the problem is reduced to that of estimating a certain quantile of the overall distribution, and the same techniques used for a single-rod distribution can be applied again. A simplified test case was devised to verify and validate the methodology. The fuel code was replaced by a transfer function dependent on two input parameters. The function was chosen so that analytic results could be obtained for the distribution of the output. This offers a direct validation of the statistical procedure. A sensitivity study was also performed to analyze the effect of the sampling procedure (simple Monte Carlo and Latin Hypercube Sampling) on the final outcome. The effect of sample size on the accuracy and bias of the statistical results was also studied, and the conclusion was reached that the results of the statistical methodology are typically conservative. Finally, an example of applying these statistical techniques to a PWR reload is presented, together with the improvements and new insights the statistical methodology brings to fuel rod design calculations. (author)
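The quantile-estimation step maps onto the standard non-parametric order-statistics calculation often attributed to Wilks: to bound a given quantile with a given confidence using the sample maximum, the required number of code runs follows from the binomial formula 1 − q^n ≥ β. A sketch of that sample-size computation (first-order, maximum-value statements only):

```python
def wilks_sample_size(quantile=0.95, confidence=0.95):
    """Smallest n such that the largest of n independent output samples exceeds
    the `quantile`-quantile of the output distribution with probability at
    least `confidence` (first-order Wilks formula: 1 - quantile**n >= confidence)."""
    n = 1
    while 1.0 - quantile ** n < confidence:
        n += 1
    return n

n_95_95 = wilks_sample_size()              # the well-known 59 runs for 95/95
n_99_99 = wilks_sample_size(0.99, 0.99)    # much larger for a 99/99 statement
```

With 59 code runs, reporting the maximum observed value gives a 95%-confidence upper bound on the 95th percentile, which is the kind of conservative single-quantile statement the abstract reduces the rod-ensemble problem to.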
Towers, Sherry; Mubayi, Anuj; Castillo-Chavez, Carlos
2018-01-01
When attempting to statistically distinguish between a null and an alternative hypothesis, many researchers in the life and social sciences turn to binned statistical analysis methods, or methods that are simply based on the moments of a distribution (such as the mean and variance). These methods have the advantage of simplicity of implementation and simplicity of explanation. However, when null and alternative hypotheses manifest themselves in subtle differences in patterns in the data, binned analysis methods may be insensitive to these differences, and researchers may erroneously fail to reject the null hypothesis when more sensitive statistical analysis methods would produce a different result because the null hypothesis is actually false. Here, with a focus on two recent conflicting studies of contagion in mass killings as instructive examples, we discuss how the use of unbinned likelihood methods makes optimal use of the information in the data; a fact that has long been known in statistical theory, but perhaps is not as widely appreciated amongst general researchers in the life and social sciences. In 2015, Towers et al. published a paper that quantified the long-suspected contagion effect in mass killings. However, in 2017, Lankford & Tomek subsequently published a paper, based upon the same data, that claimed to contradict the results of the earlier study. The former used unbinned likelihood methods, and the latter used binned methods and comparison of distribution moments. Using these analyses, we also discuss how visualization of the data can aid in determining the most appropriate statistical analysis methods to distinguish between a null and alternative hypothesis. We also discuss the importance of assessing the robustness of analysis results to the methodological assumptions made (for example, arbitrary choices of the number of bins and bin widths when using binned methods); an issue that is widely overlooked in the literature, but is critical
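The binned-versus-unbinned contrast can be made concrete with synthetic data. The sketch below fits an exponential rate both ways; the data are simulated for illustration and have nothing to do with the mass-killings dataset. The unbinned maximum-likelihood estimate uses every data point exactly, while the binned fit only sees coarse histogram counts and an arbitrary choice of bin edges:

```python
import math
import random

random.seed(1)
true_rate = 0.5
data = [random.expovariate(true_rate) for _ in range(5000)]

# Unbinned MLE for an exponential distribution: rate = 1 / sample mean.
rate_unbinned = len(data) / sum(data)

# Binned fit: maximize the multinomial log-likelihood of coarse histogram
# counts over a grid of candidate rates (bin edges chosen arbitrarily).
edges = [0.0, 2.0, 4.0, 6.0, 8.0, float("inf")]
counts = [sum(lo <= x < hi for x in data)
          for lo, hi in zip(edges[:-1], edges[1:])]

def binned_loglik(rate):
    ll = 0.0
    for (lo, hi), n in zip(zip(edges[:-1], edges[1:]), counts):
        # probability mass of the exponential in [lo, hi)
        p = math.exp(-rate * lo) - (0.0 if hi == float("inf") else math.exp(-rate * hi))
        ll += n * math.log(p)
    return ll

rate_binned = max((0.3 + 0.001 * i for i in range(400)), key=binned_loglik)
```

Both estimators recover the true rate here, but the unbinned likelihood requires no binning choices at all, which is the robustness point the abstract makes about arbitrary bin widths.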
Goodwin, M.; Pandya, R.; Udu-gama, N.; Wilkins, S.
2017-12-01
While one-size-fits-all may work for most hats, it rarely does for communities. Research products, methods and knowledge may be usable at a local scale, but applying them often presents a challenge due to issues like availability, accessibility, awareness, lack of trust, and time. However, in an environment with diminishing federal investment in issues related to climate change, natural hazards, and natural resource use and management, the ability of communities to access and leverage science has never been more urgent. Established yet responsive frameworks and methods can help scientists and communities work together to identify and address specific challenges and leverage science to make a local impact. Through the launch of over 50 community science projects since 2013, the Thriving Earth Exchange (TEX) has created a living framework consisting of a set of milestones by which teams of scientists and community leaders navigate the challenges of working together. Central to the framework are context, trust, project planning and refinement, relationship management and community impact. We find that careful and respectful partnership management results in trust and an open exchange of information. Community science partnerships grounded in local priorities result in the development and exchange of stronger decision-relevant tools, resources and knowledge. This presentation will explore three methods TEX uses to apply its framework to community science partnerships: cohort-based collaboration, online dialogues, and one-on-one consultation. The choice of method should be responsive to a community's needs and working style. For example, a community may require customized support, desire the input and support of peers, or require consultation with multiple experts before deciding on a course of action. Knowing and applying the method of engagement best suited to achieve the community's objectives will ensure that the science is most effectively translated and applied.
International Nuclear Information System (INIS)
Reynolds, J.G.
2011-01-01
Previous researchers have developed correlations between oxide electronegativity and oxide basicity. The present paper revises those correlations using a newer method of calculating electronegativity of the oxygen anion. Basicity is expressed using the Smith α parameter scale. A linear relation was found between the oxide electronegativity and the Smith α parameter, with an R² of 0.92. An example application of this new correlation to the durability of high-level nuclear waste glass is demonstrated. The durability of waste glass was found to be directly proportional to the quantity and basicity of the oxides of tetrahedrally coordinated network forming ions.
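The kind of linear correlation reported above (with its R²) is a least-squares fit that can be reproduced in form, though not in data, by hand. The (x, y) pairs below are invented stand-ins for (oxide electronegativity, Smith α), not values from the paper:

```python
# Hypothetical stand-in data for (oxide electronegativity, Smith alpha parameter).
xs = [2.0, 2.2, 2.5, 2.8, 3.0, 3.3]
ys = [0.95, 0.80, 0.62, 0.45, 0.33, 0.12]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Ordinary least-squares slope and intercept.
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Coefficient of determination: 1 - residual SS / total SS.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
ss_tot = sum((y - my) ** 2 for y in ys)
r_squared = 1.0 - ss_res / ss_tot
```

For this invented data the fit is strongly linear with a negative slope; the paper's actual fit between electronegativity and the Smith α parameter gave R² = 0.92.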
Estimation Methods for Infinite-Dimensional Systems Applied to the Hemodynamic Response in the Brain
Belkhatir, Zehor
2018-05-01
Infinite-Dimensional Systems (IDSs) which have been made possible by recent advances in mathematical and computational tools can be used to model complex real phenomena. However, due to physical, economic, or stringent non-invasive constraints on real systems, the underlying characteristics for mathematical models in general (and IDSs in particular) are often missing or subject to uncertainty. Therefore, developing efficient estimation techniques to extract missing pieces of information from available measurements is essential. The human brain is an example of IDSs with severe constraints on information collection from controlled experiments and invasive sensors. Investigating the intriguing modeling potential of the brain is, in fact, the main motivation for this work. Here, we will characterize the hemodynamic behavior of the brain using functional magnetic resonance imaging data. In this regard, we propose efficient estimation methods for two classes of IDSs, namely Partial Differential Equations (PDEs) and Fractional Differential Equations (FDEs). This work is divided into two parts. The first part addresses the joint estimation problem of the state, parameters, and input for a coupled second-order hyperbolic PDE and an infinite-dimensional ordinary differential equation using sampled-in-space measurements. Two estimation techniques are proposed: a Kalman-based algorithm that relies on a reduced finite-dimensional model of the IDS, and an infinite-dimensional adaptive estimator whose convergence proof is based on the Lyapunov approach. We study and discuss the identifiability of the unknown variables for both cases. The second part contributes to the development of estimation methods for FDEs where major challenges arise in estimating fractional differentiation orders and non-smooth pointwise inputs. First, we propose a fractional high-order sliding mode observer to jointly estimate the pseudo-state and input of commensurate FDEs. Second, we propose a
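The first part's Kalman-based algorithm operates on a reduced finite-dimensional model of the infinite-dimensional system. A minimal scalar Kalman filter conveys the predict/update idea behind that family of estimators; this is a generic sketch with assumed noise parameters, not the thesis's reduced-order estimator:

```python
import random

random.seed(0)

def kalman_1d(measurements, process_var=1e-4, meas_var=0.04):
    """Scalar Kalman filter tracking a (nearly) constant signal from noisy samples."""
    x, p = 0.0, 1.0             # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + process_var     # predict: state model is a random walk
        k = p / (p + meas_var)  # Kalman gain balances model vs. measurement trust
        x = x + k * (z - x)     # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates

true_value = 1.5
zs = [true_value + random.gauss(0, 0.2) for _ in range(200)]
est = kalman_1d(zs)
```

The estimate converges toward the true signal despite per-sample noise that is an order of magnitude larger than the final error; joint state/parameter/input estimation for PDE models builds the same predict/update cycle on a higher-dimensional state.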
A new clamp method for firing bricks | Obeng | Journal of Applied ...
African Journals Online (AJOL)
A new clamp method for firing bricks. ... Journal of Applied Science and Technology ... To overcome these operational deficiencies, a new method of firing bricks that uses a brick clamp technique incorporating a clamp wall of 60 cm thickness, a six-tier approach to sealing the top of the clamp (by a combination of green bricks) ...
1978-10-01
This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...
Determination methods for plutonium as applied in the field of reprocessing
International Nuclear Information System (INIS)
1983-07-01
The papers presented report on Pu-determination methods which are routinely applied in process control, as well as on new developments that could supersede current methods, either because they are more accurate or because they are simpler and faster. (orig./DG) [de
Water Permeability of Pervious Concrete Is Dependent on the Applied Pressure and Testing Methods
Directory of Open Access Journals (Sweden)
Yinghong Qin
2015-01-01
Falling head method (FHM) and constant head method (CHM) are used, with different water heads applied to the test samples, to measure the water permeability of pervious concrete. The results indicate that the apparent permeability of pervious concrete decreases with the applied water head. The results also demonstrate that the permeability measured with the FHM is lower than that measured with the CHM. The fundamental difference between the CHM and FHM is examined from the theory of fluid flow through porous media. The test results suggest that the water permeability of pervious concrete should be reported together with the applied pressure and the testing method used.
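The two test methods correspond to two standard Darcy's-law permeability formulas. A sketch with invented sample dimensions (the geometry and readings below are assumptions, not the paper's specimens):

```python
import math

def k_constant_head(Q, L, A, h):
    """Constant head method: Darcy's law, k = Q*L / (A*h),
    with Q the steady flow rate, L the sample length, A its cross-section,
    and h the constant head maintained across it."""
    return (Q * L) / (A * h)

def k_falling_head(a, L, A, t, h1, h2):
    """Falling head method: k = (a*L / (A*t)) * ln(h1/h2),
    with a the standpipe cross-section and the head falling from h1 to h2
    over time t."""
    return (a * L) / (A * t) * math.log(h1 / h2)

# Hypothetical sample: 15 cm long, 78.5 cm^2 cross-section.
k_chm = k_constant_head(Q=50.0, L=15.0, A=78.5, h=10.0)                  # cm/s
k_fhm = k_falling_head(a=19.6, L=15.0, A=78.5, t=12.0, h1=30.0, h2=10.0)  # cm/s
```

Because the two formulas integrate the head differently (constant versus decaying), the same sample can legitimately report different apparent permeabilities, which is why the abstract recommends reporting the applied pressure and method alongside k.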
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
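A minimal perturb-and-observe (P&O) MPPT loop, run against a deliberately simplified P-V curve, shows the kind of simulation the methodology supports. Both the curve shape and its parameters below are our assumptions for illustration, not the paper's datasheet-based panel model:

```python
def panel_power(v, v_oc=40.0, i_sc=8.0):
    """Very simplified P-V curve of a panel: current falls off sharply as the
    voltage approaches open circuit (illustrative shape only)."""
    if v <= 0.0 or v >= v_oc:
        return 0.0
    i = i_sc * (1.0 - (v / v_oc) ** 8)
    return v * i

def perturb_and_observe(v0=20.0, step=0.2, iters=200):
    """Classic P&O MPPT: keep stepping the operating voltage in the direction
    that increased power, and reverse direction when power drops."""
    v, direction = v0, +1.0
    p_prev = panel_power(v)
    for _ in range(iters):
        v += direction * step
        p = panel_power(v)
        if p < p_prev:
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
```

For this curve the true maximum-power voltage is 40/9^(1/8) ≈ 30.4 V; the P&O loop climbs to it and then oscillates around it, the step size setting the trade-off between tracking speed and steady-state ripple that MPPT comparisons evaluate.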
Directory of Open Access Journals (Sweden)
Yannan Hu
2017-04-01
Background The scientific evidence base for policies to tackle health inequalities is limited. Natural policy experiments (NPEs) have drawn increasing attention as a means of evaluating the effects of policies on health. Several analytical methods can be used to evaluate the outcomes of NPEs in terms of average population health, but it is unclear whether they can also be used to assess the outcomes of NPEs in terms of health inequalities. The aim of this study therefore was to assess whether, and to demonstrate how, a number of commonly used analytical methods for the evaluation of NPEs can be applied to quantify the effect of policies on health inequalities. Methods We identified seven quantitative analytical methods for the evaluation of NPEs: regression adjustment, propensity score matching, difference-in-differences analysis, fixed effects analysis, instrumental variable analysis, regression discontinuity and interrupted time-series. We assessed whether these methods can be used to quantify the effect of policies on the magnitude of health inequalities either by conducting a stratified analysis or by including an interaction term, and illustrated both approaches in a fictitious numerical example. Results All seven methods can be used to quantify the equity impact of policies on absolute and relative inequalities in health by conducting an analysis stratified by socioeconomic position, and all but one (propensity score matching) can be used to quantify equity impacts by inclusion of an interaction term between socioeconomic position and policy exposure. Conclusion Methods commonly used in economics and econometrics for the evaluation of NPEs can also be applied to assess the equity impact of policies, and our illustrations provide guidance on how to do this appropriately. The low external validity of results from instrumental variable analysis and regression discontinuity makes these methods less desirable for assessing policy effects
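The stratified-analysis approach can be illustrated in the same spirit as the paper's fictitious numerical example: compute a difference-in-differences estimate within each socioeconomic stratum, and read the equity impact off their difference (the three-way interaction). All numbers below are invented:

```python
# (treated, post, low_ses) -> mean outcome; fictitious cell means with a
# policy effect that is larger in the low-SES stratum (an equity impact).
cells = {
    (0, 0, 0): 70.0, (0, 1, 0): 72.0, (1, 0, 0): 68.0, (1, 1, 0): 73.0,
    (0, 0, 1): 60.0, (0, 1, 1): 62.0, (1, 0, 1): 58.0, (1, 1, 1): 67.0,
}

def did(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: change in the treated group minus the
    change in the control group, removing shared time trends."""
    return (treated_post - treated_pre) - (control_post - control_pre)

did_high = did(cells[(1, 0, 0)], cells[(1, 1, 0)],
               cells[(0, 0, 0)], cells[(0, 1, 0)])   # policy effect, high SES
did_low = did(cells[(1, 0, 1)], cells[(1, 1, 1)],
              cells[(0, 0, 1)], cells[(0, 1, 1)])    # policy effect, low SES
equity_impact = did_low - did_high                   # the SES-by-policy interaction
```

Here the policy improves the outcome by 3 units in the high-SES stratum and 7 in the low-SES stratum, so the policy narrows the absolute inequality by 4 units; in a regression this quantity is the coefficient on the SES-by-policy interaction term.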
Diamond difference method with hybrid angular quadrature applied to neutron transport problems
International Nuclear Information System (INIS)
Zani, Jose H.; Barros, Ricardo C.; Alves Filho, Hermes
2005-01-01
In this work we present results for calculations of the disadvantage factor in thermal nuclear reactor physics. We use the one-group discrete ordinates (SN) equations to mathematically model the flux distributions in slab lattices. We apply the diamond difference method with a source iteration scheme to numerically solve the discretized system of equations. Special interface conditions are used to describe the method with hybrid angular quadrature. Numerical results are shown to illustrate the accuracy of the hybrid method. (author)
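As a concrete illustration of the scheme named above, here is a minimal one-group diamond-difference S2 sweep with source iteration for a homogeneous slab. This is a generic sketch with assumed cross-sections, vacuum boundaries, and a single Gauss quadrature, not the authors' hybrid-quadrature lattice code:

```python
def sn_diamond_difference(sigma_t=1.0, sigma_s=0.5, source=1.0,
                          width=10.0, n_cells=100, tol=1e-8, max_sweeps=500):
    """One-group SN transport in a slab: diamond difference in space,
    S2 Gauss-Legendre quadrature in angle, isotropic scattering, flat
    external source, vacuum boundaries, solved by source iteration."""
    h = width / n_cells
    mus = (-1 / 3 ** 0.5, 1 / 3 ** 0.5)   # S2 ordinates
    wts = (1.0, 1.0)                      # weights sum to 2
    phi = [0.0] * n_cells                 # scalar flux
    for _ in range(max_sweeps):
        # Isotropic angular source from the previous scalar flux.
        q = [(sigma_s * p + source) / 2.0 for p in phi]
        phi_new = [0.0] * n_cells
        for mu, w in zip(mus, wts):
            order = range(n_cells) if mu > 0 else range(n_cells - 1, -1, -1)
            psi_in = 0.0                  # vacuum incoming flux
            am = abs(mu) / h
            for i in order:
                # Diamond difference: cell-average flux = edge-flux average.
                psi_out = (q[i] + (am - sigma_t / 2.0) * psi_in) / (am + sigma_t / 2.0)
                phi_new[i] += w * 0.5 * (psi_in + psi_out)
                psi_in = psi_out
        diff = max(abs(a - b) for a, b in zip(phi_new, phi))
        phi = phi_new
        if diff < tol:
            break
    return phi

phi = sn_diamond_difference()
```

A quick sanity check: far from the vacuum boundaries the flux should approach the infinite-medium value S/(σt − σs) = 2 for these assumed cross-sections, while the flux near the boundaries is depressed.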
Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto
In this research, we proposed and evaluated a management method for college mechatronics education, applying project management techniques. We practiced our management method in the seminar "Microcomputer Seminar" for 3rd-grade students of the Department of Electrical Engineering, Shibaura Institute of Technology, and succeeded in managing the Microcomputer Seminar in 2006. A questionnaire survey yielded a good evaluation of our management method.
Bendinskaitė, Irmina
2015-01-01
Bendinskaitė I. Perspective for applying traditional and innovative teaching and learning methods to nurses' continuing education, master's thesis / supervisor Assoc. Prof. O. Riklikienė; Department of Nursing and Care, Faculty of Nursing, Lithuanian University of Health Sciences. – Kaunas, 2015, – p. 92. The purpose of this study was to investigate the perspective of traditional and innovative teaching and learning methods in nurses' continuing education. Material and methods. In a period fro...
Cluster detection methods applied to the Upper Cape Cod cancer data
Directory of Open Access Journals (Sweden)
Ozonoff David
2005-09-01
Background A variety of statistical methods have been suggested to assess the degree and/or the location of spatial clustering of disease cases. However, there is relatively little in the literature devoted to comparison and critique of different methods. Most of the available comparative studies rely on simulated data rather than real data sets. Methods We have chosen three methods currently used for examining spatial disease patterns: the M-statistic of Bonetti and Pagano; the Generalized Additive Model (GAM) method as applied by Webster; and Kulldorff's spatial scan statistic. We apply these statistics to analyze breast cancer data from the Upper Cape Cancer Incidence Study using three different latency assumptions. Results The three different latency assumptions produced three different spatial patterns of cases and controls. For the 20-year latency assumption, all three methods generally concur. However, for the 15-year latency and no-latency assumptions, the methods produce different results when testing for global clustering. Conclusion The comparative analysis of real data sets by different statistical methods provides insight into directions for further research. We suggest a research program designed around examining real data sets to guide focused investigation of relevant features using simulated data, for the purpose of understanding how to interpret statistical methods applied to epidemiological data with a spatial component.
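Of the three methods compared, Kulldorff's spatial scan statistic is the most compact to sketch: scan circular windows over the study region and keep the window maximizing a Poisson log-likelihood ratio. Below is a bare-bones version run on toy data with one planted cluster; the real method adds Monte Carlo replication to attach p-values, which is omitted here:

```python
import math

def scan_llr(c, e, C, E):
    """Kulldorff Poisson log-likelihood ratio for one window with c cases and
    e expected, out of C total cases and E total expected; positive only for
    windows with an excess of cases."""
    if c <= e * C / E:
        return 0.0
    llr = c * math.log(c / e) - C * math.log(C / E)
    if C > c:
        llr += (C - c) * math.log((C - c) / (E - e))
    return llr

def spatial_scan(points, cases, expected, radii):
    """Scan all (center, radius) circular windows; return (best LLR, details)."""
    C, E = sum(cases), sum(expected)
    best = (0.0, None)
    for cx, cy in points:
        for r in radii:
            inside = [i for i, (x, y) in enumerate(points)
                      if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2]
            c = sum(cases[i] for i in inside)
            e = sum(expected[i] for i in inside)
            if 0 < e < E:
                llr = scan_llr(c, e, C, E)
                if llr > best[0]:
                    best = (llr, ((cx, cy), r, inside))
    return best

# Toy 5x5 grid with a planted excess of cases at (1, 1).
points = [(x, y) for x in range(5) for y in range(5)]
expected = [4.0] * 25
cases = [4] * 25
cases[points.index((1, 1))] = 20
best = spatial_scan(points, cases, expected, radii=[0.5, 1.0])
```

The scan correctly singles out the window around the planted hot spot; on real case-control data the window set, population-based expectations, and Monte Carlo inference make the method considerably heavier than this sketch.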
Apparatus and method for applying an end plug to a fuel rod tube end
International Nuclear Information System (INIS)
Rieben, S.L.; Wylie, M.E.
1987-01-01
An apparatus is described for applying an end plug to a hollow end of a nuclear fuel rod tube, comprising: support means mounted for reciprocal movement between remote and adjacent positions relative to a nuclear fuel rod tube end to which an end plug is to be applied; guide means supported on the support means for movement; and drive means coupled to the support means and being actuatable for movement between retracted and extended positions for reciprocally moving the support means between its respective remote and adjacent positions. A method for applying an end plug to a hollow end of a nuclear fuel rod tube is also described.
Hassenforder, Emeline; Ducrot, Raphaëlle; Ferrand, Nils; Barreteau, Olivier; Anne Daniell, Katherine; Pittock, Jamie
2016-09-15
Participatory approaches are now increasingly recognized and used as an essential element of policies and programs, especially with regard to natural resource management (NRM). Most practitioners, decision-makers and researchers having adopted participatory approaches also acknowledge the need to monitor and evaluate such approaches in order to audit their effectiveness, support decision-making or improve learning. Many manuals and frameworks exist on how to carry out monitoring and evaluation (M&E) for participatory processes. However, few provide guidelines on the selection and implementation of M&E methods, an aspect which is also often obscure in published studies, at the expense of the transparency, reliability and validity of the study. In this paper, we argue that the selection and implementation of M&E methods are particularly strategic when monitoring and evaluating a participatory process. We demonstrate that evaluators of participatory processes have to tackle a quadruple challenge when selecting and implementing methods: using mixed methods, both qualitative and quantitative; assessing the participatory process, its outcomes, and its context; taking into account both the theory and participants' views; and being both rigorous and adaptive. The M&E of a participatory planning process in the Rwenzori Region, Uganda, is used as an example to show how these challenges unfold on the ground and how they can be tackled. Based on this example, we conclude by providing tools and strategies that can be used by evaluators to ensure that they make useful, feasible, coherent, transparent and adaptive methodological choices when monitoring and evaluating participatory processes for NRM. Copyright © 2016 Elsevier Ltd. All rights reserved.
Method of levelized discounted costs applied in economic evaluation of nuclear power plant project
International Nuclear Information System (INIS)
Tian Li; Wang Yongqing; Liu Jingquan; Guo Jilin; Liu Wei
2000-01-01
The main methods of economic evaluation of bids in common use are introduced. The characteristics of the levelized discounted cost method and its application are presented. The method of levelized discounted cost is applied to the cost calculation in the economic evaluation of a 200 MW nuclear heating reactor. The results indicate that the method of levelized discounted costs is simple and feasible, and is considered most suitable for the economic evaluation of various cases. Its use in national economic evaluation is suggested.
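The levelized discounted cost at the centre of this evaluation is the ratio of discounted lifetime costs to discounted lifetime output. A minimal sketch, with illustrative numbers rather than the paper's 200 MW case study:

```python
def levelized_cost(costs, outputs, rate):
    """Levelized discounted cost per unit of output.

    costs[t], outputs[t]: cost and energy output in year t (t = 0, 1, ...);
    rate: annual discount rate, e.g. 0.05 for 5%.
    """
    disc_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
    disc_out = sum(e / (1 + rate) ** t for t, e in enumerate(outputs))
    return disc_cost / disc_out
```

With a zero discount rate this reduces to total cost divided by total output; with a positive rate, early-year costs and output weigh more heavily.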
Lorne, Emmanuel; Diouf, Momar; de Wilde, Robert B P; Fischer, Marc-Olivier
2018-02-01
The Bland-Altman (BA) and percentage error (PE) methods have been previously described to assess the agreement between 2 methods of medical or laboratory measurements. This type of approach raises several problems: the BA methodology constitutes a subjective approach to interchangeability, whereas the PE approach does not take into account the distribution of values over a range. We describe a new methodology that defines an interchangeability rate between 2 methods of measurement and cutoff values that determine the range of interchangeable values. We used simulated data and a previously published data set to demonstrate the concept of the method. The interchangeability rate of 5 different cardiac output (CO) pulse contour techniques (Wesseling method, LiDCO, PiCCO, Hemac method, and Modelflow) was calculated, in comparison with the reference pulmonary artery thermodilution CO, using our new method. In our example, Modelflow, with a good interchangeability rate of 93% and a cutoff value of 4.8 L/min, was found to be interchangeable with the thermodilution method for >95% of measurements. Modelflow had a higher interchangeability rate compared to Hemac (93% vs 86%; P = .022) or other monitors (Wesseling cZ = 76%, LiDCO = 73%, and PiCCO = 62%; P < .0001). Simulated data and reanalysis of a data set comparing 5 CO monitors against thermodilution CO showed that, depending on the repeatability of the reference method, the interchangeability rate combined with a cutoff value could be used to define the range of values over which interchangeability remains acceptable.
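The Bland-Altman limits of agreement and the percentage error against which the new interchangeability rate is positioned can be sketched as follows. This is a generic implementation of the two classical criteria, not of the authors' new method:

```python
from statistics import mean, stdev  # stdev = sample SD (n - 1 denominator)

def bland_altman(ref, test):
    """Bias and 95% limits of agreement between paired measurements."""
    diffs = [t - r for r, t in zip(ref, test)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def percentage_error(ref, test):
    """Percentage error: 1.96 x SD of the differences over the mean
    reference cardiac output, expressed in percent."""
    diffs = [t - r for r, t in zip(ref, test)]
    return 1.96 * stdev(diffs) / mean(ref) * 100.0
```

The classical acceptance threshold quoted in the CO-monitoring literature is a percentage error below about 30%; the abstract's point is that neither criterion describes how agreement varies across the measurement range.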
Hu, Yannan; van Lenthe, Frank J; Hoffmann, Rasmus; van Hedel, Karen; Mackenbach, Johan P
2017-04-20
The scientific evidence-base for policies to tackle health inequalities is limited. Natural policy experiments (NPE) have drawn increasing attention as a means to evaluating the effects of policies on health. Several analytical methods can be used to evaluate the outcomes of NPEs in terms of average population health, but it is unclear whether they can also be used to assess the outcomes of NPEs in terms of health inequalities. The aim of this study therefore was to assess whether, and to demonstrate how, a number of commonly used analytical methods for the evaluation of NPEs can be applied to quantify the effect of policies on health inequalities. We identified seven quantitative analytical methods for the evaluation of NPEs: regression adjustment, propensity score matching, difference-in-differences analysis, fixed effects analysis, instrumental variable analysis, regression discontinuity and interrupted time-series. We assessed whether these methods can be used to quantify the effect of policies on the magnitude of health inequalities either by conducting a stratified analysis or by including an interaction term, and illustrated both approaches in a fictitious numerical example. All seven methods can be used to quantify the equity impact of policies on absolute and relative inequalities in health by conducting an analysis stratified by socioeconomic position, and all but one (propensity score matching) can be used to quantify equity impacts by inclusion of an interaction term between socioeconomic position and policy exposure. Methods commonly used in economics and econometrics for the evaluation of NPEs can also be applied to assess the equity impact of policies, and our illustrations provide guidance on how to do this appropriately. The low external validity of results from instrumental variable analysis and regression discontinuity makes these methods less desirable for assessing policy effects on population-level health inequalities. Increased use of the
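Of the seven methods listed, difference-in-differences stratified by socioeconomic position is the simplest to illustrate. A toy sketch with invented group names and outcome means (illustrative only, not data from the study):

```python
def did(pre_t, post_t, pre_c, post_c):
    """Difference-in-differences: change in the treated group minus
    change in the control group."""
    return (post_t - pre_t) - (post_c - pre_c)

def equity_impact(strata):
    """strata: {ses_group: (pre_treated, post_treated, pre_control, post_control)}
    of mean health outcomes, ordered from most to least deprived group.
    Returns the per-group policy effect and the gap between the first and
    last group, i.e. the policy's impact on absolute inequality."""
    effects = {g: did(*v) for g, v in strata.items()}
    groups = list(effects)
    return effects, effects[groups[0]] - effects[groups[-1]]
```

The same contrast can be obtained in a single regression by an interaction term between policy exposure and socioeconomic position, which is the alternative the abstract describes.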
Local regression type methods applied to the study of geophysics and high frequency financial data
Mariani, M. C.; Basu, K.
2014-09-01
In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is very accurate, with a relative error of 0.01%. We also applied the same method to a high frequency data set arising in the financial sector and obtained similar satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and the Lowess approach is much more desirable than the Loess method. Previous works performed time series analyses; in this paper our local regression models perform a spatial analysis for the geophysics data, providing different information. For the high frequency data, our models estimate the curve of best fit where data are dependent on time.
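The core of Lowess is a weighted local linear fit with tricube weights. A minimal single-point sketch, one pass only, without the robustness iterations of the full Cleveland algorithm:

```python
def loess_point(x0, xs, ys, frac=0.5):
    """Locally weighted linear fit evaluated at x0 (simplified Lowess)."""
    n = len(xs)
    k = max(2, int(frac * n))
    dists = sorted(abs(x - x0) for x in xs)
    h = dists[k - 1] or 1e-12                  # bandwidth: k-th nearest distance
    # tricube weights: w = (1 - u^3)^3 for u = |x - x0| / h, clipped at 1
    w = [(1 - min(1.0, abs(x - x0) / h) ** 3) ** 3 for x in xs]
    # weighted least squares for y = a + b*(x - x0); the fit at x0 is a
    sw = sum(w)
    sx = sum(wi * (xi - x0) for wi, xi in zip(w, xs))
    sy = sum(wi * yi for wi, yi in zip(w, ys))
    sxx = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, xs))
    sxy = sum(wi * (xi - x0) * yi for wi, xi, yi in zip(w, xs, ys))
    denom = sw * sxx - sx * sx
    if abs(denom) < 1e-15:                     # degenerate window: weighted mean
        return sy / sw
    b = (sw * sxy - sx * sy) / denom
    a = (sy - b * sx) / sw
    return a
```

Loess differs mainly in using local quadratic rather than linear fits; both reproduce an exactly linear trend without bias, which the assertion below exploits.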
Directory of Open Access Journals (Sweden)
Adam John Rock
2016-03-01
Full Text Available Generally, academic psychologists are mindful of the fact that, for many students, the study of research methods and statistics is anxiety provoking (Gal, Ginsburg, & Schau, 1997). Given the ubiquitous and distributed nature of eLearning systems (Nof, Ceroni, Jeong, & Moghaddam, 2015), teachers of research methods and statistics need to cultivate an understanding of how to effectively use eLearning tools to inspire psychology students to learn. Consequently, the aim of the present paper is to discuss critically how using eLearning systems might engage psychology students in research methods and statistics. First, we critically appraise definitions of eLearning. Second, we examine numerous important pedagogical principles associated with effectively teaching research methods and statistics using eLearning systems. Subsequently, we provide practical examples of our own eLearning-based class activities designed to engage psychology students to learn statistical concepts such as Factor Analysis and Discriminant Function Analysis. Finally, we discuss general trends in eLearning and possible futures that are pertinent to teachers of research methods and statistics in psychology.
Method to detect substances in a body and device to apply the method
International Nuclear Information System (INIS)
Voigt, H.
1978-01-01
The method and the measuring disposition serve to localize pellets doped with Gd2O3 lying between UO2 pellets within a reactor fuel rod. The fuel rod penetrates a homogeneous magnetic field generated between two pole shoes. The magnetic stray field caused by the doping substances is then measured by means of Hall probes (e.g. InAs) for quantitative discrimination from UO2. The position of the Gd2O3-doped pellets is determined by moving the fuel rod through the magnetic field in a direction perpendicular to the homogeneous field. The measuring signal is caused by the different susceptibility of Gd2O3 with respect to UO2. (DG) [de
Directory of Open Access Journals (Sweden)
Dariusz Kołodziej
2012-06-01
Full Text Available This paper presents examples of coordination between automatic voltage and reactive power control systems (ARST covering adjacent and strongly related extra high voltage substations. Included are conclusions resulting from the use of these solutions. The Institute of Power Engineering, Gdańsk Division has developed and deployed ARST systems in the national power system for a dozen or so years.
International Nuclear Information System (INIS)
Terra, Andre Miguel Barge Pontes Torres
2005-01-01
The Albedo method applied to criticality calculations for nuclear reactors is characterized by following the neutron currents, allowing detailed analyses of the physical phenomena of neutron interaction with the core-reflector set through the determination of the probabilities of reflection, absorption and transmission, and hence detailed appreciation of the variation of the effective neutron multiplication factor, keff. In the present work, motivated by excellent results presented in dissertations applied to thermal reactors and shielding, the methodology of the Albedo method is described for the criticality analysis of thermal reactors using two energy groups, admitting variable core coefficients for each re-entrant current. Using the Monte Carlo code KENO IV, the relation between the total fraction of neutrons absorbed in the reactor core and the fraction of neutrons that never entered the reflector but were absorbed in the core was analyzed. As benchmarks for the results obtained by the Albedo method, the one-dimensional deterministic code ANISN (ANIsotropic SN transport code) and the diffusion method were used. The keff results determined by the Albedo method, for the type of reactor analyzed, showed excellent agreement: relative errors in keff smaller than 0.78% with respect to ANISN and smaller than 0.35% with respect to the diffusion method, showing the effectiveness of the Albedo method applied to criticality analysis. The ease of application, simplicity and clarity of the Albedo method make it a valuable instrument for neutronic calculations in nonmultiplying and multiplying media. (author)
Zhao, Yu Xi; Xie, Ping; Sang, Yan Fang; Wu, Zi Yi
2018-04-01
Hydrological process evaluation is temporally dependent. Hydrological time series including dependence components do not meet the data consistency assumption for hydrological computation. Both of those factors cause great difficulty for water research. Given the existence of hydrological dependence variability, we proposed a correlation-coefficient-based method for significance evaluation of hydrological dependence based on an auto-regression model. By calculating the correlation coefficient between the original series and its dependence component and selecting reasonable thresholds of the correlation coefficient, this method divides the significance degree of dependence into no variability, weak variability, mid variability, strong variability, and drastic variability. By deducing the relationship between the correlation coefficient and the auto-correlation coefficients of each order of the series, we found that the correlation coefficient is mainly determined by the magnitude of the auto-correlation coefficients from order 1 to order p, which clarifies the theoretical basis of this method. With the first-order and second-order auto-regression models as examples, the reasonability of the deduced formula was verified through Monte Carlo experiments classifying the relationship between correlation coefficient and auto-correlation coefficient. This method was used to analyze three observed hydrological time series. The results indicated the coexistence of stochastic and dependence characteristics in hydrological processes.
Takae, Kyohei; Onuki, Akira
2013-09-28
We develop an efficient Ewald method of molecular dynamics simulation for calculating the electrostatic interactions among charged and polar particles between parallel metallic plates, where we may apply an electric field with an arbitrary size. We use the fact that the potential from the surface charges is equivalent to the sum of those from image charges and dipoles located outside the cell. We present simulation results on boundary effects of charged and polar fluids, formation of ionic crystals, and formation of dipole chains, where the applied field and the image interaction are crucial. For polar fluids, we find a large deviation of the classical Lorentz-field relation between the local field and the applied field due to pair correlations along the applied field. As general aspects, we clarify the difference between the potential-fixed and the charge-fixed boundary conditions and examine the relationship between the discrete particle description and the continuum electrostatics.
Non-invasive imaging methods applied to neo- and paleo-ontological cephalopod research
Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.
2014-05-01
Several non-invasive methods are common practice in natural sciences today. Here we present how they can be applied and contribute to current topics in cephalopod (paleo-) biology. Different methods will be compared in terms of time necessary to acquire the data, amount of data, accuracy/resolution, minimum and maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of the methods is seen in morphometry and volumetry of cephalopod shells. In particular we present a method for precise buoyancy calculation, in which cephalopod shells were scanned together with different reference bodies, an approach developed in the medical sciences. The volumes of the reference bodies, which should have absorption properties similar to those of the object of interest, must be known; exact volumes can be obtained from surface scanning. Depending on the dimensions of the study object, different computed tomography techniques were applied.
DEFF Research Database (Denmark)
Zambach, Sine; Madsen, Bodil Nistrup
2009-01-01
By applying formal terminological methods to model an ontology within the domain of enzyme inhibition, we aim to clarify concepts and to obtain consistency. Additionally, we propose a procedure for implementing this ontology in OWL with the aim of obtaining a strict structure which can form...
Method of applying single higher order polynomial basis function over multiple domains
CSIR Research Space (South Africa)
Lysko, AA
2010-03-01
Full Text Available A novel method has been devised where one set of higher-order polynomial-based basis functions can be applied over several wire segments, thus permitting the number of unknowns to be decoupled from the number of segments, and so from the geometrical...
Applied probabilistic methods in the field of reactor safety in Germany
International Nuclear Information System (INIS)
Heuser, F.W.
1982-01-01
Some aspects of applied reliability and risk analysis methods in nuclear safety and the present role of both in Germany, are discussed. First, some comments on the status and applications of reliability analysis are given. Second, some conclusions that can be drawn from previous work on the German Risk Study are summarized. (orig.)
21 CFR 111.320 - What requirements apply to laboratory methods for testing and examination?
2010-04-01
... Title 21, Food and Drugs, Section 111.320: What requirements apply to laboratory methods for testing and examination? Food and Drug Administration, Department of Health and Human Services (continued); Food for Human Consumption; Current Good Manufacturing...
A nodal method applied to a diffusion problem with generalized coefficients
International Nuclear Information System (INIS)
Laazizi, A.; Guessous, N.
1999-01-01
In this paper, we consider a second-order neutron diffusion problem with coefficients in L∞(Ω). A nodal method of the lowest order is applied to approximate the problem's solution. The approximation uses special basis functions in which the coefficients appear. The rate of convergence obtained is O(h²) in L²(Ω), with a free rectangular triangulation. (authors)
Trends in Research Methods in Applied Linguistics: China and the West.
Yihong, Gao; Lichun, Li; Jun, Lu
2001-01-01
Examines and compares current trends in applied linguistics (AL) research methods in China and the West. Reviews AL articles in four Chinese journals, from 1978-1997, and four English journals from 1985 to 1997. Articles are categorized and subcategorized. Results show that in China, AL research is heading from non-empirical toward empirical, with…
Critical path method applied to research project planning: Fire Economics Evaluation System (FEES)
Earl B. Anderson; R. Stanton Hales
1986-01-01
The critical path method (CPM) of network analysis (a) depicts precedence among the many activities in a project by a network diagram; (b) identifies critical activities by calculating their starting, finishing, and float times; and (c) displays possible schedules by constructing time charts. CPM was applied to the development of the Forest Service's Fire...
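The three CPM steps listed above (forward pass for earliest times, backward pass for latest times, float calculation to flag critical activities) can be sketched in a few lines. A generic implementation for an acyclic activity network with positive durations; the activity data in the test are invented, not from the FEES project:

```python
def cpm(activities):
    """activities: {name: (duration, [predecessor names])}.
    Returns earliest starts, latest starts, floats, and the critical set.
    Assumes an acyclic network with positive durations."""
    # forward pass: earliest start/finish
    es, ef = {}, {}
    remaining = dict(activities)
    while remaining:
        for name, (dur, preds) in list(remaining.items()):
            if all(p in ef for p in preds):
                es[name] = max((ef[p] for p in preds), default=0)
                ef[name] = es[name] + dur
                del remaining[name]
    project_end = max(ef.values())
    # backward pass: latest finish/start (successors first)
    succs = {n: [m for m, (_, ps) in activities.items() if n in ps]
             for n in activities}
    lf, ls = {}, {}
    for name in sorted(ef, key=ef.get, reverse=True):
        dur = activities[name][0]
        lf[name] = min((ls[s] for s in succs[name]), default=project_end)
        ls[name] = lf[name] - dur
    floats = {n: ls[n] - es[n] for n in activities}
    critical = {n for n, f in floats.items() if f == 0}
    return es, ls, floats, critical
```

Activities with zero float form the critical path; delaying any of them delays the whole project, while the float of the others is exactly the slack a scheduler can exploit.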
Rajabi, A; Dabiri, A
2012-01-01
Activity Based Costing (ABC) is one of the new methods that began appearing as costing methodologies in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used for calculating the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalized. Second, activity centers were defined by the activity analysis method. Third, costs of administrative activity centers were allocated to diagnostic and operational departments based on cost drivers. Finally, with regard to the usage by cost objects of the services of activity centers, the cost price of medical services was calculated. The cost price from the ABC method differs significantly from the tariff method. In addition, the high amount of indirect costs in the hospital indicates that the capacities of resources are not used properly. The cost price of remedial services with the tariff method is not properly calculated when compared with the ABC method: ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices. In addition, ABC provides useful information about the amount and composition of the cost price of services.
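The allocation logic described (administrative cost pools spread over operating centres by cost drivers, then divided by service volume) can be sketched generically. Centre names and figures below are invented for illustration:

```python
def abc_unit_cost(direct, admin_costs, drivers, volumes):
    """Activity-based unit cost per operating centre.

    direct:      {centre: direct cost}
    admin_costs: {pool: administrative pool cost}
    drivers:     {pool: {centre: cost-driver units consumed}}
    volumes:     {centre: number of services delivered}
    Each pool is spread over the centres in proportion to driver units,
    then unit cost = (direct + allocated) / volume."""
    total = dict(direct)
    for pool, cost in admin_costs.items():
        units = drivers[pool]
        base = sum(units.values())
        for centre, u in units.items():
            total[centre] += cost * u / base
    return {c: total[c] / volumes[c] for c in total}
```

A tariff system, by contrast, fixes the unit price in advance, which is why the two figures diverge whenever indirect costs or idle capacity are large.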
Yang, Xue; Hu, Yajia; Li, Gang; Lin, Ling
2018-02-01
This paper proposes an optimized lighting method of applying a shaped-function signal for increasing the dynamic range of a light emitting diode (LED)-multispectral imaging system. The optimized lighting method is based on the linear response zone of the analog-to-digital conversion (ADC) and the spectral response of the camera. The auxiliary light at a higher-sensitivity camera area is introduced to increase the A/D quantization levels that are within the linear response zone of the ADC and improve the signal-to-noise ratio. The active light is modulated by the shaped-function signal to improve the gray-scale resolution of the image, and the auxiliary light is modulated by a constant intensity signal, which makes it easy to acquire the images under active light irradiation. The least squares method is employed to precisely extract the desired images. One wavelength in multispectral imaging based on LED illumination was taken as an example. It has been proven by experiments that the gray-scale resolution and the accuracy of information of the images acquired by the proposed method were both significantly improved. The optimized method opens up avenues for the hyperspectral imaging of biological tissue.
Directory of Open Access Journals (Sweden)
Nikola Štambuk
2014-05-01
Full Text Available Antisense peptide technology is a valuable tool for deriving new biologically active molecules and performing peptide–receptor modulation. It is based on the fact that peptides specified by complementary (antisense) nucleotide sequences often bind to each other with a higher specificity and efficacy. We tested the validity of this concept on the example of human erythropoietin, a well-characterized and pharmacologically relevant hematopoietic growth factor. The purpose of the work was to present and test a simple and efficient three-step procedure for the design of an antisense peptide targeting the receptor-binding site of human erythropoietin. Firstly, we selected the carboxyl-terminal receptor-binding region of the molecule (epitope) as a template for the antisense peptide modeling; secondly, we designed an antisense peptide using mRNA transcription of the epitope sequence in the 3'→5' direction and computational screening of potential paratope structures with BLAST; thirdly, we evaluated sense–antisense (epitope–paratope) peptide binding and affinity by means of fluorescence spectroscopy and microscale thermophoresis. Both methods showed similar Kd values of 850 and 816 µM, respectively. The advantages of the methods are fast screening with a small quantity of sample needed, and measurements within the range of physicochemical parameters resembling physiological conditions. Antisense peptides targeting specific erythropoietin region(s) could be used for the development of new immunochemical methods. Selected antisense peptides with optimal affinity are potential lead compounds for the development of novel diagnostic substances, biopharmaceuticals and vaccines.
Sumowski, Chris Vanessa; Hanni, Matti; Schweizer, Sabine; Ochsenfeld, Christian
2014-01-14
The structural sensitivity of NMR chemical shifts as computed by quantum chemical methods is compared to a variety of empirical approaches for the example of a prototypical peptide, the 38-residue kaliotoxin KTX comprising 573 atoms. Despite the simplicity of empirical chemical shift prediction programs, the agreement with experimental results is rather good, underlining their usefulness. However, we show in our present work that they are highly insensitive to structural changes, which renders their use for validating predicted structures questionable. In contrast, quantum chemical methods show the expected high sensitivity to structural and electronic changes. This appears to be independent of the quantum chemical approach or the inclusion of solvent effects. For the latter, explicit solvent simulations with increasing number of snapshots were performed for two conformers of an eight amino acid sequence. In conclusion, the empirical approaches neither provide the expected magnitude nor the patterns of NMR chemical shifts determined by the clearly more costly ab initio methods upon structural changes. This restricts the use of empirical prediction programs in studies where peptide and protein structures are utilized for the NMR chemical shift evaluation such as in NMR refinement processes, structural model verifications, or calculations of NMR nuclear spin relaxation rates.
International Nuclear Information System (INIS)
Soupios, P M; Vallianatos, F; Loupasakis, C
2008-01-01
Nowadays, geophysical prospecting is implemented in order to resolve a diversity of geological, hydrogeological, environmental and geotechnical problems. Although plenty of applications and a lot of research have been conducted in the countryside, only a few cases have been reported in the literature concerning urban areas, mainly due to the high levels of noise present, which aggravate most of the geophysical methods, or due to spatial limitations that hinder normal method implementation. Among all geophysical methods, electrical resistivity tomography has proven to be a rapid technique and the most robust with regard to urban noise. This work presents a case study in the urban area of Chania (Crete Island, Greece), where electrical resistivity tomography (ERT) has been applied for the detection and identification of possible buried ancient ruins or other man-made structures, prior to the construction of a building. The results of the detailed geophysical survey indicated eight areas of interest providing resistivity anomalies. Those anomalies were analysed and interpreted combining the resistivity readings with the geotechnical borehole data and the historical bibliographic reports referring to the 1940s (Xalkiadakis 1997 Industrial Archaeology in Chania Territory pp 51-62). The collected ERT data were processed by applying advanced algorithms in order to obtain a 3D model of the study area that depicts the interesting subsurface structures more clearly and accurately.
A new effective Monte Carlo Midway coupling method in MCNP applied to a well logging problem
Energy Technology Data Exchange (ETDEWEB)
Serov, I.V.; John, T.M.; Hoogenboom, J.E.
1998-12-01
The background of the Midway forward-adjoint coupling method including the black absorber technique for efficient Monte Carlo determination of radiation detector responses is described. The method is implemented in the general purpose MCNP Monte Carlo code. The utilization of the method is fairly straightforward and does not require any substantial extra expertise. The method was applied to a standard neutron well logging porosity tool problem. The results exhibit reliability and high efficiency of the Midway method. For the studied problem the efficiency gain is considerably higher than for a normal forward calculation, which is already strongly optimized by weight-windows. No additional effort is required to adjust the Midway model if the position of the detector or the porosity of the formation is changed. Additionally, the Midway method can be used with other variance reduction techniques if extra gain in efficiency is desired.
Determination of activity of I-125 applying sum-peak methods
International Nuclear Information System (INIS)
Arbelo Penna, Y.; Hernandez Rivero, A.T.; Oropesa Verdecia, P.; Serra Aguila, R.; Moreno Leon, Y.
2011-01-01
The determination of the activity of I-125 in radioactive solutions, applying sum-peak methods and using an n-type HPGe detector of extended range, is described. Two procedures were used for obtaining the I-125 specific activity in solutions: a) an absolute method, which is independent of nuclear parameters and detector efficiency, and b) an option which considers the efficiency constant in the region of interest and involves calculations using nuclear parameters. The measurement geometries studied are specifically solid point sources. The relative deviations between specific activities obtained by these different procedures are not higher than 1%. Moreover, the activity of the radioactive solution was obtained by measuring it in a NIST ampoule using a CAPINTEC CRC 35R dose calibrator. The consistency of the obtained results confirms the feasibility of applying direct methods of measurement for I-125 activity determinations, which allows lower uncertainties to be achieved in comparison with relative methods of measurement. The establishment of these methods is aimed at the calibration of equipment and radionuclide dose calibrators currently used in clinical RIA/IRMA assays and nuclear medicine practice, respectively. (Author)
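For a nuclide such as I-125, whose decay emits two nearly coincident photons of similar energy, the absolute sum-peak idea can be reduced to a simple textbook relation under idealized assumptions: equal detection efficiency eps for both photons, no angular correlation and negligible dead time. Then the single-photon photopeak rate is 2*A*eps*(1-eps) and the sum-peak rate is A*eps**2, from which the activity A follows without knowing eps. The sketch below implements that simplified relation, not the paper's exact procedures:

```python
def sum_peak_activity(n_p, n_s, live_time):
    """Absolute sum-peak estimate of activity (Bq) for a two-photon emitter.

    n_p: counts in the single-photon photopeak (both coincident photons),
    n_s: counts in the sum peak, live_time: measurement live time in s.
    Assumes equal detection efficiency eps for both photons, no angular
    correlation, negligible dead time:
        rate_p = 2*A*eps*(1 - eps),  rate_s = A*eps**2
        =>  A = (rate_p/2 + rate_s)**2 / rate_s
    since rate_p/2 + rate_s = A*eps and rate_s = A*eps**2.
    """
    rp, rs = n_p / live_time, n_s / live_time
    return (rp / 2 + rs) ** 2 / rs
```

The efficiency drops out of the ratio, which is why the abstract can call the absolute procedure independent of nuclear parameters and detector efficiency.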
An applied study using systems engineering methods to prioritize green systems options
Energy Technology Data Exchange (ETDEWEB)
Lee, Sonya M [Los Alamos National Laboratory; Macdonald, John M [Los Alamos National Laboratory
2009-01-01
For many years, there have been questions about the effectiveness of applying different green solutions. If you're building a home and wish to use green technologies, where do you start? While all technologies sound promising, which will perform the best over time? All this has to be considered within the cost and schedule of the project. The amount of information available on the topic can be overwhelming. We seek to examine whether Systems Engineering methods can be used to help people choose and prioritize technologies that fit within their project and budget. Several methods are used to gain perspective into how to select the green technologies, such as the Analytic Hierarchy Process (AHP) and Kepner-Tregoe. In our study, subjects applied these methods to analyze cost, schedule, and trade-offs. Results will document whether the experimental approach is applicable to defining system priorities for green technologies.
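The AHP step mentioned above derives priority weights from a matrix of pairwise comparisons. A minimal sketch using the geometric-mean (logarithmic least squares) approximation rather than the principal-eigenvector computation of the full method:

```python
import math

def ahp_weights(M):
    """Priority weights from an AHP pairwise comparison matrix M, where
    M[i][j] states how many times more important criterion i is than j
    (so M[j][i] = 1/M[i][j]). Uses the geometric mean of each row,
    normalized to sum to 1 (logarithmic least squares approximation)."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    s = sum(gm)
    return [g / s for g in gm]
```

For a perfectly consistent matrix the geometric-mean and eigenvector solutions coincide; in practice a consistency check on the judgments would follow this step.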
Economic consequences assessment for scenarios and actual accidents: do the same methods apply?
International Nuclear Information System (INIS)
Brenot, J.
1991-01-01
Methods for estimating the economic consequences of major technological accidents, and their corresponding computer codes, are briefly presented with emphasis on the basic choices. When applied to hypothetical scenarios, those methods give results that are of interest to risk managers from a decision-aiding perspective. Simultaneously, the various costs and the procedures for their estimation are reviewed for some actual accidents (Three Mile Island, Chernobyl, ...). These costs are used in a perspective of litigation and compensation. The comparison of the methods used and the cost estimates obtained for scenarios and actual accidents shows the points of convergence and discrepancies, which are discussed.
Non-invasive imaging methods applied to neo- and paleontological cephalopod research
Hoffmann, R.; Schultz, J. A.; Schellhorn, R.; Rybacki, E.; Keupp, H.; Gerden, S. R.; Lemanis, R.; Zachow, S.
2013-11-01
Several non-invasive methods are common practice in the natural sciences today. Here we present how they can be applied and contribute to current topics in cephalopod (paleo-)biology. The different methods are compared in terms of the time necessary to acquire the data, the amount of data, accuracy/resolution, the minimum and maximum size of objects that can be studied, the degree of post-processing needed, and availability. The main application of these methods is seen in morphometry and volumetry of cephalopod shells, in order to improve our understanding of the diversity and disparity, functional morphology and biology of extinct and extant cephalopods.
Covariance methodology applied to 35S disintegration rate measurements by the CIEMAT/NIST method
International Nuclear Information System (INIS)
Koskinas, M.F.; Nascimento, T.S.; Yamazaki, I.M.; Dias, M.S.
2014-01-01
The Nuclear Metrology Laboratory (LMN) at IPEN is carrying out measurements in an LSC (Liquid Scintillation Counting) system, applying the CIEMAT/NIST method. In this context 35S is an important radionuclide for medical applications, and it is difficult to standardize by other primary methods because of its low beta-ray energy. The CIEMAT/NIST method is a standard technique used by most metrology laboratories to improve accuracy and speed up beta-emitter standardization. The focus of the present work was to apply the covariance methodology for determining the overall uncertainty in the 35S disintegration rate. All partial uncertainties involved in the measurements were considered, taking into account all possible correlations between each pair of them. - Highlights: ► 35S disintegration rate measured in a liquid scintillation system using the CIEMAT/NIST method. ► Covariance methodology applied to the overall uncertainty in the 35S disintegration rate. ► Monte Carlo simulation was applied to determine 35S activity in the 4πβ(PC)-γ coincidence system
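The covariance methodology can be sketched on a simplified measurement model; the model N = C/(eps*m) and every number below, including the 0.5 correlation between efficiency and mass, are assumptions for illustration and are not the paper's data:

```python
import numpy as np

# Illustrative model: disintegration rate N = C / (eps * m), with
# net count rate C, detection efficiency eps, and source mass m.
C, eps, m = 1.0e4, 0.95, 0.1          # assumed input values
N = C / (eps * m)

# Jacobian of N with respect to (C, eps, m).
J = np.array([1/(eps*m), -C/(eps**2 * m), -C/(eps * m**2)])

# Input covariance matrix: variances on the diagonal, plus an assumed
# correlation of 0.5 between eps and m (e.g. a shared calibration).
sC, se, sm = 50.0, 0.005, 0.0002
rho = 0.5
V = np.array([
    [sC**2, 0.0,          0.0],
    [0.0,   se**2,        rho*se*sm],
    [0.0,   rho*se*sm,    sm**2],
])

# Overall variance by error propagation with covariances: var = J V J^T.
varN = J @ V @ J
print(round(N, 1), round(float(np.sqrt(varN)), 1))
```

The off-diagonal terms are the point of the covariance method: with correlated inputs, the naive quadratic sum of partial uncertainties would understate or overstate the overall uncertainty.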
Power System Oscillation Modes Identifications: Guidelines for Applying TLS-ESPRIT Method
Gajjar, Gopal R.; Soman, Shreevardhan
2013-05-01
Fast measurements of power system quantities available through wide-area measurement systems enable direct observation of power system electromechanical oscillations. However, the raw observation data need to be processed to obtain the quantitative measures required to make any inference regarding the power system state. A detailed discussion is presented of the theory behind the general problem of oscillatory mode identification. This paper presents some results on oscillation mode identification applied to a wide-area frequency measurement system. Guidelines are provided for the selection of parameters that yield the most reliable results from the applied method. Finally, some results on real measurements are presented with our inference on them.
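A minimal ESPRIT sketch on a synthetic two-mode signal illustrates the identification step; the mode frequencies (0.5 Hz and 1.1 Hz), damping values and 20 Hz sampling are all assumed for illustration, and the plain least-squares shift-invariance solve below is the step that TLS-ESPRIT would replace with a total-least-squares solve:

```python
import numpy as np

# Synthetic stand-in for a wide-area frequency measurement: two lightly
# damped electromechanical modes, sampled at 20 Hz, no noise.
dt, N = 0.05, 400
t = np.arange(N) * dt
y = (np.exp(-0.10 * t) * np.cos(2 * np.pi * 0.5 * t)
     + 0.6 * np.exp(-0.15 * t) * np.cos(2 * np.pi * 1.1 * t))

# Hankel data matrix of overlapping windows: H[i, j] = y[i + j].
L = N // 2
H = np.array([y[i:i + L] for i in range(N - L + 1)])

# Signal subspace: 4 complex exponentials represent 2 real damped modes.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
Us = U[:, :4]

# ESPRIT shift invariance: rows of Us shifted by one sample are related by
# a matrix Phi whose eigenvalues are z_k = exp((sigma_k + i*omega_k) * dt).
Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]
z = np.linalg.eigvals(Phi)

freqs = np.abs(np.angle(z)) / (2 * np.pi * dt)   # modal frequencies, Hz
sigmas = np.log(np.abs(z)) / dt                  # damping exponents, 1/s
print(np.round(np.sort(freqs), 3))
```

On noisy measurements, the choice of window length L and of the signal-subspace dimension are exactly the parameter-selection issues the paper's guidelines address.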
International Nuclear Information System (INIS)
Donnat, Ph.; Treimany, C.; Gouedard, C.; Morice, O.
1998-06-01
This document presents some examples which were used for debugging the code. It seemed useful to collect these examples in a book to make sure the code would not regress, to give guarantees of the code's functionality, and to propose some examples that illustrate the possibilities and the limits of Miro. (author)
Schroeder, Krista; Jia, Haomiao; Smaldone, Arlene
Propensity score (PS) methods are increasingly being employed by researchers to reduce bias arising from confounder imbalance when using observational data to examine intervention effects. The purpose of this study was to examine PS theory and methodology and compare application of three PS methods (matching, stratification, weighting) to determine which best improves confounder balance. Baseline characteristics of a sample of 20,518 school-aged children with severe obesity (of whom 1,054 received an obesity intervention) were assessed prior to PS application. Three PS methods were then applied to the data to determine which showed the greatest improvement in confounder balance between the intervention and control group. The effect of each PS method on the outcome variable (body mass index percentile change at one year) was also examined. SAS 9.4 and Comprehensive Meta-analysis statistical software were used for analyses. Prior to PS adjustment, the intervention and control groups differed significantly on seven of 11 potential confounders. PS matching removed all differences. PS stratification and weighting both removed one difference but created two new differences. Sensitivity analyses did not change these results. Body mass index percentile at 1 year decreased in both groups. The size of the decrease was smaller in the intervention group, and the estimate of the decrease varied by PS method. Selection of a PS method should be guided by insight from statistical theory and simulation experiments, in addition to observed improvement in confounder balance. For this data set, PS matching worked best to correct confounder imbalance. Because each method varied in correcting confounder imbalance, we recommend that multiple PS methods be compared for ability to improve confounder balance before implementation in evaluating treatment effects in observational data.
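The matching workflow can be sketched on synthetic data with a single confounder; all names and numbers are illustrative, matching is 1:1 nearest-neighbour with replacement, and the hand-rolled gradient-ascent logistic fit merely stands in for a standard logistic regression routine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observational data: one confounder x raises the chance of
# receiving the intervention (all numbers illustrative).
n = 2000
x = rng.normal(size=n)
treat = rng.random(n) < 1 / (1 + np.exp(-(x - 1.0)))

# Fit a logistic propensity model P(treat | x) by gradient ascent.
w, b = 0.0, 0.0
for _ in range(2000):
    ps = 1 / (1 + np.exp(-(w * x + b)))
    g = treat - ps
    w += 0.1 * np.mean(g * x)
    b += 0.1 * np.mean(g)
ps = 1 / (1 + np.exp(-(w * x + b)))

# 1:1 nearest-neighbour matching on the propensity score, with replacement.
controls = np.where(~treat)[0]
matched = [controls[np.argmin(np.abs(ps[controls] - ps[i]))]
           for i in np.where(treat)[0]]

def smd(a, b):
    """Standardized mean difference: the usual balance diagnostic."""
    return (a.mean() - b.mean()) / np.sqrt((a.var() + b.var()) / 2)

before = smd(x[treat], x[~treat])
after = smd(x[treat], x[matched])
print(round(before, 2), round(after, 2))
```

The drop in the standardized mean difference after matching is the kind of balance improvement the study compares across the three PS methods.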
Carpentier, Olivier; Defer, Didier; Antczak, Emmanuel; Chartier, Thierry
2012-01-01
In many fields, such as the agri-food industry or the building industry, it is important to be able to monitor the thermophysical properties of granular materials. Regular thermal probes allow for the determination of one or several thermophysical factors. The success of the method used depends in part on the nature of the signal sent, on the type of physical model applied, and possibly on the type of probe used and its implantation in the material. Although effective for most applications, regular thermal probes do present some limitations. This is the case, for example, when the thermal contact resistance or the nature of the signal sent has to be known precisely. This article presents a characterization method based on thermal impedance formalism. This method allows for the determination of the thermal conductivity, the thermal diffusivity, and the thermal contact resistance in one single test. The application of this method requires the use of a specific probe developed to enable measurement of heat flux and temperature at the interface between the probe and the studied material. Its practical application is presented for dry sand.
Directory of Open Access Journals (Sweden)
Adílio Renê Almeida Miranda
2014-12-01
Recently, the Life Story Method has been used in the area of Business Administration as an important methodological strategy in qualitative research. The purpose is to understand groups or collective bodies based on individual paths of life. Thus, the goal of this study was to show the contribution of the life story method to understanding the identity dynamics of female professors managing a public university, by means of an example derived from an empirical study. It was observed from the reports of four female professors involved in management that the recovery of past memories, as well as of values, facts, standards and occurrences connected with the primary and organizational socialization of the interviewees, contributes to the understanding of their identity dynamics. Some categories of analysis emerged that express relationships between life story and identity, e.g., discontinuity, subjectivity and the importance of allowing an individual/subject to speak; the individual and the social sphere and socio-historical transformations, a dynamic interaction in the construction of identities; and temporal analysis in the construction of identities.
Multigrid method applied to the solution of an elliptic, generalized eigenvalue problem
Energy Technology Data Exchange (ETDEWEB)
Alchalabi, R.M. [BOC Group, Murray Hill, NJ (United States); Turinsky, P.J. [North Carolina State Univ., Raleigh, NC (United States)
1996-12-31
The work presented in this paper is concerned with the development of an efficient MG algorithm for the solution of an elliptic, generalized eigenvalue problem. The method is specifically applied to the multigroup neutron diffusion equation, which is discretized by utilizing the Nodal Expansion Method (NEM). The underlying relaxation method is the Power Method, also known as the outer-inner method. The inner iterations are completed using multi-color line SOR, and the outer iterations are accelerated using the Chebyshev semi-iterative method. Furthermore, the MG algorithm utilizes the consistent homogenization concept to construct the restriction operator, and a form function as a prolongation operator. The MG algorithm was integrated into the reactor neutronic analysis code NESTLE, and numerical results were obtained from solving production-type benchmark problems.
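The outer (power) iteration for the generalized eigenvalue problem M·phi = (1/k)·F·phi can be sketched on a toy 3×3 system; the matrices below are illustrative stand-ins for the NEM-discretized operators, and a direct solve stands in for the multi-color line-SOR inner iterations:

```python
import numpy as np

# Toy stand-in for the discretized multigroup problem M*phi = (1/k)*F*phi:
# M plays the role of the diffusion/removal operator, F the fission source.
M = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
F = np.diag([0.4, 0.5, 0.4])

phi = np.ones(3)
k = 1.0
for _ in range(200):
    # "Inner" solve of M*psi = F*phi (a direct solve replaces the paper's
    # multi-color line-SOR inner iterations for this small example).
    psi = np.linalg.solve(M, F @ phi)
    k = np.linalg.norm(psi)      # outer update; phi is kept normalized
    phi = psi / k

# k converges to the dominant eigenvalue of M^{-1} F.
print(round(k, 6))
```

Chebyshev acceleration, as in the paper, would extrapolate successive outer iterates to speed up this convergence; the plain iteration above shows only the basic fixed point.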
The boundary element method applied to 3D magneto-electro-elastic dynamic problems
Igumnov, L. A.; Markov, I. P.; Kuznetsov, Iu A.
2017-11-01
Due to their coupling properties, magneto-electro-elastic materials have a wide range of applications. They exhibit general anisotropic behaviour. Three-dimensional transient analyses of magneto-electro-elastic solids can hardly be found in the literature. A 3D direct boundary element formulation based on the weakly singular boundary integral equations in the Laplace domain is presented in this work for solving dynamic linear magneto-electro-elastic problems. Integral expressions of the three-dimensional fundamental solutions are employed. Spatial discretization is based on a collocation method with mixed boundary elements. The convolution quadrature method is used as a numerical inverse Laplace transform scheme to obtain time-domain solutions. Numerical examples are provided to illustrate the capability of the proposed approach to treat highly dynamic problems.
Least Square NUFFT Methods Applied to 2D and 3D Radially Encoded MR Image Reconstruction
Song, Jiayu; Liu, Qing H.; Gewalt, Sally L.; Cofer, Gary; Johnson, G. Allan
2009-01-01
Radially encoded MR imaging (MRI) has gained increasing attention in applications such as hyperpolarized gas imaging, contrast-enhanced MR angiography, and dynamic imaging, due to its motion insensitivity and improved artifact properties. However, since the technique collects k-space samples nonuniformly, multidimensional (especially 3D) radially sampled MRI image reconstruction is challenging. The balance between reconstruction accuracy and speed becomes critical when a large data set is processed. Kaiser-Bessel gridding reconstruction has been widely used for non-Cartesian reconstruction. The objective of this work is to provide an alternative reconstruction option in high dimensions with on-the-fly kernel calculation. The work develops general multidimensional least square nonuniform fast Fourier transform (LS-NUFFT) algorithms and incorporates them into a k-space simulation and image reconstruction framework. The method is then applied to reconstruct the radially encoded k-space, although the method addresses general nonuniformity and is applicable to any non-Cartesian pattern. Performance assessments are made by comparing the LS-NUFFT-based method with the conventional Kaiser-Bessel gridding method for 2D and 3D radially encoded computer-simulated phantoms and physically scanned phantoms. The results show that the LS-NUFFT reconstruction method has better accuracy-speed efficiency than the Kaiser-Bessel gridding method when the kernel weights are calculated on the fly. The accuracy of the LS-NUFFT method depends on the choice of scaling factor, and it is found that for a particular conventional kernel function, using its corresponding deapodization function as the scaling factor within the LS-NUFFT framework has the potential to improve accuracy. When a cosine scaling factor is used, in particular, the LS-NUFFT method is faster than the Kaiser-Bessel gridding method because of a quasi closed-form solution. The method is successfully applied to 2D and
Applying Item Response Theory methods to design a learning progression-based science assessment
Chen, Jing
Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1) how to use items in different formats to classify students into levels on the learning progression, (2) how to design a test to give good information about students' progress through the learning progression of a particular construct and (3) what characteristics of test items support their use for assessing students' levels. Data used for this study were collected from 1500 elementary and secondary school students during 2009-2010. The written assessment was developed in several formats, such as Constructed Response (CR) items, Ordered Multiple Choice (OMC) items and Multiple True or False (MTF) items. The following are the main findings from this study. The OMC, MTF and CR items might measure different components of the construct. A single construct explained most of the variance in students' performances; however, additional dimensions in terms of item format can explain a certain amount of the variance in student performance. So additional dimensions need to be considered when we want to capture the differences in students' performances on different types of items targeting the understanding of the same underlying progression. Items in each item format need to be improved in certain ways to classify students more accurately into the learning progression levels. This study establishes some general steps that can be followed to design other learning progression-based tests as well. For example, first, the boundaries between levels on the IRT scale can be defined by using the means of the item thresholds across a set of good items. Second, items in multiple formats can be selected to achieve the information criterion at all
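The level-classification idea can be sketched with the simplest IRT model (the Rasch/1PL model); the level boundaries below are hypothetical cut points of the kind the study derives from means of item thresholds:

```python
import math

def rasch_p(theta, b):
    """Probability of a correct response under the Rasch (1PL) model,
    for student ability theta and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical boundaries between progression levels on the IRT scale,
# e.g. means of item thresholds across a set of well-behaved items.
boundaries = [-1.0, 0.0, 1.2]   # cuts between levels 1|2, 2|3, 3|4

def classify(theta):
    """Map an ability estimate to a learning-progression level (1-4)."""
    return 1 + sum(theta >= c for c in boundaries)

print(round(rasch_p(0.5, 0.0), 3), classify(0.5))
```

OMC and MTF items would be scored polytomously and modeled with partial-credit extensions of this probability, but the mapping from an ability estimate to a level proceeds the same way.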
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.
Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V
2016-01-01
Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
Directory of Open Access Journals (Sweden)
Oldřich Trenz
2010-01-01
The paper is focused on comparing the classification ability of a model with a self-learning neural network against methods from cluster analysis. The emphasis is particularly on comparing the different approaches on a specific application example: the classification of the financial situation of businesses. The aim is to critically evaluate the different approaches in terms of application and deployment options. To verify the classification capability of the different approaches, financial data from the "Credit Info" database were used, in particular data describing the financial situation of two hundred eleven farms with a homogeneous and uniform primary field of production. The input data were modified and evaluated by a methodology appropriate to each of the methods used. The final solution showed that the approaches used do not exhibit significant differences and can be considered equivalent. Based on this finding, it can be concluded that the artificial-intelligence approach (a self-learning neural network) is as effective as the partial methods in the field of cluster analysis. In both cases, these approaches can be an invaluable tool in decision making. When the financial situation is evaluated by an expert, the calculation of liquidity, profitability and other financial indicators involves some simplification. In this respect, neural networks perform better, since these simplifications are not natively included in them. They can also better assess somewhat ambiguous cases, including businesses with an undefined financial situation, the so-called data in the border region. This is supported by the graphical layout of the resulting sorted objects produced by the software-implemented neural network model.
Applied ecosystem analysis - a primer; the ecosystem diagnosis and treatment method
International Nuclear Information System (INIS)
Lestelle, L.C.; Mobrand, L.E.; Lichatowich, J.A.; Vogel, T.S.
1996-05-01
The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual
An Ultrasonic Guided Wave Method to Estimate Applied Biaxial Loads (Preprint)
2011-11-01
VALIDATION: A fatigue test was performed with an array of six surface-bonded PZT transducers on a 6061 aluminum plate as shown in Figure 4. The specimen ... direct paths of propagation are oriented at different angles. This method is applied to experimental sparse array data recorded during a fatigue test ... and the additional complication of the resulting fatigue cracks interfering with some of the direct arrivals is addressed via proper selection of
Accuracy of the Adomian decomposition method applied to the Lorenz system
International Nuclear Information System (INIS)
Hashim, I.; Noorani, M.S.M.; Ahmad, R.; Bakar, S.A.; Ismail, E.S.; Zakaria, A.M.
2006-01-01
In this paper, the Adomian decomposition method (ADM) is applied to the famous Lorenz system. The ADM yields an analytical solution in terms of a rapidly convergent infinite power series with easily computable terms. Comparisons between the decomposition solutions and fourth-order Runge-Kutta (RK4) numerical solutions are made for various time steps. In particular, we look at the accuracy of the ADM as the Lorenz system changes from a non-chaotic system to a chaotic one.
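For polynomial nonlinearities such as the Lorenz system's bilinear terms, the Adomian polynomials reduce to Cauchy products, so the decomposition series has easily computable terms, as the abstract says. A sketch comparing a truncated ADM series against a single RK4 step (the classical parameter values and an illustrative initial condition, not necessarily the paper's cases):

```python
import numpy as np

# Lorenz system: x' = sigma(y - x), y' = x(rho - z) - y, z' = xy - beta*z.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
N = 20                                    # number of series terms
X, Y, Z = np.zeros(N + 1), np.zeros(N + 1), np.zeros(N + 1)
X[0], Y[0], Z[0] = 1.0, 1.0, 1.0          # illustrative initial condition

# ADM recursion: for the bilinear terms x*z and x*y the Adomian
# polynomials are Cauchy products of the series coefficients.
for n in range(N):
    xz = sum(X[k] * Z[n - k] for k in range(n + 1))
    xy = sum(X[k] * Y[n - k] for k in range(n + 1))
    X[n + 1] = sigma * (Y[n] - X[n]) / (n + 1)
    Y[n + 1] = (rho * X[n] - xz - Y[n]) / (n + 1)
    Z[n + 1] = (xy - beta * Z[n]) / (n + 1)

def adm(t):
    """Evaluate the truncated decomposition series at time t."""
    p = t ** np.arange(N + 1)
    return X @ p, Y @ p, Z @ p

# One RK4 step of size h for comparison.
def f(u):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

h = 0.01
u = np.array([1.0, 1.0, 1.0])
k1 = f(u)
k2 = f(u + h / 2 * k1)
k3 = f(u + h / 2 * k2)
k4 = f(u + h * k3)
rk4 = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(np.max(np.abs(np.array(adm(h)) - rk4)))
```

On a small time step the two agree closely; over longer horizons the truncated series must be restarted (a multistage ADM), which is where the comparison for a chaotic regime becomes interesting.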
DEFF Research Database (Denmark)
Filyushkina, Anna; Strange, Niels; Löf, Magnus
2018-01-01
This study applied a structured expert elicitation technique, the Delphi method, to identify the impacts of five forest management alternatives and several forest characteristics on the preservation of biodiversity and habitats in the boreal zone of the Nordic countries. The panel of experts...... as a valuable addition to on-going empirical and modeling efforts. The findings could assist forest managers in developing forest management strategies that generate benefits from timber production while taking into account the trade-offs with biodiversity goals....
Modified Method of Simplest Equation Applied to the Nonlinear Schrödinger Equation
Directory of Open Access Journals (Sweden)
Vitanov Nikolay K.
2018-03-01
We consider an extension of the methodology of the modified method of simplest equation to the case of use of two simplest equations. The extended methodology is applied for obtaining exact solutions of model nonlinear partial differential equations for deep water waves: the nonlinear Schrödinger equation. It is shown that the methodology works also for other equations of the nonlinear Schrödinger kind.
Modified Method of Simplest Equation Applied to the Nonlinear Schrödinger Equation
Vitanov, Nikolay K.; Dimitrova, Zlatinka I.
2018-03-01
We consider an extension of the methodology of the modified method of simplest equation to the case of use of two simplest equations. The extended methodology is applied for obtaining exact solutions of model nonlinear partial differential equations for deep water waves: the nonlinear Schrödinger equation. It is shown that the methodology works also for other equations of the nonlinear Schrödinger kind.
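As a standard single-equation illustration of the reduction both records describe (a textbook sketch, not the authors' specific two-simplest-equations extension), the traveling-wave ansatz reduces the nonlinear Schrödinger equation to an ODE that a tanh-type simplest equation solves:

```latex
% Nonlinear Schr\"odinger equation for the wave envelope u(x,t)
i\,\partial_t u + \partial_x^2 u + q\,|u|^2 u = 0 .
% Traveling-wave ansatz (the imaginary part fixes the group speed 2k):
u(x,t) = e^{i(kx - \omega t)}\,\phi(\xi), \qquad \xi = x - 2kt ,
% which reduces the PDE to the ODE
\phi'' + (\omega - k^2)\,\phi + q\,\phi^3 = 0 .
% The simplest-equation ansatz \phi(\xi) = A\tanh(B\xi) balances
% \phi'' against the cubic term provided
B^2 = -\tfrac{q A^2}{2}, \qquad \omega = k^2 - q A^2 \qquad (q < 0),
% i.e. a dark-soliton solution for the defocusing sign of q.
```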
Applied Ecosystem Analysis - - a Primer : EDT the Ecosystem Diagnosis and Treatment Method.
Energy Technology Data Exchange (ETDEWEB)
Lestelle, Lawrence C.; Mobrand, Lars E.
1996-05-01
The aim of this document is to inform and instruct the reader about an approach to ecosystem management that is based upon salmon as an indicator species. It is intended to provide natural resource management professionals with the background information needed to answer questions about why and how to apply the approach. The methods and tools the authors describe are continually updated and refined, so this primer should be treated as a first iteration of a sequentially revised manual.
The LTSN method for the transport equation, applied to nuclear engineering problems
International Nuclear Information System (INIS)
Borges, Volnei; Vilhena, Marco Tulio de
2002-01-01
The LTSN method solves the SN equations analytically by applying the Laplace transform in the spatial variable. This methodology is used in the determination of the scalar flux for neutrons and photons, the absorbed dose rate, buildup factors and power for a heterogeneous planar slab. This procedure leads to the solution of transcendental equations for the effective multiplication factor, the critical thickness and the atomic density. In this work numerical results are reported, considering a multigroup problem in a heterogeneous slab. (author)
Lorencatto, Fabiana; West, Robert; Seymour, Natalie; Michie, Susan
2013-06-01
There is a difference between interventions as planned and as delivered in practice. Unless we know what was actually delivered, we cannot understand "what worked" in effective interventions. This study aimed to (a) assess whether an established taxonomy of 53 smoking cessation behavior change techniques (BCTs) may be applied or adapted as a method for reliably specifying the content of smoking cessation behavioral support consultations and (b) develop an effective method for training researchers and practitioners in the reliable application of the taxonomy. Fifteen transcripts of audio-recorded consultations delivered by England's Stop Smoking Services were coded into component BCTs using the taxonomy. Interrater reliability and potential adaptations to the taxonomy to improve coding were discussed following 3 coding waves. A coding training manual was developed through expert consensus and piloted on 10 trainees, assessing coding reliability and self-perceived competence before and after training. An average of 33 BCTs from the taxonomy were identified at least once across sessions and coding waves. Consultations contained on average 12 BCTs (range = 8-31). Average interrater reliability was high (88% agreement). The taxonomy was adapted to simplify coding by merging co-occurring BCTs and refining BCT definitions. Coding reliability and self-perceived competence significantly improved posttraining for all trainees. It is possible to apply a taxonomy to reliably identify and classify BCTs in smoking cessation behavioral support delivered in practice, and train inexperienced coders to do so reliably. This method can be used to investigate variability in provision of behavioral support across services, monitor fidelity of delivery, and identify training needs.
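Interrater reliability of the kind reported (88% agreement) can be computed as below; the twelve coded utterances and BCT labels are toy data, and Cohen's kappa is added as the usual chance-corrected companion statistic, not a figure from the study:

```python
from collections import Counter

# Toy interrater check: two coders assign a BCT label to each of 12
# utterances from a consultation transcript (labels are illustrative).
coder1 = ["goal", "goal", "monitor", "reward", "goal", "monitor",
          "reward", "goal", "monitor", "goal", "reward", "monitor"]
coder2 = ["goal", "goal", "monitor", "reward", "goal", "reward",
          "reward", "goal", "monitor", "goal", "reward", "monitor"]

# Percent agreement, the statistic the abstract reports.
n = len(coder1)
agree = sum(a == b for a, b in zip(coder1, coder2)) / n

# Cohen's kappa corrects raw agreement for chance agreement.
c1, c2 = Counter(coder1), Counter(coder2)
p_chance = sum(c1[k] * c2[k] for k in set(coder1) | set(coder2)) / n**2
kappa = (agree - p_chance) / (1 - p_chance)
print(round(agree, 3), round(kappa, 3))
```

With many BCT categories, chance agreement is low and kappa tracks raw agreement closely; with few categories the correction matters more.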
Machine Learning Method Applied in Readout System of Superheated Droplet Detector
Liu, Yi; Sullivan, Clair Julia; d'Errico, Francesco
2017-07-01
Direct readability is one advantage of superheated droplet detectors in neutron dosimetry. Utilizing this distinct characteristic, an imaging readout system analyzes images of the detector for neutron dose readout. To improve the accuracy and precision of the algorithms in the imaging readout system, machine learning algorithms were developed. Deep learning neural network and support vector machine algorithms are applied and compared with the generally used Hough transform and curvature analysis methods. The machine learning methods showed much higher accuracy and better precision in recognizing circular gas bubbles.
Translation Methods Applied in Translating Quotations in “the Secret” by Rhonda
FEBRIANTI, VICKY
2014-01-01
Keywords: Translation Methods, The Secret, Quotations. Translation helps humans get information written in any language, even when it is written in a foreign language; translation therefore appears in printed media. Books have been popular printed media. The Secret, written by Rhonda Byrne, is a popular self-help book which has been translated into 50 languages including Indonesian ("The Secret", n.d., para. 5-6). This study is meant to find out the translation methods applied in The Secret. The wr...
Development of a tracking method for augmented reality applied to nuclear plant maintenance work
International Nuclear Information System (INIS)
Shimoda, Hiroshi; Maeshima, Masayuki; Nakai, Toshinori; Bian, Zhiqiang; Ishii, Hirotake; Yoshikawa, Hidekazu
2005-01-01
In this paper, a plant maintenance support method is described which employs a state-of-the-art information technology, Augmented Reality (AR), in order to improve the efficiency of NPP maintenance work and to prevent human error. Although AR has great potential to support various tasks in the real world, it is difficult to apply to actual work support because the tracking method is the bottleneck for practical use. In this study, a bar-code marker tracking method is proposed to apply an AR system to maintenance work support in the NPP field. The proposed method calculates the user's position and orientation in real time from two long markers captured by the user-mounted camera. The markers can be easily pasted on pipes in the plant field and easily recognized at long distances, in order to reduce the number of markers pasted in the work field. Experiments were conducted in a laboratory and in the plant field to evaluate the proposed method. The results show that (1) fast and stable tracking can be realized, (2) the position error in the camera view is less than 1%, which is almost perfect given the limitation of camera resolution, and (3) it is relatively difficult to catch two markers in one camera view, especially at short distances.
Applying the response matrix method for solving coupled neutron diffusion and transport problems
International Nuclear Information System (INIS)
Sibiya, G.S.
1980-01-01
The numerical determination of the flux and power distribution in the design of large power reactors is quite a time-consuming procedure if the space under consideration is to be subdivided into very fine meshes. Many computing methods applied in reactor physics (such as the finite-difference method) require considerable computing time. In this thesis it is shown that the response matrix method can be successfully used as an alternative approach to solving the two-dimensional diffusion equation. Furthermore, it is shown that sufficient accuracy is achieved by assuming a linear space dependence of the neutron currents on the boundaries of the geometries defined for the given space. (orig.) [de
Garcia, Diego; Moro, Claudia Maria Cabral; Cicogna, Paulo Eduardo; Carvalho, Deborah Ribeiro
2013-01-01
Clinical guidelines are documents that assist healthcare professionals, facilitating and standardizing diagnosis, management, and treatment in specific areas. Computerized guidelines, as decision support systems (DSS), attempt to increase the performance of tasks and facilitate the use of guidelines. Most DSS are not integrated into the electronic health record (EHR), requiring some degree of rework, especially related to data collection. This study's objective was to present a method for integrating clinical guidelines into the EHR. The study first developed a way to identify the data and rules contained in the guidelines, and then incorporated the rules into an archetype-based EHR. The proposed method was tested on anemia treatment in the Chronic Kidney Disease Guideline. The phases of the method are: data and rules identification; archetype elaboration; rule definition and inclusion in an inference engine; and DSS-EHR integration and validation. The main feature of the proposed method is that it is generic and can be applied to any type of guideline.
Lessons learned applying CASE methods/tools to Ada software development projects
Blumberg, Maurice H.; Randall, Richard L.
1993-01-01
This paper describes the lessons learned from introducing CASE methods/tools into organizations and applying them to actual Ada software development projects. This paper will be useful to any organization planning to introduce a software engineering environment (SEE) or evolving an existing one. It contains management level lessons learned, as well as lessons learned in using specific SEE tools/methods. The experiences presented are from Alpha Test projects established under the STARS (Software Technology for Adaptable and Reliable Systems) project. They reflect the front end efforts by those projects to understand the tools/methods, initial experiences in their introduction and use, and later experiences in the use of specific tools/methods and the introduction of new ones.
Directory of Open Access Journals (Sweden)
Bochaton Audrey
2007-06-01
Background: Geographical objectives and probabilistic methods are difficult to reconcile in a single health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. It is therefore difficult to construct a sample that is appropriate for both geographical and probability methods.

Methods: We used a two-stage selection procedure with a first, non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters which, as a whole, would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision.

Application: We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population.

Conclusion: This paper describes the conceptual reasoning behind
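The statistical cost of clustering that the authors minimise is conventionally summarised by the design effect, deff = 1 + (m - 1)*rho, where m is the average cluster size and rho the intra-cluster correlation; the numbers below are illustrative, not the Vientiane survey's values:

```python
def design_effect(m, rho):
    """Design effect of cluster sampling: variance inflation relative to
    simple random sampling (m = average cluster size, rho = intra-cluster
    correlation coefficient)."""
    return 1 + (m - 1) * rho

def effective_sample_size(n, m, rho):
    """Nominal sample size n deflated by the design effect."""
    return n / design_effect(m, rho)

# Deliberately chosen, internally heterogeneous clusters keep rho low,
# so 1000 clustered interviews retain most of their nominal precision.
print(round(design_effect(20, 0.02), 2),
      round(effective_sample_size(1000, 20, 0.02)))
```

The same formulas show why a small number of randomly selected clusters is risky: a higher rho quickly inflates deff and shrinks the effective sample size.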
Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph
2012-06-22
Filamentous fungi are versatile cell factories widely used for the large-scale production of antibiotics, organic acids, enzymes and other industrially relevant compounds. Industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands exhaustive inspection and quality control of incoming components, but unavoidably affects process stability and performance. Switching bioprocesses from complex to defined media is therefore highly desirable. This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase of the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two industrial Penicillium chrysogenum candidate strains on complex media, based on specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods made it possible to maximize the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding.
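The redundant mass-balancing idea can be sketched numerically: when more rates are measured than the balances require, the balance equations are used to reconcile the measurements by constrained least squares. A minimal sketch, assuming a single hypothetical carbon balance and made-up rates (not the paper's actual stoichiometry):

```python
import numpy as np

# Hypothetical single carbon balance over four specific rates
# (substrate uptake, biomass, product, CO2): E @ r must vanish.
# The stoichiometric row and the rates are illustrative, not the
# paper's actual balances.
E = np.array([[1.0, -1.0, -1.0, -1.0]])         # C in = C out
r_meas = np.array([1.00, 0.45, 0.30, 0.20])     # measured rates [C-mol/(C-mol h)]
Sigma = np.diag([0.01, 0.02, 0.02, 0.03]) ** 2  # measurement variances

def reconcile(r_meas, E, Sigma):
    """Weighted least-squares reconciliation subject to E @ r = 0."""
    SEt = Sigma @ E.T
    return r_meas - SEt @ np.linalg.solve(E @ SEt, E @ r_meas)

r_hat = reconcile(r_meas, E, Sigma)
gap_before = float((E @ r_meas)[0])   # balance gap of the raw measurements
gap_after = float((E @ r_hat)[0])     # ~0 after reconciliation
```

With redundant measurements, the size of `gap_before` relative to the measurement variances is itself a consistency check on the data, which is the leverage the abstract's "redundant mass balancing" refers to.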
Applying some methods to process the data coming from the nuclear reactions
International Nuclear Information System (INIS)
Suleymanov, M.K.; Abdinov, O.B.; Belashev, B.Z.
2010-01-01
Full text: Methods for a posteriori enhancement of the resolution of spectral lines are proposed for processing data coming from nuclear reactions, and have been applied to data from nuclear reactions at high energies. They make it possible to extract more detailed information on the structure of the spectra of particles emitted in the nuclear reactions. Nuclear reactions are the main source of information on the structure and physics of atomic nuclei. The spectra of the reaction fragments are usually complex, so extracting the information needed for an investigation is not straightforward. In this talk we discuss methods for a posteriori enhancement of spectral-line resolution that can be useful for processing complex data from nuclear reactions: the Fourier transformation method and the maximum entropy method. Complex structures were identified by these methods; at least two distinct points are indicated. Recently we presented a talk showing the results of analyzing the structure of the pseudorapidity spectra of charged relativistic particles with ≥ 0.7 measured in Au+Em and Pb+Em collisions at AGS and SPS energies, using the Fourier transformation and maximum entropy methods. The dependence of these spectra on the number of fast target protons was studied. The distributions visually exhibit a plateau and a shoulder, that is, at least three distinct points on the distributions; the plateaus become wider in Pb+Em reactions. The existence of a plateau is required by parton models. The maximum entropy method could confirm the existence of the plateau and the shoulder in the distributions. The figure shows the results of applying the maximum entropy method; one can see that it indicates several clear distinct points, some of which coincide with those observed visually. We would like to note that the Fourier transformation method could not
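As an illustration of a posteriori resolution enhancement in the Fourier domain, the sketch below deblurs two overlapping synthetic lines with a regularized (Wiener-style) inverse filter; the peak positions, widths, and regularization constant are made-up test values, not taken from the talk:

```python
import numpy as np

# Two overlapping Gaussian lines blurred by a known broad instrumental
# response; a regularized Fourier inverse filter sharpens them so the
# two components become distinguishable again.
x = np.linspace(0.0, 10.0, 1024)

def gauss(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2)

truth = gauss(x, 4.5, 0.15) + 0.8 * gauss(x, 5.5, 0.15)   # narrow lines
psf = gauss(x, 5.0, 0.6)                                  # broad response
psf /= psf.sum()                                          # unit area

H = np.fft.fft(np.fft.ifftshift(psf))                     # transfer function
observed = np.real(np.fft.ifft(np.fft.fft(truth) * H))    # blurred spectrum

eps = 1e-3   # regularization: suppresses frequencies where H is ~0
restored = np.real(np.fft.ifft(
    np.fft.fft(observed) * np.conj(H) / (np.abs(H) ** 2 + eps)))
```

In `observed` the two lines merge into one broad hump; in `restored` the valley between them reappears, which is the kind of "distinct point" recovery described above.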
A Guide on Spectral Methods Applied to Discrete Data in One Dimension
Directory of Open Access Journals (Sweden)
Martin Seilmayer
2017-01-01
Full Text Available This paper provides an overview of the use of the Fourier transform and related methods, focusing on the subtleties to which users must pay attention. Typical questions that are asked of the data will be discussed, such as the origin of frequency or band limitation of the signal, or the source of artifacts when a Fourier transform is carried out. Another topic is the processing of fragmented data; here, the Lomb-Scargle method is explained with an illustrative example of how to deal with this special type of signal. Also of interest is time-dependent spectral analysis, with which one can evaluate the point in time at which a certain frequency appears in the signal. The goal of this paper is to collect the important information about these common methods and give the reader a guide on how to apply them to one-dimensional data. The introduced methods are supported by the spectral package, which was published for the statistical environment R prior to this article.
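The Lomb-Scargle approach for fragmented or unevenly sampled data can be tried directly with `scipy.signal.lombscargle` (the paper's examples use the R `spectral` package; this Python sketch is an equivalent illustration with a made-up 1.5 Hz tone):

```python
import numpy as np
from scipy.signal import lombscargle

# Irregularly sampled sine wave: an ordinary FFT would need resampling,
# but the Lomb-Scargle periodogram handles the gaps directly.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 20.0, 300))      # irregular sample times [s]
y = np.sin(2 * np.pi * 1.5 * t)               # 1.5 Hz tone

freqs = np.linspace(0.1, 5.0, 2000)           # trial frequencies [Hz]
pgram = lombscargle(t, y, 2 * np.pi * freqs)  # lombscargle expects rad/s

f_best = freqs[np.argmax(pgram)]              # dominant frequency estimate
```

Note the angular-frequency convention: `lombscargle` takes frequencies in rad/s, so the Hz grid is multiplied by 2π before the call.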
APPLYING ROBUST RANKING METHOD IN TWO PHASE FUZZY OPTIMIZATION LINEAR PROGRAMMING PROBLEMS (FOLPP)
Directory of Open Access Journals (Sweden)
Monalisha Pattnaik
2014-12-01
Full Text Available Background: This paper explores solutions to fuzzy optimization linear programming problems (FOLPP) in which some parameters are fuzzy numbers. In practice, there are many problems in which all decision parameters are fuzzy numbers, and such problems are usually solved by either probabilistic programming or multi-objective programming methods. Methods: In this paper, using the concept of comparison of fuzzy numbers, a very effective method is introduced for solving these problems, extending a linear programming based problem to the fuzzy environment. Under the problem assumptions, the optimal solution can still be obtained with a two-phase simplex based method in the fuzzy environment. The fuzzy decision variables are initially generated, then solved and improved sequentially using the fuzzy decision approach, by introducing a robust ranking technique. Results and conclusions: The model is illustrated with an application, and a post-optimal analysis approach is presented. The proposed procedure was programmed with MATLAB (R2009a) software for plotting the four-dimensional slice diagram of the application. Finally, a numerical example is presented to illustrate the effectiveness of the theoretical results and to provide additional managerial insights.
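A minimal sketch of the defuzzification step, assuming triangular fuzzy numbers: the robust (Yager-type) ranking integral reduces each fuzzy cost to a crisp value, after which an ordinary simplex-type solver applies. The toy two-variable problem below is hypothetical, not the paper's application:

```python
import numpy as np
from scipy.optimize import linprog

def robust_rank(a, b, c):
    """Robust ranking of a triangular fuzzy number (a, b, c):
    R = 0.5 * integral over alpha in [0,1] of (lower + upper alpha-cut),
    which evaluates to (a + 2b + c) / 4 for triangular numbers."""
    return (a + 2.0 * b + c) / 4.0

# Hypothetical fuzzy LP:  max z = c1*x1 + c2*x2,  s.t. x1 + x2 <= 4, x >= 0,
# with fuzzy profits c1 ~ (3, 4, 5) and c2 ~ (1, 2, 3).
c1 = robust_rank(3, 4, 5)   # crisp value 4.0
c2 = robust_rank(1, 2, 3)   # crisp value 2.0

# linprog minimizes, so negate the defuzzified objective.
res = linprog(c=[-c1, -c2], A_ub=[[1, 1]], b_ub=[4],
              bounds=[(0, None)] * 2, method="highs")
```

The crisp optimum here puts all capacity on the higher-ranked activity (x1 = 4, z = 16), illustrating how the ranking step lets standard LP machinery handle fuzzy coefficients.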
International Nuclear Information System (INIS)
Koch, Stephan
2009-01-01
problem-tailored discretization approach is based on a geometrical modeling of reduced spatial dimension inside respective domains of symmetry. For the approximation of the electromagnetic fields, orthogonal polynomials along the direction of symmetry are combined with finite element shape functions at the remaining cross-section. This leads to an efficient method providing a high accuracy. The domains of symmetry are embedded into the surrounding region by means of a strong coupling at the discrete level in terms of a domain decomposition approach. Using this strategy, for certain examples a level of accuracy corresponding to numerical models featuring several millions of degrees of freedom in classical finite element methods can be achieved with only one hundred thousand unknowns. This is demonstrated for different examples, e.g., a cylindrical power transformer and the already mentioned accelerator magnet. (orig.)
Intestinal colic in newborn babies: incidence and methods of proceeding applied by parents
Directory of Open Access Journals (Sweden)
Anna Lewandowska
2017-06-01
Full Text Available Introduction: Intestinal colic is one of the more frequent complaints that general practitioners and paediatricians deal with in their work. It affects 10-40% of formula-fed and 10-20% of breast-fed babies. A colic attack appears suddenly and very quickly causes an energetic, squeaky cry or even a scream. Colic attacks last for a few minutes and appear every 2-3 hours, usually in the evenings. Specialist literature provides numerous definitions of intestinal colic; the concept was first introduced into paediatric textbooks over 250 years ago. One of the most accurate definitions describes colic as recurring attacks of intensive crying and anxiety lasting for more than 3 hours a day, on 3 days a week, within 3 weeks. Care of a baby suffering from intestinal colic causes numerous problems and anxiety among parents; knowledge of effective methods to combat this complaint is therefore a challenge for contemporary neonatology and paediatrics. The aim of the study is to estimate the incidence of intestinal colic in formula-fed and breast-fed newborn babies, to assess the methods of proceeding applied by parents, and to analyse their effectiveness. Material and methods: The research involved 100 breast-fed and 100 formula-fed newborn babies, and their parents. The research method was a diagnostic survey conducted with a questionnaire. Results: Among the examined newborn babies that were breast fed, 43% had experienced intestinal colic, while among those formula fed, 30% had suffered from it. The study involved 44% newborn female babies and 56% male babies. 52% of mothers were 30-34 years old, 30% were 35-59 years old, and 17% were 25-59 years old. As for the families, the most numerous group was in a good financial situation (60%), followed by the group in an average financial situation (40%). All the respondents claimed that they had knowledge of intestinal colic, and the main source of knowledge
Should methods of correction for multiple comparisons be applied in pharmacovigilance?
Directory of Open Access Journals (Sweden)
Lorenza Scotti
2015-12-01
Full Text Available Purpose. In pharmacovigilance, spontaneous reporting databases are devoted to the early detection of adverse event 'signals' of marketed drugs. A common limitation of these systems is the wide number of concurrently investigated associations, implying a high probability of generating positive signals simply by chance. However, it is not clear whether methods that adjust for multiple testing are needed when at least some of the drug-outcome relationships under study are already known. To this aim we applied a robust estimation method for the FDR (rFDR) particularly suitable in the pharmacovigilance context. Methods. We exploited the data available for the SAFEGUARD project to apply the rFDR estimation method to detect potential false positive signals of adverse reactions attributable to the use of non-insulin blood glucose lowering drugs. Specifically, the number of signals generated by the conventional disproportionality measures was compared with the number remaining after application of the rFDR adjustment method. Results. Among the 311 evaluable pairs (i.e., drug-event pairs with at least one adverse event report), 106 (34%) signals were considered significant in the conventional analysis. Of these, 1 was classified as a false positive signal according to the rFDR method. Conclusions. The results of this study suggest that when a restricted number of drug-outcome pairs is considered, and warnings about some of them are already known, multiple comparisons methods for recognizing false positive signals are not as useful as theoretical considerations would suggest.
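The paper's robust FDR (rFDR) estimator is not reproduced here, but the underlying multiple-comparisons idea can be illustrated with the standard Benjamini-Hochberg procedure applied to hypothetical drug-event p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Boolean mask of discoveries under the Benjamini-Hochberg step-up
    procedure at FDR level q. This is the classical FDR control that the
    paper's rFDR estimator refines; the rFDR itself is not implemented."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m      # BH step-up thresholds
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                    # smallest k p-values pass
    return mask

# Hypothetical p-values for 8 drug-event pairs: 3 strong signals among noise.
pvals = [1e-5, 2e-4, 0.003, 0.2, 0.4, 0.6, 0.8, 0.9]
discoveries = benjamini_hochberg(pvals, q=0.05)
```

With these made-up values, only the three small p-values survive the correction; the borderline 0.2 is rejected as a likely chance finding, which is the behaviour the abstract's discussion turns on.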
Energy Technology Data Exchange (ETDEWEB)
Tumelero, Fernanda, E-mail: fernanda.tumelero@yahoo.com.br [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Petersen, Claudio Z.; Goncalves, Glenio A.; Lazzari, Luana, E-mail: claudiopeteren@yahoo.com.br, E-mail: gleniogoncalves@yahoo.com.br, E-mail: luana-lazzari@hotmail.com [Universidade Federal de Pelotas (DME/UFPEL), Capao do Leao, RS (Brazil). Instituto de Fisica e Matematica
2015-07-01
In this work, we present a solution of the neutron point kinetics equations with temperature feedback effects, applying the Polynomial Approach Method. For the solution, we consider one and six groups of delayed neutron precursors with temperature feedback effects and constant reactivity. The main idea is to expand the neutron density, the delayed neutron precursor concentrations and the temperature as a power series, considering the reactivity as an arbitrary function of time in a relatively short time interval around an ordinary point. In the first interval one applies the initial conditions of the problem, and analytical continuation is used to determine the solutions of the subsequent intervals. With the application of the Polynomial Approach Method it is possible to overcome the stiffness of the equations. In this way, one can vary the time step size of the Polynomial Approach Method and analyse the precision and computational time. Moreover, we compare the method with different orders of approximation (linear, quadratic and cubic) of the power series. The neutron density and temperature obtained by numerical simulations with the linear approximation are compared with results in the literature. (author)
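The power-series idea can be sketched for one delayed-neutron group with constant reactivity (temperature feedback omitted for brevity; the constants are typical textbook values, not the paper's cases). Each step expands the solution in a truncated Taylor series whose higher derivatives follow from the kinetics equations by recurrence:

```python
# One-group point kinetics:  n' = ((rho - beta)/Lambda) n + lam C
#                            C' = (beta/Lambda) n - lam C
# advanced by a truncated power series on each short interval.
beta, lam, Lam = 0.0065, 0.08, 1e-4   # typical thermal-reactor constants
rho = 0.003                           # constant reactivity, below prompt critical

a, b = (rho - beta) / Lam, beta / Lam

def series_step(n, c, h, order=8):
    """One power-series (Taylor) step of length h. The k-th derivatives
    (dn, dc) are generated by the recurrence implied by the ODEs."""
    dn, dc = n, c                     # 0th derivatives
    n_new, c_new, fact = 0.0, 0.0, 1.0
    for k in range(order + 1):
        n_new += dn * h**k / fact
        c_new += dc * h**k / fact
        dn, dc = a * dn + lam * dc, b * dn - lam * dc   # next derivatives
        fact *= (k + 1)
    return n_new, c_new

n, c = 1.0, b / lam                   # start at precursor equilibrium for n = 1
for _ in range(1000):                 # 1000 steps of 1e-5 s, i.e. t = 0.01 s
    n, c = series_step(n, c, 1e-5)
```

Because the series is re-expanded on each short interval, the step size can be kept small enough for the prompt (stiff) mode without an implicit solver, which is the stiffness argument made in the abstract.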
Power secant method applied to natural frequency extraction of Timoshenko beam structures
Directory of Open Access Journals (Sweden)
C.A.N. Dias
Full Text Available This work deals with an improved plane frame formulation whose exact dynamic stiffness matrix (DSM presents, uniquely, null determinant for the natural frequencies. In comparison with the classical DSM, the formulation herein presented has some major advantages: local mode shapes are preserved in the formulation so that, for any positive frequency, the DSM will never be ill-conditioned; in the absence of poles, it is possible to employ the secant method in order to have a more computationally efficient eigenvalue extraction procedure. Applying the procedure to the more general case of Timoshenko beams, we introduce a new technique, named "power deflation", that makes the secant method suitable for the transcendental nonlinear eigenvalue problems based on the improved DSM. In order to avoid overflow occurrences that can hinder the secant method iterations, limiting frequencies are formulated, with scaling also applied to the eigenvalue problem. Comparisons with results available in the literature demonstrate the strength of the proposed method. Computational efficiency is compared with solutions obtained both by FEM and by the Wittrick-Williams algorithm.
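The eigenvalue-extraction idea reduces to root-finding on a transcendental characteristic function. A minimal sketch of the secant iteration, using the fixed-free bar equation cos(wL/c) = 0 as a stand-in for det(DSM(w)) = 0 (L and c are made-up values, and the paper's "power deflation" and scaling safeguards are not reproduced):

```python
import math

# Secant iteration on a transcendental frequency equation: the same
# root-finding idea the paper applies to the dynamic stiffness matrix
# determinant. Stand-in characteristic function: axial vibration of a
# fixed-free bar, cos(w L / c) = 0, first root w = pi c / (2 L).
L, c = 1.0, 5000.0                    # bar length [m], wave speed [m/s]

def char_eq(w):
    return math.cos(w * L / c)

def secant(f, w0, w1, tol=1e-10, itmax=50):
    """Classic secant iteration: no derivative of f is needed."""
    for _ in range(itmax):
        f0, f1 = f(w0), f(w1)
        w2 = w1 - f1 * (w1 - w0) / (f1 - f0)
        if abs(w2 - w1) < tol:
            return w2
        w0, w1 = w1, w2
    return w1

w_n = secant(char_eq, 7000.0, 8000.0)   # bracket near the first mode
```

No derivative of the characteristic function is required, which is exactly why the secant method is attractive once the DSM formulation guarantees a sign change (rather than a pole) at each natural frequency.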
International Nuclear Information System (INIS)
Vianna Filho, Alfredo Marques
2009-01-01
The economic equipment replacement problem is a central question in nuclear engineering. On the one hand, new equipment is more attractive given its better performance, higher reliability, lower maintenance costs, etc.; new equipment, however, requires a higher initial investment. On the other hand, old equipment represents the opposite case, with lower performance, lower reliability and especially higher maintenance costs, but in contrast lower financial and insurance costs. The weighting of all these costs can be done with deterministic and probabilistic methods applied to the study of equipment replacement. Two distinct types of problem are examined: substitution imposed by wear and substitution imposed by failures. To solve the problem of nuclear system substitution imposed by wear, deterministic methods are discussed; to solve the problem of nuclear system substitution imposed by failures, probabilistic methods are discussed. The aim of this paper is to present a methodological framework for choosing the most useful method to apply to the problem of nuclear system substitution. (author)
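For the deterministic wear-driven case, the classic calculation chooses the replacement age that minimizes the equivalent annual cost. A sketch with hypothetical, undiscounted figures (real studies would discount the cash flows):

```python
# Deterministic replacement-by-wear sketch: pick the replacement age
# minimizing the average annual cost of ownership. All figures are
# hypothetical illustration values.
PRICE = 100_000.0                        # purchase cost

def maintenance(year):                   # rising O&M cost in year k (1-based)
    return 5_000.0 + 4_000.0 * (year - 1)

def salvage(age):                        # resale value after `age` years
    return PRICE * 0.5 ** age

def eac(age):
    """Average annual cost if the unit is replaced every `age` years."""
    total = PRICE - salvage(age) + sum(maintenance(k) for k in range(1, age + 1))
    return total / age

best_age = min(range(1, 16), key=eac)    # optimal replacement age in years
```

The trade-off is visible in the numbers: early replacement wastes capital (high depreciation per year), late replacement accumulates maintenance; the minimum of `eac` balances the two.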
CSIR Research Space (South Africa)
Kotzé, Paula
2016-11-01
Full Text Available Enterprise systems engineering (ESE) is a multidisciplinary approach that combines traditional systems engineering (TSE) and strategic management to address methods and approaches for aligning system architectures, system development and system...
Novel Signal Noise Reduction Method through Cluster Analysis, Applied to Photoplethysmography.
Waugh, William; Allen, John; Wightman, James; Sims, Andrew J; Beale, Thomas A W
2018-01-01
Physiological signals can often become contaminated by noise from a variety of origins. In this paper, an algorithm is described for the reduction of sporadic noise from a continuous periodic signal. The design can be used where a sample of a periodic signal is required, for example when an average pulse is needed for pulse wave analysis and characterization. The algorithm is based on cluster analysis for selecting similar repetitions, or pulses, from a periodic signal. This method selects individual pulses without noise, returns a clean pulse signal, and terminates when a sufficiently clean and representative signal is received. The algorithm is designed to be sufficiently compact to be implemented on a microcontroller embedded within a medical device. It has been validated through the removal of noise from an exemplar photoplethysmography (PPG) signal, showing increasing benefit as the noise contamination of the signal increases. The algorithm design is generalised to be applicable to a wide range of physiological (physical) signals.
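A simplified stand-in for the described algorithm: segment the record into beats, keep the cluster of mutually similar beats (here, those correlating strongly with a robust reference), and average them. The synthetic beat shape, noise levels, and the 0.9 correlation threshold are all made up for illustration:

```python
import numpy as np

# Cluster-style selection of clean pulses from a noisy periodic signal:
# split the record into beats, keep the group of similar beats, and
# average them into one representative pulse.
rng = np.random.default_rng(1)
period, n_beats = 100, 20
template = np.sin(np.linspace(0.0, np.pi, period)) ** 2   # synthetic PPG beat

beats = np.tile(template, (n_beats, 1)) + 0.02 * rng.standard_normal((n_beats, period))
beats[3] += 1.5 * rng.standard_normal(period)             # two beats corrupted
beats[11] += 1.5 * rng.standard_normal(period)            # by sporadic noise

ref = np.median(beats, axis=0)                            # robust reference beat
corr = np.array([np.corrcoef(b, ref)[0, 1] for b in beats])
clean = beats[corr > 0.9]                                 # the "similar" cluster

avg_pulse = clean.mean(axis=0)                            # representative pulse
```

The median reference is what keeps the two corrupted beats from dragging the cluster centre away, mirroring the paper's goal of returning a clean average pulse even as contamination grows.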
Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard
2007-06-01
Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be
Solution and study of nodal neutron transport equation applying the LTSN-DiagExp method
International Nuclear Information System (INIS)
Hauser, Eliete Biasotto; Pazos, Ruben Panta; Vilhena, Marco Tullio de; Barros, Ricardo Carvalho de
2003-01-01
In this paper we report advances in the three-dimensional nodal discrete-ordinates approximations of the neutron transport equation for Cartesian geometry. We use the combined collocation method for the angular variables and a nodal approach for the spatial variables. By nodal approach we mean the iterated transverse integration of the SN equations. This procedure leads to a set of one-dimensional averaged angular fluxes in each spatial variable. The resulting system of equations is solved with the LTSN method, by first applying the Laplace transform to the set of nodal SN equations and then obtaining the solution by symbolic computation. We include the LTSN method by diagonalization to solve the nodal neutron transport equation, and then outline the convergence of these nodal-LTSN approximations with the help of a norm associated with the quadrature formula used to approximate the integral term of the neutron transport equation. (author)
Artificial intelligence methods applied for quantitative analysis of natural radioactive sources
International Nuclear Information System (INIS)
Medhat, M.E.
2012-01-01
Highlights: ► Basic description of artificial neural networks. ► Natural gamma ray sources and the problem of their detection. ► Application of a neural network to peak detection and activity determination. - Abstract: The artificial neural network (ANN) is one of the artificial intelligence methods used for modeling and uncertainty estimation in different applications. The objective of the proposed work was to apply ANNs to identify isotopes and to predict the uncertainties of their activities for some natural radioactive sources. The method was tested on gamma-ray spectra emitted from natural radionuclides in soil samples, detected by high-resolution gamma-ray spectrometry based on HPGe (high purity germanium) detectors. The principle of the suggested method is described, including the definition of the relevant input parameters, input data scaling, and network training. The results show satisfactory agreement between the obtained and predicted values using the neural network.
Scalable Methods for Eulerian-Lagrangian Simulation Applied to Compressible Multiphase Flows
Zwick, David; Hackl, Jason; Balachandar, S.
2017-11-01
Multiphase flows can be found in countless areas of physics and engineering. Many of these flows can be classified as dispersed two-phase flows, meaning that solid particles are dispersed in a continuous fluid phase. A common technique for simulating such flows is the Eulerian-Lagrangian method. While useful, this method can suffer from scaling issues on the larger problem sizes typical of many realistic geometries. Here we present scalable techniques for Eulerian-Lagrangian simulations and apply them to the simulation of a particle bed subjected to expansion waves in a shock tube. The results show that the methods presented here are viable for the simulation of larger problems on modern supercomputers. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138. This work was supported in part by the U.S. Department of Energy under Contract No. DE-NA0002378.
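A minimal one-way-coupled Eulerian-Lagrangian step can be sketched as follows, assuming Stokes drag and a frozen carrier field (illustrative parameters only, far from the paper's compressible shock-tube setting):

```python
import numpy as np

# Minimal Eulerian-Lagrangian coupling: the carrier fluid lives on a
# fixed (Eulerian) grid, particles are tracked individually (Lagrangian)
# and relax toward the locally interpolated fluid velocity via Stokes
# drag. One-way coupling only.
nx, L = 64, 1.0
x_grid = np.linspace(0.0, L, nx)
u_fluid = np.sin(2 * np.pi * x_grid)          # frozen carrier velocity field

tau_p = 0.05                                  # particle response time [s]
xp = np.array([0.1, 0.3, 0.6])                # particle positions
up = np.zeros_like(xp)                        # particle velocities

dt = 1e-3
for _ in range(500):                          # advance to t = 0.5 s
    u_at_p = np.interp(xp, x_grid, u_fluid)   # Eulerian -> Lagrangian transfer
    up += dt * (u_at_p - up) / tau_p          # Stokes drag acceleration
    xp += dt * up                             # move particles
    xp %= L                                   # periodic domain
```

The grid-to-particle interpolation is the communication-heavy step in parallel runs: particles migrate across subdomains while the grid partition stays fixed, which is the scaling bottleneck the abstract addresses.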
Relativistic convergent close-coupling method applied to electron scattering from mercury
International Nuclear Information System (INIS)
Bostock, Christopher J.; Fursa, Dmitry V.; Bray, Igor
2010-01-01
We report on the extension of the recently formulated relativistic convergent close-coupling (RCCC) method to accommodate two-electron and quasi-two-electron targets. We apply the theory to electron scattering from mercury and obtain differential and integrated cross sections for elastic and inelastic scattering. We compare with previous nonrelativistic convergent close-coupling (CCC) calculations and, for a number of transitions, obtain significantly better agreement with experiment. The RCCC method is able to resolve structure in the integrated cross sections in the energy regime near the excitation thresholds for the (6s6p) 3P0,1,2 states. These cross sections are associated with the formation of negative-ion (Hg-) resonances that could not be resolved with the nonrelativistic CCC method. The RCCC results are compared with experiment and with other relativistic theories.
A reflective lens: applying critical systems thinking and visual methods to ecohealth research.
Cleland, Deborah; Wyborn, Carina
2010-12-01
Critical systems methodology has been advocated as an effective and ethical way to engage with the uncertainty and conflicting values common to ecohealth problems. We use two contrasting case studies, coral reef management in the Philippines and national park management in Australia, to illustrate the value of critical systems approaches in exploring how people respond to environmental threats to their physical and spiritual well-being. In both cases, we used visual methods--participatory modeling and rich picturing, respectively. The critical systems methodology, with its emphasis on reflection, guided an appraisal of the research process. A discussion of these two case studies suggests that visual methods can be usefully applied within a critical systems framework to offer new insights into ecohealth issues across a diverse range of socio-political contexts. With this article, we hope to open up a conversation with other practitioners to expand the use of visual methods in integrated research.
International Nuclear Information System (INIS)
Suzuki, Mitsutoshi; Hori, Masato; Asou, Ryoji; Usuda, Shigekazu
2006-01-01
The multiscale statistical process control (MSSPC) method is applied, using numerical calculations, to clarify the elements of material unaccounted for (MUF) in large-scale reprocessing plants. Continuous wavelet functions are used to decompose the process data, which simulate batch operation superimposed with various types of disturbance, and the disturbance components included in the data are separated in time and frequency. The MSSPC diagnosis is applied to distinguish abnormal events in the process data and shows how to detect abrupt and protracted diversions using principal component analysis. The quantitative performance of MSSPC on the time series data is shown with average run lengths obtained by Monte Carlo simulation, for comparison with the non-detection probability β. Recent discussion about bias corrections in material balances is introduced, and another approach is presented to evaluate MUF without assuming a measurement error model. (author)
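The multiscale idea can be sketched with a hand-written Haar smoothing of a synthetic MUF-like sequence: coarse-scale averages expose a protracted small diversion that individual noisy balances hide. The data, shift size, and choice of scale are made up (the paper uses continuous wavelets and PCA, not reproduced here):

```python
import numpy as np

# Haar-style multiscale view of a material-balance sequence: pairwise
# averaging at successive scales accumulates a sustained mean shift
# that single balances bury in measurement noise.
rng = np.random.default_rng(42)
muf = rng.standard_normal(64)          # in-control balances (unit noise)
muf[32:] += 1.5                        # protracted diversion: sustained shift

def haar_averages(x):
    """Successive pairwise means: one smoothed sequence per Haar scale."""
    levels, cur = [], np.asarray(x, dtype=float)
    while len(cur) > 1:
        cur = 0.5 * (cur[0::2] + cur[1::2])
        levels.append(cur)
    return levels

levels = haar_averages(muf)
coarse = levels[2]                     # 8 averages of 8 consecutive balances
shift_estimate = coarse[4:].mean() - coarse[:4].mean()
```

At the coarse scale the noise standard deviation shrinks with averaging while the sustained shift does not, which is the mechanism behind detecting protracted (as opposed to abrupt) diversions.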
The reduction method of statistic scale applied to study of climatic change
International Nuclear Information System (INIS)
Bernal Suarez, Nestor Ricardo; Molina Lizcano, Alicia; Martinez Collantes, Jorge; Pabon Jose Daniel
2000-01-01
In climate change studies, the global circulation models of the atmosphere (GCMAs) enable one to simulate the global climate, with the field variables represented on grid points about 300 km apart. One particular interest concerns the simulation of possible changes in rainfall and surface air temperature due to an assumed increase in greenhouse gases. However, the models yield climatic projections on grid points that in most cases do not correspond to the sites of major interest. To obtain local estimates of the climatological variables, methods such as statistical downscaling are applied. In this article we show a case in point, applying canonical correlation analysis (CCA) to the Guajira Region in the northeast of Colombia.
ADVANTAGES AND DISADVANTAGES OF APPLYING EVOLVED METHODS IN MANAGEMENT ACCOUNTING PRACTICE
Directory of Open Access Journals (Sweden)
SABOU FELICIA
2014-05-01
Full Text Available The evolved methods of management accounting have been developed to remove the disadvantages of the classical methods: they are methods adapted to the new market conditions, which provide much more useful cost-related information, so that the management of the company is able to take certain strategic decisions. Of the evolved methods, the most used is the standard-cost method, owing to the advantages it presents; it is widely used for calculating production costs in some developed countries. The main advantages of the standard-cost method are: advance knowledge of the production costs and of the measures that ensure compliance with them; systematic control over costs through the deviations calculated from the standard costs, allowing decisions to be made in due time with regard to eliminating the deviations and improving the activity; and its role as a method of analysis, control and cost forecasting. Although the advantages of using standards are significant, there are a few disadvantages to the standard-cost method: difficulties can sometimes appear in establishing the deviations from the standard costs, and the method does not allow an accurate calculation of the fixed costs. As a result of the study, we observe that the evolved methods of management accounting, compared to the classical ones, present a series of advantages linked to better analysis, control and forecasting of costs, whereas their main disadvantage is the large amount of work necessary to apply them.
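The deviations (variances) from standard costs mentioned above decompose as in this sketch with hypothetical figures for a single material:

```python
# Standard-cost deviations for one material: the gap between actual and
# standard cost splits into a price variance and a quantity variance.
# All figures are a hypothetical illustration.
std_price, std_qty = 4.0, 2.5          # $/kg allowed, kg allowed per unit
act_price, act_qty = 4.3, 2.4          # actual price paid, actual usage
units = 1000                           # units produced

qty_allowed = std_qty * units          # 2500 kg allowed
qty_used = act_qty * units             # 2400 kg actually used

price_variance = (act_price - std_price) * qty_used        # +720, unfavourable
quantity_variance = (qty_used - qty_allowed) * std_price   # -400, favourable
total_variance = act_price * qty_used - std_price * qty_allowed
```

The two components always reconcile to the total deviation, which is what makes the decomposition usable for the timely control decisions the abstract describes.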
The Fractional Step Method Applied to Simulations of Natural Convective Flows
Westra, Douglas G.; Heinrich, Juan C.; Saxon, Jeff (Technical Monitor)
2002-01-01
This paper describes research done to apply the Fractional Step Method to finite-element simulations of natural convective flows in pure liquids, in permeable media, and in a directionally solidified metal alloy casting. The Fractional Step Method has commonly been applied to high Reynolds number flow simulations, but is less common for low Reynolds number flows, such as natural convection in liquids and in permeable media. The Fractional Step Method offers increased speed and reduced memory requirements by allowing non-coupled solution of the pressure and the velocity components. It has particular benefits for predicting flows in a directionally solidified alloy, since other methods presently employed are not very efficient. Previously, the most suitable method for predicting flows in a directionally solidified binary alloy was the penalty method, which requires direct matrix solvers because of the penalty term. The Fractional Step Method allows iterative solution of the finite element stiffness matrices, thereby allowing more efficient solution of the matrices. It also lends itself to parallel processing, since the velocity component stiffness matrices can be built and solved independently of each other. The finite-element simulations of a directionally solidified casting are used to predict macrosegregation in directionally solidified castings. In particular, the simulations predict the existence of 'channels' within the processing mushy zone and subsequently 'freckles' within the fully processed solid, which are known to result from macrosegregation, or what is often referred to as thermo-solutal convection. These freckles cause material property non-uniformities in directionally solidified castings; therefore many of these castings are scrapped. The phenomenon of natural convection in an alloy undergoing directional solidification, or thermo-solutal convection, will be explained.
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-01-01
Coordinate transformation plays an indispensable role in industrial measurement, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite their high accuracy, they may yield no solution due to ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide through a series of rotations and translations, and the transformation matrix can then be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but its operation is more convenient and flexible. A multi-sensor combined measurement system is also presented, which improves the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
Directory of Open Access Journals (Sweden)
Bailing Liu
2016-02-01
Full Text Available Coordinate transformation plays an indispensable role in industrial measurement, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite their high accuracy, they may yield no solution due to ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide through a series of rotations and translations, and the transformation matrix can then be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but its operation is more convenient and flexible. A multi-sensor combined measurement system is also presented, which improves the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.
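The geometric line-coincidence construction above is specific to the paper and not reproduced in the record. As a generic illustration of the underlying task (recovering a rotation and translation between two coordinate systems from matched points), here is a minimal least-squares sketch using the standard SVD-based (Kabsch) approach; the point sets, rotation and function name are invented for the example, not taken from the source.

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t with Q ~ R @ P + t
    (least squares over matched 3-D point sets, Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # proper rotation, det = +1
    t = cQ - R @ cP
    return R, t

# Verify on a known rotation about z and a known translation
rng = np.random.default_rng(0)
P = rng.random((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ R_true.T + t_true
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With noise-free correspondences the recovered transform matches the true one to machine precision; with measurement noise the same code returns the least-squares optimum.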
The Inverse System Method Applied to the Derivation of Power System Non—linear Control Laws
Institute of Scientific and Technical Information of China (English)
Donghai LI; Xuezhi JIANG; et al.
1997-01-01
The differential geometric method has been applied effectively to a series of power system non-linear control problems. However, a set of differential equations must be solved to obtain the required diffeomorphic transformation, so the derivation of control laws is very complicated. In fact, because of the specific structure of power system models, the required diffeomorphic transformation may be obtained directly, making it unnecessary to solve a set of differential equations. In addition, the inverse system method is in reality equivalent to the differential geometric method and is not limited to affine nonlinear systems; its physical meaning can be viewed directly, and its deduction requires only algebraic operations and differentiation, so control laws can be obtained easily and application to engineering is very convenient. The authors take steam valving control of a power system as a typical case study. It is demonstrated that the control law deduced by the inverse system method is exactly the same as the one obtained by the differential geometric method. This conclusion simplifies the derivation of control laws for steam valving, excitation, converters and static var compensators by the differential geometric method, and may suit similar control problems in other areas.
Comparison of Heuristic Methods Applied for Optimal Operation of Water Resources
Directory of Open Access Journals (Sweden)
Alireza Borhani Dariane
2009-01-01
Full Text Available Water resources optimization problems are usually complex and hard to solve using ordinary optimization methods, or at least such methods are not economically efficient. A great number of studies have been conducted in quest of suitable methods capable of handling such problems. In recent years, new heuristic methods such as genetic and ant algorithms have been introduced in systems engineering. Preliminary applications of these methods to water resources problems have shown that some of them are powerful tools, capable of solving complex problems. In this paper, the application of heuristic methods such as the Genetic Algorithm (GA) and Ant Colony Optimization (ACO) is studied for optimizing reservoir operation. The Dez Dam reservoir in Iran was chosen as a case study. The methods were applied and compared using short-term (one year) and long-term models. Comparison of the results showed that GA outperforms both DP and ACO in finding true global optimum solutions and operating rules.
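The reservoir model the authors optimized is not reproduced in the abstract. Purely to illustrate the GA machinery involved, here is a minimal real-coded genetic algorithm (tournament selection, blend crossover, Gaussian mutation) maximizing an invented one-variable "benefit" function; the objective, parameters and function names are illustrative assumptions, not the paper's model.

```python
import random

def ga_maximize(f, lo, hi, pop_size=30, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover, mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            # each parent is the best of a random 3-way tournament
            a = max(rng.sample(pop, 3), key=f)
            b = max(rng.sample(pop, 3), key=f)
            child = 0.5 * (a + b)                  # blend crossover
            if rng.random() < 0.2:                 # occasional Gaussian mutation
                child += rng.gauss(0, 0.1 * (hi - lo))
            new.append(min(max(child, lo), hi))    # keep within bounds
        pop = new
    return max(pop, key=f)

# Invented single-peak "release benefit" with optimum at x = 2
best = ga_maximize(lambda x: -(x - 2.0) ** 2, 0.0, 5.0)
print(round(best, 2))
```

A real reservoir application replaces the toy objective with the simulated system benefit over the operating horizon, and the scalar decision variable with a vector of releases.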
Boundary element methods applied to two-dimensional neutron diffusion problems
International Nuclear Information System (INIS)
Itagaki, Masafumi
1985-01-01
The Boundary element method (BEM) has been applied to two-dimensional neutron diffusion problems. The boundary integral equation and its discretized form have been derived. Some numerical techniques have been developed, which can be applied to critical and fixed-source problems including multi-region ones. Two types of test programs have been developed according to whether the 'zero-determinant search' or the 'source iteration' technique is adopted for criticality search. Both programs require only the fluxes and currents on boundaries as the unknown variables. The former allows a reduction in computing time and memory in comparison with the finite element method (FEM). The latter is not always efficient in terms of computing time due to the domain integral related to the inhomogeneous source term; however, this domain integral can be replaced by the equivalent boundary integral for a region with a non-multiplying medium or with a uniform source, resulting in a significant reduction in computing time. The BEM, as well as the FEM, is well suited for solving irregular geometrical problems for which the finite difference method (FDM) is unsuited. The BEM also solves problems with infinite domains, which cannot be solved by the ordinary FEM and FDM. Some simple test calculations are made to compare the BEM with the FEM and FDM, and discussions are made concerning the relative merits of the BEM and problems requiring future solution. (author)
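The paper's boundary integral formulation is not given in the record. As a concrete point of reference for the finite difference method it is compared against, here is a minimal 1-D one-group fixed-source diffusion solve (-D φ'' + Σa φ = S with zero-flux boundaries) using an FDM tridiagonal system; the material data are invented for the example and this is not the paper's BEM.

```python
import numpy as np

# One-group 1-D diffusion: -D phi'' + Sa * phi = S, phi(0) = phi(L) = 0
D, Sa, S, L, n = 1.0, 0.02, 1.0, 100.0, 201   # invented data (cm, cm^-1, ...)
h = L / (n - 1)
A = np.zeros((n, n))
b = np.full(n, S)
for i in range(1, n - 1):                      # interior difference stencil
    A[i, i - 1] = A[i, i + 1] = -D / h**2
    A[i, i] = 2 * D / h**2 + Sa
A[0, 0] = A[-1, -1] = 1.0
b[0] = b[-1] = 0.0                             # zero-flux boundary conditions
phi = np.linalg.solve(A, b)
print(phi.max() <= S / Sa, abs(phi[50] - phi[150]) < 1e-8)  # bounded, symmetric
```

The flux is bounded above by the infinite-medium value S/Σa and is symmetric about the midplane, as expected for a uniform source; the BEM discussed above replaces this volumetric mesh with unknowns on the boundary only.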
Methodical basis of training of cadets for the military applied heptathlon competitions
Directory of Open Access Journals (Sweden)
R.V. Anatskyi
2017-12-01
Full Text Available The purpose of the research is to develop methodical bases of training of cadets for the military applied heptathlon competitions. Material and methods: Cadets of the 2nd-3rd years of study, aged 19-20 (n=20), participated in the research. Cadets were selected on the basis of their best results in the exercises included in the program of military applied heptathlon competitions (100 m run, 50 m freestyle swimming, Kalashnikov rifle shooting, pull-ups, obstacle course, grenade throwing, 3000 m run). Preparation took place on the basis of a training centre. All trainings were organized and carried out according to the methodical basics: in a weekly preparation microcycle, on five days the cadets had two trainings a day (on Saturday there was one training; on Sunday they rested). The selected exercises were performed with individual loads. Results: Sport scores demonstrated top results in the 100 m run, the 3000 m run and pull-ups. The indices for the obstacle course exercise were much lower than expected, and rather low results were demonstrated in swimming and shooting. Conclusions: The results indicate the need to improve the quality of cadets' weapons proficiency and of their physical readiness to perform exercises requiring the complex demonstration of all physical qualities.
Sojda, R.S.
2007-01-01
Decision support systems are often not empirically evaluated, especially the underlying modelling components. This can be attributed to such systems necessarily being designed to handle complex and poorly structured problems and decision making. Nonetheless, evaluation is critical and should be focused on empirical testing whenever possible. Verification and validation, in combination, comprise such evaluation. Verification is ensuring that the system is internally complete, coherent, and logical from a modelling and programming perspective. Validation is examining whether the system is realistic and useful to the user or decision maker, and should answer the question: “Was the system successful at addressing its intended purpose?” A rich literature exists on verification and validation of expert systems and other artificial intelligence methods; however, no single evaluation methodology has emerged as preeminent. At least five approaches to validation are feasible. First, under some conditions, decision support system performance can be tested against a preselected gold standard. Second, real-time and historic data sets can be used for comparison with simulated output. Third, panels of experts can be judiciously used, but often are not an option in some ecological domains. Fourth, sensitivity analysis of system outputs in relation to inputs can be informative. Fifth, when validation of a complete system is impossible, examining major components can be substituted, recognizing the potential pitfalls. I provide an example of evaluation of a decision support system for trumpeter swan (Cygnus buccinator) management that I developed using interacting intelligent agents, expert systems, and a queuing system. Predicted swan distributions over a 13-year period were assessed against observed numbers. Population survey numbers and banding (ringing) studies may provide long term data useful in empirical evaluation of decision support.
Nutrient Runoff Losses from Liquid Dairy Manure Applied with Low-Disturbance Methods.
Jokela, William; Sherman, Jessica; Cavadini, Jason
2016-09-01
Manure applied to cropland is a source of phosphorus (P) and nitrogen (N) in surface runoff and can contribute to impairment of surface waters. Tillage immediately after application incorporates manure into the soil, which may reduce nutrient loss in runoff as well as N loss via NH3 volatilization. However, tillage also incorporates crop residue, which reduces surface cover and may increase erosion potential. We applied liquid dairy manure in a silage corn-cereal rye cover crop system in late October using methods designed to incorporate manure with minimal soil and residue disturbance. These include strip-till injection and tine aerator-band manure application, which were compared with standard broadcast application, either incorporated with a disk or left on the surface. Runoff was generated with a portable rainfall simulator (42 mm h-1 for 30 min) three separate times: (i) 2 to 5 d after the October manure application, (ii) in early spring, and (iii) after tillage and planting. In the post-manure-application runoff, the highest losses of total P and dissolved reactive P were from surface-applied manure. Dissolved P loss was reduced 98% by strip-till injection; this result was not statistically different from the no-manure control. Reductions from the aerator-band method and disk incorporation were 53 and 80%, respectively. Total P losses followed a similar pattern, with an 87% reduction from injected manure. Runoff losses of N generally followed patterns similar to those of P. Losses of P and N were, in most cases, lower in the spring rain simulations, with fewer significant treatment effects. Overall, results show that low-disturbance manure application methods can significantly reduce nutrient runoff losses compared with surface application while maintaining residue cover better than incorporation by tillage. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Directory of Open Access Journals (Sweden)
Nicholas W. Mitiukov
2015-12-01
Full Text Available This paper proposes a new method of historical research based on the analysis of derived coefficients in a database (for example, the form factor in a database of ballistic data). This method offers much greater protection against subjectivism and direct falsification than the analysis of numerical series obtained directly from the source, as any intentional or unintentional distortion of the raw data produces a significant contrast with the average sample values. Application of this method to the analysis of a ballistic database of naval artillery revealed facts that force a new look at some historical events: data on German naval artillery before World War I were probably overstated to disinform its Entente opponents; during the First World War Spain apparently held secret talks with the firm Bofors that ended in the purchase of Swedish shells; and the first Russian naval rifled guns were evidently based on the Blackly design, not Krupp's, as traditionally considered.
International Nuclear Information System (INIS)
Aly, Omar Fernandes; Andrade, Arnaldo Paes de; Mattar Neto, Miguel; Aoki, Idalina Vieira
2002-01-01
This paper aims to collect information on and discuss electrochemical noise measurements and the reversing DC potential drop method, applied to stress corrosion tests that can be used to evaluate the nucleation and growth of stress corrosion cracking in Alloy 600 and/or Alloy 182 specimens from the Angra I Nuclear Power Plant. We thereby intend to establish a standard test procedure for the new autoclave equipment at the Laboratorio de Eletroquimica e Corrosao do Departamento de Engenharia Quimica da Escola Politecnica da Universidade de Sao Paulo (Electrochemical and Corrosion Laboratory of the Chemical Engineering Department of the Polytechnic School of the University of Sao Paulo), Brazil. (author)
Making Design Decisions Visible: Applying the Case-Based Method in Designing Online Instruction
Directory of Open Access Journals (Sweden)
Heng Luo
2011-01-01
Full Text Available The instructional intervention in this design case is a self-directed online tutorial that applies the case-based method to teach educators how to design and conduct entrepreneurship programs for elementary school students. In this article, the authors describe the major decisions made in each phase of the design and development process, explicate the rationales behind them, and demonstrate their effect on the production of the tutorial. Based on such analysis, the guidelines for designing case-based online instruction are summarized for the design case.
International Nuclear Information System (INIS)
Walker, R.S.; Thompson, D.A.; Poehlman, S.W.
1977-01-01
The application of single, plural or multiple scattering theories to the determination of defect dechanneling in channeling-backscattering disorder measurements is re-examined. A semiempirical modification to the method is described that makes the extracted disorder and disorder distribution relatively insensitive to the scattering model employed. The various models and modifications have been applied to the 1 to 2 MeV He+ channeling-backscatter data obtained from 20 to 80 keV H+ to Ne+ bombarded Si, GaP and GaAs at 50 K and 300 K. (author)
Zoltàn Dörnyei, Research Methods in Applied Linguistics
Marie-Françoise Narcy-Combes
2012-01-01
Research Methods in Applied Linguistics is a practical and accessible work aimed primarily at beginning researchers and doctoral students in applied linguistics and language didactics, for whom it is a very useful companion. Its clear style and conventional organization make it an easy and pleasant read and render the various concepts readily understandable to all. It presents an overview of research methodology in applied linguistics,...
Cork-resin ablative insulation for complex surfaces and method for applying the same
Walker, H. M.; Sharpe, M. H.; Simpson, W. G. (Inventor)
1980-01-01
A method of applying cork-resin ablative insulation material to complex curved surfaces is disclosed. The material is prepared by mixing finely divided cork with a B-stage curable thermosetting resin, forming the resulting mixture into a block, B-stage curing the resin-containing block, and slicing the block into sheets. The B-stage cured sheet is shaped to conform to the surface being insulated, and further curing is then performed. Curing of the resins only to B-stage before shaping enables application of sheet material to complex curved surfaces and avoids limitations and disadvantages presented in handling of fully cured sheet material.
Perturbative methods applied to sensitivity coefficient calculations in thermal-hydraulic systems
International Nuclear Information System (INIS)
Andrade Lima, F.R. de
1993-01-01
The differential formalism and the Generalized Perturbation Theory (GPT) are applied to sensitivity analysis of thermal-hydraulics problems related to pressurized water reactor cores. The equations describing the thermal-hydraulic behavior of these reactors cores, used in COBRA-IV-I code, are conveniently written. The importance function related to the response of interest and the sensitivity coefficient of this response with respect to various selected parameters are obtained by using Differential and Generalized Perturbation Theory. The comparison among the results obtained with the application of these perturbative methods and those obtained directly with the model developed in COBRA-IV-I code shows a very good agreement. (author)
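GPT itself obtains sensitivities through an adjoint (importance) function that is not reproduced here. Purely to make the target quantity concrete, the sketch below computes a relative sensitivity coefficient S = (p/R)(dR/dp) by direct perturbation of the input, which is the brute-force alternative that perturbation theory is designed to avoid when many parameters are involved; the response function is invented for the example.

```python
def sensitivity(response, p, rel_step=1e-6):
    """Relative sensitivity S = (p/R) * dR/dp via central finite difference."""
    dp = p * rel_step
    dRdp = (response(p + dp) - response(p - dp)) / (2 * dp)
    return p / response(p) * dRdp

# Toy response R(p) = p**3: the relative sensitivity should be exactly 3,
# independent of p, since (p/R) dR/dp = p * 3p^2 / p^3 = 3.
S = sensitivity(lambda p: p ** 3, 2.0)
print(round(S, 6))  # 3.0
```

Direct perturbation needs two model runs per parameter; the adjoint-based GPT route computes the importance function once and then evaluates all sensitivities from it, which is why it pays off for codes like COBRA-IV-I with many input parameters.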
Brezina, Tadej; Graser, Anita; Leth, Ulrich
2017-04-01
Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
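The polygon geometry behind the three estimators is not reproduced in the record. In the simplest reading, the third method (aggregating pedestrian areas within a buffered street axis) reduces to dividing the aggregated sidewalk area by the axis segment length; the sketch below implements only that reading, with invented figures, and is not the authors' implementation.

```python
def representative_width(sidewalk_areas_m2, segment_length_m):
    """Representative sidewalk width for one street-axis segment:
    total pedestrian area inside the axis buffer / segment length."""
    if segment_length_m <= 0:
        raise ValueError("segment length must be positive")
    return sum(sidewalk_areas_m2) / segment_length_m

# Two sidewalk polygons (both sides of a 150 m street): 300 + 270 m^2
print(representative_width([300.0, 270.0], 150.0))  # 3.8
```

Note this aggregate figure sums both sides of the street; the first two methods in the paper instead characterize each individual pedestrian polygon via its maximum inscribed and minimum circumscribing circles.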
Directory of Open Access Journals (Sweden)
Ismael de Moura Costa
2017-04-01
Full Text Available Introduction: This paper presents the evolution of the MAIA Method for Architecture of Information Applied, its structure, the results obtained and three practical applications. Objective: To propose a methodological construct for the treatment of complex information, distinguishing information spaces and revealing the configurations inherent to those spaces. Methodology: The argument is elaborated from theoretical research of an analytical nature, using distinction as a way to express concepts. Phenomenology is adopted as the philosophical position, which considers the correlation between Subject↔Object. The research also considers the notion of interpretation as an integrating element for the definition of concepts. With these postulates, the steps to transform information spaces are formulated. Results: The article shows how the method is structured to process information in its contexts, starting from a succession of evolutionary cycles, divided into moments, which, in their turn, evolve into transformation acts. Conclusions: The article presents possible applications not only as a scientific method, but also as a configuration tool for information spaces and as a generator of ontologies. Finally, it presents a brief summary of the analysis made by researchers who have already evaluated the method with respect to these three aspects.
Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme
Directory of Open Access Journals (Sweden)
Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin
2012-08-01
Full Text Available The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the methods of estimating the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is forced. An experimental study was performed using simulation work on the AFC scheme with KBTEPM applied to a two-planar manipulator, in which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Key Words: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.
Complex Method Mixed with PSO Applying to Optimization Design of Bridge Crane Girder
Directory of Open Access Journals (Sweden)
He Yan
2017-01-01
Full Text Available In engineering design, the basic complex method does not have enough global search ability for nonlinear optimization problems, so a complex method mixed with particle swarm optimization (PSO) is presented in this paper: the optimal particle, evaluated from the fitness function of the particle swarm, displaces the complex vertex in order to realize the optimality principle of the largest distance from the complex centre. This method is applied to the optimization design of the box girder of a bridge crane under constraint conditions. First, a mathematical model of the girder optimization is set up, in which the box girder cross-section area of the bridge crane is taken as the objective function, its four size parameters as design variables, and girder mechanical performance, manufacturing process, boundary sizes and other requirements as constraint conditions. Then the complex method mixed with PSO is used to solve the optimization design problem of the crane box girder as a constrained optimization problem, and the optimal results achieve the goal of lightweight design and reduce the crane manufacturing cost. Practical engineering calculation and comparative analysis with the basic complex method show that the method is reliable, practical and efficient.
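The hybrid complex-PSO scheme is described only at a high level in the record. As a reference point for the PSO half of the hybrid, here is a minimal standard global-best PSO minimizing an invented two-variable function; the coefficients, bounds and objective are illustrative assumptions, not the girder model.

```python
import random

def pso_minimize(f, dim, lo, hi, n=20, iters=100, seed=0):
    """Minimal global-best PSO: inertia plus cognitive and social pulls."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in x]                      # each particle's best-so-far
    gbest = min(pbest, key=f)[:]                   # swarm's best-so-far
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                v[i][d] = (0.7 * v[i][d]
                           + 1.5 * rng.random() * (pbest[i][d] - x[i][d])
                           + 1.5 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
                if f(x[i]) < f(gbest):
                    gbest = x[i][:]
    return gbest

# Invented objective with minimum at (1, -2)
best = pso_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, 2, -5.0, 5.0)
print([round(c, 2) for c in best])
```

In the paper's hybrid, the PSO-evaluated optimal particle replaces the worst complex vertex, combining PSO's global search with the complex method's constrained local refinement.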
Efficient alpha particle detection by CR-39 applying 50 Hz-HV electrochemical etching method
International Nuclear Information System (INIS)
Sohrabi, M.; Soltani, Z.
2016-01-01
Alpha particles can be detected by CR-39 by applying either chemical etching (CE), electrochemical etching (ECE), or combined pre-etching and ECE, usually through a multi-step HF-HV ECE process at temperatures much higher than room temperature. With pre-etching, the characteristic responses of fast-neutron-induced recoil tracks in CR-39 under HF-HV ECE versus KOH normality (N) show two high-sensitivity peaks around 5-6 and 15-16 N and a large-diameter peak with a minimum sensitivity around 10-11 N at 25°C. On the other hand, the 50 Hz-HV ECE method recently advanced in our laboratory detects alpha particles with high efficiency and a broad registration energy range with small ECE tracks in polycarbonate (PC) detectors. By taking advantage of the CR-39 sensitivity to alpha particles, the efficacy of the 50 Hz-HV ECE method and the exotic responses of CR-39 under different KOH normalities, the detection characteristics of 0.8 MeV alpha particle tracks were studied in 500 μm CR-39 for different fluences, ECE durations and KOH normalities. Alpha registration efficiency increased with ECE duration to 90 ± 2% after 6-8 h, beyond which plateaus are reached. Alpha track density versus fluence is linear up to 10^6 tracks cm^-2. The efficiency and mean track diameter versus alpha fluence up to 10^6 alphas cm^-2 decrease as the fluence increases. Background track density and minimum detection limit are linear functions of ECE duration and increase with normality. The CR-39 processed for the first time in this study by the 50 Hz-HV ECE method proved to provide a simple, efficient and practical alpha detection method at room temperature. - Highlights: • Alpha particles of 0.8 MeV were detected in CR-39 by 50 Hz-HV ECE method. • Efficiency/track diameter was studied vs fluence and time for 3 KOH normalities. • Background track density and minimum detection limit vs duration were studied. • A new simple, efficient and low-cost alpha detection method
A Modal-Based Substructure Method Applied to Nonlinear Rotordynamic Systems
Directory of Open Access Journals (Sweden)
Helmut J. Holl
2009-01-01
Full Text Available The discretisation of rotordynamic systems usually results in a high number of coordinates, so the computation of the solution of the equations of motion is very time consuming. An efficient semianalytic time-integration method combined with a substructure technique is given, which accounts for nonsymmetric matrices and local nonlinearities. The partitioning of the equation of motion into two substructures is performed. Symmetric and linear background systems are defined for each substructure. The excitation of the substructure comes from the given excitation force, the nonlinear restoring force, the induced force due to the gyroscopic and circulatory effects of the substructure under consideration and the coupling force of the substructures. The high effort for the analysis with complex numbers, which is necessary for nonsymmetric systems, is omitted. The solution is computed by means of an integral formulation. A suitable approximation for the unknown coordinates, which are involved in the coupling forces, has to be introduced and the integration results in Green's functions of the considered substructures. Modal analysis is performed for each linear and symmetric background system of the substructure. Modal reduction can be easily incorporated and the solution is calculated iteratively. The numerical behaviour of the algorithm is discussed and compared to other approximate methods of nonlinear structural dynamics for a benchmark problem and a representative example.
Method of decision tree applied in adopting the decision for promoting a company
Directory of Open Access Journals (Sweden)
Cezarina Adina TOFAN
2015-09-01
Full Text Available A decision can be defined as the way chosen from several possible ones to achieve an objective. An important role in the functioning of the decisional-informational system is held by the decision-making methods. Decision trees prove to be very useful tools for taking financial or numerical decisions, where a large amount of complex information must be considered. They provide an effective structure in which alternative decisions and the implications of their choice can be assessed, and they help to form a correct and balanced view of the risks and rewards that may result from a certain choice. For these reasons, this communication reviews a series of decision-making criteria. It also analyses the benefits of using the decision tree method in the decision-making process by providing a numerical example. On this basis, it can be concluded that the procedure may prove useful for making decisions in companies operating on markets where the intensity of competition is differentiated.
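The paper's numerical example is not reproduced in the abstract. To make the mechanics concrete, here is a hedged sketch of evaluating a decision tree by expected monetary value: chance nodes average their branches by probability, decision nodes take the best branch. The tree and the figures in it are invented for illustration.

```python
def evaluate(node):
    """Fold a decision tree to its expected monetary value (EMV).
    Leaves are payoffs; ("chance", [(p, child), ...]) averages by probability;
    ("decision", [(label, child), ...]) picks the branch with highest EMV."""
    if isinstance(node, (int, float)):
        return float(node)
    kind, branches = node
    if kind == "chance":
        return sum(p * evaluate(child) for p, child in branches)
    if kind == "decision":
        return max(evaluate(child) for _, child in branches)
    raise ValueError(f"unknown node kind: {kind}")

# Promote a product (60% success -> 100k, 40% failure -> -20k) or hold (0)
tree = ("decision", [
    ("promote", ("chance", [(0.6, 100_000), (0.4, -20_000)])),
    ("hold",    0),
])
print(evaluate(tree))  # 52000.0
```

Here the promotion branch has EMV 0.6 x 100,000 + 0.4 x (-20,000) = 52,000, so the decision node prefers it to holding; risk attitudes can be modeled by replacing payoffs with utilities before folding.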
A METHOD FOR PREPARING A SUBSTRATE BY APPLYING A SAMPLE TO BE ANALYSED
DEFF Research Database (Denmark)
2017-01-01
The invention relates to a method for preparing a substrate (105a) comprising a sample reception area (110) and a sensing area (111). The method comprises the steps of: 1) applying a sample on the sample reception area; 2) rotating the substrate around a predetermined axis; 3) during rotation......, at least part of the liquid travels from the sample reception area to the sensing area due to capillary forces acting between the liquid and the substrate; and 4) removing the wave of particles and liquid formed at one end of the substrate. The sensing area is closer to the predetermined axis than...... the sample reception area. The sample comprises a liquid part and particles suspended therein....
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
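As a rough illustration of how a plug-in rule turns an estimate of J(f) into a bandwidth, the sketch below substitutes the Gaussian-reference value of J(f) for the paper's analytical approximation; with that substitution the plug-in bandwidth reduces to the classical normal-reference rule.

```python
import math
import statistics

def gaussian_reference_J(sigma):
    """J(f) = integral of f''(x)^2 dx for a normal density: 3 / (8*sqrt(pi)*sigma^5)."""
    return 3.0 / (8.0 * math.sqrt(math.pi) * sigma ** 5)

def plug_in_bandwidth(sample):
    """MISE-optimal bandwidth for a Gaussian kernel: h = (R(K) / (n * J(f)))^(1/5)."""
    n = len(sample)
    J = gaussian_reference_J(statistics.stdev(sample))
    RK = 1.0 / (2.0 * math.sqrt(math.pi))  # roughness R(K) of the Gaussian kernel
    return (RK / (n * J)) ** 0.2

def kde(sample, h, x):
    """Gaussian kernel density estimate at point x."""
    n = len(sample)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
               for xi in sample) / (n * h * math.sqrt(2.0 * math.pi))
```

The paper's contribution lies precisely in replacing the normality assumption used above with a faster analytical approximation of J(f) that avoids re-estimating the pdf at every iteration.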
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
Directory of Open Access Journals (Sweden)
Samir Saoudi
2008-07-01
Full Text Available The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
Infrared thermography inspection methods applied to the target elements of W7-X divertor
Energy Technology Data Exchange (ETDEWEB)
Missirlian, M. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France)], E-mail: marc.missirlian@cea.fr; Traxler, H. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria); Boscary, J. [Max-Planck-Institut fuer Plasmaphysik, Euratom Association, Boltzmannstr. 2, D-85748 Garching (Germany); Durocher, A.; Escourbiac, F.; Schlosser, J. [Association Euratom-CEA, CEA/DSM/DRFC, CEA/Cadarache, F-13108 Saint Paul Lez Durance (France); Schedler, B.; Schuler, P. [PLANSEE SE, Technology Center, A-6600 Reutte (Austria)
2007-10-15
Non-destructive examination (NDE) is one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and the examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography, namely transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application.
Infrared thermography inspection methods applied to the target elements of W7-X divertor
International Nuclear Information System (INIS)
Missirlian, M.; Traxler, H.; Boscary, J.; Durocher, A.; Escourbiac, F.; Schlosser, J.; Schedler, B.; Schuler, P.
2007-01-01
Non-destructive examination (NDE) is one of the key issues in developing highly loaded plasma-facing components (PFCs) for next-generation fusion devices such as W7-X and ITER. The most critical step is certainly the fabrication and the examination of the bond between the armour and the heat sink. Two inspection systems based on infrared thermography, namely transient thermography (SATIR-CEA) and pulsed thermography (ARGUS-PLANSEE), are being developed and have been applied to the pre-series target elements of the W7-X divertor. Results obtained from qualification experiments performed on target elements with artificial calibrated defects demonstrated the capability of the two techniques and raised the efficiency of inspection to a level appropriate for industrial application.
The fundamental parameter method applied to X-ray fluorescence analysis with synchrotron radiation
Pantenburg, F. J.; Beier, T.; Hennrich, F.; Mommsen, H.
1992-05-01
Quantitative X-ray fluorescence analysis applying the fundamental parameter method is usually restricted to monochromatic excitation sources. It is shown here that such analyses can be performed as well with a white synchrotron radiation spectrum. To determine absolute elemental concentration values it is necessary to know the spectral distribution of this spectrum. A newly designed and tested experimental setup, which uses the synchrotron radiation emitted from electrons in a bending magnet of ELSA (the electron stretcher accelerator of the University of Bonn), is presented. The determination of the exciting spectrum, described by the given electron beam parameters, is limited by uncertainties in the vertical electron beam size and divergence. We describe a method which allows us to determine the relative and absolute spectral distributions needed for accurate analysis. First test measurements of different alloys and standards of known composition demonstrate that it is possible to determine exact concentration values in bulk and trace element analysis.
Super-convergence of Discontinuous Galerkin Method Applied to the Navier-Stokes Equations
Atkins, Harold L.
2009-01-01
The practical benefits of the hyper-accuracy properties of the discontinuous Galerkin method are examined. In particular, we demonstrate that some flow attributes exhibit super-convergence even in the absence of any post-processing technique. Theoretical analysis suggests that flow features dominated by global propagation speeds and decay or growth rates should be super-convergent. Several discrete forms of the discontinuous Galerkin method are applied to the simulation of unsteady viscous flow over a two-dimensional cylinder. Convergence of the period of the naturally occurring oscillation is examined and shown to occur at a rate of 2p + 1, where p is the polynomial degree of the discontinuous Galerkin basis. Comparisons are made between the different discretizations and with theoretical analysis.
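Convergence rates like the 2p + 1 quoted above are typically verified by computing an observed order from errors on successively refined grids. A minimal sketch, with hypothetical error values chosen to illustrate a fifth-order rate (p = 2):

```python
import math

def observed_order(e_coarse, e_fine, refinement=2.0):
    """Observed convergence rate from errors on two grids differing by `refinement`."""
    return math.log(e_coarse / e_fine) / math.log(refinement)

# Hypothetical errors in the oscillation period on three successively halved grids.
# Super-convergence at 2p + 1 = 5 means halving h should cut the error by ~2^5 = 32.
errors = [1.0e-3, 3.2e-5, 1.0e-6]
rates = [observed_order(errors[i], errors[i + 1]) for i in range(len(errors) - 1)]
```

If the computed rates settle near 5 as the grid is refined, the observed behavior matches the theoretical super-convergence rate for p = 2.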
Data Analytics of Mobile Serious Games: Applying Bayesian Data Analysis Methods
Directory of Open Access Journals (Sweden)
Heide Lukosch
2018-03-01
Full Text Available Traditional teaching methods in the field of resuscitation training show some limitations, while teaching the right actions in critical situations could increase the number of people saved after a cardiac arrest. For our study, we developed a mobile game to support the transfer of theoretical knowledge on resuscitation. The game was tested at three schools of further education, and data were collected from 171 players. To analyze this large data set, drawn from sources of differing type and quality, several types of data modeling and analysis had to be applied. This approach proved useful in analyzing the large set of data from different sources. It revealed some interesting findings, such as that female players outperformed male ones, and that the game, which fosters informal, self-directed learning, is as effective as the traditional formal learning method.
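One simple Bayesian analysis of the kind the title refers to, comparing the success rates of two player groups under independent Beta-Binomial models, can be sketched as follows. The counts are hypothetical, not the study's data.

```python
import random

def posterior_prob_greater(s_a, n_a, s_b, n_b, draws=20_000, seed=1):
    """Monte Carlo estimate of P(rate_A > rate_B) under Beta(1,1) priors
    and binomial likelihoods (s = successes, n = trials per group)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        rate_a = rng.betavariate(1 + s_a, 1 + n_a - s_a)
        rate_b = rng.betavariate(1 + s_b, 1 + n_b - s_b)
        if rate_a > rate_b:
            wins += 1
    return wins / draws

# Hypothetical task-success counts for two player groups (illustrative only).
p = posterior_prob_greater(70, 85, 55, 86)
```

A posterior probability close to 1 would support a claim like "group A outperformed group B" while automatically accounting for the group sizes.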
An input feature selection method applied to fuzzy neural networks for signal estimation
International Nuclear Information System (INIS)
Na, Man Gyun; Sim, Young Rok
2001-01-01
It is well known that the performance of a fuzzy neural network strongly depends on the input features selected for its training. In applications to sensor signal estimation, there are a large number of input variables related to an output. As the number of input variables increases, the required training time of a fuzzy neural network increases exponentially. Thus, it is essential to reduce the number of inputs to a fuzzy neural network and to select the optimum number of mutually independent inputs that are able to clearly define the input-output mapping. In this work, principal component analysis (PCA), genetic algorithms (GA) and probability theory are combined to select new important input features. The proposed feature selection method is applied to the signal estimation of the steam generator water level, the hot-leg flowrate, the pressurizer water level and the pressurizer pressure sensors in pressurized water reactors, and is compared with other input feature selection methods.
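The PCA ingredient of the combined method can be illustrated with a small sketch that ranks input features by their loading on the first principal component. This is only one piece of the paper's PCA/GA/probability pipeline, and the data and ranking criterion here are illustrative assumptions.

```python
def covariance_matrix(X):
    """Sample covariance matrix of data X (rows are observations)."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[0.0] * d for _ in range(d)]
    for row in X:
        centered = [row[j] - means[j] for j in range(d)]
        for i in range(d):
            for j in range(d):
                C[i][j] += centered[i] * centered[j] / (n - 1)
    return C

def leading_component(C, iters=200):
    """First principal axis via power iteration on the covariance matrix."""
    d = len(C)
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def rank_features(X):
    """Rank input features by the magnitude of their loading on the first PC."""
    v = leading_component(covariance_matrix(X))
    return sorted(range(len(v)), key=lambda j: -abs(v[j]))
```

In the paper this kind of variance-based ranking is combined with a genetic-algorithm search and probabilistic criteria to pick a small set of mutually independent inputs.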
Bolós, Xavier; Barde-Cabusson, Stéphanie; Pedrazzi, Dario; Martí, Joan; Casas, Albert; Lovera, Raúl; Nadal-Sala, Daniel
2014-05-01
Improving knowledge of the shallowest part of the feeding system of monogenetic volcanoes and its relationship with the subsurface geology is an important task. We applied high-precision geophysical techniques, namely self-potential and electrical resistivity tomography, to the exploration of the uppermost part of the substrate of La Garrotxa Volcanic Field, which is part of the European Cenozoic Rift System. Previous geophysical studies carried out in the same area at a less detailed scale were aimed at identifying deeper structures, and together they constitute the basis for establishing volcanic susceptibility in La Garrotxa. The self-potential study allowed us to identify key areas where electrical resistivity tomography could be conducted. Dykes and faults associated with several monogenetic cones were identified through the generation of resistivity models. The combined results confirm that the shallow tectonics controlling the distribution of the foci of eruptive activity in this volcanic zone correspond mainly to NNW-SSE, and secondarily to NNE-SSW, Neogene extensional fissures and faults, and concretely show the associated magmatic intrusions. These studies show that previous Alpine tectonic structures played no apparent role in controlling the loci of this volcanism. Furthermore, the results obtained show that the changes in eruption dynamics occurring at different vents located at relatively short distances in this volcanic area can be controlled by shallow stratigraphical, structural, and hydrogeological features underneath these monogenetic volcanoes. This study was partially funded by the Beca Ciutat d'Olot en Ciències Naturals and the European Commission (FP7 Theme: ENV.2011.1.3.3-1; Grant 282759: "VUELCO").
National Aeronautics and Space Administration — This is a textbook example, created for illustration purposes. The System takes inputs of Pt, Ps, and Alt, and calculates the Mach number using the Rayleigh Pitot...
Su, Hailin; Li, Hengde; Wang, Shi; Wang, Yangfan; Bao, Zhenmin
2017-02-01
Genomic selection is increasingly popular in animal and plant breeding industries around the world, as it can be applied early in life without impacting selection candidates. The objective of this study was to bring the advantages of genomic selection to scallop breeding. Two different genomic selection tools, MixP and gsbay, were applied to the genomic evaluation of simulated data and Zhikong scallop (Chlamys farreri) field data, and compared with the genomic best linear unbiased prediction (GBLUP) method, which has been applied widely. Our results showed that both MixP and gsbay could accurately estimate single-nucleotide polymorphism (SNP) marker effects, and thereby could be applied to the analysis of genomic estimated breeding values (GEBV). In simulated data from different scenarios, the accuracy of the GEBV ranged from 0.20 to 0.78 with MixP, from 0.21 to 0.67 with gsbay, and from 0.21 to 0.61 with GBLUP. Estimates made by MixP and gsbay are expected to be more reliable than those made by GBLUP. Predictions made by gsbay were more robust, while with MixP the computation is much faster, especially when dealing with large-scale data. These results suggest that the algorithms implemented by both MixP and gsbay are feasible for carrying out genomic selection in scallop breeding, and that more genotype data will be necessary to produce genomic estimated breeding values with a higher accuracy for the industry.
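Neither MixP nor gsbay is reproduced here, but the core computation such tools share with GBLUP, solving ridge-type equations for SNP marker effects and scoring GEBVs, can be sketched on a toy data set. The genotype matrix and effect sizes below are invented for illustration.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def snp_blup(X, y, lam):
    """SNP-BLUP (ridge) marker effects: (X'X + lam*I) b = X'y; GEBV = X b.
    X holds genotype codes (0/1/2 copies of an allele), y the phenotypes."""
    n, m = len(X), len(X[0])
    XtX = [[sum(X[r][i] * X[r][j] for r in range(n)) + (lam if i == j else 0.0)
            for j in range(m)] for i in range(m)]
    Xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(m)]
    b = solve(XtX, Xty)
    gebv = [sum(X[r][i] * b[i] for i in range(m)) for r in range(n)]
    return b, gebv
```

The Bayesian tools compared in the paper replace the single ridge penalty with marker-specific shrinkage, which is where their accuracy advantage over GBLUP comes from.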
Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander
2015-01-01
Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies
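The core of SCM, choosing convex donor weights so that the weighted pool reproduces the treated unit's pre-intervention outcomes, can be sketched for a toy three-donor pool. All series are hypothetical, and the brute-force grid search stands in for the constrained optimization used in practice.

```python
def pre_period_mse(treated, donors, weights):
    """Mean squared gap between the treated series and the weighted donor pool."""
    synth = [sum(w * d[t] for w, d in zip(weights, donors))
             for t in range(len(treated))]
    return sum((a - b) ** 2 for a, b in zip(treated, synth)) / len(treated)

def scm_weights(treated, donors, step=0.02):
    """Grid search over the 3-donor simplex (w_i >= 0, sum w_i = 1)."""
    best_w, best_mse = None, float("inf")
    k = round(1.0 / step)
    for i in range(k + 1):
        for j in range(k + 1 - i):
            w = (i * step, j * step, 1.0 - (i + j) * step)
            mse = pre_period_mse(treated, donors, w)
            if mse < best_mse:
                best_w, best_mse = w, mse
    return best_w
```

Once the weights are fixed on pre-intervention data, the weighted donor pool projected into the post-intervention period serves as the counterfactual against which the observed deforestation is compared.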
Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander
2015-01-01
Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on
Directory of Open Access Journals (Sweden)
Erin O Sills
Full Text Available Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and
Saulière, Guillaume; Dedecker, Jérôme; Marquet, Laurie-Anne; Rochcongar, Pierre; Toussaint, Jean-Francois; Berthelot, Geoffroy
2017-11-15
The clinical and biological follow-up of individuals, such as the biological passport for athletes, is typically based on the individual and longitudinal monitoring of hematological or urine markers. These follow-ups aim to identify abnormal behavior by comparing an individual's biological samples to an established baseline. The comparisons may be made in different ways, but each of them requires an appropriate extra population to compute the significance levels, which is a non-trivial issue. Moreover, it is not necessarily relevant to compare the measures of a biomarker of a professional athlete to those of a reference population (even one restricted to other athletes), and a reasonable alternative is to detect abnormal values by considering only the other measurements of the same athlete. Here we propose a simple adaptive statistic based on maxima of Z-scores that does not rely on the use of an extra population. We show that, in the Gaussian framework, it is a practical and relevant method for detecting abnormal values in a series of observations from the same individual. The distribution of this statistic does not depend on the individual parameters under the null hypothesis, and its quantiles can be computed using Monte Carlo simulations. The proposed method is tested on the 3-year follow-up of ferritin, serum iron, erythrocyte, hemoglobin, and hematocrit markers in 2577 elite male soccer players. For instance, if we consider the abnormal values for the hematocrit at a 5% level, we found that 5.57% of the selected cohort had at least one abnormal value (which is not significantly different from the expected false-discovery rate). The approach is a starting point for more elaborate models that would produce a refined individual baseline. The method can be extended to the Gaussian linear model, in order to include additional variables such as age or exposure to altitude. The method could also be applied to other domains, such as the clinical patient
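One plausible form of such a statistic (an assumption for illustration; the paper's exact definition may differ) standardizes each observation against the athlete's other measurements and takes the maximum. Under Gaussian observations the statistic is pivotal, so its null quantiles can be simulated once, without any reference population.

```python
import random
import statistics

def max_z(series):
    """Max absolute leave-one-out Z-score: each value is standardized
    by the mean and stdev of the individual's remaining observations."""
    zs = []
    for i, x in enumerate(series):
        rest = series[:i] + series[i + 1:]
        zs.append(abs(x - statistics.mean(rest)) / statistics.stdev(rest))
    return max(zs)

def null_quantile(n, level=0.95, sims=5000, seed=7):
    """Monte Carlo null quantile; location and scale cancel, so any
    Gaussian parameters give the same distribution."""
    rng = random.Random(seed)
    stats = sorted(max_z([rng.gauss(0.0, 1.0) for _ in range(n)])
                   for _ in range(sims))
    return stats[int(level * sims)]
```

A follow-up series whose max_z exceeds the simulated quantile for its length would be flagged as containing at least one abnormal value.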
Energy Technology Data Exchange (ETDEWEB)
Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States); Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada); Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States); Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States); Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)
2013-11-15
Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating the accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were
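The gamma passing-rate metric discussed above can be illustrated with a minimal 1D global gamma computation. Clinical implementations work in 2D/3D and interpolate between grid points; this discrete search on a shared grid is a simplification for illustration only.

```python
def gamma_index(ref, eval_dose, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """1D global gamma: for each reference point, minimize the combined
    dose-difference / distance-to-agreement metric over the evaluated profile.
    Dose difference is normalized to the reference maximum (global gamma)."""
    dmax = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(eval_dose):
            dd = (de - dr) / (dose_tol * dmax)
            dist = (j - i) * spacing_mm / dist_tol_mm
            best = min(best, (dd * dd + dist * dist) ** 0.5)
        gammas.append(best)
    return gammas

def passing_rate(gammas):
    """Fraction of points with gamma <= 1 (the usual pass criterion)."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)
```

The study's point is visible even in this toy: a loose global 3%/3 mm criterion can pass profiles that a 2%/2 mm locally normalized analysis, or direct dose-difference inspection, would flag.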
Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.
Directory of Open Access Journals (Sweden)
Nadia Said
Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
Labile soil phosphorus as influenced by methods of applying radioactive phosphorus
International Nuclear Information System (INIS)
Selvaratnam, V.V.; Andersen, A.J.; Thomsen, J.D.; Gissel-Nielsen, G.
1980-03-01
The influence of different methods of applying radioactive phosphorus on the E- and L-values was studied in four soil types using barley, buckwheat, and rye grass for the L-value determination. The four soils differed greatly in their E- and L-values. The experiment was carried out both with and without carrier-P. The presence of carrier-P had no influence on the E-values, while carrier-P in some cases gave a lower L-value. Both E- and L-values depended on the method of application. When the 32P was applied to a small soil or sand sample and dried before mixing with the total amount of soil, the E-values were higher than with direct application, most likely because of a stronger fixation to the soil/sand particles. This was not the case for the L-values, which are based on a much longer equilibration time. On the contrary, the direct application of the 32P solution to the whole amount of soil gave higher L-values because of a non-homogeneous distribution of the 32P in the soil. (author)
Analysis of coupled neutron-gamma radiations, applied to shieldings in multigroup albedo method
International Nuclear Information System (INIS)
Dunley, Leonardo Souza
2002-01-01
The principal mathematical tools frequently available for calculations in Nuclear Engineering, including coupled neutron-gamma radiation shielding problems, involve the full Transport Theory or Monte Carlo techniques. The Multigroup Albedo Method applied to shieldings is characterized by following the radiations through distinct layers of materials, allowing the determination of the neutron and gamma fractions reflected from, transmitted through and absorbed in the irradiated media when a neutron stream hits the first layer of material, independently of flux calculations. The method is thus a complementary tool of great didactic value due to its clarity and simplicity in solving neutron and/or gamma shielding problems. The outstanding results achieved in previous works motivated the elaboration and development of the study presented in this dissertation. The radiation balance resulting from the incidence of a neutron stream onto a shielding composed of 'm' slab layers, non-multiplying for neutrons, was determined by the Albedo method, considering 'n' energy groups for neutrons and 'g' energy groups for gammas. It was assumed that there is no upscattering of neutrons or gammas; however, neutrons of any energy group are able to produce gammas of all energy groups. The ANISN code, for an angular quadrature order S2, was used as the standard for comparison of the results obtained by the Albedo method. It was therefore necessary to choose an identical system configuration for both the ANISN and Albedo methods: six neutron energy groups, eight gamma energy groups, and three slab layers (iron - aluminum - manganese). The excellent results expressed in comparative tables show great agreement between the values determined by the deterministic code adopted as the standard and the values determined by the computational program created using the Albedo method and the algorithm developed for coupled neutron
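The layer-by-layer radiation balance idea can be illustrated in a one-group, neutron-only setting, a deliberate simplification of the dissertation's multigroup coupled neutron-gamma treatment. Each slab is summarized by reflection and transmission probabilities, and adjacent layers are combined with the standard two-layer albedo formulas that account for multiple back-and-forth reflections.

```python
from functools import reduce

def combine(front, back):
    """Compose two layers, each given as (reflection, transmission).
    The 1/(1 - r1*r2) factor sums the geometric series of inter-reflections."""
    r1, t1 = front
    r2, t2 = back
    denom = 1.0 - r1 * r2
    return (r1 + t1 * t1 * r2 / denom, t1 * t2 / denom)

def shield(layers):
    """Radiation balance for a stack of slab layers hit by a unit stream:
    whatever is neither reflected nor transmitted is absorbed."""
    R, T = reduce(combine, layers)
    return {"reflected": R, "transmitted": T, "absorbed": 1.0 - R - T}
```

Note that the composed transmission exceeds the naive product t1*t2, because the multiple-reflection terms give particles extra chances to cross; the balance R + T + A = 1 holds by construction, mirroring the flux-free bookkeeping the Albedo method performs per energy group.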
Raies, Arwa B.
2017-12-05
One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve model's performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.
Raies, Arwa B.; Bajic, Vladimir B.
2017-01-01
One goal of toxicity testing, among others, is identifying harmful effects of chemicals. Given the high demand for toxicity tests, it is necessary to conduct these tests for multiple toxicity endpoints for the same compound. Current computational toxicology methods aim at developing models mainly to predict a single toxicity endpoint. When chemicals cause several toxicity effects, one model is generated to predict toxicity for each endpoint, which can be labor and computationally intensive when the number of toxicity endpoints is large. Additionally, this approach does not take into consideration possible correlation between the endpoints. Therefore, there has been a recent shift in computational toxicity studies toward generating predictive models able to predict several toxicity endpoints by utilizing correlations between these endpoints. Applying such correlations jointly with compounds' features may improve model's performance and reduce the number of required models. This can be achieved through multi-label classification methods. These methods have not undergone comprehensive benchmarking in the domain of predictive toxicology. Therefore, we performed extensive benchmarking and analysis of over 19,000 multi-label classification models generated using combinations of the state-of-the-art methods. The methods have been evaluated from different perspectives using various metrics to assess their effectiveness. We were able to illustrate variability in the performance of the methods under several conditions. This review will help researchers to select the most suitable method for the problem at hand and provide a baseline for evaluating new approaches. Based on this analysis, we provided recommendations for potential future directions in this area.
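As a toy illustration of multi-label prediction and its evaluation (not one of the methods benchmarked in the review), the sketch below predicts all toxicity endpoints of a compound jointly from its nearest training compound and scores the result with Hamming loss, a standard multi-label metric.

```python
def euclid2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict_multilabel(X_train, Y_train, x):
    """Predict the full endpoint-label vector of x from its nearest
    training compound; copying all labels at once exploits any
    correlation between endpoints."""
    i = min(range(len(X_train)), key=lambda r: euclid2(X_train[r], x))
    return Y_train[i]

def hamming_loss(Y_true, Y_pred):
    """Fraction of (compound, endpoint) label assignments that disagree."""
    errs = sum(yt != yp
               for row_t, row_p in zip(Y_true, Y_pred)
               for yt, yp in zip(row_t, row_p))
    return errs / (len(Y_true) * len(Y_true[0]))
```

Real multi-label methods (classifier chains, label powerset, and similar) model the endpoint correlations explicitly rather than relying on neighborhood structure, but the input/output shape and the evaluation are the same.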
Energy Technology Data Exchange (ETDEWEB)
Meyer, L.; Witzel, G.; Ghez, A. M. [Department of Physics and Astronomy, University of California, Los Angeles, CA 90095-1547 (United States); Longstaff, F. A. [UCLA Anderson School of Management, University of California, Los Angeles, CA 90095-1481 (United States)
2014-08-10
Continuously time variable sources are often characterized by their power spectral density and flux distribution. These quantities can undergo dramatic changes over time if the underlying physical processes change. However, some changes can be subtle and not distinguishable using standard statistical approaches. Here, we report a methodology that aims to identify distinct but similar states of time variability. We apply this method to the Galactic supermassive black hole, where 2.2 μm flux is observed from a source associated with Sgr A* and where two distinct states have recently been suggested. Our approach is taken from mathematical finance and works with conditional flux density distributions that depend on the previous flux value. The discrete, unobserved (hidden) state variable is modeled as a stochastic process and the transition probabilities are inferred from the flux density time series. Using the most comprehensive data set to date, in which all Keck and a majority of the publicly available Very Large Telescope data have been merged, we show that Sgr A* is sufficiently described by a single intrinsic state. However, the observed flux densities exhibit two states: noise dominated and source dominated. Our methodology reported here will prove extremely useful to assess the effects of the putative gas cloud G2 that is on its way toward the black hole and might create a new state of variability.
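As a rough illustration of the state-identification idea (synthetic data only; this is not the conditional-flux-density model from mathematical finance used in the paper), one can classify each sample of a two-state flux series as noise dominated or source dominated and estimate the empirical transition probabilities between the states:

```python
import random

random.seed(1)

# Synthetic two-state flux series: low-mean "noise" state, high-mean "source"
# state, with a 10% chance of switching state at each step.
flux, state = [], 0
for _ in range(5000):
    if random.random() < 0.1:
        state = 1 - state
    flux.append(random.gauss(1.0 if state == 0 else 5.0, 0.5))

# Naive state assignment: threshold at the midpoint of the two means.
states = [0 if f < 3.0 else 1 for f in flux]

# Empirical transition matrix P[i][j] = P(next state j | current state i).
counts = [[0, 0], [0, 0]]
for a, b in zip(states, states[1:]):
    counts[a][b] += 1
P = [[c / sum(row) for c in row] for row in counts]
print(P)  # staying probabilities near 0.9 in both states
```

A genuine hidden-state analysis would infer the transition probabilities jointly with the state sequence rather than thresholding, but the estimated matrix plays the same role.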
Energy Technology Data Exchange (ETDEWEB)
Bernard, J; Gautier, A [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires; Peres, A [Israel Institute of Technology, Dept. of Nuclear Science Technion (Israel)
1958-07-01
As modern techniques develop more elaborate machines and move toward ever higher temperatures and pressures, thermal stresses become a matter of major importance in the design of mechanical structures. In the first part of this paper, the authors examine the problem from a theoretical standpoint and try to evaluate the ability and limitations of mathematical techniques to yield quantitative values of thermal stresses. The paper deals mainly with the experimental methods used to measure thermal stresses, and the authors show some examples relating to nuclear reactors. (author)
International Nuclear Information System (INIS)
Huh, Jae Sung; Kwak, Byung Man
2011-01-01
Robust optimization and reliability-based design optimization are methodologies employed to take the uncertainties of a system into account at the design stage. To apply such methodologies to industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required; furthermore, the results of the sensitivity analysis needed to determine the search direction during optimization should also be accurate. The aim of this study is to incorporate the function approximation moment method into the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The integral-form sensitivity analysis formula is efficient for evaluating sensitivity because no additional function evaluations are needed once the failure probability or statistical moments have been calculated.
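For orientation, the quantities the moment method targets can be illustrated with plain Monte Carlo on a hypothetical limit-state function g. This is not the function approximation moment method itself, only a sketch of the statistical moments and the failure probability P[g(X) < 0] that such methods are designed to estimate far more efficiently:

```python
import random

random.seed(0)

def g(x1, x2):
    # Hypothetical limit state: failure when g < 0.
    return 3.0 - (x1 + x2)

# Two independent standard-normal inputs.
samples = [g(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200_000)]

mean = sum(samples) / len(samples)                      # first moment
var = sum((s - mean) ** 2 for s in samples) / len(samples)  # second central moment
pf = sum(s < 0 for s in samples) / len(samples)         # failure probability

# Analytic check: g ~ Normal(3, sqrt(2)), so P[g < 0] = Phi(-3/sqrt(2)) ≈ 0.017.
print(mean, var, pf)
```

The cost of this brute-force estimate (many evaluations of g) is exactly what moment-based approximations and the integral-form sensitivity formula avoid.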
An IMU-to-Body Alignment Method Applied to Human Gait Analysis
Directory of Open Access Journals (Sweden)
Laura Susana Vargas-Valencia
2016-12-01
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
The Cn method applied to problems with an anisotropic diffusion law
International Nuclear Information System (INIS)
Grandjean, P.M.
A two-dimensional Cn calculation has been applied to homogeneous media subjected to the Rayleigh impact law. Results obtained with collision probability and Chandrasekhar calculations are compared to those from the Cn method. Introducing into the expression of the transport equation an expansion, truncated on a polynomial basis, for the outgoing angular flux (or possibly the entrance flux) gives two Cn systems of algebraic linear equations for the expansion coefficients. The matrix elements of these equations are the moments of the Green function in an infinite medium. The Green function is obtained through Fourier transformation of the integro-differential equation, and its moments are derived from their Fourier transforms through a numerical integration in the complex plane. The method has been used for calculating the albedo of semi-infinite media, the extrapolation length of the Milne problem, and the albedo and transmission factor of a slab (a concise study of convergence is presented). For the collision probability method, a system of integro-differential equations bearing on the moments of the angular flux inside the medium has been derived; it is numerically solved by approximating the bulk flux with step functions. The albedo in a semi-infinite medium has also been computed through the semi-analytical Chandrasekhar method, in which the outgoing flux is expressed as a function of the entrance flux by means of an integral whose kernel is numerically derived.
International Nuclear Information System (INIS)
Brodsky, A.
1979-01-01
Some recent reports of Mancuso, Stewart and Kneale claim findings of radiation-produced cancer in the Hanford worker population. These claims are based on statistical computations that use small differences in accumulated exposures between groups dying of cancer and groups dying of other causes; actual mortality and longevity were not reported. This paper presents a statistical method for evaluating actual mortality and longevity longitudinally over time, as applied in a primary analysis of the mortality experience of the Hanford worker population. Although available, this method was not utilized in the Mancuso-Stewart-Kneale paper. The author's preliminary longitudinal analysis shows that the gross mortality experience of persons employed at Hanford during the 1943-70 interval did not differ significantly from that of certain controls, when both employees and controls were selected from families with two or more offspring and comparisons were matched by age, sex, race and year of entry into employment. This result is consistent with findings reported by Sanders (Health Phys. vol.35, 521-538, 1978). The method utilizes an approximate chi-square (1 D.F.) statistic for testing population subgroup comparisons, as well as the cumulation of chi-squares (1 D.F.) for testing the overall result of a particular type of comparison. The method is available for computer testing of the Hanford mortality data, and could also be adapted to morbidity or other population studies. (author)
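The approximate chi-square (1 D.F.) statistic for a single subgroup comparison, and the cumulation of such statistics across comparisons, can be sketched as follows. The 2x2 counts below are made up for illustration and are not Hanford data:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (1 d.f.) for the 2x2 table [[a, b], [c, d]],
    e.g. deaths/survivors among exposed workers vs matched controls."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: 30 deaths among 1000 workers vs 25 among 1000 controls.
stat = chi_square_2x2(30, 970, 25, 975)

# Cumulate a second matched comparison of the same type (chi-squares with
# 1 d.f. add; the sum is chi-square with as many d.f. as terms).
total = stat + chi_square_2x2(12, 488, 15, 485)
print(stat, total)
```

Each single statistic is compared against the chi-square distribution with 1 degree of freedom (3.84 at the 5% level), and the cumulated sum against the distribution with as many degrees of freedom as comparisons.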
Directory of Open Access Journals (Sweden)
Koivistoinen Teemu
2007-01-01
As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of the feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo
2006-12-01
As we know, singular value decomposition (SVD) is designed for computing singular values (SVs) of a matrix. Then, if it is used for finding SVs of an m-by-1 or 1-by-m array with elements representing samples of a signal, it will return only one singular value that is not enough to express the whole signal. To overcome this problem, we designed a new kind of the feature extraction method which we call ''time-frequency moments singular value decomposition (TFM-SVD).'' In this new method, we use statistical features of time series as well as frequency series (Fourier transform of the signal). This information is then extracted into a certain matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results in using it indicate that the performance of a combined system including this transform and classifiers is comparable with the performance of using other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) for ballistocardiogram (BCG) data clustering to look for probable heart disease of six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph, developed in our project. This kind of device combined with automated recording and analysis would be suitable for use in many places, such as home, office, and so forth. The results show that the method has high performance and it is almost insensitive to BCG waveform latency or nonlinear disturbance.
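A minimal sketch of the TFM-SVD idea follows, using a toy sine signal, a simplified two-moment feature set, and a hand-rolled 2x2 singular-value routine; the paper's exact fixed matrix structure is not reproduced here. The point is that the raw 1-by-m signal is first condensed into a small moment matrix whose several singular values then serve as features:

```python
import cmath
import math

def moments(xs):
    """Mean and variance of a sequence (the simplified moment set used here)."""
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

def dft_magnitudes(xs):
    """Magnitudes of the discrete Fourier transform (naive O(n^2) DFT)."""
    n = len(xs)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(xs))) for k in range(n)]

def singular_values_2x2(a, b, c, d):
    """Singular values of [[a, b], [c, d]]: square roots of the
    eigenvalues of A^T A, via the quadratic formula."""
    p, q, r = a * a + c * c, b * b + d * d, a * b + c * d
    tr, det = p + q, p * q - r * r
    disc = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return math.sqrt((tr + disc) / 2), math.sqrt(max((tr - disc) / 2, 0.0))

signal = [math.sin(2 * math.pi * i / 8) for i in range(32)]
tm = moments(signal)                   # time-domain moments
fm = moments(dft_magnitudes(signal))   # frequency-domain moments
svs = singular_values_2x2(tm[0], tm[1], fm[0], fm[1])
print(svs)  # two singular values instead of the single SV of the raw signal
```

An SVD applied directly to the 1-by-32 signal would return only its Euclidean norm; the moment matrix yields a richer, fixed-length feature vector.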
Energy Technology Data Exchange (ETDEWEB)
Vesisenaho, T [VTT Energy, Jyvaeskylae (Finland); Liukkonen, S [VTT Manufacturing Technology, Espoo (Finland)
1997-12-01
The objective of this project is to apply the whole-tree harvesting method to Finnish timber harvesting conditions in order to lower the harvesting costs of energy wood and timber in spruce-dominant final cuttings. In Finnish conditions timber harvesting is normally based on the log-length method. Because of small landings and the large share of thinning cuttings, whole-tree skidding methods cannot be utilised extensively. The share of stands which could be harvested with the whole-tree skidding method turned out to be about 10 % of the total harvesting amount of 50 mill. m³. The corresponding harvesting potential of energy wood is 0.25 Mtoe. The aim of the structural measurements made in this project was to get information about the effect of different hauling methods on the structural response of the tractor, and thus reveal the possible special requirements that whole-tree skidding places on forest tractor design. Altogether 7 strain-gauge-based sensors were mounted on the rear frame structures and drive shafts of the forest tractor. Five strain gauges measured local strains in critical details and two sensors measured the torque moments of the front and rear bogie drive shafts. The revolution speed of the rear drive shaft was also recorded. Signal time histories, maximum peaks, Time at Level distributions and Rainflow distributions were gathered in different hauling modes. From these, maximum values, average stress levels and fatigue life estimates were calculated for each mode, and a comparison of the different methods from the structural point of view was performed.
Brucellosis Prevention Program: Applying “Child to Family Health Education” Method
Directory of Open Access Journals (Sweden)
H. Allahverdipour
2010-04-01
Introduction & Objective: Pupils have strong potential to increase community awareness and promote community health through participating in health education programs. The child-to-family health education program is one of the communicative strategies that was applied in this field trial study. Because of the high prevalence of Brucellosis in Hamadan province, Iran, the aim of this study was to promote families' knowledge and preventive behaviors regarding Brucellosis in rural areas by using the child-to-family health education method. Materials & Methods: In this nonequivalent control group design study, three rural schools were chosen (one as intervention and two as controls). At first, the knowledge and behavior of families about Brucellosis were determined using a designed questionnaire. Then the families were educated through the child-to-family procedure: the students first gained the information and were then instructed to teach their parents what they had learned. Three months after the last education session, the level of knowledge and the behavior changes of the families regarding Brucellosis were determined and analyzed by paired t-test. Results: The results showed significant improvement in the knowledge of the mothers. The mothers' knowledge about the signs of Brucellosis in humans increased from 1.81 to 3.79 (t = -21.64, p < 0.001), and their knowledge about the signs of Brucellosis in animals increased from 1.48 to 2.82 (t = -10.60, p < 0.001). Conclusion: The child-to-family health education program is an effective and accessible method that would be useful in most communities, and students' potential could be harnessed in health promotion programs.
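The paired t-test used for the before/after knowledge scores can be sketched as follows; the scores below are made-up toy values, not the study data:

```python
import math

def paired_t(before, after):
    """Paired t statistic (n-1 degrees of freedom) for before/after scores."""
    d = [a - b for a, b in zip(after, before)]   # per-family differences
    n = len(d)
    mean = sum(d) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))
    return mean / (sd / math.sqrt(n))

# Hypothetical knowledge scores for six families, before and after education.
t = paired_t(before=[2, 1, 2, 3, 1, 2], after=[4, 3, 4, 4, 3, 4])
print(t)  # → 11.0 (large positive t: scores rose consistently)
```

The statistic is then compared against the t distribution with n-1 degrees of freedom; the study's reported t values of -21.64 and -10.60 correspond to p < 0.001.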
International Nuclear Information System (INIS)
Yiin, L.-M.; Lu, S.-E.; Sannoh, Sulaiman; Lim, B.S.; Rhoads, G.G.
2004-01-01
We conducted a cleaning trial in 40 northern New Jersey homes where home renovation and remodeling (R and R) activities were undertaken. Two cleaning protocols were used in the study: a specific method recommended by the US Department of Housing and Urban Development (HUD), in the 1995 'Guidelines for the Evaluation and Control of Lead-Based Paint Hazards in Housing', using a high-efficiency particulate air (HEPA)-filtered vacuum cleaner and a tri-sodium phosphate solution (TSP); and an alternative method using a household vacuum cleaner and a household detergent. Eligible homes were built before the 1970s with potential lead-based paint and had recent R and R activities without thorough cleaning. The two cleaning protocols were randomly assigned to the participants' homes and followed the HUD-recommended three-step procedure: vacuuming, wet washing, and repeat vacuuming. Wipe sampling was conducted on floor surfaces or windowsills before and after cleaning to evaluate the efficacy. All floor and windowsill data indicated that both methods (TSP/HEPA and non-TSP/non-HEPA) were effective in reducing lead loading on the surfaces (P<0.001). When cleaning was applied to surfaces with initial lead loading above the clearance standards, the reductions were even greater, above 95% for either cleaning method. The mixed-effect model analysis showed no significant difference between the two methods. Baseline lead loading was found to be associated with lead loading reduction significantly on floors (P<0.001) and marginally on windowsills (P=0.077). Such relations were different between the two cleaning methods significantly on floors (P<0.001) and marginally on windowsills (P=0.066), with the TSP/HEPA method being favored for higher baseline levels and the non-TSP/non-HEPA method for lower baseline levels. For the 10 homes with lead abatement, almost all post-cleaning lead loadings were below the standards using either cleaning method. Based on our results, we recommend that
Method for pulse to pulse dose reproducibility applied to electron linear accelerators
International Nuclear Information System (INIS)
Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.
2002-01-01
An original method for obtaining programmed beam single shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single-pulse dose measurement, or research experiments where pulse-to-pulse dose reproducibility is required. This method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built at NILPRP. To implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A, 4 ms) and magnetron (45 kV, 100 A, 4 ms). The existence of the accelerated electron beam is determined by the overlapping of the electron gun and magnetron pulses. The method consists in controlling the overlapping of the pulses in order to deliver the beam in the desired sequence. This control is implemented by a discrete pulse-position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS controls the electron gun and magnetron pulse overlapping and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. Programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining the pulse to pulse dose reproducibility: the method
Winchester, David E; Burkart, Thomas A; Choi, Calvin Y; McKillop, Matthew S; Beyth, Rebecca J; Dahm, Phillipp
2016-06-01
Training in quality improvement (QI) is a pillar of the next accreditation system of the Accreditation Council for Graduate Medical Education and a growing expectation of physicians for maintenance of certification. Despite this, many postgraduate medical trainees are not receiving training in QI methods. We created the Fellows Applied Quality Training (FAQT) curriculum for cardiology fellows using both didactic and applied components with the goal of increasing confidence to participate in future QI projects. Fellows completed didactic training from the Institute for Healthcare Improvement's Open School and then designed and completed a project to improve quality of care or patient safety. Self-assessments were completed by the fellows before, during, and after the first year of the curriculum. The primary outcome for our curriculum was the median score reported by the fellows regarding their self-confidence to complete QI activities. Self-assessments were completed by 23 fellows. The majority of fellows (15 of 23, 65.2%) reported no prior formal QI training. The median score on the baseline self-assessment was 3.0 (range, 1.85-4), which increased significantly to 3.27 (range, 2.23-4; P = 0.004) on the final assessment. The distribution of scores reported by the fellows indicates that 30% were only slightly confident at conducting QI activities on their own, which was reduced to 5% after completing the FAQT curriculum. An interim assessment was conducted after the fellows completed didactic training only; median scores were not different from the baseline (median, 3.0; P = 0.51). After completion of the FAQT, cardiology fellows reported higher self-confidence to complete QI activities. The increase in self-confidence seemed to be limited to the applied component of the curriculum, with no significant change after the didactic component.
Applying system engineering methods to site characterization research for nuclear waste repositories
International Nuclear Information System (INIS)
Woods, T.W.
1985-01-01
Nuclear research and engineering projects can benefit from the use of system engineering methods. This paper is a brief overview illustrating how system engineering methods could be applied in structuring a site characterization effort for a candidate nuclear waste repository. System engineering is simply an orderly process that has been widely used to transform a recognized need into a fully defined system. Such a system may be physical or abstract, natural or man-made, hardware or procedural, as is appropriate to the system's need or objective. It is a way of mentally visualizing all the constituent elements and their relationships necessary to fulfill a need, and doing so in compliance with all the constraining requirements attendant to that need. Such a systems approach provides completeness, order, clarity, and direction. Admittedly, system engineering can be burdensome and inappropriate for those project objectives having simple and familiar solutions that are easily held and controlled mentally. However, some type of documented and structured approach is needed for those objectives that dictate extensive, unique, or complex programs, and/or the creation of state-of-the-art machines and facilities. System engineering methods have been used extensively and successfully in these cases. The scientific method has served well in ordering countless technical undertakings that address a specific question. Similarly, conventional construction and engineering job methods will continue to be quite adequate for organizing routine building projects. Nuclear waste repository site characterization projects, however, involve multiple complex research questions and regulatory requirements that interface with each other and with advanced engineering and subsurface construction techniques. There is little doubt that system engineering is an appropriate orchestrating process to structure such diverse elements into a cohesive, well-defined project.
A Precise Method for Cloth Configuration Parsing Applied to Single-Arm Flattening
Directory of Open Access Journals (Sweden)
Li Sun
2016-04-01
Full Text Available In this paper, we investigate the contribution that visual perception affords to a robotic manipulation task in which a crumpled garment is flattened by eliminating visually detected wrinkles. In order to explore and validate visually guided clothing manipulation in a repeatable and controlled environment, we have developed a hand-eye interactive virtual robot manipulation system that incorporates a clothing simulator to close the effector-garment-visual sensing interaction loop. We present the technical details and compare the performance of two different methods for detecting, representing and interpreting wrinkles within clothing surfaces captured in high-resolution depth maps. The first method we present relies upon a clustering-based method for localizing and parametrizing wrinkles, while the second method adopts a more advanced geometry-based approach in which shape-topology analysis underpins the identification of the cloth configuration (i.e., maps wrinkles. Having interpreted the state of the cloth configuration by means of either of these methods, a heuristic-based flattening strategy is then executed to infer the appropriate forces, their directions and gripper contact locations that must be applied to the cloth in order to flatten the perceived wrinkles. A greedy approach, which attempts to flatten the largest detected wrinkle for each perception-iteration cycle, has been successfully adopted in this work. We present the results of our heuristic-based flattening methodology which relies upon clustering-based and geometry-based features respectively. Our experiments indicate that geometry-based features have the potential to provide a greater degree of clothing configuration understanding and, as a consequence, improve flattening performance. The results of experiments using a real robot (as opposed to simulated robot also confirm our proposition that a more effective visual perception system can advance the performance of cloth
Specific algorithm method of scoring the Clock Drawing Test applied in cognitively normal elderly
Directory of Open Access Journals (Sweden)
Liana Chaves Mendes-Santos
The Clock Drawing Test (CDT) is an inexpensive, fast and easily administered measure of cognitive function, especially in the elderly. This instrument is a popular clinical tool widely used in screening for cognitive disorders and dementia. The CDT can be applied in different ways and scoring procedures also vary. OBJECTIVE: The aims of this study were to analyze the performance of elderly on the CDT and evaluate the inter-rater reliability of the CDT scored by using a specific algorithm method adapted from Sunderland et al. (1989). METHODS: We analyzed the CDT of 100 cognitively normal elderly aged 60 years or older. The CDT ("free-drawn") and Mini-Mental State Examination (MMSE) were administered to all participants. Six independent examiners scored the CDT of 30 participants to evaluate inter-rater reliability. RESULTS AND CONCLUSION: A score of 5 on the proposed algorithm ("Numbers in reverse order or concentrated"), equivalent to 5 points on the original Sunderland scale, was the most frequent (53.5%). The CDT specific algorithm method used had high inter-rater reliability (p<0.01), and the mean score ranged from 5.06 to 5.96. The high frequency of an overall score of 5 points may suggest the need to create more nuanced evaluation criteria, which are sensitive to differences in levels of impairment in visuoconstructive and executive abilities during aging.
Energy Technology Data Exchange (ETDEWEB)
Zerbst, U.; Beeck, F.; Scheider, I.; Brocks, W. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Werkstofforschung
1998-11-01
Under the umbrella of SINTAP (Structural Integrity Assessment Procedures for European Industry), a European BRITE-EURAM project, a study is being carried out into the possibility of establishing, on the basis of existing models, a standard European flaw assessment method. The R6 routine and the ETM are important existing examples in this context. The paper presents the two methods, explaining their advantages and shortcomings as well as their common features. Their applicability is shown by experiments with two pressure vessels subject to internal pressure and flawed by a surface crack and a through-wall crack, respectively. Both the R6 and the ETM results have been compared with results of component tests carried out in the 1980s at TWI and are found to yield acceptably conservative, i.e. sufficiently safe, lifetime predictions, as they do not give lifetime assessments which unduly underestimate the effects of flaws under operational loads. (orig./CB)
An acceleration technique for the Gauss-Seidel method applied to symmetric linear systems
Directory of Open Access Journals (Sweden)
Jesús Cajigas
2014-06-01
Full Text Available A preconditioning technique is proposed to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that under certain conditions, applying the preconditioner a finite number of times reduces the system matrix to a diagonal. A series of numerical experiments using matrices from spatial discretizations of partial differential equations demonstrates that both versions of the preconditioner, point and block, exhibit lower iteration counts than the non-symmetric version.
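The abstract's baseline method, the classical Gauss-Seidel iteration for a symmetric linear system, can be sketched as follows. This is an illustrative implementation only, not the paper's I + K preconditioner; the test matrix is a hypothetical 1D Poisson system.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Plain Gauss-Seidel iteration: solve Ax = b by sweeping
    through the equations, always using the most recent values."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x, k + 1
    return x, max_iter

# Symmetric positive definite test system (1D Poisson matrix)
A = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
b = np.array([1., 0., 1.])
x, iters = gauss_seidel(A, b)
print(np.allclose(A @ x, b))  # → True
```

For symmetric positive definite matrices such as this one, convergence of the sweep is guaranteed; the paper's contribution is a symmetry-preserving preconditioner that reduces the number of such sweeps.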
A method of applying two-pump system in automatic transmissions for energy conservation
Directory of Open Access Journals (Sweden)
Peng Dong
2015-06-01
Full Text Available In order to improve the hydraulic efficiency, modern automatic transmissions tend to apply electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start–stop function can be realized by means of the electric oil pump; thus, the fuel consumption can be further reduced. This article proposes a method of applying two-pump system (one electric oil pump and one mechanical oil pump in automatic transmissions based on the forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss transfers to heat which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results of different driving cycles show that there is a best combination of the size of electric oil pump and the size of mechanical oil pump with respect to the optimal energy conservation. Besides, the two-pump system can also satisfy the requirement of the start–stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and start–stop function.
Soil Improvement By Jet Grout Method And Geogrid Against Liquefaction: Example Of Samsun-Tekkeköy
Öztürk, Seda; Banu İkizler, S.; Şadoǧlu, Erol; Dadaşbilge, Ozan; Angın, Zekai
2017-04-01
scenarios of earthquakes with magnitudes of 6.0, 6.5, 7.0 and 7.2. As a result of the analyses, it was deemed necessary to improve the soil in order to prevent or reduce the liquefaction effects which may occur in a possible earthquake, given the liquefaction potential in the research area. For this purpose, the jet grouting method and a geogrid fill system, both widely used in Turkey, were chosen as appropriate improvement methods. Geogrids are strong in tension, so they are commonly used to reinforce subsoils below foundations. Additionally, the jet grouting method provides high bearing capacity, offers a solution to settlement problems, can be applied to almost any kind of soil, and has a short production period. Within this scope, an optimal solution was obtained with 616 jet grout columns, 8 m and 12 m long and 0.65 m in diameter, with geogrid-reinforced fill laid on the columns. Thus, not only was the risk of liquefaction eliminated, but a more than threefold improvement in the bearing capacity of the foundation was also achieved. In addition, the required quality control tests were carried out for the jet grout columns built in the research area and no adverse effects were observed. Key words: Liquefaction, soil improvement, jet grouting, geogrid
[Influence of Sex and Age on Contrast Sensitivity Subject to the Applied Method].
Darius, Sabine; Bergmann, Lisa; Blaschke, Saskia; Böckelmann, Irina
2018-02-01
The aim of the study was to detect gender and age differences in both photopic and mesopic contrast sensitivity with different methods, in relation to the German driver's license regulations (Fahrerlaubnisverordnung; FeV). We examined 134 healthy volunteers (53 men, 81 women) aged between 18 and 76 years, who had been divided into two groups (AG I Mars charts under standardized illumination were applied for photopic contrast sensitivity. We could not find any gender differences. When evaluating age, there were no differences between the two groups for the Mars charts or in the Rodatest. In all other tests, the younger volunteers achieved significantly better results. For contrast vision, age-adapted cut-off values exist. Concerning the driving safety of traffic participants, sufficient photopic and mesopic contrast vision should be the focus, independent of age. Therefore, there is a need to reconsider the age-adapted cut-off values. Georg Thieme Verlag KG Stuttgart · New York.
Study of different ultrasonic focusing methods applied to non destructive testing
International Nuclear Information System (INIS)
El Amrani, M.
1995-01-01
The work presented in this thesis concerns the study of different ultrasonic focusing techniques applied to non-destructive testing (mechanical focusing and electronic focusing) and compares their capabilities. We have developed a model to predict the ultrasonic field radiated into a solid by water-coupled transducers. The model is based upon the Rayleigh integral formulation, modified to take into account the refraction at the liquid-solid interface. The model has been validated by numerous experiments in various configurations. Running this model and the associated software, we have developed new methods to optimize focused transducers and studied the characteristics of the beam generated by transducers using various focusing techniques. (author). 120 refs., 95 figs., 4 appends
Applying RP-FDM Technology to Produce Prototype Castings Using the Investment Casting Method
Directory of Open Access Journals (Sweden)
M. Macků
2012-09-01
Full Text Available The research focused on the production of prototype castings, which is mapped out starting from the drawing documentation up to the production of the casting itself. The FDM method was applied for the production of the 3D pattern. Its main objective was to find out what dimensional changes happened during individual production stages, starting from the 3D pattern printing through a silicon mould production, wax patterns casting, making shells, melting out wax from shells and drying, up to the production of the final casting itself. Five measurements of determined dimensions were made during the production, which were processed and evaluated mathematically. A determination of shrinkage and a proposal of measures to maintain the dimensional stability of the final casting so as to meet requirements specified by a customer were the results.
Directory of Open Access Journals (Sweden)
J. Szymszal
2009-01-01
Full Text Available The study discusses the application of computer simulation based on the method of the inverse cumulative distribution function. The simulation refers to an elementary static case, which can also be solved by physical experiment, consisting mainly in observations of foundry production in a selected foundry plant. For the simulation and forecasting of foundry production quality in a selected cast iron grade, the random number generator of an Excel calculation sheet was chosen. The very wide potential of this type of simulation when applied to the evaluation of foundry production quality was demonstrated, using a uniform-distribution number generator to generate a variable of an arbitrary distribution, especially of a preset empirical distribution, without any need to fit smooth theoretical distributions to this variable.
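The inverse cumulative distribution function technique described above, mapping uniform random numbers onto an arbitrary empirical distribution, can be sketched in a few lines. The quality grades and their probabilities below are hypothetical stand-ins for the foundry data.

```python
import random
import bisect

def inverse_cdf_sampler(values, probabilities):
    """Build a sampler for an arbitrary discrete (e.g. empirical)
    distribution via the inverse-CDF method: a uniform number u is
    mapped to the first value whose cumulative probability exceeds u."""
    cum, total = [], 0.0
    for p in probabilities:
        total += p
        cum.append(total)
    def sample():
        u = random.random() * total
        return values[bisect.bisect_right(cum, u)]
    return sample

# Hypothetical empirical distribution of casting quality grades
random.seed(42)
sample = inverse_cdf_sampler(["good", "acceptable", "reject"], [0.7, 0.25, 0.05])
draws = [sample() for _ in range(10000)]
print(abs(draws.count("good") / 10000 - 0.7) < 0.05)  # → True
```

The same mapping is what an Excel sheet performs when RAND() is fed through a lookup over a cumulative frequency column.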
Comparison of gradient methods for gain tuning of a PD controller applied on a quadrotor system
Kim, Jinho; Wilkerson, Stephen A.; Gadsden, S. Andrew
2016-05-01
Many mechanical and electrical systems have utilized the proportional-integral-derivative (PID) control strategy. The concept of PID control is a classical approach, but it is easy to implement and yields very good tracking performance. Unmanned aerial vehicles (UAVs) are currently experiencing a significant growth in popularity. Due to the advantages of PID controllers, UAVs are implementing PID controllers for improved stability and performance. An important consideration for the system is the selection of PID gain values in order to achieve a safe flight and successful mission. There are a number of different algorithms that can be used for real-time tuning of gains. This paper presents two algorithms for gain tuning, based on the method of steepest descent and on Newton's method for minimizing an objective function. This paper compares the results of applying these two gain-tuning algorithms in conjunction with a PD controller on a quadrotor system.
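Steepest-descent gain tuning of the kind described can be sketched on a toy plant. The double integrator below is a crude stand-in for one quadrotor axis, and the cost, step size, and gradient scheme (central finite differences) are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def tracking_cost(kp, kd, dt=0.01, steps=500):
    """Integrated squared tracking error of a PD-controlled double
    integrator following a unit step reference (Euler simulation)."""
    x, v, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - x
        u = kp * e - kd * v          # PD control law
        v += u * dt
        x += v * dt
        cost += e * e * dt
    return cost

# Steepest descent on the two gains with finite-difference gradients
gains = np.array([1.0, 1.0])         # initial [kp, kd] guess
alpha, h = 0.5, 1e-4                 # step size and FD perturbation
for _ in range(100):
    g = np.array([
        (tracking_cost(gains[0] + h, gains[1]) - tracking_cost(gains[0] - h, gains[1])) / (2 * h),
        (tracking_cost(gains[0], gains[1] + h) - tracking_cost(gains[0], gains[1] - h)) / (2 * h),
    ])
    gains -= alpha * g               # descend along the negative gradient
print(tracking_cost(*gains) < tracking_cost(1.0, 1.0))  # → True
```

A Newton variant would replace the gradient step with a Hessian-scaled step, typically converging in fewer iterations at the cost of second-derivative estimates.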
Adding randomness controlling parameters in GRASP method applied in school timetabling problem
Directory of Open Access Journals (Sweden)
Renato Santos Pereira
2017-09-01
Full Text Available This paper studies the influence of randomness controlling parameters (RCP) in the first-stage GRASP method applied to the graph coloring problem, specifically school timetabling problems in a public high school. The algorithm (with the inclusion of RCP) was based on critical variables identified through focus groups, whose weights can be adjusted by the user in order to meet institutional needs. The results of the computational experiment, with 11 years of data (66 observations) processed at the same high school, show that the inclusion of RCP significantly lowers the distance between initial solutions and local minima. The acceptance and use of the solutions found allow us to conclude that the modified GRASP, as constructed, can make a positive contribution to the timetabling problem of the school in question.
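The first-stage GRASP construction that such randomness-controlling parameters govern can be sketched generically: a restricted candidate list (RCL) is built from all candidates whose greedy cost falls within an alpha-controlled band, and one member is drawn at random. The toy cost function below is hypothetical, not the paper's timetabling objective.

```python
import random

def grasp_construct(candidates, cost, alpha, rng):
    """One GRASP construction pass. alpha in [0, 1] is the
    randomness-controlling parameter: 0 = pure greedy, 1 = pure random.
    Each step draws uniformly from the restricted candidate list (RCL)."""
    solution, remaining = [], list(candidates)
    while remaining:
        costs = {c: cost(c, solution) for c in remaining}
        c_min, c_max = min(costs.values()), max(costs.values())
        threshold = c_min + alpha * (c_max - c_min)
        rcl = [c for c in remaining if costs[c] <= threshold]
        choice = rng.choice(rcl)
        solution.append(choice)
        remaining.remove(choice)
    return solution

# Toy check: ordering numbers with cost = value; alpha = 0 must
# reproduce the pure greedy (sorted) order.
rng = random.Random(7)
greedy = grasp_construct([5, 3, 8, 1], lambda c, s: c, 0.0, rng)
print(greedy)  # → [1, 3, 5, 8]
```

Raising alpha widens the RCL, which diversifies the initial solutions handed to the local-search phase — the trade-off the paper's parameters tune.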
Applied methods and techniques for mechatronic systems modelling, identification and control
Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya
2014-01-01
Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper, background - motivation - quantitative development (equations) - case studies/illustration/tutorial (curve, table, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...
Applied methods for mitigation of damage by stress corrosion in BWR type reactors
International Nuclear Information System (INIS)
Hernandez C, R.; Diaz S, A.; Gachuz M, M.; Arganis J, C.
1998-01-01
Boiling water reactors (BWR) have presented stress corrosion problems, mainly in components and piping of the primary system, causing negative impacts on the performance of power plants as well as increased radiation exposure of the personnel involved. This problem has driven the development of research programs aimed at finding alternative solutions for controlling the phenomenon. Among the most relevant results are control of the reactor water chemistry, particularly of impurity concentrations and oxidizing radiolysis products, as well as care in materials selection and the reduction of stress levels. The present work presents the methods which can be applied to diminish stress corrosion problems in BWR reactors. (Author)
An implict LU scheme for the Euler equations applied to arbitrary cascades. [new method of factoring
Buratynski, E. K.; Caughey, D. A.
1984-01-01
An implicit scheme for solving the Euler equations is derived and demonstrated. The alternating-direction implicit (ADI) technique is modified, using two implicit-operator factors corresponding to lower-block-diagonal (L) or upper-block-diagonal (U) algebraic systems which can be easily inverted. The resulting LU scheme is implemented in finite-volume mode and applied to 2D subsonic and transonic cascade flows with differing degrees of geometric complexity. The results are presented graphically and found to be in good agreement with those of other numerical and analytical approaches. The LU method is also 2.0-3.4 times faster than ADI, suggesting its value in calculating 3D problems.
International Nuclear Information System (INIS)
Klose, G.
1999-01-01
Lyotropic mesophases possess lattice dimensions of the order of magnitude of the length of their molecules. Consequently, the first Bragg reflections of such systems appear at small scattering angles (small angle scattering). A combination of scattering and NMR methods was applied to study structural properties of POPC/C12En mixtures. Generally, the ranges of existence of the liquid crystalline lamellar phase, the dimension of the unit-cell of the lamellae and important structural parameters of the lipid and surfactant molecules in the mixed bilayers were determined. With that, the POPC/C12E4 bilayer represents one of the best structurally characterized mixed model membranes. It is a good starting system for studying the interrelation with other, e.g. dynamic or thermodynamic, properties. (K.A.)
Štambuk, Nikola; Manojlović, Zoran; Turčić, Petra; Martinić, Roko; Konjevoda, Paško; Weitner, Tin; Wardega, Piotr; Gabričević, Mario
2014-01-01
Antisense peptide technology is a valuable tool for deriving new biologically active molecules and performing peptide–receptor modulation. It is based on the fact that peptides specified by the complementary (antisense) nucleotide sequences often bind to each other with a higher specificity and efficacy. We tested the validity of this concept on the example of human erythropoietin, a well-characterized and pharmacologically relevant hematopoietic growth factor. The purpose of the work was to ...
International Nuclear Information System (INIS)
Swiderska-Kowalczyk, M.; Gomez, F.J.; Martin, M.
1997-01-01
In aerosol research, aerosols of known size, shape, and density are highly desirable because most aerosol properties depend strongly on particle size. However, constant and reproducible generation of aerosol particles whose size and concentration can be easily controlled can be achieved only in laboratory-scale tests. In large scale experiments, different generation methods for various elements and compounds have been applied. This work presents, in brief form, a review of the applications of these methods in large scale experiments on aerosol behaviour and source term. A description of the generation method and the generated aerosol transport conditions is followed by the properties of the obtained aerosol, the aerosol instrumentation used, and the scheme of the aerosol generation system, wherever it was available. Information concerning the particular purpose of the aerosol generation and reference number(s) is given at the end of each case. The methods reviewed are: evaporation-condensation, using furnace heating and using a plasma torch; atomization of liquid, using compressed air nebulizers, ultrasonic nebulizers and atomization of liquid suspensions; and dispersion of powders. Among the projects included in this work are: ACE, LACE, GE Experiments, EPRI Experiments, LACE-Spain, UKAEA Experiments, BNWL Experiments, ORNL Experiments, MARVIKEN, SPARTA and DEMONA. The main chemical compounds studied are: Ba, Cs, CsOH, CsI, Ni, Cr, NaI, TeO2, UO2, Al2O3, Al2SiO5, B2O3, Cd, CdO, Fe2O3, MnO, SiO2, AgO, SnO2, Te, U3O8, BaO, CsCl, CsNO3, urania, RuO2, TiO2, Al(OH)3, BaSO4, Eu2O3 and Sn. (Author)
Non-Invasive Seismic Methods for Earthquake Site Classification Applied to Ontario Bridge Sites
Bilson Darko, A.; Molnar, S.; Sadrekarimi, A.
2017-12-01
How a site responds to earthquake shaking, and the corresponding damage, is largely influenced by the underlying ground conditions through which the seismic waves propagate. The effects of site conditions on propagating seismic waves can be predicted from measurements of the shear wave velocity (Vs) of the soil layer(s) and the impedance ratio between bedrock and soil. Currently, the seismic design of new buildings and bridges (2015 Canadian building and bridge codes) requires determination of the time-averaged shear-wave velocity of the upper 30 metres (Vs30) of a given site. In this study, two in situ Vs profiling methods, Multichannel Analysis of Surface Waves (MASW) and Ambient Vibration Array (AVA), are used to determine Vs30 at chosen bridge sites in Ontario, Canada. Both active-source (MASW) and passive-source (AVA) surface wave methods are used at each bridge site to obtain Rayleigh-wave phase velocities over a wide frequency bandwidth. The dispersion curve is jointly inverted with each site's amplification function (microtremor horizontal-to-vertical spectral ratio) to obtain shear-wave velocity profile(s). We apply our non-invasive testing at three major infrastructure projects, e.g., five bridge sites along the Rt. Hon. Herb Gray Parkway in Windsor, Ontario. Our non-invasive testing is co-located with previous invasive testing, including Standard Penetration Test (SPT), Cone Penetration Test and downhole Vs data. Correlations between SPT blowcount and Vs are developed for the different soil types sampled at our Ontario bridge sites. A robust earthquake site classification procedure (reliable Vs30 estimates) for bridge sites across Ontario is evaluated from available combinations of invasive and non-invasive site characterization methods.
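The Vs30 quantity used for site classification is the travel-time average Vs30 = 30 / Σ(h_i / Vs_i) over the upper 30 m. A minimal sketch, with a hypothetical three-layer profile (soft soil over stiff soil over rock):

```python
def vs30(thicknesses_m, velocities_ms):
    """Time-averaged shear-wave velocity of the upper 30 m:
    Vs30 = 30 / sum(h_i / Vs_i), truncating the last layer so that
    the total thickness used is exactly 30 m."""
    depth, travel_time = 0.0, 0.0
    for h, vs in zip(thicknesses_m, velocities_ms):
        h_used = min(h, 30.0 - depth)
        travel_time += h_used / vs
        depth += h_used
        if depth >= 30.0:
            break
    if depth < 30.0:
        raise ValueError("profile shallower than 30 m")
    return 30.0 / travel_time

# Hypothetical profile: 10 m at 200 m/s, 15 m at 400 m/s, rock at 760 m/s
print(round(vs30([10, 15, 20], [200, 400, 760]), 1))  # → 318.9
```

Because the average is travel-time weighted, the slow surface layer dominates: the result is far below the arithmetic mean of the layer velocities.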
Infrared thermography inspection methods applied to the target elements of W7-X Divertor
International Nuclear Information System (INIS)
Missirlian, M.; Durocher, A.; Schlosser, J.; Farjon, J.-L.; Vignal, N.; Traxler, H.; Schedler, B.; Boscary, J.
2006-01-01
As the heat exhaust capability and lifetime of plasma-facing components (PFC) during in-situ operation are linked to manufacturing quality, a set of non-destructive tests must be performed during the R&D and manufacturing phases. Within this framework, advanced non-destructive examination (NDE) methods are one of the key issues in achieving a high level of quality and reliability of joining techniques in the production of high heat flux components, but also in successfully developing and building PFCs for the next generation of fusion devices. In this frame, two NDE infrared thermographic approaches, which have recently been applied to the qualification of CFC target elements of the W7-X divertor during the first series production, are discussed in this paper. The first one, developed by CEA (SATIR facility) and used successfully for the control of the mass-produced actively cooled PFCs on Tore Supra, is based on transient thermography, where the testing protocol consists in inducing a thermal transient within the heat sink structure by an alternating hot/cold water flow. The second one, recently developed by PLANSEE (ARGUS facility), is based on pulsed thermography, where the component is heated externally by a single powerful flash of light. Results obtained in qualification experiments performed during the first series production of W7-X divertor components, representing about thirty mock-ups with artificial and manufacturing defects, demonstrated the capabilities of these two methods and raised the efficiency of inspection to a level which is appropriate for industrial application. This comparative study, associated with a cross-checking analysis between the high heat flux performance tests and these infrared thermography inspection methods, showed good reproducibility and allowed a detectability limit specific to each method to be set. Finally, the detectability of relevant defects showed excellent coincidence with thermal images obtained from high heat flux
Directory of Open Access Journals (Sweden)
Vitor Souza Martins
2017-03-01
Full Text Available Satellite data provide the only viable means for extensive monitoring of remote and large freshwater systems, such as the Amazon floodplain lakes. However, an accurate atmospheric correction is required to retrieve water constituents based on surface water reflectance (RW). In this paper, we assessed three atmospheric correction methods (Second Simulation of a Satellite Signal in the Solar Spectrum (6SV), ACOLITE and Sen2Cor) applied to an image acquired by the MultiSpectral Instrument (MSI) on board the European Space Agency's Sentinel-2A platform, using concurrent in-situ measurements over four Amazon floodplain lakes in Brazil. In addition, we evaluated the correction of forest adjacency effects based on the linear spectral unmixing model, and performed a temporal evaluation of atmospheric constituents from Multi-Angle Implementation of Atmospheric Correction (MAIAC) products. The validation of MAIAC aerosol optical depth (AOD) indicated satisfactory retrievals over the Amazon region, with a correlation coefficient (R) of ~0.7 and 0.85 for Terra and Aqua products, respectively. The seasonal distribution of the cloud cover and AOD revealed a contrast between the first and second half of the year in the study area. Furthermore, simulation of top-of-atmosphere (TOA) reflectance showed a critical contribution of atmospheric effects (>50%) to all spectral bands, especially the deep blue (92%-96%) and blue (84%-92%) bands. The atmospheric correction results for the visible bands illustrate the limitation of the methods over dark lakes (RW < 1%), and a better match of the RW shape compared with in-situ measurements over turbid lakes, although the accuracy varied depending on the spectral bands and methods. Particularly above 705 nm, RW was highly affected by Amazon forest adjacency, and the proposed adjacency effect correction minimized the spectral distortions in RW (RMSE < 0.006). Finally, an extensive validation of the methods is required for
Applied statistics for economists
Lewis, Margaret
2012-01-01
This book is an undergraduate text that introduces students to commonly-used statistical methods in economics. Using examples based on contemporary economic issues and readily-available data, it not only explains the mechanics of the various methods, it also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.
Burke, Andrea; Robinson, Laura F.; McNichol, Ann P.; Jenkins, William J.; Scanlon, Kathryn M.; Gerlach, Dana S.
2010-01-01
We have developed a rapid 'reconnaissance' method of preparing graphite for 14C/12C analysis. Carbonate (~15 mg) is combusted using an elemental analyzer and the resulting CO2 is converted to graphite using a sealed tube zinc reduction method. Over 85% (n=45 replicates on twenty-one individual corals) of reconnaissance ages measured on corals ranging in age from 500 to 33,000 radiocarbon years (Ryr) are within two standard deviations of ages generated using standard hydrolysis methods on the same corals, and all reconnaissance ages are within 300 Ryr of the standard hydrolysis ages. Replicate measurements on three individual aragonitic corals yielded ages of 1076±35 Ryr (standard deviation; n=5), 10,739±47 Ryr (n=8), and 40,146±3500 Ryr (n=9). No systematic biases were found using different cleaning methods or variable sample sizes. Analysis of 13C/12C was made concurrently with the 14C/12C measurement to correct for natural fractionation and for fractionation during sample processing and analysis. This technique provides a new, rapid method for making accurate, percent-level 14C/12C analyses that may be used to establish the rates and chronology of earth system processes where survey-type modes of age estimation are desirable. For example, applications may include creation of sediment core-top maps, preliminary age models for sediment cores, and growth rate studies of marine organisms such as corals or mollusks. We applied the reconnaissance method to more than 100 solitary deep-sea corals collected in the Drake Passage in the Southern Ocean to investigate their temporal and spatial distribution. The corals used in this study are part of a larger sample set, and the subset that was dated was chosen based on species as opposed to preservation state, so as to exclude obvious temporal biases. Similar to studies in other regions, the distribution of deep-sea corals is not constant through time across the Drake Passage. Most of the corals from the Burdwood Bank
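The 13C/12C-based fractionation correction and age computation mentioned above follow the standard radiocarbon convention (Stuiver and Polach, 1977); the sketch below assumes activity is expressed as fraction modern and is an illustration, not the authors' data-reduction code.

```python
import math

def radiocarbon_age(f14c_measured, delta13c_permil):
    """Conventional radiocarbon age (Ryr) from a measured 14C/12C
    activity (as fraction modern), with the standard delta-13C
    normalization to -25 permil."""
    # 14C fractionates roughly twice as strongly as 13C,
    # hence the squared correction factor.
    fm = f14c_measured * (0.975 / (1 + delta13c_permil / 1000.0)) ** 2
    return -8033.0 * math.log(fm)  # Libby mean life = 8033 yr

# A sample at exactly modern activity with delta13C = -25 permil
# should date to 0 Ryr.
print(radiocarbon_age(1.0, -25.0) == 0.0)  # → True
```

A sample at half modern activity dates to about 5568 Ryr, one Libby half-life, which is the scale against which the quoted reconnaissance-versus-hydrolysis offsets of ~300 Ryr should be read.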
Reflection seismic methods applied to locating fracture zones in crystalline rock
International Nuclear Information System (INIS)
Juhlin, C.
1998-01-01
The reflection seismic method is a potentially powerful tool for identifying and localising fracture zones in crystalline rock if used properly. Borehole sonic logs across fracture zones show that they have reduced P-wave velocities compared to the surrounding intact rock. Diagnostically important S-wave velocity log information across the fracture zones is generally lacking. Generation of synthetic reflection seismic data and subsequent processing of these data show that structures dipping at up to 70 degrees from horizontal can be reliably imaged using surface seismic methods. Two real case studies where seismic reflection methods have been used to image fracture zones in crystalline rock are presented. The first is from the 5354 m deep SG-4 borehole in the Middle Urals, Russia, where strong seismic reflectors dipping from 25 to 50 degrees are observed on surface seismic reflection data crossing over the borehole. On vertical seismic profile data acquired in the borehole, the observed P-wave reflectivity from these zones is weak; however, strong converted P to S waves are observed. This can be explained by the source of the reflectors being fracture zones with a high P-wave to S-wave velocity ratio compared to the surrounding rock, resulting in a strong dependence of the reflection coefficient on the angle of incidence. A high P-wave to S-wave velocity ratio (high Poisson's ratio) is to be expected in fluid-filled fractured rock. The second case is from Aevroe, SE Sweden, where two 1 km long crossing high resolution seismic reflection lines were acquired in October 1996. An E-W line was shot with 5 m geophone and shotpoint spacing and a N-S one with 10 m geophone and shotpoint spacing. An explosive source with a charge size of 100 grams was used along both lines. The data clearly image three major dipping reflectors in the upper 200 ms (600 m). The dipping ones intersect or project to the surface at/or close to
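At normal incidence, the reflection strength from such a velocity-reduced zone depends on the acoustic impedance contrast Z = density × velocity. A minimal sketch with hypothetical rock properties (the granite and fracture-zone values below are illustrative, not from the study):

```python
def reflection_coefficient(rho1, v1, rho2, v2):
    """Normal-incidence P-wave reflection coefficient from the
    acoustic impedance contrast: R = (Z2 - Z1) / (Z2 + Z1)."""
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

# Hypothetical intact granite (2650 kg/m3, 5800 m/s) over a fluid-filled
# fracture zone with reduced P-wave velocity (2600 kg/m3, 4800 m/s)
r = reflection_coefficient(2650, 5800, 2600, 4800)
print(round(r, 3))  # → -0.104
```

An amplitude of this size is readily detectable, which is why the modest velocity reduction logged across fracture zones still produces usable reflections; the angle-dependent (converted-wave) behaviour discussed in the abstract requires the full Zoeppritz treatment rather than this normal-incidence formula.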
Mofettes - Investigation of Natural CO2 Springs - Insights and Methods applied
Lübben, A.; Leven, C.
2014-12-01
The quantification of carbon dioxide concentrations and fluxes leaking from the subsurface into the atmosphere is highly relevant in several research fields such as climate change, CCS, volcanic activity, or earthquake monitoring. Many of the areas with elevated carbon dioxide degassing pose the problem that a systematic investigation of the relevant processes is only possible to a limited extent (e.g. in terms of spatial extent, accessibility, or hazardous conditions). The upper Neckar valley in Southwest Germany is a region of enhanced natural subsurface CO2 concentrations and mass fluxes of Tertiary volcanic origin. At the beginning of the twentieth century several companies started industrial mining of CO2. The decreasing productivity of the CO2 springs led to the complete shutdown of the industry in 1995, and the existing boreholes were sealed. However, there is evidence that the reservoir, located in the deposits of the Lower Triassic, started to refill during the last 20 years. The CO2 springs replenished, and a variety of phenomena (e.g. mofettes and perished flora and fauna) indicate an active process of large scale CO2 exhalation. This easy-to-access site serves as a perfect example of a natural analog to a leaky CCS site, including abandoned boreholes and a suitable porous rock reservoir in the subsurface. During extensive field campaigns we applied several monitoring techniques, such as measurements of soil gas concentrations, mass fluxes, electrical resistivity, as well as soil and atmospheric parameters. The aim was to investigate and quantify mass fluxes and the effect of variations in, e.g., temperature and soil moisture on mass flux intensity. Furthermore, we investigated the effect of the vicinity to a mofette on soil parameters such as electrical conductivity and soil CO2 concentrations. In times of a changing climate due to greenhouse gases, regions featuring natural CO2 springs demand to be intensively investigated
International Nuclear Information System (INIS)
Haaland, D.M.; Easterling, R.G.; Vopicka, D.A.
1985-01-01
In an extension of earlier work, weighted multivariate least-squares methods of quantitative FT-IR analysis have been developed. A linear least-squares approximation to nonlinearities in the Beer-Lambert law is made by allowing the reference spectra to be a set of known mixtures. The incorporation of nonzero intercepts in the relation between absorbance and concentration further improves the approximation of nonlinearities while simultaneously accounting for nonzero spectral baselines. Pathlength variations are also accommodated in the analysis, and under certain conditions, unknown sample pathlengths can be determined. All spectral data are used to improve the precision and accuracy of the estimated concentrations. During the calibration phase of the analysis, pure component spectra are estimated from the standard mixture spectra. These can be compared with the measured pure component spectra to determine which vibrations exhibit nonlinear behavior. In the predictive phase of the analysis, the calculated spectra are used in our previous least-squares analysis to estimate sample component concentrations. These methods were applied to the analysis of the IR spectra of binary mixtures of esters. Even with severely overlapping spectral bands and nonlinearities in the Beer-Lambert law, the average relative error in the estimated concentrations was <1%
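The calibrate-then-predict scheme with nonzero intercepts can be illustrated with synthetic data: mixture spectra are regressed on known concentrations plus an intercept column, recovering estimated pure-component spectra and the baseline, which are then used to predict an unknown's concentrations. All numbers below (spectra, baseline, mixtures) are synthetic assumptions; this is an unweighted sketch of the idea, not the authors' weighted algorithm.

```python
import numpy as np

# Synthetic two-component calibration: absorbance at 6 wavelengths is
# a linear function of the two concentrations plus a baseline offset.
rng = np.random.default_rng(0)
pure = rng.uniform(0.1, 1.0, size=(2, 6))   # "pure component" spectra
baseline = 0.05                             # nonzero intercept (spectral baseline)

C_cal = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1], [0.3, 0.3]])  # known mixtures
A_cal = C_cal @ pure + baseline             # Beer-Lambert with offset, no noise

# Calibration: regress absorbance on concentration with an intercept term
X = np.hstack([C_cal, np.ones((len(C_cal), 1))])
K, *_ = np.linalg.lstsq(X, A_cal, rcond=None)  # rows: comp 1, comp 2, intercept

# Prediction: estimate concentrations of an "unknown" mixture spectrum
c_true = np.array([0.4, 0.6])
a_unknown = c_true @ pure + baseline
c_est, *_ = np.linalg.lstsq(K[:2].T, a_unknown - K[2], rcond=None)
print(np.allclose(c_est, c_true))  # → True
```

With noise-free synthetic data the recovery is exact; the paper's weighting and pathlength terms address the real-data case where residual variance differs across wavelengths.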
Din, Tengku Noor Daimah Tengku; Jamayet, Nafij; Rajion, Zainul Ahmad; Luddin, Norhayati; Abdullah, Johari Yap; Abdullah, Abdul Manaf; Yahya, Suzana
2016-12-01
Facial defects are either congenital or caused by trauma or cancer, and most affect the person's appearance. Emotional pressure and low self-esteem are problems commonly experienced by patients with facial defects. To overcome these problems, a silicone prosthesis is designed to cover the defective part. This study describes techniques for designing and fabricating a facial prosthesis using computer-aided design and manufacturing (CAD/CAM). The steps of fabricating the facial prosthesis were based on a patient case. The patient was diagnosed with Gorlin-Goltz syndrome and came to Hospital Universiti Sains Malaysia (HUSM) for a prosthesis. A 3D image of the patient was reconstructed from CT data using MIMICS software. Based on the 3D image, the intercanthal and zygomatic measurements of the patient were compared with available data in the database to find a suitable nose shape. A normal nose shape for the patient was retrieved from the nasal digital library. A mirror-imaging technique was used to mirror the facial part. The final design of the facial prosthesis, including eye, nose, and cheek, was superimposed to assess the result virtually. After the final design was confirmed, the mould was designed. The mould of the nasal prosthesis was printed using an Objet 3D printer. Silicone casting was done using the 3D-printed mould. The final prosthesis produced by the computer-aided method was acceptable for facial rehabilitation, providing a better quality of life.
Applying the Weighted Horizontal Magnetic Gradient Method to a Simulated Flaring Active Region
Korsós, M. B.; Chatterjee, P.; Erdélyi, R.
2018-04-01
Here, we test the weighted horizontal magnetic gradient (WG_M) as a flare precursor, introduced by Korsós et al., by applying it to a magnetohydrodynamic (MHD) simulation of solar-like flares. The preflare evolution of the WG_M and the behavior of the distance parameter between the area-weighted barycenters of opposite-polarity sunspots at various heights is investigated in the simulated δ-type sunspot. Four flares emanated from this sunspot. We found the optimum heights above the photosphere where the flare precursors of the WG_M method are identifiable prior to each flare. These optimum heights agree reasonably well with the heights of the occurrence of flares identified from the analysis of their thermal and ohmic heating signatures in the simulation. We also estimated the expected time of the flare onsets from the duration of the approaching–receding motion of the barycenters of opposite polarities before each single flare. The estimated onset time and the actual time of occurrence of each flare are in good agreement at the corresponding optimum heights. This numerical experiment further supports the use of flare precursors based on the WG_M method.
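The distance parameter between area-weighted barycenters of opposite polarities can be illustrated numerically. The magnetogram below is a synthetic two-spot toy field, not output of the MHD simulation, and flux-weighted centroids stand in for the area-weighted barycenters.

```python
import numpy as np

# Synthetic line-of-sight magnetogram with one positive and one negative spot.
Bz = np.zeros((64, 64))
Bz[10:20, 10:20] = 1500.0    # positive-polarity spot (field values invented)
Bz[40:52, 42:54] = -1200.0   # negative-polarity spot

yy, xx = np.indices(Bz.shape)

def barycenter(field):
    """Flux-weighted centroid (x, y) of one polarity; weights = |Bz| per pixel."""
    w = np.abs(field)
    return np.array([(xx * w).sum() / w.sum(), (yy * w).sum() / w.sum()])

bc_pos = barycenter(np.where(Bz > 0, Bz, 0.0))
bc_neg = barycenter(np.where(Bz < 0, Bz, 0.0))
distance = np.linalg.norm(bc_pos - bc_neg)   # the barycenter distance parameter
print(bc_pos, bc_neg, distance)
```

Tracking this distance over time, as the two polarities approach and then recede, is what provides the onset-time estimate described above.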
Method of moments as applied to arbitrarily shaped bounded nonlinear scatterers
Caorsi, Salvatore; Massa, Andrea; Pastorino, Matteo
1994-01-01
In this paper, we explore the possibility of applying the moment method to determine the electromagnetic field distributions inside three-dimensional bounded nonlinear dielectric objects of arbitrary shapes. The moment method has usually been employed to solve linear scattering problems. We start with an integral equation formulation, and derive a nonlinear system of algebraic equations that allows us to obtain an approximate solution for the harmonic vector components of the electric field. Preliminary results of some numerical simulations are reported.
[An experimental assessment of methods for applying intestinal sutures in intestinal obstruction].
Akhmadudinov, M G
1992-04-01
The results of various methods of applying intestinal sutures in obturation obstruction were studied. Three series of experiments were conducted on 30 dogs: resection of the intestine after obstruction with formation of anastomoses by means of a double-row suture (Albert-Schmieden-Lambert) in the first series (10 dogs), by a single-row suture after V. M. Mateshuk in the second series, and by a single-row stretching suture suggested by the author in the third series. The postoperative complications and the parameters of physical airtightness of the intestinal anastomosis were studied over time in the experimental animals. Incompetence of the anastomosis sutures occurred in 6 animals of the first series, 4 of the second, and 1 of the third. Adhesions occurred in all animals of the first and second series and in 2 of the third series. Six dogs of the first series died, 4 of the second, and 1 of the third. Analysis of the results showed a direct connection between the complications and the parameters of the physical airtightness of the anastomosis, and between the latter and the method of intestinal suture. Relatively better results were obtained when the anastomosis was formed by means of the suggested continuous stretching suture passed through the serous, muscular, and submucous coats of the intestine.
Estimation Methods for Infinite-Dimensional Systems Applied to the Hemodynamic Response in the Brain
Belkhatir, Zehor
2018-01-01
available measurements is essential. The human brain is an example of IDSs with severe constraints on information collection from controlled experiments and invasive sensors. Investigating the intriguing modeling potential of the brain is, in fact, the main
Svetlana MIHAILA
2014-01-01
The purpose of this article is to describe and present the main features of the activity-based costing (ABC) method, as well as its key elements, such as activities, resources, and cost drivers. The article elaborates on the stages of applying this method in Moldovan manufacturing businesses and brings forth some examples of internal reports used both for simple analysis and for management decision-making purposes. The article also focuses on a comparison between traditional costing methods and ABC...
Ivankova, Nataliya V.
2014-01-01
In spite of recent methodological developments related to quality assurance in mixed methods research, practical examples of how to implement quality criteria in designing and conducting sequential QUAN [right arrow] QUAL mixed methods studies to ensure the process is systematic and rigorous remain scarce. This article discusses a three-step…
Chemometric methods and near-infrared spectroscopy applied to bioenergy production
International Nuclear Information System (INIS)
Liebmann, B.
2010-01-01
data analysis (i) successfully determine the concentrations of moisture, protein, and starch in the feedstock material as well as glucose, ethanol, glycerol, lactic acid, and acetic acid in the processed bioethanol broths; and (ii) allow quantifying a complex biofuel property such as the heating value. At the third stage, this thesis focuses on new chemometric methods that improve mathematical analysis of multivariate data such as NIR spectra. The newly developed method 'repeated double cross validation' (rdCV) separates optimization of regression models from tests of model performance; furthermore, rdCV estimates the variability of the model performance based on a large number of prediction errors from test samples. The rdCV procedure has been applied to both classical PLS regression and the robust 'partial robust M' regression method, which can handle erroneous data. The relatively little-known 'random projection' method is tested for its potential for dimensionality reduction of data from chemometrics and chemoinformatics. The main findings are: (i) rdCV fosters a realistic assessment of model performance, (ii) robust regression has outstanding performance for data containing outliers and is thus strongly recommended, and (iii) random projection is a useful niche method for high-dimensional data under restrictions on data storage and computing time. The three chemometric methods described are available as functions for the free software R. (author) [de
Hoelscher, Michael
2017-01-01
This article argues that strong interrelations between methodological and theoretical advances exist. Progress in, especially comparative, methods may have important impacts on theory evaluation. By using the example of the "Varieties of Capitalism" approach and an international comparison of higher education systems, it can be shown…
Energy Technology Data Exchange (ETDEWEB)
Krajewski, A [Szkola Glowna Gospodarstwa Wiejskiego, Warsaw (Poland)
1997-10-01
The radiation method of disinfestation is described using Stegobium paniceum L. as an example. The different stages of insect growth were irradiated, and their radiosensitivity was estimated on the basis of the dose-response relationship. Biological radiation effects were observed as limitation of insect reproduction. 26 refs, 4 figs, 1 tab.
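Estimating radiosensitivity from a dose-response relationship can be sketched as a two-parameter logistic fit. The dose and response values below are invented for illustration only; they are not the paper's Stegobium paniceum data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: absorbed dose (kGy) vs. fraction of insects failing
# to reproduce. Values invented for illustration.
dose = np.array([0.0, 0.1, 0.2, 0.4, 0.8, 1.6])
response = np.array([0.02, 0.10, 0.30, 0.70, 0.95, 1.00])

def logistic(d, d50, slope):
    """Two-parameter logistic dose-response model."""
    return 1.0 / (1.0 + np.exp(-slope * (d - d50)))

(d50, slope), _ = curve_fit(logistic, dose, response, p0=[0.3, 5.0])
print(f"estimated D50 ~ {d50:.2f} kGy, slope ~ {slope:.1f}")
```

The fitted midpoint D50 summarizes the radiosensitivity of one growth stage; repeating the fit per stage gives the stage-by-stage comparison described in the abstract.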
Applying a weighted random forests method to extract karst sinkholes from LiDAR data
Zhu, Junfeng; Pierskalla, William P.
2016-02-01
Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve locating and delineating sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% on the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success, with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure such as the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes from LiDAR data over large areas. However, the random forests method cannot totally replace manual procedures, such as visual inspection and field verification.
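The weighting idea can be illustrated with scikit-learn's `class_weight` option, a stand-in for the authors' weighted random forests; the 11 predictors below are simulated, not real depression geometry.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Simulated imbalanced depression dataset: 11 features, ~10% "sinkholes".
X, y = make_classification(n_samples=2000, n_features=11, n_informative=6,
                           weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Unweighted forest vs. a class-weighted one that penalizes misclassifying
# the rare sinkhole class more heavily.
plain = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
weighted = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                                  random_state=0).fit(X_tr, y_tr)

# Recall on the rare class is what the weighting is meant to improve.
print("plain sinkhole recall   :", recall_score(y_te, plain.predict(X_te)))
print("weighted sinkhole recall:", recall_score(y_te, weighted.predict(X_te)))
```

With weighting, the forest trades a little majority-class accuracy for better sensitivity to the rare class, which matches the paper's motivation for weighting.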
International Nuclear Information System (INIS)
Zhang Huiqun
2009-01-01
By using a new system of coupled Riccati equations, a direct algebraic method, which was applied to obtain exact travelling wave solutions of some complex nonlinear equations, is improved. The exact travelling wave solutions of the complex KdV equation, the Boussinesq equation, and the Klein-Gordon equation are investigated using the improved method. The method presented in this paper can also be applied to construct exact travelling wave solutions for other nonlinear complex equations.
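The mechanics behind such direct algebraic methods can be illustrated symbolically: the expansion function satisfies a Riccati-type equation, and substituting the resulting travelling-wave ansatz into the PDE must give a vanishing residual. The check below uses the real KdV soliton as a stand-in example, not the complex equations treated in the paper.

```python
import sympy as sp

xi, x, t, c = sp.symbols('xi x t c', positive=True)

# (1) tanh satisfies the Riccati equation phi' = 1 - phi**2 that drives
# the algebraic expansion in tanh-type direct methods.
phi = sp.tanh(xi)
riccati_residual = sp.simplify(sp.diff(phi, xi) - (1 - phi**2))

# (2) The travelling-wave ansatz u = (c/2) sech^2(sqrt(c)/2 (x - c t))
# solves the real KdV equation u_t + 6 u u_x + u_xxx = 0 exactly.
u = c / 2 * sp.sech(sp.sqrt(c) / 2 * (x - c * t))**2
kdv_residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)
kdv_residual = sp.simplify(kdv_residual.rewrite(sp.exp))

print(riccati_residual, kdv_residual)
```

In the paper's method the same substitution is performed with complex-valued coefficients, turning the PDE into the nonlinear algebraic system mentioned above.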
Branney, Jonathan; Priego-Hernández, Jacqueline
2018-02-01
It is important for nurses to have a thorough understanding of the biosciences, such as pathophysiology, that underpin nursing care. These courses include content that can be difficult to learn. Team-based learning is emerging as a strategy for enhancing learning in nurse education because it promotes individual learning as well as learning in teams. In this study we sought to evaluate the use of team-based learning in the teaching of applied pathophysiology to undergraduate student nurses, using a mixed-methods observational design. In a year-two undergraduate nursing applied pathophysiology module, circulatory shock was taught using team-based learning while all remaining topics were taught using traditional lectures. After the team-based learning intervention the students were invited to complete the Team-Based Learning Student Assessment Instrument, which measures accountability, preference, and satisfaction with team-based learning. Students were also invited to focus group discussions to gain a more thorough understanding of their experience with team-based learning. Exam scores for answers to questions based on team-based learning-taught material were compared with those from lecture-taught material. Of the 197 students enrolled on the module, 167 (85% response rate) returned the instrument, the results from which indicated a favourable experience with team-based learning. Most students reported higher accountability (93%) and satisfaction (92%) with team-based learning. Lectures that promoted active learning were viewed as an important feature of the university experience, which may explain why 76% exhibited a preference for team-based learning. Most students wanted to make a meaningful contribution so as not to let down their team, and they saw a clear relevance between the team-based learning activities and their own experiences of teamwork in clinical practice. Exam scores on the question related to team-based learning-taught material were comparable to those
Lesellier, E; Mith, D; Dubrulle, I
2015-12-04
necessary, two-step gradient elution. The developed methods were then applied to real cosmetic samples to assess method specificity with regard to matrix interferences, and calibration curves were plotted to evaluate quantification. In addition, depending on the matrix and the compounds studied, the importance of the detector type, UV or ELSD (evaporative light-scattering detection), and of the particle size of the stationary phase is discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
Davis, Andrew
2009-01-01
What is "fairness" in the context of educational assessment? I apply this question to a number of contemporary educational assessment practices and policies. My approach to philosophy of education owes much to Wittgenstein. A commentary set apart from the main body of the paper focuses on my style of philosophising. Wittgenstein teaches us to…
Energy Technology Data Exchange (ETDEWEB)
Ridgway, Kathy, E-mail: Kathy.Ridgway@Unilever.com [Safety and Environmental Assurance Centre, Unilever Colworth, Bedfordshire, MK44 1LQ (United Kingdom); Lalljie, Sam P.D. [Safety and Environmental Assurance Centre, Unilever Colworth, Bedfordshire, MK44 1LQ (United Kingdom); Smith, Roger M. [Department of Chemistry, Loughborough University, Loughborough, Leics, LE11 3TU (United Kingdom)
2010-01-11
A comparison is made between static headspace analysis and stir bar sorptive extraction (SBSE) for the quantitative determination of furan. The SBSE technique was optimised and evaluated using two example food matrices (coffee and jarred baby food). The use of the SBSE technique, in most cases, gave comparable results to the static headspace method, using the method of standard additions with d4-labelled furan as an internal standard. Using the SBSE method, limits of detection down to 2 ng g⁻¹ were achieved with only a 1 h extraction. The method was performed at ambient temperature, thus eliminating the possibility of formation of furan during extraction.
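The standard-additions quantification used above extrapolates the spiked-sample calibration line to zero signal; the negative of the x-intercept is the analyte content of the unspiked sample. The numbers below are invented, not the paper's furan data.

```python
import numpy as np

# Hypothetical standard-additions series: analyte spiked into the sample
# (ng/g added) vs. instrument response (e.g. peak-area ratio to the
# isotope-labelled internal standard).
added = np.array([0.0, 2.0, 4.0, 6.0])
signal = np.array([0.52, 0.98, 1.51, 2.02])

slope, intercept = np.polyfit(added, signal, 1)
c0 = intercept / slope          # concentration already present in the sample
print(f"estimated analyte content ~ {c0:.2f} ng/g")
```

Because the calibration is built in the sample matrix itself, matrix effects cancel, which is why the method is well suited to complex matrices like coffee and baby food.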
Directory of Open Access Journals (Sweden)
D.D. Lestiani
2011-08-01
Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be known quantitatively through elemental characterization in order to design appropriate air quality management. Suitable methods for analysis of airborne particulate matter, such as nuclear analytical techniques, are sorely needed to address the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess the accuracy, and to ensure the reliability of the analytical results through a comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed that there was generally a systematic difference between INAA and PIXE results, with the values obtained by PIXE lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, and therefore INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA is still a prospective method, while PIXE, with its special capabilities, is a promising tool that could complement NAA in the determination of lead, sulphur, and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment.
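The systematic INAA-PIXE difference reported above amounts to a paired ratio comparison between the two techniques, sketched here with invented concentrations.

```python
import numpy as np

# Hypothetical paired measurements of one element (e.g. Fe, ng/m3) in the
# same five filter samples by the two techniques. Values invented.
inaa = np.array([120.0, 85.0, 310.0, 56.0, 198.0])
pixe = np.array([105.0, 80.0, 270.0, 50.0, 185.0])

ratio = pixe / inaa
bias = 100 * (ratio.mean() - 1)            # mean relative bias of PIXE vs INAA, %
print(f"PIXE/INAA mean ratio = {ratio.mean():.2f} ({bias:+.1f}% bias)")
```

A mean ratio consistently below 1 across samples is the kind of systematic offset the study describes, as opposed to random scatter around equality.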
New methods applied to the analysis and treatment of ovarian cancer
International Nuclear Information System (INIS)
Order, S.E.; Rosenshein, N.B.; Klein, J.L.; Lichter, A.S.; Ettinger, D.S.; Dillon, M.B.; Leibel, S.A.
1979-01-01
The development of rigorous staging methods, appreciation of new knowledge concerning ovarian cancer dissemination, and administration of new treatment techniques have been applied to ovarian cancer. The staging method consists of peritoneal cytology, total abdominal hysterectomy-bilateral salpingo-oophorectomy (TAH-BSO), omentectomy, nodal biopsy, and diaphragmatic inspection, and is coupled with maximal surgical resection. An additional examination being evaluated for usefulness in future staging is intraperitoneal 99mTc sulfur colloid scanning. Nineteen patients have entered the pilot studies. Sixteen patients (5 Stage 2, 10 Stage 3 micrometastatic, and 1 Stage 4) have been treated with colloidal 32P i.p., followed 2 weeks later by split abdominal irradiation (200 rad fractions to the pelvis, a 2 h rest, then 150 rad to the upper abdomen) to a total abdominal dose of 3000 rad with a pelvic cone-down to 4000 rad. Five of these patients received phenylalanine mustard (L-PAM) (7 mg/m2) maintenance therapy. The 3 year actuarial survival was 78% and the 3 year disease-free actuarial survival 68%. Seven patients were treated with intraperitoneal tumor antisera and 4/7 remain in complete remission as of this writing. The specificity of the antiserum has been demonstrated by immunoelectrophoresis in 4/4 patients, and by live-cell fluorescence in 1 patient. Rabbit IgG levels revealed significantly increasing titers in 4/6 patients following i.p. antiovarian antiserum. Radiolabeled IgG derived from the antiserum demonstrated tumor localization and correlation with conventional radiography and computerized axial tomography (CAT) scans in the 2 patients studied to date. Biomarker analysis reveals that free secretory protein (6/6), alpha globulin (5/6), and CEA (carcinoembryonic antigen) (3/6) were elevated in the 6 patients studied. Two patients whose disease progressed demonstrated elevated levels of all three biomarkers.
The Global Survey Method Applied to Ground-level Cosmic Ray Measurements
Belov, A.; Eroshenko, E.; Yanke, V.; Oleneva, V.; Abunin, A.; Abunina, M.; Papaioannou, A.; Mavromichalaki, H.
2018-04-01
The global survey method (GSM) technique unites simultaneous ground-level observations of cosmic rays in different locations and allows us to obtain the main characteristics of cosmic-ray variations outside of the atmosphere and magnetosphere of Earth. This technique has been developed and applied in numerous studies over many years by the Institute of Terrestrial Magnetism, Ionosphere and Radiowave Propagation (IZMIRAN). We here describe the IZMIRAN version of the GSM in detail. With this technique, the hourly data of the world-wide neutron-monitor network from July 1957 until December 2016 were processed, and further processing is enabled upon the receipt of new data. The result is a database of homogeneous and continuous hourly characteristics of the density variations (an isotropic part of the intensity) and the 3D vector of the cosmic-ray anisotropy. It includes all of the effects that could be identified in galactic cosmic-ray variations that were caused by large-scale disturbances of the interplanetary medium in more than 50 years. These results in turn became the basis for a database on Forbush effects and interplanetary disturbances. This database allows correlating various space-environment parameters (the characteristics of the Sun, the solar wind, et cetera) with cosmic-ray parameters and studying their interrelations. We also present features of the coupling coefficients for different neutron monitors that enable us to make a connection from ground-level measurements to primary cosmic-ray variations outside the atmosphere and the magnetosphere. We discuss the strengths and weaknesses of the current version of the GSM as well as further possible developments and improvements. The method developed allows us to minimize the problems of the neutron-monitor network, which are typical for experimental physics, and to considerably enhance its advantages.
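At its core, the GSM fits each station's count-rate variation to an isotropic part plus anisotropy harmonics, with coupling coefficients relating ground-level response to primary variations. A toy least-squares version, with an invented 12-station network, a first harmonic only, and unit coupling coefficients:

```python
import numpy as np

# Invented station asymptotic longitudes and noise-free "observed" hourly
# variations (%): isotropic part a0 plus first-harmonic anisotropy.
phi = np.deg2rad(np.arange(0.0, 360.0, 30.0))
obs = -1.5 + 0.8 * np.cos(phi - np.deg2rad(45.0))

# Linear model obs = a0 + b cos(phi) + c sin(phi), solved by least squares
# over the whole network for each hour.
G = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
a0, b, c = np.linalg.lstsq(G, obs, rcond=None)[0]
a1, phi0 = np.hypot(b, c), np.arctan2(c, b)   # anisotropy amplitude and phase
print(a0, a1, np.rad2deg(phi0))
```

Running such a fit hour by hour over the network yields exactly the kind of homogeneous density-variation and anisotropy-vector time series the database described above contains; the real GSM additionally weights stations by their coupling coefficients and geometry.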
Stochastic Methods Applied to Power System Operations with Renewable Energy: A Review
Energy Technology Data Exchange (ETDEWEB)
Zhou, Z. [Argonne National Lab. (ANL), Argonne, IL (United States); Liu, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Electric Reliability Council of Texas (ERCOT), Austin, TX (United States); Botterud, A. [Argonne National Lab. (ANL), Argonne, IL (United States)
2016-08-01
Renewable energy resources have been rapidly integrated into power systems in many parts of the world, contributing to a cleaner and more sustainable supply of electricity. Wind and solar resources also introduce new challenges for system operations and planning in terms of economics and reliability because of their variability and uncertainty. Operational strategies based on stochastic optimization have been developed recently to address these challenges. In general terms, these stochastic strategies either embed uncertainties into the scheduling formulations (e.g., the unit commitment [UC] problem) in probabilistic forms or develop more appropriate operating reserve strategies to take advantage of advanced forecasting techniques. Other approaches to address uncertainty are also proposed, where operational feasibility is ensured within an uncertainty set of forecasting intervals. In this report, a comprehensive review is conducted to present the state of the art through Spring 2015 in the area of stochastic methods applied to power system operations with high penetration of renewable energy. Chapters 1 and 2 give a brief introduction and overview of power system and electricity market operations, as well as the impact of renewable energy and how this impact is typically considered in modeling tools. Chapter 3 reviews relevant literature on operating reserves and specifically probabilistic methods to estimate the need for system reserve requirements. Chapter 4 looks at stochastic programming formulations of the UC and economic dispatch (ED) problems, highlighting benefits reported in the literature as well as recent industry developments. Chapter 5 briefly introduces alternative formulations of UC under uncertainty, such as robust, chance-constrained, and interval programming. Finally, in Chapter 6, we conclude with the main observations from our review and important directions for future work.
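Embedding uncertainty into a scheduling formulation, as the stochastic UC/ED approaches above do, can be illustrated with a minimal two-scenario deterministic-equivalent dispatch LP (all capacities, costs, and probabilities invented).

```python
from scipy.optimize import linprog

# Two-stage toy: choose baseload g now (cap 80 MW, $20/MWh); buy recourse
# power r_s ($50/MWh) after wind is revealed. Scenarios: 40 or 10 MW of
# wind, each with probability 0.5; demand is 100 MW. Variables x = [g, r1, r2].
demand, wind, prob = 100.0, [40.0, 10.0], [0.5, 0.5]
c = [20.0, prob[0] * 50.0, prob[1] * 50.0]   # expected-cost coefficients

# Meet demand in every scenario: g + wind_s + r_s >= demand,
# written as -g - r_s <= wind_s - demand for linprog's A_ub x <= b_ub form.
A_ub = [[-1.0, -1.0, 0.0],
        [-1.0, 0.0, -1.0]]
b_ub = [wind[0] - demand, wind[1] - demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 80), (0, None), (0, None)])
g, r1, r2 = res.x
print(g, r1, r2, res.fun)
```

The optimum commits the full baseload capacity because its cost is below the probability-weighted recourse cost; real stochastic UC models have the same structure with many scenarios, units, and binary commitment decisions.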
International Nuclear Information System (INIS)
Nicolau-Rebigan, S.; Sporea, D.; Niculescu, V.I.R.
2000-01-01
The paper presents a holographic method applied in ionizing radiation dosimetry. Two types of holographic interferometry can be used: double-exposure holographic interferometry or fast real-time holographic interferometry. In this paper the applications of holographic interferometry to ionizing radiation dosimetry are presented. The determination of the accurate value of the dose delivered by an ionizing radiation source (released energy per mass unit) is a complex problem that demands different solutions depending on experimental parameters; it is solved here with a double-exposure holographic interferometric method associated with an optoelectronic interface and a Z80 microprocessor. The method can determine the absorbed integral dose as well as the three-dimensional distribution of dose in a given volume. The paper presents some results obtained in radiation dosimetry. Original mathematical relations for the integral absorbed dose in irreversibly radiolyzing liquids were derived. Irradiation effects can be estimated from the holographic fringe displacement and density. To measure these parameters, the holographic interferograms were picked up by a closed-circuit TV system in such a way that a selected TV line scans the picture along the direction of interest; using a specially designed interface, our Z80 microprocessor system captures data along the selected TV line. When the integral dose is to be measured, the microprocessor computes it from the information contained in the fringe distribution, according to the proposed formulae. Integral absorbed dose and spatial dose distribution can be estimated with an accuracy better than 4%. Some advantages of this method over conventional methods in radiation dosimetry are outlined. The paper presents an original holographic set-up with an electronic interface, assisted by a Z80 microprocessor, used for nondestructive testing of transparent objects at the laser wavelength
A nuclear-medical method applied for determining the choledochus diameter after cholecystectomy
International Nuclear Information System (INIS)
Wolf, M.
1980-01-01
54 patients (46 females, 8 males) who had undergone cholecystectomy at least 4 years earlier were followed up roentgenologically by infusion cholangiography and by nuclear medicine using quantitative hepatobiliary functional scintiscanning (HBFS). The ROI method applied for HBFS permits recording time/activity curves above the liver parenchyma (A) and the porta hepatis (B). By subtracting curve A from curve B, scaled by the proportion in which A is incorporated in B, a curve B' results, indicating the flow volume through the porta hepatis. The quotient Q = (maximum pulse of the A contribution in B)/(maximum pulse of B) indicates the portion of the liver parenchyma in the porta curve. The quotient represents a measure of the total volume of the large bile ducts included in the region of the porta hepatis. The quantity (1-Q)/Q was related to the roentgenologically determined common bile duct diameters. Both quantities correlated well, with a correlation coefficient of r = -0.860. Thus, the choledochus diameter can be determined by a primarily functional examination with a precision of 2 mm, a degree which permits the detection of clinically relevant discharge malfunctions. It was not possible to detect peristalsis-dependent phenomena with a dosage of 4-5 mCi 99mTc-diethyl-IDA, an irradiation dose which was sufficient for answering the clinical questions and could be justified for the patients. (orig.) [de
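The ROI arithmetic described above is a scaled curve subtraction, sketched numerically with invented time/activity curves (Gaussian shapes are stand-ins for real scintigraphic data).

```python
import numpy as np

# Invented time/activity curves: the porta ROI curve B contains a
# parenchymal fraction Q of the liver-parenchyma curve A plus the true
# bile-flow component; B' = B - Q*A isolates the flow through the porta.
t = np.linspace(0, 60, 121)                       # minutes after injection
A = np.exp(-((t - 15) / 10) ** 2)                 # parenchyma curve (ROI A)
flow = 0.6 * np.exp(-((t - 30) / 12) ** 2)        # true bile-flow component
Q = 0.4                                           # parenchymal share of porta ROI
B = Q * A + flow                                  # measured porta curve (ROI B)

B_prime = B - Q * A                               # scaled subtraction
print(np.allclose(B_prime, flow))                 # True
```

In the toy setup the subtraction recovers the flow component exactly; with measured curves, Q itself is estimated from the ratio of the curve maxima as described in the abstract.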