WorldWideScience

Sample records for proposed method significantly

  1. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…

  2. Assessing Clinical Significance: Does it Matter which Method we Use?

    Science.gov (United States)

    Atkins, David C.; Bedics, Jamie D.; Mcglinchey, Joseph B.; Beauchaine, Theodore P.

    2005-01-01

    Measures of clinical significance are frequently used to evaluate client change during therapy. Several alternatives to the original method devised by N. S. Jacobson, W. C. Follette, & D. Revenstorf (1984) have been proposed, each purporting to increase accuracy. However, researchers have had little systematic guidance in choosing among…

  3. Significance tests in mutagen screening: another method considering historical control frequencies

    International Nuclear Information System (INIS)

    Traut, H.

    1983-01-01

Recently a method has been devised for testing the significance of the difference between a mutation frequency observed after chemical treatment or irradiation and the historical ('stable') control frequency. Another test serving the same purpose is proposed here. Both methods are applied to several examples (experimental frequency versus historical control frequency). The results (P values) obtained agree well. (author)

  4. 77 FR 35331 - Trichoderma reesei; Proposed Significant New Use Rule

    Science.gov (United States)

    2012-06-13

    ... Trichoderma reesei; Proposed Significant New Use Rule AGENCY: Environmental Protection Agency (EPA). ACTION... Control Act (TSCA) for the genetically modified microorganism identified generically as Trichoderma reesei...: Trichoderma reesei (MCAN J-10-2) (generic). Chemical Abstracts Service (CAS) Registry Number: Not available...

  5. A fast method for the unit scheduling problem with significant renewable power generation

    International Nuclear Information System (INIS)

    Osório, G.J.; Lujano-Rojas, J.M.; Matias, J.C.O.; Catalão, J.P.S.

    2015-01-01

Highlights: • A model for the scheduling of power systems with significant renewable power generation is provided. • A new methodology that takes information from the analysis of each scenario separately is proposed. • Based on a probabilistic analysis, unit scheduling and the corresponding economic dispatch are estimated. • A comparison with other methodologies is in favour of the proposed approach. - Abstract: Optimal operation of power systems with high integration of renewable power sources has become difficult as a consequence of the random nature of some sources, such as wind energy and photovoltaic energy. Nowadays, this problem is solved using the Monte Carlo Simulation (MCS) approach, which allows considering important statistical characteristics of wind and solar power production such as the correlation between consecutive observations, the diurnal profile of the forecasted power production, and the forecasting error. However, the MCS method requires the analysis of a representative number of trials, which is an intensive calculation task that increases considerably with the number of scenarios considered. In this paper, a model for the scheduling of power systems with significant renewable power generation, based on a scenario generation/reduction method that establishes a proportional relationship between the number of scenarios and the computational time required to analyse them, is proposed. The methodology takes information from the analysis of each scenario separately to determine the probabilistic behaviour of each generator at each hour of the scheduling problem. Then, considering a determined significance level, the units to be committed are selected and the load dispatch is determined. The proposed technique was illustrated through a case study, and a comparison with a stochastic programming approach was carried out, concluding that the proposed methodology can provide an acceptable solution in a reduced computational time.
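
The per-hour probabilistic commitment step described in the abstract can be sketched as follows. This is a hypothetical illustration, not the authors' implementation; the toy scenario schedules and the 0.6 significance threshold are invented for the example:

```python
import numpy as np

def commit_units(scenario_schedules, significance_level=0.6):
    """Select units to commit at each hour from per-scenario schedules.

    scenario_schedules: array of shape (n_scenarios, n_units, n_hours),
        1 where a unit is on in that scenario's dispatch, else 0.
    significance_level: minimum fraction of scenarios in which a unit
        must be on for it to be committed (hypothetical threshold).
    Returns a (n_units, n_hours) 0/1 commitment matrix.
    """
    on_probability = np.asarray(scenario_schedules).mean(axis=0)
    return (on_probability >= significance_level).astype(int)

# Toy example: 4 scenarios, 2 units, 3 hours.
schedules = np.array([
    [[1, 1, 0], [0, 1, 1]],
    [[1, 1, 0], [0, 0, 1]],
    [[1, 0, 0], [0, 1, 1]],
    [[1, 1, 1], [1, 1, 1]],
])
print(commit_units(schedules, 0.6))
```

Because each scenario is analysed separately, the cost grows linearly with the number of scenarios, which is the proportional relationship the abstract refers to.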

  6. Assessment of proposed electromagnetic quantum vacuum energy extraction methods

    OpenAIRE

    Moddel, Garret

    2009-01-01

    In research articles and patents several methods have been proposed for the extraction of zero-point energy from the vacuum. None has been reliably demonstrated, but the proposals remain largely unchallenged. In this paper the feasibility of these methods is assessed in terms of underlying thermodynamics principles of equilibrium, detailed balance, and conservation laws. The methods are separated into three classes: nonlinear processing of the zero-point field, mechanical extraction using Cas...

  7. Proposal of Evolutionary Simplex Method for Global Optimization Problem

    Science.gov (United States)

    Shimizu, Yoshiaki

To make an agile decision in a rational manner, the role of optimization engineering has been noted increasingly under diversified customer demand. From this point of view, in this paper we propose a new evolutionary method serving as an optimization technique in the paradigm of optimization engineering. The developed method has prospects for globally solving various complicated problems appearing in real-world applications. It evolved from the conventional method known as Nelder and Mead's Simplex method, by virtue of ideas borrowed from recent meta-heuristic methods such as PSO. Presenting an algorithm to handle linear inequality constraints effectively, we validate the effectiveness of the proposed method through comparison with other methods using several benchmark problems.

8. A method for risk-informed safety significance categorization using the analytic hierarchy process and Bayesian belief networks

    International Nuclear Information System (INIS)

    Ha, Jun Su; Seong, Poong Hyun

    2004-01-01

A risk-informed safety significance categorization (RISSC) categorizes structures, systems, or components (SSCs) of a nuclear power plant (NPP) into two or more groups according to their safety significance, using both probabilistic and deterministic insights. In the conventional methods for RISSC, the SSCs are quantitatively categorized according to their importance measures for the initial categorization. The final decisions (categorizations) on SSCs, however, are made qualitatively by an expert panel through discussions and adjustments of opinions, using the probabilistic insights compiled in the initial categorization process and combining the probabilistic insights with the deterministic insights. Therefore, owing to this qualitative and linear decision-making process, the conventional methods have the following demerits: (1) they are very costly in terms of time and labor; (2) it is not easy to reach a final decision when the opinions of the experts are in conflict; and (3) they contain an overlapping process due to the linear paradigm (the categorization is performed twice - first by the engineers who propose the method, and second by the expert panel). In this work, a method for RISSC using the analytic hierarchy process (AHP) and Bayesian belief networks (BBN) is proposed to overcome the demerits of the conventional methods and to effectively arrive at a final decision (or categorization). By using the AHP and BBN, the expert panel takes part in the early stage of the categorization (that is, the quantification process), and the safety significance based on both probabilistic and deterministic insights is quantified. According to that safety significance, SSCs are quantitatively categorized into three categories: a high safety-significant category (Hi), a potentially safety-significant category (Po), or a low safety-significant category (Lo). The proposed method was applied to components such as CC-V073, CV-V530, and SI-V644 in Ulchin Unit

  9. A method proposal for cumulative environmental impact assessment based on the landscape vulnerability evaluation

    International Nuclear Information System (INIS)

    Pavlickova, Katarina; Vyskupova, Monika

    2015-01-01

Cumulative environmental impact assessment sees only occasional use in practical environmental impact assessment processes. The main reasons are the difficulty of identifying cumulative impacts caused by a lack of data, the inability to measure the intensity and spatial effect of all types of impacts, and the uncertainty of their future evolution. This work presents a method proposal for predicting cumulative impacts on the basis of landscape vulnerability evaluation. For this purpose, a qualitative assessment of landscape ecological stability is conducted, and major vulnerability indicators of environmental and socio-economic receptors are specified and valuated. Potential cumulative impacts and the overall impact significance are predicted quantitatively in modified Argonne multiple matrices, considering the vulnerability of the affected landscape receptors and the significance of the impacts identified individually. The method was employed in a concrete environmental impact assessment process conducted in Slovakia. The results obtained in this case study show that the methodology is simple to apply, valid for all types of impacts and projects, inexpensive, and not time-consuming. The objectivity of the partial methods used in this procedure is improved by quantitative landscape ecological stability evaluation, assignment of weights to vulnerability indicators based on the detailed characteristics of the affected factors, and grading of impact significance. - Highlights: • This paper suggests a method proposal for cumulative impact prediction. • The method includes landscape vulnerability evaluation. • The vulnerability of affected receptors is determined by their sensitivity. • This method can increase the objectivity of impact prediction in the EIA process.
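
As a hedged illustration of the general idea of weighting individually identified impacts by receptor vulnerability, the following sketch is a simplification invented for this summary (the grades, weights, and matrix shape are not from the paper, and the real procedure uses modified Argonne multiple matrices):

```python
import numpy as np

# Hypothetical significance grades (0-5) of individually identified impacts:
# rows = project activities, columns = landscape receptors.
impact_significance = np.array([
    [3, 1, 4],   # e.g. earthworks
    [2, 0, 5],   # e.g. operation traffic
])

# Hypothetical vulnerability weights (0-1) of the affected receptors,
# derived e.g. from an ecological-stability evaluation.
receptor_vulnerability = np.array([0.8, 0.3, 0.6])

# Cumulative impact per receptor: summed grades weighted by vulnerability.
cumulative = impact_significance.sum(axis=0) * receptor_vulnerability
print(cumulative)  # per-receptor cumulative impact scores
```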

  10. A Proposed Arabic Handwritten Text Normalization Method

    Directory of Open Access Journals (Sweden)

    Tarik Abu-Ain

    2014-11-01

Full Text Available Text normalization is an important technique in document image analysis and recognition. It consists of many preprocessing stages, which include slope correction, text padding, skew correction, and straightening of the writing line. In this regard, text normalization plays an important role in many procedures such as text segmentation, feature extraction, and character recognition. In the present article, a new method for text baseline detection, straightening, and slant correction for Arabic handwritten texts is proposed. The method comprises a set of sequential steps: first, component segmentation is performed, followed by component thinning; then, the direction features of the skeletons are extracted and the candidate baseline regions are determined. After that, the correct baseline region is selected, and finally, the baselines of all components are aligned with the writing line. The experiments are conducted on the IFN/ENIT benchmark Arabic dataset. The results show that the proposed method has a promising and encouraging performance.

  11. Evaluation of a proposed optimization method for discrete-event simulation models

    Directory of Open Access Journals (Sweden)

    Alexandre Ferreira de Pinho

    2012-12-01

Full Text Available Optimization methods combined with computer-based simulation have been utilized in a wide range of manufacturing applications. However, in terms of current technology, these methods exhibit low performance levels, as they are only able to manipulate a single decision variable at a time. Thus, the objective of this article is to evaluate a proposed optimization method for discrete-event simulation models, based on genetic algorithms, which is more efficient in terms of computational time when compared to software packages on the market. It should be emphasized that the response quality of the variables will not be altered; that is, the proposed method will maintain the effectiveness of the solutions. The study therefore draws a comparison between the proposed method and a simulation instrument already available on the market that has been examined in the academic literature. Conclusions are presented, confirming the proposed optimization method's efficiency.
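
A minimal sketch of the kind of genetic algorithm such methods build on follows. This is a generic textbook GA, not the authors' implementation; the operators, rates, and test function are assumptions made for illustration:

```python
import random

def genetic_minimize(fitness, n_vars, pop_size=30, generations=60,
                     mutation_rate=0.1, bounds=(-10.0, 10.0), seed=1):
    """Minimal real-coded genetic algorithm: tournament selection,
    one-point crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection: keep the better of two random individuals.
        parents = [min(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        children = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            cut = rng.randrange(1, n_vars) if n_vars > 1 else 0
            child = a[:cut] + b[cut:]
            # Gaussian mutation on each gene with small probability.
            child = [g + rng.gauss(0, 0.5) if rng.random() < mutation_rate else g
                     for g in child]
            # Keep the child plus the better parent of the pair (elitism).
            children.extend([child, min(a, b, key=fitness)])
        pop = children
    return min(pop, key=fitness)

# Usage: minimize the sphere function; the optimum is at the origin.
best = genetic_minimize(lambda x: sum(g * g for g in x), n_vars=2)
print(best)
```

In a simulation-optimization setting, `fitness` would call the discrete-event simulation rather than an analytic function, which is why reducing the number of evaluations matters.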

  12. Thresholds for statistical and clinical significance in systematic reviews with meta-analytic methods

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Wetterslev, Jorn; Winkel, Per

    2014-01-01

BACKGROUND: Thresholds for statistical significance when assessing meta-analysis results are being insufficiently demonstrated by traditional 95% confidence intervals and P-values. Assessment of intervention effects in systematic reviews with meta-analysis deserves greater rigour. METHODS: Methodologies for assessing statistical and clinical significance of intervention effects in systematic reviews were considered. Balancing simplicity and comprehensiveness, an operational procedure was developed, based mainly on The Cochrane Collaboration methodology and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) guidelines. RESULTS: We propose an eight-step procedure for better validation of meta-analytic results in systematic reviews: (1) Obtain the 95% confidence intervals and the P-values from both fixed-effect and random-effects meta-analyses and report the most…
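
Step (1) of the procedure, obtaining both fixed-effect and random-effects estimates, can be sketched as follows. This is a generic inverse-variance and DerSimonian-Laird implementation with invented toy data, not the authors' software:

```python
import math

def meta_analysis(effects, variances):
    """Inverse-variance fixed-effect and DerSimonian-Laird random-effects
    pooled estimates with 95% confidence intervals."""
    k = len(effects)
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # DerSimonian-Laird between-study variance tau^2.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    wr = [1.0 / (v + tau2) for v in variances]
    random_ = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)

    def ci(estimate, weight_sum):
        se = math.sqrt(1.0 / weight_sum)
        return (estimate - 1.96 * se, estimate + 1.96 * se)

    return fixed, ci(fixed, sw), random_, ci(random_, sum(wr))

# Toy data: three trial effect sizes (e.g. log risk ratios) and variances.
fixed, fixed_ci, rand, rand_ci = meta_analysis([0.1, 0.6, 0.2],
                                               [0.01, 0.02, 0.015])
print(fixed, rand)
```

When the studies are heterogeneous (tau^2 > 0), the two pooled estimates diverge, which is exactly why the procedure asks for both.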

  13. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
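
The core of the proposed procedure, measuring a distance between normalized histograms and bootstrapping its null distribution from pooled data, can be sketched as below. This is a simplified illustration using synthetic samples rather than cloud-object measurements:

```python
import numpy as np

def bootstrap_histogram_test(samples_a, samples_b, bins, n_boot=2000, seed=0):
    """Bootstrap significance test for the Euclidean distance between
    two normalized histograms (a simplified sketch of the approach)."""
    rng = np.random.default_rng(seed)

    def hist(x):
        h, _ = np.histogram(x, bins=bins)
        return h / h.sum()

    observed = np.linalg.norm(hist(samples_a) - hist(samples_b))
    # Null hypothesis: both samples come from the same population, so
    # resample both groups from the pooled data.
    pooled = np.concatenate([samples_a, samples_b])
    count = 0
    for _ in range(n_boot):
        ra = rng.choice(pooled, size=len(samples_a), replace=True)
        rb = rng.choice(pooled, size=len(samples_b), replace=True)
        if np.linalg.norm(hist(ra) - hist(rb)) >= observed:
            count += 1
    return observed, count / n_boot   # distance and bootstrap p-value

# Clearly different populations should give a small p-value.
bins = np.linspace(-4, 8, 13)
d, p = bootstrap_histogram_test(np.random.default_rng(1).normal(0, 1, 500),
                                np.random.default_rng(2).normal(2, 1, 500),
                                bins)
print(d, p)
```

Swapping `np.linalg.norm` for a Jeffries-Matusita or Kuiper distance changes only the test statistic, mirroring the comparison carried out in the paper.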

  14. A Proposed Method for Solving Fuzzy System of Linear Equations

    Directory of Open Access Journals (Sweden)

    Reza Kargar

    2014-01-01

Full Text Available This paper proposes a new method for solving a fuzzy system of linear equations with a crisp coefficient matrix and a fuzzy or interval right-hand side. Some conditions for the existence of a fuzzy or interval solution of an m×n linear system are derived, and a practical algorithm is introduced in detail. The method is based on a linear programming problem. Finally, the applicability of the proposed method is illustrated by some numerical examples.
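
The paper's method is formulated as a linear programming problem; as a simpler, hedged illustration of the underlying idea, for a crisp invertible matrix and an interval right-hand side the interval hull of the solution set can be computed directly (the matrix and bounds below are invented):

```python
import numpy as np

# Crisp coefficient matrix and an interval right-hand side [b_lo, b_hi].
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b_lo = np.array([3.0, 4.0])
b_hi = np.array([5.0, 6.0])

# For a crisp, invertible A the solution set {A^-1 b : b in [b_lo, b_hi]}
# has the interval hull  x_mid +/- |A^-1| @ b_rad.
A_inv = np.linalg.inv(A)
b_mid = (b_lo + b_hi) / 2
b_rad = (b_hi - b_lo) / 2
x_mid = A_inv @ b_mid
x_rad = np.abs(A_inv) @ b_rad
print(x_mid - x_rad, x_mid + x_rad)   # lower and upper solution bounds
```

A fuzzy right-hand side is handled by applying the same computation level-cut by level-cut, each alpha-cut being an interval vector.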

  15. Construction Method of Display Proposal for Commodities in Sales Promotion by Genetic Algorithm

    Science.gov (United States)

    Yumoto, Masaki

In a sales promotion task, a wholesaler prepares and presents a display proposal for commodities in order to negotiate with retailers' buyers about which commodities they should sell. To automate sales promotion tasks, the proposal has to be constructed according to the target retailer's buyer. However, it is difficult to construct a proposal suitable for the target retail store because of the enormous number of possible commodity combinations. This paper proposes a construction method based on a genetic algorithm (GA). The proposed method represents initial display proposals for commodities as genes, improves them according to an evaluation value by means of the GA, and rearranges the one with the highest evaluation value according to the classification of commodities. Through a practical experiment, we confirm that a display proposal produced by the proposed method is similar to one constructed by a wholesaler.

  16. Proposal for Requirement Validation Criteria and Method Based on Actor Interaction

    Science.gov (United States)

    Hattori, Noboru; Yamamoto, Shuichiro; Ajisaka, Tsuneo; Kitani, Tsuyoshi

We propose requirement validation criteria and a method based on the interaction between actors in an information system. We focus on the cyclical transitions of one actor's situation against another and clarify observable stimuli and responses based on these transitions. Both actors' situations can be listed in a state transition table, which describes the observable stimuli or responses they send or receive. Examination of the interaction between both actors in the state transition tables enables us to detect missing or defective observable stimuli or responses. Typically, this method can be applied to the examination of the interaction between a resource managed by the information system and its user. As a case study, we analyzed 332 requirement defect reports from an actual system development project in Japan. We found that there were a certain number of defects regarding missing or defective stimuli and responses, which could have been detected using our proposed method had it been used in the requirement definition phase. This means that we can reach a more complete requirement definition with our proposed method.
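
The table-based cross-check of stimuli and responses can be sketched as follows. The actor names, states, and stimuli here are hypothetical, and the real method works on richer state transition tables:

```python
# Hypothetical state-transition tables: (state, received stimulus) ->
# (next state, emitted response).  None means no response is emitted.
user = {
    ("idle", "grant"):     ("working", "ack"),
    ("working", "revoke"): ("idle", None),
}
resource = {
    ("free", "request"): ("busy", "grant"),
    ("busy", "release"): ("free", None),
}

def undefined_stimuli(sender, receiver):
    """Responses emitted by `sender` that `receiver` never handles --
    candidates for missing or defective requirements."""
    handled = {stimulus for (_, stimulus) in receiver}
    emitted = {resp for (_, resp) in sender.values() if resp is not None}
    return sorted(emitted - handled)

# The user emits "ack", but the resource handles no "ack" stimulus:
print(undefined_stimuli(user, resource))   # -> ['ack']
```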

  17. Occult pneumothoraces in Chinese patients with significant blunt chest trauma: radiological classification and proposed clinical significance.

    Science.gov (United States)

    Lee, Ryan K L; Graham, Colin A; Yeung, Janice H H; Ahuja, Anil T; Rainer, Timothy H

    2012-12-01

vs. 1, p=0.02) than apical OPs. All apical and non-apical/basal OPs were successfully managed expectantly without associated mortality. This TCT classification of OP is proposed to help clinicians decide on the subsequent management of the OP. Basal OPs are significantly larger in size, and both basal and bilateral OPs are associated with higher severity of injury and longer hospital stay. These groups of patients may benefit from prophylactic tube thoracostomy instead of conservative treatment. On the other hand, apical and non-apical/basal OPs are smaller in size and occur in less severely injured patients, and thus can be successfully managed expectantly. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Defining and determining the significance of impacts: concepts and methods

    International Nuclear Information System (INIS)

    Christensen, S.W.; Van Winkle, W.; Mattice, J.S.

    1975-01-01

    The term impact is conceptually and mathematically defined to be the difference in the state or value of an ecosystem with versus without the source of impact. Some resulting problems associated with the measurement of impacts based on comparisons of baseline and operational data are discussed briefly. The concept of a significant adverse impact on a biological system is operationally defined in terms of an adverse impact which, according to a proposed decision-tree, justifies rejection of a project or a change in its site, design, or mode of operation. A gradient of increasing difficulty in the prediction of impacts exists as the scope of the assessment is expanded to consider long-term, far-field impacts with respect to higher levels of biological organization (e.g., communities or ecosystems). The analytical methods available for predicting short-term, near-field impacts are discussed. Finally, the role of simulation modeling as an aid to professional judgment in predicting the long-term, far-field consequences of impacts is considered, and illustrated with an example. (U.S.)

  19. Applicability of the proposed evaluation method for social infrastructures to nuclear power plants

    International Nuclear Information System (INIS)

    Ichimura, Tomiyasu

    2015-01-01

This study proposes an evaluation method for social infrastructures and verifies its applicability by applying it to nuclear power plants, which belong to the class of social infrastructures. In the proposed evaluation method, the authors chose four evaluation viewpoints and proposed common evaluation standards for the evaluation indexes obtained from each viewpoint. By applying this system to the evaluation of nuclear power plants, example evaluation indexes were obtained from the evaluation viewpoints. Furthermore, when the level of the common evaluation standards of the proposed method was applied to the evaluation of the regulation-based activities of nuclear power plants, it was confirmed that these activities are at the highest level. Through this application validation, it was clarified that the proposed evaluation method for social infrastructures has a certain effectiveness. The four evaluation viewpoints are 'service,' 'environment,' 'action factor,' and 'operation and management.' Some of the application examples for a nuclear power plant are as follows: (1) from the viewpoint of service: the operation rate of the plant and its operation costs; and (2) from the viewpoint of environment: the external influence related to nuclear waste and radioactivity, and the external effect related to cooling water. (A.O.)

  20. Methodological proposal for environmental impact evaluation since different specific methods

    International Nuclear Information System (INIS)

    Leon Pelaez, Juan Diego; Lopera Arango Gabriel Jaime

    1999-01-01

Some conceptual and practical elements related to environmental impact evaluation are described in relation to the preparation of technical reports (environmental impact studies and environmental management plans) to be presented to environmental authorities in order to obtain environmental permits for development projects. In the first part of the document, the main normative aspects that support environmental impact studies in Colombia are summarized. We propose a diagram for approaching and elaborating the environmental impact evaluation, which begins with the description of the project and of the environmental conditions in its area; the impacts are then identified through a matrix method and subsequently evaluated quantitatively, for which we propose the use of the method developed by Arboleda (1994). We also propose qualifying the activities of the project and the components of the environment by their relative importance, by means of a method here denominated agglomerate evaluation, which allows finding the most impacting activities and the most impacted components. Lastly, some models for the elaboration and presentation of environmental management plans, monitoring programs, and environmental supervision programs are presented.

  1. Finding of No Significant Impact, proposed remediation of the Maybell Uranium Mill Processing Site, Maybell, Colorado

    International Nuclear Information System (INIS)

    1995-01-01

The U.S. Department of Energy (DOE) has prepared an environmental assessment (EA) (DOE/EA-0347) on the proposed surface remediation of the Maybell uranium mill processing site in Moffat County, Colorado. The mill site contains radioactively contaminated materials from processing uranium ore that would be stabilized in place at the existing tailings pile location. Based on the analysis in the EA, DOE has determined that the proposed action does not constitute a major federal action significantly affecting the quality of the human environment within the meaning of the National Environmental Policy Act (NEPA) of 1969, Public Law 91-190 (42 U.S.C. section 4321 et seq.), as amended. Therefore, preparation of an environmental impact statement is not required and DOE is issuing this Finding of No Significant Impact (FONSI).

  2. A proposed assessment method for image of regional educational institutions

    Directory of Open Access Journals (Sweden)

    Kataeva Natalya

    2017-01-01

Full Text Available The market of educational services in the current Russian economic conditions comprises a huge variety of educational institutions. The market of educational services is already experiencing a significant influence from the demographic situation in Russia. This means that higher education institutions are forced to fight in tough competition for high school students. Increased competition in the educational market forces universities to find new methods of non-price competition to attract potential students throughout their educational and economic activities. The commercialization of education places universities in a single plane with commercial companies, which study the positive perception of image and reputation as a competitive advantage; this is quite applicable in the strategic and current activities of higher education institutions to ensure the competitiveness of educational services and of the educational institution as a whole. Nevertheless, due to the lack of evidence-based proposals in this area, there is a need for scientific research in terms of justification of the organizational and methodological aspects of using image as a factor in the competitiveness of a higher education institution. Theoretically and practically, there are different methods and ways of evaluating a company's image. The article provides a comparative assessment of existing valuation methods for corporate image and the authors' method for estimating the image of higher education institutions based on the key influencing factors. The method has been tested on the Vyatka State Agricultural Academy (Russia). The results also indicate the strengths and weaknesses of the institution, highlight ways of improvement, and help adjust the efforts for image improvement.

  3. Proposed frustrated-total-reflection acoustic sensing method

    International Nuclear Information System (INIS)

    Hull, J.R.

    1981-01-01

Modulation of electromagnetic energy transmission through a frustrated-total-reflection device by pressure-induced changes in the index of refraction is proposed for use as an acoustic detector. Maximum sensitivity occurs for angles of incidence near the critical angle. The minimum detectable pressure in air is limited by Brownian noise. Acoustic propagation losses and diffraction of the optical beam by the acoustic signal limit the minimum acoustic wavelength to lengths of the order of the spatial extent of the optical beam. The response time of the method is fast enough to follow individual acoustic waves.

  4. Finding of no significant impact proposed remedial action at two uranium processing sites near Slick Rock, Colorado

    International Nuclear Information System (INIS)

    1994-01-01

The U.S. Department of Energy (DOE) has prepared an environmental assessment (EA) (DOE/EA-0339) of the proposed remedial action at two uranium processing sites near Slick Rock in San Miguel County, Colorado. These sites contain radioactively contaminated materials that would be removed and stabilized at a remote location. Based on the information and analyses in the EA, the DOE has determined that the proposed action does not constitute a major Federal action significantly affecting the quality of the human environment within the meaning of the National Environmental Policy Act (NEPA) of 1969 (42 U.S.C. 4321 et seq.), as amended. Therefore, preparation of an environmental impact statement is not required, and the DOE is issuing this Finding of No Significant Impact (FONSI).

  5. European experiences of the proposed ASTM test method for crack arrest toughness of ferritic materials

    International Nuclear Information System (INIS)

    Jutla, T.; Lidbury, D.P.G.; Ziebs, J.; Zimmermann, C.

    1986-01-01

The proposed ASTM test method for measuring the crack arrest toughness of ferritic materials using wedge-loaded, side-grooved, compact specimens was applied to three steels: A514 bridge steel, A588 bridge steel, and A533B pressure vessel steel. Five sets of results from different laboratories are discussed here. Notches were prepared by spark erosion, although root radii varied from ≈0.1–1.5 mm. Although fast fractures were successfully initiated, arrest did not occur in a significant number of cases. The results showed no obvious dependence of crack arrest toughness, K_a (determined by a static analysis), on crack initiation toughness, K_0. It was found that K_a decreases markedly with increasing crack jump distance. A limited amount of further work on smaller specimens of the A533B steel showed that lower K_a values tended to be recorded. It is concluded that a number of points relating to the proposed test method and notch preparation are worthy of further consideration. It is pointed out that the proposed validity criteria may screen out lower bound data. Nevertheless, for present practical purposes, K_a values may be regarded as useful in providing an estimate of arrest toughness - although not necessarily a conservative estimate. (orig./HP)

  6. Proposal of a method for evaluating tsunami risk using response-surface methodology

    Science.gov (United States)

    Fukutani, Y.

    2017-12-01

Information on probabilistic tsunami inundation hazards is needed to define and evaluate tsunami risk. Several methods for calculating these hazards have been proposed (e.g. Løvholt et al. (2012), Thio (2012), Fukutani et al. (2014), Goda et al. (2015)). However, these methods are inefficient, and their calculation cost is high, since they require multiple tsunami numerical simulations, therefore lacking versatility. In this study, we proposed a simpler method for tsunami risk evaluation using response-surface methodology. Kotani et al. (2016) proposed an evaluation method for the probabilistic distribution of tsunami wave-height using a response-surface methodology. We expanded their study and developed a probabilistic distribution of tsunami inundation depth. We set the depth (x1) and the slip (x2) of an earthquake fault as explanatory variables and tsunami inundation depth (y) as the object variable. Subsequently, tsunami risk could be evaluated by conducting a Monte Carlo simulation, assuming that the generation probability of an earthquake follows a Poisson distribution, the probability distribution of tsunami inundation depth follows the distribution derived from a response surface, and the damage probability of a target follows a log-normal distribution. We applied the proposed method to a wood building located on the coast of Tokyo Bay. We implemented a regression analysis based on the results of 25 tsunami numerical calculations and developed a response surface, defined as y = a*x1 + b*x2 + c (a = 0.2615, b = 3.1763, c = -1.1802). We assumed proper probabilistic distributions for earthquake generation, inundation height, and vulnerability. Based on these probabilistic distributions, we conducted Monte Carlo simulations of 1,000,000 years. We clarified that the expected damage probability of the studied wood building is 22.5%, assuming that an earthquake occurs. The proposed method is therefore a useful and simple way to evaluate tsunami risk using a response
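
Using the response-surface coefficients quoted in the abstract, the Monte Carlo step can be sketched as follows. The occurrence rate, fault-parameter ranges, and fragility-curve parameters below are invented for illustration, so the sketch will not reproduce the paper's 22.5% figure:

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(0)
n_years = 100_000
annual_rate = 0.01           # hypothetical earthquake rate (Poisson occurrence)

# Response surface from the abstract: inundation depth y = a*x1 + b*x2 + c,
# with x1 the fault depth and x2 the slip.
a, b, c = 0.2615, 3.1763, -1.1802

n_events = rng.poisson(annual_rate * n_years)
x1 = rng.uniform(5.0, 15.0, n_events)     # assumed fault-depth range
x2 = rng.uniform(0.5, 3.0, n_events)      # assumed slip range
depth = np.clip(a * x1 + b * x2 + c, 1e-9, None)

# Hypothetical log-normal fragility curve: P(damage | inundation depth).
median, beta = 2.0, 0.5
p_damage = np.array([0.5 * (1 + erf((log(d) - log(median)) / (beta * sqrt(2))))
                     for d in depth])

print(f"events: {n_events}, mean damage probability: {p_damage.mean():.3f}")
```

Because each trial only evaluates the linear response surface rather than running a tsunami simulation, very long simulated histories remain cheap, which is the efficiency argument of the paper.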

  7. Estimation of body fluids with bioimpedance spectroscopy: state of the art methods and proposal of novel methods

    International Nuclear Information System (INIS)

    Buendia, R; Seoane, F; Lindecrantz, K; Bosaeus, I; Gil-Pita, R; Johannsson, G; Ellegård, L; Ward, L C

    2015-01-01

    Determination of body fluids is a common and useful practice in the study of disease mechanisms and treatments. Bioimpedance spectroscopy (BIS) methods are non-invasive, inexpensive and rapid alternatives to reference methods such as tracer dilution. However, they are indirect, and their robustness and validity are unclear. In this article, state-of-the-art methods are reviewed, their drawbacks identified and new methods proposed. All methods were tested on a clinical database of patients receiving growth hormone replacement therapy. Results indicated that most BIS methods are similarly accurate (e.g. < 0.5 ± 3.0% mean percentage difference for total body water) for estimation of body fluids. A new model for calculation is proposed that performs equally well for all fluid compartments (total body water, extra- and intracellular water). It is suggested that the main source of error in extracellular water estimation is anisotropy, in total body water estimation the uncertainty associated with intracellular resistivity, and in determination of intracellular water a combination of both. (paper)

  8. Application of the AHP method to analyze the significance of the factors affecting road traffic safety

    Directory of Open Access Journals (Sweden)

    Justyna SORDYL

    2015-06-01

    Full Text Available Over the past twenty years, the number of vehicles registered in Poland has grown rapidly, while only a relatively small increase in the length of the road network has been observed. The limited capacity of the available infrastructure leads to significant congestion and increases the probability of road accidents. The overall level of road safety depends on many factors: the behavior of road users, infrastructure solutions and the development of automotive technology. A detailed assessment of the importance of the individual elements determining road safety is therefore difficult. The starting point is to organize the factors by grouping them into categories that are components of the DVE (driver - vehicle - environment) system. In this work, the analytic hierarchy process (AHP) method was proposed to analyze the importance of the individual factors affecting road safety. It is a multi-criteria method that allows a hierarchical analysis of the decision process by means of experts’ opinions. Use of the AHP method enabled us to evaluate and rank the factors affecting road safety. This work attempts to link statistical data and surveys in the significance analysis of the elements determining road safety.
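
The core AHP computation the abstract relies on can be sketched as follows: a pairwise comparison matrix of factors is reduced to a priority vector, and a consistency ratio checks the experts' judgments. The example 3x3 matrix (say, driver vs. vehicle vs. environment) is illustrative, not the paper's data:

```python
from math import prod

def ahp_priorities(M):
    """Priority weights and consistency ratio for a pairwise matrix M."""
    n = len(M)
    # geometric-mean approximation of the principal eigenvector
    gm = [prod(row) ** (1.0 / n) for row in M]
    s = sum(gm)
    w = [g / s for g in gm]
    # lambda_max estimate: average of (M w)_i / w_i over all rows
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)                       # consistency index
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
    cr = ci / ri if ri else 0.0                    # consistency ratio (< 0.1 is acceptable)
    return w, cr

# Illustrative judgments: driver 3x more important than vehicle,
# 5x more than environment, vehicle 3x more than environment.
M = [[1, 3, 5],
     [1 / 3.0, 1, 3],
     [1 / 5.0, 1 / 3.0, 1]]
weights, cr = ahp_priorities(M)
```

With these judgments the driver factor receives the largest weight and the consistency ratio stays below the conventional 0.1 acceptance limit.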

  9. Significance and challenges of stereoselectivity assessing methods in drug metabolism

    Directory of Open Access Journals (Sweden)

    Zhuowei Shen

    2016-02-01

    Full Text Available Stereoselectivity in drug metabolism can not only directly influence the pharmacological activity, tolerability, safety, and bioavailability of drugs, but also cause various kinds of drug–drug interactions. Thus, assessing stereoselectivity in drug metabolism is of great significance for pharmaceutical research and development (R&D and rational clinical use. Although there are various methods available for assessing stereoselectivity in drug metabolism, many of them have shortcomings. Indirect chromatographic methods are applicable only to samples with functional groups that can be derivatized or can form a complex with a chiral selector, while direct methods based on chiral stationary phases (CSPs are expensive. As a detector for chromatographic methods, mass spectrometry (MS is highly sensitive and specific, whereas matrix interference remains a challenge to overcome. In addition, the use of nuclear magnetic resonance (NMR and immunoassay in chiral analysis is worth noting. This review presents several typical examples of stereoselective drug metabolism and provides a literature-based evaluation of current chiral analytical techniques to show the significance and challenges of stereoselectivity assessing methods in drug metabolism.

  10. Proposed Sandia frequency shift for anti-islanding detection method based on artificial immune system

    Directory of Open Access Journals (Sweden)

    A.Y. Hatata

    2018-03-01

    Full Text Available Sandia frequency shift (SFS is one of the active anti-islanding detection methods that rely on frequency drift to detect an islanding condition for inverter-based distributed generation. The non-detection zone (NDZ of the SFS method depends to a great extent on its parameters, and improper adjustment of these parameters may cause the method to fail. This paper presents a proposed artificial immune system (AIS-based technique to obtain optimal parameters of the SFS anti-islanding detection method. The immune system is highly distributed, highly adaptive, and self-organizing in nature; it maintains a memory of past encounters and has the ability to continually learn about new ones. The proposed method generates less total harmonic distortion (THD than the conventional SFS, which results in faster island detection and a smaller non-detection zone. The performance of the proposed method is derived analytically and simulated using Matlab/Simulink. Two case studies are used to verify the proposed method: the first includes a photovoltaic (PV system connected to the grid and the second a wind turbine connected to the grid. The deduced optimized parameter setting helps to achieve the “non-islanding inverter” as well as the least potential adverse impact on power quality. Keywords: Anti-islanding detection, Sandia frequency shift (SFS, Non-detection zone (NDZ, Total harmonic distortion (THD, Artificial immune system (AIS, Clonal selection algorithm
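
The clonal selection algorithm named in the keywords can be sketched as below for tuning the two SFS parameters (initial chopping fraction cf0 and frequency gain k). The objective function here is an invented stand-in for the paper's real analytical trade-off between NDZ size and harmonic distortion; only the algorithmic skeleton (select, clone in proportion to fitness, hypermutate, reselect) follows clonal selection:

```python
import random

def objective(cf0, k):
    """Illustrative stand-in for the real trade-off: penalize NDZ size
    (shrinks with the gain k) plus harmonic distortion (grows with cf0, k)."""
    ndz = 1.0 / (1.0 + 10.0 * k)
    thd = 0.05 * cf0 + 0.02 * k
    return ndz + thd

def clonal_selection(pop=20, gens=50, seed=7):
    """Minimize objective() over the (cf0, k) box with clonal selection."""
    rng = random.Random(seed)
    ab = [(rng.uniform(0.0, 0.1), rng.uniform(0.0, 0.5))
          for _ in range(pop)]              # antibodies: candidate (cf0, k)
    for _ in range(gens):
        ab.sort(key=lambda p: objective(*p))
        clones = []
        for rank, (cf0, k) in enumerate(ab[:pop // 2]):
            for _ in range(pop // 2 - rank):      # better antibodies clone more
                r = 0.05 * (rank + 1)             # hypermutation radius grows with rank
                clones.append((min(max(cf0 + rng.gauss(0, r * 0.1), 0.0), 0.1),
                               min(max(k + rng.gauss(0, r), 0.0), 0.5)))
        ab = sorted(ab + clones, key=lambda p: objective(*p))[:pop]
    return ab[0]

best_cf0, best_k = clonal_selection()
```

In the paper the fitness would instead be evaluated from the analytical NDZ/THD model of the inverter, not this toy surface.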

  11. A Proposal of Operational Risk Management Method Using FMEA for Drug Manufacturing Computerized System

    Science.gov (United States)

    Takahashi, Masakazu; Nanba, Reiji; Fukue, Yoshinori

    This paper proposes an operational Risk Management (RM) method using Failure Mode and Effects Analysis (FMEA) for drug manufacturing computerized systems (DMCS). The quality of a drug must not be influenced by failures or operational mistakes of the DMCS. To avoid such situations, a DMCS has to undergo a thorough risk assessment and appropriate precautions must be taken. We propose an operational RM method using FMEA for DMCS. To develop the method, we gathered and compared FMEA results for DMCS and built a list of failure modes, failures, and countermeasures. By applying this list, we can conduct RM in the design phase, find failures, and take countermeasures efficiently. Additionally, we found some failures that had not been identified before.
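
A minimal FMEA worksheet of the kind the list above would feed can be sketched as follows; failure modes are ranked by Risk Priority Number (RPN = severity x occurrence x detection, each conventionally rated 1-10). The listed failure modes and ratings are invented for illustration, not taken from the paper:

```python
# Illustrative FMEA entries for a drug manufacturing computerized system
failure_modes = [
    {"mode": "wrong recipe parameter loaded", "sev": 9, "occ": 3, "det": 4},
    {"mode": "batch record not saved",        "sev": 7, "occ": 2, "det": 2},
    {"mode": "operator skips e-signature",    "sev": 5, "occ": 5, "det": 3},
]

def rpn(fm):
    """Risk Priority Number: severity x occurrence x detection."""
    return fm["sev"] * fm["occ"] * fm["det"]

# Highest-RPN items are addressed first in the design phase
ranked = sorted(failure_modes, key=rpn, reverse=True)
for fm in ranked:
    print(f'{rpn(fm):3d}  {fm["mode"]}')
```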

  12. A proposal on evaluation method of neutron absorption performance to substitute conventional neutron attenuation test

    International Nuclear Information System (INIS)

    Kim, Je Hyun; Shim, Chang Ho; Kim, Sung Hyun; Choe, Jung Hun; Cho, In Hak; Park, Hwan Seo; Park, Hyun Seo; Kim, Jung Ho; Kim, Yoon Ho

    2016-01-01

    For the verification of newly developed neutron absorbers, one of the guidelines on their qualification and acceptance is the neutron attenuation test. However, this approach poses a problem for qualification: it cannot distinguish how neutrons are attenuated by the material. In this study, an estimation method for the neutron absorption performance of materials is proposed that detects both directly penetrating and back-scattered neutrons. For the verification of the proposed method, MCNP simulations of the experimental system designed in this study were carried out using polyethylene, iron, normal glass and a vitrified form. The results show that neutron absorption ability can easily be tested using a single-absorber model. Also, the simulation results of the single-absorber and double-absorber models verify that the proposed method can evaluate not only the thermal neutrons passing directly through the materials, but also the scattered neutrons reflected from them. Therefore, neutron absorption performance can be estimated more accurately with the proposed method than with the conventional neutron attenuation test. It is expected that the proposed method can contribute to increasing the reliability of the performance of neutron absorbers.

  13. A proposal on evaluation method of neutron absorption performance to substitute conventional neutron attenuation test

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Je Hyun; Shim, Chang Ho [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Kim, Sung Hyun [Nuclear Fuel Cycle Waste Treatment Research Division, Research Reactor Institute, Kyoto University, Osaka (Japan); Choe, Jung Hun; Cho, In Hak; Park, Hwan Seo [Ionizing Radiation Center, Nuclear Fuel Cycle Waste Treatment Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Park, Hyun Seo; Kim, Jung Ho; Kim, Yoon Ho [Ionizing Radiation Center, Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2016-12-15

    For the verification of newly developed neutron absorbers, one of the guidelines on their qualification and acceptance is the neutron attenuation test. However, this approach poses a problem for qualification: it cannot distinguish how neutrons are attenuated by the material. In this study, an estimation method for the neutron absorption performance of materials is proposed that detects both directly penetrating and back-scattered neutrons. For the verification of the proposed method, MCNP simulations of the experimental system designed in this study were carried out using polyethylene, iron, normal glass and a vitrified form. The results show that neutron absorption ability can easily be tested using a single-absorber model. Also, the simulation results of the single-absorber and double-absorber models verify that the proposed method can evaluate not only the thermal neutrons passing directly through the materials, but also the scattered neutrons reflected from them. Therefore, neutron absorption performance can be estimated more accurately with the proposed method than with the conventional neutron attenuation test. It is expected that the proposed method can contribute to increasing the reliability of the performance of neutron absorbers.

  14. The Box-and-Dot Method: A Simple Strategy for Counting Significant Figures

    Science.gov (United States)

    Stephenson, W. Kirk

    2009-01-01

    A visual method for counting significant digits is presented. This easy-to-learn (and easy-to-teach) method, designated the box-and-dot method, uses the device of "boxing" significant figures based on two simple rules, then counting the number of digits in the boxes. (Contains 4 notes.)
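
The rules the box-and-dot device encodes are the standard significant-figure rules: nonzero digits and captive zeros always count, leading zeros never count, and trailing zeros count only when the number contains a decimal point (the "dot"). A small function implementing those rules (an illustration, not taken from the article) might look like:

```python
def sig_figs(s: str) -> int:
    """Count significant figures in a plain decimal numeral string."""
    s = s.lstrip("+-")
    digits = s.replace(".", "")
    stripped = digits.lstrip("0")          # leading zeros never count
    if "." not in s:
        stripped = stripped.rstrip("0")    # no dot: trailing zeros don't count
    return len(stripped)

# Textbook-style cases and their expected counts
examples = {"0.00520": 3, "4003": 4, "1200": 2, "1200.": 4, "0.0070": 2}
```

Note that "1200" is ambiguous in practice (scientific notation resolves it); the rule implemented here is the conservative one taught with the dot convention.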

  15. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Noh, Jae Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-08-15

    The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency and user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method are considerably high. Also, the local packing fraction results show that the proposed method successfully solves the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
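
The basic chord length sampling idea (the conventional method the paper improves on) can be sketched in one dimension of travel: sample the flight distance to the next sphere from an exponential law whose mean is the standard matrix mean free path 4r(1-p)/(3p), then sample a chord through the sphere at an area-uniform impact parameter. The fraction of path length spent inside spheres should then approach the packing fraction p. This is a generic textbook sketch, not the paper's corrected algorithm:

```python
import random

def cls_track(radius, packing, length, rng):
    """One particle track through a binary stochastic medium via chord
    length sampling; returns the fraction of the track inside spheres."""
    # matrix mean free path between sphere encounters
    lam = 4.0 * radius * (1.0 - packing) / (3.0 * packing)
    inside = 0.0
    x = 0.0
    while x < length:
        x += rng.expovariate(1.0 / lam)      # flight through the matrix
        if x >= length:
            break
        # chord through a sphere hit at an area-uniform impact parameter b
        b = radius * (rng.random() ** 0.5)
        chord = 2.0 * (radius * radius - b * b) ** 0.5
        inside += chord
        x += chord
    return inside / length
```

In an infinite medium the expected fraction equals the packing fraction; near boundaries this naive sampling is biased, which is exactly the boundary effect the paper's correction addresses.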

  16. A proposal on alternative sampling-based modeling method of spherical particles in stochastic media for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Lee, Jae Yong; Kim, Do Hyun; Kim, Jong Kyung; Noh, Jae Man

    2015-01-01

    The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency and user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method are considerably high. Also, the local packing fraction results show that the proposed method successfully solves the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.

  17. Validation of a method for assessing resident physicians' quality improvement proposals.

    Science.gov (United States)

    Leenstra, James L; Beckman, Thomas J; Reed, Darcy A; Mundell, William C; Thomas, Kris G; Krajicek, Bryan J; Cha, Stephen S; Kolars, Joseph C; McDonald, Furman S

    2007-09-01

    Residency programs involve trainees in quality improvement (QI) projects to evaluate competency in systems-based practice and practice-based learning and improvement, but valid approaches to assess QI proposals are lacking. We developed an instrument for assessing resident QI proposals - the Quality Improvement Proposal Assessment Tool (QIPAT-7) - and determined its validity and reliability. QIPAT-7 content was initially obtained from a national panel of QI experts. Through an iterative process, the instrument was refined, pilot-tested, and revised. Seven raters used the instrument to assess 45 resident QI proposals. Principal factor analysis was used to explore the dimensionality of instrument scores. Cronbach's alpha and intraclass correlations were calculated to determine internal consistency and interrater reliability, respectively. QIPAT-7 items comprised a single factor (eigenvalue = 3.4), suggesting a single assessment dimension. Interrater reliability for each item (range 0.79 to 0.93) and internal consistency reliability among the items (Cronbach's alpha = 0.87) were high. This method for assessing resident physician QI proposals is supported by content and internal-structure validity evidence. QIPAT-7 is a useful tool for assessing resident QI proposals. Future research should determine the reliability of QIPAT-7 scores in other residency and fellowship training programs. Correlations should also be made between assessment scores and criteria for QI proposal success such as implementation of QI proposals, resident scholarly productivity, and improved patient outcomes.
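
The internal-consistency statistic reported above (Cronbach's alpha = 0.87) is computed from an items-by-subjects score matrix; a compact implementation, with made-up rating data, might look like:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha; scores[i][j] = score of subject j on item i."""
    k = len(scores)                        # number of items
    n = len(scores[0])                     # number of subjects
    def var(xs):                           # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var(item) for item in scores)
    totals = [sum(scores[i][j] for i in range(k)) for j in range(n)]
    return k / (k - 1) * (1 - item_vars / var(totals))

# Three highly-consistent items rated on five hypothetical proposals
scores = [[1, 2, 3, 4, 5],
          [1, 2, 3, 4, 5],
          [2, 2, 3, 4, 4]]
alpha = cronbach_alpha(scores)
```

Values near 1 indicate the items measure a single dimension, consistent with the single-factor result the abstract reports.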

  18. Proposal of Constraints Analysis Method Based on Network Model for Task Planning

    Science.gov (United States)

    Tomiyama, Tomoe; Sato, Tatsuhiro; Morita, Toyohisa; Sasaki, Toshiro

    Deregulation has been accelerating several activities toward reengineering business processes, such as railway through service and the modal shift in logistics. To make those activities successful, business entities have to establish new business rules or know-how (we call them ‘constraints’). According to the new constraints, they need to manage business resources such as instruments, materials, workers and so on. In this paper, we propose a constraint analysis method for defining constraints for task planning of the new business processes. To visualize each constraint's influence on planning, we propose a network model which represents allocation relations between tasks and resources. The network can also represent task ordering relations and resource grouping relations. The proposed method formalizes the manual process of defining constraints as repeatedly checking the network structure and finding conflicts between constraints. Application to crew scheduling problems shows that the method can adequately represent and define the constraints of task planning problems with the following fundamental features: (1) specifying a work pattern for some resources, (2) restricting the number of resources for some works, (3) requiring multiple resources for some works, (4) prior allocation of some resources to some works, and (5) considering the workload balance between resources.

  19. Human Body 3D Posture Estimation Using Significant Points and Two Cameras

    Science.gov (United States)

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras, without using any depth sensors. The 3D significant body points located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also the included angle between pixels in the current frame and the background, in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations from the 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method performs better than other gray-level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422

  20. Proposal for an Evaluation Method for the Performance of Work Procedures.

    Science.gov (United States)

    Mohammed, Mouda; Mébarek, Djebabra; Wafa, Boulagouas; Makhlouf, Chati

    2016-12-01

    Noncompliance of operators with work procedures is a recurrent problem. This human behavior is situational and has been studied by many different approaches (ergonomic and others), which take noncompliance with work procedures as given and seek to analyze its causes as well as its consequences. The aim of the proposed method is to address this problem by focusing on the performance of the work procedures themselves and ensuring its improvement on a continuous basis. This study has three main results: (1) assessment of work procedure performance by a multicriteria approach; (2) use of a continuous improvement approach as a framework for the sustainability of the assessment method; and (3) adaptation of the Stop-Card as a facilitating support for the continuous improvement of work procedures. In contrast to the conventional approaches, which accept noncompliance with work procedures as evident and analyze the cause-effect relationships of this unacceptable phenomenon, especially in strategic industries, the proposed method emphasizes the value added by continuous improvement of the work procedures themselves.

  1. 78 FR 12684 - Proposed Significant New Use Rules on Certain Chemical Substances

    Science.gov (United States)

    2013-02-25

    ..., importers, or processors of one or more subject chemical substances (NAICS codes 325 and 324110), e.g... a use changes the type or form of exposure of human beings or the environment to a chemical... assigned in the regulatory text section of this proposed rule. This proposed rule includes 14 PMN...

  2. Proposed method to calculate FRMAC intervention levels for the assessment of radiologically contaminated food and comparison of the proposed method to the U.S. FDA's method to calculate derived intervention levels

    Energy Technology Data Exchange (ETDEWEB)

    Kraus, Terrence D.; Hunt, Brian D.

    2014-02-01

    This report reviews the method recommended by the U.S. Food and Drug Administration (FDA) for calculating Derived Intervention Levels (DILs) and identifies potential improvements to the DIL calculation method to support more accurate ingestion pathway analyses and protective action decisions. Further, this report proposes an alternate method for use by the Federal Radiological Monitoring and Assessment Center (FRMAC) to calculate FRMAC Intervention Levels (FILs). The default approach of the FRMAC during an emergency response is to use the FDA-recommended methods. However, FRMAC recommends implementing the FIL method because we believe it to be more technically accurate. FRMAC will only implement the FIL method when approved by the FDA representative on the Federal Advisory Team for Environment, Food, and Health.

  3. Disintegration of sublingual tablets: proposal for a validated test method and acceptance criterion.

    Science.gov (United States)

    Weda, M; van Riet-Nales, D A; van Aalst, P; de Kaste, D; Lekkerkerker, J F F

    2006-12-01

    In the Netherlands the market share of isosorbide dinitrate 5 mg sublingual tablets is dominated by two products (A and B). In the last few years complaints have been received from health care professionals about product B: during patient use, the disintegration of the tablet was reported to be slow and/or incomplete, and ineffectiveness was experienced. The European Pharmacopoeia (Ph. Eur.) contains no requirement for the disintegration time of sublingual tablets. The purpose of this study was to compare the in vitro disintegration times of products A and B, and to establish a suitable test method and acceptance criterion. A and B were tested with the Ph. Eur. method described in the monograph on disintegration of tablets and capsules, as well as with three modified tests using the same Ph. Eur. apparatus but without movement of the basket-rack assembly. In modified tests 1 and 2, water was used as the medium (900 ml and 50 ml, respectively), whereas in modified test 3 artificial saliva was used (50 ml). In addition, disintegration was tested in Nessler tubes with 0.5 and 2 ml of water. Finally, the Ph. Eur. method was also applied to other sublingual tablets with other drug substances on the Dutch market. With modified test 3, no disintegration could be achieved within 20 min. With the Ph. Eur. method and modified tests 1 and 2, products A and B differed significantly in disintegration times; these three methods were capable of discriminating between products and between batches. The time measured with the Ph. Eur. method was significantly lower than with modified tests 1 and 2. For sublingual tablets, the disintegration time should be tested, and the Ph. Eur. method is considered suitable for this test. In view of the products currently on the market, and taking into consideration the requirements in the United States Pharmacopeia and the Japanese Pharmacopoeia, an acceptance criterion of not more than 2 min is proposed.

  4. Finding of no significant impact proposed corrective action for the Northeast Site at the Pinellas Plant in Largo, Florida

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-06-01

    The U.S. Department of Energy (DOE) has prepared an Environmental Assessment (EA) (DOE/EA-0976) of the proposed corrective action for the Northeast Site at the Pinellas Plant in Largo, Florida. The Northeast Site contains contaminated groundwater that would be removed, treated, and discharged to the Pinellas County Sewer System. Based on the analyses in the EA, the DOE has determined that the proposed action is not a major Federal action significantly affecting the quality of the human environment, within the meaning of the National Environmental Policy Act of 1969 (NEPA), 42 U.S.C. 4321 et seq. Therefore, the preparation of an environmental impact statement is not required and the DOE is issuing this Finding of No Significant Impact (FONSI).

  5. New clinical validation method for automated sphygmomanometer: a proposal by Japan ISO-WG for sphygmomanometer standard.

    Science.gov (United States)

    Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio

    2007-12-01

    Owing to fast or stepwise cuff deflation, or measurement at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, namely two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for the suspect accuracy caused by lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing its immunity to LD and BPV with those of the current validation methods (methods 1 and 2). The validation accuracy of the above three methods was assessed in human participants [N = 120, 45 ± 15.3 years (mean ± SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting their higher reproducibility of validation. The SD1 by method 2 significantly correlated with the participants' BP (P = 0.004), supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P = 0.0044) smaller interparticipant SD of device error (SD2), suggesting its higher interparticipant consistency of validation. Among the methods of validation of the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, can be proposed as the most appropriate.

  6. Improved Object Proposals with Geometrical Features for Autonomous Driving

    Directory of Open Access Journals (Sweden)

    Yiliu Feng

    2017-01-01

    Full Text Available This paper aims at generating high-quality object proposals for object detection in autonomous driving. Most existing proposal generation methods are designed for general object detection and may not perform well in a particular scene. We propose several geometrical features suited to autonomous driving and integrate them into state-of-the-art general proposal generation methods. In particular, we formulate the integration as a feature fusion problem, fusing the geometrical features with existing proposal generation methods in a Bayesian framework. Experiments on the challenging KITTI benchmark demonstrate that our approach improves the existing methods significantly. Combined with a convolutional neural network detector, our approach achieves state-of-the-art performance on all three KITTI object classes.
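
The Bayesian fusion idea can be sketched as a reranking step: treat the generic objectness score as a prior and multiply it by likelihoods of simple geometric features that are informative in road scenes (here, normalized box height and bottom-edge position). The features and Gaussian parameters below are illustrative guesses, not the ones learned in the paper:

```python
import math

def fused_score(objectness, box, img_h):
    """Posterior-style score: objectness prior x geometric likelihoods."""
    x1, y1, x2, y2 = box
    height = (y2 - y1) / img_h           # normalized box height
    bottom = y2 / img_h                  # normalized bottom-edge position
    def gauss(v, mu, sigma):             # unnormalized Gaussian likelihood
        return math.exp(-0.5 * ((v - mu) / sigma) ** 2)
    # cars/pedestrians tend to have moderate height and sit low in the image
    return objectness * gauss(height, 0.25, 0.15) * gauss(bottom, 0.75, 0.2)

# Rerank two proposals in a 400-px-tall frame: a road-plausible box
# overtakes a sky-region box that has slightly higher raw objectness.
boxes = [(0.60, (100, 10, 200, 60)),     # near the top of the image
         (0.55, (100, 250, 220, 380))]   # sits on the road surface
ranked = sorted(boxes, key=lambda b: fused_score(b[0], b[1], 400),
                reverse=True)
```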

  7. The Proposal to “Snapshot” Raim Method for Gnss Vessel Receivers Working in Poor Space Segment Geometry

    Directory of Open Access Journals (Sweden)

    Nowak Aleksander

    2015-12-01

    Full Text Available Nowadays, we can observe an increase in research on the use of small unmanned autonomous vessels (SUAVs) to patrol and guard critical areas, including harbours. A proposal for a “snapshot” RAIM (Receiver Autonomous Integrity Monitoring) method for GNSS receivers mounted on SUAVs operating in poor space-segment geometry is presented in the paper. Existing “snapshot” RAIM methods and algorithms used in practical applications have been developed for airborne receivers, and thus two main assumptions have been made. The first is that the geometry of the visible satellites is strong, meaning that the exclusion of any satellite from the positioning solution does not cause significant deterioration of the Dilution of Precision (DOP) coefficients. The second is that only one outlier can appear in the pseudorange measurements. For an SUAV operating in a harbour, these two assumptions cannot be accepted: because of its small dimensions, the GNSS antenna is only a few decimetres above sea level, and regular ships, buildings and harbour facilities block and reflect satellite signals. Thus, a different approach to “snapshot” RAIM is necessary. A method based on analysis of the maximal allowable separation of positioning sub-solutions, using some information from EGNOS messages, is described in the paper. Theoretical assumptions and results of numerical experiments are presented.
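
The separation of positioning sub-solutions that the method analyzes can be illustrated with a generic solution-separation sketch in 2D: compute the all-in-view least-squares fix, recompute it with each ranging source excluded, and flag an integrity failure when the maximal separation exceeds a threshold. The geometry, ranges and threshold below are toy values, and this is the generic scheme rather than the paper's EGNOS-assisted variant:

```python
import math

def ls_fix(beacons, ranges, x0=(0.0, 0.0), iters=10):
    """2D Gauss-Newton least-squares position fix from range measurements."""
    x, y = x0
    for _ in range(iters):
        h11 = h12 = h22 = g1 = g2 = 0.0
        for (bx, by), r in zip(beacons, ranges):
            d = math.hypot(x - bx, y - by)
            ux, uy = (x - bx) / d, (y - by) / d    # unit line-of-sight vector
            res = r - d                            # range residual
            h11 += ux * ux; h12 += ux * uy; h22 += uy * uy
            g1 += ux * res; g2 += uy * res
        det = h11 * h22 - h12 * h12                # solve 2x2 normal equations
        x += (h22 * g1 - h12 * g2) / det
        y += (h11 * g2 - h12 * g1) / det
    return x, y

def raim_flag(beacons, ranges, threshold):
    """Solution-separation RAIM: compare full fix with leave-one-out fixes."""
    full = ls_fix(beacons, ranges)
    sep = 0.0
    for i in range(len(beacons)):
        sub = ls_fix([b for j, b in enumerate(beacons) if j != i],
                     [r for j, r in enumerate(ranges) if j != i])
        sep = max(sep, math.hypot(full[0] - sub[0], full[1] - sub[1]))
    return sep > threshold, sep
```

With a 30 m bias injected into one range, the sub-solution excluding the faulty source drifts away from the contaminated all-in-view fix and the flag is raised; with clean ranges the separations stay near zero.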

  8. Optimal plot size in the evaluation of papaya scions: proposal and comparison of methods

    Directory of Open Access Journals (Sweden)

    Humberto Felipe Celanti

    Full Text Available ABSTRACT Evaluating the quality of scions is extremely important, and it can be done through characteristics of the shoots and roots. This experiment evaluated the height of the aerial part, stem diameter, number of leaves, petiole length and root length of papaya seedlings. Analyses were performed on a blank trial with 240 seedlings of "Golden Pecíolo Curto". The optimum plot size was determined by applying the maximum curvature method, the maximum curvature of the coefficient of variation method, and a newly proposed method, which incorporates bootstrap resampling simulation into the maximum curvature method. According to the results obtained, five is the optimal number of seedlings of papaya "Golden Pecíolo Curto" per plot. The proposed method of bootstrap simulation with replacement provides optimal plot sizes equal to or larger than those of the maximum curvature method, and the same plot size as the maximum curvature of the coefficient of variation method.
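
The maximum-curvature-of-CV calculation that underlies these methods can be sketched as below: fit CV(x) = a/x^b to (plot size, CV) pairs by log-log least squares, locate the point of maximum curvature with the Meier and Lessman expression, and wrap the fit in a bootstrap resampling loop. The CV data are invented for illustration, not the paper's measurements, and the paper's bootstrap operates on the trial data rather than on the fitted pairs:

```python
import math
import random

def fit_max_curvature(sizes, cvs):
    """Fit CV = a / x**b and return the point of maximum curvature."""
    lx = [math.log(x) for x in sizes]
    ly = [math.log(c) for c in cvs]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = -sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
    a = math.exp(my + b * mx)
    # Meier & Lessman (1971) maximum-curvature point for CV = a/x^b
    return (a * a * b * b * (2 * b + 1) / (b + 2)) ** (1 / (2 * b + 2))

sizes = [1, 2, 3, 4, 6, 8, 12, 24]               # plot sizes (basic units)
cvs = [14.2, 10.5, 8.9, 7.8, 6.6, 5.9, 5.0, 3.9]  # illustrative CVs (%)

rng = random.Random(42)
boots = []
for _ in range(200):
    idx = [rng.randrange(len(sizes)) for _ in sizes]   # resample pairs
    if len({sizes[i] for i in idx}) < 2:
        continue                     # need at least two distinct plot sizes
    boots.append(fit_max_curvature([sizes[i] for i in idx],
                                   [cvs[i] for i in idx]))
estimate = sorted(boots)[len(boots) // 2]   # bootstrap median of the optimum
```

The optimum plot size is then read off as the smallest size past the maximum-curvature point, usually rounded up to a whole number of basic units.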

  9. Proposed Project Selection Method for Human Support Research and Technology Development (HSR&TD)

    Science.gov (United States)

    Jones, Harry

    2005-01-01

    The purpose of HSR&TD is to deliver human support technologies to the Exploration Systems Mission Directorate (ESMD) that will be selected for future missions. This requires identifying promising candidate technologies and advancing them in technology readiness until they are acceptable. HSR&TD must select an array of technology development projects, guide them, and either terminate or continue them, so as to maximize the resulting number of usable advanced human support technologies. This paper proposes an effective project scoring methodology to support managing the HSR&TD project portfolio. Researchers strongly disagree as to which technology project selection methods are best, or even whether there are any proven ones. Technology development is risky, and outstanding achievements are rare and unpredictable. There is no simple formula for success. Organizations that are satisfied with their project selection approach typically use a mix of financial, strategic, and scoring methods in an open, established, explicit, formal process. This approach helps to build consensus and develop management insight. It encourages better project proposals by clarifying the desired project attributes. We propose a project scoring technique based on a method previously used in a federal laboratory and supported by recent research. Projects are ranked by their perceived relevance, risk, and return - a new 3 R's. Relevance is the degree to which the project objective supports the HSR&TD goal of developing usable advanced human support technologies. Risk is the estimated probability that the project will achieve its specific objective. Return is the reduction in mission life cycle cost obtained if the project is successful. If the project objective technology performs a new function with no current cost, its return is the estimated cash value of performing the new function. The proposed project selection scoring method includes definitions of the criteria, a project evaluation

  10. Assessment of SKB's proposal for encapsulation

    International Nuclear Information System (INIS)

    Lundin, M.; Gustafsson, Oskar; Broemsen, B. von; Troell, E.

    2001-01-01

    This report accounts for an independent assessment of a proposal regarding the manufacturing of copper canisters, which has been presented by SKB (Swedish Nuclear Fuel and Waste Management Co) in cooperation with MABU Consulting. IVF (The Swedish Institute for Production Engineering Research) has performed the assessment by commission of SKI (Swedish Nuclear Power Inspectorate). IVF generally believes that the proposed method, the recommended manufacturing equipment, and the organisation will most likely allow a functioning manufacture of canisters to be realised. No significant deficiencies have been identified that would mean serious problems during the manufacturing process. In some cases, IVF recommends a further evaluation of the proposed methods and/or equipment; these concerns mainly relate to the welding processes. However, it should be stressed that SKB has emphasised that further investigations will be performed regarding this subject. Furthermore, IVF recommends that the proposed methods and equipment for machining of copper cylinders and for blasting of inserts should be further evaluated.

  11. A proposed architecture and method of operation for improving the protection of privacy and confidentiality in disease registers

    Directory of Open Access Journals (Sweden)

    Churches Tim

    2003-01-01

    Full Text Available Abstract Background Disease registers aim to collect information about all instances of a disease or condition in a defined population of individuals. Traditionally, methods of operating disease registers have required that notifications of cases be identified by unique identifiers such as social security number or national identification number, or by ensembles of non-unique identifying data items, such as name, sex and date of birth. However, growing concern over the privacy and confidentiality aspects of disease registers may hinder their future operation. Technical solutions to these legitimate concerns are needed. Discussion An alternative method of operation is proposed which involves splitting the personal identifiers from the medical details at the source of notification, and separately encrypting each part using asymmetric (public key) cryptographic methods. The identifying information is sent to a single Population Register, and the medical details to the relevant disease register. The Population Register uses probabilistic record linkage to assign a unique personal identification (UPI) number to each person notified to it, although not necessarily to everyone in the entire population. This UPI is shared only with a single trusted third party whose sole function is to translate between this UPI and separate series of personal identification numbers which are specific to each disease register. Summary The system proposed would significantly improve the protection of privacy and confidentiality, while still allowing the efficient linkage of records between disease registers, under the control and supervision of the trusted third party and independent ethics committees. The proposed architecture could accommodate genetic databases and tissue banks as well as a wide range of other health and social data collections.
It is important that proposals such as this are subject to widespread scrutiny by information security experts, researchers and
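
    The split-and-link architecture described above can be sketched as follows. This is a toy walk-through: a keyed hash stands in for the asymmetric encryption a real deployment would use, and the UPI, names, and register names are invented.

```python
import hashlib

# Toy model of the split-notification architecture (illustration only: real
# systems would encrypt each part with public-key cryptography).

def register_pin(upi, register, ttp_secret="ttp-secret"):
    """The trusted third party derives a register-specific PIN from the UPI,
    so individual disease registers cannot link their records directly."""
    return hashlib.sha256(f"{ttp_secret}:{register}:{upi}".encode()).hexdigest()[:8]

notification = {"name": "Jane Doe", "dob": "1970-01-01", "diagnosis": "X"}
identifiers = {k: notification[k] for k in ("name", "dob")}   # -> Population Register
medical = {"diagnosis": notification["diagnosis"]}            # -> disease register

upi = "UPI-000123"   # assigned by the Population Register via record linkage
pins = {reg: register_pin(upi, reg) for reg in ("cancer", "diabetes")}
print(pins["cancer"] != pins["diabetes"])   # PINs are register-specific
```

    Because each register sees only its own PIN series, cross-register linkage is possible only through the trusted third party.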

  12. Comments and Remarks over Classic Linear Loop-Gain Method for Oscillator Design and Analysis. New Proposed Method Based on NDF/RRT

    Directory of Open Access Journals (Sweden)

    J. L. Jimenez-Martin

    2012-04-01

    Full Text Available This paper describes a new method for designing oscillators based on the Normalized Determinant Function (NDF) and Return Relations (RRT). First, a review of the loop-gain method is performed, showing its pros and cons and including some examples exploring the wrong solutions provided by this method. These solutions are wrong because certain conditions have to be fulfilled beforehand in order to obtain the right ones; these conditions are described, and it is demonstrated that NDF analysis is necessary, including the usefulness of the Return Relations (RRT), which are in fact related to the true loop gain. Finally, to conclude the paper, steps for oscillator design and analysis using the proposed NDF/RRT method are presented and compared with the previous wrong solutions, pointing out the new accuracy achieved in the prediction of oscillation frequency and QL. Additional examples of reference plane oscillators (Z/Y/rho) are added, for which application of the loop-gain method is clearly difficult or even impossible; these are solved with the new proposed NDF/RRT method.

  13. Proposal for a Five-Step Method to Elicit Expert Judgment

    Directory of Open Access Journals (Sweden)

    Duco Veen

    2017-12-01

    Full Text Available Elicitation is a commonly used tool to extract viable information from experts. The information held by the expert is extracted and a probabilistic representation of this knowledge is constructed. A promising avenue in psychological research is to incorporate experts' prior knowledge in the statistical analysis. Systematic reviews of the elicitation literature, however, suggest that it might be inappropriate to obtain distributional representations from experts directly. The literature qualifies experts' performance in estimating the elements of a distribution as unsatisfactory, so reliably specifying the essential elements of the parameters of interest in one elicitation step seems implausible. Providing feedback within the elicitation process can enhance the quality of the elicitation, and interactive software can be used to facilitate the feedback. Therefore, we propose to decompose the elicitation procedure into smaller steps with adjustable outcomes. We represent the tacit knowledge of experts as a location parameter and their uncertainty concerning this knowledge by a scale and a shape parameter. Using a feedback procedure, experts can accept the representation of their beliefs or adjust their input. We propose a Five-Step Method which consists of (1) eliciting the location parameter using the trial roulette method; (2) providing feedback on the location parameter and asking for confirmation or adjustment; (3) eliciting the scale and shape parameters; (4) providing feedback on the scale and shape parameters and asking for confirmation or adjustment; (5) using the elicited and calibrated probability distribution in a statistical analysis and updating it with data, or computing a prior-data conflict, within a Bayesian framework. User feasibility and internal validity of the Five-Step Method are investigated using three elicitation studies.
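
    Step (1), the trial roulette, can be sketched as follows: the expert distributes chips over value bins, and the chip histogram is summarized into a location and scale for the feedback steps. The bin values and chip counts are invented for illustration.

```python
import math

# Trial roulette sketch: the expert's chips per bin form a histogram that is
# summarized by a location (mean) and scale (sd). All numbers are invented.

bins  = [10, 20, 30, 40, 50]   # candidate values of the parameter
chips = [1, 3, 8, 5, 3]        # expert's chips per bin (20 in total)

n = sum(chips)
location = sum(b * c for b, c in zip(bins, chips)) / n
variance = sum(c * (b - location) ** 2 for b, c in zip(bins, chips)) / n
scale = math.sqrt(variance)
print(location, round(scale, 1))   # fed back to the expert for confirmation
```

    In the feedback steps the expert would see a distribution with this location and scale and either confirm it or move chips and repeat.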

  14. Impact significance determination-Pushing the boundaries

    International Nuclear Information System (INIS)

    Lawrence, David P.

    2007-01-01

    Impact significance determination practice tends to be highly variable. Too often, insufficient consideration is given to good-practice insights. Also, impact significance determinations are frequently narrowly defined, addressing, for example, only individual negative impacts, focusing on bio-physical impacts, and not seeking to integrate either the Precautionary Principle or sustainability. This article seeks to extend the boundaries of impact significance determination practice by providing an overview of good general impact significance practices, together with stakeholder roles and potential methods for addressing significance determination challenges. Relevant thresholds, criteria, contextual considerations and support methods are also highlighted. The analysis is then extended to address how impact significance determination practices change for positive as compared with negative impacts, for cumulative as compared with individual impacts, for socio-economic as compared with bio-physical impacts, when the Precautionary Principle is integrated into the process, and when sustainability contributions drive the EIA process and related impact significance determinations. These refinements can assist EIA practitioners in ensuring that the scope and nature of impact significance determinations reflect the broadened scope of emerging EIA requirements and practices. Suggestions are included for further refining and testing of the proposed changes to impact significance determination practice.

  15. A Proposal of New Spherical Particle Modeling Method Based on Stochastic Sampling of Particle Locations in Monte Carlo Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Kim, Do Hyun; Kim, Jong Kyung [Hanyang Univ., Seoul (Korea, Republic of); Noh, Jea Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Owing to its high computational efficiency and user convenience, the implicit method has received attention; however, the implicit method of previous studies has low accuracy at high packing fractions. In this study, a new implicit method, which can be used at any packing fraction with high accuracy, is proposed: an implicit modeling method for spherical-particle-distributed media in MC simulation. A new concept for spherical particle sampling was developed to solve the problems of the previous implicit methods. The sampling method was verified by simulation in infinite and finite media. The results show that particle implicit modeling with the proposed method is performed accurately over the entire range of packing fractions. It is expected that the proposed method can be efficiently utilized for spherical-particle-distributed media such as fusion reactor blankets, VHTR reactors, and shielding analyses.
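
    The flavor of sampling particle locations on the fly can be illustrated with random sequential addition of non-overlapping sphere centres. This is a simplified stand-in, not the authors' algorithm; the radius, box size, and particle count are invented.

```python
import random, math

# Random sequential addition: a candidate centre is accepted only if the new
# sphere would not overlap any previously placed sphere (toy illustration).

def sample_spheres(n, radius, box=1.0, seed=1, max_tries=100000):
    rng = random.Random(seed)
    centres = []
    tries = 0
    while len(centres) < n and tries < max_tries:
        tries += 1
        c = [rng.uniform(radius, box - radius) for _ in range(3)]  # stay in box
        if all(math.dist(c, o) >= 2 * radius for o in centres):    # no overlap
            centres.append(c)
    return centres

centres = sample_spheres(50, 0.05)
packing = len(centres) * (4 / 3) * math.pi * 0.05 ** 3   # fraction of unit box
print(len(centres), round(packing, 3))
```

    At high packing fractions this naive rejection scheme stalls, which is one reason more careful sampling concepts are needed there.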

  16. A proposed safety assurance method and its application to the fusion experimental reactor

    International Nuclear Information System (INIS)

    Okazaki, T.; Seki, Y.; Inabe, T.; Aoki, I.

    1995-01-01

    Importance categorization and hazard identification methods have been proposed for a fusion experimental reactor. A parameter, the system index, is introduced in the categorization method. The relative importance of systems with safety functions can be classified by the magnitude of the system index and by whether or not the system acts as a boundary for radioactive materials. This categorization can be used as the basic principle in determining structural design assessment, seismic design criteria, etc. For hazard identification, the system time-energy matrix is proposed, in which the time and spatial distributions of hazard energies are used. This approach is formulated more systematically than an ad hoc identification of hazard events, and it is useful for selecting the design basis events which are employed in the assessment of safety designs. (orig.)

  17. A Proposal on the Advanced Sampling Based Sensitivity and Uncertainty Analysis Method for the Eigenvalue Uncertainty Analysis

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Song, Myung Sub; Shin, Chang Ho; Noh, Jae Man

    2014-01-01

    In using the perturbation theory, the uncertainty of the response can be estimated by a single transport simulation, and it therefore requires only a small computational load. However, it has the disadvantage that the computational methodology must be modified whenever a different response type is estimated, such as the multiplication factor, flux, or power distribution. Hence, it is suitable for analyzing a few responses with many perturbed parameters. The statistical approach is a sampling-based method which uses cross sections randomly sampled from covariance data to analyze the uncertainty of the response. XSUSA is a code based on this statistical approach. Only the cross sections are modified with the sampling-based method; thus, general transport codes can be directly utilized for the S/U analysis without any code modifications. However, to calculate the uncertainty distribution from the result, the transport simulation must be repeated enough times with randomly sampled cross sections. This inefficiency is known as a disadvantage of the stochastic method. In this study, to increase the estimation efficiency of the sampling-based S/U method, an advanced sampling and estimation method for the cross sections is proposed and verified. The main feature of the proposed method is that the cross section averaged from each single sampled cross section is used. The proposed method was validated against the perturbation theory.
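
    The sampling-based approach itself can be sketched with a toy response in place of a transport run. The cross sections, their standard deviations, and the response function are invented; real analyses sample correlated cross sections from covariance data.

```python
import random, statistics

# Sampling-based S/U sketch: draw cross sections from their (here independent)
# uncertainty distributions and let the spread of the response estimate its
# uncertainty. The response is a toy stand-in for k-eff from a transport run.

def response(sigma_f, sigma_a):
    return sigma_f / sigma_a

rng = random.Random(0)
samples = [response(rng.gauss(1.50, 0.03), rng.gauss(1.00, 0.02))
           for _ in range(2000)]
mean = statistics.mean(samples)
rel_unc = statistics.stdev(samples) / mean   # ~ sqrt(0.02^2 + 0.02^2) = 2.8%
print(round(mean, 3), round(rel_unc, 3))
```

    The cost is the 2000 repeated "transport runs", which is exactly the inefficiency the abstract's averaged-sample idea aims to reduce.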

  18. Significance of human retinal optic disk localization in various retinal eye diseases

    International Nuclear Information System (INIS)

    Basit, A.

    2011-01-01

    The optic disk is one of the prominent features in human fundus images. Automatic localization and segmentation of the optic disk can help in the early diagnosis of diabetic retinopathies and in preventing vision loss. In this paper, a robust method for optic disk detection and extraction of the optic disk boundary is proposed, based on morphological operations, smoothing filters, and the marker-controlled watershed transform. The method has shown significant improvements in terms of detection and boundary extraction of the optic disk. It uses two types of markers: an internal marker and an external marker. These markers first modify the gradient magnitude image, and the watershed transformation is then applied to this modified gradient magnitude image for boundary extraction. The proposed method has an optic disk detection success rate of 100% for the Shifa database and 87.6% for the DIARETDB1 database. It achieved an average overlap of 51.19% for the DIARETDB1 database and 73.98% for the Shifa database, which is higher than current methods. Experimental results clearly demonstrate the efficient performance of the proposed algorithm. (author)
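
    The marker-controlled watershed step can be illustrated with a toy priority-flood implementation. This is a simplified stand-in for the full method; the gradient image and marker positions are invented.

```python
import heapq

def watershed(image, markers):
    """Toy marker-controlled watershed: flood outward from the markers in
    order of gradient magnitude, so basin boundaries settle on high ridges."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in markers.items():
        labels[r][c] = lab
        heapq.heappush(heap, (image[r][c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = labels[r][c]       # inherit nearest basin
                heapq.heappush(heap, (image[nr][nc], nr, nc))
    return labels

# 7x7 toy "gradient magnitude" image: a high-gradient ring marks the disk rim.
G = [[1] * 7 for _ in range(7)]
for r in range(1, 6):
    for c in range(1, 6):
        if r in (1, 5) or c in (1, 5):
            G[r][c] = 9
labels = watershed(G, {(3, 3): 1, (0, 0): 2})   # internal and external markers
print(labels[3][3], labels[0][6])
```

    The internal marker claims the disk interior and the external marker the background; the label boundary falls on the high-gradient ring.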

  19. The Ability of Different Imputation Methods to Preserve the Significant Genes and Pathways in Cancer

    Directory of Open Access Journals (Sweden)

    Rosa Aghdam

    2017-12-01

    Full Text Available Deciphering important genes and pathways from incomplete gene expression data could facilitate a better understanding of cancer. Different imputation methods can be applied to estimate the missing values. In our study, we evaluated various imputation methods for their performance in preserving significant genes and pathways. In the first step, 5% of genes are selected at random under two types of missingness mechanisms, ignorable and non-ignorable, with various missing rates. Next, 10 well-known imputation methods were applied to the complete datasets. The significance analysis of microarrays (SAM) method was applied to detect the significant genes in rectal and lung cancers to showcase the utility of imputation approaches in preserving significant genes. To determine the impact of different imputation methods on the identification of important genes, the chi-squared test was used to compare the proportions of overlaps between significant genes detected from original data and those detected from the imputed datasets. Additionally, the significant genes are tested for their enrichment in important pathways, using the ConsensusPathDB. Our results showed that almost all the significant genes and pathways of the original dataset can be detected in all imputed datasets, indicating that there is no significant difference in the performance of various imputation methods tested. The source code and selected datasets are available on http://profiles.bs.ipm.ir/softwares/imputation_methods/.
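
    One of the simplest members of the imputation family compared above, gene-wise mean imputation, can be sketched as follows. The expression matrix is invented: rows are genes, columns are samples, and None marks a missing value.

```python
import statistics

# Gene-wise mean imputation sketch: each missing value is replaced by the
# mean of that gene's observed values (real studies also use k-NN, SVD, etc.).

expr = [
    [5.0, 5.2, None, 4.9],
    [1.1, None, 1.0, 1.2],
    [8.3, 8.1, 8.4, None],
]

def impute_mean(rows):
    out = []
    for row in rows:
        observed = [v for v in row if v is not None]
        mu = statistics.mean(observed)
        out.append([mu if v is None else v for v in row])
    return out

imputed = impute_mean(expr)
print(imputed[0])
```

    A completed matrix like this is what downstream significance analysis (e.g. SAM) would then be run on.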

  20. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    Science.gov (United States)

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  1. The Ability of Different Imputation Methods to Preserve the Significant Genes and Pathways in Cancer.

    Science.gov (United States)

    Aghdam, Rosa; Baghfalaki, Taban; Khosravi, Pegah; Saberi Ansari, Elnaz

    2017-12-01

    Deciphering important genes and pathways from incomplete gene expression data could facilitate a better understanding of cancer. Different imputation methods can be applied to estimate the missing values. In our study, we evaluated various imputation methods for their performance in preserving significant genes and pathways. In the first step, 5% of genes are selected at random under two types of missingness mechanisms, ignorable and non-ignorable, with various missing rates. Next, 10 well-known imputation methods were applied to the complete datasets. The significance analysis of microarrays (SAM) method was applied to detect the significant genes in rectal and lung cancers to showcase the utility of imputation approaches in preserving significant genes. To determine the impact of different imputation methods on the identification of important genes, the chi-squared test was used to compare the proportions of overlaps between significant genes detected from original data and those detected from the imputed datasets. Additionally, the significant genes are tested for their enrichment in important pathways, using the ConsensusPathDB. Our results showed that almost all the significant genes and pathways of the original dataset can be detected in all imputed datasets, indicating that there is no significant difference in the performance of various imputation methods tested. The source code and selected datasets are available on http://profiles.bs.ipm.ir/softwares/imputation_methods/. Copyright © 2017. Production and hosting by Elsevier B.V.

  2. Identifying significant temporal variation in time course microarray data without replicates

    Directory of Open Access Journals (Sweden)

    Porter Weston

    2009-03-01

    Full Text Available Abstract Background An important component of time course microarray studies is the identification of genes that demonstrate significant time-dependent variation in their expression levels. Until recently, available methods for performing such significance tests required replicates of individual time points. This paper describes a replicate-free method that was developed as part of a study of the estrous cycle in the rat mammary gland in which no replicate data were collected. Results A temporal test statistic is proposed that is based on the degree to which data are smoothed when fit by a spline function. An algorithm is presented that uses this test statistic together with a false discovery rate method to identify genes whose expression profiles exhibit significant temporal variation. The algorithm is tested on simulated data, and is compared with another recently published replicate-free method. The simulated data consist of both genes with known temporal dependencies and genes from a null distribution. The proposed algorithm identifies a larger percentage of the time-dependent genes for a given false discovery rate. Use of the algorithm in a study of the estrous cycle in the rat mammary gland resulted in the identification of genes exhibiting distinct circadian variation. These results were confirmed in follow-up laboratory experiments. Conclusion The proposed algorithm provides a new approach for identifying expression profiles with significant temporal variation without relying on replicates. When compared with a recently published algorithm on simulated data, the proposed algorithm appears to identify a larger percentage of time-dependent genes for a given false discovery rate. The development of the algorithm was instrumental in revealing the presence of circadian variation in the virgin rat mammary gland during the estrous cycle.
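
    The spirit of the replicate-free test can be sketched with a moving-average smooth in place of the paper's spline fit: the statistic is the fraction of variance captured by the smooth, and shuffling the time points provides a null reference. All data here are simulated.

```python
import random, statistics

# Replicate-free temporal-variation sketch (moving average instead of a
# spline): a time-dependent profile is smoothed well, a shuffled one is not.

def smooth_stat(y):
    ybar = statistics.mean(y)
    sm = [statistics.mean(y[max(0, i - 1):i + 2]) for i in range(len(y))]
    total = sum((v - ybar) ** 2 for v in y)
    resid = sum((v - s) ** 2 for v, s in zip(y, sm))
    return 1 - resid / total          # variance fraction captured by the smooth

rng = random.Random(0)
trend = [i * 0.5 + rng.gauss(0, 0.3) for i in range(20)]   # time-dependent gene
stat = smooth_stat(trend)

null = []
for _ in range(200):                  # permutation null: destroy time order
    shuffled = trend[:]
    rng.shuffle(shuffled)
    null.append(smooth_stat(shuffled))
p = sum(s >= stat for s in null) / len(null)
print(round(stat, 2), p)
```

    In the paper the null calibration is handled with a false discovery rate procedure across all genes rather than a per-gene permutation.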

  3. V-amylose structural characteristics, methods of preparation, significance, and potential applications

    CSIR Research Space (South Africa)

    Obiro, WC

    2012-02-01

    Full Text Available , and postprandial hyperglycaemia in diabetics. Various aspects of V-amylose structure, methods of preparation, factors that affect its formation, and the significance and potential applications of the V-amylose complexes are reviewed....

  4. Determining the significance of associations between two series of discrete events: bootstrap methods

    Energy Technology Data Exchange (ETDEWEB)

    Niehof, Jonathan T.; Morley, Steven K.

    2012-01-01

    We review and develop techniques to determine associations between series of discrete events. The bootstrap, a nonparametric statistical method, allows the determination of the significance of associations with minimal assumptions about the underlying processes. We find the key requirement for this method: one of the series must be widely spaced in time to guarantee the theoretical applicability of the bootstrap. If this condition is met, the calculated significance passes a reasonableness test. We conclude with some potential future extensions and caveats on the applicability of these methods. The techniques presented have been implemented in a Python-based software toolkit.
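
    A minimal version of such a bootstrap test might look as follows. The window width, the simulated series, and the resampling details are assumptions, not the authors' exact scheme.

```python
import random

# Bootstrap association sketch: the statistic counts events in series a with
# an event of series b within +/- w; resampling b's inter-event intervals
# builds the null distribution of that count.

def n_assoc(a, b, w):
    return sum(any(abs(t - s) <= w for s in b) for t in a)

rng = random.Random(42)
a = sorted(rng.uniform(0, 1000) for _ in range(30))
b = sorted(t + rng.gauss(0, 2) for t in a[:20])     # 20 genuinely linked events
observed = n_assoc(a, b, w=5)

gaps = [y - x for x, y in zip(b, b[1:])]
null = []
for _ in range(500):
    boot, t = [b[0]], b[0]
    for g in rng.choices(gaps, k=len(b) - 1):       # resampled intervals
        t += g
        boot.append(t)
    null.append(n_assoc(a, boot, w=5))
p = sum(n >= observed for n in null) / len(null)
print(observed, p)
```

    Resampling intervals rather than event times preserves the clustering of the series, which is the minimal-assumption property the abstract emphasizes.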

  5. Proposal and Implementation of a Robust Sensing Method for DVB-T Signal

    Science.gov (United States)

    Song, Chunyi; Rahman, Mohammad Azizur; Harada, Hiroshi

    This paper proposes a sensing method for TV signals of the DVB-T standard to realize effective TV White Space (TVWS) communication. In the TVWS technology trial organized by the Infocomm Development Authority (iDA) of Singapore, detection of the DVB-T signal at a level of -120 dBm over an 8 MHz channel, with a sensing time below 1 second, is required. To fulfill such a strict sensing requirement, we propose a smart sensing method which combines feature detection and energy detection (CFED) and is also characterized by dynamic threshold selection (DTS) based on a threshold table to improve sensing robustness to noise uncertainty. The DTS-based CFED (DTS-CFED) is evaluated by computer simulations and is also implemented in a hardware sensing prototype. The results show that the DTS-CFED achieves a detection probability above 0.9 for a target false alarm probability of 0.1 for DVB-T signals at a level of -120 dBm over an 8 MHz channel with a sensing time equal to 0.1 second.
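
    Energy detection, one ingredient of the CFED scheme, can be sketched as follows. The sample counts, noise levels, and the fixed threshold are invented; in the proposed method the threshold would come from the DTS table.

```python
import random, statistics

# Energy detector sketch: compare the average power over the sensing window
# with a threshold set from the noise-only distribution (toy numbers).

def detect(samples, threshold):
    return statistics.mean(x * x for x in samples) > threshold

rng = random.Random(7)
N = 4000                                           # samples in the window
noise_only = [rng.gauss(0, 1.0) for _ in range(N)]
with_signal = [rng.gauss(0, 1.0) + rng.gauss(0, 0.5) for _ in range(N)]

threshold = 1.1    # in practice selected dynamically from the threshold table
print(detect(noise_only, threshold), detect(with_signal, threshold))
```

    Pure energy detection degrades when the noise floor is uncertain, which is why the abstract pairs it with feature detection and a dynamic threshold.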

  6. Feature selection based on SVM significance maps for classification of dementia

    NARCIS (Netherlands)

    E.E. Bron (Esther); M. Smits (Marion); J.C. van Swieten (John); W.J. Niessen (Wiro); S. Klein (Stefan)

    2014-01-01

    Support vector machine significance maps (SVM p-maps) previously showed clusters of significantly different voxels in dementia-related brain regions. We propose a novel feature selection method for classification of dementia based on these p-maps. In our approach, the SVM p-maps are

  7. New significance test methods for Fourier analysis of geophysical time series

    Directory of Open Access Journals (Sweden)

    Z. Zhang

    2011-09-01

    Full Text Available When one applies the discrete Fourier transform to analyze finite-length time series, discontinuities at the data boundaries will distort its Fourier power spectrum. In this paper, based on a rigorous statistical framework, we present a new significance test method which can extract the intrinsic features of a geophysical time series very well. We show the difference in significance level compared with traditional Fourier tests by analyzing the Arctic Oscillation (AO) and the Nino3.4 time series. In the AO, we find significant peaks at about 2.8, 4.3, and 5.7 yr periods, and in Nino3.4 at about a 12 yr period, in tests against red noise. These peaks are not significant in traditional tests.
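
    A classical red-noise significance test of the kind refined here can be sketched as follows. The AR(1) coefficient, the embedded cycle, and the normalization are illustrative assumptions, not the paper's method.

```python
import cmath, math, random

# Red-noise significance sketch: periodogram peaks are compared with the
# theoretical AR(1) background spectrum scaled by an approximate chi-squared
# 95% factor. All parameters are invented for illustration.

def periodogram(x):
    n = len(x)
    xbar = sum(x) / n
    return [abs(sum((v - xbar) * cmath.exp(-2j * math.pi * k * t / n)
                    for t, v in enumerate(x))) ** 2 / n
            for k in range(1, n // 2)]

rng = random.Random(3)
n, alpha = 128, 0.5
x, prev = [], 0.0
for t in range(n):
    prev = alpha * prev + rng.gauss(0, 1)                 # AR(1) red noise
    x.append(prev + 2 * math.sin(2 * math.pi * t / 16))   # true 16-sample cycle

power = periodogram(x)
var = 2 * sum(power) / n                                  # ~ series variance
background = [var * (1 - alpha ** 2) /
              (1 + alpha ** 2 - 2 * alpha * math.cos(2 * math.pi * k / n))
              for k in range(1, n // 2)]
sig95 = [p > 3.0 * b for p, b in zip(power, background)]  # ~ chi2(2) 95% / 2
print([k + 1 for k, s in enumerate(sig95) if s])          # significant harmonics
```

    The embedded cycle sits at harmonic 8 (128/16) and should stand out above the red-noise background; the paper's contribution is a more careful treatment of boundary discontinuities than this classical test.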

  8. Extrapleural locating method: significance in CT-guided transthoracic pulmonary biopsy

    International Nuclear Information System (INIS)

    Tang Guangjian; Wang Rengui; Liu Jianxin; Sun Jingtao

    2008-01-01

    Objective: To evaluate the usefulness of the extrapleural locating method in CT-guided transthoracic pulmonary biopsy in preventing or reducing the size of pneumothorax. Methods: One hundred and fifteen cases of CT-guided transthoracic pulmonary biopsy, with pulmonary lesions not in direct contact with the pleura, were selected. Of the 115 cases, 46 were performed with the extrapleural locating method (EPL) and 69 with the lesion edge locating method (LEL). The maximum distance between the parietal and visceral pleura (MPVD), measured on the CT image after the procedure, was taken as the index of the volume of pneumothorax. The incidence and volume of pneumothorax in the two groups were compared and statistically analysed with the R x C Chi-square test. The retention time of the biopsy needle in the lung parenchyma was documented and the average calculated for each group. Results: The incidence of pneumothorax was 45.7% (21/46), median 0.4 cm, in the EPL group, and 66.7% (46/69), median 0.3 cm, in the LEL group. When the distance between the lesion and the pleura was 2 cm or less (≤2 cm), the incidence of pneumothorax was 39.4% (13/33) in the EPL group and 73.2% (30/41) in the LEL group, and the difference in incidence and volume of pneumothorax between the two groups was statistically significant (χ² = 9.981, P = 0.019). When the distance was larger than 2 cm (>2 cm), the differences in incidence and volume of pneumothorax between the two groups were not statistically significant. The average retention time of the biopsy needle in the lung parenchyma was (7.2 ± 1.8) s in the EPL group and (58.3 ± 11.6) s in the LEL group. Conclusion: The extrapleural locating method can effectively reduce the retention time of the biopsy needle in the lung parenchyma and the incidence and volume of pneumothorax in CT-guided transthoracic pulmonary biopsy. (authors)

  9. Evaluating statistical and clinical significance of intervention effects in single-case experimental designs: an SPSS method to analyze univariate data.

    Science.gov (United States)

    Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M

    2015-03-01

    Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty of applying existing statistical procedures. In this article, we describe a data-analytic method to analyze univariate (i.e., one symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method. Copyright © 2014. Published by Elsevier Ltd.

  10. Proposal and Evaluation of Management Method for College Mechatronics Education Applying the Project Management

    Science.gov (United States)

    Ando, Yoshinobu; Eguchi, Yuya; Mizukawa, Makoto

    In this research, we proposed and evaluated a management method for college mechatronics education, applying project management techniques. We applied our management method to the seminar "Microcomputer Seminar" for 3rd-grade students who belong to the Department of Electrical Engineering, Shibaura Institute of Technology. We succeeded in the management of the Microcomputer Seminar in 2006 and obtained good evaluations of our management method by means of a questionnaire.

  11. A comparison of published methods of calculation of defect significance

    International Nuclear Information System (INIS)

    Ingham, T.; Harrison, R.P.

    1982-01-01

    This paper presents some of the results obtained in a round-robin calculational exercise organised by the OECD Committee on the Safety of Nuclear Installations (CSNI). The exercise was initiated to examine practical aspects of using documented elastic-plastic fracture mechanics methods to calculate defect significance. The extent to which the objectives of the exercise were met is illustrated using solutions to 'standard' problems produced by UKAEA and CEGB using the methods given in ASME XI, Appendix A, BSI PD6493, and the CEGB R/H/R6 Document. Differences in critical or tolerable defect size defined using these procedures are examined in terms of their different treatments and reasons for discrepancies are discussed. (author)

  12. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    The uncertainty with the sampling-based method is evaluated by repeating transport calculations with a number of cross section data sampled from the covariance uncertainty data. In the transport calculation with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of the responses, such as k-eff, reaction rates, flux, and power distribution, can be obtained directly, all at one time, without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load for statistically reliable results (within a 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. The proposed method was verified on the GODIVA benchmark problem, and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to one active-cycle group in a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by the proposed and previous methods. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of k-eff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method.
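
    The batch-wise assignment of sampled cross-section sets can be illustrated with a toy model. All numbers are invented, and a cheap random tally stands in for transporting neutrons.

```python
import random, statistics

# Toy illustration: instead of one full MC run per sampled cross-section set,
# each batch of a single run uses its own sampled set, and the spread of the
# batch averages reflects the nuclear-data uncertainty.

rng = random.Random(5)
n_batches, histories = 200, 500

batch_keff = []
for _ in range(n_batches):
    xs = rng.gauss(1.00, 0.01)          # one sampled "cross section" per batch
    # cheap stand-in for transporting `histories` neutrons with that data:
    tallies = [xs + rng.gauss(0, 0.05) for _ in range(histories)]
    batch_keff.append(statistics.mean(tallies))

print(round(statistics.mean(batch_keff), 3),
      round(statistics.stdev(batch_keff), 3))
```

    The batch spread combines the sampled-data spread (1%) with the per-batch statistical noise, so the data-induced uncertainty is recovered from a single run.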

  13. Proposal for outline of training and evaluation method for non-technical skills

    International Nuclear Information System (INIS)

    Nagasaka, Akihiko; Shibue, Hisao

    2015-01-01

    The purpose of this study is to systematize measures for the improvement of emergency response capability, focused on non-technical skills. As a result of investigating emergency training at nuclear power plants and referring to CRM training, the following two issues were identified: 1) lack of a practical training method for the improvement of non-technical skills; 2) lack of an evaluation method for non-technical skills. Then, taking the seven non-technical skills that promote emergency response capability ('situational awareness', 'decision making', 'communication', 'teamworking', 'leadership', 'managing stress', and 'coping with fatigue'), we propose a practical training method for each non-technical skill. We also give examples of behavioral markers as evaluation factors and indicate approaches to introducing the evaluation method for non-technical skills. (author)

  14. Generating region proposals for histopathological whole slide image retrieval.

    Science.gov (United States)

    Ma, Yibing; Jiang, Zhiguo; Zhang, Haopeng; Xie, Fengying; Zheng, Yushan; Shi, Huaqiang; Zhao, Yu; Shi, Jun

    2018-06-01

    Content-based image retrieval is an effective method for histopathological image analysis. However, given a database of huge whole slide images (WSIs), acquiring appropriate regions of interest (ROIs) for training is important and difficult. Moreover, histopathological images can only be annotated by pathologists, resulting in a lack of labeling information. Therefore, generating ROIs from WSIs and retrieving images with few labels is an important and challenging task. This paper presents a novel unsupervised region proposing method for histopathological WSIs based on Selective Search. Specifically, the WSI is over-segmented into regions which are hierarchically merged until the WSI becomes a single region. Nucleus-oriented similarity measures for region mergence and a Nucleus-Cytoplasm color space for histopathological images are specially defined to generate accurate region proposals. Additionally, we propose a new semi-supervised hashing method for image retrieval. The semantic features of images are extracted with Latent Dirichlet Allocation and transformed into binary hashing codes with Supervised Hashing. The methods are tested on a large-scale multi-class database of breast histopathological WSIs. The results demonstrate that, for one WSI, our region proposing method can generate 7.3 thousand contoured regions which fit well with 95.8% of the ROIs annotated by pathologists. The proposed hashing method can retrieve a query image among 136 thousand images in 0.29 s and reach a precision of 91% with only 10% of the images labeled. The unsupervised region proposing method can generate regions as predictions of lesions in histopathological WSIs, and the region proposals can also serve as training samples for machine-learning models for image retrieval. The proposed hashing method achieves fast and precise image retrieval with a small number of labels. Furthermore, the proposed methods can potentially be applied in online computer-aided-diagnosis systems.
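Once binary hash codes exist, the retrieval step itself is a Hamming-distance ranking. The snippet below is a generic sketch of that final step only, not the paper's LDA + Supervised Hashing pipeline; the codes and names are illustrative.

```python
import numpy as np

def hamming_retrieve(query_code, db_codes, top_k=5):
    """Rank database items by Hamming distance between binary hash codes."""
    dist = (db_codes != query_code).sum(axis=1)   # bitwise mismatches per item
    return np.argsort(dist, kind='stable')[:top_k]

db = np.array([[0, 1, 1, 0],
               [1, 1, 1, 0],
               [0, 0, 0, 0]])
query = np.array([0, 1, 1, 0])
ranked = hamming_retrieve(query, db, top_k=2)
```

Comparing short binary codes instead of dense features is what makes sub-second retrieval over a hundred thousand images feasible.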

  15. 78 FR 23184 - Proposed Significant New Use Rules on Certain Chemical Substances

    Science.gov (United States)

    2013-04-18

    .... Potentially affected entities may include: Manufacturers, importers, or processors of one or more subject... manufacturing and processing of a chemical substance. The extent to which a use changes the type or form of... information). CFR citation assigned in the regulatory text section of this proposed rule. The regulatory text...

  16. Visual assessment of BIPV retrofit design proposals for selected historical buildings using the saliency map method

    Directory of Open Access Journals (Sweden)

    Ran Xu

    2015-06-01

    Full Text Available With the increasing awareness of energy efficiency, many old buildings have to undergo a massive facade energy retrofit. How to predict the visual impact that solar installations have on the aesthetic and cultural value of these buildings has been a heated debate in Switzerland (and throughout the world). The usual evaluation method for describing the visual impact of BIPV is based on semantic and qualitative descriptors and is strongly dependent on personal preferences; the evaluation scale is therefore relative, flexible and imprecise. This paper proposes a new method to accurately measure the visual impact that BIPV installations have on a historical building by using the saliency map method. By imitating the working principles of the human eye, it measures how much the BIPV design proposals differ from the original building facade in attracting human visual attention. The result is presented directly in a quantitative manner and can be used to compare the fitness of different BIPV design proposals. The measuring process is numeric, objective and more precise.

  17. Significance testing in ridge regression for genetic data

    Directory of Open Access Journals (Sweden)

    De Iorio Maria

    2011-09-01

    Full Text Available Abstract Background Technological developments have increased the feasibility of large scale genetic association studies. Densely typed genetic markers are obtained using SNP arrays, next-generation sequencing technologies and imputation. However, SNPs typed using these methods can be highly correlated due to linkage disequilibrium among them, and standard multiple regression techniques fail with these data sets due to their high dimensionality and correlation structure. There has been increasing interest in using penalised regression in the analysis of high dimensional data. Ridge regression is one such penalised regression technique which does not perform variable selection, instead estimating a regression coefficient for each predictor variable. It is therefore desirable to obtain an estimate of the significance of each ridge regression coefficient. Results We develop and evaluate a test of significance for ridge regression coefficients. Using simulation studies, we demonstrate that the performance of the test is comparable to that of a permutation test, with the advantage of a much-reduced computational cost. We introduce the p-value trace, a plot of the negative logarithm of the p-values of ridge regression coefficients with increasing shrinkage parameter, which enables the visualisation of the change in p-value of the regression coefficients with increasing penalisation. We apply the proposed method to a lung cancer case-control data set from EPIC, the European Prospective Investigation into Cancer and Nutrition. Conclusions The proposed test is a useful alternative to a permutation test for the estimation of the significance of ridge regression coefficients, at a much-reduced computational cost. The p-value trace is an informative graphical tool for evaluating the results of a test of significance of ridge regression coefficients as the shrinkage parameter increases, and the proposed test makes its production computationally feasible.
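As a point of reference for the analytic test described above, the permutation baseline it is compared against can be sketched directly. This is a standard textbook reimplementation, not the authors' code, and the data below are synthetic.

```python
import numpy as np

def ridge_coefficients(X, y, lam):
    """Ridge estimate: beta = (X'X + lam*I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def permutation_pvalues(X, y, lam, n_perm=500, seed=0):
    """Two-sided permutation p-value for each ridge coefficient."""
    rng = np.random.default_rng(seed)
    observed = np.abs(ridge_coefficients(X, y, lam))
    exceed = np.zeros_like(observed)
    for _ in range(n_perm):
        beta_null = ridge_coefficients(X, rng.permutation(y), lam)
        exceed += np.abs(beta_null) >= observed
    return (exceed + 1) / (n_perm + 1)    # add-one correction keeps p > 0

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] + rng.normal(size=100)   # only predictor 0 carries signal
pv = permutation_pvalues(X, y, lam=1.0)
```

Each p-value here costs one ridge fit per permutation, which is exactly the expense the paper's asymptotic test avoids.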

  18. Proposed waste form performance criteria and testing methods for low-level mixed waste

    International Nuclear Information System (INIS)

    Franz, E.M.; Fuhrmann, M.; Bowerman, B.

    1995-01-01

    Proposed waste form performance criteria and testing methods were developed as guidance in judging the suitability of solidified waste as a physico-chemical barrier to releases of radionuclides and RCRA regulated hazardous components. The criteria follow from the assumption that release of contaminants by leaching is the single most important property for judging the effectiveness of a waste form. A two-tier regimen is proposed. The first tier consists of a leach test designed to determine the net, forward leach rate of the solidified waste and a leach test required by the Environmental Protection Agency (EPA). The second tier of tests is to determine if a set of stresses (i.e., radiation, freeze-thaw, wet-dry cycling) on the waste form adversely impacts its ability to retain contaminants and remain physically intact. In the absence of site-specific performance assessments (PA), two generic modeling exercises are described which were used to calculate proposed acceptable leachates

  19. Creep-fatigue evaluation method for weld joint of Mod.9Cr-1Mo steel Part II: Plate bending test and proposal of a simplified evaluation method

    Energy Technology Data Exchange (ETDEWEB)

    Ando, Masanori, E-mail: ando.masanori@jaea.go.jp; Takaya, Shigeru, E-mail: takaya.shigeru@jaea.go.jp

    2016-12-15

    Highlights: • A creep-fatigue evaluation method for weld joints of Mod.9Cr-1Mo steel is proposed. • A simplified evaluation method is also proposed for codification. • Both proposed evaluation methods were validated by the plate bending test. • For codification, the local stress and strain behavior was analyzed. - Abstract: In the present study, to develop an evaluation procedure and design rules for Mod.9Cr-1Mo steel weld joints, a method for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints was proposed based on finite element analysis (FEA) and a series of cyclic plate bending tests of longitudinally and horizontally seamed plates. The strain concentration and redistribution behaviors were evaluated, and the failure cycles were estimated by FEA, considering the test conditions and the metallurgical discontinuities in the weld joints. Inelastic FEA models consisting of the base metal, heat-affected zone and weld metal were employed to estimate the elastic follow-up behavior caused by the metallurgical discontinuities. The elastic follow-up factors, determined by comparing the elastic and inelastic FEA results, were less than 1.5. Based on the elastic follow-up factors obtained via inelastic FEA, a simplified technique using elastic FEA was proposed for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints. The creep-fatigue life obtained in the plate bending test was compared to the lives estimated from the inelastic FEA results and by the simplified evaluation method.

  20. Creep-fatigue evaluation method for weld joint of Mod.9Cr-1Mo steel Part II: Plate bending test and proposal of a simplified evaluation method

    International Nuclear Information System (INIS)

    Ando, Masanori; Takaya, Shigeru

    2016-01-01

    Highlights: • A creep-fatigue evaluation method for weld joints of Mod.9Cr-1Mo steel is proposed. • A simplified evaluation method is also proposed for codification. • Both proposed evaluation methods were validated by the plate bending test. • For codification, the local stress and strain behavior was analyzed. - Abstract: In the present study, to develop an evaluation procedure and design rules for Mod.9Cr-1Mo steel weld joints, a method for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints was proposed based on finite element analysis (FEA) and a series of cyclic plate bending tests of longitudinally and horizontally seamed plates. The strain concentration and redistribution behaviors were evaluated, and the failure cycles were estimated by FEA, considering the test conditions and the metallurgical discontinuities in the weld joints. Inelastic FEA models consisting of the base metal, heat-affected zone and weld metal were employed to estimate the elastic follow-up behavior caused by the metallurgical discontinuities. The elastic follow-up factors, determined by comparing the elastic and inelastic FEA results, were less than 1.5. Based on the elastic follow-up factors obtained via inelastic FEA, a simplified technique using elastic FEA was proposed for evaluating the creep-fatigue life of Mod.9Cr-1Mo steel weld joints. The creep-fatigue life obtained in the plate bending test was compared to the lives estimated from the inelastic FEA results and by the simplified evaluation method.

  1. Proposal of evaluation method of tsunami wave pressure using 2D depth-integrated flow simulation

    International Nuclear Information System (INIS)

    Arimitsu, Tsuyoshi; Ooe, Kazuya; Kawasaki, Koji

    2012-01-01

    To design and construct land structures that resist tsunami forces, it is essential to evaluate tsunami pressure quantitatively. The existing hydrostatic formula, in general, tends to underestimate tsunami wave pressure under inundation flows with a large Froude number. An estimation method for the tsunami pressure acting on a land structure was proposed using the inundation depth and horizontal velocity at the front of the structure, calculated with a 2D depth-integrated flow model based on an unstructured grid system. The comparison between the numerical and experimental results revealed that the proposed method can reasonably reproduce the vertical distribution of the maximum tsunami pressure as well as the time variation of the tsunami pressure exerted on the structure. (author)
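For intuition only, the gap between a purely hydrostatic estimate and one that also uses the flow velocity can be illustrated with a velocity-head term. The formula below is a generic textbook combination of hydrostatic and dynamic head, not the calibrated expression proposed in the paper, and the density value is an assumption.

```python
RHO = 1030.0   # assumed seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def pressure_hydrostatic(z, depth):
    """Hydrostatic pressure at height z above ground for inundation depth h."""
    return max(RHO * G * (depth - z), 0.0)

def pressure_with_velocity(z, depth, velocity):
    """Hydrostatic plus velocity (dynamic) head: p = rho*g*(h + u^2/(2g) - z).
    Illustrates why high-Froude-number flow loads exceed the hydrostatic value."""
    head = depth + velocity**2 / (2.0 * G)
    return max(RHO * G * (head - z), 0.0)

p_static = pressure_hydrostatic(0.0, 2.0)
p_dynamic = pressure_with_velocity(0.0, 2.0, 5.0)   # fast, shallow flow
```

With a 2 m inundation depth and 5 m/s flow, the velocity head adds more than a metre of equivalent depth, which is the kind of shortfall the hydrostatic formula exhibits.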

  2. A Hybrid Swarm Intelligence Algorithm for Intrusion Detection Using Significant Features

    Directory of Open Access Journals (Sweden)

    P. Amudha

    2015-01-01

    Full Text Available Intrusion detection has become a main part of network security due to the huge number of attacks that affect computers, a consequence of the extensive growth of internet connectivity and accessibility to information systems worldwide. To deal with this problem, this paper proposes a hybrid algorithm that integrates a Modified Artificial Bee Colony (MABC) with Enhanced Particle Swarm Optimization (EPSO) for the intrusion detection problem. The algorithms are combined to obtain better optimization results, and the classification accuracies are obtained by the 10-fold cross-validation method. The purpose of this paper is to select the most relevant features that can represent the pattern of the network traffic and to test their effect on the success of the proposed hybrid classification algorithm. To investigate the performance of the proposed method, the intrusion detection KDDCup'99 benchmark dataset from the UCI Machine Learning repository is used. The performance of the proposed method is compared with that of other machine learning algorithms and found to be significantly different.

  3. On detection and assessment of statistical significance of Genomic Islands

    Directory of Open Access Journals (Sweden)

    Chaudhuri Probal

    2008-04-01

    Full Text Available Abstract Background Many of the available methods for detecting Genomic Islands (GIs) in prokaryotic genomes use markers such as transposons, proximal tRNAs, flanking repeats etc., or they use other supervised techniques requiring training datasets. Most of these methods are primarily based on the biases in GC content or codon and amino acid usage of the islands. However, these methods either do not use any formal statistical test of significance or use statistical tests for which the critical values and the P-values are not adequately justified. We propose a method which is unsupervised in nature and uses Monte-Carlo statistical tests based on randomly selected segments of a chromosome. Such tests are supported by precise statistical distribution theory, and consequently, the resulting P-values are quite reliable for making decisions. Results Our algorithm (named Design-Island, an acronym for Detection of Statistically Significant Genomic Islands) runs in two phases. Some 'putative GIs' are identified in the first phase, and those are refined into smaller segments containing horizontally acquired genes in the refinement phase. The method was applied to the Salmonella typhi CT18 genome, leading to the discovery of several new pathogenicity, antibiotic resistance and metabolic islands that were missed by earlier methods. Many of these islands contain mobile genetic elements such as phage-mediated genes, transposons, integrases and IS elements, confirming their horizontal acquisition. Conclusion The proposed method is based on statistical tests supported by precise distribution theory and reliable P-values, along with a technique for visualizing statistically significant islands. The performance of our method is better than that of many other well known methods in terms of sensitivity and accuracy, and in terms of specificity it is comparable to other methods.
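The core Monte-Carlo idea — compare a candidate window against randomly drawn segments of the same chromosome — reduces to a few lines for a single marker such as GC content. This is an illustrative sketch of that one test on a toy sequence, not the two-phase Design-Island algorithm.

```python
import random

def gc_content(seq):
    return (seq.count('G') + seq.count('C')) / len(seq)

def monte_carlo_pvalue(genome, start, length, n_samples=1000, seed=0):
    """Empirical P-value that a window's GC content deviates from the
    chromosome background more than randomly selected segments do."""
    rng = random.Random(seed)
    background = gc_content(genome)
    observed = abs(gc_content(genome[start:start + length]) - background)
    extreme = 0
    for _ in range(n_samples):
        s = rng.randrange(len(genome) - length)
        if abs(gc_content(genome[s:s + length]) - background) >= observed:
            extreme += 1
    return (extreme + 1) / (n_samples + 1)

# Toy chromosome: AT-rich background with one GC-rich island.
genome = 'AT' * 5000 + 'GC' * 500 + 'AT' * 5000
p_island = monte_carlo_pvalue(genome, start=10000, length=1000)
p_background = monte_carlo_pvalue(genome, start=0, length=1000)
```

Because the null segments are drawn from the genome itself, the resulting P-value needs no supervised training data, which is the unsupervised property the abstract emphasizes.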

  4. Developing the RIAM method (rapid impact assessment matrix) in the context of impact significance assessment

    International Nuclear Information System (INIS)

    Ijaes, Asko; Kuitunen, Markku T.; Jalava, Kimmo

    2010-01-01

    In this paper the applicability of the RIAM method (rapid impact assessment matrix) is evaluated in the context of impact significance assessment. The methodological issues considered in the study are: 1) to test the possibility of enlarging the scoring system used in the method, and 2) to compare the significance classifications of RIAM and unaided decision-making in order to estimate the consistency between these methods. The data consisted of projects for which funding had been applied for via the European Union's Regional Development Trust in the area of Central Finland. Cases were evaluated with respect to their environmental, social and economic impacts using an assessment panel. The results showed that the scoring framework used in RIAM can be modified according to the problem situation at hand, which enhances its application potential. However, the changes made to the B criteria did not significantly affect the final ratings of the method, which indicates the high importance of criteria A1 (importance) and A2 (magnitude) to the overall results. The significance classes obtained by the two methods diverged notably: in general, the ratings given by RIAM tended to be smaller than those of intuitive judgement, implying that the RIAM method may be somewhat conservative in character.
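The dominance of A1 and A2 noted above follows directly from the structure of the RIAM score: the A criteria multiply while the B criteria only add. A minimal sketch using the standard RIAM formula (not the modified scoring tested in the paper):

```python
def riam_environmental_score(a1, a2, b1, b2, b3):
    """Standard RIAM environmental score: ES = (A1 * A2) * (B1 + B2 + B3).
    A1 = importance of condition, A2 = magnitude of change;
    B1..B3 = permanence, reversibility, cumulativity."""
    return (a1 * a2) * (b1 + b2 + b3)

# Raising one B score nudges ES; raising one A score rescales it.
base = riam_environmental_score(2, 2, 2, 2, 2)   # 4 * 6
b_up = riam_environmental_score(2, 2, 3, 2, 2)   # 4 * 7
a_up = riam_environmental_score(2, 3, 2, 2, 2)   # 6 * 6
```

A one-point change in any B criterion shifts the additive sum by one unit, while a one-point change in A1 or A2 rescales the whole product, which is why enlarging the B scoring barely moved the final ratings.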

  5. Significance levels for studies with correlated test statistics.

    Science.gov (United States)

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
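The unconditional permutation estimate under discussion is easy to state in code: permute the sample labels and ask how often the largest absolute statistic matches or exceeds the observed one. The sketch below shows only this baseline; the conditioning on histogram spread that the paper proposes is omitted, and the data are synthetic.

```python
import numpy as np

def max_stat_pvalue(x, labels, n_perm=300, seed=0):
    """Permutation significance of the largest |two-sample statistic|
    across many features (the global null test being examined)."""
    rng = np.random.default_rng(seed)

    def max_abs_stat(lab):
        a, b = x[lab == 0], x[lab == 1]
        t = (a.mean(0) - b.mean(0)) / np.sqrt(
            a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
        return np.abs(t).max()

    observed = max_abs_stat(labels)
    count = sum(max_abs_stat(rng.permutation(labels)) >= observed
                for _ in range(n_perm))
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(2)
x = rng.normal(size=(40, 20))        # 40 samples, 20 features
labels = np.repeat([0, 1], 20)
x[labels == 1, 0] += 3.0             # one truly differential feature
p = max_stat_pvalue(x, labels)
```

When the feature columns are strongly correlated, the permutation distribution of this maximum can mislead, which is the situation the proposed conditional significance levels address.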

  6. 77 FR 24684 - Proposed Information Collection; Comment Request; 2013-2015 American Community Survey Methods...

    Science.gov (United States)

    2012-04-25

    ... proposed content changes. Thus, we need to test an alternative questionnaire design to accommodate additional content on the ACS mail questionnaire. In the 2013 ACS Questionnaire Design Test, we will study... in Puerto Rico. II. Method of Collection Questionnaire Design Test--Data collection for this test...

  7. Significance of perceptually relevant image decolorization for scene classification

    Science.gov (United States)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information into the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results, based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets), show that the proposed image decolorization technique performs better than eight existing benchmark algorithms. In the second part of the paper, the effectiveness of incorporating the chrominance component in scene classification tasks is demonstrated using a deep belief network-based image classification system built on dense scale-invariant feature transforms. The amount of chrominance information incorporated by the proposed image decolorization technique is confirmed by the improvement in overall scene classification accuracy. Moreover, the overall scene classification performance improved further when the models obtained using the proposed method and conventional decolorization methods were combined.

  8. Gene set analysis: limitations in popular existing methods and proposed improvements.

    Science.gov (United States)

    Mishra, Pashupati; Törönen, Petri; Leino, Yrjö; Holm, Liisa

    2014-10-01

    Gene set analysis is the analysis of a set of genes that collectively contribute to a biological process. Most popular gene set analysis methods are based on empirical P-values, which require a large number of permutations. Despite the numerous gene set analysis methods developed in the past decade, the most popular ones still suffer from serious limitations. We present a gene set analysis method (mGSZ) based on the Gene Set Z-scoring function (GSZ) and asymptotic P-values. Asymptotic P-value calculation requires fewer permutations and thus speeds up the gene set analysis process. We compare the GSZ scoring function with seven popular gene set scoring functions and show that GSZ stands out as the best scoring function. In addition, we show improved performance of the GSA method when the max-mean statistic is replaced by the GSZ scoring function. We demonstrate the importance of both gene and sample permutations by showing the consequences of omitting one or the other. A comparison of asymptotic and empirical methods of P-value estimation demonstrates a clear advantage of asymptotic over empirical P-values. We show that mGSZ outperforms the state-of-the-art methods in two different evaluations. We compared mGSZ results with permutation and rotation tests and show that rotation does not improve our asymptotic P-values. We also propose well-known asymptotic distribution models for three of the compared methods. mGSZ is available as an R package from cran.r-project.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. A Method for Proposing Value-Adding Attributes in Customized Housing

    Directory of Open Access Journals (Sweden)

    Cynthia S. Hentschke

    2014-12-01

    Full Text Available In most emerging economies, there have been many incentives and high availability of funding for low-cost housing projects. This has encouraged product standardization and the application of mass production ideas, based on the assumption that this is the most effective strategy for reducing costs. However, the delivery of highly standardized housing units to customers with different needs, without considering their lifestyle and perception of value, often results in inadequate products. Mass customization has been pointed out as an effective strategy for improving value generation in low-cost housing projects and for avoiding the waste caused by renovations made to dwellings soon after occupancy. However, one of the main challenges for the implementation of mass customization is the definition of a set of relevant options based on users' perceived value. The aim of this paper is to propose a method for defining value-adding attributes in customized housing projects, which can support decision-making in product development. The means-end chain theory was used as the theoretical framework to connect product attributes and customers' values, through the application of the laddering technique. The method was tested in two house-building projects delivered by a company from Brazil. The main contribution of this method is to indicate the customization units that are most important for users, along with the explanation of why those units are the most relevant ones.

  10. A Novel Least Significant Bit First Processing Parallel CRC Circuit

    Directory of Open Access Journals (Sweden)

    Xiujie Qu

    2013-01-01

    Full Text Available In the HDLC serial communication protocol, the CRC calculation can process either the most or the least significant bit of the data first. Nowadays most CRC calculation is based on most-significant-bit (MSB) first processing. An algorithm for least-significant-bit (LSB) first processing parallel CRC is proposed in this paper. Based on the general expression of the LSB-first serial CRC, and using the state-equation method of linear systems, we derive a recursive formula by mathematical deduction. The recursive formula is applicable to any number of bits processed in parallel and any generator polynomial. According to the formula, we present the parallel circuit for CRC calculation and implement it with VHDL on an FPGA. The results verify the accuracy and effectiveness of this method.
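The LSB-first recursion has a direct software analogue: HDLC's frame check sequence is the reflected CRC-16/X-25, which can be computed bit-serially, and the idea of consuming several bits per step corresponds to a byte-wise table lookup. A sketch of the standard algorithms (not the paper's VHDL circuit):

```python
def crc16_x25(data: bytes) -> int:
    """Bit-serial, least-significant-bit-first CRC-16 as used by HDLC
    (CRC-16/X-25: reflected poly 0x8408, init and final XOR 0xFFFF)."""
    crc = 0xFFFF
    for byte in data:
        for _ in range(8):
            # The least significant bits of data and CRC are processed first.
            if (crc ^ byte) & 1:
                crc = (crc >> 1) ^ 0x8408
            else:
                crc >>= 1
            byte >>= 1
    return crc ^ 0xFFFF

def crc16_x25_bytewise(data: bytes) -> int:
    """Same CRC computed 8 bits at a time from a precomputed table --
    the software analogue of a parallel (multi-bit) CRC recursion."""
    table = []
    for value in range(256):
        crc = value
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
        table.append(crc)
    crc = 0xFFFF
    for byte in data:
        crc = (crc >> 8) ^ table[(crc ^ byte) & 0xFF]
    return crc ^ 0xFFFF
```

The table row for each input byte plays the role of the paper's recursive formula evaluated for 8 bits at once; in hardware the same recursion unrolls into combinational XOR logic.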

  11. Hybrid Data Hiding Scheme Using Right-Most Digit Replacement and Adaptive Least Significant Bit for Digital Images

    Directory of Open Access Journals (Sweden)

    Mehdi Hussain

    2016-05-01

    Full Text Available Image steganographic methods face three main issues: high embedding capacity, good visual symmetry/quality, and security. In this paper, a hybrid data hiding method combining right-most digit replacement (RMDR) with an adaptive least significant bit (ALSB) scheme is proposed to provide not only high embedding capacity but also good visual symmetry. The cover image is divided into lower-texture (symmetric-pattern) and higher-texture (asymmetric-pattern) areas, and these textures determine the selection of the RMDR and ALSB methods, respectively, according to pixel symmetry. This paper has three major contributions. First, the proposed hybrid method enhances the embedding capacity through efficient ALSB utilization in the higher-texture areas of cover images. Second, it maintains high visual quality because RMDR has the closest selection process for generating symmetry between stego and cover pixels. Finally, it is secure against statistical regular/singular (RS) steganalysis and pixel difference histogram steganalysis, because RMDR evades RS detection attacks by replacing pixel digits instead of bits. Extensive experimental tests (over 1500 cover images) are conducted against recent least significant bit (LSB)-based hybrid methods, and it is demonstrated that the proposed hybrid method achieves a high embedding capacity (800,019 bits) while maintaining good visual symmetry (a peak signal-to-noise ratio (PSNR) of 39.00 dB).
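For context, the plain LSB substitution that hybrid schemes such as RMDR/ALSB build on fits in a few lines. The sketch below is that generic baseline only, not the paper's texture-adaptive method.

```python
def embed_lsb(pixels, bits, k=1):
    """Plain k-bit LSB substitution: overwrite the k lowest bits of each
    pixel with k message bits (generic baseline, not RMDR/ALSB)."""
    stego = list(pixels)
    for i in range(0, len(bits), k):
        chunk = bits[i:i + k]
        value = int(''.join(map(str, chunk)), 2)
        mask = (1 << len(chunk)) - 1
        stego[i // k] = (stego[i // k] & ~mask) | value
    return stego

def extract_lsb(stego, n_bits, k=1):
    """Read the embedded bits back out of the stego pixels."""
    bits = []
    for idx in range((n_bits + k - 1) // k):
        remaining = min(k, n_bits - idx * k)
        value = stego[idx] & ((1 << remaining) - 1)
        bits.extend(int(b) for b in format(value, f'0{remaining}b'))
    return bits

cover = [120, 37, 201, 88]
secret = [1, 0, 1, 1, 0, 1]
stego = embed_lsb(cover, secret, k=2)
```

Replacing the k lowest bits perturbs each pixel by less than 2^k, but the bit-level regularities it leaves are exactly what RS steganalysis detects — the weakness RMDR's digit-level replacement is designed to avoid.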

  12. Verbal Auditory Cueing of Improvisational Dance: A Proposed Method for Training Agency in Parkinson’s Disease

    Science.gov (United States)

    Batson, Glenna; Hugenschmidt, Christina E.; Soriano, Christina T.

    2016-01-01

    Dance is a non-pharmacological intervention that helps maintain functional independence and quality of life in people with Parkinson's disease (PPD). Results from controlled studies on group-delivered dance for people with mild-to-moderate stage Parkinson's have shown statistically and clinically significant improvements in gait, balance, and psychosocial factors. Tested interventions include non-partnered dance forms (ballet and modern dance) and partnered ones (tango). In all of these dance forms, specific movement patterns are initially learned through repetition and performed in time to music. Once the basic steps are mastered, students may be encouraged to improvise on the learned steps as they perform them in rhythm with the music. Here, we summarize a method of teaching improvisational dance that advances previously reported benefits of dance for people with Parkinson's disease (PD). The method relies primarily on improvisational verbal auditory cueing, with less emphasis on directed movement instruction. It builds on the idea that daily living requires flexible, adaptive responses to real-life challenges. In PD, movement disorders not only limit mobility but also impair spontaneity of thought and action. Dance improvisation demands open and immediate interpretation of verbally delivered movement cues, potentially fostering the formation of spontaneous movement strategies. Here, we present an introduction to the proposed method, detailing its methodological specifics and pointing to future directions. The viewpoint advances an embodied cognitive approach that has ecological validity in helping PPD meet the changing demands of daily living. PMID:26925029

  13. How to identify partial exposures to ionizing radiation? Proposal for a cytogenetic method

    International Nuclear Information System (INIS)

    Fernandes, T.S.; Silva, E.B.; Pinto, M.M.P.L.; Amaral, A.; Lloyd, David

    2013-01-01

    In cases of radiological incidents or occupational exposures to ionizing radiation, most exposures are not whole-body but only partial. In this context, if cytogenetic dosimetry is performed, the absorbed dose will be underestimated owing to the dilution of irradiated cells with non-irradiated cells. Considering the norms of NR 32 - Safety and Health in the Work of Health Service - which recommends cytogenetic dosimetry in the investigation of accidental exposures to ionizing radiation, it is necessary to develop a tool that provides a better identification of partial exposures. With this aim, a partial-body exposure was simulated by mixing, in vitro, 70% of blood irradiated with 4 Gy of X-rays with 30% of unirradiated blood from the same healthy donor. Aliquots of this mixture were cultured for 48 and 72 hours. Prolonging the culture time from 48 to 72 hours produced no significant change in the yield of dicentrics. However, when only M1 (first-division) cells were analyzed, the frequency of dicentrics per cell increased. Prolonging the culture time allowed cells delayed in mitosis by irradiation to reach metaphase, and thus provided enough time for the damage to be visualized. The results present the proposed method as an important tool in the investigation of exposed individuals, allowing the cytogenetic analysis to be associated with the real percentage of irradiated cells and contributing significantly to decision making in occupational health. (author)
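The dilution effect described above has a classical quantitative counterpart: if dicentrics in irradiated cells follow a Poisson distribution, mixing a fraction f of irradiated cells with undamaged cells depresses both the mean yield and the excess of undamaged cells in a predictable way, so f can be recovered from the observed statistics. The sketch below illustrates this contaminated-Poisson idea in general terms; it is not the authors' M1-scoring protocol.

```python
import math

def observed_dicentric_stats(yield_per_cell, irradiated_fraction):
    """Expected statistics when irradiated and unirradiated cells are mixed
    (Poisson-distributed dicentric counts assumed in irradiated cells)."""
    f, y = irradiated_fraction, yield_per_cell
    mean = f * y                           # diluted mean yield per scored cell
    p_zero = (1 - f) + f * math.exp(-y)    # fraction of cells with no dicentrics
    return mean, p_zero

def estimate_fraction(mean, p_zero):
    """Recover the irradiated fraction f from the observed mean and the
    fraction of undamaged cells, via 1 - p_zero = f * (1 - exp(-mean/f))."""
    f = 0.5
    for _ in range(100):                   # simple fixed-point iteration
        f = (1 - p_zero) / (1 - math.exp(-mean / f))
    return f

mean, p_zero = observed_dicentric_stats(2.0, 0.7)
f_hat = estimate_fraction(mean, p_zero)
```

In the paper's 70/30 in vitro mixture, restricting the analysis to M1 cells is what keeps the scored population representative, so that relations like the one above remain applicable.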

  14. An assessment of the long term suitability of present and proposed methods for the management of uranium mill tailings

    International Nuclear Information System (INIS)

    1979-07-01

    Proposals for safe, long-term containment of conventional tailings include 1) storage under water, 2) storage in active, abandoned or specially created underground mines and 3) storage in open pits, with subsequent flooding or covering with overburden. The underwater proposal can meet most of the requirements of long-term containment; however, extensive study of existing tailings deposits in deep-water locations will be needed. Underground mines cannot provide sufficient storage capacity, since tailings bulk during milling and can occupy twice the volume of the original ore. It is possible to reduce the hazard by reducing the radium and thorium content of the tailings. Proposals for such an undertaking include ore beneficiation with rejection of the relatively innocuous fraction, radium-thorium removal in the mill, and significant changes in both ore processing and the treatment of tailings. It is concluded that surface-stored tailings are vulnerable over the long term to dispersion by leaching and water erosion, and that access to a tailings site cannot be prevented, while only a major climatic or seismic event could disturb tailings stored in suitable underwater or underground mine sites. The criteria for determining the suitability of each method, however, will need to be identified, tested and accepted through the normal process of modeling, pilot plant evaluation, monitoring and evaluation. (author)

  15. Clinical significance of a proposed lymphoscintigraphic functional grade system in patients with extremity lymphedema of stage I

    International Nuclear Information System (INIS)

    Choi, Joan Young; Hwang, Ji Hye; Kim, Dong Ik; Cho, Young Seok; Lee, Su Jin; Choi, Yong; Choe, Yeam Seong; Lee, Kyung Han; Kim, Byung Tae

    2005-01-01

    We proposed a new lymphoscintigraphic functional grade (LGr) system in extremity lymphedema, and investigated the association between the LGr and the long-term response to physical therapy in patients with extremity lymphedema of stage I. The subjects were 20 patients with unilateral extremity lymphedema of stage I, who underwent pre-treatment extremity lymphoscintigraphy using Tc-99m antimony sulfur colloid and were treated by complex decongestive physical therapy (CDPT). The proposed lymphoscintigraphic functional grade system consisted of LGr 0 to LGr 4 according to the ilioinguinal nodal uptake, amount of dermal backflow, and uptake pattern of main and collateral lymphatics: LGr 0 = normal, LGr 1 = decreased lymphatic function without dermal backflow, LGr 2 = decreased lymphatic function with dermal backflow, LGr 3 = non-visualization of main lymphatics with dermal backflow, and LGr 4 = no significant lymphatic transport from the injection site. LGr 2 was divided into 2A and 2B based on the amount of dermal backflow. A physician specializing in lymphedema determined the long-term outcome of CDPT as normalized response (NR), good response (GR), or poor response (PR) based on the change in edema volume, skin status, and occurrence of dermatolymphangioadenitis after clinical follow-up of more than 1 year. Therapeutic responses were NR in 2 patients, GR in 9 patients, and PR in 9 patients. Baseline LGrs were 1 in 7 patients, 2A in 4 patients, 2B in 5 patients, 3 in 2 patients, and 4 in 2 patients. There was a significant relationship between therapeutic response and LGr (p=0.003). In other words, 10 of 11 patients (91%) with LGr 1 or 2A showed NR or GR. On the contrary, 8 of 9 patients (89%) with LGr 2B, 3 or 4 showed PR. Patients with unilateral extremity lymphedema of stage I had different lymphoscintigraphic functional grades. This grade system may be useful to predict the response to physical therapy in such patients

  16. Clinical significance of a proposed lymphoscintigraphic functional grade system in patients with extremity lymphedema of stage I

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Joan Young; Hwang, Ji Hye; Kim, Dong Ik; Cho, Young Seok; Lee, Su Jin; Choi, Yong; Choe, Yeam Seong; Lee, Kyung Han; Kim, Byung Tae [Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2005-07-01

    We proposed a new lymphoscintigraphic functional grade (LGr) system in extremity lymphedema, and investigated the association between the LGr and the long-term response to physical therapy in patients with extremity lymphedema of stage I. The subjects were 20 patients with unilateral extremity lymphedema of stage I, who underwent pre-treatment extremity lymphoscintigraphy using Tc-99m antimony sulfur colloid and were treated by complex decongestive physical therapy (CDPT). The proposed lymphoscintigraphic functional grade system consisted of LGr 0 to LGr 4 according to the ilioinguinal nodal uptake, amount of dermal backflow, and uptake pattern of main and collateral lymphatics: LGr 0 = normal, LGr 1 = decreased lymphatic function without dermal backflow, LGr 2 = decreased lymphatic function with dermal backflow, LGr 3 = non-visualization of main lymphatics with dermal backflow, and LGr 4 = no significant lymphatic transport from the injection site. LGr 2 was divided into 2A and 2B based on the amount of dermal backflow. A physician specializing in lymphedema determined the long-term outcome of CDPT as normalized response (NR), good response (GR), or poor response (PR) based on the change in edema volume, skin status, and occurrence of dermatolymphangioadenitis after clinical follow-up of more than 1 year. Therapeutic responses were NR in 2 patients, GR in 9 patients, and PR in 9 patients. Baseline LGrs were 1 in 7 patients, 2A in 4 patients, 2B in 5 patients, 3 in 2 patients, and 4 in 2 patients. There was a significant relationship between therapeutic response and LGr (p=0.003). In other words, 10 of 11 patients (91%) with LGr 1 or 2A showed NR or GR. On the contrary, 8 of 9 patients (89%) with LGr 2B, 3 or 4 showed PR. Patients with unilateral extremity lymphedema of stage I had different lymphoscintigraphic functional grades. This grade system may be useful to predict the response to physical therapy in such patients.

  17. A qualitative method proposal to improve environmental impact assessment

    International Nuclear Information System (INIS)

    Toro, Javier; Requena, Ignacio; Duarte, Oscar; Zamorano, Montserrat

    2013-01-01

    In environmental impact assessment, qualitative methods are used because they are versatile and easy to apply. This methodology is based on evaluating the strength of an impact by grading a series of qualitative attributes, which can be manipulated by the evaluator. The results thus obtained are not objective, and all too often impacts are dismissed that should be mitigated with corrective measures. However, qualitative methodology can be improved if the calculation of Impact Importance is based on the characteristics of environmental factors and project activities instead of on indicators assessed by evaluators. In this sense, this paper proposes the inclusion of the vulnerability of environmental factors and the potential environmental impact of project activities. For this purpose, the study described in this paper defined Total Impact Importance and specified a quantification procedure. The results obtained in the case study of oil drilling in Colombia reflect greater objectivity in the evaluation of impacts, as well as a positive correlation between impact values, the environmental characteristics at and near the project location, and the technical characteristics of project activities. -- Highlights: • The concept of vulnerability is used to calculate impact importance. • This paper defined Total Impact Importance and specified a quantification procedure. • The method includes the characteristics of environmental factors and project activities. • The application has shown greater objectivity in the evaluation of impacts. • A better correlation between impact values, the environment, and the project has been shown
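The abstract does not give the quantification procedure itself; as a purely hypothetical sketch (weights, scale, and class labels are assumptions, not the paper's calibration), a Total Impact Importance score driven by factor vulnerability and activity impact potential, rather than evaluator-graded attributes, might look like:

```python
def total_impact_importance(vulnerability, impact_potential,
                            w_v=0.5, w_a=0.5):
    """Hypothetical Total Impact Importance on a 0-1 scale: a weighted
    combination of environmental-factor vulnerability and the potential
    impact of the project activity (both already normalised to 0-1)."""
    score = w_v * vulnerability + w_a * impact_potential
    return min(max(score, 0.0), 1.0)

def importance_class(score):
    """Map the score to the qualitative bands commonly used in
    qualitative EIA (labels and thresholds assumed for illustration)."""
    if score < 0.25:
        return "irrelevant"
    if score < 0.50:
        return "moderate"
    if score < 0.75:
        return "severe"
    return "critical"
```

For instance, a highly vulnerable wetland (0.8) affected by a moderately impactful activity (0.6) would be classed as a severe impact under these assumed bands.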

  18. A qualitative method proposal to improve environmental impact assessment

    Energy Technology Data Exchange (ETDEWEB)

    Toro, Javier, E-mail: jjtoroca@unal.edu.co [Institute of Environmental Studies, National University of Colombia at Bogotá (Colombia); Requena, Ignacio, E-mail: requena@decsai.ugr.es [Department of Computer Science and Artificial Intelligence, University of Granada (Spain); Duarte, Oscar, E-mail: ogduartev@unal.edu.co [National University of Colombia at Bogotá, Department of Electrical Engineering and Electronics (Colombia); Zamorano, Montserrat, E-mail: zamorano@ugr.es [Department of Civil Engineering, University of Granada (Spain)

    2013-11-15

    In environmental impact assessment, qualitative methods are used because they are versatile and easy to apply. This methodology is based on evaluating the strength of an impact by grading a series of qualitative attributes, which can be manipulated by the evaluator. The results thus obtained are not objective, and all too often impacts are dismissed that should be mitigated with corrective measures. However, qualitative methodology can be improved if the calculation of Impact Importance is based on the characteristics of environmental factors and project activities instead of on indicators assessed by evaluators. In this sense, this paper proposes the inclusion of the vulnerability of environmental factors and the potential environmental impact of project activities. For this purpose, the study described in this paper defined Total Impact Importance and specified a quantification procedure. The results obtained in the case study of oil drilling in Colombia reflect greater objectivity in the evaluation of impacts, as well as a positive correlation between impact values, the environmental characteristics at and near the project location, and the technical characteristics of project activities. -- Highlights: • The concept of vulnerability is used to calculate impact importance. • This paper defined Total Impact Importance and specified a quantification procedure. • The method includes the characteristics of environmental factors and project activities. • The application has shown greater objectivity in the evaluation of impacts. • A better correlation between impact values, the environment, and the project has been shown.

  19. Comparison among four proposed direct blood culture microbial identification methods using MALDI-TOF MS.

    Science.gov (United States)

    Bazzi, Ali M; Rabaan, Ali A; El Edaily, Zeyad; John, Susan; Fawarah, Mahmoud M; Al-Tawfiq, Jaffar A

    Matrix-assisted laser desorption-ionization time-of-flight (MALDI-TOF) mass spectrometry facilitates rapid and accurate identification of pathogens, which is critical for sepsis patients. In this study, we assessed the accuracy in identification of both Gram-negative and Gram-positive bacteria, except for Streptococcus viridans, using four rapid blood culture methods with Vitek MALDI-TOF-MS. We compared our proposed lysis centrifugation followed by washing and 30% acetic acid treatment method (method 2) with two other lysis centrifugation methods (washing and 30% formic acid treatment (method 1); 100% ethanol treatment (method 3)), and picking colonies from 90 to 180 min subculture plates (method 4). Methods 1 and 2 identified all organisms down to species level with 100% accuracy, except for Streptococcus viridans, Streptococcus pyogenes, Enterobacter cloacae and Proteus vulgaris. The latter two were identified to genus level with 100% accuracy. Each method exhibited excellent accuracy and precision in terms of identification to genus level with certain limitations. Copyright © 2016 King Saud Bin Abdulaziz University for Health Sciences. Published by Elsevier Ltd. All rights reserved.

  20. Comparison among four proposed direct blood culture microbial identification methods using MALDI-TOF MS

    Directory of Open Access Journals (Sweden)

    Ali M. Bazzi

    2017-05-01

    Full Text Available Summary: Matrix-assisted laser desorption-ionization time-of-flight (MALDI-TOF) mass spectrometry facilitates rapid and accurate identification of pathogens, which is critical for sepsis patients. In this study, we assessed the accuracy in identification of both Gram-negative and Gram-positive bacteria, except for Streptococcus viridans, using four rapid blood culture methods with Vitek MALDI-TOF-MS. We compared our proposed lysis centrifugation followed by washing and 30% acetic acid treatment method (method 2) with two other lysis centrifugation methods (washing and 30% formic acid treatment (method 1); 100% ethanol treatment (method 3)), and picking colonies from 90 to 180 min subculture plates (method 4). Methods 1 and 2 identified all organisms down to species level with 100% accuracy, except for Streptococcus viridans, Streptococcus pyogenes, Enterobacter cloacae and Proteus vulgaris. The latter two were identified to genus level with 100% accuracy. Each method exhibited excellent accuracy and precision in terms of identification to genus level with certain limitations. Keywords: MALDI-TOF, Gram-negative, Gram-positive, Sepsis, Blood culture

  1. Proposed waste form performance criteria and testing methods for low-level mixed waste

    International Nuclear Information System (INIS)

    Franz, E.M.; Fuhrmann, M.; Bowerman, B.; Bates, S.; Peters, R.

    1994-08-01

    This document describes proposed waste form performance criteria and testing methods that could be used as guidance in judging the viability of a waste form as a physico-chemical barrier to releases of radionuclides and RCRA-regulated hazardous components. It is assumed that release of contaminants by leaching is the single most important property by which the effectiveness of a waste form is judged. A two-tier regimen is proposed. The first tier includes a leach test required by the Environmental Protection Agency and a leach test designed to determine the net forward leach rate for a variety of materials. The second tier of tests determines whether a set of stresses (i.e., radiation, freeze-thaw, wet-dry cycling) on the waste form adversely impacts its ability to retain contaminants and remain physically intact. It is recommended that the first-tier tests be performed first to determine acceptability. Only on passing the given specifications for the leach tests should the other tests be performed. In the absence of site-specific performance assessments (PA), two generic modeling exercises are described which were used to calculate proposed acceptable leach rates.

  2. Active Learning with Rationales for Identifying Operationally Significant Anomalies in Aviation

    Science.gov (United States)

    Sharma, Manali; Das, Kamalika; Bilgic, Mustafa; Matthews, Bryan; Nielsen, David Lynn; Oza, Nikunj C.

    2016-01-01

    A major focus of the commercial aviation community is discovery of unknown safety events in flight operations data. Data-driven unsupervised anomaly detection methods are better at capturing unknown safety events compared to rule-based methods which only look for known violations. However, not all statistical anomalies that are discovered by these unsupervised anomaly detection methods are operationally significant (e.g., represent a safety concern). Subject Matter Experts (SMEs) have to spend significant time reviewing these statistical anomalies individually to identify a few operationally significant ones. In this paper we propose an active learning algorithm that incorporates SME feedback in the form of rationales to build a classifier that can distinguish between uninteresting and operationally significant anomalies. Experimental evaluation on real aviation data shows that our approach improves detection of operationally significant events by as much as 75% compared to the state-of-the-art. The learnt classifier also generalizes well to additional validation data sets.
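The abstract does not describe the algorithm's internals; the following toy sketch shows, under assumed details, how SME rationales can be folded into an uncertainty-sampling loop. Rationale features receive a larger weight update; the perceptron-style update rule, boost factor, and function names are illustrative, not the paper's method.

```python
def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def uncertainty(score):
    """Distance from the decision boundary; lower = more uncertain."""
    return abs(score)

def active_learning_with_rationales(pool, oracle, rounds=5, lr=0.5):
    """Pool-based active learning sketch: each round, query the most
    uncertain anomaly; the oracle (standing in for the SME) returns a
    label in {-1, +1} and a set of rationale feature indices, which get
    an emphasised update."""
    dim = len(pool[0])
    w = [0.0] * dim
    labelled = set()
    for _ in range(rounds):
        # pick the unlabelled point closest to the current boundary
        idx = min((i for i in range(len(pool)) if i not in labelled),
                  key=lambda i: uncertainty(dot(w, pool[i])))
        label, rationale = oracle(idx)
        labelled.add(idx)
        for j, xj in enumerate(pool[idx]):
            boost = 2.0 if j in rationale else 1.0  # emphasise rationale features
            w[j] += lr * boost * label * xj
    return w
```

In practice the SME's rationale ("this exceedance of the target airspeed is what made the flight anomalous") points at specific features, and the boost steers the classifier toward them faster than labels alone would.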

  3. A Proposal on the Quantitative Homogeneity Analysis Method of SEM Images for Material Inspections

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Song Hyun; Kim, Jong Woo; Shin, Chang Ho [Hanyang University, Seoul (Korea, Republic of); Choi, Jung-Hoon; Cho, In-Hak; Park, Hwan Seo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    A scanning electron microscope (SEM) inspects the surface microstructure of materials. Because the SEM uses electron beams to image material surfaces at high magnification, various chemical analyses can be performed on SEM images; it is therefore widely used for material inspection, chemical characterization, and biological analysis. In the field of nuclear criticality analysis, the homogeneity of a compound material is an important parameter for its use in a nuclear system. In our previous study, we explored the use of the SEM for the homogeneity analysis of materials. In this study, a quantitative homogeneity analysis method for SEM images is proposed for material inspections. The method is based on a stochastic analysis of the grayscale information in the SEM images.
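The stochastic grayscale analysis is not specified in the abstract; one plausible minimal sketch of a grayscale-based homogeneity metric is to tile the image and compare tile means (the tiling scheme and coefficient-of-variation metric are assumptions, not the authors' exact procedure):

```python
from statistics import mean, pstdev

def homogeneity_index(image, tile=2):
    """Split a 2-D grayscale array into tile x tile blocks, take each
    block's mean grey level, and report the coefficient of variation of
    those means: a lower value suggests a more homogeneous material."""
    h, w = len(image), len(image[0])
    tile_means = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            block = [image[r + i][c + j]
                     for i in range(tile) for j in range(tile)]
            tile_means.append(mean(block))
    return pstdev(tile_means) / mean(tile_means)
```

A perfectly uniform image scores 0, while an image with segregated bright and dark patches scores high, which is the qualitative behaviour a homogeneity check for compound materials needs.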

  4. A Proposal on the Quantitative Homogeneity Analysis Method of SEM Images for Material Inspections

    International Nuclear Information System (INIS)

    Kim, Song Hyun; Kim, Jong Woo; Shin, Chang Ho; Choi, Jung-Hoon; Cho, In-Hak; Park, Hwan Seo

    2015-01-01

    A scanning electron microscope (SEM) inspects the surface microstructure of materials. Because the SEM uses electron beams to image material surfaces at high magnification, various chemical analyses can be performed on SEM images; it is therefore widely used for material inspection, chemical characterization, and biological analysis. In the field of nuclear criticality analysis, the homogeneity of a compound material is an important parameter for its use in a nuclear system. In our previous study, we explored the use of the SEM for the homogeneity analysis of materials. In this study, a quantitative homogeneity analysis method for SEM images is proposed for material inspections. The method is based on a stochastic analysis of the grayscale information in the SEM images

  5. Statistical Significance for Hierarchical Clustering

    Science.gov (United States)

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

    Summary Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990
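As a much-simplified illustration of the Monte Carlo idea (not the paper's sequential, family-wise-error-controlled procedure), a one-dimensional significance test for a single two-way split can compare the observed cluster index against indices from Gaussian null simulations with matched mean and variance:

```python
import random
from statistics import mean, pstdev

def cluster_index(xs):
    """2-means cluster index in 1-D: the within-cluster sum of squares of
    the best threshold split, divided by the total sum of squares
    (lower = stronger two-cluster structure)."""
    xs = sorted(xs)
    total = sum((x - mean(xs)) ** 2 for x in xs)
    best = total
    for k in range(1, len(xs)):
        left, right = xs[:k], xs[k:]
        wss = sum((x - mean(left)) ** 2 for x in left) + \
              sum((x - mean(right)) ** 2 for x in right)
        best = min(best, wss)
    return best / total

def monte_carlo_p(xs, n_sim=200, seed=0):
    """Monte Carlo significance of a split: simulate Gaussian samples with
    the data's mean/sd (the single-cluster null) and count how often their
    cluster index is as small as the observed one."""
    rng = random.Random(seed)
    observed = cluster_index(xs)
    mu, sd = mean(xs), pstdev(xs)
    hits = sum(cluster_index([rng.gauss(mu, sd) for _ in xs]) <= observed
               for _ in range(n_sim))
    return (hits + 1) / (n_sim + 1)
```

Clearly bimodal data yield a small p-value; the paper's contribution is to run such tests down the dendrogram while controlling the family-wise error rate across the nested hypotheses.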

  6. Proposed measurement of the imaginary component of atomic form factor for medium Z-elements in regions exhibiting significant discrepancies

    International Nuclear Information System (INIS)

    De Jonge, M.; Dhal, B.B.; Tran, C.Q.; Barnea, Z.; Chantler, C.T.

    2000-01-01

    Full text: Discrepancies in measurements of the complex atomic form factor in the medium-Z region are alarmingly high for such a fundamental quantity. The consequence is that any experiment relying on the Beer-Lambert absorption law and the tabulated absorption coefficients incurs an immediate experimental uncertainty of 2-10%, depending on the element and the energy under consideration. We have begun to address this state of affairs in the medium-Z region through a series of precise determinations of attenuation coefficients. We will elaborate on a proposed method of measuring the atomic form factor to 0.2% absolute accuracy

  7. A proposed method of measuring the electric-dipole moment of the neutron by ultracold neutron interferometry

    International Nuclear Information System (INIS)

    Freedman, M.S.; Peshkin, M.; Ringo, G.R.; Dombeck, T.W.

    1989-08-01

    The use of an ultracold neutron interferometer incorporating an electrostatic accelerator with a strong electric field gradient, which accelerates neutrons through their possible electric dipole moment, is proposed as a method of measuring the neutron electric dipole moment. The method appears capable of extending the sensitivity of the measurement by several orders of magnitude, perhaps to 10⁻³⁰ e·cm. 9 refs., 3 figs

  8. Determination of the oxidizing property: proposal of an alternative method based on differential scanning calorimetry

    International Nuclear Information System (INIS)

    Gigante, L.; Dellavedova, M.; Pasturenzi, C.; Lunghi, A.; Mattarella, M.; Cardillo, P.

    2008-01-01

    Determination of the chemical-physical and hazard properties of substances is a very important matter in the chemical industry, considering the growing attention of public opinion to the safety and eco-compatibility of products. In the present work, attention was focused on the characterization of oxidizing properties. In the case of solid compounds, the current method (Dir 84/449/CEE 6) compares the maximum combustion rate of the examined substance to that of a reference mixture. This method has several disadvantages and does not provide a quantitative result. In the following work an alternative method, based on DSC measurements, is proposed for the determination of oxidizing properties. [it]

  9. Proposal for a method to estimate nutrient shock effects in bacteria

    Directory of Open Access Journals (Sweden)

    Azevedo Nuno F

    2012-08-01

    Full Text Available Abstract Background Plating methods are still the gold standard in microbiology; however, some studies have shown that these techniques can underestimate microbial concentrations and diversity. A nutrient shock is one of the mechanisms proposed to explain this phenomenon. In this study, a tentative method to assess nutrient shock effects was tested. Findings To estimate the extent of nutrient shock effects, two strains isolated from tap water (Sphingomonas capsulata and Methylobacterium sp.) and two culture collection strains (E. coli CECT 434 and Pseudomonas fluorescens ATCC 13525) were exposed to both low- and high-nutrient conditions for different times and then placed in low-nutrient medium (R2A) and rich-nutrient medium (TSA). The average improvement (A.I.) of recovery between R2A and TSA for the different times was calculated to more simply assess the difference in culturability obtained on each medium. As expected, A.I. was higher when cells were plated after exposure to water than when they were recovered from high-nutrient medium, showing the existence of a nutrient shock for the various bacteria used. S. capsulata was the species most affected by this phenomenon. Conclusions This work provides a method to consistently determine the extent of nutrient shock effects on different microorganisms and hence quantify the ability of each species to deal with sudden increases in substrate concentration.
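The abstract defines A.I. only informally; a hypothetical reading of it as the mean relative improvement in recovery on R2A over TSA, averaged across the sampled exposure times (the exact formula is not given in the abstract), would be:

```python
def average_improvement(r2a_counts, tsa_counts):
    """Illustrative A.I.: for each exposure time, the relative improvement
    in colony recovery on the low-nutrient medium (R2A) over the rich
    medium (TSA), averaged over all time points.  Positive values mean
    the low-nutrient medium recovered more cells."""
    ratios = [(r - t) / t for r, t in zip(r2a_counts, tsa_counts)]
    return sum(ratios) / len(ratios)
```

For example, recoveries of 110 and 120 CFU on R2A against 100 CFU on TSA at two time points give an A.I. of 0.15, i.e. a 15% average improvement on the low-nutrient medium.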

  10. Contribution for an Urban Geomorphoheritage Assessment Method: Proposal from Three Geomorphosites in Rome (Italy

    Directory of Open Access Journals (Sweden)

    Pica Alessia

    2017-09-01

    Full Text Available Urban geomorphology has important implications for the spatial planning of human activities, and it also has geotouristic potential due to the relationship between cultural and geomorphological heritage. Despite the introduction of the term Anthropocene to describe the deep influence that human activities have had in recent times on Earth's evolution, urban geomorphological heritage studies are relatively rare and limited, and urban geotourism development is recent. The analysis of a complex urban landscape often needs the integration of multidisciplinary data. This study aims to propose the first urban geomorphoheritage assessment method, which originates from long-lasting previous geomorphological and geotouristic studies of Rome's city centre; it depicts rare examples of the geomorphological mapping of a metropolis and, at the same time, of an inventory of urban geomorphosites. The proposal is applied to geomorphosites in the Esquilino neighbourhood of Rome, whose analysis confirms the need for an ad hoc method for assessing urban geomorphosites, as already highlighted in the most recent literature on the topic. The urban geomorphoheritage assessment method is based on: (i) urban geomorphological analysis by means of multitemporal and multidisciplinary data; (ii) the geomorphosite inventory; and (iii) geomorphoheritage assessment and enhancement. One challenge is to assess invisible geomorphosites, which are widespread in urban contexts. To this aim, we reworked the attributes describing the Value of a site for Geotourism in order to build a specific methodology for the analysis of urban geomorphological heritage.

  11. A Non-Parametric Surrogate-based Test of Significance for T-Wave Alternans Detection

    Science.gov (United States)

    Nemati, Shamim; Abdala, Omar; Bazán, Violeta; Yim-Yeh, Susie; Malhotra, Atul; Clifford, Gari

    2010-01-01

    We present a non-parametric adaptive surrogate test that allows the differentiation of statistically significant T-Wave Alternans (TWA) from alternating patterns that can be explained solely by the statistics of noise. The proposed test is based on estimating the distribution of noise-induced alternating patterns in a beat sequence from a set of surrogate data derived from repeated reshuffling of the original beat sequence. Thus, in assessing the significance of the observed alternating patterns in the data, no assumptions are made about the underlying noise distribution. In addition, since the distribution of noise-induced alternans magnitudes is calculated separately for each sequence of beats within the analysis window, the method is robust to data non-stationarities in both noise and TWA. The proposed surrogate method for rejecting noise was compared to the standard noise rejection methods used with the Spectral Method (SM) and the Modified Moving Average (MMA) techniques. Using a previously described realistic multi-lead model of TWA and real physiological noise, we demonstrate that the proposed approach reduces false TWA detections while maintaining a lower missed-detection rate than all the other methods tested. A simple averaging-based TWA estimation algorithm was coupled with the surrogate significance testing and evaluated on three public databases: the Normal Sinus Rhythm Database (NRSDB), the Chronic Heart Failure Database (CHFDB) and the Sudden Cardiac Death Database (SCDDB). Differences in TWA amplitudes between each database were evaluated at matched heart rate (HR) intervals from 40 to 120 beats per minute (BPM). Using the two-sample Kolmogorov-Smirnov test, we found that significant differences in TWA levels exist between each patient group at all decades of heart rates. The most marked difference was generally found at higher heart rates, and the new technique resulted in a larger margin of separability between patient populations than
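The surrogate logic can be sketched compactly: estimate alternans with a simple averaging-based statistic, then reshuffle the beat order to destroy any true alternans and ask how often noise alone matches the observed value. The estimator and surrogate count below are illustrative, not the paper's exact implementation.

```python
import random

def twa_magnitude(beats):
    """Simple averaging-based alternans estimate: half the absolute
    difference between the mean amplitudes of even- and odd-indexed
    beats."""
    even = beats[0::2]
    odd = beats[1::2]
    return abs(sum(even) / len(even) - sum(odd) / len(odd)) / 2

def surrogate_p_value(beats, n_surrogates=500, seed=1):
    """Non-parametric surrogate test: reshuffling the beat sequence
    removes genuine even/odd structure, so the shuffled magnitudes form
    an empirical null distribution for noise-induced alternans."""
    rng = random.Random(seed)
    observed = twa_magnitude(beats)
    work = list(beats)
    count = 0
    for _ in range(n_surrogates):
        rng.shuffle(work)
        if twa_magnitude(work) >= observed:
            count += 1
    return (count + 1) / (n_surrogates + 1)
```

Because the null distribution is rebuilt from each analysis window's own beats, no parametric assumption about the noise is needed, which is the core of the method's robustness claim.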

  12. A proposal of parameter determination method in the residual strength degradation model for the prediction of fatigue life (I)

    International Nuclear Information System (INIS)

    Kim, Sang Tae; Jang, Seong Soo

    2001-01-01

    Static and fatigue tests have been carried out to verify the validity of a generalized residual strength degradation model. A new method of parameter determination in the model is verified experimentally to account for the effect of tension-compression fatigue loading on spheroidal graphite cast iron. It is shown that the correlation between the experimental results and the theoretical prediction of the statistical distribution of fatigue life using the proposed method is very reasonable. Furthermore, the correlation between the theoretical prediction and the experimental results for fatigue life in the case of tension-tension fatigue data on composite material also appears to be reasonable. Therefore, the proposed method is more adaptable for determining the parameter than the maximum likelihood method and the minimization technique

  13. Precision of glucose measurements in control sera by isotope dilution/mass spectrometry: proposed definitive method compared with a reference method

    International Nuclear Information System (INIS)

    Pelletier, O.; Arratoon, C.

    1987-01-01

    This improved isotope-dilution gas chromatographic/mass spectrometric (GC/MS) method, in which [13C]glucose is the internal standard, meets the requirements of a Definitive Method. In a first study with five reconstituted lyophilized sera, a nested analysis of variance of GC/MS values indicated considerable among-vial variation. The CV for 32 measurements per serum ranged from 0.5 to 0.9%. However, the concentration and uncertainty values (mmol/L per gram of serum) assigned to one serum by the NBS Definitive Method (7.56 +/- 0.28) were practically identical to those obtained with the proposed method (7.57 +/- 0.20). In the second study, we used twice as much [13C]glucose diluent to assay four serum pools and two lyophilized sera. The CV ranged from 0.26 to 0.5% for the serum pools and from 0.28 to 0.59% for the lyophilized sera. In comparison, results by the hexokinase/glucose-6-phosphate dehydrogenase reference method agreed within acceptable limits with those by the Definitive Method but tended to be slightly higher (up to 3%) for lyophilized serum samples or slightly lower (up to 2.5%) for serum pools

  14. Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀.

    Science.gov (United States)

    Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan

    2017-09-01

    Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null hypothesis H₀ (when their distribution is uniform) or from the alternative hypothesis H₁ (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H₀, but it also estimates the probability that each specific p value originates from H₀. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H₀ than from H₁. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Principles, Methods of Participatory Research: Proposal for Draft Animal Power

    Directory of Open Access Journals (Sweden)

    E. Chia

    2004-03-01

    Full Text Available The meeting of researchers who question the effectiveness of their actions when accompanying stakeholders through change processes provides an opportunity to reflect on the research methods to be developed when working together with stakeholders: participatory research, action research, intervention research… The author presents the action-research approach, whose three phases are all important, but whose negotiation phase is essential because it enables the formalization of a contract among partners (the ethical aspect), the development of a common language, and the structuring of efforts between researchers of various specialties and stakeholders. In the action-research approach, the steering bodies (scientific committees…) play a major role: they simultaneously guarantee the resolution of problems, the production of results, and the legitimacy of the scientific knowledge produced. In conclusion, the author suggests ways to develop action research in the field of animal traction in order to conceive new socio-technical and organizational innovations that will make the use of this technique easier.

  16. Proposal for a new detection method of substance abuse risk in Croatian adolescents

    Directory of Open Access Journals (Sweden)

    Sanja Tatalovic Vorkapic

    2011-01-01

    One of the most important factors in successful substance abuse treatment is an early start to that treatment. The current method for identifying Croatian adolescents at risk of substance abuse, drug testing of urine samples, is simple and exact, but it is applied only rarely, usually under pressure from parents or the courts, and it raises legal and ethical questions. We therefore propose administering standardized psychological tests during the systematic medical examinations of Croatian adolescents aged 15-22 years, which could help to detect early those adolescents who are at risk of substance abuse or who have already developed an addiction.

  17. FIFRA Peer Review: Proposed Risk Assessment Methods Process

    Science.gov (United States)

    From September 11-14, 2012, EPA participated in a Federal Insecticide, Fungicide and Rodenticide Act Scientific Advisory Panel (SAP) meeting on a proposed pollinator risk assessment framework for determining the potential risks of pesticides to honey bees.

  18. 76 FR 5614 - Applications and Amendments to Facility Operating Licenses Involving Proposed No Significant...

    Science.gov (United States)

    2011-02-01

    ... against cyber attacks up to and including the design basis threat. Part one of the proposed change is... of adequate protection against cyber attacks, up to and including the design basis threat. The... you do not have access to ADAMS or if there are problems in accessing the documents located in ADAMS...

  19. 75 FR 70665 - Proposed Significant New Use Rule for Cobalt Lithium Manganese Nickel Oxide

    Science.gov (United States)

    2010-11-18

    .... Attention: Docket ID Number EPA-HQ-OPPT-2009-0922. The DCO is open from 8 a.m. to 4 p.m., Monday through... respirators unless actual measurements of the workplace air show that air-borne concentrations of the PMN...; requires establishment of a hazard communication program; and prohibits releases to water. The proposed...

  20. Hypothesis: primary antiangiogenic method proposed to treat early stage breast cancer

    International Nuclear Information System (INIS)

    Retsky, Michael W; Hrushesky, William JM; Gukas, Isaac D

    2009-01-01

    Women with Down syndrome very rarely develop breast cancer even though they now live to an age when it normally occurs. This may be related to the fact that Down syndrome persons have an additional copy of chromosome 21 where the gene that codes for the antiangiogenic protein Endostatin is located. Can this information lead to a primary antiangiogenic therapy for early stage breast cancer that indefinitely prolongs remission? A key question that arises is when is the initial angiogenic switch thrown in micrometastases? We have conjectured that avascular micrometastases are dormant and relatively stable if undisturbed but that for some patients angiogenesis is precipitated by surgery. We also proposed that angiogenesis of micrometastases very rarely occurs before surgical removal of the primary tumor. If that is so, it seems possible that we could suggest a primary antiangiogenic therapy but the problem then arises that starting a therapy before surgery would interfere with wound healing. The therapy must be initiated at least one day prior to surgical removal of the primary tumor and kept at a Down syndrome level perhaps indefinitely. That means the drug must have virtually no toxicity and not interfere meaningfully with wound healing. This specifically excludes drugs that significantly inhibit the VEGF pathway since that is important for wound healing and because these agents have some toxicity. Endostatin is apparently non-toxic and does not significantly interfere with wound healing since Down syndrome patients have no abnormal wound healing problems. We propose a therapy for early stage breast cancer consisting of Endostatin at or above Down syndrome levels starting at least one day before surgery and continuing at that level. This should prevent micrometastatic angiogenesis resulting from surgery or at any time later. Adjuvant chemotherapy or hormone therapy should not be necessary. 
This can be continued indefinitely since there is no acquired resistance that

  1. Toward a holistic environmental impact assessment of marble quarrying and processing: proposal of a novel easy-to-use IPAT-based method.

    Science.gov (United States)

    Capitano, Cinzia; Peri, Giorgia; Rizzo, Gianfranco; Ferrante, Patrizia

    2017-03-01

    Marble is a natural dimension stone that is widely used in building due to its resistance and esthetic qualities. However, some concerns have arisen regarding its production process, because quarrying and processing activities demand significant amounts of energy and greatly affect the environment. Furthermore, an environmental analysis of a production process such as that of marble must consider many environmental aspects (e.g., noise, vibrations, dust and waste production, energy consumption). Current impact-accounting tools do not seem capable of considering all of the major aspects of the marble production process that may affect the environment, and thus cannot provide a comprehensive and concise assessment of its environmental aspects. Innovative, easy, and reliable methods for evaluating its environmental impact are therefore necessary, and they must be accessible to non-technicians. The present study contributes in this direction by proposing a reliable and easy-to-use evaluation method to assess the significance of the environmental impacts associated with the marble production process. An application of the method to an actual marble-producing company is presented to demonstrate its practicability. Because of its relative ease of use, the method presented here can also serve as a "self-assessment" tool for pursuing a virtuous environmental policy, because it enables company owners to easily identify the segments of their production chain that most require environmental enhancement.

  2. Dictionary Pruning with Visual Word Significance for Medical Image Retrieval.

    Science.gov (United States)

    Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei

    2016-02-12

    Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency.
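
    The abstract does not spell out the iterative ranking, so the sketch below only gestures at its flavor: a HITS-style mutual-reinforcement iteration over a hypothetical topic-word weight matrix, in which a word's overall significance grows with its weight in important topics and a topic's importance grows with the significance of its words.

```python
import numpy as np

def overall_word_significance(topic_word, iters=100):
    """Iteratively score words from a nonnegative topic-word weight
    matrix (rows: latent topics, cols: visual words).

    HITS-style mutual reinforcement: a word matters if it is strong in
    important topics; a topic is important if it contains strong words.
    """
    n_words = topic_word.shape[1]
    word = np.ones(n_words) / n_words
    for _ in range(iters):
        topic = topic_word @ word        # topics gather word scores
        topic /= topic.sum()
        word = topic_word.T @ topic      # words gather topic scores
        word /= word.sum()
    return word

# toy dictionary: word 0 is strong in both topics, word 3 is background noise
tw = np.array([[0.9, 0.5, 0.1, 0.05],
               [0.8, 0.1, 0.6, 0.05]])
scores = overall_word_significance(tw)
ranking = np.argsort(scores)[::-1]   # most significant words first
print(ranking)
```

    In a pruning step, the lowest-scoring words would be dropped from the dictionary before retrieval; the actual PD-LST scoring is defined in the paper.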

  3. Proposed efficient method for ticket booking (PEMTB) | Ahmed ...

    African Journals Online (AJOL)

    Journal of Fundamental and Applied Sciences. Journal Home ... We used angular JS, ionic for a front end and node.js, express.js for a back end and mongo DB for a database. ... Our proposed system is totally softcopy and in digitalized.

  4. Proposal of Screening Method of Sleep Disordered Breathing Using Fiber Grating Vision Sensor

    Science.gov (United States)

    Aoki, Hirooki; Nakamura, Hidetoshi; Nakajima, Masato

    Every conventional respiration monitoring technique requires at least one sensor to be attached to the body of the subject during measurement, imposing a sense of restraint that results in aversion to measurements lasting over consecutive days. To solve this problem, we developed a respiration monitoring system for sleepers that uses a fiber-grating vision sensor, a type of active image sensor, to achieve non-contact respiration monitoring. In this paper, we verify the effectiveness of the system and propose a screening method for sleep disordered breathing. Our system was shown to measure respiration as accurately as a thermistor and an accelerograph. Moreover, the respiratory condition of sleepers can be grasped at a glance with our screening method, which appears useful for supporting the screening of sleep disordered breathing.

  5. Significance of roentgenologic and nuclear medicine methods in diagnosis and operative indications of coronary artery disease

    Energy Technology Data Exchange (ETDEWEB)

    Felix, R [Bonn Univ. (F.R. Germany). Radiologische Klinik; Winkler, C [Bonn Univ. (F.R. Germany). Inst. fuer Klinische und Experimentelle Nuklearmedizin; Schaede, A [Bonn Univ. (F.R. Germany). Medizinische Klinik

    1976-03-01

    Significance and technique of roentgenologic and nuclear medicine methods for evaluation of coronary artery disease and myocardial perfusion are presented. Some routinely used methods in nuclear medicine are briefly discussed concerning the evaluation of left ventricular function.

  6. Modified Cross Feedback Control for a Magnetically Suspended Flywheel Rotor with Significant Gyroscopic Effects

    Directory of Open Access Journals (Sweden)

    Yuan Ren

    2014-01-01

    For magnetically suspended rigid rotors (MSRs) with significant gyroscopic effects, phase lag in the control channel is the main factor limiting nutation stability and decoupling performance. This paper first proves that at high speeds the phase lag of the cross channel, rather than that of the decentralized channel, is often the dominant factor affecting nutation stability. A modified cross feedback control strategy based on phase compensation of the cross channel is then proposed to improve the stability and decoupling performance, resolving common issues associated with traditional control methods. Analysis, simulation, and experimental results are presented to demonstrate the feasibility and superiority of the proposed control method.

  7. Relating two proposed methods for speedup of algorithms for fitting two- and three-way principal component and related multilinear models

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Harshman, Richard A.

    Multilinear analysis methods such as component (and three-way component) analysis of very large data sets can become very computationally demanding and even infeasible unless some method is used to compress the data and/or speed up the algorithms. We discuss two previously proposed speedup methods.

  8. Method of extracting significant trouble information of nuclear power plants using probabilistic analysis technique

    International Nuclear Information System (INIS)

    Shimada, Yoshio; Miyazaki, Takamasa

    2005-01-01

    In order to analyze and evaluate large amounts of trouble information of overseas nuclear power plants, it is necessary to select information that is significant in terms of both safety and reliability. In this research, a method of efficiently and simply classifying degrees of importance of components in terms of safety and reliability while paying attention to root-cause components appearing in the information was developed. Regarding safety, the reactor core damage frequency (CDF), which is used in the probabilistic analysis of a reactor, was used. Regarding reliability, the automatic plant trip probability (APTP), which is used in the probabilistic analysis of automatic reactor trips, was used. These two aspects were reflected in the development of criteria for classifying degrees of importance of components. By applying these criteria, a simple method of extracting significant trouble information of overseas nuclear power plants was developed. (author)

  9. Proposed method for regulating major materials licensees

    International Nuclear Information System (INIS)

    1992-02-01

    The Director, Office of Nuclear Material Safety and Safeguards, US Nuclear Regulatory Commission, appointed a Materials Regulatory Review Task Force to conduct a broad-based review of the Commission's current licensing and oversight programs for fuel cycle and large materials plants. The task force, as requested, defined the components and subcomponents of an ideal regulatory evaluation system for these types of licensed plants and compared them to the components and subcomponents of the existing regulatory evaluation system. This report discusses the findings from this comparison and presents recommendations based on those findings.

  10. Proposal for a Method for Business Model Performance Assessment: Toward an Experimentation Tool for Business Model Innovation

    Directory of Open Access Journals (Sweden)

    Antonio Batocchio

    2017-04-01

    The representation of business models has recently become widespread, especially in the pursuit of innovation. However, defining a company's business model is sometimes limited to discussion and debate. This study observes the need for performance measurement so that business models can be data-driven. To meet this goal, it proposes a method that combines the practices of the Balanced Scorecard with a business model representation method, the Business Model Canvas. The combination was based on a study of conceptual adaptation, resulting in an application roadmap. A case study was performed to check the functionality of the proposition, focusing on startup organizations. It was concluded that, based on the performance assessment of the business model, it is possible to pursue change through experimentation, a path that can lead to business model innovation.

  11. Proposal on the mitigation methods of thermal stress near the sodium

    International Nuclear Information System (INIS)

    Ando, Masanori; Kasahara, Naoto

    2003-09-01

    The reactor vessel of a fast reactor plant contains high-temperature liquid sodium, and its upper end is supported by low-temperature structures. A significant temperature gradient therefore arises in the vessel wall near the sodium surface, generating a large thermal stress in this region. To lower this stress and protect the vessel, a number of mitigation methods have been applied in existing plants. However, mitigation by protective equipment generally brings problems of its own, such as increased material requirements, complicated control, and difficult maintenance. In this research, the authors propose simpler methods of mitigating the thermal stress and evaluate their effects by computer analysis. The results obtained are as follows. The first proposed method circulates high-temperature gas around the outside of the vessel; analysis indicates that the Sn value (a design index) could be reduced by about 45%. The second method attaches a heat transfer plate to the outside of the vessel; its effect depends on the plate material, with a reduction in the Sn value of about 27% for carbon and about 15% for 12Cr steel. The third method changes the guard vessel material to one with better heat transfer capability; with 12Cr steel, the Sn value could be reduced by about 12%. (author)

  12. Proposed method to produce a highly polarized e+ beam for future linear colliders

    International Nuclear Information System (INIS)

    Okugi, Toshiyuki; Chiba, Masami; Kurihara, Yoshimasa

    1996-01-01

    We propose a method to produce a spin-polarized e⁺ beam using e⁺e⁻ pair-creation by circularly polarized photons. Assuming Compton scattering of an unpolarized e⁻ beam and circularly polarized laser light, scattered γ-rays at the high end of the energy spectrum are also circularly polarized. If those γ-rays are utilized to create e± pairs on a thin target, the spin-polarization is preserved for e⁺'s at the high end of their energy spectrum. By using the injector linac of the Accelerator Test Facility at KEK and a commercially available Nd:YAG pulse laser, we can expect about 10⁵ polarized e⁺'s per second with a degree of polarization of 80% and a kinetic energy of 35-80 MeV. The apparatus for creation and measurement of polarized e⁺'s is being constructed. We present a new idea for possible application of our method to future linear colliders by utilizing a high-power CO₂ laser. (author)

  13. Sensitivity analysis of a complex, proposed geologic waste disposal system using the Fourier Amplitude Sensitivity Test method

    International Nuclear Information System (INIS)

    Lu Yichi; Mohanty, Sitakanta

    2001-01-01

    The Fourier Amplitude Sensitivity Test (FAST) method has been used to perform a sensitivity analysis of a computer model developed for conducting total system performance assessment of the proposed high-level nuclear waste repository at Yucca Mountain, Nevada, USA. The computer model has a large number of random input parameters with assigned probability density functions, which may or may not be uniform, for representing data uncertainty. The FAST method, which was previously applied only to models with parameters represented by the uniform probability distribution function, has been modified to apply to models with nonuniform probability distribution functions. Using an example problem with a small input parameter set, several aspects of the FAST method have been investigated, such as the effects of integer frequency sets and random phase shifts in the functional transformations, and of the number of discrete sampling points (equivalent to the number of model executions) on the ranking of the input parameters. Because the number of input parameters of the computer model under investigation is too large to be handled by the FAST method, less important input parameters were first screened out using the Morris method. The FAST method was then used to rank the remaining parameters. The validity of the parameter ranking by the FAST method was verified using the conditional complementary cumulative distribution function (CCDF) of the output. The CCDF results revealed that the introduction of random phase shifts into the functional transformations, proposed by previous investigators to disrupt the repetitiveness of search curves, does not necessarily improve the sensitivity analysis results, because it destroys the orthogonality of the trigonometric functions that is required for Fourier analysis.
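
    A minimal first-order FAST implementation conveys the mechanics on a standard benchmark (the Ishigami function, not the repository model). Each input is driven along a periodic search curve at its own integer frequency, and the share of output spectral power at that frequency's harmonics estimates the input's first-order sensitivity index. The frequency set below is simply chosen so that harmonics up to fourth order do not overlap; it is an illustrative choice, not one of Cukier's published sets.

```python
import numpy as np

def fast_first_order(model, omegas, harmonics=4):
    """Minimal first-order FAST: each input is driven along a search
    curve x_i(s) in [0, 1]; the share of output spectral power at the
    harmonics of each input's frequency estimates its sensitivity index.
    """
    omegas = np.asarray(omegas)
    n = 2 * harmonics * omegas.max() * 4 + 1     # dense, odd sample count
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # triangular search curve mapping s to uniformly distributed x in [0, 1]
    x = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi
    y = model(x)
    spec = np.abs(np.fft.rfft(y)) ** 2 / n ** 2  # one-sided power spectrum
    total = 2.0 * spec[1:].sum()                 # output variance estimate
    sens = []
    for w in omegas:
        part = 2.0 * sum(spec[p * w] for p in range(1, harmonics + 1))
        sens.append(part / total)
    return np.array(sens)

def ishigami(x):
    """Ishigami function on [-pi, pi]^3, a standard sensitivity benchmark."""
    a = -np.pi + 2 * np.pi * x
    return np.sin(a[0]) + 7 * np.sin(a[1]) ** 2 + 0.1 * a[2] ** 4 * np.sin(a[0])

# frequencies chosen so harmonics up to order 4 do not overlap across inputs
S = fast_first_order(ishigami, omegas=[11, 21, 29])
print(S.round(2))   # analytic first-order indices: ~0.31, ~0.44, ~0.00
```

    The ranking x₂ > x₁ > x₃ should be reproduced; the interaction term involving x₃ is invisible to first-order indices, which is one reason extended FAST variants and the CCDF check described above matter.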

  14. Method of administration of PROMIS scales did not significantly impact score level, reliability, or validity

    DEFF Research Database (Denmark)

    Bjorner, Jakob B; Rose, Matthias; Gandek, Barbara

    2014-01-01

    OBJECTIVES: To test the impact of the method of administration (MOA) on score level, reliability, and validity of scales developed in the Patient Reported Outcomes Measurement Information System (PROMIS). STUDY DESIGN AND SETTING: Two nonoverlapping parallel forms, each containing eight items from …, were administered, one by paper questionnaire (PQ), personal digital assistant (PDA), or personal computer (PC) and a second form by PC, in the same administration. Method equivalence was evaluated through analyses of difference scores, intraclass correlations (ICCs), and convergent/discriminant validity. RESULTS: In difference score analyses, no significant mode differences were found and all confidence intervals were within the prespecified minimal important difference of 0.2 standard deviation. Parallel-forms reliabilities were very high (ICC = 0.85-0.93). Only one across-mode ICC was significantly lower than the same-mode ICC. Tests of validity …

  15. EMS and process of identification and evaluation of environmental aspects: a proposal methodology

    International Nuclear Information System (INIS)

    Perotto, E.

    2006-01-01

    An Environmental Management System (EMS) is an instrument for managing the interaction between an organization and the environment. The scope of an EMS is to reduce environmental impact and to achieve improvements in overall performance. In particular, the focal point of EMS implementation is the method for identifying and assessing significant environmental aspects. Reviews of the literature and regulations (Perotto 2006) have shown that rigorous, repeatable, and transparent methodologies do not exist. This paper presents a proposed method for identifying and assessing significant environmental aspects that has all three of these important characteristics. In particular, the proposed methodology for assessing aspects is based on criteria that are combined in a specific algorithm. For a correct application of the method, a preliminary rigorous investigation of the environment and the activities of the organization is necessary.

  16. The energetic significance of cooking.

    Science.gov (United States)

    Carmody, Rachel N; Wrangham, Richard W

    2009-10-01

    While cooking has long been argued to improve the diet, the nature of the improvement has not been well defined. As a result, the evolutionary significance of cooking has variously been proposed as being substantial or relatively trivial. In this paper, we evaluate the hypothesis that an important and consistent effect of cooking food is a rise in its net energy value. The pathways by which cooking influences net energy value differ for starch, protein, and lipid, and we therefore consider plant and animal foods separately. Evidence of compromised physiological performance among individuals on raw diets supports the hypothesis that cooked diets tend to provide energy. Mechanisms contributing to energy being gained from cooking include increased digestibility of starch and protein, reduced costs of digestion for cooked versus raw meat, and reduced energetic costs of detoxification and defence against pathogens. If cooking consistently improves the energetic value of foods through such mechanisms, its evolutionary impact depends partly on the relative energetic benefits of non-thermal processing methods used prior to cooking. We suggest that if non-thermal processing methods such as pounding were used by Lower Palaeolithic Homo, they likely provided an important increase in energy gain over unprocessed raw diets. However, cooking has critical effects not easily achievable by non-thermal processing, including the relatively complete gelatinisation of starch, efficient denaturing of proteins, and killing of food borne pathogens. This means that however sophisticated the non-thermal processing methods were, cooking would have conferred incremental energetic benefits. While much remains to be discovered, we conclude that the adoption of cooking would have led to an important rise in energy availability. For this reason, we predict that cooking had substantial evolutionary significance.

  17. 78 FR 29201 - Notice of Availability of Finding of No Significant Impact for the Proposed Vantage Pipeline US...

    Science.gov (United States)

    2013-05-17

    ... Impact for the Proposed Vantage Pipeline US LP Ethane Pipeline Project SUMMARY: The purpose of this... Impact on the proposed Vantage Pipeline US LP Ethane Pipeline Project. Under E.O. 13337, the Secretary of... petroleum, petroleum products, or other non-gaseous fuels to or from a foreign country. Vantage Pipeline US...

  18. Using a fuzzy comprehensive evaluation method to determine product usability: A proposed theoretical framework.

    Science.gov (United States)

    Zhou, Ronggang; Chan, Alan H S

    2017-01-01

    In order to compare existing usability data to ideal goals or to those of other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product and often fail to take into account the inherent uncertainties in these judgments. This paper presents a universal method of usability evaluation that combines the analytic hierarchy process (AHP) with the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the proposed method derives an index that is structured hierarchically in terms of the three usability components of a product: effectiveness, efficiency, and user satisfaction. With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed using the fuzzy comprehensive evaluation model to characterize fuzzy human judgments. Then, with the use of AHP, the weights of the usability components were elicited from these experts. Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework that combines vague judgments from multiple stages of a product evaluation process.
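
    The pipeline described, expert judgments → fuzzy membership degrees, AHP → component weights, weighted composition → an overall appraisal, can be sketched compactly. All numbers below (pairwise judgments, membership degrees, rating levels) are hypothetical stand-ins, not values from the study:

```python
import numpy as np

def ahp_weights(pairwise, iters=100):
    """Principal-eigenvector weights from an AHP pairwise comparison
    matrix, computed by power iteration."""
    w = np.ones(pairwise.shape[0])
    for _ in range(iters):
        w = pairwise @ w
        w /= w.sum()
    return w

def fuzzy_evaluate(weights, membership):
    """Weighted fuzzy composition B = w . R, normalized so the appraisal
    over the rating levels sums to one."""
    b = weights @ membership
    return b / b.sum()

# pairwise judgments for effectiveness, efficiency, satisfaction
# (hypothetical: effectiveness judged 2x efficiency, 3x satisfaction)
A = np.array([[1.0, 2.0, 3.0],
              [1/2, 1.0, 2.0],
              [1/3, 1/2, 1.0]])
# membership of each component in rating levels (poor, fair, good, excellent),
# as might be elicited from an expert panel
R = np.array([[0.0, 0.1, 0.5, 0.4],
              [0.1, 0.3, 0.4, 0.2],
              [0.0, 0.2, 0.6, 0.2]])

w = ahp_weights(A)
B = fuzzy_evaluate(w, R)
levels = ["poor", "fair", "good", "excellent"]
print("weights:", w.round(3))
print("overall usability level:", levels[int(np.argmax(B))])
```

    The two-layer index in the paper would apply this composition once per sub-criterion group and again at the top level; the single-layer sketch shows the core operation.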

  19. Enhancing the Social Network Dimension of Lifelong Competence Development and Management Systems: A Proposal of Methods and Tools

    NARCIS (Netherlands)

    Cheak, Alicia; Angehrn, Albert; Sloep, Peter

    2006-01-01

    Cheak, A. M., Angehrn, A. A., & Sloep, P. (2006). Enhancing the social network dimension of lifelong competence development and management systems: A proposal of methods and tools. In R. Koper & K. Stefanov (Eds.). Proceedings of International Workshop in Learning Networks for Lifelong Competence

  20. Functional assays for analysis of variants of uncertain significance in BRCA2

    DEFF Research Database (Denmark)

    Guidugli, Lucia; Carreira, Aura; Caputo, Sandrine M

    2014-01-01

    Missense variants in the BRCA2 gene are routinely detected during clinical screening for pathogenic mutations in patients with a family history of breast and ovarian cancer. These subtle changes frequently remain of unknown clinical significance because of the lack of genetic information that may … of uncertain significance analyzed, and describe a validation set of (genetically) proven pathogenic and neutral missense variants to serve as a gold standard for the validation of each assay. Guidelines are proposed to enable implementation of laboratory-based methods to assess the impact of the variant

  1. Robust gene selection methods using weighting schemes for microarray data analysis.

    Science.gov (United States)

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performance of many gene selection techniques is highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques, obtained by applying a simple modification to significance analysis of microarrays (SAM). To demonstrate the effectiveness of the proposed methods, we considered a series of synthetic datasets with different noise levels and sample sizes, along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation setups; in particular, they are much better when the given data are noisy and the sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level increased or the sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of the simulation study and real data analysis demonstrate that our proposed methods are effective for detecting significant genes and for classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.
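
    The abstract does not reproduce the authors' weighting scheme, but the flavor of a SAM-style moderated statistic with a weight hook can be sketched on synthetic data (the `weights` argument below is a placeholder for such a scheme, not the paper's):

```python
import numpy as np

def sam_like_stat(x, y, s0=None, weights=None):
    """SAM-style moderated statistic d = (mean_x - mean_y) / (s + s0),
    optionally multiplied by per-gene weights."""
    diff = x.mean(axis=1) - y.mean(axis=1)
    nx, ny = x.shape[1], y.shape[1]
    pooled = ((x.var(axis=1, ddof=1) * (nx - 1) +
               y.var(axis=1, ddof=1) * (ny - 1)) / (nx + ny - 2))
    s = np.sqrt(pooled * (1 / nx + 1 / ny))
    if s0 is None:
        s0 = np.median(s)       # fudge factor stabilizes tiny variances
    d = diff / (s + s0)
    if weights is not None:
        d = d * weights
    return d

rng = np.random.default_rng(7)
genes, n = 100, 4               # 100 genes, 4 replicates per condition
x = rng.normal(0, 1, (genes, n))
y = rng.normal(0, 1, (genes, n))
true_de = np.arange(5)          # first 5 genes truly differential
x[true_de] += 4.0

d = sam_like_stat(x, y)
top10 = np.argsort(-np.abs(d))[:10]
print("true DE genes recovered in top 10:",
      len(set(top10) & set(true_de)))
```

    With only four replicates per group, the s0 term is what keeps low-variance null genes from dominating the ranking, which is the small-sample setting where the paper reports the largest gains.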

  2. Enhancing the Social Network Dimension of Lifelong Competence Development and Management Systems: A proposal of methods and tools

    NARCIS (Netherlands)

    Cheak, Alicia; Angehrn, Albert; Sloep, Peter

    2006-01-01

    Cheak, A. M., Angehrn, A. A., & Sloep, P. B. (2006). Enhancing the social network dimension of lifelong competence development and management systems: A proposal of methods and tools. In E. J. R. Koper & K. Stefanov (Eds.), Proceedings of International Workshop on Learning Networks for Lifelong

  3. A method for the estimation of the significance of cross-correlations in unevenly sampled red-noise time series

    Science.gov (United States)

    Max-Moerbeck, W.; Richards, J. L.; Hovatta, T.; Pavlidou, V.; Pearson, T. J.; Readhead, A. C. S.

    2014-11-01

    We present a practical implementation of a Monte Carlo method to estimate the significance of cross-correlations in unevenly sampled time series of data, whose statistical properties are modelled with a simple power-law power spectral density. This implementation builds on published methods; we introduce a number of improvements in the normalization of the cross-correlation function estimate and a bootstrap method for estimating the significance of the cross-correlations. A closely related matter is the estimation of a model for the light curves, which is critical for the significance estimates. We present a graphical and quantitative demonstration that uses simulations to show how common it is to get high cross-correlations for unrelated light curves with steep power spectral densities. This demonstration highlights the dangers of interpreting them as signs of a physical connection. We show that by using interpolation and the Hanning sampling window function we are able to reduce the effects of red-noise leakage and to recover steep simple power-law power spectral densities. We also introduce the use of a Neyman construction for the estimation of the errors in the power-law index of the power spectral density. This method provides a consistent way to estimate the significance of cross-correlations in unevenly sampled time series of data.
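
    The paper's central demonstration, that unrelated red-noise light curves routinely produce impressive cross-correlations, can be reproduced in miniature. This sketch assumes even sampling and a plain power-law PSD; the paper's implementation additionally handles uneven sampling, interpolation, and the Hanning window:

```python
import numpy as np

rng = np.random.default_rng(0)

def rednoise(n, beta, rng):
    """Simulate a power-law (P(f) ~ f^-beta) light curve via the
    Timmer & Koenig recipe: random phases, amplitudes ~ f^(-beta/2)."""
    f = np.fft.rfftfreq(n, d=1.0)[1:]
    amp = f ** (-beta / 2.0) * (rng.normal(size=f.size) +
                                1j * rng.normal(size=f.size))
    spec = np.concatenate(([0.0], amp))
    x = np.fft.irfft(spec, n)
    return (x - x.mean()) / x.std()

def max_crosscorr(a, b, maxlag):
    """Maximum absolute sample cross-correlation over a lag window."""
    best = 0.0
    for lag in range(-maxlag, maxlag + 1):
        if lag >= 0:
            r = np.corrcoef(a[lag:], b[:len(b) - lag])[0, 1]
        else:
            r = np.corrcoef(a[:len(a) + lag], b[-lag:])[0, 1]
        best = max(best, abs(r))
    return best

n, beta, maxlag = 200, 2.0, 40
peaks = [max_crosscorr(rednoise(n, beta, rng), rednoise(n, beta, rng), maxlag)
         for _ in range(200)]
print(f"median peak |ccf| of UNRELATED red-noise pairs: {np.median(peaks):.2f}")

# significance of an observed peak = fraction of simulated unrelated
# pairs that exceed it; a seemingly impressive peak may be common
observed = 0.6
p_value = np.mean(np.array(peaks) >= observed)
print(f"chance probability of a peak >= {observed}: {p_value:.2f}")
```

    For steep spectra (β ≈ 2) the median peak correlation of entirely unrelated pairs is already large, which is exactly the danger the paper's graphical demonstration highlights.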

  4. Determining Semantically Related Significant Genes.

    Science.gov (United States)

    Taha, Kamal

    2014-01-01

A GO relation embodies some aspects of existence dependency. If GO term x is existence-dependent on GO term y, the presence of y implies the presence of x. Therefore, the genes annotated with the function of GO term y are usually functionally and semantically related to the genes annotated with the function of GO term x. A large number of gene set enrichment analysis methods have been developed in recent years. However, most of these methods overlook the structural dependencies between GO terms in the GO graph by not considering the concept of existence dependency. We propose in this paper a biological search engine called RSGSearch that identifies enriched sets of genes annotated with different functions using the concept of existence dependency. We observe that GO term x cannot be existence-dependent on GO term y if x and y have the same specificity (biological characteristics). After encoding into a numeric format the contributions of GO terms annotating target genes to the semantics of their lowest common ancestors (LCAs), RSGSearch uses microarray experiments to identify the most significant LCA that annotates the result genes. We evaluated RSGSearch experimentally and compared it with five gene set enrichment systems. Results showed marked improvement.

  5. A Fast LMMSE Channel Estimation Method for OFDM Systems

    Directory of Open Access Journals (Sweden)

    Zhou Wen

    2009-01-01

Full Text Available A fast linear minimum mean square error (LMMSE) channel estimation method is proposed for Orthogonal Frequency Division Multiplexing (OFDM) systems. In comparison with conventional LMMSE channel estimation, the proposed method does not require statistical knowledge of the channel in advance and avoids the inversion of a large matrix by using the fast Fourier transform (FFT). The computational complexity can therefore be reduced significantly. The normalized mean square errors (NMSEs) of the proposed method and the conventional LMMSE estimator have been derived. Numerical results show that the NMSE of the proposed method is very close to that of the conventional LMMSE method, which is also verified by computer simulation. In addition, computer simulation shows that the performance of the proposed method is almost the same as that of the conventional LMMSE method in terms of bit error rate (BER).
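
The key idea, replacing a large matrix inversion with FFT operations, can be sketched under a simplifying assumption that is not from the paper: if the channel correlation matrix is circulant, the DFT diagonalizes it, so the LMMSE filter can be applied eigenvalue-wise in the Fourier domain. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
sigma2 = 0.1  # noise variance (assumed known here)

# Assumed circulant channel correlation matrix R (symmetric first row),
# so R is diagonalized by the DFT and its eigenvalues are fft(first_row).
i = np.arange(N)
first_row = 0.9 ** np.minimum(i, N - i)
R = np.array([np.roll(first_row, k) for k in range(N)])
eig = np.fft.fft(first_row).real

h_ls = rng.standard_normal(N)  # stand-in for a least-squares channel estimate

# Direct LMMSE filtering: W = R (R + sigma2*I)^-1, needs a large solve
h_direct = R @ np.linalg.solve(R + sigma2 * np.eye(N), h_ls)

# FFT-based equivalent: filter each Fourier coefficient independently
h_fft = np.fft.ifft(np.fft.fft(h_ls) * eig / (eig + sigma2)).real

print(np.max(np.abs(h_direct - h_fft)))  # agree to numerical precision
```

The FFT route costs O(N log N) per estimate instead of the O(N^3) solve, which is the source of the complexity reduction the abstract describes.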

  6. The clinically-integrated randomized trial: proposed novel method for conducting large trials at low cost

    Directory of Open Access Journals (Sweden)

    Scardino Peter T

    2009-03-01

    Full Text Available Abstract Introduction Randomized controlled trials provide the best method of determining which of two comparable treatments is preferable. Unfortunately, contemporary randomized trials have become increasingly expensive, complex and burdened by regulation, so much so that many trials are of doubtful feasibility. Discussion Here we present a proposal for a novel, streamlined approach to randomized trials: the "clinically-integrated randomized trial". The key aspect of our methodology is that the clinical experience of the patient and doctor is virtually indistinguishable whether or not the patient is randomized, primarily because outcome data are obtained from routine clinical data, or from short, web-based questionnaires. Integration of a randomized trial into routine clinical practice also implies that there should be an attempt to randomize every patient, a corollary of which is that eligibility criteria are minimized. The similar clinical experience of patients on- and off-study also entails that the marginal cost of putting an additional patient on trial is negligible. We propose examples of how the clinically-integrated randomized trial might be applied in four distinct areas of medicine: comparisons of surgical techniques, "me too" drugs, rare diseases and lifestyle interventions. Barriers to implementing clinically-integrated randomized trials are discussed. Conclusion The proposed clinically-integrated randomized trial may allow us to enlarge dramatically the number of clinical questions that can be addressed by randomization.

  7. 48 CFR 315.605 - Content of unsolicited proposals.

    Science.gov (United States)

    2010-10-01

    ... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Unsolicited Proposals 315.605 Content of... prepared under Government supervision; (b) The methods and approaches stated in the proposal were developed... Title (This certification shall be signed by a responsible management official of the proposing...

  8. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

Full Text Available Abstract Background In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
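
The Monte-Carlo Z-score idea can be sketched in a few lines (toy DNA sequences and scoring parameters, not the CluSTr/Protein World implementation): score the true pair with Smith-Waterman, then compare against scores obtained after shuffling one sequence.

```python
import random
import statistics

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Local alignment score (Smith-Waterman with a linear gap penalty)."""
    cols = len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(a) + 1):
        cur = [0] * cols
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            cur[j] = max(0, prev[j - 1] + s, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best

def mc_z_score(a, b, n_shuffles=200, seed=0):
    """Monte-Carlo Z-score: true score vs. scores against shuffles of b."""
    rng = random.Random(seed)
    s0 = smith_waterman(a, b)
    bl = list(b)
    scores = []
    for _ in range(n_shuffles):
        rng.shuffle(bl)
        scores.append(smith_waterman(a, "".join(bl)))
    sd = statistics.pstdev(scores) or 1.0
    return (s0 - statistics.mean(scores)) / sd

related = mc_z_score("ACGTGACCTGAAGTCGATTACAGG", "ACGTGACCTGAAGTCGATTACAGG")
unrelated = mc_z_score("ACGTGACCTGAAGTCGATTACAGG", "TTTTGGGGCCCCAAAATTTTGGGG")
print(round(related, 1), round(unrelated, 1))
```

The shuffling preserves composition while destroying order, which is what makes the Z-score compute-intensive: every query-subject pair needs hundreds of extra alignments.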

  9. Estimates of statistical significance for comparison of individual positions in multiple sequence alignments

    Directory of Open Access Journals (Sweden)

    Sadreyev Ruslan I

    2004-08-01

Full Text Available Abstract Background Profile-based analysis of multiple sequence alignments (MSA) allows for accurate comparison of protein families. Here, we address the problems of detecting statistically confident dissimilarities between (1) an MSA position and a set of predicted residue frequencies, and (2) two MSA positions. These problems are important for (i) evaluation and optimization of methods predicting residue occurrence at protein positions; (ii) detection of potentially misaligned regions in automatically produced alignments and their further refinement; and (iii) detection of sites that determine functional or structural specificity in two related families. Results For problems (1) and (2), we propose analytical estimates of the P-value and apply them to the detection of significant positional dissimilarities in various experimental situations. (a) We compare structure-based predictions of residue propensities at a protein position to the actual residue frequencies in the MSA of homologs. (b) We evaluate our method by its ability to detect erroneous position matches produced by an automatic sequence aligner. (c) We compare MSA positions that correspond to residues aligned by automatic structure aligners. (d) We compare MSA positions that are aligned by high-quality manual superposition of structures. Detected dissimilarities reveal shortcomings of the automatic methods for residue frequency prediction and alignment construction. For the high-quality structural alignments, the dissimilarities suggest sites of potential functional or structural importance. Conclusion The proposed computational method is of significant potential value for the analysis of protein families.

  10. An adaptive-binning method for generating constant-uncertainty/constant-significance light curves with Fermi-LAT data

    International Nuclear Information System (INIS)

    Lott, B.; Escande, L.; Larsson, S.; Ballet, J.

    2012-01-01

Here, we present a method enabling the creation of constant-uncertainty/constant-significance light curves with the data of the Fermi Large Area Telescope (LAT). The adaptive-binning method enables more information to be encapsulated within the light curve than the fixed-binning method. Although primarily developed for blazar studies, it can be applied to any source. Furthermore, this method allows the starting and ending times of each interval to be calculated in a simple and quick way during a first step. The mean flux and spectral index (assuming the spectrum is a power-law distribution) reported for the interval are calculated via the standard LAT analysis during a second step. The absence of major caveats associated with this method has been established with Monte-Carlo simulations. We present the performance of this method in determining duty cycles as well as power-density spectra relative to the traditional fixed-binning method.
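
The adaptive-binning principle, closing each time bin once it holds enough events for a target Poisson uncertainty, can be sketched as follows. The event simulation and the fixed counts-per-bin criterion are illustrative assumptions, not the LAT analysis itself.

```python
import numpy as np

def adaptive_bin_edges(event_times, counts_per_bin):
    """Close a bin each time it has accumulated a fixed number of events,
    so the relative Poisson uncertainty per bin (~1/sqrt(N)) stays constant."""
    edges = [event_times[0]]
    n = 0
    for t in event_times:
        n += 1
        if n >= counts_per_bin:
            edges.append(t)
            n = 0
    return np.array(edges)

rng = np.random.default_rng(2)
# Simulated photon arrival times: a quiet phase, then a flare at 10x the rate
quiet = np.cumsum(rng.exponential(1.0, 200))
flare = quiet[-1] + np.cumsum(rng.exponential(0.1, 200))
edges = adaptive_bin_edges(np.concatenate([quiet, flare]), 25)
widths = np.diff(edges)
print(widths.round(1))  # bins shrink during the flare
```

Bins automatically narrow when the source brightens, which is how the method packs more temporal information into flares than fixed binning can.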

  11. Assessment of SKB's proposal for encapsulation; Granskning av SKB:s foerslag till inkapslingsteknik

    Energy Technology Data Exchange (ETDEWEB)

    Lundin, M.; Gustafsson, Oskar; Broemsen, B. von; Troell, E. [IVF Industriforskning och utveckling AB, Moelndal (Sweden)

    2001-01-01

This report accounts for an independent assessment of a proposal regarding the manufacturing of copper canisters, presented by SKB (Swedish Nuclear Fuel and Waste Management Co) in cooperation with MABU Consulting. IVF (The Swedish Institute for Production Engineering Research) performed the assessment on commission from SKI (Swedish Nuclear Power Inspectorate). IVF generally believes that the proposed method, the recommended manufacturing equipment and the organisation will most likely allow a functioning manufacture of canisters to be realised. No significant deficiencies have been identified that would mean serious problems during the manufacturing process. In some cases IVF recommends further evaluation of the proposed methods and/or equipment; these concerns mainly relate to the welding processes. It should be stressed, however, that SKB has emphasised that further investigations will be performed on this subject. Furthermore, IVF recommends that the proposed methods and equipment for machining of copper cylinders and for blasting of inserts be further evaluated.

  12. Proposed method for reconstructing velocity profiles using a multi-electrode electromagnetic flow meter

    International Nuclear Information System (INIS)

    Kollár, László E; Lucas, Gary P; Zhang, Zhichao

    2014-01-01

    An analytical method is developed for the reconstruction of velocity profiles using measured potential distributions obtained around the boundary of a multi-electrode electromagnetic flow meter (EMFM). The method is based on the discrete Fourier transform (DFT), and is implemented in Matlab. The method assumes the velocity profile in a section of a pipe as a superposition of polynomials up to sixth order. Each polynomial component is defined along a specific direction in the plane of the pipe section. For a potential distribution obtained in a uniform magnetic field, this direction is not unique for quadratic and higher-order components; thus, multiple possible solutions exist for the reconstructed velocity profile. A procedure for choosing the optimum velocity profile is proposed. It is applicable for single-phase or two-phase flows, and requires measurement of the potential distribution in a non-uniform magnetic field. The potential distribution in this non-uniform magnetic field is also calculated for the possible solutions using weight values. Then, the velocity profile with the calculated potential distribution which is closest to the measured one provides the optimum solution. The reliability of the method is first demonstrated by reconstructing an artificial velocity profile defined by polynomial functions. Next, velocity profiles in different two-phase flows, based on results from the literature, are used to define the input velocity fields. In all cases, COMSOL Multiphysics is used to model the physical specifications of the EMFM and to simulate the measurements; thus, COMSOL simulations produce the potential distributions on the internal circumference of the flow pipe. These potential distributions serve as inputs for the analytical method. The reconstructed velocity profiles show satisfactory agreement with the input velocity profiles. 
The method described in this paper is most suitable for stratified flows and is not applicable to axisymmetric flows in
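
The DFT step at the heart of the reconstruction can be illustrated with a toy boundary-potential example. The electrode count and harmonic amplitudes are hypothetical: sampling the potential at equally spaced electrodes and taking an FFT recovers the harmonic amplitudes from which the polynomial velocity components are inferred.

```python
import numpy as np

n_el = 16                              # hypothetical electrode count
theta = 2 * np.pi * np.arange(n_el) / n_el
# Boundary potential: a first-harmonic (uniform-flow) term plus a weaker,
# phase-shifted second-harmonic term (values invented for illustration)
U = 1.5 * np.cos(theta) + 0.4 * np.cos(2 * theta - 0.3)

amps = np.abs(np.fft.rfft(U)) / (n_el / 2)
print(amps[1], amps[2])                # recovers the amplitudes 1.5 and 0.4
```

In the paper's framework, each harmonic order corresponds to a polynomial velocity component of matching order; the phase ambiguity of the higher harmonics is what creates the multiple candidate profiles that the non-uniform-field measurement then disambiguates.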

  13. 76 FR 81441 - Proposed Significant New Use Rules on Certain Chemical Substances

    Science.gov (United States)

    2011-12-28

.... The DCO is open from 8 a.m. to 4 p.m., Monday through Friday, excluding legal holidays. The telephone... 1.5 mg/m³ as an 8-hour time-weighted average; establishment of a hazard communication program; and..., or a similar incorporation. (2) The significant new uses are: (i) Protection in the workplace...

  14. Iterative Brinkman penalization for remeshed vortex methods

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Koumoutsakos, Petros; Leonard, Anthony

    2015-01-01

    We introduce an iterative Brinkman penalization method for the enforcement of the no-slip boundary condition in remeshed vortex methods. In the proposed method, the Brinkman penalization is applied iteratively only in the neighborhood of the body. This allows for using significantly larger time...

  15. Microalbuminuria: Its Significance, risk factors and methods of ...

    African Journals Online (AJOL)

    Alasia Datonye

    Male gender, hypertension, high salt (and protein) ... gender and high salt intake are also ... associated with a ... method has advantages and disadvantages, and the choice depends ... single voided urine samples to estimate quantitative proteinuria ... in an Urban and Periurban School, Port Harcourt, Rivers State.

  16. A proposed community reaction-wall facility at the JRC Ispra

    Energy Technology Data Exchange (ETDEWEB)

    Jones, P M; Donea, J [Commission of the European Communities, Joint Research Centre - Ispra Establishment Applied Mechanics Division, Ispra (Italy)

    1988-07-01

    The paper describes a large-size structural laboratory based on a reaction-wall facility proposed for the JRC Ispra establishment. It is foreseen that this will be used for large and full-scale testing of a wide variety of structures and components in the fields of civil/structural, mechanical, and geotechnical engineering. After briefly reviewing the background market research done to establish the needs for a large central facility in the Community, the main advantages and limitations of reaction-wall testing in comparison with other experimental techniques are summarized. The main characteristics of the proposed facility are then given followed by the identified fields of research in which significant tests can be performed. Finally, the proposed method of implementing an integral programme of work within the European Community member states is presented. (author)

  17. A proposed community reaction-wall facility at the JRC Ispra

    International Nuclear Information System (INIS)

    Jones, P.M.; Donea, J.

    1988-01-01

    The paper describes a large-size structural laboratory based on a reaction-wall facility proposed for the JRC Ispra establishment. It is foreseen that this will be used for large and full-scale testing of a wide variety of structures and components in the fields of civil/structural, mechanical, and geotechnical engineering. After briefly reviewing the background market research done to establish the needs for a large central facility in the Community, the main advantages and limitations of reaction-wall testing in comparison with other experimental techniques are summarized. The main characteristics of the proposed facility are then given followed by the identified fields of research in which significant tests can be performed. Finally, the proposed method of implementing an integral programme of work within the European Community member states is presented. (author)

  18. Small Private Online Research: A Proposal for A Numerical Methods Course Based on Technology Use and Blended Learning

    Science.gov (United States)

    Cepeda, Francisco Javier Delgado

    2017-01-01

    This work presents a proposed model in blended learning for a numerical methods course evolved from traditional teaching into a research lab in scientific visualization. The blended learning approach sets a differentiated and flexible scheme based on a mobile setup and face to face sessions centered on a net of research challenges. Model is…

  19. Proposed guidelines for synthetic accelerogram generation methods

    International Nuclear Information System (INIS)

    Shaw, D.E.; Rizzo, P.C.; Shukla, D.K.

    1975-01-01

With the advent of high-speed digital computation machines and discrete structural analysis techniques, it has become attractive to use synthetically generated accelerograms as input to the seismic design and analysis of structures. Several procedures are currently available that can generate accelerograms matching a given design response spectrum while paying little attention to other properties of seismic accelerograms. This paper examines currently available artificial time history generation techniques from the standpoint of various properties of seismic time histories, consisting of: 1. response spectra; 2. peak ground acceleration; 3. total duration; 4. time-dependent enveloping functions defining the rise time to strong motion, the duration of significant shaking, and the decay of the significant-shaking portion of the seismic record; 5. Fourier amplitude and phase spectra; 6. ground motion parameters; 7. apparent frequency; with the aim of providing guidelines for the time history parameters based on historic strong-motion seismic records. (Auth.)
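
The time-dependent enveloping function (item 4 in the list above) can be sketched as follows. The parabolic-rise/plateau/exponential-decay shape and all parameter values are illustrative assumptions, not the proposed guidelines.

```python
import numpy as np

def envelope(t, t_rise, t_strong, t_total):
    """Parabolic rise, constant strong-motion plateau, exponential decay:
    one common shape for a synthetic-accelerogram time envelope."""
    e = np.ones_like(t)
    rise = t < t_rise
    e[rise] = (t[rise] / t_rise) ** 2
    decay = t > t_rise + t_strong
    e[decay] = np.exp(-2.0 * (t[decay] - t_rise - t_strong)
                      / (t_total - t_rise - t_strong))
    return e

rng = np.random.default_rng(3)
dt, t_total = 0.01, 20.0
t = np.arange(0.0, t_total, dt)
# Shaping stationary noise with the envelope gives the characteristic
# rise / strong shaking / decay structure of a recorded accelerogram.
accel = envelope(t, 2.0, 8.0, t_total) * rng.standard_normal(len(t))
print(round(float(np.abs(accel).max()), 2))
```

A spectrum-matching procedure would additionally iterate on the Fourier amplitudes until the response spectrum of `accel` matches the design spectrum; the envelope alone only controls items 3 and 4.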

  20. A hardenability test proposal

    Energy Technology Data Exchange (ETDEWEB)

    Murthy, N.V.S.N. [Ingersoll-Rand (I) Ltd., Bangalore (India)

    1996-12-31

A new approach for hardenability evaluation and its application to heat-treatable steels will be discussed. This includes an overview of the deficiencies of the current methods and a discussion of the necessity for a new approach. Hardenability terminology will be expanded to avoid the ambiguity and over-simplification encountered with the current system. A new hardenability definition is proposed. Hardenability specification methods are simplified and rationalized. The new hardenability evaluation system proposed here utilizes a test specimen with varying diameter as an alternative to the cylindrical Jominy hardenability test specimen and is readily applicable to the evaluation of a wide variety of steels with different cross-section sizes.

  1. The significance test controversy revisited the fiducial Bayesian alternative

    CERN Document Server

    Lecoutre, Bruno

    2014-01-01

The purpose of this book is not only to revisit the “significance test controversy,” but also to provide a conceptually sounder alternative. As such, it presents a Bayesian framework for a new approach to analyzing and interpreting experimental data. It also prepares students and researchers for reporting on experimental results. Normative aspects: the main views of statistical tests are revisited and the philosophies of Fisher, Neyman-Pearson and Jeffreys are discussed in detail. Descriptive aspects: the misuses of Null Hypothesis Significance Tests are reconsidered in light of Jeffreys’ Bayesian conceptions concerning the role of statistical inference in experimental investigations. Prescriptive aspects: the current effect size and confidence interval reporting practices are presented and seriously questioned. Methodological aspects are carefully discussed and fiducial Bayesian methods are proposed as a more suitable alternative for reporting on experimental results. In closing, basic routine procedures...

  2. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    Science.gov (United States)

    2013-01-01

Background In statistical modeling, finding the most favorable coding for an explanatory quantitative variable involves many tests. This process raises a multiple testing problem and requires correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the newly coded variable is computed. The selected coding is the one associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, has been developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. The methods are illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented using R, and the associated CPMCGLM R package is available on the CRAN. PMID:23758852
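
The resampling idea can be sketched with a max-statistic permutation test, a generic stand-in rather than the CPMCGLM implementation: the corrected p-value is computed against the permutation distribution of the best statistic over all candidate codings, so trying several codings no longer inflates the type I error.

```python
import numpy as np

rng = np.random.default_rng(4)

def max_stat(x, y, cuts):
    """Largest |correlation| over several dichotomous codings of x."""
    return max(abs(np.corrcoef((x > c).astype(float), y)[0, 1]) for c in cuts)

n = 100
x = rng.standard_normal(n)
y = rng.standard_normal(n)                  # null case: y unrelated to x
cuts = np.quantile(x, [0.25, 0.5, 0.75])    # three candidate dichotomizations

t_obs = max_stat(x, y, cuts)
# Permuting y breaks any x-y link while keeping the coding search intact
perm = np.array([max_stat(x, rng.permutation(y), cuts) for _ in range(999)])
p_corrected = (1 + np.sum(perm >= t_obs)) / (1 + len(perm))
print(p_corrected)
```

Because the permutation statistic is itself the maximum over codings, the reference distribution automatically accounts for the selection step, which is the essence of the resampling correction.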

  3. A calibration method for proposed XRF measurements of arsenic and selenium in nail clippings

    International Nuclear Information System (INIS)

    Gherase, Mihai R; Fleming, David E B

    2011-01-01

A calibration method for proposed x-ray fluorescence (XRF) measurements of arsenic and selenium in nail clippings is demonstrated. Phantom nail clippings were produced from a whole nail phantom (0.7 mm thickness, 25 × 25 mm² area) and contained equal concentrations of arsenic and selenium ranging from 0 to 20 μg g⁻¹ in increments of 5 μg g⁻¹. The phantom nail clippings were then grouped into samples of five different masses (20, 40, 60, 80 and 100 mg) for each concentration. Experimental x-ray spectra were acquired for each of the sample masses using a portable x-ray tube and detector unit. Calibration lines (XRF signal in number of counts versus stoichiometric elemental concentration) were produced for each of the two elements. A semi-empirical relationship between the mass of the nail phantoms (m) and the slope of the calibration line (s) was determined separately for arsenic and selenium. Using this calibration method, one can estimate elemental concentrations and their uncertainties from the XRF spectra of human nail clippings. (note)
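
The calibration-line step can be sketched with hypothetical count data (all numbers invented for illustration): fit counts versus concentration for each sample mass, note that the slope grows with mass, then invert the appropriate fit for an unknown sample.

```python
import numpy as np

# Hypothetical calibration data: net XRF counts vs. concentration (ug/g)
# for phantom samples of two masses; a larger mass gives a steeper slope.
conc = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
counts_40mg = np.array([12.0, 110.0, 205.0, 310.0, 402.0])
counts_80mg = np.array([15.0, 190.0, 375.0, 560.0, 748.0])

s40, b40 = np.polyfit(conc, counts_40mg, 1)   # slope and intercept
s80, _ = np.polyfit(conc, counts_80mg, 1)

# Estimate the concentration of an unknown 40 mg sample with 260 net counts
est_conc = (260.0 - b40) / s40
print(round(est_conc, 1))
```

The paper's semi-empirical m-to-s relationship would replace the per-mass fits here, letting one interpolate a slope for any clipping mass instead of calibrating each mass separately.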

  4. Two Proposals for determination of large reactivity of reactor

    International Nuclear Information System (INIS)

    Kaneko, Yoshihiko; Nagao, Yoshiharu; Yamane, Tsuyoshi; Takeuchi, Mituo

    1999-01-01

Two proposals for the determination of large reactivities of reactors are presented: one for large positive reactivity, the other for large negative reactivity. Existing experimental methods for determining large positive reactivity, the fuel addition method and the neutron absorption substitution method, were analyzed. It was found that both experimental methods can be affected by a substantial systematic error of up to ~20% when the excess multiplication factor approaches ~20% Δk. To cope with this difficulty, a revised method is proposed. The revised method evaluates the potential excess multiplication factor as the consecutive increments of the effective multiplication factor in a virtual core, which are converted from those in an actual core by multiplying by a conversion factor f. The conversion factor f is in principle obtained by calculation. Numerical experiments were performed on a slab reactor using a one-group diffusion model. The rod-drop experimental method is widely used for the determination of large negative reactivity values. The decay of the neutron density following the initiation of rod insertion is slowed according to the insertion speed. Analysis based on one-point reactor kinetics proves that in such cases the integral counting method used hitherto tends to significantly underestimate the absolute value of the negative reactivity, even if the insertion time is in the range of 1-2 s. For the High Temperature Engineering Test Reactor (HTTR), the insertion time will be lengthened to 4-6 s. To overcome this difficulty, the delayed integral counting method is proposed, in which the integration of neutron counting starts after the rod drop has been completed and the counts before are evaluated by calculation using one-point reactor kinetics. This is because the influence of the insertion time on the decay of the neutron

  5. Qualitative methods in radiography research: a proposed framework

    International Nuclear Information System (INIS)

    Adams, J.; Smith, T.

    2003-01-01

    Introduction: While radiography is currently developing a research base, which is important in terms of professional development and informing practice and policy issues in the field, the amount of research published by radiographers remains limited. However, a range of qualitative methods offer further opportunities for radiography research. Purpose: This paper briefly introduces a number of key qualitative methods (qualitative interviews, focus groups, observational methods, diary methods and document/text analysis) and sketches one possible framework for future qualitative work in radiography research. The framework focuses upon three areas for study: intra-professional issues; inter-professional issues; and clinical practice, patient and health delivery issues. While the paper outlines broad areas for future focus rather than providing a detailed protocol for how individual pieces of research should be conducted, a few research questions have been chosen and examples of possible qualitative methods required to answer such questions are outlined for each area. Conclusion: Given the challenges and opportunities currently facing the development of a research base within radiography, the outline of key qualitative methods and broad areas suitable for their application is offered as a useful tool for those within the profession looking to embark upon or enhance their research career

  6. Color dithering methods for LEGO-like 3D printing

    Science.gov (United States)

    Sun, Pei-Li; Sie, Yuping

    2015-01-01

Color dithering methods for LEGO-like 3D printing are proposed in this study. The first method works for opaque color brick building. It is a modification of classic error diffusion. Many color primaries can be chosen; however, RGBYKW is recommended, as its image quality is good and the number of color primaries is limited. For translucent color bricks, multi-layer color building can enhance the image quality significantly. A LUT-based method is proposed to speed up the dithering process and make the color distribution even smoother. Simulation results show that the proposed multi-layer dithering method improves the image quality of LEGO-like 3D printing.
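
A minimal sketch of error diffusion with a restricted palette, the basis of the first method: classic Floyd-Steinberg weights on a grayscale stand-in palette (the paper's RGBYKW modification and the LUT-based multi-layer method are not reproduced).

```python
import numpy as np

# Small "brick" palette; grayscale levels stand in for RGBYKW primaries
palette = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

def dither(img):
    """Floyd-Steinberg error diffusion onto a fixed palette."""
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = palette[np.argmin(np.abs(palette - old))]  # nearest brick
            out[y, x] = new
            err = old - new
            # Push the quantization error onto unprocessed neighbors
            if x + 1 < w:              img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:    img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:              img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

grad = np.tile(np.linspace(0.0, 1.0, 64), (16, 1))
d = dither(grad)
print(abs(d.mean() - grad.mean()))  # error diffusion preserves mean tone
```

Every output pixel is an exact palette entry, yet the average tone is preserved, which is why the dithered brick layout still reads as a smooth gradient at viewing distance.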

  7. Evaluation of the significance of abrupt changes in precipitation and runoff process in China

    Science.gov (United States)

    Xie, Ping; Wu, Ziyi; Sang, Yan-Fang; Gu, Haiting; Zhao, Yuxi; Singh, Vijay P.

    2018-05-01

    Abrupt changes are an important manifestation of hydrological variability. How to accurately detect the abrupt changes in hydrological time series and evaluate their significance is an important issue, but methods for dealing with them effectively are lacking. In this study, we propose an approach to evaluate the significance of abrupt changes in time series at five levels: no, weak, moderate, strong, and dramatic. The approach was based on an index of correlation coefficient calculated for the original time series and its abrupt change component. A bigger value of correlation coefficient reflects a higher significance level of abrupt change. Results of Monte-Carlo experiments verified the reliability of the proposed approach, and also indicated the great influence of statistical characteristics of time series on the significance level of abrupt change. The approach was derived from the relationship between correlation coefficient index and abrupt change, and can estimate and grade the significance levels of abrupt changes in hydrological time series. Application of the proposed approach to ten major watersheds in China showed that abrupt changes mainly occurred in five watersheds in northern China, which have arid or semi-arid climate and severe shortages of water resources. Runoff processes in northern China were more sensitive to precipitation change than those in southern China. Although annual precipitation and surface water resources amount (SWRA) exhibited a harmonious relationship in most watersheds, abrupt changes in the latter were more significant. Compared with abrupt changes in annual precipitation, human activities contributed much more to the abrupt changes in the corresponding SWRA, except for the Northwest Inland River watershed.
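
The correlation-coefficient index can be sketched for the simplest case, a single mean shift; the step-fit search below is an illustrative stand-in for the paper's abrupt-change component, and the grade thresholds are not reproduced.

```python
import numpy as np

def step_fit_correlation(x):
    """Correlation between a series and its best-fitting single mean shift:
    a sketch of the proposed correlation-coefficient index."""
    n = len(x)
    best_r, best_k = 0.0, None
    for k in range(5, n - 5):
        step = np.where(np.arange(n) < k, x[:k].mean(), x[k:].mean())
        r = np.corrcoef(x, step)[0, 1]
        if r > best_r:
            best_r, best_k = r, k
    return best_r, best_k

rng = np.random.default_rng(7)
weak = rng.standard_normal(80)                              # no real change
strong = np.concatenate([rng.standard_normal(40),
                         rng.standard_normal(40) + 3.0])    # clear shift
weak_r = step_fit_correlation(weak)[0]
strong_r = step_fit_correlation(strong)[0]
print(round(weak_r, 2), round(strong_r, 2))
```

A larger coefficient means the step component explains more of the series' variance; in the paper's scheme, that coefficient would then be mapped to one of the five significance levels (no, weak, moderate, strong, dramatic).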

  8. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    Science.gov (United States)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method is developed to improve the accuracy of TOF determination. An improved variable-step-size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The results demonstrated the advantage of the proposed TOF determination method over existing methods. Compared with the conventional fixed-step-size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicated that the proposed TOF determination method is robust even under low SNR conditions, and that ultrasonic thickness measurement accuracy can be significantly improved.
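
The sub-sample TOF problem can be sketched with cross-correlation plus parabolic peak interpolation, a common stand-in for the paper's adaptive algorithm and cubic spline fitting; the pulse shape, sampling rate, and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 1e6                               # 1 MHz sampling rate (assumed)
t = np.arange(0, 200e-6, 1 / fs)
pulse = np.exp(-((t - 20e-6) / 5e-6) ** 2) * np.sin(2 * np.pi * 2e5 * t)

true_delay = 37.4e-6                   # deliberately not a whole sample count
delayed = np.interp(t - true_delay, t, pulse, left=0.0)
delayed = delayed + 0.05 * rng.standard_normal(len(t))  # measurement noise

# Cross-correlate and locate the integer-lag peak
corr = np.correlate(delayed, pulse, mode="full")
k = int(np.argmax(corr))
lag = k - (len(pulse) - 1)

# Parabolic interpolation around the peak gives a sub-sample lag estimate,
# relaxing the finite-sampling-interval limit the abstract mentions
y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
tof = (lag + frac) / fs
print(abs(tof - true_delay))
```

Without the interpolation step, the estimate is quantized to whole samples (1 µs here); the fractional correction recovers the 0.4-sample remainder.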

  9. Using the longest significance run to estimate region-specific p-values in genetic association mapping studies

    Directory of Open Access Journals (Sweden)

    Yang Hsin-Chou

    2008-05-01

    Full Text Available Abstract Background Association testing is a powerful tool for identifying disease susceptibility genes underlying complex diseases. Technological advances have yielded a dramatic increase in the density of available genetic markers, necessitating an increase in the number of association tests required for the analysis of disease susceptibility genes. As such, multiple-test corrections have become a critical issue. However, the conventional statistical corrections on locus-specific multiple tests usually result in lower power as the number of markers increases. As an alternative, we propose the application of the longest significant run (LSR) method to estimate a region-specific p-value that provides an index for the most likely candidate region. Results An advantage of the LSR method relative to procedures based on genotypic data is that only p-value data are needed, so it can be applied extensively to different study designs. In this study the proposed LSR method was compared with commonly used methods such as Bonferroni's method and the FDR-controlling method. We found that while all methods provide good control of the false positive rate, LSR has much better power and a lower false discovery rate. In analyses of real psoriasis and asthma disease data, the LSR method successfully identified important candidate regions and replicated the results of previous association studies. Conclusion The proposed LSR method provides an efficient exploratory tool for the analysis of sequences of dense genetic markers. Our results show that the LSR method has better power and a lower false discovery rate compared with locus-specific multiple tests.
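    The core idea can be sketched in a few lines: measure the longest consecutive stretch of markers whose p-values fall below a threshold, and judge how unlikely such clustering is by permutation. This is a deliberate simplification of the published LSR method; the threshold, permutation count, and data are illustrative assumptions.

```python
# Simplified longest-significant-run sketch with a permutation-based
# region-level p-value.

import random

def longest_run(pvals, alpha=0.05):
    best = cur = 0
    for p in pvals:
        cur = cur + 1 if p < alpha else 0
        best = max(best, cur)
    return best

def region_pvalue(pvals, alpha=0.05, n_perm=2000, seed=7):
    rng = random.Random(seed)
    observed = longest_run(pvals, alpha)
    hits = 0
    for _ in range(n_perm):
        shuffled = pvals[:]
        rng.shuffle(shuffled)
        if longest_run(shuffled, alpha) >= observed:
            hits += 1
    return hits / n_perm

# Five clustered significant markers inside 30 null markers.
pvals = [0.5] * 12 + [0.01] * 5 + [0.6] * 13
print(longest_run(pvals))    # 5
print(region_pvalue(pvals))  # small: such clustering is unlikely by chance
```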

  10. Settlement behavior of the container for high-level nuclear waste disposal. Centrifuge model tests and proposal for simple evaluation method for settlement behavior

    International Nuclear Information System (INIS)

    Nakamura, Kunihiko; Tanaka, Yukihisa

    2004-01-01

    In Japan, bentonite will be used as the buffer material in high-level nuclear waste disposal. If the buffer material softens as groundwater of various compositions infiltrates it and the container sinks deeply, the resulting decrease in the thickness of the buffer material may compromise its required abilities. Therefore, it is very important to consider the settlement of the container. In this study, the influences of distilled water and artificial seawater on the settlement of the container were investigated and a simple evaluation method for the settlement of the container was proposed. The following findings were obtained. (1) Under distilled water, the amount of settlement decreases exponentially as dry density increases. (2) While the amount of settlement of the container under 10% artificial seawater was almost equal to that in distilled water, the container floated under 100% artificial seawater. (3) A simple evaluation method for the settlement of the container was proposed based on the diffuse double layer theory, and its effectiveness was demonstrated by the results of several experiments. (author)

  11. ETMB-RBF: discrimination of metal-binding sites in electron transporters based on RBF networks with PSSM profiles and significant amino acid pairs.

    Science.gov (United States)

    Ou, Yu-Yen; Chen, Shu-An; Wu, Sheng-Cheng

    2013-01-01

    Cellular respiration is the process by which cells obtain energy from glucose and is a very important biological process in living cells. As cells carry out cellular respiration, they need a pathway to store and transport electrons: the electron transport chain. The function of the electron transport chain is to produce a trans-membrane proton electrochemical gradient as a result of oxidation-reduction reactions. In these oxidation-reduction reactions, metal ions play a very important role as electron donors and acceptors. For example, Fe ions are in complex I and complex II, and Cu ions are in complex IV. Therefore, identifying metal-binding sites in electron transporters is an important problem in helping biologists better understand the workings of the electron transport chain. We propose a method based on Position Specific Scoring Matrix (PSSM) profiles and significant amino acid pairs to identify metal-binding residues in electron transport proteins. We have selected a non-redundant set of 55 metal-binding electron transport proteins as our dataset. The proposed method can predict metal-binding sites in electron transport proteins with an average 10-fold cross-validation accuracy of 93.2% and 93.1% for metal-binding cysteine and histidine, respectively. Compared with the general metal-binding predictor from A. Passerini et al., the proposed method improves sensitivity by over 9% and specificity by 14% on the independent dataset in identifying metal-binding cysteines. The proposed method also improves sensitivity by almost 76% at the same specificity for metal-binding histidines, and MCC is improved from 0.28 to 0.88. We have developed a novel approach based on PSSM profiles and significant amino acid pairs for identifying metal-binding sites in electron transport proteins. The proposed approach achieved a significant improvement on an independent test set of metal-binding electron transport proteins.
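    Methods of this kind typically build a feature vector for each candidate residue from a window of PSSM rows centered on it. The sketch below shows that windowing step only; a real PSSM has 20 columns per position, while a toy two-column profile and an assumed window size are used here for brevity.

```python
# Hedged sketch of window-based feature extraction around candidate
# metal-binding residues (Cys/His), in the spirit of the PSSM approach above.

def windows(profile, sequence, targets=("C", "H"), w=2):
    """Return (index, flattened window) for each target residue, padding
    the profile with zero rows at the sequence boundaries."""
    ncol = len(profile[0])
    pad = [[0.0] * ncol]
    padded = pad * w + profile + pad * w
    feats = []
    for i, aa in enumerate(sequence):
        if aa in targets:
            rows = padded[i:i + 2 * w + 1]  # padded[i + w] is residue i
            feats.append((i, [v for row in rows for v in row]))
    return feats

seq = "ACDHG"
prof = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8], [0.9, 1.0]]
feats = windows(prof, seq)
print([i for i, _ in feats])  # positions of C and H: [1, 3]
```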

  12. Proposed Suitable Methods to Detect Transient Regime Switching to Improve Power Quality with Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Javad Safaee Kuchaksaraee

    2016-10-01

    Full Text Available The consumption of electrical energy and the use of non-linear loads that create transient states in distribution networks are increasing day by day, which is why the analysis of power quality for energy sustainability in power networks has become more important. Transients are often created by energy injection through switching or lightning and cause changes in voltage and nominal current. A sudden increase or decrease in voltage or current characterizes the transient regime. This paper sheds light on capacitor bank switching, one of the main causes of oscillatory transient states in the distribution network, using the wavelet transform. To distinguish the capacitor bank switching current from the internal fault current of the transformer, and thus prevent unnecessary outage of the differential relay, a new smart method is proposed. The accurate performance of this method is shown by simulation in EMTP and MATLAB (matrix laboratory) software.
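    The wavelet detection principle can be illustrated compactly: the detail coefficients of a discrete wavelet transform spike at the abrupt change a switching event introduces. The sketch below uses a one-level Haar transform on a synthetic waveform; the signal, sampling rate, and offset size are invented, not taken from the paper's EMTP simulations.

```python
# Minimal transient-detection sketch: one-level Haar detail coefficients
# spike at the pair of samples straddling a switching-like step.

import math

def haar_detail(signal):
    """One-level Haar detail coefficients (scaled differences of sample pairs)."""
    return [(signal[2 * i] - signal[2 * i + 1]) / 2 ** 0.5
            for i in range(len(signal) // 2)]

# Steady sinusoidal waveform with a switching-like offset injected at sample 33.
n = 64
signal = [math.sin(2 * math.pi * 50 * t / 1600.0) for t in range(n)]
for t in range(33, n):
    signal[t] += 0.8

details = haar_detail(signal)
peak_index = max(range(len(details)), key=lambda i: abs(details[i]))
print(peak_index)  # 16: the pair (samples 32, 33) straddles the transient
```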

  13. Adjusted permutation method for multiple attribute decision making with meta-heuristic solution approaches

    Directory of Open Access Journals (Sweden)

    Hossein Karimi

    2011-04-01

    Full Text Available The permutation method of multiple attribute decision making has two significant deficiencies: high computational time and wrong priority output in some problem instances. In this paper, a novel permutation method called the adjusted permutation method (APM) is proposed to compensate for the deficiencies of the conventional permutation method. We propose Tabu search (TS) and particle swarm optimization (PSO) to find suitable solutions at a reasonable computational time for large problem instances. The proposed method is examined using several numerical examples to evaluate its performance. The preliminary results show that both approaches provide competent solutions in relatively reasonable amounts of time, while TS performs better in solving APM.
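    The high computational time of the conventional permutation method comes from scoring every ordering of the alternatives, as the sketch below shows on a made-up three-alternative, two-attribute instance (weights and scores are invented). This exhaustive baseline is what makes meta-heuristics such as TS and PSO attractive for large instances.

```python
# Hedged sketch of the conventional permutation method for MADM: every
# ranking of the alternatives is scored by net weighted concordance,
# and the highest-scoring ranking wins.

from itertools import permutations

# rows: alternatives, columns: attributes (higher is better)
scores = {"A1": [7, 9], "A2": [8, 5], "A3": [5, 6]}
weights = [0.6, 0.4]

def concordance(a, b):
    """Weighted evidence that alternative a should rank above b."""
    return sum(w for w, sa, sb in zip(weights, scores[a], scores[b]) if sa >= sb)

def ranking_score(order):
    # sum of (concordance - discordance) over all implied pairwise orderings
    total = 0.0
    for i, a in enumerate(order):
        for b in order[i + 1:]:
            total += concordance(a, b) - concordance(b, a)
    return total

best = max(permutations(scores), key=ranking_score)
print(best)  # ('A2', 'A1', 'A3')
```

With m alternatives there are m! orderings to score, which is exactly where an exhaustive search becomes infeasible and a meta-heuristic search over orderings takes over.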

  14. Facade Proposals for Urban Augmented Reality

    OpenAIRE

    Fond , Antoine; Berger , Marie-Odile; Simon , Gilles

    2017-01-01

    International audience; We introduce a novel object proposals method specific to building facades. We define new image cues that measure typical facade characteristics such as semantics, symmetry and repetitions. They are combined to quickly generate a few facade candidates in urban environments. We show that our method outperforms state-of-the-art object proposals techniques for this task on the 1000 images of the Zurich Building Database. We demonstrate the interest of this procedure for augment...

  15. Invasive physiological indices to determine the functional significance of coronary stenosis

    Directory of Open Access Journals (Sweden)

    Firas R. AL-Obaidi

    2018-03-01

    Full Text Available Physiological measurements are now commonly used to assess coronary lesions in the cardiac catheterisation laboratory, and this practice is evidence-based and supported by clinical guidelines. Fractional flow reserve is currently the gold standard method to determine whether coronary lesions are functionally significant, and is used to guide revascularization. There are however several other physiological measurements that have been proposed as alternatives to the fractional flow reserve. This review aims to comprehensively discuss physiological indices that can be used in the cardiac catheterisation laboratory to determine the functional significance of coronary lesions. We will focus on their advantages and disadvantages, and the current evidence supporting their use. Keywords: Coronary physiology, Fractional flow reserve, Resting physiological indices, Coronary flow reserve

  16. Developing an Agent-Based Simulation System for Post-Earthquake Operations in Uncertainty Conditions: A Proposed Method for Collaboration among Agents

    Directory of Open Access Journals (Sweden)

    Navid Hooshangi

    2018-01-01

    Full Text Available Agent-based modeling is a promising approach for developing simulation tools for natural hazards in different areas, such as during urban search and rescue (USAR) operations. The present study aimed to develop a dynamic agent-based simulation model of post-earthquake USAR operations using a geospatial information system (GIS) and multi-agent systems (MASs). We also propose an approach for dynamic task allocation and establishing collaboration among agents based on the contract net protocol (CNP) and interval-based Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) methods, which consider uncertainty in natural hazards information during agents' decision-making. The decision-making weights were calculated by the analytic hierarchy process (AHP). In order to implement the system, an earthquake environment was simulated and the damage to buildings and the number of injuries were calculated for Tehran's District 3: 23%, 37%, 24% and 16% of buildings were in the slight, moderate, extensive and complete vulnerability classes, respectively. The number of injured persons was calculated to be 17,238. Numerical results in 27 scenarios showed that the proposed method is more accurate than the CNP method in terms of USAR operational time (at least a 13% decrease) and the number of human fatalities (at least a 9% decrease). In an interval uncertainty analysis of our proposed simulated system, the lower and upper bounds of the uncertain responses were evaluated. The overall results showed that considering uncertainty in task allocation can be highly advantageous in the disaster environment. Such systems can be used to manage and prepare for natural hazards.
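    Of the ingredients above, the AHP weighting step is the most self-contained, so it is sketched below: decision weights are derived from a pairwise comparison matrix via its principal eigenvector, approximated here by power iteration. The comparison values are invented for illustration.

```python
# Hedged sketch of AHP weight derivation: the principal eigenvector of a
# pairwise comparison matrix, normalized to sum to 1.

def ahp_weights(M, iters=100):
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]  # power iteration with sum-normalization
    return w

# pairwise comparisons: criterion 0 is 3x as important as 1, 5x as 2, etc.
M = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]

w = ahp_weights(M)
print(w)  # weights sum to 1, ordered by importance
```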

  17. Shareholders proposals, vote outcome, and board composition

    Directory of Open Access Journals (Sweden)

    Amani Khaled Bouresli

    2008-07-01

    Full Text Available This paper examines the variables that affect vote outcome in shareholder proposals. We found that sponsor identity, proposal type, and board composition play a significant role in determining vote outcome. Furthermore, we found that the interaction between prior performance and board composition is significant and has a negative coefficient. We conducted nonparametric tests to investigate changes in boards' major characteristics before and after targeting. The results indicate that some changes in management and boards occur after shareholder proposals. These changes, however, are unrelated to the variables that impact vote outcome. We conclude that shareholder proposals are not effective at changing company behavior or corporate governance.

  18. Nonlinear modal analysis in NPP dynamics: a proposal

    International Nuclear Information System (INIS)

    Suarez Antola, R.

    2005-07-01

    We propose, and briefly suggest how to apply, the analytical tools of nonlinear modal analysis (NMA) to problems of nuclear reactor kinetics, NPP dynamics, and NPP instrumentation and control. The proposed method is closely related to recent approaches to modal analysis using the reactivity matrix with feedbacks to couple neutron kinetics with thermal hydraulics in the reactor core. A nonlinear system of ordinary differential equations for the mode amplitudes is obtained by projecting the dynamic equations of a model of an NPP onto the eigenfunctions of a suitable adjoint operator. A steady state solution of the equations is taken as a reference, and the behaviour of transient solutions in some neighbourhood of the steady state solution is studied by an extension of Liapunov's First Method that makes it possible to cope directly with the non-linear terms in the dynamics. In NPP dynamics these differential equations for the mode amplitudes are of polynomial type of low degree. A few dominant modes can usually be identified. These mode amplitudes evolve almost independently of the other modes, more slowly, and tend to slave the other mode amplitudes. Using asymptotic methods, it is possible to calculate a closed-form analytical approximation to the response to finite amplitude perturbations from the given steady spatial pattern (the origin of the space of mode amplitudes). When there is finite amplitude instability, the method allows us to calculate the threshold amplitude as a well defined function of the system's parameters. This is a most significant accomplishment that the other methods cannot afford.
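    The low-degree polynomial mode-amplitude system described above can be written schematically as follows; the notation (amplitudes a_k, linear rates lambda_k, quadratic and cubic coupling coefficients beta, gamma) is our own shorthand, not taken from the paper.

```latex
\frac{\mathrm{d}a_k}{\mathrm{d}t}
  = \lambda_k\, a_k
  + \sum_{i,j} \beta_{kij}\, a_i a_j
  + \sum_{i,j,l} \gamma_{kijl}\, a_i a_j a_l
```

The dominant modes are those with slowly decaying linear terms (small |Re lambda_k|); a finite-amplitude stability threshold then appears at the perturbation size where the nonlinear terms balance the stabilizing linear term.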

  19. An Efficient Method for Image and Audio Steganography using Least Significant Bit (LSB) Substitution

    Science.gov (United States)

    Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray

    2013-09-01

    In order to improve data hiding in all types of multimedia data formats, such as image and audio, and to make the hidden message imperceptible, a novel method for steganography is introduced in this paper. It is based on Least Significant Bit (LSB) manipulation and the inclusion of redundant noise as a secret key in the message. This method is applied to data hiding in images. For data hiding in audio, the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are both used. All the results displayed prove to be time-efficient and effective. The algorithm is also tested for various numbers of bits; for those values, Mean Square Error (MSE) and Peak-Signal-to-Noise-Ratio (PSNR) are calculated and plotted. Experimental results show that the stego-image is visually indistinguishable from the original cover-image for the tested numbers of bits, and the steganography process does not reveal the presence of any hidden message, thus satisfying the criterion of message imperceptibility.
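    The LSB substitution at the heart of such methods is compact enough to sketch directly: each message bit replaces the least significant bit of one cover byte, so the per-byte distortion never exceeds 1. The redundant-noise secret key described in the paper is omitted here, and the cover values are invented.

```python
# Minimal LSB embedding/extraction sketch in the spirit of the method above.

def embed(cover, message_bits):
    stego = cover[:]
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit  # overwrite the lowest bit
    return stego

def extract(stego, n_bits):
    return [b & 1 for b in stego[:n_bits]]

cover = [120, 121, 122, 123, 124, 125, 126, 127]  # e.g. 8-bit pixel values
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, bits)

print(extract(stego, len(bits)))                      # recovers the hidden bits
print(max(abs(a - b) for a, b in zip(cover, stego)))  # distortion is at most 1
```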

  20. Proposed method for assigning metric tons of heavy metal values to defense high-level waste forms to be disposed of in a geologic repository

    International Nuclear Information System (INIS)

    1987-08-01

    A proposed method is described for assigning an equivalent metric ton heavy metal (eMTHM) value to defense high-level waste forms to be disposed of in a geologic repository. This method for establishing a curie equivalency between defense high-level waste and irradiated commercial fuel is based on the ratio of defense fuel exposure to the typical commercial fuel exposure, MWd/MTHM. Application of this technique to defense high-level wastes is described. Additionally, this proposed technique is compared to several alternate calculations for eMTHM. 15 refs., 2 figs., 10 tabs

  1. Identification of significant features by the Global Mean Rank test.

    Science.gov (United States)

    Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2014-01-01

    With the introduction of omics-technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to successfully deal with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test we aimed at developing a test that performs robustly under these conditions, while favorably scaling with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data is accounted for without the need of imputation. In extensive simulations comparing MeanRank to other frequently used methods, we found that it performs well with small and large numbers of replicates, feature dependent variance between replicates, and variable regulation across features on simulation data and a recent two-color microarray spike-in dataset. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similar or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.
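    The mean-rank idea at the core of the test above can be illustrated briefly: each replicate ranks all features, the ranks are averaged across replicates, and features with extreme mean ranks are flagged. The published method's internal FDR estimation and handling of missing values are deliberately omitted here, and the data are invented.

```python
# Simplified sketch of the mean-rank statistic behind the MeanRank test.

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def mean_ranks(replicates):
    """replicates: list of per-replicate measurement lists (features aligned)."""
    per_rep = [ranks(rep) for rep in replicates]
    n = len(replicates)
    return [sum(rep[i] for rep in per_rep) / n for i in range(len(per_rep[0]))]

# three replicates, five features; feature 4 is consistently up-regulated
reps = [
    [0.1, -0.2, 0.3, 0.0, 2.1],
    [-0.1, 0.2, -0.3, 0.1, 1.8],
    [0.0, 0.1, 0.2, -0.1, 2.5],
]
mr = mean_ranks(reps)
top = max(range(len(mr)), key=lambda i: mr[i])
print(top, mr[top])  # feature 4 attains the highest possible mean rank
```

A consistently regulated feature gets an extreme rank in every replicate, so its mean rank separates cleanly from the noise features even with few replicates.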

  2. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.

    Science.gov (United States)

    Lash, Timothy L

    2017-09-15

    In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Proposal for evaluation methodology on impact resistant performance and construction method of tornado missile protection net structure

    International Nuclear Information System (INIS)

    Namba, Kosuke; Shirai, Koji

    2014-01-01

    In nuclear power plants, the necessity of tornado missile protection structures is becoming a key technical issue. Utilization of a net structure seems to be one of the realistic countermeasures from the point of view of mitigating wind and seismic loads. However, the methodology for selecting suitable net materials, the energy absorption design method, and the construction method are not sufficiently established. In this report, three candidate materials (high-strength metal mesh, super strong polyethylene fiber net and steel grating) were selected, and material screening tests, energy absorption tests by free drop of a heavy weight, and impact tests with a small-diameter missile were performed. As a result, high-strength metal mesh was selected as a suitable material for a tornado missile protection net structure. Moreover, a construction method to obtain good energy absorption performance of the material and a practical design method to estimate the energy absorption of the high-strength metal mesh under tornado missile impact load were proposed. (author)

  4. Calculating the true level of predictors significance when carrying out the procedure of regression equation specification

    Directory of Open Access Journals (Sweden)

    Nikita A. Moiseev

    2017-01-01

    Full Text Available The paper is devoted to a new randomization method that yields unbiased adjustments of p-values for the predictors of linear regression models by incorporating the number of potential explanatory variables, their variance-covariance matrix and its uncertainty, based on the number of observations. This adjustment helps to control type I errors in scientific studies, significantly decreasing the number of publications that report false relations as authentic ones. Comparative analysis with existing methods such as the Bonferroni correction and the Shehata and White adjustments explicitly shows their imperfections, especially in the case when the number of observations and the number of potential explanatory variables are approximately equal. The comparative analysis also showed that when the variance-covariance matrix of a set of potential predictors is diagonal, i.e. the data are independent, the proposed simple correction is the best and easiest way to obtain unbiased corrections of traditional p-values. However, in the presence of strongly correlated data, the simple correction overestimates the true p-values, which can lead to type II errors. It was also found that the corrected p-values depend on the number of observations, the number of potential explanatory variables and the sample variance-covariance matrix. For example, if there are only two potential explanatory variables competing for one position in the regression model and they are weakly correlated, the corrected p-value will be lower when the number of observations is larger, and vice versa; if the data are highly correlated, the case with a larger number of observations will show a lower corrected p-value. With increasing correlation, all corrections, regardless of the number of observations, tend to the original p-value. This phenomenon is easy to explain: as the correlation coefficient tends to one, the two variables almost linearly depend on each
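    The randomization idea behind such adjustments can be sketched in miniature: when the best of m candidate predictors is selected, the naive p-value is corrected by asking how often pure-noise candidates would do at least as well. The sketch assumes independent predictors (the diagonal-covariance case discussed above) and uses invented numbers; it is not the paper's full procedure.

```python
# Hedged Monte-Carlo sketch of selection-adjusted p-values under independence.

import random

def adjusted_p(p_observed, n_candidates, n_sim=20000, seed=3):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        # under the null, each candidate's p-value is uniform on (0, 1)
        best = min(rng.random() for _ in range(n_candidates))
        if best <= p_observed:
            hits += 1
    return hits / n_sim

p_adj = adjusted_p(0.03, n_candidates=10)
print(p_adj)  # close to the closed form 1 - (1 - 0.03)**10, about 0.26
```

A nominally significant p = 0.03 thus becomes clearly non-significant once the selection among ten candidates is accounted for, which is exactly the type I error inflation the paper targets.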

  5. A proposed through-flow inverse method for the design of mixed-flow pumps

    Science.gov (United States)

    Borges, Joao Eduardo

    1991-01-01

    A through-flow (hub-to-shroud) truly inverse method is proposed and described. It uses an imposition of mean swirl, i.e., radius times mean tangential velocity, given throughout the meridional section of the turbomachine as an initial design specification. In the present implementation, it is assumed that the fluid is inviscid, incompressible, and irrotational at inlet, and the blades are assumed to have zero thickness. Only blade rows that impart a constant work to the fluid along the span are considered. An application of this procedure to the design of the rotor of a mixed-flow pump is described in detail. The strategy used to find a suitable mean swirl distribution and the other design inputs is also described. The final blade shape and the pressure distributions on the blade surface are presented, showing that it is possible to obtain feasible designs using this technique. A further advantage of this technique is that it does not require large amounts of CPU time.

  6. Significance evaluation in factor graphs

    DEFF Research Database (Denmark)

    Madsen, Tobias; Hobolth, Asger; Jensen, Jens Ledet

    2017-01-01

    in genomics and the multiple-testing issues accompanying them, accurate significance evaluation is of great importance. We here address the problem of evaluating the statistical significance of observations from factor graph models. Results Two novel numerical approximations for the evaluation of statistical significance are presented: first, a method using importance sampling; second, a saddlepoint-approximation-based method. We develop algorithms to efficiently compute the approximations and compare them to naive sampling and the normal approximation. The individual merits of the methods are analysed both from.... Conclusions The applicability of the saddlepoint approximation and importance sampling is demonstrated on known models in the factor graph framework. Using the two methods we can substantially reduce computational cost without compromising accuracy. This contribution allows analyses of large datasets....

  7. Proposal for an alignment method of the CLIC linear accelerator - From geodesic networks to the active pre-alignment

    International Nuclear Information System (INIS)

    Touze, T.

    2011-01-01

    The compact linear collider (CLIC) is the particle accelerator project proposed by the European Organization for Nuclear Research (CERN) for high energy physics after the Large Hadron Collider (LHC). Because of the nanometric scale of the CLIC lepton beams, the emittance growth budget is very tight. It induces alignment tolerances on the positions of the CLIC components that have never been achieved before. The last step of the CLIC alignment will be done according to the beam itself; it falls within the competence of the physicists. However, in order to implement the beam-based feedback, a challenging pre-alignment is required: 10 μm at 3σ along a 200 m sliding window. For such a precision, the proposed solution must be compatible with a feedback between the measurement and repositioning systems: the CLIC pre-alignment will have to be active. This thesis does not demonstrate the feasibility of the CLIC active pre-alignment but shows the way to the last developments that have to be done for that purpose. A method is proposed. Based on the management of the Helmert transformations between Euclidean coordinate systems, from the geodetic networks to the metrological measurements, this method is likely to solve the CLIC pre-alignment problem. Large scale facilities have been built and Monte-Carlo simulations have been made in order to validate the mathematical modeling of the measurement systems and of the alignment references. When this is done, it will be possible to extrapolate the modeling to the entire CLIC length. It will be the last step towards the demonstration of the CLIC pre-alignment feasibility. (author)

  8. Identification of Water Quality Significant Parameter with Two Transformation/Standardization Methods on Principal Component Analysis and Scilab Software

    Directory of Open Access Journals (Sweden)

    Jovan Putranda

    2016-09-01

    Full Text Available Water quality monitoring is prone to error in its recording or measuring process. Monitoring of river water quality aims not only to recognize water quality dynamics, but also to evaluate the data for creating river management and water pollution policy, in order to maintain human health, meet sanitation requirements, and preserve biodiversity. Evaluation of water quality monitoring needs to start by identifying the significant water quality parameters. This research aimed to identify the significant parameters by using two transformation/standardization methods on the water quality data: the river Water Quality Index, WQI (Indeks Kualitas Air Sungai, IKA) method, and standardization to mean 0 and variance 1, so that the variability of the water quality parameters could be aggregated with one another. Both methods were applied to water quality monitoring data whose validity and reliability had been tested. Principal Component Analysis, PCA (Analisa Komponen Utama, AKU), with the help of Scilab software, was used to process the secondary data on the water quality parameters of the Gadjah Wong river in 2004-2013. The Scilab result was cross-examined with the result from the Excel-based Biplot Add-In software. The research result showed that only 18 of the total 35 water quality parameters had passable data quality. The two standardization methods gave different significant parameter types and counts. With standardization to mean 0 and variance 1, the parameters significant with respect to the mean concentration of each water quality parameter were TDS, SO4, EC, TSS, NO3N, COD, BOD5, Grease Oil and NH3N. With the river WQI standardization, the significant water quality parameters showed the level of
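    The mean-0/variance-1 standardization and PCA steps described above can be sketched compactly with NumPy's SVD in place of Scilab. The data matrix is synthetic (rows: samples, columns: water quality parameters), constructed so that two parameters co-vary and one is independent.

```python
# Hedged sketch of standardization (mean 0, variance 1) followed by PCA.

import numpy as np

def standardize(X):
    return (X - X.mean(axis=0)) / X.std(axis=0)

def pca(X):
    """Return principal component loadings and explained-variance ratios."""
    Z = standardize(X)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    var = s ** 2 / (s ** 2).sum()
    return Vt, var

rng = np.random.default_rng(0)
base = rng.normal(size=(50, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(50, 1)),  # two correlated params
               base + 0.1 * rng.normal(size=(50, 1)),
               rng.normal(size=(50, 1))])              # one independent param

loadings, var = pca(X)
print(var)  # the first component captures the two correlated parameters
```

Because the columns are standardized first, the explained-variance ratios compare parameters on a common scale, which is exactly why the choice of standardization changes which parameters come out as significant.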

  9. The qualitative research proposal

    Directory of Open Access Journals (Sweden)

    H Klopper

    2008-09-01

    Full Text Available Qualitative research in the health sciences has had to overcome many prejudices and a number of misunderstandings, but today qualitative research is as acceptable as quantitative research designs and is widely funded and published. Writing the proposal of a qualitative study, however, can be a challenging feat, due to the emergent nature of the qualitative research design and the description of the methodology as a process. Even today, many sub-standard proposals are still seen at post-graduate evaluation committees and in applications for funding. This problem has led the researcher to develop a framework to guide the qualitative researcher in writing the proposal of a qualitative study, based on the following research questions: (i) What is the process of writing a qualitative research proposal? and (ii) What do the structure and layout of a qualitative proposal look like? The purpose of this article is to discuss the process of writing the qualitative research proposal, as well as to describe the structure and layout of a qualitative research proposal. The process of writing a qualitative research proposal is discussed with regard to the most important questions that need to be answered in the proposal, with consideration of the guidelines of being practical, being persuasive, making broader links, aiming for crystal clarity and planning before you write. The structure of the qualitative research proposal is discussed with regard to the key sections of the proposal, namely the cover page, abstract, introduction, review of the literature, research problem and research questions, research purpose and objectives, research paradigm, research design, research method, ethical considerations, dissemination plan, budget and appendices.

  10. Brief communication: a proposed osteological method for the estimation of pubertal stage in human skeletal remains.

    Science.gov (United States)

    Shapland, Fiona; Lewis, Mary E

    2013-06-01

    Puberty forms an important threshold between childhood and adulthood, but this subject has received little attention in bioarchaeology. The new application of clinical methods to assess pubertal stage in adolescent skeletal remains is explored, concentrating on the development of the mandibular canine, hamate, hand phalanges, iliac crest and distal radius. Initial results from the medieval cemetery of St. Peter's Church, Barton-upon-Humber, England suggest that application of these methods may provide insights into aspects of adolescent development. This analysis indicates that adolescents from this medieval site were entering the pubertal growth spurt at a similar age to their modern counterparts, but that the later stages of pubertal maturation were being significantly delayed, perhaps due to environmental stress. Continued testing and refinement of these methods on living adolescents is still necessary to improve our understanding of their significance and accuracy in predicting pubertal stages. Copyright © 2013 Wiley Periodicals, Inc.

  11. Subsampled Hessian Newton Methods for Supervised Learning.

    Science.gov (United States)

    Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen

    2015-08-01

    Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than that of the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
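    The core idea described above — a full-data gradient paired with a Hessian estimated on a random subsample — can be sketched for logistic regression as follows. This is an illustrative sketch only, not the authors' implementation: the paper's two-dimensional subproblem refinement is omitted, and all names, the subsample fraction, and the ridge term are assumptions.

```python
import numpy as np

def logistic_grad_hessian(w, X, y, sample_idx=None):
    """Full-data gradient of the logistic loss; Hessian estimated on a subsample.

    w: (d,) weights, X: (n, d) features, y: (n,) labels in {0, 1}.
    sample_idx: row indices used for the Hessian subsample (None = all rows).
    """
    n = X.shape[0]
    p = 1.0 / (1.0 + np.exp(-X @ w))                 # sigmoid predictions
    grad = X.T @ (p - y) / n                         # gradient over all data
    Xs = X if sample_idx is None else X[sample_idx]
    ps = 1.0 / (1.0 + np.exp(-Xs @ w))
    curv = ps * (1.0 - ps)                           # per-row curvature weights
    H = (Xs * curv[:, None]).T @ Xs / Xs.shape[0]    # subsampled Hessian
    return grad, H

def subsampled_newton_step(w, X, y, subsample=0.3, ridge=1e-4, rng=None):
    """One Newton step using only a random subset of rows for the Hessian."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    idx = rng.choice(n, size=max(1, int(subsample * n)), replace=False)
    grad, H = logistic_grad_hessian(w, X, y, idx)
    return w - np.linalg.solve(H + ridge * np.eye(d), grad)
```

Because only a fraction of the rows enter the Hessian product, each iteration is cheaper than a full Newton step, at the cost of a noisier search direction — the trade-off the paper's techniques are designed to manage.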

  12. Prediction of FAD binding sites in electron transport proteins according to efficient radial basis function networks and significant amino acid pairs.

    Science.gov (United States)

    Le, Nguyen-Quoc-Khanh; Ou, Yu-Yen

    2016-07-30

    Cellular respiration is a catabolic pathway for producing adenosine triphosphate (ATP) and is the most efficient process through which cells harvest energy from consumed food. When cells undergo cellular respiration, they require a pathway to keep and transfer electrons (i.e., the electron transport chain). Due to oxidation-reduction reactions, the electron transport chain produces a transmembrane proton electrochemical gradient. When protons flow back through this membrane, ATP synthase converts this mechanical energy into chemical energy, producing ATP, which provides energy for many cellular processes. In the electron transport chain process, flavin adenine dinucleotide (FAD) is one of the most vital molecules for carrying and transferring electrons. Therefore, predicting FAD binding sites in the electron transport chain is vital for helping biologists understand the electron transport chain process and energy production in cells. We used an independent data set to evaluate the performance of the proposed method, which had an accuracy of 69.84 %. We compared the performance of the proposed method in analyzing two newly discovered electron transport protein sequences with that of the general FAD binding predictor presented by Mishra and Raghava and determined that the accuracy of the proposed method improved by 9-45 % and its Matthews correlation coefficient was 0.14-0.5. Furthermore, the proposed method significantly reduced the number of false positives and can provide useful information for biologists. We developed a method that is based on PSSM profiles and SAAPs for identifying FAD binding sites in newly discovered electron transport protein sequences. This approach achieved a significant improvement after we added SAAPs to PSSM features to analyze FAD binding proteins in the electron transport chain. The proposed method can serve as an effective tool for predicting FAD binding sites in electron

  13. Proposals of counting method for bubble detectors and their intercomparisons

    International Nuclear Information System (INIS)

    Ramalho, Eduardo; Silva, Ademir X.; Bellido, Luis F.; Facure, Alessandro; Pereira, Mario

    2009-01-01

    The study of neutron spectrometry and dosimetry has become significantly easier due to relatively new devices called bubble detectors. Insensitive to gamma rays and composed of superheated emulsions, they are still the subject of much research in radiation physics and nuclear engineering. When bubble detectors are exposed to more intense neutron fields, or for longer times, more bubbles are produced and the statistical uncertainty of the dosimetric and spectrometric processes is reduced. A proposal of this nature is set out in this work, which presents ways to perform counting processes for bubble detectors and an updated procedure for capturing images of the irradiated detectors in order to make manual counting easier. Twelve BDS detectors were irradiated by the RDS111 cyclotron of the IEN (Instituto de Engenharia Nuclear) and photographed using an assembly specially designed for this experiment. Counting was first performed manually; simultaneously, ImagePro was used to perform the counting automatically. The manual and automatic bubble counts were compared, as were the time needed to obtain them and their levels of difficulty. After the bubble counting, the detectors' standardized responses were calculated for both cases according to the BDS manual, and these were also compared. Among the results, counting on these devices becomes very hard at large numbers of bubbles, and the variation between counts also increases. Because of the good agreement between manual counting and the custom program, the latter proved to be a good alternative on practical and economical grounds. Despite the good results, the custom program needs further adjustment to achieve better accuracy at higher counts for neutron measurement applications. (author)
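    As a rough illustration of what the automatic counting step does after a detector photograph has been thresholded, a connected-component count over a binary image can stand in for bubble counting. This is a minimal sketch under the assumption of a pre-thresholded image; it is not the ImagePro procedure used by the authors, and real photographs need segmentation of touching bubbles.

```python
def count_bubbles(image):
    """Count 4-connected regions of 1s in a binary image (list of lists of
    0/1), as a minimal stand-in for counting bubbles in a thresholded photo."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not seen[r][c]:
                count += 1                       # new bubble found
                stack = [(r, c)]
                seen[r][c] = True
                while stack:                     # flood-fill its pixels
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count
```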

  14. Fingerprinting Localization Method Based on TOA and Particle Filtering for Mines

    Directory of Open Access Journals (Sweden)

    Boming Song

    2017-01-01

    Full Text Available Accurate target localization technology plays a very important role in ensuring safe mine production and higher production efficiency. The localization accuracy of a mine localization system is influenced by many factors. The most significant factor is the non-line-of-sight (NLOS) propagation error of the localization signal between the access point (AP) and the target node (Tag). In order to improve positioning accuracy, the NLOS error must be suppressed by an optimization algorithm. However, the traditional optimization algorithms are complex and exhibit poor optimization performance. To solve this problem, this paper proposes a new method for mine time of arrival (TOA) localization based on the idea of comprehensive optimization. The proposed method utilizes particle filtering to reduce the TOA data error, and the positioning results are further optimized with fingerprinting based on the Manhattan distance. This proposed method combines the advantages of particle filtering and fingerprinting localization. It reduces algorithm complexity and has better error suppression performance. The experimental results demonstrate that, as compared to the symmetric double-sided two-way ranging (SDS-TWR) method or the received signal strength indication (RSSI) based fingerprinting method, the proposed method has a significantly improved localization performance, and the environment adaptability is enhanced.
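    The fingerprinting step — matching a measured TOA vector against a pre-surveyed radio map under the Manhattan distance — can be sketched as follows. This is illustrative only: the particle-filtering stage is not reproduced, and the radio-map positions and TOA values in the usage example are hypothetical.

```python
def manhattan(a, b):
    """L1 (Manhattan) distance between two TOA measurement vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def fingerprint_locate(measurement, radio_map):
    """Return the surveyed position whose stored fingerprint is closest to
    the measured (filtered) TOA vector under the Manhattan distance.

    radio_map: {position: [TOA to AP1, TOA to AP2, ...]}.
    """
    return min(radio_map, key=lambda pos: manhattan(measurement, radio_map[pos]))
```

In the full method, the measurement passed in here would already have been smoothed by the particle filter, so the fingerprint match operates on a de-noised TOA vector.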

  15. Comparison of dietary fiber methods for foods.

    Science.gov (United States)

    Heckman, M M; Lane, S A

    1981-11-01

    In order to evaluate several proposed dietary fiber methods, 12 food samples, representing different food classes, were analyzed by (1) neutral and acid detergent fiber methods (NDF, ADF); (2) NDF with enzyme modification (ENDF); (3) a 2-fraction enzyme method for soluble, insoluble, and total dietary fiber, proposed by Furda (SDF, IDF, TDF); (4) a 1-fraction enzyme method for total dietary fiber proposed by Hellendoorn (TDF). Foods included cereals, fruits, vegetables, pectin, locust bean gum, and soybean polysaccharides. Results show that TDF by the Furda and Hellendoorn methods agrees reasonably well with literature values by the Southgate method, but ENDF is consistently lower; that ENDF and IDF (Furda method) agree reasonably well; and that except for corn bran fiber (insoluble) and pectin and locust bean fiber (soluble), all materials have significant fractions of both soluble and insoluble fiber. The Furda method was used on numerous food and ingredient samples and was found to be practical and informative and to have acceptable precision (RSD values of 2.65-7.05%). It is suggested that the Furda (or similar) method be given consideration for the analysis of foods for dietary fiber.

  16. Multiagent scheduling method with earliness and tardiness objectives in flexible job shops.

    Science.gov (United States)

    Wu, Zuobao; Weng, Michael X

    2005-04-01

    Flexible job-shop scheduling problems are an important extension of the classical job-shop scheduling problems and present additional complexity, arising mainly from the considerable capacity overlap among modern machines. Classical scheduling methods are generally incapable of addressing such capacity overlapping. We propose a multiagent scheduling method with job earliness and tardiness objectives in a flexible job-shop environment. The earliness and tardiness objectives are consistent with the just-in-time production philosophy, which has attracted significant attention in both industry and the academic community. A new job-routing and sequencing mechanism is proposed. In this mechanism, two kinds of jobs are defined to distinguish jobs with one operation left from jobs with more than one operation left. Different criteria are proposed to route these two kinds of jobs. Job sequencing makes it possible to hold a job that would otherwise be completed too early. Two heuristic algorithms for job sequencing are developed to deal with these two kinds of jobs. The computational experiments show that the proposed multiagent scheduling method significantly outperforms the existing scheduling methods in the literature. In addition, the proposed method is quite fast: the simulation time to find a complete schedule with over 2000 jobs on ten machines is less than 1.5 min.
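    The just-in-time objective the agents optimize — penalizing both early and late completion relative to each job's due date — can be written down directly. This is a sketch of the standard weighted earliness/tardiness cost only; the weights are hypothetical and the multiagent routing and sequencing logic is not reproduced.

```python
def earliness_tardiness(completion_times, due_dates, alpha=1.0, beta=1.0):
    """Weighted sum of job earliness and tardiness penalties (JIT objective).

    alpha: per-unit earliness weight; beta: per-unit tardiness weight.
    A job finishing exactly on its due date contributes zero cost.
    """
    return sum(alpha * max(0.0, d - c) + beta * max(0.0, c - d)
               for c, d in zip(completion_times, due_dates))
```

Holding a job (as the sequencing mechanism above allows) trades machine idle time against the earliness term of this cost.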

  17. A Practical Tuning Method for the Robust PID Controller with Velocity Feed-Back

    Directory of Open Access Journals (Sweden)

    Emre Sariyildiz

    2015-08-01

    Full Text Available Proportional-Integral-Derivative (PID) control is the most widely used control method in industrial and academic applications due to its simplicity and efficiency. Several different control methods/algorithms have been proposed to tune the gains of PID controllers. However, the conventional tuning methods do not have sufficient performance and simplicity for practical applications, such as robotics and motion control. The performance of motion control systems may significantly deteriorate due to nonlinear plant uncertainties and unknown external disturbances, such as inertia variations, friction, external loads, etc.; i.e., there may be a significant discrepancy between simulation and experiment if robustness is not considered in the design of PID controllers. This paper proposes a novel practical tuning method for the robust PID controller with velocity feed-back for motion control systems. The main advantages of the proposed method are its simplicity and efficiency in practical applications, i.e., a high performance robust motion control system can easily be designed by properly tuning conventional PID controllers. The validity of the proposal is verified by simulation and experimental results.
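    The controller structure referred to above — a PID law whose derivative term acts on the measured velocity rather than on the error derivative, which avoids set-point kick — can be sketched in discrete time as follows. The class name, gains, and sample time are illustrative assumptions; this is not the paper's tuning rule.

```python
class VelocityFeedbackPID:
    """Discrete PID controller with velocity feed-back: the derivative term
    damps the measured velocity instead of differentiating the error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0

    def update(self, setpoint, position, velocity):
        """Return the control effort for one sample period."""
        error = setpoint - position
        self.integral += error * self.dt          # rectangular integration
        return self.kp * error + self.ki * self.integral - self.kd * velocity
```

Because the derivative path sees only the measured velocity, a step change in the setpoint does not produce the derivative spike that a textbook PID on the error signal would.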

  18. Egocentric Temporal Action Proposals.

    Science.gov (United States)

    Shao Huang; Weiqiang Wang; Shengfeng He; Lau, Rynson W H

    2018-02-01

    We present an approach to localize generic actions in egocentric videos, called temporal action proposals (TAPs), for accelerating the action recognition step. An egocentric TAP refers to a sequence of frames that may contain a generic action performed by the wearer of a head-mounted camera, e.g., taking a knife, spreading jam, pouring milk, or cutting carrots. Inspired by object proposals, this paper aims at generating a small number of TAPs, thereby replacing the popular sliding window strategy, for localizing all action events in the input video. To this end, we first propose to temporally segment the input video into action atoms, which are the smallest units that may contain an action. We then apply a hierarchical clustering algorithm with several egocentric cues to generate TAPs. Finally, we propose two actionness networks to score the likelihood of each TAP containing an action. The top ranked candidates are returned as output TAPs. Experimental results show that the proposed TAP detection framework performs significantly better than relevant approaches for egocentric action detection.

  19. Resonance interference method in lattice physics code stream

    International Nuclear Information System (INIS)

    Choi, Sooyoung; Khassenov, Azamat; Lee, Deokjung

    2015-01-01

    A newly developed resonance interference model is implemented in the lattice physics code STREAM, and the model shows a significant improvement in computing accurate eigenvalues. Equivalence theory is widely used in production calculations to generate the effective multigroup (MG) cross-sections (XS) for commercial reactors. Although many methods have been developed to enhance the accuracy of computing effective XSs, the current resonance treatment methods still do not have a clear resonance interference model. The conventional resonance interference model simply adds the absorption XSs of resonance isotopes to the background XS. However, the conventional models show non-negligible errors in computing effective XSs and eigenvalues. In this paper, a resonance interference factor (RIF) library method is proposed. This method interpolates the RIFs in a pre-generated RIF library and corrects the effective XS, rather than solving the time-consuming slowing-down calculation. The RIF library method is verified for homogeneous and heterogeneous problems. The verification results using the proposed method show significant improvements of accuracy in treating the interference effect. (author)
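    The library lookup itself — interpolating a pre-tabulated resonance interference factor and applying it as a multiplicative correction to the effective cross-section — might look like the following. This is a sketch with made-up table values; a real RIF library is tabulated over several parameters (isotope mixture, background XS, temperature), and the slowing-down physics it replaces is not modeled here.

```python
from bisect import bisect_left

def interpolate_rif(table, sigma_b):
    """Linearly interpolate a resonance interference factor (RIF) from a
    pre-generated {background XS: RIF} table, clamping at the table ends."""
    grid = sorted(table)
    if sigma_b <= grid[0]:
        return table[grid[0]]
    if sigma_b >= grid[-1]:
        return table[grid[-1]]
    j = bisect_left(grid, sigma_b)
    x0, x1 = grid[j - 1], grid[j]
    t = (sigma_b - x0) / (x1 - x0)
    return (1.0 - t) * table[x0] + t * table[x1]

def corrected_xs(sigma_eff, rif):
    """Apply the interference correction to an effective cross-section."""
    return rif * sigma_eff
```

The point of the method is visible in the structure: a table lookup plus one multiplication stands in for a full slowing-down calculation at every resonance treatment.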

  20. Images Encryption Method using Steganographic LSB Method, AES and RSA algorithm

    Science.gov (United States)

    Moumen, Abdelkader; Sissaoui, Hocine

    2017-03-01

    Vulnerability of communication of digital images is an extremely important issue nowadays, particularly when the images are communicated through insecure channels. To improve communication security, many cryptosystems have been presented in the image encryption literature. This paper proposes a novel image encryption technique based on an algorithm that is faster than current methods. The proposed algorithm eliminates the step in which the secret key is shared during the encryption process. It is formulated based on symmetric encryption, asymmetric encryption and steganography theories. The image is encrypted using a symmetric algorithm; then the secret key is encrypted by means of an asymmetric algorithm and hidden in the ciphered image using a least significant bit steganographic scheme. The analysis results show that, while enjoying faster computation, our method performs close to optimal in terms of accuracy.
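    The steganographic stage — hiding the (already asymmetrically encrypted) key bytes in the least significant bits of the cipher-image pixels — can be sketched in isolation. This is a minimal sketch under stated assumptions: the AES and RSA stages are omitted, the "image" is a flat list of 8-bit pixel values, and the payload length is assumed known to the receiver.

```python
def lsb_embed(pixels, payload):
    """Hide payload bytes in the least significant bits of 8-bit pixel values."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for payload")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit          # overwrite only the LSB
    return stego

def lsb_extract(pixels, n_bytes):
    """Recover n_bytes hidden by lsb_embed (MSB-first within each byte)."""
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
```

Since each pixel changes by at most one grey level, the embedded key is visually imperceptible in the ciphered image, which is what lets the scheme drop the separate key-sharing step.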

  1. Semifragile Speech Watermarking Based on Least Significant Bit Replacement of Line Spectral Frequencies

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Nematollahi

    2017-01-01

    Full Text Available There are various techniques for speech watermarking based on modifying the linear prediction coefficients (LPCs); however, the estimated and modified LPCs vary from each other even without attacks. Because line spectral frequency (LSF) has less sensitivity to watermarking than LPC, watermark bits are embedded into the maximum number of LSFs by applying the least significant bit replacement (LSBR) method. To reduce the differences between estimated and modified LPCs, a checking loop is added to minimize the watermark extraction error. Experimental results show that the proposed semifragile speech watermarking method can provide high imperceptibility and that any manipulation of the watermark signal destroys the watermark bits, since manipulation changes it to a random stream of bits.

  2. A Proposed Method for Improving the Performance of P-Type GaAs IMPATTs

    Directory of Open Access Journals (Sweden)

    H. A. El-Motaafy

    2012-07-01

    Full Text Available A special waveform is proposed and assumed to be the optimum waveform for p-type GaAs IMPATTs. This waveform is deduced after careful and extensive study of the performance of these devices. The results presented here indicate the superiority of the performance of the IMPATTs driven by the proposed waveform over that obtained when the same IMPATTs are driven by the conventional sinusoidal waveform. These results are obtained using a full-scale computer simulation program that takes fully into account all the physical effects pertinent to IMPATT operation. In this paper, it is indicated that the superiority of the proposed waveform is attributed to its ability to reduce the bad effects that usually degrade IMPATT performance, such as the space-charge effect and the drift velocity dropping below saturation. The superiority is also attributed to the ability of the proposed waveform to improve the phase relationship between the terminal voltage and the induced current. Keywords: Computer-Aided Design, GaAs IMPATT, Microwave Engineering

  3. Fast polarimetric dehazing method for visibility enhancement in HSI colour space

    Science.gov (United States)

    Zhang, Wenfei; Liang, Jian; Ren, Liyong; Ju, Haijuan; Bai, Zhaofeng; Wu, Zhaoxin

    2017-09-01

    Image haze removal has attracted much attention in the optics and computer vision fields in recent years due to its wide applications. In particular, fast and real-time dehazing methods are of significance. In this paper, we propose a fast dehazing method in the hue, saturation and intensity colour space based on the polarimetric imaging technique. We implement the polarimetric dehazing method in the intensity channel, and the colour distortion of the image is corrected using the white patch retinex method. This method not only preserves the capacity to restore detailed information, but also improves the efficiency of the polarimetric dehazing method. Comparison studies with state-of-the-art methods demonstrate that the proposed method obtains results of equal or better quality while running much faster. The proposed method is promising for real-time image and video haze removal applications.

  4. Tritium extraction methods proposed for a solid breeder blanket. Subtask WP-B 6.1 of the European Blanket Program 1996

    International Nuclear Information System (INIS)

    Albrecht, H.

    1997-04-01

    Ten different methods for the extraction of tritium from the purge gas of a ceramic blanket are described and evaluated with respect to their applicability for ITER and DEMO. The methods are based on the conditions that the purge gas is composed of helium with an addition of up to 0.1% of H2 or O2 and H2O to facilitate the release of tritium, and that tritium occurs in the purge gas in two main chemical forms, i.e. HT and HTO. Individual process steps of many methods are identical; in particular, the application of cold traps, molecular sieve beds, and diffusors is proposed in several cases. Differences between the methods arise mainly from the ways in which the various process steps are combined and from the operating conditions which are chosen with respect to temperature and pressure. Up to now, none of the methods has been demonstrated to be reliably applicable for the purge gas conditions foreseen for the operation of an ITER blanket test module (or larger ceramic blanket designs such as for DEMO). These conditions are characterized by very high gas flow rates and extremely low concentrations of HT and HTO. Therefore, a proposal has been made (FZK concept) which is expected to have the best potential for applicability to ITER and DEMO and to incorporate the smallest development risk. In this concept, the extraction of tritium and excess hydrogen is accomplished by using a cold trap for freezing out HTO/H2O and a 5A molecular sieve bed for the adsorption of HT/H2. (orig.) [de

  5. Proposal of Environmental Impact Assessment Method for Concrete in South Korea: An Application in LCA (Life Cycle Assessment

    Directory of Open Access Journals (Sweden)

    Tae Hyoung Kim

    2016-11-01

    Full Text Available This study aims to develop a system for assessing the impact of the substances discharged from the concrete production process on six environmental impact categories, i.e., global warming (GWP), acidification (AP), eutrophication (EP), abiotic depletion (ADP), ozone depletion (ODP), and photochemical oxidant creation (POCP), using the life cycle assessment (LCA) method. To achieve this, this study proposed an LCA method specifically applicable to the Korean concrete industry by adapting the ISO standards to suit the Korean situation. The proposed LCA method involves a system that performs environmental impact assessment on the basis of input information on concrete mix design, transport distance, and energy consumption in a batch plant. The Concrete Lifecycle Assessment System (CLAS) thus developed provides user-friendly support for environmental impact assessment with a specialized database for concrete mix materials and energy sources. In the case analysis using the CLAS, among the substances discharged from the production of 24 MPa concrete, those contributing to GWP, AP, EP, ADP, ODP, and POCP were assessed to amount to 309 kg-CO2 eq/m3, 28.7 kg-SO2 eq/m3, 5.21 kg-PO43− eq/m3, 0.000049 kg-CFC11 eq/m3, 34 kg/m3, and 21 kg-Ethylene eq/m3, respectively. Of the six environmental impact categories selected for the LCA in this study, ordinary Portland cement (OPC) was found to contribute most intensely to GWP and POCP, and aggregates to AP, EP, ODP, and ADP. It was also found that mix designs with an increased proportion of recycled aggregate reduce the impact in all other categories.
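    The core computation behind such a system — weighting the quantity of each mix constituent by its per-kg emission factor for each impact category and summing — can be sketched as follows. The material names and factor values in the usage example are made up for illustration; they are not the CLAS database.

```python
def assess_impacts(mix_design, factors):
    """Sum per-category impacts for one cubic metre of concrete.

    mix_design: {material: quantity in kg per m3}.
    factors: {material: {impact category: eq units per kg}}.
    Returns {impact category: total eq units per m3}.
    """
    totals = {}
    for material, qty in mix_design.items():
        for category, f in factors.get(material, {}).items():
            totals[category] = totals.get(category, 0.0) + qty * f
    return totals
```

Transport and batch-plant energy, which the CLAS also accounts for, would simply be further "materials" with their own factor rows in this scheme.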

  6. A proposed impact assessment method for genetically modified plants (AS-GMP Method)

    International Nuclear Information System (INIS)

    Jesus-Hitzschky, Katia Regina Evaristo de; Silveira, Jose Maria F.J. da

    2009-01-01

    An essential step in the development of products based on biotechnology is an assessment of their potential economic impacts and safety, including an evaluation of the potential impact of transgenic crops and the practices related to their cultivation on the environment and human or animal health. The purpose of this paper is to provide an assessment method to evaluate the impact of biotechnologies that uses quantifiable parameters and allows a comparative analysis between conventional technology and technologies using GMOs. This paper introduces a method to perform an impact analysis associated with the commercial release and use of genetically modified plants, the Assessment System GMP (AS-GMP) Method. The assessment is performed through indicators that are arranged according to their dimension criterion, namely: environmental, economic, social, capability and institutional approach. To perform an accurate evaluation of the GMP, specific indicators related to the genetic modification are grouped into common fields: genetic insert features, GM plant features, gene flow, food/feed field, introduction of the GMP, unexpected occurrences and specific indicators. The novelty is the possibility of including parameters specific to the biotechnology under assessment. In this case-by-case analysis, the moderation factors and the indexes are parameterized to make the assessment workable.

  7. Constant Jacobian Matrix-Based Stochastic Galerkin Method for Probabilistic Load Flow

    Directory of Open Access Journals (Sweden)

    Yingyun Sun

    2016-03-01

    Full Text Available An intrusive spectral method for probabilistic load flow (PLF) is proposed in this paper, which can handle the uncertainties arising from renewable energy integration. Generalized polynomial chaos (gPC) expansions of the dependent random variables are utilized to build a spectral stochastic representation of the PLF model. Instead of solving the coupled PLF model with a traditional, cumbersome method, a modified stochastic Galerkin (SG) method is proposed based on the P-Q decoupling properties of load flow in power systems. By introducing two pre-calculated constant sparse Jacobian matrices, the computational burden of the SG method is significantly reduced. Two cases, the IEEE 14-bus and IEEE 118-bus systems, are used to verify the computation speed and efficiency of the proposed method.

  8. A proposal for a determination method of element division on an analytical model for finite element elastic waves propagation analysis

    International Nuclear Information System (INIS)

    Ishida, Hitoshi; Meshii, Toshiyuki

    2010-01-01

    This study proposes an element size selection method named the 'Impact-Meshing (IM) method' for finite element wave propagation analysis models, which is characterized by (1) determination of the element division of the model from the strain energy in the whole model, and (2) a static analysis (a dynamic analysis in a single time step) with boundary conditions that give the maximum change of displacement in the time increment and the inertial (impact) force caused by the displacement change. In this paper, an example of the application of the IM method to a 3D ultrasonic wave propagation problem in an elastic solid is described. The example showed that the analysis results with a model determined by the IM method were convergent, and that the calculation time for determining the element subdivision was reduced to about 1/6, since the IM method does not require a dynamic transient analysis with 100 time steps to determine the element subdivision. (author)

  9. Current lipid extraction methods are significantly enhanced adding a water treatment step in Chlorella protothecoides.

    Science.gov (United States)

    Ren, Xiaojie; Zhao, Xinhe; Turcotte, François; Deschênes, Jean-Sébastien; Tremblay, Réjean; Jolicoeur, Mario

    2017-02-11

    helps the subsequent release of intracellular lipids in the second extraction step, thus improving the global lipids extraction yield. In addition, the water treatment positively modifies the intracellular lipid class ratios of the final extract, in which TAG ratio is significantly increased without changes in the fatty acids composition. The novel method thus provides an efficient way to improve lipid extraction yield of existing methods, as well as selectively favoring TAG, a lipid of the upmost interest for biodiesel production.

  10. Numerical Feynman integrals with physically inspired interpolation: Faster convergence and significant reduction of computational cost

    Directory of Open Access Journals (Sweden)

    Nikesh S. Dattani

    2012-03-01

    Full Text Available One of the most successful methods for calculating reduced density operator dynamics in open quantum systems, which can give numerically exact results, uses Feynman integrals. However, when simulating the dynamics for a given amount of time, the number of time steps that can realistically be used with this method is always limited; therefore one often obtains an approximation of the reduced density operator at a sparse grid of points in time. Instead of relying only on ad hoc interpolation methods (such as splines) to estimate the system density operator in between these points, I propose a method that uses physical information to assist with this interpolation. This method is tested on a physically significant system, on which its use allows important qualitative features of the density operator dynamics to be captured with as little as two time steps in the Feynman integral. This method allows for an enormous reduction in the amount of memory and CPU time required for approximating density operator dynamics within a desired accuracy. Since this method does not change the way the Feynman integral itself is calculated, the value of the density operator approximation at the points in time used to discretize the Feynman integral will be the same whether or not this method is used, but its approximation in between these points in time is considerably improved by this method. A list of ways in which this proposed method can be further improved is presented in the last section of the article.

  11. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can be operated without the cooperation of the people under detection. Hence, facial recognition is being applied in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which has a more accurate facial localization effect on a specific database; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method and a learning-based method via regressing local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  12. Proposal of adaptive human interface and study of interface evaluation method for plant operators

    International Nuclear Information System (INIS)

    Ujita, Hiroshi; Kubota, Ryuji.

    1994-01-01

    In this report, a new concept of human interface adaptive to plant operators' mental model, cognitive process and psychological state which change with time is proposed. It is composed of a function to determine information which should be indicated to operators based on the plant situation, a function to estimate operators' internal conditions, and a function to arrange the information amount, position, timing, form etc. based on their conditions. The method to evaluate the fitness of the interface by using the analysis results based on cognitive science, ergonomics, psychology and physiology is developed to achieve such an interface. Fundamental physiological experiments have been performed. Stress and workload can be identified by the ratio of the power average of the α wave fraction of a brain wave and be distinguished by the ratio of the standard deviation of the R-R interval in test and at rest, in the case of low stress such as mouse operation, calculation and walking. (author)

  14. High dimensional model representation method for fuzzy structural dynamics

    Science.gov (United States)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output relationship to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially, rather than exponentially, with the number of variables. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have a significant impact upon the outputs of most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that, using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising accuracy.
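
    The low-order expansion idea can be sketched with a first-order cut-HDMR surrogate. This is a generic illustration, not the paper's fuzzy α-cut implementation; the test function, anchor point and grids are assumptions.

```python
import numpy as np

def cut_hdmr_first_order(f, x0, grids):
    """Build a first-order cut-HDMR surrogate of f around anchor x0.

    f0 = f(x0); fi(xi) = f(x0 with i-th coordinate replaced) - f0.
    Surrogate: f(x) ~ f0 + sum_i fi(xi), each fi tabulated on a 1-D grid.
    """
    x0 = np.asarray(x0, float)
    f0 = f(x0)
    tables = []
    for i, g in enumerate(grids):
        vals = []
        for gi in g:
            x = x0.copy()
            x[i] = gi                      # vary one coordinate at a time
            vals.append(f(x) - f0)
        tables.append((np.asarray(g, float), np.asarray(vals, float)))

    def surrogate(x):
        s = f0
        for i, (g, v) in enumerate(tables):
            s += np.interp(x[i], g, v)     # linear interpolation of fi
        return s

    return surrogate

# Additively separable test function: first-order HDMR is exact here.
f = lambda x: x[0] ** 2 + 3.0 * x[1] + 1.0
s = cut_hdmr_first_order(f, x0=[0.0, 0.0], grids=[np.linspace(-2, 2, 41)] * 2)
print(abs(s([1.0, -1.0]) - f(np.array([1.0, -1.0]))))  # ~0 for additive f
```

    The number of function calls grows linearly with the number of variables and grid points, which is the efficiency the abstract describes.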

  15. A finite volume method for cylindrical heat conduction problems based on local analytical solution

    KAUST Repository

    Li, Wang

    2012-10-01

    A new finite volume method for cylindrical heat conduction problems based on a local analytical solution is proposed in this paper, with a detailed derivation. The results of the new method are compared with those of the traditional second-order finite volume method. The newly proposed method is more accurate than conventional ones, although its discretized expression is slightly more complex than that of the second-order central finite volume method, so it costs more computation time on the same grid. Numerical results show that the total CPU time of the new method is significantly less than that of conventional methods for achieving the same level of accuracy. © 2012 Elsevier Ltd. All rights reserved.
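
    For reference, the second-order finite volume baseline the paper compares against can be sketched for steady cylindrical conduction, d/dr(r dT/dr) = 0, and checked against the exact logarithmic solution. The geometry, boundary values and grid size are assumptions for illustration.

```python
import numpy as np

# Steady conduction in a cylinder wall, T(r1) = T1, T(r2) = T2.
# Exact solution: T = T1 + (T2 - T1) * ln(r/r1) / ln(r2/r1).
r1, r2, T1, T2 = 0.1, 1.0, 100.0, 0.0
n = 40
edges = np.linspace(r1, r2, n + 1)
rc = 0.5 * (edges[:-1] + edges[1:])          # cell centres

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    rw, re = edges[i], edges[i + 1]          # west/east face radii
    dw = rc[i] - (rc[i - 1] if i > 0 else r1)
    de = (rc[i + 1] if i < n - 1 else r2) - rc[i]
    aw, ae = rw / dw, re / de                # face conductances r_f / dr
    A[i, i] = aw + ae
    if i > 0:
        A[i, i - 1] = -aw
    else:
        b[i] += aw * T1                      # Dirichlet wall, half cell
    if i < n - 1:
        A[i, i + 1] = -ae
    else:
        b[i] += ae * T2

T = np.linalg.solve(A, b)
exact = T1 + (T2 - T1) * np.log(rc / r1) / np.log(r2 / r1)
print(np.max(np.abs(T - exact)))             # small discretization error
```

    The paper's scheme instead builds the local analytical (logarithmic) profile into the face fluxes, which removes most of this discretization error at slightly higher cost per cell.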

  17. An enhanced topologically significant directed random walk in cancer classification using gene expression datasets

    Directory of Open Access Journals (Sweden)

    Choon Sen Seah

    2017-12-01

    Full Text Available Microarray technology has become one of the elementary tools for researchers to study the genomes of organisms. As the complexity and heterogeneity of cancer is increasingly appreciated through genomic analysis, cancer classification is an emerging and important trend. Significant directed random walk is proposed as a cancer classification approach with higher sensitivity of risk-gene prediction and higher accuracy of cancer classification. In this paper, the methodology and materials used for the experiment are presented. A tuning-parameter selection method and weight as a parameter are applied in the proposed approach. Gene expression datasets are used as the input, while a pathway dataset is used to build a directed graph, as a reference dataset, to complete the bias process in the random walk approach. In addition, we demonstrate that our approach can improve sensitivity of prediction with higher accuracy and biologically meaningful classification results. A comparison between significant directed random walk and directed random walk shows the improvement in terms of sensitivity of prediction and accuracy of cancer classification.

  18. Mining Significant Semantic Locations from GPS Data

    DEFF Research Database (Denmark)

    Cao, Xin; Cong, Gao; Jensen, Christian Søndergaard

    2010-01-01

    With the increasing deployment and use of GPS-enabled devices, massive amounts of GPS data are becoming available. We propose a general framework for the mining of semantically meaningful, significant locations, e.g., shopping malls and restaurants, from such data. We present techniques capable of extracting semantic locations from GPS data. We capture the relationships between locations and between locations and users with a graph. Significance is then assigned to locations using random walks over the graph that propagate significance among the locations. In doing so, mutual reinforcement between…
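
    The significance-propagating random walk can be sketched as a damped walk over a small assumed location graph; the graph, damping factor and iteration count here are illustrative, not the paper's actual model.

```python
import numpy as np

# Toy location graph (assumed for illustration): an edge i -> j when
# visits to location i are related to visits to location j.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

def random_walk_significance(A, damping=0.85, iters=200):
    """Significance scores via a damped random walk (PageRank-style)."""
    n = A.shape[0]
    # Column-stochastic transition matrix: a walker leaves node j
    # uniformly along its outgoing edges.
    M = A.T / A.sum(axis=1)
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        p = (1 - damping) / n + damping * M @ p
    return p / p.sum()

p = random_walk_significance(A)
print(np.argmax(p))  # → 2: the best-connected location scores highest
```

    Mutual reinforcement appears naturally: a location gains significance when many significant locations link to it.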

  20. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    NARCIS (Netherlands)

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels

  1. Computation of spatial significance of mountain objects extracted from multiscale digital elevation models

    International Nuclear Information System (INIS)

    Sathyamoorthy, Dinesh

    2014-01-01

    The derivation of spatial significance is an important aspect of geospatial analysis, and hence various methods have been proposed to compute the spatial significance of entities based on their spatial distances to other entities within the cluster. This paper studies the spatial significance of mountain objects extracted from multiscale digital elevation models (DEMs). At each scale, the spatial significance index (SSI) of a mountain object is the minimum number of morphological dilation iterations required to occupy all the other mountain objects in the terrain. The mountain object with the lowest SSI value is the spatially most significant mountain object, indicating that it has the shortest distance to the other mountain objects. It is observed that as the area of the mountain objects decreases with increasing scale, the distances between the mountain objects increase, resulting in increasing values of SSI. The results obtained indicate that the strategic location of a mountain object at the centre of the terrain is more important than its size in determining its reach to other mountain objects and thus its spatial significance.
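
    A minimal sketch of the SSI computation on a toy raster, assuming single-pixel "mountain objects" and a 4-connected dilation step (the record does not specify the structuring element):

```python
import numpy as np

def dilate(mask):
    """One 4-connected binary dilation step, implemented with shifts."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def spatial_significance_index(objects):
    """SSI of each object: dilation iterations needed to cover all others."""
    ssi = []
    for k, obj in enumerate(objects):
        others = np.zeros_like(obj)
        for j, o in enumerate(objects):
            if j != k:
                others |= o
        grown, n = obj.copy(), 0
        while not grown[others].all():     # grow until all others covered
            grown = dilate(grown)
            n += 1
        ssi.append(n)
    return ssi

grid = (9, 9)
def obj(*pix):
    m = np.zeros(grid, bool)
    for p in pix:
        m[p] = True
    return m

# One central and two corner "mountains" on a 9x9 terrain.
objects = [obj((4, 4)), obj((0, 0)), obj((8, 8))]
print(spatial_significance_index(objects))  # → [8, 16, 16]
```

    The central object reaches both corners in 8 dilations while each corner needs 16, matching the record's observation that central location dominates spatial significance.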

  2. Proposed algorithm to improve job shop production scheduling using ant colony optimization method

    Science.gov (United States)

    Pakpahan, Eka KA; Kristina, Sonna; Setiawan, Ari

    2017-12-01

    This paper deals with the determination of job shop production schedules in an automated environment. In this particular environment, machines and the material handling system are integrated and controlled by a computer center, where schedules are created and then used to dictate the movement of parts and the operations at each machine. This setting is usually designed to run an unmanned production process for a specified time interval. We consider parts with various operation requirements. Each operation requires specific cutting tools. These parts are to be scheduled on machines of identical capability, meaning that each machine is equipped with a similar set of cutting tools and is therefore capable of processing any operation. The availability of a particular machine to process a particular operation is determined by the remaining lifetime of its cutting tools. We propose an algorithm based on the ant colony optimization method, implemented in MATLAB, to generate a production schedule that minimizes the total processing time of the parts (makespan). We tested the algorithm on data provided by a real industry, and the process shows a very short computation time. This contributes greatly to the flexibility and timeliness targeted in an automated environment.
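
    The ant-colony construction loop can be sketched on a deliberately simplified stand-in problem: ordering jobs on a single machine to minimize total completion time. The pheromone model, parameters and instance are illustrative assumptions, not the paper's job-shop formulation.

```python
import random

def aco_sequence(proc_times, ants=20, iters=50, rho=0.1, seed=1):
    """Toy ant colony search for a job order minimizing total completion
    time on one machine (a stand-in for the paper's makespan objective)."""
    random.seed(seed)
    n = len(proc_times)
    tau = [[1.0] * n for _ in range(n)]        # pheromone: position -> job
    best_seq, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            free, seq = list(range(n)), []
            for pos in range(n):
                # Choose next job by pheromone x heuristic (1 / proc time).
                w = [tau[pos][j] * (1.0 / proc_times[j]) for j in free]
                seq.append(free.pop(random.choices(range(len(free)), w)[0]))
            t = cost = 0
            for j in seq:
                t += proc_times[j]
                cost += t                      # completion time of job j
            if cost < best_cost:
                best_seq, best_cost = seq, cost
        # Evaporate, then reinforce the best sequence found so far.
        for pos in range(n):
            for j in range(n):
                tau[pos][j] *= (1 - rho)
            tau[pos][best_seq[pos]] += 1.0
    return best_seq, best_cost

seq, cost = aco_sequence([4, 2, 7, 1])
print(seq, cost)
```

    On this instance the optimum is the shortest-processing-time order with total completion time 25; the pheromone reinforcement steers the colony toward it over the iterations.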

  3. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    Science.gov (United States)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

    Rational function models (RFMs) are known as one of the most appealing models and are extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical test, namely a significance test, is applied to search for the RFM parameters that are resistant to the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. This technique, indeed, shows an improvement of 50-80% over TR.
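
    The core idea, pruning parameters by a significance test on their estimated coefficients, can be sketched on a generic overparameterized linear model. This is not an actual RFM; the basis functions, noise level and threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.05, x.size)   # true model is linear

# Candidate basis: two genuine terms plus two spurious ones.
X = np.column_stack([np.ones_like(x), x, np.sin(5 * x), np.cos(5 * x)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# t-statistic of each coefficient under the usual linear-model assumptions.
resid = y - X @ beta
dof = x.size - X.shape[1]
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
t = beta / np.sqrt(np.diag(cov))

keep = np.abs(t) > 2.0      # ~5% two-sided threshold; prune the rest
print(keep)
```

    Refitting with only the kept columns yields a better-conditioned model, which is the mechanism the abstract credits for resisting overfitting.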

  4. Control range: a controllability-based index for node significance in directed networks

    International Nuclear Information System (INIS)

    Wang, Bingbo; Gao, Lin; Gao, Yong

    2012-01-01

    While a large number of methods for module detection have been developed for undirected networks, it is difficult to adapt them to handle directed networks due to the lack of consensus criteria for measuring the node significance in a directed network. In this paper, we propose a novel structural index, the control range, motivated by recent studies on the structural controllability of large-scale directed networks. The control range of a node quantifies the size of the subnetwork that the node can effectively control. A related index, called the control range similarity, is also introduced to measure the structural similarity between two nodes. When applying the index of control range to several real-world and synthetic directed networks, it is observed that the control range of the nodes is mainly influenced by the network's degree distribution and that nodes with a low degree may have a high control range. We use the index of control range similarity to detect and analyze functional modules in glossary networks and the enzyme-centric network of homo sapiens. Our results, as compared with other approaches to module detection such as modularity optimization algorithm, dynamic algorithm and clique percolation method, indicate that the proposed indices are effective and practical in depicting structural and modular characteristics of sparse directed networks

  5. Proposal and field practice of a 'hiyarihatto' activity method for promotion of statements of participants for nuclear power plant organization

    International Nuclear Information System (INIS)

    Aoyagi, Saizo; Fujino, Hidenori; Ishii, Hirotake; Shimoda, Hiroshi; Sakuda, Hiroshi; Yoshikawa, Hidekazu; Sugiman, Toshio

    2011-01-01

    In a 'hiyarihatto' activity, workers report and discuss incident cases related to their work. Such an activity is particularly effective for cultivating participants' attitudes about safety. Nevertheless, a conventional face-to-face hiyarihatto activity includes features that are inappropriate for conduct in a nuclear power plant organization. For example, workers at nuclear power plants are geographically distributed and busy; therefore, they have great difficulty participating in a face-to-face hiyarihatto activity. Furthermore, workers' hesitation in discussing problems inhibits the continuation of their active participation. This study proposes a hiyarihatto activity based on asynchronous, distributed computer-mediated communication (CMC) for a nuclear power plant organization and demonstrates its effectiveness through field practice. The proposed method also introduces special participants who follow action guidelines to promote continuation of the activity. The method was used in an actual nuclear power plant organization. Results showed that the method is effective under some conditions, such as during periods without facility inspection. Special participants promoted the activity in some cases. Moreover, other factors affecting the activity and some improvements were identified. (author)

  6. Proposed torque optimized behavior for digital speed control of induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Metwally, H.M.B.; El-Shewy, H.M.; El-Kholy, M.M. [Zagazig Univ., Dept. of Electrical Engineering, Zagazig (Egypt); Abdel-Kader, F.E. [Menoufyia Univ., Dept. of Electrical Engineering, Menoufyia (Egypt)

    2002-09-01

    In this paper, a control strategy for speed control of induction motors with field orientation is proposed. The proposed method adjusts the output voltage and frequency of the converter to operate the motor at the desired speed with maximum torque per ampere at all load torques keeping the torque angle equal to 90 deg. A comparison between the performance characteristics of a 2 hp induction motor using three methods of speed control is presented. These methods are the proposed method, the direct torque control method and the constant V/f method. The comparison showed that better performance characteristics are obtained using the proposed speed control strategy. A computer program, based on this method, is developed. Starting from the motor parameters, the program calculates a data set for the stator voltage and frequency required to obtain maximum torque per ampere at any motor speed and load torque. This data set can be used by the digital speed control system of induction motors. (Author)

  7. A new method to detect significant basal body temperature changes during a woman's menstrual cycle.

    Science.gov (United States)

    Freundl, Günter; Frank-Herrmann, Petra; Brown, Simon; Blackwell, Leonard

    2014-10-01

    To compare the results of a computer programme, based on Trigg's tracking system (TTS), that identifies the basal body temperature (BBT) shift day from daily records of BBT values (the TTS transition day), with the BBT shift day identified from the same records using the Sensiplan(®) symptothermal method of natural family planning. A computer programme was written to display the daily BBT readings for 364 menstrual cycles from 51 women aged 24 to 35 years, obtained from the German Natural Family Planning (NFP) database. The TTS transition day identified from each record was then compared with the BBT shift day estimated from the same record by the Sensiplan(®) method. Total agreement between the methods was obtained for 81% (294/364) of the cycles, and 18% (67) of the cycles differed by ± 1 day. For the 364 pairs of values distributed among 51 women, the medians of the differences between the TTS transition day and the Sensiplan(®) initial day of the BBT rise (shift day) were not significantly different (χ(2) = 65.28, df = 50, p = 0.07205). The advantages of the tracking signal algorithm are that in many cases it was possible to identify the BBT shift day on that very day, rather than only some days later, and to estimate the probability that a transition had occurred from the different values of the tracking signal.
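
    Trigg's tracking signal itself can be sketched as follows, on an assumed synthetic temperature record; the smoothing constant and the 0.9 decision threshold are illustrative choices, not the programme's actual settings.

```python
def triggs_tracking_signal(series, alpha=0.2):
    """Trigg's tracking signal at each point of a series.

    Forecasts come from simple exponential smoothing; the signal is the
    ratio of smoothed error to smoothed absolute error, bounded in [-1, 1].
    A value near +/-1 flags a systematic shift in the level of the series.
    """
    forecast = series[0]
    E = 0.0    # smoothed error
    M = 1e-9   # smoothed absolute error, tiny seed avoids 0/0
    signal = []
    for x in series:
        e = x - forecast
        E = alpha * e + (1 - alpha) * E
        M = alpha * abs(e) + (1 - alpha) * M
        signal.append(E / M)
        forecast += alpha * e   # update the exponential-smoothing forecast
    return signal

# Synthetic BBT-like record (assumed values): ~36.5 C in the follicular
# phase, then a sustained ~0.3 C rise starting on day 14.
bbt = [36.5] * 14 + [36.8] * 10
ts = triggs_tracking_signal(bbt, alpha=0.2)
shift_day = next(i for i, t in enumerate(ts) if abs(t) > 0.9)
print(shift_day)  # → 14, the very day the sustained rise begins
```

    This illustrates the advantage claimed in the abstract: the signal can flag the shift on the day it occurs rather than days later.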

  8. Communication: Proper treatment of classically forbidden electronic transitions significantly improves detailed balance in surface hopping

    Energy Technology Data Exchange (ETDEWEB)

    Sifain, Andrew E. [Department of Physics and Astronomy, University of Southern California, Los Angeles, California 90089-0485 (United States); Wang, Linjun [Department of Chemistry, Zhejiang University, Hangzhou 310027 (China); Prezhdo, Oleg V. [Department of Physics and Astronomy, University of Southern California, Los Angeles, California 90089-0485 (United States); Department of Chemistry, University of Southern California, Los Angeles, California 90089-1062 (United States)

    2016-06-07

    Surface hopping is the most popular method for nonadiabatic molecular dynamics. Many have reported that it does not rigorously attain detailed balance at thermal equilibrium, but does so approximately. We show that convergence to the Boltzmann populations is significantly improved when the nuclear velocity is reversed after a classically forbidden hop. The proposed prescription significantly reduces the total number of classically forbidden hops encountered along a trajectory, suggesting that some randomization in nuclear velocity is needed when classically forbidden hops constitute a large fraction of attempted hops. Our results are verified computationally using two- and three-level quantum subsystems, coupled to a classical bath undergoing Langevin dynamics.

  9. Reliability-based design optimization via high order response surface method

    International Nuclear Information System (INIS)

    Li, Hong Shuang

    2013-01-01

    To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and the uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate the response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables connected with the point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling-based reliability sensitivity analysis method is employed to further reduce the computational effort when design variables are distributional parameters of input random variables. The proposed methodology is applied to two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than RBDO based on the first order reliability method or on Monte Carlo simulation, and enables the use of RBDO as a practical design tool.
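
    The supporting-point idea can be illustrated with NumPy's Gauss-Hermite nodes, which integrate Gaussian uncertainty exactly for low-order polynomials. This is a minimal sketch of why such nodes make good supporting points, not the paper's HORSM construction.

```python
import numpy as np

# Five Gauss-Hermite nodes and weights: exact for polynomials up to
# degree 9 against the weight exp(-x^2).
nodes, weights = np.polynomial.hermite.hermgauss(5)

# For X ~ N(0, 1), E[g(X)] = (1/sqrt(pi)) * sum_i w_i g(sqrt(2) x_i).
# Check with g(x) = x^2, whose exact mean is 1.
mean_of_square = np.sum(weights * (np.sqrt(2) * nodes) ** 2) / np.sqrt(np.pi)
print(round(mean_of_square, 6))  # → 1.0
```

    A response surface fitted at these nodes therefore reproduces low-order statistics of the output with very few model evaluations.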

  10. A new computational method for reactive power market clearing

    International Nuclear Information System (INIS)

    Zhang, T.; Elkasrawy, A.; Venkatesh, B.

    2009-01-01

    After the deregulation of electricity markets, ancillary services such as reactive power supply are priced separately. However, unlike for real power supply, procedures for costing and pricing reactive power supply are still evolving, and spot markets for reactive power do not yet exist. Further, traditional formulations proposed for clearing reactive power markets use a non-linear mixed integer programming formulation that is difficult to solve. This paper proposes a new reactive power supply market clearing scheme. The novelty of this formulation lies in the pricing scheme, which rewards transformers for tap shifting while participating in this market. The proposed model is a non-linear mixed integer challenge. A significant portion of the manuscript is devoted to the development of a new successive mixed integer linear programming (MILP) technique to solve this formulation. The successive MILP method is computationally robust and fast. The IEEE 6-bus and 300-bus systems are used to test the proposed method. These tests serve to demonstrate the computational speed and rigor of the proposed method. (author)

  11. A Horizontal Tilt Correction Method for Ship License Numbers Recognition

    Science.gov (United States)

    Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi

    2018-02-01

    An automatic ship license number (SLN) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships usually have large vertical or horizontal angles, which significantly decreases the accuracy and robustness of an SLN recognition system. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task through three main steps. First, an MSER-based character center-point computation algorithm is designed to compute the accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using the M-estimator algorithm; the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to rotate and correct the input SLN horizontally. The proposed method was tested on 200 tilted SLN images and proved effective, with a tilt correction rate of 80.5%.
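
    Steps two and three can be sketched with synthetic center-points. Plain least squares stands in for the paper's M-estimator fit (robustness to outliers being the only difference), and the points and tilt angle are assumed for illustration.

```python
import numpy as np

# Hypothetical character center-points of a license number tilted by
# 10 degrees: eight points on a line with slope tan(10 deg).
theta = np.deg2rad(10.0)
xs = np.arange(8, dtype=float) * 20.0
pts = np.column_stack([xs, xs * np.tan(theta) + 5.0])

# Step 2: fit a straight line to the center-points and read the tilt
# angle off its slope.
slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)
tilt = np.degrees(np.arctan(slope))

# Step 3: rotate by -tilt about the centroid; rotating the points
# themselves checks that they become horizontal.
c, s = np.cos(-np.radians(tilt)), np.sin(-np.radians(tilt))
R = np.array([[c, -s], [s, c]])
corrected = (pts - pts.mean(axis=0)) @ R.T
print(round(tilt, 2))                                 # → 10.0
print(np.allclose(corrected[:, 1], 0.0, atol=1e-8))   # → True
```

    In the full pipeline the same rotation matrix is applied to the image as an affine warp rather than to the points.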

  12. Proposal of a segmentation procedure for skid resistance data

    International Nuclear Information System (INIS)

    Tejeda, S. V.; Tampier, Hernan de Solominihac; Navarro, T.E.

    2008-01-01

    Skid resistance of pavements presents high spatial variability along a road. This pavement characteristic is directly related to wet-weather accidents; therefore, it is important to identify and characterize homogeneous skid resistance segments along a road in order to implement proper road safety management. Several data segmentation methods have been applied to other pavement characteristics (e.g., roughness); however, no application to skid resistance data was found during the literature review for this study. Typical segmentation methods are either too general or too specific to ensure a detailed segmentation of skid resistance data that can be used for managing pavement performance. The main objective of this paper is to propose a procedure for segmenting skid resistance data, based on existing data segmentation methods, that is efficient and fulfills road management requirements. The proposed procedure uses the leverage method to identify outlier data, the CUSUM method to accomplish initial data segmentation, and a statistical method to group consecutive segments that are statistically similar. The statistical method applies Student's t-test of mean equality, along with analysis of variance and the Tukey test for the multiple comparison of means. The proposed procedure was applied to a sample of skid resistance data measured with SCRIM (Sideway-force Coefficient Routine Investigation Machine) on a 4.2 km section of Chilean road and was compared to conventional segmentation methods. Results showed that the proposed procedure is more efficient than the conventional segmentation procedures, achieving the minimum weighted sum of squared errors (SSEp) with all the identified segments statistically different. Due to its mathematical basis, the proposed procedure can be easily adapted and programmed for use in road safety management. (author)
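
    The CUSUM stage of the procedure can be sketched as follows. The synthetic skid-resistance levels and the boundary rule (slope-sign changes of the CUSUM curve) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def cusum_boundaries(x):
    """Candidate homogeneous-segment boundaries from the CUSUM curve.

    The cumulative sum of deviations from the global mean changes slope
    wherever the local level of the series changes, so boundaries are
    placed where the slope sign of that curve changes.
    """
    z = np.cumsum(x - np.mean(x))                # CUSUM curve
    s = np.sign(np.append(z[0], np.diff(z)))     # slope sign = sign(x - mean)
    return [i for i in range(1, len(s)) if s[i] != s[i - 1]]

# Synthetic skid-resistance profile (assumed values): three homogeneous
# segments with mean levels 0.55, 0.40 and 0.60.
x = np.array([0.55] * 10 + [0.40] * 10 + [0.60] * 10)
print(cusum_boundaries(x))  # → [10, 20]
```

    The statistical grouping stage would then test whether adjacent candidate segments have equal means before declaring them distinct.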

  13. Difference in method of administration did not significantly impact item response

    DEFF Research Database (Denmark)

    Bjorner, Jakob B; Rose, Matthias; Gandek, Barbara

    2014-01-01

    PURPOSE: To test the impact of method of administration (MOA) on the measurement characteristics of items developed in the Patient-Reported Outcomes Measurement Information System (PROMIS). METHODS: Two non-overlapping parallel 8-item forms from each of three PROMIS domains (physical function… assistant (PDA), or personal computer (PC) on the Internet, and a second form by PC, in the same administration. Structural invariance, equivalence of item responses, and measurement precision were evaluated using confirmatory factor analysis and item response theory methods. RESULTS: Multigroup… levels in IVR, PQ, or PDA administration as compared to PC. Availability of large item response theory-calibrated PROMIS item banks allowed for innovations in study design and analysis.

  14. A Proposal to Build an Education Research and Development Program: The Kamehameha Early Education Project Proposal. Technical Report #3.

    Science.gov (United States)

    Gallimore, Ronald; And Others

    This report summarizes the programmatic features of a proposal for the Kamehameha Early Education Project (KEEP), a program aimed at the development, demonstration, and dissemination of methods for improving the education of Hawaiian and part-Hawaiian children. A brief description of the proposed project goals, structure, organization, and…

  15. New Jersey proposes rule reducing NOx emissions

    International Nuclear Information System (INIS)

    Anon.

    1993-01-01

    The New Jersey Department of Environmental Protection and Energy has proposed a rule requiring utility and industrial sources to significantly reduce their emission levels of nitrogen oxides (NOx). If approved, it will be the first major rule mandated by the Clean Air Act Amendments of 1990 to affect New Jersey's stationary sources of these air pollutants - primarily electric generating utilities and other large fossil fuel burning facilities. The proposed rule requires all facilities with the potential to emit 25 tons or more of NOx each year to install reasonably available control technology by May 30, 1995. According to Richard Sinding, the environment and energy agency's assistant commissioner for policy and planning, the rule will likely require installation of low-NOx burners or other modifications to the combustion process. Sinding says the proposed rule will reduce the State's NOx emissions by approximately 30,000 tons a year, roughly 30 percent from current levels from these stationary sources. The pollution prevention measures are estimated to cost approximately $1,000 for each ton of NOx removed. The state energy agency estimates the average residential utility customer will see an increase in the monthly electric bill of about 50 cents. The agency said the proposed regulation includes provisions to make implementation more flexible and less costly for achieving the NOx reductions. It has approved the use of natural gas during the ozone season if low-NOx burners are not available. Additionally, emissions may be averaged from all units at the same utility or company location, effectively allowing a company to select the most cost-effective method of achieving the required emissions reductions.

  16. Preconditioned iterative methods for space-time fractional advection-diffusion equations

    Science.gov (United States)

    Zhao, Zhi; Jin, Xiao-Qing; Lin, Matthew M.

    2016-08-01

    In this paper, we propose practical numerical methods for solving a class of initial-boundary value problems of space-time fractional advection-diffusion equations. First, we propose an implicit method based on two-sided Grünwald formulae and discuss its stability and consistency. Then, we develop the preconditioned generalized minimal residual (preconditioned GMRES) method and the preconditioned conjugate gradient normal residual (preconditioned CGNR) method with easily constructed preconditioners. Importantly, because the resulting systems are Toeplitz-like, the fast Fourier transform can be applied to significantly reduce the computational cost. We perform numerical experiments to demonstrate the efficiency of our preconditioners, even in cases with variable coefficients.
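
    The FFT speedup rests on embedding the Toeplitz matrix in a circulant, which the FFT diagonalizes. A minimal sketch follows; the Grünwald-like lower-triangular weights are chosen for illustration only.

```python
import numpy as np

def toeplitz_matvec_fft(first_col, first_row, v):
    """Multiply an n x n Toeplitz matrix by a vector in O(n log n).

    The matrix (first column c, first row r) is embedded in a 2n-point
    circulant whose action is a circular convolution, computed by FFT.
    """
    n = len(v)
    # First column of the embedding circulant: [c_0..c_{n-1}, 0, r_{n-1}..r_1].
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    prod = np.fft.ifft(np.fft.fft(c) *
                       np.fft.fft(np.concatenate([v, np.zeros(n)])))
    return prod[:n].real

# Compare against a dense matvec on a small lower-triangular Toeplitz
# system with Grünwald-like decaying weights (illustrative values).
n = 6
col = 1.0 / (1.0 + np.arange(n))      # first column
row = np.zeros(n)
row[0] = col[0]                       # zero first row -> lower triangular
T = np.array([[col[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])
v = np.arange(1.0, n + 1.0)
print(np.allclose(T @ v, toeplitz_matvec_fft(col, row, v)))  # → True
```

    The same circulant structure is what makes circulant-type preconditioners cheap to apply inside GMRES and CGNR.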

  17. Immune Algorithm Complex Method for Transducer Calibration

    Directory of Open Access Journals (Sweden)

    YU Jiangming

    2014-08-01

    Full Text Available As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune-algorithm complex modeling approach is proposed, and simulated studies on the calibration of three multiple-output transducers are made using the developed complex modeling. The simulated and experimental results show that the immune-algorithm complex modeling approach can significantly improve calibration precision in comparison with traditional calibration methods.

  18. Adaptive significance of root grafting in trees

    Energy Technology Data Exchange (ETDEWEB)

    Loehle, C.; Jones, R.

    1988-12-31

    Root grafting has long been observed in forest trees but the adaptive significance of this trait has not been fully explained. Various authors have proposed that root grafting between trees contributes to mechanical support by linking adjacent root systems. Keeley proposes that this trait would be of greatest advantage in swamps where soils provide poor mechanical support. He provides as evidence a greenhouse study of Nyssa sylvatica Marsh in which seedlings of swamp provenance formed between-individual root grafts more frequently than upland provenance seedlings. In agreement with this within-species study, Keeley observed that arid zone species rarely exhibit grafts. Keeley also demonstrated that vines graft less commonly than trees, and herbs never do. Since the need for mechanical support coincides with this trend, these data seem to support his model. In this paper, the authors explore the mechanisms and ecological significance of root grafting, leading to predictions of root grafting incidence. Some observations support and some contradict the mechanical support hypothesis.

  19. Proposal for element size and time increment selection guideline by 3-D finite element method for elastic waves propagation analysis

    International Nuclear Information System (INIS)

    Ishida, Hitoshi; Meshii, Toshiyuki

    2008-01-01

    This paper proposes a guideline for selecting the element size and time increment in 3-D finite element analysis of elastic wave propagation over long distances in large structures. The element size and time increment are determined by quantitatively evaluating the spurious strain caused by spatial and time discretization, which must vanish on an analysis model undergoing uniform motion. (author)

  20. Website-based PNG image steganography using the modified Vigenere Cipher, least significant bit, and dictionary based compression methods

    Science.gov (United States)

    Rojali, Salman, Afan Galih; George

    2017-08-01

    Along with the development of information technology, various adverse and hard-to-avoid actions are emerging; one such action is data theft. This study therefore discusses cryptography and steganography, which aim to overcome this problem, using the modified Vigenere cipher, least significant bit (LSB), and dictionary-based compression methods. To determine performance, the peak signal-to-noise ratio (PSNR) is used as an objective measure and the mean opinion score (MOS) as a subjective one; performance is also compared with other methods such as spread spectrum and pixel value differencing. The comparison shows that this study provides better performance than the other methods (spread spectrum and pixel value differencing), with MSE values of 0.0191622-0.05275, PSNR of 60.909 to 65.306 for a hidden file size of 18 kb, and MOS values of 4.214 to 4.722, i.e., image quality approaching very good.
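
    The pipeline the abstract describes (encrypt, hide in pixel LSBs, score with PSNR) can be sketched as follows. This uses a plain byte-wise Vigenere cipher and omits the compression step, since the paper's specific modification is not given here; the cover "image" is a random array standing in for PNG pixel data, and the key and message are illustrative.

```python
import numpy as np

def vigenere(data, key, decrypt=False):
    # Plain byte-wise Vigenere; the paper's specific modification is not
    # reproduced here, so this is only a stand-in for the encryption step.
    if isinstance(data, str):
        data = data.encode()
    k, s = key.encode(), (-1 if decrypt else 1)
    return bytes((b + s * k[i % len(k)]) % 256 for i, b in enumerate(data))

def lsb_embed(pixels, payload):
    # Write the payload bits into the least significant bit of each pixel.
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() copies, so the cover is untouched
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def lsb_extract(pixels, nbytes):
    return np.packbits(pixels.flatten()[:nbytes * 8] & 1).tobytes()

def psnr(orig, stego):
    mse = np.mean((orig.astype(float) - stego.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
secret = vigenere("attack at dawn", "LEMON")
stego = lsb_embed(cover, secret)
recovered = vigenere(lsb_extract(stego, len(secret)), "LEMON", decrypt=True)
print(recovered.decode(), round(psnr(cover, stego), 1))
```

    Because LSB embedding changes each touched pixel by at most 1, the PSNR stays high, which is why LSB schemes score well on the objective measure used in the study.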

  1. Reliability Evaluation of Bridges Based on Nonprobabilistic Response Surface Limit Method

    Directory of Open Access Journals (Sweden)

    Xuyong Chen

    2017-01-01

    Full Text Available Due to many uncertainties in the nonprobabilistic reliability assessment of bridges, the limit state function is generally unknown. The traditional nonprobabilistic response surface method involves a lengthy, oscillating iteration process and makes the nonprobabilistic reliability index difficult to solve. This article proposes a nonprobabilistic response surface limit method based on the interval model. The intention of this method is to solve the upper and lower limits of the nonprobabilistic reliability index and to narrow its range; if the range reduces to an acceptable accuracy, the solution is considered convergent and the nonprobabilistic reliability index is obtained. The case study indicates that, compared with the traditional nonprobabilistic response surface method, the proposed method avoids an oscillating iteration process, makes the iteration stable and convergent, reduces the number of iteration steps, and significantly improves computational efficiency and precision. Finally, a nonprobabilistic reliability evaluation process for bridges is built by evaluating the reliability of a three-span PC continuous rigid frame bridge with the proposed method, which proves simpler and more reliable when samples and parameters are lacking in bridge nonprobabilistic reliability evaluation.

  2. Trends and regional variations in provision of contraception methods in a commercially insured population in the United States based on nationally proposed measures.

    Science.gov (United States)

    Law, A; Yu, J S; Wang, W; Lin, J; Lynen, R

    2017-09-01

    Three measures to assess the provision of effective contraception methods among reproductive-aged women have recently been endorsed for national public reporting. Based on these measures, this study examined real-world trends and regional variations of contraceptive provision in a commercially insured population in the United States. Women 15-44 years old with continuous enrollment in each year from 2005 to 2014 were identified from a commercial claims database. In accordance with the proposed measures, percentages of women (a) provided most effective or moderately effective (MEME) methods of contraception and (b) provided a long-acting reversible contraceptive (LARC) method were calculated in two populations: women at risk for unintended pregnancy and women who had a live birth within 3 and 60 days of delivery. During the 10-year period, the percentages of women at risk for unintended pregnancy provided MEME contraceptive methods increased among 15-20-year-olds (24.5%-35.9%) and 21-44-year-olds (26.2%-31.5%), and those provided a LARC method also increased among 15-20-year-olds (0.1%-2.4%) and 21-44-year-olds (0.8%-3.9%). Provision of LARC methods increased most in the North Central and West among both age groups of women. Provision of MEME contraceptives and LARC methods to women who had a live birth within 60 days postpartum also increased across age groups and regions. This assessment indicates an overall trend of increasing provision of MEME contraceptive methods in the commercial sector, albeit with age group and regional variations. If implemented, these proposed measures may have impacts on health plan contraceptive access policy. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Ranking of small scale proposals for water system repair using the Rapid Impact Assessment Matrix (RIAM)

    Energy Technology Data Exchange (ETDEWEB)

    Shakib-Manesh, T.E.; Hirvonen, K.O.; Jalava, K.J.; Ålander, T.; Kuitunen, M.T., E-mail: markku.kuitunen@jyu.fi

    2014-11-15

    Environmental impacts of small scale projects are often assessed poorly, or not assessed at all. This paper examines the usability of the Rapid Impact Assessment Matrix (RIAM) as a tool to prioritize project proposals for small scale water restoration projects in relation to proposals' potential to improve the environment. The RIAM scoring system was used to assess and rank the proposals based on their environmental impacts, the costs of the projects to repair the harmful impacts, and the size of human population living around the sites. A four-member assessment group (The expert panel) gave the RIAM-scores to the proposals. The assumed impacts of the studied projects at the Eastern Finland water systems were divided into the ecological and social impacts. The more detailed assessment categories of the ecological impacts in this study were impacts on landscape, natural state, and limnology. The social impact categories were impacts to recreational use of the area, fishing, industry, population, and economy. These impacts were scored according to their geographical and social significance, their magnitude of change, their character, permanence, reversibility, and cumulativeness. The RIAM method proved to be an appropriate and recommendable method for the small-scale assessment and prioritizing of project proposals. If the assessments are well documented, the RIAM can be a method for easy assessing and comparison of the various kinds of projects. In the studied project proposals there were no big surprises in the results: the best ranks were received by the projects, which were assumed to return watersheds toward their original state.

  4. Ranking of small scale proposals for water system repair using the Rapid Impact Assessment Matrix (RIAM)

    International Nuclear Information System (INIS)

    Shakib-Manesh, T.E.; Hirvonen, K.O.; Jalava, K.J.; Ålander, T.; Kuitunen, M.T.

    2014-01-01

    Environmental impacts of small scale projects are often assessed poorly, or not assessed at all. This paper examines the usability of the Rapid Impact Assessment Matrix (RIAM) as a tool to prioritize project proposals for small scale water restoration projects in relation to proposals' potential to improve the environment. The RIAM scoring system was used to assess and rank the proposals based on their environmental impacts, the costs of the projects to repair the harmful impacts, and the size of human population living around the sites. A four-member assessment group (The expert panel) gave the RIAM-scores to the proposals. The assumed impacts of the studied projects at the Eastern Finland water systems were divided into the ecological and social impacts. The more detailed assessment categories of the ecological impacts in this study were impacts on landscape, natural state, and limnology. The social impact categories were impacts to recreational use of the area, fishing, industry, population, and economy. These impacts were scored according to their geographical and social significance, their magnitude of change, their character, permanence, reversibility, and cumulativeness. The RIAM method proved to be an appropriate and recommendable method for the small-scale assessment and prioritizing of project proposals. If the assessments are well documented, the RIAM can be a method for easy assessing and comparison of the various kinds of projects. In the studied project proposals there were no big surprises in the results: the best ranks were received by the projects, which were assumed to return watersheds toward their original state

  5. A Proposal of a Method to Measure and Evaluate the Effect to Apply External Support Measures for Owners by Construction Management Method, etc

    Science.gov (United States)

    Tada, Hiroshi; Miyatake, Ichiro; Mouri, Junji; Ajiki, Norihiko; Fueta, Toshiharu

    In Japan, various approaches have been taken to ensure the quality of public works and to support the procurement regimes of governmental agencies by utilizing external resources, including procurement support services and the construction management (CM) method. Although these measures to utilize external resources (hereinafter referred to as external support measures) have been discussed, and follow-up surveys have shown their positive effects, the surveys address only the overall effect of a measure as a whole: the effect of each task item has not been examined, and the extent to which a measure met the client's expectations is unknown. Effective future use of external support measures, however, requires knowing why a measure was introduced, what effect was expected for each task item, and to what extent that expectation was fulfilled. Furthermore, it is important to clarify not only the effect relative to the client's expectation (performance), but also the public benefit of the measure (value improvement). From this point of view, there is no established method for measuring the effect of a client's use of external resources. Against this background, this study takes the CM method as an example of an external support measure, proposes a method to measure and evaluate its effect for each task item, and suggests future issues and possible responses, with the aim of contributing to the promotion, improvement, and proper implementation of external support measures.

  6. 48 CFR 715.305 - Proposal evaluation.

    Science.gov (United States)

    2010-10-01

    ... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Source Selection 715.305 Proposal evaluation... perceive no actual or potential conflict of interests. (An acceptable certification appears under ADS...

  7. A method of ECG template extraction for biometrics applications.

    Science.gov (United States)

    Zhou, Xiang; Lu, Yang; Chen, Meng; Bao, Shu-Di; Miao, Fen

    2014-01-01

    ECG has attracted widespread attention as one of the most important non-invasive physiological signals in healthcare-related biometrics, owing to characteristics such as ease of monitoring, individual uniqueness, and important clinical value. This study proposes a dynamic threshold setting method to extract the most stable ECG waveform as the template for the subsequent ECG identification process. With the proposed method, the accuracy of ECG biometrics using dynamic time warping as the dissimilarity measure is significantly improved. Analysis results with a self-built electrocardiogram database show that the proposed method reduced the half total error rate of the ECG biometric system from 3.35% to 1.45%. Its average running time on an Android mobile terminal was around 0.06 seconds, demonstrating acceptable real-time performance.
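
    The dissimilarity measure named in the abstract, dynamic time warping, can be illustrated with a minimal textbook implementation; the signals below are synthetic stand-ins, and the paper's template-extraction thresholds are not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    # Textbook O(len(a) * len(b)) dynamic time warping between 1-D signals.
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0.0, 2.0 * np.pi, 60)
beat = np.sin(t)                  # stand-in for an extracted ECG template
warped = np.sin(t - 0.3)          # the same beat, slightly shifted in time
other = np.sign(np.sin(2.0 * t))  # a genuinely different waveform
d_same, d_diff = dtw_distance(beat, warped), dtw_distance(beat, other)
print(d_same, d_diff)
```

    DTW tolerates the timing jitter between beats of the same person while still separating different waveforms, which is what makes it suitable as the matching step after a stable template has been extracted.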

  8. Analysis of methods for quantitative renography

    International Nuclear Information System (INIS)

    Archambaud, F.; Maksud, P.; Prigent, A.; Perrin-Fayolle, O.

    1995-01-01

    This article reviews the main methods using renography to estimate renal perfusion indices and to quantify differential and global renal function. The review addresses the pathophysiological significance of estimated parameters according to the underlying models and the choice of the radiopharmaceutical. The dependence of these parameters on the region of interest characteristics and on the methods of background and attenuation corrections are surveyed. Some current recommendations are proposed. (authors). 66 refs., 8 figs

  9. Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method

    Directory of Open Access Journals (Sweden)

    Darae Jeong

    2018-01-01

    Full Text Available We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.

  10. Extension of Tom Booth's Modified Power Method for Higher Eigen Modes

    International Nuclear Information System (INIS)

    Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung

    2015-01-01

    A possible technique to obtain even higher modes is suggested, but it is difficult to apply practically. In this paper, a general solution strategy is proposed that extends Tom Booth's modified power method to higher eigenmodes, with no limitation on the number of eigenmodes that can be obtained. It is more practical than the original solution strategy that Tom Booth proposed. The implementation of the method in a Monte Carlo code shows significant advantages compared to the original power method

  11. Proposal for Testing and Validation of Vacuum Ultra-Violet Atomic Laser-Induced Fluorescence as a Method to Analyze Carbon Grid Erosion in Ion Thrusters

    Science.gov (United States)

    Stevens, Richard

    2003-01-01

    Previous investigation under award NAG3-2510 sought to determine the best method of LIF to determine the carbon density in a thruster plume. Initial reports from other groups were ambiguous as to the number of carbon clusters that might be present in the plume of a thruster. Carbon clusters would certainly affect the ability to perform LIF; if they were the dominant species, then perhaps the LIF method should target clusters. The results of quadrupole mass spectroscopy on sputtered carbon determined that minimal numbers of clusters were sputtered from graphite under impact from keV krypton. There were some investigations in the keV range by other groups that hinted at clusters, but at the time the proposal was presented to NASA, no data from low-energy sputtering were available. Thus, the proposal sought to develop a method to characterize only the population of atoms sputtered from a graphite target in a test cell. Most of the groundwork had been established by the previous two years of investigation. The proposal covering 2003 sought to develop an anti-Stokes Raman shifting cell to generate VUV light and test this cell on two different laser systems, ArF and YAG-pumped dye. The second goal was to measure the lowest detectable amounts of carbon atoms by 156.1 nm and 165.7 nm LIF. If equipment was functioning properly, it was expected that these goals would be met easily during the timeframe of the proposal, and that is the reason only modest funding was requested. The PI was only funded at half-time by Glenn during the summer months. All other work time was paid for by Whitworth College. The college also funded a student, Charles Shawley, who worked on the project during the spring.

  12. Test Methods for Evaluating Solid Waste, Physical/Chemical Methods. First Update. (3rd edition)

    International Nuclear Information System (INIS)

    Friedman; Sellers.

    1988-01-01

    The proposed Update is for Test Methods for Evaluating Solid Waste, Physical/Chemical Methods, SW-846, Third Edition. Attached to the report is a list of methods included in the proposed update indicating whether each method is a new method, a partially revised method, or a totally revised method. Do not discard or replace any of the current pages in the SW-846 manual until the proposed Update I package is promulgated. Until promulgation of the update package, the methods in the update package are not officially part of the SW-846 manual and thus do not carry the status of EPA-approved methods. In addition to the proposed Update, six finalized methods are included for immediate inclusion in the Third Edition of SW-846. Four methods, originally proposed October 1, 1984, will be finalized in a soon-to-be-released rulemaking. They are, however, being submitted to subscribers for the first time in the update. These methods are 7211, 7381, 7461, and 7951. Two other methods were finalized in the 2nd Edition of SW-846. They were inadvertently omitted from the 3rd Edition and are not being proposed as new. These methods are 7081 and 7761

  13. TODIM Method for Single-Valued Neutrosophic Multiple Attribute Decision Making

    Directory of Open Access Journals (Sweden)

    Dong-Sheng Xu

    2017-10-01

    Full Text Available Recently, TODIM has been used to solve multiple attribute decision making (MADM) problems. Single-valued neutrosophic sets (SVNSs) are useful tools to depict the uncertainty of MADM. In this paper, we extend the TODIM method to MADM with single-valued neutrosophic numbers (SVNNs). Firstly, the definition, comparison, and distance of SVNNs are briefly presented, and the steps of the classical TODIM method for MADM problems are introduced. Then, the extended classical TODIM method is proposed to deal with MADM problems with SVNNs; its significant characteristic is that it fully considers the decision makers' bounded rationality, a real feature of decision making. Furthermore, we extend the proposed model to interval neutrosophic sets (INSs). Finally, a numerical example is presented.
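
    The classical TODIM steps the abstract builds on can be sketched for crisp numbers. In the paper's setting, the plain difference between ratings below would be replaced by a distance between SVNNs; here the decision matrix, weights, and loss-attenuation factor theta are all illustrative.

```python
import numpy as np

def todim(P, w, theta=1.0):
    # Classic TODIM on a crisp decision matrix P (alternatives x criteria).
    # The neutrosophic extension would replace the plain difference below
    # with a distance between single-valued neutrosophic numbers.
    wr = w / w.max()  # weights relative to the reference criterion
    sw = wr.sum()
    n = P.shape[0]
    delta = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for c in range(P.shape[1]):
                d = P[i, c] - P[j, c]
                if d > 0:    # perceived gain
                    delta[i, j] += np.sqrt(wr[c] * d / sw)
                elif d < 0:  # perceived loss, attenuated by theta
                    delta[i, j] -= np.sqrt(sw * (-d) / wr[c]) / theta
    score = delta.sum(axis=1)
    return (score - score.min()) / (score.max() - score.min())

P = np.array([[0.7, 0.6, 0.8],
              [0.5, 0.9, 0.4],
              [0.6, 0.5, 0.6]])
w = np.array([0.4, 0.35, 0.25])
print(todim(P, w))  # normalized global values: best = 1, worst = 0
```

    The asymmetry between the gain and loss branches is what encodes the decision makers' bounded rationality: losses are weighted more heavily than equally sized gains.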

  14. Understanding Creativity Methods in Design

    DEFF Research Database (Denmark)

    Biskjaer, Michael Mose; Dalsgaard, Peter; Halskov, Kim

    2017-01-01

    This paper contributes an analytical framework to improve understanding of the composition of recognized creativity methods used in design. Based on an extensive literature review, our framework synthesizes key concepts from design and particularly creativity research, and is further supported… by significant experience with creativity methods in design. We propose that nine concepts are relevant for analyzing creativity methods in design: process structure, materials, tools, combination, metaphor, analogy, framing, divergence, and convergence. To test their relevance as components of an analytical… are composed, how and why they work, and how they potentially may be tweaked or refined for enhanced deployment in design…

  15. Demodulation method for tilted fiber Bragg grating refractometer with high sensitivity

    Science.gov (United States)

    Pham, Xuantung; Si, Jinhai; Chen, Tao; Wang, Ruize; Yan, Lihe; Cao, Houjun; Hou, Xun

    2018-05-01

    In this paper, we propose a demodulation method for refractive index (RI) sensing with tilted fiber Bragg gratings (TFBGs). It operates by monitoring the TFBG cladding mode resonance "cut-off wavelengths." The idea of a "cut-off wavelength" and its determination method are introduced. The RI sensitivities of TFBGs are significantly enhanced in certain RI ranges by using our demodulation method. The temperature-induced cross sensitivity is eliminated. We also demonstrate a parallel-double-angle TFBG (PDTFBG), in which two individual TFBGs are inscribed in the fiber core in parallel using a femtosecond laser and a phase mask. The RI sensing range of the PDTFBG is significantly broader than that of a conventional single-angle TFBG. In addition, its RI sensitivity can reach 1023.1 nm/refractive index unit in the 1.4401-1.4570 RI range when our proposed demodulation method is used.

  16. Path Planning with a Lazy Significant Edge Algorithm (LSEA

    Directory of Open Access Journals (Sweden)

    Joseph Polden

    2013-04-01

    Full Text Available Probabilistic methods have proven effective for robotic path planning in geometrically complex environments. In this paper, we propose a novel approach, which utilizes a specialized roadmap expansion phase, to improve lazy probabilistic path planning. This expansion phase analyses roadmap connectivity information to bias sampling towards objects in the workspace that have not yet been navigated by the robot. A new method to reduce the number of samples required to navigate narrow passages is also proposed and tested. Experimental results show that the new algorithm is more efficient than traditional path planning methodologies: it generated solutions for a variety of path planning problems faster, using fewer samples to arrive at a valid solution.
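
    The probabilistic roadmap baseline that such methods improve on can be sketched minimally. This is not the paper's lazy significant-edge algorithm: it checks every edge eagerly and has no biased expansion phase, and the obstacle, sample count, and connection radius are arbitrary illustrative choices.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
obstacle_c, obstacle_r = np.array([0.5, 0.5]), 0.2  # a single disc obstacle

def segment_clear(p, q, steps=20):
    # Collision check: sample points along the edge and test the disc.
    for t in np.linspace(0.0, 1.0, steps):
        if np.linalg.norm(p + t * (q - p) - obstacle_c) <= obstacle_r:
            return False
    return True

# Sample collision-free configurations in the unit square.
start, goal = np.array([0.05, 0.05]), np.array([0.95, 0.95])
pts = rng.uniform(0.0, 1.0, size=(300, 2))
pts = pts[np.linalg.norm(pts - obstacle_c, axis=1) > obstacle_r]
nodes = np.vstack([start, goal, pts])

# Connect nearby nodes whose joining edge is collision-free.
adj = {i: [] for i in range(len(nodes))}
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        if np.linalg.norm(nodes[i] - nodes[j]) < 0.2 and segment_clear(nodes[i], nodes[j]):
            adj[i].append(j)
            adj[j].append(i)

# Breadth-first search from start (node 0) to goal (node 1).
prev, frontier = {0: None}, deque([0])
while frontier:
    u = frontier.popleft()
    for v in adj[u]:
        if v not in prev:
            prev[v] = u
            frontier.append(v)

path = []
u = 1 if 1 in prev else None
while u is not None:
    path.append(u)
    u = prev[u]
print("path found:", bool(path), "waypoints:", len(path))
```

    A lazy variant would defer the `segment_clear` calls until a candidate path is found, validating only the edges actually on that path, which is where the efficiency gains of lazy planners come from.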

  17. Bullying in Academe: Prevalent, Significant, and Incessant

    Science.gov (United States)

    Cassell, Macgorine A.

    2011-01-01

    This paper examines the top-down perspective of bullying and mobbing of professors by analyzing why it is prevalent, significant, and incessant and then proposes a framework to produce a caring, respectful, and safe environment for professors to engage in their teaching, scholarship, and service. The author suggests that the failure of…

  18. Statistical methods to detect novel genetic variants using publicly available GWAS summary data.

    Science.gov (United States)

    Guo, Bin; Wu, Baolin

    2018-03-01

    We propose statistical methods to detect novel genetic variants using only genome-wide association studies (GWAS) summary data, without access to raw genotype and phenotype data. With more and more summary data being posted for public access in the post-GWAS era, the proposed methods are very useful in practice for identifying additional interesting genetic variants and shedding light on the underlying disease mechanism. We illustrate the utility of our proposed methods with an application to GWAS meta-analysis results of fasting glucose from the international MAGIC consortium. We found several novel genome-wide significant loci that are worth further study. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Adobe Boxes: Locating Object Proposals Using Object Adobes.

    Science.gov (United States)

    Fang, Zhiwen; Cao, Zhiguo; Xiao, Yang; Zhu, Lei; Yuan, Junsong

    2016-09-01

    Despite the previous efforts of object proposals, the detection rates of the existing approaches are still not satisfactory. To address this, we propose Adobe Boxes to efficiently locate potential objects with fewer proposals, by searching for object adobes: the salient object parts that are easy to perceive. Because of the visual difference between an object and its surroundings, an object adobe obtained from a local region has a high probability of being part of an object, and is capable of depicting the locative information of the proto-object. Our approach comprises three main procedures. First, coarse object proposals are acquired by employing randomly sampled windows. Then, based on local-contrast analysis, object adobes are identified within the enlarged bounding boxes that correspond to the coarse proposals. The final object proposals are obtained by converging the bounding boxes to tightly surround the object adobes. Meanwhile, our object adobes can also refine the detection rate of most state-of-the-art methods as a refinement approach. Extensive experiments on four challenging datasets (PASCAL VOC2007, VOC2010, VOC2012, and ILSVRC2014) demonstrate that the detection rate of our approach generally outperforms the state-of-the-art methods, especially with a relatively small number of proposals. The average time consumed on one image is about 48 ms, which nearly meets the real-time requirement.

  20. A volume of fluid method based on multidimensional advection and spline interface reconstruction

    International Nuclear Information System (INIS)

    Lopez, J.; Hernandez, J.; Gomez, P.; Faura, F.

    2004-01-01

    A new volume of fluid method for tracking two-dimensional interfaces is presented. The method involves a multidimensional advection algorithm based on the use of edge-matched flux polygons to integrate the volume fraction evolution equation, and a spline-based reconstruction algorithm. The accuracy and efficiency of the proposed method are analyzed using different tests, and the results are compared with those obtained recently by other authors. Despite its simplicity, the proposed method represents a significant improvement, and compares favorably with other volume of fluid methods as regards the accuracy and efficiency of both the advection and reconstruction steps

  1. Discovering significant evolution patterns from satellite image time series.

    Science.gov (United States)

    Petitjean, François; Masseglia, Florent; Gançarski, Pierre; Forestier, Germain

    2011-12-01

    Satellite Image Time Series (SITS) provide us with precious information on land cover evolution. By studying these series of images we can both understand the changes of specific areas and discover global phenomena that spread over larger areas. Changes that occur throughout the sensing time can spread over very long periods and may have different start and end times depending on the location, which complicates the mining and the analysis of series of images. This work focuses on frequent sequential pattern mining (FSPM) methods, since this family of methods fits the above-mentioned issues: it finds the most frequent evolution behaviors and can extract long-term changes as well as short-term ones, whenever a change may start and end. However, applying FSPM methods to SITS implies confronting two main challenges, related to the characteristics of SITS and the domain's constraints. First, satellite images associate multiple measures with a single pixel (the radiometric levels of different wavelengths corresponding to infra-red, red, etc.), which makes the search space multi-dimensional and thus requires specific mining algorithms. Furthermore, the non-evolving regions, which are the vast majority and overwhelm the evolving ones, challenge the discovery of these patterns. We propose a SITS mining framework that enables the discovery of these patterns despite these constraints and characteristics. Our proposal is inspired by FSPM and provides a relevant visualization principle. Experiments carried out on 35 images sensed over 20 years show that the proposed approach makes it possible to extract relevant evolution behaviors.
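
    The core FSPM idea, keeping only evolution patterns whose support across pixels exceeds a threshold, can be shown on a toy symbolic version of a SITS. The state labels, sequences, and minimum support below are illustrative, and real FSPM algorithms (e.g. PrefixSpan-style mining) handle patterns of any length, not just pairs.

```python
from itertools import combinations

# Toy symbolic SITS: one discretized state sequence per pixel over four dates.
sequences = [
    ["bare", "crop", "crop", "urban"],
    ["bare", "crop", "urban", "urban"],
    ["water", "water", "water", "water"],
    ["bare", "bare", "crop", "urban"],
]

def frequent_pairs(seqs, minsup):
    # Support = number of sequences containing the ordered pair as a
    # (not necessarily contiguous) subsequence, counted once per sequence.
    support = {}
    for s in seqs:
        seen = {(s[i], s[j]) for i, j in combinations(range(len(s)), 2)}
        for p in seen:
            support[p] = support.get(p, 0) + 1
    return {p: n for p, n in support.items() if n >= minsup}

print(frequent_pairs(sequences, minsup=3))
```

    Counting each pattern at most once per pixel is what lets frequent patterns such as bare-to-urban emerge even though the transitions start and end at different dates in different locations.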

  2. Method Points: towards a metric for method complexity

    Directory of Open Access Journals (Sweden)

    Graham McLeod

    1998-11-01

    Full Text Available A metric for method complexity is proposed as an aid to choosing between competing methods, as well as in validating the effects of method integration or the products of method engineering work. It is based upon a generic method representation model previously developed by the author and adaptation of concepts used in the popular Function Point metric for system size. The proposed technique is illustrated by comparing two popular I.E. deliverables with counterparts in the object oriented Unified Modeling Language (UML). The paper recommends ways to improve the practical adoption of new methods.

  3. A proposed method for accurate 3D analysis of cochlear implant migration using fusion of cone beam CT

    Directory of Open Access Journals (Sweden)

    Guido eDees

    2016-01-01

    Full Text Available Introduction: The goal of this investigation was to compare fusion of sequential cone beam CT volumes to the gold standard (fiducial registration) in order to analyze clinical CI migration with high accuracy in three dimensions. Materials and Methods: Paired time-lapsed cone beam CT volumes were acquired from five human cadaver temporal bones and one human subject. These volumes were fused using 3D Slicer 4 and BRAINSFit software. Using a gold-standard fiducial technique, the accuracy, robustness, and performance time of the fusion process were assessed. Results: The proposed fusion protocol achieves a sub-voxel mean Euclidean distance of 0.05 millimeter in human cadaver temporal bones and 0.16 millimeter when applied to the described in vivo human synthetic data set in over 95% of all fusions. Performance times are less than two minutes. Conclusion: A new and validated method based on existing techniques is described, which could be used to accurately quantify migration of cochlear implant electrodes.

  4. Significance of steel electrical resistance method in the evaluation of reinforcement corrosion in cementitious systems

    Directory of Open Access Journals (Sweden)

    Krajci, L.

    2004-06-01

    Full Text Available The suitable detection system of steel reinforcement corrosion in concrete structures contributes to the reduction of their maintenance costs. Method of steel electrical resistance represents non-destructive monitoring of steel in cementitious systems. Specially prepared and arranged test specimen of steel as a corrosion sensor is embedded in mortar specimen. Verification tests of this method based on chloride corrosion of steel in mortars as well as its visual inspection are introduced. Significance of steel electrical resistance method lies in the expression of steel corrosion by these quantitative parameters: reduction of cross-section of steel, thickness of corroded layer and loss of weight of steel material. This method is an integral method that allows the indirect determination of mentioned corrosion characteristics. The comparison of verified method with gravimetric evaluation of steel corrosion gives a good correspondence. Test results on mortars with calcium chloride dosages between 0.5% and 4.0% by weight of cement prove high sensitiveness and reliability of steel electrical resistance method.

    The use of a detection system for reinforcement corrosion in concrete structures can contribute to reducing their maintenance costs. The steel electrical resistance method consists of non-destructive monitoring performed on the steel in cementitious systems. A specially prepared and fixed detection system is placed inside the mortar specimen, acting as a corrosion sensor. This work presents verification tests of this method, together with visual inspections, on mortars subjected to chloride-induced reinforcement corrosion. The effectiveness of this steel electrical resistance method is expressed, for reinforcement corrosion, in terms of the following quantitative parameters: reduction of the cross-section of the

  5. A Novel Polygonal Finite Element Method: Virtual Node Method

    Science.gov (United States)

    Tang, X. H.; Zheng, C.; Zhang, J. H.

    2010-05-01

    Polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a great number of integration points have to be used to obtain sufficiently exact results, which increases the computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method can be listed as follows: (1) it is a PFEM with polynomial form, so Hammer and Gauss integration can be used directly to obtain exact numerical integration; (2) the shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, VNM achieves significantly better results than the Wachspress method and the Mean Value method. Moreover, it is observed that VNM achieves better results than triangular 3-node elements in the accuracy test.

  6. Scientific Opinion on a composting method proposed by Portugal as a heat treatment to eliminate pine wood nematode from the bark of pine trees

    DEFF Research Database (Denmark)

    Baker, R.; Candresse, T.; Dormannsné Simon, E.

    2010-01-01

    Following a request from the European Commission, the Panel on Plant Health was asked to deliver a scientific opinion on the appropriateness of a composting method proposed by Portugal as a heat treatment to eliminate pine wood nematode (PWN), Bursaphelenchus xylophilus (Steiner and Buhrer) Nickle......) insufficient evidence on the sampling methodology is provided to determine the reliability of the testing method provided by the Portuguese document to determine freedom from PWN. Although there is potential for development of a composting method as a heat treatment to eliminate PWN from bark of pine trees...

  7. A proposal for experimental homework

    Science.gov (United States)

    Rodríguez, Eduardo E.

    1998-10-01

    Homework in physics courses usually deals with conceptual inquiries or the numerical solution of theoretical problems. However, experimental homework is rather uncommon. I propose that certain physical situations, properly simulated, may be useful to encourage students to seek a solution by following the steps of the "experimental method."

  8. Proposal of a New Method for Neutron Dosimetry Based on Spectral Information Obtained by Application of Artificial Neural Networks

    International Nuclear Information System (INIS)

    Fehrenbacher, G.; Schuetz, R.; Hahn, K.; Sprunck, M.; Cordes, E.; Biersack, J.P.; Wahl, W.

    1999-01-01

    A new method for the monitoring of neutron radiation is proposed. It is based on the determination of spectral information on the neutron field in order to derive dose quantities like the ambient dose equivalent, the dose equivalent, or other dose quantities which depend on the neutron energy. The method uses a multi-element system consisting of converter type silicon detectors. The unfolding procedure is based on an artificial neural network (ANN). The response function of each element is determined by a computational model considering the neutron interaction with the dosemeter layers and the subsequent transport of produced ions. An example is given for a multi-element system. The ANN is trained by a given set of neutron spectra and then applied to count responses obtained in neutron fields. Four examples of spectra unfolded using the ANN are presented. (author)
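
The counts-to-spectrum unfolding step described can be sketched with a small multilayer perceptron trained on synthetic spectra. The response matrix below is a random placeholder for the modelled converter-type silicon detector responses; only the overall mapping from element counts back to a spectrum is illustrated.

```python
# Hedged sketch of the unfolding idea: a synthetic response matrix and random
# training spectra stand in for the transport-model detector responses.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_elements, n_bins = 8, 16

# Synthetic response matrix: each detector element responds differently
# across neutron energy bins (placeholder for the modelled responses).
R = rng.uniform(0.1, 1.0, size=(n_elements, n_bins))

# Training set: random fluence spectra and the counts they would produce.
spectra = rng.uniform(0.0, 1.0, size=(500, n_bins))
counts = spectra @ R.T

# Train the ANN to map measured counts back to the spectrum.
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
ann.fit(counts, spectra)

# Unfold a new measurement; a dose quantity would then be an
# energy-weighted sum over the unfolded spectrum.
true = rng.uniform(0.0, 1.0, size=n_bins)
est = ann.predict((true @ R.T).reshape(1, -1))[0]
```
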

  9. Mapping subsurface pathways for contaminant migration at a proposed low level waste disposal site using electromagnetic methods

    International Nuclear Information System (INIS)

    Pin, F.G.; Ketelle, R.H.

    1984-01-01

    Electromagnetic methods have been used to measure apparent terrain conductivity in the downstream portion of a watershed in which a waste disposal site is proposed. At that site, the pathways for waste migration in ground water are controlled by subsurface channels. The channels are identified using isocurves of measured apparent conductivity. Two upstream channel branches are found to merge into a single downstream channel which constitutes the main drainage path out of the watershed. The identification and mapping of the ground water pathways is an important contribution to the site characterization study and the pathways analysis. The direct applications of terrain conductivity mapping to the planning of the monitoring program, the hydrogeological testing, and the modeling study are demonstrated. 7 references, 4 figures

  10. A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors

    Science.gov (United States)

    Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian

    2017-09-01

    A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. This method relies on the assumption that, for each band, there will be ground objects with similar reflectance forming uniform regions once a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from an airborne experiment are used to verify and evaluate the proposed method, and the results show that stripes in the scenes have been well corrected without any significant information loss, and the non-uniformity is less than 0.5%. In addition, the proposed method is compared to two other regular methods, with all three evaluated on their adaptability to various scenes, non-uniformity, roughness and spectral fidelity. It turns out that the proposed method shows strong adaptability, high accuracy and efficiency.
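
A minimal sketch of the scene-statistics idea: if each cross-track detector sees statistically similar content over enough scan lines, per-detector gains and offsets can be equalised by moment matching. This is a simpler cousin of the sorting-based uniform-region extraction in the abstract, not the proposed method itself.

```python
# Moment-matching NUC sketch (an assumption-laden simplification of
# scene-based NUC, not the paper's sorting-based algorithm).
import numpy as np

def moment_matching_nuc(lines):
    """lines: (n_scan_lines, n_detectors) raw push-broom band image.
    Assumes each detector column sees statistically similar scene content."""
    col_mean = lines.mean(axis=0)
    col_std = lines.std(axis=0)
    gain = col_std.mean() / col_std              # equalise per-detector gain
    offset = col_mean.mean() - gain * col_mean   # and per-detector offset
    return lines * gain + offset

# Simulate striping: a true scene plus per-detector gain/offset errors.
rng = np.random.default_rng(1)
scene = rng.normal(100.0, 10.0, size=(2000, 64))
striped = scene * rng.normal(1.0, 0.05, 64) + rng.normal(0.0, 3.0, 64)
corrected = moment_matching_nuc(striped)
```

After correction the column statistics agree, which is exactly what removes the along-track stripes.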

  11. RS-SNP: a random-set method for genome-wide association studies

    Directory of Open Access Journals (Sweden)

    Mukherjee Sayan

    2011-03-01

    Full Text Available Abstract Background The typical objective of genome-wide association (GWA) studies is to identify single-nucleotide polymorphisms (SNPs) and corresponding genes with the strongest evidence of association (the 'most-significant SNPs/genes' approach). Borrowing ideas from micro-array data analysis, we propose a new method, named RS-SNP, for detecting sets of genes enriched in SNPs moderately associated to the phenotype. RS-SNP assesses whether the number of significant SNPs, with p-value P ≤ α, belonging to a given SNP set is statistically significant. The rationale of the proposed method is that two kinds of null hypotheses are taken into account simultaneously. In the first null model the genotype and the phenotype are assumed to be independent random variables, and the null distribution is the probability that the number of significant SNPs in the set is greater than that observed by chance. The second null model assumes that the number of significant SNPs in the set depends on the size of the set and not on the identity of the SNPs in it. Statistical significance is assessed using non-parametric permutation tests. Results We applied RS-SNP to the Crohn's disease (CD) data set collected by the Wellcome Trust Case Control Consortium (WTCCC) and compared the results with GENGEN, an approach recently proposed in the literature. The enrichment analysis using RS-SNP and the set of pathways contained in the MSigDB C2 CP pathway collection highlighted 86 pathways rich in SNPs weakly associated to CD. Of these, 47 were also indicated to be significant by GENGEN. Similar results were obtained using the MSigDB C5 pathway collection. Many of the pathways found to be enriched by RS-SNP have a well-known connection to CD and often with inflammatory diseases. Conclusions The proposed method is a valuable alternative to other techniques for enrichment analysis of SNP sets. It is well founded from a theoretical and statistical perspective. Moreover, the experimental comparison with GENGEN highlights that it is
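
The second null model (random sets of the same size) can be sketched as a permutation test on the count of significant SNPs; the data and thresholds below are toy values, not the WTCCC analysis.

```python
# Hedged sketch of the "size, not identity" null model: is the number of
# significant SNPs in a set larger than for random sets of the same size?
import numpy as np

def set_enrichment_pvalue(pvals, set_idx, alpha=0.05, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    observed = np.sum(pvals[set_idx] <= alpha)
    k = len(set_idx)
    hits = 0
    for _ in range(n_perm):
        perm = rng.choice(len(pvals), size=k, replace=False)
        if np.sum(pvals[perm] <= alpha) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # permutation p-value

# Toy data: 1000 SNPs, with one artificially enriched set of 20.
rng = np.random.default_rng(2)
pvals = rng.uniform(size=1000)
pvals[:20] = rng.uniform(0, 0.01, size=20)   # the enriched set
p_enriched = set_enrichment_pvalue(pvals, np.arange(20))
p_random = set_enrichment_pvalue(pvals, np.arange(500, 520))
```
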

  12. 78 FR 22576 - Application and Amendment to Facility Operating License Involving Proposed No Significant Hazards...

    Science.gov (United States)

    2013-04-16

    ... sign documents and access the E-Submittal server for any proceeding in which it is participating; and... conditions associated with reactor core power levels up to 70 percent Rated Thermal Power (2406.6 megawatts... (unless this document describes a different method for submitting comments on a specific subject): Federal...

  13. DAPs: Deep Action Proposals for Action Understanding

    KAUST Repository

    Escorcia, Victor

    2016-09-17

    Object proposals have contributed significantly to recent advances in object understanding in images. Inspired by the success of this approach, we introduce Deep Action Proposals (DAPs), an effective and efficient algorithm for generating temporal action proposals from long videos. We show how to take advantage of the vast capacity of deep learning models and memory cells to retrieve from untrimmed videos temporal segments, which are likely to contain actions. A comprehensive evaluation indicates that our approach outperforms previous work on a large scale action benchmark, runs at 134 FPS making it practical for large-scale scenarios, and exhibits an appealing ability to generalize, i.e. to retrieve good quality temporal proposals of actions unseen in training.

  14. A multilevel correction adaptive finite element method for Kohn-Sham equation

    Science.gov (United States)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method with a multilevel correction technique is proposed for solving the Kohn-Sham equation. In the method, the Kohn-Sham equation is solved on a fixed and appropriately coarse mesh with the finite element method, in which the finite element space is successively improved by solving derived boundary value problems on a series of adaptively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is avoided effectively, and solving the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration can be obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  15. Adjusted Empirical Likelihood Method in the Presence of Nuisance Parameters with Application to the Sharpe Ratio

    Directory of Open Access Journals (Sweden)

    Yuejiao Fu

    2018-04-01

    Full Text Available The Sharpe ratio is a widely used risk-adjusted performance measurement in economics and finance. Most of the known statistical inferential methods devoted to the Sharpe ratio are based on the assumption that the data are normally distributed. In this article, without making any distributional assumption on the data, we develop the adjusted empirical likelihood method to obtain inference for a parameter of interest in the presence of nuisance parameters. We show that the log adjusted empirical likelihood ratio statistic asymptotically follows a chi-square distribution. The proposed method is applied to obtain inference for the Sharpe ratio. Simulation results illustrate that the proposed method is comparable to Jobson and Korkie’s method (1981) and outperforms the empirical likelihood method when the data are from a symmetric distribution. In addition, when the data are from a skewed distribution, the proposed method significantly outperforms all other existing methods. A real-data example is analyzed to exemplify the application of the proposed method.
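
For reference, the plain Sharpe ratio estimate and its normal-theory standard error, the benchmark the adjusted empirical likelihood method is compared against, can be sketched as follows; the AEL machinery itself is not reproduced here.

```python
# Sharpe ratio point estimate and its asymptotic standard error under
# i.i.d. normal returns (the normality assumption the AEL method drops).
import numpy as np

def sharpe_ratio(returns, risk_free=0.0):
    excess = np.asarray(returns) - risk_free
    return excess.mean() / excess.std(ddof=1)

def sharpe_se_normal(sr, n):
    # Normal-theory asymptotic variance: (1 + SR^2 / 2) / n.
    return np.sqrt((1.0 + 0.5 * sr**2) / n)

rng = np.random.default_rng(3)
returns = rng.normal(0.01, 0.05, size=250)   # simulated daily returns
sr = sharpe_ratio(returns)
se = sharpe_se_normal(sr, len(returns))
```
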

  16. Recommendation advertising method based on behavior retargeting

    Science.gov (United States)

    Zhao, Yao; YIN, Xin-Chun; CHEN, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce. Ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting, which can reduce the loss of ad clicks caused by external factors and can track changes in the user's interests over time. Experiments show that our new method has a significant effect and can further be applied to online systems.

  17. Quality Assurance in Higher Education: Proposals for Consultation.

    Science.gov (United States)

    Higher Education Funding Council for England, Bristol.

    This document sets out for consultation proposals for a revised method for quality assurance of teaching and learning in higher education. The proposals cover: (1) the objectives and principles of quality assurance; (2) an approach to quality assurance based on external audit principles; (3) the collection and publication of information; (4)…

  18. Proposed test method for determining discharge rates from water closets

    DEFF Research Database (Denmark)

    Nielsen, V.; Fjord Jensen, T.

    At present, the rates at which discharge takes place from sanitary appliances are mostly known only in the form of estimated average values. SBI has developed a measuring method enabling determination of the exact rate of discharge from a sanitary appliance as a function of time. The method depends

  19. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel ‘V-plot’ methodology to display accuracy values

    Science.gov (United States)

    Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Background Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test’s performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy as well as propose a sample-independent methodology to calculate and display accuracy of diagnostic tests. Methods and findings We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Cholrapid and Cholgold) by generating samples with statistical software and (1) keeping the numerical relationship between methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). Conclusion No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test performance against a reference gold standard. PMID:29387424
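
The paper's core point can be reproduced in a few lines: hold the Cholrapid-versus-Cholgold measurement relationship fixed and vary only the sample distribution, and the computed accuracy shifts. The threshold and noise level below are illustrative assumptions.

```python
# Sketch: the same measurement relationship yields different "accuracy"
# depending on how many sample values lie near the decision threshold.
import numpy as np

rng = np.random.default_rng(4)
THRESHOLD = 5.0   # hypothetical cut-off, mmol/L

def accuracy(chol_gold, noise_sd=0.3):
    # Fixed relationship: Cholrapid = Cholgold + measurement noise.
    chol_rapid = chol_gold + rng.normal(0.0, noise_sd, size=chol_gold.size)
    return np.mean((chol_rapid > THRESHOLD) == (chol_gold > THRESHOLD))

# Same Cholrapid-vs-Cholgold relationship, two sample distributions:
far_from_cutoff = rng.normal(6.5, 1.0, size=20000)   # few values near 5.0
near_cutoff = rng.normal(5.0, 0.3, size=20000)       # concentrated at 5.0

acc_far = accuracy(far_from_cutoff)
acc_near = accuracy(near_cutoff)
```

The accuracy gap between the two samples arises purely from the disease severity distribution, which is the sample dependence the V-plot methodology is designed to remove.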

  20. Monju: Current status and proposed improvements

    International Nuclear Information System (INIS)

    Nagata, Takashi

    2001-01-01

    Activities since the Monju reactor accident are described. They include investigation of sodium leak, namely cause of thermocouple well failure and damage caused by sodium combustion. This accident did not affect the safety of the reactor or the integrity of the buildings and structures. Improvements have been proposed to overcome the problems highlighted by the comprehensive safety review of Monju. Improvements of communication are discussed, including incident reporting, public information, corporate culture. The proposed countermeasures against sodium leakage are described in detail. They are as follows: prevention of sodium leakage, early detection of sodium leakage, reduction of sodium spilling and prevention of re-ignition, suppression of moisture release from concrete structures. Replacement of thermocouple wells is proposed, as well as methods of preventing flow induced vibration

  1. A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.

    Science.gov (United States)

    Liu, Chun-Han; Liu, Lian

    2017-05-08

    BACKGROUND Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS We propose a combined method that merges existing methods. Firstly, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expressed method, and the pathway network approach), and differential pathways were evaluated by setting weight thresholds. Subsequently, we combined all pathways by a rank-based algorithm and called the result the combined method. Finally, common differential pathways across two or more of the five methods were selected. RESULTS Pathways obtained from the different methods were also different. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method solves the problem of inconsistent results. In addition, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS We have proposed a novel method that combines four existing methods based on a rank product algorithm, and identified 13 significant differential pathways with it. These differential pathways might provide insight into the treatment and diagnosis of hippocampus AD.
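
The rank-based combination step can be sketched as a rank product: items are reordered by the geometric mean of their ranks across methods. The pathway names and per-method ranks below are invented for illustration.

```python
# Rank-product sketch over pathway rankings (toy ranks, not the paper's data).
import numpy as np

def rank_product(rank_lists):
    """rank_lists: dict method -> {item: rank}; returns the items shared by
    all methods, sorted by the geometric mean of their ranks."""
    items = set.intersection(*(set(r) for r in rank_lists.values()))
    n = len(rank_lists)
    scores = {
        item: np.prod([r[item] for r in rank_lists.values()]) ** (1.0 / n)
        for item in items
    }
    return sorted(scores, key=scores.get)

ranks = {
    "DAVID":   {"cell cycle": 1, "immune system": 3, "metabolism": 2},
    "neaGUI":  {"cell cycle": 2, "immune system": 1, "metabolism": 3},
    "network": {"cell cycle": 1, "immune system": 2, "metabolism": 3},
}
print(rank_product(ranks))
```
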

  2. Statistical significance of cis-regulatory modules

    Directory of Open Access Journals (Sweden)

    Smith Andrew D

    2007-01-01

    Full Text Available Abstract Background It is becoming increasingly important for researchers to be able to scan through large genomic regions for transcription factor binding sites or clusters of binding sites forming cis-regulatory modules. Correspondingly, there has been a push to develop algorithms for the rapid detection and assessment of cis-regulatory modules. While various algorithms for this purpose have been introduced, most are not well suited for rapid, genome scale scanning. Results We introduce methods designed for the detection and statistical evaluation of cis-regulatory modules, modeled as either clusters of individual binding sites or as combinations of sites with constrained organization. In order to determine the statistical significance of module sites, we first need a method to determine the statistical significance of single transcription factor binding site matches. We introduce a straightforward method of estimating the statistical significance of single site matches using a database of known promoters to produce data structures that can be used to estimate p-values for binding site matches. We next introduce a technique to calculate the statistical significance of the arrangement of binding sites within a module using a max-gap model. If the module scanned for has defined organizational parameters, the probability of the module is corrected to account for organizational constraints. The statistical significance of single site matches and the architecture of sites within the module can be combined to provide an overall estimation of statistical significance of cis-regulatory module sites. Conclusion The methods introduced in this paper allow for the detection and statistical evaluation of single transcription factor binding sites and cis-regulatory modules. The features described are implemented in the Search Tool for Occurrences of Regulatory Motifs (STORM) and MODSTORM software.
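
The single-site significance step can be sketched by scoring background (promoter) sequence with a position weight matrix and taking the empirical tail probability of an observed score; the PWM and background here are toy stand-ins for the STORM data structures.

```python
# Empirical p-value for a motif score, estimated from background sequence
# (toy PWM and random background, not the known-promoter database itself).
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def score(pwm, site):
    return sum(pwm[i][BASES[b]] for i, b in enumerate(site))

def empirical_pvalue(pwm, background, observed_score):
    w = len(pwm)
    scores = [score(pwm, background[i:i + w])
              for i in range(len(background) - w + 1)]
    return np.mean([s >= observed_score for s in scores])

# Toy log-odds PWM for a 4-bp motif preferring "ACGT".
pwm = [[2, -1, -1, -1], [-1, 2, -1, -1], [-1, -1, 2, -1], [-1, -1, -1, 2]]
rng = np.random.default_rng(5)
background = "".join(rng.choice(list("ACGT"), size=5000))
p = empirical_pvalue(pwm, background, observed_score=8)   # a perfect match
```
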

  3. Real-time 3D imaging methods using 2D phased arrays based on synthetic focusing techniques.

    Science.gov (United States)

    Kim, Jung-Jun; Song, Tai-Kyong

    2008-07-01

    A fast 3D ultrasound imaging technique using a 2D phased array transducer, based on the synthetic focusing method, is proposed for nondestructive testing or medical imaging. In the proposed method, each column of a 2D array is fired successively to produce transverse fan beams focused at a fixed depth along a given longitudinal direction, and the resulting pulse echoes are received at all elements of the 2D array. After firing all column arrays, a frame of high-resolution image along a given longitudinal direction is obtained, with dynamic focusing employed in the longitudinal direction on receive and in the transverse direction on both transmit and receive. The volume rate of the proposed method can be made much higher than that of conventional 2D array imaging by employing an efficient sparse array technique. A simple modification to the proposed method can further increase the volume scan rate significantly. The proposed methods are verified through computer simulations.

  4. Efficient and accurate Greedy Search Methods for mining functional modules in protein interaction networks.

    Science.gov (United States)

    He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei

    2012-06-25

    Most computational algorithms mainly focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. However, recent analysis indicates that experimentally detected protein complexes generally contain core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. The GSM-CA method improves the prediction accuracy compared to other similar module detection approaches; however, it is computationally expensive. Many module detection approaches are based on traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure produced by these approaches cannot provide adequate information to identify whether a network has a module structure or not. In order to speed up the computational process, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge-weight-based GSM-FC method uses a greedy procedure to traverse all edges just once to separate the network into a suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate compared to other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into a suitable set of modules. Experimental analysis shows that the identified modules are statistically significant. The algorithm can reduce the

  5. A new ART iterative method and a comparison of performance among various ART methods

    International Nuclear Information System (INIS)

    Tan, Yufeng; Sato, Shunsuke

    1993-01-01

    Many algebraic reconstruction technique (ART) image reconstruction algorithms, for instance the simultaneous iterative reconstruction technique (SIRT), the relaxation method and multiplicative ART (MART), have been proposed and their convergence properties have been studied. SIRT and the under-relaxed relaxation method converge to the least-squares solution, but their convergence is very slow. The Kaczmarz method converges very quickly, but the reconstructed images contain a lot of noise. Comparative studies between these algorithms have been done by Gilbert and others, but are not adequate. In this paper, we (1) propose a new method, a modified Kaczmarz method, and prove its convergence property, and (2) study the performance of 7 algorithms, including the one proposed here, by computer simulation for 3 kinds of typical phantoms. The method proposed here does not give the least-squares solution, but the root mean square errors of its reconstructed images decrease very quickly after a few iterations. The result shows that the method proposed here gives a better reconstructed image. (author)
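
The Kaczmarz iteration the modified method builds on projects the current estimate onto the hyperplane of one projection equation at a time. A minimal sketch on a toy consistent system (standing in for real projection data):

```python
# Classic cyclic Kaczmarz sweep for A x = b: successive projection of the
# estimate onto each row's hyperplane, with an optional relaxation factor.
import numpy as np

def kaczmarz(A, b, n_sweeps=500, relax=1.0):
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Toy consistent system standing in for projection measurements.
rng = np.random.default_rng(6)
A = rng.uniform(size=(40, 20))
x_true = rng.uniform(size=20)
x_rec = kaczmarz(A, b=A @ x_true)
```

With noisy (inconsistent) data the iterates cycle rather than converge, which is why under-relaxation and the modifications discussed in the paper matter in practice.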

  6. Enhancing the (MSLDIP) image steganographic method (ESLDIP method)

    Science.gov (United States)

    Seddik Saad, Al-hussien

    2011-10-01

    Message transmissions over the Internet still have data security problems. Therefore, secure and secret communication methods are needed for transmitting messages over the Internet. Cryptography scrambles the message so that it cannot be understood. However, it makes the message suspicious enough to attract an eavesdropper's attention. Steganography hides the secret message within other innocuous-looking cover files (i.e. images, music and video files) so that it cannot be observed [1]. The term steganography originates from the Greek root words "steganos'' and "graphein'', which literally mean "covered writing''. It is defined as the science that involves communicating secret data in an appropriate multimedia carrier, e.g., image, audio, text and video files [3]. Steganographic techniques allow one party to communicate information to another without a third party even knowing that the communication is occurring. The ways to deliver these "secret messages" vary greatly [3]. Our proposed method is called Enhanced SLDIP (ESLDIP). The maximum hiding capacity (MHC) of the proposed ESLDIP method is higher than that of the previously proposed MSLDIP method, and the PSNR of the ESLDIP method is higher than the MSLDIP PSNR values, which means that the image quality of the ESLDIP method will be better than that of the MSLDIP method while the maximum hiding capacity (MHC) is also improved. The rest of this paper is organized as follows. In section 2, steganography is discussed: lingo, carriers and types. In section 3, related works are introduced. In section 4, the proposed method is discussed in detail. In section 5, the simulation results are given, and Section 6 concludes the paper.
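
For orientation, classic 1-bit LSB substitution with a PSNR check can be sketched as below. This is a baseline only; the MSLDIP/ESLDIP substitution schemes themselves are not reproduced here.

```python
# Baseline LSB steganography sketch (not the MSLDIP/ESLDIP schemes):
# embed one message bit in the least significant bit of each pixel.
import numpy as np

def embed(pixels, message_bits):
    stego = pixels.copy()
    n = len(message_bits)
    stego[:n] = (stego[:n] & 0xFE) | message_bits   # clear LSB, set bit
    return stego

def extract(stego, n_bits):
    return stego[:n_bits] & 1

def psnr(cover, stego):
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

cover = np.array([154, 200, 31, 77, 90, 16, 203, 55], dtype=np.uint8)
bits = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
stego = embed(cover, bits)
```

Each pixel changes by at most 1 grey level, which is why LSB-family methods are judged by PSNR against the cover image.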

  7. A Root-MUSIC-Like Direction Finding Method for Cyclostationary Signals

    Directory of Open Access Journals (Sweden)

    Yide Wang

    2005-01-01

    Full Text Available We propose a new root-MUSIC-like direction finding algorithm that exploits cyclostationarity in order to improve direction-of-arrival estimation. The proposed cyclic method is signal selective, it significantly increases the resolution power and the noise robustness, and it is also able to handle more sources than the number of sensors. Computer simulations are used to show the performance of the algorithm.
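
Plain root-MUSIC on a uniform linear array, the baseline the cyclic method extends, can be sketched as follows; the signal-selective cyclostationary correlation step is omitted.

```python
# Baseline root-MUSIC sketch for a uniform linear array (ULA); the paper's
# cyclic correlation preprocessing is not included.
import numpy as np

def root_music(X, n_sources, d=0.5):
    """X: (n_sensors, n_snapshots) array data; d: spacing in wavelengths."""
    M = X.shape[0]
    Rxx = X @ X.conj().T / X.shape[1]
    _, vecs = np.linalg.eigh(Rxx)            # eigenvalues ascending
    En = vecs[:, : M - n_sources]            # noise subspace
    C = En @ En.conj().T
    # Coefficients of z^(M-1) * a^H(1/z*) C a(z): sums along diagonals of C.
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]       # keep one of each reciprocal pair
    roots = roots[np.argsort(np.abs(np.abs(roots) - 1.0))][:n_sources]
    return np.degrees(np.arcsin(np.angle(roots) / (2 * np.pi * d)))

# Two narrowband sources at -20 and +30 degrees on an 8-sensor half-wave ULA.
rng = np.random.default_rng(7)
M, N, doas = 8, 400, np.radians([-20.0, 30.0])
A = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
est = np.sort(root_music(X, n_sources=2))
```
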

  8. Dosimetry using radiosensitive gels in radiotherapy: significance and methods

    International Nuclear Information System (INIS)

    Gibon, D.; Bourel, P.; Castelain, B.; Marchandise, X.; Rousseau, J.

    2001-01-01

    The goal of conformal radiotherapy is to concentrate the dose in a well-defined volume by avoiding the neighbouring healthy structures. This technique requires powerful treatment planning software and a rigorous control of estimated dosimetry. The usual dosimetric tools are not adapted to visualize and validate complex 3D treatment. Dosimetry by radiosensitive gel permits visualization and measurement of the three-dimensional dose distribution. The objective of this work is to report on current work in this field and, based on our results and our experience, to draw prospects for an optimal use of this technique. Further developments will relate to the realization of new radiosensitive gels satisfying, as well as possible, cost requirements, easy realization and use, magnetic resonance imagery (MRI) sensitivity, tissue equivalence, and stability. Other developments focus on scanning methods, especially in MRI to measure T1 and T2. (author)

  9. Proposals for Iterated Hash Functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    2008-01-01

    The past few years have seen an increase in the number of attacks on cryptographic hash functions. These include attacks directed at specific hash functions, and generic attacks on the typical method of constructing hash functions. In this paper we discuss possible methods for protecting against...... some generic attacks. We also give a concrete proposal for a new hash function construction, given a secure compression function which, unlike in typical existing constructions, is not required to be resistant to all types of collisions. Finally, we show how members of the SHA-family can be turned...

  10. Proposals for iterated hash functions

    DEFF Research Database (Denmark)

    Knudsen, Lars Ramkilde; Thomsen, Søren Steffen

    2006-01-01

    The past few years have seen an increase in the number of attacks on cryptographic hash functions. These include attacks directed at specific hash functions, and generic attacks on the typical method of constructing hash functions. In this paper we discuss possible methods for protecting against...... some generic attacks. We also give a concrete proposal for a new hash function construction, given a secure compression function which, unlike in typical existing constructions, is not required to be resistant to all types of collisions. Finally, we show how members of the SHA-family can be turned...

  11. Untargeted metabolomic profiling plasma samples of patients with lung cancer for searching significant metabolites by HPLC-MS method

    Science.gov (United States)

    Dementeva, N.; Ivanova, K.; Kokova, D.; Kurzina, I.; Ponomaryova, A.; Kzhyshkowska, J.

    2017-09-01

    Lung cancer is one of the most common types of cancer leading to death. Consequently, the search for and identification of metabolites associated with the risk of developing cancer are very valuable. For this purpose, untargeted metabolic profiling of plasma samples collected from patients with lung cancer (n = 100) and a control group (n = 100) was conducted. After sample preparation, the plasma samples were analyzed using the LC-MS method. Biostatistics methods were applied to pre-process the data and elicit the dominant metabolites that account for the difference between the case and control groups. At least seven significant metabolites were evaluated and annotated. Most of the identified metabolites are connected with lipid metabolism, and their combination could be useful for follow-up studies of lung cancer pathogenesis.

  12. Text-in-context: a method for extracting findings in mixed-methods mixed research synthesis studies.

    Science.gov (United States)

    Sandelowski, Margarete; Leeman, Jennifer; Knafl, Kathleen; Crandell, Jamie L

    2013-06-01

    Our purpose in this paper is to propose a new method for extracting findings from research reports included in mixed-methods mixed research synthesis studies. International initiatives in the domains of systematic review and evidence synthesis have been focused on broadening the conceptualization of evidence, increasing methodological inclusiveness and producing evidence syntheses that will be accessible to and usable by a wider range of consumers. Initiatives in the general mixed-methods research field have been focused on developing truly integrative approaches to data analysis and interpretation. The data extraction challenges described here were encountered, and the method proposed for addressing them was developed, in the first year of the ongoing (2011-2016) study Mixed-Methods Synthesis of Research on Childhood Chronic Conditions and Family. To preserve the text-in-context of findings in research reports, we describe a method whereby findings are transformed into portable statements that anchor results to relevant information about sample, source of information, time, comparative reference point, magnitude and significance, and study-specific conceptions of phenomena. The data extraction method featured here was developed specifically to accommodate mixed-methods mixed research synthesis studies conducted in nursing and other health sciences, but reviewers might find it useful in other kinds of research synthesis studies. This data extraction method itself constitutes a type of integration that preserves the methodological context of findings when statements are read individually and in comparison to each other. © 2012 Blackwell Publishing Ltd.

  13. Understanding the unbundled utility conservation bidding proposal

    International Nuclear Information System (INIS)

    Joskow, P.L.

    1990-01-01

    For several years regulatory advisers have been engaged in controversy about the propriety of integrating energy conservation measures into the total resource planning processes of electric utilities, and about proposed methods of doing so in the competitive supply procurement programs which have been initiated by some utilities. Two prominent economists conceived a method for doing this in competitive bidding programs while overcoming objections to previous proposals that were based on perceived violations of basic economic principles. They explained their concept and its operation in an article published here in June of last year. In this article another economist subjects the concept to further analysis, identifying its essential elements and pointing to the inevitable results of their application.

  14. Tax Policy Trends: Republicans Reveal Proposed Tax Overhaul

    Directory of Open Access Journals (Sweden)

    Philip Bazel

    2017-10-01

    Full Text Available REPUBLICANS REVEAL PROPOSED TAX OVERHAUL The White House and Congressional Republicans have revealed their much-anticipated proposal for reform of the U.S. personal and corporate tax systems. The proposal, titled “UNIFIED FRAMEWORK FOR FIXING OUR BROKEN TAX CODE”, outlines a number of central policy changes which will significantly alter the U.S. corporate tax system. The proposal includes a top federal marginal rate reduction for sole proprietorships, partnerships and S corporations—small-business equivalents—from 39.6% to 25% (state income tax rates would no longer be deductible). Large corporations would also see a meaningful federal rate reduction, given the proposed drop in the federal corporate income tax rate from 35% to 20%. Additionally, the proposal includes a generous temporary measure intended to stimulate investment: full capital expensing for machinery with a partial limitation of interest deductions.

  15. An Enhanced Run-Length Encoding Compression Method for Telemetry Data

    Directory of Open Access Journals (Sweden)

    Shan Yanhu

    2017-09-01

    Full Text Available Telemetry data are essential in evaluating the performance of an aircraft and diagnosing its failures. This work combines oversampling technology with a run-length encoding compression algorithm with an error factor to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of the telemetry data is carried out on FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision, high-capacity multichannel acquisition system.
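    The error-factor run-length idea described in this record can be sketched in a few lines. This is a minimal illustration only: it assumes the error factor acts as a tolerance within which successive samples are merged into the current run (the record does not specify the exact merging rule, and the FPGA implementation is not reproduced).

```python
def rle_compress(samples, eps=2):
    """Run-length encode a sequence, merging samples that differ from the
    current run's reference value by at most `eps` (the error factor)."""
    runs = []
    for x in samples:
        if runs and abs(x - runs[-1][0]) <= eps:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([x, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decompress(runs):
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

data = [100, 101, 100, 102, 150, 151, 150, 99]
runs = rle_compress(data, eps=2)
print(runs)   # → [(100, 4), (150, 3), (99, 1)]
```

    Note the scheme is lossy within ±eps of each run's reference value, which is exactly the precision/distortion trade-off the record mentions.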

  16. A Method for Automated Planning of FTTH Access Network Infrastructures

    DEFF Research Database (Denmark)

    Riaz, Muhammad Tahir; Pedersen, Jens Myrup; Madsen, Ole Brun

    2005-01-01

    In this paper a method for automated planning of Fiber to the Home (FTTH) access networks is proposed. We introduce a systematic approach for planning access network infrastructure. GIS data and a set of algorithms are employed to make the planning process more automatic. The method explains...... method. The method, however, does not fully automate the planning, but it makes the planning process significantly faster. The results and discussion are presented and a conclusion is given at the end....

  17. Proposal of secure camera-based radiation warning system for nuclear detection

    International Nuclear Information System (INIS)

    Tsuchiya, Ken'ichi; Kurosawa, Kenji; Akiba, Norimitsu; Kakuda, Hidetoshi; Imoto, Daisuke; Hirabayashi, Manato; Kuroki, Kenro

    2016-01-01

    Counter-terrorism against radiological and nuclear threats is a significant issue for the Tokyo 2020 Olympic and Paralympic Games. In terms of cost benefit, it is not easy to build a warning system for nuclear detection to prevent a Dirty Bomb attack (dispersion of radioactive materials using a conventional explosive) or a Silent Source attack (hidden radioactive materials). We propose a nuclear detection system using installed secure cameras. We describe a method to estimate the radiation dose from the noise pattern that radiation causes in CCD images. Some dosimeters under neutron and gamma-ray irradiation (0.1 mSv-100 mSv) were recorded with a CCD video camera. We confirmed that the amount of noise in the CCD images increased with radiation exposure. Radiation detection using the CMOS sensors of secure cameras or cell phones has been implemented before. In this presentation, however, we propose a warning system that includes neutron detection to search for shielded nuclear materials or radiation exposure devices using criticality. (author)
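    As a rough illustration of the dose-from-noise idea (not the authors' algorithm; the linear speckle-vs-dose model and all parameters are assumptions invented for this sketch), one can count radiation-induced bright pixels per frame and calibrate the count against known doses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: radiation striking a CCD produces transient bright
# speckles, so the per-frame count of pixels above a threshold can be
# regressed against known doses to build a crude dose estimator.
alpha, shape = 40.0, (240, 320)   # assumed speckles per frame per mSv

def frame(dose_mSv):
    img = rng.normal(10.0, 2.0, size=shape)       # sensor background
    hits = rng.poisson(alpha * dose_mSv)          # radiation events
    idx = rng.integers(0, shape[0] * shape[1], size=hits)
    img.flat[idx] += 200.0                        # radiation-induced spikes
    return img

def speckle_count(img, thresh=50.0):
    return int(np.sum(img > thresh))

# Calibration: known doses -> mean speckle counts -> linear fit through 0.
doses = np.array([0.1, 1.0, 10.0, 100.0])
counts = np.array([np.mean([speckle_count(frame(d)) for _ in range(5)])
                   for d in doses])
slope = np.sum(counts * doses) / np.sum(doses ** 2)

est = speckle_count(frame(50.0)) / slope
print(f"estimated dose: {est:.1f} mSv")
```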

  18. Cytogenetic chromosomal aberration dosimetry method after radiation accidents and prognostic significance of stereotypically appearing chromosomal aberrations after radiation exposure

    International Nuclear Information System (INIS)

    Bloennigen, K.A.

    1973-01-01

    The paper reports on a radiation accident involving an Iridium-192 rod with an activity of 7.8 Ci and a size of 2 x 2 x 2 mm{sup 3}. The radiation source had remained in direct contact with the left hip and elbow of the examined person for a period of 45 minutes. At the points of direct exposure, physical values of 5,000 rad and 10,000 rad were measured, while the whole-body dose was 100-200 rad and the gonad dose 300-400 rad. These values were confirmed by observations of the clinical course and by haematological and andrological examinations. Chromosome analysis of lymphocytes produced values between 100 and 125, in significant agreement with the values determined by physical methods. The findings suggest that the relatively simple and fast method of cytogenetic dosimetry provides a useful complement to physical dosimetry. (orig./AK) [de

  19. Review of Dercum’s disease and proposal of diagnostic criteria, diagnostic methods, classification and management

    Directory of Open Access Journals (Sweden)

    Hansson Emma

    2012-04-01

    Full Text Available Abstract Definition and clinical picture We propose the minimal definition of Dercum’s disease to be generalised overweight or obesity in combination with painful adipose tissue. The associated symptoms in Dercum’s disease include fatty deposits, easy bruisability, sleep disturbances, impaired memory, depression, difficulty concentrating, anxiety, rapid heartbeat, shortness of breath, diabetes, bloating, constipation, fatigue, weakness and joint aches. Classification We suggest that Dercum’s disease is classified into: I. Generalised diffuse form - a form with diffusely widespread painful adipose tissue without clear lipomas; II. Generalised nodular form - a form with general pain in adipose tissue and intense pain in and around multiple lipomas; III. Localised nodular form - a form with pain in and around multiple lipomas; and IV. Juxtaarticular form - a form with solitary deposits of excess fat, for example at the medial aspect of the knee. Epidemiology Dercum’s disease most commonly appears between the ages of 35 and 50 years and is five to thirty times more common in women than in men. The prevalence of Dercum’s disease has not yet been exactly established. Aetiology Proposed, but unconfirmed aetiologies include: nervous system dysfunction, mechanical pressure on nerves, adipose tissue dysfunction and trauma. Diagnosis and diagnostic methods Diagnosis is based on clinical criteria and should be made by systematic physical examination and thorough exclusion of differential diagnoses. Advisably, the diagnosis should be made by a physician with a broad experience of patients with painful conditions and knowledge of family medicine, internal medicine or pain management. The diagnosis should only be made when the differential diagnoses have been excluded. Differential diagnosis Differential diagnoses include: fibromyalgia, lipoedema, panniculitis, endocrine disorders, primary psychiatric disorders, multiple symmetric lipomatosis, familial

  20. Hybrid methods for airframe noise numerical prediction

    Energy Technology Data Exchange (ETDEWEB)

    Terracol, M.; Manoha, E.; Herrero, C.; Labourasse, E.; Redonnet, S. [ONERA, Department of CFD and Aeroacoustics, BP 72, Chatillon (France); Sagaut, P. [Laboratoire de Modelisation en Mecanique - UPMC/CNRS, Paris (France)

    2005-07-01

    This paper describes some significant steps made towards the numerical simulation of the noise radiated by the high-lift devices of a plane. Since the full numerical simulation of such a configuration is still out of reach for present supercomputers, hybrid strategies have been developed to reduce the overall cost of such simulations. The proposed strategy relies on the coupling of an unsteady nearfield CFD solution with an acoustic propagation solver based on the resolution of the Euler equations for midfield propagation in an inhomogeneous field, and on the use of an integral solver for farfield acoustic predictions. In the first part of this paper, this CFD/CAA coupling strategy is presented. In particular, the numerical method used in the propagation solver is detailed, and two applications of this coupling method to the numerical prediction of the aerodynamic noise of an airfoil are presented. Then, a hybrid RANS/LES method is proposed in order to perform unsteady simulations of complex noise sources. This method allows for a significant reduction of the cost of such a simulation by considerably reducing the extent of the LES zone. The method is described, and some results of the numerical simulation of the three-dimensional unsteady flow in the slat cove of a high-lift profile are presented. While these results remain very difficult to validate against experiments on similar configurations, they represent, up to now, the first 3D computations of this kind of flow. (orig.)

  1. Proposal of a simple screening method for a rapid preliminary evaluation of ''heavy metals'' mobility in soils of contaminated sites

    Energy Technology Data Exchange (ETDEWEB)

    Pinto, Valentina; Chiusolo, Francesca; Cremisini, Carlo [ENEA - Italian Agency for New Technologies, Energy and Environment, Rome (Italy). Section PROTCHIM

    2010-09-15

    Risks associated with ''heavy metals'' (HM) soil contamination depend not only on their total content but, mostly, on their mobility. Many extraction procedures have been developed to evaluate HM mobility in contaminated soils, but they are generally time consuming (especially the sequential extraction procedures (SEPs)) and consequently applicable to a limited number of samples. For this reason, a simple screening method, applicable even ''in field'', has been proposed in order to obtain a rapid evaluation of HM mobility in polluted soils, mainly focused on the fraction associated with Fe and Mn oxides/hydroxides. A buffer solution of trisodium citrate and hydroxylamine hydrochloride was used as the extractant for a single-step leaching test. The choice of this buffered solution was strictly related to the possibility of directly determining, via titration with dithizone (DZ), the content of Zn, Cu, Pb and Cd, which are among the most representative contaminants in highly mineralised soils. Moreover, the extraction solution is similar, aside from the pH value, to the one used in the second step of the BCR SEP. The analysis of bivalent ions through DZ titration was exploited in order to further simplify and quicken the whole procedure. The proposed method generically measures, in a few minutes, the concentration of total extractable ''heavy metals'' expressed as molL{sup -1} without distinguishing between elements. The proposed screening method has been developed and applied on soil samples collected from rural, urban and mining areas, representing different situations of soil contamination. Results were compared with data obtained from the BCR procedure. The screening method demonstrated itself to be a reliable tool for a rapid evaluation of metal mobility. Therefore, it could be very useful, even ''in field'', both to guide the sampling activity on site and to monitor the efficacy of the subsequent

  2. Methods of Environmental Impact Assessment in Colombia

    Directory of Open Access Journals (Sweden)

    Javier Toro Calderón

    2013-10-01

    Full Text Available The Environmental Impact Assessment (EIA in Colombia constitutes the primary tool for making decisions with respect to projects, works and activities (PWA with potential for significant environmental impacts. In the case of infrastructure PWA, the EIA is mandatory and determines the environmental license (EL for construction and operation. This paper analyzes the methods used to assess the environmental impact of the PWA that have applied for licenses with the Ministry of Environment and Sustainable Development. It was found that the method most frequently used is the qualitative method proposed by Conesa, with modifications that reduce the effectiveness of the EIA and favor the subjectivity and bias of the evaluator. Finally, a series of recommendations to improve the process in the country is proposed.

  3. Proposal of Innovative Approaches of Relationship Marketing in Business

    Directory of Open Access Journals (Sweden)

    Viliam Lendel

    2015-03-01

    Full Text Available The aim of this paper is to propose innovative approaches to relationship marketing that affect the process of building relationships with customers, based on a detailed analysis of the literature and on our research. This proposal is supported by the information technologies of e-CRM and social CRM. The paper contains a detailed description of the procedure for successfully implementing innovative approaches to relationship marketing in business. This should serve mainly marketing managers as a valuable tool in their use of innovative approaches to relationship marketing, especially in the process of obtaining innovative ideas from customers in order to identify their needs and requirements. Furthermore, the paper contains the main results of our research aimed at identifying the extent of the utilization of innovative approaches to relationship marketing in Slovak businesses. A total of 207 respondents (medium and large businesses) were involved in the research, and the following methods were used: the comparative method, qualitative evaluation, structured interviews, observation, document analysis (content analysis) and a questionnaire.

  4. Acoustic methods for cavitation mapping in biomedical applications

    Science.gov (United States)

    Wan, M.; Xu, S.; Ding, T.; Hu, H.; Liu, R.; Bai, C.; Lu, S.

    2015-12-01

    In recent years, cavitation has been increasingly utilized in a wide range of applications in the biomedical field. Monitoring the spatial-temporal evolution of cavitation bubbles is of great significance for efficiency and safety in biomedical applications. In this paper, several acoustic methods for cavitation mapping, proposed or modified on the basis of existing work, are presented. The proposed novel ultrasound line-by-line/plane-by-plane method can depict the cavitation bubble distribution with high spatial and temporal resolution and may be developed into a potential standard 2D/3D cavitation field mapping method. The modified ultrafast active cavitation mapping, based upon plane wave transmission and reception as well as bubble wavelet and pulse inversion techniques, can appreciably enhance the cavitation-to-tissue ratio in tissue and further assist in monitoring cavitation-mediated therapy with good spatial and temporal resolution. The methods presented in this paper will be a foundation for promoting the research and development of cavitation imaging in non-transparent media.

  5. Analysis of random response of structure with uncertain parameters. Combination of substructure synthesis method and hierarchy method

    International Nuclear Information System (INIS)

    Iwatsubo, Takuzo; Kawamura, Shozo; Mori, Hiroyuki.

    1995-01-01

    In this paper, a method to obtain the random response of a structure with uncertain parameters is proposed. The proposed method is a combination of the substructure synthesis method and the hierarchy method: the hierarchy equation of each substructure is obtained using the hierarchy method, and the hierarchy equation of the overall structure is obtained using the substructure synthesis method. Using the proposed method, the reduced-order hierarchy equation can be obtained without analyzing the original whole structure. After the calculation of the mean square value of the response, a reliability analysis can be carried out based on the first passage problem and Poisson's excursion rate. As a numerical example, a simple piping system is considered, with the damping constant of the support as the uncertain parameter. The random response is then calculated using the proposed method. As a result, the proposed method proves useful for analyzing the random response in terms of accuracy, computer storage and calculation time. (author)

  6. A novel String Banana Template Method for Tracks Reconstruction in High Multiplicity Events with significant Multiple Scattering and its Firmware Implementation

    CERN Document Server

    Kulinich, P; Krylov, V

    2004-01-01

    A novel String Banana Template Method (SBTM) for track reconstruction in difficult conditions is proposed and implemented for off-line analysis of relativistic heavy-ion collision events. The main idea of the method is the use of features of ensembles of tracks selected by 3-fold coincidence. A two-step track model is used: the first step is averaged over the selected ensemble, and the second is per-event dependent, taking into account Multiple Scattering (MS) for the particular track. SBTM relies on stored templates generated by precise Monte Carlo simulation, so it is more time efficient in the case of a 2D spectrometer. All data required for track reconstruction in such difficult conditions can be prepared in a format convenient for fast use. Its template-based nature, and the fact that the SBTM track model is actually very close to the hits, implies that it can be implemented in a firmware processor. In this report a block diagram of a firmware-based pre-processor for track reconstruction in a CMS-like Si tracke...

  7. Optimized star sensors laboratory calibration method using a regularization neural network.

    Science.gov (United States)

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer structure neural network is designed to represent the mapping of the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the net structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.
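    A minimal sketch of a regularization-network calibration in the spirit of this record follows. The synthetic pinhole data, network size, and L2 weight-decay strength are all illustrative assumptions; the authors' actual architecture and training algorithm are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set: unit star vectors v and measured star-point
# coordinates xy from an ideal pinhole model with a small radial distortion
# (a synthetic stand-in for laboratory star-simulator data).
n, f = 500, 1000.0
v = rng.normal(size=(n, 3))
v[:, 2] = np.abs(v[:, 2]) + 1.0                 # keep stars in front of camera
v /= np.linalg.norm(v, axis=1, keepdims=True)
xy = f * v[:, :2] / v[:, 2:3]
r2 = (xy ** 2).sum(axis=1, keepdims=True)
xy = xy * (1 + 1e-8 * r2)                       # toy distortion term
t = xy / f                                      # O(1) targets for training

# One hidden layer mapping star vector -> star-point coordinate, with an
# L2 penalty (weight decay) as the regularization strategy.
H, lam, lr = 32, 1e-4, 0.01
W1 = rng.normal(scale=0.3, size=(3, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.3, size=(H, 2)); b2 = np.zeros(2)

def forward(v):
    h = np.tanh(v @ W1 + b1)
    return h, h @ W2 + b2

loss_before = np.mean((forward(v)[1] - t) ** 2)

for _ in range(3000):                           # plain gradient descent
    h, pred = forward(v)
    err = (pred - t) / n
    W2g = h.T @ err + lam * W2; b2g = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    W1g = v.T @ dh + lam * W1; b1g = dh.sum(0)
    W1 -= lr * W1g; b1 -= lr * b1g
    W2 -= lr * W2g; b2 -= lr * b2g

loss_after = np.mean((forward(v)[1] - t) ** 2)
print(loss_before, "->", loss_after)
```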

  8. Bayesian inference of chemical kinetic models from proposed reactions

    KAUST Repository

    Galagali, Nikhil

    2015-02-01

    © 2014 Elsevier Ltd. Bayesian inference provides a natural framework for combining experimental data with prior knowledge to develop chemical kinetic models and quantify the associated uncertainties, not only in parameter values but also in model structure. Most existing applications of Bayesian model selection methods to chemical kinetics have been limited to comparisons among a small set of models, however. The significant computational cost of evaluating posterior model probabilities renders traditional Bayesian methods infeasible when the model space becomes large. We present a new framework for tractable Bayesian model inference and uncertainty quantification using a large number of systematically generated model hypotheses. The approach involves imposing point-mass mixture priors over rate constants and exploring the resulting posterior distribution using an adaptive Markov chain Monte Carlo method. The posterior samples are used to identify plausible models, to quantify rate constant uncertainties, and to extract key diagnostic information about model structure-such as the reactions and operating pathways most strongly supported by the data. We provide numerical demonstrations of the proposed framework by inferring kinetic models for catalytic steam and dry reforming of methane using available experimental data.
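    A toy version of Bayesian model comparison for candidate reactions can be written down directly when the rate law is linear in a single rate constant; the evidence integral is then cheap enough to do on a grid. The data, priors and regressors below are invented, and the grid integration stands in for the paper's adaptive MCMC over point-mass mixture priors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two candidate "reactions" (regressors x1, x2) compete to explain data y.
n, sigma = 30, 0.2
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.2 * x1 + sigma * rng.normal(size=n)       # data support reaction 1

ks = np.linspace(-5.0, 5.0, 2001)               # grid over the rate constant
dk = ks[1] - ks[0]
log_prior = -0.5 * ks ** 2 - 0.5 * np.log(2 * np.pi)   # k ~ N(0, 1)

def log_evidence(x):
    # log p(y | model), up to an additive constant shared by both models
    loglik = np.array([-0.5 * np.sum((y - k * x) ** 2) / sigma ** 2
                       for k in ks])
    logint = loglik + log_prior
    m = logint.max()
    return m + np.log(np.sum(np.exp(logint - m)) * dk)  # stable logsumexp

log_bayes_factor = log_evidence(x1) - log_evidence(x2)
print("log Bayes factor (reaction 1 vs 2):", log_bayes_factor)
```

    The posterior model probabilities follow from the Bayes factor under equal model priors; in the paper's setting the model space is far too large for this enumeration, which is why MCMC over point-mass mixture priors is needed.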

  9. An outline of the systematic-dialectical method: scientific and political significance

    NARCIS (Netherlands)

    Reuten, G.; Moseley, F.; Smith, T.

    2014-01-01

    The method of systematic-dialectics (SD) is reconstructed with a focus on what institutions and processes are necessary - rather than contingent - for the capitalist system. This allows for the detection of strengths and weaknesses in the actual structure of the system. Weaknesses should be

  10. Binary Classification Method of Social Network Users

    Directory of Open Access Journals (Sweden)

    I. A. Poryadin

    2017-01-01

    Full Text Available The subject of research is a binary classification method for social network users based on an analysis of the data they have posted. The relevance of the task of gaining information about a person by examining the content of his/her pages in social networks is exemplified. The most common approach to its solution is visual browsing. An order of the regional authority in our country illustrates that its use in school education is needed. The article shows the restrictions of visual browsing of pupils' pages in social networks as a tool for the teacher and the school psychologist, and justifies that the process of analysing social network users' data should be automated. It explores publications that describe such data acquisition, processing and analysis methods, and considers their advantages and disadvantages. The article also gives arguments to support a proposal to study the classification method of social network users. One such method is credit scoring, which is used in banks and credit institutions to assess the solvency of clients. Based on the high efficiency of the method, there is a proposal for a significant expansion of its use in other areas of society. The possibility of using logistic regression as the mathematical apparatus of the proposed binary classification method has been justified. Such an approach enables taking into account the different types of data extracted from social networks, among them: personal user data, information about hobbies, friends, graphic and text information, and behaviour characteristics. The article describes a number of existing methods of data transformation that can be applied to solve the problem. An experiment of binary gender-based classification of social network users is described. A logistic model obtained for this example includes multiple logical variables obtained by transforming the user surnames. This experiment confirms the feasibility of the proposed method. Further work is to define a system
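    A minimal logistic-regression classifier of the kind discussed in this record can be sketched as follows. The features are synthetic stand-ins; the paper's actual predictors (e.g. variables derived from transformed surnames) are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "profile features" for n users and binary class labels generated
# from a known logistic model (all values are illustrative).
n = 400
X = rng.normal(size=(n, 3))
true_w = np.array([1.5, -2.0, 0.7])
p = 1 / (1 + np.exp(-(X @ true_w + 0.3)))
y = rng.random(n) < p                       # binary class labels

# Logistic regression fitted by plain gradient descent (scoring-style model).
w = np.zeros(3); b = 0.0
for _ in range(2000):
    q = 1 / (1 + np.exp(-(X @ w + b)))      # predicted probabilities
    g = q - y                               # gradient of the log-loss
    w -= 0.1 * (X.T @ g / n)
    b -= 0.1 * g.mean()

acc = np.mean(((1 / (1 + np.exp(-(X @ w + b)))) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

    The fitted score X @ w + b plays the role of the credit-scoring function: thresholding it at 0 (probability 0.5) yields the binary class.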

  11. Cross-Cultural Adaptation and Validation of the MPAM-R to Brazilian Portuguese and Proposal of a New Method to Calculate Factor Scores

    Science.gov (United States)

    Albuquerque, Maicon R.; Lopes, Mariana C.; de Paula, Jonas J.; Faria, Larissa O.; Pereira, Eveline T.; da Costa, Varley T.

    2017-01-01

    In order to understand the reasons that lead individuals to practice physical activity, researchers developed the Motives for Physical Activity Measure-Revised (MPAM-R) scale. In 2010, a translation of the MPAM-R to Portuguese and its validation were performed; however, the psychometric measures were not acceptable. In addition, factor scores in some sports psychology scales are calculated as the mean of the scores of the factor's items. Nevertheless, it seems appropriate that items with higher factor loadings, as extracted by Factor Analysis, should have greater weight in the factor score, and items with lower factor loadings less weight. The aims of the present study are to translate and validate a Portuguese version of the MPAM-R and to investigate the agreement between two methods used to calculate factor scores. Three hundred volunteers who had been involved in physical activity programs for at least 6 months were recruited. Confirmatory Factor Analysis of the 30 items indicated that the version did not fit the model. After excluding four items, the final model with 26 items showed acceptable model fit measures by Exploratory Factor Analysis, and it conceptually supports the five factors of the original proposal. When the two methods of calculating factor scores were compared, our results showed agreement between methods only for the “Enjoyment” and “Appearance” factors. Thus, the Portuguese version of the MPAM-R can be used in a Brazilian context, and the new proposal for the calculation of the factor score seems promising. PMID:28293203
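    The loading-weighted factor score proposed here, as opposed to the conventional item mean, can be illustrated in a few lines (the item scores and loadings below are invented for illustration):

```python
import numpy as np

# One respondent's scores on a 4-item factor (1-7 Likert scale) and the
# items' factor loadings as extracted by a factor analysis (illustrative).
items = np.array([6.0, 5.0, 7.0, 4.0])
loadings = np.array([0.82, 0.74, 0.55, 0.40])

mean_score = items.mean()                                   # conventional method
weighted_score = np.sum(items * loadings) / loadings.sum()  # loading-weighted

print(mean_score, weighted_score)
```

    Items with high loadings (here the first two) pull the weighted score toward their values, while weakly loading items contribute less, which is the motivation given in the record.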

  12. Analysis of time integration methods for the compressible two-fluid model for pipe flow simulations

    NARCIS (Netherlands)

    B. Sanderse (Benjamin); I. Eskerud Smith (Ivar); M.H.W. Hendrix (Maurice)

    2017-01-01

    In this paper we analyse different time integration methods for the two-fluid model and propose the BDF2 method as the preferred choice to simulate transient compressible multiphase flow in pipelines. Compared to the prevailing Backward Euler method, the BDF2 scheme has a significantly
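    The Backward Euler vs. BDF2 comparison can be illustrated on the scalar stiff test equation y' = -λy (a generic demonstration of the two schemes' convergence orders, not the paper's two-fluid pipeline model):

```python
import numpy as np

# Compare Backward Euler (1st order) with BDF2 (2nd order) on y' = -lam*y,
# y(0) = 1, whose exact solution is exp(-lam*t).
lam, T = 5.0, 1.0

def backward_euler(h):
    n = round(T / h); y = 1.0
    for _ in range(n):
        y = y / (1 + h * lam)            # implicit step, solved exactly (linear)
    return y

def bdf2(h):
    # BDF2: y_{n+1} = (4*y_n - y_{n-1} + 2*h*f(y_{n+1})) / 3
    n = round(T / h)
    ym, y = 1.0, 1.0 / (1 + h * lam)     # start-up step with Backward Euler
    for _ in range(n - 1):
        ym, y = y, (4 * y - ym) / (3 + 2 * h * lam)
    return y

exact = np.exp(-lam * T)
for h in (1e-2, 5e-3, 2.5e-3):
    print(h, abs(backward_euler(h) - exact), abs(bdf2(h) - exact))
```

    Halving h roughly halves the Backward Euler error but quarters the BDF2 error, reflecting first- versus second-order accuracy.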

  13. 75 FR 60013 - Proposed Flood Elevation Determinations

    Science.gov (United States)

    2010-09-29

    This proposed rule is not a significant regulatory action under the criteria of Executive Order 12866, Regulatory Planning and Review. [Flood-elevation table fragment: Yellowstone River East Branch, approximately 4.14 miles downstream of Tom Miner Creek Road, +4953.]

  14. METHOD OF CONSTRUCTION OF GENETIC DATA CLUSTERS

    Directory of Open Access Journals (Sweden)

    N. A. Novoselova

    2016-01-01

    Full Text Available The paper presents a method for constructing clusters of genetic data (functional modules) using randomized matrices. To build the functional modules, the eigenvalues of the gene-profile correlation matrix are selected and analysed. The principal components corresponding to eigenvalues that differ significantly from those obtained for a randomly generated correlation matrix are used for the analysis. Each selected principal component forms a gene cluster. In a comparative experiment with its analogues, the proposed method shows an advantage in allocating statistically significant clusters of different sizes, in its ability to filter out non-informative genes, and in extracting biologically interpretable functional modules that match the real data structure.

  15. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel 'V-plot' methodology to display accuracy values.

    Science.gov (United States)

    Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of disease severity distribution on values of diagnostic accuracy, and to propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods to measure serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software, (1) keeping the numerical relationship between the methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity and specificity). Finally, a novel methodology to display and calculate accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can be used to describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.
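    The core observation, that accuracy depends on how the sample is distributed around the diagnostic cutoff even when the relationship between the two tests is fixed, is easy to reproduce. The assays, noise model and cutoff below are hypothetical; this illustrates the distribution effect, not the V-plot itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical cholesterol assays: Chol_rapid = Chol_gold + noise.
# The numerical relationship is fixed; only the sample distribution changes.
cutoff = 200.0                      # threshold defining "high cholesterol"

def accuracy(gold):
    rapid = gold + rng.normal(scale=10.0, size=gold.size)
    return np.mean((rapid > cutoff) == (gold > cutoff))

# Sample A: values concentrated near the cutoff -> many borderline cases.
near = rng.normal(loc=200.0, scale=15.0, size=20000)
# Sample B: values far from the cutoff -> few borderline cases.
far = np.concatenate([rng.normal(150.0, 15.0, 10000),
                      rng.normal(250.0, 15.0, 10000)])

print("accuracy, sample near cutoff:", accuracy(near))
print("accuracy, sample far from cutoff:", accuracy(far))
```

    The same pair of tests yields a clearly higher accuracy on the second sample, purely because fewer cases fall in the borderline region where the measurement noise can flip the classification.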

  16. An innovative method for determining the diffusion coefficient of product nuclide

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Chih Lung [Dept. of Nuclear Back-end Management, Taiwan Power Company, Taipei (China); Wang, Tsing Hai [Dept. Biomedical Engineering and Environment Sciences, National Tsing Hua University, Hsinchu (China)

    2017-08-15

    Diffusion is a crucial mechanism regulating the migration of radioactive nuclides. In this study, an innovative numerical method was developed to simultaneously calculate the diffusion coefficients of a parent nuclide and, subsequently, its series daughter nuclides in a sequentially reactive through-diffusion model. Two constructed scenarios, a serial reaction (RN₁ → RN₂ → RN₃) and a parallel reaction (RN₁ → RN₂A + RN₂B), were proposed and calculated for verification. First, the accuracy of the proposed three-member reaction equations was validated using several default numerical experiments. Second, by applying the validated numerical experimental concentration variation data, the as-determined diffusion coefficient of the product nuclide was observed to be identical to the default data. The results demonstrate the validity of the proposed method. The proposed numerical method will be particularly powerful in determining the diffusion coefficients of systems with extremely thin specimens, long diffusion times, and parent nuclides with fast decay constants.
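
    The serial chain RN₁ → RN₂ → RN₃ is governed by the Bateman equations; a minimal forward-Euler sketch (with invented decay constants, and without the diffusion terms of the authors' through-diffusion model) shows the three-member bookkeeping:

    ```python
    import numpy as np

    # Hypothetical decay constants (1/day) for a serial chain RN1 -> RN2 -> RN3;
    # RN3 is treated as stable for simplicity.
    lam1, lam2 = 0.10, 0.02

    dt, t_end = 0.01, 100.0
    steps = int(t_end / dt)
    N = np.array([1.0, 0.0, 0.0])  # initial inventories

    for _ in range(steps):  # forward-Euler integration of the Bateman equations
        dN1 = -lam1 * N[0]
        dN2 = lam1 * N[0] - lam2 * N[1]
        dN3 = lam2 * N[1]
        N = N + dt * np.array([dN1, dN2, dN3])

    print(N, N.sum())  # total inventory is conserved (sums to 1)
    ```

    In the full model these source/sink terms are coupled to Fick's-law diffusion in the specimen, and the diffusion coefficients are fitted against the measured concentration histories.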

  17. Deterministic factor analysis: methods of integro-differentiation of non-integral order

    Directory of Open Access Journals (Sweden)

    Valentina V. Tarasova

    2016-12-01

    Full Text Available Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and non-locality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods are proposed for integro-differentiation of non-integral order for the deterministic factor analysis of economic processes with memory and non-locality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results compared with standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed: the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method for integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial non-locality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes

  18. Modern Methods of Voice Authentication in Mobile Devices

    Directory of Open Access Journals (Sweden)

    Vladimir Leonovich Evseev

    2016-03-01

    Full Text Available The paper surveys modern methods of voice authentication in mobile devices and proposes an evaluation of the probabilities of Type I and Type II errors for multimodal voice-authentication methods. The advantage of multimodal methods over multi-stage ones is that authentication takes place in a single stage, which is more convenient for customers. Further development of multimodal authentication methods will build on the significantly increased computing power of mobile devices, the growing number and improving accuracy of sensors built into mobile devices, and improved signal-processing algorithms.

  19. Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.

    Science.gov (United States)

    Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun

    2018-01-01

    Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. While the traditional finite element method (FEM) promises accurate modeling of soft tissue deformation, it suffers from a slow computational process. This paper presents a Kalman filter finite element method (KF-FEM) to model soft tissue deformation in real time without sacrificing the traditional FEM's accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than that of the traditional FEM while remaining just as accurate. The normalized root-mean-square error of the proposed KF-FEM with reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing accuracy. The proposed method also filters noise involved in the system state and measurement data.
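
    The filtering formulation rests on the standard Kalman predict/update cycle. The sketch below is a generic linear Kalman filter applied to an invented one-node toy problem (a static displacement observed with noise); it illustrates the estimation machinery only, not the paper's KF-FEM state equation:

    ```python
    import numpy as np

    def kalman_step(x, P, z, A, H, Q, R):
        """One predict/update cycle of a linear Kalman filter.
        x, P: state estimate and covariance; z: measurement."""
        # Predict
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Update
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy 1-D example: a static nodal displacement observed with noise.
    rng = np.random.default_rng(2)
    A = H = np.eye(1)
    Q, R = 1e-6 * np.eye(1), 0.1 * np.eye(1)
    x, P = np.zeros(1), np.eye(1)
    true_disp = 0.5
    for _ in range(200):
        z = true_disp + rng.normal(0.0, np.sqrt(0.1), size=1)
        x, P = kalman_step(x, P, z, A, H, Q, R)
    print(x)  # converges near the true displacement 0.5
    ```

    In KF-FEM the state would instead collect the nodal displacement variables of the Newmark-discretized FEM system, so the same two-step cycle both advances the model and fuses the real-time measurements.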

  20. A Simple and Automatic Method for Locating Surgical Guide Hole

    Science.gov (United States)

    Li, Xun; Chen, Ming; Tang, Kai

    2017-12-01

    Restoration-driven surgical guides are widely used in implant surgery. This study aims to provide a simple and valid method of automatically locating the surgical guide hole, which can reduce the dependence on operator experience and improve the design efficiency and quality of the surgical guide. Little literature can be found on this topic, and this paper proposes a novel and simple method to solve the problem. A local coordinate system for each objective tooth is geometrically constructed in a CAD system. This coordinate system well represents the dental anatomical features, and the central axis of the objective tooth (which coincides with the corresponding guide hole axis) can be quickly evaluated in this coordinate system, completing the location of the guide hole. The proposed method has been verified against two types of benchmarks: manual operation by a skilled doctor with over 15 years of experience (used in most hospitals) and the automatic approach of the popular commercial package Simplant (used in a few hospitals). Both the benchmarks and the proposed method are analyzed for their stress distribution when chewing and biting. The stress distribution is visually shown and plotted as a graph. The results show that the proposed method has a much better stress distribution than the manual operation and a slightly better one than Simplant, which will significantly reduce the risk of cervical margin collapse and extend the wear life of the restoration.

  1. 76 FR 40850 - Glymes; Proposed Significant New Use Rule

    Science.gov (United States)

    2011-07-12

    ... detergents. Pentaethylene glycol dibutyl ether None. Butyltriglyme None. B. What are the estimated production... production level appears to be increasing, and given its toxicity, EPA would be concerned if this chemical... glycol dibutyl ether and butyltriglyme, which presently show no reported production to the IUR or any...

  2. Proposed Objective Odor Control Test Methodology for Waste Containment

    Science.gov (United States)

    Vos, Gordon

    2010-01-01

    The Orion Cockpit Working Group has requested that an odor control testing methodology be proposed to evaluate the odor containment effectiveness of waste disposal bags to be flown on the Orion Crew Exploration Vehicle. As a standardized "odor containment" test does not appear to be a matter of record for the project, a new test method is being proposed. This method is based on existing test methods used in industrial hygiene for the evaluation of respirator fit in occupational settings, and takes into consideration peer-reviewed documentation of human odor thresholds for standardized contaminants, industry-standard atmospheric testing methodologies, and established criteria for laboratory analysis. The proposed methodology is quantitative, though it can readily be complemented with a qualitative subjective assessment. Isoamyl acetate (IAA, also known as isopentyl acetate) is commonly used in respirator fit testing, and there are documented methodologies for measuring its airborne concentration quantitatively. IAA is a clear, colorless liquid with a banana-like odor, a documented human odor detection threshold of 0.025 ppm, and a limit of quantitation of 15 ppb.

  3. How significant is the ‘significant other’? Associations between significant others’ health behaviors and attitudes and young adults’ health outcomes

    Directory of Open Access Journals (Sweden)

    Berge Jerica M

    2012-04-01

    Full Text Available Abstract. Background: Having a significant other has been shown to be protective against physical and psychological health conditions for adults. Less is known about the period of emerging young adulthood and associations between significant others' weight and weight-related health behaviors (e.g. healthy dietary intake, frequency of physical activity, weight status). This study examined the association between significant others' health attitudes and behaviors regarding eating and physical activity and young adults' weight status, dietary intake, and physical activity. Methods: This study uses data from Project EAT-III, a population-based cohort study of emerging young adults from diverse ethnic and socioeconomic backgrounds (n = 1212). Logistic regression models examining cross-sectional associations, adjusted for sociodemographics and health behaviors five years earlier, were used to estimate predicted probabilities and calculate prevalence differences. Results: Young adult women whose significant others had health-promoting attitudes/behaviors were significantly less likely to be overweight/obese and were more likely to eat ≥ 5 fruits/vegetables per day and engage in ≥ 3.5 hours/week of physical activity, compared to women whose significant others did not have health-promoting behaviors/attitudes. Young adult men whose significant others had health-promoting behaviors/attitudes were more likely to engage in ≥ 3.5 hours/week of physical activity compared to men whose significant others did not. Conclusions: Findings suggest the protective nature of the significant other with regard to weight-related health behaviors of young adults, particularly for young adult women. Obesity prevention efforts should consider the importance of including the significant other in interventions with young adult women and potentially men.

  4. A new method for mobile phone image denoising

    Science.gov (United States)

    Jin, Lianghai; Jin, Min; Li, Xiang; Xu, Xiangyang

    2015-12-01

    Images captured by mobile phone cameras via pipeline processing usually contain various kinds of noises, especially granular noise with different shapes and sizes in both luminance and chrominance channels. In chrominance channels, noise is closely related to image brightness. To improve image quality, this paper presents a new method to denoise such mobile phone images. The proposed scheme converts the noisy RGB image to luminance and chrominance images, which are then denoised by a common filtering framework. The common filtering framework processes a noisy pixel by first excluding the neighborhood pixels that significantly deviate from the (vector) median and then utilizing the other neighborhood pixels to restore the current pixel. In the framework, the strength of chrominance image denoising is controlled by image brightness. The experimental results show that the proposed method obviously outperforms some other representative denoising methods in terms of both objective measure and visual evaluation.
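
    The common filtering framework described above (exclude neighborhood pixels that deviate strongly from the median, then average the rest) can be sketched for a single scalar channel. The window size, deviation rule, and test image below are assumptions for illustration, not the authors' exact scheme:

    ```python
    import numpy as np

    def denoise_pixel(window, strength=1.5):
        """Restore the centre pixel of a small window: discard neighbours
        that deviate strongly from the window median, then average the
        rest (single-channel sketch of the common filtering framework)."""
        med = np.median(window)
        dev = np.abs(window - med)
        keep = window[dev <= strength * np.median(dev) + 1e-12]
        return keep.mean()

    rng = np.random.default_rng(3)
    img = np.full((32, 32), 100.0)                  # flat test patch
    noisy = img + rng.normal(0.0, 5.0, img.shape)   # Gaussian noise
    noisy[10, 10] = 255.0                           # impulsive outlier

    r = 1
    out = noisy.copy()
    for i in range(r, 32 - r):
        for j in range(r, 32 - r):
            out[i, j] = denoise_pixel(noisy[i - r:i + r + 1, j - r:j + r + 1])

    print(abs(out[10, 10] - 100.0) < abs(noisy[10, 10] - 100.0))  # True
    ```

    In the paper's full scheme the deviation test is vector-valued, and the rejection strength in the chrominance channels would additionally be modulated by the local brightness.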

  5. A proposed method for fast determination of plasma parameters

    International Nuclear Information System (INIS)

    Braams, B.J.; Lackner, K.

    1984-09-01

    The method of function parametrization, developed and applied by H. Wind for fast data evaluation in high energy physics, is presented in the context of controlled fusion research. This method relies on statistical analysis of a data base of simulated experiments in order to obtain a functional representation for the intrinsic physical parameters of a system in terms of the values of the measurements. Some variations on Wind's original procedure are suggested. A specific application for tokamak experiments would be the determination of certain global parameters of the plasma, characterizing the current profile, shape of the cross-section, plasma pressure, and the internal inductance. The relevant measurements for this application include values of the poloidal field and flux external to the plasma, and a diamagnetic measurement. These may be combined with other diagnostics, such as electron-cyclotron emission and laser interferometry, in order to obtain also density and temperature profiles. There appears to be a capability for on-line determination of basic physical parameters, in a millisecond timescale on a minicomputer instead of in seconds on a large mainframe. (orig.)

  6. Large subcriticality measurement by pulsed neutron method

    International Nuclear Information System (INIS)

    Yamane, Y.; Yoshida, A.; Nishina, K.; Kobayashi, K.; Kanda, K.

    1985-01-01

    To establish a method for determining large subcriticalities in the field of nuclear criticality safety, the authors performed pulsed neutron experiments using the Kyoto University Critical Assembly (KUCA) at the Research Reactor Institute, Kyoto University, and the Cockcroft-Walton type accelerator attached to the assembly. The area-ratio method proposed by Sjoestrand was employed to evaluate subcriticalities from the measured neutron decay curves. This method has the shortcoming that the neutron component due to the decay of delayed neutrons decreases remarkably as the subcriticality of an objective increases. To overcome the shortcoming, the authors increased the frequency of pulsed neutron generation. The integral version of the area-ratio method proposed by Kosaly and Fisher was employed in addition, in order to remove the contamination of spatial higher modes from the decay curve, which becomes significant as subcriticality increases. The largest subcriticality determined in the present experiments was 125.4 dollars, which corresponds to a multiplication factor of 0.5111. The values calculated by the computer code KENO-IV with 137 energy groups, based on the Monte Carlo method, agreed well with the experimental values

  7. Proposed Fermilab upgrade main injector project

    International Nuclear Information System (INIS)

    1992-04-01

    The US Department of Energy (DOE) proposes to construct and operate a ''Fermilab Main Injector'' (FMI), a 150 GeV proton injector accelerator, at the Fermi National Accelerator Laboratory (Fermilab) in Batavia, Illinois. The purpose and need for this action are given of this Environmental Assessment (EA). A description of the proposed FMI and construction activities are also given. The proposed FMI would be housed in an underground tunnel with a circumference of approximately 2.1 miles (3.4 kilometers), and the construction would affect approximately 135 acres of the 6,800 acre Fermilab site. The purpose of the proposed FMI is to construct and bring into operation a new 150 GeV proton injector accelerator. This addition to Fermilab's Tevatron would enable scientists to penetrate ever more deeply into the subatomic world through the detection of the super massive particles that can be created when a proton and antiproton collide head-on. The conversion of energy into matter in these collisions makes it possible to create particles that existed only an instant after the beginning of time. The proposed FMI would significantly extend the scientific reach of the Tevatron, the world's first superconducting accelerator and highest energy proton-antiproton collider

  8. [Significance of three-dimensional reconstruction as a method of preoperative planning of laparoscopic radiofrequency ablation].

    Science.gov (United States)

    Zhang, W W; Wang, H G; Shi, X J; Chen, M Y; Lu, S C

    2016-09-01

    To discuss the significance of three-dimensional reconstruction as a method of preoperative planning of laparoscopic radiofrequency ablation (LRFA). Thirty-two LRFA cases admitted from January 2014 to December 2015 in the Department of Hepatobiliary Surgery, Chinese People's Liberation Army General Hospital were analyzed (3D-LRFA group). Three-dimensional (3D) reconstruction was used as a method of preoperative planning in the 3D-LRFA group. Another 64 LRFA cases over the same period, without three-dimensional reconstruction before the operation, were paired as controls (LRFA group). Hepatobiliary contrast-enhanced CT scans of the 3D-LRFA patients were taken by multi-slice spiral computed tomography (MSCT), and the DICOM data were processed by IQQA(®)-Liver and IQQA(®)-guide for 3D reconstruction. Using the 3D reconstruction model, the diameter and scope of the tumor were measured; a suitable size (length and radiofrequency length) and number of RFA electrodes were chosen; the scope and effect of radiofrequency were simulated; a reasonable needle track was planned; and the position and angle of the laparoscopic ultrasound (LUS) probe were designed and the LUS image simulated. Data on operation and recovery were collected and analyzed. Measurement data between the two groups were compared with the t test or rank sum test, and count data with the χ(2) test or Fisher exact probability test. Tumor recurrence rate was analyzed with the Kaplan-Meier survival curve and log-rank (Mantel-Cox) test. Compared with the LRFA group ((216.8±66.2) minutes, (389.1±183.4) s), the 3D-LRFA group ((173.3±59.4) minutes, (242.2±90.8) s) had a shorter operation time (t=-3.138, P=0.002) and shorter mean puncture time (t=-2.340, P=0.021). There was no significant difference in blood loss (P=0.170), ablation rate (P=0.871) or incidence of complications (P=1.000). Compared with the LRFA group ((6.3±3.9) days, (330±102) U/L, (167±64) ng/L), the 3D-LRFA group ((4.3±3.1) days, (285±102) U/L, (139±43) ng/L) had a shorter post-operative stay (t=-2.527, P=0.016), less

  9. What if there were no significance tests?

    CERN Document Server

    Harlow, Lisa L; Steiger, James H

    2013-01-01

    This book is the result of a spirited debate stimulated by a recent meeting of the Society of Multivariate Experimental Psychology. Although the viewpoints span a range of perspectives, the overriding theme that emerges is that significance testing may still be useful if supplemented with some or all of the following: Bayesian logic, caution, confidence intervals, effect sizes and power, other goodness-of-approximation measures, replication and meta-analysis, sound reasoning, and theory appraisal and corroboration. The book is organized into five general areas. The first presents an overview of significance testing issues that synthesizes the highlights of the remainder of the book. The next discusses the debate over whether significance testing should be rejected or retained. The third outlines various methods that may supplement current significance testing procedures. The fourth discusses Bayesian approaches and methods and the use of confidence intervals versus significance tests. The last presents the p...

  10. A Finite Segment Method for Skewed Box Girder Analysis

    Directory of Open Access Journals (Sweden)

    Xingwei Xue

    2018-01-01

    Full Text Available A finite segment method is presented to analyze the mechanical behavior of skewed box girders. By modeling the top and bottom plates of the segments with skew plate beam elements under an inclined coordinate system and the webs with normal plate beam elements, a spatial elastic displacement model for the skewed box girder is constructed which satisfies the compatibility condition at the corners of the cross section. The formulation of the finite segment is developed based on the variational principle. The major advantage of the proposed approach, in comparison with the finite element method, is that it simplifies a three-dimensional structure into a one-dimensional structure for structural analysis, which results in significant savings in computational time. Finally, the accuracy and efficiency of the proposed finite segment method are verified by a model test.

  11. Online sequential condition prediction method of natural circulation systems based on EOS-ELM and phase space reconstruction

    International Nuclear Information System (INIS)

    Chen, Hanying; Gao, Puzhen; Tan, Sichao; Tang, Jiguo; Yuan, Hongsheng

    2017-01-01

    Highlights: •An online condition prediction method for natural circulation systems in NPPs was proposed based on EOS-ELM. •The proposed online prediction method was validated using experimental data. •The training speed of the proposed method is significantly fast. •The proposed method can achieve good accuracy in a wide parameter range. -- Abstract: Natural circulation designs are widely used in the passive safety systems of advanced nuclear power reactors. Irregular and chaotic flow oscillations are often observed in boiling natural circulation systems, so it is difficult for operators to monitor and predict the condition of these systems. An online condition forecasting method for natural circulation systems is proposed in this study as an assisting technique for plant operators. The proposed prediction approach was developed based on the Ensemble of Online Sequential Extreme Learning Machine (EOS-ELM) and phase space reconstruction. Online Sequential Extreme Learning Machine (OS-ELM) is an online sequential learning neural network algorithm, and EOS-ELM is its ensemble version. The proposed condition prediction method can be initiated with a small chunk of monitoring data and updated by newly arrived data at very fast speed during online prediction. Simulation experiments were conducted on the data of two natural circulation loops to validate the performance of the proposed method. The simulation results show that the proposed prediction model can successfully recognize different types of flow oscillations and accurately forecast the trend of monitored plant variables. The influence of the number of hidden nodes and neural network inputs on prediction performance was studied, and the proposed model can achieve good accuracy in a wide parameter range. Moreover, the comparison results show that the proposed condition prediction method has much faster online learning speed and better prediction accuracy than a conventional neural network model.
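
    OS-ELM builds on the batch Extreme Learning Machine, in which the hidden layer is random and only the output weights are learned, which is what makes the training step fast enough for online updating. A minimal batch-ELM sketch on an invented toy target (not the authors' EOS-ELM ensemble or plant data):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def elm_train(X, Y, n_hidden=50):
        """Batch Extreme Learning Machine: random hidden layer, output
        weights solved by least squares (the basis of OS-ELM's
        recursive sequential form)."""
        W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
        b = rng.normal(size=n_hidden)                 # random biases
        H = np.tanh(X @ W + b)                        # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # only beta is trained
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # Toy regression: predict sin(x) from x (stand-in for a monitored variable).
    X = np.linspace(0, np.pi, 200).reshape(-1, 1)
    Y = np.sin(X)
    W, b, beta = elm_train(X, Y)
    err = np.max(np.abs(elm_predict(X, W, b, beta) - Y))
    print(err)  # small training error on this smooth target
    ```

    OS-ELM replaces the single least-squares solve with a recursive update of beta as each new data chunk arrives, and EOS-ELM averages several such networks to stabilize the prediction.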

  12. Subdomain Precise Integration Method for Periodic Structures

    Directory of Open Access Journals (Sweden)

    F. Wu

    2014-01-01

    Full Text Available A subdomain precise integration method is developed for the dynamical responses of periodic structures comprising many identical structural cells. The proposed method is based on the precise integration method, the subdomain scheme, and the repeatability of the periodic structures. In the proposed method, each structural cell is seen as a super element that is solved using the precise integration method, considering the repeatability of the structural cells. The computational efforts and the memory size of the proposed method are reduced, while high computational accuracy is achieved. Therefore, the proposed method is particularly suitable to solve the dynamical responses of periodic structures. Two numerical examples are presented to demonstrate the accuracy and efficiency of the proposed method through comparison with the Newmark and Runge-Kutta methods.

  13. Alpins and thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one received an AcrySof® Toric intraocular lens (IOL) in both eyes and the other received an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between the Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected, and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the ratio of post- and preoperative Thibos APV (APVratio) and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) × 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.

  14. Board Game in Physics Classes—a Proposal for a New Method of Student Assessment

    Science.gov (United States)

    Dziob, Daniel

    2018-03-01

    The aim of this study was to examine the impact of assessing students' achievements in a physics course in the form of a group board game. Research was conducted in two groups of 131 high school students in Poland. In each school, the sample was divided into experimental and control groups. Each group was taught by the same teacher and participated in the same courses and tests before the game. Just after finishing the course on waves and vibrations (school 1) or optics (school 2), the experimental groups took part in a group board game to assess their knowledge. One week after the game, the experimental and control groups (the latter not involved in the game) took part in post-tests. Students from the experimental groups performed better in the game than in the tests given before the game, and their post-test results were statistically significantly higher than those of students from the control groups. At the same time, the opinions of students in the experimental groups about the board game as an assessment method were collected in an open descriptive form and in a short questionnaire, and analyzed. Results showed that students had a positive attitude toward the assessment method, reduced test anxiety, and increased motivation for learning.

  15. On jet substructure methods for signal jets

    Energy Technology Data Exchange (ETDEWEB)

    Dasgupta, Mrinal [Consortium for Fundamental Physics, School of Physics & Astronomy, University of Manchester,Oxford Road, Manchester M13 9PL (United Kingdom); Powling, Alexander [School of Physics & Astronomy, University of Manchester,Oxford Road, Manchester M13 9PL (United Kingdom); Siodmok, Andrzej [Institute of Nuclear Physics, Polish Academy of Sciences,ul. Radzikowskiego 152, 31-342 Kraków (Poland); CERN, PH-TH,CH-1211 Geneva 23 (Switzerland)

    2015-08-17

    We carry out simple analytical calculations and Monte Carlo studies to better understand the impact of QCD radiation on some well-known jet substructure methods for jets arising from the decay of boosted Higgs bosons. Understanding differences between taggers for these signal jets assumes particular significance in situations where they perform similarly on QCD background jets. As an explicit example of this we compare the Y-splitter method to the more recently proposed Y-pruning technique. We demonstrate how the insight we gain can be used to significantly improve the performance of Y-splitter by combining it with trimming and show that this combination outperforms the other taggers studied here at high p_T. We also make analytical estimates for optimal parameter values for a range of methods and compare to results from Monte Carlo studies.

  16. Two-Dimensional Impact Reconstruction Method for Rail Defect Inspection

    Directory of Open Access Journals (Sweden)

    Jie Zhao

    2014-01-01

    Full Text Available The safety of train operation is seriously threatened by rail defects, so it is of great significance to inspect rail defects dynamically while the train is operating. This paper presents a two-dimensional impact reconstruction method to realize on-line inspection of rail defects. The proposed method uses preprocessing technology to convert time-domain vertical vibration signals acquired by a wireless sensor network into spatial signals. A modern time-frequency analysis method is improved to reconstruct the obtained multisensor information. Then, image fusion processing technology based on spectrum threshold processing and node color labeling is proposed to reduce the noise and blank the periodic impact signals caused by rail joints and locomotive running gear. This method can convert the aperiodic impact signals caused by rail defects into partially periodic impact signals and locate the rail defects. An application shows that the two-dimensional impact reconstruction method displays the impact caused by rail defects clearly and is an effective on-line rail defect inspection method.

  17. Coordinate alignment of combined measurement systems using a modified common points method

    Science.gov (United States)

    Zhao, G.; Zhang, P.; Xiao, W.

    2018-03-01

    Coordinate metrology has been extensively researched for its outstanding advantages in measurement range and accuracy. The alignment of different measurement systems is usually achieved by integrating local coordinates via common points before measurement. The alignment errors accumulate and significantly reduce the global accuracy, and thus need to be minimized. In this paper, a modified common points method (MCPM) is proposed to combine the different traceable system errors of the cooperating machines and to optimize the global accuracy by introducing mutual geometric constraints. The geometric constraints, obtained by measuring the common points in the individual local coordinate systems, make it possible to reduce the local measuring uncertainty and thereby enhance the global measuring accuracy. A simulation system is developed in Matlab to analyze the features of MCPM using the Monte Carlo method. An exemplary setup is constructed to verify the feasibility and efficiency of the proposed method with laser tracker and indoor iGPS systems. Experimental results show that MCPM can significantly improve the alignment accuracy.
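
    Aligning two frames from common points is classically solved by the SVD-based least-squares rigid transform (the Kabsch solution). The sketch below illustrates that baseline alignment step on invented noise-free points; it does not include MCPM's additional error-combination constraints:

    ```python
    import numpy as np

    def align(P, Q):
        """Least-squares rigid transform (R, t) mapping points P onto Q,
        via the SVD-based Kabsch solution; P, Q are N x 3 common points."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                 # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t

    rng = np.random.default_rng(5)
    P = rng.normal(size=(6, 3))                    # common points, local frame A
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])  # same points, local frame B
    R, t = align(P, Q)
    residual = np.max(np.abs(P @ R.T + t - Q))
    print(residual)  # ~0: exact recovery without measurement noise
    ```

    With noisy common-point measurements the residual no longer vanishes, which is exactly the uncertainty MCPM aims to reduce by adding mutual geometric constraints between the cooperating systems.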

  18. An Efficient Explicit-time Description Method for Timed Model Checking

    Directory of Open Access Journals (Sweden)

    Hao Wang

    2009-12-01

    Full Text Available Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick to simulate the passage of time together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable were proposed; they both achieve better modularity than Lamport's method in modeling the real-time systems. In contrast to timed automata based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both the time and memory efficiency.
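
    The leaping-Tick idea can be sketched as follows; this is a minimal conceptual illustration (advancing directly to the next pending deadline rather than one time unit per tick), not the authors' model-checker encoding:

```python
import heapq

def run(timers, horizon):
    """Explicit-time execution where each Tick leaps directly to the next
    pending deadline instead of incrementing time by one unit."""
    now, fired, ticks = 0, [], 0
    heap = list(timers)            # (deadline, name) pairs
    heapq.heapify(heap)
    while heap and now < horizon:
        deadline, name = heapq.heappop(heap)
        now = deadline             # one Tick covers multiple time units
        ticks += 1
        fired.append((now, name))
    return fired, ticks

# Three ticks suffice where a unit-step Tick process would need 251
events, ticks = run([(100, "t1"), (250, "t2"), (251, "t3")], 1000)
```

    In a model checker the same leap shrinks the number of distinct time-valued states, which is the state-space reduction the paper reports.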

  19. A Clustering K-Anonymity Privacy-Preserving Method for Wearable IoT Devices

    Directory of Open Access Journals (Sweden)

    Fang Liu

    2018-01-01

    Full Text Available Wearable technology is one of the greatest applications of the Internet of Things. The popularity of wearable devices has led to a massive scale of personal (user-specific) data. Generally, data holders (manufacturers of wearable devices) are willing to share these data with others to gain benefits. However, significant privacy concerns arise when the data are shared with a third party in an improper manner. In this paper, we first propose a specific threat model for the data sharing process of wearable devices’ data. We then propose a clustering-based K-anonymity method to preserve the privacy of wearable IoT device data while guaranteeing the usability of the collected data. Experimental results demonstrate the effectiveness of the proposed method.
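
    The clustering-plus-generalization idea can be sketched for a single numeric quasi-identifier; this is a minimal illustration of clustering-based k-anonymity (greedy grouping of at least k records, each generalized to its group's range), not the authors' algorithm:

```python
def cluster_k_anonymize(values, k):
    """Greedy clustering k-anonymity for one numeric quasi-identifier:
    sort the records, cut them into groups of at least k, and replace
    each value by its group's [min, max] generalization."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    cuts = list(range(0, len(order), k))
    if len(cuts) > 1 and len(order) - cuts[-1] < k:
        cuts.pop()                 # merge a short tail into the last group
    anon = [None] * len(values)
    for start, end in zip(cuts, cuts[1:] + [len(order)]):
        group = order[start:end]
        lo = min(values[i] for i in group)
        hi = max(values[i] for i in group)
        for i in group:
            anon[i] = (lo, hi)     # generalized quasi-identifier
    return anon

ages = [23, 25, 31, 34, 35, 47, 52]
anon_ages = cluster_k_anonymize(ages, 3)   # every range covers >= 3 records
```

    Each published record is then indistinguishable from at least k-1 others on the quasi-identifier, while tight clusters keep the generalized ranges narrow and the data usable.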

  20. New implementation method for essential boundary condition to extended element-free Galerkin method. Application to nonlinear problem

    International Nuclear Information System (INIS)

    Saitoh, Ayumu; Matsui, Nobuyuki; Itoh, Taku; Kamitani, Atsushi; Nakamura, Hiroaki

    2011-01-01

    A new method has been proposed for implementing essential boundary conditions to the Element-Free Galerkin Method (EFGM) without using the Lagrange multiplier. Furthermore, the performance of the proposed method has been investigated for a nonlinear Poisson problem. The results of computations show that, as interpolation functions become closer to delta functions, the accuracy of the solution is improved on the boundary. In addition, the accuracy of the proposed method is higher than that of the conventional EFGM. Therefore, it might be concluded that the proposed method is useful for solving the nonlinear Poisson problem. (author)

  1. 48 CFR 815.404-1 - Proposal analysis techniques.

    Science.gov (United States)

    2010-10-01

    ... CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Contract Pricing 815.404-1 Proposal analysis... necessary for initial and revised pricing of all negotiated prime contracts, including subcontract pricing...

  2. A philosophical taxonomy of ethically significant moral distress.

    Science.gov (United States)

    Thomas, Tessy A; McCullough, Laurence B

    2015-02-01

    Moral distress is one of the core topics of clinical ethics. Although there is a large and growing empirical literature on the psychological aspects of moral distress, scholars and empirical investigators of moral distress have recently called for greater conceptual clarity. To meet this recognized need, we provide a philosophical taxonomy of the categories of what we call ethically significant moral distress: the judgment that one is not able, to differing degrees, to act on one's moral knowledge about what one ought to do. We begin by unpacking the philosophical components of Andrew Jameton's original formulation from his landmark 1984 work and identify two key respects in which that formulation remains unclear: the origins of moral knowledge and impediments to acting on that moral knowledge. We then selectively review subsequent literature that shows that there is more than one concept of moral distress and that explores the origin of the values implicated in moral distress and impediments to acting on those values. This review sets the stage for identifying the elements of a philosophical taxonomy of ethically significant moral distress. The taxonomy uses these elements to create six categories of ethically significant moral distress: challenges to, threats to, and violations of professional integrity; and challenges to, threats to, and violations of individual integrity. We close with suggestions about how the proposed philosophical taxonomy of ethically significant moral distress sheds light on the concepts of moral residue and the crescendo effect of moral distress and how the proposed taxonomy might usefully guide prevention of and future qualitative and quantitative empirical research on ethically significant moral distress. © The Author 2014. Published by Oxford University Press, on behalf of the Journal of Medicine and Philosophy Inc. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    Science.gov (United States)

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur. Local blur caused by global light transport, such as camera defocus, projector defocus, and subsurface scattering, introduces significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate for the phase errors. For the defocus phenomenon, the method can be applied directly. With the aid of spatially varying point spread functions and a local frontal-plane assumption, experiments show that the proposed method effectively alleviates the systematic errors and improves the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also correct the systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  4. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    Science.gov (United States)

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

    Multi-scale models of the cardiovascular system provide new insight that is unavailable from in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective for analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. To simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate the different phenomena. However, coupling methods require a significant amount of computation, since a system of non-linear equations must be solved at each timestep. We therefore propose a coupling method that decreases the amount of computation by using the Kalman filter. In our method, the Kalman filter calculates an approximation of the solution to the system of non-linear equations at each timestep, which is then used as the initial value for solving the system. The proposed method decreases the number of iterations required by 94.0% compared with the conventional strong coupling method, and requires 49.4% fewer iterations than a smoothing spline predictor.
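
    The benefit of a good per-timestep initial guess can be illustrated with a toy scalar problem; here a simple linear extrapolation stands in for the Kalman filter prediction, so this is only a conceptual sketch, not the authors' cardiovascular coupling scheme:

```python
import math

def newton(g, dg, x0, tol=1e-10, max_iter=100):
    """Newton iteration; returns (root, number of iterations used)."""
    x = x0
    for n in range(1, max_iter + 1):
        step = g(x) / dg(x)
        x -= step
        if abs(step) < tol:
            return x, n
    return x, max_iter

# Per-timestep nonlinear equation x**3 + x - a(t) = 0 with slowly drifting a(t)
roots, naive_iters, pred_iters = [], 0, 0
for k in range(1, 50):
    a = math.sin(0.1 * k) + 2.0
    g = lambda x, a=a: x**3 + x - a
    dg = lambda x: 3 * x**2 + 1
    prev = roots[-1] if roots else 0.0
    _, n1 = newton(g, dg, prev)                      # reuse last solution
    guess = 2 * roots[-1] - roots[-2] if len(roots) >= 2 else prev
    x, n2 = newton(g, dg, guess)                     # predicted initial value
    naive_iters += n1
    pred_iters += n2
    roots.append(x)
# The extrapolated guess (standing in for the filter prediction) needs no
# more Newton iterations in total than simply reusing the last solution.
```

    The paper's Kalman filter plays the role of the extrapolation here: the closer the initial value is to the true solution, the fewer nonlinear iterations the coupled step needs.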

  5. Global positioning method based on polarized light compass system

    Science.gov (United States)

    Liu, Jun; Yang, Jiangtao; Wang, Yubo; Tang, Jun; Shen, Chong

    2018-05-01

    This paper presents a global positioning method based on a polarized light compass system. A main limitation of polarization-based positioning is the environment, such as weak or locally destroyed polarization conditions; the solution given in this paper is polarization image de-noising and segmentation, for which a pulse-coupled neural network is employed to enhance positioning performance. The prominent advantages of the present positioning technique are as follows: (i) compared to existing positioning methods based on polarized light, better sun tracking accuracy can be achieved, and (ii) the robustness and accuracy of positioning under weak and locally destroyed polarization environments, such as cloud cover or building shielding, are improved significantly. Finally, field experiments demonstrate the effectiveness and applicability of the proposed global positioning technique. The experiments show that the proposed method outperforms the conventional polarization positioning method, providing real-time longitude and latitude with accuracies up to 0.0461° and 0.0911°, respectively.

  6. Identification of influential spreaders in online social networks using interaction weighted K-core decomposition method

    Science.gov (United States)

    Al-garadi, Mohammed Ali; Varathan, Kasturi Dewi; Ravana, Sri Devi

    2017-02-01

    Online social networks (OSNs) have become a vital part of everyday life. OSNs provide researchers and scientists with unique opportunities to study individuals at scale and to analyze human behavioral patterns. Identification of influential spreaders is an important subject in understanding the dynamics of information diffusion in OSNs. Targeting these influential spreaders is significant for planning techniques that accelerate the propagation of useful information, as in viral marketing applications, or that block the diffusion of unwanted information (spreading of viruses, rumors, online negative behaviors, and cyberbullying). Existing K-core decomposition methods consider links equally when calculating influential spreaders in unweighted networks; previously proposed link weights, alternatively, are based only on node degree. Thus, if a node is linked to high-degree nodes, the node receives a high weight and is treated as important. However, node degree in the OSN context does not always reflect the actual influence of users. In the present study, we improve the K-core method for OSNs by proposing a novel link-weighting method based on the interaction among users. The proposed method is based on the observation that user interaction is a significant factor in quantifying the spreading capability of a user in OSNs. Tracking of diffusion links in the real spreading dynamics of information verifies the effectiveness of our proposed method for identifying influential spreaders in OSNs compared with degree centrality, PageRank, and the original K-core.
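
    The interaction-weighted K-core idea can be sketched with a generalized (weighted-degree) core decomposition; this is an illustrative baseline in which link weights would come from user interaction counts, not the authors' exact formulation:

```python
def weighted_kcore(edges):
    """Generalized (weighted) k-core decomposition: repeatedly remove the
    node with the smallest remaining weighted degree; a node's core number
    is the removal threshold in force when it is deleted."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, {})[v] = w
        adj.setdefault(v, {})[u] = w
    core, level = {}, 0
    while adj:
        u = min(adj, key=lambda n: sum(adj[n].values()))
        level = max(level, sum(adj[u].values()))
        core[u] = level
        for v in adj.pop(u):
            adj[v].pop(u)
            if not adj[v]:           # neighbour left with no edges
                core[v] = level
                del adj[v]
    return core

# Hypothetical interaction-weighted links (weight = interaction count)
interactions = [("a", "b", 3), ("a", "c", 1), ("b", "c", 2), ("c", "d", 1)]
core = weighted_kcore(interactions)
```

    Nodes with the highest core numbers sit in the most densely interacting part of the network and are the candidate influential spreaders under this weighting.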

  7. 75 FR 16706 - Proposed Significant New Use Rule for 1-Propene, 2,3,3,3-tetrafluoro-

    Science.gov (United States)

    2010-04-02

    ... most flammable refrigerants, including the PMN substance, in existing MVAC systems as a retrofit has... section 5(a)(2) of the Toxic Substances Control Act (TSCA) for the chemical substance identified as 1... substance for an activity that is designated as a significant new use to notify EPA at least 90 days before...

  8. Atmospheric pollution problems and control proposals associated with solid waste management in China: A review

    International Nuclear Information System (INIS)

    Tian, Hezhong; Gao, Jiajia; Hao, Jiming; Lu, Long; Zhu, Chuanyong; Qiu, Peipei

    2013-01-01

    Highlights: ► Air pollution problems generated in MSW management processes in China are presented. ► Both the quantity and composition of MSW generation in China are identified. ► The status of different methods for MSW treatment and disposal is reviewed. ► Some comprehensive control proposals for improving MSW management are proposed. -- Abstract: Along with population growth and rapid urbanization and industrialization, the volume of municipal solid waste (MSW) generated in China has increased sharply over the past 30 years, and the total amount of MSW will continue to increase. Nowadays, as global warming warrants particular attention throughout the world, a series of air pollutants (including greenhouse gases, odorous gases, PCDD/Fs, heavy metals, PM, etc.) discharged from waste disposal and treatment processes has become a significant new emerging air pollution source, arousing great concern about adverse effects on surrounding ambient air quality and public health. At present, the overall ratio of collected MSW that is safely disposed of in China was reported at approximately 78% in 2010, and three main types of MSW disposal methods are practiced in China: landfill, composting and incineration. The characteristics of air pollutant and greenhouse gas discharge vary substantially among the different MSW disposal methods. By presenting a thorough review of MSW generation in China and summarizing the current status of MSW disposal practices, this review article provides an integrated overview analysis of the existing air pollution problems associated with MSW collection, separation, and disposal processes. Furthermore, some comprehensive control proposals to prevent air pollution and improve MSW management in China are put forward

  9. Atmospheric pollution problems and control proposals associated with solid waste management in China: A review

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Hezhong, E-mail: hztian@bnu.edu.cn [State Key Joint Laboratory of Environmental Simulation and Pollution Control, School of Environment, Beijing Normal University, Beijing 100875 (China); Gao, Jiajia [State Key Joint Laboratory of Environmental Simulation and Pollution Control, School of Environment, Beijing Normal University, Beijing 100875 (China); Hao, Jiming [School of Environment, Tsinghua University, Beijing 100084 (China); Lu, Long; Zhu, Chuanyong; Qiu, Peipei [State Key Joint Laboratory of Environmental Simulation and Pollution Control, School of Environment, Beijing Normal University, Beijing 100875 (China)

    2013-05-15

    Highlights: ► Air pollution problems generated in MSW management processes in China are presented. ► Both the quantity and composition of MSW generation in China are identified. ► The status of different methods for MSW treatment and disposal is reviewed. ► Some comprehensive control proposals for improving MSW management are proposed. -- Abstract: Along with population growth and rapid urbanization and industrialization, the volume of municipal solid waste (MSW) generated in China has increased sharply over the past 30 years, and the total amount of MSW will continue to increase. Nowadays, as global warming warrants particular attention throughout the world, a series of air pollutants (including greenhouse gases, odorous gases, PCDD/Fs, heavy metals, PM, etc.) discharged from waste disposal and treatment processes has become a significant new emerging air pollution source, arousing great concern about adverse effects on surrounding ambient air quality and public health. At present, the overall ratio of collected MSW that is safely disposed of in China was reported at approximately 78% in 2010, and three main types of MSW disposal methods are practiced in China: landfill, composting and incineration. The characteristics of air pollutant and greenhouse gas discharge vary substantially among the different MSW disposal methods. By presenting a thorough review of MSW generation in China and summarizing the current status of MSW disposal practices, this review article provides an integrated overview analysis of the existing air pollution problems associated with MSW collection, separation, and disposal processes. Furthermore, some comprehensive control proposals to prevent air pollution and improve MSW management in China are put forward.

  10. Fast Temporal Activity Proposals for Efficient Detection of Human Actions in Untrimmed Videos

    KAUST Repository

    Heilbron, Fabian Caba; Niebles, Juan Carlos; Ghanem, Bernard

    2016-01-01

    In many large-scale video analysis scenarios, one is interested in localizing and recognizing human activities that occur in short temporal intervals within long untrimmed videos. Current approaches for activity detection still struggle to handle large-scale video collections, and the task remains relatively unexplored. This is in part due to the computational complexity of current action recognition approaches and the lack of a method that proposes fewer intervals in the video on which activity processing can be focused. In this paper, we introduce a proposal method that aims to recover temporal segments containing actions in untrimmed videos. Building on techniques for learning sparse dictionaries, we introduce a learning framework to represent and retrieve activity proposals. We demonstrate the capabilities of our method not only in producing high-quality proposals but also in its efficiency. Finally, we show the positive impact our method has on recognition performance when used for action detection, while running at 10 FPS.

  11. Fast Temporal Activity Proposals for Efficient Detection of Human Actions in Untrimmed Videos

    KAUST Repository

    Heilbron, Fabian Caba

    2016-12-13

    In many large-scale video analysis scenarios, one is interested in localizing and recognizing human activities that occur in short temporal intervals within long untrimmed videos. Current approaches for activity detection still struggle to handle large-scale video collections, and the task remains relatively unexplored. This is in part due to the computational complexity of current action recognition approaches and the lack of a method that proposes fewer intervals in the video on which activity processing can be focused. In this paper, we introduce a proposal method that aims to recover temporal segments containing actions in untrimmed videos. Building on techniques for learning sparse dictionaries, we introduce a learning framework to represent and retrieve activity proposals. We demonstrate the capabilities of our method not only in producing high-quality proposals but also in its efficiency. Finally, we show the positive impact our method has on recognition performance when used for action detection, while running at 10 FPS.

  12. Accurate Modeling Method for Cu Interconnect

    Science.gov (United States)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We extracted the model parameters for 0.15 μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, we verified that the model can be applied to more advanced technologies (90 nm, 65 nm and 55 nm CMOS). Since the interconnect delay variations due to these processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  13. A "conservative" method of thoracic wall dissection: a proposal for teaching human anatomy.

    Science.gov (United States)

    Barberini, Fabrizio; Brunone, Francesca

    2008-01-01

    The common methods of dissection exposing the thoracic organs involve crossing the wall together with wide resection of its muscular planes. In order to preserve these structures, a minimally demolishing technique for opening the thoracic wall is proposed, entering the thoracic cavity without extensive resection of the pectoral muscles. This method is based on the fact that these muscles rise up from the wall, like a bridge connecting the costal plane with the upper limb, and that the pectoralis major shows a segmental constitution. SUPERIOR LIMIT: Resect the sternal manubrium transversely between the 1st and the 2nd rib. The incision is prolonged along the 1st intercostal space, separating the first sterno-costal segment of the pectoralis major from the second one, and involving the intercostal muscles as far as the medial margin of the pectoralis minor. This muscle must be raised up, and the transverse resection continued below its medial margin latero-medially along the 1st intercostal space, to rejoin the cut performed before. Then, the incision of the 1st intercostal space is prolonged below the lateral margin of the pectoralis minor, which must be kept raised up, medio-laterally as far as the anterior axillary line. INFERIOR LIMIT: It corresponds to the inferior border of the thoracic cage, resected from the xiphoid process to the anterior axillary line, together with the sterno-costal insertions of the diaphragm. Then, an incision of the sterno-pericardial ligaments and a median sternotomy from the xiphoid process to the transverse resection of the manubrium should be performed. LATERAL LIMIT: From the point of crossing of the anterior axillary line with the inferior limit, resect the ribs from the 10th to the 2nd one. The lateral part of the pectoralis major must be raised up, so that the costal resection may be continued below it.
Then, at the lateral extremity of the superior incision, the first and the second sternocostal segment of the pectoralis major must be

  14. A holistic calibration method with iterative distortion compensation for stereo deflectometry

    Science.gov (United States)

    Xu, Yongjia; Gao, Feng; Zhang, Zonghua; Jiang, Xiangqian

    2018-07-01

    This paper presents a novel holistic calibration method for stereo deflectometry systems to improve system measurement accuracy. The reconstruction result of stereo deflectometry is integrated with the calculated normal data of the measured surface. The calculation accuracy of the normal data is strongly influenced by the calibration accuracy of the geometrical relationship of the stereo deflectometry system. Conventional calibration approaches introduce form error into the system due to inaccurate imaging models and distortion elimination. The proposed calibration method compensates system distortion with an iterative algorithm instead of the conventional mathematical distortion model. The initial values of the system parameters are calculated from the fringe patterns displayed on the system's LCD screen through reflection off a markerless flat mirror. An iterative algorithm is proposed to compensate system distortion and to optimize the camera imaging parameters and the system geometrical relation parameters based on a cost function. Both simulation work and experimental results show that the proposed calibration method can significantly improve the calibration and measurement accuracy of a stereo deflectometry system. The PV (peak-to-valley) measurement error for a flat mirror is reduced to 69.7 nm by the proposed method, from 282 nm obtained with the conventional calibration approach.

  15. FPGA Implementation of the Coupled Filtering Method and the Affine Warping Method.

    Science.gov (United States)

    Zhang, Chen; Liang, Tianzhu; Mok, Philip K T; Yu, Weichuan

    2017-07-01

    In ultrasound image analysis, the speckle tracking methods are widely applied to study the elasticity of body tissue. However, "feature-motion decorrelation" still remains as a challenge for the speckle tracking methods. Recently, a coupled filtering method and an affine warping method were proposed to accurately estimate strain values, when the tissue deformation is large. The major drawback of these methods is the high computational complexity. Even the graphics processing unit (GPU)-based program requires a long time to finish the analysis. In this paper, we propose field-programmable gate array (FPGA)-based implementations of both methods for further acceleration. The capability of FPGAs on handling different image processing components in these methods is discussed. A fast and memory-saving image warping approach is proposed. The algorithms are reformulated to build a highly efficient pipeline on FPGA. The final implementations on a Xilinx Virtex-7 FPGA are at least 13 times faster than the GPU implementation on the NVIDIA graphic card (GeForce GTX 580).

  16. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects the structural characteristics and the actual bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals. Therefore, a dual neural network method is proposed in this paper for calculating the multiple integrals. The dual neural network consists of two neural networks: network A is used to learn the integrand, and network B is used to simulate the corresponding primitive function (antiderivative). Based on the derivative relationship between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of the multiple integrations and to improve the accuracy of the reliability calculation. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean-value first-order second-moment method demonstrate that the proposed method is an efficient and accurate method for structural reliability problems.
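
    The multiple integral targeted by the direct integration method can be illustrated on a toy two-variable problem, using brute-force midpoint quadrature in place of the dual neural network; the limit state g = R − S with independent normal R and S has a closed-form failure probability to check against:

```python
import math

def phi(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def direct_integration_pf(mu_r, s_r, mu_s, s_s, n=600):
    """P(g < 0) for g = R - S, R ~ N(mu_r, s_r), S ~ N(mu_s, s_s),
    by midpoint quadrature of the joint density over the failure domain."""
    lo_r, hi_r = mu_r - 6 * s_r, mu_r + 6 * s_r
    lo_s, hi_s = mu_s - 6 * s_s, mu_s + 6 * s_s
    dr, ds = (hi_r - lo_r) / n, (hi_s - lo_s) / n
    rs = [lo_r + (i + 0.5) * dr for i in range(n)]
    ss = [lo_s + (j + 0.5) * ds for j in range(n)]
    f_r = [phi(r, mu_r, s_r) for r in rs]
    f_s = [phi(s, mu_s, s_s) for s in ss]
    pf = 0.0
    for i, r in enumerate(rs):
        for j, s in enumerate(ss):
            if r - s < 0:              # failure domain: g(r, s) < 0
                pf += f_r[i] * f_s[j]
    return pf * dr * ds

# Toy resistance/load case with a closed-form check: Pf = Phi(-beta),
# beta = (mu_R - mu_S) / sqrt(sR**2 + sS**2)
pf = direct_integration_pf(200.0, 20.0, 150.0, 10.0)
beta = (200.0 - 150.0) / math.sqrt(20.0**2 + 10.0**2)
exact = 0.5 * math.erfc(beta / math.sqrt(2))
```

    Grid quadrature like this scales exponentially with the number of random variables, which is exactly the difficulty the paper's dual-network antiderivative construction is meant to sidestep.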

  17. 76 FR 56226 - Proposed Collection; Comment Request

    Science.gov (United States)

    2011-09-12

    ... Bureau. Since its creation in 1988, the BSRL has advanced the study of survey methods research... the cognitive, statistical, and social sciences. The BSRL research focuses primarily on the assessment... the ``Cognitive and Psychological Research.'' A copy of the proposed information collection request...

  18. Signal predictions for a proposed fast neutron interrogation method

    International Nuclear Information System (INIS)

    Sale, K.E.

    1992-12-01

    We have applied the Monte Carlo radiation transport code COG to assess the utility of a proposed explosives detection scheme based on neutron emission. In this scheme, a pulsed neutron beam is generated by an approximately seven MeV deuteron beam incident on a thick Be target. A scintillation detector operating in current mode measures the neutrons transmitted through the object as a function of time. The flight time of unscattered neutrons from the source to the detector is simply related to the neutron energy. This information, along with neutron cross-section excitation functions, is used to infer the densities of H, C, N and O in the volume sampled. The code we have chosen enables us to create very detailed and realistic models of the geometrical configuration of the system, the neutron source, and the detector response. By calculating the signals that will be observed for several configurations and compositions of the interrogated object, we can investigate and begin to understand how a system that could actually be fielded will perform. Using this modeling capability, many design questions can be addressed early on, with substantial savings in time and cost and with improvements in performance. We will present our signal predictions for simple single-element test cases and for explosive compositions. From these studies it is clear that the interpretation of the signals from such an explosives identification system will pose a substantial challenge

  19. A Proposal of Product Development Collaboration Method Using User Support Information and its Experimental Evaluation

    Science.gov (United States)

    Tanaka, Mitsuru; Kataoka, Masatoshi; Koizumi, Hisao

    As the market changes more rapidly and new products continue to become more complex and multifunctional, product development collaboration with competent partners and leading users is becoming more important for bringing new products that succeed in the market to completion in a timely manner. ECM (engineering chain management) and SCM (supply chain management) are supply-side approaches to this collaboration. In this paper, we propose a demand-side approach to product development collaboration with users, based on the information gathered through user support interactions. The proposed approach and methodology were applied to a real data set, and their effectiveness was verified.

  20. Comparative analysis of methods for integrating various environmental impacts as a single index in life cycle assessment

    International Nuclear Information System (INIS)

    Ji, Changyoon; Hong, Taehoon

    2016-01-01

    Previous studies have proposed several methods for integrating characterized environmental impacts into a single index in life cycle assessment. Each of them, however, may lead to different results. This study examines, as integration methods, internal and external normalization, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study of five elementary school buildings. When internal normalization with weighting factors was used, the weighting factors had a significant influence on the total environmental impacts, whereas the normalization had little influence. When external normalization with weighting factors was used, the normalization had a more significant influence on the total environmental impacts than the weighting factors. Due to such differences, the ranking of the five buildings varied depending on the integration method. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods and, finally, help them select the method most appropriate for the goal at hand.
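
    The normalization-and-weighting aggregation discussed above can be sketched as follows; the category names, impact values and reference values are all hypothetical, and the example shows how internal versus external normalization can change the ranking of alternatives:

```python
def single_index(alternatives, weights, refs=None):
    """Collapse characterized impacts into one score per alternative.
    alternatives: {name: {category: value}}; weights: {category: weight};
    refs: external reference values per category, or None for internal
    normalization (divide by each category's maximum across alternatives)."""
    cats = list(weights)
    if refs is None:               # internal normalization
        refs = {c: max(a[c] for a in alternatives.values()) for c in cats}
    return {name: sum(weights[c] * a[c] / refs[c] for c in cats)
            for name, a in alternatives.items()}

# Hypothetical characterized impacts for two buildings (GWP, AP) and weights
buildings = {"A": {"GWP": 1200.0, "AP": 4.0},
             "B": {"GWP": 1000.0, "AP": 6.0}}
w = {"GWP": 0.6, "AP": 0.4}
internal = single_index(buildings, w)
external = single_index(buildings, w, refs={"GWP": 2000.0, "AP": 100.0})
# With these made-up numbers the two schemes rank A and B differently.
```

    The flip arises because internal normalization rescales each category by the alternatives themselves, while external references fix the scale independently of the case study, which is the sensitivity the abstract reports.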

  1. Comparative analysis of methods for integrating various environmental impacts as a single index in life cycle assessment

    Energy Technology Data Exchange (ETDEWEB)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    2016-02-15

    Previous studies have proposed several methods for integrating characterized environmental impacts as a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study in which five elementary school buildings were used. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts whereas the normalization had little influence on the total environmental impacts. When using external normalization with weighting factors, the normalization had a more significant influence on the total environmental impacts than the weighting factors. Due to such differences, the ranking of the five buildings varied depending on the integration methods. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods, and, finally, help them select the method most appropriate for the goal at hand.

  2. Proposal for an ecoradiological centre model

    International Nuclear Information System (INIS)

    Perovic, S.M.; Zunic, Z.; Demajo, M.; Konjevic, N.

    1998-01-01

    The problem of establishing an optimal Ecoradiological Centre Model is studied in some detail for the town of Kotor, which is under the protection of the World Cultural and Natural Heritage. The proposed structure of the Centre is analyzed from the engineering, educational, and scientific points of view. This Model is suitable for implementation as a network Centre Model for the state of Montenegro. Further, the modelling strategy for ecoradiological condition control of natural, construction, biological, and technological systems is elaborated. The proposal includes ecoradiological monitoring, radioactive and electromagnetic radiation processing, and protection for different natural zones as well as their different geostructures and aerial and hydrogeological conditions. The programme also covers all housing objects (hotels, flats, houses, office premises, etc.). Also presented are radiation protection measures and recommendations for the implementation of Title VII of the European Basic Safety Standards Directive (BSS), concerning significant increases in exposure due to natural radiation sources, together with a proposal for Local Radiation Protection for the town of Kotor. Our proposal for an Ecoradiological Centre Model is presented here in the form of a pilot programme, applicable also to other towns and states. (author)

  3. Effective Solar Indices for Ionospheric Modeling: A Review and a Proposal for a Real-Time Regional IRI

    Science.gov (United States)

    Pignalberi, A.; Pezzopane, M.; Rizzi, R.; Galkin, I.

    2018-01-01

    The first part of this paper reviews methods that use effective solar indices to update a background ionospheric model, focusing on those employing the Kriging method for spatial interpolation. It then proposes a method to update the International Reference Ionosphere (IRI) model through the assimilation of data collected by a European ionosonde network. The method, called International Reference Ionosphere UPdate (IRI UP), can potentially operate in real time; it is mathematically described and validated for the period 9-25 March 2015 (a time window including the well-known St. Patrick's Day storm that occurred on 17 March), using the IRI and IRI Real Time Assimilative Model (IRTAM) models as references. It relies on the foF2 and M(3000)F2 ionospheric characteristics, recorded routinely by a network of 12 European ionosonde stations, which are used to calculate for each station effective values of the IRI indices IG12 and R12 (denoted IG12eff and R12eff); then, starting from this discrete dataset of values, two-dimensional (2D) maps of IG12eff and R12eff are generated through the universal Kriging method. Five variogram models are proposed and tested statistically to select the best performer for each effective index. The computed maps of IG12eff and R12eff are then used in the IRI model to synthesize updated values of foF2 and hmF2. To evaluate the ability of the proposed method to reproduce the rapid local changes that are common under disturbed conditions, quality metrics are calculated for the IRI, IRI UP, and IRTAM models at two test stations whose measurements were not assimilated in IRI UP, Fairford (51.7°N, 1.5°W) and San Vito (40.6°N, 17.8°E). The proposed method turns out to be very effective under highly disturbed conditions, with significant improvements in the foF2 representation and noticeable improvements in the hmF2 one. Important improvements have been verified also for quiet and moderately disturbed
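    The station-to-map step can be illustrated with a minimal ordinary-kriging interpolator. The paper uses universal Kriging with statistically selected variogram models; here the drift terms are omitted, the exponential variogram parameters are invented, and the station coordinates and index values are toy stand-ins:

```python
import math

def variogram(h, sill=1.0, rng=10.0, nugget=0.0):
    # Exponential variogram model, one of the commonly tested families.
    return nugget + sill * (1.0 - math.exp(-h / rng))

def solve(A, b):
    # Plain Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krige(stations, values, target):
    # Ordinary kriging: weights solve the variogram system under the
    # unbiasedness constraint (weights sum to one, via a Lagrange multiplier).
    n = len(stations)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    A = [[variogram(dist(stations[i], stations[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [variogram(dist(s, target)) for s in stations] + [1.0]
    w = solve(A, b)[:n]
    return sum(wi * vi for wi, vi in zip(w, values))

stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # hypothetical coordinates
values = [95.0, 105.0, 100.0]                      # hypothetical IG12eff values
print(krige(stations, values, (2.0, 2.0)))         # map value off the stations
```

    With a zero nugget the interpolator is exact at the stations, so assimilated ionosonde values are honored while the map varies smoothly between them.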

  4. CASE METHOD. ACTIVE LEARNING METHODOLOGY TO ACQUIRE SIGNIFICANT LEARNING IN CHEMISTRY

    Directory of Open Access Journals (Sweden)

    Clotilde Pizarro

    2015-09-01

    Full Text Available In this paper, the case methodology is applied to first-year students of Engineering in Risk Prevention and the Environment. For this purpose, a real case of contamination that occurred at a school called "La Greda" in the Valparaíso region is presented. The activity starts by delivering an extract of the information collected from the media, together with a brief induction on the methodology to be applied. A plenary session is then held, in which possible solutions to the problem are debated and a relationship is established between the case and the chemistry program that drives it. It is concluded that the application of the case method was a fruitful tool in the results obtained by the students, since the approval rate was 75%, considerably higher than in previous years.

  5. Finding the magnetic center of a quadrupole to high resolution: A draft proposal

    International Nuclear Information System (INIS)

    Fischer, G.E.; Cobb, J.K.; Jensen, D.R.

    1989-03-01

    In a companion proposal it is suggested that the quadrupoles of a transport line be aligned to within transverse tolerances of 5 to 10 micrometers. Such a proposal is meaningful only if the effective magnetic center of such lenses can in fact be repeatably located, with respect to some external mechanical tooling, to comparable accuracy. It is the purpose of this note to describe some new methods and procedures that accomplish this aim. It will be shown that these methods are capable of yielding greater sensitivity than the more traditional methods used in the past. The notion of the ''nodal'' point is exploited. 4 refs., 5 figs., 1 tab

  6. Integrative Genomics: Quantifying significance of phenotype-genotype relationships from multiple sources of high-throughput data

    Directory of Open Access Journals (Sweden)

    Eric Gamazon

    2013-05-01

    Full Text Available Given recent advances in the generation of high-throughput data such as whole genome genetic variation and transcriptome expression, it is critical to come up with novel methods to integrate these heterogeneous datasets and to assess the significance of identified phenotype-genotype relationships. Recent studies show that genome-wide association findings are likely to fall in loci with gene regulatory effects such as expression quantitative trait loci (eQTLs, demonstrating the utility of such integrative approaches. When genotype and gene expression data are available on the same individuals, we developed methods wherein top phenotype-associated genetic variants are prioritized if they are associated, as eQTLs, with gene expression traits that are themselves associated with the phenotype. Yet there has been no method to determine an overall p-value for the findings that arise specifically from the integrative nature of the approach. We propose a computationally feasible permutation method that accounts for the assimilative nature of the method and the correlation structure among gene expression traits and among genotypes. We apply the method to data from a study of cellular sensitivity to etoposide, one of the most widely used chemotherapeutic drugs. To our knowledge, this study is the first statistically sound quantification of the significance of the genotype-phenotype relationships resulting from applying an integrative approach. This method can be easily extended to cases in which gene expression data are replaced by other molecular phenotypes of interest, e.g., microRNA or proteomic data. This study has important implications for studies seeking to expand on genetic association studies by the use of omics data. Finally, we provide an R code to compute the empirical FDR when p-values for the observed and simulated phenotypes are available.
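    The permutation idea can be sketched as follows. The statistic, the eQTL cutoff, and the toy data below are stand-ins for the paper's pipeline; the essential point is that only the phenotype is permuted, so the genotype-expression structure (and the correlation among expression traits) is preserved:

```python
import random
import statistics

def pearson(x, y):
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def integrative_stat(genotypes, expression, phenotype):
    # Strongest phenotype association among variants that also act as
    # eQTL-like correlates of an expression trait (a crude stand-in).
    best = 0.0
    for g in genotypes:
        if any(abs(pearson(g, e)) > 0.3 for e in expression):  # assumed cutoff
            best = max(best, abs(pearson(g, phenotype)))
    return best

def permutation_pvalue(genotypes, expression, phenotype, n_perm=200, seed=1):
    # Permute the phenotype only: genotype-expression links stay intact.
    rng = random.Random(seed)
    observed = integrative_stat(genotypes, expression, phenotype)
    hits = 0
    for _ in range(n_perm):
        perm = phenotype[:]
        rng.shuffle(perm)
        if integrative_stat(genotypes, expression, perm) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Toy data: one SNP driving both an expression trait and the phenotype.
rng = random.Random(0)
snp = [rng.gauss(0.0, 1.0) for _ in range(30)]
expression = [[v + rng.gauss(0.0, 0.3) for v in snp]]
phenotype = [v + rng.gauss(0.0, 0.3) for v in snp]
p = permutation_pvalue([snp], expression, phenotype)
print(p)
```

    Because the genuine genotype-phenotype association is far stronger than any permuted one, the empirical p-value ends up near its floor of 1/(n_perm + 1).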

  7. An Image Registration Method for Colposcopic Images

    Directory of Open Access Journals (Sweden)

    Efrén Mezura-Montes

    2013-01-01

    sequence and a division of such image into small windows. A search process is then carried out to find the window with the highest affinity in each image of the sequence and replace it with the window in the reference image. The affinity value is based on a polynomial approximation of the computed time series, and the search is bounded by a search radius which defines the neighborhood of each window. The proposed approach is tested on ten 310-frame real cases in two experiments: the first to determine the best values for the window size and the search radius, and the second to compare the best obtained results with four registration methods found in the specialized literature. The obtained results show a robust and competitive performance of the proposed approach, with a significantly lower computation time than the compared methods.
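    The windowed search can be sketched as follows, with a plain sum-of-squared-differences affinity standing in for the polynomial time-series affinity used in the paper, and a toy single-channel image in place of a colposcopic frame:

```python
def ssd(a, b):
    # Sum of squared differences between two equally sized windows
    # (lower score means higher affinity in this toy stand-in).
    return sum((x - y) ** 2 for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def crop(img, r, c, size):
    return [row[c:c + size] for row in img[r:r + size]]

def best_match(ref_win, frame, r0, c0, size, radius):
    # Examine only windows within `radius` of (r0, c0): the search radius
    # bounds the neighborhood of each window, as in the described method.
    best = None
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            r, c = r0 + dr, c0 + dc
            if 0 <= r <= len(frame) - size and 0 <= c <= len(frame[0]) - size:
                score = ssd(ref_win, crop(frame, r, c, size))
                if best is None or score < best[0]:
                    best = (score, r, c)
    return best

# Toy 8x8 frames: a bright 2x2 blob shifted by (1, 1) in the next frame.
ref = [[0] * 8 for _ in range(8)]
nxt = [[0] * 8 for _ in range(8)]
for r, c in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    ref[r][c] = 9
    nxt[r + 1][c + 1] = 9
score, r, c = best_match(crop(ref, 2, 2, 2), nxt, 2, 2, 2, radius=2)
print((r, c))  # → (3, 3)
```

    The bounded neighborhood is what keeps the per-window search cheap; enlarging the radius trades speed for robustness to larger inter-frame motion.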

  8. Northeast Asia regional energy infrastructure proposals

    International Nuclear Information System (INIS)

    Hippel, David von; Gulidov, Ruslan; Kalashnikov, Victor; Hayes, Peter

    2011-01-01

    Economic growth in the countries of Northeast Asia has spurred a massive increase in the need for energy, especially oil, gas, coal, and electricity. Although the region, taken as a whole, possesses financial, technical, labor, and natural resources sufficient to address much of the region's needs now and into the future, no one country has all of those attributes. As a result, over the past two decades, there has been significant interest in regional proposals that would allow sharing of resources, including infrastructure to develop and transport energy resources from the Russian Far East to South Korea, China, and Japan, and cooperation on energy-efficiency, renewable energy, and the nuclear fuel cycle as well. In this article we review some of these proposals, identify some of the factors that could contribute to the success or failure of infrastructure proposals, and explore some of the implications and ramifications of energy cooperation activities for energy security in the region.

  9. AUTOMATIC SUMMARIZATION OF WEB FORUMS AS SOURCES OF PROFESSIONALLY SIGNIFICANT INFORMATION

    Directory of Open Access Journals (Sweden)

    K. I. Buraya

    2016-07-01

    assessment characteristics of the post text in general come to the fore, together with features connected with the structure of a thread as both text and social graph. We have shown that the efficiency of extracting informative posts depends only weakly on the way keywords are assigned, while such dependence is essential for extracting relevant posts. The keyword-extraction method most effective for real applications has been identified. We have shown that for extracting relevant posts linear methods outperform nonlinear ones, with the LDA model in between; for extracting informative posts, linear and nonlinear methods perform equally well, and the LDA model is considerably inferior to both. We have proposed a substantive model explaining these results. Practical Relevance. The obtained results can provide a background for creating new applications, and adequately applying existing algorithms, for web-forum summarization, significantly reducing the time and resources users spend obtaining and studying up-to-the-minute professionally significant information.

  10. A Hybrid Vehicle Detection Method Based on Viola-Jones and HOG + SVM from UAV Images

    Science.gov (United States)

    Xu, Yongzheng; Yu, Guizhen; Wang, Yunpeng; Wu, Xinkai; Ma, Yalong

    2016-01-01

    A new hybrid vehicle detection scheme which integrates the Viola-Jones (V-J) and linear SVM classifier with HOG feature (HOG + SVM) methods is proposed for vehicle detection from low-altitude unmanned aerial vehicle (UAV) images. As both V-J and HOG + SVM are sensitive to on-road vehicles’ in-plane rotation, the proposed scheme first adopts a roadway orientation adjustment method, which rotates each UAV image to align the roads with the horizontal direction so that the original V-J or HOG + SVM method can be applied directly to achieve fast detection and high accuracy. To address the issue of declining detection speed for V-J and HOG + SVM, the proposed scheme further develops an adaptive switching strategy which strategically integrates the V-J and HOG + SVM methods based on their different descending trends of detection speed to improve detection efficiency. A comprehensive evaluation shows that the switching strategy, combined with the road orientation adjustment method, can significantly improve the efficiency and effectiveness of vehicle detection from UAV images. The results also show that the proposed vehicle detection method is competitive compared with other existing vehicle detection methods. Furthermore, since the proposed vehicle detection method can be performed on videos captured from moving UAV platforms without the need for image registration or an additional road database, it has great potential for field applications. Future research will focus on extending the current method to detect other transportation modes such as buses, trucks, motors, bicycles, and pedestrians. PMID:27548179

  11. A Hybrid Vehicle Detection Method Based on Viola-Jones and HOG + SVM from UAV Images.

    Science.gov (United States)

    Xu, Yongzheng; Yu, Guizhen; Wang, Yunpeng; Wu, Xinkai; Ma, Yalong

    2016-08-19

    A new hybrid vehicle detection scheme which integrates the Viola-Jones (V-J) and linear SVM classifier with HOG feature (HOG + SVM) methods is proposed for vehicle detection from low-altitude unmanned aerial vehicle (UAV) images. As both V-J and HOG + SVM are sensitive to on-road vehicles' in-plane rotation, the proposed scheme first adopts a roadway orientation adjustment method, which rotates each UAV image to align the roads with the horizontal direction so that the original V-J or HOG + SVM method can be applied directly to achieve fast detection and high accuracy. To address the issue of declining detection speed for V-J and HOG + SVM, the proposed scheme further develops an adaptive switching strategy which strategically integrates the V-J and HOG + SVM methods based on their different descending trends of detection speed to improve detection efficiency. A comprehensive evaluation shows that the switching strategy, combined with the road orientation adjustment method, can significantly improve the efficiency and effectiveness of vehicle detection from UAV images. The results also show that the proposed vehicle detection method is competitive compared with other existing vehicle detection methods. Furthermore, since the proposed vehicle detection method can be performed on videos captured from moving UAV platforms without the need for image registration or an additional road database, it has great potential for field applications. Future research will focus on extending the current method to detect other transportation modes such as buses, trucks, motors, bicycles, and pedestrians.
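    The switching strategy can be sketched with stand-in per-frame runtime models; the decay curves below are invented for illustration and are not the measured trends of V-J or HOG + SVM:

```python
def pick_detector(frame_index, time_vj, time_hog):
    # Switch to whichever detector is currently cheaper; in the paper this
    # choice follows the measured descending trends of detection speed.
    return "VJ" if time_vj(frame_index) <= time_hog(frame_index) else "HOG+SVM"

# Hypothetical runtime models (ms/frame): V-J starts slow but its cost drops
# quickly over the sequence, while HOG+SVM starts fast but plateaus higher.
time_vj = lambda k: 40.0 / (1.0 + 0.5 * k) + 5.0
time_hog = lambda k: 20.0 / (1.0 + 0.1 * k) + 8.0

schedule = [pick_detector(k, time_vj, time_hog) for k in range(10)]
print(schedule)  # starts with HOG+SVM, then switches to VJ
```

    Under these assumed curves the scheme begins with the initially cheaper detector and crosses over once the other detector's cost falls below it, which is the essence of an adaptive switching strategy.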

  12. Significance of likes: Analysing passive interactions on Facebook during campaigning

    OpenAIRE

    Khairuddin, Mohammad Adib; Rao, Asha

    2017-01-01

    With more and more political candidates using social media for campaigning, researchers are looking at measuring the effectiveness of this medium. Most research, however, concentrates on the bare count of likes (or twitter mentions) in an attempt to correlate social media presence and winning. In this paper, we propose a novel method, Interaction Strength Plot (IntS) to measure the passive interactions between a candidate's posts on Facebook and the users (liking the posts). Using this method...

  13. Proposing New Methods to Enhance the Low-Resolution Simulated GPR Responses in the Frequency and Wavelet Domains

    Directory of Open Access Journals (Sweden)

    Reza Ahmadi

    2014-12-01

    Full Text Available To date, a number of numerical methods, including the popular Finite-Difference Time Domain (FDTD) technique, have been proposed to simulate Ground-Penetrating Radar (GPR) responses. Despite its advantages, the finite-difference method also has pitfalls, such as being very time consuming when simulating the common case of media with high dielectric permittivity, which makes the forward modelling process very lengthy even on modern high-speed computers. In the present study the well-known hyperbolic pattern response of horizontal cylinders, usually found in GPR B-Scan images, is used as a basic model to examine the possibility of reducing the forward modelling execution time. In general, the simulated GPR traces of common reflected objects are time shifted, as with the Normal Moveout (NMO) traces encountered in seismic reflection responses. This suggests applying the Fourier transform to the GPR traces and employing the time-shifting property of the transformation to interpolate traces between the adjusted traces in the frequency domain (FD). Therefore, in the present study two post-processing algorithms have been adopted to increase the speed of forward modelling while maintaining the required precision. The first approach is based on linear interpolation in the Fourier domain, allowing the lateral trace-to-trace interval to be increased while an appropriate sampling frequency of the signal is maintained, preventing any aliasing. In the second approach, a super-resolution algorithm based on the 2D wavelet transform is developed to increase both the vertical and horizontal resolution of the GPR B-Scan images while preserving the scale and shape of hidden hyperbola features.
    Comparing the outputs of both methods with the corresponding actual high-resolution forward responses shows that both approaches perform satisfactorily, although the wavelet-based approach outperforms the frequency-domain approach noticeably, both in amplitude and
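    The frequency-domain building block, shifting a trace by multiplying its spectrum by a linear phase, can be sketched with a naive DFT on a toy trace (a real implementation would use an FFT; the trace and shift are illustrative only):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def shift_trace(x, d):
    # Delay x by d samples (d may be fractional) via the Fourier shift
    # theorem: X[k] -> X[k] * exp(-2*pi*i*f_k*d/N).
    N = len(x)
    X = dft(x)
    out = []
    for k in range(N):
        f = k if k <= N // 2 else k - N  # signed frequency keeps a real input real
        out.append(X[k] * cmath.exp(-2j * cmath.pi * f * d / N))
    return [v.real for v in idft(out)]

trace = [0.0] * 8
trace[2] = 1.0                        # toy reflection arriving at sample 2
shifted = shift_trace(trace, 2.0)     # same wavelet, delayed by two samples
print([round(v, 6) for v in shifted])
```

    Fractional values of d produce the intermediate, sub-sample delays needed to interpolate synthetic traces between the computed ones.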

  14. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    Directory of Open Access Journals (Sweden)

    Qingzhou Mao

    2015-06-01

    Full Text Available In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. The method is also applied to the calibration of misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted to evaluate the effectiveness of the proposed method. The experimental results indicate that the method significantly improves the accuracy of the MLS in environments that are hostile to GNSS and is also effective for the calibration of misalignment.
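    The collocation step can be illustrated in one dimension: treat the POS error along the trajectory as a correlated signal, observe it at a few control points, and predict it elsewhere. The Gaussian covariance model and the error values below are illustrative assumptions only:

```python
import math

def cov(d, c0=1.0, corr_len=5.0):
    # Gaussian covariance: POS errors at nearby epochs are strongly correlated.
    return c0 * math.exp(-((d / corr_len) ** 2))

def solve(A, b):
    # Plain Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lsc_predict(t_obs, err_obs, t_new, noise_var=0.01):
    # Collocation: s_hat = C_st (C_tt + D)^(-1) l, with D the noise variance.
    n = len(t_obs)
    Ctt = [[cov(abs(t_obs[i] - t_obs[j])) + (noise_var if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    w = solve(Ctt, err_obs)
    return [sum(cov(abs(t - ti)) * wi for ti, wi in zip(t_obs, w))
            for t in t_new]

# Hypothetical along-track POS height errors (m) observed at control points.
t_ctrl = [0.0, 5.0, 10.0]
err_ctrl = [0.30, 0.18, 0.05]
pred = lsc_predict(t_ctrl, err_ctrl, [0.0, 2.5, 7.5])
print(pred)
```

    At a control point the prediction nearly reproduces the observed error (shrunk slightly by the noise variance), while between control points the covariance model supplies a smooth correction to apply to the point cloud.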

  15. Radioisotope identification method for poorly resolved gamma-ray spectrum of nuclear security concern

    Energy Technology Data Exchange (ETDEWEB)

    Ninh, Giang Nguyen; Phongphaeth, Pengvanich, E-mail: phongphaeth.p@chula.ac.th; Nares, Chankow [Nuclear Engineering Department, Faculty of Engineering, Chulalongkorn University, 254 Phayathai Road, Pathumwan, Bangkok 10330 (Thailand); Hao, Quang Nguyen [Vietnam Atomic Energy Institute, Ministry of Science and Technology, Hanoi (Viet Nam)

    2016-01-22

    Gamma-ray signals can be used as fingerprints for radioisotope identification. In the context of radioactive and nuclear material security at border control points, the detection task can present a significant challenge due to various constraints such as the limited measurement time, the shielding conditions, and noise interference. This study proposes a novel method to identify the signals of one or several radioisotopes in a poorly resolved gamma-ray spectrum. In this method, the noise component of the raw spectrum is reduced by a wavelet decomposition approach, and the continuum background is removed using a baseline determination algorithm. Finally, the radioisotopes are identified using a matrix linear regression method. The proposed method has been verified by experiments using poorly resolved gamma-ray signals from various scenarios, including single sources and mixtures of natural uranium with five of the most common industrial radioactive sources (57Co, 60Co, 133Ba, 137Cs, and 241Am). The preliminary results show that the proposed algorithm is comparable with a commercial method.
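    The final identification step, fitting the measured spectrum as a linear combination of library templates, can be sketched with noise-free toy templates (the wavelet denoising and baseline removal stages are omitted, and the six-channel "spectra" are invented for illustration):

```python
def fit_two(a, b, y):
    # Least squares for y ≈ x1*a + x2*b via the 2x2 normal equations.
    aa = sum(u * u for u in a)
    bb = sum(u * u for u in b)
    ab = sum(u * v for u, v in zip(a, b))
    ay = sum(u * v for u, v in zip(a, y))
    by = sum(u * v for u, v in zip(b, y))
    det = aa * bb - ab * ab
    return (ay * bb - by * ab) / det, (by * aa - ay * ab) / det

library = {
    "Cs-137": [0, 1, 8, 1, 0, 0],  # toy templates, one dominant photopeak each
    "Co-60":  [0, 0, 0, 2, 7, 2],
}
measured = [0, 3, 24, 5, 7, 2]     # exactly 3 x Cs-137 + 1 x Co-60 (noise-free)
x_cs, x_co = fit_two(library["Cs-137"], library["Co-60"], measured)
present = [name for name, x in zip(library, (x_cs, x_co)) if x > 0.1]
print(x_cs, x_co, present)
```

    Isotopes whose fitted coefficients are clearly positive are reported as present; with real spectra the coefficients are perturbed by noise, which is why the denoising and baseline stages precede this step.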

  16. Radioisotope identification method for poorly resolved gamma-ray spectrum of nuclear security concern

    International Nuclear Information System (INIS)

    Ninh, Giang Nguyen; Phongphaeth, Pengvanich; Nares, Chankow; Hao, Quang Nguyen

    2016-01-01

    Gamma-ray signals can be used as fingerprints for radioisotope identification. In the context of radioactive and nuclear material security at border control points, the detection task can present a significant challenge due to various constraints such as the limited measurement time, the shielding conditions, and noise interference. This study proposes a novel method to identify the signals of one or several radioisotopes in a poorly resolved gamma-ray spectrum. In this method, the noise component of the raw spectrum is reduced by a wavelet decomposition approach, and the continuum background is removed using a baseline determination algorithm. Finally, the radioisotopes are identified using a matrix linear regression method. The proposed method has been verified by experiments using poorly resolved gamma-ray signals from various scenarios, including single sources and mixtures of natural uranium with five of the most common industrial radioactive sources (57Co, 60Co, 133Ba, 137Cs, and 241Am). The preliminary results show that the proposed algorithm is comparable with a commercial method.

  17. 48 CFR 15.607 - Criteria for acceptance and negotiation of an unsolicited proposal.

    Science.gov (United States)

    2010-10-01

    ... and negotiation of an unsolicited proposal. 15.607 Section 15.607 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Unsolicited Proposals 15.607 Criteria for acceptance and negotiation of an unsolicited proposal. (a) A...

  18. A bootstrapping method for development of Treebank

    Science.gov (United States)

    Zarei, F.; Basirat, A.; Faili, H.; Mirain, M.

    2017-01-01

    Using statistical approaches alongside traditional methods of natural language processing (NLP) can significantly improve both the quality and the performance of several NLP tasks. The effective use of these approaches depends on the availability of informative, accurate and detailed corpora on which the learners are trained. This article introduces a bootstrapping method for developing annotated corpora based on a complex and linguistically rich elementary structure called a supertag. To this end, a hybrid method for supertagging is proposed that combines the generative and discriminative approaches to supertagging. The method was applied to a subset of the Wall Street Journal (WSJ) corpus in order to annotate its sentences with a set of linguistically motivated elementary structures of the English XTAG grammar, which uses a lexicalised tree-adjoining grammar formalism. The empirical results confirm that the bootstrapping method provides a satisfactory way of annotating English sentences with the mentioned structures. The experiments show that the method could automatically annotate about 20% of the WSJ with an F-measure of about 80%, which is 12% higher than the F-measure of the XTAG Treebank automatically generated by the approach proposed by Basirat and Faili [(2013). Bridge the gap between statistical and hand-crafted grammars. Computer Speech and Language, 27, 1085-1104].
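    The bootstrapping loop itself can be sketched generically: label raw text with the current model, keep only the high-confidence analyses, retrain, and repeat. The toy "model" below is a suffix-frequency tagger, not a supertagger, and the words and tags are invented:

```python
from collections import Counter, defaultdict

def train(pairs):
    # "Model": most frequent tag per word-final bigram (a toy stand-in).
    stats = defaultdict(Counter)
    for word, tag in pairs:
        stats[word[-2:]][tag] += 1
    return stats

def predict(stats, word):
    counts = stats.get(word[-2:])
    if not counts:
        return None, 0.0
    tag, n = counts.most_common(1)[0]
    return tag, n / sum(counts.values())

def bootstrap(seed_pairs, raw_words, rounds=3, threshold=0.9):
    labeled = list(seed_pairs)
    for _ in range(rounds):
        stats = train(labeled)
        for word in raw_words:
            tag, conf = predict(stats, word)
            if tag and conf >= threshold and (word, tag) not in labeled:
                labeled.append((word, tag))  # accept only confident labels
    return labeled

seed = [("running", "VBG"), ("walking", "VBG"), ("table", "NN"), ("bottle", "NN")]
raw = ["jumping", "candle", "singing"]
annotated = bootstrap(seed, raw)
print(annotated[len(seed):])  # → [('jumping', 'VBG'), ('candle', 'NN'), ('singing', 'VBG')]
```

    Each round enlarges the training set with the model's own confident predictions, which is the self-training mechanism the article builds its Treebank-development method on.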

  19. A strategy for evaluating pathway analysis methods.

    Science.gov (United States)

    Yu, Chenggang; Woo, Hyung Jun; Yu, Xueping; Oyama, Tatsuya; Wallqvist, Anders; Reifman, Jaques

    2017-10-13

    Researchers have previously developed a multitude of methods designed to identify biological pathways associated with specific clinical or experimental conditions of interest, with the aim of facilitating biological interpretation of high-throughput data. Before practically applying such pathway analysis (PA) methods, we must first evaluate their performance and reliability, using datasets where the pathways perturbed by the conditions of interest have been well characterized in advance. However, such 'ground truths' (or gold standards) are often unavailable. Furthermore, previous evaluation strategies that have focused on defining 'true answers' are unable to systematically and objectively assess PA methods under a wide range of conditions. In this work, we propose a novel strategy for evaluating PA methods independently of any gold standard, either established or assumed. The strategy involves the use of two mutually complementary metrics, recall and discrimination. Recall measures the consistency between the perturbed pathways identified by applying a particular analysis method to an original large dataset and those identified by the same method applied to a sub-dataset of the original. In contrast, discrimination measures specificity: the degree to which the perturbed pathways identified by applying a particular method to a dataset from one experiment differ from those identified by the same method applied to a dataset from a different experiment. We used these metrics and 24 datasets to evaluate six widely used PA methods. The results highlighted the common challenge in reliably identifying significant pathways from small datasets. Importantly, we confirmed the effectiveness of our proposed dual-metric strategy by showing that previous comparative studies corroborate the performance evaluations of the six methods obtained by our strategy.
Unlike any previously proposed strategy for evaluating the performance of PA methods, our dual-metric strategy does not rely on any ground truth
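    Rendered as simple set overlaps (one plausible reading of the two metrics, not necessarily the paper's exact formulas), recall and discrimination look like this, with invented pathway names:

```python
def recall(full_hits, sub_hits):
    # Fraction of pathways found on the full dataset that are recovered when
    # the same method is run on a sub-dataset of it.
    full, sub = set(full_hits), set(sub_hits)
    return len(full & sub) / len(full) if full else 0.0

def discrimination(hits_a, hits_b):
    # One minus the Jaccard overlap between findings from two different
    # experiments: higher values mean more condition-specific findings.
    a, b = set(hits_a), set(hits_b)
    union = a | b
    return 1.0 - len(a & b) / len(union) if union else 0.0

full = ["apoptosis", "p53 signaling", "cell cycle", "DNA repair"]
sub = ["apoptosis", "p53 signaling", "cell cycle"]   # hypothetical sub-dataset hits
other = ["oxidative phosphorylation", "cell cycle"]  # hypothetical other experiment
print(recall(full, sub))             # → 0.75
print(discrimination(full, other))   # → 0.8
```

    A reliable method should score high on both: high recall means its findings are stable under subsampling, while high discrimination means they are not the same generic pathways regardless of the condition studied.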

  20. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    Directory of Open Access Journals (Sweden)

    Chen Lu

    Full Text Available Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus of the field. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features, is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce its dimensionality, thus highlighting the main fault features and reducing the subsequent computational cost, t-Distributed Stochastic Neighbor Embedding is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical rotating machines, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for

  1. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    Science.gov (United States)

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus of the field. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging image-processing approach for feature extraction, speeded-up robust features, is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce its dimensionality, thus highlighting the main fault features and reducing the subsequent computational cost, t-Distributed Stochastic Neighbor Embedding is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical rotating machines, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.
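    The last stage, probabilistic-neural-network classification, can be sketched as a Parzen-window classifier; the 2-D feature vectors below are toy stand-ins for the reduced bi-spectrum/SURF features, and the class labels are invented:

```python
import math

def pnn_classify(train, x, sigma=0.5):
    # PNN decision rule: average a Gaussian kernel over each class's training
    # samples and pick the class with the highest density at x.
    def density(samples):
        return sum(math.exp(-sum((a - b) ** 2 for a, b in zip(s, x))
                            / (2.0 * sigma ** 2)) for s in samples) / len(samples)
    return max(train, key=lambda label: density(train[label]))

train = {
    "normal":      [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0)],
    "inner-fault": [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9)],
}
print(pnn_classify(train, (0.95, 1.05)))  # → inner-fault
```

    The smoothing parameter sigma controls how far each training sample's influence reaches; it is the single tuning knob of a PNN.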

  2. A New Method for Single-Epoch Ambiguity Resolution with Indoor Pseudolite Positioning.

    Science.gov (United States)

    Li, Xin; Zhang, Peng; Guo, Jiming; Wang, Jinling; Qiu, Weining

    2017-04-21

    Ambiguity resolution (AR) is crucial for high-precision indoor pseudolite positioning. Because of characteristics peculiar to pseudolite positioning systems (the geometry of the stationary pseudolites is invariant, the indoor signal is easily interrupted, and the first-order linear truncation error cannot be ignored), a new AR method based on the idea of the ambiguity function method (AFM) is proposed in this paper. The proposed method is a single-epoch, nonlinear method that is especially well suited to indoor pseudolite positioning. Considering the very low computational efficiency of the conventional AFM, we adopt an improved particle swarm optimization (IPSO) algorithm to search for the best solution in the coordinate domain, and a variance test of the least-squares adjustment is conducted to ensure the reliability of the resolved ambiguities. Several experiments, including static and kinematic tests, are conducted to verify the validity of the proposed AR method. Numerical results show that the IPSO significantly improves the computational efficiency of the AFM and has a finer search capability than the conventional grid-search method. For the indoor pseudolite system, which had an initial approximate coordinate precision better than 0.2 m, the AFM exhibited good performance in both static and kinematic tests. With the corrected ambiguities obtained from the proposed method, indoor pseudolite positioning can achieve centimeter-level precision using a low-cost single-frequency software receiver.
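    The ambiguity-function idea the method builds on can be illustrated with the conventional grid search that the IPSO is said to outperform. The sketch below is a minimal 2-D version with invented pseudolite coordinates, a GPS-L1-like wavelength of 0.19 m, and a ±0.2 m a-priori search box; the paper's IPSO and least-squares variance test are not reproduced.

```python
import numpy as np

# Hypothetical indoor pseudolite layout (metres); values are illustrative only.
PLS = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0],
                [10.0, 8.0], [5.0, 10.0]])
LAM = 0.19                                  # carrier wavelength (m)
p_true = np.array([4.125, 3.475])           # unknown receiver position
phase = np.linalg.norm(PLS - p_true, axis=1) / LAM % 1.0  # fractional phases

def ambiguity_value(p):
    """AFM cost: sum of cosines of phase residuals; maximal at the true position."""
    pred = np.linalg.norm(PLS - p, axis=1) / LAM
    return np.sum(np.cos(2 * np.pi * (phase - pred)))

# Conventional AFM: exhaustive 5 mm grid over a +/-0.2 m a-priori box
center = p_true + np.array([0.05, -0.035])  # approximate initial coordinate
best, best_val = None, -np.inf
for dx in np.arange(-0.2, 0.2001, 0.005):
    for dy in np.arange(-0.2, 0.2001, 0.005):
        p = center + np.array([dx, dy])
        v = ambiguity_value(p)
        if v > best_val:
            best, best_val = p, v
print(np.round(best, 3))  # -> [4.125 3.475]
```

    The cosine sum peaks at the position where all phase residuals are integers, which sidesteps explicit integer ambiguity estimation; the cost of the exhaustive grid is what motivates replacing it with a swarm search.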

  3. 77 FR 25235 - Significant New Use Rules on Certain Chemical Substances

    Science.gov (United States)

    2012-04-27

    .... discusses a procedure companies may use to ascertain whether a proposed use constitutes a significant new...-00-2, P-00-5, and P-00-6 Chemical names: Polymeric MDI based polyurethanes (generic). CAS numbers...

  4. Method Engineering: Engineering of Information Systems Development Methods and Tools

    NARCIS (Netherlands)

    Brinkkemper, J.N.; Brinkkemper, Sjaak

    1996-01-01

    This paper proposes the term method engineering for the research field of the construction of information systems development methods and tools. Some research issues in method engineering are identified. One major research topic in method engineering is discussed in depth: situational methods, i.e.

  5. Principles and Overview of Sampling Methods for Modeling Macromolecular Structure and Dynamics.

    Science.gov (United States)

    Maximova, Tatiana; Moffatt, Ryan; Ma, Buyong; Nussinov, Ruth; Shehu, Amarda

    2016-04-01

    Investigation of macromolecular structure and dynamics is fundamental to understanding how macromolecules carry out their functions in the cell. Significant advances have been made toward this end in silico, with a growing number of computational methods proposed yearly to study and simulate various aspects of macromolecular structure and dynamics. This review aims to provide an overview of recent advances, focusing primarily on methods proposed for exploring the structure space of macromolecules in isolation and in assemblies for the purpose of characterizing equilibrium structure and dynamics. In addition to surveying recent applications that showcase current capabilities of computational methods, this review highlights state-of-the-art algorithmic techniques proposed to overcome challenges posed in silico by the disparate spatial and time scales accessed by dynamic macromolecules. This review is not meant to be exhaustive, as such an endeavor is impossible, but rather aims to balance breadth and depth of strategies for modeling macromolecular structure and dynamics for a broad audience of novices and experts.

  6. Restoring stream habitat connectivity: a proposed method for prioritizing the removal of resident fish passage barriers.

    Science.gov (United States)

    O'Hanley, Jesse R; Wright, Jed; Diebel, Matthew; Fedora, Mark A; Soucy, Charles L

    2013-08-15

    Systematic methods for prioritizing the repair and removal of fish passage barriers, while growing of late, have hitherto focused almost exclusively on meeting the needs of migratory fish species (e.g., anadromous salmonids). An important but as yet unaddressed issue is the development of new modeling approaches applicable to habitat restoration programs for resident fish species. In this paper, we develop a budget-constrained optimization model for deciding which barriers to repair or remove in order to maximize habitat availability for stream-resident fish. Habitat availability at the local stream reach is determined using the recently proposed C metric, which accounts for the amount, quality, distance and level of connectivity to different stream habitat types. We assess the computational performance of our model using geospatial barrier and stream data collected from the Pine-Popple Watershed, located in northeast Wisconsin (USA). The optimization model is found to be an efficient and practical decision support tool. Optimal solutions, which are useful in informing basin-wide restoration planning efforts, can be generated on average in only a few minutes. Copyright © 2013 Elsevier Ltd. All rights reserved.
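    The structure of such a budget-constrained barrier-removal problem can be shown on a toy example. This sketch uses a single linear river, a simple cumulative-passability habitat model, and brute-force enumeration instead of the paper's C metric and optimization model; all barrier data are invented.

```python
from itertools import combinations

# Toy linear river: barrier i sits at the downstream end of segment i.
length      = [2.0, 3.0, 1.0, 4.0]   # habitat upstream of each barrier (km)
passability = [0.5, 0.0, 0.8, 0.2]   # fraction of fish passing each barrier
cost        = [10, 15, 5, 20]        # removal cost per barrier
budget      = 25

def habitat(removed):
    """Habitat accessible from the river mouth: each segment is discounted
    by the product of passabilities of all barriers downstream of it."""
    total, p = 0.0, 1.0
    for i, seg in enumerate(length):
        p *= 1.0 if i in removed else passability[i]
        total += seg * p
    return total

# Enumerate every removal set within budget and keep the best one.
best = max(
    (set(s)
     for r in range(len(cost) + 1)
     for s in combinations(range(len(cost)), r)
     if sum(cost[i] for i in s) <= budget),
    key=habitat,
)
print(sorted(best), round(habitat(best), 2))  # -> [0, 1] 6.44
```

    Note how removing the cheap barrier 2 alone is worthless while the impassable barrier 1 remains: connectivity effects are exactly why greedy cost-effectiveness ranking fails and an optimization model is needed. Real watershed instances are far too large for enumeration, which is where the paper's model comes in.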

  7. A Proposal of Client Application Architecture using Loosely Coupled Component Connection Method in Banking Branch System

    Science.gov (United States)

    Someya, Harushi; Mori, Yuichi; Abe, Masahiro; Machida, Isamu; Hasegawa, Atsushi; Yoshie, Osamu

    With the deregulation of the financial industry, branches in banking need to shift from operation-oriented bases to sales-oriented bases. To support this movement, new banking branch systems are being developed. The main characteristic of the new systems is that form operations traditionally performed at each branch are consolidated into a centralized operation center in order to rationalize them and make them more efficient. The branches handle a wide variety of forms. In many cases the forms can be described by common items, but the items carry different business logic and each form relates the items differently. There is also a need for users to develop the client applications themselves. Consequently, the challenge is to provide a development environment that is highly reusable, easily customizable, and usable by end users as developers. We propose a client application architecture that uses a loosely coupled component connection method and allows applications to be developed simply by describing the screen configurations and their transitions in XML documents. Adopting this architecture, we developed the client applications of the centralized operation center for the latest banking branch system. Our experiments demonstrate good performance.
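    The idea of driving screen navigation from an XML description, so that components stay loosely coupled, can be sketched as follows. The element and attribute names, the screen ids, and the event vocabulary are all invented for illustration and are not taken from the paper.

```python
import xml.etree.ElementTree as ET

# Hypothetical screen/transition configuration (names are illustrative only).
CONFIG = """
<application start="menu">
  <screen id="menu">
    <transition on="deposit" to="deposit_form"/>
    <transition on="transfer" to="transfer_form"/>
  </screen>
  <screen id="deposit_form">
    <transition on="submit" to="confirm"/>
    <transition on="cancel" to="menu"/>
  </screen>
  <screen id="transfer_form">
    <transition on="submit" to="confirm"/>
  </screen>
  <screen id="confirm"/>
</application>
"""

class ScreenFlow:
    """Drives navigation purely from the XML description: screens never
    reference one another in code, only in configuration."""
    def __init__(self, xml_text):
        root = ET.fromstring(xml_text)
        self.state = root.get("start")
        self.table = {
            (s.get("id"), t.get("on")): t.get("to")
            for s in root.iter("screen")
            for t in s.iter("transition")
        }

    def fire(self, event):
        # Unknown events leave the current screen unchanged.
        self.state = self.table.get((self.state, event), self.state)
        return self.state

flow = ScreenFlow(CONFIG)
for ev in ["deposit", "submit"]:
    flow.fire(ev)
print(flow.state)  # -> confirm
```

    Changing a workflow then means editing the XML document, not recompiling component code, which is the loose coupling the abstract argues for.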

  8. QUANTITATIVE EVALUATION METHOD OF ELEMENTS PRIORITY OF CARTOGRAPHIC GENERALIZATION BASED ON TAXI TRAJECTORY DATA

    Directory of Open Access Journals (Sweden)

    Z. Long

    2017-09-01

    Full Text Available Considering the lack of quantitative criteria for the selection of elements in cartographic generalization, this study divided passenger hotspot areas into three levels, assigned them different weights, and then classified the cartographic elements by the hotspots in which they fall. On this basis, a method was proposed to quantify the priority of element selection, and the quantitative priorities of different cartographic elements were summarized using this method. In cartographic generalization, the method allows significant elements to be selected preferentially and relatively non-significant ones to be discarded.
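    The weighted-level scoring described above can be sketched in a few lines. The level weights and per-level element counts below are invented for illustration; the paper derives its values from taxi trajectory data.

```python
# Hypothetical weights for the three hotspot levels and per-level counts of
# how often each cartographic element class appears; all numbers are invented.
weights = {"level1": 3, "level2": 2, "level3": 1}
counts = {
    "road":     {"level1": 40, "level2": 25, "level3": 10},
    "POI":      {"level1": 15, "level2": 30, "level3": 20},
    "building": {"level1": 5,  "level2": 10, "level3": 50},
}

# Priority of an element class = weighted sum of its occurrences per level.
priority = {
    elem: sum(weights[lv] * c for lv, c in per_level.items())
    for elem, per_level in counts.items()
}
ranking = sorted(priority, key=priority.get, reverse=True)
print(ranking)  # classes ordered from most to least significant
```

    Elements at the bottom of the ranking are the candidates to drop first during generalization.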

  9. Final environmental assessment and Finding-of-No-Significant-Impact - drum storage facility for interim storage of materials generated by environmental restoration operations

    International Nuclear Information System (INIS)

    1994-09-01

    The Department of Energy (DOE) has prepared an Environmental Assessment (EA), DOE/EA-0995, for the construction and operation of a drum storage facility at the Rocky Flats Environmental Technology Site, Golden, Colorado. The proposal for construction of the facility was generated in response to current and anticipated future needs for interim storage of waste materials generated by environmental restoration operations. A public meeting was held on July 20, 1994, at which the scope and analyses of the EA were presented. The scope of the EA included evaluation of alternative methods of storage, including no action. A comment period from July 5, 1994 through August 4, 1994 was provided for the public and the State of Colorado to submit written comments on the EA. No written comments were received regarding this proposed action; therefore, no comment response is included in the Final EA. Based on the analyses in the EA, DOE has determined that the proposed action would not significantly affect the quality of the human environment within the meaning of the National Environmental Policy Act of 1969 (NEPA). Therefore, preparation of an Environmental Impact Statement is not required and the Department is issuing this Finding of No Significant Impact.

  10. Proposed Model for Integrating RAMS Method in the Design Process in Construction

    Directory of Open Access Journals (Sweden)

    Saad Al-Jibouri

    2010-05-01

    Full Text Available There is a growing trend in the Netherlands to outsource public construction activities to the private sector through the use of integrated contracts. There is also an increasing emphasis from public clients on the use of RAMS and life cycle costing (LCC) in the design process of infrastructural projects to improve the performance of designed systems and optimize project cost. RAMS is an acronym for 'reliability, availability, maintainability and safety' and represents a collection of techniques for predicting the performance targets of the required system. Increasingly, RAMS targets are specified in invitations to tender or contract documents, and the parties responsible for the design are required to provide evidence of its application in their design. Recent evidence from practice, complemented by a literature study, has shown that the knowledge and application of RAMS in infrastructural design are in their infancy compared with other industrial sectors, and many designers in construction do not have the necessary knowledge and experience to apply it. This paper describes a proposed model for the integration of RAMS and LCC into the design process in construction. A variation of the model for the application of RAMS in 'design, build, finance and maintain' (DBFM) contracts that include maintenance requirements is also proposed. The two models provide guidelines to simplify the application of RAMS by designers. The model has been validated for its practicality and usefulness during a workshop with experienced designers. DOI: 10.3763/aedm.2008.0100. Published in the Journal AEDM, Volume 5, Number 4, 2009, pp. 179-192.

  11. A parametric finite element method for solid-state dewetting problems with anisotropic surface energies

    Science.gov (United States)

    Bao, Weizhu; Jiang, Wei; Wang, Yan; Zhao, Quan

    2017-02-01

    We propose an efficient and accurate parametric finite element method (PFEM) for solving sharp-interface continuum models for solid-state dewetting of thin films with anisotropic surface energies. The governing equations of the sharp-interface models belong to a new type of high-order (4th- or 6th-order) geometric evolution partial differential equations about open curve/surface interface tracking problems which include anisotropic surface diffusion flow and contact line migration. Compared to the traditional methods (e.g., marker-particle methods), the proposed PFEM not only has very good accuracy, but also poses very mild restrictions on the numerical stability, and thus it has significant advantages for solving this type of open curve evolution problems with applications in the simulation of solid-state dewetting. Extensive numerical results are reported to demonstrate the accuracy and high efficiency of the proposed PFEM.

  12. 48 CFR 915.607 - Criteria for acceptance and negotiation of an unsolicited proposal.

    Science.gov (United States)

    2010-10-01

    ... and negotiation of an unsolicited proposal. 915.607 Section 915.607 Federal Acquisition Regulations System DEPARTMENT OF ENERGY CONTRACTING METHODS AND CONTRACT TYPES CONTRACTING BY NEGOTIATION Unsolicited Proposals 915.607 Criteria for acceptance and negotiation of an unsolicited proposal. (c) DOE's cost...

  13. Improved image alignment method in application to X-ray images and biological images.

    Science.gov (United States)

    Wang, Ching-Wei; Chen, Hsiang-Chou

    2013-08-01

    Alignment of medical images is a vital component of a large number of applications throughout the clinical track of events, not only within clinical diagnostic settings but prominently so in the planning, execution and evaluation of surgical and radiotherapeutic procedures. However, registration of medical images is challenging because of variations in data appearance, imaging artifacts and complex data deformation problems. Hence, the aim of this study is to develop a robust image alignment method for medical images. An improved image registration method is proposed; it is evaluated on two types of medical data, biological microscopic tissue images and dental X-ray images, and compared with five state-of-the-art image registration techniques. The experimental results show that the presented method consistently performs well on both types of medical images, achieving 88.44 and 88.93% averaged registration accuracy for biological tissue images and X-ray images, respectively, and outperforms the benchmark methods. Based on Tukey's honestly significant difference test and Fisher's least significant difference test, the presented method performs significantly better than all existing methods (P ≤ 0.001) for tissue image alignment, and for X-ray image registration the proposed method performs significantly better than the two benchmark b-spline approaches (P < 0.001). The software implementation of the presented method and the data used in this study are made publicly available for scientific communities to use (http://www-o.ntust.edu.tw/∼cweiwang/ImprovedImageRegistration/). cweiwang@mail.ntust.edu.tw.

  14. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Science.gov (United States)

    Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong

    2012-01-01

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also outperformed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1,094 brown algae queries, both using ITS barcodes.

  15. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Directory of Open Access Journals (Sweden)

    Ai-bing Zhang

    Full Text Available Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also outperformed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1,094 brown algae queries, both using ITS barcodes.

  16. SRS Process Facility Significance Fire Frequency

    Energy Technology Data Exchange (ETDEWEB)

    Sarrack, A.G. [Westinghouse Savannah River Company, AIKEN, SC (United States)

    1995-10-01

    This report documents the method and assumptions of a study performed to determine a site generic process facility significant fire initiator frequency and explains the proper way this value should be used.

  17. SRS Process Facility Significance Fire Frequency

    International Nuclear Information System (INIS)

    Sarrack, A.G.

    1995-10-01

    This report documents the method and assumptions of a study performed to determine a site generic process facility significant fire initiator frequency and explains the proper way this value should be used

  18. Small punch test evaluation methods for material characterisation

    Energy Technology Data Exchange (ETDEWEB)

    Janča, Adam, E-mail: adam.janca@fjfi.cvut.cz; Siegl, Jan, E-mail: jan.siegl@fjfi.cvut.cz; Haušild, Petr, E-mail: petr.hausild@fjfi.cvut.cz

    2016-12-01

    The Small Punch Test (SPT) is one of the most widespread mechanical testing methods using miniaturized specimens. This paper deals with the time-independent SPT, in which a flat specimen is bent by a (hemi)spherical punch moving at a constant velocity. The main goal is to relate the measured data to the deformation processes taking place during specimen loading. Understanding such relations is crucial for characterizing a material using any non-standardized experimental procedure. Using enhanced instrumentation, not only the traditional load-displacement or load-deflection curves could be obtained, but specimen thinning could also be continuously measured and evaluated. Five alloys with a broad range of mechanical properties were tested. The results obtained were evaluated using both traditional and newly proposed methods and were correlated with results of the conventional tensile test. The proposed methods seem to lead to a universal correlation between SPT results and tensile characteristics. - Highlights: • The newly proposed methodology significantly improved results of SPT. • Plastic deformation starts inside the specimen from the very beginning of loading. • Specimen thinning = punch displacement − specimen deflection. • Material response to loading is well illustrated by the novel load-thinning curve.

  19. An appraisal of current regulations and recommendations in relation to proposals for the use of X-ray optics equipment

    International Nuclear Information System (INIS)

    Weatherley, E.G.

    1984-01-01

    Current regulatory requirements for the use of x-ray optics equipment in factories are compared with the 19-year-old Guidance Notes for their use in research and teaching establishments. The difficulties in drafting new legislation to cater for both areas of use are discussed, and the proposed regulations and approved code of practice having a direct bearing on x-ray optics equipment are reviewed. Comment is made concerning the proposed Health and Safety Executive document, ''Radiation Safety in the Use of X-ray Optics Equipment'', which will give practical advice and guidance on cost-effective methods of achieving the regulatory objectives. The costs involved are unlikely to be significant for the majority of users, and implementation of the proposed legislation should not restrict the use of x-ray optics equipment. (author)

  20. A new method for the radiation representation of musical instruments in auralizations

    DEFF Research Database (Denmark)

    Rindel, Jens Holger; Otondo, Felipe

    2005-01-01

    A new method for the representation of sound sources that vary their directivity in time in auralizations is introduced. A recording method with multi-channel anechoic recordings is proposed in connection with the use of a multiple virtual source reproduction system in auralizations. Listening ex...... to be significant. Further applications of the method are considered for ensembles within room auralizations as well as in the field of studio recording techniques for large instruments....

  1. An ME-PC Enhanced HDMR Method for Efficient Statistical Analysis of Multiconductor Transmission Line Networks

    KAUST Repository

    Yucel, Abdulkadir C.

    2015-05-05

    An efficient method for statistically characterizing multiconductor transmission line (MTL) networks subject to a large number of manufacturing uncertainties is presented. The proposed method achieves its efficiency by leveraging a high-dimensional model representation (HDMR) technique that approximates observables (quantities of interest in MTL networks, such as voltages/currents on mission-critical circuits) in terms of iteratively constructed component functions of only the most significant random variables (parameters that characterize the uncertainties in MTL networks, such as conductor locations and widths, and lumped element values). The efficiency of the proposed scheme is further increased using a multielement probabilistic collocation (ME-PC) method to compute the component functions of the HDMR. The ME-PC method makes use of generalized polynomial chaos (gPC) expansions to approximate the component functions, where the expansion coefficients are expressed in terms of integrals of the observable over the random domain. These integrals are numerically evaluated and the observable values at the quadrature/collocation points are computed using a fast deterministic simulator. The proposed method is capable of producing accurate statistical information pertinent to an observable that is rapidly varying across a high-dimensional random domain at a computational cost that is significantly lower than that of gPC or Monte Carlo methods. The applicability, efficiency, and accuracy of the method are demonstrated via statistical characterization of frequency-domain voltages in parallel wire, interconnect, and antenna corporate feed networks.
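    The core HDMR idea, approximating an observable by component functions of only a few variables, can be illustrated with the simplest anchored-cut, first-order variant; the paper's iteratively constructed HDMR with ME-PC-computed component functions is far more elaborate. The observable and anchor below are arbitrary choices, and the example is exact only because the observable happens to be additive.

```python
import numpy as np

def hdmr1(f, anchor):
    """First-order cut-HDMR surrogate anchored at `anchor`:
    f(x) ~ f(c) + sum_i [ f(c with x_i swapped in) - f(c) ].
    Exact whenever f is additive in its inputs."""
    c = np.asarray(anchor, dtype=float)
    f0 = f(c)

    def surrogate(x):
        val = f0
        for i, xi in enumerate(x):
            ci = c.copy()
            ci[i] = xi          # vary one random variable at a time
            val += f(ci) - f0
        return val

    return surrogate

# Additive observable: first-order HDMR reproduces it exactly.
f = lambda x: x[0] ** 2 + 3.0 * x[1] + np.sin(x[2])
s = hdmr1(f, anchor=[0.0, 0.0, 0.0])

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(5, 3))
err = max(abs(f(p) - s(p)) for p in pts)
print(err < 1e-12)  # -> True
```

    The payoff is dimensionality: each component function depends on one variable, so it can be tabulated or expanded (in the paper, via ME-PC/gPC) at a cost independent of the full random dimension; significant higher-order interactions are then added iteratively.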

  2. A novel method of S-box design based on chaotic map and composition method

    International Nuclear Information System (INIS)

    Lambić, Dragan

    2014-01-01

    Highlights: • A novel chaotic S-box generation method is presented. • The presented S-box has better cryptographic properties than other examples of chaotic S-boxes. • The advantages of the proposed method are its low complexity and large key space. -- Abstract: An efficient algorithm for obtaining random bijective S-boxes based on chaotic maps and a composition method is presented. The proposed method is based on compositions of S-boxes from a fixed starting set. The sequence of indices of the starting S-boxes used is obtained by means of chaotic maps. The results of performance tests show that the S-box presented in this paper has good cryptographic properties. The advantages of the proposed method are its low complexity and the possibility of achieving a large key space.
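    The composition mechanics can be sketched as below. For illustration the fixed starting set consists of simple affine bijections (which are cryptographically weak; a real starting set would use strong S-boxes), and the logistic map stands in for the paper's chaotic maps; the initial value plays the role of the key.

```python
def affine_sbox(a, b):
    """A bijective starting S-box: x -> (a*x + b) mod 256, with a odd."""
    return [(a * x + b) % 256 for x in range(256)]

# Fixed starting set of bijective S-boxes (illustrative only).
START = [affine_sbox(5, 7), affine_sbox(9, 113),
         affine_sbox(21, 38), affine_sbox(77, 200)]

def chaotic_sbox(x0=0.37, rounds=16):
    """Compose starting S-boxes in an order driven by the logistic map;
    a composition of bijections stays bijective, so the result is an S-box."""
    s = list(range(256))
    x = x0
    for _ in range(rounds):
        x = 4.0 * x * (1.0 - x)                      # logistic map, chaotic regime
        idx = min(int(x * len(START)), len(START) - 1)
        s = [START[idx][v] for v in s]               # compose: s <- t o s
    return s

sbox = chaotic_sbox()
print(sorted(sbox) == list(range(256)))  # -> True (bijective by construction)
```

    Bijectivity never needs to be checked after generation, which is one source of the method's low complexity; the key space comes from the choice of initial chaotic-map state.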

  3. THE PROPOSED METHODOLOGIES FOR THE SIX SIGMA METHOD AND TQM STRATEGY AS WELL AS THEIR APPLICATION IN PRACTICE IN MACEDONIA

    Directory of Open Access Journals (Sweden)

    Elizabeta Mitreva

    2014-05-01

    Full Text Available This paper presents the proposed methodologies for the Six Sigma method and the TQM strategy, as well as their application in practice in Macedonia. Although the philosophy of total quality management (TQM) is deeply embedded in many industries and business areas of European and other countries, it is insufficiently known and applied in our country and other developing countries. The same applies to the Six Sigma approach to reducing process dispersion, which is present in only a small fraction of Macedonian companies. The results of the implementation have shown that the application of the Six Sigma approach is not about counting defects per million opportunities but about the systematic and systemic lowering of process dispersion. Implementing the Six Sigma method engages experts whose remuneration depends on the success of the Six Sigma program. On the other hand, the results of applying the TQM methodology within Macedonian companies will depend on the commitment of all employees and their motivation.

  4. Radiometric method for the characterization of particulate processes in colloidal suspensions. II. Experimental verification of the method

    Energy Technology Data Exchange (ETDEWEB)

    Subotic, B. [Institut Rudjer Boskovic, Zagreb (Yugoslavia)

    1979-09-15

    A radiometric method for the characterization of particulate processes is verified using stable hydrosols of silver iodide, which satisfy the conditions required for application of the proposed method. Comparison shows that the values for the change of particle size measured in silver iodide hydrosols by the proposed method are in excellent agreement with the values obtained on the same systems by other methods (electron microscopy, sedimentation analysis, light scattering). This shows that the proposed method is suitable for the characterization of particulate processes in colloidal suspensions. (Auth.).

  5. 75 FR 60433 - Proposed Collection; Comment Request

    Science.gov (United States)

    2010-09-30

    ... Request AGENCY: Marine Corps Recruiting Command, Marine Corps Base Quantico, DoD. ACTION: Notice. SUMMARY..., identified by docket number and title, by any of the following methods: Federal eRulemaking Portal: http... proposal and associated collection instruments, write to Marine Corps Recruiting Command (Code G3 OR), 3280...

  6. Improved Cole parameter extraction based on the least absolute deviation method

    International Nuclear Information System (INIS)

    Yang, Yuxiang; Ni, Wenwen; Sun, Qiang; Wen, He; Teng, Zhaosheng

    2013-01-01

    The Cole function is widely used in bioimpedance spectroscopy (BIS) applications. Fitting the measured BIS data onto the model and then extracting the Cole parameters (R0, R∞, α and τ) is common practice. Accurate extraction of the Cole parameters from measured BIS data has great significance for evaluating the physiological or pathological status of biological tissue. The traditional least-squares (LS)-based curve-fitting method for Cole parameter extraction is often sensitive to noise and outliers and becomes non-robust. This paper proposes an improved Cole parameter extraction based on the least absolute deviation (LAD) method. Comprehensive simulation experiments are carried out and the performance of the LAD method is compared with that of the LS method under conditions of outliers, random noise, and both disturbances combined. The proposed LAD method exhibits much better robustness under all circumstances, which demonstrates that the LAD method deserves to be an improved alternative to the LS method for Cole parameter extraction, owing to its robustness to outliers and noise. (paper)
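    The LS-versus-LAD robustness contrast at the heart of the abstract can be illustrated on a toy straight-line fit, with LAD computed by iteratively reweighted least squares (IRLS); this is not the paper's actual Cole-model extraction (the Cole impedance Z(ω) = R∞ + (R0 − R∞)/(1 + (jωτ)^α) is nonlinear in its parameters), and the data below are invented.

```python
import numpy as np

def fit_line_ls(x, y, w=None):
    """(Weighted) least-squares line fit returning (slope, intercept)."""
    w = np.ones_like(x) if w is None else w
    A = np.sqrt(w)[:, None] * np.column_stack([x, np.ones_like(x)])
    slope, intercept = np.linalg.lstsq(A, np.sqrt(w) * y, rcond=None)[0]
    return slope, intercept

def fit_line_lad(x, y, iters=50, eps=1e-8):
    """Least-absolute-deviation fit via IRLS: reweight by 1/|residual|."""
    slope, intercept = fit_line_ls(x, y)
    for _ in range(iters):
        r = np.abs(y - (slope * x + intercept))
        slope, intercept = fit_line_ls(x, y, w=1.0 / np.maximum(r, eps))
    return slope, intercept

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[9] += 50.0                        # one gross outlier

ls_slope, _ = fit_line_ls(x, y)
lad_slope, _ = fit_line_lad(x, y)
# LS slope is inflated (~4.7) by the outlier; LAD slope stays near the true 2.0
print(round(ls_slope, 2), round(lad_slope, 2))
```

    The same L1-versus-L2 loss swap carries over to the nonlinear Cole fit, where the minimization is done over (R0, R∞, α, τ) instead of slope and intercept.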

  7. A method for detecting nonlinear determinism in normal and epileptic brain EEG signals.

    Science.gov (United States)

    Meghdadi, Amir H; Fazel-Rezai, Reza; Aghakhani, Yahya

    2007-01-01

    A robust method of detecting determinism in short time series is proposed and applied to both healthy and epileptic EEG signals. The method provides a robust measure of determinism by characterizing the trajectories of the signal components, which are obtained through singular value decomposition. Robustness is shown by calculating the proposed index of determinism at different levels of white and colored noise added to a simulated chaotic signal; the method is able to detect determinism at considerably high levels of additive noise. The method is then applied to both intracranial and scalp EEG recordings collected in different data sets for healthy and epileptic brain signals. The results show that all of the studied EEG data sets provide sufficient evidence of determinism. The determinism is more significant for intracranial EEG recordings, particularly during seizure activity.
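    A toy proxy for this kind of SVD-based determinism measure is sketched below: the singular-value energy of a delay-embedding (trajectory) matrix concentrates in few components for a deterministic signal and spreads out for noise. This is an illustration of the general idea only, not the authors' actual index, and all parameters are arbitrary.

```python
import numpy as np

def sv_concentration(x, dim=10):
    """Share of trajectory-matrix energy captured by the two largest
    singular values: near 1 for a clean deterministic oscillation,
    near 2/dim for white noise."""
    n = len(x) - dim + 1
    traj = np.array([x[i:i + dim] for i in range(n)])  # delay-embedding matrix
    s = np.linalg.svd(traj, compute_uv=False)
    return np.sum(s[:2] ** 2) / np.sum(s ** 2)

rng = np.random.default_rng(0)
t = np.arange(1000)
deterministic = np.sin(2 * np.pi * t / 37.0)   # rank-2 trajectory matrix
noise = rng.standard_normal(1000)

print(sv_concentration(deterministic) > 0.99)  # -> True
print(sv_concentration(noise) < 0.5)           # -> True
```

    A threshold or surrogate-data comparison on such a concentration measure is one simple way to turn it into a determinism test for short, noisy records such as EEG epochs.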

  8. Methodological proposal for validation of the disinfecting efficacy of an automated flexible endoscope reprocessor

    Directory of Open Access Journals (Sweden)

    Kazuko Uchikawa Graziano

    Full Text Available ABSTRACT Objective: to elaborate and apply a method to assess the efficacy of automated flexible endoscope reprocessors at a time when there is neither an official method nor trained laboratories to comply with the requirements described in the specific standards for this type of health product in Brazil. Method: the present methodological study was developed based on the following theoretical references: International Organization for Standardization (ISO) standard ISO 15883-4/2008 and Brazilian Health Surveillance Agency (Agência Nacional de Vigilância Sanitária, ANVISA) Collegiate Board Resolutions (Resolução de Diretoria Colegiada, RDC) no. 35/2010 and 15/2012. The proposed method was applied to a commercially available device using a high-level 0.2% peracetic-acid-based disinfectant. Results: the proposed method of assessment was found to be robust when the recommendations made in the relevant legislation were incorporated, with some adjustments to ensure their feasibility. Application of the proposed method provided evidence of the efficacy of the tested equipment for the high-level disinfection of endoscopes. Conclusion: the proposed method may serve as a reference for the assessment of flexible endoscope reprocessors, thereby providing solid ground for the purchase of this category of health products.

  9. Measurement of residual stress in quenched 1045 steel by the nanoindentation method

    International Nuclear Information System (INIS)

    Zhu Lina; Xu Binshi; Wang Haidou; Wang Chengbiao

    2010-01-01

    In this paper, the residual stress in quenched AISI 1045 steel was measured by a recently developed nanoindentation technique. Depth control mode was adopted to measure the residual stress. It was found that residual compressive stress was generated in the quenched steel. The material around the nanoindents exhibits significant pile-up deformation. A new method was proposed to determine the real contact area for pile-up material, based on the invariance of the pile-up morphology between the loaded and unloaded states. The results obtained by the new method were in good agreement with the residual stresses measured by the classical X-ray diffraction (XRD) method. - Research Highlights: → A new method was proposed to measure the real contact area for pile-up materials. → The real contact depth is defined as the sum of h_max and the pile-up height h_p. → The value of residual stress measured by the nanoindentation method was in good agreement with that by the XRD method.
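
The highlighted definition of the real contact depth can be turned into a one-line calculation. The Berkovich area function A = 24.5·h_c² used below is the ideal-tip form and is our assumption; the paper may use a calibrated area function.

```python
def real_contact_area(h_max_nm, h_pile_nm):
    """Projected contact area of a Berkovich indent when pile-up is present.
    The contact depth is taken as h_max + h_pile (the definition quoted in
    the research highlights); A = 24.5 * h_c**2 is the ideal Berkovich
    area function, used here as an assumption."""
    h_c = h_max_nm + h_pile_nm       # corrected contact depth in nm
    return 24.5 * h_c ** 2           # projected area in nm^2

print(real_contact_area(500.0, 0.0))    # ignoring pile-up (Oliver-Pharr style)
print(real_contact_area(500.0, 60.0))   # pile-up enlarges the true contact area
```

Because hardness and modulus scale inversely with the contact area, ignoring the pile-up term overestimates both, which is why the corrected depth matters for the stress measurement.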

  10. bNEAT: a Bayesian network method for detecting epistatic interactions in genome-wide association studies

    Directory of Open Access Journals (Sweden)

    Chen Xue-wen

    2011-07-01

    Full Text Available Abstract Background Detecting epistatic interactions plays a significant role in improving the prevention, diagnosis and treatment of complex human diseases and in understanding their pathogenesis. A recent study in automatic detection of epistatic interactions showed that Markov Blanket-based methods are capable of finding genetic variants strongly associated with common diseases and of reducing false positives when the number of instances is large. Unfortunately, a typical dataset from genome-wide association studies consists of a very limited number of examples, where current methods, including Markov Blanket-based methods, may perform poorly. Results To address small sample problems, we propose a Bayesian network-based approach (bNEAT) to detect epistatic interactions. The proposed method also employs a Branch-and-Bound technique for learning. We apply the proposed method to simulated datasets based on four disease models and a real dataset. Experimental results show that our method outperforms Markov Blanket-based methods and other commonly-used methods, especially when the number of samples is small. Conclusions Our results show that bNEAT achieves strong power regardless of the number of samples and is especially suitable for detecting epistatic interactions with slight or no marginal effects. The merits of the proposed approach lie in two aspects: a suitable score for Bayesian network structure learning that can reflect higher-order epistatic interactions and a heuristic Bayesian network structure learning method.

  11. A dynamic method for magnetic torque measurement

    Science.gov (United States)

    Lin, C. E.; Jou, H. L.

    1994-01-01

    In a magnetic suspension system, accurate force measurement results in better control performance in the test section, especially when a wider range of operation is required. Although many useful methods have been developed to obtain the desired model, significant error is inevitable because the magnetic field distribution of a large-gap magnetic suspension system is extremely nonlinear. This paper proposes a simple approach to measuring the magnetic torque of a magnetic suspension system using an angular photo encoder: the magnetic torque is derived from measurements of the change in angular velocity. The proposed idea is described and implemented to obtain the desired data, and is useful for calculating the magnetic force in a magnetic suspension system.
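
The conversion from encoder velocity data to torque can be illustrated with a finite-difference sketch: torque is the rotor inertia times the angular acceleration estimated from successive velocity samples. The inertia value and sample rate below are invented for illustration; the paper's calibration details are not reproduced.

```python
import numpy as np

def torque_from_encoder(omega_samples, dt, inertia):
    """Torque inferred from the angular-velocity change measured by a
    photo encoder: T = I * domega/dt, estimated via finite differences."""
    omega = np.asarray(omega_samples, dtype=float)
    alpha = np.gradient(omega, dt)   # angular acceleration (rad/s^2)
    return inertia * alpha           # torque time series (N*m)

# encoder velocity samples at 100 Hz; the inertia is a made-up rotor value
torques = torque_from_encoder([0.0, 0.1, 0.3, 0.6], dt=0.01, inertia=2e-3)
print(torques)
```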

  12. 25 CFR 900.125 - What shall a construction contract proposal contain?

    Science.gov (United States)

    2010-04-01

    ... organization shall also include in its construction contract proposal the following: (1) In the case of a... accept the tribally proposed standards; and (2) In the case of a contract for construction activities... methods (including national, regional, state, or tribal building codes or construction industry standards...

  13. Proposed Food and Drug Administration protective action guides for human food and animal feed: methods and implementation

    International Nuclear Information System (INIS)

    Schmidt, G.D.; Shleien, B.; Chiacchierini, R.P.

    1978-01-01

    The Food and Drug Administration's proposed recommendations to State and local agencies provide guidance on appropriate planning actions necessary for evaluating and preventing radioactive contamination of foods and animal feeds and the control and use of such products should they become contaminated. This presentation covers the recommendations on implementation of the Preventive and Emergency PAGs. These recommendations include (1) the use of 'Dietary Factors' to obtain PAGs for specific food items from the general guidance, (2) procedures to be used for radionuclide mixtures and other radionuclides, (3) field and laboratory methods for the measurement of the level of contamination in the event of an incident, and (4) protective actions to be implemented by State and local agencies to limit the radiation dose to the public. Specific protective actions which should be considered for implementation when the projected dose exceeds the Preventive PAG are given for application to pasture, milk, fruits and vegetables, and grains. At the Emergency PAG level, the protective action decision is whether condemnation or other disposition is appropriate. (author)

  14. Significant Change Spotting for Periodic Human Motion Segmentation of Cleaning Tasks Using Wearable Sensors

    Directory of Open Access Journals (Sweden)

    Kai-Chun Liu

    2017-01-01

    Full Text Available The proportion of the aging population is rapidly increasing around the world, which will place stress on society and healthcare systems. In recent years, advances in technology have created new opportunities for automatic monitoring of activities of daily living (ADL) to improve the quality of life and provide adequate medical service for the elderly. Such automatic ADL monitoring requires reliable ADL information at a fine-grained level, especially regarding the interaction between body gestures and the environment in the real world. In this work, we propose a significant change spotting mechanism for periodic human motion segmentation during cleaning task performance. A novel approach is proposed based on the search for a significant change of gestures, which can manage critical technical issues in activity recognition, such as continuous data segmentation, individual variance, and category ambiguity. Three typical machine learning classification algorithms are utilized for the identification of significant change candidates: the Support Vector Machine (SVM), k-Nearest Neighbors (kNN), and Naive Bayes (NB) algorithms. Overall, the proposed approach achieves an F1-score of 96.41% using the SVM classifier. The results show that the proposed approach can fulfill the requirement of fine-grained human motion segmentation for automatic ADL monitoring.
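
The spotting-and-classification idea can be sketched as follows: sliding-window features are extracted from a motion signal, and a classifier labels each window as steady motion or significant change. This toy uses mean/standard-deviation features and a hand-rolled k-NN as a stand-in for the SVM/kNN/NB classifiers in the paper; the signal, window size, and labels are all invented.

```python
import numpy as np

def window_features(sig, w):
    """Mean and standard deviation per non-overlapping window (toy feature set)."""
    feats = []
    for i in range(0, len(sig) - w + 1, w):
        seg = sig[i:i + w]
        feats.append([seg.mean(), seg.std()])
    return np.array(feats)

def knn_predict(X_train, y_train, X, k=3):
    """Minimal k-nearest-neighbour classifier (a stand-in for the SVM)."""
    preds = []
    for x in X:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        preds.append(int(np.bincount(nearest).argmax()))
    return np.array(preds)

rng = np.random.default_rng(1)
# first half: steady low-amplitude motion; second half: vigorous gestures
sig = np.concatenate([rng.normal(0, 0.1, 200), rng.normal(0, 1.0, 200)])
F = window_features(sig, 50)                    # 8 windows of 50 samples each
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])          # 0 = steady, 1 = significant change
pred = knn_predict(F, y, F, k=3)
print(pred)
```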

  15. A novel approach proposed for fractured zone detection using petrophysical logs

    International Nuclear Information System (INIS)

    Tokhmechi, B; Memarian, H; Noubari, H A; Moshiri, B

    2009-01-01

    Fracture detection is a key step in wellbore stability and fractured reservoir fluid flow simulation. While different methods have been proposed for fractured zone detection, each of them is associated with certain shortcomings that prevent their full use in related engineering applications. In this paper, a novel combined method is proposed for fractured zone detection, based on processing petrophysical logs with wavelet, classification and data fusion techniques. Image and petrophysical logs from the Asmari reservoir in eight wells of an oilfield in southwestern Iran were used to investigate the accuracy and applicability of the proposed method. Initially, an energy matching strategy was utilized to select the optimum mother wavelets for de-noising and decomposition of the petrophysical logs. Parzen and Bayesian classifiers were applied to raw, de-noised and various frequency bands of the logs after decomposition in order to detect fractured zones. Results show that the low-frequency bands (approximation 2, a_2) of the de-noised logs are the best data for fractured zone detection. The classifiers treated one well as the test well and the other seven wells as training wells. Majority voting, optimistic OWA (ordered weighted averaging) and pessimistic OWA methods were used to fuse the results obtained from the seven training wells. Results confirmed that Parzen with optimistic OWA is the best combination for detecting fractured zones. The generalization of the method is confirmed with an average accuracy of about 72%.
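
The fusion step can be illustrated with a minimal sketch: binary fractured/not-fractured decisions from the seven training wells combined by majority voting, and membership scores combined by the two extreme OWA operators. Treating optimistic/pessimistic OWA as max/min is a simplification of the general weighted form, and the vote and score values below are invented.

```python
import numpy as np

def majority_vote(decisions):
    """Fuse binary fractured/not-fractured votes from several wells."""
    d = np.asarray(decisions)
    return int(d.sum() * 2 > d.size)

def owa(scores, optimistic=True):
    """Ordered weighted averaging in its two extreme forms: the optimistic
    OWA puts all weight on the largest score, the pessimistic OWA on the
    smallest (a simplification of the general OWA weight vector)."""
    s = np.sort(np.asarray(scores, dtype=float))
    return s[-1] if optimistic else s[0]

votes = [1, 1, 0, 1, 0, 1, 1]                  # seven training wells' decisions
scores = [0.8, 0.7, 0.4, 0.9, 0.3, 0.6, 0.75]  # their fracture membership scores
print(majority_vote(votes))                    # zone flagged as fractured
print(owa(scores, optimistic=True))
print(owa(scores, optimistic=False))
```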

  16. A Sparsity-Promoted Method Based on Majorization-Minimization for Weak Fault Feature Enhancement.

    Science.gov (United States)

    Ren, Bangyue; Hao, Yansong; Wang, Huaqing; Song, Liuyang; Tang, Gang; Yuan, Hongfang

    2018-03-28

    Fault transient impulses induced by faulty components in rotating machinery usually contain substantial interference. Fault features are comparatively weak in the initial fault stage, which renders fault diagnosis more difficult. In this case, a sparse representation method based on the Majorization-Minimization (MM) algorithm is proposed to enhance weak fault features and extract them from strong background noise. However, the traditional MM algorithm suffers from two issues: the choice of the sparse basis and complicated calculations. To address these challenges, a modified MM algorithm is proposed in which a sparse optimization objective function is designed first. Inspired by the Basis Pursuit (BP) model, the optimization function integrates an impulsive feature-preserving factor and a penalty function factor. Second, a modified majorization iterative method is applied to address the convex optimization problem of the designed function. A series of sparse coefficients can be achieved through iteration, which contain only the transient components. Notably, there is no need to select a sparse basis in the proposed iterative method because it is fixed as a unit matrix. The reconstruction step is then omitted, which significantly increases detection efficiency. Eventually, envelope analysis of the sparse coefficients is performed to extract the weak fault features. Simulated and experimental signals including bearings and gearboxes are employed to validate the effectiveness of the proposed method. In addition, comparisons are made to show that the proposed method outperforms the traditional MM algorithm in terms of detection results and efficiency.
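
With the sparse basis fixed to a unit matrix, as the abstract notes, each majorization step of a BP-style objective 0.5*||y - x||^2 + lam*||x||_1 reduces to a soft-threshold, which makes the feature-enhancement effect easy to demonstrate. This is a toy sketch, not the authors' exact objective; the impulse positions and the threshold value are invented.

```python
import numpy as np

def soft_threshold(x, lam):
    """Closed-form minimizer of 0.5*(y - x)^2 + lam*|x| per sample."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def mm_sparse_enhance(y, lam=0.5, n_iter=30):
    """MM iterations for min_x 0.5*||y - x||^2 + lam*||x||_1 with the sparse
    basis fixed to the identity matrix. Small (noise) samples shrink to
    exactly zero while large fault impulses are preserved."""
    x = np.zeros_like(y)
    for _ in range(n_iter):
        x = soft_threshold(y, lam)   # each majorizer is minimized in closed form
    return x

rng = np.random.default_rng(2)
y = rng.normal(0, 0.1, 100)          # strong background noise
y[[20, 55, 80]] += 3.0               # weak-fault transient impulses
x = mm_sparse_enhance(y, lam=0.5)
print(np.nonzero(x)[0])              # only the impulse locations survive
```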

  17. Caribbean alternative energy programme project proposals

    International Nuclear Information System (INIS)

    1978-03-01

    This is the third report to follow the Project Group Meeting on ALTERNATIVE ENERGY RESOURCES, Barbados, September 1977. It consists of summaries of the project proposals identified at the Meeting. The first two reports have been circulated previously. The first, CSC(77)AER-1, covers the background, proceedings and recommendations resulting from the meeting, as well as containing a brief outline of the project proposals. The country papers and technical papers that were presented at the meeting or served as background material form the second report, CSC(77)AER-2. Copies of the first two reports can be obtained on request from the Commonwealth Science Council. Projects with potential for making significant progress in the short term have been marked with an asterisk.

  18. A Sea-Sky Line Detection Method for Unmanned Surface Vehicles Based on Gradient Saliency.

    Science.gov (United States)

    Wang, Bo; Su, Yumin; Wan, Lei

    2016-04-15

    Special features of real marine environments such as cloud clutter, sea glint and weather conditions always result in various kinds of interference in optical images, which make it very difficult for unmanned surface vehicles (USVs) to detect the sea-sky line (SSL) accurately. To solve this problem, a saliency-based SSL detection method is proposed. Through the computation of gradient saliency, the line features of the SSL are enhanced effectively while other interference factors are relatively suppressed, and line support regions are obtained by a region growing method on gradient orientation. SSL identification is achieved according to region contrast, line segment length and orientation features, and optimal state estimation of the SSL detection is implemented by introducing a cubature Kalman filter (CKF). Finally, the proposed method is tested on a benchmark dataset from the "XL" USV in a real marine environment, and the experimental results demonstrate that the proposed method is significantly superior to other state-of-the-art methods in terms of accuracy rate and real-time performance, and that its accuracy and stability are effectively improved by the CKF.
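
On a synthetic image where sky and sea differ in mean brightness, the gradient-saliency idea reduces to finding the row with the strongest vertical gradient. This crude sketch stands in for the full pipeline (region growing, region contrast features and CKF tracking are omitted), and the image statistics are invented.

```python
import numpy as np

def detect_ssl_row(img):
    """Return the row index with the strongest mean vertical gradient,
    a crude proxy for the gradient-saliency step."""
    grad = np.abs(np.diff(img.astype(float), axis=0)).mean(axis=1)
    return int(np.argmax(grad)) + 1   # +1: diff maps row i to the edge below it

rng = np.random.default_rng(3)
h, w, horizon = 120, 160, 70
img = np.vstack([
    rng.normal(180, 5, (horizon, w)),     # bright sky above the horizon
    rng.normal(60, 5, (h - horizon, w)),  # darker sea below it
])
print(detect_ssl_row(img))
```

A real frame would need the saliency map and line-support-region checks to reject cloud edges and glint before this simple maximum becomes reliable.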

  19. Emissions trading and competitive positions. The European Proposal for a Directive establishing a Framework for Greenhouse Gas Emissions Trading and Methods for the initial Allocation of Pollution Rights

    International Nuclear Information System (INIS)

    Grimeaud, D.; Peeters, M.

    2002-10-01

    The study on the intention to introduce emissions trading at the European Union level was conducted on the basis of the following three questions: Which methods can be used (by the Member States) to distribute the tradable emissions rights, and which legal preconditions should be observed considering the EU Treaty and the relevant directive proposal? Whenever necessary and possible, international agreements on climate change and international trade law will be mentioned. Which safeguards are available for fair competition, and which system of emissions trading is advisable in this perspective? How should the PSR (performance standard rate) system, which is preferred by industry, be valued? The structure of this study is as follows: chapter 2 gives insight into the various methods that can be used to start an emissions trading system, i.e. the way tradable pollution rights are distributed (initial allocation). Chapter 3 further examines the system of initial allocation of pollution rights as chosen in the proposal for the European directive. The aim is to give an exact qualification of the method of emissions trading, especially the method of initial allocation, used in the directive proposal. Chapter 4 examines whether safeguards are available to prevent competition distortions between firms that fall under the scope of the emissions trading scheme. Special attention is given to conditions that result from the EU Treaty in this context, such as the prohibition of state aid. In this chapter international trade law is dealt with as well. Chapter 5 presents an executive summary and answers the specific question of whether the PSR system is legally acceptable or perhaps even recommendable.

  20. A fuzzy logic based PROMETHEE method for material selection problems

    Directory of Open Access Journals (Sweden)

    Muhammet Gul

    2018-03-01

    Full Text Available Material selection is a complex problem in the design and development of products for diverse engineering applications. This paper presents a fuzzy PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) method based on trapezoidal fuzzy interval numbers that can be applied to the selection of materials for an automotive instrument panel. It also makes a distinctive contribution to the literature by applying a fuzzy decision-making approach to material selection problems. The method is illustrated, validated, and compared against three different fuzzy MCDM methods (fuzzy VIKOR, fuzzy TOPSIS, and fuzzy ELECTRE) in terms of its ranking performance. The relationships between the compared methods and the proposed scenarios for fuzzy PROMETHEE are evaluated via Spearman's correlation coefficient. Styrene Maleic Anhydride and Polypropylene are identified as suitable candidate materials for the automotive instrument panel case. We propose a generic fuzzy MCDM methodology that can be practically implemented for material selection problems. The main advantages of the methodology are its consideration of vagueness, uncertainty, and fuzziness in the decision-making environment.
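
The flow computation at the core of PROMETHEE can be sketched in crisp form; the paper's fuzzy variant wraps the same positive/negative flow logic around trapezoidal fuzzy numbers. The alternatives, criteria and weights below are invented, and the simple "usual" preference function is an assumption.

```python
import numpy as np

def promethee_ii(X, weights):
    """Crisp PROMETHEE II net outranking flows. X[i, j] is the score of
    alternative i on criterion j (higher is better), and the 'usual'
    preference function P(d) = 1 if d > 0 else 0 is used per criterion."""
    n = X.shape[0]
    phi = np.zeros(n)
    for i in range(n):
        for k in range(n):
            if i == k:
                continue
            pref_ik = ((X[i] > X[k]).astype(float) * weights).sum()
            pref_ki = ((X[k] > X[i]).astype(float) * weights).sum()
            phi[i] += pref_ik - pref_ki
    return phi / (n - 1)    # net flow per alternative

# three candidate materials scored on three criteria (higher = better)
X = np.array([[0.8, 0.6, 0.7],
              [0.5, 0.9, 0.6],
              [0.4, 0.5, 0.5]])
w = np.array([0.5, 0.3, 0.2])   # criterion weights, summing to 1
phi = promethee_ii(X, w)
print(np.argsort(-phi))         # ranking, best alternative first
```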

  1. Experiments study on attitude coupling control method for flexible spacecraft

    Science.gov (United States)

    Wang, Jie; Li, Dongxu

    2018-06-01

    High pointing accuracy and stabilization are essential for spacecraft carrying out Earth observation, laser communication and space exploration missions. However, when a spacecraft undergoes a large-angle maneuver, the excited elastic oscillation of flexible appendages, for instance, the solar wing and onboard antenna, degrades the performance of the spacecraft platform. This paper proposes a coupling control method, which synthesizes an adaptive sliding mode controller and a positive position feedback (PPF) controller, to control the attitude and suppress the elastic vibration simultaneously. Because of its prominent performance for attitude tracking and stabilization, the proposed method is capable of slewing the flexible spacecraft through a large angle. The method is also robust to parametric uncertainties of the spacecraft model. Numerical simulations are carried out with a hub-plate system which undergoes a single-axis attitude maneuver. An attitude control testbed for the flexible spacecraft is established and experiments are conducted to validate the coupling control method. Both numerical and experimental results demonstrate that the method can effectively decrease the stabilization time and improve the attitude accuracy of the flexible spacecraft.

  2. Nanomaterials, and Occupational Health and Safety—A Literature Review About Control Banding and a Semi-Quantitative Method Proposed for Hazard Assessment.

    Science.gov (United States)

    Dimou, Kaotar; Emond, Claude

    2017-06-01

    In recent decades, the control banding (CB) approach has been recognised as a hazard assessment methodology because of its increased importance in the occupational safety, health and hygiene (OSHH) industry. According to the American Industrial Hygiene Association, this approach originated in the pharmaceutical industry in the United Kingdom. The aim of the CB approach is to protect more than 90% (or approximately 2.7 billion) of the world’s workers who do not have access to OSHH professionals and traditional quantitative risk assessment methods. In other words, CB is a qualitative or semi-quantitative tool designed to prevent occupational accidents by controlling worker exposures to potentially hazardous chemicals in the absence of comprehensive toxicological and exposure data. These criteria correspond very precisely to the development and production of engineered nanomaterials (ENMs). Considering the significant lack of scientific knowledge about the work-related health risks of ENMs, CB is, in general, appropriate for these issues. Currently, CB can be adapted to the specificities of ENMs; hundreds of nanotechnology products containing ENMs are already on the market. In this context, this qualitative or semi-quantitative approach appears to be relevant for characterising and quantifying the degree of physico-chemical and biological reactivity of ENMs, leading towards better control of human health effects and the safe handling of ENMs in workplaces. A greater understanding of the CB approach is important for further managing the risks related to handling hazardous substances, such as ENMs, without established occupational exposure limits. In recent years, this topic has garnered much interest, including discussions in many technical papers. Several CB models have been developed, and many countries have created their own nano-specific CB instruments. The aims of this research were to perform a literature review about CBs, to classify the main

  3. Nanomaterials, and Occupational Health and Safety—A Literature Review About Control Banding and a Semi-Quantitative Method Proposed for Hazard Assessment

    International Nuclear Information System (INIS)

    Dimou, Kaotar; Emond, Claude

    2017-01-01

    In recent decades, the control banding (CB) approach has been recognised as a hazard assessment methodology because of its increased importance in the occupational safety, health and hygiene (OSHH) industry. According to the American Industrial Hygiene Association, this approach originated in the pharmaceutical industry in the United Kingdom. The aim of the CB approach is to protect more than 90% (or approximately 2.7 billion) of the world’s workers who do not have access to OSHH professionals and traditional quantitative risk assessment methods. In other words, CB is a qualitative or semi-quantitative tool designed to prevent occupational accidents by controlling worker exposures to potentially hazardous chemicals in the absence of comprehensive toxicological and exposure data. These criteria correspond very precisely to the development and production of engineered nanomaterials (ENMs). Considering the significant lack of scientific knowledge about the work-related health risks of ENMs, CB is, in general, appropriate for these issues. Currently, CB can be adapted to the specificities of ENMs; hundreds of nanotechnology products containing ENMs are already on the market. In this context, this qualitative or semi-quantitative approach appears to be relevant for characterising and quantifying the degree of physico-chemical and biological reactivity of ENMs, leading towards better control of human health effects and the safe handling of ENMs in workplaces. A greater understanding of the CB approach is important for further managing the risks related to handling hazardous substances, such as ENMs, without established occupational exposure limits. In recent years, this topic has garnered much interest, including discussions in many technical papers. Several CB models have been developed, and many countries have created their own nano-specific CB instruments. The aims of this research were to perform a literature review about CBs, to classify the main

  4. The cultural significance of wild mushrooms in San Mateo Huexoyucan, Tlaxcala, Mexico

    Science.gov (United States)

    2014-01-01

    Background We performed an ethnomycological study in a community in Tlaxcala, Central Mexico to identify the most important species of wild mushrooms growing in an oak forest, their significance criteria, and to validate the Cultural Significance Index (CSI). Methods Thirty-three mestizo individuals were randomly selected in San Mateo Huexoyucan and were asked seven questions based on criteria established by the CSI. Among the 49 mushroom species collected in the oak forest and open areas, 20 species were mentioned most often and were analyzed in more detail. Ordination and grouping techniques were used to determine the relationship between the cultural significance of the mushroom species, according to a perceived abundance index, frequency of use index, taste score appreciation index, multifunctional food index, knowledge transmission index, and health index. Results The mushrooms with highest CSI values were Agaricus campestris, Ramaria spp., Amanita aff. basii, Russula spp., Ustilago maydis, and Boletus variipes. These species were characterized by their good taste and were considered very nutritional. The species with the lowest cultural significance included Russula mexicana, Lycoperdon perlatum, and Strobylomyces strobilaceus. The ordination and grouping analyses identified four groups of mushrooms by their significance to the people of Huexoyucan. The most important variables that explained the grouping were the taste score appreciation index, health index, the knowledge transmission index, and the frequency of use index. Conclusions A. aff. basii and A. campestris were the most significant wild mushrooms to the people of San Mateo. The diversity of the Russula species and the variety of Amanita and Ramaria species used by these people was outstanding. Environments outside the forest also produced useful resources. The CSI used in Oaxaca was useful for determining the cultural significance of mushrooms in SMH, Tlaxcala. 
This list of mushrooms can be used in

  5. NEW METHOD OF PREVENTION OF IRON DEFICIENCY ANEMIA IN PREGNANT TEENS

    Directory of Open Access Journals (Sweden)

    E. S. Mikhaylin

    2018-01-01

    Full Text Available The paper presents an assessment of the effectiveness of the method proposed by the authors for the prevention of iron deficiency anemia in minor pregnant women. In the first stage, 593 histories of childbirth were retrospectively analyzed (group 1, minors aged 13-15 years (n = 49); group 2, minors aged 16-17 (n = 434); group 3, women of middle reproductive age (n = 110)). In the second stage, a prospective study of the frequency and structure of anemia of pregnancy was carried out (group 1, minors aged 13-15 years (n = 17); group 2, minors aged 16-17 (n = 127); group 3, women of middle reproductive age (n = 110)). In the third stage, minor pregnant women were divided into two groups: in group 1 (the main group, n = 144), iron deficiency anemia was prevented according to the method we proposed; in group 2 (the comparison group), traditional therapy with iron preparations was started when signs of anemia appeared. The essence of the proposed method is that a minor pregnant woman, without waiting for laboratory signs of anemia, is tested for ferritin in venous blood; at a value below 35 ng/ml, oral iron preparations are prescribed at conventional preventive doses for a period of 3 months, and if after three months the ferritin level in venous blood is again below 35 ng/ml, the iron-containing preparations are continued for another 3 months. Use of the proposed method contributed to a significant decrease in the incidence of anemia in minor pregnant women and helps to reduce the frequency and severity of anemia in this complex category of patients.
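
The screening rule described in the abstract is simple enough to state as code: treat below the 35 ng/ml ferritin cutoff without waiting for signs of anemia, re-check at 3 months, and extend once if the level is still low. The function name and return strings are ours; no clinical detail beyond the abstract is implied.

```python
FERRITIN_CUTOFF_NG_ML = 35.0

def iron_prophylaxis_plan(ferritin_initial, ferritin_after_3_months=None):
    """Decision rule from the proposed protocol: treat on ferritin below
    35 ng/ml, re-check after 3 months, and extend for another 3 months
    if the level is still below the cutoff."""
    if ferritin_initial >= FERRITIN_CUTOFF_NG_ML:
        return "no prophylaxis"
    if ferritin_after_3_months is None:
        return "oral iron, 3 months, then re-check ferritin"
    if ferritin_after_3_months < FERRITIN_CUTOFF_NG_ML:
        return "continue oral iron for 3 more months"
    return "stop prophylaxis"

print(iron_prophylaxis_plan(40.0))
print(iron_prophylaxis_plan(20.0))
print(iron_prophylaxis_plan(20.0, ferritin_after_3_months=30.0))
```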

  6. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  7. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang; Cheng, James; Xiao, Xiaokui; Fujimaki, Ryohei; Muraoka, Yusuke

    2017-01-01

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  8. Improving the functional status of students using the proposed recovery method

    Directory of Open Access Journals (Sweden)

    Evtukh M.I.

    2012-12-01

    Full Text Available Purpose: to improve the organizational and methodological foundations of physical education for the health improvement of university students during training. The study involved 152 second-year students of the International Economics and Humanities University named after Stepan Demyanchuk. Students were divided into a control (n = 76) and a main (n = 76) group, which were similar in age and physical development. By the end of the study, application of the proposed recovery technique had restored the respiratory and cardiovascular function of students in the main group to the level of healthy untrained people. A similar increase in the functional capacity of the main group was registered using the Skibinski index, which provides a combined evaluation of the respiratory and cardiovascular function of students; its growth from a satisfactory to a good level was recorded.

  9. 76 FR 39800 - Proposed Flood Elevation Determinations

    Science.gov (United States)

    2011-07-07

    ... Planning and Review. This proposed rule is not a significant regulatory action under the criteria of... Idaho Springs Maps are available for inspection at City Hall, 1711 Miner Street, Idaho Springs, CO 80452.... Town of Macclesfield Maps are available for inspection at the Edgecombe County Planning Department, 201...

  10. Proposed New Method of Interpretation of Infrared Ship Signature Requirements

    NARCIS (Netherlands)

    Neele, F.P.; Wilson, M.T.; Youern, K.

    2005-01-01

    A new method of deriving and defining requirements for the infrared signature of new ships is presented. The current approach is to specify the maximum allowed temperature or radiance contrast of the ship with respect to its background. At present, in most NATO countries, it is the contractor’s

  11. PACMan to Help Sort Hubble Proposals

    Science.gov (United States)

    Kohler, Susanna

    2017-04-01

    Every year, astronomers submit over a thousand proposals requesting time on the Hubble Space Telescope (HST). Currently, humans must sort through each of these proposals by hand before sending them off for review. Could this burden be shifted to computers? A Problem of Volume: Astronomer Molly Peeples gathered stats on the HST submissions sent in last week for the upcoming HST Cycle 25 (the deadline was Friday night), relative to previous years. This year's proposal round broke the record, with over 1200 proposals submitted in total for Cycle 25. [Molly Peeples] Each proposal cycle for HST time attracts on the order of 1100 proposals, accounting for far more HST time than is available. The proposals are therefore carefully reviewed by around 150 international members of the astronomy community during a six-month process to select those with the highest scientific merit. Ideally, each proposal will be read by reviewers that have scientific expertise relevant to the proposal topic: if a proposal requests HST time to study star formation, for instance, then the reviewers assigned to it should have research expertise in star formation. How does this matching of proposals to reviewers occur? The current method relies on self-reported categorization of the submitted proposals. This is unreliable, however; proposals are often mis-categorized by submitters due to misunderstanding or ambiguous cases. As a result, the Science Policies Group at the Space Telescope Science Institute (STScI), which oversees the review of HST proposals, must go through each of the proposals by hand and re-categorize them. The proposals are then matched to reviewers with self-declared expertise in the same category. With the number of HST proposals on the rise, and the expectation that the upcoming James Webb Space Telescope (JWST) will elicit even more proposals for time than Hubble, scientists at STScI and NASA are now asking: could the human hours necessary for this task be spared? 
Could a computer program

  12. Customer churn prediction using a hybrid method and censored data

    Directory of Open Access Journals (Sweden)

    Reza Tavakkoli-Moghaddam

    2013-05-01

Full Text Available Customers are believed to be the main part of any organization's assets, and customer retention as well as customer churn management are important responsibilities of organizations. In today's competitive environment, organizations must do their best to retain their existing customers, since attracting new customers costs significantly more than taking care of existing ones. In this paper, we present a hybrid method based on neural networks and Cox regression analysis, where the neural network is used for outlier data and the Cox regression method is implemented for prediction of future events. The proposed model of this paper has been implemented on some data, and the results are compared based on five criteria: prediction accuracy, type I and type II errors, root mean square error and mean absolute deviation. The preliminary results indicate that the proposed model of this paper performs better than alternative methods.
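The five comparison criteria named above are standard and can be computed directly for a binary churn predictor. A minimal sketch; the function name and sample labels are illustrative, not taken from the paper:

```python
# Hypothetical sketch of the paper's five evaluation criteria for a binary
# churn predictor (1 = churns, 0 = stays). Data are invented for illustration.

def evaluate_churn(y_true, y_pred):
    n = len(y_true)
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # type I errors
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # type II errors
    rmse = (sum((t - p) ** 2 for t, p in pairs) / n) ** 0.5
    mad = sum(abs(t - p) for t, p in pairs) / n
    return {"accuracy": (tp + tn) / n,
            "type_I_rate": fp / max(fp + tn, 1),
            "type_II_rate": fn / max(fn + tp, 1),
            "rmse": rmse,
            "mad": mad}

m = evaluate_churn([1, 0, 1, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```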

  13. Hyperspectral image compressing using wavelet-based method

    Science.gov (United States)

    Yu, Hui; Zhang, Zhi-jie; Lei, Bo; Wang, Chen-sheng

    2017-10-01

Hyperspectral imaging sensors can acquire images in hundreds of continuous narrow spectral bands. Therefore each object presented in the image can be identified from its spectral response. However, this kind of imaging produces a huge amount of data, which requires transmission, processing, and storage resources for both airborne and spaceborne imaging. Due to the high volume of hyperspectral image data, the exploration of compression strategies has received a lot of attention in recent years. Compression of hyperspectral data cubes is an effective solution to these problems. Lossless compression of hyperspectral data usually results in a low compression ratio, which may not meet the available resources; on the other hand, lossy compression may give the desired ratio, but with a significant degradation effect on the object identification performance of the hyperspectral data. Moreover, most hyperspectral data compression techniques exploit the similarities in the spectral dimension, which requires band reordering or regrouping to make use of the spectral redundancy. In this paper, we explored the spectral cross-correlation between different bands and proposed an adaptive band selection method to obtain the spectral bands which contain most of the information of the acquired hyperspectral data cube. The proposed method mainly consists of three steps: first, the algorithm decomposes the original hyperspectral imagery into a series of subspaces based on the hyper correlation matrix of the hyperspectral images between different bands. Then a wavelet-based algorithm is applied to each subspace. Finally, PCA is applied to the wavelet coefficients to produce the chosen number of components. The performance of the proposed method was tested using the ISODATA classification method.
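The first step, splitting the cube into subspaces by spectral cross-correlation, can be sketched as follows. This is a simplified adjacent-band version (the paper works with the full hyper correlation matrix), and the threshold value is an assumption:

```python
# Hedged sketch: group spectral bands into subspaces wherever the correlation
# between neighbouring bands drops below a threshold. Bands are flattened to
# 1-D lists; data and threshold are illustrative.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def split_subspaces(bands, threshold=0.9):
    """Returns lists of band indices; a new group starts at each correlation break."""
    groups, current = [], [0]
    for i in range(1, len(bands)):
        if pearson(bands[i - 1], bands[i]) >= threshold:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups

bands = [[1, 2, 3, 4], [1.1, 2.1, 3.0, 4.2],   # highly correlated pair
         [4, 1, 3, 2]]                          # decorrelated band
```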

  14. Benefit analysis of proposed information systems

    OpenAIRE

    Besore, Mark H.

    1991-01-01

Approved for public release; distribution is unlimited. This thesis reviewed two different approaches to benefit analysis, benefit comparison and user satisfaction, that could be applied to the evaluation of proposed information systems under consideration for acquisition by the federal government. Currently the General Services Administration only recommends that present value analysis methods be used in the analysis of alternatives, even though the GSA specifies...

  15. The clinical significance of computerized axial tomography (CAT) in consideration of conventional diagnostic methods

    International Nuclear Information System (INIS)

    Huenig, R.

    1976-01-01

    Regarding CAT of the intracranial region, the article informs on a) techniques of examination including the production of normal structures, b) the recognizable pathological changes, c) possibilities of enhancement, d) possibilities of course observation, e) limitations of the methods, as well as on f) risk/benefit aspects g) benefit/cost calculations as compared to conventional methods, and on h) the influence of CAT on the frequency of conventional methods of examination. Regarding CAT of the extracranial region, the information available up to the meeting is reported on. (orig./LH) [de

  16. Test the Overall Significance of p-values by Using Joint Tail Probability of Ordered p-values as Test Statistic

    OpenAIRE

    Fang, Yongxiang; Wit, Ernst

    2008-01-01

Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, it is obvious that Fisher’s statistic is more sensitive to smaller p-values than to larger p-values, and a small p-value may overrule the other p-values and decide the test result. This is, in some cases, viewed as a flaw. In order to overcome this flaw and improve the power of the test, the joint tail probability of a set of p-values is proposed as a ...

  17. Degradation of light emitting diodes: a proposed methodology

    International Nuclear Information System (INIS)

    Koh, Sau; Vam Driel, Willem; Zhang, G.Q.

    2011-01-01

Due to their long lifetime and high efficacy, light emitting diodes have the potential to revolutionize the illumination industry. However, self-heating and high environmental temperatures, which lead to increased junction temperature, as well as degradation due to electrical overstress, can shorten the life of the light emitting diode. In this research, a methodology to investigate the degradation of the LED emitter has been proposed. The epoxy lens of the emitter can be modelled using simplified Eyring methods, whereas an equation has been proposed for describing the degradation of the LED emitters. (semiconductor devices)

  18. An algebraic method for constructing stable and consistent autoregressive filters

    International Nuclear Information System (INIS)

    Harlim, John; Hong, Hoon; Robbins, Jacob L.

    2015-01-01

In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
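The classical stability condition referred to above can be checked directly from the model coefficients. A minimal sketch for the AR(2) case, assuming the model x_t = a1*x_{t-1} + a2*x_{t-2} + noise (the coefficients in the usage below are invented for illustration, not from the paper):

```python
# Sketch of the classical AR stability check: an AR(2) model is stable when
# both roots of the characteristic polynomial z^2 - a1*z - a2 = 0 lie
# strictly inside the unit circle.
import cmath

def ar2_is_stable(a1, a2):
    disc = cmath.sqrt(a1 * a1 + 4 * a2)        # discriminant of z^2 - a1*z - a2
    roots = [(a1 + disc) / 2, (a1 - disc) / 2]
    return all(abs(r) < 1 for r in roots)
```

For example, ar2_is_stable(0.5, 0.3) is stable, while ar2_is_stable(1.2, 0.3) is not, since the latter has a root outside the unit circle.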

  19. Evaluation of a method to determine the myocardial uptake from 123I-BMIPP myocardial SPECT and its significance

    International Nuclear Information System (INIS)

    Iwase, Mikio; Toriyama, Takayuki; Itou, Masato; Shimao, Ryuichiro; Ikeda, Koshiro; Suzuki, Takeshi; Nobuta, Takaaki; Iida, Akihiko.

    1996-01-01

We examined methods of calculating the myocardial uptake (TU) of 123I-BMIPP by SPECT, and compared TU to heart function (ejection fraction (EF), cardiac output (CO), cardiac index (CI)) calculated by left ventriculography. Forty-two patients with acute myocardial infarction were classified into 5 groups: within 1 week (I), from 1 to 2 weeks (II), from 2 weeks to 1.5 months (III), from 1.5 to 3 months (IV) and more than 3 months (V) after percutaneous transluminal coronary angioplasty (PTCA). Chest depth (Tw) was calculated by measuring the thoracic absorption rate of 123I. In calculating TU, the myocardial count was calculated from short-axis tomograms, and absorption was then corrected using Tw to calculate each value on the early-phase image (E) and delay-phase image (D). The influence of lung uptake on the myocardial count was only 1.76%. When TU was compared to heart function, there were correlations in group I and group V. Especially in group V, D-TU was significantly correlated with heart function. Among the heart function indices, CI, but not EF or CO, was significantly correlated with TU. It was suggested that the correlation between TU and heart function reflected the infarct condition before PTCA in group I, and individual differences in the recovery of fatty acid metabolism in group V. The significant correlation between D-TU and CI suggests that D-TU reflects heart function and fatty acid metabolism, although TU is influenced by differences in physical status. (author)

  20. Proposals to overcome limitations in the EU chemical risk assessment scheme

    DEFF Research Database (Denmark)

    Trapp, Stefan; Schwartz, S.

    2000-01-01

The notification of new chemicals in the European Union requires a risk assessment. A Technical Guidance Document (TGD) was prepared for assistance. The TGD proposes QSARs, regressions and models from various sources. Each method has its own range of applicability and its own restrictions...

  1. A Project Strategic Index proposal for portfolio selection in electrical company based on the Analytic Network Process

    Energy Technology Data Exchange (ETDEWEB)

    Smith-Perera, Aida [Universidad Metropolitana de Caracas, Departamento de Gestion Tecnologica, Caracas 1071, Edo Miranda (Venezuela); Garcia-Melon, Monica; Poveda-Bautista, Rocio; Pastor-Ferrando, Juan-Pascual [Universidad Politecnica de Valencia, Departamento de Proyectos de Ingenieria, Camino de vera s/n 46022 Valencia (Spain)

    2010-08-15

In this paper a new approach to prioritize project portfolios in an efficient and reliable way is presented. It is based on the strategic objectives of the company and multicriteria decision methods. The paper introduces a rigorous method with acceptable complexity which seeks to assist managers of a big Electrical Company of Venezuela in distributing the annual budget among the possible improvement actions to be conducted on the electrical network of Caracas. A total of 15 network improvement actions grouped into three clusters according to the strategic objectives of the company have been analyzed using the proposed Project Strategic Index (PSI). The approach combines the use of the Analytic Network Process (ANP) method with the information obtained from the experts during the decision-making process. The ANP method allows the aggregation of the experts' judgments on each of the indicators used into one Project Strategic Index. In addition, ANP is based on utility ratio functions, which are the most appropriate for the analysis of uncertain data, like experts' estimations. Finally, unlike other multicriteria techniques, ANP allows the decision problem to be modelled using the relationships among dependent criteria. The participating experts coincided in the appreciation that the method proposed in this paper is useful and an improvement on traditional budget distribution techniques. They find the results obtained coherent, the process seems sufficiently rigorous and precise, and the use of resources is significantly less than in other methods. (author)
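The final aggregation of expert judgments into a single index per action can be illustrated with a plain weighted sum. This is a simplification of ANP, which additionally models dependencies among criteria; all criterion names, weights and scores below are invented for illustration:

```python
# Hedged sketch: rank candidate network improvement actions by a single
# strategic index computed as a weighted sum of normalized criterion scores.

def project_strategic_index(scores, weights):
    """scores: {criterion: value in [0, 1]}; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[c] * weights[c] for c in weights)

weights = {"reliability": 0.5, "cost": 0.3, "safety": 0.2}
actions = {
    "substation_upgrade": {"reliability": 0.9, "cost": 0.4, "safety": 0.7},
    "line_maintenance":   {"reliability": 0.6, "cost": 0.8, "safety": 0.5},
}
ranking = sorted(actions,
                 key=lambda a: project_strategic_index(actions[a], weights),
                 reverse=True)
```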

  2. A Project Strategic Index proposal for portfolio selection in electrical company based on the Analytic Network Process

    International Nuclear Information System (INIS)

    Smith-Perera, Aida; Garcia-Melon, Monica; Poveda-Bautista, Rocio; Pastor-Ferrando, Juan-Pascual

    2010-01-01

In this paper a new approach to prioritize project portfolios in an efficient and reliable way is presented. It is based on the strategic objectives of the company and multicriteria decision methods. The paper introduces a rigorous method with acceptable complexity which seeks to assist managers of a big Electrical Company of Venezuela in distributing the annual budget among the possible improvement actions to be conducted on the electrical network of Caracas. A total of 15 network improvement actions grouped into three clusters according to the strategic objectives of the company have been analyzed using the proposed Project Strategic Index (PSI). The approach combines the use of the Analytic Network Process (ANP) method with the information obtained from the experts during the decision-making process. The ANP method allows the aggregation of the experts' judgments on each of the indicators used into one Project Strategic Index. In addition, ANP is based on utility ratio functions, which are the most appropriate for the analysis of uncertain data, like experts' estimations. Finally, unlike other multicriteria techniques, ANP allows the decision problem to be modelled using the relationships among dependent criteria. The participating experts coincided in the appreciation that the method proposed in this paper is useful and an improvement on traditional budget distribution techniques. They find the results obtained coherent, the process seems sufficiently rigorous and precise, and the use of resources is significantly less than in other methods. (author)

  3. Theoretical Significance in Q Methodology: A Qualitative Approach to a Mixed Method

    Science.gov (United States)

    Ramlo, Susan

    2015-01-01

    Q methodology (Q) has offered researchers a unique scientific measure of subjectivity since William Stephenson's first article in 1935. Q's focus on subjectivity includes self-referential meaning and interpretation. Q is most often identified with its technique (Q-sort) and its method (factor analysis to group people); yet, it consists of a…

  4. DNS Tunneling Detection Method Based on Multilabel Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Ahmed Almusawi

    2018-01-01

Full Text Available DNS tunneling is a method used by malicious users who intend to bypass the firewall to send or receive commands and data. This has a significant impact on revealing or releasing classified information. Several researchers have examined the use of machine learning for detecting DNS tunneling. However, these studies have treated the problem of DNS tunneling as a binary classification where the class label is either legitimate or tunnel. In fact, there are different types of DNS tunneling such as FTP-DNS tunneling, HTTP-DNS tunneling, HTTPS-DNS tunneling, and POP3-DNS tunneling. Therefore, there is a vital demand not only to detect DNS tunneling but also to classify the type of tunnel. This study aims to propose a multilabel support vector machine in order to detect and classify DNS tunneling. The proposed method has been evaluated using a benchmark dataset that contains numerous DNS queries and is compared with a multilabel Bayesian classifier based on the number of correctly classified DNS tunneling instances. Experimental results demonstrate the efficacy of the proposed SVM classification method, which obtains an f-measure of 0.80.
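The f-measure reported above can be illustrated as per-class precision/recall combined into F1 and macro-averaged over the tunnel classes. A sketch with invented labels; this is a generic metric, not the paper's exact evaluation protocol:

```python
# Hedged sketch of a macro-averaged f-measure over tunnel classes.
# Labels and classes below are illustrative only.

def macro_f_measure(true_labels, pred_labels, classes):
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(true_labels, pred_labels) if t == c and p == c)
        fp = sum(1 for t, p in zip(true_labels, pred_labels) if t != c and p == c)
        fn = sum(1 for t, p in zip(true_labels, pred_labels) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

fm = macro_f_measure(["http", "ftp", "http", "ftp"],
                     ["http", "ftp", "ftp", "ftp"],
                     ["http", "ftp"])   # macro f-measure, about 0.733 here
```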

  5. How to identify partial exposures to ionizing radiation? Proposal for a cytogenetic method; Como identificar exposicoes parciais as radiacoes ionizantes? Proposta de um metodo citogenetico

    Energy Technology Data Exchange (ETDEWEB)

    Fernandes, T.S.; Silva, E.B.; Pinto, M.M.P.L.; Amaral, A., E-mail: thiagosalazar@hotmail.com [Universidade Federal de Pernambuco (LAMBDA/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear. Lab. de Modelagem e Biodosimetria Aplicada; Lloyd, David [Health Protection Agency, Oxford (United Kingdom). Radiation Protection Division

    2013-08-15

In cases of radiological incidents or occupational exposures to ionizing radiation, most exposures involve not the whole body but only part of it. In this context, if cytogenetic dosimetry is performed, the absorbed dose will be underestimated due to the dilution of irradiated cells with non-irradiated cells. Considering the norms of NR 32 - Safety and Health in the Work of Health Service - which recommends cytogenetic dosimetry in the investigation of accidental exposures to ionizing radiation, it is necessary to develop a tool to provide better identification of partial exposures. With this aim, a partial-body exposure was simulated by mixing, in vitro, 70% of blood irradiated with 4 Gy of X-rays with 30% of unirradiated blood from the same healthy donor. Aliquots of this mixture were cultured for 48 and 72 hours. Prolonging the time of cell culture from 48 to 72 hours produced no significant change in the yield of dicentrics. However, when only M1 cells (first-division cells) were analyzed, the frequency of dicentrics per cell increased. Prolonging the time of cell culture allowed cells delayed in mitosis by irradiation to reach metaphase, and thus provided enough time for the damage to be visualized. The results of this research present the proposed method as an important tool in the investigation of exposed individuals, allowing cytogenetic analysis to be associated with the real percentage of irradiated cells and contributing significantly to decision making in terms of occupational health. (author)
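The dilution effect described above is simple to state numerically: the whole-sample dicentric frequency is a mixture of the irradiated and unirradiated fractions, so it underestimates the dose. A minimal sketch; the frequencies below are invented for illustration, not measured values from the study:

```python
# Hedged sketch of the dilution of dicentric yield in a partial-body exposure:
# only a fraction of the sampled cells was irradiated, so the observed
# per-cell frequency is a weighted mixture.

def observed_dicentric_frequency(irradiated_fraction, freq_irradiated,
                                 freq_background=0.0):
    """Dicentrics per cell seen in the mixed sample."""
    return (irradiated_fraction * freq_irradiated
            + (1 - irradiated_fraction) * freq_background)

# 70% irradiated blood (as in the in vitro simulation above), with an
# illustrative yield of 0.5 dicentrics per irradiated cell:
obs = observed_dicentric_frequency(0.7, 0.5)
```

With these numbers the mixed sample shows 0.35 dicentrics per cell instead of 0.5, which is why scoring only first-division (M1) cells matters.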

  6. Employment of hypersonic glide vehicles: Proposed criteria for use

    Energy Technology Data Exchange (ETDEWEB)

    Olguin, Abel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-07-01

    Hypersonic Glide Vehicles (HGVs) are a type of reentry vehicle that couples the high speed of ballistic missiles with the maneuverability of aircraft. The HGV has been in development since the 1970s, and its technology falls under the category of Conventional Prompt Global Strike (CPGS) weapons. As noted by James M. Acton, a senior associate in the Nuclear Policy Program at the Carnegie Endowment, CPGS is a “missile in search of a mission.” With the introduction of any significant new military capability, a doctrine for use—including specifics regarding how, when and where it would be used, as well as tactics, training and procedures—must be clearly defined and understood by policy makers, military commanders, and planners. In this paper, benefits and limitations of the HGV are presented. Proposed criteria and four scenarios illustrate a possible method for assessing when to use an HGV.

  7. Proposal and Validation of an Entrepreneur Competency Profile: Implications for Education

    Science.gov (United States)

    Alda-Varas, Rodrigo; Villardon-Gallego, Lourdes; Elexpuru-Albizuri, Itziar

    2012-01-01

    Introduction: This research presents the validated proposal of an entrepreneur competency profile. We analyzed the phases of the entrepreneurial process, and the functions involved in each of them, in order to identify the tasks involved in each function/role and consequently the specific competencies of entrepreneurs. Method: The proposal was…

  8. Improved method for solving the neutron transport problem by discretization of space and energy variables

    International Nuclear Information System (INIS)

    Bosevski, T.

    1971-01-01

The polynomial interpolation of the neutron flux between the chosen space and energy variables enabled transformation of the integral transport equation into a system of linear equations with constant coefficients. The solutions of this system are the needed flux values for the chosen values of the space and energy variables. The proposed improved method for solving the neutron transport problem, including its mathematical formalism, is simple and efficient, since the number of needed input data is decreased in treating both the spatial and energy variables. The mathematical method based on this approach gives more stable solutions with a significantly decreased probability of numerical errors. A computer code based on the proposed method was used for calculations of one heavy water and one light water reactor cell, and the results were compared to the results of other very precise calculations. The proposed method was better in convergence rate, computing time and required computer memory. Discretization of the variables enabled direct comparison of theoretical and experimental results.

  9. An Effective News Recommendation Method for Microblog User

    Directory of Open Access Journals (Sweden)

    Wanrong Gu

    2014-01-01

Full Text Available Recommending news stories to users, based on their preferences, has long been a favourite domain for recommender systems research. Traditional systems strive to satisfy their users by tracing users' reading history and choosing the proper candidate news articles to recommend. However, most news websites hardly require any user to register before reading news. Besides, the latent relations between news and microblogs, the popularity of particular news, and the news organization are not addressed or solved efficiently in previous approaches. In order to solve these issues, we propose an effective personalized news recommendation method based on microblog user profile building and subclass popularity prediction, in which we propose a news organization method using hybrid classification and clustering, implement a subclass popularity prediction method, and construct user profiles according to our actual situation. We designed several experiments comparing our method to state-of-the-art approaches on a real-world dataset, and the experimental results demonstrate that our system significantly improves accuracy and diversity on mass text data.

  10. An effective news recommendation method for microblog user.

    Science.gov (United States)

    Gu, Wanrong; Dong, Shoubin; Zeng, Zhizhao; He, Jinchao

    2014-01-01

Recommending news stories to users, based on their preferences, has long been a favourite domain for recommender systems research. Traditional systems strive to satisfy their users by tracing users' reading history and choosing the proper candidate news articles to recommend. However, most news websites hardly require any user to register before reading news. Besides, the latent relations between news and microblogs, the popularity of particular news, and the news organization are not addressed or solved efficiently in previous approaches. In order to solve these issues, we propose an effective personalized news recommendation method based on microblog user profile building and subclass popularity prediction, in which we propose a news organization method using hybrid classification and clustering, implement a subclass popularity prediction method, and construct user profiles according to our actual situation. We designed several experiments comparing our method to state-of-the-art approaches on a real-world dataset, and the experimental results demonstrate that our system significantly improves accuracy and diversity on mass text data.

  11. ECG-derived respiration methods: adapted ICA and PCA.

    Science.gov (United States)

    Tiinanen, Suvi; Noponen, Kai; Tulppo, Mikko; Kiviniemi, Antti; Seppänen, Tapio

    2015-05-01

Respiration is an important signal in early diagnostics, prediction, and treatment of several diseases. Moreover, a growing trend toward ambulatory measurements outside laboratory environments encourages the development of indirect measurement methods such as ECG-derived respiration (EDR). Recently, decomposition techniques such as principal component analysis (PCA) and its nonlinear version, kernel PCA (KPCA), have been used to derive a surrogate respiration signal from single-channel ECG. In this paper, we propose an adapted independent component analysis (AICA) algorithm to obtain the EDR signal, and extend the normal linear PCA technique based on best principal component (PC) selection (APCA, adapted PCA) to improve its performance further. We also demonstrate that the use of smoothing spline resampling and bandpass filtering improves the performance of all EDR methods. Compared with other recent EDR methods using correlation coefficient and magnitude squared coherence, the proposed AICA and APCA yield a statistically significant improvement with correlations 0.84, 0.82, 0.76 and coherences 0.90, 0.91, 0.85 between reference respiration and AICA, APCA and KPCA, respectively. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
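The PCA idea behind EDR can be illustrated with a dependency-free sketch: each row holds per-beat ECG features, and the projection of the beat series onto the leading principal component serves as the respiration surrogate. This is a toy version using power iteration, not the paper's AICA/APCA code, and the data are synthetic:

```python
# Toy sketch of PCA-based EDR: find the leading principal component of a
# beat-by-feature matrix via power iteration and return the per-beat scores,
# which act as the surrogate respiration series.

def leading_pc_scores(rows, iters=200):
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - means[j] for j in range(d)] for r in rows]
    # sample covariance matrix of the centred features
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):                 # power iteration for the top eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return [sum(x[i][j] * v[j] for j in range(d)) for i in range(n)]
```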

  12. An Effective News Recommendation Method for Microblog User

    Science.gov (United States)

    Gu, Wanrong; Dong, Shoubin; Zeng, Zhizhao; He, Jinchao

    2014-01-01

Recommending news stories to users, based on their preferences, has long been a favourite domain for recommender systems research. Traditional systems strive to satisfy their users by tracing users' reading history and choosing the proper candidate news articles to recommend. However, most news websites hardly require any user to register before reading news. Besides, the latent relations between news and microblogs, the popularity of particular news, and the news organization are not addressed or solved efficiently in previous approaches. In order to solve these issues, we propose an effective personalized news recommendation method based on microblog user profile building and subclass popularity prediction, in which we propose a news organization method using hybrid classification and clustering, implement a subclass popularity prediction method, and construct user profiles according to our actual situation. We designed several experiments comparing our method to state-of-the-art approaches on a real-world dataset, and the experimental results demonstrate that our system significantly improves accuracy and diversity on mass text data. PMID:24983011

  13. Management of Gene Variants of Unknown Significance

    DEFF Research Database (Denmark)

    Alosi, Daniela; Bisgaard, Marie Luise; Hemmingsen, Sophie Nowak

    2017-01-01

    by germline mutations in the VHL gene, which predispose to the development of multiple tumors such as central nervous system hemangioblastomas and renal cell carcinoma (RCC). Objective: We propose a method for the evaluation of VUS pathogenicity through our experience with the VHL missense mutation c.241C...... (IHC); 3) Assessment of the variant’s impact on protein structure and function, using multiple databases, in silico algorithms, and reports of functional studies. Results: Only one family member had clinical signs of vHL with early-onset RCC. IHC analysis showed no VHL protein expressed in the tumor...

  14. A proposal for a multivariate quantitative approach to infer karyological relationships among taxa

    Directory of Open Access Journals (Sweden)

    Lorenzo Peruzzi

    2014-12-01

Full Text Available Until now, basic karyological parameters have been used in different ways by researchers to infer karyological relationships among organisms. In the present study, we propose a standardized approach to this aim, integrating six different, non-redundant parameters in a multivariate PCoA analysis. These parameters are chromosome number, basic chromosome number, total haploid chromosome length, MCA (Mean Centromeric Asymmetry, CVCL (Coefficient of Variation of Chromosome Length and CVCI (Coefficient of Variation of Centromeric Index. The method is exemplified by application to several plant taxa, and its significance and limits are discussed in the light of current phylogenetic knowledge of these groups.
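Several of the six parameters can be computed directly from chromosome arm lengths. A sketch with invented measurements, assuming the common definitions: MCA as the mean of (L-S)/(L+S) in percent, and each CV as 100 times the standard deviation over the mean:

```python
# Hedged sketch of four karyological parameters from (long_arm, short_arm)
# lengths of a haploid set. Input values are illustrative only.

def karyotype_indices(chromosomes):
    lengths = [l + s for l, s in chromosomes]
    n = len(chromosomes)
    thl = sum(lengths)                               # total haploid length
    # MCA: mean centromeric asymmetry, in percent
    mca = 100 * sum((l - s) / (l + s) for l, s in chromosomes) / n
    mean_cl = thl / n
    sd_cl = (sum((c - mean_cl) ** 2 for c in lengths) / n) ** 0.5
    cvcl = 100 * sd_cl / mean_cl                     # CV of chromosome length
    cis = [s / (l + s) for l, s in chromosomes]      # centromeric indices
    mean_ci = sum(cis) / n
    sd_ci = (sum((c - mean_ci) ** 2 for c in cis) / n) ** 0.5
    cvci = 100 * sd_ci / mean_ci                     # CV of centromeric index
    return {"THL": thl, "MCA": mca, "CVCL": cvcl, "CVCI": cvci}

r = karyotype_indices([(3.0, 1.0), (2.0, 2.0)])
```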

  15. Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain

    Science.gov (United States)

    Jin, Xin; Jiang, Qian; Yao, Shaowen; Zhou, Dongming; Nie, Rencan; Lee, Shin-Jye; He, Kangjian

    2018-01-01

In order to promote the performance of infrared and visual image fusion and provide better visual effects, this paper proposes a hybrid fusion method for infrared and visual images based on the combination of discrete stationary wavelet transform (DSWT), discrete cosine transform (DCT) and local spatial frequency (LSF). The proposed method has three key processing steps. Firstly, DSWT is employed to decompose the important features of the source image into a series of sub-images with different levels and spatial frequencies. Secondly, DCT is used to separate the significant details of the sub-images according to the energy of different frequencies. Thirdly, LSF is applied to enhance the regional features of the DCT coefficients, which is helpful for image feature extraction. Some frequently used image fusion methods and evaluation metrics are employed to evaluate the validity of the proposed method. The experiments indicate that the proposed method achieves a good fusion effect and is more efficient than other conventional image fusion methods.
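The spatial-frequency measure underlying the LSF step is standard: the row and column frequencies are RMS differences between neighbouring pixels, combined as SF = sqrt(RF^2 + CF^2). A global-form sketch (the paper applies a local, windowed variant; data below are illustrative):

```python
# Hedged sketch of the (global) spatial frequency of a grayscale image given
# as a list of rows. A local version would evaluate this per sliding window.

def spatial_frequency(img):
    m, n = len(img), len(img[0])
    rf2 = sum((img[i][j] - img[i][j - 1]) ** 2      # row (horizontal) differences
              for i in range(m) for j in range(1, n)) / (m * n)
    cf2 = sum((img[i][j] - img[i - 1][j]) ** 2      # column (vertical) differences
              for i in range(1, m) for j in range(n)) / (m * n)
    return (rf2 + cf2) ** 0.5
```

A flat image has zero spatial frequency; images with sharper local variation score higher, which is why SF works as an activity measure for fusion.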

  16. A New Method for the 2D DOA Estimation of Coherently Distributed Sources

    Directory of Open Access Journals (Sweden)

    Liang Zhou

    2014-03-01

Full Text Available The purpose of this paper is to develop a new technique for estimating the two-dimensional (2D) direction-of-arrivals (DOAs) of coherently distributed (CD) sources, which can effectively estimate the central azimuth and central elevation of CD sources at a lower computational cost. Using a special L-shaped array, a new approach for parametric estimation of CD sources is proposed. The proposed method is based on two rotational invariance relations under a small angular approximation, and estimates the two rotational matrices which depict these relations using the propagator technique. The central DOA estimates are then obtained by utilizing the primary diagonal elements of the two rotational matrices. Simulation results indicate that the proposed method exhibits good performance under small angular spread and can be applied to multisource scenarios where different sources may have different angular distribution shapes. Without any peak-finding search or eigendecomposition of the high-dimensional sample covariance matrix, the proposed method has a significantly reduced computational cost compared with existing methods, and is thus beneficial to real-time processing and engineering realization. In addition, our approach is a robust estimator which does not depend on the angular distribution shape of the CD sources.

  17. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation to make the spatial variation of the source term small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used in evaluating the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method does not have any negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)

  18. Proposed diagnostic criteria for internet addiction.

    Science.gov (United States)

    Tao, Ran; Huang, Xiuqin; Wang, Jinan; Zhang, Huimin; Zhang, Ying; Li, Mengchen

    2010-03-01

    The objective of this study was to develop diagnostic criteria for internet addiction disorder (IAD) and to evaluate the validity of our proposed diagnostic criteria for discriminating non-dependent from dependent internet use in the general population. This study was conducted in three stages: the developmental stage (110 subjects in the survey group; 408 subjects in the training group), where items of the proposed diagnostic criteria were developed and tested; the validation stage (n = 405), where the proposed criteria were evaluated for criterion-related validity; and the clinical stage (n = 150), where the criteria and the global clinical impression of IAD were evaluated by more than one psychiatrist to determine inter-rater reliability. The proposed internet addiction diagnostic criteria consisted of a symptom criterion (seven clinical symptoms of IAD), a clinically significant impairment criterion (functional and psychosocial impairments), a course criterion (addiction lasting at least 3 months, with at least 6 hours of non-essential internet use per day) and an exclusion criterion (exclusion of dependency attributable to psychotic disorders). A "2 + 1" diagnostic rule was established: both of the first two symptoms (preoccupation and withdrawal) must be present, together with at least one of the five other symptoms (tolerance, lack of control, continued excessive use despite knowledge of negative effects, loss of interests other than the internet, and use of the internet to escape or relieve a dysphoric mood). Inter-rater reliability was 98%. Our findings suggest that the proposed diagnostic criteria may be useful for the standardization of diagnostic criteria for IAD.
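
    The "2 + 1" rule above is mechanical enough to sketch in code. The function name and symptom labels below are ours, not from the original instrument; the thresholds are those stated in the abstract (3 months, 6 hours/day).

```python
# Sketch of the proposed "2 + 1" IAD rule: both core symptoms plus at
# least one of five others, the course criterion, clinically significant
# impairment, and the psychotic-disorder exclusion. Labels illustrative.
CORE = {"preoccupation", "withdrawal"}
OTHERS = {"tolerance", "lack_of_control", "continued_use_despite_harm",
          "loss_of_other_interests", "escape_dysphoric_mood"}

def meets_iad_criteria(symptoms, months, daily_hours, impairment, psychotic):
    symptom_ok = CORE <= symptoms and len(OTHERS & symptoms) >= 1
    course_ok = months >= 3 and daily_hours >= 6
    return symptom_ok and course_ok and impairment and not psychotic

print(meets_iad_criteria({"preoccupation", "withdrawal", "tolerance"},
                         months=4, daily_hours=7,
                         impairment=True, psychotic=False))  # True
```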

  19. A novel MPPT method for enhancing energy conversion efficiency taking power smoothing into account

    International Nuclear Information System (INIS)

    Liu, Jizhen; Meng, Hongmin; Hu, Yang; Lin, Zhongwei; Wang, Wei

    2015-01-01

    Highlights: • We discuss the disadvantages of the conventional OTC MPPT method. • We study the relationship between enhancing efficiency and power smoothing. • The conversion efficiency is enhanced and the volatility of power is suppressed. • Small signal analysis is used to verify the effectiveness of the proposed method. - Abstract: With the increasing capacity of wind energy conversion systems (WECS), the rotational inertia of wind turbines is becoming larger, and this large inertia significantly reduces the efficiency of energy conversion. This paper proposes a novel maximum power point tracking (MPPT) method to enhance the efficiency of energy conversion for large-scale wind turbines. Since improving the efficiency may increase the fluctuations of the output power, power smoothing is considered as the second control objective. A T-S fuzzy inference system (FIS) is adopted to reduce the fluctuations, regulating the compensation gain according to the volatility of the wind speed and the accelerated rotor speed. To verify the effectiveness, stability and dynamic performance of the new method, mechanism analyses, small signal analyses, and simulation studies are carried out on a doubly-fed induction generator (DFIG) wind turbine. Study results show that both the response speed and the efficiency of the proposed method are increased. In addition, the extra fluctuations of output power caused by the high efficiency are effectively reduced by the proposed method with the FIS.

  20. Substoichiometric method in the simple radiometric analysis

    International Nuclear Information System (INIS)

    Ikeda, N.; Noguchi, K.

    1979-01-01

    The substoichiometric method is applied to simple radiometric analysis. Two methods are proposed: the standard reagent method and the standard sample method. The validity of their principle is verified experimentally in the determination of silver by the precipitation method, and of zinc by the ion-exchange or solvent-extraction method. The proposed methods are simple and rapid compared with the conventional superstoichiometric method. (author)

  1. The historical significance of oak

    Science.gov (United States)

    J. V. Thirgood

    1971-01-01

    A brief history of the importance of oak in Europe, contrasting the methods used in France and Britain to propagate the species and manage the forests for continued productivity. The significance of oak as a strategic resource during the sailing-ship era is stressed, and mention is made of the early development of oak management in North America. The international...

  2. Microfluidic method for measuring viscosity using images from smartphone

    Science.gov (United States)

    Kim, Sooyeong; Kim, Kyung Chun; Yeom, Eunseop

    2018-05-01

    The viscosity of a fluid is its most important rheological characteristic. Many microfluidic devices have been proposed for easily measuring the viscosity of small fluid samples. A hybrid system consisting of a smartphone and a microfluidic device can offer a mobile laboratory for performing a wide range of detection and analysis functions related to healthcare. In this study, a new mobile sensing method based on a microfluidic device was proposed for fluid viscosity measurement. By separately delivering sample and reference fluids into the two inlets of a Y-shaped microfluidic device, an interfacial line is induced downstream of the device. Because the interfacial width (W) between the sample and reference flows is determined by their pressure ratio, the viscosity (μ) of the sample can be estimated by measuring the interfacial width. To determine the interfacial width of a sample, optical images of the flows downstream of the Y-shaped microfluidic device were acquired using a smartphone. To check the measurement accuracy of the proposed method, the viscosities of glycerol mixtures were compared with those measured by a conventional viscometer. The proposed technique was applied to monitor variations in blood and oil samples depending on storage or rancidity. We expect that this mobile sensing method based on a microfluidic device could be used as a viscometer with significant advantages in terms of mobility, ease of operation, and data management.
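
    The width-to-viscosity mapping can be illustrated with a simple parallel-flow pressure balance, in which each stream's width scales with the product of its viscosity and flow rate. This is a hedged sketch of the general idea only; the actual calibration relation used in the paper may differ.

```python
def sample_viscosity(w_sample, w_total, mu_ref, q_sample=1.0, q_ref=1.0):
    """Estimate sample viscosity from the interface position in a Y-channel.

    Assumes a simple parallel-flow pressure balance where a stream's width
    is proportional to (viscosity x flow rate); illustrative model only.
    """
    w_ref = w_total - w_sample          # width occupied by the reference flow
    return mu_ref * (w_sample / w_ref) * (q_ref / q_sample)

# A sample occupying 60 of 100 um against water (~1 mPa*s) at equal flow rates
print(sample_viscosity(60.0, 100.0, mu_ref=1.0))  # 1.5
```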

  3. Histological staining methods preparatory to laser capture microdissection significantly affect the integrity of the cellular RNA.

    Science.gov (United States)

    Wang, Hongyang; Owens, James D; Shih, Joanna H; Li, Ming-Chung; Bonner, Robert F; Mushinski, J Frederic

    2006-04-27

    Gene expression profiling by microarray analysis of cells enriched by laser capture microdissection (LCM) faces several technical challenges. Frozen sections yield higher-quality RNA than paraffin-embedded sections, but even with frozen sections, the staining methods used for histological identification of the cells of interest can still damage the mRNA in the cells. To study the contribution of staining methods to degradation of the results of gene expression profiling of LCM samples, we subjected pellets of the mouse plasma cell tumor cell line TEPC 1165 to direct RNA extraction and to parallel frozen sectioning for LCM and subsequent RNA extraction. We used microarray hybridization analysis to compare gene expression profiles of RNA from cell pellets with those of RNA from frozen sections that had been stained with hematoxylin and eosin (H&E), Nissl stain (NS), for immunofluorescence (IF), and with the plasma cell-revealing methyl green pyronin (MGP) stain. All RNAs were amplified with two rounds of T7-based in vitro transcription and analyzed by two-color expression analysis on 10-K cDNA microarrays. The MGP-stained samples showed the least loss of mRNA, followed by H&E and immunofluorescence. Nissl staining was significantly more detrimental to gene expression profiles, presumably owing to an aqueous step in which RNA may have been damaged by endogenous or exogenous RNases. RNA damage can occur during the staining steps preparatory to laser capture microdissection, with the consequence of loss of representation of certain genes in microarray hybridization analysis. Inclusion of an RNase inhibitor in aqueous staining solutions appears to be important in protecting RNA from loss of gene transcripts.

  4. Mid-Columbia Coho Reintroduction Feasibility Project : Final Environmental Assessment and Finding of No Significant Impact.

    Energy Technology Data Exchange (ETDEWEB)

    United States. Bonneville Power Administration; Confederated Tribes and Bands of the Yakama Nation; Washington State Department of Fish and Wildlife

    1999-04-01

    Bonneville Power Administration (BPA) is proposing to fund research for 2 to 3 years on the feasibility of reintroducing coho salmon into mid-Columbia River basin tributaries. The research would take place in the Methow and Wenatchee river basins in Chelan and Okanogan Counties, Washington. BPA has prepared an Environmental Assessment (EA) (DOE/EA-1282) evaluating the proposed project. Based on the analysis in the EA, BPA has determined that the proposed action is not a major Federal action significantly affecting the quality of the human environment, within the meaning of the National Environmental Policy Act (NEPA) of 1969. Therefore, the preparation of an Environmental Impact Statement (EIS) is not required, and BPA is issuing this Finding of No Significant Impact.

  5. Mid-Columbia Coho Reintroduction Feasibility Project. Final Environmental Assessment and Finding of No Significant Impact

    International Nuclear Information System (INIS)

    1999-01-01

    Bonneville Power Administration (BPA) is proposing to fund research for 2 to 3 years on the feasibility of reintroducing coho salmon into mid-Columbia River basin tributaries. The research would take place in the Methow and Wenatchee river basins in Chelan and Okanogan Counties, Washington. BPA has prepared an Environmental Assessment (EA) (DOE/EA-1282) evaluating the proposed project. Based on the analysis in the EA, BPA has determined that the proposed action is not a major Federal action significantly affecting the quality of the human environment, within the meaning of the National Environmental Policy Act (NEPA) of 1969. Therefore, the preparation of an Environmental Impact Statement (EIS) is not required, and BPA is issuing this Finding of No Significant Impact

  6. A pre-trained convolutional neural network based method for thyroid nodule diagnosis.

    Science.gov (United States)

    Ma, Jinlian; Wu, Fa; Zhu, Jiang; Xu, Dong; Kong, Dexing

    2017-01-01

    In ultrasound images, most thyroid nodules have heterogeneous appearances, with various internal components and vague boundaries, so it is difficult for physicians to discriminate malignant thyroid nodules from benign ones. In this study, we propose a hybrid method for thyroid nodule diagnosis that fuses two pre-trained convolutional neural networks (CNNs) with different convolutional and fully-connected layers. First, the two networks, pre-trained on the ImageNet database, are trained separately. Second, we fuse the feature maps learned by the trained convolutional filters and the pooling and normalization operations of the two CNNs. Finally, a softmax classifier operating on the fused feature maps is used to diagnose thyroid nodules. The proposed method is validated on 15,000 ultrasound images collected from two local hospitals. Experimental results show that the proposed CNN-based methods can accurately and effectively diagnose thyroid nodules. In addition, the fusion of the two CNN-based models leads to a significant performance improvement, with an accuracy of 83.02%±0.72%. These results demonstrate the potential clinical applications of this method. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Data-Sharing Method for Multi-Smart Devices at Close Range

    Directory of Open Access Journals (Sweden)

    Myoungbeom Chung

    2015-01-01

    Full Text Available We propose a useful data-sharing method for multiple smart devices at close range using inaudible frequencies and Wi-Fi. Existing near-range data-sharing methods mostly use Bluetooth technology, but these methods cannot operate across different operating systems. To correct this flaw, the proposed method uses inaudible frequencies played and captured through the built-in speaker and microphone of a smart device. With the proposed method, the sending device generates trigger signals composed of inaudible sound, and the smart devices that receive the signals obtain the shared data from the sending device through Wi-Fi. To evaluate the efficacy of the proposed method, we developed a near-range data-sharing application based on the trigger signals and conducted a performance evaluation experiment. The success rate of the proposed method was 98.8%. Furthermore, we tested the usability of the Bump application and of the proposed method and found that the proposed method is more useful than Bump. Therefore, the proposed method is an effective approach for sharing data among multiple smart devices at close range.
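
    The trigger signal at the heart of the scheme is a tone above the audible range played through an ordinary speaker. A minimal sketch of synthesizing such a tone follows; the 19 kHz frequency, duration, and amplitude are illustrative choices, not values taken from the paper.

```python
import math

SAMPLE_RATE = 44100  # Hz; a common smartphone audio rate

def trigger_tone(freq_hz=19000, duration_s=0.2, amplitude=0.5):
    """Synthesize a near-ultrasonic sine tone as a list of PCM samples.

    19 kHz sits above typical adult hearing but below the Nyquist
    limit (22.05 kHz) of 44.1 kHz audio hardware.
    """
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

samples = trigger_tone()
print(len(samples))  # 8820
```

    In the full scheme the receiving device would detect such a tone with its microphone (for example, by looking for a spectral peak near the trigger frequency) and then fetch the shared data over Wi-Fi.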

  8. Engineering Evaluation of Proposed Alternative Salt Transfer Method for the Molten Salt Reactor Experiment for the Oak Ridge National Laboratory, Oak Ridge, Tennessee

    International Nuclear Information System (INIS)

    Carlberg, Jon A.; Roberts, Kenneth T.; Kollie, Thomas G.; Little, Leslie E.; Brady, Sherman D.

    2009-01-01

    This evaluation was performed by Pro2Serve in accordance with the Technical Specification for an Engineering Evaluation of the Proposed Alternative Salt Transfer Method for the Molten Salt Reactor Experiment at the Oak Ridge National Laboratory (BJC 2009b). The evaluators reviewed the Engineering Evaluation Work Plan for Molten Salt Reactor Experiment Residual Salt Removal, Oak Ridge National Laboratory, Oak Ridge, Tennessee (DOE 2008). The Work Plan (DOE 2008) involves installing a salt transfer probe and a new drain line into the Fuel Drain Tanks and the Fuel Flush Tank and connecting them to the new salt transfer line at the drain tank cell shield. The probe is to be inserted through the tank ball valve and the molten salt to the bottom of the tank. The tank would then be pressurized through the Reactive Gas Removal System to force the salt into the salt canisters. The Evaluation Team reviewed the work plan, interviewed site personnel, reviewed numerous documents on the Molten Salt Reactor (Sects. 7 and 8), and inspected the probes planned to be used for the transfer. Based on several concerns identified during this review, the team recommends not proceeding with the salt transfer via the proposed alternative method. The major concerns identified during this evaluation are: (1) Structural integrity of the tanks - The main concern is the corrosion that occurred during the fluorination phase of the uranium removal process; this may also apply to the salt transfer line of the Fuel Flush Tank. Corrosion Associated with Fluorination in the Oak Ridge National Laboratory Fluoride Volatility Process (Litman 1961) shows that this problem is significant. (2) Continued generation of fluorine - Although fluorine will be generated at a lower rate than before the uranium removal, it will continue to be generated; this needs to be taken into consideration regardless of what actions are taken with the salt. (3) More than one phase of material

  9. Analysis of Non Local Image Denoising Methods

    Science.gov (United States)

    Pardo, Álvaro

    Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non-local denoising was introduced. The Non-Local Means method proposed by Buades, Morel and Coll attracted the attention of other researchers, who proposed improvements and modifications to their proposal. In this work we analyze those methods, trying to understand their properties while connecting them to segmentation based on spectral graph properties. We also propose some improvements to automatically estimate the parameters used in these methods.
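
    For readers unfamiliar with the method under analysis, a compact (and deliberately unoptimized) sketch of Non-Local Means follows: every pixel is replaced by a weighted average of nearby pixels, with weights decaying in the squared distance between the patches surrounding them. Patch size, search window, and the filtering parameter h are illustrative choices.

```python
import numpy as np

def nlm_denoise(img, patch=1, window=3, h=0.1):
    """Toy Non-Local Means: weight each candidate pixel by the similarity
    of its surrounding patch to the patch around the pixel being denoised."""
    pad = patch + window
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - patch:ci + patch + 1, cj - patch:cj + patch + 1]
            weights, values = [], []
            for di in range(-window, window + 1):
                for dj in range(-window, window + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - patch:ni + patch + 1,
                                  nj - patch:nj + patch + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch distance
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(padded[ni, nj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out

rng = np.random.default_rng(0)
clean = np.ones((8, 8)); clean[:, 4:] = 0.0          # step edge
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
den = nlm_denoise(noisy)
print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

    Because patches straddling the edge match poorly with patches in the flat regions, the averaging happens mostly along, not across, the edge, which is exactly the property that distinguishes NLM from local smoothing.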

  10. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    Science.gov (United States)

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, sequential probability ratio testing (SPRT), to single-trial classification of motor imagery movement events. The unique strength of this classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that the proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, compared with 82.3% for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest that SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
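
    The SPRT core is small enough to sketch. In this hedged illustration, each incoming trial contributes a log-likelihood ratio, and Wald's thresholds are derived from the target error rates alpha and beta; how the LLRs are computed from the power projective features is specific to the paper and not shown here.

```python
import math

def sprt(llr_stream, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test: accumulate per-sample
    log-likelihood ratios until an error-rate-derived threshold is crossed."""
    upper = math.log((1 - beta) / alpha)   # decide H1 (e.g., class A)
    lower = math.log(beta / (1 - alpha))   # decide H0 (e.g., class B)
    total, t = 0.0, 0
    for t, llr in enumerate(llr_stream, start=1):
        total += llr
        if total >= upper:
            return "H1", t
        if total <= lower:
            return "H0", t
    return "undecided", t

# Steady evidence of 0.8 per trial crosses log(19) ~= 2.94 on trial 4
print(sprt([0.8] * 10))  # ('H1', 4)
```

    The explicit dependence of the thresholds on alpha and beta is what gives SPRT the stopping-time/error trade-off mentioned in the abstract.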

  11. Significant Tsunami Events

    Science.gov (United States)

    Dunbar, P. K.; Furtney, M.; McLean, S. J.; Sweeney, A. D.

    2014-12-01

    Tsunamis have inflicted death and destruction on the coastlines of the world throughout history. The occurrence of tsunamis and their effects have been collected and studied as far back as the second millennium B.C. The knowledge gained from cataloging and examining these events has led to significant changes in our understanding of tsunamis, tsunami sources, and methods to mitigate the effects of tsunamis. The most significant, not surprisingly, are often the most devastating, such as the 2011 Tohoku, Japan earthquake and tsunami. The goal of this poster is to give a brief overview of the occurrence of tsunamis and then focus on several significant tsunamis. There are various criteria for determining the most significant tsunamis: the number of deaths, the amount of damage, the maximum runup height, a major impact on tsunami science or policy, etc. As a result, the descriptions include some of the most costly (2011 Tohoku, Japan), the most deadly (2004 Sumatra, 1883 Krakatau), and the highest runup ever observed (1958 Lituya Bay, Alaska). The discovery of the Cascadia subduction zone as the source of the 1700 Japanese "Orphan" tsunami, and as a future tsunami threat to the U.S. northwest coast, contributed to the decision to form the U.S. National Tsunami Hazard Mitigation Program. The great Lisbon earthquake of 1755 marked the beginning of the modern era of seismology. Knowledge gained from the 1964 Alaska earthquake and tsunami helped confirm the theory of plate tectonics. The 1946 Alaska, 1952 Kuril Islands, 1960 Chile, 1964 Alaska, and 2004 Banda Aceh tsunamis all resulted in warning centers or systems being established. The data descriptions on this poster were extracted from NOAA's National Geophysical Data Center (NGDC) global historical tsunami database. Additional information about these tsunamis, as well as water level data, can be found on the NGDC website www.ngdc.noaa.gov/hazard/

  12. Proposal of a method for formulating strategy in small and medium enterprises

    Directory of Open Access Journals (Sweden)

    Luís Henrique Piovezan

    2008-07-01

    Full Text Available Strategy models found in the literature are usually better suited to big companies. However, small and medium enterprises (SMEs) also need to plan their strategies, in a way that considers their peculiarities. In this context, this paper presents a simple method for strategy formulation and deployment in SMEs. The method was developed through a sequence of case studies conducted in small companies (10 to 500 employees). The final version of the method is a seven-step framework that considers both the business environment and the firm's core competencies, with the final aim of aligning business and manufacturing strategies. The framework is suitable for SMEs because it is simple and saves time and the scarce resources available for strategy formulation, both important issues in this kind of enterprise. Finally, a case study is presented, encompassing the analysis of the application of the final version of the method in a small Brazilian company. Key-words: Competitive Strategy, Small Business Strategy, Manufacturing Strategy.

  13. The mGPC method:

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Palima, Darwin

    2009-01-01

    We adapt concepts from matched filtering to propose a method for generating rapidly reconfigurable multiple beams. As a phase-only spatially filtering extension of the Generalized Phase Contrast (GPC) technique, the proposed method, coined mGPC, can yield dynamically reconfigurable optical beam arrays with high light efficiency for optical manipulation, high-speed sorting and other parallel spatial light applications [1].

  14. SUPERVISION OF CREDIT INSTITUTIONS SIGNIFICANT RISKS TO FINANCIAL STABILITY

    Directory of Open Access Journals (Sweden)

    LUCIAN-ION MEDAR

    2014-12-01

    Full Text Available The financial stability of the Romanian banking system is maintained by constant supervision of the significant risks of credit institutions. Romania's accession to the Banking Union requires the signing of a protocol between the central bank and the European Central Bank regarding prudential supervision to ensure financial stability. This means that, from next year, the central bank will impose new supervision on credit institutions in our country, especially on those credit institutions that do not fall under European supervisors according to the procedures of the ECB. Through this study we propose to specify the main elements of the management of significant risks to ensure financial stability.

  15. A Nonlinear Framework of Delayed Particle Smoothing Method for Vehicle Localization under Non-Gaussian Environment

    Directory of Open Access Journals (Sweden)

    Zhu Xiao

    2016-05-01

    Full Text Available In this paper, a novel nonlinear smoothing framework, the non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy while taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student's t-distribution is adopted to compute the probability density function (PDF) of the process and measurement noises, which are assumed to be non-Gaussian. A computation approach based on the Ensemble Kalman Filter (EnKF) is designed to obtain the mean and the covariance matrix of the non-Gaussian proposal distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated on real-world data collected by low-cost on-board vehicle sensors. A comparison study based on the real-world experiments and a statistical analysis demonstrates that the proposed nGDPS significantly improves vehicle state accuracy and outperforms existing filtering and smoothing methods.
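
    The key ingredient (heavier-tailed Student's t noise in the particle weight update) can be sketched in a univariate form. This is an illustrative stand-in for the multivariate machinery in the paper; the EnKF-based proposal and the delayed Gibbs smoothing step are not shown.

```python
import math

def student_t_logpdf(x, mean, scale, nu):
    """Log-density of a univariate Student's t; heavier tails than a
    Gaussian, so outlying measurements are penalized less harshly."""
    z = (x - mean) / scale
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi) - math.log(scale)
            - (nu + 1) / 2 * math.log1p(z * z / nu))

def reweight(particles, measurement, scale=1.0, nu=3.0):
    """One particle-filter update under Student-t measurement noise
    (univariate illustration; normalized with the log-sum-exp trick)."""
    logw = [student_t_logpdf(measurement, p, scale, nu) for p in particles]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    s = sum(w)
    return [wi / s for wi in w]

weights = reweight([0.0, 1.0, 2.0, 10.0], measurement=1.2)
print(max(range(4), key=lambda i: weights[i]))  # 1 (particle closest to 1.2)
```

    As nu grows large, the t-density approaches a Gaussian, so the familiar Gaussian particle filter is recovered as a limiting case.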

  16. New method of control of tooth whitening

    Science.gov (United States)

    Angelov, I.; Mantareva, V.; Gisbrecht, A.; Valkanov, S.; Uzunov, Tz.

    2010-10-01

    New methods for controlling the stages of tooth bleaching through simultaneous measurement of reflected light and a fluorescence signal are proposed. It is shown that the bleaching process leads to significant changes in the intensity of the scattered signal and in the shape and intensity of the fluorescence spectra. Experimental data illustrate that the bleaching process causes essential changes in tooth discoloration in as little as 8-10 min from the beginning of the application procedure. Continuing the treatment beyond this point is unnecessary; moreover, the probability of damaging the enamel increases considerably. The proposed optical feedback control of the tooth surface is a basis for the development of a practical setup to control the duration of the bleaching procedure.

  17. A review of analytical techniques to determine long-lived radiobiologically significant nuclides in encapsulated waste

    International Nuclear Information System (INIS)

    Amey, M.H.D.; Kenward, D.; Symons, W.J.

    1985-05-01

    Methods have been identified, or are proposed, for most of the radionuclides considered important for long-term storage. Based on the information obtained, the development of liquid chromatography with an in-line radiochemical detector is proposed as a powerful technique for sequential separation and analysis of multi-component systems. (UK)

  18. Proposal of quality indicators for cardiac rehabilitation after acute coronary syndrome in Japan: a modified Delphi method and practice test.

    Science.gov (United States)

    Ohtera, Shosuke; Kanazawa, Natsuko; Ozasa, Neiko; Ueshima, Kenji; Nakayama, Takeo

    2017-01-27

    Cardiac rehabilitation is underused, and its quality in practice is unclear. A quality indicator is a measurable element of clinical practice performance. This study aimed to propose a set of quality indicators for cardiac rehabilitation following an acute coronary event in the Japanese population and to conduct a small practice test to confirm the feasibility and applicability of the indicators in real-world clinical practice. The study used a modified Delphi technique (the RAND/UCLA appropriateness method), a consensus method involving an evidence review, a face-to-face multidisciplinary panel meeting and repeated anonymous rating. The evidence reviewed included clinical practice guidelines available in English or Japanese and existing quality indicators. Performance on each indicator was assessed retrospectively using medical records at a university hospital in Japan. Ten professionals in cardiac rehabilitation served on the consensus panel. In the literature review, 23 clinical practice guidelines and 16 existing indicators were identified to generate potential indicators. Through the consensus-building process, a total of 30 indicators were assessed, and 13 were finally accepted. The practice test (n=39) revealed that 74% of patients underwent cardiac rehabilitation. Median performance on the process measures was 93% (IQR 46-100%). 'Communication with the doctor who referred the patient to cardiac rehabilitation' and 'continuous participation in cardiac rehabilitation' had low performance (32% and 38%, respectively). A modified Delphi technique identified a comprehensive set of quality indicators for cardiac rehabilitation. The single-site, small-size practice test confirmed that most of the proposed indicators were measurable in real-world clinical practice. However, some clinical processes not covered by national health insurance in Japan had low performance. Further studies will be needed to clarify and improve the quality of care in cardiac

  19. Oriented Polar Molecules in a Solid Inert-Gas Matrix: A Proposed Method for Measuring the Electric Dipole Moment of the Electron

    Directory of Open Access Journals (Sweden)

    A. C. Vutha

    2018-01-01

    Full Text Available We propose a very sensitive method for measuring the electric dipole moment of the electron using polar molecules embedded in a cryogenic solid matrix of inert-gas atoms. The polar molecules can be oriented in the ẑ direction by an applied electric field, as has recently been demonstrated by Park et al. The trapped molecules are prepared in a state that has its electron spin perpendicular to ẑ, and a magnetic field along ẑ causes precession of this spin. An electron electric dipole moment d_e would affect this precession due to the effective electric field of up to 100 GV/cm produced by the polar molecule. The large number of polar molecules that can be embedded in a matrix, along with the expected long coherence times for the precession, allows for the possibility of measuring d_e to an accuracy that surpasses current measurements by many orders of magnitude. Because the matrix can inhibit molecular rotations and lock the orientation of the polar molecules, it may not be necessary to have an electric field present during the precession. The proposed technique can be applied using a variety of polar molecules and inert gases, which, along with other experimental variables, should allow for careful study of systematic uncertainties in the measurement.

  20. Centrifugal compressor shape modification using a proposed inverse design method

    International Nuclear Information System (INIS)

    Niliahmadabadi, Mahdi; Poursadegh, Farzad

    2013-01-01

    This paper is concerned with a quasi-3D design method for the radial and axial diffusers of a centrifugal compressor on the meridional plane. The method integrates a novel inverse design algorithm, called ball-spine algorithm (BSA), and a quasi-3D analysis code. The Euler equation is solved on the meridional plane for a numerical domain, of which unknown boundaries (hub and shroud) are iteratively modified under the BSA until a prescribed pressure distribution is reached. In BSA, unknown walls are composed of a set of virtual balls that move freely along specified directions called spines. The difference between target and current pressure distributions causes the flexible boundary to deform at each modification step. In validating the quasi-3D analysis code, a full 3D Navier-Stokes code is used to analyze the existing and designed compressors numerically. Comparison of the quasi-3D analysis results with full 3D analysis results shows viable agreement. The 3D numerical analysis of the current compressor shows a huge total pressure loss on the 90° bend between the radial and axial diffusers. Geometric modification of the meridional plane causes the efficiency to improve by about 10%.

  1. Centrifugal compressor shape modification using a proposed inverse design method

    Energy Technology Data Exchange (ETDEWEB)

    Niliahmadabadi, Mahdi [Isfahan University of Technology, Isfahan (Iran, Islamic Republic of); Poursadegh, Farzad [Sharif University of Technology, Tehran (Iran, Islamic Republic of)

    2013-03-15

    This paper is concerned with a quasi-3D design method for the radial and axial diffusers of a centrifugal compressor on the meridional plane. The method integrates a novel inverse design algorithm, called ball-spine algorithm (BSA), and a quasi-3D analysis code. The Euler equation is solved on the meridional plane for a numerical domain, of which unknown boundaries (hub and shroud) are iteratively modified under the BSA until a prescribed pressure distribution is reached. In BSA, unknown walls are composed of a set of virtual balls that move freely along specified directions called spines. The difference between target and current pressure distributions causes the flexible boundary to deform at each modification step. In validating the quasi-3D analysis code, a full 3D Navier-Stokes code is used to analyze the existing and designed compressors numerically. Comparison of the quasi-3D analysis results with full 3D analysis results shows viable agreement. The 3D numerical analysis of the current compressor shows a huge total pressure loss on the 90° bend between the radial and axial diffusers. Geometric modification of the meridional plane causes the efficiency to improve by about 10%.

  2. Proposing water balance method for water availability estimation in Indonesian regional spatial planning

    Science.gov (United States)

    Juniati, A. T.; Sutjiningsih, D.; Soeryantono, H.; Kusratmoko, E.

    2018-01-01

    The water availability (WA) of a region is one of the important considerations both in the formulation of spatial plans and in the evaluation of the effectiveness of actual land use in providing sustainable water resources. Information on land-water needs vis-a-vis their availability in a region determines the state of surplus or deficit and informs effective land use. How to calculate water availability has been described in the Guideline in Determining the Carrying Capacity of the Environment in Regional Spatial Planning. However, the method of determining the supply and demand of water in these guidelines is debatable, since the guideline determines WA with a rational method. The rational method was developed as the basis for storm drain design practice and is essentially a peak discharge calculation method. This paper reviews the literature on methods of water availability estimation and presents arguments that the water balance method is a more fundamental and appropriate tool for water availability estimation. A better water availability estimation method would serve to improve practice in preparing formulations of a Regional Spatial Plan (RSP) as well as in evaluating land use capacity in providing sustainable water resources.
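The water-balance alternative the authors advocate can be sketched as a simple monthly soil-moisture bucket: precipitation fills storage, actual evapotranspiration is limited by the water available, and the surplus spilling from the bucket is the water made available. All numbers (storage capacity, monthly P and PET) are made up for illustration:

```python
# Minimal monthly water-balance bucket (Thornthwaite-style sketch).
# Precipitation P and potential evapotranspiration PET are fabricated numbers;
# water availability is taken as the surplus leaving the bucket.
CAPACITY = 100.0  # soil moisture storage capacity, mm (assumed)

def water_balance(P, PET, storage=50.0, capacity=CAPACITY):
    surplus = []
    for p, pet in zip(P, PET):
        storage += p
        et = min(pet, storage)                 # actual ET limited by available water
        storage -= et
        spill = max(0.0, storage - capacity)   # bucket overflow = available water
        storage = min(storage, capacity)
        surplus.append(spill)
    return surplus, storage

P   = [120, 90, 60, 30, 10, 5, 5, 10, 40, 80, 110, 130]    # mm/month
PET = [40, 50, 70, 90, 110, 120, 120, 110, 80, 60, 45, 40]
surplus, final_storage = water_balance(P, PET)
annual_availability = sum(surplus)             # mm/yr of surplus water
```

Unlike the rational method's peak-discharge formula Q = C·i·A, this closes a mass balance over the year: precipitation equals actual ET plus surplus plus the change in storage.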

  3. An improved hydrometeor detection method for millimeter-wavelength cloud radar

    Directory of Open Access Journals (Sweden)

    J. Ge

    2017-07-01

    A modified method with a new noise reduction scheme, which narrows the noise distribution, is proposed to distinguish clouds and other hydrometeors from noise and to recognize more features with weak signal in cloud radar observations. A spatial filter with central weighting, widely used in cloud radar hydrometeor detection algorithms, is also applied in our method to examine the radar return for significant levels of signal. Square clouds were constructed to test our algorithm against the method used for the US Department of Energy Atmospheric Radiation Measurement Program millimeter-wavelength cloud radar. We also applied both methods to 6 months of cloud radar observations at the Semi-Arid Climate and Environment Observatory of Lanzhou University and compared the results. Our method significantly reduces the rates of both false negative and false positive hydrometeor identifications in the simulated clouds and recognizes clouds with weak signal in our cloud radar observations.
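The centrally weighted spatial filter mentioned above can be sketched as follows: a gate survives only if the weighted count of above-noise gates in its 3×3 neighbourhood passes a threshold, so coherent squares of weak signal are kept while isolated spikes are discarded. The weights and thresholds below are assumptions, not the paper's values:

```python
import numpy as np

# Sketch of hydrometeor masking with a centrally weighted spatial filter:
# a gate is kept if the weighted count of above-noise neighbours in a 3x3
# window passes a threshold. center_weight and min_score are assumed values.
def hydrometeor_mask(power, noise_level, center_weight=2.0, min_score=4.0):
    above = (power > noise_level).astype(float)
    ny, nx = above.shape
    mask = np.zeros_like(above, dtype=bool)
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            window = above[i-1:i+2, j-1:j+2]
            score = window.sum() + (center_weight - 1.0) * above[i, j]
            mask[i, j] = bool(above[i, j]) and score >= min_score
    return mask

field = np.zeros((12, 12))
field[3:8, 3:8] = 1.0        # a "square cloud" of weak signal
field[10, 1] = 5.0           # an isolated noise spike
mask = hydrometeor_mask(field, noise_level=0.5)
```

The square cloud is recovered intact (all 25 gates), while the strong but isolated spike is rejected because it has no supporting neighbours.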

  4. Homeopathy--between tradition and modern science: remedies as carriers of significance.

    Science.gov (United States)

    Almirantis, Yannis

    2013-04-01

    The healing potential and description of homeopathic remedies, as determined in homeopathic pathogenic trials (HPTs) and verified by medical experience, are often found to be meaningfully connected with the symbolic content attributed to the original materials (tinctures, metals, etc.) through tradition or modern semantics. Such a connection is incompatible with a biomolecular mechanistic explanation of the healing action of remedies. The physiological effects of crude substances are often similar to the symptoms of illnesses cured by the corresponding homeopathic remedy. This is considered a manifestation of the similia principle. Evidence is brought here that in several cases the inverse situation occurs, with the healing properties of the crude substance and those of its homeopathic preparation partially coinciding, the remedy usually having broader healing properties. The existence of these two possibilities in the relationship between the medicinal actions of the remedy and the crude substance offers evidence in favor of a direct involvement of the level of significances in the mechanism underlying the homeopathic phenomenon. Finally, an experimental methodology is proposed, which may bring the results of double-blind randomized studies of homeopathic remedies closer to the reported performance of homeopathy in real-life medical practice. If successful, this method would be a further indication of a non-local, significance-related interpretation of homeopathy. Copyright © 2013 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  5. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    ...of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve... and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference... in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...
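One of the three evaluated schemes, transfinite interpolation, fills a square from values known only on its four boundary edges via the classical Coons-patch formula. The sketch below, with an assumed bilinear test field, is a generic illustration of that scheme, not the paper's implementation:

```python
import numpy as np

# Transfinite (Coons) interpolation inside one grid square: the interior is
# filled from values known only on the four boundary edges.
def coons(top, bottom, left, right, n):
    # top/bottom/left/right: length-n samplings of the square's edges with
    # matching corner values; returns an (n, n) interpolated patch.
    u = np.linspace(0, 1, n)[None, :]   # column coordinate
    v = np.linspace(0, 1, n)[:, None]   # row coordinate
    lin_v = (1 - v) * bottom[None, :] + v * top[None, :]
    lin_u = (1 - u) * left[:, None] + u * right[:, None]
    corners = ((1 - u) * (1 - v) * bottom[0] + u * (1 - v) * bottom[-1]
               + (1 - u) * v * top[0] + u * v * top[-1])
    return lin_v + lin_u - corners      # Coons patch: sum of lofts minus bilinear

# A bilinear field is reproduced exactly from its boundary samples.
n = 11
s = np.linspace(0, 1, n)
f = lambda x, y: 1 + 2 * x + 3 * y + 4 * x * y
patch = coons(top=f(s, 1.0), bottom=f(s, 0.0), left=f(0.0, s), right=f(1.0, s), n=n)
exact = f(s[None, :], s[:, None])
err = np.abs(patch - exact).max()
```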

  6. R and D proposals to improve outages operation. Methods, practices and tools

    International Nuclear Information System (INIS)

    Dionis, Francois

    2014-01-01

    This paper deals with outage operation improvement. It offers a number of avenues concerning the interactions between operation and maintenance activities, with a methodological perspective and proposals concerning the Information System. From the methodological point of view, careful modeling of plant systems makes it possible to represent the characteristics needed to optimize tagouts, alignment procedures and the schedule. Tools must be taken into account for new tagout practices such as tag sharing. It is possible to take advantage of 2D drawings integrated into the information system to improve data controls and to visualize operation activities. An integrated set of mobile applications should allow field operators to connect to the information system for better and safer performance. (author)

  7. Spherical aberration compensation method for long focal-length measurement based on Talbot interferometry

    Science.gov (United States)

    Luo, Yupeng; Huang, Xiao; Bai, Jian; Du, Juan; Liu, Qun; Luo, Yujie; Luo, Jia

    2017-08-01

    Large-aperture, long focal-length lenses are widely used in high-energy laser systems. The method based on Talbot interferometry is a reliable way to measure the focal length of such elements. By employing a divergent beam and two gratings of different periods, this method achieves full-aperture measurement, higher accuracy and better repeatability. However, it does not take into account the spherical aberration of the measured lens, which bends the moiré fringes and introduces measurement error. Furthermore, in long focal-length measurement with a divergent beam, this error is an important factor affecting measurement accuracy. In this paper, we propose a new spherical aberration compensation method that significantly reduces the measurement error. Characterized by a centrally symmetric scanning window, the proposed method is based on the relationship between spherical aberration and lens aperture. Angle data of the moiré fringes in each scanning window are retrieved by Fourier analysis and statistically fitted to estimate a globally optimal spherical-aberration-free focal length. Simulations and experiments have been carried out. Compared to previous work, the proposed method reduces the relative measurement error by 50%. The effects of scanning window size and shift step length on the results are also discussed.
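The fringe-angle retrieval step (Fourier analysis inside each scanning window) can be sketched on a synthetic fringe patch: the tilt of the fringes appears as the angular position of the dominant peak in the 2D spectrum. The fringe pattern and window size below are illustrative assumptions:

```python
import numpy as np

# Sketch of one ingredient of the method: retrieving the moire-fringe angle
# in a scanning window by locating the peak of the 2D Fourier spectrum.
n = 128
y, x = np.mgrid[0:n, 0:n]
theta_true = np.deg2rad(12.0)           # fringe tilt to recover
freq = 10.0 / n                         # cycles per pixel (assumed)
fx, fy = freq * np.cos(theta_true), freq * np.sin(theta_true)
window = np.cos(2 * np.pi * (fx * x + fy * y))   # synthetic fringe patch

spec = np.abs(np.fft.fftshift(np.fft.fft2(window)))
spec[n // 2, n // 2] = 0.0              # suppress the DC term
iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
theta_est = np.arctan2(iy - n // 2, ix - n // 2)
if theta_est < 0:                       # fold the conjugate-symmetric twin peak
    theta_est += np.pi
angle_error_deg = abs(np.rad2deg(theta_est) - 12.0)
```

The recovered angle is accurate to within the spectral bin resolution; in the paper, these per-window angles are then statistically fitted across the aperture.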

  8. Application of dietary fiber method AOAC 2011.25 in fruit and comparison with AOAC 991.43 method.

    Science.gov (United States)

    Tobaruela, Eric de C; Santos, Aline de O; Almeida-Muradian, Ligia B de; Araujo, Elias da S; Lajolo, Franco M; Menezes, Elizabete W

    2018-01-01

    The AOAC 2011.25 method enables quantification of most dietary fiber (DF) components according to the definition proposed by the Codex Alimentarius. This study aimed to compare the DF content of fruits analyzed by the AOAC 2011.25 and AOAC 991.43 methods. Plums (Prunus salicina), atemoyas (Annona x atemoya), jackfruits (Artocarpus heterophyllus), and mature coconuts (Cocos nucifera) from different Brazilian regions (3 lots/fruit) were analyzed for DF, resistant starch, and fructan contents. The AOAC 2011.25 method was evaluated for precision, accuracy, and linearity in different food matrices and carbohydrate standards. The DF contents of plums, atemoyas, and jackfruits obtained by AOAC 2011.25 were higher than those obtained by AOAC 991.43 due to the presence of fructans. The DF contents of mature coconuts obtained by the two methods did not differ significantly. The AOAC 2011.25 method is recommended for fruits with considerable fructan content because it yields more accurate values. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. A Novel Non-Iterative Method for Real-Time Parameter Estimation of the Fricke-Morse Model

    Directory of Open Access Journals (Sweden)

    SIMIC, M.

    2016-11-01

    Parameter estimation for the Fricke-Morse model of biological tissue is widely used in bioimpedance data processing and analysis. Complex nonlinear least squares (CNLS) data fitting is often used for parameter estimation of the model, but limitations such as high processing time, convergence to local minima, the need for a good initial guess of the model parameters, and non-convergence have been reported. Thus, there is strong motivation to develop methods that overcome these flaws. In this paper a novel real-time method for parameter estimation of the Fricke-Morse model of biological cells is presented. The proposed method uses the characteristic frequency estimated from the measured imaginary part of the bioimpedance, from which the Fricke-Morse model parameters are calculated using the provided analytical expressions. The proposed method is compared with CNLS in the frequency ranges of 1 kHz to 10 MHz (beta dispersion) and 10 kHz to 100 kHz, the latter being more suitable for low-cost microcontroller-based bioimpedance measurement systems. The obtained results are promising: in both frequency ranges, CNLS and the proposed method have accuracies suitable for most electrical bioimpedance (EBI) applications. However, the proposed algorithm has significantly lower computational complexity, making it 20-80 times faster than CNLS.
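The non-iterative idea can be sketched on synthetic data: read R0 and R∞ from the low- and high-frequency plateaus of Re(Z), locate the characteristic frequency at the peak of -Im(Z), and invert the model in closed form. The expressions below follow from the standard Fricke-Morse circuit (extracellular Re in parallel with intracellular Ri in series with membrane capacitance Cm); they are not necessarily the paper's exact formulas:

```python
import numpy as np

# Fricke-Morse impedance: Re || (Ri + 1/(jwCm))
def fricke_morse(f, Re, Ri, Cm):
    w = 2 * np.pi * f
    return Re * (1 + 1j * w * Ri * Cm) / (1 + 1j * w * (Re + Ri) * Cm)

# Synthetic "measurement" with known ground-truth parameters
Re_t, Ri_t, Cm_t = 1000.0, 400.0, 2e-9
f = np.logspace(3, 7, 2001)              # 1 kHz .. 10 MHz
Z = fricke_morse(f, Re_t, Ri_t, Cm_t)

R0 = Z.real[0]                           # low-frequency plateau  -> Re
Rinf = Z.real[-1]                        # high-frequency plateau -> Re*Ri/(Re+Ri)
fc = f[np.argmax(-Z.imag)]               # characteristic frequency: peak of -Im(Z)

# Closed-form inversion (no iteration):
Re_e = R0
Ri_e = R0 * Rinf / (R0 - Rinf)
Cm_e = 1.0 / (2 * np.pi * fc * (Re_e + Ri_e))
```

The inversion uses that the relaxation time is τ = (Re + Ri)·Cm, so -Im(Z) peaks at ωc = 1/τ; all three parameters then follow directly from R0, R∞ and fc.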

  10. Methodical approaches to value assessment and determination of the capitalization level of high-rise construction

    Science.gov (United States)

    Smirnov, Vitaly; Dashkov, Leonid; Gorshkov, Roman; Burova, Olga; Romanova, Alina

    2018-03-01

    The article presents an analysis of methodological approaches to cost estimation and determination of the capitalization level of high-rise construction objects. Factors determining the value of real estate are considered, and three main approaches to estimating the value of real estate objects are given. The main methods of capitalization estimation are analyzed, and the most reasonable method for determining the level of capitalization of high-rise buildings is proposed. In order to increase the value of real estate objects, the authors propose measures that can significantly increase the capitalization of an enterprise through more efficient use of intangible assets and goodwill.
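The simplest of the capitalization methods in this family, direct income capitalization, divides stabilized net operating income by a capitalization rate. The figures below are purely illustrative:

```python
# Direct income capitalization: value = net operating income / cap rate.
# NOI and cap rate are made-up illustrative numbers.
def capitalized_value(noi, cap_rate):
    return noi / cap_rate

value = capitalized_value(noi=2_400_000.0, cap_rate=0.08)  # currency units
```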

  11. In Their Own Words: The Significance of Participant Perceptions in Assessing Entomology Citizen Science Learning Outcomes Using a Mixed Methods Approach.

    Science.gov (United States)

    Lynch, Louise I; Dauer, Jenny M; Babchuk, Wayne A; Heng-Moss, Tiffany; Golick, Doug

    2018-02-06

    A mixed methods study was used to transcend the traditional pre-, post-test approach of citizen science evaluative research by integrating adults' test scores with their perceptions. We assessed how contributory entomology citizen science affects participants' science self-efficacy, self-efficacy for environmental action, nature relatedness and attitude towards insects. Pre- and post-test score analyses from citizen scientists (n = 28) and a control group (n = 72) were coupled with interviews (n = 11) about science experiences and entomological interactions during participation. Considering quantitative data alone, no statistically significant changes were evident in adults following participation in citizen science when compared to the control group. Citizen scientists' pre-test scores were significantly higher than the control group for self-efficacy for environmental action, nature relatedness and attitude towards insects. Interview data reveal a notable discrepancy between measured and perceived changes. In general, citizen scientists had an existing, long-term affinity for the natural world and perceived increases in their science self-efficacy, self-efficacy for environmental action, nature relatedness and attitude towards insects. Perceived influences may act independently of test scores. Scale instruments may not show impacts when individuals' prior knowledge and experiences vary. The value of mixed methods on citizen science program evaluation is discussed.

  12. In Their Own Words: The Significance of Participant Perceptions in Assessing Entomology Citizen Science Learning Outcomes Using a Mixed Methods Approach

    Science.gov (United States)

    Lynch, Louise I.; Dauer, Jenny M.; Babchuk, Wayne A.; Heng-Moss, Tiffany

    2018-01-01

    A mixed methods study was used to transcend the traditional pre-, post-test approach of citizen science evaluative research by integrating adults’ test scores with their perceptions. We assessed how contributory entomology citizen science affects participants’ science self-efficacy, self-efficacy for environmental action, nature relatedness and attitude towards insects. Pre- and post-test score analyses from citizen scientists (n = 28) and a control group (n = 72) were coupled with interviews (n = 11) about science experiences and entomological interactions during participation. Considering quantitative data alone, no statistically significant changes were evident in adults following participation in citizen science when compared to the control group. Citizen scientists’ pre-test scores were significantly higher than the control group for self-efficacy for environmental action, nature relatedness and attitude towards insects. Interview data reveal a notable discrepancy between measured and perceived changes. In general, citizen scientists had an existing, long-term affinity for the natural world and perceived increases in their science self-efficacy, self-efficacy for environmental action, nature relatedness and attitude towards insects. Perceived influences may act independently of test scores. Scale instruments may not show impacts when individuals’ prior knowledge and experiences vary. The value of mixed methods on citizen science program evaluation is discussed. PMID:29415522

  13. Synthesis of Numerical Methods for Modeling Wave Energy Converter-Point Absorbers: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y.; Yu, Y. H.

    2012-05-01

    During the past few decades, wave energy has received significant attention among ocean energy formats. Industry has proposed hundreds of prototypes such as oscillating water columns, point absorbers, overtopping systems, and bottom-hinged systems. In particular, many researchers have focused on modeling the floating-point absorber as the technology to extract wave energy. Several modeling methods have been used, such as the analytical method, the boundary-integral equation method, the Navier-Stokes equations method, and the empirical method. However, no standardized method has been agreed upon. To assist the development of wave energy conversion technologies, this report reviews the methods for modeling the floating-point absorber.
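As a minimal example of the analytical approach to a heaving point absorber, the float can be treated as a damped oscillator with added mass and a linear power take-off (PTO) damper; the mean absorbed power then peaks at the resonance frequency. All coefficients below are assumed constants, not the output of a hydrodynamic solver:

```python
import numpy as np

# Frequency-domain sketch of a heaving point absorber:
# (m + A) x'' + (B + B_pto) x' + k x = F cos(w t)
# m: mass, A: added mass, B: radiation damping, B_pto: PTO damping,
# k: hydrostatic stiffness, F: wave excitation amplitude (all assumed).
m, A, B, B_pto, k, F = 1e5, 5e4, 2e4, 2e4, 3e5, 1e5

def absorbed_power(w):
    X = F / np.sqrt((k - (m + A) * w**2) ** 2 + ((B + B_pto) * w) ** 2)
    return 0.5 * B_pto * (w * X) ** 2   # mean power taken by the PTO damper

w = np.linspace(0.2, 3.0, 1000)
P = absorbed_power(w)
w_best = float(w[np.argmax(P)])
w_res = np.sqrt(k / (m + A))            # undamped natural frequency, ~1.414 rad/s
```

In this linear model the power maximum falls exactly at the natural frequency, which is why point-absorber designs and their PTO tuning target resonance with the dominant wave period.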

  14. Long-term effects of user preference-oriented recommendation method on the evolution of online system

    Science.gov (United States)

    Shi, Xiaoyu; Shang, Ming-Sheng; Luo, Xin; Khushnood, Abbas; Li, Jian

    2017-02-01

    With the explosive growth of the Internet economy, recommender systems have become an important technology for solving the problem of information overload. However, recommenders are not one-size-fits-all: different recommenders have different virtues that suit different users. In this paper, we propose a novel personalized recommender based on user preferences, which allows multiple recommenders to coexist in an E-commerce system. We find that the output of a recommender differs considerably from user to user, and that recommendation accuracy can be significantly improved if each user is assigned his/her optimal personalized recommender. Furthermore, unlike previous works focusing on short-term effects, we also evaluate the long-term effect of the proposed method by modeling the evolution of mutual feedback between users and the online system. Finally, compared with a single recommender running on the online system, the proposed method significantly improves recommendation accuracy and achieves better trade-offs between short- and long-term recommendation performance.
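Assigning each user his/her optimal recommender can be sketched with a fabricated per-user validation table: pick the best-scoring recommender per user, and the resulting average accuracy is never below that of the best single recommender applied to everyone:

```python
# Toy sketch of user preference-oriented recommender assignment.
# The per-user validation accuracies below are fabricated for illustration.
validation_acc = {                       # user -> {recommender: accuracy}
    "u1": {"item_cf": 0.62, "pop": 0.41, "user_cf": 0.55},
    "u2": {"item_cf": 0.38, "pop": 0.57, "user_cf": 0.44},
    "u3": {"item_cf": 0.50, "pop": 0.48, "user_cf": 0.66},
}

# Each user gets whichever recommender scored best for them.
assignment = {u: max(scores, key=scores.get) for u, scores in validation_acc.items()}
personalized = sum(validation_acc[u][assignment[u]] for u in validation_acc) / len(validation_acc)

# Baseline: the single best recommender applied to every user.
best_single = max(
    sum(validation_acc[u][r] for u in validation_acc) / len(validation_acc)
    for r in ["item_cf", "pop", "user_cf"]
)
```

By construction the per-user argmax dominates any single global choice, which is the intuition behind letting multiple recommenders coexist in the system.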

  15. Proposed neutral-beam diagnostics for fast confined alpha particles in a burning plasma

    International Nuclear Information System (INIS)

    Schlachter, A.S.; Cooper, W.S.

    1986-10-01

    Diagnostic methods for fast confined alpha particles are essential for a burning plasma experiment. Several methods that use energetic neutral beams have been proposed. We review these methods and discuss system considerations for their implementation.

  16. North Central Project: Environment act proposal

    International Nuclear Information System (INIS)

    1992-05-01

    Manitoba Hydro proposes to construct a power transmission and distribution line system to connect 12 northern Manitoba communities to the utility's central power grid. The purpose of this North Central Project (NCP) is to provide reliable and unrestricted electric service to remote communities now largely receiving limited diesel-generated power. The NCP is composed of a 138-kV transmission line running ca. 350 km from the Kelsey Generating Station, ca. 160 km of 25-kV distribution lines, new transformer stations at four communities, upgraded internal distribution systems within the communities, removal of existing diesel stations and restoration of the sites, modifications and additions to the Kelsey switchyard, and a communications system. The NCP is described in detail, including proposed line routes and transformer station locations, rationales for site and route selection, projected impacts on the environment and local societies, and consultations with the communities to be affected. Potential impacts are expected to be modest, with few unmitigable adverse impacts and a number of potentially significant positive benefits. Impact management measures are proposed to prevent or mitigate adverse effects and to create or enhance positive impacts such as local employment of native peoples. 49 figs., 1 tab.

  17. Large-scale Comparative Study of Hi-C-based Chromatin 3D Structure Modeling Methods

    KAUST Repository

    Wang, Cheng

    2018-05-17

    Chromatin is a complex polymer molecule in eukaryotic cells, primarily consisting of DNA and histones. Many works have shown that the 3D folding of chromatin structure plays an important role in DNA expression. The recently proposed Chromosome Conformation Capture technologies, especially the Hi-C assays, provide an opportunity to study how the 3D structures of chromatin are organized. Based on data from Hi-C experiments, many chromatin 3D structure modeling methods have been proposed. However, there is limited ground truth to validate these methods and no robust chromatin structure alignment algorithm to evaluate their performance. In our work, we first made a thorough literature review of 25 publicly available population Hi-C-based chromatin 3D structure modeling methods. Furthermore, to evaluate and compare the performance of these methods, we proposed a novel data simulation method, which combines population Hi-C and single-cell Hi-C data without ad hoc parameters. We also designed global and local alignment algorithms to measure the similarity between the templates and the chromatin structures predicted by different modeling methods. Finally, the results from large-scale comparative tests indicated that our alignment algorithms significantly outperform those in the literature.
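A generic baseline for the global alignment step is rigid superposition with the Kabsch algorithm followed by RMSD scoring; this is a standard technique, not necessarily the authors' own algorithm:

```python
import numpy as np

# Global structure alignment sketch: superpose two 3D conformations with the
# Kabsch algorithm and score the match by RMSD.
def kabsch_rmsd(P, Q):
    P = P - P.mean(axis=0)               # centre both point sets
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])            # guard against an improper rotation
    R = Vt.T @ D @ U.T                    # optimal rotation mapping P onto Q
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum(axis=1).mean()))

rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))              # a toy "chromatin structure"
angle = 0.7
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
Q = P @ Rz.T + np.array([5.0, -2.0, 1.0]) # rotated and translated copy
rmsd = kabsch_rmsd(P, Q)                  # ~0: same structure up to rigid motion
```

Since RMSD after Kabsch superposition is invariant to rigid motion, identical structures score near zero regardless of their placement, which is the property any structure-alignment score needs.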

  18. 3D analytical method for the external dynamics of ship collisions and investigation of the coefficient of restitution

    Directory of Open Access Journals (Sweden)

    LIU Junfeng

    2017-03-01

    The analytical method for predicting the dynamic responses of a ship in a collision scenario features speed and accuracy, and the external dynamics constitute an important part. A 3D simplified analytical method is implemented in MATLAB and used to calculate the energy dissipation of ship-ship collisions. The results obtained by the proposed method are then compared with those of a 2D simplified analytical method. The total dissipated energy can be obtained through the proposed analytical method, and on that basis the influence of collision heights, angles and locations on the dissipated energy is discussed. Furthermore, the effects of restitution on the conservative coefficients and the effects of the conservative coefficients on energy dissipation are discussed. It is concluded that the proposed 3D analysis yields a lower energy dissipation than the 2D analysis, and that the collision height has a significant influence on the dissipated energy. When using the proposed simplified method, it is not safe to set the conservative coefficient to zero when the collision angle is greater than 90 degrees. In future research, adopting the 3D simplified analytical method instead of the 2D method is a good way to obtain more accurate energy dissipation.
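The role of the coefficient of restitution e in the external dynamics can be illustrated with the classical 1D central-collision result: the kinetic energy made available for structural deformation is ½μ(1 − e²)v_rel², with μ the reduced mass of the two ships. The masses and speeds below are assumed values, not from the paper:

```python
# Energy dissipated in a 1D central ship-ship collision as a function of the
# coefficient of restitution e (classical impact mechanics, not the paper's
# full 3D formulation). Masses in kg, speeds in m/s (assumed values).
def dissipated_energy(m1, m2, v1, v2, e):
    mu = m1 * m2 / (m1 + m2)             # reduced mass
    v_rel = v1 - v2                      # closing speed
    return 0.5 * mu * (1.0 - e**2) * v_rel**2

# Striking ship 8000 t at 5 m/s, struck ship 12000 t at rest
E_plastic = dissipated_energy(8.0e6, 1.2e7, 5.0, 0.0, e=0.0)  # fully plastic
E_elastic = dissipated_energy(8.0e6, 1.2e7, 5.0, 0.0, e=1.0)  # fully elastic
```

Setting e = 0 (fully plastic impact) maximizes the energy fed into structural deformation, while e = 1 dissipates nothing, which is why the choice of restitution coefficient matters for conservative collision analysis.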

  19. Histological staining methods preparatory to laser capture microdissection significantly affect the integrity of the cellular RNA

    Directory of Open Access Journals (Sweden)

    Li Ming-Chung

    2006-04-01

    Background: Gene expression profiling by microarray analysis of cells enriched by laser capture microdissection (LCM) faces several technical challenges. Frozen sections yield higher-quality RNA than paraffin-embedded sections, but even with frozen sections, the staining methods used for histological identification of cells of interest can still damage the mRNA in the cells. To study the contribution of staining methods to degradation of gene expression profiling results from LCM samples, we subjected pellets of the mouse plasma cell tumor cell line TEPC 1165 to direct RNA extraction and to parallel frozen sectioning for LCM and subsequent RNA extraction. We used microarray hybridization analysis to compare gene expression profiles of RNA from cell pellets with those of RNA from frozen sections that had been stained with hematoxylin and eosin (H&E), Nissl stain (NS), for immunofluorescence (IF), and with the plasma cell-revealing methyl green pyronin (MGP) stain. All RNAs were amplified with two rounds of T7-based in vitro transcription and analyzed by two-color expression analysis on 10-K cDNA microarrays. Results: The MGP-stained samples showed the least mRNA loss, followed by H&E and immunofluorescence. Nissl staining was significantly more detrimental to gene expression profiles, presumably owing to an aqueous step in which RNA may have been damaged by endogenous or exogenous RNases. Conclusion: RNA damage can occur during the staining steps preparatory to laser capture microdissection, with the consequence of loss of representation of certain genes in microarray hybridization analysis. Inclusion of an RNase inhibitor in aqueous staining solutions appears to be important in protecting RNA from loss of gene transcripts.

  20. A Decomposition Method for Security Constrained Economic Dispatch of a Three-Layer Power System

    Science.gov (United States)

    Yang, Junfeng; Luo, Zhiqiang; Dong, Cheng; Lai, Xiaowen; Wang, Yang

    2018-01-01

    This paper proposes a new decomposition method for security-constrained economic dispatch in a three-layer large-scale power system. The decomposition relies on two main techniques. The first is Ward-equivalent network reduction, which reduces the number of variables and constraints in the high-layer model without sacrificing accuracy. The second is a price response function that exchanges signal information between neighboring layers, which significantly improves the information exchange efficiency of each iteration and results in fewer iterations and less computational time. Case studies based on the duplicated RTS-79 system demonstrate the effectiveness and robustness of the proposed method.
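The linear-algebra core of a Ward equivalent is a Schur complement (Kron reduction) of the bus admittance matrix that eliminates buses with zero injection while preserving the behaviour seen from the retained buses. The 3-bus admittance values below are made up for illustration:

```python
import numpy as np

# Kron reduction: eliminate internal buses with zero injection via the Schur
# complement of the bus admittance matrix (the core of a Ward equivalent).
def kron_reduce(Y, keep, drop):
    Ykk = Y[np.ix_(keep, keep)]
    Ykd = Y[np.ix_(keep, drop)]
    Ydk = Y[np.ix_(drop, keep)]
    Ydd = Y[np.ix_(drop, drop)]
    return Ykk - Ykd @ np.linalg.solve(Ydd, Ydk)   # Schur complement

# Bus admittance matrix of a toy 3-bus network (branch admittances in siemens)
y12, y13, y23 = 1.0, 2.0, 3.0
Y = np.array([[y12 + y13, -y12,       -y13],
              [-y12,       y12 + y23, -y23],
              [-y13,      -y23,        y13 + y23]])

Y_red = kron_reduce(Y, keep=[0, 1], drop=[2])

# Consistency check: with zero injection at the eliminated bus 3, currents at
# the retained buses from the full model match the reduced model exactly.
V_keep = np.array([1.02, 0.98])
V3 = np.linalg.solve(Y[2:3, 2:3], -Y[2:3, :2] @ V_keep)
I_full = Y[:2, :2] @ V_keep + Y[:2, 2:3] @ V3
I_red = Y_red @ V_keep
```

Because the eliminated bus carries no injection, the reduced matrix reproduces the retained-bus currents exactly, which is what lets the high-layer dispatch model shrink without sacrificing accuracy.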