WorldWideScience

Sample records for methods provide means

  1. Methods and means for coating paper by film coating

    NARCIS (Netherlands)

    van der Maarel, Marc; Ter Veer, Arend Berend Cornelis; Vrieling-Smit, Annet; Delnoye, Pierre

    2015-01-01

    This invention relates to the field of paper coating, more in particular to means and methods for providing paper with at least one layer of pigment using film coating to obtain a well printable surface. Provided is a method for preparing coated paper comprising the steps of: a) providing a

  2. MEANS AND METHODS FOR CLONING NUCLEIC ACID SEQUENCES

    NARCIS (Netherlands)

    Geertsma, Eric Robin; Poolman, Berend

    2008-01-01

    The invention provides means and methods for efficiently cloning nucleic acid sequences of interest in micro-organisms that are less amenable to conventional nucleic acid manipulations, as compared to, for instance, E.coli. The present invention enables high-throughput cloning (and, preferably,

  3. MEANS AND METHODS OF CYBER WARFARE

    Directory of Open Access Journals (Sweden)

    Dan-Iulian VOITAȘEC

    2016-06-01

    According to the Declaration of Saint Petersburg of 1868 “the only legitimate object which States should endeavor to accomplish during war is to weaken the military forces of the enemy”. Thus, International Humanitarian Law prohibits or limits the use of certain means and methods of warfare. The rapid development of technology has led to the emergence of a new dimension of warfare. The cyber aspect of armed conflict has led to the development of new means and methods of warfare. The purpose of this paper is to study how the norms of international humanitarian law apply to the means and methods of cyber warfare.

  4. A SIMPLE ANALYTICAL METHOD TO DETERMINE SOLAR ENERGETIC PARTICLES' MEAN FREE PATH

    International Nuclear Information System (INIS)

    He, H.-Q.; Qin, G.

    2011-01-01

    To obtain the mean free path of solar energetic particles (SEPs) for a solar event, one usually has to fit time profiles of both flux and anisotropy from spacecraft observations to numerical simulations of SEPs' transport processes. This method can be called a simulation method. But a reasonably good fitting needs a lot of simulations, which demand a large amount of calculation resources. Sometimes, it is necessary to find an easy way to obtain the mean free path of SEPs quickly, for example, in space weather practice. Recently, Shalchi et al. provided an approximate analytical formula of SEPs' anisotropy time profile as a function of particles' mean free path for impulsive events. In this paper, we determine SEPs' mean free path by fitting the anisotropy time profiles from Shalchi et al.'s analytical formula to spacecraft observations. This new method can be called an analytical method. In addition, we obtain SEPs' mean free path with the traditional simulation methods. Finally, we compare the mean free path obtained with the simulation method to that of the analytical method to show that the analytical method, with some minor modifications, can give us a good, quick approximation of SEPs' mean free path for impulsive events.

  5. Fuzzy C-means method for clustering microarray data.

    Science.gov (United States)

    Dembélé, Doulaye; Kastner, Philippe

    2003-05-22

    Clustering analysis of data from DNA microarray hybridization studies is essential for identifying biologically relevant groups of genes. Partitional clustering methods such as K-means or self-organizing maps assign each gene to a single cluster. However, these methods do not provide information about the influence of a given gene on the overall shape of clusters. Here we apply a fuzzy partitioning method, Fuzzy C-means (FCM), to attribute cluster membership values to genes. A major problem in applying the FCM method for clustering microarray data is the choice of the fuzziness parameter m. We show that the commonly used value m = 2 is not appropriate for some data sets, and that optimal values for m vary widely from one data set to another. We propose an empirical method, based on the distribution of distances between genes in a given data set, to determine an adequate value for m. By setting threshold levels for the membership values, genes which are tightly associated with a given cluster can be selected. Using a yeast cell cycle data set as an example, we show that this selection increases the overall biological significance of the genes within the cluster. Supplementary text and Matlab functions are available at http://www-igbmc.u-strasbg.fr/fcm/
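    As a rough illustration of the fuzzy partitioning described here, the following is a generic FCM sketch in NumPy (not the authors' code; the data and the fuzziness parameter m are placeholders):

    ```python
    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
        """Generic fuzzy C-means: returns cluster centers and membership matrix U."""
        rng = np.random.default_rng(seed)
        U = rng.random((c, X.shape[0]))
        U /= U.sum(axis=0)                      # memberships sum to 1 per gene/point
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
            d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
            U_new = 1.0 / (d ** (2.0 / (m - 1.0)))   # u_ik proportional to d_ik^(-2/(m-1))
            U_new /= U_new.sum(axis=0)
            if np.abs(U_new - U).max() < tol:
                return centers, U_new
            U = U_new
        return centers, U

    # Toy usage: memberships are near 0/1 for m close to 1 and softer for larger m,
    # which is the sensitivity to m that the abstract discusses.
    X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
    centers, U = fuzzy_c_means(X, c=2, m=2.0)
    ```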

  6. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    In unsupervised classification, the kernel k-means clustering method has been shown to perform better than the conventional k-means clustering method in ...

  7. Geriatric care: ways and means of providing comfort.

    Science.gov (United States)

    Ribeiro, Patricia Cruz Pontifice Sousa Valente; Marques, Rita Margarida Dourado; Ribeiro, Marta Pontifice

    2017-01-01

    To know the ways and means of comfort perceived by older adults hospitalized in a medical service. Ethnographic study with a qualitative approach. We conducted semi-structured interviews with 22 older adults and participant observation of care situations. The ways and means of providing comfort are centered on strategies for promoting care mobilized by nurses and recognized by patients (clarifying/informing, positive interaction/communication, music therapy, touch, smile, unconditional presence, empathy/proximity relationship, integrating the older adult or the family as partner in the care, relief of discomfort through massage/mobilization/therapy) and on particular moments of comfort (the first contact, the moment of personal hygiene, and the visit of the family), which constitute the foundation of care/comfort. Geriatric care is built on the relationship that is established and filled with meaning, and is based on the meeting/interaction between the actors under the influence of the context in which they are inserted. The different ways and means of providing comfort aim to facilitate/increase care, relieve discomfort and/or invest in potential comfort.

  8. Critical study of the dispersive n-90Zr mean field by means of a new variational method

    Science.gov (United States)

    Mahaux, C.; Sartor, R.

    1994-02-01

    A new variational method is developed for the construction of the dispersive nucleon-nucleus mean field at negative and positive energies. Like the variational moment approach that we had previously proposed, the new method only uses phenomenological optical-model potentials as input. It is simpler and more flexible than the previous approach. It is applied to a critical investigation of the n-90Zr mean field between -25 and +25 MeV. This system is of particular interest because conflicting results had recently been obtained by two different groups. While the imaginary parts of the phenomenological optical-model potentials provided by these two groups are similar, their real parts are quite different. Nevertheless, we demonstrate that these two sets of phenomenological optical-model potentials are both compatible with the dispersion relation which connects the real and imaginary parts of the mean field. Previous hints to the contrary, by one of these two groups, are shown to be due to unjustified approximations. A striking outcome of the present study is that it is important to explicitly introduce volume absorption in the dispersion relation, although volume absorption is negligible in the energy domain investigated here. Because of the existence of two sets of phenomenological optical-model potentials, our variational method yields two dispersive mean fields whose real parts are quite different at small or negative energies. No preference for one of the two dispersive mean fields can be expressed on purely empirical grounds since they both yield fair agreement with the experimental cross sections as well as with the observed energies of the bound single-particle states. However, we argue that one of these two mean fields is physically more meaningful, because the radial shape of its Hartree-Fock type component is independent of energy, as expected on theoretical grounds. This preferred mean field is very close to the one which had been obtained by the Ohio

  9. The meaning of providing caring to obese patients to a group of nurses

    Directory of Open Access Journals (Sweden)

    Emilly Souza Marques

    2014-03-01

    This qualitative study was performed with six nurses of a public hospital, with the objective to describe their view of the meaning of providing care to obese patients. Interviews were conducted using a semi-structured script. The data were organized under themes extracted from the subjects’ statements, after being thoroughly read. Symbolic Interactionism was adopted to interpret the findings. The results from the analysis were organized under the following themes: Being obese is excessive, it is not healthy; Providing care to the obese is a structural issue; Obese patients are troublesome, they require care, no big deal; Providing care to the obese requires teamwork. The grasped meanings can interfere with the care provided. The nurses, however, recognize the need to work as a team to deliver comprehensive care. Making positive changes to the meanings found in this study is possible, thus contributing to providing prejudice-free nursing care to obese patients. Descriptors: Obesity; Nursing Care; Hospital Care.

  10. Incentives and provider payment methods.

    Science.gov (United States)

    Barnum, H; Kutzin, J; Saxenian, H

    1995-01-01

    The mode of payment creates powerful incentives affecting provider behavior and the efficiency, equity and quality outcomes of health finance reforms. This article examines provider incentives as well as administrative costs, and institutional conditions for successful implementation associated with provider payment alternatives. The alternatives considered are budget reforms, capitation, fee-for-service, and case-based reimbursement. We conclude that competition, whether through a regulated private sector or within a public system, has the potential to improve the performance of any payment method. All methods generate both adverse and beneficial incentives. Systems with mixed forms of provider payment can provide tradeoffs to offset the disadvantages of individual modes. Low-income countries should avoid complex payment systems requiring higher levels of institutional development.

  11. Gait variability: methods, modeling and meaning

    Directory of Open Access Journals (Sweden)

    Hausdorff Jeffrey M

    2005-07-01

    The study of gait variability, the stride-to-stride fluctuations in walking, offers a complementary way of quantifying locomotion and its changes with aging and disease as well as a means of monitoring the effects of therapeutic interventions and rehabilitation. Previous work has suggested that measures of gait variability may be more closely related to falls, a serious consequence of many gait disorders, than are measures based on the mean values of other walking parameters. The current JNER series presents nine reports on the results of recent investigations into gait variability. One novel method for collecting unconstrained, ambulatory data is reviewed, and a primer on analysis methods is presented along with a heuristic approach to summarizing variability measures. In addition, the first studies of gait variability in animal models of neurodegenerative disease are described, as is a mathematical model of human walking that characterizes certain complex (multifractal) features of the motor control's pattern generator. Another investigation demonstrates that, whereas both healthy older controls and patients with a higher-level gait disorder walk more slowly in reduced lighting, only the latter's stride variability increases. Studies of the effects of dual tasks suggest that the regulation of the stride-to-stride fluctuations in stride width and stride time may be influenced by attention loading and may require cognitive input. Finally, a report of gait variability in over 500 subjects, probably the largest study of this kind, suggests how step width variability may relate to fall risk. Together, these studies provide new insights into the factors that regulate the stride-to-stride fluctuations in walking and pave the way for expanded research into the control of gait and the practical application of measures of gait variability in the clinical setting.

  12. History based batch method preserving tally means

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Choi, Sung Hoon

    2012-01-01

    In Monte Carlo (MC) eigenvalue calculations, the sample variance of a tally mean calculated from its cycle-wise estimates is biased because of the inter-cycle correlations of the fission source distribution (FSD). Recently, we proposed a new real variance estimation method named the history-based batch method, in which a MC run is treated as multiple runs with a small number of histories per cycle to generate independent tally estimates. In this paper, the history-based batch method based on the weight correction is presented to preserve the tally mean from the original MC run. The effectiveness of the new method is examined for the weakly coupled fissile array problem as a function of the dominance ratio and the batch size, in comparison with other available schemes.
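    For orientation, a minimal sketch of the plain batch-means idea that underlies such real-variance estimation (generic, not the authors' weight-corrected scheme; variable names are illustrative):

    ```python
    import numpy as np

    def batch_mean_variance(cycle_estimates, batch_size):
        """Estimate the real variance of a tally mean from correlated cycle-wise
        estimates by averaging them in batches of nearly independent means."""
        x = np.asarray(cycle_estimates, dtype=float)
        n_batches = len(x) // batch_size
        batch_means = x[:n_batches * batch_size].reshape(n_batches, batch_size).mean(axis=1)
        mean = batch_means.mean()
        var_of_mean = batch_means.var(ddof=1) / n_batches  # sample variance of batch means
        return mean, var_of_mean
    ```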

  13. Smoothed analysis of the k-means method

    NARCIS (Netherlands)

    Arthur, David; Manthey, Bodo; Röglin, Heiko

    2011-01-01

    The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means

  14. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    This paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clustering ... available at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.

  15. Regression toward the mean – a detection method for unknown population mean based on Mee and Chua's algorithm

    Directory of Open Access Journals (Sweden)

    Lüdtke Rainer

    2008-08-01

    Background: Regression to the mean (RTM) occurs in situations of repeated measurements when extreme values are followed by measurements in the same subjects that are closer to the mean of the basic population. In uncontrolled studies such changes are likely to be interpreted as a real treatment effect. Methods: Several statistical approaches have been developed to analyse such situations, including the algorithm of Mee and Chua, which assumes a known population mean μ. We extend this approach to a situation where μ is unknown and suggest varying it systematically over a range of reasonable values. Using differential calculus we provide formulas to estimate the range of μ where treatment effects are likely to occur when RTM is present. Results: We successfully applied our method to three real-world examples denoting situations when (a) no treatment effect can be confirmed regardless of which μ is true, (b) a treatment effect must be assumed independent of the true μ, and (c) in the appraisal of results of uncontrolled studies. Conclusion: Our method can be used to separate the wheat from the chaff in situations when one has to interpret the results of uncontrolled studies. In meta-analyses, health-technology reports or systematic reviews this approach may be helpful to clarify the evidence given from uncontrolled observational studies.
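    For context, the textbook quantification of RTM, assuming bivariate normal test-retest scores with common mean μ, standard deviation σ and correlation ρ (background to, not a statement of, the Mee-Chua extension):

    ```latex
    \mathbb{E}[X_2 \mid X_1 = x_1] = \mu + \rho\,(x_1 - \mu),
    \qquad \text{RTM effect} = (1 - \rho)\,(x_1 - \mu)
    ```

    An observed change in a group selected for extreme baselines must exceed this RTM effect before a treatment effect can be claimed; the method above varies the unknown μ over a range of plausible values.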

  16. Methods and means of sodium fire fighting

    International Nuclear Information System (INIS)

    Zemskij, G.T.

    1985-01-01

    Methods and means of sodium fire fighting are analyzed, and their advantages and drawbacks are considered. Comparative data on sodium fire fighting using some of the considered compositions are presented. The high efficiency of self-expanding compositions (Grafeks SK-23 and RS) is noted. Properties of the new MGS composition for sodium fire fighting are considered; its high fire-fighting ability, independent of the thickness of the burning metal layer, is shown. It is noted that the fire-fighting efficiency of MGS decreases as the duration of free fire burning grows, which affects the fire-fighting methods. Technical means of powder delivery to the burning sodium are reported.

  17. Two numerical methods for mean-field games

    KAUST Repository

    Gomes, Diogo A.

    2016-01-01

    Here, we consider numerical methods for stationary mean-field games (MFG) and investigate two classes of algorithms. The first one is a gradient flow method based on the variational characterization of certain MFG. The second one uses monotonicity properties of MFG. We illustrate our methods with various examples, including one-dimensional periodic MFG, congestion problems, and higher-dimensional models.

  19. Linear–Quadratic Mean-Field-Type Games: A Direct Method

    Directory of Open Access Journals (Sweden)

    Tyrone E. Duncan

    2018-02-01

    In this work, a multi-person mean-field-type game is formulated and solved that is described by a linear jump-diffusion system of mean-field type and a quadratic cost functional involving the second moments, the square of the expected value of the state, and the control actions of all decision-makers. We propose a direct method to solve the game, team, and bargaining problems. This solution approach does not require solving the Bellman–Kolmogorov equations or backward–forward stochastic differential equations of Pontryagin’s type. The proposed method can be easily implemented by beginners and engineers who are new to the emerging field of mean-field-type game theory. The optimal strategies for decision-makers are shown to be in a state-and-mean-field feedback form. The optimal strategies are given explicitly as a sum of the well-known linear state-feedback strategy for the associated deterministic linear–quadratic game problem and a mean-field feedback term. The equilibrium costs of the decision-makers are explicitly derived using a simple direct method. Moreover, the equilibrium cost is a weighted sum of the initial variance and an integral of a weighted variance of the diffusion and the jump process. Finally, the method is used to compute global optimum strategies as well as saddle point strategies and the Nash bargaining solution in state-and-mean-field feedback form.
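    Schematically, a state-and-mean-field feedback of the kind described here can be written as (notation illustrative, not taken from the paper):

    ```latex
    u^{*}(t) = -K(t)\,\bigl(x(t) - \mathbb{E}[x(t)]\bigr) - \bar{K}(t)\,\mathbb{E}[x(t)]
    ```

    where K(t) is the gain of the associated deterministic linear-quadratic problem and the mean-field gain K̄(t) corrects the feedback on the expected state, each obtained from Riccati-type equations.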

  20. Gas measuring apparatus with standardization means, and method therefor

    International Nuclear Information System (INIS)

    Typpo, P.M.

    1980-01-01

    An apparatus and a method are described for standardizing a gas measuring device having a source capable of emitting a beam of radiation aligned to impinge on a detector. A housing means encloses the beam. The housing means has a plurality of apertures permitting the gas to enter the housing means, to intercept the beam, and to exit from the housing means. The device further comprises means for closing the apertures and means for purging said gas from the housing means.

  1. Improved Performance of Unsupervised Method by Renovated K-Means

    OpenAIRE

    Ashok, P.; Nawaz, G. M Kadhar; Elayaraja, E.; Vadivel, V.

    2013-01-01

    Clustering is a separation of data into groups of similar objects. Every group, called a cluster, consists of objects that are similar to one another and dissimilar to the objects of other groups. In this paper, the K-Means algorithm is implemented with three distance functions in order to identify the optimal distance function for clustering. The proposed K-Means algorithm is compared with the K-Means, Static Weighted K-Means (SWK-Means) and Dynamic Weighted K-Means (DWK-Means) algorithms by using Davis...

  2. Functional connectivity analysis of the neural bases of emotion regulation: A comparison of independent component method with density-based k-means clustering method.

    Science.gov (United States)

    Zou, Ling; Guo, Qian; Xu, Yi; Yang, Biao; Jiao, Zhuqing; Xiang, Jianbo

    2016-04-29

    Functional magnetic resonance imaging (fMRI) is an important tool in neuroscience for assessing connectivity and interactions between distant areas of the brain. To find and characterize the coherent patterns of brain activity as a means of identifying brain systems for the cognitive reappraisal of emotion task, both density-based k-means clustering and independent component analysis (ICA) methods can be applied to characterize the interactions between brain regions involved in cognitive reappraisal of emotion. Our results reveal that, compared with the ICA method, the density-based k-means clustering method provides a higher sensitivity of polymerization and, in addition, is more sensitive to relatively weak functional connection regions. The study thus concludes that, in the process of receiving emotional stimuli, the relatively obvious activation areas are mainly distributed in the frontal lobe, the cingulum and near the hypothalamus. Furthermore, the density-based k-means clustering method provides a more reliable basis for follow-up studies of brain functional connectivity.

  3. Determination System Of Food Vouchers For the Poor Based On Fuzzy C-Means Method

    Science.gov (United States)

    Anamisa, D. R.; Yusuf, M.; Syakur, M. A.

    2018-01-01

    Food vouchers are government programs to tackle the poverty of rural communities. This program aims to help the poor get enough food and nutrients from carbohydrates. Several factors influence who receives a food voucher, such as: job, monthly income, taxes, electricity bill, size of house, number of family members, education certificate and amount of rice consumed every week. In practice, the distribution of vouchers often runs into problems: food vouchers have been misdirected, and the choice of who receives them is still subjective. This research calculates the change of each partition matrix and each cluster using the Fuzzy C-Means method, and aims to contribute by providing better results with Fuzzy C-Means than with other methods for this case study. The Fuzzy C-Means method is a clustering method that has an organized and scattered cluster structure with regular patterns on two-dimensional datasets; it is used here to calculate the change of each partition matrix. Each cluster is then sorted by the proximity of its data elements to the centroid of the cluster to obtain the ranking. Various trials were conducted to group and rank the proposed recipients of food vouchers based on the quota of each village. In this testing, the Fuzzy C-Means method proved able to determine the recipients of the food vouchers with satisfactory results. Fulfillment of food-voucher recipients is 80% to 90%, tested using data of 115 family cards from 6 villages, with 20 iterations and 3 clusters.

  4. Mean-Square Convergence of Drift-Implicit One-Step Methods for Neutral Stochastic Delay Differential Equations with Jump Diffusion

    Directory of Open Access Journals (Sweden)

    Lin Hu

    2011-01-01

    A class of drift-implicit one-step schemes are proposed for neutral stochastic delay differential equations (NSDDEs) driven by Poisson processes. A general framework for the mean-square convergence of the methods is provided. It is shown that under certain conditions global error estimates for a method can be inferred from estimates on its local error. The applicability of the mean-square convergence theory is illustrated by the stochastic θ-methods and the balanced implicit methods. It is derived from Theorem 3.1 that the order of mean-square convergence of both of them for NSDDEs with jumps is 1/2. Numerical experiments illustrate the theoretical results. It is worth noting that the results on the mean-square convergence of the stochastic θ-methods and the balanced implicit methods are also new.

  5. A New Method and a New Scaling for Deriving Fermionic Mean-Field Dynamics

    International Nuclear Information System (INIS)

    Petrat, Sören; Pickl, Peter

    2016-01-01

    We introduce a new method for deriving the time-dependent Hartree or Hartree-Fock equations as an effective mean-field dynamics from the microscopic Schrödinger equation for fermionic many-particle systems in quantum mechanics. The method is an adaptation of the method used in Pickl (Lett. Math. Phys. 97 (2) 151–164 2011) for bosonic systems to fermionic systems. It is based on a Gronwall type estimate for a suitable measure of distance between the microscopic solution and an antisymmetrized product state. We use this method to treat a new mean-field limit for fermions with long-range interactions in a large volume. Some of our results hold for singular attractive or repulsive interactions. We can also treat Coulomb interaction assuming either a mild singularity cutoff or certain regularity conditions on the solutions to the Hartree(-Fock) equations. In the considered limit, the kinetic and interaction energy are of the same order, while the average force is subleading. For some interactions, we prove that the Hartree(-Fock) dynamics is a more accurate approximation than a simpler dynamics that one would expect from the subleading force. With our method we also treat the mean-field limit coupled to a semiclassical limit, which was discussed in the literature before, and we recover some of the previous results. All results hold for initial data close (but not necessarily equal) to antisymmetrized product states and we always provide explicit rates of convergence.

  6. Performance Analysis of Entropy Methods on K Means in Clustering Process

    Science.gov (United States)

    Dicky Syahputra Lubis, Mhd.; Mawengkang, Herman; Suwilo, Saib

    2017-12-01

    K-Means is a non-hierarchical data clustering method that attempts to partition existing data into one or more clusters/groups, so that data with the same characteristics are grouped into the same cluster and data with different characteristics are grouped into other groups. The purpose of this data clustering is to minimize the objective function set in the clustering process, which generally attempts to minimize variation within a cluster and maximize the variation between clusters. However, the main disadvantage of this method is that the number k is often not known beforehand. Furthermore, a randomly chosen starting point may cause two nearby points to be selected as two centroids. Therefore, the entropy method is used to determine the starting points for K-Means; it is a method that can be used to assign a weight and take a decision from a set of alternatives. Entropy is able to investigate the harmony in discrimination among a multitude of data sets: the criteria with the highest value variation receive the highest weight. The entropy method can thus help the K-Means process by determining the starting points, which are usually chosen at random, so that the clustering converges with fewer iterations than the standard K-Means process. Using the postoperative-patient data set from the UCI Machine Learning Repository, with only 12 records as a worked example of the calculations, the entropy method reaches the desired end result with only 2 iterations.
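    A sketch of the standard entropy-weight computation that such a scheme can use to score criteria before seeding K-Means (generic formulation under assumed notation, not the paper's exact procedure):

    ```python
    import numpy as np

    def entropy_weights(M):
        """Entropy weight method: rows = alternatives, columns = criteria.
        Criteria whose values vary most (lowest entropy) receive the largest weight."""
        P = M / M.sum(axis=0, keepdims=True)      # normalize each criterion column
        P = np.where(P <= 0, 1e-12, P)            # guard against log(0)
        k = 1.0 / np.log(M.shape[0])
        E = -k * (P * np.log(P)).sum(axis=0)      # entropy of each criterion
        d = 1.0 - E                               # degree of diversification
        return d / d.sum()                        # weights sum to 1
    ```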

  7. Integration K-Means Clustering Method and Elbow Method For Identification of The Best Customer Profile Cluster

    Science.gov (United States)

    Syakur, M. A.; Khotimah, B. K.; Rochman, E. M. S.; Satoto, B. D.

    2018-04-01

    Clustering is a data mining technique used to analyse data with many variations and in large volume. Clustering is the process of grouping data into clusters, so that each cluster contains data that are as similar as possible to one another and different from the objects in other clusters. Indonesian SMEs have a variety of customers, but they have no mapping of these customers, so they do not know which customers are loyal and which are not. Customer mapping is a grouping of customer profiles to facilitate analysis and policy-making by SMEs in the production of goods, especially batik sales. The researchers use a combination of the K-Means method with the elbow method to make K-Means performance more efficient and effective when processing large amounts of data. K-Means clustering is a local optimization method that is sensitive to the selection of the starting positions of the cluster midpoints, so choosing bad starting positions will cause the K-Means clustering algorithm to produce high errors and poor cluster results. The K-Means algorithm also has problems in determining the best number of clusters, so the elbow method is used to look for the best number of clusters for the K-Means method. The results show that determining the best number of clusters with the elbow method produces the same number of clusters K on different amounts of data, and this number becomes the default for the characterization process in the case study. Evaluating K-Means by its SSE values on 500 batik-visitor records yields the best clusters: the SSE curve shows a sharp decrease at K = 3, so this K is taken as the cut-off point marking the best number of clusters.
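    A minimal sketch of the elbow procedure combined with K-Means (using scikit-learn's KMeans; the data and the range of K are placeholders):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    X = np.random.rand(500, 4)      # placeholder for the customer-profile data

    sse = {}
    for k in range(1, 11):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        sse[k] = km.inertia_        # within-cluster sum of squared errors (SSE)

    # The "elbow" is the K after which SSE stops decreasing sharply
    # (K = 3 in the batik case study reported above).
    for k, v in sorted(sse.items()):
        print(k, round(v, 2))
    ```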

  8. σ-SCF: A direct energy-targeting method to mean-field excited states.

    Science.gov (United States)

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D; Van Voorhis, Troy

    2017-12-07

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry, a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states, ground or excited, are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.
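    The variance-based objective alluded to here has the general form (schematic; see the paper for the working equations):

    ```latex
    W_{\omega}[\Phi] = \langle \Phi | (\hat{H} - \omega)^{2} | \Phi \rangle
    ```

    minimized over single determinants Φ: any state with energy near the guess ω makes W small, which is why ground and excited states are treated on an equal footing.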

  10. The two means method for the attenuation coefficient determination of archaeological ceramics from the North of Parana

    International Nuclear Information System (INIS)

    Silva, Richard Maximiliano Cunha e

    1997-01-01

    This work reports an alternative methodology for determining the linear attenuation coefficient (μ) of irregularly shaped samples, such that the sample thickness need not be considered. With this methodology, indigenous archaeological ceramic fragments from the region of Londrina, north of Paraná, were studied. These ceramic fragments belong to the Kaingang and Tupiguarani traditions. The equation for determining μ by the two-means method was obtained and used to determine μ by gamma-ray beam attenuation, immersing the ceramics, in turn, in two different media with known linear attenuation coefficients. In addition, the theoretical value of μ was determined with the XCOM computer code, which takes the chemical composition of the ceramics as input and provides a table of mass attenuation coefficient versus energy. In order to validate the two-means method, five ceramic samples of thickness 1.15 cm and 1.87 cm were prepared from homogeneous clay. Using these ceramics, μ was determined with both the conventional attenuation method and the two-means method, and the results and their respective deviations were compared for the two methods. From the results obtained, it was concluded that the two-means method is suitable for determining the linear attenuation coefficient of materials of irregular shape, which is especially useful for archaeometric studies. (author)
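    One plausible reading of the two-means geometry, under assumed notation (our reconstruction, not the paper's equation): a fragment of unknown thickness t sits in a container of total beam path L filled successively with media 1 and 2 of known coefficients μ1 and μ2,

    ```latex
    I_i = I_0 \exp\!\bigl[-\mu_s t - \mu_i (L - t)\bigr], \; i = 1,2
    \quad\Longrightarrow\quad
    \ln\frac{I_1}{I_2} = (\mu_2 - \mu_1)(L - t)
    ```

    so t is obtained without measuring the irregular fragment directly, and substituting back gives the sample coefficient μs.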

  11. Improved Fuzzy Art Method for Initializing K-means

    Directory of Open Access Journals (Sweden)

    Sevinc Ilhan

    2010-09-01

    The K-means algorithm is quite sensitive to the cluster centers selected initially and can perform different clusterings depending on these initialization conditions. Within the scope of this study, a new method based on the Fuzzy ART algorithm, called Improved Fuzzy ART (IFART), is used in the determination of initial cluster centers. By using IFART, better-quality clusters are achieved than with Fuzzy ART, and IFART is as good as Fuzzy ART in its capability for fast clustering and for clustering large-scale data. Consequently, it is observed that, with the proposed method, the clustering operation is completed in fewer steps, is performed in a more stable manner by fixing the initialization points, and is completed with a smaller error margin compared with the conventional K-means.

  12. Approximate calculation method for integral of mean square value of nonstationary response

    International Nuclear Information System (INIS)

    Aoki, Shigeru; Fukano, Azusa

    2010-01-01

    The response of a structure subjected to nonstationary random excitation, such as an earthquake, is itself a nonstationary random vibration, and calculating the statistical characteristics of such a response is complicated. The mean square value of the response is usually used to evaluate the random response; the integral of the mean square value corresponds to the total energy of the response. In this paper, a simplified calculation method to obtain the integral of the mean square value of the response is proposed. As input excitation, nonstationary white noise and nonstationary filtered white noise are used. Integrals of the mean square value of the response are calculated for various values of the parameters. It is found that the proposed method gives the exact value of the integral of the mean square value of the response.

  13. Improved smoothed analysis of the k-means method

    NARCIS (Netherlands)

    Manthey, Bodo; Röglin, Heiko; Mathieu, C.

    2009-01-01

    The k-means method is a widely used clustering algorithm. One of its distinguished features is its speed in practice. Its worst-case running-time, however, is exponential, leaving a gap between practical and theoretical performance. Arthur and Vassilvitskii [3] aimed at closing this gap, and they

  14. Spiking cortical model-based nonlocal means method for speckle reduction in optical coherence tomography images

    Science.gov (United States)

    Zhang, Xuming; Li, Liu; Zhu, Fei; Hou, Wenguang; Chen, Xinjian

    2014-06-01

    Optical coherence tomography (OCT) images are usually degraded by significant speckle noise, which will strongly hamper their quantitative analysis. However, speckle noise reduction in OCT images is particularly challenging because of the difficulty in differentiating between noise and the information components of the speckle pattern. To address this problem, the spiking cortical model (SCM)-based nonlocal means method is presented. The proposed method explores self-similarities of OCT images based on rotation-invariant features of image patches extracted by SCM and then restores the speckled images by averaging the similar patches. This method can provide sufficient speckle reduction while preserving image details very well due to its effectiveness in finding reliable similar patches under high speckle noise contamination. When applied to the retinal OCT image, this method provides signal-to-noise ratio improvements of >16 dB with a small 5.4% loss of similarity.
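    For orientation, a plain nonlocal-means despeckling call using scikit-image's standard NLM (the SCM-derived rotation-invariant patch features described above are not part of this sketch):

    ```python
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    oct_image = np.random.rand(256, 256).astype(np.float32)  # placeholder OCT B-scan

    sigma = float(np.mean(estimate_sigma(oct_image)))        # rough noise estimate
    denoised = denoise_nl_means(
        oct_image,
        patch_size=5,        # size of the patches being compared
        patch_distance=6,    # radius of the search window
        h=0.8 * sigma,       # filtering strength
        fast_mode=True,
    )
    ```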

  15. Mirror bootstrap method for testing hypotheses of one mean

    OpenAIRE

    Varvak, Anna

    2012-01-01

    The general philosophy for bootstrap or permutation methods for testing hypotheses is to simulate the variation of the test statistic by generating the sampling distribution which assumes both that the null hypothesis is true, and that the data in the sample is somehow representative of the population. This philosophy is inapplicable for testing hypotheses for a single parameter like the population mean, since the two assumptions are contradictory (e.g., how can we assume both that the mean o...

  16. Means and method for controlling the neutron output of a neutron generator tube

    International Nuclear Information System (INIS)

    Langford, O.M.; Peelman, H.E.

    1978-01-01

    Means and method are described for energizing and regulating a neutron generator tube having a target, an ion source and a replenisher, providing a negative high voltage to the target and monitoring the target current. A constant current from a constant current source is divided into a shunt current and a replenisher current in accordance with the target current. The replenisher current is applied to the replenisher in the neutron generator tube so as to control the neutron output in accordance with the target current.

  17. System and Method for Providing a Climate Data Analytic Services Application Programming Interface Distribution Package

    Science.gov (United States)

    Schnase, John L. (Inventor); Duffy, Daniel Q. (Inventor); Tamkin, Glenn S. (Inventor)

    2016-01-01

    A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.

  18. Automating the mean-field method for large dynamic gossip networks

    NARCIS (Netherlands)

    Bakhshi, Rena; Endrullis, Jörg; Endrullis, Stefan; Fokkink, Wan; Haverkort, Boudewijn R.H.M.

    We investigate an abstraction method, called the mean-field method, for the performance evaluation of dynamic networks with pairwise communication between nodes. It allows us to evaluate systems with very large numbers of nodes, that is, systems of a size where traditional performance evaluation

  19. A convolution method for predicting mean treatment dose including organ motion at imaging

    International Nuclear Information System (INIS)

    Booth, J.T.; Zavgorodni, S.F.; Royal Adelaide Hospital, SA

    2000-01-01

    The random treatment delivery errors (organ motion and set-up error) can be incorporated into the treatment planning software using a convolution method. Mean treatment dose is computed as the convolution of a static dose distribution with a variation kernel. Typically this variation kernel is Gaussian with variance equal to the sum of the organ motion and set-up error variances. We propose a novel variation kernel for the convolution technique that additionally considers the position of the mobile organ in the planning CT image. The systematic error of organ position in the planning CT image can be considered random for each patient over a population. Thus the variance of the variation kernel will equal the sum of treatment delivery variance and organ motion variance at planning for the population of treatments. The kernel is extended to deal with multiple pre-treatment CT scans to improve tumour localisation for planning. Mean treatment doses calculated with the convolution technique are compared to benchmark Monte Carlo (MC) computations. Calculations of mean treatment dose using the convolution technique agreed with MC results for all cases to better than ± 1 Gy in the planning treatment volume for a prescribed 60 Gy treatment. Convolution provides a quick method of incorporating random organ motion (captured in the planning CT image and during treatment delivery) and random set-up errors directly into the dose distribution. Copyright (2000) Australasian College of Physical Scientists and Engineers in Medicine
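    A minimal sketch of the convolution idea: blur a static dose grid with a Gaussian kernel whose variance is the sum of the motion and set-up variances (grid and parameter values are illustrative only):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    static_dose = np.zeros((100, 100))
    static_dose[40:60, 40:60] = 60.0            # Gy, toy square field

    sigma_motion, sigma_setup = 3.0, 2.0        # mm, illustrative standard deviations
    voxel_size = 1.0                            # mm per voxel
    sigma = np.sqrt(sigma_motion**2 + sigma_setup**2) / voxel_size

    # Mean treatment dose = static dose convolved with the Gaussian variation kernel
    mean_dose = gaussian_filter(static_dose, sigma=sigma)
    ```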

  20. More recent robust methods for the estimation of mean and standard deviation of data

    International Nuclear Information System (INIS)

    Kanisch, G.

    2003-01-01

    Outliers in a data set result in biased values of mean and standard deviation. One way to improve the estimation of a mean is to apply tests to identify outliers and to exclude them from the calculations. Tests according to Grubbs or to Dixon, which are frequently used in practice, especially within laboratory intercomparisons, are not very efficient in identifying outliers. For more than ten years now, so-called robust methods have been used more and more; they determine mean and standard deviation by iteration and by down-weighting values far from the mean, thereby diminishing the impact of outliers. In 1989 the Analytical Methods Committee of the British Royal Chemical Society published such a robust method. Since 1993 the US Environmental Protection Agency has published a more efficient and quite versatile method, in which mean and standard deviation are calculated by iteration and application of a special weight function for down-weighting outlier candidates. In 2000, W. Cofino et al. published a very efficient robust method which works quite differently from the others. It applies methods taken from the basics of quantum mechanics, such as "wave functions" associated with each laboratory mean value and matrix algebra (solving eigenvalue problems). In contrast to the other methods, this one includes the individual measurement uncertainties. (orig.)

  1. Assessing the accuracy of globe thermometer method in predicting outdoor mean radiant temperature under Malaysia tropical microclimate

    Science.gov (United States)

    Khrit, N. G.; Alghoul, M. A.; Sopian, K.; Lahimer, A. A.; Elayeb, O. K.

    2017-11-01

    Assessing outdoor human thermal comfort and urban climate quality requires experimental investigation of microclimatic conditions and their variations in open urban spaces. For this, it is essential to provide quantitative information on air temperature, humidity, wind velocity and mean radiant temperature. These parameters can be quantified directly except the mean radiant temperature (Tmrt). The most accurate way to quantify Tmrt is integral radiation measurement (3-D shortwave and long-wave), which requires expensive radiometer instruments. To overcome this limitation, the well-known globe thermometer method has been suggested for calculating Tmrt. The aim of this study was to assess the possibility of using the indoor globe thermometer method to predict outdoor mean radiant temperature under the Malaysian tropical microclimate. Black-painted copper globes of small and large size (50 mm, 150 mm) were used to estimate Tmrt, which was compared with the reference Tmrt estimated by the integral radiation method. The results revealed that the globe thermometer method considerably overestimated Tmrt during the middle of the day and slightly underestimated it in the morning and late evening. The difference between the two methods was obvious when the amount of incoming solar radiation was high. The results also showed that the effect of globe size on the estimated Tmrt is mostly small, though the Tmrt estimated by the small globe showed a relatively large amount of scatter caused by rapid changes in radiation and wind speed.
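    For reference, the widely used forced-convection conversion from globe temperature Tg to mean radiant temperature (the ISO 7726 form; the study may use a variant calibrated for its globe sizes):

    ```latex
    T_{mrt} = \left[ (T_g + 273)^4
            + \frac{1.1 \times 10^{8}\, v_a^{0.6}}{\varepsilon\, D^{0.4}}\,(T_g - T_a)
            \right]^{1/4} - 273
    ```

    with air velocity v_a (m/s), air temperature T_a (°C), globe diameter D (m) and globe emissivity ε ≈ 0.95.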

  2. Neutron flux calculation by means of Monte Carlo methods

    International Nuclear Information System (INIS)

    Barz, H.U.; Eichhorn, M.

    1988-01-01

    In this report a survey of modern neutron flux calculation procedures by means of Monte Carlo methods is given. Due to the progress in the development of variance reduction techniques and the improvement of computational techniques, this method is of increasing importance. The basic ideas in the application of Monte Carlo methods are briefly outlined. In more detail, various possibilities of non-analog games and estimation procedures are presented, and problems in the field of optimizing the variance reduction techniques are discussed. In the last part some important international Monte Carlo codes and the authors' own codes are listed and special applications are described. (author)

  3. Monotone numerical methods for finite-state mean-field games

    KAUST Repository

    Gomes, Diogo A.; Saude, Joao

    2017-01-01

    Here, we develop numerical methods for finite-state mean-field games (MFGs) that satisfy a monotonicity condition. MFGs are determined by a system of differential equations with initial and terminal boundary conditions. These non-standard conditions are the main difficulty in the numerical approximation of solutions. Using the monotonicity condition, we build a flow that is a contraction and whose fixed points solve the MFG, both for stationary and time-dependent problems. We illustrate our methods in a MFG modeling the paradigm-shift problem.

  5. Axial and Centrifugal Compressor Mean Line Flow Analysis Method

    Science.gov (United States)

    Veres, Joseph P.

    2009-01-01

    This paper describes a method to estimate key aerodynamic parameters of single and multistage axial and centrifugal compressors. This mean-line compressor code COMDES provides the capability of sizing single and multistage compressors quickly during the conceptual design process. Based on the compressible fluid flow equations and the Euler equation, the code can estimate rotor inlet and exit blade angles when run in the design mode. The design point rotor efficiency and stator losses are inputs to the code, and are modeled at off design. When run in the off-design analysis mode, it can be used to generate performance maps based on simple models for losses due to rotor incidence and inlet guide vane reset angle. The code can provide an improved understanding of basic aerodynamic parameters such as diffusion factor, loading levels and incidence, when matching multistage compressor blade rows at design and at part-speed operation. Rotor loading levels and relative velocity ratio are correlated to the onset of compressor surge. NASA Stage 37 and the three-stage NASA 74-A axial compressors were analyzed and the results compared to test data. The code has been used to generate the performance map for the NASA 76-B three-stage axial compressor featuring variable geometry. The compressor stages were aerodynamically matched at off-design speeds by adjusting the variable inlet guide vane and variable stator geometry angles to control the rotor diffusion factor and incidence angles.
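    The Euler equation underlying such a mean-line analysis relates stage work to the change in angular momentum across the rotor:

    ```latex
    \Delta h_0 = U_2\, C_{\theta 2} - U_1\, C_{\theta 1}
    ```

    where U is blade speed and C_θ the tangential (swirl) velocity at rotor inlet (1) and exit (2); combined with the compressible-flow relations, this is what lets the code back out blade angles in design mode.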

  6. Improved means and methods for expressing recombinant proteins

    NARCIS (Netherlands)

    Poolman, Berend; Martinez Linares, Daniel; Gul, Nadia

    2014-01-01

    The invention relates to the field of genetic engineering and the production of recombinant proteins in microbial host cells. Provided is a method for enhanced expression of a recombinant protein of interest in a microbial host cell, comprising providing a microbial host cell wherein the function of

  7. A DIRECT METHOD TO DETERMINE THE PARALLEL MEAN FREE PATH OF SOLAR ENERGETIC PARTICLES WITH ADIABATIC FOCUSING

    International Nuclear Information System (INIS)

    He, H.-Q.; Wan, W.

    2012-01-01

    The parallel mean free path of solar energetic particles (SEPs), which is determined by physical properties of SEPs as well as those of solar wind, is a very important parameter in space physics to study the transport of charged energetic particles in the heliosphere, especially for space weather forecasting. In space weather practice, it is necessary to find a quick approach to obtain the parallel mean free path of SEPs for a solar event. In addition, the adiabatic focusing effect caused by a spatially varying mean magnetic field in the solar system is important to the transport processes of SEPs. Recently, Shalchi presented an analytical description of the parallel diffusion coefficient with adiabatic focusing. Based on Shalchi's results, in this paper we provide a direct analytical formula as a function of parameters concerning the physical properties of SEPs and solar wind to directly and quickly determine the parallel mean free path of SEPs with adiabatic focusing. Since all of the quantities in the analytical formula can be directly observed by spacecraft, this direct method would be a very useful tool in space weather research. As applications of the direct method, we investigate the inherent relations between the parallel mean free path and various parameters concerning physical properties of SEPs and solar wind. Comparisons of parallel mean free paths with and without adiabatic focusing are also presented.
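    The standard kinematic link between the two transport quantities discussed here (a textbook relation, not specific to this paper):

    ```latex
    \lambda_{\parallel} = \frac{3\,\kappa_{\parallel}}{v}
    ```

    where κ∥ is the parallel diffusion coefficient and v the particle speed, so a focusing-corrected κ∥ translates directly into a parallel mean free path.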

  8. Children's activities and their meanings for parents: a mixed-methods study in six Western cultures.

    Science.gov (United States)

    Harkness, Sara; Zylicz, Piotr Olaf; Super, Charles M; Welles-Nyström, Barbara; Bermúdez, Moisés Ríos; Bonichini, Sabrina; Moscardino, Ughetta; Mavridis, Caroline Johnston

    2011-12-01

    Theoretical perspectives and research in sociology, anthropology, sociolinguistics, and cultural psychology converge in recognizing the significance of children's time spent in various activities, especially in the family context. Knowing how children's time is deployed, however, only gives us a partial answer to how children acquire competence; the other part must take into account the culturally constructed meanings of activities, from the perspective of those who organize and direct children's daily lives. In this article, we report on a study of children's routine daily activities and on the meanings that parents attribute to them in six Western middle-class cultural communities located in Italy, The Netherlands, Poland, Spain, Sweden, and the United States (N = 183). Using week-long time diaries kept by parents, we first demonstrate similarities as well as significant differences in children's daily routines across the cultural samples. We then present brief vignettes, "a day in the life", of children from each sample. Parent interviews were coded for themes in the meanings attributed to various activities. Excerpts from parent interviews, focusing on four major activities (meals, family time, play, school- or developmentally related activities), are presented to illustrate how cultural meanings and themes are woven into parents' organization and understanding of their children's daily lives. The results of this mixed-method approach provide a more reliable and nuanced picture of children's and families' daily lives than could be derived from either method alone.

  9. Stats means business

    CERN Document Server

    Buglear, John

    2010-01-01

    Stats Means Business is an introductory textbook written for Business, Hospitality and Tourism students who take modules on Statistics or Quantitative research methods. Recognising that most users of this book will have limited if any grounding in the subject, this book minimises technical language, provides clear definitions of key terms, and gives emphasis to interpretation rather than technique. Stats Means Business enables readers to: appreciate the importance of statistical analysis in business, hospitality and tourism; understand statis

  10. Methods for providing intermediates in the synthesis of atorvastatin.

    NARCIS (Netherlands)

    Dömling, Alexander Stephan Siegfried

    2016-01-01

The invention relates to the field of medicinal chemistry. In particular, it relates to methods for providing intermediates in the synthesis of atorvastatin, a competitive inhibitor of HMG-CoA reductase. Provided is a process for providing a compound having a Formula (I) or a pharmaceutically

  11. A spectral chart method for estimating the mean turbulent kinetic energy dissipation rate

    Science.gov (United States)

    Djenidi, L.; Antonia, R. A.

    2012-10-01

We present an empirical but simple and practical spectral chart method for determining the mean turbulent kinetic energy dissipation rate ⟨ε⟩ in a variety of turbulent flows. The method relies on the validity of the first similarity hypothesis of Kolmogorov (1941), or K41, which implies that spectra of velocity fluctuations scale on the kinematic viscosity ν and ⟨ε⟩ at large Reynolds numbers. However, the evidence, based on DNS spectra, points to this scaling being also valid at small Reynolds numbers, provided effects due to inhomogeneities in the flow are negligible. The method avoids the difficulty associated with estimating time or spatial derivatives of the velocity fluctuations. It also avoids using the second hypothesis of K41, which implies the existence of a -5/3 inertial subrange only when the Taylor microscale Reynolds number R_λ is sufficiently large. The method is in fact applied to the lower-wavenumber end of the dissipative range, thus avoiding most of the problems due to inadequate spatial resolution of the velocity sensors and noise associated with the higher-wavenumber end of this range. The use of spectral data (30 ≤ R_λ ≤ 400) in both passive and active grid turbulence, a turbulent mixing layer and the turbulent wake of a circular cylinder indicates that the method is robust and should lead to reliable estimates of ⟨ε⟩ in flows or flow regions where the first similarity hypothesis should hold; this would exclude, for example, the region near a wall.
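    To make the chart procedure concrete, the sketch below fits ⟨ε⟩ by collapsing a measured one-dimensional spectrum onto a Kolmogorov-normalised reference spectrum over the lower-wavenumber end of the dissipative range. The input arrays, the fitting window (0.1 < kη < 0.5) and the log-misfit are assumptions of this illustration, not the authors' published chart.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_epsilon(k, E, k_eta_ref, phi_ref, nu):
    """Find <eps> that best collapses a measured 1-D spectrum E(k) onto a
    reference Kolmogorov-scaled spectrum phi_ref(k*eta). The window targets
    the lower-wavenumber end of the dissipative range (sketch assumption)."""
    def misfit(log_eps):
        eps = np.exp(log_eps)
        eta = (nu**3 / eps) ** 0.25          # Kolmogorov length scale
        u_k = (nu * eps) ** 0.25             # Kolmogorov velocity scale
        phi = E / (u_k**2 * eta)             # Kolmogorov-normalised spectrum
        k_eta = k * eta
        sel = (k_eta > 0.1) & (k_eta < 0.5)
        ref = np.interp(k_eta[sel], k_eta_ref, phi_ref)
        return np.sum((np.log(phi[sel]) - np.log(ref)) ** 2)
    res = minimize_scalar(misfit, bounds=(np.log(1e-6), np.log(1e2)),
                          method="bounded")
    return np.exp(res.x)
```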

  12. Using means and methods of general physical training in education of bowlers

    Directory of Open Access Journals (Sweden)

    Fanigina O.U.

    2011-04-01

Full Text Available The main directions in the education of bowlers are identified. The means and methods of physical education that ensure the formation of the high-quality movements making up the main skill are described. Various means of general training that take into account the individual peculiarities of a bowler are shown. The principles for choosing general developmental exercises and the main directions of their influence on the development of different abilities are presented. Scientific and methodological support for physical education in the teaching-training process of children who practise bowling at sports schools has been created.

  13. Determination of beta attenuation coefficients by means of timing method

    International Nuclear Information System (INIS)

    Ermis, E.E.; Celiktas, C.

    2012-01-01

Highlights: ► Beta attenuation coefficients of absorber materials were found in this study. ► For this process, a new method (the timing method) was suggested. ► The obtained beta attenuation coefficients were compatible with the results from the traditional method. ► The timing method can be used to determine beta attenuation coefficients. - Abstract: Using a counting system with a plastic scintillation detector, beta linear and mass attenuation coefficients were determined for bakelite, Al, Fe and plexiglass absorbers by means of the timing method. To show the accuracy and reliability of the results obtained through this method, the coefficients were also found via the conventional energy method. The beta attenuation coefficients obtained from both methods were compared with each other and with literature values. Beta attenuation coefficients obtained through the timing method were found to be compatible with the values obtained from the conventional energy method and from the literature.
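    For illustration, the sketch below recovers a linear attenuation coefficient from transmission counts by a straight-line fit to ln(I) versus absorber thickness; the count rates, thicknesses and density are hypothetical, and the timing method itself (arrival-time spectra) is not reproduced here.

```python
import numpy as np

# Hypothetical net count rates I behind absorbers of thickness x.
x = np.array([0.00, 0.05, 0.10, 0.15, 0.20])        # absorber thickness, cm
I = np.array([1500.0, 980.0, 640.0, 420.0, 275.0])  # count rate, counts/s

# Beer-Lambert attenuation: I = I0 * exp(-mu * x), so ln(I) is linear in x.
slope, intercept = np.polyfit(x, np.log(I), 1)
mu_linear = -slope                 # linear attenuation coefficient, 1/cm
rho = 1.4                          # absorber density, g/cm^3 (hypothetical)
mu_mass = mu_linear / rho          # mass attenuation coefficient, cm^2/g
print(f"mu = {mu_linear:.2f} 1/cm, mu/rho = {mu_mass:.2f} cm^2/g")
```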

  14. Position detectors, methods of detecting position, and methods of providing positional detectors

    Science.gov (United States)

    Weinberg, David M.; Harding, L. Dean; Larsen, Eric D.

    2002-01-01

Position detectors, welding system position detectors, methods of detecting various positions, and methods of providing position detectors are described. In one embodiment, a welding system positional detector includes a base that is configured to engage and be moved along a curved surface of a welding work piece. At least one position detection apparatus is provided and is connected with the base and configured to measure the angular position of the detector relative to a reference vector. In another embodiment, a welding system positional detector includes a weld head and at least one inclinometer mounted on the weld head. The inclinometer is configured to develop positional data relative to a reference vector and the position of the weld head on a non-planar weldable work piece.

  15. Finite-State Mean-Field Games, Crowd Motion Problems, and its Numerical Methods

    KAUST Repository

    Machado Velho, Roberto

    2017-09-10

In this dissertation, we present two research projects, namely finite-state mean-field games and the Hughes model for the motion of crowds. In the first part, we describe finite-state mean-field games and some applications to the socio-economic sciences. Examples include paradigm shifts in the scientific community and consumer choice behavior in a free market. The corresponding finite-state mean-field game models are hyperbolic systems of partial differential equations, for which we propose and validate a new numerical method. Next, we consider the dual formulation of two-state mean-field games, and we discuss numerical methods for these problems. We then depict different computational experiments, exhibiting a variety of behaviors, including shock formation, lack of invertibility, and monotonicity loss. We conclude the first part of this dissertation with an investigation of the shock structure for two-state problems. In the second part, we consider a model for the movement of crowds proposed by R. Hughes in [56] and describe a numerical approach to solve it. This model comprises a Fokker-Planck equation coupled with an Eikonal equation with Dirichlet or Neumann data. We first establish a priori estimates for the solutions. Next, we consider radial solutions, and we identify a shock formation mechanism. Subsequently, we illustrate the existence of congestion, the breakdown of the model, and the trend to equilibrium. We also propose a new numerical method for the solution of Fokker-Planck equations and of systems of PDEs composed of a Fokker-Planck equation and a potential-type equation. Finally, we illustrate the use of the numerical method on both the Hughes model and mean-field games. We also depict cases such as the evacuation of a room and the movement of people around the Kaaba (Saudi Arabia).

  16. Specific binding-adsorbent assay method and test means

    International Nuclear Information System (INIS)

    1981-01-01

A description is given of an improved specific binding assay method and test means employing a nonspecific adsorbent for the substance to be determined, particularly hepatitis B surface (HBs) antigen, in its free state or additionally in the form of its immune complex. The invention is illustrated by 1) the radioimmunoadsorbent assay for HBs antigen, 2) the radioimmunoadsorbent assay for HBs antigen in the form of an immune complex with antibody, 3) a study of the adsorption characteristics of various anion exchange materials for HBs antigen, 4) the use of hydrophobic adsorbents in a radioimmunoadsorbent assay for HBs antigen and 5) the radioimmunoadsorbent assay for antibody to HBs antigen. The advantages of the present method for detecting HBs antigen compared to previous methods include the manufacturing advantages of eliminating the need for insolubilised anti-HBs and the advantages of a single incubation step, fewer manipulations, storability of adsorbent materials, increased sensitivity and versatility of detecting HBs antigen in the form of its immune complex if desired. (U.K.)

  17. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

The methods of interval estimation of the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculations of the intervals for the samples' means are carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small-sample situations. (authors)
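    A minimal sketch of two of the interval methods compared above, for a hypothetical sample of size 5: the classical t-based interval and a Bootstrap percentile interval. The data values and the number of resamples are assumptions of the illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = np.array([4.1, 5.3, 4.8, 5.9, 4.5])   # hypothetical n = 5 sample

# Classical t-based 95% interval for the mean.
m, se = sample.mean(), stats.sem(sample)
t_lo, t_hi = stats.t.interval(0.95, df=len(sample) - 1, loc=m, scale=se)

# Bootstrap percentile interval: resample with replacement many times.
boot_means = [rng.choice(sample, size=len(sample), replace=True).mean()
              for _ in range(10000)]
b_lo, b_hi = np.percentile(boot_means, [2.5, 97.5])
print(f"classical: ({t_lo:.2f}, {t_hi:.2f})  bootstrap: ({b_lo:.2f}, {b_hi:.2f})")
```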

  18. Subspace K-means clustering.

    Science.gov (United States)

    Timmerman, Marieke E; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla

    2013-12-01

    To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overlap of subspaces, the between-cluster variance, and the error variance. The study shows that the subspace K-means algorithm is sensitive to local minima but that the problem can be reasonably dealt with by using partitions of various cluster procedures as a starting point for the algorithm. Subspace K-means performs very well in recovering the true clustering across all conditions considered and appears to be superior to its competitor methods: K-means, reduced K-means, factorial K-means, mixtures of factor analyzers (MFA), and MCLUST. The best competitor method, MFA, showed a performance similar to that of subspace K-means in easy conditions but deteriorated in more difficult ones. Using data from a study on parental behavior, we show that subspace K-means analysis provides a rich insight into the cluster characteristics, in terms of both the relative positions of the clusters (via the centroids) and the shape of the clusters (via the within-cluster residuals).

  19. Potential shallow aquifers characterization through an integrated geophysical method: multivariate approach by means of k-means algorithms

    Directory of Open Access Journals (Sweden)

    Stefano Bernardinetti

    2017-06-01

Full Text Available The need to obtain a detailed hydrogeological characterization of the subsurface and its interpretation for groundwater resources management often requires the application of several complementary geophysical methods. The goal of the approach in this paper is to provide a unique model of the aquifer by synthesizing and optimizing the information provided by several geophysical methods. This approach greatly reduces the degree of uncertainty and subjectivity of the interpretation by exploiting the different physical and mechanical characteristics of the aquifer. The study area, in the municipality of Laterina (Arezzo, Italy), is a shallow basin filled by lacustrine and alluvial deposits (Pleistocene and Holocene epochs, Quaternary period), with alternating silt and sand with a variable content of gravel and clay, where the bottom is represented by arenaceous-pelitic rocks (Mt. Cervarola Unit, Tuscan Domain, Miocene epoch). This shallow basin constitutes the unconfined superficial aquifer to be exploited in the near future. To improve the geological model obtained from a detailed geological survey, we performed electrical resistivity and P-wave refraction tomographies along the same line in order to obtain different, independent and integrable data sets. For the seismic data the reflected events were also processed, a remarkable contribution to drawing the geologic setting. Through the k-means algorithm, we perform a cluster analysis of the bivariate data set to identify relationships between the two sets of variables. This algorithm makes it possible to identify clusters so as to minimize the dissimilarity within each cluster and maximize it among different clusters of the bivariate data set. The optimal number of clusters "K", corresponding to the identified geophysical facies, depends on the distribution of the multivariate data set and is estimated in this work with silhouettes. The result is an integrated tomography that shows a finite
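    The clustering step described above can be sketched with scikit-learn: standardise the bivariate (resistivity, velocity) data, run k-means for a range of K, and keep the K with the best mean silhouette. The synthetic data arrays and the candidate range of K are assumptions of this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-ins for the co-located tomography values.
rng = np.random.default_rng(0)
resistivity = rng.lognormal(3.0, 0.5, 500)   # ohm.m, one value per cell
velocity = rng.normal(1800.0, 300.0, 500)    # m/s, same cells

X = np.column_stack([resistivity, velocity])
Xs = StandardScaler().fit_transform(X)       # put both variables on one scale

# Pick K by maximising the mean silhouette over candidate values.
best_k, best_score = None, -1.0
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
    score = silhouette_score(Xs, labels)
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)
```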

  20. AN EFFICIENT INITIALIZATION METHOD FOR K-MEANS CLUSTERING OF HYPERSPECTRAL DATA

    Directory of Open Access Journals (Sweden)

    A. Alizade Naeini

    2014-10-01

Full Text Available K-means is definitely the most frequently used partitional clustering algorithm in the remote sensing community. Unfortunately, due to its gradient descent nature, this algorithm is highly sensitive to the initial placement of the cluster centers. This problem deteriorates for high-dimensional data such as hyperspectral remotely sensed imagery. To tackle this problem, in this paper the spectral signatures of the endmembers in the image scene are extracted and used as the initial positions of the cluster centers. For this purpose, in the first step, a Neyman–Pearson detection theory based eigen-thresholding method (i.e., the HFC method) is employed to estimate the number of endmembers in the image. Afterwards, the spectral signatures of the endmembers are obtained using the Minimum Volume Enclosing Simplex (MVES) algorithm. Eventually, these spectral signatures are used to initialize the k-means clustering algorithm. The proposed method is implemented on a hyperspectral dataset acquired by the ROSIS sensor, with 103 spectral bands, over the Pavia University campus, Italy. For comparative evaluation, two other commonly used initialization methods (i.e., the Bradley & Fayyad (BF) and Random methods) are implemented and compared. The confusion matrix, overall accuracy and Kappa coefficient are employed to assess the methods' performance. The evaluations demonstrate that the proposed solution outperforms the other initialization methods and can be applied for unsupervised classification of hyperspectral imagery for landcover mapping.
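    Seeding k-means with previously extracted endmember signatures, as proposed above, maps directly onto scikit-learn's `init` parameter. The synthetic `cube` and `endmembers` arrays below are stand-ins; the HFC and MVES steps that would produce them are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cube = rng.random((100, 100, 103))       # stand-in for a 103-band image cube
endmembers = rng.random((5, 103))        # stand-in for HFC + MVES output

pixels = cube.reshape(-1, cube.shape[-1])
km = KMeans(n_clusters=endmembers.shape[0],  # one cluster per endmember
            init=endmembers,                 # endmember signatures as seeds
            n_init=1)                        # deterministic start, no restarts
labels = km.fit_predict(pixels).reshape(cube.shape[:2])
```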

  1. Dependability validation by means of fault injection: method, implementation, application

    International Nuclear Information System (INIS)

    Arlat, Jean

    1990-01-01

This dissertation presents theoretical and practical results concerning the use of fault injection as a means for testing fault tolerance in the framework of the experimental dependability validation of computer systems. The dissertation first presents the state of the art of published work on fault injection, encompassing both hardware (fault simulation, physical fault injection) and software (mutation testing) issues. Next, the major attributes of fault injection (faults and their activation, experimental readouts and measures) are characterized taking into account: i) the abstraction levels used to represent the system during the various phases of its development (analytical, empirical and physical models), and ii) the validation objectives (verification and evaluation). An evaluation method is subsequently proposed that combines the analytical modeling approaches (Monte Carlo simulations, closed-form expressions, Markov chains) used for the representation of the fault occurrence process and the experimental fault injection approaches (fault simulation and physical injection) characterizing the error processing and fault treatment provided by the fault tolerance mechanisms. An experimental tool - MESSALINE - is then defined and presented. This tool enables physical faults to be injected in a hardware and software prototype of the system to be validated. Finally, the application of MESSALINE for testing two fault-tolerant systems possessing very dissimilar features and the utilization of the experimental results obtained - both as design feedback and for dependability measures evaluation - are used to illustrate the relevance of the method. (author) [fr

  2. Methods for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma

    Science.gov (United States)

    Esralew, Rachel A.; Smith, S. Jerrod

    2010-01-01

    Flow statistics can be used to provide decision makers with surface-water information needed for activities such as water-supply permitting, flow regulation, and other water rights issues. Flow statistics could be needed at any location along a stream. Most often, streamflow statistics are needed at ungaged sites, where no flow data are available to compute the statistics. Methods are presented in this report for estimating flow-duration and annual mean-flow statistics for ungaged streams in Oklahoma. Flow statistics included the (1) annual (period of record), (2) seasonal (summer-autumn and winter-spring), and (3) 12 monthly duration statistics, including the 20th, 50th, 80th, 90th, and 95th percentile flow exceedances, and the annual mean-flow (mean of daily flows for the period of record). Flow statistics were calculated from daily streamflow information collected from 235 streamflow-gaging stations throughout Oklahoma and areas in adjacent states. A drainage-area ratio method is the preferred method for estimating flow statistics at an ungaged location that is on a stream near a gage. The method generally is reliable only if the drainage-area ratio of the two sites is between 0.5 and 1.5. Regression equations that relate flow statistics to drainage-basin characteristics were developed for the purpose of estimating selected flow-duration and annual mean-flow statistics for ungaged streams that are not near gaging stations on the same stream. Regression equations were developed from flow statistics and drainage-basin characteristics for 113 unregulated gaging stations. Separate regression equations were developed by using U.S. Geological Survey streamflow-gaging stations in regions with similar drainage-basin characteristics. These equations can increase the accuracy of regression equations used for estimating flow-duration and annual mean-flow statistics at ungaged stream locations in Oklahoma. Streamflow-gaging stations were grouped by selected drainage
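    A minimal sketch of the drainage-area ratio method described above, including the 0.5-1.5 reliability check; the exponent and the example numbers are assumptions of the illustration, not values from the report.

```python
def drainage_area_ratio_estimate(q_gaged, a_gaged, a_ungaged, exponent=1.0):
    """Estimate a flow statistic at an ungaged site on the same stream by
    scaling the gaged statistic with the drainage-area ratio. The exponent
    is often taken as 1 but may be regionalised (assumption of this sketch)."""
    ratio = a_ungaged / a_gaged
    if not 0.5 <= ratio <= 1.5:
        raise ValueError("ratio outside 0.5-1.5; method considered unreliable")
    return q_gaged * ratio ** exponent

# Example: 50th-percentile flow of 12.0 cfs at a gage draining 210 mi^2,
# transferred to an ungaged site draining 180 mi^2 (hypothetical numbers).
print(drainage_area_ratio_estimate(12.0, 210.0, 180.0))
```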

  3. An initialization method for the k-means using the concept of useful nearest centers

    OpenAIRE

    Ismkhan, Hassan

    2017-01-01

The aim of the k-means is to minimize the sum of squared Euclidean distances from the mean (SSEDM) of each cluster. The k-means can effectively optimize this function, but it is too sensitive to initial centers (seeds). This paper proposes a method for initialization of the k-means using the concept of a useful nearest center for each data point.

  4. Evaluation of in-core measurements by means of principal components method

    International Nuclear Information System (INIS)

    Makai, M.; Temesvari, E.

    1996-01-01

Surveillance of a nuclear reactor core includes determination of the assemblies' three-dimensional (3D) power distribution. The power of each non-measured assembly is calculated from the measured values of other assemblies with the help of the principal components method (PCM), which is also presented. The measured values are interpolated for different geometrical coverings of the WWER-440 core. Different procedures have been elaborated and investigated, among which the most successful methods are discussed. Each method offers a self-consistent means to determine the numerical errors of the interpolated values. (author). 13 refs, 7 figs, 2 tabs

  5. An interpretive phenomenological method for illuminating the meaning of caring relationship.

    Science.gov (United States)

    Berg, Linda; Skott, Carola; Danielson, Ella

    2006-03-01

This study is part of a larger project in which the aim is to illuminate the meaning of the caring relationship between patients and nurses in daily nursing practice. Empirical studies in this area inspired by the interpretive phenomenological method are not common. The aim of this paper is to describe how an interpretive phenomenological method was used to illuminate the meaning of the phenomenon of the caring relationship in daily nursing practice. Data were collected during 16 nursing care proceedings using participant observation with field notes and, in addition, two interviews, one with a patient and one with a nurse. The interpretation moved back and forth between the whole and the parts in a dialectic process. Initial interpretive understanding of interviews and field notes, meaning units and comprehensive understanding are presented. Themes from the patient's interview were competence, lack of continuity, strain and vulnerability. Themes from the nurse's interview were competence and striving. The theme from the field notes was interaction towards a goal. The use of interpretive phenomenology offered an opportunity to learn to understand the meaning of the phenomenon of the caring relationship in daily nursing practice, with both strengths and limitations. This study gave an understanding of the phenomenon through the illumination of the patient's and the nurse's thoughts, feelings and actions in the nursing care proceedings, which led to more profound knowledge about how they together create an encounter through their unique competence.

  6. Determining Coastal Mean Dynamic Topography by Geodetic Methods

    Science.gov (United States)

    Huang, Jianliang

    2017-11-01

In geodesy, coastal mean dynamic topography (MDT) was traditionally determined by the spirit leveling technique. Advances in navigation satellite positioning (e.g., GPS) and geoid determination enable space-based leveling with an accuracy of about 3 cm at tide gauges. The recent CryoSat-2 satellite altimetry mission, with synthetic aperture radar (SAR) and SAR interferometric measurements, extends space-based leveling to the coastal ocean with the same accuracy. However, barriers remain in applying the two space-based geodetic methods for MDT determination over the coastal ocean because current geoid modeling focuses primarily on land, as a substitute for spirit leveling to realize the vertical datum.

  7. An improved K-means clustering method for cDNA microarray image segmentation.

    Science.gov (United States)

    Wang, T N; Li, T J; Shao, G F; Wu, S X

    2015-07-14

Microarray technology is a powerful tool for human genetic research and other biomedical applications. Numerous improvements to the standard K-means algorithm have been proposed to complete the image segmentation step. However, most of the previous studies classify the image into two clusters. In this paper, we propose a novel K-means algorithm, which first classifies the image into three clusters and then takes one of the three clusters as the background region and the other two clusters as the foreground region. The proposed method was evaluated on six different data sets. The analyses of accuracy, efficiency, expression values, special gene spots, and noise images demonstrate the effectiveness of our method in improving the segmentation quality.
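    A minimal sketch of the three-cluster idea on a grayscale spot image: cluster pixel intensities into three groups and treat the darkest cluster as background, the other two as foreground. The synthetic image is a stand-in; the paper's full pipeline is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img = rng.random((64, 64))               # stand-in for a grayscale spot image

pix = img.reshape(-1, 1).astype(float)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pix)

background_label = np.argmin(km.cluster_centers_.ravel())  # darkest centre
foreground = (km.labels_ != background_label).reshape(img.shape)
```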

  8. Non-destructive methods and means for quality control of structural products

    International Nuclear Information System (INIS)

    Dmitriev, V.V.

    1989-01-01

Progressive non-destructive methods (acoustic, magnetic, radiation, liquid penetrant) and means for controlling the quality of structural products are described. They allow the state of products and structures to be determined not only immediately after production but also directly at erected or reconstructed objects.

  9. Efficiency of Picture Description and Storytelling Methods in Language Sampling According to the Mean Length of Utterance Index

    Directory of Open Access Journals (Sweden)

    Salime Jafari

    2012-10-01

Full Text Available Background and Aim: Because standardized tests for Persian speakers with language disorders are limited, the collection of spontaneous language samples is an important part of language assessment protocols. Selection of a language sampling method that provides information on linguistic competence in a short time is therefore important. In this study, we compared language samples elicited with the picture description and storytelling methods in order to determine the effectiveness of the two methods. Methods: In this study 30 first-grade elementary school girls were selected by simple sampling. To investigate the picture description method, we used two illustrated stories with four pictures each. Language samples were collected through storytelling by having the children tell a famous children's story. To determine the effectiveness of the two methods, two indices, the duration of sampling and the mean length of utterance (MLU), were compared. Results: There was no significant difference between MLU in the description and storytelling methods (p>0.05). However, the duration of sampling was shorter in the picture description method than in the storytelling method (p<0.05). Conclusion: The findings show that the two methods of picture description and storytelling have the same potential in language sampling. Since the picture description method can provide language samples with the same complexity in a shorter time than storytelling, it can be used as a beneficial method for clinical purposes.
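    For reference, MLU is simply morphemes per utterance; a toy computation on a hypothetical, already-tokenised sample (real work needs proper morpheme counting):

```python
# Each inner list holds the morphemes of one utterance (hypothetical data).
utterances = [
    ["doggy", "run"],                  # 2 morphemes
    ["I", "want", "cookie"],           # 3 morphemes
    ["mommy", "go", "-ed", "home"],    # 4 morphemes (past-tense marker counted)
]
mlu = sum(len(u) for u in utterances) / len(utterances)
print(f"MLU = {mlu:.2f}")   # 3.00
```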

  10. Examination of Cast Iron Material Properties by Means of the Nanoindentation Method

    Directory of Open Access Journals (Sweden)

    Trytek A.

    2012-12-01

Full Text Available The paper presents the results of an examination, carried out by means of the nanoindentation method, of the material parameters of cast iron with a structure obtained under rapid resolidification conditions.

  11. Means and method for controlling the neutron output of a neutron generator tube

    International Nuclear Information System (INIS)

    1980-01-01

A specification is given for an energizing and regulating circuit for a gas-filled neutron generator tube consisting of a target, an ion source and a replenisher. The circuit consists of a power supply to provide a negative high voltage to the target and a target current corresponding to the neutron output of the tube, a constant current source, and control means connected to the power supply and to the constant current source, the control means being responsive to the target current so as to provide a portion of the constant current to the replenisher and thereby substantially regulate the neutron output of the tube. (author)

  12. A nonparametric mean-variance smoothing method to assess Arabidopsis cold stress transcriptional regulator CBF2 overexpression microarray data.

    Science.gov (United States)

    Hu, Pingsha; Maiti, Tapabrata

    2011-01-01

    Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.

  13. How should we best estimate the mean recency duration for the BED method?

    Directory of Open Access Journals (Sweden)

    John Hargrove

Full Text Available BED estimates of HIV incidence from cross-sectional surveys are obtained by restricting, to a fixed time T, the period over which incidence is estimated. The appropriate mean recency duration (Ω(T)) then refers to the time where BED optical density (OD) is less than a pre-set cut-off C, given the patient has been HIV positive for at most time T. Five methods, tested using data for postpartum women in Zimbabwe, provided similar estimates of Ω(T) for C = 0.8: (i) the ratio (r/s) of the number of BED-recent infections to all seroconversions over T = 365 days: 192 days [95% CI 168-216]; (ii) linear mixed modeling (LMM): 191 days [95% CI 174-208]; (iii) non-linear mixed modeling (NLMM): 196 days [95% CrI 188-204]; (iv) survival analysis (SA): 192 days [95% CI 168-216]; (v) graphical analysis: 193 days. NLMM estimates of Ω(T)--based on a biologically more appropriate functional relationship than LMM--resulted in the best fits to OD data, the smallest variance in estimates of Ω(T), and the best correspondence between BED and follow-up estimates of HIV incidence, for the same subjects over the same time period. SA and NLMM produced very similar estimates of Ω(T), but the coefficient of variation of the former was about 3 times as high. The r/s method requires uniformly distributed seroconversion events but is useful if data are available only from a single follow-up. The graphical method produces the most variable results, involves unsound methodology and should not be used to provide estimates of Ω(T). False-recent rates increased as a quadratic function of C: for incidence estimation C should thus be chosen as small as possible, consistent with an adequate resultant number of recent cases and accurate estimation of Ω(T). Inaccuracies in the estimation of Ω(T) should not now provide an impediment to incidence estimation.
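    The r/s method reduces to one line of arithmetic: scale the window T by the fraction of seroconverters still classified BED-recent. The counts below are hypothetical:

```python
T = 365          # window length, days
r = 40           # seroconverters classified BED-recent (OD < C), hypothetical
s = 76           # all observed seroconversions within T, hypothetical
omega = T * r / s
print(f"mean recency duration ~ {omega:.0f} days")   # ~192 days
```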

  14. Detection of bursts in neuronal spike trains by the mean inter-spike interval method

    Institute of Scientific and Technical Information of China (English)

    Lin Chen; Yong Deng; Weihua Luo; Zhen Wang; Shaoqun Zeng

    2009-01-01

Bursts are clusters of electrical spikes fired at high frequency, and they are among the most important properties in synaptic plasticity and information processing in the central nervous system. However, bursts are difficult to identify because bursting activities or patterns vary with physiological conditions or external stimuli. In this paper, a simple method to automatically detect bursts in spike trains is described. This method auto-adaptively sets a parameter (the mean inter-spike interval) according to intrinsic properties of the detected burst spike trains, without any arbitrary choices or any operator judgment. When the mean value of several successive inter-spike intervals is not larger than the parameter, a burst is identified. By this method, bursts can be automatically extracted from different bursting patterns of cultured neurons on multi-electrode arrays, as accurately as by visual inspection. Furthermore, significant changes of burst variables caused by an electrical stimulus have been found in the spontaneous activity of a neuronal network. These results suggest that the mean inter-spike interval method is robust for detecting changes in burst patterns and characteristics induced by environmental alterations.
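    A sketch of the auto-adaptive criterion described above: the threshold is the mean inter-spike interval (ISI) of the train itself, and a burst is taken here as a maximal run of successive ISIs not larger than that threshold. The minimum-spike requirement and the return format are assumptions of this simplified variant.

```python
import numpy as np

def detect_bursts(spike_times, min_spikes=3):
    """Flag bursts as maximal runs of ISIs at or below the train's mean ISI."""
    isi = np.diff(spike_times)
    thr = isi.mean()                       # auto-adaptive threshold
    bursts, start = [], None
    for i, gap in enumerate(isi):
        if gap <= thr and start is None:
            start = i                      # run of short ISIs begins
        elif gap > thr and start is not None:
            if i - start + 1 >= min_spikes:           # spikes in run = ISIs + 1
                bursts.append((spike_times[start], spike_times[i]))
            start = None
    if start is not None and len(isi) - start + 1 >= min_spikes:
        bursts.append((spike_times[start], spike_times[-1]))
    return bursts

spikes = np.array([0.00, 0.01, 0.02, 0.03, 0.50, 1.00, 1.01, 1.02, 1.60])
print(detect_bursts(spikes))   # [(0.0, 0.03), (1.0, 1.02)]
```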

  15. GLOBAL CLASSIFICATION OF DERMATITIS DISEASE WITH K-MEANS CLUSTERING IMAGE SEGMENTATION METHODS

    OpenAIRE

Prafulla N. Aerkewar & Dr. G. H. Agrawal

    2018-01-01

The objective of this paper is to present a global technique for the classification of different dermatitis disease lesions using the k-means clustering image segmentation method. The word global is used in the sense that all dermatitis diseases presenting skin lesions on the body are classified into four categories using k-means image segmentation and the nntool of Matlab. Through the image segmentation technique and nntool, one can analyze and study the segmentation properties of the skin lesions that occur in...

  16. A New Soft Computing Method for K-Harmonic Means Clustering.

    Science.gov (United States)

    Yeh, Wei-Chang; Jiang, Yunzhi; Chen, Yee-Fen; Chen, Zhe

    2016-01-01

    The K-harmonic means clustering algorithm (KHM) is a new clustering method used to group data such that the sum of the harmonic averages of the distances between each entity and all cluster centroids is minimized. Because it is less sensitive to initialization than K-means (KM), many researchers have recently been attracted to studying KHM. In this study, the proposed iSSO-KHM is based on an improved simplified swarm optimization (iSSO) and integrates a variable neighborhood search (VNS) for KHM clustering. As evidence of the utility of the proposed iSSO-KHM, we present extensive computational results on eight benchmark problems. From the computational results, the comparison appears to support the superiority of the proposed iSSO-KHM over previously developed algorithms for all experiments in the literature.
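    The objective function that KHM (and hence iSSO-KHM) minimises can be written compactly; the sketch below evaluates it for given centroids. The distance power p is the usual KHM parameter, and its default value here is an assumption.

```python
import numpy as np

def khm_objective(X, centers, p=2):
    """K-harmonic means objective: the sum over data points of K times the
    harmonic mean of the distances to all K centroids. X is (n, d),
    centers is (K, d)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # (n, K)
    d = np.maximum(d, 1e-12)            # guard against division by zero
    k = centers.shape[0]
    return np.sum(k / np.sum(1.0 / d**p, axis=1))
```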

  17. A novel method of providing a library of n-mers or biopolymers

    DEFF Research Database (Denmark)

    2012-01-01

    The present invention relates to a method of providing a library of n-mer sequences, wherein the library is composed of an n-mer sequence. Also the invention concerns a method of providing a library of biopolymer sequences having one or more n-mers in common. Further provided are specific primers...

  18. Variance-to-mean method generalized by linear difference filter technique

    International Nuclear Information System (INIS)

    Hashimoto, Kengo; Ohsaki, Hiroshi; Horiguchi, Tetsuo; Yamane, Yoshihiro; Shiroya, Seiji

    1998-01-01

The conventional variance-to-mean method (Feynman-α method) seriously suffers from divergence of the variance under such transient conditions as a reactor power drift. Strictly speaking, then, the use of the Feynman-α method is restricted to a steady state. To apply the method to more practical uses, it is desirable to overcome this kind of difficulty. For this purpose, we propose the use of a higher-order difference filter technique to reduce the effect of the reactor power drift, and derive several new formulae taking account of the filtering. The capability of the proposed formulae was demonstrated through experiments in the Kyoto University Critical Assembly. The experimental results indicate that the divergence of the variance can be effectively suppressed by the filtering technique, and that a higher-order filter becomes necessary with increasing variation rate in power.
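    A minimal sketch of the idea: compute the variance-to-mean excess Y from raw gate counts and from first-differenced counts. For uncorrelated counts Var(c[i] - c[i-1]) = 2 Var(c), which motivates the factor 2 below; this first-order normalisation is an assumption of the sketch, not one of the paper's higher-order formulae.

```python
import numpy as np

def feynman_y(counts):
    """Conventional variance-to-mean (Feynman-alpha) excess: Y = Var/Mean - 1."""
    return counts.var(ddof=1) / counts.mean() - 1.0

def feynman_y_diff(counts):
    """Variance-to-mean excess after a first-order difference filter, which
    suppresses a slow power drift (factor 2 from Var(diff) = 2 Var for
    uncorrelated counts; sketch assumption)."""
    d = np.diff(counts)
    return d.var(ddof=1) / (2.0 * counts.mean()) - 1.0

rng = np.random.default_rng(1)
lam = 100 + np.linspace(0, 40, 5000)         # slowly drifting reactor power
gates = rng.poisson(lam).astype(float)
print(feynman_y(gates), feynman_y_diff(gates))  # raw Y inflated by drift; filtered Y ~ 0
```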

  19. Exponential mean-square stability of two classes of theta Milstein methods for stochastic delay differential equations

    Science.gov (United States)

    Rouz, Omid Farkhondeh; Ahmadian, Davood; Milev, Mariyan

    2017-12-01

This paper establishes the exponential mean square stability of two classes of theta Milstein methods, namely the split-step theta Milstein (SSTM) method and the stochastic theta Milstein (STM) method, for stochastic delay differential equations (SDDEs). We consider the SDDE problem under a coupled monotone condition on the drift and diffusion coefficients, as well as a necessary linear growth condition on the last term of the theta Milstein method. It is proved that the SSTM method with θ ∈ [0, ½] can recover the exponential mean square stability of the exact solution under some restrictive conditions on the stepsize, whereas for θ ∈ (½, 1] the stability results hold for any stepsize. Then, based on the stability results for the SSTM method, we examine the exponential mean square stability of the STM method and obtain stability results similar to those of the SSTM method. In the numerical section the figures show the validity of our claims.
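    A sketch of a stochastic theta Milstein step for the scalar linear test SDDE dX = [aX(t) + bX(t-τ)]dt + [cX(t) + dX(t-τ)]dW. The coefficients, θ and stepsize are illustrative, and the Milstein correction keeps only the non-delayed derivative term, a simplification relative to the schemes analysed in the paper.

```python
import numpy as np

a, b, c, d = -4.0, 1.0, 0.5, 0.2
theta, tau, h, T = 0.75, 1.0, 0.01, 10.0
m = round(tau / h)                      # delay expressed in steps
n = round(T / h)

rng = np.random.default_rng(0)
X = np.empty(n + m + 1)
X[: m + 1] = 1.0                        # constant initial history on [-tau, 0]
for i in range(m, n + m):
    dW = rng.normal(0.0, np.sqrt(h))
    g = c * X[i] + d * X[i - m]         # diffusion coefficient at t_n
    rhs = (X[i]
           + h * theta * b * X[i + 1 - m]           # delayed part of implicit drift
           + h * (1 - theta) * (a * X[i] + b * X[i - m])
           + g * dW
           + 0.5 * c * g * (dW**2 - h))             # simplified Milstein correction
    X[i + 1] = rhs / (1.0 - h * theta * a)          # solve the linear implicit step
print(X[-1])
```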

  20. IP2P K-means: an efficient method for data clustering on sensor networks

    Directory of Open Access Journals (Sweden)

    Peyman Mirhadi

    2013-03-01

Full Text Available Many wireless sensor network applications require data gathering as the most important part of their operations. There are increasing demands for innovative methods to improve energy efficiency and to prolong the network lifetime. Clustering is considered an efficient topology control method in wireless sensor networks, which can increase network scalability and lifetime. This paper presents a method, IP2P K-means (Improved P2P K-means), which uses efficient leveling in its clustering approach, reduces false labeling and restricts the necessary communication among the various sensors, which obviously saves more energy. The proposed method is examined in Network Simulator Ver. 2 (NS2), and the preliminary results show that the algorithm works effectively and relatively more precisely.

  1. Trajectory Optimization of Spray Painting Robot for Complex Curved Surface Based on Exponential Mean Bézier Method

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2017-01-01

Full Text Available Automated tool trajectory planning for spray painting robots is still a challenging problem, especially for large complex curved surfaces. This paper presents a new method of trajectory optimization for spray painting robots based on the exponential mean Bézier method. The definition and three theorems of exponential mean Bézier curves are discussed. Then a spatial painting path generation method based on exponential mean Bézier curves is developed. A new simple algorithm for trajectory optimization on complex curved surfaces is introduced. A golden section method is adopted to calculate the values. The experimental results illustrate that the exponential mean Bézier curves enhanced the flexibility of path planning, and the trajectory optimization algorithm achieved satisfactory performance. This method can also be extended to other applications.

  2. DETERMINATION OF HYDRAULIC TURBINE EFFICIENCY BY MEANS OF THE CURRENT METER METHOD

    Directory of Open Access Journals (Sweden)

    PURECE C.

    2016-12-01

Full Text Available The paper presents the methodology used for determining the efficiency of a low-head Kaplan hydraulic turbine with a short converging intake. The measurement method used was the current-meter method, the only measurement method recommended by the IEC 41 standard for flow measurement in this case. The paper also presents the methodology used for measuring the flow by means of the current-meter method and the various procedures for calculating the flow. In the last part the paper presents the flow measurements carried out on the Fughiu HPP hydraulic turbines to determine the actual operating efficiency.
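    The flow calculation behind the current-meter method follows the velocity-area principle; with the discharge in hand, the efficiency compares shaft power with the hydraulic power ρgQH. All numbers below are hypothetical.

```python
RHO, G = 998.0, 9.81          # water density (kg/m^3), gravity (m/s^2)

velocities = [1.9, 2.2, 2.3, 2.1, 1.8]   # current-meter readings, m/s
areas      = [2.0, 2.0, 2.0, 2.0, 2.0]   # panel areas assigned to each meter, m^2

Q = sum(v * A for v, A in zip(velocities, areas))   # discharge, m^3/s
head = 12.5                                          # net head, m
P_shaft = 2.30e6                                     # measured shaft power, W
eta = P_shaft / (RHO * G * Q * head)                 # hydraulic efficiency
print(f"Q = {Q:.1f} m^3/s, efficiency = {eta:.3f}")
```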

  3. Meaning in couples relationships.

    Directory of Open Access Journals (Sweden)

    Rodrigues T.F.

    2014-09-01

    Full Text Available Based on psycholinguistics and L. Vygotsky’s (2007 theories on sign, meaning and sense categories, as later discussed by A. Leontiev (2004, 2009, we present a case study that focuses on the intricacies of a love relationship for a woman who remained in a painful marriage. Interview material is presented in a Relational-Historical Psychology theoretical framework to provide central categories of meaning and sense. This is understood as a privileged method for apprehending the uniqueness of a human being. To segment the qualitative material, we used the “Analysis of the Nuclei of Meanings for the Apprehension of the Constitution of Sense,” by Aguiar and Ozella (2006, 2013. This approach seeks to discriminate the meanings and senses that constitute the content of a speech sample.

  4. CAMSHIFT IMPROVEMENT WITH MEAN-SHIFT SEGMENTATION, REGION GROWING, AND SURF METHOD

    Directory of Open Access Journals (Sweden)

    Ferdinan Ferdinan

    2013-10-01

Full Text Available The CAMSHIFT algorithm has been widely used in object tracking. CAMSHIFT utilizes color features as the object model. Thus, the original CAMSHIFT may fail when the object color is similar to the background color. In this study, we propose a CAMSHIFT tracker combined with mean-shift segmentation, region growing, and SURF in order to improve the tracking accuracy. The mean-shift segmentation and region growing are applied in the object localization phase to extract the important parts of the object. Hue distance, saturation, and value are used to calculate the Bhattacharyya distance to judge whether the tracked object is lost. Once the object is judged lost, SURF is used to find the lost object, and CAMSHIFT can retrack the object. The object tracking system is built with OpenCV. Some measurements of accuracy have been done using frame-based metrics. We use the BoBoT (Bonn Benchmark on Tracking) dataset to measure the accuracy of the system. The results demonstrate that CAMSHIFT combined with mean-shift segmentation, region growing, and the SURF method has higher accuracy than the previous methods.
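    The core CAMSHIFT loop, on which the proposed tracker builds, can be sketched with OpenCV's hue-histogram back-projection. The video file name and the initial window (x, y, w, h) are assumptions, and the mean-shift segmentation, region-growing and SURF re-detection stages are not reproduced.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")        # hypothetical input sequence
x, y, w, h = 100, 80, 40, 60               # hypothetical initial object window

ok, frame = cap.read()
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])   # hue histogram
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

window = (x, y, w, h)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_box, window = cv2.CamShift(back_proj, window, criteria)
    pts = np.int32(cv2.boxPoints(rot_box))          # rotated box of the target
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
```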

  5. Aircrew Exposure To Cosmic Radiation Evaluated By Means Of Several Methods; Results Obtained In 2006

    International Nuclear Information System (INIS)

    Ploc, Ondrej; Spurny, Frantisek; Jadrnickova, Iva; Turek, Karel

    2008-01-01

Routine evaluation of aircraft crew exposure to cosmic radiation in the Czech Republic is performed by means of a calculation method. Measurements onboard aircraft serve as a control tool for the routine method, as well as a possibility to compare results obtained by several methods. The following methods were used in 2006: (1) a mobile dosimetry unit (MDU) of type Liulin--a spectrometer of the energy deposited in a Si detector; (2) two types of LET spectrometers based on chemically etched track detectors (TED); (3) two types of thermoluminescent detectors; and (4) two calculation methods. The MDU currently represents one of the most reliable pieces of equipment for evaluation of aircraft crew exposure to cosmic radiation. It is an active device which measures total energy depositions (E_dep) in the semiconductor unit and, after appropriate calibration, is able to give separate estimates of the non-neutron and neutron-like components of H*(10). This contribution consists mostly of results acquired by means of this equipment; measurements with passive detectors and calculations are mentioned for comparison. Reasonably good agreement among all data sets could be stated.

  6. Short-term variations in core surface flow resolved from an improved method of calculating observatory monthly means

    DEFF Research Database (Denmark)

    Olsen, Nils; Whaler, K. A.; Finlay, Chris

    2014-01-01

Monthly means of the magnetic field measurements taken by ground observatories are a useful data source for studying temporal changes of the core magnetic field and the underlying core flow. However, the usual way of calculating monthly means, as the arithmetic mean over all days (geomagnetic quiet as well as disturbed) and all local times (day and night), may result in contributions of external (magnetospheric and ionospheric) origin in the ordinary monthly means (OMMs). Such contamination makes monthly means less favourable for core studies. We calculated revised monthly means (RMMs), and their uncertainties, from observatory hourly means using robust means and after removal of external field predictions, using an improved method for characterising the magnetospheric ring current. The utility of the new method for calculating observatory monthly means is demonstrated by inverting their first

  7. A spectral chart method for estimating the mean turbulent kinetic energy dissipation rate

    Energy Technology Data Exchange (ETDEWEB)

    Djenidi, L.; Antonia, R.A. [The University of Newcastle, School of Engineering, Newcastle, NSW (Australia)

    2012-10-15

We present an empirical but simple and practical spectral chart method for determining the mean turbulent kinetic energy dissipation rate ⟨ε⟩ in a variety of turbulent flows. The method relies on the validity of the first similarity hypothesis of Kolmogorov (C. R. (Doklady) Acad. Sci. URSS, NS 30:301-305, 1941), or K41, which implies that spectra of velocity fluctuations scale on the kinematic viscosity ν and ⟨ε⟩ at large Reynolds numbers. However, the evidence, based on the DNS spectra, points to this scaling being also valid at small Reynolds numbers, provided effects due to inhomogeneities in the flow are negligible. The method avoids the difficulty associated with estimating time or spatial derivatives of the velocity fluctuations. It also avoids using the second hypothesis of K41, which implies the existence of a -5/3 inertial subrange only when the Taylor microscale Reynolds number R_λ is sufficiently large. The method is in fact applied to the lower-wavenumber end of the dissipative range, thus avoiding most of the problems due to inadequate spatial resolution of the velocity sensors and noise associated with the higher-wavenumber end of this range. The use of spectral data (30 ≤ R_λ ≤ 400) in both passive and active grid turbulence, a turbulent mixing layer and the turbulent wake of a circular cylinder indicates that the method is robust and should lead to reliable estimates of ⟨ε⟩ in flows or flow regions where the first similarity hypothesis should hold; this would exclude, for example, the region near a wall. (orig.)

  8. Analysis of S-box in Image Encryption Using Root Mean Square Error Method

    Science.gov (United States)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-07-01

The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in literature, which include advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence in distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to existing algebraic and statistical analysis already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of the root mean square error analysis in statistics has proven to be effective in determining the difference between original data and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of the root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
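    The RMSE measure itself is straightforward; a minimal sketch on hypothetical images (a random stand-in "ciphertext" gives RMSE near 104 for independent uniform 8-bit data):

```python
import numpy as np

def rmse(original, encrypted):
    """Root mean square error between two images of equal shape; larger values
    indicate the ciphertext departs further from the plaintext."""
    a = original.astype(np.float64)
    b = encrypted.astype(np.float64)
    return np.sqrt(np.mean((a - b) ** 2))

rng = np.random.default_rng(0)
plain = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)    # hypothetical
cipher = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)   # stand-in ciphertext
print(f"RMSE = {rmse(plain, cipher):.1f}")
```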

  9. Mean Field Type Control with Congestion (II): An Augmented Lagrangian Method

    Energy Technology Data Exchange (ETDEWEB)

    Achdou, Yves, E-mail: achdou@ljll.univ-paris-diderot.fr; Laurière, Mathieu [Univ. Paris Diderot, Sorbonne Paris Cité, Laboratoire Jacques-Louis Lions, UMR 7598, UPMC, CNRS (France)

    2016-12-15

    This work deals with a numerical method for solving a mean-field type control problem with congestion. It is the continuation of an article by the same authors, in which suitably defined weak solutions of the system of partial differential equations arising from the model were discussed and existence and uniqueness were proved. Here, the focus is put on numerical methods: a monotone finite difference scheme is proposed and shown to have a variational interpretation. Then an Alternating Direction Method of Multipliers for solving the variational problem is addressed. It is based on an augmented Lagrangian. Two kinds of boundary conditions are considered: periodic conditions and more realistic boundary conditions associated to state constrained problems. Various test cases and numerical results are presented.
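    For readers unfamiliar with the algorithmic pattern, here is a generic augmented-Lagrangian ADMM on a toy lasso problem; it illustrates only the x-update / z-update / multiplier-update cycle and is not the paper's mean-field scheme.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Solve min 0.5||Ax - b||^2 + lam*||z||_1 s.t. x = z by ADMM."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))      # factor once, reuse each step
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))             # x-update: quadratic minimisation
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0)  # soft threshold
        u = u + x - z                             # dual (multiplier) update
    return z

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(admm_lasso(A, b, lam=1.0), 2))     # roughly recovers the sparse signal
```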

  10. Mean-field approximation for spacing distribution functions in classical systems

    Science.gov (United States)

    González, Diego Luis; Pimpinelli, Alberto; Einstein, T. L.

    2012-01-01

    We propose a mean-field method to calculate approximately the spacing distribution functions p(n)(s) in one-dimensional classical many-particle systems. We compare our method with two other commonly used methods, the independent interval approximation and the extended Wigner surmise. In our mean-field approach, p(n)(s) is calculated from a set of Langevin equations, which are decoupled by using a mean-field approximation. We find that in spite of its simplicity, the mean-field approximation provides good results in several systems. We offer many examples illustrating that the three previously mentioned methods give a reasonable description of the statistical behavior of the system. The physical interpretation of each method is also discussed.

  11. Organizational culture associated with provider satisfaction.

    Science.gov (United States)

    Scammon, Debra L; Tabler, Jennifer; Brunisholz, Kimberly; Gren, Lisa H; Kim, Jaewhan; Tomoaia-Cotisel, Andrada; Day, Julie; Farrell, Timothy W; Waitzman, Norman J; Magill, Michael K

    2014-01-01

Organizational culture is key to the successful implementation of major improvement strategies. Transformation to a patient-centered medical home (PCMH) is such an improvement strategy, requiring a shift from provider-centric care to team-based care. Because this shift may impact provider satisfaction, it is important to understand the relationship between provider satisfaction and organizational culture, specifically in the context of practices that have transformed to a PCMH model. This was a cross-sectional study of surveys conducted in 2011 among providers and staff in 10 primary care clinics implementing their version of a PCMH: Care by Design. Measures included the Organizational Culture Assessment Instrument and the American Medical Group Association provider satisfaction survey. Providers were most satisfied with quality of care (mean, 4.14; scale of 1-5) and interactions with patients (mean, 4.12) and were least satisfied with time spent working (mean, 3.47), paperwork (mean, 3.45), and compensation (mean, 3.35). Culture profiles differed across clinics, with family/clan and hierarchical cultures the most common. Significant correlations (P ≤ .05) between provider satisfaction and clinic culture archetypes included family/clan culture negatively correlated with administrative work; entrepreneurial culture positively correlated with the Time Spent Working dimension; market/rational culture positively correlated with how practices were facing economic and strategic challenges; and hierarchical culture negatively correlated with the Relationships with Staff and Resource dimensions. Provider satisfaction is an important metric for assessing experiences with features of a PCMH model. Identification of clinic-specific culture archetypes and archetype associations with provider satisfaction can help inform practice redesign. Attention to effective methods for changing organizational culture is recommended.

  12. A Theoretically Consistent Method for Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2014-01-01

We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others. The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCCs), cepstral mean-subtracted MFCCs (CMS-MFCCs), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than the logarithmic one usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non
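    As a point of reference, plain (non-MMSE) CMS-MFCC, velocity and acceleration features can be computed as below; the file name and parameters are assumptions, and the MMSE estimator itself is not reproduced.

```python
import librosa

# Hypothetical utterance; 13 cepstral coefficients is a common choice.
y, sr = librosa.load("speech.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)       # (13, frames)

cms_mfcc = mfcc - mfcc.mean(axis=1, keepdims=True)       # cepstral mean subtraction
velocity = librosa.feature.delta(cms_mfcc)               # first-order deltas
acceleration = librosa.feature.delta(cms_mfcc, order=2)  # second-order deltas
```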

  13. A comparison of latent class, K-means, and K-median methods for clustering dichotomous data.

    Science.gov (United States)

    Brusco, Michael J; Shireman, Emilie; Steinley, Douglas

    2017-09-01

    The problem of partitioning a collection of objects based on their measurements on a set of dichotomous variables is a well-established problem in psychological research, with applications including clinical diagnosis, educational testing, cognitive categorization, and choice analysis. Latent class analysis and K-means clustering are popular methods for partitioning objects based on dichotomous measures in the psychological literature. The K-median clustering method has recently been touted as a potentially useful tool for psychological data and might be preferable to its close neighbor, K-means, when the variable measures are dichotomous. We conducted simulation-based comparisons of the latent class, K-means, and K-median approaches for partitioning dichotomous data. Although all 3 methods proved capable of recovering cluster structure, K-median clustering yielded the best average performance, followed closely by latent class analysis. We also report results for the 3 methods within the context of an application to transitive reasoning data, in which it was found that the 3 approaches can exhibit profound differences when applied to real data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
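    A minimal sketch of K-median clustering for dichotomous data: L1 (here Hamming) assignment with elementwise-median centre updates. Initialisation, tie handling and the toy data are assumptions of this illustration, not the simulation design of the paper.

```python
import numpy as np

def k_median_binary(X, k, iters=50, seed=0):
    """Cluster 0/1 data by alternating L1-distance assignment and
    elementwise-median centre updates (a K-median scheme)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)  # (n, k) L1
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = np.median(members, axis=0)
    return labels, centers

X = np.array([[1,1,0,0],[1,1,1,0],[0,0,1,1],[0,1,1,1],[1,0,0,0],[0,0,0,1]])
labels, centers = k_median_binary(X, k=2)
print(labels)
```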

  14. Estimating Mean and Variance Through Quantiles : An Experimental Comparison of Different Methods

    NARCIS (Netherlands)

    Moors, J.J.A.; Strijbosch, L.W.G.; van Groenendaal, W.J.H.

    2002-01-01

If estimates of mean and variance are needed and only experts' opinions are available, the literature agrees that it is wise behaviour to ask only for their (subjective) estimates of quantiles: from these, estimates of the desired parameters are calculated. Quite a number of methods have been
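    One classical example of such a method is the three-point "Swanson's rule", which weights the experts' 10th, 50th and 90th percentiles; the variance line assumes near-normality. Neither choice is necessarily the method the paper prefers.

```python
p10, p50, p90 = 80.0, 100.0, 130.0        # hypothetical expert quantiles

mean_est = 0.3 * p10 + 0.4 * p50 + 0.3 * p90   # Swanson's 30-40-30 rule
sigma_est = (p90 - p10) / 2.563                # normal: z(0.9) - z(0.1) = 2.563
var_est = sigma_est ** 2
print(mean_est, var_est)
```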

  15. Simple Method to Estimate Mean Heart Dose From Hodgkin Lymphoma Radiation Therapy According to Simulation X-Rays

    Energy Technology Data Exchange (ETDEWEB)

    Nimwegen, Frederika A. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Cutter, David J. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Oxford Cancer Centre, Oxford University Hospitals NHS Trust, Oxford (United Kingdom); Schaapveld, Michael [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Rutten, Annemarieke [Department of Radiology, The Netherlands Cancer Institute, Amsterdam (Netherlands); Kooijman, Karen [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Krol, Augustinus D.G. [Department of Radiation Oncology, Leiden University Medical Center, Leiden (Netherlands); Janus, Cécile P.M. [Department of Radiation Oncology, Erasmus MC Cancer Center, Rotterdam (Netherlands); Darby, Sarah C. [Clinical Trial Service Unit, University of Oxford, Oxford (United Kingdom); Leeuwen, Flora E. van [Department of Psychosocial Research, Epidemiology, and Biostatistics, The Netherlands Cancer Institute, Amsterdam (Netherlands); Aleman, Berthe M.P., E-mail: b.aleman@nki.nl [Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam (Netherlands)

    2015-05-01

Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case–control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method.

  16. Simple method to estimate mean heart dose from Hodgkin lymphoma radiation therapy according to simulation X-rays.

    Science.gov (United States)

    van Nimwegen, Frederika A; Cutter, David J; Schaapveld, Michael; Rutten, Annemarieke; Kooijman, Karen; Krol, Augustinus D G; Janus, Cécile P M; Darby, Sarah C; van Leeuwen, Flora E; Aleman, Berthe M P

    2015-05-01

    To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case-control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method. This simpler method may produce a

  17. Method for providing uranium with a protective copper coating

    Science.gov (United States)

    Waldrop, Forrest B.; Jones, Edward

    1981-01-01

    The present invention is directed to a method for providing uranium metal with a protective coating of copper. Uranium metal is subjected to a conventional cleaning operation wherein oxides and other surface contaminants are removed, followed by etching and pickling operations. The copper coating is provided by first electrodepositing a thin and relatively porous flash layer of copper on the uranium in a copper cyanide bath. The resulting copper-layered article is then heated in air or an inert atmosphere to volatilize and drive off the volatile material underlying the copper flash layer. After the heating step, an adherent and essentially non-porous layer of copper is electrodeposited on the flash layer to provide an adherent, multi-layer copper coating which is essentially impervious to corrosion by most gases.

  18. Numerical Solutions of the Mean-Value Theorem: New Methods for Downward Continuation of Potential Fields

    Science.gov (United States)

    Zhang, Chong; Lü, Qingtian; Yan, Jiayong; Qi, Guang

    2018-04-01

    Downward continuation can enhance small-scale sources and improve resolution. Nevertheless, the common methods have difficulty producing optimal results because of divergence and instability. We derive the mean-value theorem for potential fields, which can serve as the theoretical basis of some data processing and interpretation. Based on numerical solutions of the mean-value theorem, we present convergent and stable downward continuation methods that use the first-order vertical derivatives and their upward continuation. By applying one of our methods to both synthetic and real cases, we show that our method is stable, convergent and accurate. Meanwhile, compared with the fast Fourier transform Taylor series method and the integrated second vertical derivative Taylor series method, our method has very little boundary effect and remains stable in the presence of noise. We find that the features of the fading anomalies emerge properly in our downward continuation with respect to the original fields at the lower heights.
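
    Upward continuation, one building block of the scheme described above, has a classical FFT implementation: multiply the field's spectrum by exp(-2*pi*|k|*h). A minimal sketch follows (grid spacings and height are hypothetical; the paper's mean-value-theorem solver itself is more involved).

    ```python
    import numpy as np

    def upward_continue(field, dx, dy, height):
        """Upward-continue a gridded potential field by `height` using the
        classical spectral operator exp(-2*pi*|k|*height); stable because the
        operator attenuates, unlike naive downward continuation."""
        ny, nx = field.shape
        kx = np.fft.fftfreq(nx, d=dx)          # wavenumbers in cycles per unit length
        ky = np.fft.fftfreq(ny, d=dy)
        k = np.hypot(*np.meshgrid(kx, ky))     # radial wavenumber magnitude
        return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-2 * np.pi * k * height)))
    ```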

  19. Variance of a potential of mean force obtained using the weighted histogram analysis method.

    Science.gov (United States)

    Cukier, Robert I

    2013-11-27

    A potential of mean force (PMF) that provides the free energy of a thermally driven system along some chosen reaction coordinate (RC) is a useful descriptor of systems characterized by complex, high dimensional potential energy surfaces. Umbrella sampling window simulations use potential energy restraints to provide more uniform sampling along a RC so that potential energy barriers that would otherwise make equilibrium sampling computationally difficult can be overcome. Combining the results from the different biased window trajectories can be accomplished using the Weighted Histogram Analysis Method (WHAM). Here, we provide an analysis of the variance of a PMF along the reaction coordinate. We assume that the potential restraints used for each window lead to Gaussian distributions for the window reaction coordinate densities and that the data sampling in each window is from an equilibrium ensemble sampled so that successive points are statistically independent. Also, we assume that neighbor window densities overlap, as required in WHAM, and that further-than-neighbor window density overlap is negligible. Then, an analytic expression for the variance of the PMF along the reaction coordinate at a desired level of spatial resolution can be generated. The variance separates into a sum over all windows with two kinds of contributions: One from the variance of the biased window density normalized by the total biased window density and the other from the variance of the local (for each window's coordinate range) PMF. Based on the desired spatial resolution of the PMF, the former variance can be minimized relative to that from the latter. The method is applied to a model system that has features of a complex energy landscape evocative of a protein with two conformational states separated by a free energy barrier along a collective reaction coordinate. The variance can be constructed from data that is already available from the WHAM PMF construction.
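
    For context, the PMF that this variance analysis applies to comes out of a self-consistent WHAM iteration over the biased window histograms. Below is a minimal WHAM sketch under the abstract's assumptions (equilibrium sampling, overlapping neighbor windows); it is not the paper's variance derivation, and bin edges, bias energies and units are up to the caller.

    ```python
    import numpy as np

    def wham(hist, bias, n_samples, beta=1.0, tol=1e-7, max_iter=50000):
        """Minimal WHAM self-consistency loop.
        hist[i, b]   : counts of window i falling in reaction-coordinate bin b
        bias[i, b]   : umbrella bias energy of window i evaluated at bin b
        n_samples[i] : total number of samples drawn in window i
        Returns the unbiased bin probabilities and the PMF (shifted to min 0)."""
        n_samples = np.asarray(n_samples, dtype=float)
        f = np.zeros(hist.shape[0])             # per-window free-energy shifts
        pooled = hist.sum(axis=0)
        for _ in range(max_iter):
            denom = (n_samples[:, None] * np.exp(beta * (f[:, None] - bias))).sum(axis=0)
            p = pooled / denom
            p /= p.sum()
            f_new = -np.log((p[None, :] * np.exp(-beta * bias)).sum(axis=1)) / beta
            f_new -= f_new[0]                   # fix the arbitrary gauge
            if np.max(np.abs(f_new - f)) < tol:
                f = f_new
                break
            f = f_new
        pmf = -np.log(p) / beta
        return p, pmf - pmf.min()
    ```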

  20. A Method for Calculating the Mean Orbits of Meteor Streams

    Science.gov (United States)

    Voloshchuk, Yu. I.; Kashcheev, B. L.

    An examination of the published catalogs of orbits of meteor streams and of a large number of works devoted to the selection of streams, their analysis and interpretation, showed that elements of stream orbits are calculated, as a rule, as arithmetic (sometimes weighted) sample means. On the basis of these means, a search for parent bodies, a study of the evolution of swarms generating these streams, an analysis of one-dimensional and multidimensional distributions of these elements, etc., are performed. We show that systematic errors in the estimates of elements of the mean orbits are present in each of the catalogs. These errors are caused by the formal averaging of orbital elements over the sample, while ignoring the fact that they represent not only correlated but dependent quantities, with interrelations between them that are in most cases nonlinear. Numerous examples are given of such inaccuracies, in particular cases where the "mean orbit of the stream" recorded by ground-based techniques does not cross the Earth's orbit. We suggest a computation algorithm in which the averaging over the sample is carried out at the initial stage of the calculation of the mean orbit, and only for the variables required for subsequent calculations. After this, the known astrometric formulas are used to sequentially calculate all other parameters of the stream, considered now as a standard orbit. Variance analysis is used to estimate the errors in orbital elements of the streams when their orbits are obtained by averaging the orbital elements of the meteoroids forming the stream without taking into account their interdependence. The results obtained in this analysis indicate the behavior of systematic errors in the elements of orbits of meteor streams. As an example, the effect of the incorrect computation method on the distribution of elements of the stream orbits close to the orbits of asteroids of the Apollo, Aten, and Amor groups (AAA asteroids) is examined.
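
    The pitfall the authors describe, averaging dependent and nonlinearly related elements, is easy to demonstrate: once two elements are dependent, the mean of a derived quantity differs from the quantity derived from the means. A small synthetic demonstration with hypothetical semi-major axes and eccentricities:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    a = np.clip(rng.normal(2.5, 0.6, 5000), 1.0, None)   # semi-major axis (AU)
    e = np.clip(1.0 - 0.9 / a + rng.normal(0.0, 0.02, a.size), 0.0, 0.99)  # dependent on a
    q = a * (1.0 - e)                                    # perihelion distance per orbit

    print(q.mean())                     # average of the derived quantity (~0.9 AU)
    print(a.mean() * (1.0 - e.mean()))  # derived from element means: noticeably larger
    ```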

  1. Provider Payment Trends and Methods in the Massachusetts Health Care System

    OpenAIRE

    Allison Barrett; Timothy Lake

    2010-01-01

    This report investigates provider payment methods in Massachusetts. Payments include fee-for-service, the predominant model; global payments, which pay providers a single fee for all or most required services during a contract period; and pay-for-performance models, which layer quality incentives onto payments.

  2. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    Science.gov (United States)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods that have high accuracy and time efficiency are necessary for the filtering process. One method that has been widely known and applied in clustering is K-Means clustering. In its application, the determination of the initial cluster centers greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means clustering with the starting centroids determined by a random method and by a KD-Tree method. On a data set of 1,000 student academic records used to classify potential dropouts, random initial centroid determination yielded an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree yielded an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means clustering with KD-Tree initial centroid selection has better accuracy than K-Means clustering with random initial centroid selection.
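
    The effect of seeding on the final SSE is easy to reproduce. The sketch below contrasts random seeding with k-means++, used here as a stand-in for a smarter initializer (the paper's KD-Tree seeding is not part of scikit-learn), on synthetic data.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # Synthetic stand-in for the student data set in the paper (hypothetical).
    X, _ = make_blobs(n_samples=1000, centers=5, cluster_std=2.5, random_state=7)

    for init in ("random", "k-means++"):
        km = KMeans(n_clusters=5, init=init, n_init=1, random_state=7).fit(X)
        print(f"{init:10s} SSE = {km.inertia_:.1f}")   # inertia_ is the within-cluster SSE
    ```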

  3. Method and apparatus for simultaneous determination of fluid mass flow rate, mean velocity and density

    International Nuclear Information System (INIS)

    Hamel, W.R.

    1984-01-01

    This invention relates to a new method and new apparatus for determining fluid mass flow rate and density. In one aspect of the invention, the fluid is passed through a straight cantilevered tube in which transient oscillation has been induced, thus generating Coriolis damping forces on the tube. The decay rate and frequency of the resulting damped oscillation are measured, and the fluid mass flow rate and density are determined therefrom. In another aspect of the invention, the fluid is passed through the cantilevered tube while an electrically powered device imparts steady-state harmonic excitation to the tube. This generates Coriolis tube-damping forces which are dependent on the mass flow rate of the fluid. Means are provided to respond to incipient flow-induced changes in the amplitude of vibration by changing the power input to the excitation device as required to sustain the original amplitude of vibration. The fluid mass flow rate and density are determined from the required change in power input. The invention provides stable, rapid, and accurate measurements. It does not require bending of the fluid flow

  4. Performance Analysis of Combined Methods of Genetic Algorithm and K-Means Clustering in Determining the Value of Centroid

    Science.gov (United States)

    Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    The determination of centroids in the K-Means algorithm directly affects the quality of the clustering results. Determining centroids using random numbers has many weaknesses. The GenClust algorithm, which combines a Genetic Algorithm with K-Means, uses the genetic algorithm to determine the centroid of each cluster. The standard GenClust algorithm uses 50% of its chromosomes obtained through deterministic calculations and 50% obtained by generating random numbers. This study modifies the GenClust algorithm so that the chromosomes used are 100% obtained through deterministic calculations. The study compares performance, expressed as Mean Square Error, of centroid determination in the K-Means method using the GenClust method, the modified GenClust method, and classic K-Means.

  5. Correction for tissue attenuation in radionuclide gastric emptying studies: a comparison of a lateral image method and a geometric mean method

    Energy Technology Data Exchange (ETDEWEB)

    Collins, P.J.; Chatterton, B.E. (Royal Adelaide Hospital (Australia)); Horowitz, M.; Shearman, D.J.C. (Adelaide Univ. (Australia). Dept. of Medicine)

    1984-08-01

    Variation in depth of radionuclide within the stomach may result in significant errors in the measurement of gastric emptying if no attempt is made to correct for gamma-ray attenuation by the patient's tissues. A method of attenuation correction, which uses a single posteriorly located scintillation camera and correction factors derived from a lateral image of the stomach, was compared with a two-camera geometric mean method, in phantom studies and in five volunteer subjects. A meal of 100 g of ground beef containing 99mTc-labelled chicken liver, and 150 ml of water was used in the in vivo studies. In all subjects the geometric mean data showed that solid food emptied in two phases: an initial lag period, followed by a linear emptying phase. Using the geometric mean data as a standard, the anterior camera overestimated the 50% emptying time (T50) by an average of 15% (range 5-18) and the posterior camera underestimated this parameter by 15% (4-22). The posterior data, corrected for attenuation using the lateral image method, underestimated the T50 by 2% (-7 to +7). The difference in the distances of the proximal and distal stomach from the posterior detector was large in all subjects (mean 5.7 cm, range 3.9-7.4).
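
    The geometric mean correction works because opposed-view count rates depend on source depth in opposite directions: for a source at depth d in a patient of thickness D, one view sees S*exp(-mu*d) and the opposed view S*exp(-mu*(D-d)), so their geometric mean, S*exp(-mu*D/2), no longer depends on d. A one-function sketch:

    ```python
    import numpy as np

    def geometric_mean_counts(anterior, posterior):
        """Depth-independent activity index from opposed detector views:
        sqrt(anterior * posterior) cancels the exp(+/- mu*d) source-depth factors."""
        return np.sqrt(np.asarray(anterior, float) * np.asarray(posterior, float))
    ```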

  6. Method for Providing Semiconductors Having Self-Aligned Ion Implant

    Science.gov (United States)

    Neudeck, Philip G. (Inventor)

    2014-01-01

    A method is disclosed that provides a self-aligned nitrogen implant particularly suited for a Junction Field Effect Transistor (JFET) semiconductor device, preferably comprised of silicon carbide (SiC). This self-aligned nitrogen implant allows for the realization of durable and stable electrical functionality of high-temperature transistors such as JFETs. The method implements the self-aligned nitrogen implant, having predetermined dimensions, at a particular step in the fabrication process, so that the SiC junction field effect transistors are capable of operating electrically and continuously at 500 °C for over 10,000 hours in an air ambient with less than a 10% change in operational transistor parameters.

  7. An optimized ensemble local mean decomposition method for fault detection of mechanical components

    International Nuclear Information System (INIS)

    Zhang, Chao; Chen, Shuai; Wang, Jianguo; Li, Zhixiong; Hu, Chao; Zhang, Xiaogang

    2017-01-01

    Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often heavily depends on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimum set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD with a certain amplitude of the added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e. noise amplitude and bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (i.e. the rolling bearing, gear and diesel engine) under faulty operation conditions. (paper)

  8. An optimized ensemble local mean decomposition method for fault detection of mechanical components

    Science.gov (United States)

    Zhang, Chao; Li, Zhixiong; Hu, Chao; Chen, Shuai; Wang, Jianguo; Zhang, Xiaogang

    2017-03-01

    Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often heavily depends on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimum set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD with a certain amplitude of the added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e. noise amplitude and bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (i.e. the rolling bearing, gear and diesel engine) under faulty operation conditions.
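
    A plausible reading of the Relative RMSE index is an RMSE between the raw signal and a decomposition-based reconstruction, normalized by the RMS of the raw signal; the exact definition in the paper may differ, so treat this sketch as an assumption.

    ```python
    import numpy as np

    def relative_rmse(signal, reconstruction):
        """RMSE between the raw vibration signal and a candidate reconstruction,
        normalized by the RMS of the raw signal (assumed definition)."""
        signal = np.asarray(signal, float)
        err = np.sqrt(np.mean((signal - np.asarray(reconstruction, float)) ** 2))
        return err / np.sqrt(np.mean(signal ** 2))
    ```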

  9. To the Question of Information Security and Providing State and Municipal Services by Means of the Internet

    Directory of Open Access Journals (Sweden)

    Alexander A. Galushkin

    2015-09-01

    Full Text Available In the present article the author investigates the interconnected questions of information security and the provision of state and municipal services by means of the global Internet. The author analyzes the opinions of a number of leading Russian and foreign experts and scientists. In summary, the author concludes that information security and the protection of human rights require the implementation of rules of law that answer to modern realities, as well as fruitful work by law enforcement and supervisory authorities on improving law application practice.

  10. Isotope decay equations solved by means of a recursive method

    International Nuclear Information System (INIS)

    Grant, Carlos

    2009-01-01

    The isotope decay equations have been solved using forward finite differences taking small time steps, among other methods. This is the case in the cell code WIMS, where it is assumed, among other simplifications, that the concentrations of all fissionable isotopes remain constant during the integration interval. Even though the problem could be solved by running through a logical tree, all algorithms used for the resolution of these equations adopted an iterative programming formulation. That happened because nearly all computer languages used until recently by scientific programmers did not support recursion, as was the case with old versions of FORTRAN or BASIC. Nowadays an integral form of the depletion equations is also used in Monte Carlo simulation. In this paper we propose another programming solution using a recursive algorithm, running through all descendants of each isotope and adding their contributions to all isotopes in each generation. The only assumption made for this solution is that fluxes remain constant during the whole time step. The recursive process is interrupted when a stable isotope is reached or the calculated contributions are smaller than a given precision. These equations can be solved by an exact analytic method, which can run into problems when circular loops appear for isotopes with alpha decay, and by a more general polynomial method. Both methods are shown. (author)
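
    A minimal sketch of such a recursion for pure decay (no flux terms), using the Bateman solution for each parent-to-descendant path; decay constants along a path are assumed distinct, and the names, constants and branching ratios below are hypothetical.

    ```python
    import math

    def bateman(lams, t):
        """Fraction of one initial parent atom that sits in the last member of a
        linear chain with distinct decay constants `lams` (parent first) at time t."""
        n = len(lams)
        total = 0.0
        for i in range(n):
            denom = math.prod(lams[j] - lams[i] for j in range(n) if j != i)
            total += math.exp(-lams[i] * t) / denom
        return math.prod(lams[:-1]) * total

    def accumulate(iso, path_lams, weight, t, result, eps=1e-12):
        """Walk the decay tree recursively: every isotope's concentration is the
        sum of Bateman terms over all paths from the initial parent, weighted by
        the branching ratios along the path. Recursion stops at a stable isotope
        or when a path's contribution falls below `eps`."""
        path = path_lams + [iso["lam"]]
        contrib = weight * bateman(path, t)
        result[iso["name"]] = result.get(iso["name"], 0.0) + contrib
        if iso["lam"] == 0.0 or contrib < eps:
            return
        for branching_ratio, child in iso["children"]:
            accumulate(child, path, weight * branching_ratio, t, result, eps)

    # Hypothetical two-member chain: A (lambda = 1e-3 1/s) -> B (stable), t = 1 hour.
    chain = {"name": "A", "lam": 1e-3,
             "children": [(1.0, {"name": "B", "lam": 0.0, "children": []})]}
    conc = {}
    accumulate(chain, [], 1.0, 3600.0, conc)
    print(conc)   # A decays to ~2.7%, B accumulates to ~97.3%
    ```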

  11. An Initialization Method Based on Hybrid Distance for k-Means Algorithm.

    Science.gov (United States)

    Yang, Jie; Ma, Yan; Zhang, Xiangfen; Li, Shunbao; Zhang, Yuping

    2017-11-01

    The traditional k-means algorithm has been widely used as a simple and efficient clustering method. However, the performance of this algorithm is highly dependent on the selection of initial cluster centers, so the method adopted for choosing them is extremely important. In this letter, we redefine the density of points according to the number of their neighbors, as well as the distance between points and their neighbors. In addition, we define a new distance measure that considers both Euclidean distance and density. Based on that, we propose an algorithm for selecting initial cluster centers that can dynamically adjust the weighting parameter. Furthermore, we propose a new internal clustering validation measure, the clustering validation index based on the neighbors (CVN), which can be exploited to select the optimal result among multiple clustering results. Experimental results show that the proposed algorithm outperforms existing initialization methods on real-world data sets and demonstrates the adaptability of the proposed algorithm to data sets with various characteristics.
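
    A sketch of the general density-plus-distance idea follows; the paper's exact density definition, distance measure and dynamic weighting differ, and the neighbor count and fixed weight here are purely illustrative.

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def density_distance_init(X, k, n_neighbors=10, w=0.5):
        """Pick k initial centers: start from the densest point, then repeatedly
        take the point maximizing a blend of (rescaled) distance-to-chosen-centers
        and (rescaled) neighbor-based density."""
        dist, _ = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X).kneighbors(X)
        density = 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)   # inverse mean neighbor distance
        density = (density - density.min()) / (np.ptp(density) + 1e-12)
        centers = [int(np.argmax(density))]
        for _ in range(k - 1):
            d2c = np.min(np.linalg.norm(X[:, None, :] - X[centers][None, :, :], axis=2), axis=1)
            d2c = (d2c - d2c.min()) / (np.ptp(d2c) + 1e-12)
            score = w * d2c + (1.0 - w) * density
            score[centers] = -np.inf                         # never re-pick a center
            centers.append(int(np.argmax(score)))
        return X[centers]
    ```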

  12. A method of providing a barrier in a fracture-containing system

    DEFF Research Database (Denmark)

    2014-01-01

    The present invention relates to a method of providing a barrier in a fracture-containing system, comprising: i) providing a treatment fluid comprising: a) a base fluid; b) an elastomeric material, wherein said elastomeric material comprises at least one polymer capable of crosslinking into an elastomer, and c) at least one crosslinking agent; ii) placing the treatment fluid in a fracture-containing system; iii) allowing the elastomeric material to crosslink with itself to form a barrier in said fracture-containing system; wherein the elastomeric material and/or the crosslinking agent are of neutral buoyancy with regard to the base fluid. The invention is contemplated to have utility not only in the oil-drilling industry but also in the plugging of fractures in sewer drains, pipelines, etc.

  13. Two new methods to determine the adhesion by means of internal friction in materials covered with films

    International Nuclear Information System (INIS)

    Colorado, H. A.; Ghilarducci, A. A.; Salva, H. R.

    2006-01-01

    Two new models are proposed to determine the adhesion energy by means of the internal friction (IF) technique in thin-film layered materials. For the first method it is necessary to determine the enthalpy by means of the IF technique; from this, the adhesion work has been determined with experimental data. For the second method it is necessary to perform IF tests at constant temperature. (Author)

  14. Contemporary methods and means of monitoring for Karabach region's forest ecosystems

    International Nuclear Information System (INIS)

    Aliyev, N.R.; Abdurahmanova, I.G.; Askerov, R.A.

    2010-01-01

    Full Text: The article analyzes the changing condition of the forests of the Karabach region. The negative influence on this region, as on other regions of Azerbaijan, of mass cutting of forests, driven by the need for energy, by the wood industry and by living conditions, and also of the results of military operations, is highlighted. Effective methods for obtaining operative information on the ecological condition of forest ecosystems on the basis of modern technical means are offered.

  15. Analysis of residual stresses on the transverse beam of a casting stand by means of drilling method

    Directory of Open Access Journals (Sweden)

    P. Frankovský

    2014-10-01

    Full Text Available The presented paper demonstrates the application of the drilling method in the analysis of residual stresses on the transverse beam of a casting stand. In the initial stage of the analysis, strains were determined for the individual drilling steps in an area selected by means of numerical analysis. The drilling was carried out gradually in 0.5 mm steps up to a depth of 5 mm, while the diameter of the drilled hole was 3.2 mm. The analysis used the drilling device RS-200, the strain indicator P3 and an SGD 1-RY21-3/120 strain gauge. The paper presents the development of residual stresses throughout the depth of the drilled hole, determined according to standard ASTM E837-01 by means of the integral method and the power series method.

  16. Mapping Mixed Methods Research: Methods, Measures, and Meaning

    Science.gov (United States)

    Wheeldon, J.

    2010-01-01

    This article explores how concept maps and mind maps can be used as data collection tools in mixed methods research to combine the clarity of quantitative counts with the nuance of qualitative reflections. Based on more traditional mixed methods approaches, this article details how pre/post concept maps can be used to design qualitative…

  17. Mean transit time image - a new method of analyzing brain perfusion studies

    Energy Technology Data Exchange (ETDEWEB)

    Szabo, Z.; Ritzl, F.

    1983-05-01

    Point-by-point calculation of the mean transit time based on a gamma fit was used to analyze brain perfusion studies in a vertex view. The algorithm and preliminary results in the normal brain and in different stages of cerebral perfusion abnormality (ischemia, stroke, migraine, tumor, abscess) are demonstrated. In contrast to the traditional methods using fixed, a priori defined regions of interest, this type of mapping of relative regional cerebral perfusion shows the irregular outlines of the disturbance more clearly. Right-to-left activity ratios in the arterial part of the time-activity curves showed significant correlation with the mean transit time ratios (Q1 = 1.185 - 0.192 Qa, n=38, r=0.716, P<0.001).

  18. Decision Making in Uncertain Rural Scenarios by means of Fuzzy TOPSIS Method

    Directory of Open Access Journals (Sweden)

    Eva Armero

    2011-01-01

    Full Text Available A great deal of uncertain information that is difficult to quantify is taken into account by farmers and experts in the enterprise when making decisions. We are interested in the problems of implementing a rabbit-breeding farm. One of the first decisions to be taken concerns the design or type of structure for housing the animals, which is determined by the level of environmental control to be maintained in its interior. A farmer was consulted, and his answers were incorporated into the analysis by means of the fuzzy TOPSIS methodology. The main purpose of this paper is to study the problem by means of the fuzzy TOPSIS method for multicriteria decision making, when the information is given in linguistic terms.

  19. A diabetic retinopathy detection method using an improved pillar K-means algorithm.

    Science.gov (United States)

    Gogula, Susmitha Valli; Divakar, Ch; Satyanarayana, Ch; Rao, Allam Appa

    2014-01-01

    The paper presents a new approach for medical image segmentation. Exudates are a visible sign of diabetic retinopathy, the major reason for vision loss in patients with diabetes; if the exudates extend into the macular area, blindness may occur. Automated detection of exudates will assist ophthalmologists in early diagnosis. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies K-means clustering to the image segmentation after being optimized by the Pillar algorithm; pillars are placed so as to withstand pressure, and analogously the initial centroids are spread far apart from each other. The improved Pillar algorithm can optimize K-means clustering for image segmentation in terms of precision and computation time. The proposed approach is evaluated by comparing it with K-means and Fuzzy C-means on a medical image. Using this method, identification of dark spots in the retina becomes easier, and the proposed algorithm is applied to diabetic retinal images of all stages to identify hard and soft exudates, whereas the existing Pillar K-means is more appropriate for brain MRI images. The proposed system helps doctors to identify the problem at an early stage and can suggest better drugs for preventing further retinal damage.

  20. The effect of skin thermistor fixation method on weighted mean skin temperature

    International Nuclear Information System (INIS)

    Tyler, Christopher James

    2011-01-01

    The purpose of this study was to investigate the effect of three different skin thermistor attachment methods on weighted mean skin temperature (WMTsk) at three different ambient temperatures (∼24 °C (TEMP); ∼30 °C (WARM); ∼35 °C (HOT)), compared to uncovered thermistors. Eleven non-acclimated volunteers completed three 5 min bouts of submaximal cycling (∼70 W mechanical work), one at each environmental condition in sequential order (TEMP, WARM, HOT). One thermistor was fixed to the sternal notch whilst four skin thermistors were spaced at 3 cm intervals on each of the limb sites as per the formula of Ramanathan (1964 J. Appl. Physiol. 19 531-3). Each thermistor was either held against the skin uncovered (UC) or attached with surgical acrylic film dressing (T); surgical acrylic film dressing and hypoallergenic surgical tape (TT); or surgical acrylic film dressing, hypoallergenic surgical tape and surgical bandage (TTC). The calculated WMTsk was significantly lower in UC compared to T, TT and TTC (p < 0.001, d = 0.46), in T compared to TT and TTC (p < 0.001, d = 0.33), and in TT compared to TTC (p < 0.001, d = 0.25). The mean differences (across the three temperatures) were +0.27 ± 0.34 °C, +0.52 ± 0.35 °C and +0.82 ± 0.34 °C for T, TT and TTC, respectively. The results demonstrate that the method of skin thermistor attachment can result in significant over-estimation of weighted mean skin temperature

  1. Study on variance-to-mean method as subcriticality monitor for accelerator driven system operated with pulse-mode

    International Nuclear Information System (INIS)

    Yamauchi, Hideto; Kitamura, Yasunori; Yamane, Yoshihiro; Misawa, Tsuyoshi; Unesaki, Hironobu

    2003-01-01

    Two types of variance-to-mean methods for a subcritical system driven by a periodic pulsed neutron source were developed, and their experimental examination was performed with the Kyoto University Critical Assembly and a pulsed neutron generator. As a result, it was demonstrated that the prompt neutron decay constant could be measured by these methods. From this fact, it was concluded that the present variance-to-mean methods have potential for use in a subcriticality monitor for a future accelerator-driven system operated in pulse mode. (author)
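
    For orientation, the classical (continuous-source) variance-to-mean analysis computes the Feynman excess Y(T) = Var/Mean - 1 over a range of gate widths T and fits Y(T) = Y_inf * (1 - (1 - exp(-alpha*T)) / (alpha*T)) to extract the prompt decay constant alpha; the periodic pulsed-source variants in the paper modify this scheme. A sketch of the gate-width scan (bin width and counting data are placeholders):

    ```python
    import numpy as np

    def feynman_y(counts, gate_widths):
        """Variance-to-mean excess Y(T) = Var/Mean - 1 for several gate widths,
        computed by re-binning a finely binned neutron counting record.
        counts      : 1-D array of counts per base time bin
        gate_widths : gate lengths expressed as numbers of base bins"""
        y = []
        for w in gate_widths:
            n = (len(counts) // w) * w
            gates = counts[:n].reshape(-1, w).sum(axis=1)
            y.append(gates.var(ddof=1) / gates.mean() - 1.0)
        return np.array(y)
    ```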

  2. Evaluation of three methods of DNA extraction from paraffin-embedded material for the amplification of genomic DNA by means of the PCR technique

    Directory of Open Access Journals (Sweden)

    MESQUITA Ricardo Alves

    2001-01-01

    Full Text Available There are several protocols reported in the literature for the extraction of genomic DNA from formalin-fixed paraffin-embedded samples. Genomic DNA is utilized in molecular analyses, including PCR. This study compares three different methods for the extraction of genomic DNA from formalin-fixed paraffin-embedded (inflammatory fibrous hyperplasia) and non-formalin-fixed (normal oral mucosa) samples: phenol with enzymatic digestion, and silica with and without enzymatic digestion. The amplification of DNA by means of the PCR technique was carried out with primers for exon 7 of human keratin type 14. Amplicons were analyzed by means of electrophoresis in an 8% polyacrylamide gel with 5% glycerol, followed by silver-staining visualization. The phenol/enzymatic digestion and the silica/enzymatic digestion methods provided amplicons from both tissue samples. The methods described are a potential aid in the establishment of the histopathologic diagnosis and in retrospective studies with archival paraffin-embedded samples.

  3. 47 CFR 51.329 - Notice of network changes: Methods for providing notice.

    Science.gov (United States)

    2010-10-01

    … 47 Telecommunication 3 2010-10-01 false Notice of network changes: Methods for providing notice. § 51.329 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) … (1) Filing a public notice with the Commission; or (2) providing public notice through industry fora…

  4. A novel mean-centering method for normalizing microRNA expression from high-throughput RT-qPCR data

    Directory of Open Access Journals (Sweden)

    Wylie Dennis

    2011-12-01

    Full Text Available Abstract Background Normalization is critical for accurate gene expression analysis. A significant challenge in the quantitation of gene expression from biofluid samples is the inability to quantify RNA concentration prior to analysis, underscoring the need for robust normalization tools for this sample type. In this investigation, we evaluated various methods of normalization to determine the optimal approach for quantifying microRNA (miRNA) expression from biofluids and tissue samples when using the TaqMan® Megaplex™ high-throughput RT-qPCR platform with low RNA inputs. Findings We compared seven normalization methods in the analysis of variation of miRNA expression from biofluid and tissue samples. We developed a novel variant of the common mean-centering normalization strategy, herein referred to as mean-centering restricted (MCR) normalization, which is adapted to the TaqMan Megaplex RT-qPCR platform, but is likely applicable to other high-throughput RT-qPCR-based platforms. Our results indicate that MCR normalization performs comparably to or better than both standard mean-centering and other normalization methods. We also propose an extension of this method to be used when migrating biomarker signatures from Megaplex to singleplex RT-qPCR platforms, based on the identification of a small number of normalizer miRNAs that closely track the mean of expressed miRNAs. Conclusions We developed the MCR method for normalizing miRNA expression from biofluid samples when using the TaqMan Megaplex RT-qPCR platform. Our results suggest that normalization based on the mean of all fully observed (fully detected) miRNAs minimizes technical variance in normalized expression values, and that a small number of normalizer miRNAs can be selected when migrating from Megaplex to singleplex assays. In our study, we find that normalization methods that focus on a restricted set of miRNAs tend to perform better than methods that focus on all miRNAs, including
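
    The core of mean-centering is a per-sample shift. A sketch of the restricted variant's central idea, subtracting each sample's mean Cq computed over only those miRNAs detected in every sample; the matrix layout and detection mask are assumptions, not the paper's exact procedure.

    ```python
    import numpy as np

    def mean_center_restricted(cq, detected):
        """Mean-centering over fully detected features.
        cq       : (n_miRNA, n_sample) matrix of Cq values
        detected : same-shape boolean mask, True where the assay gave a valid Cq
        Subtracts each sample's mean Cq computed over miRNAs detected in all
        samples, removing between-sample technical shifts."""
        fully_detected = detected.all(axis=1)
        return cq - cq[fully_detected].mean(axis=0)[None, :]
    ```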

  5. Means, methods and performances of the AREVA's HTR compact controls

    International Nuclear Information System (INIS)

    Banchet, J.; Guillermier, P.; Tisseur, D.; Vitali, M. P.

    2008-01-01

    In AREVA's HTR development program, the reactor plant is composed of a prismatic core containing graphite cylindrical fuel elements, called compacts, in which TRISO particles are dispersed. Starting from its past compacting process, revamped through the use of state-of-the-art equipment, CERCA, a wholly owned AREVA NP subsidiary, was able to recover the quality of past compact production. The recovered compacting process is composed of the following manufacturing steps: graphite matrix granulation, mixing of the obtained granulates with particles, compacting, and calcining at low pressure and temperature. To adapt this past process to new manufacturing equipment, non-destructive examination tests were carried out to assess compact quality, the latter being assessed via in-house developed equipment and methods at each step of the design of experiments. As for the manufacturing process, past quality control methods were revamped to measure compact dimensional features (diameter, perpendicularity and cone effect), visual aspect, SiC layer failure fraction (via anodic disintegration and the burn-leach test) and homogeneity via 2D radiography coupled with ceramography. Although meeting quality requirements, the 2D radiography method could not provide a quantified specification for compact homogeneity characterization. This limitation led to the replacement of this past technique by a method based on X-ray tomography. Development was conducted on this new technique to enable the definition of a criterion to quantify compact homogeneity, as well as to provide information about the distances between particles. This study also included a comparison between simulated and real compacts to evaluate the accuracy of the technique, as well as the influence of particle packing fraction on compact homogeneity. The developed quality control methods and equipment guided the choices of manufacturing parameter adjustments at the development stage and are now applied for

  6. Cloud Service Provider Methods for Managing Insider Threats: Analysis Phase 1

    Science.gov (United States)

    2013-11-01

    The National Institute of Standards and Technology (NIST) Special Publication 800-145 (NIST SP 800-145) defines three types of cloud services: Software as a Service (SaaS)… among these three models. NIST SP 800-145 describes the three service models as follows: SaaS, the capability provided to the consumer to use the… (Greg Porter, Technical Note CMU/SEI-2013-TN-020, November 2013)

  7. A Trajectory Regression Clustering Technique Combining a Novel Fuzzy C-Means Clustering Algorithm with the Least Squares Method

    Directory of Open Access Journals (Sweden)

    Xiangbing Zhou

    2018-04-01

    Full Text Available Rapidly growing GPS (Global Positioning System) trajectories hide much valuable information, such as city road planning, urban travel demand, and population migration. In order to mine the hidden information and to capture better clustering results, a trajectory regression clustering method (an unsupervised trajectory clustering method) is proposed to reduce local information loss of the trajectory and to avoid getting stuck in a local optimum. Using this method, we first define our new concept of trajectory clustering and construct a novel (angle-based) partitioning method for line segments; second, a Lagrange-based method and Hausdorff-based K-means++ are integrated into fuzzy C-means (FCM) clustering to maintain the stability and robustness of the clustering process; finally, a least squares regression model is employed to achieve regression clustering of the trajectory. In our experiment, the performance and effectiveness of our method are validated against real-world taxi GPS data. When comparing our clustering algorithm with partition-based clustering algorithms (K-means, K-median, and FCM), our experimental results demonstrate that the presented method is more effective and generates a more reasonable trajectory.
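
    The FCM component that this record and several others in this list build on is a short alternating iteration between centers and memberships. A minimal, self-contained sketch with plain Euclidean distances; the trajectory-specific Hausdorff distance and seeding used by the paper are not reproduced here.

    ```python
    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, tol=1e-5, max_iter=300, seed=0):
        """Minimal fuzzy C-means: returns cluster centers and the (c x n)
        membership matrix U, via the standard alternating updates."""
        rng = np.random.default_rng(seed)
        U = rng.random((c, X.shape[0]))
        U /= U.sum(axis=0, keepdims=True)
        p = 2.0 / (m - 1.0)
        for _ in range(max_iter):
            Um = U ** m
            centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
            d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
            U_new = d ** -p / (d ** -p).sum(axis=0, keepdims=True)
            if np.abs(U_new - U).max() < tol:
                return centers, U_new
            U = U_new
        return centers, U
    ```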

  8. System and Method for Providing Vertical Profile Measurements of Atmospheric Gases

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A system and method for using an air collection device to collect a continuous air sample as the device descends through the atmosphere are provided. The air...

  9. A Novel Method to Predict Genomic Islands Based on Mean Shift Clustering Algorithm.

    Directory of Open Access Journals (Sweden)

    Daniel M de Brito

    Full Text Available Genomic Islands (GIs) are regions of bacterial genomes that are acquired from other organisms by the phenomenon of horizontal transfer. These regions are often responsible for many important acquired adaptations of the bacteria, with great impact on their evolution and behavior. These adaptations are usually associated with pathogenicity, antibiotic resistance, degradation and metabolism. Identification of such regions is of medical and industrial interest. For this reason, different approaches for genomic island prediction have been proposed. However, none of them is capable of predicting precisely the complete repertoire of GIs in a genome. The difficulties arise due to the changes in performance of different algorithms in the face of the variety of nucleotide distributions in different species. In this paper, we present a novel method to predict GIs that is built upon the mean shift clustering algorithm. It does not require any information regarding the number of clusters, and the bandwidth parameter is automatically calculated based on a heuristic approach. The method was implemented in a new user-friendly tool named MSGIP (Mean Shift Genomic Island Predictor). Genomes of bacteria with GIs discussed in other papers were used to evaluate the proposed method. The application of this tool revealed the same GIs predicted by other methods and also novel unpredicted islands. A detailed investigation of the different features related to typical GI elements inserted in these new regions confirmed its effectiveness. Stand-alone and user-friendly versions of this new methodology are available at http://msgip.integrativebioinformatics.me.
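
    For readers unfamiliar with mean shift: it finds cluster modes without a preset cluster count, needing only a bandwidth. A generic scikit-learn sketch follows; the feature matrix is a hypothetical stand-in for per-window sequence-composition features, and MSGIP's own bandwidth heuristic differs from estimate_bandwidth.

    ```python
    import numpy as np
    from sklearn.cluster import MeanShift, estimate_bandwidth

    # Hypothetical features: one row per genomic window (e.g., GC content,
    # codon-usage deviation, k-mer biases); random data as a placeholder.
    X = np.random.default_rng(1).normal(size=(500, 4))

    bandwidth = estimate_bandwidth(X, quantile=0.2)       # scikit-learn's heuristic
    labels = MeanShift(bandwidth=bandwidth).fit_predict(X)
    print(np.bincount(labels))   # cluster sizes; atypical clusters flag GI candidates
    ```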

  10. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers

    International Nuclear Information System (INIS)

    Cardoso, Vanderlei

    2002-01-01

    The present work describes a few methodologies developed for fitting efficiency curves obtained by means of an HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting has been performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak areas, different methodologies were developed for estimating the background area under the peak. This information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure in order to give a complete description of the partial uncertainties involved. (author)
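
    A common pattern for such fits, shown here only as a sketch, is a low-order polynomial in log(efficiency) versus log(energy) with the parameter covariance matrix retained for uncertainty propagation; the energies, efficiencies and 3% uncertainties below are hypothetical, and the author's segmented and ratio-based fits are more elaborate.

    ```python
    import numpy as np

    # Hypothetical calibration points: gamma energies (keV) and peak efficiencies.
    E = np.array([121.8, 244.7, 344.3, 778.9, 964.1, 1408.0])
    eff = np.array([0.0295, 0.0180, 0.0137, 0.0072, 0.0060, 0.0043])
    sigma = 0.03 * eff                            # assumed 3% uncertainties

    # Weighted fit in log-log space, keeping the coefficient covariance matrix.
    coef, cov = np.polyfit(np.log(E), np.log(eff), deg=2,
                           w=eff / sigma, cov=True)

    def efficiency(energy):
        return np.exp(np.polyval(coef, np.log(energy)))

    # First-order propagation of the coefficient covariance to a new energy.
    x = np.log(661.7)                             # e.g. the Cs-137 line
    J = np.array([x**2, x, 1.0])                  # gradient w.r.t. coefficients
    var_log_eff = J @ cov @ J
    print(efficiency(661.7), np.sqrt(var_log_eff) * efficiency(661.7))
    ```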

  11. Method and means of passive detection of leaks in buried pipes

    International Nuclear Information System (INIS)

    Claytor, T.N.

    1981-01-01

    A method and means for passive detection of a leak in a buried pipe containing fluid under pressure includes a plurality of acoustic detectors that are placed in contact with the pipe. Noise produced by the leak is detected by the detectors, and the detected signals are correlated to locate the leak. In one embodiment of the invention, two detectors are placed at different locations to locate a leak between them. In an alternate embodiment, two detectors responsive to different wave types are placed at substantially the same location to determine the distance of the leak from that location.
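
    The two-detector correlation idea reduces to estimating a time delay: if the leak sits a distance x from sensor A on a span of length L, the arrival-time difference is t_A - t_B = (2x - L)/v, so x = (L + v*(t_A - t_B))/2. A sketch in which the sampling rate, spacing and wave speed are placeholders:

    ```python
    import numpy as np

    def locate_leak(sig_a, sig_b, fs, spacing, wave_speed):
        """Leak distance from sensor A via the cross-correlation time delay.
        A positive lag means the noise reaches sensor A after sensor B."""
        a = sig_a - np.mean(sig_a)
        b = sig_b - np.mean(sig_b)
        corr = np.correlate(a, b, mode="full")
        lag = int(np.argmax(corr)) - (len(b) - 1)   # in samples, t_A - t_B
        return 0.5 * (spacing + wave_speed * lag / fs)
    ```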

  12. CLASSIFICATION OF IRANIAN NURSES ACCORDING TO THEIR MENTAL HEALTH OUTCOMES USING GHQ-12 QUESTIONNAIRE: A COMPARISON BETWEEN LATENT CLASS ANALYSIS AND K-MEANS CLUSTERING WITH TRADITIONAL SCORING METHOD.

    Science.gov (United States)

    Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi

    2015-10-01

    Nurses constitute the largest group of providers in health care systems. Their mental health can affect the quality of services and patients' satisfaction. The General Health Questionnaire (GHQ-12) is a general screening tool used to detect mental disorders. The scoring method and the thresholds for this questionnaire are debatable, and the cut-off points can vary from sample to sample. This study was conducted to estimate the prevalence of mental disorders among Iranian nurses using the GHQ-12, and also to compare Latent Class Analysis (LCA) and K-means clustering with the traditional scoring method. A cross-sectional study was carried out in the Fars and Bushehr provinces of southern Iran in 2014. Participants were 771 Iranian nurses, who filled out the GHQ-12 questionnaire. The traditional scoring method, LCA, and K-means were used to estimate the prevalence of mental disorder among Iranian nurses. Cohen's kappa statistic was applied to assess the agreement of LCA and K-means with the traditional scoring method of the GHQ-12. The proportions of nurses with mental disorder according to the scoring method, LCA, and K-means were 36.3% (n=280), 32.2% (n=248), and 26.5% (n=204), respectively. LCA and logistic regression revealed that the prevalence of mental disorder in females was significantly higher than in males. Mental disorder in nurses was at a medium level compared to other people living in Iran. There was little difference between the prevalences of mental disorder estimated by the scoring method, K-means, and LCA. Given the advantages of LCA over K-means, and the differing results of the scoring method, we suggest LCA for the classification of Iranian nurses according to their mental health outcomes using the GHQ-12 questionnaire.

  13. Short-term variations in core surface flow resolved from an improved method of calculating observatory monthly means

    Science.gov (United States)

    Olsen, Nils; Whaler, Kathryn A.; Finlay, Christopher C.

    2014-05-01

    Monthly means of the magnetic field measurements taken by ground observatories are a useful data source for studying temporal changes of the core magnetic field and the underlying core flow. However, the usual way of calculating monthly means as the arithmetic mean of all days (geomagnetic quiet as well as disturbed) and all local times (day and night) may result in contributions from external (magnetospheric and ionospheric) origin in the (ordinary, omm) monthly means. Such contamination makes monthly means less favourable for core studies. We calculated revised monthly means (rmm), and their uncertainties, from observatory hourly means using robust means and after removal of external field predictions, using an improved method for characterising the magnetospheric ring current. The utility of the new method for calculating observatory monthly means is demonstrated by inverting their first differences for core surface advective flows. The flow is assumed steady over three consecutive months to ensure uniqueness; the effects of more rapid changes should be attenuated by the weakly conducting mantle. Observatory data are inverted directly for a regularised core flow, rather than deriving it from a secular variation spherical harmonic model. The main field is specified by the CHAOS-4 model. Data from up to 128 observatories between 1997 and 2013 were used to calculate 185 flow models from the omm and rmm, for each possible set of three consecutive months. The full 3x3 (non-diagonal) data covariance matrix was used, and two-norm (least squares) minimisation performed. We are able to fit the data to the target (weighted) misfit of 1, for both omm and rmm inversions, provided we incorporate the full data covariance matrix, and produce consistent, plausible flows. Fits are better for rmm flows. The flows exhibit noticeable changes over timescales of a few months. However, they follow rapid excursions in the omm that we suspect result from external field contamination

  14. The log mean heat transfer rate method of heat exchanger considering the influence of heat radiation

    International Nuclear Information System (INIS)

    Wong, K.-L.; Ke, M.-T.; Ku, S.-S.

    2009-01-01

    The log mean temperature difference (LMTD) method is conventionally used to calculate the total heat transfer rate of heat exchangers. Because the heat radiation equation contains the fourth power of temperature, which greatly complicates calculations, the LMTD method neglects the influence of heat radiation. A recent investigation of a circular duct in some practical situations found that even when the temperature difference between the outer duct surface and the surroundings is as low as 1 °C, the heat radiation effect cannot be ignored in situations with a low ambient convective heat transfer coefficient and high surface emissivities. In this investigation, the log mean heat transfer rate (LMHTR) method, which considers the influence of heat radiation, is developed to calculate the total heat transfer rate of heat exchangers.
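
    For reference, the conventional LMTD calculation that the LMHTR method extends: with terminal temperature differences dT1 and dT2, LMTD = (dT1 - dT2) / ln(dT1 / dT2) and Q = U * A * LMTD. A sketch, with hypothetical U and A values:

    ```python
    import math

    def lmtd(dt1, dt2):
        """Log mean temperature difference from the two terminal differences."""
        if math.isclose(dt1, dt2):
            return dt1                   # limit as dt1 -> dt2
        return (dt1 - dt2) / math.log(dt1 / dt2)

    U, A = 250.0, 1.8                    # W/(m2.K) and m2, hypothetical
    print(U * A * lmtd(60.0, 25.0))      # convection-only heat transfer rate, W
    # The LMHTR method additionally accounts for the radiative term, whose
    # T**4 dependence is what the plain LMTD formulation leaves out.
    ```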

  15. Supplier Risk Assessment Based on Best-Worst Method and K-Means Clustering: A Case Study

    Directory of Open Access Journals (Sweden)

    Merve Er Kara

    2018-04-01

    Full Text Available Supplier evaluation and selection is one of the most critical strategic decisions for developing a competitive and sustainable organization. Companies have to consider supplier-related risks and threats in their purchasing decisions. In today’s competitive and risky business environment, it is very important to work with reliable suppliers. This study proposes a clustering-based approach to group suppliers based on their risk profiles. Suppliers of a company in the heavy-machinery sector are assessed based on 17 qualitative and quantitative risk types. The weights of the criteria are determined using the Best-Worst method. Four factors are extracted by applying Factor Analysis to the supplier risk data. Then the k-means clustering algorithm is applied to group the core suppliers of the company based on the four risk factors. Three clusters are created with different risk exposure levels. The interpretation of the results provides insights for risk management actions and supplier development programs to mitigate supplier risk.

  16. The limits of the mean field

    International Nuclear Information System (INIS)

    Guerra, E.M. de

    2001-01-01

    In these talks, we review non-relativistic selfconsistent mean field theories, their scope and limitations. We first discuss static and time-dependent mean field approaches for particles and quasiparticles, together with applications. We then discuss extensions that go beyond the non-relativistic independent-particle limit. On the one hand, we consider extensions concerned with the restoration of symmetries and with the treatment of collective modes, particularly by means of quantized ATDHF. On the other hand, we consider extensions concerned with the relativistic dynamics of bound nucleons. We present data on nucleon momentum distributions that show the need for a relativistic mean field approach and probe the limits of the mean field concept. Illustrative applications of various methods are presented, stressing the role that selfconsistency plays in providing a unifying, reliable framework to study all sorts of properties and phenomena: from global properties such as size, mass and lifetime, to detailed structure in excitation spectra (high spin, RPA modes, ...), as well as charge, magnetization and velocity distributions. (orig.)

  17. System and Method for Providing a Climate Data Persistence Service

    Science.gov (United States)

    Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)

    2018-01-01

    A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standards (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.

  18. Performance Analysis of a Maximum Power Point Tracking Technique using Silver Mean Method

    Directory of Open Access Journals (Sweden)

    Shobha Rani Depuru

    2018-01-01

    Full Text Available This paper presents a simple and particularly efficacious Maximum Power Point Tracking (MPPT) algorithm based on the Silver Mean Method (SMM). This method operates by choosing a search interval from the P-V characteristics of the given solar array and converges to the MPP of the Solar Photo-Voltaic (SPV) system by shrinking this interval. After achieving the maximum power, the algorithm stops shrinking and maintains a constant voltage until the next interval is decided. The tracking capability, efficiency and performance of the proposed algorithm are validated by simulation and experimental results with a 100 W solar panel under variable temperature and irradiance conditions. The results obtained confirm that even without any perturbation and observation process, the proposed method still outperforms the traditional perturb and observe (P&O) method by demonstrating far better steady-state output, more accuracy and higher efficiency.
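
    The interval-shrinking idea can be sketched as a derivative-free maximization over voltage, with interior probe points placed using the silver-mean ratio 1/(1 + sqrt(2)); the paper's exact placement and stopping rules may differ, and the P-V curve below is a hypothetical unimodal stand-in.

    ```python
    import math

    def silver_mean_search(power, v_lo, v_hi, tol=0.05):
        """Shrink [v_lo, v_hi] around the voltage maximizing power(v), placing
        the two probe points at the silver-mean fraction of the interval."""
        rho = 1.0 / (1.0 + math.sqrt(2.0))       # ~0.414, the silver-mean ratio
        while v_hi - v_lo > tol:
            v1 = v_lo + rho * (v_hi - v_lo)
            v2 = v_hi - rho * (v_hi - v_lo)      # v1 < v2 because rho < 0.5
            if power(v1) < power(v2):
                v_lo = v1                        # maximum cannot lie left of v1
            else:
                v_hi = v2                        # maximum cannot lie right of v2
        return 0.5 * (v_lo + v_hi)

    # Hypothetical unimodal P-V curve of a small panel:
    print(silver_mean_search(lambda v: v * max(5.0 - 0.25 * v ** 1.3, 0.0), 0.0, 20.0))
    ```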

  19. Automated correlation and classification of secondary ion mass spectrometry images using a k-means cluster method.

    Science.gov (United States)

    Konicek, Andrew R; Lefman, Jonathan; Szakal, Christopher

    2012-08-07

    We present a novel method for correlating and classifying ion-specific time-of-flight secondary ion mass spectrometry (ToF-SIMS) images within a multispectral dataset by grouping images with similar pixel intensity distributions. Binary centroid images are created by employing a k-means-based custom algorithm. Centroid images are compared to grayscale SIMS images using a newly developed correlation method that assigns the SIMS images to classes that have similar spatial (rather than spectral) patterns. Image features of both large and small spatial extent are identified without the need for image pre-processing, such as normalization or fixed-range mass-binning. A subsequent classification step tracks the class assignment of SIMS images over multiple iterations of increasing n classes per iteration, providing information about groups of images that have similar chemistry. Details are discussed while presenting data acquired with ToF-SIMS on a model sample of laser-printed inks. This approach can lead to the identification of distinct ion-specific chemistries for mass spectral imaging by ToF-SIMS, as well as matrix-assisted laser desorption ionization (MALDI), and desorption electrospray ionization (DESI).

  20. SIMULATION STUDY USING FUZZY C-MEANS IN CLASSIFYING TEST CONSTRUCTS

    Directory of Open Access Journals (Sweden)

    Rukli Rukli

    2013-01-01

    Full Text Available This paper introduces the fuzzy c-means method for classifying test constructs. Verifying the unidimensionality of a test typically relies on factor analysis, a parametric statistical technique with several strict requirements, whereas fuzzy c-means is a heuristic method that imposes no such requirements. This simulation study compares the two methods: factor analysis using SPSS and fuzzy c-means using Matlab. The simulated data, of both dichotomous and polytomous types, were generated with Microsoft Office Excel under two designs: three items with 10 test takers, and 30 items with 100 test takers. The simulation results show that, in verifying the unidimensionality of a test, the fuzzy c-means method gives a more descriptive and dynamic picture of the grouping for all designs considered. Keywords: fuzzy c-means, factor analysis, unidimensional
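
    For readers unfamiliar with the iteration itself, the sketch below is a plain NumPy implementation of standard fuzzy c-means (alternating centroid and membership updates) on a toy dichotomous score matrix; the data and the fuzzifier m = 2 are assumptions, and this is not the authors' Matlab code.

    ```python
    import numpy as np

    def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, eps=1e-6, seed=0):
        """Standard FCM: alternate centroid and membership updates."""
        rng = np.random.default_rng(seed)
        U = rng.random((c, X.shape[0]))
        U /= U.sum(axis=0)                   # random fuzzy partition
        for _ in range(n_iter):
            V = (U**m @ X) / (U**m).sum(axis=1, keepdims=True)   # centroids
            d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2) + 1e-12
            w = d ** (-2.0 / (m - 1.0))      # u_ik proportional to d_ik^(-2/(m-1))
            U_new = w / w.sum(axis=0)
            if np.abs(U_new - U).max() < eps:
                U = U_new
                break
            U = U_new
        return U, V

    # Toy dichotomous score matrix: 10 examinees x 3 items.
    X = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1],
                  [0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 0]], float)
    U, V = fuzzy_c_means(X, c=2)
    print(np.round(U, 2))                    # fuzzy memberships per examinee
    ```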

  1. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for that treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased for the true group means. We propose a new method to estimate the group means consistently, with a corresponding variance estimator. Simulations showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
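
    The core idea, averaging the model's predictions over the observed covariate distribution instead of predicting at the mean covariate, can be sketched as follows; the simulated data and the scikit-learn logistic fit are assumptions, and the authors' estimator additionally supplies a variance formula not shown here.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 500
    treat = rng.integers(0, 2, n)             # treatment group indicator
    x = rng.normal(0.0, 1.0, n)               # baseline covariate
    logit = -0.5 + 1.0 * treat + 0.8 * x
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

    model = LogisticRegression().fit(np.column_stack([treat, x]), y)

    def group_mean(g):
        """Set everyone to group g, keep observed covariates, average."""
        X_g = np.column_stack([np.full(n, g), x])
        return model.predict_proba(X_g)[:, 1].mean()

    print("population-averaged group means:", group_mean(0), group_mean(1))
    # Contrast: predicting at the mean covariate, which can be biased
    # under a nonlinear link.
    print("at mean covariate (group 1):",
          model.predict_proba([[1, x.mean()]])[0, 1])
    ```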

  2. Advances in dynamic and mean field games theory, applications, and numerical methods

    CERN Document Server

    Viscolani, Bruno

    2017-01-01

    This contributed volume considers recent advances in dynamic games and their applications, based on presentations given at the 17th Symposium of the International Society of Dynamic Games, held July 12-15, 2016, in Urbino, Italy. Written by experts in their respective disciplines, these papers cover various aspects of dynamic game theory including mean-field games, stochastic and pursuit-evasion games, and computational methods for dynamic games. Topics covered include: pedestrian flow in crowded environments; models for climate change negotiations; Nash equilibria for dynamic games involving Volterra integral equations; differential games in healthcare markets; linear-quadratic Gaussian dynamic games; and aircraft control in wind shear conditions. Advances in Dynamic and Mean-Field Games presents state-of-the-art research in a wide spectrum of areas. As such, it serves as a testament to the continued vitality and growth of the field of dynamic games and their applications. It will be of interest to an interdisciplinar...

  3. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-03-01

    The purpose of this research is to provide a fundamental computational investigation into the possible integration of experimental activities with the Advanced Test Reactor Critical (ATR-C) facility with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of integral data for improving neutron cross sections. Further assessment of oscillation

  4. Organizational and provider level factors in implementation of trauma-informed care after a city-wide training: an explanatory mixed methods assessment

    Directory of Open Access Journals (Sweden)

    April Joy Damian

    2017-11-01

    Full Text Available Abstract Background While there is increasing support for training youth-serving providers in trauma-informed care (TIC) as a means of addressing high prevalence of U.S. childhood trauma, we know little about the effects of TIC training on organizational culture and providers’ professional quality of life. This mixed-methods study evaluated changes in organizational- and provider-level factors following participation in a citywide TIC training. Methods Government workers and nonprofit professionals (N = 90) who participated in a nine-month citywide TIC training completed a survey before and after the training to assess organizational culture and professional quality of life. Survey data were analyzed using multiple regression analyses. A subset of participants (n = 16) was interviewed using a semi-structured format, and themes related to organizational and provider factors were identified using qualitative methods. Results Analysis of survey data indicated significant improvements in participants’ organizational culture and professional satisfaction at training completion. Participants’ perceptions of their own burnout and secondary traumatic stress also increased. Four themes emerged from analysis of the interview data, including “Implementation of more flexible, less-punitive policies towards clients,” “Adoption of trauma-informed workplace design,” “Heightened awareness of own traumatic stress and need for self-care,” and “Greater sense of camaraderie and empathy for colleagues.” Conclusion Use of a mixed-methods approach provided a nuanced understanding of the impact of TIC training and suggested potential benefits of the training on organizational and provider-level factors associated with implementation of trauma-informed policies and practices. Future trainings should explicitly address organizational factors such as safety climate and morale, managerial support, teamwork climate and collaboration, and

  5. Organizational and provider level factors in implementation of trauma-informed care after a city-wide training: an explanatory mixed methods assessment.

    Science.gov (United States)

    Damian, April Joy; Gallo, Joseph; Leaf, Philip; Mendelson, Tamar

    2017-11-21

    While there is increasing support for training youth-serving providers in trauma-informed care (TIC) as a means of addressing high prevalence of U.S. childhood trauma, we know little about the effects of TIC training on organizational culture and providers' professional quality of life. This mixed-methods study evaluated changes in organizational- and provider-level factors following participation in a citywide TIC training. Government workers and nonprofit professionals (N = 90) who participated in a nine-month citywide TIC training completed a survey before and after the training to assess organizational culture and professional quality of life. Survey data were analyzed using multiple regression analyses. A subset of participants (n = 16) was interviewed using a semi-structured format, and themes related to organizational and provider factors were identified using qualitative methods. Analysis of survey data indicated significant improvements in participants' organizational culture and professional satisfaction at training completion. Participants' perceptions of their own burnout and secondary traumatic stress also increased. Four themes emerged from analysis of the interview data, including "Implementation of more flexible, less-punitive policies towards clients," "Adoption of trauma-informed workplace design," "Heightened awareness of own traumatic stress and need for self-care," and "Greater sense of camaraderie and empathy for colleagues." Use of a mixed-methods approach provided a nuanced understanding of the impact of TIC training and suggested potential benefits of the training on organizational and provider-level factors associated with implementation of trauma-informed policies and practices. Future trainings should explicitly address organizational factors such as safety climate and morale, managerial support, teamwork climate and collaboration, and individual factors including providers' compassion satisfaction, burnout, and secondary

   6. DETECTION OF FALSIFICATION OF REPORTS PROVIDED TO A BANK BY A BORROWER USING THE METHOD OF DYNAMIC PARAMETERS ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. S. Lyuft

    2015-01-01

    Full Text Available Summary. The article identifies the main stop-factors in lending, drawing on the procedure of the European Bank for Reconstruction and Development (EBRD) and on a survey of the leading Russian banks that finance small businesses. The EBRD method is described and its main weaknesses are identified. The main approaches to constructing a borrowing company's analytical balance sheet are then described, leading to the conclusion that its principal components must be analyzed over time. Analyzing indicators and ratios in dynamics makes it possible to see trends in the development of the company and to identify deviations in the coefficients. Material deviations of these coefficients from their normal values may point to borrower misconduct, and in particular to falsification of the reports provided to a bank. The stages of information processing for detecting falsification are set out, excluding any interest of decision-makers in the outcome of a proposed loan. A formula estimating the falsification of net profit is derived from the dynamic parameters of the analytical balance sheet and its connection with the management profit-and-loss report. The article then provides a second method for detecting net profit falsification, based on dynamic indicators, namely the business profitability rate and the Payment To Income (PTI) ratio, in order to identify who falsifies net income. The key strengths of this method and the main conclusions are summarized.
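
    As a loose illustration of dynamic parameters analysis (not the article's exact formulas), the sketch below flags years in which a borrower's profitability ratio jumps far outside its prior trend; the figures and the 50% threshold are hypothetical.

    ```python
    import pandas as pd

    # Hypothetical annual figures from a borrower's analytical balance sheet.
    df = pd.DataFrame(
        {"revenue": [100.0, 112.0, 118.0, 260.0],
         "net_profit": [8.0, 9.5, 10.0, 52.0]},
        index=[2011, 2012, 2013, 2014])

    df["profitability"] = df["net_profit"] / df["revenue"]
    df["deviation"] = df["profitability"].pct_change()

    # Flag years whose profitability jumps far outside the prior trend, a
    # possible sign of falsified reporting (threshold is an assumption).
    print(df[df["deviation"].abs() > 0.5])
    ```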

  7. Method for determination of the mean fraction of glandular tissue in individual female breasts using mammography

    International Nuclear Information System (INIS)

    Jansen, J T M; Veldkamp, W J H; Thijssen, M A O; Woudenberg, S van; Zoetelief, J

    2005-01-01

    The nationwide breast cancer screening programme using mammography has been in full operation in the Netherlands since 1997. Quality control of the screening programme has been assigned to the National Expert and Training Centre for Breast Cancer Screening. Limits are set on the mean glandular dose, and the centre monitors these for all facilities engaged in the screening programme. This procedure is restricted to the determination of the entrance dose on a 5 cm thick polymethylmethacrylate (PMMA) phantom, from which the mean glandular dose for a compressed breast is estimated. Individual breasts may deviate considerably from this 5 cm PMMA breast model: not only may the compressed breast size vary from 2 to 10 cm, but breast composition varies as well. The mean glandular dose depends on the fraction of glandular tissue (glandularity) of the breast. Estimating the risk related to individual mammograms therefore requires a method for determining the glandularity of individual breasts. A method has been developed to derive the glandularity from the attenuation of mammography x-rays in the breast. The method was applied to a series of mammograms at a screening unit. The results (93% of the glandularity values lay within the range of 0 to 1) were comparable with data in the literature. The glandularity as a function of compressed breast thickness is similar to results from other investigators using differing methods.
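
    A minimal sketch of the attenuation idea, assuming a two-component fat/gland mixture model with placeholder linear attenuation coefficients (the real coefficients depend on beam quality and must be calibrated):

    ```python
    import math

    def glandularity(I0, I, thickness_cm, mu_fat=0.45, mu_gland=0.80):
        """Estimate the glandular fraction g in [0, 1] from attenuation,
        modelling the breast as a fat/gland mixture:
            I = I0 * exp(-(g*mu_gland + (1 - g)*mu_fat) * t)
        The attenuation coefficients (1/cm) are placeholder values."""
        mu_eff = -math.log(I / I0) / thickness_cm
        g = (mu_eff - mu_fat) / (mu_gland - mu_fat)
        return min(max(g, 0.0), 1.0)       # clamp to the physical range

    print(glandularity(I0=1000.0, I=45.0, thickness_cm=5.0))
    ```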

  8. Integration of Lax and Zakharov-Schabat equations by means of algebraic geometry's methods

    International Nuclear Information System (INIS)

    Gozman, N.Ja.; Latyshev, A.V.; Savostjanov, M.V.; Lebedev, D.R.

    1982-01-01

    Solutions of nonlinear partial differential equations of the Lax and Zakharov-Schabat types are obtained with the help of an algebro-geometric method. The Krichever-Drinfeld bimodule for a rational curve with a cusp point is constructed. It is noted that rational solutions of the Zakharov-Schabat equations can be found by means of the constructed bimodule only in the rank 1 case. The evolution of the poles of these solutions is investigated.

  9. Implementation of K-Means Clustering Method for Electronic Learning Model

    Science.gov (United States)

    Latipa Sari, Herlina; Suranti Mrs., Dewi; Natalia Zulita, Leni

    2017-12-01

    The teaching and learning process at SMK Negeri 2 Bengkulu Tengah uses an e-learning system for teachers and students. The e-learning was based on the classification of normative, productive, and adaptive subjects. SMK Negeri 2 Bengkulu Tengah comprises 394 students and 60 teachers with 16 subjects. The e-learning database records were used in this research to observe students’ activity patterns in attending class. The K-Means algorithm was used to classify students’ learning activities in the e-learning system, yielding clusters of student activity and of improvement in students’ ability. The implementation of the K-Means clustering method for the electronic learning model at SMK Negeri 2 Bengkulu Tengah observed 10 student activities, namely participation of students in the classroom, submit assignment, view assignment, add discussion, view discussion, add comment, download course materials, view article, view test, and submit test. In the e-learning model, testing was conducted on 10 students and yielded 2 clusters of membership data (C1 and C2). Cluster 1, with a membership percentage of 70%, consisted of 6 members, namely 1112438 Anggi Julian, 1112439 Anis Maulita, 1112441 Ardi Febriansyah, 1112452 Berlian Sinurat, 1112460 Dewi Anugrah Anwar and 1112467 Eka Tri Oktavia Sari. Cluster 2, with a membership percentage of 30%, consisted of 4 members, namely 1112463 Dosita Afriyani, 1112471 Erda Novita, 1112474 Eskardi and 1112477 Fachrur Rozi.
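
    A minimal sketch of this clustering step with scikit-learn; the activity counts below are hypothetical stand-ins for the e-learning database records:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Rows: students; columns: the 10 observed e-learning activities
    # (classroom participation, submit/view assignment, add/view discussion,
    # add comment, download materials, view article, view/submit test).
    # The counts are hypothetical.
    activity = np.array([
        [12, 5, 8, 3,  9, 4, 7, 6, 2, 5],
        [10, 4, 7, 2,  8, 3, 6, 5, 2, 4],
        [11, 5, 9, 3,  9, 5, 8, 6, 3, 5],
        [ 2, 1, 2, 0,  1, 0, 2, 1, 0, 1],
        [13, 6, 8, 4, 10, 4, 7, 7, 3, 6],
        [ 3, 1, 1, 0,  2, 1, 1, 1, 0, 1],
        [12, 5, 7, 3,  8, 4, 6, 6, 2, 5],
        [ 2, 0, 1, 0,  1, 0, 1, 0, 0, 0],
        [11, 4, 8, 3,  9, 4, 7, 5, 2, 4],
        [ 3, 1, 2, 1,  2, 1, 2, 1, 1, 1],
    ])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(activity)
    for c in range(2):
        members = np.where(km.labels_ == c)[0] + 1   # 1-based indices
        print(f"cluster {c + 1}: {len(members) * 10}% membership, "
              f"students {members}")
    ```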

  10. FINGER KNUCKLE PRINT RECOGNITION WITH SIFT AND K-MEANS ALGORITHM

    Directory of Open Access Journals (Sweden)

    A. Muthukumar

    2013-02-01

    Full Text Available In general, identification and verification are done with passwords, PIN numbers, etc., which are easily cracked by others. Biometrics is a powerful and unique tool based on the anatomical and behavioral characteristics of human beings that can prove identity. This paper proposes a novel biometric recognition methodology based on the Finger Knuckle Print (FKP). It focuses on the extraction of FKP features using the Scale Invariant Feature Transform (SIFT); the key points derived from the FKP are then clustered using the K-Means algorithm. The K-Means centroids are stored in the database and compared with the centroids of a query FKP to perform recognition and authentication. The comparison is based on the XOR operation. Hence this paper provides a novel recognition method for authentication. Results are reported on the PolyU FKP database to evaluate the proposed FKP recognition method.
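
    A rough sketch of the pipeline with OpenCV (cv2.SIFT_create requires OpenCV 4.4 or later): SIFT descriptors are clustered with k-means, and the binarized centroids of an enrolled and a query print are compared bitwise. The binarization rule and the score definition are assumptions, not the paper's exact scheme.

    ```python
    import cv2
    import numpy as np

    def fkp_template(image_path, k=8):
        """Binarized k-means centroids of the SIFT descriptors of one
        finger-knuckle-print image (binarization rule is an assumption)."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()                 # OpenCV >= 4.4
        _, desc = sift.detectAndCompute(img, None)
        desc = np.float32(desc)                  # assumes keypoints exist
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, _, centers = cv2.kmeans(desc, k, None, criteria, 10,
                                   cv2.KMEANS_RANDOM_CENTERS)
        return (centers > np.median(centers, axis=1, keepdims=True)).astype(np.uint8)

    def match_score(enrolled, query):
        """XOR-based dissimilarity: fraction of differing centroid bits."""
        return np.logical_xor(enrolled, query).mean()

    # score = match_score(fkp_template("enrolled.png"), fkp_template("query.png"))
    ```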

  11. Method of immobilizing weapons plutonium to provide a durable, disposable waste product

    Science.gov (United States)

    Ewing, Rodney C.; Lutze, Werner; Weber, William J.

    1996-01-01

    A method of atomic-scale fixation and immobilization of plutonium to provide a durable waste product. Plutonium is provided in the form of either PuO2 or Pu(NO3)4 and is mixed with ZrO2 and SiO2. The resulting mixture is cold pressed and then heated under pressure to form (Zr,Pu)SiO4 as the waste product.

  12. Development of a sensitive method for a structural elucidation of acyl carnitines by means of GC-MS techniques

    International Nuclear Information System (INIS)

    Altmann, E.

    1999-11-01

    The goal of the present work was to develop a sensitive and reliable method to characterize urinary acyl carnitines and to establish it as a routine procedure in hospitals, especially in pediatric departments. The determination of the excreted acyl carnitines allows conclusions to be drawn about errors or deviations in cellular metabolism. Using the volatile lactone derivatives of the acyl carnitines, various GC/MS techniques are compared. For the lactones under study, EI mass spectra furnish only a first, incomplete set of information, as the molecular ion peaks are frequently not sufficiently intense. Nevertheless, the retention times, the ion traces at m/z 84, 85 and 144, and the characteristic fragmentation provide helpful information. In the +CI/NH3 mass spectra, the protonated molecular ions (M + H)+ and the usually very intense (M + NH4)+ ions permit unambiguous structural assignments. In the -CI/NH3 mass spectra, the (M-1) and (M-85) ions allow definitive assignments owing to their lower fragmentation tendency. Each of these analytical findings contributes to establishing the current method in clinical hospitals. (author)

  13. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. The selection of an appropriate forecasting method is also important, but the percentage error of a method matters even more if decision makers are to make the right choice. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least squares method resulted in a percentage error of 9.77%, and it was decided that the least squares method would be used for time series and trend data.
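
    The two error measures are straightforward to compute; the sketch below applies them to a least squares (linear trend) forecast of a toy series. The 9.77% figure above comes from the paper's data, not from this example.

    ```python
    import numpy as np

    actual = np.array([120.0, 135.0, 150.0, 160.0, 172.0])   # toy series

    # Least squares (linear trend) forecast fitted on the time index.
    t = np.arange(len(actual))
    slope, intercept = np.polyfit(t, actual, 1)
    forecast = slope * t + intercept

    mad = np.mean(np.abs(actual - forecast))                    # Mean Absolute Deviation
    mape = np.mean(np.abs((actual - forecast) / actual)) * 100  # percent error

    print(f"MAD = {mad:.2f}, MAPE = {mape:.2f}%")
    ```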

  14. Methods to estimate annual mean spring discharge to the Snake River between Milner Dam and King Hill, Idaho

    Science.gov (United States)

    Kjelstrom, L.C.

    1995-01-01

    Many individual springs and groups of springs discharge water from volcanic rocks that form the north canyon wall of the Snake River between Milner Dam and King Hill. Previous estimates of annual mean discharge from these springs have been used to understand the hydrology of the eastern part of the Snake River Plain. Four methods that were used in previous studies or developed to estimate annual mean discharge since 1902 were (1) water-budget analysis of the Snake River; (2) correlation of water-budget estimates with discharge from 10 index springs; (3) determination of the combined discharge from individual springs or groups of springs by using annual discharge measurements of 8 springs, gaging-station records of 4 springs and 3 sites on the Malad River, and regression equations developed from 5 of the measured springs; and (4) a single regression equation that correlates gaging-station records of 2 springs with historical water-budget estimates. Comparisons made among the four methods of estimating annual mean spring discharges from 1951 to 1959 and 1963 to 1980 indicated that differences were about equivalent to a measurement error of 2 to 3 percent. The method that best demonstrates the response of annual mean spring discharge to changes in ground-water recharge and discharge is method 3, which combines the measurements and regression estimates of discharge from individual springs.

  15. Optimization of the ship type using waveform by means of Rankine source method; Rankine source ho ni yoru hakei wo mochiita funagata saitekika ni tsuite

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, A; Eguchi, T [Mitsui Engineering and Shipbuilding Co. Ltd., Tokyo (Japan)

    1996-04-10

    Among numerical calculation methods for steady-state wave-making problems, the panel shift Rankine source (PSRS) method has the advantages of rather precise determination of the wave pattern of practical ship types and a short calculation period. The wave pattern around the hull was calculated by means of the PSRS method. A waveform analysis was carried out on the wave to obtain an amplitude function of the original ship type. Based on the amplitude function, a ship-type improvement method aiming at the optimization of the hull form was derived using a constrained calculus of variations. A Series 60 (Cb=0.6) ship type was selected for improvement with this technique. The results suggest that an optimum design reducing the wave-making resistance can be obtained by means of this method. For the improved Series 60 ship type, a large reduction in wave-making resistance was confirmed by numerical waveform analysis. The results suggest that ship-type improvement aimed at reducing wave-making resistance can be carried out in a shorter period and with less labor than methods based on waveform analysis of tank tests. 5 refs., 9 figs.

  16. Attenuation correction for renal scintigraphy with 99mTc - DMSA: comparison between Raynaud and the geometric mean methods

    International Nuclear Information System (INIS)

    Argenta, J.; Brambilla, C.R.; Marques da Silva, A.M.

    2009-01-01

    The evaluation of the index of renal function (IF) requires soft-tissue attenuation correction. This paper investigates the impact on the IF when attenuation correction is applied using the Raynaud method and the geometric mean method in renal planar scintigraphy, using posterior and anterior views. The study was conducted with Monte Carlo simulated images of five GSF family voxel phantoms with different relative uptakes in each kidney, from normal (50%-50%) to pathological (10%-90%). The results showed that the Raynaud method corrects more effectively in cases where the renal depth is close to that of the standard phantom. The geometric mean method showed results similar to the Raynaud method for the Baby, Child and Golem phantoms. For the Helga and Donna models, the errors were above 20%, increasing with relative uptake. Further studies should be conducted to assess the influence of the standard phantom on the attenuation correction methods. (author)

  17. Attenuation correction for renal scintigraphy with 99mTc-DMSA: analysis between Raynaud and the geometric mean methods

    International Nuclear Information System (INIS)

    Argenta, Jackson; Brambilla, Claudia R.; Silva, Ana Maria M. da

    2010-01-01

    The evaluation of the index of renal function (IF) requires soft-tissue attenuation correction. This paper investigates the impact on the IF when attenuation correction is applied using the Raynaud method and the Geometric Mean method in renal planar scintigraphy, using posterior and anterior views. The study was conducted with Monte Carlo simulated images of five GSF family voxel phantoms with different relative uptakes in each kidney, from normal (50%-50%) to pathological (10%-90%). The results showed that the Raynaud method corrects more effectively in cases where the renal depth is close to that of the standard phantom. The Geometric Mean method showed results similar to the Raynaud method for the Baby, Child and Golem phantoms. For the Helga and Donna models, the errors were above 20%, increasing with relative uptake. Further studies should be conducted to assess the influence of the standard phantom on the attenuation correction methods. (author)
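
    A minimal sketch of the geometric mean (conjugate-view) idea: the unknown source depth cancels in the square root of the anterior and posterior counts, leaving a correction that depends only on total body thickness. The attenuation coefficient, thickness and counts below are placeholders.

    ```python
    import math

    def geometric_mean_counts(anterior, posterior, mu=0.12, body_thickness_cm=20.0):
        """Conjugate-view correction: for a source at depth d,
        A ~ C*exp(-mu*d) and P ~ C*exp(-mu*(L-d)), so
        sqrt(A*P) = C*exp(-mu*L/2) no longer depends on d; multiplying
        by exp(mu*L/2) recovers C. mu (1/cm) and L are placeholders."""
        gm = math.sqrt(anterior * posterior)
        return gm * math.exp(mu * body_thickness_cm / 2.0)

    left = geometric_mean_counts(5200, 8100)   # hypothetical kidney counts
    right = geometric_mean_counts(4300, 7000)
    print(f"relative uptake, left kidney: {left / (left + right):.1%}")
    ```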

  18. [Research on K-means clustering segmentation method for MRI brain image based on selecting multi-peaks in gray histogram].

    Science.gov (United States)

    Chen, Zhaoxue; Yu, Haizhong; Chen, Hao

    2013-12-01

    To solve the problem that traditional K-means clustering selects initial clustering centers randomly, we proposed a new K-means segmentation algorithm based on robustly selecting the 'peaks' standing for White Matter, Gray Matter and Cerebrospinal Fluid in the multi-peak gray histogram of an MRI brain image. The new algorithm takes the gray values of the selected histogram 'peaks' as the initial K-means clustering centers and can segment the MRI brain image into the three tissue classes more effectively, accurately and stably. Extensive experiments have shown that the proposed algorithm overcomes shortcomings of the traditional K-means clustering method such as low efficiency, poor accuracy, weak robustness and long run times. The histogram peak-selection idea of the proposed segmentation method is also more universally applicable.
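
    A minimal sketch of the seeding idea: find three dominant peaks in the gray-level histogram and hand them to k-means as initial centers. The synthetic pixel data and the peak-detection thresholds are assumptions.

    ```python
    import numpy as np
    from scipy.signal import find_peaks
    from sklearn.cluster import KMeans

    # Synthetic stand-in for an MRI slice: gray levels from three tissues.
    rng = np.random.default_rng(0)
    pixels = np.concatenate([rng.normal(60, 8, 20000),    # cerebrospinal fluid
                             rng.normal(120, 10, 40000),  # gray matter
                             rng.normal(180, 9, 30000)])  # white matter

    hist, edges = np.histogram(pixels, bins=256)
    peaks, _ = find_peaks(hist, distance=20, prominence=500)  # thresholds assumed
    top3 = peaks[np.argsort(hist[peaks])[-3:]]                # dominant peaks
    init = np.sort(edges[top3]).reshape(-1, 1)                # peak gray values

    # Seed k-means with the peak gray values instead of random centers.
    km = KMeans(n_clusters=3, init=init, n_init=1).fit(pixels.reshape(-1, 1))
    print("initial centers:", init.ravel())
    print("final centers:  ", np.sort(km.cluster_centers_.ravel()))
    ```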

  19. TOWARDS FINDING A NEW KERNELIZED FUZZY C-MEANS CLUSTERING ALGORITHM

    Directory of Open Access Journals (Sweden)

    Samarjit Das

    2014-04-01

    Full Text Available The Kernelized Fuzzy C-Means clustering technique is an attempt to improve the performance of the conventional Fuzzy C-Means clustering technique. This technique, in which a kernel-induced distance function is used as the similarity measure instead of the Euclidean distance used in conventional Fuzzy C-Means clustering, has recently earned popularity in the research community. Like the conventional technique, it also suffers from inconsistent performance, because the initial centroids are again obtained from randomly initialized membership values of the objects. Our present work proposes a new method that applies the Subtractive clustering technique of Chiu as a preprocessor to Kernelized Fuzzy C-Means clustering. With this new method we try not only to remove the inconsistency of Kernelized Fuzzy C-Means clustering but also to handle situations where the number of clusters is not predetermined. We also provide a comparison of our method with the Subtractive clustering technique of Chiu and with Kernelized Fuzzy C-Means clustering, using two validity measures, namely Partition Coefficient and Clustering Entropy.
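
    For a Gaussian (RBF) kernel the kernel-induced squared distance has a particularly simple form, which is what gets substituted for the squared Euclidean distance in the fuzzy c-means membership update; a minimal sketch:

    ```python
    import numpy as np

    def rbf_kernel(x, v, sigma=1.0):
        """Gaussian kernel K(x, v)."""
        return np.exp(-np.sum((x - v) ** 2) / (2.0 * sigma ** 2))

    def kernel_distance_sq(x, v, sigma=1.0):
        # ||phi(x) - phi(v)||^2 = K(x,x) - 2 K(x,v) + K(v,v)
        #                       = 2 (1 - K(x,v)) for an RBF kernel
        return 2.0 * (1.0 - rbf_kernel(x, v, sigma))

    print(kernel_distance_sq(np.array([1.0, 0.0]), np.array([0.0, 0.0])))
    ```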

  20. Prediction of Human Phenotype Ontology terms by means of hierarchical ensemble methods.

    Science.gov (United States)

    Notaro, Marco; Schubach, Max; Robinson, Peter N; Valentini, Giorgio

    2017-10-12

    The prediction of human gene-abnormal phenotype associations is a fundamental step toward the discovery of novel genes associated with human disorders, especially when no genes are known to be associated with a specific disease. In this context the Human Phenotype Ontology (HPO) provides a standard categorization of the abnormalities associated with human diseases. While the problem of the prediction of gene-disease associations has been widely investigated, the related problem of gene-phenotypic feature (i.e., HPO term) associations has been largely overlooked, even if for most human genes no HPO term associations are known and despite the increasing application of the HPO to relevant medical problems. Moreover most of the methods proposed in literature are not able to capture the hierarchical relationships between HPO terms, thus resulting in inconsistent and relatively inaccurate predictions. We present two hierarchical ensemble methods that we formally prove to provide biologically consistent predictions according to the hierarchical structure of the HPO. The modular structure of the proposed methods, that consists in a "flat" learning first step and a hierarchical combination of the predictions in the second step, allows the predictions of virtually any flat learning method to be enhanced. The experimental results show that hierarchical ensemble methods are able to predict novel associations between genes and abnormal phenotypes with results that are competitive with state-of-the-art algorithms and with a significant reduction of the computational complexity. Hierarchical ensembles are efficient computational methods that guarantee biologically meaningful predictions that obey the true path rule, and can be used as a tool to improve and make consistent the HPO terms predictions starting from virtually any flat learning method. The implementation of the proposed methods is available as an R package from the CRAN repository.
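
    A toy sketch of one way to enforce such hierarchical consistency (the true path rule: an ancestor's score must not be smaller than its descendants'), by lifting each flat score to the maximum over its subtree; the miniature ontology and scores are invented, and the paper's ensemble rules are more elaborate.

    ```python
    # Toy ontology fragment: parent term -> child terms (hypothetical IDs).
    children = {
        "HP:A": ["HP:B", "HP:C"],
        "HP:B": [],
        "HP:C": ["HP:D"],
        "HP:D": [],
    }
    flat = {"HP:A": 0.3, "HP:B": 0.6, "HP:C": 0.2, "HP:D": 0.7}  # flat scores

    def consistent(term):
        """Lift each score to the maximum over its subtree, so that every
        ancestor scores at least as high as its descendants."""
        return max([flat[term]] + [consistent(child) for child in children[term]])

    scores = {term: consistent(term) for term in flat}
    print(scores)   # HP:A and HP:C are lifted to 0.7; hierarchy-consistent
    ```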

  1. Exploring a model for finding meaning in the changing world of work (Part 3: Meaning as framing context

    Directory of Open Access Journals (Sweden)

    Daniel H. Burger

    2013-03-01

    Full Text Available Orientation: This article, the final in a series of three papers, locates organisational change, specifically within the context of individuals’ experience of ‘meaning’, as conceptualised in Viktor Frankl’s logotherapy. Research purpose: The purpose of this theoretical paper is to investigate the context of meaning in organisational change by exploring the relationship between meaning and change. Motivation for the study: Although literature on change management is available in abundance, very little research has been focussed on the micro-level issues pertaining to organisational change, and virtually no research relating to the ‘existential meaning’ context of such change could be found. Research design, approach and method: The study was conducted by means of a review of literature, guided by the theoretical perspectives of logotherapy. Main findings: Whilst systems to which individuals traditionally turned for meaning decline, organisations become increasingly important for employees’ experience of meaning. As organisational change threatens such meaning, resistance to change may occur, which inhibits organisations’ ability to change. Logotherapy provides a useful framework for understanding this meaning context, which could be utilised to inform frameworks to guide change implementation more successfully. Practical and managerial implications: An understanding of the role that meaning can play in causing − and hence reducing − resistance to change may be of great value to organisations attempting to implement change initiatives. Contribution: The value-add of the article is grounded on its exploration of the relatively uncharted territory of how the experience of meaning by employees may impact organisational change. This article therefore provides a novel perspective for conceptualising change. In addition, it suggests specific recommendations for utilising an understanding of the meaning change relationship with the

  2. Exploring a model for finding meaning in the changing world of work (Part 3: Meaning as framing context

    Directory of Open Access Journals (Sweden)

    Daniel H. Burger

    2013-03-01

    Full Text Available Orientation: This article, the final in a series of three papers, locates organisational change, specifically within the context of individuals’ experience of ‘meaning’, as conceptualised in Viktor Frankl’s logotherapy. Research purpose: The purpose of this theoretical paper is to investigate the context of meaning in organisational change by exploring the relationship between meaning and change. Motivation for the study: Although literature on change management is available in abundance, very little research has been focussed on the micro-level issues pertaining to organisational change, and virtually no research relating to the ‘existential meaning’ context of such change could be found. Research design, approach and method: The study was conducted by means of a review of literature, guided by the theoretical perspectives of logotherapy. Main findings: Whilst systems to which individuals traditionally turned for meaning decline, organisations become increasingly important for employees’ experience of meaning. As organisational change threatens such meaning, resistance to change may occur, which inhibits organisations’ ability to change. Logotherapy provides a useful framework for understanding this meaning context, which could be utilised to inform frameworks to guide change implementation more successfully. Practical and managerial implications: An understanding of the role that meaning can play in causing − and hence reducing − resistance to change may be of great value to organisations attempting to implement change initiatives. Contribution: The value-add of the article is grounded on its exploration of the relatively uncharted territory of how the experience of meaning by employees may impact organisational change. This article therefore provides a novel perspective for conceptualising change. In addition, it suggests specific recommendations for utilising an understanding of the meaning change relationship with the

  3. Study of confined many electron atoms by means of the POEP method

    International Nuclear Information System (INIS)

    Sarsa, A; Buendía, E; Gálvez, F J

    2014-01-01

    The electronic structure of confined atoms under impenetrable spherical walls is studied by means of the parameterized optimized effective potential method. A cut-off factor is employed to account for Dirichlet boundary conditions. Two atomic basis sets commonly used for describing free atoms have been analyzed within this scheme. The accuracy of the method is similar to that achieved for the free atoms. The ground state electrostatic multiplet of the carbon atom as well as the ground state and both the [Ar]4s3d 7 5 F and [Ar]3d 8 3 F excited states of the iron atom are studied. The behaviour of the energy levels with the confinement has been analyzed in terms of the different contributions to the total energy of the atom. For the iron atom, the effect of confinement on the outermost orbitals is studied. (paper)

  4. Mean-field lattice trees

    NARCIS (Netherlands)

    Borgs, C.; Chayes, J.T.; Hofstad, van der R.W.; Slade, G.

    1999-01-01

    We introduce a mean-field model of lattice trees based on embeddings into Z^d of abstract trees having a critical Poisson offspring distribution. This model provides a combinatorial interpretation for the self-consistent mean-field model introduced previously by Derbez and Slade [9], and provides an

  5. Hopfield-K-Means clustering algorithm: A proposal for the segmentation of electricity customers

    Energy Technology Data Exchange (ETDEWEB)

    Lopez, Jose J.; Aguado, Jose A.; Martin, F.; Munoz, F.; Rodriguez, A.; Ruiz, Jose E. [Department of Electrical Engineering, University of Malaga, C/ Dr. Ortiz Ramos, sn., Escuela de Ingenierias, 29071 Malaga (Spain)

    2011-02-15

    Customer classification aims at providing electric utilities with a volume of information to enable them to establish different types of tariffs. Several methods have been used to segment electricity customers, including, among others, the hierarchical clustering, Modified Follow the Leader and K-Means methods. These, however, entail problems with the pre-allocation of the number of clusters (Follow the Leader), randomness of the solution (K-Means) and improvement of the solution obtained (hierarchical algorithm). Another segmentation method used is Hopfield's autonomous recurrent neural network, although the solution obtained is only guaranteed to be a local minimum. In this paper, we present the Hopfield-K-Means algorithm in order to overcome these limitations. This approach eliminates the randomness of the initial solution provided by K-Means based algorithms and it moves closer to the global optimum. The proposed algorithm is also compared against other customer segmentation and characterization techniques, on the basis of relative validation indexes. Finally, the results obtained by this algorithm with a set of 230 electricity customers (residential, industrial and administrative) are presented. (author)

  6. Hopfield-K-Means clustering algorithm: A proposal for the segmentation of electricity customers

    International Nuclear Information System (INIS)

    Lopez, Jose J.; Aguado, Jose A.; Martin, F.; Munoz, F.; Rodriguez, A.; Ruiz, Jose E.

    2011-01-01

    Customer classification aims at providing electric utilities with a volume of information to enable them to establish different types of tariffs. Several methods have been used to segment electricity customers, including, among others, the hierarchical clustering, Modified Follow the Leader and K-Means methods. These, however, entail problems with the pre-allocation of the number of clusters (Follow the Leader), randomness of the solution (K-Means) and improvement of the solution obtained (hierarchical algorithm). Another segmentation method used is Hopfield's autonomous recurrent neural network, although the solution obtained is only guaranteed to be a local minimum. In this paper, we present the Hopfield-K-Means algorithm in order to overcome these limitations. This approach eliminates the randomness of the initial solution provided by K-Means based algorithms and it moves closer to the global optimum. The proposed algorithm is also compared against other customer segmentation and characterization techniques, on the basis of relative validation indexes. Finally, the results obtained by this algorithm with a set of 230 electricity customers (residential, industrial and administrative) are presented. (author)

  7. Preliminary Assessment of ATR-C Capabilities to Provide Integral Benchmark Data for Key Structural/Matrix Materials that May be Used for Nuclear Data Testing and Analytical Methods Validation

    Energy Technology Data Exchange (ETDEWEB)

    John D. Bess

    2009-07-01

    The purpose of this document is to identify some suggested types of experiments that can be performed in the Advanced Test Reactor Critical (ATR-C) facility. A fundamental computational investigation is provided to demonstrate possible integration of experimental activities in the ATR-C with the development of benchmark experiments. Criticality benchmarks performed in the ATR-C could provide integral data for key matrix and structural materials used in nuclear systems. Results would then be utilized in the improvement of nuclear data libraries and as a means for analytical methods validation. It is proposed that experiments consisting of well-characterized quantities of materials be placed in the Northwest flux trap position of the ATR-C. The reactivity worth of the material could be determined and computationally analyzed through comprehensive benchmark activities including uncertainty analyses. Experiments were modeled in the available benchmark model of the ATR using MCNP5 with the ENDF/B-VII.0 cross section library. A single bar (9.5 cm long, 0.5 cm wide, and 121.92 cm high) of each material could provide sufficient reactivity difference in the core geometry for computational modeling and analysis. However, to provide increased opportunity for the validation of computational models, additional bars of material placed in the flux trap would increase the effective reactivity up to a limit of 1$ insertion. For simplicity in assembly manufacture, approximately four bars of material could provide a means for additional experimental benchmark configurations, except in the case of strong neutron absorbers and many materials providing positive reactivity. Future tasks include the cost analysis and development of the experimental assemblies, including means for the characterization of the neutron flux and spectral indices. Oscillation techniques may also serve to provide additional means for experimentation and validation of computational methods and acquisition of

  8. Prediction based on mean subset

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Brown, P. J.; Madsen, Henrik

    2002-01-01

    Shrinkage methods have traditionally been applied in prediction problems. In this article we develop a shrinkage method (mean subset) that forms an average of regression coefficients from individual subsets of the explanatory variables. A Bayesian approach is taken to derive an expression of how the coefficient vectors from each subset should be weighted. It is not computationally feasible to calculate the mean subset coefficient vector for larger problems, and thus we suggest an algorithm to find an approximation to the mean subset coefficient vector. In a comprehensive Monte Carlo simulation study, it is found that the proposed mean subset method has superior prediction performance than prediction based on the best subset method, and in some settings also better than the ridge regression and lasso methods. The conclusions drawn from the Monte Carlo study are corroborated in an example in which prediction...
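
    A rough sketch of the idea: fit a regression on every subset of predictors and average the zero-padded coefficient vectors under data-driven weights. BIC-type weights are used below as a stand-in for the paper's Bayesian weights.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    n, p = 50, 4
    X = rng.normal(size=(n, p))
    y = X @ np.array([1.5, 0.0, -2.0, 0.0]) + rng.normal(size=n)

    # OLS on every nonempty subset; weight the zero-padded coefficients.
    coefs, weights = [], []
    for k in range(1, p + 1):
        for S in combinations(range(p), k):
            Xs = X[:, S]
            beta_s, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta_s) ** 2)
            bic = n * np.log(rss / n) + k * np.log(n)
            beta = np.zeros(p)
            beta[list(S)] = beta_s
            coefs.append(beta)
            weights.append(np.exp(-0.5 * bic))

    weights = np.array(weights) / np.sum(weights)
    beta_mean_subset = np.average(np.array(coefs), axis=0, weights=weights)
    print(beta_mean_subset.round(2))   # weighted average across all subsets
    ```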

   9. Testing a statistical method of global mean paleotemperature estimations in a long climate simulation

    Energy Technology Data Exchange (ETDEWEB)

    Zorita, E.; Gonzalez-Rouco, F. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    2001-07-01

    Current statistical methods of reconstructing the climate of the last centuries are based on statistical models linking climate observations (temperature, sea-level pressure) and proxy-climate data (tree-ring chronologies, ice-core isotope concentrations, varved sediments, etc.). These models are calibrated in the instrumental period, and the longer time series of proxy data are then used to estimate the past evolution of the climate variables. Using such methods, the global mean temperature of the last 600 years has recently been estimated. In this work this method of reconstruction is tested using data from a very long simulation with a climate model. The test allows the errors of the estimations to be quantified as a function of the number of proxy records and the time scale at which the estimations are probably reliable. (orig.)
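
    A minimal sketch of such a test, assuming invented data: in a long simulation the 'true' temperatures are known everywhere, so a proxy-based reconstruction calibrated on the recent (instrumental) segment can be scored against them.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # Hypothetical 600-year experiment: "true" temperatures are known (as
    # in a climate-model run); a proxy record tracks them imperfectly.
    temp_true = rng.normal(0.0, 0.3, 600)
    proxy = 2.5 * temp_true + rng.normal(0.0, 0.5, 600)

    # Calibrate a linear model in the "instrumental" period (last 100 years)...
    slope, intercept = np.polyfit(proxy[-100:], temp_true[-100:], 1)
    # ...reconstruct the earlier 500 years from the proxy alone, and score
    # the reconstruction against the known model temperatures.
    temp_rec = slope * proxy[:500] + intercept
    rmse = np.sqrt(np.mean((temp_rec - temp_true[:500]) ** 2))
    print(f"reconstruction RMSE: {rmse:.3f} deg C")
    ```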

  10. A study of Monte Carlo methods for weak approximations of stochastic particle systems in the mean-field?

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-01-08

    I discuss using single level and multilevel Monte Carlo methods to compute quantities of interest of a stochastic particle system in the mean-field. In this context, the stochastic particles follow a coupled system of Ito stochastic differential equations (SDEs). Moreover, this stochastic particle system converges to a stochastic mean-field limit as the number of particles tends to infinity. I start by recalling the results of applying different versions of Multilevel Monte Carlo (MLMC) for particle systems, both with respect to time steps and the number of particles and using a partitioning estimator. Next, I expand on these results by proposing the use of our recent Multi-index Monte Carlo method to obtain improved convergence rates.

  11. A study of Monte Carlo methods for weak approximations of stochastic particle systems in the mean-field?

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-01-01

    I discuss using single level and multilevel Monte Carlo methods to compute quantities of interest of a stochastic particle system in the mean-field. In this context, the stochastic particles follow a coupled system of Ito stochastic differential equations (SDEs). Moreover, this stochastic particle system converges to a stochastic mean-field limit as the number of particles tends to infinity. I start by recalling the results of applying different versions of Multilevel Monte Carlo (MLMC) for particle systems, both with respect to time steps and the number of particles and using a partitioning estimator. Next, I expand on these results by proposing the use of our recent Multi-index Monte Carlo method to obtain improved convergence rates.
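
    A compact sketch of the MLMC telescoping idea for an SDE, with a plain geometric Brownian motion standing in for the mean-field particle system (the partitioning and Multi-index estimators are not shown); sample allocations per level are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def euler_pair(level, n_samples, T=1.0, x0=1.0, a=-0.5, b=0.4):
        """Simulate dX = a*X dt + b*X dW with Euler on a fine grid
        (2^level steps) and on the coupled coarse grid (half as many
        steps) driven by the same Brownian increments."""
        n_f = 2 ** level
        dt = T / n_f
        dW = rng.normal(0.0, np.sqrt(dt), (n_samples, n_f))
        xf = np.full(n_samples, x0)
        for i in range(n_f):
            xf = xf + a * xf * dt + b * xf * dW[:, i]
        if level == 0:
            return xf, np.zeros(n_samples)
        xc = np.full(n_samples, x0)
        for i in range(n_f // 2):
            dWc = dW[:, 2 * i] + dW[:, 2 * i + 1]   # coarse increment
            xc = xc + a * xc * (2 * dt) + b * xc * dWc
        return xf, xc

    # Telescoping MLMC estimator of E[X_T]: level 0 plus fine-minus-coarse
    # corrections, with decreasing sample sizes per level.
    estimate = 0.0
    for level, n in enumerate([4000, 2000, 1000, 500]):
        xf, xc = euler_pair(level, n)
        estimate += np.mean(xf) if level == 0 else np.mean(xf - xc)
    print(f"MLMC estimate: {estimate:.4f}  (exact: {np.exp(-0.5):.4f})")
    ```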

  12. Computing daily mean streamflow at ungaged locations in Iowa by using the Flow Anywhere and Flow Duration Curve Transfer statistical methods

    Science.gov (United States)

    Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.

    2012-01-01

    The U.S. Geological Survey (USGS) maintains approximately 148 real-time streamgages in Iowa for which daily mean streamflow information is available, but daily mean streamflow data commonly are needed at locations where no streamgages are present. Therefore, the USGS conducted a study as part of a larger project in cooperation with the Iowa Department of Natural Resources to develop methods to estimate daily mean streamflow at locations in ungaged watersheds in Iowa by using two regression-based statistical methods. The regression equations for the statistical methods were developed from historical daily mean streamflow and basin characteristics from streamgages within the study area, which includes the entire State of Iowa and adjacent areas within a 50-mile buffer of Iowa in neighboring states. Results of this study can be used with other techniques to determine the best method for application in Iowa and can be used to produce a Web-based geographic information system tool to compute streamflow estimates automatically. The Flow Anywhere statistical method is a variation of the drainage-area-ratio method, which transfers same-day streamflow information from a reference streamgage to another location by using the daily mean streamflow at the reference streamgage and the drainage-area ratio of the two locations. The Flow Anywhere method modifies the drainage-area-ratio method in order to regionalize the equations for Iowa and determine the best reference streamgage from which to transfer same-day streamflow information to an ungaged location. Data used for the Flow Anywhere method were retrieved for 123 continuous-record streamgages located in Iowa and within a 50-mile buffer of Iowa. The final regression equations were computed by using either left-censored regression techniques with a low limit threshold set at 0.1 cubic feet per second (ft3/s) and the daily mean streamflow for the 15th day of every other month, or by using an ordinary-least-squares multiple
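
    The underlying transfer idea is a drainage-area ratio; a minimal sketch follows (the Flow Anywhere method regionalizes this with regression and selects the best reference streamgage, neither of which is shown):

    ```python
    import numpy as np

    def flow_anywhere_estimate(q_ref, area_ungaged, area_ref, exponent=1.0):
        """Transfer same-day daily mean streamflow from a reference
        streamgage to an ungaged site via a drainage-area ratio; the
        plain ratio (exponent = 1) shows the underlying idea."""
        return q_ref * (area_ungaged / area_ref) ** exponent

    # Hypothetical reference flows (ft3/s) and drainage areas (mi2).
    q_ref = np.array([210.0, 185.0, 400.0])
    print(flow_anywhere_estimate(q_ref, area_ungaged=92.0, area_ref=154.0))
    ```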

  13. The Interplay of Text, Meaning and Practice

    DEFF Research Database (Denmark)

    Kärreman, Dan; Levay, Charlotta

    2017-01-01

    Context: The study of discourses (i.e. verbal interactions or written accounts) is increasingly used in social sciences to gain insight into issues connected to discourse, such as meanings, behaviours and actions. This paper situates discourse analysis in medical education, based on a framework … settings, with a particular focus on the field of medical education. Methods: The study is based on a literature analysis of discourse analysis approaches published in Medical Education. Results: Findings suggest that empirical studies through discourse analysis can be heuristically understood in terms of the links between text, practices and meaning. Conclusions: Discourse analysis provides a more strongly supported argument when it is possible to defend claims on three levels: practice, using observational data; meaning, using ethnographic data; and text, using conversational and textual data.

  14. A novel intrusion detection method based on OCSVM and K-means recursive clustering

    Directory of Open Access Journals (Sweden)

    Leandros A. Maglaras

    2015-01-01

    Full Text Available In this paper we present an intrusion detection module capable of detecting malicious network traffic in a SCADA (Supervisory Control and Data Acquisition) system, based on the combination of a One-Class Support Vector Machine (OCSVM) with RBF kernel and recursive k-means clustering. Important parameters of the OCSVM, such as the Gaussian width σ and the parameter ν, affect the performance of the classifier. Tuning of these parameters is of great importance in order to avoid false positives and overfitting. The combination of OCSVM with recursive k-means clustering leads the proposed intrusion detection module to distinguish real alarms from possible attacks regardless of the values of the parameters σ and ν, making it ideal for real-time intrusion detection mechanisms for SCADA systems. Extensive simulations have been conducted with datasets extracted from small and medium sized HTB SCADA testbeds, in order to compare the accuracy, false alarm rate and execution time against the baseline OCSVM method.
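
    A minimal sketch of the two-stage idea with scikit-learn, on synthetic traffic features: an RBF one-class SVM raises alarms, and k-means then groups the alarms (a single pass stands in for the recursive clustering). The scikit-learn parameter nu plays the role of ν, and gamma relates to the Gaussian width via gamma = 1/(2σ²).

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    normal_traffic = rng.normal(0.0, 1.0, (500, 6))   # training: normal traffic
    test_traffic = np.vstack([rng.normal(0.0, 1.0, (80, 6)),   # normal
                              rng.normal(4.0, 1.0, (20, 6))])  # anomalous

    # One-class SVM with RBF kernel, trained on normal traffic only.
    ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.1).fit(normal_traffic)
    alarms = test_traffic[ocsvm.predict(test_traffic) == -1]

    # Cluster the alarms to separate recurring attack patterns from
    # isolated false positives.
    if len(alarms) >= 2:
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(alarms)
        print("alarms per cluster:", np.bincount(km.labels_))
    ```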

  15. An image segmentation method based on fuzzy C-means clustering and Cuckoo search algorithm

    Science.gov (United States)

    Wang, Mingwei; Wan, Youchuan; Gao, Xianjun; Ye, Zhiwei; Chen, Maolin

    2018-04-01

    Image segmentation is a significant step in image analysis and machine vision. Many approaches have been presented on this topic; among them, fuzzy C-means (FCM) clustering is one of the most widely used methods, owing to its high efficiency and its ability to handle ambiguity in images. However, the success of FCM is not guaranteed, because it easily gets trapped in local optimal solutions. Cuckoo search (CS) is a novel evolutionary algorithm that has been tested on several optimization problems and proved to be highly efficient. Therefore, a new segmentation technique blending FCM with the CS algorithm is put forward in this paper. Further, the proposed method has been evaluated on several images and compared with other existing FCM techniques, such as genetic algorithm (GA) based FCM and particle swarm optimization (PSO) based FCM, in terms of fitness value. Experimental results indicate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.
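
    A compact sketch of the hybrid idea: cuckoo search with Lévy flights looks for cluster centers that minimize a fuzzy c-means objective on synthetic 1-D gray levels. Step sizes, the abandon fraction and the toy objective are assumptions, not the paper's configuration.

    ```python
    import numpy as np
    from math import gamma, sin, pi

    rng = np.random.default_rng(5)

    def levy_step(shape, beta=1.5):
        """Mantegna's algorithm for Levy-flight step lengths."""
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma, shape)
        v = rng.normal(0.0, 1.0, shape)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_search(fitness, dim, n_nests=15, n_iter=200, pa=0.25,
                      lb=0.0, ub=255.0):
        nests = rng.uniform(lb, ub, (n_nests, dim))
        fit = np.array([fitness(x) for x in nests])
        for _ in range(n_iter):
            best = nests[np.argmin(fit)]
            # New solutions by Levy flights around the current best nest.
            new = np.clip(nests + 0.01 * levy_step((n_nests, dim)) * (nests - best),
                          lb, ub)
            new_fit = np.array([fitness(x) for x in new])
            better = new_fit < fit
            nests[better], fit[better] = new[better], new_fit[better]
            # Abandon a fraction pa of the worst nests.
            worst = fit.argsort()[-int(pa * n_nests):]
            nests[worst] = rng.uniform(lb, ub, (len(worst), dim))
            fit[worst] = np.array([fitness(x) for x in nests[worst]])
        return nests[np.argmin(fit)]

    # Toy 1-D "image": gray levels from three tissue-like populations.
    gray = np.concatenate([rng.normal(60, 8, 300), rng.normal(140, 10, 300),
                           rng.normal(210, 9, 300)])

    def fcm_objective(centers, m=2.0):
        """FCM objective for given centers, memberships set optimally."""
        d = np.abs(gray[None, :] - centers[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)
        return float(((u ** m) * d ** 2).sum())

    print(np.sort(cuckoo_search(fcm_objective, dim=3)))   # ~[60, 140, 210]
    ```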

  16. Project-Based Method as an Effective Means of Interdisciplinary Interaction While Teaching a Foreign Language

    Science.gov (United States)

    Bondar, Irina Alekseevna; Kulbakova, Renata Ivanovna; Svintorzhitskaja, Irina Andreevna; Pilat, Larisa Pavlovna; Zavrumov, Zaur Aslanovich

    2016-01-01

    The article explains how to use a project-based method as an effective means of interdisciplinary interaction when teaching a foreign language, using the example of the Institute of Service, Tourism and Design (branch) of the North Caucasus Federal University (Pyatigorsk, Stavropol Territory, Russia). The article sets out the main objectives of the…

  17. Forecasting hourly global solar radiation using hybrid k-means and nonlinear autoregressive neural network models

    International Nuclear Information System (INIS)

    Benmouiza, Khalil; Cheknane, Ali

    2013-01-01

    Highlights: • An unsupervised clustering algorithm combined with a neural network model is explored. • Forecasting results for solar radiation time series are simulated and their performance compared. • A new method combining the k-means algorithm and a NAR network is proposed to provide better prediction results. - Abstract: In this paper, we review our work on forecasting hourly global horizontal solar radiation based on the combination of the unsupervised k-means clustering algorithm and artificial neural networks (ANN). The k-means algorithm focuses on extracting useful information from the data, with the aim of modeling the time series behavior and finding patterns of the input space by clustering the data. On the other hand, nonlinear autoregressive (NAR) neural networks are powerful computational models for modeling and forecasting nonlinear time series. Taking advantage of both methods, a new method combining the k-means algorithm and the NAR network is proposed to provide better forecasting results
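
    A minimal sketch of the two-step scheme: k-means groups days with similar hourly radiation profiles, and a per-cluster autoregressive fit stands in for the NAR network (the data are synthetic and the linear AR is a deliberate simplification).

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(6)
    # Hypothetical hourly global radiation for 200 days (24 values per day):
    # a bell-shaped daily profile scaled by a random "clearness" factor.
    profile = np.sin(np.linspace(0.0, np.pi, 24))
    days = np.clip(profile * rng.uniform(0.3, 1.0, (200, 1))
                   + rng.normal(0.0, 0.03, (200, 24)), 0.0, None)

    # Step 1: k-means groups days with similar radiation patterns
    # (e.g. clear versus cloudy regimes).
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(days)

    # Step 2: one forecaster per cluster; a linear AR(3) fit stands in
    # for the nonlinear autoregressive (NAR) network used in the paper.
    lag = 3
    for c in range(3):
        series = days[km.labels_ == c].ravel()
        X = np.column_stack([series[i:len(series) - lag + i] for i in range(lag)])
        y = series[lag:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(f"cluster {c}: AR({lag}) coefficients {coef.round(2)}")
    ```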

  18. Validation of mean glandular dose values provided by a digital breast tomosynthesis system in Brazil

    International Nuclear Information System (INIS)

    Beraldo O, B.; Paixao, L.; Donato da S, S.; Araujo T, M. H.; Nogueira, M. S.

    2014-08-01

    Digital breast tomosynthesis (DBT) is an emerging imaging modality that provides quasi-three-dimensional structural information of the breast and has strong promise to improve the differentiation of normal tissue and suspicious masses by reducing tissue overlaps. DBT images are reconstructed from a sequence of low-dose X-ray projections of the breast acquired at a small number of angles over a limited angular range. The Hologic Selenia Dimensions system is equipped with an amorphous selenium (a-Se) detector layer of 250 μm thickness and a 70 μm pixel pitch. Studies are needed to determine the radiation dose to patients undergoing this emerging procedure. The mean glandular dose (D_G) is the dosimetric quantity used in quality control of mammographic systems. The aim of this work is to validate the D_G values for different breast thicknesses provided by a Hologic Selenia Dimensions system using the DBT mode against the corresponding results obtained with a calibrated 90X5-6M-model Radcal ionization chamber. D_G values were derived from incident air kerma (K_i) measurements and tabulated conversion coefficients that depend on the half value layer (HVL) of the X-ray spectrum. Voltage and tube loading values were recorded in irradiations using the W/Al anode/filter combination, the automatic exposure control mode and polymethyl methacrylate (PMMA) slabs simulating different breast thicknesses. For the K_i measurements, the ionization chamber was positioned at 655 mm from the focus and the same radiographic technique values were selected in manual mode. D_G values for a complete procedure ranged from 0.9 ± 0.1 to 3.7 ± 0.4 mGy. The results for different breast thicknesses are in accordance with the values obtained from DBT images and with the acceptable levels established by the Commission of the European Communities (CEC) and the International Atomic Energy Agency (IAEA). This work contributes to

  19. Validation of mean glandular dose values provided by a digital breast tomosynthesis system in Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Beraldo O, B.; Paixao, L.; Donato da S, S. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Post-graduation in Sciences and Technology of Radiations Minerals and Materials, Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte (Brazil); Araujo T, M. H. [Dr Maria Helena Araujo Teixeira Clinic, Guajajaras 40, 30180-100 Belo Horizonte (Brazil); Nogueira, M. S., E-mail: bbo@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte (Brazil)

    2014-08-15

    Digital breast tomosynthesis (DBT) is an emerging imaging modality that provides quasi-three-dimensional structural information of the breast and has strong promise to improve the differentiation of normal tissue and suspicious masses by reducing tissue overlap. DBT images are reconstructed from a sequence of low-dose X-ray projections of the breast acquired at a small number of angles over a limited angular range. The Hologic Selenia Dimensions system is equipped with an amorphous selenium (a-Se) detector layer of 250 μm thickness and a 70 μm pixel pitch. Studies are needed to determine the radiation dose to patients undergoing this emerging procedure and to compare it with the results obtained from DBT images. The mean glandular dose (D_G) is the dosimetric quantity used in quality control of mammographic systems. The aim of this work is to validate the D_G values for different breast thicknesses provided by a Hologic Selenia Dimensions system in DBT mode against results obtained with a calibrated 90X5-6M-model Radcal ionization chamber. D_G values were derived from incident air kerma (K_i) measurements and tabulated conversion coefficients that depend on the half value layer (HVL) of the X-ray spectrum. Voltage and tube loading values were recorded in irradiations using a W/Al anode/filter combination, automatic exposure control mode and polymethyl methacrylate (PMMA) slabs which simulate different breast thicknesses. For K_i measurements, the ionization chamber was positioned at 655 mm from the focus and the same radiographic technique values were selected in manual mode. D_G values for a complete procedure ranged from 0.9 ± 0.1 to 3.7 ± 0.4 mGy. The results for different breast thicknesses are in accordance with values obtained from DBT images and with the acceptable levels established by the Commission of the European Communities (CEC) and the International Atomic Energy Agency (IAEA).

  20. CHARACTERISTICS OF PRODUCTS MADE OF 17-4PH STEEL BY MEANS OF 3D PRINTING METHOD

    Directory of Open Access Journals (Sweden)

    Mariusz WALCZAK

    2016-09-01

    Full Text Available The article presents the results of tests of 17-4PH steel fabricated by laser additive manufacturing (LAM), specifically direct metal laser sintering (DMLS). This grade of steel is characterized above all by excellent stress corrosion resistance and is applied as a construction material in the chemical, aircraft, medical and mould-making industries. 3D metal printing is a relatively new method that can significantly change the structural properties of these materials at printing parameters predetermined by the printer's manufacturer for the "offline" printing mode. To this end, the authors carried out an analysis of chemical composition, SEM tests and tests of product surface roughness. Furthermore, the products were subjected to X-ray analysis by means of computed tomography (X-ray CT). Structural discontinuities were found in the upper layer and inside the printed parts subjected to testing.

  1. A fuzzy inventory model with acceptable shortage using graded mean integration value method

    Science.gov (United States)

    Saranya, R.; Varadarajan, R.

    2018-04-01

    In many inventory models uncertainty is due to fuzziness, and fuzziness is the closest possible approach to reality. In this paper, we propose a fuzzy inventory model with acceptable shortage which is completely backlogged. We fuzzify the carrying cost, backorder cost and ordering cost using triangular and trapezoidal fuzzy numbers to obtain the fuzzy total cost. The purpose of our study is to defuzzify the total cost function by the graded mean integration value method. Further, a numerical example is given to demonstrate the developed crisp and fuzzy models.
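
    The graded mean integration value used for defuzzification has a standard closed form for triangular and trapezoidal fuzzy numbers (Chen and Hsieh). The sketch below shows that step only, on a made-up fuzzy cost; the paper's total cost function is not reproduced.

      # Graded mean integration representations used for defuzzification.
      def graded_mean_triangular(a, b, c):
          """P(A) = (a + 4b + c) / 6 for a triangular fuzzy number (a, b, c)."""
          return (a + 4 * b + c) / 6

      def graded_mean_trapezoidal(a, b, c, d):
          """P(A) = (a + 2b + 2c + d) / 6 for a trapezoidal fuzzy number."""
          return (a + 2 * b + 2 * c + d) / 6

      # Example: a fuzzy ordering cost of "about 100" per order.
      print(graded_mean_triangular(90, 100, 115))       # 100.83...
      print(graded_mean_trapezoidal(90, 95, 105, 115))  # 100.83...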

  2. Means and method for controlling the neutron output of a neutron generator tube

    International Nuclear Information System (INIS)

    1977-01-01

    A means and method for energizing and regulating a neutron generator tube is described. The tube has a target, an ion source and a replenisher. A negative high voltage is applied to the target and the target current is monitored. A constant current from a constant current source is divided into a shunt current and a replenisher current in accordance with the target current. The replenisher current is applied to the replenisher in the neutron generator tube so as to control the neutron output in accordance with the target current. (C.F.)

  3. Dante, psychoanalysis, and the (erotic) meaning of meaning.

    Science.gov (United States)

    Hatcher, E R

    1990-01-01

    The author observes a resemblance between (1) the "polysemous" technique of imputing meaning to reality practiced in medieval biblical studies and in Dante's writing and (2) the technique of interpretation in contemporary psychoanalysis. She explores the roots of this resemblance in the development of intellectual history and provides examples of polysemous meanings in Dante's Divine Comedy, which is in part an autobiographical journey of self-reflection and self-realization (like psychoanalysis). She then suggests some implications of this resemblance for contemporary psychiatry.

  4. [Means and methods of personal hygiene in the experiment with 520-day isolation].

    Science.gov (United States)

    Shumilina, G A; Shumilina, I V; Solov'eva, S O

    2013-01-01

    Six volunteers (3 Russians, a Frenchman, an Italian and a Chinese) participated in an assessment of the contribution of sanitation and housekeeping provisions to their wellbeing during 520-day isolation and confinement. The subject of the study was the quality and sufficiency of housekeeping agents and procedures, as well as more than 60 items of personal hygiene. The sanitation and housekeeping monitoring involved clinical, hygienic and microbiological methods, and also consideration of crew comments on the items at their disposal and the recommended procedures. Based on the analysis of the functional condition of the integument and oral cavity and entries in the questionnaires, i.e., objective data and subjective feelings, all test subjects remained in an invariably good state. Owing to the application of the selected hygienic means and methods, the microbial status of the crew was stable throughout the 520-day isolation.

  5. Non-destructive Determination of Martensitic Content by Means of Magnetic Methods

    Energy Technology Data Exchange (ETDEWEB)

    Niffenegger, M.; Bauer, R.; Kalkhof, D

    2003-07-01

    The detection of material degradation at a pre-cracked stage would be very advantageous. Therefore the main objective of the EC 5th Framework Programme project CRETE (Contract No. FIS5-1999-00280) was to assess the capability and the reliability of innovative NDT inspection techniques for the detection of material degradation induced by low cycle fatigue (LCF) and neutron irradiation of metastable austenitic and ferritic low-alloy steel. Within work packages WP6 and WP7, several project partners tested aged or irradiated samples using various advanced measuring techniques, such as acoustic, magnetic and thermoelectric ones. These indirect methods require a careful interpretation of the measured signal in terms of the micro-structural evolution due to ageing of the material. Therefore the material had to be characterized in its undamaged as well as in its damaged state. Based on results from former investigations, the main attention was paid to the content of the martensitic phase as an indicator of fatigue. Since most NDT methods are considered indirect methods for the detection of martensite, neutron diffraction was applied as a reference method for a quantitative determination of martensite. The material characterization performed at PSI and INSA de Lyon is published in PSI Bericht Nr. 03-17, July 2003 (ISSN 1019-0643). The present report only describes the magnetic methods applied at PSI for the detection of material degradation and summarises the results obtained in WP3 of the CRETE project. The report is issued simultaneously as a PSI report and as the CRETE work package WP3 report. At PSI the following magnetic methods were applied to LCF specimens: (1) Ferromaster for measuring the magnetic permeability, (2) eddy current impedance measurement by means of a Giant Magneto Resistance (GMR) sensor, (3) remanence field measurements using highly sensitive Fluxgate and SQUID sensors. With these methods three sets of fatigue specimens, made from different metastable

  6. THIRD PARTY LOGISTIC SERVICE PROVIDER SELECTION USING FUZZY AHP AND TOPSIS METHOD

    Directory of Open Access Journals (Sweden)

    Golam Kabir

    2012-03-01

    Full Text Available The use of third party logistics (3PL) service providers is increasing globally to accomplish strategic objectives. In an increasingly competitive environment, logistics strategic management requires a systematic and structured approach to gain a cutting edge over rivals. Logistics service provider selection is a complex multi-criteria decision making process, in which decision makers have to deal with the optimization of conflicting objectives such as quality, cost, and delivery time. In this paper, a fuzzy analytic hierarchy process (FAHP) approach based on the technique for order preference by similarity to ideal solution (TOPSIS) method is proposed for evaluating and selecting an appropriate logistics service provider, where the ratings of each alternative and the importance weight of each criterion are expressed as triangular fuzzy numbers.
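
    As a sketch of the TOPSIS half of the approach, the snippet below ranks three hypothetical 3PL candidates with crisp criterion weights; the fuzzy AHP weight derivation and the triangular fuzzy ratings used in the paper are not reproduced, and all scores are invented for illustration.

      # Compact TOPSIS: normalize, weight, measure distance to ideal solutions.
      import numpy as np

      def topsis(matrix, weights, benefit):
          """matrix: alternatives x criteria; benefit[j] True if larger is better."""
          m = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
          v = m * weights                               # weighted normalized matrix
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_pos = np.linalg.norm(v - ideal, axis=1)
          d_neg = np.linalg.norm(v - anti, axis=1)
          return d_neg / (d_pos + d_neg)                # closeness coefficient

      # Three 3PL candidates scored on quality (benefit), cost and delivery time (costs).
      scores = np.array([[7.0, 120.0, 48.0],
                         [9.0, 150.0, 36.0],
                         [6.0, 100.0, 60.0]])
      cc = topsis(scores, np.array([0.5, 0.3, 0.2]), np.array([True, False, False]))
      print(cc.argsort()[::-1])  # alternatives ranked best-first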

  7. K-means clustering versus validation measures: a data-distribution perspective.

    Science.gov (United States)

    Xiong, Hui; Wu, Junjie; Chen, Jian

    2009-04-01

    K-means is a well-known and widely used partitional clustering method. While there have been considerable research efforts to characterize the key features of the K-means clustering algorithm, further investigation is needed to understand how data distributions can have an impact on the performance of K-means clustering. To that end, in this paper, we provide a formal and organized study of the effect of skewed data distributions on K-means clustering. Along this line, we first formally illustrate that K-means tends to produce clusters of relatively uniform size, even if the input data have varied "true" cluster sizes. In addition, we show that some clustering validation measures, such as the entropy measure, may not capture this uniform effect and can provide misleading information on the clustering performance. Viewed in this light, we provide the coefficient of variation (CV) as a necessary criterion to validate the clustering results. Our findings reveal that K-means tends to produce clusters in which the variations of cluster sizes, as measured by CV, are in a range of about 0.3-1.0. Specifically, for data sets with large variation in "true" cluster sizes (e.g., CV > 1.0), K-means reduces the variation in the resultant cluster sizes to less than 1.0; in contrast, for data sets with small variation in "true" cluster sizes (e.g., CV < 0.3), K-means increases the variation in the resultant cluster sizes to greater than 0.3. In other words, in both cases K-means produces clustering results which are away from the "true" cluster distributions.
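
    The CV criterion is straightforward to apply in practice: compute the coefficient of variation of the cluster sizes K-means returns and compare it against the 0.3-1.0 range reported above. A small sketch with scikit-learn on synthetic blobs of deliberately unequal "true" sizes (our own example data, not the paper's):

      # Check the CV of K-means cluster sizes on data with skewed true sizes.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs

      X, _ = make_blobs(n_samples=[500, 100, 50], centers=None, random_state=1)
      labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

      sizes = np.bincount(labels).astype(float)
      cv = sizes.std(ddof=1) / sizes.mean()   # CV = s / mean of cluster sizes
      print(sizes, round(cv, 3))              # true sizes have CV > 1.0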

  8. On some methods to produce high-energy polarized electron beams by means of proton synchrotrons

    International Nuclear Information System (INIS)

    Bessonov, E.G.; Vazdik, Ya.A.

    1980-01-01

    Some methods for the production of high-energy polarized electron beams by means of proton synchrotrons are considered. These methods are based on the transfer by protons of a part of their energy to the polarized electrons of a thin target placed inside the working volume of the synchrotron. It is suggested to use as a polarized electron target magnetized crystalline iron in which proton channeling is realized, polarized atomic beams, or polarized plasma. It is shown that by this method one can produce polarized electron beams with energy of approximately 100 GeV, energy spread ±5% and intensity of approximately 10^7 electrons/s with polarization of approximately 30%, or with intensity of approximately 10^4-10^5 electrons/s and polarization of approximately 100% [ru]

  9. A science of meaning. Can behaviorism bring meaning to psychological science?

    Science.gov (United States)

    DeGrandpre, R J

    2000-07-01

    An argument is presented for making meaning a central dependent variable in psychological science. Principles of operant psychology are then interpreted as providing a basic foundation for a science of meaning. The emphasis here is on the generality of basic operant concepts, where learning is a process of meaning making that is governed largely by natural contingencies; reinforcement is an organic process in which environment-behavior relations are selected, defined here as a dialectical process of meaning making; and reinforcers are experiential consequences with acquired, ecologically derived meanings. The author concludes with a call for a more interdisciplinary science of psychology, focusing on the individual in society.

  10. Mean field methods for cortical network dynamics

    DEFF Research Database (Denmark)

    Hertz, J.; Lerchner, Alexander; Ahmadi, M.

    2004-01-01

    We review the use of mean field theory for describing the dynamics of dense, randomly connected cortical circuits. For a simple network of excitatory and inhibitory leaky integrate-and-fire neurons, we can show how the firing irregularity, as measured by the Fano factor, increases with the strength of the synapses in the network and with the value to which the membrane potential is reset after a spike. Generalizing the model to include conductance-based synapses gives insight into the connection between the firing statistics and the high-conductance state observed experimentally in visual

  11. Analysis of the material's expenditure of electric contacts by means of the isotopic method

    International Nuclear Information System (INIS)

    Farkash, K.

    1979-01-01

    Different radioisotopic methods have been developed to investigate the lifetime of weak-current and heavy-current contacts. The advantages of radioisotopic methods over other testing methods are that, owing to their sensitivity, they permit the determination of a low consumption of material; they permit the quantitative determination of the consumption of each element of the contact alloy separately; they make it possible to evaluate quantitatively the topological distribution of the matter separated from the contacts into the environment; and they allow the morphological characteristics of the matter separated from the contact to be determined. During the investigation of the lifetime of contacts the following were determined: the amount of contact material consumed; the composition of the consumed contact material; the composition of the matter separated from the contact; and the distribution of the separated matter as a function of the electrical parameters and the number of contact closings, for different contact compositions and under different conditions. The strength of the contact alloys in relation to the electrical load was investigated at a special test stand [ru]

  12. Multiple Attribute Group Decision-Making Methods Based on Trapezoidal Fuzzy Two-Dimensional Linguistic Partitioned Bonferroni Mean Aggregation Operators.

    Science.gov (United States)

    Yin, Kedong; Yang, Benshuo; Li, Xuemei

    2018-01-24

    In this paper, we investigate multiple attribute group decision making (MAGDM) problems where decision makers represent their evaluations of alternatives by trapezoidal fuzzy two-dimensional uncertain linguistic variables. To begin with, we introduce the definition, properties, expectation and operational laws of trapezoidal fuzzy two-dimensional linguistic information. Then, to improve the accuracy of decision making in cases where there are interrelationships among the attributes, we analyze the partitioned Bonferroni mean (PBM) operator in a trapezoidal fuzzy two-dimensional variable environment and develop two operators: the trapezoidal fuzzy two-dimensional linguistic partitioned Bonferroni mean (TF2DLPBM) aggregation operator and the trapezoidal fuzzy two-dimensional linguistic weighted partitioned Bonferroni mean (TF2DLWPBM) aggregation operator. Furthermore, we develop a novel method to solve MAGDM problems based on the TF2DLWPBM aggregation operator. Finally, a practical example is presented to illustrate the effectiveness of this method and to analyze the impact of different parameters on the decision-making results.
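
    For readers unfamiliar with the PBM structure, the sketch below evaluates a crisp (non-fuzzy) partitioned Bonferroni mean in one common form: the attributes are split into partitions, a Bonferroni mean captures interrelationships within each partition, and the partition values are averaged. Extending this to trapezoidal fuzzy two-dimensional linguistic values requires the operational laws developed in the paper; the attribute values and partitioning here are invented.

      # Crisp partitioned Bonferroni mean (each partition needs >= 2 attributes).
      import numpy as np

      def partitioned_bonferroni_mean(a, partitions, p=1.0, q=1.0):
          """a: 1-D array of attribute values; partitions: list of index lists."""
          parts = []
          for P in partitions:
              vals = a[P]
              n = len(vals)
              inner = sum(
                  vals[i] ** p * (sum(vals[j] ** q for j in range(n) if j != i) / (n - 1))
                  for i in range(n)
              ) / n
              parts.append(inner ** (1.0 / (p + q)))
          return float(np.mean(parts))

      # Four attributes, interrelated within {0, 1} and within {2, 3} but not across.
      a = np.array([0.6, 0.8, 0.4, 0.7])
      print(partitioned_bonferroni_mean(a, [[0, 1], [2, 3]]))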

  13. Method of App Selection for Healthcare Providers Based on Consumer Needs.

    Science.gov (United States)

    Lee, Jisan; Kim, Jeongeun

    2018-01-01

    Mobile device applications can be used to manage health. However, healthcare providers hesitate to use them because selection methods that consider the needs of health consumers and identify the most appropriate application are rare. This study aimed to create an effective method of identifying applications that address user needs. Women experiencing dysmenorrhea and premenstrual syndrome were the targeted users. First, we searched for related applications from two major sources of mobile applications. Brainstorming, mind mapping, and persona and scenario techniques were used to create a checklist of relevant criteria, which was used to rate the applications. Of the 2784 applications found, 369 were analyzed quantitatively. The top five candidates were then evaluated by three groups: application experts, clinical experts, and potential users. All three groups ranked one application the highest; however, the remaining rankings differed. The results of this study suggest that the method created is useful because it considers not only the needs of various users but also the knowledge of application and clinical experts. This study proposes a method for finding and using the best among existing applications and highlights the need for nurses who can understand and combine the opinions of users and of application and clinical experts.

  14. Method for providing a low density high strength polyurethane foam

    Science.gov (United States)

    Whinnery, Jr., Leroy L.; Goods, Steven H.; Skala, Dawn M.; Henderson, Craig C.; Keifer, Patrick N.

    2013-06-18

    Disclosed is a method for making a polyurethane closed-cell foam material exhibiting a bulk density below 4 lb/ft^3 and high strength. The present embodiment uses the reaction product of a modified MDI and a sucrose/glycerine based polyether polyol resin, wherein a small measured quantity of the polyol resin is "pre-reacted" with a larger quantity of the isocyanate in a defined ratio such that when the necessary remaining quantity of the polyol resin is added to the "pre-reacted" resin together with a tertiary amine catalyst and water as a blowing agent, the polymerization proceeds slowly enough to provide a stable foam body.

  15. MODEL FOR FORMATION OF ENTREPRENEUR’S STYLE THINKING AMONG STUDENTS OF SECONDARY SCHOOLS PROVIDING GENERAL EDUCATION WHILE USING MEANS THAT DEVELOP SOCIAL AND PEDAGOGICAL ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    A. M. Gorodovich

    2008-01-01

    Full Text Available The paper raises problems pertaining to the formation of entrepreneurial competence among students of secondary schools providing general education while using means that develop the social and pedagogical environment.

  16. Elementary methods for statistical systems, mean field, large-n, and duality

    International Nuclear Information System (INIS)

    Itzykson, C.

    1983-01-01

    Renormalizable field theories are singled out by such precise constraints that regularization schemes must be used to break these invariances. Statistical methods can be adapted to these problems where asymptotically free models fail. This lecture surveys approximation schemes developed in the context of statistical mechanics. The confluence point of statistical mechanics and field theory is the use of discretized path integrals, where continuous space-time has been replaced by a regular lattice. Dynamical variables, a Boltzmann weight factor, and boundary conditions are the ingredients. Mean field approximations -- field equations, the random field transform, and gauge invariant systems -- are surveyed. In the large-N limit, vector models are found to simplify tremendously. The reasons why matrix models drawn from SU(n) gauge theories do not simplify are discussed. In the epilogue, random curves versus random surfaces are offered as an example where global and local symmetries are not alike.

  17. Brand Meaning Cocreation

    DEFF Research Database (Denmark)

    Tierney, Kieran D.; Karpen, Ingo; Westberg, Kate

    2016-01-01

    Purpose: The purpose of this paper is to consolidate and advance the understanding of brand meaning and the evolving process by which it is determined by introducing and explicating the concept of brand meaning cocreation (BMCC). Design/methodology/approach: In-depth review and integration of literature from branding, cocreation, service systems, and practice theory. To support deep theorizing, the authors also examine the role of institutional logics in the BMCC process in framing interactions and brand meaning outcomes. Findings: Prior research is limited in that it neither maps the process of cocreation within which meanings emerge nor provides theoretical conceptualizations of brand meaning or the process of BMCC. While the literature acknowledges that brand meaning is influenced by multiple interactions, their nature and how they contribute to BMCC have been overlooked. Research limitations

  18. The nuclear N-body problem and the effective interaction in self-consistent mean-field methods

    International Nuclear Information System (INIS)

    Duguet, Thomas

    2002-01-01

    This work deals with two aspects of mean-field type methods extensively used in low-energy nuclear structure. The first study is at the mean-field level. The link between the wave function describing an even-even nucleus and its odd-even neighbor is revisited. To obtain a coherent description as a function of the pairing intensity in the system, the utility of formalizing this link through a two-step process is demonstrated. This two-step process makes it possible to identify the role played by different channels of the force when a nucleon is added to the system. In particular, perturbative formulas evaluating the contribution of time-odd components of the functional to the nucleon separation energy are derived for zero and realistic pairing intensities. Self-consistent calculations validate the developed scheme as well as the derived perturbative formulas. This first study ends with an extended analysis of the odd-even mass staggering in nuclei. The new scheme allows the contributions to this observable coming from different channels of the force to be identified. A better understanding of time-odd terms is identified as necessary in order to decide which odd-even mass formula extracts the pairing gap most properly. These terms being at present more or less out of control, extended studies are needed to refine the fit of a pairing force through the comparison of theoretical and experimental odd-even mass differences. The second study deals with beyond-mean-field methods taking care of the correlations associated with large amplitude oscillations in nuclei. Their effects are usually incorporated through the GCM or the projected mean-field method. We derive, for the first time, a perturbation theory motivating such variational calculations from a diagrammatic point of view. Resumming two-body correlations in the energy expansion, we obtain an effective interaction removing the hard-core problem in the context of configuration mixing calculations. Proceeding to a

  19. Systems and methods for providing power to a load based upon a control strategy

    Science.gov (United States)

    Perisic, Milun; Kajouke, Lateef A; Ransom, Ray M

    2013-12-24

    Systems and methods are provided for an electrical system. The electrical system includes a load, an interface configured to receive a voltage from a voltage source, and a controller configured to receive the voltage from the voltage source through the interface and to provide a voltage and current to the load. When the controller is in a constant voltage mode, it provides a constant voltage to the load; when the controller is in a constant current mode, it provides a constant current to the load; and when the controller is in a constant power mode, it provides constant power to the load.

  20. Method and means for repairing injection fuel pump pistons

    Energy Technology Data Exchange (ETDEWEB)

    Ash, E.G.; Tompkins, M.J. Jr.

    1988-06-07

    This patent describes an improvement in timing pistons for rotary fuel injection pumps of the type having a die-cast aluminum housing. The housing has a cylindrical chamber; a steel piston is received in the chamber, with means for reciprocating the piston lengthwise of the chamber; and an aluminum jacket surrounds the piston and extends its full length, the jacket being rigidly secured to the piston. The jacket has an exterior surface hard-coat anodized to a hardness of about 60-70 Rockwell (C scale) as the means of preventing galling due to the reciprocal movement of the aluminum-jacketed piston within the aluminum chamber.

  1. Mean-Variance-CvaR Model of Multiportfolio Optimization via Linear Weighted Sum Method

    Directory of Open Access Journals (Sweden)

    Younes Elahi

    2014-01-01

    Full Text Available We propose a new approach to optimizing portfolios under the mean-variance-CVaR (MVC) model. Although several studies have investigated the optimal MVC portfolio model, the linear weighted sum method (LWSM) has not been applied in this area. The aim of this paper is to investigate the optimal portfolio model based on MVC via the LWSM. With this method, the solution of the MVC portfolio model as a multiobjective problem is presented. In the data analysis section, this approach is investigated for investing in two assets. An MVC model of the multiportfolio was implemented in MATLAB and tested on the presented problem. It is shown that using three objective functions helps investors to manage their portfolio better and thereby minimize the risk and maximize the return of the portfolio. The main goal of this study is to modify the current models and simplify them by using the LWSM to obtain better results.
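
    A sketch of the scalarization idea with assumed objective weights w1, w2, w3 and synthetic scenario returns (the paper's MATLAB implementation and data are not reproduced): the three objectives are collapsed into one linear weighted sum and optimized over long-only weights for two assets with SciPy, and CVaR is estimated empirically from the scenarios.

      # Linear weighted sum over mean, variance and CVaR for a two-asset portfolio.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      R = rng.multivariate_normal([0.08, 0.12], [[0.04, 0.01], [0.01, 0.09]], size=5000)
      alpha, w1, w2, w3 = 0.95, 1.0, 2.0, 1.0   # illustrative objective weights

      def cvar(losses, alpha):
          """Mean of the worst (1 - alpha) fraction of losses."""
          var = np.quantile(losses, alpha)
          return losses[losses >= var].mean()

      def objective(x):
          port = R @ x
          # Maximize mean, minimize variance and CVaR => minimize the negative sum.
          return -(w1 * port.mean() - w2 * port.var() - w3 * cvar(-port, alpha))

      cons = ({'type': 'eq', 'fun': lambda x: x.sum() - 1},)
      res = minimize(objective, x0=np.array([0.5, 0.5]), bounds=[(0, 1)] * 2,
                     constraints=cons)
      print(res.x)  # optimal asset split under this particular weight vector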

  2. Choosing the Number of Clusters in K-Means Clustering

    Science.gov (United States)

    Steinley, Douglas; Brusco, Michael J.

    2011-01-01

    Steinley (2007) provided a lower bound for the sum-of-squares error criterion function used in K-means clustering. In this article, on the basis of the lower bound, the authors propose a method to distinguish between 1 cluster (i.e., a single distribution) versus more than 1 cluster. Additionally, conditional on indicating there are multiple…

  3. Connecting the dots and merging meaning: using mixed methods to study primary care delivery transformation.

    Science.gov (United States)

    Scammon, Debra L; Tomoaia-Cotisel, Andrada; Day, Rachel L; Day, Julie; Kim, Jaewhan; Waitzman, Norman J; Farrell, Timothy W; Magill, Michael K

    2013-12-01

    To demonstrate the value of mixed methods in the study of practice transformation and illustrate procedures for connecting methods and for merging findings to enhance the meaning derived. An integrated network of university-owned, primary care practices at the University of Utah (Community Clinics or CCs). CC has adopted Care by Design, its version of the Patient Centered Medical Home. Convergent case study mixed methods design. Analysis of archival documents, internal operational reports, in-clinic observations, chart audits, surveys, semistructured interviews, focus groups, Centers for Medicare and Medicaid Services database, and the Utah All Payer Claims Database. Each data source enriched our understanding of the change process and understanding of reasons that certain changes were more difficult than others both in general and for particular clinics. Mixed methods enabled generation and testing of hypotheses about change and led to a comprehensive understanding of practice change. Mixed methods are useful in studying practice transformation. Challenges exist but can be overcome with careful planning and persistence. © Health Research and Educational Trust.

  4. Methods and means for improving the man-machine systems for NPP control

    International Nuclear Information System (INIS)

    Konstantinov, L.V.; Rakitin, I.D.

    1984-01-01

    Consideration is given to the role of "human factors" and ways of improving man-machine interaction in NPP control and safety systems (CSS). Simulators and training equipment based on dynamic power unit models, used for training and improving the skills of NPP operators as well as for mastering collective actions of personnel under accident conditions, are considered in detail. The most advanced program complexes for fast NPP diagnostics and their realization in the Federal Republic of Germany, Japan, Canada, the USA and other countries are described. Special attention is paid to the means and methods of video-terminal dialogue interaction between the operator and the controlled object, both in normal and extreme situations. It is noted that the problems of man-machine interaction became a subject of study only at the end of the 1970s, after the analysis of the causes of the Three Mile Island accident (USA). Publications dealing with the development of advanced control rooms for NPPs were analyzed. It was concluded that radical changes both in equipment and in the principles of organizing personnel activity will take place in the near future. They will be based on progress in creating dialogue means and computers of the fourth and fifth generations, as well as on the engineering, psychological and technical aspects of designing

  5. Risk-sensitive mean-field games

    KAUST Repository

    Tembine, Hamidou

    2014-04-01

    In this paper, we study a class of risk-sensitive mean-field stochastic differential games. We show that under appropriate regularity conditions, the mean-field value of the stochastic differential game with exponentiated integral cost functional coincides with the value function satisfying a Hamilton-Jacobi-Bellman (HJB) equation with an additional quadratic term. We provide an explicit solution of the mean-field best response when the instantaneous cost functions are log-quadratic and the state dynamics are affine in the control. An equivalent mean-field risk-neutral problem is formulated and the corresponding mean-field equilibria are characterized in terms of backward-forward macroscopic McKean-Vlasov equations, Fokker-Planck-Kolmogorov equations, and HJB equations. We provide numerical examples on the mean field behavior to illustrate both linear and McKean-Vlasov dynamics. © 1963-2012 IEEE.

  6. Risk-sensitive mean-field games

    KAUST Repository

    Tembine, Hamidou; Zhu, Quanyan; Başar, Tamer

    2014-01-01

    In this paper, we study a class of risk-sensitive mean-field stochastic differential games. We show that under appropriate regularity conditions, the mean-field value of the stochastic differential game with exponentiated integral cost functional coincides with the value function satisfying a Hamilton-Jacobi-Bellman (HJB) equation with an additional quadratic term. We provide an explicit solution of the mean-field best response when the instantaneous cost functions are log-quadratic and the state dynamics are affine in the control. An equivalent mean-field risk-neutral problem is formulated and the corresponding mean-field equilibria are characterized in terms of backward-forward macroscopic McKean-Vlasov equations, Fokker-Planck-Kolmogorov equations, and HJB equations. We provide numerical examples on the mean field behavior to illustrate both linear and McKean-Vlasov dynamics. © 1963-2012 IEEE.

  7. Mean-Reverting Portfolio With Budget Constraint

    Science.gov (United States)

    Zhao, Ziping; Palomar, Daniel P.

    2018-05-01

    This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.

  8. Method and Apparatus Providing Deception and/or Altered Operation in an Information System Operating System

    Science.gov (United States)

    Cohen, Fred; Rogers, Deanna T.; Neagoe, Vicentiu

    2008-10-14

    A method and/or system and/or apparatus providing deception and/or execution alteration in an information system. In specific embodiments, deceptions and/or protections are provided by intercepting and/or modifying operation of one or more system calls of an operating system.

  9. (The feeling of) meaning-as-information.

    Science.gov (United States)

    Heintzelman, Samantha J; King, Laura A

    2014-05-01

    The desire for meaning is recognized as a central human motive. Yet, knowing that people want meaning does not explain its function. What adaptive problem does this experience solve? Drawing on the feelings-as-information hypothesis, we propose that the feeling of meaning provides information about the presence of reliable patterns and coherence in the environment, information that is not provided by affect. We review research demonstrating that manipulations of stimulus coherence influence subjective reports of meaning in life but not affect. We demonstrate that manipulations that foster an associative mindset enhance meaning. The meaning-as-information perspective embeds meaning in a network of foundational functions including associative learning, perception, cognition, and neural processing. This approach challenges assumptions about meaning, including its motivational appeal, the roles of expectancies and novelty in this experience, and the notion that meaning is inherently constructed. Implications for constructed meaning and existential meanings are discussed.

  10. A load factor based mean-variance analysis for fuel diversification

    Energy Technology Data Exchange (ETDEWEB)

    Gotham, Douglas; Preckel, Paul; Ruangpattana, Suriya [State Utility Forecasting Group, Purdue University, West Lafayette, IN (United States); Muthuraman, Kumar [McCombs School of Business, University of Texas, Austin, TX (United States); Rardin, Ronald [Department of Industrial Engineering, University of Arkansas, Fayetteville, AR (United States)

    2009-03-15

    Fuel diversification implies the selection of a mix of generation technologies for long-term electricity generation. The goal is to strike a good balance between reduced costs and reduced risk. The method of analysis that has been advocated and adopted for such studies is the mean-variance portfolio analysis pioneered by Markowitz (Markowitz, H., 1952. Portfolio selection. Journal of Finance 7(1), 77-91). However, the standard mean-variance methodology does not account for the ability of various fuels/technologies to adapt to varying loads. Such analysis often provides results that are easily dismissed by regulators and practitioners as unacceptable, since load cycles play critical roles in fuel selection. To account for such issues and still retain the convenience and elegance of the mean-variance approach, we propose a variant of the mean-variance analysis using the decomposition of the load into various types and utilizing the load factors of each load type. We also illustrate the approach using data for the state of Indiana and demonstrate the ability of the model to provide useful insights. (author)

  11. Electrical estimating methods

    CERN Document Server

    Del Pico, Wayne J

    2014-01-01

    Simplify the estimating process with the latest data, materials, and practices Electrical Estimating Methods, Fourth Edition is a comprehensive guide to estimating electrical costs, with data provided by leading construction database RS Means. The book covers the materials and processes encountered by the modern contractor, and provides all the information professionals need to make the most precise estimate. The fourth edition has been updated to reflect the changing materials, techniques, and practices in the field, and provides the most recent Means cost data available. The complexity of el

  12. Standard deviation and standard error of the mean.

    Science.gov (United States)

    Lee, Dong Kyu; In, Junyong; Lee, Sangseok

    2015-06-01

    In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinct usages of the SD and the SEM in the medical literature. Because the processes of calculating the SD and the SEM involve different statistical inferences, each has its own meaning. The SD is the dispersion of data in a normal distribution; in other words, the SD indicates how accurately the mean represents the sample data. The meaning of the SEM, however, involves statistical inference based on a sampling distribution: the SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either the SD or the SEM can be used to describe data and statistical results, one should be aware of the appropriate ways in which to use each. We aim to elucidate the distinctions between the SD and the SEM and to provide proper usage guidelines for both when summarizing data and describing statistical results.
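
    The relationship is easy to verify numerically; in the sketch below (made-up sample values), the manually computed SEM matches scipy.stats.sem:

      # SD describes the spread of the sample; SEM = SD / sqrt(n) describes the
      # precision of the sample mean.
      import numpy as np
      from scipy import stats

      x = np.array([4.2, 5.1, 4.8, 5.6, 4.9, 5.3])
      sd = x.std(ddof=1)                 # sample standard deviation
      sem = sd / np.sqrt(len(x))         # standard error of the mean
      print(sd, sem, stats.sem(x))       # stats.sem agrees with the manual value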

  13. Determination of a reference value and its uncertainty through a power-moderated mean

    International Nuclear Information System (INIS)

    Pomme, S.; Keightley, J.

    2015-01-01

    A method is presented for calculating a key comparison reference value (KCRV) and its associated standard uncertainty. The method allows for technical scrutiny of data and for the correction or exclusion of extreme data, but above all uses a power-moderated mean (PMM) that can calculate an efficient and robust mean from any data set. For mutually consistent data, the method approaches a weighted mean, the weights being the reciprocals of the variances (squared standard uncertainties) associated with the measured values. For data sets suspected of inconsistency, the weighting is moderated by increasing the laboratory variances by a common amount and/or decreasing the power of the weighting factors. Using computer simulations, it is shown that the PMM is a good compromise between efficiency and robustness, while also providing a realistic uncertainty. The method is of particular interest to data evaluators and organizers of proficiency tests. (authors)
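
    As a rough illustration of the idea only (not the published PMM algorithm, which contains further refinements), the sketch below moderates the weights as 1/(u_i^2 + s^2)^alpha and inflates the common variance s^2, Mandel-Paule style, until the reduced chi-square no longer exceeds 1; the data, alpha, and step size are all assumptions.

      # Toy power-moderated weighted mean with common-variance inflation.
      import numpy as np

      def power_moderated_mean(x, u, alpha=1.0, steps=200):
          s = 0.0
          for _ in range(steps):
              w = 1.0 / (u**2 + s**2) ** alpha
              mean = np.sum(w * x) / np.sum(w)
              chi2_red = np.sum((x - mean) ** 2 / (u**2 + s**2)) / (len(x) - 1)
              if chi2_red <= 1.0:        # data now mutually consistent
                  break
              s += 0.05 * np.max(u)      # crude increment; bisection would be better
          u_mean = np.sqrt(1.0 / np.sum(1.0 / (u**2 + s**2)))
          return mean, u_mean

      x = np.array([10.1, 9.8, 10.4, 12.0])   # one discrepant laboratory result
      u = np.array([0.2, 0.2, 0.3, 0.2])
      print(power_moderated_mean(x, u))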

  14. Growth Estimators and Confidence Intervals for the Mean of Negative Binomial Random Variables with Unknown Dispersion

    Directory of Open Access Journals (Sweden)

    David Shilane

    2013-01-01

    Full Text Available The negative binomial distribution becomes highly skewed under extreme dispersion. Even at moderately large sample sizes, the sample mean exhibits a heavy right tail. The standard normal approximation often does not provide adequate inferences about the data's expected value in this setting. In previous work, we have examined alternative methods of generating confidence intervals for the expected value. These methods were based upon Gamma and Chi Square approximations or tail probability bounds such as Bernstein's inequality. We now propose growth estimators of the negative binomial mean. Under high dispersion, zero values are likely to be overrepresented in the data. A growth estimator constructs a normal-style confidence interval by effectively removing a small, predetermined number of zeros from the data. We propose growth estimators based upon multiplicative adjustments of the sample mean and direct removal of zeros from the sample. These methods do not require estimating the nuisance dispersion parameter. We will demonstrate that the growth estimators' confidence intervals provide improved coverage over a wide range of parameter values and asymptotically converge to the sample mean. Interestingly, the proposed methods succeed despite adding both bias and variance to the normal approximation.

  15. The global Minmax k-means algorithm.

    Science.gov (United States)

    Wang, Xiaoyan; Bai, Yanping

    2016-01-01

    The global k-means algorithm is an incremental approach to clustering that dynamically adds one cluster center at a time through a deterministic global search procedure from suitable initial positions, and employs k-means to minimize the sum of the intra-cluster variances. However, the global k-means algorithm sometimes produces singleton clusters, and the initial positions are sometimes bad; after a bad initialization, a poor local optimum can easily be reached by the k-means algorithm. In this paper, we modify the global k-means algorithm to eliminate the singleton clusters first, and then apply the MinMax k-means clustering error method to the global k-means algorithm to overcome the effect of bad initialization, yielding the proposed global MinMax k-means algorithm. The proposed clustering method is tested on some popular data sets and compared to the k-means algorithm, the global k-means algorithm and the MinMax k-means algorithm. The experimental results show that our proposed algorithm outperforms the other algorithms mentioned in the paper.

  16. Method and means for determining heat quantities

    Energy Technology Data Exchange (ETDEWEB)

    Waasdorp, G G; de Jong, J J; Bijl, A

    1965-08-24

    To determine the quantity of potential heat W that has flowed past a certain point in a certain time, the velocity of the combustible Q, the temperature T, and the specific gravity γ are measured, and these values are transmitted to a computer which automatically calculates the quantity [equation not reproduced in the source record], in which ΔT is the difference between the combustible temperature T and a reference temperature, and in which the relation f(γ, ΔT) represents the heat of combustion as a function of the quantities γ and ΔT and possibly other properties of the combustible. Alternatively the quantity [equation not reproduced in the source record] may be measured; here the quantities have the same meaning as above.

  17. Hybrid K-means and Particle Swarm Optimization for Clustering Credit Customers

    Directory of Open Access Journals (Sweden)

    Yusuf Priyo Anggodo

    2017-05-01

    Credit is the biggest source of revenue for a bank. However, banks have to be selective in deciding which clients may receive credit. This issue is becoming increasingly complex, because a wrong credit decision can harm the bank and because a large number of parameters enter into determining customer credit. Clustering is one way to resolve this issue. K-means is a simple and popular method for clustering; however, pure K-means cannot provide optimum solutions, so improvements are needed to reach the optimum. Particle swarm optimization (PSO) is an optimization method well suited to this problem: PSO is very helpful in the clustering process for optimizing the center point of each cluster. Several improvements are made to obtain better results from PSO. The first is time-variant inertia, which makes the inertia weight w dynamic over the iterations. The second is velocity clamping, which controls the particle speed to reach better positions. In addition, hybridizing PSO with random injection overcomes premature convergence. The results of this research provide optimum solutions for clustering credit customers. The test results show that the hybrid PSO K-means gives the best results compared with K-means and PSO K-means, with silhouette values for K-means, PSO K-means, and hybrid PSO K-means of 0.57343, 0.792045, and 1, respectively. Keywords: Credit, Clustering, PSO, K-means, Random Injection
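
    The sketch below shows the core hybrid mechanism on synthetic data: each particle encodes a flattened set of cluster centers, the fitness is the clustering SSE, and the swarm update uses inertia plus cognitive and social pulls. The paper's time-variant inertia and random injection refinements are omitted for brevity, and all parameter values are illustrative.

      # PSO refining k-means-style centroids; fitness = sum of squared errors.
      import numpy as np

      def sse(centers, X):
          d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return d.min(axis=1).sum()

      def pso_kmeans(X, k, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          dim = k * X.shape[1]
          # Initialize each particle's centers at randomly chosen data points.
          pos = X[rng.integers(0, len(X), (n_particles, k))].reshape(n_particles, dim)
          vel = np.zeros_like(pos)
          pbest = pos.copy()
          pbest_f = np.array([sse(p.reshape(k, -1), X) for p in pos])
          gbest = pbest[pbest_f.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
              pos = pos + vel
              f = np.array([sse(p.reshape(k, -1), X) for p in pos])
              better = f < pbest_f
              pbest[better], pbest_f[better] = pos[better], f[better]
              gbest = pbest[pbest_f.argmin()].copy()
          return gbest.reshape(k, -1)

      X = np.vstack([np.random.default_rng(1).normal(m, 0.3, (50, 2)) for m in (0, 3, 6)])
      print(pso_kmeans(X, 3))   # three recovered cluster centers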

  18. A nonparametric statistical method for determination of a confidence interval for the mean of a set of results obtained in a laboratory intercomparison

    International Nuclear Information System (INIS)

    Veglia, A.

    1981-08-01

    In cases where sets of data are obviously not normally distributed, the application of a nonparametric method for the estimation of a confidence interval for the mean seems more suitable than other methods, because such a method requires few assumptions about the population of data. A two-step statistical method is proposed which can be applied to any set of analytical results: elimination of outliers by a nonparametric method based on Tchebycheff's inequality, and determination of a confidence interval for the mean by a nonparametric method based on the binomial distribution. The method is appropriate only for samples of size n ≥ 10.

  19. Four-dimensional cone beam CT reconstruction and enhancement using a temporal nonlocal means method

    Energy Technology Data Exchange (ETDEWEB)

    Jia Xun; Tian Zhen; Lou Yifei; Sonke, Jan-Jakob; Jiang, Steve B. [Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States); School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30318 (United States); Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States)

    2012-09-15

    Purpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric imaging in image guided radiation therapy. Conventionally, it is reconstructed by first sorting the x-ray projections into multiple respiratory phase bins according to a breathing signal extracted either from the projection images or from external surrogates, and then reconstructing a 3D CBCT image in each phase bin independently using the FDK algorithm. This method requires an adequate number of projections for each phase, which can be achieved using a slow gantry rotation or multiple gantry rotations. An inadequate number of projections in each phase bin results in low quality 4D-CBCT images with obvious streaking artifacts. 4D-CBCT images at different breathing phases share a great deal of redundant information, because they represent the same anatomy captured at slightly different temporal points. Taking this redundancy along the temporal dimension into account can in principle facilitate the reconstruction when the number of projection images is inadequate. In this work, the authors propose two novel 4D-CBCT algorithms: an iterative reconstruction algorithm and an enhancement algorithm, both utilizing a temporal nonlocal means (TNLM) method. Methods: The authors define a TNLM energy term for a given set of 4D-CBCT images. Minimization of this term favors those 4D-CBCT images in which any anatomical feature at one spatial point at one phase can be found at a nearby spatial point at neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a total energy containing a data fidelity term and the TNLM energy term. As for image enhancement, 4D-CBCT images generated by the FDK algorithm are enhanced by minimizing the TNLM function while keeping the enhanced images close to the FDK results. A forward-backward splitting algorithm and a Gauss-Jacobi iteration method are employed to solve the problems. The algorithms implementation on

  20. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method relating local mean decomposition Shannon entropy and an improved kernel principal component analysis model is proposed. First, features are extracted by a time–frequency domain method, local mean decomposition, and the Shannon entropy is used to process the original separated product functions so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature processing technique, kernel principal component analysis, is introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted features were input into a Morlet wavelet kernel support vector machine to obtain a bearing running-state classification model, by which the bearing running state was identified. Cases of test and actual data were analyzed.
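
    A sketch of the Shannon entropy feature step on a synthetic component (LMD itself and the weighted-KPCA fusion are not reproduced): the component's energy is distributed over equal-length segments, normalized into a probability vector p, and H = -sum(p log p) is computed; impulsive fault-like bursts concentrate the energy and lower the entropy.

      # Shannon entropy of a signal component's segment-wise energy distribution.
      import numpy as np

      def shannon_entropy(signal, n_bins=32):
          energy = signal.astype(float) ** 2
          p = np.array([seg.sum() for seg in np.array_split(energy, n_bins)])
          p = p / p.sum()
          p = p[p > 0]                    # avoid log(0)
          return float(-(p * np.log(p)).sum())

      t = np.linspace(0, 1, 2048)
      healthy = np.sin(2 * np.pi * 50 * t)
      faulty = healthy + (np.sin(2 * np.pi * 7 * t) > 0.99) * 2.0  # impulsive bursts
      print(shannon_entropy(healthy), shannon_entropy(faulty))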

  1. Tumour screening by means of tomography methods

    International Nuclear Information System (INIS)

    Diederich, S.

    2005-01-01

    Tomography methods such as computer tomography (CT), magnetic resonance tomography (MRT), and sonography/ultrasound examinations make it possible to detect small asymptomatic tumours, thus potentially preventing their manifestation at an advanced stage and improving survival prospects for the patients concerned. There are data available on various common tumours which show that modern tomography methods are capable of detecting not only small asymptomatic tumours but also their benign precursors (e.g. polyps of the large intestine). This has been demonstrated for lung cancer, colon cancer and breast cancer. However, it has not been possible to date to show for any tomography method or any type of tumour that the systematic use of such diagnostic procedures does anything to lower the mortality rate for that tumour. For other types of tumour (pancreatic cancer, kidney cancer, ovary cancer) the above named methods are either not sufficiently sensitive or the body of data that has accumulated on their respective use is too small to judge the benefit of tomography screenings. Current technical developments make it appear probable that for many types of cancer the reliability with which small tumours can be detected will improve in future. Studies aimed at clarifying the potential of screenings for reducing mortality rates are already underway for lung cancer and would be worthwhile performing for other tumour types

  2. Spiking cortical model based non-local means method for despeckling multiframe optical coherence tomography data

    Science.gov (United States)

    Gu, Yameng; Zhang, Xuming

    2017-05-01

    Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).

  3. Computing meaning v.4

    CERN Document Server

    Bunt, Harry; Pulman, Stephen

    2013-01-01

    This book is a collection of papers by leading researchers in computational semantics. It presents a state-of-the-art overview of recent and current research in computational semantics, including descriptions of new methods for constructing and improving resources for semantic computation, such as WordNet, VerbNet, and semantically annotated corpora. It also presents new statistical methods in semantic computation, such as the application of distributional semantics in the compositional calculation of sentence meanings. Computing the meaning of sentences, texts, and spoken or texted dialogue i

  4. Support Vector Data Descriptions and k-Means Clustering: One Class?

    Science.gov (United States)

    Gornitz, Nico; Lima, Luiz Alberto; Muller, Klaus-Robert; Kloft, Marius; Nakajima, Shinichi

    2017-09-27

    We present ClusterSVDD, a methodology that unifies support vector data descriptions (SVDDs) and k-means clustering into a single formulation. This allows both methods to benefit from one another, i.e., by adding flexibility using multiple spheres for SVDDs and increasing anomaly resistance and flexibility through kernels to k-means. In particular, our approach leads to a new interpretation of k-means as a regularized mode seeking algorithm. The unifying formulation further allows for deriving new algorithms by transferring knowledge from one-class learning settings to clustering settings and vice versa. As a showcase, we derive a clustering method for structured data based on a one-class learning scenario. Additionally, our formulation can be solved via a particularly simple optimization scheme. We evaluate our approach empirically to highlight some of the proposed benefits on artificially generated data, as well as on real-world problems, and provide a Python software package comprising various implementations of primal and dual SVDD as well as our proposed ClusterSVDD.

  5. Weighted mean: A possible method to express overall Dhatu Sarata

    Directory of Open Access Journals (Sweden)

    Chandar Prakash Gunawat

    2015-01-01

    Full Text Available Several questions are being raised regarding the accuracy of the methods of diagnosis and reporting of various clinical parameters according to Ayurveda in recent times. Uniformity in reporting, issues related to inter-rater variability, uniformity in applying statistical tests, and the reliability, consistency, and validation of various tools are some of the major concerns being voiced. Dhatu Sarata is one such domain where no substantial work has been carried out to address these issues. The Sanskrit term "Dhatu" roughly translates as "tissue". Sarata stands for the status of Dhatu in a given individual, i.e., it describes whether the status is excellent, moderate, or poor. In the available research literature, there are several gaps in dealing with and reporting the clinical assessment of Dhatu. Most workers group an individual into one of the categories of Dhatu Sarata, and this approach neglects the contribution of other Dhatus to the overall Sarata in that individual. In this communication, we propose the usefulness of the "weighted mean" in expressing the overall Sarata in an individual. This gives the researcher the freedom not to classify an individual into any one group of Sarata, while also allowing him/her to retain the focus on the status of each individual Dhatu.
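
    A minimal numeric illustration of the proposal, with hypothetical per-Dhatu scores and weights (the actual grading scheme and any weighting would come from the authors' protocol):

      # Overall Sarata as a weighted mean of per-Dhatu scores (illustrative values).
      import numpy as np

      dhatu_scores = np.array([3, 2, 3, 1, 2, 3, 2])    # e.g., 1 = poor ... 3 = excellent
      weights = np.array([2, 1, 1, 1, 1, 1, 2])         # hypothetical relative importance
      print(np.average(dhatu_scores, weights=weights))  # overall Sarata = 2.33...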

  6. Weighted mean: A possible method to express overall Dhatu Sarata.

    Science.gov (United States)

    Gunawat, Chandar Prakash; Singh, Girish; Patwardhan, Kishor; Gehlot, Sangeeta

    2015-01-01

    Several questions are being raised in recent times regarding the accuracy of the methods of diagnosis and reporting of various clinical parameters according to Ayurveda. Uniformity in reporting, issues related to inter-rater variability, uniformity in applying statistical tests, and the reliability, consistency, and validation of various tools are some of the major concerns being voiced. Dhatu Sarata is one such domain where no substantial work has been carried out to address these issues. The Sanskrit term "Dhatu" roughly translates as "tissue". Sarata stands for the status of a Dhatu in a given individual, i.e., it describes whether the status is excellent, moderate, or poor. In the available research literature, there are several gaps in dealing with and reporting the clinical assessment of Dhatu. Most workers group an individual into any one of the categories of Dhatu Sarata, and this approach neglects the contribution of the other Dhatus to the overall Sarata in that individual. In this communication, we propose the usefulness of the "weighted mean" in expressing the overall Sarata of an individual. This gives the researcher the freedom not to classify an individual into any one group of Sarata, while simultaneously retaining the focus on the status of each individual Dhatu.
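
    The proposal itself is elementary arithmetic: the overall Sarata is the weighted mean of the per-Dhatu grades, so the individual grades remain available alongside the summary value. A minimal sketch with hypothetical grades and weights:

      # Hypothetical per-Dhatu Sarata grades (1 = poor ... 3 = excellent)
      # and hypothetical weights for each Dhatu's contribution.
      grades  = {"Rasa": 3, "Rakta": 2, "Mamsa": 2, "Meda": 1}
      weights = {"Rasa": 0.4, "Rakta": 0.3, "Mamsa": 0.2, "Meda": 0.1}

      overall = sum(weights[d] * grades[d] for d in grades) / sum(weights.values())
      print(f"overall Sarata score: {overall:.2f}")   # per-Dhatu grades stay visible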

  7. Clarifying the role of mean centring in multicollinearity of interaction effects.

    Science.gov (United States)

    Shieh, Gwowen

    2011-11-01

    Moderated multiple regression (MMR) is frequently employed to analyse interaction effects between continuous predictor variables. The procedure of mean centring is commonly recommended to mitigate the potential threat of multicollinearity between predictor variables and the constructed cross-product term. Also, centring does typically provide more straightforward interpretation of the lower-order terms. This paper attempts to clarify two methodological issues of potential confusion. First, the positive and negative effects of mean centring on multicollinearity diagnostics are explored. It is illustrated that the mean centring method is, depending on the characteristics of the data, capable of either increasing or decreasing various measures of multicollinearity. Second, the exact reason why mean centring does not affect the detection of interaction effects is given. The explication shows the symmetrical influence of mean centring on the corrected sum of squares and variance inflation factor of the product variable while maintaining the equivalence between the two residual sums of squares for the regression of the product term on the two predictor variables. Thus the resulting test statistic remains unchanged regardless of the obvious modification of multicollinearity with mean centring. These findings provide a clear understanding and demonstration on the diverse impact of mean centring in MMR applications. ©2011 The British Psychological Society.
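
    Both points of the paper are easy to verify numerically: centring can change the collinearity between a predictor and the cross-product term, yet the t statistic of the interaction term is identical in the raw and centred parameterizations. A minimal check on simulated data (all numbers illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      x1, x2 = rng.normal(5, 1, n), rng.normal(3, 1, n)
      y = 1 + 0.5 * x1 + 0.8 * x2 + 0.3 * x1 * x2 + rng.normal(0, 1, n)

      def t_stats(a, b):
          """OLS t statistics for [intercept, a, b, a*b]."""
          X = np.column_stack([np.ones(n), a, b, a * b])
          XtX_inv = np.linalg.inv(X.T @ X)
          beta = XtX_inv @ X.T @ y
          resid = y - X @ beta
          s2 = resid @ resid / (n - X.shape[1])
          return beta / np.sqrt(np.diag(XtX_inv) * s2)

      c1, c2 = x1 - x1.mean(), x2 - x2.mean()
      print(np.corrcoef(x1, x1 * x2)[0, 1])           # strong raw collinearity
      print(np.corrcoef(c1, c1 * c2)[0, 1])           # much reduced after centring
      print(t_stats(x1, x2)[3], t_stats(c1, c2)[3])   # interaction t unchanged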

  8. Mean centering of ratio spectra and successive derivative ratio spectrophotometric methods for determination of isopropamide iodide, trifluoperazine hydrochloride and trifluoperazine oxidative degradate

    Directory of Open Access Journals (Sweden)

    Maha M. Abdelrahman

    2016-09-01

    Two sensitive, selective and precise stability-indicating methods for the determination of isopropamide iodide (ISO), trifluoperazine hydrochloride (TPZ) and trifluoperazine oxidative degradate (DEG) were developed and validated. Method A is a successive derivative ratio spectrophotometric one, which depends on the successive derivative of ratio spectra in two steps, using 0.1 N HCl as a solvent and measuring TPZ at 250.4 and 257.2 nm, ISO at 223 and 228 nm, and DEG at 210.6, 213 and 270.2 nm. Method B is mean centering of ratio spectra, which depends on using the mean centered ratio spectra in two successive steps and measuring the mean centered values of the second ratio spectra at 322, 355 and 339 nm for TPZ, ISO and DEG, respectively. Factors affecting the developed methods were studied and optimized; moreover, the methods were validated as per ICH guidelines, and the results demonstrated that they are reliable, reproducible and suitable for routine use with short analysis time. Statistical analysis of the two developed methods against the reported one using the F-test and Student's t-test showed no significant difference regarding accuracy and precision.
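
    The core numerical operation of Method B is straightforward to illustrate: dividing a binary mixture spectrum by one component's spectrum turns that component's contribution into a constant, which a single mean centering step removes. A sketch with synthetic Gaussian bands and hypothetical concentrations (not the published calibration):

      import numpy as np

      wl = np.linspace(200, 350, 601)                    # wavelength grid (nm)
      band = lambda centre, width: np.exp(-((wl - centre) / width) ** 2)
      s1, s2 = band(250, 15), band(300, 20)              # synthetic pure spectra
      mix = 0.7 * s1 + 0.4 * s2                          # additive binary mixture

      mc = lambda v: v - v.mean()                        # mean-centring operator

      step1 = mc(mix / s2)     # component 2's contribution (a constant) vanishes
      calib = mc(s1 / s2)      # same operation applied to a unit standard
      print(step1[200] / calib[200])                     # recovers approx 0.7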

  9. Characterization of magnetic colloids by means of magnetooptics

    OpenAIRE

    Baraban, Larysa; Erbe, Artur; Leiderer, Paul

    2007-01-01

    A new, efficient method for the characterization of magnetic colloids based on the Faraday effect is proposed. According to the main principles of this technique, it is possible to detect the stray magnetic field of the colloidal particles induced inside the magnetooptical layer. The magnetic properties of individual particles can be determined, provided that measurements are performed over a wide range of magnetic fields. The magnetization curves of capped colloids and paramagnetic colloids were measured by means…

  10. The two means method for the attenuation coefficient determination of archaeological ceramics from the North of Parana; Metodo dos dois meios para a determinacao do coeficiente de atenuacao de ceramicas arqueologicas do norte do Parana

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Richard Maximiliano Cunha e

    1998-12-31

    This work reports an alternative methodology for the determination of the linear attenuation coefficient (μ) of irregularly shaped samples, such that the sample thickness need not be considered. With this methodology, indigenous archaeological ceramic fragments from the region of Londrina, north of Parana, were studied. These ceramic fragments belong to the Kaingang and Tupiguarani traditions. The equation for determining μ by the two means method was obtained and was used to determine μ from the attenuation of a gamma-ray beam with the ceramics immersed, in turn, in two different media of known linear attenuation coefficient. In addition, the theoretical value of μ was determined with the XCOM computer code, which takes the chemical composition of the ceramics as input and provides a table of mass attenuation coefficient versus energy. In order to validate the two means method, five ceramic samples of thicknesses 1.15 cm and 1.87 cm were prepared from homogeneous clay. Using these ceramics, μ was determined with both the standard attenuation method and the two means method, and the results and their respective deviations were compared for the two methods. From the results obtained, it was concluded that the two means method is well suited to the determination of the linear attenuation coefficient of irregularly shaped materials, which makes it especially appropriate for archaeometric studies. (author) 25 refs., 29 figs., 28 tabs.
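
    The abstract does not reproduce the working equation, but the general idea can be sketched: for immersion medium i with known coefficient mu_m[i], the measured attenuation A_i = ln(I0/I) = mu_m[i]*(L - x) + mu_s*x involves the unknown sample thickness x, which cancels when the two measurements are combined. A numerical illustration under that assumed geometry (all values hypothetical):

      import numpy as np

      mu_m = np.array([0.10, 0.25])   # known coefficients of the two media (1/cm)
      L = 5.0                         # total path length through the cell (cm)
      x_true, mu_s_true = 1.3, 0.42   # unknown sample thickness and coefficient

      # simulated attenuation A_i = ln(I0/I) with the sample in each medium
      A = mu_m * (L - x_true) + mu_s_true * x_true

      # the thickness x cancels between the two equations
      r = (A[0] - mu_m[0] * L) / (A[1] - mu_m[1] * L)
      mu_s = (mu_m[0] - r * mu_m[1]) / (1.0 - r)
      print(mu_s)                     # recovers 0.42 without knowing x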

  11. The two means method for the attenuation coefficient determination of archaeological ceramics from the North of Parana; Metodo dos dois meios para a determinacao do coeficiente de atenuacao de ceramicas arqueologicas do norte do Parana

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Richard Maximiliano Cunha e

    1997-12-31

    This work reports an alternative methodology for the determination of the linear attenuation coefficient (μ) of irregularly shaped samples, such that the sample thickness need not be considered. With this methodology, indigenous archaeological ceramic fragments from the region of Londrina, north of Parana, were studied. These ceramic fragments belong to the Kaingang and Tupiguarani traditions. The equation for determining μ by the two means method was obtained and was used to determine μ from the attenuation of a gamma-ray beam with the ceramics immersed, in turn, in two different media of known linear attenuation coefficient. In addition, the theoretical value of μ was determined with the XCOM computer code, which takes the chemical composition of the ceramics as input and provides a table of mass attenuation coefficient versus energy. In order to validate the two means method, five ceramic samples of thicknesses 1.15 cm and 1.87 cm were prepared from homogeneous clay. Using these ceramics, μ was determined with both the standard attenuation method and the two means method, and the results and their respective deviations were compared for the two methods. From the results obtained, it was concluded that the two means method is well suited to the determination of the linear attenuation coefficient of irregularly shaped materials, which makes it especially appropriate for archaeometric studies. (author) 25 refs., 29 figs., 28 tabs.

  12. Multi-objective mean-variance-skewness model for generation portfolio allocation in electricity markets

    Energy Technology Data Exchange (ETDEWEB)

    Pindoriya, N.M.; Singh, S.N. [Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016 (India); Singh, S.K. [Indian Institute of Management Lucknow, Lucknow 226013 (India)

    2010-10-15

    This paper proposes an approach for generation portfolio allocation based on the mean-variance-skewness (MVS) model, an extension of the classical mean-variance (MV) portfolio theory that deals with assets whose return distribution is non-normal. The MVS model allocates portfolios optimally by considering the maximization of both the expected return and the skewness of the portfolio return while simultaneously minimizing the risk. Since this is a non-smooth multi-objective optimization problem with competing and conflicting objectives, this paper employs a multi-objective particle swarm optimization (MOPSO) based meta-heuristic technique to provide Pareto-optimal solutions in a single simulation run. Using a case study of the PJM electricity market, the performance of the MVS portfolio theory based method and the classical MV method is compared. It has been found that the MVS portfolio theory based method can provide significantly better portfolios in situations where non-normally distributed assets exist for trading. (author)
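
    The three objectives traded off by the MOPSO search are simply the first three moments of the portfolio return for a candidate weight vector. A minimal sketch of how they can be evaluated (hypothetical, deliberately non-normal return data; not the PJM case study):

      import numpy as np

      def mvs_objectives(returns, w):
          """Mean, variance and skewness of the portfolio return for weights w;
          MOPSO would maximise mean and skewness while minimising variance."""
          r = returns @ w
          mu = r.mean()
          var = r.var(ddof=1)
          skew = ((r - mu) ** 3).mean() / r.std() ** 3
          return mu, var, skew

      rng = np.random.default_rng(1)
      R = rng.gamma(2.0, 1.0, size=(500, 4)) - 2.0   # deliberately non-normal returns
      print(mvs_objectives(R, np.full(4, 0.25)))     # equal-weight candidate portfolio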

  13. Multi-objective mean-variance-skewness model for generation portfolio allocation in electricity markets

    International Nuclear Information System (INIS)

    Pindoriya, N.M.; Singh, S.N.; Singh, S.K.

    2010-01-01

    This paper proposes an approach for generation portfolio allocation based on the mean-variance-skewness (MVS) model, an extension of the classical mean-variance (MV) portfolio theory that deals with assets whose return distribution is non-normal. The MVS model allocates portfolios optimally by considering the maximization of both the expected return and the skewness of the portfolio return while simultaneously minimizing the risk. Since this is a non-smooth multi-objective optimization problem with competing and conflicting objectives, this paper employs a multi-objective particle swarm optimization (MOPSO) based meta-heuristic technique to provide Pareto-optimal solutions in a single simulation run. Using a case study of the PJM electricity market, the performance of the MVS portfolio theory based method and the classical MV method is compared. It has been found that the MVS portfolio theory based method can provide significantly better portfolios in situations where non-normally distributed assets exist for trading. (author)

  14. Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    Science.gov (United States)

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.

  15. Analysis of k-means clustering approach on the breast cancer Wisconsin dataset.

    Science.gov (United States)

    Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal

    2016-11-01

    Breast cancer is one of the most common cancers found worldwide and most frequently found in women. An early detection of breast cancer provides the possibility of its cure; therefore, a large number of studies are currently going on to identify methods that can detect breast cancer in its early stages. This study aimed to examine the effects of the k-means clustering algorithm under different computational settings (centroid initialization, distance measure, split method, epoch, attribute, and iteration) and to identify the combination of settings with the potential for highly accurate clustering. The k-means algorithm was used to evaluate the impact of clustering using centroid initialization, distance measures, and split methods. The experiments were performed using the breast cancer Wisconsin (BCW) diagnostic dataset. Foggy and random centroids were used for centroid initialization: for the foggy centroid, the first centroid was calculated from random values, while for the random centroid the initial centroid was taken as (0, 0). The results were obtained by employing the k-means algorithm and are discussed for different cases with variable parameters. The calculations were based on the centroid (foggy/random), distance (Euclidean/Manhattan/Pearson), split (simple/variance), threshold (constant epoch/same centroid), attribute (2-9), and iteration (4-10). Approximately 92% average positive prediction accuracy was obtained with this approach. Better results were found for the same centroid and the highest variance. The results achieved with the Euclidean and Manhattan distances were better than those with the Pearson correlation. The findings of this work provide an extensive understanding of the computational parameters that can be used with k-means. The results indicate that k-means has the potential to classify the BCW dataset.
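
    For orientation, a baseline k-means run on scikit-learn's copy of the Wisconsin diagnostic data takes only a few lines; this reproduces the general setup (random centroids, Euclidean distance) but none of the study's foggy-centroid or split variants:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import load_breast_cancer
      from sklearn.preprocessing import StandardScaler

      data = load_breast_cancer()                       # 569 cases, 30 attributes
      X = StandardScaler().fit_transform(data.data)

      km = KMeans(n_clusters=2, init="random", n_init=10, random_state=0).fit(X)

      # align the two cluster labels with benign/malignant before scoring
      acc = max(np.mean(km.labels_ == data.target),
                np.mean(km.labels_ != data.target))
      print(f"agreement with diagnosis: {acc:.3f}")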

  16. Private providers' knowledge, attitudes and misconceptions related to long-acting and permanent contraceptive methods: a case study in Bangladesh.

    Science.gov (United States)

    Ugaz, Jorge; Banke, Kathryn; Rahaim, Stephen; Chowdhury, Wahiduzzaman; Williams, Julie

    2016-11-01

    In Bangladesh, use of long-acting and permanent methods of contraception (LAPMs) remains stagnant. Providers' limited knowledge and biases may be a factor. We assessed private providers' knowledge, misconceptions and general attitudes towards LAPMs in two urban areas. The ultimate goal is to shape programs and interventions to overcome these obstacles and improve full method choice in Bangladesh. Trained data collectors interviewed a convenience sample of 235 female doctors (obstetricians-gynecologists and general practitioners) and 150 female nurses from 194 commercial (for-profit) health care facilities in Chittagong City Corporation and Dhaka district. Data were collected on the nature of the practice, training received, knowledge about modern contraceptives and attitudes towards LAPMs, including intrauterine devices (IUDs), implants, and female and male sterilization. All providers, and especially doctors, lacked adequate knowledge regarding side effects for all LAPMs, particularly female and male sterilization. Providers had misconceptions about the effectiveness and convenience of LAPMs compared to short-acting contraceptive methods. Implants and IUDs were generally perceived more negatively than other methods. The majority of providers believed that husbands favor short-acting methods rather than LAPMs and that women should not use a method that their husbands do not approve of. Our findings document knowledge and attitudinal barriers among private for-profit providers in urban areas affecting their provision of accurate information about LAPM choices. Practitioners should be offered the necessary tools to provide women full access to all modern methods, especially LAPMs, in order to contribute to decreasing unmet need and improving full method choice in Bangladesh. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Burn-up determination of irradiated uranium oxide by means of direct gamma spectrometry and by radiochemical method

    International Nuclear Information System (INIS)

    Cunha, I.I.L.; Nastasi, M.J.C.; Lima, F.W.

    1981-09-01

    The burn-up of thermal-neutron irradiated U3O8 (natural uranium) samples has been determined using both direct gamma spectrometry and radiochemical methods, and the results obtained were compared. The fission products 144Ce, 103Ru, 106Ru, 137Cs and 95Zr were chosen as burn-up monitors. In order to isolate the radioisotopes chosen as monitors, a radiochemical separation procedure was established, in which the solvent extraction technique was used to separate cerium, cesium and ruthenium from one another and all of them from uranium. The separation between zirconium and niobium, and of both elements from the other radioisotopes and uranium, was accomplished by means of adsorption on a silica-gel column, followed by selective elution of zirconium and of niobium. When the direct gamma-ray spectrometry method was used, the radioactivity of each nuclide of interest was measured in the presence of all the others, by means of gamma-ray spectrometry with a Ge(Li) detector. Comparison of the burn-up values obtained by both methods was made by means of Student's t-test, which showed that the results obtained in each case are statistically equal. (Author)

  18. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    DEFF Research Database (Denmark)

    Sabourin, David; Snakenborg, Detlef; Dufva, Hans Martin

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observation.

  19. Assessment of BSRN radiation records for the computation of monthly means

    Science.gov (United States)

    Roesch, A.; Wild, M.; Ohmura, A.; Dutton, E. G.; Long, C. N.; Zhang, T.

    2011-02-01

    The integrity of the Baseline Surface Radiation Network (BSRN) radiation monthly averages is assessed by investigating the impact on monthly means due to the frequency of data gaps caused by missing or discarded high time resolution data. The monthly statistics, especially means, are considered to be important and useful values for climate research, model performance evaluations and for assessing the quality of satellite (time- and space-averaged) data products. The study investigates the spread in different algorithms that have been applied for the computation of monthly means from 1-min values. The paper reveals that the computation of monthly means from 1-min observations distinctly depends on the method utilized to account for the missing data. The intra-method difference generally increases with an increasing fraction of missing data. We found that a substantial fraction of the radiation fluxes observed at BSRN sites is either missing or flagged as questionable. The percentage of missing data is 4.4%, 13.0%, and 6.5% for global radiation, direct shortwave radiation, and downwelling longwave radiation, respectively. Most flagged data in the shortwave are due to nighttime instrumental noise and can reasonably be set to zero after correcting for thermal offsets in the daytime data. The study demonstrates that the handling of flagged data clearly impacts monthly mean estimates obtained with different methods. We showed that the spread of monthly shortwave fluxes is generally clearly higher than that for downwelling longwave radiation. Overall, BSRN observations provide sufficient accuracy and completeness for reliable estimates of monthly mean values. However, the value of future data could be further increased by reducing the frequency of data gaps and the number of outliers. It is shown that two independent methods for accounting for the diurnal and seasonal variations in the missing data permit consistent monthly means to within less than 1 W m-2 in most cases.
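
    The gap-handling issue can be made concrete with a toy month of 1-min data in which daytime samples are missing on some days: a naive mean over the available samples is biased, while averaging the mean diurnal cycle first weights each time of day equally. This is an illustration of the principle only, not the BSRN processing:

      import numpy as np
      import pandas as pd

      idx = pd.date_range("2010-06-01", "2010-07-01", freq="1min", inclusive="left")
      minute = np.asarray(idx.hour * 60 + idx.minute)
      flux = pd.Series(400 + 300 * np.sin(np.pi * minute / 1440) ** 2, index=idx)

      # daytime gap (11:00-14:00) on the first ten days of the month
      gap = (idx.day <= 10) & (idx.hour >= 11) & (idx.hour < 14)
      avail = flux[~gap]

      naive = avail.mean()                                 # biased low by the gap
      diurnal = avail.groupby(minute[~gap]).mean().mean()  # equal weight per time of day
      print(naive, diurnal, flux.mean())                   # diurnal matches the true mean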

  20. A Simple but Powerful Heuristic Method for Accelerating k-Means Clustering of Large-Scale Data in Life Science.

    Science.gov (United States)

    Ichikawa, Kazuki; Morishita, Shinichi

    2014-01-01

    K-means clustering has been widely used to gain insight into biological systems from large-scale life science data. To quantify the similarities among biological data sets, Pearson correlation distance and standardized Euclidean distance are used most frequently; however, optimization methods have been largely unexplored. These two distance measurements are equivalent in the sense that they yield the same k-means clustering result for identical sets of k initial centroids. Thus, an efficient algorithm used for one is applicable to the other. Several optimization methods are available for the Euclidean distance and can be used for processing the standardized Euclidean distance; however, they are not customized for this context. We instead approached the problem by studying the properties of the Pearson correlation distance, and we invented a simple but powerful heuristic method for markedly pruning unnecessary computation while retaining the final solution. Tests using real biological data sets with 50-60K vectors of dimensions 10-2001 (~400 MB in size) demonstrated marked reduction in computation time for k = 10-500 in comparison with other state-of-the-art pruning methods such as Elkan's and Hamerly's algorithms. The BoostKCP software is available at http://mlab.cb.k.u-tokyo.ac.jp/~ichikawa/boostKCP/.
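
    The stated equivalence rests on a simple identity: after z-normalisation to zero mean and unit (population) variance, the squared Euclidean distance between two n-vectors equals 2n(1 - r), where r is their Pearson correlation, so the two distances order candidate centroids identically. A quick numerical check:

      import numpy as np

      rng = np.random.default_rng(0)
      x, y = rng.normal(size=100), rng.normal(size=100)

      z = lambda v: (v - v.mean()) / v.std()     # z-normalise (population std)
      n = len(x)
      r = np.corrcoef(x, y)[0, 1]

      d2 = np.sum((z(x) - z(y)) ** 2)            # squared standardized Euclidean
      print(d2, 2 * n * (1 - r))                 # identical up to rounding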

  1. Configurable memory system and method for providing atomic counting operations in a memory device

    Science.gov (United States)

    Bellofatto, Ralph E.; Gara, Alan G.; Giampapa, Mark E.; Ohmacht, Martin

    2010-09-14

    A memory system and method for providing atomic memory-based counter operations to operating systems and applications that make most efficient use of counter-backing memory and virtual and physical address space, while simplifying operating system memory management, and enabling the counter-backing memory to be used for purposes other than counter-backing storage when desired. The encoding and address decoding enabled by the invention provides all this functionality through a combination of software and hardware.

  2. Numerical Methods for a Multicomponent Two-Phase Interface Model with Geometric Mean Influence Parameters

    KAUST Repository

    Kou, Jisheng

    2015-07-16

    In this paper, we consider an interface model for multicomponent two-phase fluids with geometric mean influence parameters, which is popularly used to model and predict surface tension in practical applications. For this model, there are two major challenges in theoretical analysis and numerical simulation: the first one is that the influence parameter matrix is not positive definite; the second one is the complicated structure of the energy function, which requires us to find out a physically consistent treatment. To overcome these two challenging problems, we reduce the formulation of the energy function by employing a linear transformation and a weighted molar density, and furthermore, we propose a local minimum grand potential energy condition to establish the relation between the weighted molar density and mixture compositions. From this, we prove the existence of the solution under proper conditions and prove the maximum principle of the weighted molar density. For numerical simulation, we propose a modified Newton's method for solving this nonlinear model and analyze its properties; we also analyze a finite element method with a physical-based adaptive mesh-refinement technique. Numerical examples are tested to verify the theoretical results and the efficiency of the proposed methods.

  3. Numerical Methods for a Multicomponent Two-Phase Interface Model with Geometric Mean Influence Parameters

    KAUST Repository

    Kou, Jisheng; Sun, Shuyu

    2015-01-01

    In this paper, we consider an interface model for multicomponent two-phase fluids with geometric mean influence parameters, which is popularly used to model and predict surface tension in practical applications. For this model, there are two major challenges in theoretical analysis and numerical simulation: the first one is that the influence parameter matrix is not positive definite; the second one is the complicated structure of the energy function, which requires us to find out a physically consistent treatment. To overcome these two challenging problems, we reduce the formulation of the energy function by employing a linear transformation and a weighted molar density, and furthermore, we propose a local minimum grand potential energy condition to establish the relation between the weighted molar density and mixture compositions. From this, we prove the existence of the solution under proper conditions and prove the maximum principle of the weighted molar density. For numerical simulation, we propose a modified Newton's method for solving this nonlinear model and analyze its properties; we also analyze a finite element method with a physical-based adaptive mesh-refinement technique. Numerical examples are tested to verify the theoretical results and the efficiency of the proposed methods.

  4. Soils - Mean Permeability

    Data.gov (United States)

    Kansas Data Access and Support Center — This digital spatial data set provides information on the magnitude and spatial pattern of depth-weighted, mean soil permeability throughout the State of Kansas. The...

  5. Providing x-rays

    International Nuclear Information System (INIS)

    Mallozzi, P.J.; Epstein, H.M.

    1985-01-01

    This invention provides an apparatus for providing x-rays to an object that may be in an ordinary environment such as air at approximately atmospheric pressure. The apparatus comprises: means (typically a laser beam) for directing energy onto a target to produce x-rays of a selected spectrum and intensity at the target; a fluid-tight enclosure around the target; means for maintaining the pressure in the first enclosure substantially below atmospheric pressure; a fluid-tight second enclosure adjoining the first enclosure, the common wall portion having an opening large enough to permit x-rays to pass through but small enough to allow the pressure reducing means to evacuate gas from the first enclosure at least as fast as it enters through the opening; the second enclosure filled with a gas that is highly transparent to x-rays; the wall of the second enclosure to which the x-rays travel having a portion that is highly transparent to x-rays (usually a beryllium or plastic foil), so that the object to which the x-rays are to be provided may be located outside the second enclosure and adjacent thereto and thus receive the x-rays substantially unimpeded by air or other intervening matter. The apparatus is particularly suited to obtaining EXAFS (extended x-ray absorption fine structure) data on a material.

  6. Controlling Access to Suicide Means

    Directory of Open Access Journals (Sweden)

    Miriam Iosue

    2011-12-01

    Background: Restricting access to common means of suicide, such as firearms, toxic gas, pesticides and others, has been shown to be effective in reducing rates of death by suicide. In the present review we aimed to summarize the empirical and clinical literature on controlling access to means of suicide. Methods: This review made use of the MEDLINE, ISI Web of Science and Cochrane Library databases, identifying all English articles with the keywords "suicide means", "suicide method", "suicide prediction" or "suicide prevention" and other relevant keywords. Results: A number of factors may influence an individual's decision regarding method in a suicide act, but there is substantial support that easy access influences the choice of method. In many countries, restricting access to common means of suicide has led to lower overall suicide rates, particularly regarding suicide by firearms in the USA, detoxification of domestic and motor vehicle gas in England and other countries, toxic pesticides in rural areas, barriers at jumping sites, and hanging, through the introduction of "safe rooms" in prisons and hospitals. Moreover, a decline in the prescription of barbiturates and tricyclic antidepressants (TCAs), as well as limitation of pack sizes for paracetamol and salicylate, has reduced suicides by overdose, while increased prescription of SSRIs seems to have lowered suicide rates. Conclusions: Restricting access to means of suicide may be particularly effective in contexts where the method is popular, highly lethal, widely available, and/or not easily substituted by other similar methods. However, since there is some risk of means substitution, restriction of access should be implemented in conjunction with other suicide prevention strategies.

  7. An implementation of the relational k-means algorithm

    OpenAIRE

    Szalkai, Balázs

    2013-01-01

    A C# implementation of a generalized k-means variant called relational k-means is described here. Relational k-means is a generalization of the well-known k-means clustering method which works for non-Euclidean scenarios as well. The input is an arbitrary distance matrix, as opposed to the traditional k-means method, where the clustered objects need to be identified with vectors.

  8. Generation of triangulated random surfaces by means of the Monte Carlo method in the grand canonical ensemble

    International Nuclear Information System (INIS)

    Zmushko, V.V.; Migdal, A.A.

    1987-01-01

    A model of triangulated random surfaces which is the discrete analogue of the Polyakov string is considered in this work. An algorithm is proposed which enables one to study the model by means of the Monte Carlo method in the grand canonical ensemble. Preliminary results are presented on the evaluation of the critical index γ.

  9. Interconnection blocks: a method for providing reusable, rapid, multiple, aligned and planar microfluidic interconnections

    International Nuclear Information System (INIS)

    Sabourin, D; Snakenborg, D; Dufva, M

    2009-01-01

    In this paper a method is presented for creating 'interconnection blocks' that are re-usable and provide multiple, aligned and planar microfluidic interconnections. Interconnection blocks made from polydimethylsiloxane allow rapid testing of microfluidic chips and unobstructed microfluidic observation. The interconnection block method is scalable, flexible and supports high interconnection density. The average pressure limit of the interconnection block was near 5.5 bar and all individual results were well above the 2 bar threshold considered applicable to most microfluidic applications.

  10. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    This article proposes a constrained clustering algorithm that is competitive in performance with state-of-the-art methods while requiring less computation time; it consists of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering using constraints as background knowledge, although easy to implement and quick, has insufficient performance compared with metric learning-based methods. Since it simply adds a function to the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of both the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In this framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrate that our method has performance competitive with that of state-of-the-art constrained clustering methods for most data sets and that it takes much less computation time. The experimental evaluation demonstrates the effectiveness of controlling the constraint priorities by using the boosting principle and shows that our constrained k-means algorithm functions correctly as a weak learner for boosting.
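
    The baseline that the article enhances, k-means with a constraint-violation check inserted into the assignment step, can be sketched compactly. This is a generic COP-k-means-style sketch, not the boosted algorithm, and the constraint pairs are hypothetical:

      import numpy as np

      def violates(i, c, labels, must, cannot):
          """Would assigning point i to cluster c break a pairwise constraint?"""
          for a, b in must:
              if i in (a, b):
                  j = b if i == a else a
                  if labels[j] not in (-1, c):
                      return True
          for a, b in cannot:
              if i in (a, b):
                  j = b if i == a else a
                  if labels[j] == c:
                      return True
          return False

      def constrained_kmeans(X, k, must=(), cannot=(), iters=20, seed=0):
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), size=k, replace=False)]
          labels = np.full(len(X), -1)
          for _ in range(iters):
              labels[:] = -1
              for i in range(len(X)):
                  # try clusters from nearest to farthest, skipping violations;
                  # a point stays unassigned if every cluster violates a constraint
                  for c in np.argsort(np.linalg.norm(centers - X[i], axis=1)):
                      if not violates(i, c, labels, must, cannot):
                          labels[i] = c
                          break
              for c in range(k):
                  if np.any(labels == c):
                      centers[c] = X[labels == c].mean(axis=0)
          return labels, centers

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
      labels, _ = constrained_kmeans(X, 2, must=[(0, 1)], cannot=[(0, 99)])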

  11. Interactive K-Means Clustering Method Based on User Behavior for Different Analysis Target in Medicine.

    Science.gov (United States)

    Lei, Yang; Yu, Dai; Bin, Zhang; Yang, Yang

    2017-01-01

    Clustering algorithms are a basis of data analysis and are widely used in analysis systems. However, given the high dimensionality of the data, a clustering algorithm may overlook the business relations between dimensions, especially in the medical field. As a result, the clustering result may not meet the business goals of the users. If the clustering process can incorporate the knowledge of the users, that is, the doctor's knowledge or the analysis intent, the clustering result can be more satisfactory. In this paper, we propose an interactive K-means clustering method to improve the user's satisfaction with the result. The core of this method is to use the user's feedback on the clustering result to optimize it. A particle swarm optimization algorithm is then used to optimize the parameters, especially the weight settings in the clustering algorithm, to make it reflect the user's business preference as much as possible. After this parameter optimization and adjustment, the clustering result can be closer to the user's requirement. Finally, we take an example from breast cancer to test our method. The experiments show the improved performance of our algorithm.

  12. Detector array and method

    International Nuclear Information System (INIS)

    Timothy, J.G.; Bybee, R.L.

    1978-01-01

    A detector array and method are described in which sets of electrode elements are provided. Each set consists of a number of linearly extending parallel electrodes. The sets of electrode elements are disposed at an angle (preferably orthogonal) with respect to one another so that the individual elements intersect and overlap individual elements of the other sets. Electrical insulation is provided between the overlapping elements. The detector array is exposed to a source of charged particles which in accordance with one embodiment comprise electrons derived from a microchannel array plate exposed to photons. Amplifier and discriminator means are provided for each individual electrode element. Detection means are provided to sense pulses on individual electrode elements in the sets, with coincidence of pulses on individual intersecting electrode elements being indicative of charged particle impact at the intersection of the elements. Electronic readout means provide an indication of coincident events and the location where the charged particle or particles impacted. Display means are provided for generating appropriate displays representative of the intensity and location of charged particles impacting on the detector array.

  13. Means of Hilbert space operators

    CERN Document Server

    Hiai, Fumio

    2003-01-01

    The monograph is devoted to a systematic study of means of Hilbert space operators by a unified method based on the theory of double integral transformations and Peller's characterization of Schur multipliers. General properties on means of operators such as comparison results, norm estimates and convergence criteria are established. After some general theory, special investigations are focused on three one-parameter families of A-L-G (arithmetic-logarithmic-geometric) interpolation means, Heinz-type means and binomial means. In particular, norm continuity in the parameter is examined for such means. Some necessary technical results are collected as appendices.

  14. A Control Variate Method for Probabilistic Performance Assessment. Improved Estimates for Mean Performance Quantities of Interest

    Energy Technology Data Exchange (ETDEWEB)

    MacKinnon, Robert J.; Kuhlman, Kristopher L

    2016-05-01

    We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain, computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters, and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the number of simulations needed to achieve an acceptable estimate.
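
    The estimator's mechanics are standard: given a control g that is correlated with the output f and whose mean is known exactly, subtract the optimally scaled deviation of g from f. A generic sketch with a toy integrand (not a transport simulation):

      import numpy as np

      rng = np.random.default_rng(0)
      u = rng.uniform(size=100_000)

      f = np.exp(u)        # stand-in PQI; true mean is e - 1
      g = u                # control variate with exactly known mean 1/2

      c = np.cov(f, g)[0, 1] / np.var(g)   # optimal scaling coefficient
      f_cv = f - c * (g - 0.5)             # same mean, reduced variance

      print(f.mean(), f.var())             # plain Monte Carlo estimate
      print(f_cv.mean(), f_cv.var())       # control-variate estimate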

  15. Networks and knowledge creation for meaning

    DEFF Research Database (Denmark)

    Brink, Tove

    2016-01-01

    The research in this paper reveals how business networks can create organisational knowledge that provides meaning for enabling innovation and for reducing the Levelized Cost Of Energy (LCOE). The research was conducted from June 2014 to May 2015, using a qualitative deductive research approach. The findings show that business networks provide an important frame for organising both central and distributed leadership to provide meaning on all levels of the network organisations. The business network consists of customers, suppliers and business partners in support of reciprocal learning, and through the organising process highly valuable findings regarding innovation can be utilised both for improved central decisions and for improved local application of knowledge once the meaning is established. Further research is needed to elaborate the business network frame and the organising…

  16. Competition Experiments as a Means of Evaluating Linear Free Energy Relationships

    Science.gov (United States)

    Mullins, Richard J.; Vedernikov, Andrei; Viswanathan, Rajesh

    2004-01-01

    The use of competition experiments as a means of evaluating linear free energy relationship in the undergraduate teaching laboratory is reported. The use of competition experiments proved to be a reliable method for the construction of Hammett plots with good correlation providing great flexibility with regard to the compounds and reactions that…

  17. Method and means for cracking oils

    Energy Technology Data Exchange (ETDEWEB)

    Crozier, R H

    1928-05-18

    In a retort for the distillation of coal, shale or the like, utilizing the heat in vapors drawn off at different stages from the retort to distill off oils of lower boiling point: the arrangement at the lower end of the retort of a flue or a series of flues acting as bracing members and providing for the introduction of a gas burner or gas burners adapted to be supplied with gas from the gas mains or the like, or from the retort, whereby the gas produced may be utilized to the greatest advantage.

  18. Comparative Performance Of Using PCA With K-Means And Fuzzy C Means Clustering For Customer Segmentation

    Directory of Open Access Journals (Sweden)

    Fahmida Afrin

    2015-08-01

    Data mining is the process of analyzing data and discovering useful information; sometimes it is called knowledge discovery. Clustering groups data in such a way that the data in one cluster are similar while data in different clusters are dissimilar. Many data mining techniques have been developed for customer segmentation. PCA works as a preprocessor for Fuzzy C-means and K-means, reducing high-dimensional and noisy data. Many clustering methods can be applied to customer segmentation. In this paper the performance of Fuzzy C-means and K-means after applying Principal Component Analysis is analyzed. We analyze the performance of these algorithms on a standard dataset. The results indicate that PCA-based fuzzy clustering produces better results than PCA-based K-means and is a more stable method for customer segmentation.

  19. Performance analysis of Arithmetic Mean method in determining peak junction temperature of semiconductor device

    Directory of Open Access Journals (Sweden)

    Mohana Sundaram Muthuvalu

    2015-12-01

    High-reliability users of microelectronic devices have been derating junction temperature and other critical stress parameters to improve device reliability and extend operating life. The reliability of a semiconductor is determined by its junction temperature. This paper gives a useful analysis of a mathematical approach that can be implemented to predict the temperature of a silicon die. The problem can be modeled as a heat conduction equation. In this study, a numerical approach based on an implicit scheme and the Arithmetic Mean (AM) iterative method is applied to solve the governing heat conduction equation. Numerical results are also included in order to assert the effectiveness of the proposed technique.
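
    As a sketch of the overall computation, the snippet below takes backward-Euler steps on the 1-D heat equation and solves each implicit system iteratively. Plain Jacobi iteration stands in for the AM solver, whose exact sweep structure is not given in the abstract; grid and material values are illustrative:

      import numpy as np

      nx, alpha, dx, dt = 51, 1e-4, 1.0 / 50, 0.1
      r = alpha * dt / dx ** 2

      u = np.zeros(nx)
      u[nx // 2] = 1.0                       # initial heat pulse at the die centre

      def implicit_step(u_old, tol=1e-10, max_iter=10_000):
          """One backward-Euler step, (1 + 2r) u_i - r (u_{i+1} + u_{i-1}) = u_old_i,
          solved by Jacobi iteration (a stand-in for the AM iterative solver)."""
          u_new = u_old.copy()
          for _ in range(max_iter):
              u_prev = u_new.copy()
              u_new[1:-1] = (u_old[1:-1] + r * (u_prev[2:] + u_prev[:-2])) / (1 + 2 * r)
              if np.max(np.abs(u_new - u_prev)) < tol:
                  break
          return u_new

      for _ in range(100):                   # march 100 time steps
          u = implicit_step(u)
      print(u.max())                         # peak temperature decays over time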

  20. Calculation of the energy provided by a PV generator. Comparative study: Conventional methods vs. artificial neural networks

    International Nuclear Information System (INIS)

    Almonacid, F.; Rus, C.; Perez-Higueras, P.; Hontoria, L.

    2011-01-01

    The use of photovoltaics for electricity generation purposes has recorded one of the largest increases in the field of renewable energies. The energy production of a grid-connected PV system depends on various factors. In a broad sense, it is considered that the annual energy provided by a generator is directly proportional to the annual radiation incident on the plane of the generator and to the installed nominal power. However, a range of factors influences the expected outcome by reducing the generation of energy. The aim of this study is to compare the results of four different methods for estimating the annual energy produced by a PV generator: three of them are classical methods and the fourth is based on an artificial neural network developed by the R and D Group for Solar and Automatic Energy at the University of Jaen. The results obtained show that the method based on an artificial neural network provides better results than the alternative classical methods under study, mainly because this method also takes into account some second-order effects, such as low irradiance, angular and spectral effects. -- Research highlights: → It is considered that the annual energy provided by a PV generator is directly proportional to the annual radiation incident on the plane of the generator and to the installed nominal power. → A range of factors influences the expected outcome by reducing the generation of energy (mismatch losses, dirt and dust, Ohmic losses, etc.). → The aim of this study is to compare the results of four different methods for estimating the annual energy produced by a PV generator: three of them are classical methods and the fourth is based on an artificial neural network. → The results obtained show that the method based on an artificial neural network provides better results than the alternative classical methods under study. While classical methods take only temperature losses into account, the method based on…

  1. Calculation of the mean scattering angle, the logarithmic decrement and its mean square

    International Nuclear Information System (INIS)

    Bersillon, O.; Caput, B.

    1984-06-01

    The calculation of the mean scattering angle, the logarithmic decrement and its mean square, starting from the Legendre polynomial expansion coefficients of the relevant elastic scattering angular distribution, is numerically studied with different methods, one of which is proposed for the usual determination of these quantities, which are present in the evaluated ENDF data files.

  2. A method for hardening or curing adhesives for flocking thermally sensitive substrata by means of an electron-beam

    International Nuclear Information System (INIS)

    Nablo, S.V.; Fussa, A.D.

    1975-01-01

    The invention relates to a method for hardening or curing adhesives for flocking thermally sensitive substrata by means of an electron beam. The method consists in accurately adjusting the parameters of irradiation by an electron beam, and the beam velocity, so as to obtain a very rapid hardening of the adhesives used for fixing flocking materials, or the like, to thermally sensitive substrata. This can be applied to the hardening or curing of adhesives for flocking thermally sensitive substrata which normally restrict the hardening rate.

  3. A Few Comments on Determining the Shapes of Hyperboloid Cooling Towers by the Means of Ambient Tangents Method

    OpenAIRE

    Jasińska, Elżbieta; Preweda, Edward

    2004-01-01

    The paper presents considerations on determining the parameters of location and shape of a model hyperboloid cooling tower by means of the ambient tangents method. Attention is drawn to the method of determining the so-called length of the tangent, understood as the horizontal distance between the station and the point of tangency with the structure, as well as to the impact of the calculation of this length on the parameters of the shell being determined. The calculations of…

  4. Segmentation of dermatoscopic images by frequency domain filtering and k-means clustering algorithms.

    Science.gov (United States)

    Rajab, Maher I

    2011-11-01

    Since the introduction of epiluminescence microscopy (ELM), image analysis tools have been extended to the field of dermatology, in an attempt to algorithmically reproduce clinical evaluation. Accurate image segmentation of skin lesions is one of the key steps for useful, early and non-invasive diagnosis of cutaneous melanomas. This paper proposes two image segmentation algorithms based on frequency domain processing and k-means clustering/fuzzy k-means clustering. The two methods are capable of segmenting and extracting the true border that reveals the global structure irregularity (indentations and protrusions), which may suggest excessive cell growth or regression of a melanoma. As a pre-processing step, Fourier low-pass filtering is applied to reduce the surrounding noise in a skin lesion image. A quantitative comparison of the techniques is enabled by the use of synthetic skin lesion images that model lesions covered with hair to which Gaussian noise is added. The proposed techniques are also compared with an established optimal-based thresholding skin-segmentation method. It is demonstrated that for lesions with a range of different border irregularity properties, the k-means clustering and fuzzy k-means clustering segmentation methods provide the best performance over a range of signal to noise ratios. The proposed segmentation techniques are also demonstrated to have similar performance when tested on real skin lesions representing high-resolution ELM images. This study suggests that the segmentation results obtained using a combination of low-pass frequency filtering and k-means or fuzzy k-means clustering are superior to the result that would be obtained by using k-means or fuzzy k-means clustering segmentation methods alone. © 2011 John Wiley & Sons A/S.
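
    The overall pipeline, Fourier low-pass filtering followed by two-cluster k-means on the smoothed intensities, can be sketched in a few lines. This is an outline of the idea on a synthetic image, not the paper's algorithm or its fuzzy variant:

      import numpy as np

      def lowpass_kmeans_segment(img, cutoff=0.08, iters=20):
          # frequency-domain low-pass filter
          F = np.fft.fftshift(np.fft.fft2(img))
          h, w = img.shape
          yy, xx = np.ogrid[:h, :w]
          radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
          F[radius > cutoff] = 0
          smooth = np.abs(np.fft.ifft2(np.fft.ifftshift(F)))
          # two-cluster k-means on the smoothed intensities
          c = np.percentile(smooth, [25, 75])          # initial centroids
          for _ in range(iters):
              labels = np.abs(smooth[..., None] - c).argmin(-1)
              c = np.array([smooth[labels == j].mean() for j in (0, 1)])
          return labels                                # 1 = brighter region

      rng = np.random.default_rng(0)
      img = rng.normal(0.2, 0.05, (64, 64))
      img[20:44, 20:44] += 0.5                         # synthetic "lesion"
      print(lowpass_kmeans_segment(img).sum())         # approx lesion area in pixels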

  5. Protein Based Molecular Markers Provide Reliable Means to Understand Prokaryotic Phylogeny and Support Darwinian Mode of Evolution

    Directory of Open Access Journals (Sweden)

    Vaibhav eBhandari

    2012-07-01

    The analyses of genome sequences have led to the proposal that lateral gene transfers (LGTs) among prokaryotes are so widespread that they disguise the interrelationships among these organisms. This has led to questioning whether the Darwinian model of evolution is applicable to prokaryotic organisms. In this review, we discuss the usefulness of taxon-specific molecular markers such as conserved signature indels (CSIs) and conserved signature proteins (CSPs) for understanding the evolutionary relationships among prokaryotes and for assessing the influence of LGTs on prokaryotic evolution. The analyses of genomic sequences have identified large numbers of CSIs and CSPs that are unique properties of different groups of prokaryotes, ranging from the phylum to the genus level. The species distribution patterns of these molecular signatures strongly support a tree-like vertical inheritance of the genes containing these molecular signatures that is consistent with phylogenetic trees. Recent detailed studies in this regard on the Thermotogae and Archaea, which are reviewed here, have identified large numbers of CSIs and CSPs that are specific for the species from these two taxa and a number of their major clades. The genetic changes responsible for these CSIs (and CSPs) initially likely occurred in the common ancestors of these taxa and were then vertically transferred to various descendants. Although some CSIs and CSPs in unrelated groups of prokaryotes were identified, their small numbers and random occurrence have no apparent influence on the consistent tree-like branching pattern emerging from other markers. These results provide evidence that although LGT is an important evolutionary force, it does not mask the tree-like branching pattern of prokaryotes or the understanding of their evolutionary relationships. The identified CSIs and CSPs also provide novel and highly specific means for the identification of different groups of microbes and for taxonomical and biochemical studies.

  6. Protein based molecular markers provide reliable means to understand prokaryotic phylogeny and support Darwinian mode of evolution.

    Science.gov (United States)

    Bhandari, Vaibhav; Naushad, Hafiz S; Gupta, Radhey S

    2012-01-01

    The analyses of genome sequences have led to the proposal that lateral gene transfers (LGTs) among prokaryotes are so widespread that they disguise the interrelationships among these organisms. This has led to questioning of whether the Darwinian model of evolution is applicable to prokaryotic organisms. In this review, we discuss the usefulness of taxon-specific molecular markers such as conserved signature indels (CSIs) and conserved signature proteins (CSPs) for understanding the evolutionary relationships among prokaryotes and to assess the influence of LGTs on prokaryotic evolution. The analyses of genomic sequences have identified large numbers of CSIs and CSPs that are unique properties of different groups of prokaryotes ranging from phylum to genus levels. The species distribution patterns of these molecular signatures strongly support a tree-like vertical inheritance of the genes containing these molecular signatures that is consistent with phylogenetic trees. Recent detailed studies in this regard on the Thermotogae and Archaea, which are reviewed here, have identified large numbers of CSIs and CSPs that are specific for the species from these two taxa and a number of their major clades. The genetic changes responsible for these CSIs (and CSPs) initially likely occurred in the common ancestors of these taxa and then vertically transferred to various descendants. Although some CSIs and CSPs in unrelated groups of prokaryotes were identified, their small numbers and random occurrence has no apparent influence on the consistent tree-like branching pattern emerging from other markers. These results provide evidence that although LGT is an important evolutionary force, it does not mask the tree-like branching pattern of prokaryotes or understanding of their evolutionary relationships. The identified CSIs and CSPs also provide novel and highly specific means for identification of different groups of microbes and for taxonomical and biochemical studies.

  7. Identification of a Threshold Value for the DEMATEL Method: Using the Maximum Mean De-Entropy Algorithm

    Science.gov (United States)

    Chung-Wei, Li; Gwo-Hshiung, Tzeng

    To deal with complex problems, structuring them through graphical representations and analyzing causal influences can aid in illuminating complex issues, systems, or concepts. The DEMATEL method is a methodology which can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation—the impact-relations map—by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is to construct interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to obtain adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases to find the interrelationships between the criteria for evaluating effects in E-learning programs as an examples, we will compare the results obtained from the respondents and from our method, and discuss that the different impact-relations maps from these two methods.
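
    For context, the DEMATEL computation that precedes thresholding is standard: normalise the direct-influence matrix and form the total-relation matrix T = X(I - X)^-1, then keep only entries above the threshold when drawing the impact-relations map. In the sketch below the ratings are hypothetical and the threshold is the naive mean of T, which is precisely the choice the maximum mean de-entropy algorithm is designed to replace:

      import numpy as np

      # Hypothetical 4-criteria direct-influence ratings from experts.
      A = np.array([[0, 3, 2, 1],
                    [1, 0, 3, 2],
                    [2, 1, 0, 3],
                    [1, 2, 1, 0]], dtype=float)

      # Normalise, then total-relation matrix T = X (I - X)^-1.
      s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
      X = A / s
      T = X @ np.linalg.inv(np.eye(len(A)) - X)

      # Keep only influences above a threshold when drawing the map.
      threshold = T.mean()                    # naive choice; MMDE replaces this step
      impact_map = np.where(T >= threshold, T, 0.0)
      print(np.round(T, 3)); print(round(threshold, 3))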

  8. 32 CFR 199.14 - Provider reimbursement methods.

    Science.gov (United States)

    2010-07-01

    ... physicians. (6) All services provided by nurse anesthetists. (7) All services related to discharges involving... more accurate data became available. (v) No update for inflation. The children's hospital differential... considered lower volume hospitals. (B) Hospitals that subsequently become higher volume hospitals. In any...

  9. Superheavy nuclei: a relativistic mean field outlook

    International Nuclear Information System (INIS)

    Afanasjev, A.V.

    2006-01-01

    The analysis of quasi-particle spectra in the heaviest A∼250 nuclei with spectroscopic data provides an additional constraint for the choice of effective interaction for the description of superheavy nuclei. It strongly suggests that only the parametrizations which predict Z = 120 and N = 172 as shell closures are reliable for superheavy nuclei within the relativistic mean field theory. The influence of the central depression in the density distribution of spherical superheavy nuclei on the shell structure is studied. A large central depression produces large shell gaps at Z = 120 and N = 172. The shell gaps at Z = 126 and N = 184 are favoured by a flat density distribution in the central part of the nucleus. It is shown that approximate particle number projection (PNP) by means of the Lipkin-Nogami (LN) method removes the pairing collapse seen at these gaps in the calculations without PNP.

  10. A combined approach for the enhancement and segmentation of mammograms using modified fuzzy C-means method in wavelet domain.

    Science.gov (United States)

    Srivastava, Subodh; Sharma, Neeraj; Singh, S K; Srivastava, R

    2014-07-01

    In this paper, a combined approach for the enhancement and segmentation of mammograms is proposed. In the preprocessing stage, a contrast limited adaptive histogram equalization (CLAHE) method is applied to obtain better-contrast mammograms. After this, the proposed combined methods are applied. In the first step of the proposed approach, a two-dimensional (2D) discrete wavelet transform (DWT) is applied to all the input images. In the second step, a proposed nonlinear complex diffusion based unsharp masking and crispening method is applied to the approximation coefficients of the wavelet-transformed images to further highlight abnormalities such as micro-calcifications and tumours and to reduce false positives (FPs). Thirdly, a modified fuzzy c-means (FCM) segmentation method is applied to the output of the second step. In the modified FCM method, mutual information is proposed as a similarity measure in place of the conventional Euclidean distance based dissimilarity measure for FCM segmentation. Finally, the inverse 2D-DWT is applied. The efficacy of the proposed unsharp masking and crispening method for image enhancement is evaluated in terms of signal-to-noise ratio (SNR), and that of the proposed segmentation method is evaluated in terms of random index (RI), global consistency error (GCE), and variation of information (VoI). The performance of the proposed segmentation approach is compared with other commonly used segmentation approaches such as Otsu's thresholding, texture based, k-means, and FCM clustering as well as thresholding. From the obtained results, it is observed that the proposed segmentation approach performs better and takes less processing time in comparison to the standard FCM and the other segmentation methods under consideration.
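
    For readers who want a feel for the processing chain, here is a minimal, hedged Python sketch of the post-CLAHE steps using common libraries (pywt, skfuzzy, scipy). A plain Gaussian unsharp mask stands in for the paper's nonlinear complex diffusion filter, and standard FCM stands in for the mutual-information-modified variant, so this is an outline of the pipeline rather than a reproduction of the method.

      import numpy as np
      import pywt
      import skfuzzy as fuzz
      from scipy import ndimage

      def enhance_and_segment(img, n_clusters=3):
          # Step 1: 2D discrete wavelet transform.
          cA, (cH, cV, cD) = pywt.dwt2(img.astype(float), 'db2')
          # Step 2: unsharp masking on the approximation coefficients
          # (Gaussian blur replaces the complex-diffusion smoother).
          cA = cA + 1.5 * (cA - ndimage.gaussian_filter(cA, sigma=2))
          # Step 3: fuzzy c-means on the enhanced coefficients; skfuzzy
          # expects the data laid out as (features, samples).
          cntr, u, *rest = fuzz.cluster.cmeans(cA.reshape(1, -1), c=n_clusters,
                                               m=2.0, error=1e-4, maxiter=200)
          labels = np.argmax(u, axis=0).reshape(cA.shape)
          # Step 4: inverse 2D-DWT reconstructs the enhanced image.
          return pywt.idwt2((cA, (cH, cV, cD)), 'db2'), labels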

  11. A Parametric k-Means Algorithm

    Science.gov (United States)

    Tarpey, Thaddeus

    2007-01-01

    Summary The k points that optimally represent a distribution (usually in terms of a squared error loss) are called the k principal points. This paper presents a computationally intensive method that automatically determines the principal points of a parametric distribution. Cluster means from the k-means algorithm are nonparametric estimators of principal points. A parametric k-means approach is introduced for estimating principal points by running the k-means algorithm on a very large simulated data set from a distribution whose parameters are estimated using maximum likelihood. Theoretical and simulation results are presented comparing the parametric k-means algorithm to the usual k-means algorithm and an example on determining sizes of gas masks is used to illustrate the parametric k-means algorithm. PMID:17917692
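
    A compact way to see the idea: fit the model by maximum likelihood, simulate a very large sample from the fitted distribution, and run ordinary k-means on it. The sketch below does this for a univariate normal with scipy and scikit-learn; the distribution, sample sizes, and choice of k are illustrative, not the paper's settings.

      import numpy as np
      from scipy import stats
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      observed = rng.normal(loc=5.0, scale=2.0, size=500)   # toy data

      # Step 1: maximum-likelihood fit of the parametric model.
      mu, sigma = stats.norm.fit(observed)

      # Step 2: k-means on a large simulated sample from the fitted
      # distribution; the cluster means estimate the k principal points.
      simulated = rng.normal(mu, sigma, size=200_000).reshape(-1, 1)
      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(simulated)
      print(np.sort(km.cluster_centers_.ravel()))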

  12. An Efficient Power Regeneration and Drive Method of an Induction Motor by Means of an Optimal Torque Derived by Variational Method

    Science.gov (United States)

    Inoue, Kaoru; Ogata, Kenji; Kato, Toshiji

    When the motor speed is reduced by using a regenerative brake, the mechanical energy of rotation is converted into electrical energy. When the regenerative torque is large, the corresponding current increases, so the copper loss also becomes large. On the other hand, when the regenerative torque is small, the damping of the rotation dissipates more energy as time elapses. In order to use the limited energy effectively, an optimal regenerative torque should be found so that as much electrical energy as possible is regenerated. This paper proposes a design methodology for the regenerative torque of an induction motor that maximizes the regenerated electric energy by means of the variational method. Similarly, an optimal torque for acceleration is derived in order to minimize the energy needed to drive. Finally, an efficient motor drive system combining the proposed optimal torque with a power storage system stabilizing the DC link voltage is proposed. The effectiveness of the proposed methods is illustrated by both simulations and experiments.

  13. Covariant density functional theory beyond mean field and applications for nuclei far from stability

    International Nuclear Information System (INIS)

    Ring, P

    2010-01-01

    Density functional theory provides a very powerful tool for a unified microscopic description of nuclei all over the periodic table. It is not only successful in reproducing bulk properties of nuclear ground states such as binding energies, radii, or deformation parameters, but it also allows the investigation of collective phenomena, such as giant resonances and rotational excitations. However, it is based on the mean field concept and therefore it has its limits. We discuss here two methods based on covariant density functional theory going beyond the mean field concept: (i) models with an energy dependent self energy allowing the coupling to complex configurations and a quantitative description of the width of giant resonances, and (ii) methods of configuration mixing between Slater determinants with different deformation and orientation providing a very successful description of transitional nuclei and quantum phase transitions.

  14. Development of mean field theories in nuclear physics and in disordered media

    International Nuclear Information System (INIS)

    Orland, Henri.

    1981-04-01

    This work, in two parts, deals with the development of mean field theories in nuclear physics (nuclei in equilibrium and collisions of heavy ions) as well as in disordered media. In the first part, two different ways of tackling the problem of developments around mean field theories are explained. Given an approximate wave function for the system, the natural idea for including the correlations is to expand the exact wave function of the system around the mean field wave function. The first two chapters show two different ways of dealing with this problem: the perturbative approach (Hartree-Fock equations with two-body collisions) and functional methods. In the second part: mean field theory for spin glasses. The problem for spin glasses is to construct a physically acceptable mean field theory. The importance of this problem in statistical mechanics is linked to the fact that the mean field theory provides a qualitative description of the low temperature phase and is the starting point needed for using more sophisticated methods (renormalization group). Two approaches to this problem are presented, one based on the Sherrington-Kirkpatrick model and the other based on a model of spins with purely local disorder and competitive interaction between the spins [fr

  15. Means and method for examining the human body by means of penetrating radiation

    International Nuclear Information System (INIS)

    1983-01-01

    The invention relates to a tomography method in which data are obtained by directing a fan of X-ray beams to the human body. The fan can rotate around the body. Moreover, the sources may rotate about an axis perpendicular to the plane of irradiation. (Auth.)

  16. Quantitation of IgE by means of a modified radial immunodiffusion method in comparison with the radioimmunosorbent test (RIST)

    International Nuclear Information System (INIS)

    Wiedermann, G.; Stemberg, H.; Kraft, D.; Ambrosch, F.; Schadlbauer, B.; Jarischko, E.; Vienna Univ.

    1974-01-01

    Serum IgE was quantified by means of a modified radial immunodiffusion (RID) technique. To improve the visibility of the precipitin bands, a staining procedure with DOPA was applied. Pretreatment of the sera with dextran sulfate proved necessary in order to avoid nonspecific ring formation in the agar gel. In comparison with the RIST it turned out that sera containing less than 500 I.U. IgE/ml did not produce precipitin bands with this method. Sera containing 500-999 I.U. IgE/ml occasionally exhibited positive results with the RID technique, whereas sera with more than 1,000 I.U./ml were regularly positive. In its present form the RID may be used as a screening method for sera with higher IgE levels. Within the above-mentioned limits, the IgE levels calculated by means of the RID test roughly corresponded to the values determined by the RIST. (orig.) [de

  17. RESEARCH OF PROBLEMS OF DESIGN OF COMPLEX TECHNICAL PROVIDING AND THE GENERALIZED MODEL OF THEIR DECISION

    Directory of Open Access Journals (Sweden)

    A. V. Skrypnikov

    2015-01-01

    Full Text Available In this work, the general ideas of the method of V. I. Skurikhin are developed with the specified features taken into account, and questions of the analysis and synthesis of a complex of technical means are considered in more detail, bringing them to a level suitable for use in the engineering practice of designing information management systems. A general system approach to choosing the technical means of an information management system is created, and a general technique is developed for the system analysis and synthesis of the complex of technical means and its subsystems that achieves an extreme value of the criterion of efficiency of functioning of the technical complex of the information management system. The main attention is paid to the applied side of system research into complex technical provision, in particular to the definition of criteria of quality of functioning of a technical complex, the development of methods for analyzing the information base of the information management system and defining requirements for technical means, and also methods of structural synthesis of the main subsystems of the complex technical provision. Thus, the purpose is to study, on the basis of the system approach, the complex technical provision of the information management system and to develop a number of methods of analysis and synthesis suitable for use in the engineering practice of systems design. The well-known paradox of the development of management information systems is that the parameters of the system, and consequently the requirements for the complex hardware, cannot be strictly justified before the development of algorithms and programs, and vice versa. A possible method of overcoming these difficulties is the prediction of the structure and parameters of the complex hardware for certain management information at the early stages of development, with subsequent clarification and

  18. Measuring core inflation in India: An asymmetric trimmed mean approach

    Directory of Open Access Journals (Sweden)

    Naresh Kumar Sharma

    2015-12-01

    Full Text Available The paper seeks to obtain an optimal asymmetric trimmed mean-based core inflation measure in the class of trimmed mean measures when the distribution of price changes is leptokurtic and skewed to the right for any given period. Several estimators based on asymmetric trimmed mean approach are constructed and estimates generated by use of these estimators are evaluated on the basis of certain established empirical criteria. The paper also provides the method of trimmed mean expression “in terms of percentile score.” This study uses 69 monthly price indices which are constituent components of Wholesale Price Index for the period, April 1994 to April 2009, with 1993–1994 as the base year. Results of the study indicate that an optimally trimmed estimator is found when we trim 29.5% from the left-hand tail and 20.5% from the right-hand tail of the distribution of price changes.
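
    The mechanics of an asymmetric weighted trim are easy to state in code. Below is a small, hedged numpy sketch that sorts component price changes, drops the bottom 29.5% and top 20.5% of the cumulative weight (the paper's optimal trims), and averages what remains; the boundary handling is deliberately crude and the data are invented.

      import numpy as np

      def asymmetric_trimmed_mean(changes, weights, left=0.295, right=0.205):
          order = np.argsort(changes)
          x, w = np.asarray(changes)[order], np.asarray(weights)[order]
          cum = np.cumsum(w) / w.sum()          # cumulative weight share
          keep = (cum > left) & (cum <= 1.0 - right)
          return np.average(x[keep], weights=w[keep])

      # Five toy component price changes (%) with index weights.
      print(asymmetric_trimmed_mean([0.1, 0.4, 0.6, 1.5, 9.0],
                                    [0.2, 0.3, 0.2, 0.2, 0.1]))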

  19. An Assessment of Mean Areal Precipitation Methods on Simulated Stream Flow: A SWAT Model Performance Assessment

    Directory of Open Access Journals (Sweden)

    Sean Zeiger

    2017-06-01

    Full Text Available Accurate mean areal precipitation (MAP) estimates are essential input forcings for hydrologic models. However, the selection of the most accurate method to estimate MAP can be daunting because there are numerous methods to choose from (e.g., proximate gauge, direct weighted average, surface-fitting, and remotely sensed methods). Multiple methods (n = 19) were used to estimate MAP with precipitation data from 11 distributed monitoring sites and 4 remotely sensed data sets. Each method was validated against the stream flow simulated by the hydrologic model, the Soil and Water Assessment Tool (SWAT). SWAT was validated using a split-site method and the observed stream flow data from five nested-scale gauging sites in a mixed-land-use watershed of the central USA. Cross-validation results showed the error associated with surface-fitting and remotely sensed methods ranging from −4.5 to −5.1% and −9.8 to −14.7%, respectively. Split-site validation results showed percent bias (PBIAS) values that ranged from −4.5 to −160%. Second-order polynomial functions especially overestimated precipitation and subsequent stream flow simulations (PBIAS = −160%) in the headwaters. The results indicated that using an inverse-distance weighted, linear polynomial interpolation or multiquadric function method to estimate MAP may improve SWAT model simulations. Collectively, the results highlight the importance of spatially distributed observed hydroclimate data for precipitation and subsequent stream flow estimations. The MAP methods demonstrated in the current work can be used to reduce hydrologic model uncertainty caused by watershed physiographic differences.
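
    Of the methods named above, inverse-distance weighting is the simplest to show concretely. The hedged numpy sketch below estimates MAP at one subbasin centroid from surrounding gauges; the coordinates, precipitation totals, and the power parameter of 2 are illustrative assumptions rather than values from the study.

      import numpy as np

      def idw_map(gauge_xy, gauge_p, centroid_xy, power=2.0):
          d = np.linalg.norm(gauge_xy - centroid_xy, axis=1)
          if np.any(d == 0):               # centroid coincides with a gauge
              return float(gauge_p[np.argmin(d)])
          w = 1.0 / d ** power
          return float(np.sum(w * gauge_p) / np.sum(w))

      # Three gauges (km coordinates) around a subbasin centroid; event totals in mm.
      xy = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
      p = np.array([12.0, 20.0, 16.0])
      print(idw_map(xy, p, np.array([5.0, 3.0])))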

  20. Method and apparatus for continuous sampling

    International Nuclear Information System (INIS)

    Marcussen, C.

    1982-01-01

    An apparatus and method for continuously sampling a pulverous material flow include means for extracting a representative subflow from the material flow. A screw conveyor is provided to cause the extracted subflow to be pushed upwardly through a duct to an overflow. Means for transmitting a radiation beam transversely to the subflow in the duct, and means for sensing the transmitted beam through opposite pairs of windows in the duct, are provided to measure the concentration of one or more constituents in the subflow. (author)

  1. Digital photography provides a fast, reliable, and noninvasive method to estimate anthocyanin pigment concentration in reproductive and vegetative plant tissues.

    Science.gov (United States)

    Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo

    2018-03-01

    Anthocyanin pigments have become a model trait for evolutionary ecology as they often provide adaptive benefits for plants. Anthocyanins have been traditionally quantified biochemically or more recently using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make it the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested the correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from the anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength of green indices. Color indices derived from both digital images and spectral reflectance strongly correlate with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r² and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, but in the case of striped calyces, both methods showed a comparable relationship with anthocyanin concentration. Using digital images brings new opportunities to accurately quantify the
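
    As a hedged illustration of the general recipe (not the paper's exact indices), the Python sketch below averages the RGB channels of an image region with Pillow and computes a simple green-weakness index that should rise as red/purple anthocyanins accumulate; the index formula is a generic example, and calibration against biochemically measured concentrations is left to the data at hand.

      import numpy as np
      from PIL import Image

      def mean_rgb(path):
          # Average R, G, B over an image or a cropped tissue region.
          arr = np.asarray(Image.open(path).convert('RGB'), dtype=float)
          return arr.reshape(-1, 3).mean(axis=0)

      def green_weakness(rgb):
          # Illustrative index only: the share of non-green signal, which
          # grows as anthocyanins absorb in the green part of the spectrum.
          r, g, b = rgb
          return 1.0 - g / (r + g + b)

      # Usage: green_weakness(mean_rgb('petal_crop.png'))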

  2. Reaction paths based on mean first-passage times

    International Nuclear Information System (INIS)

    Park, Sanghyun; Sener, Melih K.; Lu Deyu; Schulten, Klaus

    2003-01-01

    Finding representative reaction pathways is important for understanding the mechanism of molecular processes. We propose a new approach for constructing reaction paths based on mean first-passage times. This approach incorporates information about all possible reaction events as well as the effect of temperature. As an application of this method, we study representative pathways of excitation migration in a photosynthetic light-harvesting complex, photosystem I. The paths thus computed provide a complete, yet distilled, representation of the kinetic flow of excitation toward the reaction center, thereby succinctly characterizing the function of the system
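
    To make the central quantity concrete: for a continuous-time Markov model with generator Q, the mean first-passage times t to a target state solve a linear system on the non-target states. The numpy sketch below computes them for a toy four-state chain; the rates are invented and bear no relation to the photosystem I network.

      import numpy as np

      def mfpt_to_target(Q, target):
          # Q[i, j] is the i -> j rate (rows sum to zero). For each
          # non-target state i: sum_j Q[i, j] * t[j] = -1, with t[target] = 0.
          others = [i for i in range(Q.shape[0]) if i != target]
          t = np.linalg.solve(Q[np.ix_(others, others)], -np.ones(len(others)))
          out = np.zeros(Q.shape[0])
          out[others] = t
          return out

      # Toy chain 0 - 1 - 2 - 3 with unit hopping rates; target state 3.
      Q = np.array([[-1.,  1.,  0.,  0.],
                    [ 1., -2.,  1.,  0.],
                    [ 0.,  1., -2.,  1.],
                    [ 0.,  0.,  0.,  0.]])
      print(mfpt_to_target(Q, 3))          # -> [6. 5. 3. 0.]

    Walking from the initial state along neighbours with steadily decreasing first-passage time gives one crude notion of a representative pathway, in the spirit of the record above.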

  3. The mean distance to the nth neighbour in a uniform distribution of random points: an application of probability theory

    International Nuclear Information System (INIS)

    Bhattacharyya, Pratip; Chakrabarti, Bikas K

    2008-01-01

    We study different ways of determining the mean distance ⟨r_n⟩ between a reference point and its nth neighbour among random points distributed with uniform density in a D-dimensional Euclidean space. First, we present a heuristic method; though this method provides only a crude mathematical result, it shows a simple way of estimating ⟨r_n⟩. Next, we describe two alternative means of deriving the exact expression of ⟨r_n⟩: we review the method using absolute probability and develop an alternative method using conditional probability. Finally, we obtain an approximation to ⟨r_n⟩ from the mean volume between the reference point and its nth neighbour and compare it with the heuristic and exact results
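
    For intuition, the sketch below checks the standard Poisson-process limit of this quantity, ⟨r_n⟩ = [Γ(n + 1/D)/Γ(n)]·(ρ V_D)^(−1/D) with V_D the volume of the unit D-ball, against a brute-force Monte Carlo estimate; the uniform finite-N case treated in the record approaches this limit for large N, and the box size and sample counts below are arbitrary choices.

      import numpy as np
      from scipy.special import gamma

      def exact_poisson_mean_rn(n, D, rho):
          VD = np.pi ** (D / 2) / gamma(D / 2 + 1)   # unit-ball volume
          return gamma(n + 1.0 / D) / gamma(n) * (rho * VD) ** (-1.0 / D)

      def mc_mean_rn(n, D, rho, npts=5000, trials=400, seed=1):
          rng = np.random.default_rng(seed)
          L = (npts / rho) ** (1.0 / D)              # box side at density rho
          samples = []
          for _ in range(trials):
              pts = rng.uniform(0, L, size=(npts, D))
              d = np.linalg.norm(pts - L / 2, axis=1)  # reference at the centre
              samples.append(np.sort(d)[n - 1])
          return float(np.mean(samples))

      print(exact_poisson_mean_rn(3, 2, 1.0), mc_mean_rn(3, 2, 1.0))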

  4. ELV Recycling Service Provider Selection Using the Hybrid MCDM Method: A Case Application in China

    Directory of Open Access Journals (Sweden)

    Fuli Zhou

    2016-05-01

    Full Text Available With the rapid depletion of natural resources and undesired environmental changes globally, more interest has been shown in research on green supply chain practices, including end-of-life vehicle (ELV) recycling. ELV recycling is mandatory for auto-manufacturers by legislation for the purpose of minimizing potential environmental damage. The purpose of the present research is to determine the best choice of ELV recycling service provider by employing an integrated hybrid multi-criteria decision making (MCDM) method. In this research, economic, environmental and social factors are taken into consideration. Linguistic variables and trapezoidal fuzzy numbers (TFNs) are applied in this evaluation to deal with vague and qualitative information. With the combined weights of the criteria calculated using fuzzy aggregation and Shannon entropy techniques, the normative multi-criteria optimization technique, the fuzzy VIKOR (FVIKOR) method, is applied to explore the best solution. An application was performed based on the proposed hybrid MCDM method, and sensitivity analysis was conducted on different decision making scenarios. The present study provides a decision-making approach for ELV recycling business selection under sustainability and green philosophy with high robustness and easy implementation.
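
    The entropy-weighting step mentioned above is simple enough to show in full. The hedged numpy sketch below derives criterion weights from a crisp decision matrix; the matrix values are invented, and the fuzzy aggregation of expert judgements and the FVIKOR ranking stage are not reproduced.

      import numpy as np

      def entropy_weights(X):
          # X: rows = alternatives, columns = criteria, all entries positive.
          P = X / X.sum(axis=0)                              # alternative shares
          E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # entropy per criterion
          d = 1.0 - E                                        # degree of divergence
          return d / d.sum()

      X = np.array([[0.7, 0.5, 0.9],
                    [0.6, 0.8, 0.4],
                    [0.9, 0.6, 0.7]])
      print(np.round(entropy_weights(X), 3))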

  5. Apparatus and method for transverse tomography

    Energy Technology Data Exchange (ETDEWEB)

    1976-06-16

    An improved apparatus and method for generating the two-dimensional filtered back-projected image of a slice of an object is described. In accordance with the invention a photodetector means and illuminating means directed toward the photodetector means are provided. A carrier means is disposed between the illuminating means and the photodetector means, the carrier means having a plurality of substantially parallel elongated projections on the surface thereof. Each projection has an optical characteristic (transmissivity or reflectivity) representing the density characteristic of the slice of the object as measured at a particular relative rotational angle. A mask means, comprising a plurality of cycles of a substantially sinusoidally shaped pattern of varying amplitude, is disposed between the illuminating means and the photodetector. Means are provided for moving the carrier means and the mask means with respect to each other. Finally, a display or recording means, synchronized with the moving means, is responsive to the output of the photodetector for displaying the back-projected image.

  6. Towards explaining the speed of k-means

    NARCIS (Netherlands)

    Manthey, Bodo; van de Pol, Jan Cornelis; Raamsdonk, F.; Stoelinga, Mariëlle Ida Antoinette

    2011-01-01

    The k-means method is a popular algorithm for clustering, known for its speed in practice. This stands in contrast to its exponential worst-case running time. To explain the speed of the k-means method, a smoothed analysis has been conducted. We sketch this smoothed analysis and a generalization

  7. The freezing process of bound water in chestnut-tree (Aesculus hippocastanum L.) bark studied by means of the NMR method

    International Nuclear Information System (INIS)

    Haranczyk, H.; Weglarz, W.

    1994-01-01

    The freezing process of bound water in chestnut-tree (Aesculus hippocastanum L.) bark was studied by means of the NMR method. The measured relaxation time (as a function of temperature) shows two components: the first from solid-state water (T2* ≈ 20 μs) and the second from liquid water (T2* = 1 ms). These results are presented and discussed

  8. Evaluation of protection factors provided by full-face masks using man-test method at workplace

    International Nuclear Information System (INIS)

    Izumi, Yukio; Kinouchi, Nobuyuki; Ikezawa, Yoshio.

    1994-01-01

    From a practical point of view, to estimate the protection factors (PFs) provided by full-face masks, a number of protection factors were measured with a man-test apparatus just before the wearers started radiation work in a radiation controlled area. PFs for a total of 2,279 cases were measured under five simulated working conditions. The measured PFs were widely distributed, from 2.3 to 6,700. About 95% of workers obtained PFs of more than 50, and about 64% showed much higher PFs of more than 1,000 due to good fitting. For some persons, the measured PFs varied irregularly and changed to a large degree. This man-test method is a reliable technique that has been confirmed to prevent unexpected internal exposure. From the results obtained, the method appears necessary for providing a better mask and a higher PF for each worker. (author)

  9. Mean field interaction in biochemical reaction networks

    KAUST Repository

    Tembine, Hamidou; Tempone, Raul; Vilanova, Pedro

    2011-01-01

    In this paper we establish a relationship between chemical dynamics and mean field game dynamics. We show that chemical reaction networks can be studied using noisy mean field limits. We provide deterministic, noisy and switching mean field limits

  10. Electron Inelastic-Mean-Free-Path Database

    Science.gov (United States)

    SRD 71 NIST Electron Inelastic-Mean-Free-Path Database (PC database, no charge)   This database provides values of electron inelastic mean free paths (IMFPs) for use in quantitative surface analyses by AES and XPS.

  11. Seismic restraint means for radiation detector

    International Nuclear Information System (INIS)

    Underwood, R.H.; Todt, W.H.

    1983-01-01

    Seismic restraint means are provided for mounting an elongated, generally cylindrical nuclear radiation detector within a tubular thimble in a nuclear reactor monitor system. The restraint means permits longitudinal movement of the radiation detector into and out of the thimble. Each restraint means comprises a split clamp ring and a plurality of symmetrically spaced support arms pivotally mounted on the clamp ring. Each support arm has spring bias means and thimble contact means, e.g. insulating rollers, whereby the contact means engage the thimble with a constant predetermined force which minimizes seismic vibration acting on the radiation detector. (author)

  12. Segmentation of Mushroom and Cap width Measurement using Modified K-Means Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Eser Sert

    2014-01-01

    Full Text Available Mushroom is one of the commonly consumed foods, and image processing is an effective way to examine its visual features and to detect the size of a mushroom. We developed software for segmenting a mushroom in a picture and for measuring the cap width of the mushroom. The K-Means clustering method, one of the most successful clustering methods, is used for the process. In our study we customized the algorithm to get the best result and tested the algorithm. In the system, the mushroom picture is first filtered and its histogram is equalized, and after that segmentation is performed. The results showed that the customized algorithm performed better segmentation than the classical K-Means algorithm. Tests performed on the designed software showed that segmentation of pictures with complex backgrounds is performed with high accuracy, and 20 mushroom caps were measured with 2.281% relative error.
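
    A baseline version of such a pipeline fits in a few lines with scikit-learn. The hedged sketch below clusters pixels by colour and returns a label image; the paper's customizations of K-Means are not reproduced, and converting the cap width from pixels to physical units would need a calibration object in the scene.

      import numpy as np
      from sklearn.cluster import KMeans

      def segment_colors(img_rgb, k=3):
          # Cluster pixels in RGB space; returns an (H, W) label image.
          h, w, _ = img_rgb.shape
          X = img_rgb.reshape(-1, 3).astype(float)
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
          return labels.reshape(h, w)

      def cap_width_pixels(mask):
          # Widest horizontal extent of the mushroom cluster's binary mask.
          return int(mask.sum(axis=1).max())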

  13. SOME ASPECTS OF THE INTEGRATION OF CONTENT, FORMS, MEANS, METHODS OF TEACHING FOREIGN LANGUAGES IN THE CONDITIONS OF INFORMATIZATION OF EDUCATION

    Directory of Open Access Journals (Sweden)

    И Ю Мишота

    2016-12-01

    Full Text Available The article considers some questions of the integration of methods for teaching foreign languages using the means of informatization of education. Attention is focused on the fact that the application of information technologies in teaching foreign languages integrally supplements and expands the possibilities of effectively solving didactic tasks in the creation of modern pedagogical models and is a definite factor in the integration of methods and forms of education.

  14. Method for providing slip energy control in permanent magnet electrical machines

    Science.gov (United States)

    Hsu, John S.

    2006-11-14

    An electric machine (40) has a stator (43), a permanent magnet rotor (38) with permanent magnets (39) and a magnetic coupling uncluttered rotor (46) for inducing a slip energy current in secondary coils (47). A dc flux can be produced in the uncluttered rotor when the secondary coils are fed with dc currents. The magnetic coupling uncluttered rotor (46) has magnetic brushes (A, B, C, D) which couple flux in through the rotor (46) to the secondary coils (47c, 47d) without inducing a current in the rotor (46) and without coupling a stator rotational energy component to the secondary coils (47c, 47d). The machine can be operated as a motor or a generator in multi-phase or single-phase embodiments and is applicable to the hybrid electric vehicle. A method of providing a slip energy controller is also disclosed.

  15. A Novel Polygonal Finite Element Method: Virtual Node Method

    Science.gov (United States)

    Tang, X. H.; Zheng, C.; Zhang, J. H.

    2010-05-01

    Polygonal finite element method (PFEM), which can construct shape functions on polygonal elements, provides greater flexibility in mesh generation. However, the non-polynomial form of traditional PFEM, such as the Wachspress method and the Mean Value method, leads to inexact numerical integration, since integration techniques for non-polynomial functions are immature. To overcome this shortcoming, a great number of integration points have to be used to obtain sufficiently exact results, which increases computational cost. In this paper, a novel polygonal finite element method, called the virtual node method (VNM), is proposed. The features of the present method can be listed as follows: (1) It is a PFEM with polynomial form, so Hammer integration and Gauss integration can be naturally used to obtain exact numerical integration; (2) The shape functions of VNM satisfy all the requirements of the finite element method. To test the performance of VNM, intensive numerical tests are carried out. It is found that, in the standard patch test, VNM can achieve significantly better results than the Wachspress method and the Mean Value method. Moreover, it is observed that VNM can achieve better results than triangular 3-node elements in the accuracy test.

  16. Fluidics platform and method for sample preparation

    Science.gov (United States)

    Benner, Henry W.; Dzenitis, John M.

    2016-06-21

    Provided herein are fluidics platforms and related methods for performing integrated sample collection and solid-phase extraction of a target component of the sample all in one tube. The fluidics platform comprises a pump, particles for solid-phase extraction and a particle-holding means. The method comprises contacting the sample with one or more reagents in a pump, coupling a particle-holding means to the pump and expelling the waste out of the pump while the particle-holding means retains the particles inside the pump. The fluidics platform and methods herein described allow solid-phase extraction without pipetting and centrifugation.

  17. INFORMATION TECHNOLOGY AS A MEANS TO CAPTURE THE STUDENTS OF THE COURSE "METHODS OF "MATHEMATICS" EDUCATIONAL TEACHING FIELD"

    Directory of Open Access Journals (Sweden)

    Skvortsova S.

    2014-11-01

    Full Text Available The paper presents an analysis of the concepts of "information technology", "Information Technologies in Education", "information technology education", "computer technology", "New Information Technologies" and "New Information Technologies in Education". It is found that the most general concept in this list is "information technology", understood as a set of methods and technical means for collecting, processing, storing, transmitting and presenting data. Slightly narrower in this context is the concept of "New Information Technologies", which requires the involvement of computers and other technical means to work with data. The emphasis on the learning process calls for the more detailed terms "Information Technologies in Education" and "New Information Technologies in Education", defined as the involvement of information technologies (and, accordingly, of the technical means) in the educational process to create new perceptions, transfer knowledge, evaluate studies and support the all-round development of the individual. Along with these, the term "information technology training" is also used, denoting a set of training and educational materials and technical tools for educational purposes, as well as the system of scientific knowledge about their role and place in the educational process. Meanwhile, the term "information technology" encompasses all of these concepts, so in a broad sense it can be used to denote any of them. Extensions of the term "information technology" include "information and communication technologies" (ICT) and "information technology education", understood as educational technology using special methods, software and hardware to work with information, and "ICT training", understood as IT training focused on the use of computer communications networks for solving instructional problems or their fragments. Taking into account tasks such as creating the methodical maintenance of a discipline

  18. Evaluation of Effective Factors on e-Loyalty in Organizations Providing Electronic Services using Fuzzy AHP Method

    Directory of Open Access Journals (Sweden)

    fatemeh mohammadi

    2012-12-01

    Full Text Available In today's business world, proper identification of customers' requirements and a quick response to these requirements is a key to commercial success. Increasing customer loyalty affects profitability, and organizations can secure their long-term interests by planning for it. In today's competitive world, the services provided by competing companies grow ever more similar, and it is hard to surprise customers with a completely new service in the long term, because the newest services are quickly imitated by competitors and brought to market. Hence investment in customer loyalty is an effective and profitable investment for companies. A key criterion for e-services is customer loyalty. In order to study the causes of e-loyalty in organizations providing e-services, this research identified the factors affecting customer loyalty in e-services, prepared a questionnaire, and used the fuzzy analytic hierarchy process to obtain the weight of each factor and ultimately rank them. The results show that the quality of service provided to e-services customers is the most important factor in creating e-loyalty.

  19. Aphasics' defective perception of connotative meaning of verbal items which have no denotative meaning.

    Science.gov (United States)

    Ammon, K H; Moerman, C; Guleac, J D

    1977-12-01

    This study deals with the question of whether the grasping of connotative meaning is disturbed in aphasic patients. The method used was of the "maluma - takete" type (Koehler, 1947): matching of synthetic words to meaningless figures. It was shown that aphasics from different countries with different languages have a disturbed perception of connotative meaning. There was a correlation with the severity of the language comprehension disturbance in the aphasics.

  20. Serving some and serving all: how providers navigate the challenges of providing racially targeted health services.

    Science.gov (United States)

    Zhou, Amy

    2017-10-01

    Racially targeted healthcare provides racial minorities with culturally and linguistically appropriate health services. This mandate, however, can conflict with the professional obligation of healthcare providers to serve patients based on their health needs. The dilemma between serving a particular population and serving all is heightened when the patients seeking care are racially diverse. This study examines how providers in a multi-racial context decide whom to include or exclude from health programs. This study draws on 12 months of ethnographic fieldwork at an Asian-specific HIV organization. Fieldwork included participant observation of HIV support groups, community outreach programs, and substance abuse recovery groups, as well as interviews with providers and clients. Providers managed the dilemma in different ways. While some programs in the organization focused on an Asian clientele, others de-emphasized race and served a predominantly Latino and African American clientele. Organizational structures shaped whether services were delivered according to racial categories. When funders examined client documents, providers prioritized finding Asian clients so that their documents reflected program goals to serve the Asian population. In contrast, when funders used qualitative methods, providers could construct an image of a program that targets Asians during evaluations while they included other racial minorities in their everyday practice. Program services were organized more broadly by health needs. Even within racially targeted programs, the meaning of race fluctuates and is contested. Patients' health needs cross cut racial boundaries, and in some circumstances, the boundaries of inclusion can expand beyond specific racial categories to include racial minorities and underserved populations more generally.

  1. DIFFERENTIAL DIAGNOSTICS MODEL RESEARCH BY MEANS OF THE POTENTIAL FUNCTIONS METHOD FOR NEUROLOGY DISEASES CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    V. Z. Stetsyuk

    2016-10-01

    Full Text Available Informatization in medicine offers many opportunities to enhance the quality of medical support and the accuracy of diagnosis, and provides for the use of accumulated experience. Modern program systems are now utilized as additional tools for obtaining appropriate advice. This article offers a way to help the physicians of the neurology department of NCSH «OKHMATDYT» during diagnosis. It was decided to design a program system for this purpose based on a differential diagnostic model. The key problems in differential diagnosis are the similarity of symptoms within one disease group and the absence of a key symptom, so a differential diagnostic model is needed. It is constructed using the potential function method in a characteristics space formed by 100-200 points - patients with their symptoms. The main feature of the method here is that the decision function is built during the recognition step, united with learning, which became possible with the help of modern powerful computers.
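
    As a rough, hedged illustration of the classical potential function method (not the authors' system), the sketch below accumulates a 1/(1 + a·r²) potential from each stored patient of a disease class and assigns a new symptom vector to the class with the highest total; the symptom encoding and the constant a are assumptions.

      import numpy as np

      def class_potential(x, X_class, a=1.0):
          # Total potential induced at x by the stored cases of one class.
          r2 = np.sum((X_class - x) ** 2, axis=1)
          return float(np.sum(1.0 / (1.0 + a * r2)))

      def classify(x, classes):
          # classes: dict mapping a disease label to an (n_cases, n_features)
          # array of encoded patient symptom vectors.
          return max(classes, key=lambda label: class_potential(x, classes[label]))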

  2. The two-wave X-ray field calculated by means of integral-equation methods

    International Nuclear Information System (INIS)

    Bremer, J.

    1984-01-01

    The problem of calculating the two-wave X-ray field on the basis of the Takagi-Taupin equations is discussed for the general case of curved lattice planes. A two-dimensional integral equation which incorporates the nature of the incoming radiation, the form of the crystal/vacuum boundary, and the curvature of the structure, is deduced. Analytical solutions for the symmetrical Laue case with incoming plane waves are obtained directly for perfect crystals by means of iteration. The same method permits a simple derivation of the narrow-wave Laue and Bragg cases. Modulated wave fronts are discussed, and it is shown that a cut-off in the width of an incoming plane wave leads to lateral oscillations which are superimposed on the Pendelloesung fringes. Bragg and Laue shadow fields are obtained. The influence of a non-zero kernel is discussed and a numerical procedure for calculating wave amplitudes in curved crystals is presented. (Auth.)

  3. Using the geometric mean fluorescence intensity index method to measure ZAP-70 expression in patients with chronic lymphocytic leukemia

    Directory of Open Access Journals (Sweden)

    Wu YJ

    2016-02-01

    Full Text Available Yu-Jie Wu, Hui Wang, Jian-Hua Liang, Yi Miao, Lu Liu, Hai-Rong Qiu, Chun Qiao, Rong Wang, Jian-Yong Li Department of Hematology, First Affiliated Hospital of Nanjing Medical University, Jiangsu Province Hospital, Nanjing, People’s Republic of China Abstract: Expression of ζ-chain-associated protein kinase 70 kDa (ZAP-70) in chronic lymphocytic leukemia (CLL) is associated with more aggressive disease and can help differentiate CLL cases expressing mutated immunoglobulin heavy chain variable region (IgHV) genes from those expressing unmutated ones. However, standardizing ZAP-70 expression by flow cytometric analysis has proved unsatisfactory. The key point is that ZAP-70 is weakly expressed, with a continuous expression pattern rather than a clear discrimination between positive and negative CLL cells, which means that the resulting judgment is subjective. Thus, in this study, we aimed at assessing the reliability and repeatability of measuring ZAP-70 expression using the geometric mean fluorescence intensity (geo MFI) index method, based on flow cytometry with 256-channel resolution, in a series of 402 CLL patients, and at comparing ZAP-70 with other biological and clinical prognosticators. According to IgHV mutational status, we were able to confirm that the optimal cut-off point for the geo MFI index was 3.5 in the test set. In multivariate analyses that included the major clinical and biological prognostic markers for CLL, the prognostic impact of ZAP-70 expression appeared to have stronger discriminatory power when the geo MFI index method was applied. In addition, we found that ZAP-70-positive patients according to the geo MFI index method had a shorter time to first treatment and shorter overall survival (P=0.0002 and P=0.0491, respectively). This is the first report showing that ZAP-70 expression can be evaluated by a new approach, the geo MFI index, which could be a useful prognostic method as it is more reliable, less subjective, and therefore better associated with improvement of CLL prognostication
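
    For concreteness, a geometric mean fluorescence intensity and a ratio-style index can be computed as below. This numpy sketch is only a hedged illustration: the 3.5 cut-off is the value reported in the record, but the use of a reference population for the ratio is an assumption about how the index is formed.

      import numpy as np

      def geo_mfi(intensities):
          # Geometric mean of the (positive) fluorescence intensities.
          x = np.asarray(intensities, dtype=float)
          return float(np.exp(np.log(x[x > 0]).mean()))

      def zap70_positive(cll_cells, reference_cells, cutoff=3.5):
          # Index as the ratio of the CLL population's geo MFI to a
          # reference population's geo MFI (reference choice assumed).
          return geo_mfi(cll_cells) / geo_mfi(reference_cells) > cutoff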

  4. Method to separate various isotopes in compounds by means of laser radiation

    International Nuclear Information System (INIS)

    Meyer-Kretschmer, G.; Jetter, H.; Toennies, P.

    1980-01-01

    The uranium hexafluoride together with an inert carrier gas is cooled down below 50 K by adiabatic expansion; the vibrational state of the molecules is then changed isotope-selectively using laser light, and subsequently positive ions are produced by means of an electron beam. The ions are removed from the gas by means of an electric field. (DG) [de

  5. Mean protein evolutionary distance: a method for comparative protein evolution and its application.

    Directory of Open Access Journals (Sweden)

    Michael J Wise

    Full Text Available Proteins are under tight evolutionary constraints, so if a protein changes it can only do so in ways that do not compromise its function. In addition, the proteins in an organism evolve at different rates. Leveraging the history of patristic distance methods, a new method for analysing comparative protein evolution, called Mean Protein Evolutionary Distance (MeaPED), measures differential resistance to evolutionary pressure across viral proteomes and is thereby able to point to the proteins' roles. Different species' proteomes can also be compared because the results, consistent across virus subtypes, concisely reflect the very different lifestyles of the viruses. The MeaPED method is here applied to influenza A virus, hepatitis C virus, human immunodeficiency virus (HIV), dengue virus, rotavirus A, polyomavirus BK and measles, which span the positive and negative single-stranded, doubled-stranded and reverse transcribing RNA viruses, and double-stranded DNA viruses. From this analysis, host interaction proteins including hemagglutinin (influenza), and viroporins agnoprotein (polyomavirus), p7 (hepatitis C) and VPU (HIV) emerge as evolutionary hot-spots. By contrast, RNA-directed RNA polymerase proteins including L (measles), PB1/PB2 (influenza) and VP1 (rotavirus), and internal serine proteases such as NS3 (dengue and hepatitis C virus) emerge as evolutionary cold-spots. The hot spot influenza hemagglutinin protein is contrasted with the related cold spot H protein from measles. It is proposed that evolutionary cold-spot proteins can become significant targets for second-line anti-viral therapeutics, in cases where front-line vaccines are not available or have become ineffective due to mutations in the hot-spot, generally more antigenically exposed proteins. The MeaPED package is available from www.pam1.bcs.uwa.edu.au/~michaelw/ftp/src/meaped.tar.gz.

  6. Mean protein evolutionary distance: a method for comparative protein evolution and its application.

    Science.gov (United States)

    Wise, Michael J

    2013-01-01

    Proteins are under tight evolutionary constraints, so if a protein changes it can only do so in ways that do not compromise its function. In addition, the proteins in an organism evolve at different rates. Leveraging the history of patristic distance methods, a new method for analysing comparative protein evolution, called Mean Protein Evolutionary Distance (MeaPED), measures differential resistance to evolutionary pressure across viral proteomes and is thereby able to point to the proteins' roles. Different species' proteomes can also be compared because the results, consistent across virus subtypes, concisely reflect the very different lifestyles of the viruses. The MeaPED method is here applied to influenza A virus, hepatitis C virus, human immunodeficiency virus (HIV), dengue virus, rotavirus A, polyomavirus BK and measles, which span the positive and negative single-stranded, doubled-stranded and reverse transcribing RNA viruses, and double-stranded DNA viruses. From this analysis, host interaction proteins including hemagglutinin (influenza), and viroporins agnoprotein (polyomavirus), p7 (hepatitis C) and VPU (HIV) emerge as evolutionary hot-spots. By contrast, RNA-directed RNA polymerase proteins including L (measles), PB1/PB2 (influenza) and VP1 (rotavirus), and internal serine proteases such as NS3 (dengue and hepatitis C virus) emerge as evolutionary cold-spots. The hot spot influenza hemagglutinin protein is contrasted with the related cold spot H protein from measles. It is proposed that evolutionary cold-spot proteins can become significant targets for second-line anti-viral therapeutics, in cases where front-line vaccines are not available or have become ineffective due to mutations in the hot-spot, generally more antigenically exposed proteins. The MeaPED package is available from www.pam1.bcs.uwa.edu.au/~michaelw/ftp/src/meaped.tar.gz.

  7. A Novel Grouping Method for Lithium Iron Phosphate Batteries Based on a Fractional Joint Kalman Filter and a New Modified K-Means Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaoyu Li

    2015-07-01

    Full Text Available This paper presents a novel grouping method for lithium iron phosphate batteries. In this method, a simplified electrochemical impedance spectroscopy (EIS) model is utilized to describe the battery characteristics. Dynamic stress tests (DST) and a fractional joint Kalman filter (FJKF) are used to extract battery model parameters. In order to realize equal-number grouping of batteries, a new modified K-means clustering algorithm is proposed. Two rules are designed to equalize the numbers of elements in each group and to exchange samples among groups. In this paper, the principles of battery model selection, the physical meaning and identification method of the model parameters, data preprocessing and the equal-number clustering method for battery grouping are comprehensively described. Additionally, experiments for battery grouping and method validation are designed. The method is useful for applications involving the grouping of fresh batteries for electric vehicles (EVs) and the screening of aged batteries for recycling.
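
    The equal-number constraint is the distinctive twist, and a greedy version of it is easy to sketch. The hedged scikit-learn example below runs ordinary K-Means and then moves the worst-fitting members of oversized clusters to the nearest cluster with room; the paper's two exchange rules are more refined, and the sketch assumes the sample count divides evenly by the group count.

      import numpy as np
      from sklearn.cluster import KMeans

      def equal_number_groups(X, k, random_state=0):
          target = len(X) // k              # assumes len(X) is divisible by k
          km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
          labels, centers = km.labels_.copy(), km.cluster_centers_
          for c in range(k):
              members = np.where(labels == c)[0]
              while len(members) > target:
                  # Move the member farthest from its own centre ...
                  d_own = np.linalg.norm(X[members] - centers[c], axis=1)
                  far = members[np.argmax(d_own)]
                  # ... to the nearest cluster that still has room.
                  order = np.argsort(np.linalg.norm(centers - X[far], axis=1))
                  dest = next(d for d in order
                              if d != c and np.sum(labels == d) < target)
                  labels[far] = dest
                  members = np.where(labels == c)[0]
          return labels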

  8. Development of high-density bentonite barriers by means of spraying methods. Part 2. Investigation of field conditions

    International Nuclear Information System (INIS)

    Tanaka, Toshiyuki; Kobayashi, Ichizo; Nakajima, Makoto; Toida, Masaru

    2006-01-01

    The authors have developed a method of constructing high-density bentonite by means of wet spraying to act as a backfill material in narrow places in radioactive waste disposal facilities. On the basis of the results of laboratory tests, they conducted field spraying tests to investigate the field conditions. The results of these tests are summarized as follows: 1) The bentonite could be sprayed smoothly by using a rotary spraying machine and a screw conveyor. 2) Provided that the air flow was at least 18.5 m³/min and the nozzle diameter did not exceed 25 mm, an average dry density of bentonite of 1.6 Mg/m³ or higher could be achieved. 3) The dry density was constant within the spraying distance range 500 mm ∼ 2000 mm. 4) With a nozzle diameter of 19 mm, a spraying distance of 1000 mm, and a water content of 19.5%, an average dry density of the sprayed bentonite of 1.6 Mg/m³ or higher and a rebound ratio not exceeding 30% were achieved. 5) The dry density of the sprayed bentonite decreased as the volume of bentonite supplied was increased, and it was shown to be closely related to the rotational speed of the spraying machine and the volume of bentonite sprayed from each hole. (author)

  9. Gripping means for fuel assemblies of nuclear reactor

    International Nuclear Information System (INIS)

    Batjukov, V.I.; Fadeev, A.I.; Shkhian, T.G.; Vjugov, O.N.

    1980-01-01

    The proposed gripping means for fuel assemblies of a nuclear reactor comprises a housing, whereupon there is movably mounted a slider provided with longitudinally extending slots to receive gripping jaws whose tails are pivotably secured to the housing of the gripping means. On one side, the end faces of the longitudinally extending slots are slanted with respect to the longitudinal axis of the gripping means and come in contact with the teeth of the gripping jaws provided on the end which is opposite to the tail, whereby the jaws open as the slider and housing of the gripping means moves relative to each other so that the teeth are received in an internal groove provided in the head of the fuel assembly

  10. Assessing implementation difficulties in tobacco use prevention and cessation counselling among dental providers

    Directory of Open Access Journals (Sweden)

    Murtomaa Heikki

    2011-05-01

    Full Text Available Abstract Background Tobacco use adversely affects oral health. Clinical guidelines recommend that dental providers promote tobacco abstinence and provide patients who use tobacco with brief tobacco use cessation counselling. Research shows that these guidelines are seldom implemented, however. To improve guideline adherence and to develop effective interventions, it is essential to understand provider behaviour and challenges to implementation. This study aimed to develop a theoretically informed measure for assessing among dental providers implementation difficulties related to tobacco use prevention and cessation (TUPAC) counselling guidelines, to evaluate those difficulties among a sample of dental providers, and to investigate a possible underlying structure of applied theoretical domains. Methods A 35-item questionnaire was developed based on key theoretical domains relevant to the implementation behaviours of healthcare providers. Specific items were drawn mostly from the literature on TUPAC counselling studies of healthcare providers. The data were collected from dentists (n = 73) and dental hygienists (n = 22) in 36 dental clinics in Finland using a web-based survey. Of 95 providers, 73 participated (76.8%). We used Cronbach's alpha to ascertain the internal consistency of the questionnaire. Mean domain scores were calculated to assess different aspects of implementation difficulties and exploratory factor analysis to assess the theoretical domain structure. The authors agreed on the labels assigned to the factors on the basis of their component domains and the broader behavioural and theoretical literature. Results Internal consistency values for theoretical domains varied from 0.50 ('emotion') to 0.71 ('environmental context and resources'). The domain 'environmental context and resources' had the lowest mean score (21.3%; 95% confidence interval [CI], 17.2 to 25.4) and was identified as a potential implementation difficulty. The domain emotion

  11. Mean field interaction in biochemical reaction networks

    KAUST Repository

    Tembine, Hamidou

    2011-09-01

    In this paper we establish a relationship between chemical dynamics and mean field game dynamics. We show that chemical reaction networks can be studied using noisy mean field limits. We provide deterministic, noisy and switching mean field limits and illustrate them with numerical examples. © 2011 IEEE.

  12. Mean excitation energies for molecular ions

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Phillip W.K.; Sauer, Stephan P.A. [Department of Chemistry, University of Copenhagen, Copenhagen (Denmark); Oddershede, Jens [Department of Physics, Chemistry, and Pharmacy, University of Southern Denmark, Odense (Denmark); Quantum Theory Project, Departments of Physics and Chemistry, University of Florida, Gainesville, FL (United States); Sabin, John R., E-mail: sabin@qtp.ufl.edu [Department of Physics, Chemistry, and Pharmacy, University of Southern Denmark, Odense (Denmark); Quantum Theory Project, Departments of Physics and Chemistry, University of Florida, Gainesville, FL (United States)

    2017-03-01

    The essential material constant that determines the bulk of the stopping power of high energy projectiles, the mean excitation energy, is calculated for a range of smaller molecular ions using the RPA method. It is demonstrated that the mean excitation energies of both molecules and atoms increase with ionic charge. However, while the mean excitation energies of atoms also increase with atomic number, the opposite is the case for the mean excitation energies of molecules and molecular ions. The origin of these effects is explained by considering the spectral representation of the excited states contributing to the mean excitation energy.
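
    The quantity itself has a compact definition once a spectral representation is available: ln I0 is the oscillator-strength-weighted mean of ln E over the excitation spectrum. The sketch below evaluates it for a made-up three-line spectrum; real calculations, as in the record, sum over the full RPA excitation spectrum.

      import numpy as np

      def mean_excitation_energy(E, f):
          # ln I0 = sum_n f_n ln E_n / sum_n f_n, with excitation
          # energies E_n (eV) and dipole oscillator strengths f_n.
          E, f = np.asarray(E, float), np.asarray(f, float)
          return float(np.exp(np.sum(f * np.log(E)) / np.sum(f)))

      # Toy three-transition spectrum (values invented).
      print(mean_excitation_energy([10.2, 21.0, 38.0], [0.45, 0.35, 0.20]))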

  13. EMS Provider Assessment of Vehicle Damage Compared to a Professional Crash Reconstructionist

    Science.gov (United States)

    Lerner, E. Brooke; Cushman, Jeremy T.; Blatt, Alan; Lawrence, Richard; Shah, Manish N.; Swor, Robert; Brasel, Karen; Jurkovich, Gregory J.

    2011-01-01

    Objective To determine the accuracy of EMS provider assessments of motor vehicle damage, when compared to measurements made by a professional crash reconstructionist. Methods EMS providers caring for adult patients injured during a motor vehicle crash and transported to the regional trauma center in a midsized community were interviewed upon ED arrival. The interview collected provider estimates of crash mechanism of injury. For crashes that met a preset severity threshold, the vehicle’s owner was asked to consent to having a crash reconstructionist assess their vehicle. The assessment included measuring intrusion and external auto deformity. Vehicle damage was used to calculate change in velocity. Paired t-test and correlation were used to compare EMS estimates and investigator derived values. Results 91 vehicles were enrolled; of these 58 were inspected and 33 were excluded because the vehicle was not accessible. 6 vehicles had multiple patients. Therefore, a total of 68 EMS estimates were compared to the inspection findings. Patients were 46% male, 28% admitted to hospital, and 1% died. Mean EMS estimated deformity was 18” and mean measured was 14”. Mean EMS estimated intrusion was 5” and mean measured was 4”. EMS providers and the reconstructionist had 67% agreement for determination of external auto deformity (kappa 0.26), and 88% agreement for determination of intrusion (kappa 0.27) when the 1999 Field Triage Decision Scheme Criteria were applied. Mean EMS estimated speed prior to the crash was 48 mph±13 and mean reconstructionist estimated change in velocity was 18 mph±12 (correlation -0.45). EMS determined that 19 vehicles had rolled over while the investigator identified 18 (kappa 0.96). In 55 cases EMS and the investigator agreed on seatbelt use, for the remaining 13 cases there was disagreement (5) or the investigator was unable to make a determination (8) (kappa 0.40). Conclusions This study found that EMS providers are good at estimating

  14. The Meaning of Meaning, Etc.

    Science.gov (United States)

    Nilsen, Don L. F.

    This paper attempts to dispel a number of misconceptions about the nature of meaning, namely that: (1) synonyms are words that have the same meanings, (2) antonyms are words that have opposite meanings, (3) homonyms are words that sound the same but have different spellings and meanings, (4) converses are antonyms rather than synonyms, (5)…

  15. Mean-field magnetohydrodynamics and dynamo theory

    CERN Document Server

    Krause, F

    2013-01-01

    Mean-Field Magnetohydrodynamics and Dynamo Theory provides a systematic introduction to mean-field magnetohydrodynamics and the dynamo theory, along with the results achieved. Topics covered include turbulence and large-scale structures; general properties of the turbulent electromotive force; homogeneity, isotropy, and mirror symmetry of turbulent fields; and turbulent electromotive force in the case of non-vanishing mean flow. The turbulent electromotive force in the case of rotational mean motion is also considered. This book is comprised of 17 chapters and opens with an overview of the gen

  16. Performance of healthcare providers regarding iranian women experiencing physical domestic violence in Isfahan

    Directory of Open Access Journals (Sweden)

    Nasim Yousefnia

    2018-01-01

    Full Text Available Background: Domestic violence (DV) can threaten women's health. Healthcare providers (HCPs) may be the first to come into contact with a victim of DV, and their appropriate performance regarding a DV victim can decrease its complications. The aim of the present study was to investigate HCPs' performance regarding women experiencing DV in emergency and maternity wards of hospitals in Isfahan, Iran. Materials and Methods: The present descriptive, cross-sectional study was conducted among 300 HCPs working in emergency and maternity wards in hospitals in Isfahan. The participants were selected using quota random sampling from February to May 2016. A researcher-made questionnaire containing the five items of HCPs' performance regarding DV (assessment, intervention, documentation, reference, and follow-up) was used to collect data. The reliability and validity of the questionnaire were confirmed, and the collected data were analyzed using SPSS software. Cronbach's alpha was used to assess the reliability of the questionnaire. To present a general description of the data (variables, means, and standard deviations), frequency tables were designed. Results: The performance of the participants regarding DV in the assessment (mean = 64.22), intervention (mean = 68.55), and reference (mean = 68.32) stages was average. However, in the documentation (mean = 72.55) and follow-up (mean = 23.10) stages, their performance was good and weak, respectively (criterion from 100). Conclusions: Based on the results, because of defects in providing services for women experiencing DV, a practical indigenous guideline should be provided to treat and support these women.

  17. An ab initio approach to free-energy reconstruction using logarithmic mean force dynamics

    International Nuclear Information System (INIS)

    Nakamura, Makoto; Obata, Masao; Morishita, Tetsuya; Oda, Tatsuki

    2014-01-01

    We present an ab initio approach for evaluating a free energy profile along a reaction coordinate by combining logarithmic mean force dynamics (LogMFD) and first-principles molecular dynamics. The mean force, which is the derivative of the free energy with respect to the reaction coordinate, is estimated using density functional theory (DFT) in the present approach, which is expected to provide an accurate free energy profile along the reaction coordinate. We apply this new method, first-principles LogMFD (FP-LogMFD), to a glycine dipeptide molecule and reconstruct one- and two-dimensional free energy profiles in the framework of DFT. The resultant free energy profile is compared with that obtained by the thermodynamic integration method and by the previous LogMFD calculation using an empirical force-field, showing that FP-LogMFD is a promising method to calculate free energy without empirical force-fields
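
    The abstract defines the mean force as the derivative of the free energy with respect to the reaction coordinate, so a free energy profile follows from integrating mean-force samples along the coordinate, as in plain thermodynamic integration. A minimal sketch with synthetic mean-force values (LogMFD's on-the-fly dynamics is not reproduced here):

        import numpy as np
        from scipy.integrate import cumulative_trapezoid

        xi = np.linspace(0.0, 1.0, 101)        # reaction coordinate grid
        mean_force = np.cos(2 * np.pi * xi)    # synthetic <dF/dxi> samples

        # F(xi) = F(xi_0) + integral of the mean force along the coordinate
        free_energy = cumulative_trapezoid(mean_force, xi, initial=0.0)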

  18. Method and means for a spatial and temporal probe for laser-generated plumes based on density gradients

    Science.gov (United States)

    Yeung, E.S.; Chen, G.

    1990-05-01

    A method and means are disclosed for a spatial and temporal probe for laser-generated plumes based on density gradients. The method includes generation of a plume of vaporized material from a surface by an energy source. The probe laser beam is positioned so that the plume passes through the probe laser beam. Movement of the probe laser beam caused by refraction from the density gradient of the plume is monitored. Spatial and temporal information, correlated to one another, is then derived. 15 figs.

  19. High Performance Harmonic Isolation By Means of The Single-phase Series Active Filter Employing The Waveform Reconstruction Method

    DEFF Research Database (Denmark)

    Senturk, Osman Selcuk; Hava, Ahmet M.

    2009-01-01

    …current sampling delay reduction method (SDRM), a single-phase SAF compensated system provides higher harmonic isolation performance and higher stability margins compared to the system using conventional synchronous reference frame based methods. The analytical, simulation, and experimental studies of a 2...

  20. CORRECTION OF CHAIN-LINKING METHOD BY MEANS OF LLOYD-MOULTON-FISHER-TÖRNQVIST INDEX ON CROATIAN GDP DATA

    Directory of Open Access Journals (Sweden)

    Ante Rozga

    2013-02-01

    Full Text Available The national statistical agencies of the European Union use the chain-linking method to achieve the best possible decomposition of GDP. The main advantage of this method is its simplicity, so it can be applied in practice, which makes it particularly attractive when GDP has to be compiled on time. This method implicitly builds the transformation-substitution effect – inherent to rational producers and consumers – into GDP compilation, which is a prior assumption of normative economic theory. On empirical (ex-post) grounds it gives a more precise volume-price decomposition. In this paper, by means of constructing an LMTF index and a Fisher index derived from it, we suggest how to improve the chain-linking method, for the following reasons: (a) it is theoretically restrictive, (b) it gives only a rough decomposition of GDP into volume and price and, what seems to be its main disadvantage, (c) it gives an additively inconsistent GDP.

  1. Nuclear power generating station equipment qualification method and apparatus

    International Nuclear Information System (INIS)

    Fero, A.H.; Potochnik, L.M.; Riling, R.W.; Semethy, K.F.

    1990-01-01

    This patent describes a method of monitoring an object piece of qualified equipment in a nuclear power plant. It comprises providing a first passive mimic means for mimicking the effect of radiation received by the object piece on the object piece; providing a second mimic means for mimicking the effect of a thermal history of the object piece on the object piece and mounting the first passive mimic means and the second mimic means in close proximity to the object piece

  2. W5″ Test: A simple method for measuring mean power output in the bench press exercise.

    Science.gov (United States)

    Tous-Fajardo, Julio; Moras, Gerard; Rodríguez-Jiménez, Sergio; Gonzalo-Skok, Oliver; Busquets, Albert; Mujika, Iñigo

    2016-11-01

    The aims of the present study were to assess the validity and reliability of a novel simple test [Five Seconds Power Test (W5″ Test)] for estimating the mean power output during the bench press exercise at different loads, and its sensitivity to detect training-induced changes. Thirty trained young men completed as many repetitions as possible in a time of ≈5 s at 25%, 45%, 65% and 85% of one-repetition maximum (1RM) in two test sessions separated by four days. The number of repetitions, linear displacement of the bar and time needed to complete the test were recorded by two independent testers, and a linear encoder was used as the criterion measure. For each load, the mean power output was calculated in the W5″ Test as mechanical work per time unit and compared with that obtained from the linear encoder. Subsequently, 20 additional subjects (10 training group vs. 10 control group) were assessed before and after completing a seven-week training programme designed to improve maximal power. Results showed that both assessment methods correlated highly in estimating mean power output at different loads (r range: 0.86-0.94; p …), suggesting that the W5″ Test is a valid and reliable method for estimating mean power output in the bench press exercise in subjects who have previous resistance training experience.
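
    The quantity "mechanical work per time unit" used by the W5″ Test reduces to a one-line computation; a sketch under the assumption that work is load times gravity times total bar displacement (all values are illustrative):

        def mean_power_w5(load_kg: float, reps: int, displacement_m: float, time_s: float) -> float:
            """Mean power (watts) as mechanical work per unit time."""
            g = 9.81  # gravitational acceleration, m/s^2
            total_work_j = load_kg * g * displacement_m * reps  # joules over all reps
            return total_work_j / time_s

        # Example: 60 kg bar, 5 reps of 0.40 m displacement completed in ~5 s
        print(mean_power_w5(load_kg=60.0, reps=5, displacement_m=0.40, time_s=5.0))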

  3. Privacy Penetration Testing: How to Establish Trust in Your Cloud Provider

    DEFF Research Database (Denmark)

    Probst, Christian W.; Sasse, M. Angela; Pieters, Wolter

    2012-01-01

    In the age of cloud computing, IT infrastructure becomes virtualised and takes the form of services. This virtualisation results in an increasing de-perimeterisation, where the location of data and computation is irrelevant from a user’s point of view. This irrelevance means that private...... and institutional users no longer have a concept of where their data is stored, and whether they can trust cloud providers to protect their data. In this chapter, we investigate methods for increasing customers’ trust in cloud providers, and suggest a public penetration-testing agency as an essential component...... in a trustworthy cloud infrastructure....

  4. Mean excitation energies for molecular ions

    DEFF Research Database (Denmark)

    Jensen, Phillip W.K.; Sauer, Stephan P.A.; Oddershede, Jens

    2017-01-01

    The essential material constant that determines the bulk of the stopping power of high energy projectiles, the mean excitation energy, is calculated for a range of smaller molecular ions using the RPA method. It is demonstrated that the mean excitation energy of both molecules and atoms increase…

  5. Factorial and reduced K-means reconsidered

    NARCIS (Netherlands)

    Timmerman, Marieke E.; Ceulemans, Eva; Kiers, Henk A. L.; Vichi, Maurizio

    2010-01-01

    Factorial K-means analysis (FKM) and Reduced K-means analysis (RKM) are clustering methods that aim at simultaneously achieving a clustering of the objects and a dimension reduction of the variables. Because a comprehensive comparison between FKM and RKM is lacking in the literature so far, a…

  6. How Preservice Teachers Make Meaning of Mathematics Methods Texts

    Science.gov (United States)

    Harkness, Shelly Sheats; Brass, Amy

    2017-01-01

    Mathematics methods texts are important resources for supporting preservice teachers' learning. Methods instructors routinely assign readings from texts. Yet, anecdotally and also based on reading compliance literature, many students report that they do not read assigned readings. Within this paper we briefly describe the findings from a survey of…

  7. An advanced probabilistic structural analysis method for implicit performance functions

    Science.gov (United States)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
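
    For contrast with the AMV approach, the baseline mean-based second-moment method mentioned above propagates only a first-order mean and variance through the response function; a sketch with an illustrative performance function (a real analysis would call a finite element solver here):

        import numpy as np

        def g(x):
            return x[0] ** 2 + 3.0 * x[1]  # example performance function

        mu = np.array([2.0, 1.0])      # input means
        sigma = np.array([0.1, 0.2])   # input standard deviations

        # Finite-difference gradient of g at the mean point
        eps = 1e-6
        grad = np.array([(g(mu + eps * e) - g(mu)) / eps for e in np.eye(len(mu))])

        z_mean = g(mu)                                 # first-order estimate of E[Z]
        z_std = np.sqrt(np.sum((grad * sigma) ** 2))   # first-order estimate of std(Z)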

  8. Quantized kernel least mean square algorithm.

    Science.gov (United States)

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
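
    A minimal online sketch of the quantization idea described above: a new input either updates the coefficient of its closest center (if within radius epsilon) or is added as a new center. The kernel bandwidth, step size and epsilon are illustrative.

        import numpy as np

        def gaussian_kernel(x, c, bandwidth=1.0):
            return np.exp(-np.sum((x - c) ** 2) / (2.0 * bandwidth ** 2))

        def qklms(inputs, targets, step=0.5, epsilon=0.3):
            centers, coeffs = [], []
            for x, d in zip(inputs, targets):
                x = np.asarray(x, dtype=float)
                y = sum(a * gaussian_kernel(x, c) for a, c in zip(coeffs, centers))
                e = d - y  # prediction error
                if centers:
                    dists = [np.linalg.norm(x - c) for c in centers]
                    j = int(np.argmin(dists))
                    if dists[j] <= epsilon:
                        coeffs[j] += step * e  # "redundant" input updates closest center
                        continue
                centers.append(x)              # otherwise grow the network
                coeffs.append(step * e)
            return centers, coeffs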

  9. Mental maps and travel behaviour: meanings and models

    Science.gov (United States)

    Hannes, Els; Kusumastuti, Diana; Espinosa, Maikel León; Janssens, Davy; Vanhoof, Koen; Wets, Geert

    2012-04-01

    In this paper, the "mental map" concept is positioned with regard to individual travel behaviour to start with. Based on Ogden and Richards' triangle of meaning (The meaning of meaning: a study of the influence of language upon thought and of the science of symbolism. International library of psychology, philosophy and scientific method. Routledge and Kegan Paul, London, 1966) distinct thoughts, referents and symbols originating from different scientific disciplines are identified and explained in order to clear up the notion's fuzziness. Next, the use of this concept in two major areas of research relevant to travel demand modelling is indicated and discussed in detail: spatial cognition and decision-making. The relevance of these constructs to understand and model individual travel behaviour is explained and current research efforts to implement these concepts in travel demand models are addressed. Furthermore, these mental map notions are specified in two types of computational models, i.e. a Bayesian Inference Network (BIN) and a Fuzzy Cognitive Map (FCM). Both models are explained, and a numerical and a real-life example are provided. Both approaches yield a detailed quantitative representation of the mental map of decision-making problems in travel behaviour.

  10. Assessment of dietary intake of flavouring substances within the procedure for their safety evaluation: advantages and limitations of estimates obtained by means of a per capita method.

    Science.gov (United States)

    Arcella, D; Leclercq, C

    2005-01-01

    The procedure for the safety evaluation of flavourings adopted by the European Commission in order to establish a positive list of these substances is a stepwise approach which was developed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) and amended by the Scientific Committee on Food. Within this procedure, a per capita amount based on industrial poundage data of flavourings is calculated to estimate the dietary intake by means of the maximised survey-derived daily intake (MSDI) method. This paper reviews the MSDI method in order to check if it can provide conservative intake estimates as needed at the first steps of a stepwise procedure. Scientific papers and opinions dealing with the MSDI method were reviewed. Concentration levels reported by the industry were compared with estimates obtained with the MSDI method. It appeared that, in some cases, these estimates could be orders of magnitude (up to 5) lower than those calculated considering concentration levels provided by the industry and regular consumption of flavoured foods and beverages. A critical review of two studies which had been used to support the statement that MSDI is a conservative method for assessing exposure to flavourings among high consumers was performed. Special attention was given to the factors that affect exposure at high percentiles, such as brand loyalty and portion sizes. It is concluded that these studies may not be suitable to validate the MSDI method used to assess intakes of flavours by European consumers due to shortcomings in the assumptions made and in the data used. Exposure assessment is an essential component of risk assessment. The present paper suggests that the MSDI method is not sufficiently conservative. There is therefore a clear need for either using an alternative method to estimate exposure to flavourings in the procedure or for limiting intakes to the levels at which the safety was assessed.

  11. k-Means has polynomial smoothed complexity

    NARCIS (Netherlands)

    Arthur, David; Manthey, Bodo; Röglin, Heiko; Spielman, D.A.

    2009-01-01

    The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means…

  12. MEAN STACK WEB DEVELOPMENT

    OpenAIRE

    Le Thanh, Nghi

    2017-01-01

    The aim of the thesis is to provide a universal website using JavaScript as the main programming language. It also shows the basic parts anyone needs to create a web application. The thesis creates a simple CMS using the MEAN stack. MEAN is a collection of JavaScript-based technologies used to develop web applications. It is an acronym for MongoDB, Express, AngularJS and Node.js. It also allows non-technical users to easily update and manage a website’s content. But the application also lets o...

  13. On the validity of the arithmetic-geometric mean method to locate the optimal solution in a supply chain system

    Science.gov (United States)

    Chung, Kun-Jen

    2012-08-01

    Cardenas-Barron [Cardenas-Barron, L.E. (2010) 'A Simple Method to Compute Economic order Quantities: Some Observations', Applied Mathematical Modelling, 34, 1684-1688] indicates that there are several functions in which the arithmetic-geometric mean method (AGM) does not give the minimum. This article presents another situation to reveal that the AGM inequality to locate the optimal solution may be invalid for Teng, Chen, and Goyal [Teng, J.T., Chen, J., and Goyal S.K. (2009), 'A Comprehensive Note on: An Inventory Model under Two Levels of Trade Credit and Limited Storage Space Derived without Derivatives', Applied Mathematical Modelling, 33, 4388-4396], Teng and Goyal [Teng, J.T., and Goyal S.K. (2009), 'Comment on 'Optimal Inventory Replenishment Policy for the EPQ Model under Trade Credit Derived without Derivatives', International Journal of Systems Science, 40, 1095-1098] and Hsieh, Chang, Weng, and Dye [Hsieh, T.P., Chang, H.J., Weng, M.W., and Dye, C.Y. (2008), 'A Simple Approach to an Integrated Single-vendor Single-buyer Inventory System with Shortage', Production Planning and Control, 19, 601-604]. So, the main purpose of this article is to adopt the calculus approach not only to overcome shortcomings of the arithmetic-geometric mean method of Teng et al. (2009), Teng and Goyal (2009) and Hsieh et al. (2008), but also to develop the complete solution procedures for them.
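
    For context, a standard textbook illustration (not taken from the article itself) of how the AM-GM inequality locates the classical EOQ minimum, and why the device can fail elsewhere: with annual demand D, ordering cost K and unit holding cost h,

        \[
        TC(Q) \;=\; \frac{DK}{Q} + \frac{hQ}{2} \;\ge\; 2\sqrt{\frac{DK}{Q}\cdot\frac{hQ}{2}} \;=\; \sqrt{2DKh},
        \]
        with equality if and only if \( DK/Q = hQ/2 \), i.e. \( Q^{*} = \sqrt{2DK/h} \).

    The lower bound is attained only because the product of the two cost terms is independent of Q; when that product varies with the decision variable, the AM-GM bound need not be attained, which is precisely the failure mode that motivates the calculus approach adopted in the article.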

  14. Meaning in animal and human communication.

    Science.gov (United States)

    Scott-Phillips, Thomas C

    2015-05-01

    What is meaning? While traditionally the domain of philosophy and linguistics, this question, and others related to it, is critical for cognitive and comparative approaches to communication. This short essay provides a concise and accessible description of how the term meaning can and should be used, how it relates to 'intentional communication', and what would constitute good evidence of meaning in animal communication, in the sense that is relevant for comparisons with human language.

  15. Job Satisfaction and Affecting Factors in Primary Health Care Providers

    Directory of Open Access Journals (Sweden)

    Ferit Kaya

    2016-06-01

    Full Text Available Objective: The aim of this study is to assess the job satisfaction of primary health care providers and the factors affecting it. Methods: This cross-sectional and descriptive study was carried out among the staff in the Public Health Care Centers (PHCC) by administering a questionnaire under direct observation. Results: Out of the 310 people constituting the study universe, 282 participants (94%) were reached. The participants were 104 doctors, 132 assistant health care providers and 46 others (janitors, drivers). The mean age of the participants was 37.21±7.70; 60.6% of them were women, 80.1% married, and 96.5% had graduated from at least high school. The mean general job satisfaction score of the participants in the study was 63.24±13.63. While the mean general job satisfaction score of the physicians and the nurses was found to be higher, that of janitors and other staff was found to be lower. The mean general job satisfaction score was found to be higher among permanent and contract employees, women, health care staff, those whose wife/husband works, those who chose their job willingly, the more educated, and those who have longer working hours, high income, 3 or fewer children and find their job suitable for their skills; however, marital status, having children and age did not affect the mean job satisfaction score. Conclusion: Subjects having high income, who found their job suitable for their skills, and who chose their job willingly had higher job satisfaction scores. This implies that there should be a wage balance among staff with the same status. The lower job satisfaction scores in PHCCs indicate the necessity of improving the conditions of these centers.

  16. The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions.

    Energy Technology Data Exchange (ETDEWEB)

    Pilch, Martin M.

    2005-10-01

    Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.

  17. Dynamic Mean-Variance Asset Allocation

    OpenAIRE

    Basak, Suleyman; Chabakauri, Georgy

    2009-01-01

    Mean-variance criteria remain prevalent in multi-period problems, and yet not much is known about their dynamically optimal policies. We provide a fully analytical characterization of the optimal dynamic mean-variance portfolios within a general incomplete-market economy, and recover a simple structure that also inherits several conventional properties of static models. We also identify a probability measure that incorporates intertemporal hedging demands and facilitates much tractability in ...

  18. Bonding brackets on white spot lesions pretreated by means of two methods

    Directory of Open Access Journals (Sweden)

    Julia Sotero Vianna

    2016-04-01

    Full Text Available Abstract Objective: The aim of this study was to evaluate the shear bond strength (SBS) of brackets bonded to demineralized enamel pretreated with low-viscosity Icon Infiltrant resin (DMG) and glass ionomer cement (Clinpro XT Varnish, 3M Unitek), with and without aging. Methods: A total of 75 bovine enamel specimens were allocated into five groups (n = 15). Group 1 was the control group, in which the enamel surface was not demineralized. In the other four groups, the surfaces were submitted to cariogenic challenge and white spot lesions were treated. Groups 2 and 3 were treated with Icon Infiltrant resin; Groups 4 and 5, with Clinpro XT Varnish. After treatment, Groups 3 and 5 were artificially aged. Brackets were bonded with the Transbond XT adhesive system and SBS was evaluated by means of a universal testing machine. Statistical analysis was performed by one-way analysis of variance followed by Tukey post-hoc test. Results: All groups tested presented shear bond strengths similar to or higher than the control group. Specimens of Group 4 had significantly higher shear bond strength values (p < 0.05) than the others. Conclusion: Pretreatment of white spot lesions, with or without aging, did not decrease the SBS of brackets.

  19. Hierarchical Adaptive Means (HAM) clustering for hardware-efficient, unsupervised and real-time spike sorting.

    Science.gov (United States)

    Paraskevopoulou, Sivylla E; Wu, Di; Eftekhar, Amir; Constandinou, Timothy G

    2014-09-30

    This work presents a novel unsupervised algorithm for real-time adaptive clustering of neural spike data (spike sorting). The proposed Hierarchical Adaptive Means (HAM) clustering method combines centroid-based clustering with hierarchical cluster connectivity to classify incoming spikes using groups of clusters. It is described how the proposed method can adaptively track the incoming spike data without requiring any past history, iteration or training, and autonomously determines the number of spike classes. Its performance (classification accuracy) has been tested using multiple datasets (both simulated and recorded), achieving near-identical accuracy compared to k-means (using 10 iterations and provided with the number of spike classes). Also, its robustness in applying to different feature extraction methods has been demonstrated by achieving classification accuracies above 80% across multiple datasets. Finally, and crucially, its low complexity, quantified in terms of both memory and computation requirements, makes this method hugely attractive for future hardware implementation. Copyright © 2014 Elsevier B.V. All rights reserved.
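
    A sketch of the centroid-based half of such an approach, assuming a fixed distance threshold and running-mean centroids; the hierarchical cluster-connectivity stage of HAM is not reproduced here, so this is not the authors' exact algorithm:

        import numpy as np

        def adaptive_cluster(features, threshold=1.0):
            centroids, counts, labels = [], [], []
            for x in features:
                x = np.asarray(x, dtype=float)
                if centroids:
                    d = [np.linalg.norm(x - c) for c in centroids]
                    j = int(np.argmin(d))
                    if d[j] <= threshold:
                        counts[j] += 1
                        centroids[j] += (x - centroids[j]) / counts[j]  # running mean update
                        labels.append(j)
                        continue
                centroids.append(x)            # spike too far from all centroids: new class
                counts.append(1)
                labels.append(len(centroids) - 1)
            return labels, centroids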

  20. Algorithm based on regional separation for automatic grain boundary extraction using improved mean shift method

    Science.gov (United States)

    Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip

    2018-06-01

    Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.

  1. VAR Portfolio Optimal: Perbandingan Antara Metode Markowitz dan Mean Absolute Deviation

    Directory of Open Access Journals (Sweden)

    R. Agus Sartono

    2009-05-01

    Full Text Available The portfolio selection method introduced by Harry Markowitz (1952) used variance or standard deviation as a measure of risk. Konno and Yamazaki (1991) introduced another method that uses mean absolute deviation as a measure of risk instead of variance. Value-at-Risk (VaR) is a relatively new method of quantifying risk that has been used by financial institutions. The aim of this research is to compare the mean variance and mean absolute deviation approaches for two portfolios. Next, we attempt to assess the VaR of the two portfolios using the delta normal method and historical simulation. We use secondary data from the Jakarta Stock Exchange – LQ45 during 2003. We find that there is a weak positive correlation between standard deviation and return in both portfolios. The delta normal VaR based on the mean absolute deviation method is eventually higher than the delta normal VaR based on the mean variance method. However, based on historical simulation, the difference between the VaR estimates of the two methods is statistically insignificant. Thus, standard deviation is a sufficient measure of portfolio risk. Keywords: portfolio optimization, mean-variance, mean-absolute deviation, value-at-risk, delta normal method, historical simulation method
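
    A sketch of the two VaR estimators compared above, on a synthetic daily return series; the 95% confidence level, horizon and portfolio value are illustrative, and the delta normal variant shown uses one common convention (quantile times sample volatility):

        import numpy as np

        rng = np.random.default_rng(0)
        returns = rng.normal(0.0005, 0.012, 250)  # synthetic daily portfolio returns
        value = 1_000_000.0                       # portfolio value
        alpha = 0.95
        z = 1.645                                 # one-sided 95% normal quantile

        # Delta normal: scale the normal quantile by the sample volatility
        var_delta_normal = value * z * returns.std(ddof=1)

        # Historical simulation: empirical quantile of the loss distribution
        var_historical = -value * np.quantile(returns, 1.0 - alpha)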

  2. Losing a child: finding meaning in bereavement

    Directory of Open Access Journals (Sweden)

    Julia Bogensperger

    2014-03-01

    Full Text Available Background: Confronting the loss of a loved one leads us to the core questions of human existence. Bereaved parents have to deal with the rupture of a widely shared concept of what is perceived to be the natural course of life and are forced into meaning reconstruction. Objective: This study aims to expand upon existing work concerning specific themes of meaning reconstruction in a sample of bereaved parents. More specifically, the relationship between meaning reconstruction, complicated grief, and posttraumatic growth was analyzed, with special attention focused on traumatic and unexpected losses. Method: In a mixed methods approach, themes of meaning reconstruction (sense-making and benefit-finding) were assessed in in-depth interviews with a total of 30 bereaved parents. Posttraumatic growth and complicated grief were assessed using standardized questionnaires, and qualitative and quantitative results were then merged using data transformation methods. Results: In total 42 themes of meaning reconstruction were abstracted from oral material. It was shown that sense-making themes ranged from causal explanations to complex philosophical beliefs about life and death. Benefit-finding themes contained thoughts about personal improvement as well as descriptions about social actions. Significant correlations were found between the extent of sense-making and posttraumatic growth scores (rs=0.54, rs=0.49; p<0.01), especially when the death was traumatic or unexpected (rs=0.67, rs=0.63; p<0.01). However, analysis revealed no significant correlation with complicated grief. Overall results corroborate meaning reconstruction themes and the importance of meaning reconstruction for posttraumatic growth.

  3. Relationships between the generalized functional method and other methods of nonimaging optical design.

    Science.gov (United States)

    Bortz, John; Shatz, Narkis

    2011-04-01

    The recently developed generalized functional method provides a means of designing nonimaging concentrators and luminaires for use with extended sources and receivers. We explore the mathematical relationships between optical designs produced using the generalized functional method and edge-ray, aplanatic, and simultaneous multiple surface (SMS) designs. Edge-ray and dual-surface aplanatic designs are shown to be special cases of generalized functional designs. In addition, it is shown that dual-surface SMS designs are closely related to generalized functional designs and that certain computational advantages accrue when the two design methods are combined. A number of examples are provided. © 2011 Optical Society of America

  4. Size-specific dose estimate (SSDE) provides a simple method to calculate organ dose for pediatric CT examinations

    Energy Technology Data Exchange (ETDEWEB)

    Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children's Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)

    2014-07-15

    Purpose: To investigate the correlation of size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5–55 kg). Methods: Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine the correlation between organ dose and SSDE. Organ dose correlation factors (CF_SSDE^organ) were then multiplied by patient-specific SSDE to estimate patient organ dose. The CF_SSDE^organ were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5–55 kg) and mean patient age was 6 yrs ± 5 (range 4 months to 23 yrs). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm, showing appropriate scalability of the phantoms across the entire pediatric population in this study. Individual CF_SSDE^organ were determined for a total of 23 organs in the chest and abdominopelvic region across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7–1.4) and abdominopelvic region (average 0.9; range 0.7–1.3) was near unity. For organs/tissues that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range 0.1–0.4) for both the chest and abdominopelvic regions. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CF_SSDE^organ, was compared to…
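
    The estimation step described above reduces to multiplying a patient's SSDE by an organ-specific correlation factor; a sketch in which the factor values are placeholders, not the published CF values:

        # Hypothetical correlation factors CF_SSDE^organ (one region, one size bin)
        conversion_factors = {"lung": 1.1, "liver": 0.9}

        def organ_dose(ssde_mgy: float, organ: str) -> float:
            """Estimated organ dose (mGy) = CF * patient-specific SSDE."""
            return conversion_factors[organ] * ssde_mgy

        print(organ_dose(ssde_mgy=8.0, organ="liver"))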

  5. Mean level signal crossing rate for an arbitrary stochastic process

    DEFF Research Database (Denmark)

    Yura, Harold T.; Hanson, Steen Grüner

    2010-01-01

    The issue of the mean signal level crossing rate for various probability density functions with primary relevance for optics is discussed based on a new analytical method. This method relies on a unique transformation that transforms the probability distribution under investigation into a normal...... probability distribution, for which the distribution of mean level crossings is known. In general, the analytical results for the mean level crossing rate are supported and confirmed by numerical simulations. In particular, we illustrate the present method by presenting analytic expressions for the mean level…

  6. Comparison of Epicyclic Gearing Design Methods by Means of Quality Criteria Evaluation

    Directory of Open Access Journals (Sweden)

    I. V. Leonov

    2015-01-01

    Full Text Available The performance of a modern economy depends on the use of many different machines. Executing the many tasks a society entrusts to machinery requires a huge amount of mechanical energy, imparted to mechanical systems by various engines, and the coupling of motors to actuators occurs through various transmissions. Among the numerous types of transmission, planetary gears occupy an important place. With a number of advantages over other types of rotational-motion transmission, a planetary gear can be used as a gear reducer or as a differential gear. Planetary gears firmly hold a leading position through their frequent use in the transmissions of various technological and transport vehicles, as they offer a convenient layout and high load capacity. Despite the fact that people have been using planetary gears for over two thousand years, there is no simple method for their design that both minimizes design time and optimizes their performance characteristics and technological qualities. The proposed design method is derived from the classical method of factors. It limits the number of options by isolating a promising region of reduced values of the overall dimensions, one of the main design criteria. Optimization against the size criterion is achieved by bringing the gear sizes in the two rows of gearings close together, by approaching the minimum number of teeth permitted by the undercut, satellite-spacing, and gear-assembly conditions, and by setting the numbers of teeth of one of the rows equal to the arithmetic average of the teeth numbers of the other row.

  7. Analysis of Home Energy Consumption by K-Mean

    Directory of Open Access Journals (Sweden)

    Fahad Razaque

    2017-10-01

    Full Text Available Smart meters offer an exceptional opportunity to better understand energy consumption behaviour through the quantity of data they generate. One application is the separation of energy load-profiles into clusters of related behaviour. This research measured the resemblance between load-profiles and grouped them into clusters using the k-means clustering algorithm. The clusters obtained, characterized by Gender (Male/Female), House (Rented/Owned) and customer status (Satisfied/Unsatisfied), display patterns of energy consumption. This provides valuable information for utilities to design specific electricity tariffs and better-targeted energy efficiency programmes. The results show that 43% of energy customers were found to be extremely dissatisfied on the basis of their energy consumption.
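
    The clustering step described above maps directly onto a standard k-means call; a sketch with synthetic hourly load-profiles (the number of clusters is illustrative):

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(2)
        profiles = rng.random((100, 24))  # 100 households x 24 hourly readings

        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)
        labels = km.labels_               # cluster assignment per household
        centres = km.cluster_centers_     # typical consumption patterns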

  8. Application of Mean of Absolute Deviation Method for the Selection of Best Nonlinear Component Based on Video Encryption

    Science.gov (United States)

    Anees, Amir; Khan, Waqar Ahmad; Gondal, Muhammad Asif; Hussain, Iqtadar

    2013-07-01

    The aim of this work is to make use of the mean of absolute deviation (MAD) method for the evaluation process of substitution boxes used in the advanced encryption standard. In this paper, we use the MAD technique to analyze some popular and prevailing substitution boxes used in encryption processes. In particular, MAD is applied to advanced encryption standard (AES), affine power affine (APA), Gray, Lui J., Residue Prime, S8 AES, SKIPJACK, and Xyi substitution boxes.
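
    The MAD statistic itself is a one-liner: the average absolute difference of values from their mean. A sketch applied to an arbitrary sequence of output values (how the paper maps S-box outputs to such a sequence is not reproduced here):

        import numpy as np

        def mean_absolute_deviation(values):
            values = np.asarray(values, dtype=float)
            return np.mean(np.abs(values - values.mean()))

        print(mean_absolute_deviation([12, 7, 3, 4, 18, 2, 54, 33, 41]))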

  9. Real-Time Nonlocal Means-Based Despeckling.

    Science.gov (United States)

    Breivik, Lars Hofsoy; Snare, Sten Roar; Steen, Erik Normann; Solberg, Anne H Schistad

    2017-06-01

    In this paper, we propose a multiscale nonlocal means-based despeckling method for medical ultrasound. The multiscale approach leads to large computational savings and improves despeckling results over single-scale iterative approaches. We present two variants of the method. The first, denoted multiscale nonlocal means (MNLM), yields uniform robust filtering of speckle both in structured and homogeneous regions. The second, denoted unnormalized MNLM (UMNLM), is more conservative in regions of structure assuring minimal disruption of salient image details. Due to the popularity of anisotropic diffusion-based methods in the despeckling literature, we review the connection between anisotropic diffusion and iterative variants of NLM. These iterative variants in turn relate to our multiscale variant. As part of our evaluation, we conduct a simulation study making use of ground truth phantoms generated from clinical B-mode ultrasound images. We evaluate our method against a set of popular methods from the despeckling literature on both fine and coarse speckle noise. In terms of computational efficiency, our method outperforms the other considered methods. Quantitatively on simulations and on a tissue-mimicking phantom, our method is found to be competitive with the state-of-the-art. On clinical B-mode images, our method is found to effectively smooth speckle while preserving low-contrast and highly localized salient image detail.
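
    The core non-local means computation that such despeckling methods build on replaces a pixel by a patch-similarity-weighted average over a search window. A single-pixel, single-scale sketch (it assumes the pixel lies far enough from the image border; h controls the weight decay):

        import numpy as np

        def nlm_pixel(img, i, j, patch=1, search=5, h=0.1):
            def get_patch(r, c):
                return img[r - patch:r + patch + 1, c - patch:c + patch + 1]
            ref = get_patch(i, j)
            num = den = 0.0
            for r in range(i - search, i + search + 1):
                for c in range(j - search, j + search + 1):
                    d2 = np.mean((get_patch(r, c) - ref) ** 2)  # patch distance
                    w = np.exp(-d2 / h ** 2)                    # similarity weight
                    num += w * img[r, c]
                    den += w
            return num / den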

  10. The PDF method for turbulent combustion

    Science.gov (United States)

    Pope, S. B.

    1991-01-01

    Probability Density Function (PDF) methods provide a means of calculating the properties of turbulent reacting flows. They have been successfully applied to many turbulent flames, including some with finite rate kinetic effects. Here the methods are reviewed with an emphasis on computational issues and their application to turbulent combustion.

  11. Methods and means of the radioisotope flaw detection of the nuclear power reactors components

    International Nuclear Information System (INIS)

    Dekopov, A.S.; Majorov, A.N.; Firsov, V.G.

    1979-01-01

    Methods and means are considered for the radioisotope flaw detection of nuclear reactor pressure vessels and structural components of the reactor circuit. Methods of control are described both in the technological process of fabrication of power reactor assemblies and during the systematic preventive repair of nuclear power station equipment in operation. The methodological basis is given for the technology of radiation control of the welded joints of the pressure vessel branch pipes of the WWER-440 and WWER-1000 reactors in the process of assembly and operation, and of the joints between pipes and the pipe-plate of the steam generator in the process of fabrication. Methods of radioisotope flaw detection during operation take into consideration the influence of the radioisotope background and ensure the sensitivity demanded by the rules of control. Methods of control of the welded joints of the steam generators of nuclear power plants are based on the simultaneous examination of all joints with the application of shaped radiographic plate-holders. Special gamma flaw-detection equipment has been developed for the control of the welded joints of the main branch pipes. Design peculiarities of the installations for flaw detection are given. These installations are equipped with a system for emergency return of the radiation source from the exposure position into the storage position, and they have automatic exposure meters for determination of the exposure time. Successful operation of such installations in Finland during the assembly of equipment for the nuclear reactor of the nuclear power plant ''Loviisa-1'' and in the USSR at the Novovoronezh nuclear power plant has shown the possibility of detecting flaws having dimensions of about 1% of the equipment used. Portable flaw detectors are used for the control of the welded joints of pipes with the pipe-plates of the steam generators. The sensitivity of these flaw detectors towards detection of the wire standards has…

  12. Means and Variances without Calculus

    Science.gov (United States)

    Kinney, John J.

    2005-01-01

    This article gives a method of finding discrete approximations to continuous probability density functions and shows examples of its use, allowing students without calculus access to the calculation of means and variances.
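
    A sketch of the idea with a standard normal density: sample the density on a grid, normalize the samples into discrete weights, and compute the mean and variance as weighted sums, with no integration required.

        import numpy as np

        x = np.linspace(-5.0, 5.0, 201)                     # grid over the support
        pdf = np.exp(-x ** 2 / 2.0) / np.sqrt(2.0 * np.pi)  # standard normal density
        w = pdf / pdf.sum()                                 # normalized discrete weights

        mean = np.sum(w * x)                                # approx. 0
        variance = np.sum(w * (x - mean) ** 2)              # approx. 1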

  13. Pressure algorithm for elliptic flow calculations with the PDF method

    Science.gov (United States)

    Anand, M. S.; Pope, S. B.; Mongia, H. C.

    1991-01-01

    An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.

  14. Magnetic cushioning and pressure applying means

    International Nuclear Information System (INIS)

    Turner, G.F.A.M.

    1981-01-01

    This invention relates to a novel cushioning and pressure applying means for compressing sheets of film in an X-ray cassette. The cushioning means is provided by two sheets of rubber or plastics material each of which contains an array of magnets, the sheets being held together so that like magnetic poles are in opposition. (author)

  15. MO-DE-207A-11: Sparse-View CT Reconstruction Via a Novel Non-Local Means Method

    International Nuclear Information System (INIS)

    Chen, Z; Qi, H; Wu, S; Xu, Y; Zhou, L

    2016-01-01

    Purpose: Sparse-view computed tomography (CT) reconstruction is an effective strategy to reduce the radiation dose delivered to patients. Due to the insufficiency of measurements, traditional non-local means (NLM) based reconstruction methods often lead to over-smoothness in image edges. To address this problem, an adaptive NLM reconstruction method based on rotational invariance (RIANLM) is proposed. Methods: The method consists of four steps: 1) initializing parameters; 2) algebraic reconstruction technique (ART) reconstruction using raw projection data; 3) positivity constraint of the image reconstructed by ART; 4) updating the reconstructed image by RIANLM filtering. In RIANLM, a novel similarity metric that is rotationally invariant is proposed and used to calculate the distance between two patches. In this way, any patch with similar structure but different orientation to the reference patch wins a relatively large weight, avoiding an over-smoothed image. Moreover, the parameter h in RIANLM, which controls the decay of the weights, is adaptive to avoid over-smoothness, while in NLM it is not adaptive during the whole reconstruction process. The proposed method is named ART-RIANLM and validated on the Shepp-Logan phantom and on clinical projection data. Results: In our experiments, the search neighborhood size is set to 15 by 15 and the similarity window to 3 by 3. For the simulated case with a 256 by 256 Shepp-Logan phantom, ART-RIANLM produces a higher SNR (35.38 dB vs. 24.00 dB) and lower MAE (0.0006 vs. 0.0023) reconstructed image than ART-NLM. Visual inspection demonstrated that the proposed method suppresses artifacts and noise more effectively and preserves image edges better. Similar results were found for the clinical data case. Conclusion: A novel ART-RIANLM method for sparse-view CT reconstruction is presented with superior image quality. Compared to the conventional ART-NLM method, the SNR from ART-RIANLM increases by 47% and the MAE decreases by 74…
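
    Steps 2 and 3 above correspond to a Kaczmarz-style ART sweep followed by a positivity clip; a toy sketch of that backbone (the RIANLM filtering of step 4 is omitted, and A is a placeholder system matrix with nonzero rows):

        import numpy as np

        def art(A, b, n_sweeps=10, relax=1.0):
            x = np.zeros(A.shape[1])
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    a = A[i]
                    x += relax * (b[i] - a @ x) / (a @ a) * a  # project onto row i
                x = np.clip(x, 0.0, None)                      # positivity constraint
            return x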

  16. The influence of electromyographic recording methods and the innervation zone on the mean power frequency-torque relationships.

    Science.gov (United States)

    Herda, Trent J; Zuniga, Jorge M; Ryan, Eric D; Camic, Clayton L; Bergstrom, Haley C; Smith, Doug B; Weir, Joseph P; Cramer, Joel T; Housh, Terry J

    2015-06-01

    This study examined the effects of electromyographic (EMG) recording methods and innervation zone (IZ) on the mean power frequency (MPF)-torque relationships. Nine subjects performed isometric ramp muscle actions of the leg extensors from 5% to 100% of maximal voluntary contraction with an eight channel linear electrode array over the IZ of the vastus lateralis. The slopes were calculated from the log-transformed monopolar and bipolar EMG MPF-torque relationships for each channel and subject and 95% confidence intervals (CI) were constructed around the slopes for each relationship and the composite of the slopes. Twenty-two to 55% of the subjects exhibited 95% CIs that did not include a slope of zero for the monopolar EMG MPF-torque relationships while 25-75% of the subjects exhibited 95% CIs that did not include a slope of zero for the bipolar EMG MPF-torque relationships. The composite of the slopes from the EMG MPF-torque relationships was not significantly different from zero for any method or channel; however, the method and IZ location slightly influenced the number of significant slopes on a subject-by-subject basis. The log-transform model indicated that EMG MPF-torque patterns were nonlinear regardless of recording method or distance from the IZ. Copyright © 2015 Elsevier Ltd. All rights reserved.
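
    The slope test described above amounts to a linear regression on log-transformed data with a 95% CI around the slope; a sketch on synthetic values:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        torque = np.linspace(5.0, 100.0, 20)                     # % MVC
        mpf = 80.0 * torque ** -0.05 + rng.normal(0.0, 1.0, 20)  # synthetic MPF (Hz)

        res = stats.linregress(np.log(torque), np.log(mpf))
        t_crit = stats.t.ppf(0.975, len(torque) - 2)             # two-sided 95%
        ci = (res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr)
        slope_differs_from_zero = not (ci[0] <= 0.0 <= ci[1])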

  17. Power Quality Improvement and LVRT Capability Enhancement of Wind Farms by Means of an Inductive Filtering Method

    Directory of Open Access Journals (Sweden)

    Yanjian Peng

    2016-04-01

    Full Text Available Unlike traditional methods for power quality improvement and low-voltage ride through (LVRT) capability enhancement of wind farms, this paper proposes a new wind power integrated system based on an inductive filtering method; the system contains a grid-connected transformer, a static synchronous compensator (STATCOM) and fully-tuned (FT) branches. First, the main circuit topology of the new wind power integrated system is presented. Then, the mathematical model is established to reveal the mechanism of harmonic suppression and reactive compensation of the proposed wind power integrated system, and the realization conditions of the inductive filtering method are obtained. Further, the control strategy of the STATCOM is introduced. Based on measured data from a real wind farm, simulation studies are carried out to illustrate the performance of the proposed new wind power integrated system. The results indicate that the new system can not only enhance the LVRT capability of wind farms, but also prevent harmonic components from flowing into the primary (grid) winding of the grid-connected transformer. Moreover, since the new method can compensate for reactive power in a wind farm, the power factor at the grid side can be improved effectively.

  18. Method for Predicting Thermal Buckling in Rails

    Science.gov (United States)

    2018-01-01

    A method is proposed herein for predicting the onset of thermal buckling in rails in such a way as to provide a means of avoiding this type of potentially devastating failure. The method consists of the development of a thermomechanical model of rail…

  19. In-service inspection of condenser tubes by means of electrochemical methods

    International Nuclear Information System (INIS)

    Taelemans, G.

    The commissioning of an increasing number of large nuclear power plants involves an increased significance of such condenser tube problems as: erosion on tube ends, generalized corrosion and pitting corrosion, and deposits in the tubes. In order to solve such problems, investigations were performed, focused especially on a measurement technique that enables the in-service behaviour of condenser tubes to be monitored. For such a purpose, measurement of the polarization resistance has been adopted. The existing corrosion products and scaled-off iron oxides were eliminated by means of a carborundum-ball treatment, as clearly appears from the reduction in polarization resistance. Then iron sulphate was injected in order to build a new and better protective layer. In addition, the tube was kept clean by means of foam rubber balls. There is a second implementation area: fouled condenser tubes. A significant polarization resistance reduction is noted during acid cleaning. (orig.)

  20. The burden of neck pain: its meaning for persons with neck pain and healthcare providers, explored by concept mapping.

    Science.gov (United States)

    van Randeraad-van der Zee, Carlijn H; Beurskens, Anna J H M; Swinkels, Raymond A H M; Pool, Jan J M; Batterham, Roy W; Osborne, Richard H; de Vet, Henrica C W

    2016-05-01

    To empirically define the concept of burden of neck pain. The lack of a clear understanding of this construct from the perspective of persons with neck pain and care providers hampers adequate measurement of this burden. An additional aim was to compare the conceptual model obtained with the frequently used Neck Disability Index (NDI). Concept mapping, combining qualitative (nominal group technique and group consensus) and quantitative research methods (cluster analysis and multidimensional scaling), was applied to groups of persons with neck pain (n = 3) and professionals treating persons with neck pain (n = 2). Group members generated statements, which were organized into concept maps. Group members achieved consensus about the number and description of domains, and the researchers then generated an overall mind map covering the full breadth of the burden of neck pain. Concept mapping revealed 12 domains of burden of neck pain: impaired neck mobility, neck pain, fatigue/concentration, physical complaints, psychological aspects/consequences, activities of daily living, social participation, financial consequences, difficult to treat/difficult to diagnose, difference of opinion with care providers, incomprehension by the social environment, and how persons with neck pain deal with their complaints. All ten items of the NDI could be linked to the mind map, but the NDI measures only part of the burden of neck pain. This study revealed the relevant domains of the burden of neck pain from the viewpoints of persons with neck pain and their care providers. These results can guide the identification of existing measurement instruments for each domain or the development of new ones to measure the burden of neck pain.

  1. Mean age distribution of inorganic soil-nitrogen

    Science.gov (United States)

    Woo, Dong K.; Kumar, Praveen

    2016-07-01

    Excess reactive nitrogen in soils of intensively managed landscapes causes adverse environmental impacts and remains a global concern. Many novel strategies have been developed to provide better management practices and, yet, the problem remains unresolved. The objective of this study is to develop a model to characterize the "age" of inorganic soil-nitrogen (nitrate, and ammonia/ammonium). We use the general theory of age, which provides an assessment of the time elapsed since inorganic nitrogen has been introduced into the soil system. We analyze a corn-corn-soybean rotation, common in the Midwest United States, as an example application. We observe two counter-intuitive results: (1) the mean nitrogen age in the topsoil layer is relatively high; and (2) mean nitrogen age is lower under soybean cultivation than under corn, although no fertilizer is applied for soybean cultivation. The first result can be explained by cation exchange of ammonium, which retards the leaching of nitrogen, resulting in an increase in the mean nitrogen age near the soil surface. The second result arises because the soybean utilizes the nitrogen fertilizer left from the previous year, thereby removing the older nitrogen and reducing mean nitrogen age. Estimating the mean nitrogen age can thus serve as an important tool to disentangle complex nitrogen dynamics by providing a nuanced characterization of the time scales of soil-nitrogen transformation and transport processes.

  2. Automated Means of Identifying Landslide Deposits using LiDAR Data using the Contour Connection Method Algorithm

    Science.gov (United States)

    Olsen, M. J.; Leshchinsky, B. A.; Tanyu, B. F.

    2014-12-01

    Landslides are a global natural hazard, resulting in severe economic, environmental and social impacts every year. Often, landslides occur in areas of repeated slope instability, but despite these trends, significant residential developments and critical infrastructure are built in the shadow of past landslide deposits and marginally stable slopes. These hazards, despite their sometimes enormous scale and regional propensity, are difficult to detect on the ground, often due to vegetative cover. However, new developments in remote sensing technology, specifically Light Detection and Ranging (LiDAR) mapping, are providing a new means of viewing our landscape. Airborne LiDAR, combined with a level of post-processing, enables the creation of spatial data representative of the earth beneath the vegetation, highlighting the scars of unstable slopes of the past. This tool presents a revolutionary technique for mapping landslide deposits and their associated regions of risk; yet their inventorying is often done manually, an approach that can be tedious, time-consuming and subjective. The associated LiDAR bare-earth data present the opportunity to use this remote sensing technology and typical landslide geometry to create an automated algorithm that can detect and inventory deposits on a landscape scale. This algorithm, called the Contour Connection Method (CCM), functions by first detecting steep gradients, often associated with the headscarp of a failed hillslope, and then initiating a search that highlights deposits downslope of the failure. Based on the input search gradients, CCM can consistently highlight regions identified as landslides on a landscape scale, and is capable of rapidly mapping more than 14,000 hectares to help better define these regions of risk.
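
    The first stage of a CCM-style workflow, detecting steep gradients, can be sketched from a bare-earth DEM; the cell size and slope threshold below are illustrative, and this is not the published implementation:

        import numpy as np

        def steep_cells(dem, cell_size=1.0, slope_threshold_deg=35.0):
            gy, gx = np.gradient(dem, cell_size)             # elevation gradients
            slope = np.degrees(np.arctan(np.hypot(gx, gy)))  # slope angle per cell
            return slope >= slope_threshold_deg              # candidate headscarp mask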

  3. Invariance of the Cauchy mean-value expression with an application to the problem of representation of Cauchy means

    Directory of Open Access Journals (Sweden)

    Lucio R. Berrone

    2005-01-01

    Full Text Available The notion of invariance under transformations (changes of coordinates) of the Cauchy mean-value expression is introduced and then used in furnishing a suitable two-variable version of a result by L. Losonczi on equality of many-variable Cauchy means. An assessment of the methods used by Losonczi and Matkowski is made and an alternative way is proposed to solve the problem of representation of two-variable Cauchy means.

  4. Method of decreasing nuclear power

    International Nuclear Information System (INIS)

    Masuda, Hiromi

    1987-01-01

    Purpose: To easily achieve power reduction in a HWLWR type reactor and improve reactor safety. Method: The method is applied to a nuclear reactor in which the reactor reactivity is controlled by control rods and liquid poisons dissolved in the moderator. Means for forecasting the control rod operation amount required for reactor power reduction and means for removing liquid poisons from the moderator are provided. The control rod operation amount required for the power reduction is forecast before the power reduction, and the liquid poisons in the moderator are removed. Then, the control rods are inserted to a deep insertion position to reduce the reactor power. This invention facilitates easy power reduction, as well as improving controllability in usual operation and avoiding abrupt power reduction, which leads to an improved availability. (Kamimura, M.)

  5. Protection walls and other means used in everyday work on the Vinca RA Reactor

    International Nuclear Information System (INIS)

    Milosevic, M.; Ninkovic, M.

    1964-10-01

    Work with radioactive materials requires special protection of the personnel. Special attention has been paid to this problem because the time allowed for work on a task depends on the protection provided. The paper gives a short review of the means and methods of protection against irradiation and contamination; it also describes some personal and technical protection means used in specific working conditions. A special description is given of the technical means of radiation protection (protection against free beams): heavy bricks (iron and sand), water and iron shields, and plugs for beam cutting. Experimental data on the efficiency of these means in attenuating gamma rays and thermal neutrons are given. (All measurements of the efficiency of the protection means were carried out under real conditions, that is, the conditions under which these measurements are usually made, so the data obtained fully meet dosimetric requirements.) (author)

  6. Semi-supervised clustering methods.

    Science.gov (United States)

    Bair, Eric

    2013-01-01

    Cluster analysis methods seek to partition a data set into homogeneous subgroups. It is useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as "semi-supervised clustering" methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided.
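
    As a concrete illustration of the k-means modifications mentioned above, here is a minimal sketch of seeded k-means, where the known labels of a few observations fix the initial centroids and stay pinned during iteration; the data layout and the assumption that labels run 0..K-1 are illustrative.

        import numpy as np

        def seeded_kmeans(X, seed_labels, n_iter=100):
            # seed_labels: cluster id (0..K-1) for labeled points, -1 otherwise.
            # Sketch only: assumes every cluster keeps at least one member.
            K = seed_labels.max() + 1
            centers = np.array([X[seed_labels == k].mean(axis=0) for k in range(K)])
            for _ in range(n_iter):
                d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
                assign = d.argmin(axis=1)
                labeled = seed_labels >= 0
                assign[labeled] = seed_labels[labeled]   # respect known labels
                new = np.array([X[assign == k].mean(axis=0) for k in range(K)])
                if np.allclose(new, centers):
                    break
                centers = new
            return assign, centers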

  7. Acoustic nonlinearity parameter B/A determined by means of thermodynamic method under elevated pressures for alkanediols.

    Science.gov (United States)

    Zorębski, Edward; Zorębski, Michał

    2014-01-01

    The so-called Beyer nonlinearity parameter B/A is calculated for 1,2- and 1,3-propanediol, 1,2-, 1,3-, and 1,4-butanediol, as well as 2-methyl-2,4-pentanediol by means of a thermodynamic method. The calculations are made for temperatures from (293.15 to 318.15) K and pressures up to 100 MPa. A decrease in B/A values with increasing pressure is observed. In the case of 1,3-butanediol, the results are compared with corresponding literature data; the consistency is very satisfactory. A simple relationship between the internal pressure and the B/A nonlinearity parameter has also been studied. Copyright © 2013 Elsevier B.V. All rights reserved.
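
    For orientation, a commonly quoted thermodynamic expression for B/A (with ρ₀ the density, c₀ the small-signal sound speed, α_p the isobaric thermal expansivity and c_p the specific heat; quoted from the standard literature, not from this paper) is:

        \frac{B}{A} = 2\rho_0 c_0 \left(\frac{\partial c}{\partial p}\right)_{T} + \frac{2 c_0 T \alpha_p}{c_p} \left(\frac{\partial c}{\partial T}\right)_{p},

    which shows why sound-speed data as functions of pressure and temperature suffice for the calculation.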

  8. Perceptions of vaginal microbicides as an HIV prevention method among health care providers in KwaZulu-Natal, South Africa

    Directory of Open Access Journals (Sweden)

    Mantell Joanne E

    2007-03-01

    Full Text Available Abstract Background The promise of microbicides as an HIV prevention method will not be realized if it is not supported by health care providers, who are the primary source of sexual health information for potential users in both the public and private health sectors. Therefore, the aim of this study was to determine perceptions of vaginal microbicides as a potential HIV prevention method among health care providers in Durban and Hlabisa, South Africa, using a combination of quantitative and qualitative methods. Results During 2004, semi-structured interviews with 149 health care providers were conducted. Fifty-seven percent of hospital managers, 40% of pharmacists and 35% of nurses possessed some basic knowledge of microbicides, such as the product being used intra-vaginally before sex to prevent HIV infection. The majority of them were positive about microbicides and were willing to counsel users regarding potential use. Providers from both the public and private sectors felt that an effective microbicide should be available to all people, regardless of HIV status. Providers felt that the product should be accessible over-the-counter in pharmacies and in retail stores. They also felt a need for potential microbicides to be available free of charge and packaged with clear instructions. The media were seen by health care providers as an effective strategy for promoting microbicides. Conclusion Overall, health care providers were very positive about the possible introduction of an effective microbicide for HIV prevention. The findings generated by this study illustrate the need for training health care providers prior to making the product accessible, as well as the importance of addressing the potential barriers to use of the product by women. These are important concerns in the health care community, and this study also served to educate providers for the day when research becomes reality.

  9. Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture

    Science.gov (United States)

    Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA

    2011-10-11

    Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.
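
    A toy sketch of the claimed pipeline, sense selection followed by event-class lookup in a lexical ontology, is given below; the mini-ontology, cue sets and overlap scoring are all invented for illustration and are not the patented system.

        # Hypothetical mini lexical ontology: (word, sense) -> event class.
        ONTOLOGY = {
            ("bank", "financial-institution"): "CommerceEvent",
            ("bank", "river-edge"): "LocationEvent",
        }
        SENSE_CUES = {
            ("bank", "financial-institution"): {"money", "loan", "deposit"},
            ("bank", "river-edge"): {"river", "shore", "water"},
        }

        def disambiguate(word, context_words):
            # Pick the sense whose cue set overlaps the context most (Lesk-like),
            # then return the event class associated with that sense.
            scores = {s: len(cues & context_words)
                      for (w, s), cues in SENSE_CUES.items() if w == word}
            sense = max(scores, key=scores.get)
            return sense, ONTOLOGY[(word, sense)]

        print(disambiguate("bank", {"the", "river", "water"}))
        # -> ('river-edge', 'LocationEvent')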

  10. Intelligent automation of high-performance liquid chromatography method development by means of a real-time knowledge-based approach.

    Science.gov (United States)

    I, Ting-Po; Smith, Randy; Guhan, Sam; Taksen, Ken; Vavra, Mark; Myers, Douglas; Hearn, Milton T W

    2002-09-27

    We describe the development, attributes and capabilities of a novel type of artificial intelligence system, called LabExpert, for automation of HPLC method development. Unlike other computerised method development systems, LabExpert operates in real-time, using an artificial intelligence system and design engine to provide experimental decision outcomes relevant to the optimisation of complex separations as well as the control of the instrumentation, column selection, mobile phase choice and other experimental parameters. LabExpert manages every input parameter to an HPLC data station and evaluates each output parameter of the HPLC data station in real-time as part of its decision process. Based on a combination of inherent and user-defined evaluation criteria, the artificial intelligence system programs use a reasoning process, applying chromatographic principles and acquired experimental observations, to iteratively provide a regime for a priori development of an acceptable HPLC separation method. Because remote monitoring and control are also functions of LabExpert, the system allows full-time utilisation of analytical instrumentation and associated laboratory resources. Based on our experience with LabExpert across a wide range of analyte mixtures, this artificial intelligence system consistently identified, in a similar or faster time-frame, preferred sets of analytical conditions that are equal in resolution, efficiency and throughput to those empirically determined by highly experienced chromatographic scientists. An illustrative example, demonstrating the potential of LabExpert in the process of method development of drug substances, is provided.

  11. Dynamic knock detection and quantification in a spark ignition engine by means of a pressure based method

    International Nuclear Information System (INIS)

    Galloni, Enzo

    2012-01-01

    Highlights: Experimental data have been analyzed by a pressure-based method. The knock intensity level depends on a threshold varying with the engine operating point. A dynamic method is proposed to avoid the definition of a predetermined threshold. The knock intensity of each operating point is quantified by a dimensionless index. The knock-limited spark advance can be detected by means of this index. - Abstract: In spark ignition engines, knock onset limits the maximum spark advance. An inaccurate identification of this limit penalises the fuel conversion efficiency. Thus, it is very important to define a knock detection method able to assess the knock intensity of an engine operating point. Usually, in engine development, the knock event is evaluated by analysing the in-cylinder pressure trace. Data are filtered and processed in order to obtain some indices correlated to the knock intensity; the calculated value is then compared to a predetermined threshold. The calibration of this threshold is complex and difficult; a statistical approach should be used, but often empirical values are considered. In this paper, a method that dynamically calculates the knock threshold necessary to determine the knock event is proposed. The purpose is to resolve, cycle by cycle, the knock intensity of each individual engine cycle without setting a predetermined threshold. The method has been applied to an extensive set of experimental data from a gasoline spark-ignition engine. Results are correlated with those obtained using a traditional method, where a statistical approach has been used to detect knock.
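
    A minimal sketch of a pressure-based knock index with a dynamically derived threshold follows; the band-pass range, the MAPO-style index and the median-plus-spread threshold are illustrative choices, not the paper's exact procedure.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def knock_indices(cycles, fs, band=(4e3, 20e3)):
            # Band-pass each cycle's pressure trace and take the maximum
            # amplitude of the pressure oscillations (MAPO-style index).
            sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
            return np.array([np.max(np.abs(sosfiltfilt(sos, p))) for p in cycles])

        def dimensionless_knock(cycles, fs):
            mapo = knock_indices(cycles, fs)
            # Threshold derived from the data themselves, so no fixed
            # calibration value is needed (illustrative choice).
            thr = np.median(mapo) + 3.0 * np.std(mapo)
            return mapo / thr      # values > 1 flag knocking cycles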

  12. Fractal dimension to classify the heart sound recordings with KNN and fuzzy c-mean clustering methods

    Science.gov (United States)

    Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.

    2018-01-01

    Heart abnormalities can be detected from heart sound. A heart sound can be heard directly with a stethoscope or indirectly by a phonocardiograph, a machine that records heart sound. This paper presents the implementation of fractal dimension theory to classify phonocardiograms into a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi's algorithm. There were two steps to classify the phonocardiograms: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform (DWT) to decompose the heart sound signal into several sub-bands depending on the selected level. After the decomposition process, the signal was processed using the Fast Fourier Transform (FFT) to determine the spectral frequency. The fractal dimension of the FFT output was calculated using Higuchi's algorithm. The classification of the fractal dimensions of all phonocardiograms was done with the KNN and fuzzy c-means clustering methods. Based on the research results, the best accuracy obtained was 86.17%, for feature extraction by DWT decomposition at level 3 with kmax = 50, using 5-fold cross-validation and 5 neighbors in the KNN algorithm. Meanwhile, for fuzzy c-means clustering, the accuracy was 78.56%.
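
    Higuchi's algorithm itself is compact; a sketch follows (standard formulation, with kmax = 50 as in the study):

        import numpy as np

        def higuchi_fd(x, kmax=50):
            # Higuchi's algorithm: average curve length L(k) at scale k;
            # the fractal dimension is the slope of log L(k) vs log(1/k).
            x = np.asarray(x, dtype=float)
            N = len(x)
            ks = np.arange(1, kmax + 1)
            L = []
            for k in ks:
                lengths = []
                for m in range(k):
                    idx = np.arange(m, N, k)
                    if len(idx) < 2:
                        continue
                    dist = np.abs(np.diff(x[idx])).sum()
                    norm = (N - 1) / ((len(idx) - 1) * k)
                    lengths.append(dist * norm / k)
                L.append(np.mean(lengths))
            slope, _ = np.polyfit(np.log(1.0 / ks), np.log(L), 1)
            return slope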

  13. Mean Field Games with a Dominating Player

    Energy Technology Data Exchange (ETDEWEB)

    Bensoussan, A., E-mail: axb046100@utdallas.edu [The University of Texas at Dallas, International Center for Decision and Risk Analysis, Jindal School of Management (United States); Chau, M. H. M., E-mail: michaelchaumanho@gmail.com; Yam, S. C. P., E-mail: scpyam@sta.cuhk.edu.hk [The Chinese University of Hong Kong, Department of Statistics (Hong Kong, People’s Republic of China) (China)

    2016-08-15

    In this article, we consider mean field games between a dominating player and a group of representative agents, each of whom acts similarly and interacts with the others through a mean field term that is substantially influenced by the dominating player. We first provide the general theory and discuss the necessary condition for the optimal controls and the equilibrium condition by adopting an adjoint equation approach. We then present a special case in the context of the linear-quadratic framework, in which a necessary and sufficient condition can be asserted by the stochastic maximum principle; we finally establish the sufficient condition that guarantees the unique existence of the equilibrium control. The proof of the convergence of the finite-player game to its mean field counterpart is provided in the Appendix.

  14. Using the geometric mean fluorescence intensity index method to measure ZAP-70 expression in patients with chronic lymphocytic leukemia.

    Science.gov (United States)

    Wu, Yu-Jie; Wang, Hui; Liang, Jian-Hua; Miao, Yi; Liu, Lu; Qiu, Hai-Rong; Qiao, Chun; Wang, Rong; Li, Jian-Yong

    2016-01-01

    Expression of ζ-chain-associated protein kinase 70 kDa (ZAP-70) in chronic lymphocytic leukemia (CLL) is associated with more aggressive disease and can help differentiate CLL cases expressing mutated from those expressing unmutated immunoglobulin heavy chain variable region (IgHV) genes. However, standardizing ZAP-70 expression by flow cytometric analysis has proved unsatisfactory. The key point is that ZAP-70 is weakly expressed, with a continuous expression pattern rather than a clear discrimination between positive and negative CLL cells, which means that the resulting judgment is subjective. Thus, in this study, we aimed to assess the reliability and repeatability of ZAP-70 expression using the geometric mean fluorescence intensity (geo MFI) index method, based on flow cytometry with 256-channel resolution, in a series of 402 CLL patients, and to compare ZAP-70 with other biological and clinical prognosticators. According to IgHV mutational status, we were able to confirm that the optimal cut-off point for the geo MFI index was 3.5 in the test set. In multivariate analyses that included the major clinical and biological prognostic markers for CLL, the prognostic impact of ZAP-70 expression appeared to have stronger discriminatory power when the geo MFI index method was applied. In addition, we found that ZAP-70-positive patients according to the geo MFI index method had shorter time to first treatment and shorter overall survival (P=0.0002 and P=0.0491, respectively). This is the first report showing that ZAP-70 expression can be evaluated by a new approach, the geo MFI index, which could be a useful prognostic method as it is more reliable, less subjective, and therefore better associated with improvement of CLL prognostication and prediction of clinical course.
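
    The geometric mean fluorescence intensity is simply the exponential of the mean log intensity; the sketch below also forms an index as a ratio to a reference population, which is an assumption about how such an index can be built rather than the authors' exact definition.

        import numpy as np

        def geo_mfi(intensities):
            # Geometric mean = exp(mean(log)) of per-cell fluorescence values.
            v = np.asarray(intensities, dtype=float)
            return np.exp(np.log(v[v > 0]).mean())

        def geo_mfi_index(cll_cells, reference_cells):
            # Ratio to a reference population; values above the study's
            # cut-off (3.5) would score ZAP-70 positive.
            return geo_mfi(cll_cells) / geo_mfi(reference_cells)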

  15. Improvisation and meaning.

    Science.gov (United States)

    Gilbertson, Simon

    2013-08-07

    This article presents and discusses a long-term repeated-immersion research process that explores meaning allocated to an episode of 50 seconds of music improvisation in early neurosurgical rehabilitation by a teenage boy with severe traumatic brain injury and his music therapist. The process began with the original therapy session in August 1994 and extends to the current time of writing in 2013. A diverse selection of qualitative research methods were used during a repeated immersion and engagement with the selected episodes. The multiple methods used in this enquiry include therapeutic narrative analysis and musicological and video analysis during my doctoral research between 2002 and 2004, arts-based research in 2008 using expressive writing, and arts-based research in 2012 based on the creation of a body cast of my right hand as I used it to play the first note of my music improvising in the original therapy episode, which is accompanied by reflective journaling. The casting of my hand was done to explore and reconsider the role of my own body as an embodied and integral, but originally hidden, part of the therapy process. Put together, these investigations explore the potential meanings of the episode of music improvisation in therapy in an innovative and imaginative way. However, this article does not aim at this stage to present a model or theory for neurorehabilitation but offers an example of how a combination of diverse qualitative methods over an extended period of time can be instrumental in gaining innovative and rich insights into initially hidden perspectives on health, well-being, and human relating.

  16. A simple and fast method to determine the parameters for fuzzy c-means cluster analysis

    DEFF Research Database (Denmark)

    Schwämmle, Veit; Jensen, Ole Nørregaard

    2010-01-01

    MOTIVATION: Fuzzy c-means clustering is widely used to identify cluster structures in high-dimensional datasets, such as those obtained in DNA microarray and quantitative proteomics experiments. One of its main limitations is the lack of a computationally fast method to set optimal values of algorithm parameters. Wrong parameter values may either lead to the inclusion of purely random fluctuations in the results or ignore potentially important data. The optimal solution has parameter values for which the clustering does not yield any results for a purely random dataset but which detects cluster formation with maximum resolution on the edge of randomness. RESULTS: Estimation of the optimal parameter values is achieved by evaluation of the results of the clustering procedure applied to randomized datasets. In this case, the optimal value of the fuzzifier follows common rules that depend only on the main properties of the dataset, such as its dimension and number of objects.
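
    The paper's strategy can be miniaturized as below: run fuzzy c-means on a column-shuffled (structure-free) copy of the data and choose the fuzzifier at the edge where the randomized data stop producing hard memberships. The FCM loop is standard; the 0.9 membership criterion and the m grid are illustrative.

        import numpy as np

        def fcm(X, c, m, n_iter=100, tol=1e-5, seed=0):
            # Plain fuzzy c-means: returns centers and membership matrix u.
            rng = np.random.default_rng(seed)
            u = rng.dirichlet(np.ones(c), size=len(X))
            for _ in range(n_iter):
                w = u ** m
                centers = (w.T @ X) / w.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
                p = d ** (-2.0 / (m - 1.0))
                u_new = p / p.sum(axis=1, keepdims=True)
                if np.abs(u_new - u).max() < tol:
                    break
                u = u_new
            return centers, u

        def fuzzifier_on_edge(X, c, ms=np.arange(1.1, 3.01, 0.1)):
            # Shuffle each feature column independently to destroy structure,
            # then take the smallest m for which the randomized data no longer
            # produce hard memberships (illustrative 0.9 criterion).
            Xr = np.apply_along_axis(np.random.permutation, 0, np.array(X))
            for m in ms:
                _, u = fcm(Xr, c, m)
                if u.max(axis=1).mean() < 0.9:
                    return m
            return ms[-1]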

  17. Determining the area of influence of depression cone in the vicinity of lignite mine by means of triangle method and LANDSAT TM/ETM+ satellite images.

    Science.gov (United States)

    Zawadzki, Jarosław; Przeździecki, Karol; Miatkowski, Zygmunt

    2016-01-15

    Problems with lowering of the water table are common all over the world. Intensive pumping of water from aquifers for consumption, irrigation, industrial or mining purposes often causes groundwater depletion and results in the formation of a cone of depression. This can severely decrease water pressure, even over vast areas, and can create severe problems such as degradation of agriculture or the natural environment, sometimes depriving people and animals of water supply. In this paper, the authors present a method for determining the area of influence of a groundwater depression cone resulting from prolonged drainage, by means of satellite images in optical, near-infrared and thermal-infrared bands from the TM (Thematic Mapper) and ETM+ (Enhanced Thematic Mapper Plus) sensors placed on the Landsat 5 and Landsat 7 satellites. The research area was the Szczercowska Valley (Pol. Kotlina Szczercowska), Central Poland, located within the range of influence of the groundwater drainage system of the lignite coal mine in Belchatow, the biggest lignite coal mine in Poland and one of the largest in Europe, exerting an enormous impact on the environment. The main method of satellite data analysis for determining soil moisture was the so-called triangle method. This method, based on the TVDI (Temperature Vegetation Dryness Index), was supported by additional spatial analysis, including ordinary kriging used to combine fragmentary information obtained from areas covered by meadows. The results obtained are encouraging and confirm the usefulness of the triangle method not only for soil moisture determination but also for assessment of the temporal and spatial changes in the area influenced by the groundwater depression cone. The range of impact of the groundwater depression cone determined by means of the above-described remote sensing analysis shows good agreement with that determined by ground measurements. The developed satellite method is much faster and cheaper than in-situ measurements.
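
    For reference, the TVDI underlying the triangle method is commonly written as below, with the dry and wet edges fitted as linear functions of a vegetation index such as NDVI (standard formulation, notation assumed):

        \mathrm{TVDI} = \frac{T_s - T_{s,\min}}{T_{s,\max} - T_{s,\min}}, \qquad T_{s,\max} = a_d + b_d\,\mathrm{NDVI}, \quad T_{s,\min} = a_w + b_w\,\mathrm{NDVI},

    so that TVDI ranges from 0 at the wet edge to 1 at the dry edge.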

  18. Method of predicting the mean lung dose based on a patient's anatomy and dose-volume histograms

    Energy Technology Data Exchange (ETDEWEB)

    Zawadzka, Anna, E-mail: a.zawadzka@zfm.coi.pl [Medical Physics Department, Centre of Oncology, Maria Sklodowska-Curie Memorial Cancer Center, Warsaw (Poland); Nesteruk, Marta [Faculty of Physics, University of Warsaw, Warsaw (Poland); Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich (Switzerland); Brzozowska, Beata [Faculty of Physics, University of Warsaw, Warsaw (Poland); Kukołowicz, Paweł F. [Medical Physics Department, Centre of Oncology, Maria Sklodowska-Curie Memorial Cancer Center, Warsaw (Poland)

    2017-04-01

    The aim of this study was to propose a method to predict the minimum achievable mean lung dose (MLD) and corresponding dosimetric parameters for organs-at-risk (OAR) based on individual patient anatomy. For each patient, the dose for 36 equidistant individual multileaf collimator shaped fields in the treatment planning system (TPS) was calculated. Based on these dose matrices, the MLD for each patient was predicted by the homemade DosePredictor software in which the solution of linear equations was implemented. The software prediction results were validated based on 3D conformal radiotherapy (3D-CRT) and volumetric modulated arc therapy (VMAT) plans previously prepared for 16 patients with stage III non–small-cell lung cancer (NSCLC). For each patient, dosimetric parameters derived from plans and the results calculated by DosePredictor were compared. The MLD, the maximum dose to the spinal cord (D_max,cord) and the mean esophageal dose (MED) were analyzed. There was a strong correlation between the MLD calculated by the DosePredictor and those obtained in treatment plans regardless of the technique used. The correlation coefficient was 0.96 for both 3D-CRT and VMAT techniques. In a similar manner, MED correlations of 0.98 and 0.96 were obtained for 3D-CRT and VMAT plans, respectively. The maximum dose to the spinal cord was not predicted very well. The correlation coefficient was 0.30 and 0.61 for 3D-CRT and VMAT, respectively. The presented method allows us to predict the minimum MLD and corresponding dosimetric parameters to OARs without the necessity of plan preparation. The method can serve as a guide during the treatment planning process, for example, as initial constraints in VMAT optimization. It allows the probability of lung pneumonitis to be predicted.
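
    The linear-algebra core of such a prediction (organ doses are linear in the per-field doses, so minimum-MLD weights can be sought over non-negative field weights) can be sketched as follows; the per-field dose vectors, the single coverage constraint and the soft lung penalty are hypothetical stand-ins for the software's actual formulation.

        import numpy as np
        from scipy.optimize import nnls

        def predict_min_mld(lung_dose, target_dose, prescription):
            # lung_dose, target_dose: per-unit-weight mean doses of each of
            # the 36 fields (hypothetical inputs). Solve for weights w >= 0
            # that reach the prescription while softly penalizing lung dose.
            A = np.vstack([target_dose, 1e-3 * lung_dose])
            b = np.array([prescription, 0.0])
            w, _ = nnls(A, b)
            return float(np.dot(lung_dose, w)), w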

  19. A Similarity-Ranking Method on Semantic Computing for Providing Information-Services in Station-Concierge System

    Directory of Open Access Journals (Sweden)

    Motoki Yokoyama

    2017-07-01

    Full Text Available The prevalence of smartphones and wireless broadband networks has been progressing as a new railway information environment. According to the spread of such devices and information technology, various types of information can be obtained from databases connected to the Internet. One scenario for obtaining such a wide variety of information resources is the phase of the user's transportation. This paper proposes an information provision system, named the Station Concierge System, that matches the situation and intention of passengers. The purpose of this system is to estimate the needs of passengers, like station staff or a hotel concierge, and to provide information resources that satisfy users' expectations dynamically. The most important module of the system is constructed based on a new information ranking method for passenger intention prediction and service recommendation. This method has three main features, which are (1) projecting a user to a semantic vector space by using her current context, (2) predicting the intention of a user based on selecting a semantic vector subspace, and (3) ranking the services in descending order of relevance scores to the user's intention. By comparing the predicted results of our method with those of two straightforward computation methods, the experimental studies show the effectiveness and efficiency of the proposed method. Using this system, users can obtain transit information and service maps that dynamically match their context.
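
    Feature (3) is essentially similarity ranking in the semantic vector space; a minimal cosine-similarity version (vectors and names hypothetical) looks like this:

        import numpy as np

        def rank_services(user_vec, service_vecs, names):
            # Cosine similarity between the user's context vector and each
            # service vector, returned in descending order of relevance.
            u = user_vec / np.linalg.norm(user_vec)
            S = service_vecs / np.linalg.norm(service_vecs, axis=1, keepdims=True)
            scores = S @ u
            order = np.argsort(-scores)
            return [(names[i], float(scores[i])) for i in order]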

  20. Solutions of Heat-Like and Wave-Like Equations with Variable Coefficients by Means of the Homotopy Analysis Method

    International Nuclear Information System (INIS)

    Alomari, A. K.; Noorani, M. S. M.; Nazar, R.

    2008-01-01

    We employ the homotopy analysis method (HAM) to obtain approximate analytical solutions to the heat-like and wave-like equations. The HAM contains the auxiliary parameter ħ, which provides a convenient way of controlling the convergence region of series solutions. The analysis is accompanied by several linear and nonlinear heat-like and wave-like equations with initial boundary value problems. The results obtained show that HAM is very effective and simple, with less error than the Adomian decomposition method and the variational iteration method.
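
    For reference, the convergence-control parameter ħ enters through HAM's standard zeroth-order deformation equation (q ∈ [0,1] the embedding parameter, L an auxiliary linear operator, N the nonlinear operator; standard formulation, not specific to this paper):

        (1-q)\,\mathcal{L}\big[\phi(x,t;q) - u_0(x,t)\big] = q\,\hbar\,\mathcal{N}\big[\phi(x,t;q)\big],

    with φ = u₀ at q = 0 and φ equal to the sought solution at q = 1.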

  1. Method for fluidizing and coating ultrafine particles, device for fluidizing and coating ultrafine particles

    Science.gov (United States)

    Li, Jie; Liu, Yung Y

    2015-01-20

    The invention provides a method for dispersing particles within a reaction field, the method comprising confining the particles to the reaction field using a standing wave. The invention also provides a system for coating particles, the system comprising a reaction zone; a means for producing fluidized particles within the reaction zone; a fluid to produce a standing wave within the reaction zone; and a means for introducing coating moieties to the reaction zone. The invention also provides a method for coating particles, the method comprising fluidizing the particles, subjecting the particles to a standing wave; and contacting the subjected particles with a coating moiety.

  2. Evaluating health information technology: provider satisfaction with an HIV-specific, electronic clinical management and reporting system.

    Science.gov (United States)

    Magnus, Manya; Herwehe, Jane; Andrews, Laura; Gibson, Laura; Daigrepont, Nathan; De Leon, Jordana M; Hyslop, Newton E; Styron, Steven; Wilcox, Ronald; Kaiser, Michael; Butler, Michael K

    2009-02-01

    Health information technology (HIT) offers the potential to improve care for persons living with HIV. Provider satisfaction with HIT is essential to realize benefits, yet its evaluation presents challenges. An HIV-specific, electronic clinical management and reporting system was implemented in Louisiana's eight HIV clinics, serving over 7500 patients. A serial cross-sectional survey was administered at three points between April 2002 and July 2005; qualitative methods were used to augment the quantitative ones. Multivariable methods were used to characterize provider satisfaction. The majority of the sample (n = 196; T1 = 105; T2 = 46; T3 = 45) was female (80.0%), between the ages of 25 and 50 years (68.3%), frequent providers at that clinic (53.7% more than 4 days per week), and had been at the same clinic for a year or more (85.0%). Improvements in satisfaction were observed in patient tracking (p < 0.05); the time to ascertain the current viral load decreased at each time point (mean 4.0 [SD 5.6], 2.9 [2.5], 1.8 [2.6]; p = 0.08), the time to ascertain current antiretroviral status decreased at each time point (mean 3.9 [SD 4.7], 2.9 [3.7], 1.5 [1.1]; p < 0.04), and the time to ascertain the history of antiretroviral use decreased at each time point (mean 15.1 [SD 21.9], 6.0 [5.4], 5.4 [7.2]; p < 0.04). Time savings were realized, averaging 16.1 minutes per visit (p < 0.04). Providers were satisfied with HIT in multiple domains, and significant time savings were realized.

  3. Comparison of two methods for calculating the mean vascularization index of ovarian stroma on the basis of spatio-temporal image correlation high-definition flow technology.

    Science.gov (United States)

    Kudla, Marek J; Kandzia, Tomasz; Alcázar, Juan Luis

    2013-11-01

    The aim of our study was to determine the agreement between two different methods for calculating the mean vascularization index (VI) of the ovarian stroma using spatio-temporal image correlation-high definition flow (STIC-HDF) technology. Stored 4-D STIC-HDF volume data for the ovaries of 34 premenopausal women were assessed retrospectively. We calculated the mean VI from the VI values derived for each 3-D volume within the STIC sequence. Then, the examiner subjectively selected the two volumes with the highest and lowest color signals, respectively, and we averaged these two values. Agreement between VI measurements was estimated by calculating intra-class correlation coefficients. The intra-class correlation coefficient for the VI was 0.999 (95% confidence interval: 0.999-1.000). The mean time needed to calculate the mean VI using the entire 4-D STIC sequence was significantly longer than the mean time needed to calculate the average value from the volumes with the highest and lowest color signals determined by the operator (p < 0.001). We conclude that there is significant agreement between the two methods. Calculating the average VI from the highest and lowest values is less time consuming than calculating the mean VI from the complete STIC sequence. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  4. On the mean and variance of the writhe of random polygons

    International Nuclear Information System (INIS)

    Portillo, J; Scharein, R; Arsuaga, J; Vazquez, M; Diao, Y

    2011-01-01

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an 'ideal' conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on the equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon.
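
    For reference, the writhe in question is the Gauss double integral over the closed curve γ (standard definition):

        \mathrm{Wr}(\gamma) = \frac{1}{4\pi}\oint_{\gamma}\oint_{\gamma} \frac{\big(\dot\gamma(s)\times\dot\gamma(t)\big)\cdot\big(\gamma(s)-\gamma(t)\big)}{\lvert\gamma(s)-\gamma(t)\rvert^{3}}\,\mathrm{d}s\,\mathrm{d}t.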

  5. On the mean and variance of the writhe of random polygons.

    Science.gov (United States)

    Portillo, J; Diao, Y; Scharein, R; Arsuaga, J; Vazquez, M

    We here address two problems concerning the writhe of random polygons. First, we study the behavior of the mean writhe as a function of length. Second, we study the variance of the writhe. Suppose that we are dealing with a set of random polygons with the same length and knot type, which could be the model of some circular DNA with the same topological property. In general, a simple way of detecting chirality of this knot type is to compute the mean writhe of the polygons; if the mean writhe is non-zero then the knot is chiral. How accurate is this method? For example, if for a specific knot type K the mean writhe decreased to zero as the length of the polygons increased, then this method would be limited in the case of long polygons. Furthermore, we conjecture that the sign of the mean writhe is a topological invariant of chiral knots. This sign appears to be the same as that of an "ideal" conformation of the knot. We provide numerical evidence to support these claims, and we propose a new nomenclature of knots based on the sign of their expected writhes. This nomenclature can be of particular interest to applied scientists. The second part of our study focuses on the variance of the writhe, a problem that has not received much attention in the past. In this case, we focused on the equilateral random polygons. We give numerical as well as analytical evidence to show that the variance of the writhe of equilateral random polygons (of length n) behaves as a linear function of the length of the equilateral random polygon.

  6. Differences between true mean temperatures and means calculated with four different approaches: a case study from three Croatian stations

    Science.gov (United States)

    Bonacci, Ognjen; Željković, Ivana

    2018-01-01

    Different countries use varied methods for daily mean temperature calculation. None of them assesses precisely the true daily mean temperature, which is defined as the integral of continuous temperature measurements in a day. Of special scientific as well as practical importance is to find out how temperatures calculated by different methods and approaches deviate from the true daily mean temperature. Five mean daily temperatures were calculated (T0, T1, T2, T3, T4) using five different equations. The mean of 24-h temperature observations during the calendar day is accepted to represent the true daily mean T0. The differences Δi between T0 and the four other mean daily temperatures T1, T2, T3, and T4 were calculated and analysed. In the paper, analyses were done with hourly data measured in a period from 1 January 1999 to 31 December 2014 (149,016 h, 192 months and 16 years) at three Croatian meteorological stations. The stations are situated in distinct climatological areas: Zagreb Grič in a mild climate, Zavižan in the cold mountain region and Dubrovnik in the hot Mediterranean. The influence of fog on the temperature is analysed. Special attention is given to analyses of the extreme (maximum and minimum) daily differences that occurred at the three analysed stations. The selection of the fixed local hours used for calculation of the mean daily temperature plays a crucial role in diminishing the bias from the true daily temperature.
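
    The comparison at the heart of the study reduces to simple arithmetic on hourly records; a sketch follows, with the min-max formula standing in for the four national methods T1-T4 (the array layout is assumed):

        import numpy as np

        def daily_mean_bias(hourly):
            # hourly: (n_days, 24) temperatures for one station.
            t0 = hourly.mean(axis=1)                            # true daily mean
            t_mm = 0.5 * (hourly.max(axis=1) + hourly.min(axis=1))
            diff = t_mm - t0
            return diff.mean(), diff.min(), diff.max()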

  7. Symbolic Meaning of Drama “Perlawanan Diponegoro”

    Directory of Open Access Journals (Sweden)

    Nur Sahid

    2017-01-01

    Full Text Available This study of the drama "Perlawanan Diponegoro" ("Diponegoro Insurrection") by Lephen Purwanto aims to explore in depth the semiotic meanings attached to it. The study employed Keir Elam's theatrical semiotics as the approach, while Krippendorf's content analysis was implemented as the method. Following Krippendorf, content analysis is a method developed specifically to study symbolic phenomena, whose major purpose is to uncover the content, meaning, and essential elements of a literary work.

  8. Beyond the Mean: Sensitivities of the Variance of Population Growth.

    Science.gov (United States)

    Trotter, Meredith V; Krishna-Kumar, Siddharth; Tuljapurkar, Shripad

    2013-03-01

    Populations in variable environments are described by both a mean growth rate and a variance of stochastic population growth. Increasing variance will increase the width of confidence bounds around estimates of population size, growth, probability of and time to quasi-extinction. However, traditional sensitivity analyses of stochastic matrix models only consider the sensitivity of the mean growth rate. We derive an exact method for calculating the sensitivity of the variance in population growth to changes in demographic parameters. Sensitivities of the variance also allow a new sensitivity calculation for the cumulative probability of quasi-extinction. We apply this new analysis tool to an empirical dataset on at-risk polar bears to demonstrate its utility in conservation biology. We find that in many cases a change in life history parameters will increase both the mean and variance of population growth of polar bears. This counterintuitive behaviour of the variance complicates predictions about overall population impacts of management interventions. Sensitivity calculations for cumulative extinction risk factor in changes to both mean and variance, providing a highly useful quantitative tool for conservation management. The mean stochastic growth rate and its sensitivities do not fully describe the dynamics of population growth. The use of variance sensitivities gives a more complete understanding of population dynamics and facilitates the calculation of new sensitivities for extinction processes.
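
    The paper derives exact sensitivities; purely for illustration, the same quantity can be approximated by brute force: perturb one matrix entry, re-simulate with common random numbers, and take a central difference of the variance of log growth. The projection matrices below are placeholders.

        import numpy as np

        def stoch_growth_var(matrices, probs, T=20000, seed=0):
            # Variance of one-step log growth under i.i.d. environments.
            rng = np.random.default_rng(seed)
            n = np.ones(matrices[0].shape[0])
            logs = []
            for _ in range(T):
                A = matrices[rng.choice(len(matrices), p=probs)]
                n1 = A @ n
                logs.append(np.log(n1.sum() / n.sum()))
                n = n1 / n1.sum()          # renormalize to avoid overflow
            return np.var(logs)

        def variance_sensitivity(matrices, probs, i, j, h=1e-4):
            # d Var / d a_ij by central differences; the fixed seed gives
            # common random numbers to both perturbed runs.
            up = [A.copy() for A in matrices]
            dn = [A.copy() for A in matrices]
            for A in up:
                A[i, j] += h
            for A in dn:
                A[i, j] -= h
            return (stoch_growth_var(up, probs) - stoch_growth_var(dn, probs)) / (2 * h)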

  9. PET image reconstruction: mean, variance, and optimal minimax criterion

    International Nuclear Information System (INIS)

    Liu, Huafeng; Guo, Min; Gao, Fei; Shi, Pengcheng; Xue, Liying; Nie, Jing

    2015-01-01

    Given the noisy nature of positron emission tomography (PET) measurements, it is critical to know the image quality and reliability as well as the expected radioactivity map (mean image) for both qualitative interpretation and quantitative analysis. While existing efforts have often been devoted to providing only the reconstructed mean image, we present a unified framework for joint estimation of the mean and corresponding variance of the radioactivity map based on an efficient optimal min–max criterion. The proposed framework formulates the PET image reconstruction problem as a transformation from system uncertainties to estimation errors, where the minimax criterion is adopted to minimize the estimation errors under possibly maximized system uncertainties. The estimation errors, in the form of a covariance matrix, express the measurement uncertainties in a complete way. The framework is then optimized by ∞-norm optimization and solved with the corresponding H∞ filter. Unlike conventional statistical reconstruction algorithms that rely on statistical modeling of the measurement data or noise, the proposed joint estimation stands from the point of view of signal energies and can handle anything from imperfect statistical assumptions to even no a priori statistical assumptions. The performance and accuracy of the reconstructed mean and variance images are validated using Monte Carlo simulations. Experiments on phantom scans with a small animal PET scanner and real patient scans are also conducted for assessment of clinical potential. (paper)

  10. Science teachers' meaning-making of teaching practice, collaboration and professional development

    DEFF Research Database (Denmark)

    Nielsen, Birgitte Lund

    The aims of the research presented in the thesis are three-fold: 1) to gain an insight into challenges and needs related to Danish science teachers' professional development (PD), 2) to understand Danish science teachers' meaning-making when involved in PD designed according to criteria from international research, and 3) a research methodological perspective: to adapt, and discuss the use of, a specific tool for analysis and representation of the teachers' meaning-making. A mixed method approach is taken: the empirical research includes a cohort-survey of graduating science teachers repeated ... to lack of confidence. The case studies provide examples where science teachers develop a growing confidence, and begin to focus on students' learning by manipulating both science ideas and equipment. The teachers involved in artifact-mediated interactions refer to gaining insight into students...

  11. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
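
    A sketch of two of the resulting estimators follows (the (a + 2m + b)/4 and (q1 + m + q3)/3 means with Φ⁻¹-based standard deviations); the constants are quoted from memory of this line of work, so verify against the published formulas before use.

        import numpy as np
        from scipy.stats import norm

        def mean_sd_from_range(n, a, median, b):
            # Scenario: median, minimum a and maximum b reported.
            mean = (a + 2.0 * median + b) / 4.0
            sd = (b - a) / (2.0 * norm.ppf((n - 0.375) / (n + 0.25)))
            return mean, sd

        def mean_sd_from_quartiles(n, q1, median, q3):
            # Scenario: median and interquartile range reported.
            mean = (q1 + median + q3) / 3.0
            sd = (q3 - q1) / (2.0 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
            return mean, sd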

  12. Primary standardization of C-14 by means of CIEMAT/NIST, TDCR and 4πβ-γ methods

    International Nuclear Information System (INIS)

    Kuznetsova, Maria

    2016-01-01

    In this work, the primary standardization of a ¹⁴C solution, which emits beta particles of maximum energy 156 keV, was made by means of three different methods: the CIEMAT/NIST and TDCR (Triple to Double Coincidence Ratio) methods in liquid scintillation systems, and the tracing method in the 4πβ-γ coincidence system. A TRICARB LSC (Liquid Scintillation Counting) system, equipped with two photomultiplier tubes, was used for the CIEMAT/NIST method, using a ³H standard that emits beta particles with maximum energy of 18.7 keV as efficiency tracer. A HIDEX 300SL LSC system, equipped with three photomultiplier tubes, was used for the TDCR method. Samples of ¹⁴C and ³H for the liquid scintillation systems were prepared using three commercial scintillation cocktails, UltimaGold, Optiphase Hisafe3 and InstaGel-Plus, in order to compare the performance in the measurements. All samples were prepared with 15 mL of scintillator, in glass vials with low potassium concentration. Known aliquots of radioactive solution were dropped onto the cocktail scintillators. In order to obtain the quenching parameter curve, a nitromethane carrier solution and 1 mL of distilled water were used. For measurements in the 4πβ-γ system, ⁶⁰Co was used as the beta-gamma emitter. SCS (software coincidence system) was applied and the beta efficiency was changed by electronic discrimination. The behavior of the extrapolation curve was predicted with the code ESQUEMA, using the Monte Carlo technique. The ¹⁴C activity obtained by the three methods applied in this work was compared and the results showed to be in agreement within the experimental uncertainty. (author)

  13. Bronze-mean hexagonal quasicrystal

    Science.gov (United States)

    Dotera, Tomonari; Bekku, Shinichi; Ziherl, Primož

    2017-10-01

    The most striking feature of conventional quasicrystals is their non-traditional symmetry characterized by icosahedral, dodecagonal, decagonal or octagonal axes. The symmetry and the aperiodicity of these materials stem from an irrational ratio of two or more length scales controlling their structure, the best-known examples being the Penrose and the Ammann-Beenker tiling as two-dimensional models related to the golden and the silver mean, respectively. Surprisingly, no other metallic-mean tilings have been discovered so far. Here we propose a self-similar bronze-mean hexagonal pattern, which may be viewed as a projection of a higher-dimensional periodic lattice with a Koch-like snowflake projection window. We use numerical simulations to demonstrate that a disordered variant of this quasicrystal can be materialized in soft polymeric colloidal particles with a core-shell architecture. Moreover, by varying the geometry of the pattern we generate a continuous sequence of structures, which provide an alternative interpretation of quasicrystalline approximants observed in several metal-silicon alloys.
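
    For context, the metallic means referred to here are the positive roots of x² = kx + 1 (standard definitions):

        x_k = \frac{k+\sqrt{k^2+4}}{2}, \qquad x_1 = \frac{1+\sqrt{5}}{2}\ \text{(golden)}, \quad x_2 = 1+\sqrt{2}\ \text{(silver)}, \quad x_3 = \frac{3+\sqrt{13}}{2}\ \text{(bronze)}.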

  14. Gas phase collision dynamics by means of pulse-radiolysis methods

    International Nuclear Information System (INIS)

    Hatano, Yoshihiko

    1989-01-01

    After a brief survey of recent advances in gas-phase collision dynamics studies using pulse radiolysis methods, the following two topics in our research programs are presented, with emphasis on the superior advantages of the pulse radiolysis methods over the various methods of gas-phase collision dynamics, such as beam methods, swarm methods and flow methods. One of the topics is electron attachment to van der Waals molecules. The attachment rates of thermal electrons to O₂ and other molecules in dense gases have been measured over wide ranges of both gas temperature and pressure, from which experimental evidence has been obtained for electron attachment to van der Waals molecules. The results have been compared with theories and discussed in terms of the effect of the van der Waals interaction on the electron attachment resonance. The conclusions obtained have been related to investigations of electron attachment, solvation and localization in the condensed phase. The other is Penning ionization and its related processes. The rate constants for the de-excitation of He(2¹P), He(2³S), Ne(³P₀), Ne(³P₁), Ne(³P₂), Ar(¹P₁), and Ar(³P₁) by atoms and molecules have been measured in the temperature range from 100 to 300 K, thus obtaining the collisional energy dependence of the de-excitation cross sections. The results are compared in detail with theories classified according to the excited rare gas atoms in the metastable and resonance states. (author)

  15. A mixed methods study of patient-provider communication about opioid analgesics.

    Science.gov (United States)

    Hughes, Helen Kinsman; Korthuis, Philip Todd; Saha, Somnath; Eggly, Susan; Sharp, Victoria; Cohn, Jonathan; Moore, Richard; Beach, Mary Catherine

    2015-04-01

    To describe patient-provider communication about opioid pain medicine and explore how these discussions affect provider attitudes toward patients. We audio-recorded 45 HIV providers and 423 patients in routine outpatient encounters at four sites across the country. Providers completed post-visit questionnaires assessing their attitudes toward patients. We identified discussions about opioid pain management and analyzed them qualitatively. We used logistic regression to assess the association between opioid discussion and providers' attitudes toward patients. Forty-eight encounters (11% of the total sample) contained substantive discussion of opioid-related pain management. Most conversations were initiated by patients (n=28, 58%) and ended by the providers (n=36, 75%). Twelve encounters (25%) contained dialog suggesting a difference of opinion or conflict. Providers more often agreed than disagreed to give the prescription (50% vs. 23%), sometimes reluctantly; in 27% (n=13) of encounters, no decision was made. Fewer than half of providers (n=20, 42%) acknowledged the patient's experience of pain. Providers had lower odds of positive regard for the patient (adjusted OR=0.51, 95% CI: 0.27-0.95) when opioids were discussed. Pain management discussions are common in routine outpatient HIV encounters, and providers may regard patients less favorably if opioids are discussed during visits. The sometimes-adversarial nature of these discussions may negatively affect provider attitudes toward patients. Empathy and pain acknowledgment are tools that clinicians can use to facilitate productive discussions of pain management. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. Means of temporal expressions in newspaper news and press report

    Directory of Open Access Journals (Sweden)

    Čutura Ilijana R.

    2016-01-01

    Full Text Available This paper analyses the most frequent linguistic means for expressing the temporal frame in printed news and press reports. With structuralism as the chosen theoretical framework, the approach of the research is qualitative and stylistic. Since the study belongs to the field of functional stylistics, the primary methods used were functional-stylistic and linguistic-stylistic ones. As the study focuses on two newspaper genres, a comparative-stylistic method was used as well. The analysis was conducted on linguistic excerpts from Serbian daily newspapers published throughout Serbia from 2008 to 2015. The aim of the paper is to present the models of expressing the temporal frame in contemporary Serbian newspapers. The paper provides an overview of the characteristics of these models and the types of temporal expressions, as well as their variations, in contemporary Serbian newspapers. It also aims to determine the differences between printed news and press reports in the choice of temporal expressions. It is shown that there is a tendency toward changing the schematized structure of these informative genres and some innovation in relation to the choice of linguistic means for expressing the meaning of temporally close events. The research is a contribution to journalism stylistics, more precisely to Serbian-language newspaper stylistics, and also contributes to the study of the linguistic and stylistic characteristics of non-literary texts.

  17. Scintillation camera with improved output means

    International Nuclear Information System (INIS)

    Lange, K.; Wiesen, E.J.; Woronowicz, E.M.

    1978-01-01

    In a scintillation camera system, the output pulse signals from an array of photomultiplier tubes are coupled to the inputs of individual preamplifiers. The preamplifier output signals are coupled to circuitry for computing the x and y coordinates of the scintillations. A cathode ray oscilloscope is used to form an image corresponding with the pattern in which radiation is emitted by a body. Means for improving the uniformity and resolution of the scintillations are provided. The means comprise biasing means coupled to the outputs of selected preamplifiers so that output signals below a predetermined amplitude are not suppressed and signals falling within increasing ranges of amplitudes are increasingly suppressed. In effect, the biasing means make the preamplifiers non-linear for selected signal levels

  18. Simultaneous determination of some antiprotozoal drugs in different combined dosage forms by mean centering of ratio spectra and multivariate calibration with model updating methods

    Directory of Open Access Journals (Sweden)

    Abdelaleem Eglal A

    2012-04-01

    Full Text Available Abstract Background Metronidazole (MET) and Diloxanide Furoate (DF) act as antiprotozoal drugs in their ternary mixtures with Mebeverine HCl (MEH), an effective antispasmodic drug. This work concerns the development and validation of two simple, specific and cost-effective methods, mainly for simultaneous determination of the proposed ternary mixture. In addition, the developed multivariate calibration model has been updated to determine Metronidazole benzoate (METB) in its binary mixture with DF in Dimetrol® suspension. Results Method (I) is the mean centering of ratio spectra spectrophotometric method (MCR), which depends on using the mean-centered ratio spectra in two successive steps, eliminating the derivative steps and therefore enhancing the signal-to-noise ratio. The developed MCR method has been successfully applied for determination of MET, DF and MEH in different laboratory-prepared mixtures and in tablets. Method (II) is the partial least squares (PLS) multivariate calibration method, which has been optimized for determination of MET, DF and MEH in Dimetrol® tablets; by updating the developed model, it has been successfully used for prediction of binary mixtures of DF and Metronidazole benzoate ester (METB) in Dimetrol® suspension with good accuracy and precision, without reconstruction of the calibration set. Conclusion The developed methods have been validated; accuracy, precision and specificity were found to be within acceptable limits. Moreover, results obtained by the suggested methods showed no significant difference when compared with those obtained by reported methods.
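
    A minimal numerical sketch of the two successive mean-centering steps on ratio spectra (synthetic absorbance vectors standing in for measured spectra; the divisor choice follows the usual MCR recipe rather than this paper's specific settings):

        import numpy as np

        def mean_center(v):
            return v - v.mean()

        def mcr_signal(mixture, divisor1, divisor2):
            # Step 1: ratio to the first divisor spectrum, then mean-center.
            r1 = mean_center(mixture / divisor1)
            d2 = mean_center(divisor2 / divisor1)
            # Step 2: ratio to the processed second divisor and mean-center
            # again; the amplitude tracks the remaining analyte's concentration.
            return mean_center(r1 / d2)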

  19. Improvisation and meaning

    Directory of Open Access Journals (Sweden)

    Simon Gilbertson

    2013-08-01

    Full Text Available This article presents and discusses a long-term repeated-immersion research process that explores meaning allocated to an episode of 50 seconds of music improvisation in early neurosurgical rehabilitation by a teenage boy with severe traumatic brain injury and his music therapist. The process began with the original therapy session in August 1994 and extends to the current time of writing in 2013. A diverse selection of qualitative research methods were used during a repeated immersion and engagement with the selected episodes. The multiple methods used in this enquiry include therapeutic narrative analysis and musicological and video analysis during my doctoral research between 2002 and 2004, arts-based research in 2008 using expressive writing, and arts-based research in 2012 based on the creation of a body cast of my right hand as I used it to play the first note of my music improvising in the original therapy episode, which is accompanied by reflective journaling. The casting of my hand was done to explore and reconsider the role of my own body as an embodied and integral, but originally hidden, part of the therapy process. Put together, these investigations explore the potential meanings of the episode of music improvisation in therapy in an innovative and imaginative way. However, this article does not aim at this stage to present a model or theory for neurorehabilitation but offers an example of how a combination of diverse qualitative methods over an extended period of time can be instrumental in gaining innovative and rich insights into initially hidden perspectives on health, well-being, and human relating.

  1. Meaning discrimination in bilingual Venda dictionaries | Mafela ...

    African Journals Online (AJOL)

    In most cases, the equivalents of the entry-words are provided without giving meaning discrimination. Without a good command of Venda and the provision of meaning discrimination, users will find it difficult to make a correct choice of the equivalent for which they are looking. Bilingual Venda dictionaries are therefore not ...

  2. Mean-field Ensemble Kalman Filter

    KAUST Repository

    Law, Kody; Tembine, Hamidou; Tempone, Raul

    2015-01-01

    A proof of convergence of the standard EnKF generalized to non-Gaussian state space models is provided. A density-based deterministic approximation of the mean-field limiting EnKF (MFEnKF) is proposed, consisting of a PDE solver and a quadrature
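
    For orientation, the sketch below shows a single analysis step of the standard perturbed-observation EnKF with a linear observation operator, i.e., the baseline filter whose mean-field limit the paper studies; it is not the paper's density-based PDE/quadrature scheme, and all dimensions and values are illustrative.

```python
# A minimal sketch of one perturbed-observation EnKF analysis step.
import numpy as np

rng = np.random.default_rng(0)
N, d, m = 200, 2, 1                 # ensemble size, state dim, obs dim
X = rng.normal(size=(N, d))         # forecast ensemble (rows are members)
H = np.array([[1.0, 0.0]])          # observe the first coordinate
R = np.array([[0.1]])               # observation-noise covariance
y = np.array([0.7])                 # observed value

Xm = X.mean(axis=0)
A = X - Xm                          # ensemble anomalies
C = A.T @ A / (N - 1)               # sample covariance
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)   # Kalman gain

# Perturbed observations keep the analysis spread statistically consistent.
Y = y + rng.multivariate_normal(np.zeros(m), R, size=N)
Xa = X + (Y - X @ H.T) @ K.T        # analysis ensemble
print("analysis mean:", Xa.mean(axis=0))
```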

  3. Economic valuation of the ecosystem services provided by a protected area in the Brazilian Cerrado: application of the contingent valuation method.

    Science.gov (United States)

    Resende, F M; Fernandes, G W; Andrade, D C; Néder, H D

    2017-11-01

    Considering that the economic valuation of ecosystem services is a useful approach to support the conservation of natural areas, we aimed to estimate the monetary value of the benefits provided by a protected area in southeast Brazil, the Serra do Cipó National Park. We calculated visitors' willingness to pay to conserve the ecosystems of the protected area using the contingent valuation method. Located in a region under intense anthropogenic pressure, the Serra do Cipó National Park is mostly composed of rupestrian grassland ecosystems, in addition to other Cerrado physiognomies. We conducted a survey consisting of 514 interviews with visitors of the region and found that the mean willingness to pay was R$ 7.16 per year, which corresponds to a total of approximately R$ 716,000.00 per year. We detected that per capita income, household size, the level of interest in environmental issues and the place of origin influenced the likelihood that individuals are willing to contribute to the conservation of the park, as well as the value of the stated willingness to pay. This study conveys the importance of conserving rupestrian grassland and other Cerrado physiognomies to decision makers and society.

  4. Improvement of the cruise performances of a wing by means of aerodynamic optimization. Validation with a Far-Field method

    Science.gov (United States)

    Jiménez-Varona, J.; Ponsin Roca, J.

    2015-06-01

    Under a contract with AIRBUS MILITARY (AI-M), an exercise to analyze the potential of optimization techniques to improve wing performance at cruise conditions has been carried out using an in-house design code. The original wing was provided by AI-M and several constraints were posed for the redesign. To maximize the aerodynamic efficiency at cruise, optimizations were performed using the design techniques developed internally at INTA under a research program (Programa de Termofluidodinámica). The code is a gradient-based optimization code, which uses a classical finite-differences approach for gradient computations. Several techniques for search direction computation are implemented for unconstrained and constrained problems. Techniques for geometry modification are based on different approaches, including perturbation functions for the thickness and/or mean-line distributions, and others based on Bézier-curve fitting of a certain degree. It is very important to address a real design, which involves several constraints that significantly reduce the feasible design space, and an assessment of the code is needed in order to check its capabilities and possible drawbacks. Lessons learnt will help in the development of future enhancements. In addition, the results were also validated using the well-known TAU flow solver and a far-field drag method in order to accurately determine the improvement in terms of drag counts.

  5. Simulation of anisotropic diffusion by means of a diffusion velocity method

    CERN Document Server

    Beaudoin, A; Rivoalen, E

    2003-01-01

    An alternative method to the Particle Strength Exchange method for solving the advection-diffusion equation in the general case of a non-isotropic and non-uniform diffusion is proposed. This method is an extension of the diffusion velocity method. It is shown that this extension is quite straightforward due to the explicit use of the diffusion flux in the expression of the diffusion velocity. This approach is used to simulate pollutant transport in groundwater and the results are compared to those of the PSE method presented in an earlier study by Zimmermann et al.

  6. Hand Washing Practices Among Emergency Medical Services Providers

    Directory of Open Access Journals (Sweden)

    Joshua Bucher

    2015-10-01

    Full Text Available Introduction: Hand hygiene is an important component of infection control efforts. Our primary and secondary goals were to determine the reported rates of hand washing and stethoscope cleaning in emergency medical services (EMS workers, respectively. Methods: We designed a survey about hand hygiene practices. The survey was distributed to various national EMS organizations through e-mail. Descriptive statistics were calculated for survey items (responses on a Likert scale and subpopulations of survey respondents to identify relationships between variables. We used analysis of variance to test differences in means between the subgroups. Results: There were 1,494 responses. Overall, reported hand hygiene practices were poor among pre-hospital providers in all clinical situations. Women reported that they washed their hands more frequently than men overall, although the differences were unlikely to be clinically significant. Hygiene after invasive procedures was reported to be poor. The presence of available hand sanitizer in the ambulance did not improve reported hygiene rates but improved reported rates of cleaning the stethoscope (absolute difference 0.4, p=0.0003. Providers who brought their own sanitizer were more likely to clean their hands. Conclusion: Reported hand hygiene is poor amongst pre-hospital providers. There is a need for future intervention to improve reported performance in pre-hospital provider hand washing.

  7. Image Segmentation Method Using Fuzzy C Mean Clustering Based on Multi-Objective Optimization

    Science.gov (United States)

    Chen, Jinlin; Yang, Chunzhi; Xu, Guangkui; Ning, Li

    2018-04-01

    Image segmentation is not only one of the hottest topics in digital image processing, but also an important part of computer vision applications. As one kind of image segmentation algorithm, fuzzy C-means clustering is an effective and concise segmentation algorithm. However, the drawback of FCM is that it is sensitive to image noise. To solve this problem, this paper designs a novel fuzzy C-means clustering algorithm based on multi-objective optimization. A parameter λ is added to the fuzzy distance measurement formula to improve the multi-objective optimization; λ adjusts the weight of the pixel's local information. In the algorithm, the local correlation of neighboring pixels is added to the improved multi-objective mathematical model to optimize the clustering centers. Two sets of experimental results show that the novel fuzzy C-means approach achieves good performance and computational time when segmenting images corrupted by different types of noise.
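
    A minimal sketch of the general idea follows: standard fuzzy C-means memberships with a λ-weighted term that pulls the distance measure toward the local neighborhood mean of each pixel. This is one plausible reading of the modification described in the abstract, not the authors' exact model; λ, the fuzzifier m, and the synthetic image are illustrative.

```python
# A minimal sketch of fuzzy C-means with a λ-weighted local-mean term
# added to the squared distance (one plausible reading of the abstract).
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
img = np.zeros((64, 64)); img[:, 32:] = 1.0
img += 0.3 * rng.normal(size=img.shape)            # noisy two-region image

x = img.ravel()
xbar = uniform_filter(img, size=3).ravel()         # local neighborhood means
c, m, lam, n_iter = 2, 2.0, 0.5, 50
v = np.array([x.min(), x.max()])                   # initial cluster centers

for _ in range(n_iter):
    # squared distances with the λ-weighted local-information term
    d2 = (x[:, None] - v[None, :])**2 + lam * (xbar[:, None] - v[None, :])**2
    d2 = np.maximum(d2, 1e-12)
    u = 1.0 / (d2 ** (1.0 / (m - 1.0)))
    u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships
    um = u ** m
    # center update minimizing sum(u^m * [(x-v)^2 + lam*(xbar-v)^2])
    v = (um * (x + lam * xbar)[:, None]).sum(0) / ((1 + lam) * um.sum(0))

labels = u.argmax(axis=1).reshape(img.shape)       # hardened segmentation
```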

  8. Consequence clauses with a meaning of measure and degree

    Directory of Open Access Journals (Sweden)

    Nikolić Marina M.

    2015-01-01

    Full Text Available The objective of this paper is to present a new classification of consequence clauses based on the category of degree. Based on that parameter, consequence clauses are classified into those that have a meaning of degree and those that do not. Furthermore, consequence clauses with a meaning of degree can be divided into clauses with a meaning of measure (Napolju je toliko hladno da smo se odmah vratili iz šetnje / It is so cold outside that we immediately returned from our walk; On je takav prevarant, da će te sigurno preći / He is such a swindler that he will surely cheat you) and excessive clauses (Previše je debela da uđe u tu haljinu / She is too fat to fit in this dress; Devojčica je premalena da ostane sama kod kuće / The girl is too little to stay at home alone; Buka je bila dovoljna da natera dete u plač / The noise was loud enough to make the child cry). The article argues that the category of degree in these sentences is grammaticalized with the help of certain modifiers and the conjunction of result da. The author applies the onomasiological method in the analysis and uses the theory of semantic locations. The existence of certain meanings is confirmed by the transformational test. [Project of the Ministry of Science of the Republic of Serbia, no. 178021: Description and Standardization of the Contemporary Serbian Language; subproject: Syntax of the Complex Sentence]

  9. The Social Construction of Place Meaning: Exploring Multiple Meanings of Place as an Outdoor Teaching and Learning Environment

    OpenAIRE

    Gkoutis, Georgios

    2014-01-01

    This investigation explores the meanings primary school teachers who apply outdoor learning and teaching methods associate with the places that encompass their teaching practices. A symbolic interactionist framework coupled with a social constructionist orientation was employed to analyze data collected from semi-structured interviews and photo elicitation techniques. The findings illustrated that meaning ascribed to place derived from the interactional processes between the study's respondents ...

  10. A method for visual inspection of welding by means of image processing of x-ray photograph

    International Nuclear Information System (INIS)

    Koshimizu, Hiroyasu; Yoshida, Tohru.

    1983-01-01

    Computer image processing is becoming a helpful tool even in industrial inspections. A computerized method for welding visual inspection is proposed in this paper. The method is based on computer image processing of X-ray photographs of welding, in which appearance information about the weldment, such as the shape of the weld bead, really exists. Structural patterns are extracted first, and seven computed measures for inspection are calculated using those patterns. A software system for visual inspection is constructed based on these seven measures. Experiments showed that this system can achieve a correlation of more than 0.85 with human visual inspection. As a result, computer-based visual inspection using X-ray photographs becomes a promising tool for objective and quantitative welding inspection. Additionally, the consistency of the system, the possibility of reducing computing costs, and other improvements to the proposed method are discussed. (author)

  11. A SVDD and K-Means Based Early Warning Method for Dual-Rotor Equipment under Time-Varying Operating Conditions

    Directory of Open Access Journals (Sweden)

    Zhinong Jiang

    2018-01-01

    Full Text Available Under frequently time-varying operating conditions, equipment with dual rotors, like gas turbines, is influenced by two rotors with different rotating speeds. Fixed-threshold alarm methods are unable to account for the influence of time-varying operating conditions and are therefore not suitable for monitoring dual-rotor equipment. An early warning method for dual-rotor equipment under time-varying operating conditions is proposed in this paper, in which the influence of the time-varying rotating speeds of the two rotors on the alarm thresholds is taken into account. Firstly, the operating conditions are divided into several limited intervals according to the rotating speeds of the two rotors. Secondly, the training data within each interval are processed by SVDD and the allowable ranges (i.e., the alarm thresholds) of the vibration are determined, giving an alarm threshold for each interval of operating conditions. The alarm threshold can be expressed as a sphere, whose controlling parameters are the coordinates of the center and the radius. Then, the cluster center of the test data, whose alarm state is to be judged, is extracted through K-means. Finally, the alarm state is obtained by comparing the cluster center with the corresponding sphere. Experiments are conducted to validate the proposed method.
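
    The alarm logic can be sketched as follows, approximating SVDD with scikit-learn's OneClassSVM (with an RBF kernel the two are closely related); the speed-interval binning, model parameters, and synthetic vibration data are assumptions for the example.

```python
# A minimal sketch of the interval-wise boundary + K-means alarm check,
# with OneClassSVM standing in for SVDD; all data here are synthetic.
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Train one boundary per operating-condition interval (indexed by the
# discretized speeds of the two rotors).
boundaries = {}  # (speed1_bin, speed2_bin) -> fitted one-class model
for bins in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    healthy = rng.normal(loc=sum(bins), scale=0.1, size=(500, 2))
    boundaries[bins] = OneClassSVM(nu=0.01, gamma="scale").fit(healthy)

# At test time: cluster the buffered vibration features and compare the
# cluster center against the boundary of the matching speed interval.
test = rng.normal(loc=0.0, scale=0.1, size=(200, 2)) + 0.5   # drifted data
center = KMeans(n_clusters=1, n_init=10).fit(test).cluster_centers_
alarm = boundaries[(0, 0)].predict(center)[0] == -1          # outside boundary?
print("alarm:", alarm)
```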

  12. A Mixed-Methods Investigation of Early Childhood Professional Development for Providers and Recipients in the United States

    Science.gov (United States)

    Linder, Sandra M.; Rembert, Kellye; Simpson, Amber; Ramey, M. Deanna

    2016-01-01

    This multi-phase mixed-methods study explores provider and recipient perceptions of the current state of early childhood professional development in a southeastern area of the United States. Professional development for the early childhood workforce has been shown to positively influence the quality of early childhood classrooms. This study…

  13. The meaning of ordered SOS

    NARCIS (Netherlands)

    Mousavi, M.R.; Phillips, I.C.C.; Reniers, M.A.; Ulidowski, I.; Arun-Kumar, S.; Garg, N.

    2006-01-01

    Structured Operational Semantics (SOS) is a popular method for defining semantics by means of deduction rules. An important feature of deduction rules, or simply SOS rules, is negative premises, which are crucial in the definitions of such phenomena as priority mechanisms and time-outs. Orderings

  14. Develop a practical means to monitor the criticality of the TMI-2 core

    International Nuclear Information System (INIS)

    Kim, S.S.; Levine, S.H.; Imel, G.

    1984-06-01

    A method has been developed to monitor the subcritical reactivity and unfold the k∞ distribution of a degraded reactor core. The method uses several fixed neutron detectors and a Cf-252 neutron source placed sequentially in multiple positions in the core. It is called the Asymmetric Multiple Position Neutron Source (AMPNS) method. The AMPNS method employs the nucleonic codes to analyze, in two dimensions, the neutron multiplication of a Cf-252 neutron source. Experiments were performed on the Penn State Breazeale TRIGA Reactor (PSBR). The first set of experiments calibrates the k∞ values of the fuel elements moved during the second set of experiments. The second set of experiments provides a means for both developing and validating the AMPNS method. Several test runs of optimization calculations have been made on the PSBR core, assuming one of the subcritical configurations is a damaged core. Test runs of the AMPNS method reveal that when the core cell size and source position are correctly chosen, the solution converges to the correct k_eff and k∞ distribution without any oscillations or instabilities. Application of the AMPNS method to the degraded TMI-2 core has been studied to provide some initial insight into this problem.

  15. HARDI denoising using nonlocal means on S2

    Science.gov (United States)

    Kuurstra, Alan; Dolui, Sudipto; Michailovich, Oleg

    2012-02-01

    Diffusion MRI (dMRI) is a unique imaging modality for in vivo delineation of the anatomical structure of white matter in the brain. In particular, high angular resolution diffusion imaging (HARDI) is a specific instance of dMRI which is known to excel in detection of multiple neural fibers within a single voxel. Unfortunately, the angular resolution of HARDI is known to be inversely proportional to SNR, which makes the denoising of HARDI data a problem of particular practical importance. Since HARDI signals are effectively band-limited, denoising can be accomplished by means of linear filtering. However, the spatial dependency of diffusivity in brain tissue makes it impossible to find a single set of linear filter parameters which is optimal for all types of diffusion signals. Hence, adaptive filtering is required. In this paper, we propose a new type of non-local means (NLM) filtering which possesses the required adaptivity property. As opposed to similar methods in the field, however, the proposed NLM filtering is applied in the spherical domain of spatial orientations. Moreover, the filter uses an original definition of adaptive weights, which are designed to be invariant both to spatial rotations and to the particular sampling scheme in use. We also provide a detailed description of the proposed filtering procedure and its efficient implementation, as well as experimental results with synthetic data. We demonstrate that our filter has substantially better adaptivity as compared to a number of alternative methods.
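
    For orientation, here is a minimal non-local means filter on a 1-D signal in the ordinary Euclidean domain; the paper's filter instead acts on the sphere of spatial orientations with rotation-invariant weights, which this toy version does not attempt.

```python
# A minimal sketch of non-local means on a 1-D signal: each sample is
# replaced by a weighted average of samples whose patches look alike.
import numpy as np

def nlm_1d(y, patch=3, h=0.2):
    """Denoise y using patch-similarity weights (toy NLM)."""
    n = len(y)
    pad = np.pad(y, patch, mode="reflect")
    patches = np.stack([pad[i:i + 2 * patch + 1] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        d2 = ((patches - patches[i]) ** 2).mean(axis=1)  # patch distances
        w = np.exp(-d2 / h**2)                           # similarity weights
        out[i] = (w * y).sum() / w.sum()
    return out

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.2 * rng.normal(size=t.size)
print("MSE before:", np.mean((noisy - clean) ** 2))
print("MSE after: ", np.mean((nlm_1d(noisy) - clean) ** 2))
```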

  16. A REVIEW WAVELET TRANSFORM AND FUZZY K-MEANS BASED IMAGE DE-NOISING METHOD

    OpenAIRE

    Nidhi Patel*, Asst. Prof. Pratik Kumar Soni

    2017-01-01

    This review covers image de-noising methods that use fuzzy k-means clustering and the wavelet transform. The enormous amount of data necessary for images is a main reason for the growth of many areas within the research field of computer imaging, such as image processing and compression. In the reviewed work, wavelet transforms and k-means clustering are applied in order to discover more possible combinations that may lead to the finest de-noising...

  17. Apparatus and method for reconstructing data

    International Nuclear Information System (INIS)

    Pavkovich, J.M.

    1977-01-01

    The apparatus and method for reconstructing data are described. A fan beam of radiation is passed through an object, the beam lying in the same quasi-plane as the object slice to be examined. Radiation not absorbed in the object slice is recorded on oppositely situated detectors aligned with the source of radiation. Relative rotation is provided between the source-detector configuration and the object. Reconstruction means are coupled to the detector means, and may comprise a general purpose computer, a special purpose computer, and control logic for interfacing between said computers and controlling the respective functioning thereof for performing a convolution and back projection based upon non-absorbed radiation detected by said detector means, whereby the reconstruction means converts values of the non-absorbed radiation into values of absorbed radiation at each of an arbitrarily large number of points selected within the object slice. Display means are coupled to the reconstruction means for providing a visual or other display or representation of the quantities of radiation absorbed at the points considered in the object. (Auth.)
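
    The convolution and back-projection pipeline the patent describes survives in modern filtered back-projection. A parallel-beam sketch using scikit-image (assuming a recent version that accepts filter_name) is shown below as a stand-in, since the fan-beam geometry of the patent adds rebinning and weighting steps.

```python
# A minimal parallel-beam filtered back-projection sketch with scikit-image.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()                 # synthetic absorption slice
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)          # detector readings per angle

# Filtered back-projection: ramp-filter each projection, then smear it
# back across the image plane and sum over angles (convolution + back
# projection in the patent's terms).
recon = iradon(sinogram, theta=theta, filter_name="ramp")
print("mean absolute reconstruction error:", np.abs(recon - image).mean())
```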

  18. Semi-supervised clustering methods

    Science.gov (United States)

    Bair, Eric

    2013-01-01

    Cluster analysis methods seek to partition a data set into homogeneous subgroups. It is useful in a wide variety of applications, including document processing and modern genetics. Conventional clustering methods are unsupervised, meaning that there is no outcome variable nor is anything known about the relationship between the observations in the data set. In many situations, however, information about the clusters is available in addition to the values of the features. For example, the cluster labels of some observations may be known, or certain observations may be known to belong to the same cluster. In other cases, one may wish to identify clusters that are associated with a particular outcome variable. This review describes several clustering algorithms (known as “semi-supervised clustering” methods) that can be applied in these situations. The majority of these methods are modifications of the popular k-means clustering method, and several of them will be described in detail. A brief description of some other semi-supervised clustering algorithms is also provided. PMID:24729830
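
    As a concrete instance of the k-means modifications mentioned above, here is a minimal "seeded" k-means in which observations with known cluster labels fix the initial centroids and stay clamped to their clusters; the data and seed sets are illustrative.

```python
# A minimal sketch of seeded (semi-supervised) k-means.
import numpy as np

def seeded_kmeans(X, seeds, k, n_iter=20):
    """seeds: dict cluster_index -> array of row indices with known labels."""
    centers = np.stack([X[idx].mean(axis=0) for idx in seeds.values()])
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j, idx in seeds.items():          # clamp the labeled points
            labels[idx] = j
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
# First five rows of each group have known labels.
labels, _ = seeded_kmeans(X, seeds={0: np.arange(5), 1: np.arange(50, 55)}, k=2)
print("cluster sizes:", np.bincount(labels))
```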

  19. A new method of spatio-temporal topographic mapping by correlation coefficient of K-means cluster.

    Science.gov (United States)

    Li, Ling; Yao, Dezhong

    2007-01-01

    It would be of the utmost interest to map correlated sources in the working human brain by Event-Related Potentials (ERPs). This work develops a new method to map correlated neural sources based on the time courses of the scalp ERP waveforms. The ERP data are first classified by k-means cluster analysis, and then the Correlation Coefficients (CC) between the original data of each electrode channel and the time course of each cluster centroid are calculated and utilized as the mapping variable on the scalp surface. With a normalized 4-concentric-sphere head model with radius 1, the performance of the method is evaluated on simulated data. The CC between four simulated sources (s1-s4) and the estimated cluster centroids (c1-c4), and the distances (Ds) between the scalp projection points of s1-s4 and those of c1-c4, are utilized as the evaluation indexes. Applied to four sources with two of them partially correlated (with maximum mutual CC = 0.4892), the CC (Ds) between s1-s4 and c1-c4 are larger (smaller) than 0.893 (0.108) for noise levels NSR ... clusters located at the left and right occipital and frontal areas. The estimated vectors of the contra-occipital area demonstrate that attention to the stimulus location produces increased amplitude of the P1 and N1 components over the contra-occipital scalp. The estimated vector in the frontal area displays two large processing negativity waves around 100 ms and 250 ms when subjects are attentive, and there is a small negative wave around 140 ms and a P300 when subjects are unattentive. The results of simulations and real Visual Evoked Potentials (VEPs) data demonstrate the validity of the method in mapping correlated sources. This method may be an objective, heuristic and important tool to study the properties of cerebral neural networks in cognitive and clinical neurosciences.
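
    The core computation lends itself to a short sketch: cluster the channel time courses with k-means, then correlate every channel with every cluster centroid to obtain the mapping variable. The channel count, cluster number, and random data below are illustrative.

```python
# A minimal sketch of the correlation-coefficient mapping variable.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_channels, n_samples, k = 32, 300, 4
data = rng.normal(size=(n_channels, n_samples))   # rows: electrode channels

centroids = KMeans(n_clusters=k, n_init=10).fit(data).cluster_centers_

# cc[i, j] = Pearson correlation of channel i with centroid j; these
# values would then be mapped over the scalp surface.
cc = np.corrcoef(np.vstack([data, centroids]))[:n_channels, n_channels:]
print(cc.shape)   # (32, 4): one mapping value per channel and cluster
```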

  20. Reliability of an analysis method for measuring diaphragm excursion by means of direct visualization with videofluoroscopy.

    Science.gov (United States)

    Yi, Liu C; Nascimento, Oliver A; Jardim, José R

    2011-06-01

    The purpose of this study was to verify the inter-observer reproducibility of an analysis method for diaphragmatic displacement measurements using direct visualization with videofluoroscopy. Twenty-nine mouth-breathing children aged 5 to 12 years of both genders were analyzed. The diaphragmatic displacement evaluation was divided into three parts: videofluoroscopy with VHS recording in standing, sitting, and dorsal positions; digitalization of the images; and measurement of the distance between diaphragmatic domes during a breathing cycle using Adobe Photoshop 5.5 and Adobe Premiere PRO 6.5 software. The intraclass correlation coefficients showed excellent reproducibility in all positions, with coefficients always above 0.94. The mean measurements of diaphragmatic dome displacement obtained by the two observers were similar (P ... healthcare professionals. Copyright © 2010 SEPAR. Published by Elsevier Espana. All rights reserved.

  1. Creating an Arms Race? Examining School Costs and Motivations for Providing NAPLEX and PCOA Preparation.

    Science.gov (United States)

    Lebovitz, Lisa; Shuford, Veronica P; DiVall, Margarita V; Daugherty, Kimberly K; Rudolph, Michael J

    2017-09-01

    Objective. To examine the extent of financial and faculty resources dedicated to preparing students for NAPLEX and PCOA examinations, and how these investments compare with NAPLEX pass rates. Methods. A 23-item survey was administered to assessment professionals in U.S. colleges and schools of pharmacy (C/SOPs). Institutions were compared by type, age, and student cohort size. Institutional differences were explored according to the costs and types of NAPLEX and PCOA preparation provided, if any, and mean NAPLEX pass rates. Results. Of 134 C/SOPs that received the survey invitation, 91 responded. Nearly 80% of these respondents reported providing some form of NAPLEX preparation. Significantly higher 2015 mean NAPLEX pass rates were found in public institutions, schools that do not provide NAPLEX prep, and schools spending less than $10,000 annually on NAPLEX prep. Only 18 schools reported providing PCOA preparation. Conclusion. Investment in NAPLEX and PCOA preparation resources vary widely across C/SOPs but may increase in the next few years, due to dropping NAPLEX pass rates and depending upon how PCOA data are used.

  2. The Meaning of Touch to Patients Undergoing Chemotherapy.

    Science.gov (United States)

    Leonard, Katherine E; Kalman, Melanie

    2015-09-01

    To explore the experience of being touched in people diagnosed with cancer and undergoing IV chemotherapy, a qualitative, phenomenologic study was conducted in Central New York and northern Pennsylvania (both in the northeastern United States) with 11 Caucasian, English-speaking adults. Individual interviews used open-ended questions to explore the meaning of being touched to each participant. Meanings of significant statements, which pertained to the phenomenon under investigation, were formulated hermeneutically. Themes were derived from immersion in the data and extraction of similar and divergent concepts among all interviews, yielding a multidimensional understanding of the meaning of being touched in this sample of participants. Participants verbalized awareness of and sensitivity to the regard of others who were touching them, including healthcare providers, family, and friends. Patients do not classify a provider's touch as either task or comfort oriented. Meanings evolved in the context of three primary themes. The experience of being touched encompasses the quality of presence of providers, family, or friends. For touch to be regarded as positive, patients must be regarded as inherently whole and equal. The quality of how touch is received is secondary to and flows from the relationship established between patient and provider. This study adds to the literature in its finding that the fundamental quality of the relationship between patient and provider establishes the perceived quality of touch. Previous studies have primarily divided touch into two categories.

  3. Mean stress and the exhaustion of fatigue-damage resistance

    Science.gov (United States)

    Berkovits, Avraham

    1989-01-01

    Mean-stress effects on fatigue life are critical in isothermal and thermomechanically loaded materials and composites. Unfortunately, existing mean-stress life-prediction methods do not incorporate physical fatigue damage mechanisms. The objective here is to examine the relation between mean-stress-induced damage (as measured by acoustic emission) and existing life-prediction methods. Acoustic emission instrumentation has indicated that, as with static yielding, fatigue damage results from dislocation buildup and motion until dislocation saturation is reached, after which void formation and coalescence predominate. Correlation of damage processes with similar mechanisms under monotonic loading led to a reinterpretation of Goodman diagrams for 40 alloys and a modification of Morrow's formulation for life prediction under mean stresses. Further testing, using acoustic emission to monitor dislocation dynamics, can generate data for developing a more general model for fatigue under mean stress.

  4. Finding reproducible cluster partitions for the k-means algorithm.

    Science.gov (United States)

    Lisboa, Paulo J G; Etchells, Terence A; Jarman, Ian H; Chambers, Simon J

    2013-01-01

    K-means clustering is widely used for exploratory data analysis. While its dependence on initialisation is well known, it is common practice to assume that the partition with the lowest total sum-of-squares (SSQ), i.e. within-cluster variance, is both reproducible under repeated initialisations and also the closest that k-means can provide to true structure, when applied to synthetic data. We show that this is generally the case for small numbers of clusters, but for values of k that are still of theoretical and practical interest, similar values of SSQ can correspond to markedly different cluster partitions. This paper extends stability measures previously presented in the context of finding optimal values of cluster number, into a component of a 2-d map of the local minima found by the k-means algorithm, from which not only can values of k be identified for further analysis but, more importantly, it is made clear whether the best SSQ is a suitable solution or whether obtaining a consistently good partition requires further application of the stability index. The proposed method is illustrated by application to five synthetic datasets replicating a real-world breast cancer dataset with varying data density, and a large bioinformatics dataset.
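
    The reproducibility issue is easy to demonstrate: run k-means from many initialisations, keep the partitions with near-minimal SSQ, and compare them with the adjusted Rand index; similar SSQ values with low ARI indicate unstable partitions. The data and parameters below are illustrative and do not reproduce the paper's stability index.

```python
# A minimal sketch of checking k-means partition stability across restarts.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 10))           # structureless data, k likely too large
k, runs = 8, 20

fits = [KMeans(n_clusters=k, n_init=1, random_state=s).fit(X) for s in range(runs)]
ssq = np.array([f.inertia_ for f in fits])
best = np.argsort(ssq)[:5]               # the five lowest-SSQ partitions

for a in best:
    for b in best:
        if a < b:                        # similar SSQ, possibly low ARI
            ari = adjusted_rand_score(fits[a].labels_, fits[b].labels_)
            print(f"SSQ {ssq[a]:.1f} vs {ssq[b]:.1f}: ARI={ari:.2f}")
```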

  5. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    Science.gov (United States)

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally
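
    Two of the simple imputations discussed in this literature are easy to state in code: the range-based SD approximation (range/4) and the algebraic SD recovered from a confidence interval of the mean. The sketch below is a generic illustration, not the review's recommended procedure.

```python
# Minimal sketches of two standard deviation imputations used when trials
# report only a range or a confidence interval; values are illustrative.
import numpy as np

def sd_from_range(low, high):
    """Crude approximation: SD ~ range/4 for roughly symmetric data."""
    return (high - low) / 4.0

def sd_from_ci(ci_low, ci_high, n, z=1.96):
    """Back out SD from a 95% CI of the mean: half-width = z * SD / sqrt(n)."""
    half_width = (ci_high - ci_low) / 2.0
    return half_width * np.sqrt(n) / z

print(sd_from_range(2.0, 10.0))            # -> 2.0
print(sd_from_ci(4.2, 5.8, n=50))          # -> ~2.89
```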

  6. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  8. Finite-State Mean-Field Games, Crowd Motion Problems, and its Numerical Methods

    KAUST Repository

    Machado Velho, Roberto

    2017-01-01

    ... socio-economic sciences. Examples include paradigm shifts in the scientific community and consumer choice behavior in a free market. The corresponding finite-state mean-field game models are hyperbolic systems of partial differential equations, for which we propose

  9. VAR Portfolio Optimal: Perbandingan Antara Metode Markowitz Dan Mean Absolute Deviation

    OpenAIRE

    Sartono, R. Agus; Setiawan, Arie Andika

    2006-01-01

    The portfolio selection method introduced by Harry Markowitz (1952) used the variance, or standard deviation, as a measure of risk. Konno and Yamazaki (1991) introduced another method that uses the mean absolute deviation as a measure of risk instead of the variance. Value-at-Risk (VaR) is a relatively new method of measuring risk that has been used by financial institutions. The aim of this research is to compare the mean-variance and mean-absolute-deviation approaches on two portfolios. Next, we attempt...

  10. Calculations of Neutron Flux Distributions by Means of Integral Transport Methods

    Energy Technology Data Exchange (ETDEWEB)

    Carlvik, I

    1967-05-15

    Flux distributions have been calculated, mainly in one energy group, for a number of systems representing geometries of interest for reactor calculations. Integral transport methods of two kinds were utilised: collision probabilities (CP) and the discrete method (DIT). The geometries considered comprise the three one-dimensional geometries (planar, spherical and annular), and further a square cell with a circular fuel rod and a rod cluster cell with a circular outer boundary. For the annular cells both methods (CP and DIT) were used and the results were compared. The purpose of the work is twofold: firstly, to demonstrate the versatility and efficacy of integral transport methods, and secondly, to serve as a guide for anybody who wants to use the methods.

  11. Breastfeeding Education: disagreement of meanings

    Directory of Open Access Journals (Sweden)

    Nydia Stella Caicedo Martínez

    Full Text Available Objective. This work sought to analyze how educational processes for breastfeeding have been developed in a health institution, starting from the meanings that mothers, families, and health staff construct about them. Methods. This was qualitative research with an ethnographic approach, which included observations during the group educational activities of the programs, focus groups, and interviews with mothers, their families, and the health staff of a hospital unit in the city of Medellín, Colombia. The analysis was guided by the constant comparison method. Results. The categories emerging from the data were: 1) breast milk is an ideal food; 2) the mothers' experiences influence the breastfeeding practice; 3) family beliefs sometimes operate as cultural barriers; 4) disagreements are revealed in the educational process. Conclusion. The way educational processes for breastfeeding have taken place reveals a break, expressed in the scarce interaction between the meanings professionals have constructed on the topic and those the mothers and their families give to the experience of breastfeeding.

  12. GPU accelerated real-time confocal fluorescence lifetime imaging microscopy (FLIM) based on the analog mean-delay (AMD) method

    Science.gov (United States)

    Kim, Byungyeon; Park, Byungjun; Lee, Seungrag; Won, Youngjae

    2016-01-01

    We demonstrated GPU accelerated real-time confocal fluorescence lifetime imaging microscopy (FLIM) based on the analog mean-delay (AMD) method. Our algorithm was verified for various fluorescence lifetimes and photon numbers. The GPU processing time was faster than the physical scanning time for images up to 800 × 800, and more than 149 times faster than a single core CPU. The frame rate of our system was demonstrated to be 13 fps for a 200 × 200 pixel image when observing maize vascular tissue. This system can be utilized for observing dynamic biological reactions, medical diagnosis, and real-time industrial inspection. PMID:28018724
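
    The idea behind the AMD estimator can be sketched in a few lines: the lifetime is the intensity-weighted mean arrival time of the measured fluorescence pulse minus that of the instrument response. The synthetic decay below is illustrative, and the sketch ignores the GPU pipeline and confocal scanning entirely.

```python
# A minimal sketch of the analog mean-delay (AMD) lifetime estimate.
import numpy as np

t = np.linspace(0, 50e-9, 2000)                  # time axis, seconds
tau_true = 3e-9
irf = np.exp(-0.5 * ((t - 5e-9) / 0.3e-9) ** 2)  # instrument response (Gaussian)
decay = np.convolve(irf, np.exp(-t / tau_true))[: t.size]  # measured pulse

mean_delay = lambda s: (t * s).sum() / s.sum()   # intensity-weighted mean time
tau_est = mean_delay(decay) - mean_delay(irf)
print(f"estimated lifetime: {tau_est * 1e9:.2f} ns")  # ~3 ns
```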

  13. Finding golden mean in a physics exercise

    Science.gov (United States)

    Benedetto, Elmo

    2017-07-01

    The golden mean is an algebraic irrational number that has captured the popular imagination and is discussed in many books. Indeed, some scientists believe that it appears in some patterns in nature, including the spiral arrangement of leaves and other plant parts. Generally, the golden mean is introduced in geometry, and textbooks give its definition together with a graphical method to determine it. In this short note, we want to find this number by studying projectile motion. This could be a way to introduce the golden mean (also called the golden ratio, golden section, Phidias constant, divine proportion or extreme and mean ratio) in a physics course.
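
    For reference, whatever kinematic construction is used, the number being sought is fixed by the defining proportion of the golden section (the paper's specific projectile argument is not reproduced here):

```latex
% Defining proportion of the golden mean: the whole is to the longer part
% as the longer part is to the shorter.
\[
\frac{a+b}{a} \;=\; \frac{a}{b} \;=\; \varphi
\quad\Longrightarrow\quad
\varphi^{2} = \varphi + 1
\quad\Longrightarrow\quad
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618\ldots
\]
```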

  14. MEAN OF MEDIAN ABSOLUTE DERIVATION TECHNIQUE MEAN ...

    African Journals Online (AJOL)

    Development of a mean of median absolute derivation technique based on ... the noise mean to estimate the speckle noise variance. Noise mean property ... Foraging Optimization," International Journal of Advanced ...

  15. The use of Second Life as an effective means of providing informal science education to secondary school students

    Science.gov (United States)

    Amous, Haytham

    This research study evaluated the use of Second Life and its virtual museums as a means of providing effective informal science education for both junior high and high school students. The study investigated whether students' attitudes toward science change as a result of scholastic exposure to the science museums in Second Life. The dependence between attitudes and learning styles was also investigated. The data gathered from the experiences and perceptions of students using Second Life in informal science education were analyzed to address the questions of the study. The researcher used qualitative and quantitative research methodologies to investigate the research questions. The first and second research questions were quantitative and used the TOSRA2 research instrument to assess attitudes and perceptions, together with learning style questionnaire scores. The attitudes toward science before and after visiting the Second Life museums showed no significant change. Only a weak relationship between attitudes toward science and the participants' learning styles was found; the researcher therefore concluded that no meaningful relationship existed between the average TOSRA scores and the learning styles questionnaire scores. To address research questions three and four, a collective qualitative case study approach (Creswell, 2007), as well as structured interviews focusing on the students' perspectives on using Second Life for informal science education, was used. The students did not prefer informal science education using Second Life over formal education; this was in part attributed to poor usability and/or unfamiliarity with the program. Despite the technical difficulties students confronted in visiting Second Life, their perceptions of their learning experiences and of the use of Second Life in an informal science environment were positive.

  16. The meaning of a legal category of “sanction”

    Directory of Open Access Journals (Sweden)

    Al’bina Sergeyevna Panova

    2015-06-01

    Full Text Available Objective: to study the legal category of "sanction". Methods: dialectical, systematic and logical methods of analysis and synthesis. Results: the study of the legal category of "sanction" has shown that a sanction can be applied on a regulatory or contractual basis (if stipulated by a civil agreement) and as a measure of liability and protection. One of the promising directions of its use is the motivating one: sanctions can provide legal consequences favorable for those who observe the behavior stipulated by the law. The following directions are proposed for the development of domestic legislation on sanctions: verification of the compliance of the amounts and terms of sanctions with the gravity of offences; introduction of previously nonexistent sanctions, for example for speculation on food and currency markets; and the use of discretionary sanctions as a means of positive legal stimulation of the economy. Scientific novelty: a conclusion is made about the nature of sanctions; it is proved that a sanction is a legal means whose use enables the victim to protect their violated or challenged rights, as provided for by the legislation and/or the agreement, and implies adverse consequences of a property and/or organizational nature for the offender. The application of sanctions has its own peculiarities: their use is aimed at curbing the illegal actions of the offender or debtor and at stimulating them to the proper performance of statutory or contractual duties; often sanctions are aimed at compensating for damage caused to the creditor. A peculiar feature of sanctions is that they are a necessary component of the legal system. Practical value: the results obtained can be used in economic and legal research relating to economics and entrepreneurship, in treaty practice, and in teaching the disciplines of Civil Law, Business Law, Commercial Law, etc.

  17. Methodical Approaches to Communicative Providing of Retailer Branding

    Directory of Open Access Journals (Sweden)

    Andrey Kataev

    2017-07-01

    Full Text Available The thesis is devoted to the rationalization of methodical approaches to providing the branding of retail trade enterprises. The article considers the features of brand perception by retail consumers and clarifies the specifics of customers' reviews of stores for the procedures accompanying brand management. It is proved that, besides the traditional communication mix, the most important tool of communicative influence on buyers is the store itself as a place for comfortable shopping. The shop should have a stimulating effect on all five human senses, including sight, smell, hearing, touch, and taste, which helps maximize consumer integration into the buying process.

  18. Pedestrian Flow in the Mean Field Limit

    KAUST Repository

    Haji Ali, Abdul Lateef

    2012-11-01

    We study the mean-field limit of a particle-based system modeling the behavior of many indistinguishable pedestrians as their number increases. The base model is a modified version of Helbing's social force model. In the mean-field limit, the time-dependent density of two-dimensional pedestrians satisfies a four-dimensional integro-differential Fokker-Planck equation. To approximate the solution of the Fokker-Planck equation we use a time-splitting approach and solve the diffusion part using a Crank-Nicolson method. The advection part is solved using a Lax-Wendroff-LeVeque method or an upwind Backward Euler method depending on the advection speed. Moreover, we use multilevel Monte Carlo to estimate observables from the particle-based system. We discuss these numerical methods, and present numerical results showing the convergence of observables that were calculated using the particle-based model as the number of pedestrians increases to those calculated using the probability density function satisfying the Fokker-Planck equation.
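
    A model-problem sketch of the time-splitting idea is given below, in one dimension rather than the paper's four: an upwind advection substep followed by a Crank-Nicolson diffusion substep on a periodic grid. Grid sizes and coefficients are illustrative.

```python
# A minimal sketch of operator splitting for 1-D advection-diffusion:
# upwind advection, then Crank-Nicolson diffusion (periodic boundaries).
import numpy as np

nx, L, dt, steps = 200, 10.0, 0.01, 200
dx = L / nx
x = np.linspace(0, L, nx, endpoint=False)
a, D = 1.0, 0.05                                   # advection speed, diffusivity
rho0 = np.exp(-10 * (x - 3) ** 2)                  # initial density
rho = rho0.copy()

# Crank-Nicolson matrices for the diffusion substep.
r = D * dt / dx**2
lap = (np.roll(np.eye(nx), 1, 0) - 2 * np.eye(nx) + np.roll(np.eye(nx), -1, 0))
A_cn = np.eye(nx) - 0.5 * r * lap
B_cn = np.eye(nx) + 0.5 * r * lap

for _ in range(steps):
    # Substep 1: first-order upwind advection (a > 0: backward difference).
    rho = rho - a * dt / dx * (rho - np.roll(rho, 1))
    # Substep 2: Crank-Nicolson diffusion.
    rho = np.linalg.solve(A_cn, B_cn @ rho)

print("mass conserved:", np.isclose(rho.sum(), rho0.sum()))
```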

  19. Assessment of masticatory performance by means of a color-changeable chewing gum.

    Science.gov (United States)

    Tarkowska, Agnieszka; Katzer, Lukasz; Ahlers, Marcus Oliver

    2017-01-01

    Previous research determined the relevance of masticatory performance with regard to nutritional status, cognitive functions, or stress management. In addition, the measurement of masticatory efficiency contributes to the evaluation of therapeutic successes within the stomatognathic system. However, the question remains unanswered as to what extent modern techniques are able to reproduce the subtle differences in masticatory efficiency within various patient groups. The purpose of this review is to provide an extensive summary of the evaluation of masticatory performance by means of a color-changeable chewing gum with regard to its clinical relevance and applicability. A general overview describing the various methods available for this task has already been published. This review focuses in depth on the research findings available on the technique of measuring masticatory performance by means of color-changeable chewing gum. Described are the mechanism and the differentiability of the color change and methods to evaluate the color changes. Subsequently, research on masticatory performance is conducted with regard to patient age groups, the impact of general diseases and the effect of prosthetic and surgical treatment. The studies indicate that color-changeable chewing gum is a valid and reliable method for the evaluation of masticatory function. Apart from other methods, in clinical practice this technique can enhance dental diagnostics as well as the assessment of therapy outcomes. Copyright © 2016 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  20. The mean squared writhe of alternating random knot diagrams

    Energy Technology Data Exchange (ETDEWEB)

    Diao, Y; Hinson, K [Department of Mathematics and Statistics University of North Carolina at Charlotte, NC 28223 (United States); Ernst, C; Ziegler, U, E-mail: ydiao@uncc.ed [Department of Mathematics and Computer Science, Western Kentucky University, Bowling Green, KY 42101 (United States)

    2010-12-10

    The writhe of a knot diagram is a simple geometric measure of the complexity of the knot diagram. It plays an important role not only in knot theory itself, but also in various applications of knot theory to fields such as molecular biology and polymer physics. The mean squared writhe of any sample of knot diagrams with n crossings is n when for each diagram at each crossing one of the two strands is chosen as the overpass at random with probability one-half. However, such a diagram is usually not minimal. If we restrict ourselves to a minimal knot diagram, then the choice of which strand is the over- or under-strand at each crossing is no longer independent of the neighboring crossings and a larger mean squared writhe is expected for minimal diagrams. This paper explores the effect on the correlation between the mean squared writhe and the diagrams imposed by the condition that diagrams are minimal by studying the writhe of classes of reduced, alternating knot diagrams. We demonstrate that the behavior of the mean squared writhe heavily depends on the underlying space of diagram templates. In particular this is true when the sample space contains only diagrams of a special structure. When the sample space is large enough to contain not only diagrams of a special type, then the mean squared writhe for n crossing diagrams tends to grow linearly with n, but at a faster rate than n, indicating an intrinsic property of alternating knot diagrams. Studying the mean squared writhe of alternating random knot diagrams also provides some insight into the properties of the diagram generating methods used, which is an important area of study in the applications of random knot theory.

  1. The meaning of pharmacological treatment for schizophrenic patients

    Directory of Open Access Journals (Sweden)

    Kelly Graziani Giacchero Vedana

    2014-08-01

    Full Text Available OBJECTIVE: to understand the meaning of medication therapy for schizophrenic patients and formulate a theoretical model about the study phenomenon.METHOD: a qualitative approach was employed, using Symbolic Interactionism as the theoretical and Grounded Theory as the methodological framework. The research was developed between 2008 and 2010 at three community mental health services in the interior of the State of São Paulo - Brazil. Thirty-six patients and thirty-six family members were selected through theoretical sampling. The data were mainly collected through open interviews and observation and simultaneously analyzed through open, axial and selective coding.RESULTS: the meaning of the pharmacotherapy is centered on the phenomenon "Living with a help that bothers", which expresses the patients' ambivalence towards the medication and determines their decision making. The insight, access, limitations for self-administration of the drugs and interactions with family members and the health team influenced the patient's medication-related behavior.CONCLUSION: the theory presented in this study provides a comprehensive, contextualized, motivational and dynamic understanding of the relation the patient experiences and indicates potentials and barriers to follow the medication treatment.

  2. Weighted Mean of Signal Intensity for Unbiased Fiber Tracking of Skeletal Muscles: Development of a New Method and Comparison With Other Correction Techniques.

    Science.gov (United States)

    Giraudo, Chiara; Motyka, Stanislav; Weber, Michael; Resinger, Christoph; Feiweier, Thorsten; Traxler, Hannes; Trattnig, Siegfried; Bogner, Wolfgang

    2017-08-01

    The aim of this study was to investigate the origin of random image artifacts in stimulated echo acquisition mode diffusion tensor imaging (STEAM-DTI), assess the role of averaging, develop an automated artifact postprocessing correction method using weighted mean of signal intensities (WMSIs), and compare it with other correction techniques. Institutional review board approval and written informed consent were obtained. The right calf and thigh of 10 volunteers were scanned on a 3 T magnetic resonance imaging scanner using a STEAM-DTI sequence. Artifacts (ie, signal loss) in STEAM-based DTI, presumably caused by involuntary muscle contractions, were investigated in volunteers and ex vivo (ie, human cadaver calf and turkey leg using the same DTI parameters as for the volunteers). An automated postprocessing artifact correction method based on the WMSI was developed and compared with previous approaches (ie, iteratively reweighted linear least squares and informed robust estimation of tensors by outlier rejection [iRESTORE]). Diffusion tensor imaging and fiber tracking metrics, using different averages and artifact corrections, were compared for region of interest- and mask-based analyses. One-way repeated measures analysis of variance with Greenhouse-Geisser correction and Bonferroni post hoc tests were used to evaluate differences among all tested conditions. Qualitative assessment (ie, image quality) for native and corrected images was performed using the paired t test. Randomly localized and shaped artifacts affected all volunteer data sets. Artifact burden during voluntary muscle contractions increased on average from 23.1% to 77.5%; artifacts were absent ex vivo. Diffusion tensor imaging metrics (mean diffusivity, fractional anisotropy, radial diffusivity, and axial diffusivity) had a heterogeneous behavior, but in the range reported in the literature. Fiber track metrics (number, length, and volume) significantly improved in both calves and thighs after artifact

  3. Measuring kinetics of complex single ion channel data using mean-variance histograms.

    Science.gov (United States)

    Patlak, J B

    1993-07-01

    The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. "Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean
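
    The histogram construction itself is straightforward, as the sketch below shows on a synthetic two-level trace: slide a window of N samples along the record, collect (mean, variance) pairs, and bin them in 2-D. The gating model and noise level are invented for the example.

```python
# A minimal sketch of building a mean-variance histogram from a current trace.
import numpy as np

rng = np.random.default_rng(7)
gate = rng.random(5000) < 0.3                      # crude open/closed gating
trace = np.where(gate, 1.0, 0.0) + 0.05 * rng.normal(size=5000)

N = 10                                             # window width in samples
kernel = np.ones(N) / N
win_mean = np.convolve(trace, kernel, mode="valid")
win_var = np.convolve(trace**2, kernel, mode="valid") - win_mean**2

# Low-variance regions of this 2-D histogram mark the conductance levels;
# counting their events as a function of N recovers dwell-time constants.
hist, mean_edges, var_edges = np.histogram2d(win_mean, win_var, bins=(60, 60))
print(hist.shape)
```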

  4. Identification and Decay Studies of New, Neutron-Rich Isotopes of Bismuth, Lead and Thallium by means of a Pulsed Release Element Selective Method

    CERN Multimedia

    Mills, A; Kugler, E; Van duppen, P L E; Lettry, J

    2002-01-01

    IS354. It is proposed to produce, identify and investigate at ISOLDE new, neutron-rich isotopes of bismuth, lead and thallium at the mass numbers A=215 to A=218. A recently tested operation mode of the PS Booster-ISOLDE complex, taking advantage of the unique pulsed proton beam structure, will be used together with a ThC target in order to increase the selectivity. The decay properties of the new nuclides will be studied by means of β-, γ- and X-ray spectroscopy methods. The expected information on the β-half-lives and excited states will be used for testing and developing the nuclear structure models "south-east" of ²⁰⁸Pb, and will provide input data for the description of the r-process path at very heavy nuclei. The proposed study of the yields and decay properties of the heavy nuclei produced in the spallation of ²³²Th by a 1 GeV proton beam also contributes the data necessary for simulations of a hybrid accelerator-reactor system.

  5. Identification of significant features by the Global Mean Rank test.

    Science.gov (United States)

    Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2014-01-01

    With the introduction of omics-technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to successfully deal with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test we aimed at developing a test that performs robustly under these conditions, while favorably scaling with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data is accounted for without the need of imputation. In extensive simulations comparing MeanRank to other frequently used methods, we found that it performs well with small and large numbers of replicates, feature dependent variance between replicates, and variable regulation across features on simulation data and a recent two-color microarray spike-in dataset. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similar or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.
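
    The core statistic can be sketched compactly: rank all features within each replicate, then average each feature's rank across replicates; consistently extreme mean ranks flag regulated features. The sketch omits the published test's FDR estimation and missing-value handling, and the data are simulated.

```python
# A minimal sketch of the mean-rank statistic on simulated expression changes.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(8)
n_features, n_reps = 1000, 4
logfc = rng.normal(size=(n_features, n_reps))      # log fold-changes
logfc[:20] += 2.0                                  # 20 truly up-regulated features

ranks = np.apply_along_axis(rankdata, 0, logfc)    # rank features per replicate
mean_rank = ranks.mean(axis=1)                     # average rank per feature

top = np.argsort(mean_rank)[-20:]                  # highest mean ranks
print("recovered true positives:", np.sum(top < 20), "/ 20")
```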

  6. Lattice Boltzmann simulations of sound directivity of a cylindrical pipe with mean flow

    International Nuclear Information System (INIS)

    Shi, Yong; Scavone, Gary P; Silva, Andrey R da

    2013-01-01

    This paper proposes a numerical scheme based on the lattice Boltzmann method to tackle the classical problem of sound radiation directivity of pipes issuing subsonic mean flows. The investigation is focused on normal mode radiation, which allows the use of a two-dimensional lattice with an axisymmetric condition at the pipe’s longitudinal axis. The numerical results are initially verified against an exact analytical solution for the sound radiation directivity of an unflanged pipe in the absence of a mean flow, which shows a very good agreement. Thereafter, the sound directivity results in the presence of a subsonic mean flow are compared with both analytical models and experimental data. The results are in good agreement, particularly for low values of the Helmholtz number ka. Moreover, the phenomenon known as ‘zone of relative silence’ was observed, even for mean flows associated with very low Mach numbers, though discrepancies were also observed in the comparison between the numerical results and the analytical predictions. A thorough discussion on the scheme implementation and numerical results is provided in the paper. (paper)

  7. Investigation of periodic systems by means of the generalized Hill method

    International Nuclear Information System (INIS)

    Baitin, A.V.; Ivanov, A.A.

    1994-01-01

    We propose a new method for the investigation of infinite periodic determinants, a generalization of the Hill method. The method has been used for finding the characteristic values of the Hill equation, for finding the band structure of a one-dimensional periodic potential, and for obtaining the dispersion equation for electromagnetic wave propagation in a waveguide with arbitrary periodic plasma density modulation

  8. Stochastic optimization methods

    CERN Document Server

    Marti, Kurt

    2005-01-01

    Optimization problems arising in practice involve random parameters. For the computation of robust optimal solutions, i.e., optimal solutions being insensitive with respect to random parameter variations, deterministic substitute problems are needed. Based on the distribution of the random data, and using decision theoretical concepts, optimization problems under stochastic uncertainty are converted into deterministic substitute problems. Due to the occurring probabilities and expectations, approximative solution techniques must be applied. Deterministic and stochastic approximation methods and their analytical properties are provided: Taylor expansion, regression and response surface methods, probability inequalities, First Order Reliability Methods, convex approximation/deterministic descent directions/efficient points, stochastic approximation methods, differentiation of probability and mean value functions. Convergence results of the resulting iterative solution procedures are given.

  9. Back-reaction beyond the mean field approximation

    International Nuclear Information System (INIS)

    Kluger, Y.

    1993-01-01

    A method for solving an initial value problem of a closed system consisting of an electromagnetic mean field and its quantum fluctuations coupled to fermions is presented. By tailoring the large-N_f expansion method to the Schwinger-Keldysh closed time path (CTP) formulation of the quantum effective action, causality of the resulting equations of motion is ensured, and a systematic energy-conserving and gauge-invariant expansion about the electromagnetic mean field in powers of 1/N_f is developed. The resulting equations may be used to study the quantum nonequilibrium effects of pair creation in strong electric fields and the scattering and transport processes of a relativistic e+e- plasma. Using the Bjorken ansatz of boost-invariant initial conditions, in which the initial electric mean field depends on the proper time only, we show numerical results for the case in which the N_f expansion is truncated at the lowest order, and compare them with those of a phenomenological transport equation

  10. How do mean division shares affect growth and development

    Directory of Open Access Journals (Sweden)

    Shao Liang Frank

    2017-01-01

    The Gini coefficient is widely used in academia to discuss how income inequality affects development and growth. However, different Lorenz curves may provide different development and growth outcomes while still leading to the same Gini coefficient. This paper studies the development effects of "mean division shares", i.e., the share of income (the mean income share) held by people whose household disposable income per capita is below the mean income, and the share of the population (the mean population share) with this income, using panel data. Our analysis explores how this income share and population share impact development and growth. It shows that the income and population shares affect growth in significantly different ways and that an analysis of these metrics provides substantial value compared to that of the Gini coefficient.
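
    Both shares are straightforward to compute from household microdata. The following sketch assumes equally weighted observations of household disposable income per capita; it is an illustration, not the authors' code:

        import numpy as np

        def mean_division_shares(income):
            income = np.asarray(income, dtype=float)
            below = income < income.mean()
            mean_income_share = income[below].sum() / income.sum()
            mean_population_share = below.mean()  # fraction of people below the mean
            return mean_income_share, mean_population_share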

  11. Weakly coupled mean-field game systems

    KAUST Repository

    Gomes, Diogo A.; Patrizi, Stefania

    2016-01-01

    Here, we prove the existence of solutions to first-order mean-field games (MFGs) arising in optimal switching. First, we use the penalization method to construct approximate solutions. Then, we prove uniform estimates for the penalized problem.

  12. *K-means and Cluster Models for Cancer Signatures

    OpenAIRE

    Kakushadze, Zura; Yu, Willie

    2017-01-01

    We present the *K-means clustering algorithm and source code, which expand the statistical clustering methods applied to quantitative finance in https://ssrn.com/abstract=2802753. *K-means is statistically deterministic without requiring the specification of initial centers. We apply *K-means to extracting cancer signatures from genome data without using nonnegative matrix factorization (NMF); *K-means' computational cost is a fraction of NMF's. Using 1389 published samples for 14 cancer types, we find that 3 cancer...

  13. Application of some pattern recognition methods for the early detection of failures at NPP components by means of noise diagnosis

    International Nuclear Information System (INIS)

    Weiss, F.P.

    1985-01-01

    The automation of decisions on the normality or abnormality of the plant condition, based on automated measurements, is an essential step for the integration of noise diagnostics into the control and safety system of a nuclear power plant. Because of the stochastic character of noise-diagnostic measuring quantities, principles of statistical pattern recognition are used in order to reach a decision on the plant condition automatically. Four different pattern recognition methods complementing each other have been developed and tested with data from a WWER-440 type reactor. These four methods are included in a specially written software package. Depending on the stationarity, correlation and probability distribution type of the state-describing features, and on the necessary detection sensitivity to failures, either the decorrelation method, the cluster method, the Parzen method or the distribution test of Wilcoxon, Mann and Whitney has to be applied. The efficiency and the limits of the investigated methods are discussed in detail. In the context of the surveillance of the WWER-440 core by means of the power spectral densities of neutron flux fluctuations, it could be shown both experimentally and theoretically that the logarithmic power spectral densities follow a Gaussian probability distribution. (author)

  14. Mean intraocular pressure in hypertensive adults

    International Nuclear Information System (INIS)

    Irum, S.; Malik, A.M.; Saeed, M.

    2015-01-01

    To determine the mean Intraocular Pressure (IOP) in already diagnosed adult hypertensive patients with different grades of hypertension. Study Design: Cross-sectional descriptive study. Place and Duration of Study: Combined Military Hospital, Lahore, from March 2012 to Aug 2012. Patients and Methods: A total of 178 already diagnosed hypertensive patients were selected. A detailed history of ocular or systemic diseases was taken. Intraocular pressure was measured with help of Goldmann applanation tonometer. Three consecutive readings of IOP of each eye were taken at 30 minutes interval and mean calculated. Blood pressure was recorded in seated position from right upper arm, by mercury sphygmomanometer. Blood pressure measurements were determined by taking the mean value of three systolic and diastolic readings. Results: The results of intraocular pressure (IOP) between various grades of hypertension were determined. There was an increase in mean IOP with rise in blood pressure. The subjects with grade I hypertension showed a mean IOP of 13.95 ± 3.74 mmHg, while grade II and grade III hypertensive subjects had mean IOPs as 18.10 ± 3.32 and 20.21 ± 2.52 mmHg respectively. Conclusion: A higher value of mean IOP was found with increase in systolic and diastolic blood pressures. (author)

  15. Some Algorithms for the Conditional Mean Vector and Covariance Matrix

    Directory of Open Access Journals (Sweden)

    John F. Monahan

    2006-08-01

    We consider here the problem of computing the mean vector and covariance matrix for a conditional normal distribution, considering especially a sequence of problems where the conditioning variables are changing. The sweep operator provides one simple general approach that is easy to implement and update. A second, more goal-oriented general method avoids explicit computation of the vector and matrix, while enabling easy evaluation of the conditional density for likelihood computation or easy generation from the conditional distribution. The covariance structure that arises from the special case of an ARMA(p, q) time series can be exploited for substantial improvements in computational efficiency.
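
    For reference, the quantity both algorithms organize efficiently, the conditional mean and covariance of a multivariate normal, takes only a few lines to compute directly. This is a sketch of the textbook formula, not the sweep-operator implementation described in the paper:

        import numpy as np

        def conditional_normal(mu, Sigma, b, x_b):
            # Mean/covariance of the remaining components given x[b] = x_b,
            # for x ~ N(mu, Sigma); b holds the indices of the conditioning variables.
            n = len(mu)
            a = np.setdiff1d(np.arange(n), b)
            S_aa = Sigma[np.ix_(a, a)]
            S_ab = Sigma[np.ix_(a, b)]
            S_bb = Sigma[np.ix_(b, b)]
            # Solve instead of forming an explicit inverse (cheaper, more stable).
            K = np.linalg.solve(S_bb, S_ab.T).T      # = S_ab @ inv(S_bb)
            mu_cond = mu[a] + K @ (x_b - mu[b])
            Sigma_cond = S_aa - K @ S_ab.T
            return mu_cond, Sigma_cond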

  16. ADVANCING THE STUDY OF VIOLENCE AGAINST WOMEN USING MIXED METHODS: INTEGRATING QUALITATIVE METHODS INTO A QUANTITATIVE RESEARCH PROGRAM

    Science.gov (United States)

    Testa, Maria; Livingston, Jennifer A.; VanZile-Tamsen, Carol

    2011-01-01

    A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women’s sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided. PMID:21307032

  17. Dose-response meta-analysis of differences in means

    Directory of Open Access Journals (Sweden)

    Alessio Crippa

    2016-08-01

    Abstract Background Meta-analytical methods are frequently used to combine dose-response findings expressed in terms of relative risks. However, no methodology has been established when results are summarized in terms of differences in means of quantitative outcomes. Methods We proposed a two-stage approach. A flexible dose-response model is estimated within each study (first stage) taking into account the covariance of the data points (mean differences, standardized mean differences). Parameters describing the study-specific curves are then combined using a multivariate random-effects model (second stage) to address heterogeneity across studies. Results The method is fairly general and can accommodate a variety of parametric functions. Compared to traditional non-linear models (e.g. Emax, logistic), spline models do not assume any pre-specified dose-response curve. Spline models allow inclusion of studies with a small number of dose levels, and almost any shape, even non-monotonic ones, can be estimated using only two parameters. We illustrated the method using dose-response data arising from five clinical trials on an antipsychotic drug, aripiprazole, and improvement in symptoms in schizoaffective patients. Using the Positive and Negative Syndrome Scale (PANSS), pooled results indicated a non-linear association with the maximum change in mean PANSS score equal to 10.40 (95% confidence interval 7.48, 13.30) observed for 19.32 mg/day of aripiprazole. No substantial change in PANSS score was observed above this value. An estimated dose of 10.43 mg/day was found to produce 80% of the maximum predicted response. Conclusion The described approach should be adopted to combine correlated differences in means of quantitative outcomes arising from multiple studies. Sensitivity analysis can be a useful tool to assess the robustness of the overall dose-response curve to different modelling strategies. A user-friendly R package has been developed to facilitate

  18. [Darwinism and the meaning of "meaning"].

    Science.gov (United States)

    Castrodeza, Carlos

    2009-01-01

    The problem of the meaning of life is herewith contemplated from a Darwinian perspective. It is argued how factors such as existential depression, the concern about the meaning of "meaning," the problem of evil, death as the end of our personal identity, happiness as an unachievable goal, etc. may well have an adaptive dimension "controlled" neither by ourselves nor obscure third parties (conspiracy theories) but "simply" by our genes (replicators in general) so that little if anything is to be done to find a radical remedy for the human condition.

  19. Method for thinning specimen

    Science.gov (United States)

    Follstaedt, David M.; Moran, Michael P.

    2005-03-15

    A method for thinning (such as by grinding and polishing) a material surface, using an instrument for moving an article with a discontinuous surface, with an abrasive material dispersed between the material surface and the discontinuous surface; the discontinuous surface of the moving article provides an efficient means of maintaining contact of the abrasive with the material surface. When used to dimple specimens for microscopy analysis, a wheel whose surface has been modified to produce a uniform or random discontinuous surface significantly improves the speed of the dimpling process without loss of quality of finish.

  20. Math modeling in economics. Solutions of problem on use of raw materials and creation of a diet by means of a graphic method

    Directory of Open Access Journals (Sweden)

    Shonin M.Yu.

    2017-02-01

    The work is devoted to the creation of mathematical models for solving problems on the use of raw materials and the composition of a diet. The authors solve these problems numerically by means of a graphical method.
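
    With two decision variables, the kind of linear program the graphical method handles can also be checked numerically. A hypothetical two-food diet problem, with all coefficients invented for illustration, written for scipy.optimize.linprog:

        from scipy.optimize import linprog

        # Minimize cost c @ x subject to nutrient minimums, expressed as
        # A_ub @ x <= b_ub (signs flipped to turn ">=" into "<=").
        c = [2.0, 3.0]                 # cost per unit of food 1 and food 2
        A_ub = [[-30, -10],            # protein:  30*x1 + 10*x2 >= 60
                [-10, -20]]            # vitamins: 10*x1 + 20*x2 >= 40
        b_ub = [-60, -40]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print(res.x, res.fun)          # optimal plan (1.6, 1.2) at cost 6.8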

  1. Content and Methods used to Train Tobacco Cessation Treatment Providers: An International Survey.

    Science.gov (United States)

    Kruse, Gina R; Rigotti, Nancy A; Raw, Martin; McNeill, Ann; Murray, Rachael; Piné-Abata, Hembadoon; Bitton, Asaf; McEwen, Andy

    2017-12-01

    There are limited existing data describing the training methods used to educate tobacco cessation treatment providers around the world. To measure the prevalence of tobacco cessation treatment content, skills training and teaching methods reported by tobacco treatment training programs across the world. Web-based survey in May-September 2013 among tobacco cessation training experts across six geographic regions and four World Bank income levels. Response rate was 73% (84 of 115 countries contacted). Of 104 individual programs from 84 countries, most reported teaching brief advice (78%) and one-to-one counseling (74%); telephone counseling was uncommon (33%). Overall, teaching of knowledge topics was more commonly reported than skills training. Programs in lower income countries less often reported teaching about medications, behavioral treatments and biomarkers and less often reported skills-based training about interviewing clients, medication management, biomarker measurement, assessing client outcomes, and assisting clients with co-morbidities. Programs reported a median 15 hours of training. Face-to-face training was common (85%); online programs were rare (19%). Almost half (47%) included no learner assessment. Only 35% offered continuing education. Nearly all programs reported teaching evidence-based treatment modalities in a face-to-face format. Few programs delivered training online or offered continuing education. Skills-based training was less common among low- and middle-income countries (LMICs). There is a large unmet need for tobacco treatment training protocols which emphasize practical skills, and which are more rapidly scalable than face-to-face training in LMICs.

  2. A method to provide rapid in situ determination of tip radius in dynamic atomic force microscopy

    International Nuclear Information System (INIS)

    Santos, Sergio; Guang Li; Souier, Tewfik; Gadelrab, Karim; Chiesa, Matteo; Thomson, Neil H.

    2012-01-01

    We provide a method to characterize the tip radius of an atomic force microscope in situ by monitoring the dynamics of the cantilever in ambient conditions. The key concept is that the value of free amplitude for which transitions from the attractive to the repulsive force regime are observed strongly depends on the curvature of the tip. In practice, the smaller the value of free amplitude required to observe a transition, the sharper the tip. This general behavior is remarkably independent of the properties of the sample and the cantilever characteristics, and shows the strong dependence of the transitions on the tip radius. The main advantage of this method is rapid in situ characterization, which enables one to continuously monitor the tip size during experiments. Further, we show how to reproducibly shape the tip from a given initial size to any chosen larger size. This approach, combined with in situ tip size monitoring, enables quantitative comparison of materials measurements between samples. These methods are set to allow quantitative data acquisition and make direct data comparison readily available in the community.

  3. Assessing implementation difficulties in tobacco use prevention and cessation counselling among dental providers.

    Science.gov (United States)

    Amemori, Masamitsu; Michie, Susan; Korhonen, Tellervo; Murtomaa, Heikki; Kinnunen, Taru H

    2011-05-26

    Tobacco use adversely affects oral health. Clinical guidelines recommend that dental providers promote tobacco abstinence and provide patients who use tobacco with brief tobacco use cessation counselling. Research shows that these guidelines are seldom implemented, however. To improve guideline adherence and to develop effective interventions, it is essential to understand provider behaviour and challenges to implementation. This study aimed to develop a theoretically informed measure for assessing among dental providers implementation difficulties related to tobacco use prevention and cessation (TUPAC) counselling guidelines, to evaluate those difficulties among a sample of dental providers, and to investigate a possible underlying structure of applied theoretical domains. A 35-item questionnaire was developed based on key theoretical domains relevant to the implementation behaviours of healthcare providers. Specific items were drawn mostly from the literature on TUPAC counselling studies of healthcare providers. The data were collected from dentists (n = 73) and dental hygienists (n = 22) in 36 dental clinics in Finland using a web-based survey. Of 95 providers, 73 participated (76.8%). We used Cronbach's alpha to ascertain the internal consistency of the questionnaire. Mean domain scores were calculated to assess different aspects of implementation difficulties and exploratory factor analysis to assess the theoretical domain structure. The authors agreed on the labels assigned to the factors on the basis of their component domains and the broader behavioural and theoretical literature. Internal consistency values for theoretical domains varied from 0.50 ('emotion') to 0.71 ('environmental context and resources'). The domain environmental context and resources had the lowest mean score (21.3%; 95% confidence interval [CI], 17.2 to 25.4) and was identified as a potential implementation difficulty. The domain emotion provided the highest mean score (60%; 95% CI, 55

  4. Mean-field learning for satisfactory solutions

    KAUST Repository

    Tembine, Hamidou

    2013-12-01

    One of the fundamental challenges in distributed interactive systems is to design efficient, accurate, and fair solutions. In such systems, a satisfactory solution is an innovative approach that aims to provide all players with a satisfactory payoff anytime and anywhere. In this paper we study fully distributed learning schemes for satisfactory solutions in games with continuous action space. Considering games where the payoff function depends only on own-action and an aggregate term, we show that the complexity of learning systems can be significantly reduced, leading to the so-called mean-field learning. We provide sufficient conditions for convergence to a satisfactory solution and we give explicit convergence time bounds. Then, several acceleration techniques are used in order to improve the convergence rate. We illustrate numerically the proposed mean-field learning schemes for quality-of-service management in communication networks. © 2013 IEEE.

  5. Evaluating Patient Perspectives of Provider Professionalism on Twitter in an Academic Obstetrics and Gynecology Clinic: Patient Survey

    Science.gov (United States)

    Stansfield, R Brent; Opipari, AnneMarie; Hammoud, Maya M

    2018-01-01

    Background One-third of Americans use social media websites as a source of health care information. Twitter, a microblogging site that allows users to place 280-character posts—or tweets—on the Web, is emerging as an important social media platform for health care. However, most guidelines on medical professionalism on social media are based on expert opinion. Objective This study sought to examine if provider Twitter profiles with educational tweets were viewed as more professional than profiles with personal tweets or a mixture of the two, and to determine the impact of provider gender on perceptions of professionalism in an academic obstetrics and gynecology clinic. Methods This study randomized obstetrics and gynecology patients at the University of Michigan Von Voigtlander Clinic to view one of six medical provider Twitter profiles, which differed in provider gender and the nature of tweets. Each participant answered 10 questions about their perception of the provider’s professionalism based on the Twitter profile content. Results The provider profiles with educational tweets alone received higher mean professionalism scores than profiles with personal tweets. Specifically, the female and male provider profiles with exclusively educational tweets had the highest and second highest overall mean professionalism ratings at 4.24 and 3.85, respectively. In addition, the female provider profiles received higher mean professionalism ratings than male provider profiles with the same content. The female profile with mixed content received a mean professionalism rating of 3.38 compared to 3.24 for the male mixed-content profile, and the female profile with only personal content received a mean professionalism rating of 3.68 compared to 2.68 for the exclusively personal male provider profile. Conclusions This study showed that in our obstetrics and gynecology clinic, patients perceived providers with educational profiles as more professional than those with a

  6. Are judgments a form of data clustering? Reexamining contrast effects with the k-means algorithm.

    Science.gov (United States)

    Boillaud, Eric; Molina, Guylaine

    2015-04-01

    A number of theories have been proposed to explain in precise mathematical terms how statistical parameters and sequential properties of stimulus distributions affect category ratings. Various contextual factors such as the mean, the midrange, and the median of the stimuli; the stimulus range; the percentile rank of each stimulus; and the order of appearance have been assumed to influence judgmental contrast. A data clustering reinterpretation of judgmental relativity is offered wherein the influence of the initial choice of centroids on judgmental contrast involves 2 combined frequency and consistency tendencies. Accounts of the k-means algorithm are provided, showing good agreement with effects observed on multiple distribution shapes and with a variety of interaction effects relating to the number of stimuli, the number of response categories, and the method of skewing. Experiment 1 demonstrates that centroid initialization accounts for contrast effects obtained with stretched distributions. Experiment 2 demonstrates that the iterative convergence inherent to the k-means algorithm accounts for the contrast reduction observed across repeated blocks of trials. The concept of within-cluster variance minimization is discussed, as is the applicability of a backward k-means calculation method for inferring, from empirical data, the values of the centroids that would serve as a representation of the judgmental context. (c) 2015 APA, all rights reserved.

  7. Methods of Complex Data Processing from Technical Means of Monitoring

    Directory of Open Access Journals (Sweden)

    Serhii Tymchuk

    2017-03-01

    The problem of processing the information from different types of monitoring equipment was examined. As a possible solution, the use of generalized information-processing methods was proposed, based on clustering of combined territorial information sources for monitoring and on a frame model of the knowledge base for identifying monitored objects. The clustering methods were built on the Lance-Williams hierarchical agglomerative procedure with the Ward metric. The frame model of the knowledge base was built using the tools of object-oriented modeling.
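
    SciPy implements the Lance-Williams update formulas behind agglomerative clustering, so the clustering stage can be sketched as follows; the synthetic feature vectors stand in for the monitoring data, which the paper does not publish:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        # Hypothetical feature vectors from territorially grouped monitoring sources.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 4))

        # Agglomerative clustering with the Ward criterion; SciPy's linkage
        # uses the Lance-Williams recurrence to update inter-cluster distances.
        Z = linkage(X, method="ward")
        labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters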

  8. Probabilistic methods in combinatorial analysis

    CERN Document Server

    Sachkov, Vladimir N

    2014-01-01

    This 1997 work explores the role of probabilistic methods for solving combinatorial problems. These methods not only provide the means of efficiently using such notions as characteristic and generating functions, the moment method and so on but also let us use the powerful technique of limit theorems. The basic objects under investigation are nonnegative matrices, partitions and mappings of finite sets, with special emphasis on permutations and graphs, and equivalence classes specified on sequences of finite length consisting of elements of partially ordered sets; these specify the probabilist

  9. 24 CFR 1000.54 - What procedures apply to complaints arising out of any of the methods of providing for Indian...

    Science.gov (United States)

    2010-04-01

    24 CFR § 1000.54 (Housing and Urban Development, 2010-04-01): What procedures apply to complaints arising out of any of the methods of providing for Indian preference?

  10. Mean-value identities as an opportunity for Monte Carlo error reduction.

    Science.gov (United States)

    Fernandez, L A; Martin-Mayor, V

    2009-05-01

    In the Monte Carlo simulation of both lattice field theories and of models of statistical mechanics, identities verified by exact mean values, such as Schwinger-Dyson equations, Guerra relations, Callen identities, etc., provide well-known and sensitive tests of thermalization bias as well as checks of pseudo-random-number generators. We point out that they can be further exploited as control variates to reduce statistical errors. The strategy is general, very simple, and almost costless in CPU time. The method is demonstrated in the two-dimensional Ising model at criticality, where the CPU gain factor lies between 2 and 4.
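
    The control-variate idea is generic: if an observable O has an exactly known mean, then for any estimator A the combination A - c(O - E[O]) is unbiased for E[A] and, with a suitable c, has smaller variance. A toy sketch outside any lattice context, with an illustrative integrand rather than the Ising application:

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.uniform(size=100_000)

        a = np.exp(x)   # estimate E[exp(x)] = e - 1
        o = x           # control variate with exactly known mean E[x] = 1/2

        C = np.cov(a, o)
        c = C[0, 1] / C[1, 1]         # variance-optimal coefficient
        a_cv = a - c * (o - 0.5)      # same mean, reduced variance

        for est in (a, a_cv):
            print(est.mean(), est.std(ddof=1) / np.sqrt(est.size))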

  11. A method for analyzing the business case for provider participation in the National Cancer Institute's Community Clinical Oncology Program and similar federally funded, provider-based research networks.

    Science.gov (United States)

    Reiter, Kristin L; Song, Paula H; Minasian, Lori; Good, Marjorie; Weiner, Bryan J; McAlearney, Ann Scheck

    2012-09-01

    The Community Clinical Oncology Program (CCOP) plays an essential role in the efforts of the National Cancer Institute (NCI) to increase enrollment in clinical trials. Currently, there is little practical guidance in the literature to assist provider organizations in analyzing the return on investment (ROI), or business case, for establishing and operating a provider-based research network (PBRN) such as the CCOP. In this article, the authors present a conceptual model of the business case for PBRN participation, a spreadsheet-based tool and advice for evaluating the business case for provider participation in a CCOP organization. A comparative, case-study approach was used to identify key components of the business case for hospitals attempting to support a CCOP research infrastructure. Semistructured interviews were conducted with providers and administrators. Key themes were identified and used to develop the financial analysis tool. Key components of the business case included CCOP start-up costs, direct revenue from the NCI CCOP grant, direct expenses required to maintain the CCOP research infrastructure, and incidental benefits, most notably downstream revenues from CCOP patients. The authors recognized the value of incidental benefits as an important contributor to the business case for CCOP participation; however, currently, this component is not calculated. The current results indicated that providing a method for documenting the business case for CCOP or other PBRN involvement will contribute to the long-term sustainability and expansion of these programs by improving providers' understanding of the financial implications of participation. Copyright © 2011 American Cancer Society.

  12. Maternity Care Services Provided by Family Physicians in Rural Hospitals.

    Science.gov (United States)

    Young, Richard A

    The purpose of this study was to describe how many rural family physicians (FPs) and other types of providers currently provide maternity care services, and the requirements to obtain privileges. Chief executive officers of rural hospitals were purposively sampled in 15 geographically diverse states with significant rural areas in 2013 to 2014. Questions were asked about the provision of maternity care services, the physicians who perform them, and qualifications required to obtain maternity care privileges. Analysis used descriptive statistics, with comparisons between the states, community rurality, and hospital size. The overall response rate was 51.2% (437/854). Among all identified hospitals, 44.9% provided maternity care services, a proportion that varied considerably by state (range, 17-83%). Among hospitals providing maternity care, a mean of 271 babies were delivered per year, 27% by cesarean delivery. A mean of 7.0 FPs had privileges in these hospitals, of which 2.8 provided maternity care and 1.8 performed cesarean deliveries. The percentage of FPs who provide maternity care (mean, 48%; range, 10-69%) and the percentage of maternity care providers who are FPs (mean, 63%; range, 10-88%) also varied by state. FPs continue to provide maternity care services in US rural hospitals, including cesarean deliveries. Some family medicine residencies should continue to train their residents to provide these services to keep replenishing this valuable workforce. © Copyright 2017 by the American Board of Family Medicine.

  13. A combined usage of stochastic and quantitative risk assessment methods in the worksites: Application on an electric power provider

    International Nuclear Information System (INIS)

    Marhavilas, P.K.; Koulouriotis, D.E.

    2012-01-01

    An individual method can build neither a realistic forecasting model nor a risk assessment process for worksites; future perspectives should focus on a combined forecasting/estimation approach. The main purpose of this paper is to gain insight into a methodological framework for risk prediction and estimation that combines three different methods: the proportional quantitative-risk-assessment technique (PRAT), the time-series stochastic process (TSP), and the method of estimating societal risk (SRE) by F–N curves. In order to demonstrate the usefulness of the combined use of stochastic and quantitative risk assessment methods, an application to an electric power provider is presented, using empirical data.

  14. Meaning-centered dream work with hospice patients: A pilot study.

    Science.gov (United States)

    Wright, Scott T; Grant, Pei C; Depner, Rachel M; Donnelly, James P; Kerr, Christopher W

    2015-10-01

    Hospice patients often struggle with loss of meaning, while many experience meaningful dreams. The purpose of this study was to conduct a preliminary exploration into the process and therapeutic outcomes of meaning-centered dream work with hospice patients. A meaning-centered variation of the cognitive-experiential model of dream work (Hill, 1996; 2004) was tested with participants. This variation was influenced by the tenets of meaning-centered psychotherapy (Breitbart et al., 2012). A total of 12 dream-work sessions were conducted with 7 hospice patients (5 women), and session transcripts were analyzed using the consensual qualitative research (CQR) method (Hill, 2012). Participants also completed measures of gains from dream interpretation in terms of existential well-being and quality of life. Participants' dreams generally featured familiar settings and living family and friends. Reported images from dreams were usually connected to feelings, relationships, and the concerns of waking life. Participants typically interpreted their dreams as meaning that they needed to change their way of thinking, address legacy concerns, or complete unfinished business. Generally, participants developed and implemented action plans based on these interpretations, despite their physical limitations. Participants described dream-work sessions as meaningful, comforting, and helpful. High scores on a measure of gains from dream interpretation were reported, consistent with qualitative findings. No adverse effects were reported or indicated by assessments. Our results provided initial support for the feasibility and helpfulness of dream work in this population. Implications for counseling with the dying and directions for future research were also explored.

  15. Elastic K-means using posterior probability.

    Science.gov (United States)

    Zheng, Aihua; Jiang, Bo; Li, Yan; Zhang, Xuehan; Ding, Chris

    2017-01-01

    The widely used K-means clustering is a hard clustering algorithm. Here we propose an Elastic K-means clustering model (EKM) using posterior probability, with a soft capability whereby each data point can belong to multiple clusters fractionally, and show the benefit of the proposed Elastic K-means. Furthermore, in many applications, besides vector attribute information, pairwise relations (graph information) are also available. Thus we integrate EKM with Normalized Cut graph clustering into a single clustering formulation. Finally, we provide several matrix inequalities which are useful for matrix formulations of learning models. Based on these results, we prove the correctness and the convergence of EKM algorithms. Experimental results on six benchmark datasets demonstrate the effectiveness of the proposed EKM and its integrated model.

  16. On the meaning of sink capture efficiency and sink strength for point defects

    International Nuclear Information System (INIS)

    Mansur, L.K.; Wolfer, W.G.

    1982-01-01

    The concepts of sink capture efficiency and sink strength for point defects are central to the theory of point defect reactions in materials undergoing irradiation. Two fundamentally different definitions of the capture efficiency are in current use. The essential difference can be stated simply. The conventional meaning denotes a measure of the loss rate of point defects to sinks per unit mean point defect concentration. A second definition of capture efficiency, introduced recently, gives a measure of the point defect loss rate without normalization to the mean point defect concentration. The relationship between the two capture efficiencies is here derived. By stating the relationship we hope to eliminate confusion caused by comparisons of the two types of capture efficiencies at face value and to provide a method of obtaining one from the other. Internally consistent usage of either of the capture efficiencies leads to the same results for the calculation of measuable quantities, as is required physically. (orig.)

  17. PET reconstruction via nonlocal means induced prior.

    Science.gov (United States)

    Hou, Qingfeng; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Ma, Jianhua

    2015-01-01

    The traditional Bayesian priors for maximum a posteriori (MAP) reconstruction methods usually incorporate local neighborhood interactions that penalize large deviations in parameter estimates for adjacent pixels; therefore, only local pixel differences are utilized. This limits their ability to penalize image roughness. To achieve high-quality PET image reconstruction, this study investigates a MAP reconstruction strategy incorporating a nonlocal means induced (NLMi) prior (NLMi-MAP), which makes it possible to utilize global similarity information in the image. The present NLMi prior approximates the derivative of the Gibbs energy function by an NLM filtering process. Specifically, the NLMi prior is obtained by subtracting the current image estimate from its NLM-filtered version and feeding the residual error back to the reconstruction filter to yield the new image estimate. We tested the present NLMi-MAP method with simulated and real PET datasets. Comparison studies with conventional filtered backprojection (FBP) and a few iterative reconstruction methods clearly demonstrate that the present NLMi-MAP method performs better in lowering noise, preserving image edges and achieving a higher signal-to-noise ratio (SNR). Extensive experimental results show that the NLMi-MAP method outperforms the existing methods in terms of cross profile, noise reduction, SNR, root mean square error (RMSE) and correlation coefficient (CORR).
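
    A schematic of the feedback step, written against scikit-image's nonlocal means filter. The step size, patch parameters and likelihood gradient below are placeholders, and this is a sketch of the idea rather than the authors' algorithm:

        import numpy as np
        from skimage.restoration import denoise_nl_means

        def nlmi_prior_gradient(img, h=0.05):
            # The NLMi prior approximates the Gibbs-energy derivative by the
            # residual between the current estimate and its NLM-filtered version.
            filtered = denoise_nl_means(img, patch_size=5, patch_distance=6, h=h)
            return img - filtered

        def map_update(img, likelihood_grad, beta=0.1):
            # One hypothetical gradient-style MAP step: data term plus NLMi prior.
            return img - beta * (likelihood_grad(img) + nlmi_prior_gradient(img))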

  18. Realization of the Evristic Combination Methods by Means of Computer Graphics

    Directory of Open Access Journals (Sweden)

    S. A. Novoselov

    2012-01-01

    The paper looks at ways of enhancing and stimulating the creative activity and initiative of pedagogic students – the prospective specialists called to educate and bring up socially and professionally competent, originally thinking, versatile personalities. For developing their creative abilities the author recommends introducing the heuristic combination methods applied for facilitating engineering creativity; associative-synectic technology; and computer graphics tools. The paper contains a comparative analysis of the main heuristic method operations and of the computer graphics editor in creating a visual composition. Examples of implementing the heuristic combination methods are described, along with extracts from the laboratory classes designed to develop creativity and its motivation. Testing of the given method in several universities confirms the prospects of enhancing the students' learning and creative activities.

  19. Merging Belief Propagation and the Mean Field Approximation: A Free Energy Approach

    DEFF Research Database (Denmark)

    Riegler, Erwin; Kirkelund, Gunvor Elisabeth; Manchón, Carles Navarro

    2013-01-01

    We present a joint message passing approach that combines belief propagation and the mean field approximation. Our analysis is based on the region-based free energy approximation method proposed by Yedidia et al. We show that the message passing fixed-point equations obtained with this combination correspond to stationary points of a constrained region-based free energy approximation. Moreover, we present a convergent implementation of these message passing fixed-point equations provided that the underlying factor graph fulfills certain technical conditions. In addition, we show how to include hard

  20. Contribution to the sample mean plot for graphical and numerical sensitivity analysis

    International Nuclear Information System (INIS)

    Bolado-Lavin, R.; Castaings, W.; Tarantola, S.

    2009-01-01

    The contribution to the sample mean plot, originally proposed by Sinclair, is revived and further developed as a practical tool for global sensitivity analysis. The potential of this simple and versatile graphical tool is discussed. Beyond the qualitative assessment provided by this approach, a statistical test is proposed for sensitivity analysis. A case study that simulates the transport of radionuclides through the geosphere from an underground disposal vault containing nuclear waste is considered as a benchmark. The new approach is tested against a very efficient sensitivity analysis method based on state-dependent parameter meta-modelling

  1. The mean and variance of phylogenetic diversity under rarefaction.

    Science.gov (United States)

    Nipperess, David A; Matsen, Frederick A

    2013-06-01

    Phylogenetic diversity (PD) depends on sampling depth, which complicates the comparison of PD between samples of different depth. One approach to dealing with differing sample depth for a given diversity statistic is to rarefy, which means to take a random subset of a given size of the original sample. Exact analytical formulae for the mean and variance of species richness under rarefaction have existed for some time, but no such solution exists for PD. We have derived exact formulae for the mean and variance of PD under rarefaction. We confirm that these formulae are correct by comparing the exact solution mean and variance to those calculated by repeated random (Monte Carlo) subsampling of a dataset of stem counts of woody shrubs of Toohey Forest, Queensland, Australia. We also demonstrate the application of the method using two examples: identifying hotspots of mammalian diversity in Australasian ecoregions, and characterising the human vaginal microbiome. There is a very high degree of correspondence between the analytical and random subsampling methods for calculating the mean and variance of PD under rarefaction, although the Monte Carlo method requires a large number of random draws to converge on the exact solution for the variance. Rarefaction of mammalian PD of ecoregions in Australasia to a common standard of 25 species reveals very different rank orderings of ecoregions, indicating quite different hotspots of diversity than those obtained for unrarefied PD. The application of these methods to the vaginal microbiome shows that a classical score used to quantify bacterial vaginosis is correlated with the shape of the rarefaction curve. The analytical formulae for the mean and variance of PD under rarefaction are both exact and more efficient than repeated subsampling. Rarefaction of PD allows for many applications where comparison of samples of different depth is required.
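
    The species-richness analogue shows the structure of the calculation: the exact expected value under rarefaction has a closed hypergeometric form, while the Monte Carlo route repeatedly subsamples. The paper's PD formulae extend the same idea branch by branch over the tree; the sketch below covers only the richness case:

        import numpy as np
        from math import comb

        def expected_richness(counts, n):
            # Exact mean species richness in a random subsample of size n
            # (math.comb returns 0 when N - Ni < n, as the formula requires).
            N = sum(counts)
            return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

        def monte_carlo_richness(counts, n, reps=10_000, seed=0):
            rng = np.random.default_rng(seed)
            pool = np.repeat(np.arange(len(counts)), counts)
            draws = [np.unique(rng.choice(pool, size=n, replace=False)).size
                     for _ in range(reps)]
            return np.mean(draws), np.var(draws)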

  2. Men’s Grief, Meaning and Growth

    DEFF Research Database (Denmark)

    Spaten, Ole Michael; Byrialsen, Mia Nørremark; Langdridge, Darren

    2012-01-01

    There is a scarcity of research on men's experience of bereavement (Reiniche, 2006), particularly in relation to qualitative research that focuses on the meaning of such an experience. This paper seeks to address this scarcity by presenting the findings from a phenomenological study of the life-world. The phenomenological method of Van Manen (1990) was used to uncover three key themes, labelled grief and self-reflection, meaning of life and loss, and re-figuring the life-world. These themes are discussed in the light of broader existential concerns and the extant literature.

  3. System and method of providing quick thermal comfort with reduced energy by using directed spot conditioning

    Science.gov (United States)

    Wang, Mingyu; Kadle, Prasad S.; Ghosh, Debashis; Zima, Mark J.; Wolfe, IV, Edward; Craig, Timothy D

    2016-10-04

    A heating, ventilation, and air conditioning (HVAC) system and a method of controlling a HVAC system that is configured to provide a perceived comfortable ambient environment to an occupant seated in a vehicle cabin. The system includes a nozzle configured to direct an air stream from the HVAC system to the location of a thermally sensitive portion of the body of the occupant. The system also includes a controller configured to determine an air stream temperature and an air stream flow rate necessary to establish the desired heat supply rate for the sensitive portion and provide a comfortable thermal environment by thermally isolating the occupant from the ambient vehicle cabin temperature. The system may include a sensor to determine the location of the sensitive portion. The nozzle may include a thermoelectric device to heat or cool the air stream.

  4. 3D dose distribution calculation in a voxelized human phantom by means of Monte Carlo method

    International Nuclear Information System (INIS)

    Abella, V.; Miro, R.; Juste, B.; Verdu, G.

    2010-01-01

    The aim of this work is to provide the reconstruction of a real human voxelized phantom by means of a MatLab program, and the simulation of the irradiation of such a phantom with the photon beam generated in a Theratron 780 (MDS Nordion) 60Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. The project results in 3D dose mapping calculations inside the voxelized anthropomorphic head phantom. The program provides the voxelization by first processing the CT slices; the process follows a two-dimensional pixel and material identification algorithm on each slice and three-dimensional interpolation in order to describe the phantom geometry via small cubic cells, resulting in an output in MCNP input deck format. Dose rates are calculated by using the MCNP5 tool FMESH, a superimposed mesh tally, which gives the track-length estimation of the particle flux in units of particles/cm2. Furthermore, the particle flux is converted into dose by using the conversion coefficients extracted from the NIST Physical Reference Data. The voxelization using a three-dimensional interpolation technique, in combination with the use of the FMESH tool of the MCNP Monte Carlo code, offers an optimal simulation which results in 3D dose mapping calculations inside anthropomorphic phantoms. This tool is very useful in radiation treatment assessments, in which voxelized phantoms are widely utilized.

  5. Nanoscale waveguiding methods

    Directory of Open Access Journals (Sweden)

    Wang Chia-Jean

    2007-01-01

    While 32 nm lithography technology is on the horizon for integrated circuit (IC) fabrication, matching the pace of miniaturization with optics has been hampered by the diffraction limit. However, development of nanoscale components and guiding methods is burgeoning through advances in fabrication techniques and materials processing. As waveguiding presents the fundamental issue and cornerstone for ultra-high-density photonic ICs, we examine the current state of methods in the field. Namely, plasmonic, metal slot and negative-dielectric-based waveguides, as well as a few sub-micrometer techniques such as nanoribbon, high-index-contrast and photonic crystal waveguides, are investigated in terms of construction, transmission, and limitations. Furthermore, we discuss in detail quantum dot (QD) arrays as a gain-enabled and flexible means to transmit energy through straight paths and sharp bends. Modeling, fabrication and test results are provided and show that the QD waveguide may be effective as an alternate means to transfer light on sub-diffraction dimensions.

  6. Water-equivalent solid sources prepared by means of two distinct methods

    International Nuclear Information System (INIS)

    Koskinas, Marina F.; Yamazaki, Ione M.; Potiens Junior, Ademar

    2014-01-01

    The Nuclear Metrology Laboratory at IPEN is involved in developing radioactive water-equivalent solid sources prepared from an aqueous solution of acrylamide using two distinct polymerization methods. One of them is polymerization by a high dose of 60Co irradiation; in the other method the solid polyacrylamide matrix is obtained from an aqueous solution composed of acrylamide, catalyzers and an aliquot of a radionuclide. The sources have been prepared in cylindrical geometry. In this paper, a study of the distribution of radioactive material in the solid sources prepared by both methods is presented. (author)

  7. Two Numerical Approaches to Stationary Mean-Field Games

    KAUST Repository

    Almulla, Noha; Ferreira, Rita; Gomes, Diogo A.

    2016-01-01

    Here, we consider numerical methods for stationary mean-field games (MFG) and investigate two classes of algorithms. The first one is a gradient-flow method based on the variational characterization of certain MFG. The second one uses monotonicity properties of MFG. We illustrate our methods with various examples, including one-dimensional periodic MFG, congestion problems, and higher-dimensional models.

  9. The global kernel k-means algorithm for clustering in feature space.

    Science.gov (United States)

    Tzortzis, Grigorios F; Likas, Aristidis C

    2009-07-01

    Kernel k-means is an extension of the standard k-means clustering algorithm that identifies nonlinearly separable clusters. In order to overcome the cluster initialization problem associated with this method, we propose the global kernel k-means algorithm, a deterministic and incremental approach to kernel-based clustering. Our method adds one cluster at each stage, through a global search procedure consisting of several executions of kernel k-means from suitable initializations. This algorithm does not depend on cluster initialization, identifies nonlinearly separable clusters, and, due to its incremental nature and search procedure, locates near-optimal solutions avoiding poor local minima. Furthermore, two modifications are developed to reduce the computational cost that do not significantly affect the solution quality. The proposed methods are extended to handle weighted data points, which enables their application to graph partitioning. We experiment with several data sets and the proposed approach compares favorably to kernel k-means with random restarts.
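
    A simplified sketch of the incremental strategy, using plain Euclidean k-means from scikit-learn in place of kernel k-means and a random subset of candidate initializations; the paper's global search and kernel-space formulation are richer than this:

        import numpy as np
        from sklearn.cluster import KMeans

        def global_kmeans(X, n_clusters, n_candidates=10, seed=0):
            rng = np.random.default_rng(seed)
            centers = X.mean(axis=0, keepdims=True)   # optimal 1-cluster solution
            best = KMeans(n_clusters=1, init=centers, n_init=1).fit(X)
            for k in range(2, n_clusters + 1):
                best = None
                for c in X[rng.choice(len(X), size=n_candidates, replace=False)]:
                    init = np.vstack([centers, c])    # keep old centers, try a new one
                    km = KMeans(n_clusters=k, init=init, n_init=1).fit(X)
                    if best is None or km.inertia_ < best.inertia_:
                        best = km
                centers = best.cluster_centers_
            return best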

  10. Mean-field theory and solitonic matter

    International Nuclear Information System (INIS)

    Cohen, T.D.

    1989-01-01

    Finite density solitonic matter is considered in the context of quantum field theory. Mean-field theory, which provides a reasonable description for single-soliton properties gives rise to a crystalline description. A heuristic description of solitonic matter is given which shows that the low-density limit of solitonic matter (the limit which is presumably relevant for nuclear matter) does not commute with the mean-field theory limit and gives rise to a Fermi-gas description of the system. It is shown on the basis of a formal expansion of simple soliton models in terms of the coupling constant why one expects mean-field theory to fail at low densities and why the corrections to mean-field theory are nonperturbative. This heuristic description is tested against an exactly solvable 1+1 dimensional model (the sine-Gordon model) and found to give the correct behavior. The relevance of these results to the program of doing nuclear physics based on soliton models is discussed. (orig.)

  11. Unsafe abortion in urban and rural Tanzania: method, provider and consequences

    DEFF Research Database (Denmark)

    Rasch, Vibeke; Kipingili, Rose

    2009-01-01

    OBJECTIVE: To describe unsafe abortion methods and associated health consequences in Tanzania, where induced abortion is restricted by law but common and known to account for a disproportionate share of hospital admissions. METHOD: Cross-sectional study of women admitted with alleged miscarriage...

  12. Directly measuring mean and variance of infinite-spectrum observables such as the photon orbital angular momentum.

    Science.gov (United States)

    Piccirillo, Bruno; Slussarenko, Sergei; Marrucci, Lorenzo; Santamato, Enrico

    2015-10-19

    The standard method for experimentally determining the probability distribution of an observable in quantum mechanics is the measurement of the observable spectrum. However, for infinite-dimensional degrees of freedom, this approach would require ideally infinite or, more realistically, a very large number of measurements. Here we consider an alternative method which can yield the mean and variance of an observable of an infinite-dimensional system by measuring only a two-dimensional pointer weakly coupled with the system. In our demonstrative implementation, we determine both the mean and the variance of the orbital angular momentum of a light beam without acquiring the entire spectrum, but measuring the Stokes parameters of the optical polarization (acting as pointer), after the beam has suffered a suitable spin-orbit weak interaction. This example can provide a paradigm for a new class of useful weak quantum measurements.

  13. Provider self-disclosure during contraceptive counseling.

    Science.gov (United States)

    McLean, Merritt; Steinauer, Jody; Schmittdiel, Julie; Chan, Pamela; Dehlendorf, Christine

    2017-02-01

    Provider self-disclosure (PSD) - defined as providers making statements regarding personal information to patients - has not been well characterized in the context of contraceptive counseling. In this study, we describe the incidence, content and context of contraceptive PSD. This mixed methods analysis used data from the Provider-Patient Contraceptive Counseling study, for which 349 family planning patients were recruited from 2009 to 2012 from six clinics in the San Francisco Bay Area. Audio-recordings from their visits were analyzed for the presence or absence of PSD, and those visits with evidence of PSD were analyzed using qualitative methods. The associations of patient and provider demographics and patient satisfaction measures, obtained from survey data, with PSD were analyzed using bivariable and multivariable analyses. Thirty-seven percent of providers showed evidence of PSD during at least one visit, and PSD occurred in 9% of clinic visits. Fifty-four percent of PSD statements were about intrauterine devices. About half of PSD statements occurred prior to the final selection of the contraceptive method and appeared to influence the choice of method. In post-visit surveys, all patients who reported receiving PSD considered it to be appropriate, and patient-reported PSD was not statistically associated with measures of patient satisfaction. This study provides some support for the appropriateness of PSD during family planning encounters, at least as practiced during the sampled visits. Further research could explore whether this counseling strategy has an impact on patients' ability to identify the best contraceptive methods for them. In this study, PSD did not have a demonstrated negative effect on the provider-patient relationship. In almost half of visits, PSD appeared to influence patients' choice of a method; whether this influence is beneficial needs further research. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. A qualitative study of providers' perspectives

    African Journals Online (AJOL)

    Background: Glaucoma management is challenging for patients as well as for eye care providers. This study aims to describe the challenges faced by providers using qualitative methods. Methods: In-depth interviews were conducted with selected ophthalmologists and resident doctors in ophthalmology at centres ...

  15. MATHEMATICAL MODELLING OF SELECTING INFORMATIVE FEATURES FOR ANALYZING THE LIFE CYCLE PROCESSES OF RADIO-ELECTRONIC MEANS

    Directory of Open Access Journals (Sweden)

    Николай Григорьевич Стародубцев

    2017-09-01

    A model for monitoring the life cycle of radio-electronic means is presented. It classifies the states of radio-electronic means and their life-cycle processes in a feature space in which each feature carries a certain significance, which allows a composite criterion to be found and the selection procedure to be formalized. Solving the task of identifying the life cycle of radio-electronic means involves creating rules that determine the state of the radio-electronic means. Cases in which the a priori data are insufficient for correct classification are considered, and approximate selection methods based on criteria using basic prototypes and information priorities are proposed. Applying a function for dividing sets in parameter space, and formulating rules that govern the correspondence between sets of parameters and values of performance indicators, makes it possible to identify states while monitoring the life cycle of radio-electronic means.
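
    To make the classification idea concrete, here is a minimal sketch (all feature names, weights and prototypes are illustrative assumptions, not the paper's model): states are described by feature vectors, each feature carries a significance weight, and a state is assigned to the nearest basic prototype under the weighted metric.

        import numpy as np

        # Illustrative feature significances (weights) and state prototypes.
        weights = np.array([0.5, 0.3, 0.2])            # significance of each feature
        prototypes = {
            "operational": np.array([1.0, 0.9, 0.8]),
            "degraded":    np.array([0.6, 0.5, 0.7]),
            "failed":      np.array([0.1, 0.2, 0.3]),
        }

        def classify(state):
            """Assign a state to the prototype nearest under the weighted metric."""
            def wdist(a, b):
                return np.sqrt(np.sum(weights * (a - b) ** 2))
            return min(prototypes, key=lambda name: wdist(state, prototypes[name]))

        print(classify(np.array([0.55, 0.50, 0.65])))  # -> "degraded"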

  16. Meaning in life of older persons : An integrative literature review

    NARCIS (Netherlands)

    S.H.A. Hupkens; A. Machielse; M.J.B.M. Goumans; P. Derkx

    2016-01-01

    The aim of this integrative review for nurses is to synthesize knowledge from the scholarly literature to provide insight into how older persons find meaning in life, what circumstances influence it, and what their sources of meaning are. The review serves as a starting point for including meaning in ...

  17. Influence of Mean Rooftop-Level Estimation Method on Sensible Heat Flux Retrieved from a Large-Aperture Scintillometer Over a City Centre

    Science.gov (United States)

    Zieliński, Mariusz; Fortuniak, Krzysztof; Pawlak, Włodzimierz; Siedlecki, Mariusz

    2017-08-01

    The sensible heat flux (H) is determined using large-aperture scintillometer (LAS) measurements over a city centre for eight different computation scenarios. The scenarios are based on different approaches to estimating the mean rooftop level (zH) for the LAS path. Here, zH is determined separately for wind directions perpendicular (two zones) and parallel (one zone) to the optical beam, to reflect the variation in topography and building height on both sides of the LAS path. Two methods of zH estimation are analyzed: (1) average building profiles; (2) weighted-average building height within a 250 m radius of points located every 50 m along the optical beam, or of the centre of a given zone (in the case of a wind direction perpendicular to the path). The sensible heat flux is computed separately with the friction velocity determined by the eddy-covariance method and by an iterative procedure. The sensitivity of the sensible heat flux and of the extent of the scintillometer source area to the different computation scenarios is analyzed. Differences of up to 7% between heat fluxes computed under different scenarios were found. The mean rooftop-level estimation method has a smaller influence on the sensible heat flux (-4 to 5%) than the area used for the zH computation (-5 to 7%). For the source-area extent, the discrepancies between the respective scenarios reached a similar magnitude. The results demonstrate the value of the approach in which zH is estimated separately for wind directions parallel and perpendicular to the LAS optical beam.
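
    As an illustration of the second zH method, the sketch below (hypothetical building data; only the 50 m spacing and 250 m radius follow the description above) samples points along the optical beam and averages building heights, weighted here by plan area, within the radius of each point:

        import numpy as np

        # Hypothetical buildings: x, y centroid [m], height [m], plan area [m^2].
        buildings = np.array([
            [100.0,  40.0, 18.0, 400.0],
            [220.0, -30.0, 25.0, 900.0],
            [390.0,  10.0, 12.0, 250.0],
        ])

        def mean_rooftop_level(path_length=1000.0, step=50.0, radius=250.0):
            """Weighted-average building height around points along the LAS path."""
            zh_points = []
            for x in np.arange(0.0, path_length + step, step):  # beam along y = 0
                d = np.hypot(buildings[:, 0] - x, buildings[:, 1])
                near = buildings[d <= radius]
                if near.size:
                    zh_points.append(np.average(near[:, 2], weights=near[:, 3]))
            return float(np.mean(zh_points)) if zh_points else float("nan")

        print(f"zH = {mean_rooftop_level():.1f} m")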

  18. Attitudes of Healthcare Providers towards Providing Contraceptives for Unmarried Adolescents in Ibadan, Nigeria

    OpenAIRE

    Ahanonu, Ezihe Loretta

    2014-01-01

    Objective: This study sought to assess the attitude of Healthcare Providers towards providing contraceptives for unmarried adolescents in four Local Government Areas in Ibadan, Nigeria. Materials and methods: A cross-sectional descriptive study was conducted among 490 Healthcare Providers in 24 randomly selected healthcare facilities using self-administered, pre-tested questionnaires. Results: More than half (57.5%) of the respondents perceived the provision of contraceptives for unmarried adole...

  19. Ratio of geometric means to analyze continuous outcomes in meta-analysis: comparison to mean differences and ratio of arithmetic means using empiric data and simulation.

    Science.gov (United States)

    Friedrich, Jan O; Adhikari, Neill K J; Beyene, Joseph

    2012-07-30

    Meta-analyses pooling continuous outcomes can use mean differences (MD), standardized MD (MD in pooled standard deviation units, SMD), or the ratio of arithmetic means (RoM). Recently, the ratio of geometric means using ad hoc (RoGM(ad hoc)) or Taylor series (RoGM(Taylor)) methods for estimating variances has been proposed as an alternative effect measure for skewed continuous data. Skewed data are expected for summary measures of clinical parameters that are restricted to positive values and have large coefficients of variation (CV). Our objective was to compare the performance characteristics of RoGM(ad hoc) and RoGM(Taylor) to those of MD, SMD, and RoM. We used empiric data from systematic reviews reporting continuous outcomes and selected from each the meta-analysis with the most trials and at least 5 trials (Cochrane Database [2008, Issue 1]). We supplemented this with simulations conducted with representative parameters. Pooled results were calculated using each effect measure. Of the reviews, 232/5053 met the inclusion criteria. Empiric data and simulation showed that RoGM(ad hoc) exhibits more extreme treatment effects and greater heterogeneity than all other effect measures. Compared with MD, SMD, and RoM, RoGM(Taylor) exhibits similar treatment effects, more heterogeneity when CV ≤ 0.7, and less heterogeneity when CV > 0.7. In conclusion, RoGM(Taylor) may be considered for pooling continuous outcomes in meta-analysis when data are skewed, but RoGM(ad hoc) should not be used. However, clinicians' lack of familiarity with geometric means, combined with the acceptable performance characteristics of RoM in most situations, suggests that RoM may be the preferable ratio method for pooling continuous outcomes in meta-analysis. Copyright © 2012 John Wiley & Sons, Ltd.
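
    For orientation, here is a minimal sketch of a ratio-of-geometric-means calculation under a lognormality assumption (the standard lognormal moment relations stand in for the paper's exact Taylor-series variance formulas, which may differ; all trial summaries are hypothetical):

        import numpy as np

        def log_gm_and_var(mean, sd, n):
            """Log geometric mean and its approximate variance, derived from an
            arithmetic mean and SD under an assumed lognormal distribution."""
            cv2 = (sd / mean) ** 2
            mu = np.log(mean) - 0.5 * np.log1p(cv2)  # log-scale mean
            var_mu = np.log1p(cv2) / n               # approx. variance of the estimate
            return mu, var_mu

        # Hypothetical two-arm trial summaries: arithmetic mean, SD, n.
        mu_t, v_t = log_gm_and_var(12.0, 9.0, 40)    # treatment arm
        mu_c, v_c = log_gm_and_var(15.0, 11.0, 42)   # control arm

        log_rogm = mu_t - mu_c                       # ln(RoGM) = difference of log GMs
        se = np.sqrt(v_t + v_c)
        lo, hi = np.exp(log_rogm - 1.96 * se), np.exp(log_rogm + 1.96 * se)
        print(f"RoGM = {np.exp(log_rogm):.2f} (95% CI {lo:.2f} to {hi:.2f})")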

  20. Discussion of a method for providing general risk information by linking with the nuclear information

    International Nuclear Information System (INIS)

    Shobu, Nobuhiro; Yokomizo, Shirou; Umezawa, Sayaka

    2004-06-01

    'Risk information navigator' (http://www.ricotti.jp/risknavi/), an internet tool for arousing public interest and fostering people's risk literacy, has been developed as content for the official website of Techno Community Square 'RICOTTI' (http://www.ricotti.jp) at TOKAI village. In this report we classified the risk information for the tool into the fields 'Health/Daily Life', 'Society/Crime/Disaster' and 'Technology/Environment/Energy'. According to these categories we discussed a method for providing risk information on these general fields by linking it with information on the nuclear field. The web contents are attached to this report on CD-R media. (author)