WorldWideScience

Sample records for methods setting expectations

  1. Setting clear expectations for safety basis development

    International Nuclear Information System (INIS)

    MORENO, M.R.

    2003-01-01

DOE-RL has set clear expectations for a cost-effective approach to achieving compliance with the Nuclear Safety Management requirements (10 CFR 830, Nuclear Safety Rule), which will ensure long-term benefit to Hanford. To facilitate implementation of these expectations, tools were developed to streamline and standardize safety analysis and safety document development, resulting in a shorter and more predictable DOE approval cycle. A Hanford Safety Analysis and Risk Assessment Handbook (SARAH) was issued to standardize methodologies for the development of safety analyses. A Microsoft Excel spreadsheet (RADIDOSE) was issued for the evaluation of radiological consequences of accident scenarios often postulated for Hanford. A standard Site Documented Safety Analysis (DSA) detailing the safety management programs was issued as a means of compliance with the majority of 3009 Standard chapters. An in-process review was developed between DOE and the contractor to facilitate DOE approval and provide early course correction. As a result of setting expectations and providing safety analysis tools, the four Hanford Site waste management nuclear facilities were able to integrate into one Master Waste Management Documented Safety Analysis (WM-DSA).

  2. Goal Setting and Expectancy Theory Predictions of Effort and Performance.

    Science.gov (United States)

    Dossett, Dennis L.; Luce, Helen E.

    Neither expectancy (VIE) theory nor goal setting alone are effective determinants of individual effort and task performance. To test the combined ability of VIE and goal setting to predict effort and performance, 44 real estate agents and their managers completed questionnaires. Quarterly income goals predicted managers' ratings of agents' effort,…

  3. 5 CFR 9701.406 - Setting and communicating performance expectations.

    Science.gov (United States)

    2010-01-01

    ... communicating performance expectations. (a) Performance expectations must align with and support the DHS mission and its strategic goals, organizational program and policy objectives, annual performance plans, and... organizational level; (2) Organizational, occupational, or other work requirements, such as standard operating...

  4. 5 CFR 9901.406 - Setting and communicating performance expectations.

    Science.gov (United States)

    2010-01-01

    ... communicating performance expectations. (a) Performance expectations will support and align with the mission and strategic goals, organizational program and policy objectives, annual performance plans, and other measures... performance targets at the individual, team, and/or organizational level; (2) Organizational, occupational, or...

  5. Expectations

    DEFF Research Database (Denmark)

    depend on the reader’s own experiences, individual feelings, personal associations or on conventions of reading, interpretive communities and cultural conditions? This volume brings together narrative theory, fictionality theory and speech act theory to address such questions of expectations...

  6. Decomposing cross-country differences in quality adjusted life expectancy: the impact of value sets

    Directory of Open Access Journals (Sweden)

    Oppe Mark

    2011-06-01

Background: The validity, reliability and cross-country comparability of summary measures of population health (SMPH) have been persistently debated. In this debate, the measurement and valuation of nonfatal health outcomes have been defined as key issues. Our goal was to quantify and decompose international differences in health expectancy based on health-related quality of life (HRQoL). We focused on the impact of value set choice on cross-country variation. Methods: We calculated Quality Adjusted Life Expectancy (QALE) at age 20 for 15 countries in which EQ-5D population surveys had been conducted. We applied the Sullivan approach to combine the EQ-5D based HRQoL data with life tables from the Human Mortality Database. Mean HRQoL by country-gender-age was estimated using a parametric model. We used nonparametric bootstrap techniques to compute confidence intervals. QALE was then compared across the six country-specific time trade-off value sets that were available. Finally, three counterfactual estimates were generated in order to assess the contribution of mortality, health states and health-state values to cross-country differences in QALE. Results: QALE at age 20 ranged from 33 years in Armenia to almost 61 years in Japan, using the UK value set. The value sets of the other five countries generated different estimates, up to seven years higher. The relative impact of choosing a different value set differed across country-gender strata between 2% and 20%. In 50% of the country-gender strata the ranking changed by two or more positions across value sets. The decomposition demonstrated a varying impact of health states, health-state values, and mortality on QALE differences across countries. Conclusions: The choice of the value set in SMPH may seriously affect cross-country comparisons of health expectancy, even across populations of similar levels of wealth and education. In our opinion, it is essential to get more insight into the drivers of differences in health-state values across populations.

  7. The Prediction of College Student Academic Performance and Retention: Application of Expectancy and Goal Setting Theories

    Science.gov (United States)

    Friedman, Barry A.; Mandel, Rhonda G.

    2010-01-01

    Student retention and performance in higher education are important issues for educators, students, and the nation facing critical professional labor shortages. Expectancy and goal setting theories were used to predict academic performance and college student retention. Students' academic expectancy motivation at the start of the college…

  8. Investigating requests and expectations for future methods of CEE

    DEFF Research Database (Denmark)

    Nørgaard, Bente; Jensson, Palle; Bayard, Ove

    2014-01-01

    This article presents a map of requests and expectations for future ‘delivery’ methods of continuing engineering education (CEE) viewed from the perspective of Scandinavian managing directors and their employed engineers. During the last decades numerous attempts have been made to develop new...... into the crystal ball to identify requests and expectations to future methods of CEE. The significance of the investigation will be a conceptual map, which discloses some future focus areas ahead of CEE providers....

  9. Engineering Graduates' Skill Sets in the MENA Region: A Gap Analysis of Industry Expectations and Satisfaction

    Science.gov (United States)

    Ramadi, Eric; Ramadi, Serge; Nasr, Karim

    2016-01-01

    This study explored gaps between industry expectations and perceptions of engineering graduates' skill sets in the Middle East and North Africa (MENA) region. This study measured the importance that managers of engineers placed on 36 skills relevant to engineers. Also measured was managers' satisfaction with engineering graduates' skill sets.…

  10. User library service expectations in health science vs. other settings: a LibQUAL+ Study.

    Science.gov (United States)

    Thompson, Bruce; Kyrillidou, Martha; Cook, Colleen

    2007-12-01

To explore how the library service expectations and perceptions of users might differ across health-related libraries as against major research libraries not operating in a medical context; to determine whether users of medical libraries demand better library service quality, because the inability of users to access needed literature promptly may lead to a patient who cannot be properly diagnosed, or a diagnosis that cannot be properly treated. We compared LibQUAL+ total and subscale scores across three groups of US, Canadian and British libraries for this purpose. Anticipated differences in expectations between health science and other library settings did not emerge. The expectations and perceptions are similar across different types of health science library settings, hospital and academic, and across other general research libraries.

  11. Expectations, Realizations, and Approval of Tablet Computers in an Educational Setting

    Science.gov (United States)

    Hassan, Mamdouh; Geys, Benny

    2016-01-01

    The introduction of new technologies in classrooms is often thought to offer great potential for advancing learning. In this article, we investigate the relationship between such expectations and the post-implementation evaluation of a new technology in an educational setting. Building on psychological research, we argue that (1) high expectations…

  12. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method’s performance is evaluated using a single-machine infinite bus system and compared with a method where both state and parameters are estimated using an EKF method. Sensitivity studies of the parameter calibration using EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
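The filter-then-maximize loop the abstract describes can be sketched on a toy model. The snippet below is an illustrative sketch only: it uses a scalar linear-Gaussian state model with a plain Kalman filter and RTS smoother (not the paper's EKF and synchronous machine model), an approximate lag-one smoothed covariance, and hypothetical names throughout. The E-step estimates the hidden states from measurements; the M-step re-estimates the transition parameter by maximum likelihood; the two steps iterate.

```python
import numpy as np

def em_calibrate(y, a0, q, r, n_iter=50):
    """EM estimate of the transition parameter `a` in the scalar model
    x[k] = a*x[k-1] + w (var q),  y[k] = x[k] + v (var r).
    E-step: Kalman filter + RTS smoother; M-step: closed-form MLE for a."""
    a, n = a0, len(y)
    for _ in range(n_iter):
        # --- E-step: forward Kalman filter ---
        xf, Pf = np.zeros(n), np.zeros(n)   # filtered mean / variance
        xp, Pp = np.zeros(n), np.zeros(n)   # predicted mean / variance
        x, P = 0.0, 1.0
        for k in range(n):
            xp[k], Pp[k] = a * x, a * a * P + q
            K = Pp[k] / (Pp[k] + r)
            x = xp[k] + K * (y[k] - xp[k])
            P = (1 - K) * Pp[k]
            xf[k], Pf[k] = x, P
        # --- E-step: backward RTS smoother ---
        xs, Ps = xf.copy(), Pf.copy()
        Pcs = np.zeros(n)                   # approx. lag-one smoothed cov
        for k in range(n - 2, -1, -1):
            J = Pf[k] * a / Pp[k + 1]
            xs[k] = xf[k] + J * (xs[k + 1] - xp[k + 1])
            Ps[k] = Pf[k] + J * (Ps[k + 1] - Pp[k + 1]) * J
            Pcs[k + 1] = J * Ps[k + 1]      # cov(x[k+1], x[k] | all data), approx.
        # --- M-step: maximize expected complete-data log-likelihood over a ---
        num = sum(xs[k] * xs[k - 1] + Pcs[k] for k in range(1, n))
        den = sum(xs[k - 1] ** 2 + Ps[k - 1] for k in range(1, n))
        a = num / den
    return a
```

With data simulated from a known model, iterating the two steps recovers the transition parameter to within sampling error, which is the same convergence behavior the paper evaluates on the single-machine infinite bus system.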

  13. Home advantage in soccer--A matter of expectations, goal setting and tactical decisions of coaches?

    Science.gov (United States)

    Staufenbiel, Kathrin; Lobinger, Babett; Strauss, Bernd

    2015-01-01

In soccer, home teams win about 67% of decided games. The causes for this home advantage are still unresolved. There is a shortage of research on the psychological states of the actors involved. In this study, we examined soccer coaches' expectations, goal setting and tactical decisions in relation to game location. Soccer coaches (N = 297) with different expertise levels participated in an experimental, online management game and were randomly assigned to one of two groups, "home game (HG)" or "away game." Participants received information on the game for which they were asked to make decisions at multiple points. The only differing information between groups was game location. Regardless of expertise, HG coaches had higher expectations to win, set more challenging goals and opted for more offensive and courageous playing tactics. Possible consequences of these findings concerning home advantage in soccer are discussed.

  14. Decomposing cross-country differences in quality adjusted life expectancy: the impact of value sets.

    Science.gov (United States)

    Heijink, Richard; van Baal, Pieter; Oppe, Mark; Koolman, Xander; Westert, Gert

    2011-06-23

    The validity, reliability and cross-country comparability of summary measures of population health (SMPH) have been persistently debated. In this debate, the measurement and valuation of nonfatal health outcomes have been defined as key issues. Our goal was to quantify and decompose international differences in health expectancy based on health-related quality of life (HRQoL). We focused on the impact of value set choice on cross-country variation. We calculated Quality Adjusted Life Expectancy (QALE) at age 20 for 15 countries in which EQ-5D population surveys had been conducted. We applied the Sullivan approach to combine the EQ-5D based HRQoL data with life tables from the Human Mortality Database. Mean HRQoL by country-gender-age was estimated using a parametric model. We used nonparametric bootstrap techniques to compute confidence intervals. QALE was then compared across the six country-specific time trade-off value sets that were available. Finally, three counterfactual estimates were generated in order to assess the contribution of mortality, health states and health-state values to cross-country differences in QALE. QALE at age 20 ranged from 33 years in Armenia to almost 61 years in Japan, using the UK value set. The value sets of the other five countries generated different estimates, up to seven years higher. The relative impact of choosing a different value set differed across country-gender strata between 2% and 20%. In 50% of the country-gender strata the ranking changed by two or more positions across value sets. The decomposition demonstrated a varying impact of health states, health-state values, and mortality on QALE differences across countries. The choice of the value set in SMPH may seriously affect cross-country comparisons of health expectancy, even across populations of similar levels of wealth and education. In our opinion, it is essential to get more insight into the drivers of differences in health-state values across populations. 
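The Sullivan approach used in this study has a compact arithmetic core: weight the life table's person-years lived in each age interval by the mean HRQoL (or proportion healthy) observed for that interval, then divide by survivors at the starting age. A minimal sketch with hypothetical toy numbers (not the paper's data):

```python
def sullivan_expectancy(person_years, hrqol, survivors_at_start):
    """Sullivan method: health expectancy = sum over age intervals of
    (life-table person-years lived in the interval) x (mean HRQoL weight
    for the interval), divided by survivors at the starting age."""
    assert len(person_years) == len(hrqol)
    return sum(L * h for L, h in zip(person_years, hrqol)) / survivors_at_start

# Toy life table: person-years lived per age interval, radix 100,000.
L = [480000, 450000, 400000, 300000, 150000]
weights = [0.9, 0.9, 0.8, 0.7, 0.6]   # hypothetical mean HRQoL per interval
qale = sullivan_expectancy(L, weights, 100000)
```

If every interval carried a weight of 1.0, the formula reduces to ordinary life expectancy; lower weights shrink the result toward quality-adjusted years, which is exactly where the choice of value set enters.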

  15. Setting Instructional Expectations: Patterns of Principal Leadership for Middle School Mathematics

    Science.gov (United States)

    Katterfeld, Karin

    2013-01-01

    Principal instructional leadership has been found to support improved instruction. However, the methods through which principal leadership influences classroom instruction are less clear. This study investigates how principals' leadership may predict the expectations that mathematics teachers perceive for classroom practice. Results from a…

  16. Rough Set Theory based prognostication of life expectancy for terminally ill patients.

    Science.gov (United States)

    Gil-Herrera, Eleazar; Yalcin, Ali; Tsalatsanis, Athanasios; Barnes, Laura E; Djulbegovic, Benjamin

    2011-01-01

    We present a novel knowledge discovery methodology that relies on Rough Set Theory to predict the life expectancy of terminally ill patients in an effort to improve the hospice referral process. Life expectancy prognostication is particularly valuable for terminally ill patients since it enables them and their families to initiate end-of-life discussions and choose the most desired management strategy for the remainder of their lives. We utilize retrospective data from 9105 patients to demonstrate the design and implementation details of a series of classifiers developed to identify potential hospice candidates. Preliminary results confirm the efficacy of the proposed methodology. We envision our work as a part of a comprehensive decision support system designed to assist terminally ill patients in making end-of-life care decisions.

  17. Standard setting: Comparison of two methods

    Directory of Open Access Journals (Sweden)

    Oyebode Femi

    2006-09-01

Background: The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods, and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. Methods: The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm-reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. Results: The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between the Angoff method and the norm-reference method was 78% (95% CI 69%-87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. Conclusion: There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.

  18. Standard setting: comparison of two methods.

    Science.gov (United States)

    George, Sanju; Haque, M Sayeed; Oyebode, Femi

    2006-09-14

    The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods and the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. The norm-reference method of standard-setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice examination (MCQ). Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. The pass rate with the norm-reference method was 85% (66/78) and that by the Angoff method was 100% (78 out of 78). The percentage agreement between Angoff method and norm-reference was 78% (95% CI 69% - 87%). The modified Angoff method had an inter-rater reliability of 0.81-0.82 and a test-retest reliability of 0.59-0.74. There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
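The two cut-score rules compared in this study are easy to state concretely. A sketch with made-up scores and ratings (the numbers below are illustrative, not the study's):

```python
import statistics

def norm_reference_cutoff(scores):
    """Norm-referenced standard: pass mark = cohort mean minus one SD."""
    return statistics.mean(scores) - statistics.stdev(scores)

def angoff_cutoff(judge_item_probs):
    """Modified Angoff: each judge rates, per item, the probability that a
    borderline candidate answers correctly; a judge's cut score is the sum
    of those probabilities, and the panel cut score is the judges' mean."""
    return statistics.mean(sum(items) for items in judge_item_probs)

scores = [55, 60, 65, 70, 75]            # hypothetical raw MCQ scores
cut = norm_reference_cutoff(scores)       # mean 65, SD ~7.9 -> cut ~57.1
ratings = [[0.6, 0.7], [0.8, 0.5]]        # two judges, two items each
angoff = angoff_cutoff(ratings)           # mean of per-judge sums
```

Because the norm-referenced cut moves with the cohort while the Angoff cut is anchored to item judgments, the two rules can produce different pass/fail splits on the same paper, which is precisely the divergence the study reports.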

  19. The "Set Map" Method of Navigation.

    Science.gov (United States)

    Tippett, Julian

    1998-01-01

    Explains the "set map" method of using the baseplate compass to solve walkers' navigational needs as opposed to the 1-2-3 method for taking a bearing. The map, with the compass permanently clipped to it, is rotated to the position in which its features have the same orientation as their counterparts on the ground. Includes directions and…

  20. Patient Expectations and Perceptions of Goal-setting Strategies for Disease Management in Rheumatoid Arthritis.

    Science.gov (United States)

    Strand, Vibeke; Wright, Grace C; Bergman, Martin J; Tambiah, Jeyanesh; Taylor, Peter C

    2015-11-01

    To identify how patients perceive the broad effect of active rheumatoid arthritis (RA) on their daily lives and indicate how RA disease management could benefit from the inclusion of individual goal-setting strategies. Two multinational surveys were completed by patients with RA. The "Good Days Fast" survey was conducted to explore the effect of disease on the daily lives and relationships of women with RA. The "Getting to Your Destination Faster" survey examined RA patients' treatment expectations and goal-setting practices. Respondents from all countries agreed that RA had a substantial negative effect on many aspects of their lives (work productivity, daily routines, participation in social and leisure activities) and emotional well-being (loss of self-confidence, feelings of detachment, isolation). Daily pain was a paramount issue, and being pain- and fatigue-free was considered the main indicator of a "good day." Setting personal, social, and treatment goals, as well as monitoring disease progress to achieve these, was considered very beneficial by patients with RA, but discussion of treatment goals seldom appeared to be a part of medical appointments. Many patients with RA feel unable to communicate their disease burden and treatment goals, which are critically important to them, to their healthcare provider (HCP). Insights gained from these 2 surveys should help to guide patients and HCP to better focus upon mutually defined goals for continued improvement of management and achievement of optimal care in RA.

  1. Advantages and limitations of the SETS method

    International Nuclear Information System (INIS)

    Mahaffy, J.H.

    1983-01-01

The stability-enhancing two-step (SETS) method has been used successfully in the Transient Reactor Analysis Code (TRAC) for several years. The method consists of a basic semi-implicit step combined with a stabilizer step that, taken together, eliminate the material Courant stability limit associated with standard semi-implicit numerical methods. This approach toward stability requires significantly fewer computational operations than a fully implicit method, but currently maintains the first-order accuracy in space and time of its semi-implicit predecessors.
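The material Courant limit that SETS removes can be illustrated on a toy linear advection equation: an implicit upwind step stays stable even when the Courant number c·Δt/Δx exceeds 1, where the explicit scheme would blow up. This is only a sketch of the stability idea, not the SETS algorithm itself, and the names are hypothetical:

```python
import numpy as np

def implicit_upwind_step(u, courant):
    """One implicit upwind step for u_t + c u_x = 0 on a periodic grid.
    Solves (1 + lam) u_new[i] - lam * u_new[i-1] = u_old[i], lam = c*dt/dx.
    The implicit treatment satisfies a discrete maximum principle for any
    lam > 0, unlike the explicit scheme, which requires lam <= 1."""
    n = len(u)
    A = (1.0 + courant) * np.eye(n)
    for i in range(n):
        A[i, (i - 1) % n] -= courant   # upwind neighbor, periodic wrap
    return np.linalg.solve(A, u)
```

Taking a step at Courant number 5 leaves the solution bounded by its initial extremes and conserves the total, at the cost of solving a linear system; SETS pays a similar (but much smaller) price than a fully implicit scheme by splitting the work into a semi-implicit step plus a stabilizer step.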

  2. Leader performance evaluations and role congruity expectations in a community college setting

    OpenAIRE

    Trickey, Edward

    2011-01-01

    To investigate the relationships among evaluator attitudes, the role congruity biases many people consciously and unconsciously maintain, evaluation practices, perceptions of leader efficacy and success, and leader persistence in two community college settings, a mixed-methods study was conducted. Leaders are the products of their experiences, environments, the greater society within which they live, their personal attitudes and biases, and the attitudes and role biases of others. Over time, ...

  3. Parent Expectancies and Preferences for Mental Health Treatment: The Roles of Emotion Mind-Sets and Views of Failure.

    Science.gov (United States)

    Schleider, Jessica L; Weisz, John R

    2018-01-24

Because parents are primary gatekeepers to mental health care for their children, parental expectations that mental health treatment is ineffective may undermine treatment seeking, retention, and response. Thus, a need exists to understand parents' expectations about treatment and to develop scalable interventions that can instill more favorable views. We examined parents' treatment expectancies and preferences for their offspring and themselves in relation to two global beliefs: mind-sets (malleability beliefs) of emotions and anxiety, and views of failure as enhancing versus debilitating. Study 1 (N = 200; 49.5% fathers; 70.4% Caucasian) examined associations among parents' emotion mind-sets, anxiety mind-sets, failure beliefs, and treatment expectancies and preferences. Study 2 (N = 430; 44.70% fathers; 75.80% Caucasian) tested whether online inductions teaching "growth emotion mind-sets" (viewing emotions as malleable), adaptive failure beliefs, or both improved parents' treatment expectancies and hypothetical preferences for treatment (vs. no-treatment). Participants received one of three 8- to 15-min inductions or a psychoeducation control, rating treatment expectancies and preferences pre- and post-induction. In Study 1, fixed emotion mind-sets and failure-is-debilitating beliefs were associated with lower parent psychotherapy expectancies for offspring and themselves and stronger "no-treatment" preferences for offspring. In Study 2, inductions teaching (a) growth emotion mind-sets only and (b) growth emotion mind-sets and failure-is-enhancing beliefs improved parents' psychotherapy expectancies for themselves (ds = .38, .51) and offspring (ds = .30, .43). No induction increased parents' hypothetical preferences for treatment (vs. no-treatment). Findings suggest scalable strategies for strengthening parents' psychotherapy effectiveness beliefs for themselves and their children.

  4. Estimating health expectancies from two cross-sectional surveys: The intercensal method

    Directory of Open Access Journals (Sweden)

    Michel Guillot

    2009-10-01

Health expectancies are key indicators for monitoring the health of populations, as well as for informing debates about compression or expansion of morbidity. However, current methodologies for estimating them are not entirely satisfactory. They are either of limited applicability because of high data requirements (the multistate method) or based on questionable assumptions (the Sullivan method). This paper proposes a new method, called the "intercensal" method, which relies on the multistate framework but uses widely available data. The method uses age-specific proportions "healthy" at two successive, independent cross-sectional health surveys, and, together with information on general mortality, solves for the set of transition probabilities that produces the observed sequence of proportions healthy. The system is solved by making realistic parametric assumptions about the age patterns of transition probabilities. Using data from the Health and Retirement Survey (HRS) and from the National Health Interview Survey (NHIS), the method is tested against both the multistate method and the Sullivan method. We conclude that the intercensal approach is a promising framework for the indirect estimation of health expectancies.
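The core idea, solving for transition probabilities that reproduce observed cross-sectional proportions healthy, reduces to a one-parameter case in a deliberately simplified setting. The sketch below ignores mortality and assumes a known recovery probability, both real complications the intercensal method handles; the function name and numbers are hypothetical:

```python
def implied_onset_probability(h0, h1, recovery=0.0):
    """Given proportions healthy h0 (first survey) and h1 (second survey,
    one period later), solve the two-state balance equation
        h1 = h0 * (1 - onset) + (1 - h0) * recovery
    for the healthy -> unhealthy onset probability."""
    return (h0 + (1.0 - h0) * recovery - h1) / h0
```

The full method does this simultaneously across all ages, jointly with mortality, under parametric assumptions about the age patterns of the transition probabilities.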

  5. Graduate Students' Expectations of an Introductory Research Methods Course

    Science.gov (United States)

    Earley, Mark A.

    2013-01-01

    While there is a scattered literature base on teaching research methods courses, there is very little literature that speaks to what and how students learn in research methods courses. Students are often described as coming to the course not seeing its relevance, bringing negative attitudes and low motivation with them. The purpose of this…

  6. Located historical cognition: syllabus expectations and teaching method

    Directory of Open Access Journals (Sweden)

    Geyso Germinari

    2012-03-01

This qualitative study analyzes how a group of elementary school teachers formulates its teaching method from the perspective of located historical cognition. The theoretical and methodological presuppositions of Historical Education are present in the teaching and learning conception of the State Syllabus Guidelines of History, which refer to a located historical cognition. The syllabus affirms that the teachers' pedagogical work has as its purpose the formation of the students' historical thought, through historical consciousness. To that end, it suggests the use of the historical investigation method in the classroom, articulated through the historical narratives of the subjects. Based on the theoretical and methodological framework of "methodological structuralism", the investigation used a standardized questionnaire and semi-structured interviews with four teachers. The results indicate that the teachers use elements of historical investigation in their teaching method, a practice that fosters in students the development of a cognition located in the science of history.

  7. Assessing fitness for use: the expected value of spatial data sets

    NARCIS (Netherlands)

    Bruin, de S.; Bregt, A.K.; Ven, van de M.

    2001-01-01

This paper proposes and illustrates a decision analytical approach to compare the value of alternative spatial data sets. In contrast to other work addressing value of information, its focus is on value of control. This is a useful concept when choosing the best data set for decision making under uncertainty.

  8. The impact of fetal growth restriction on latency in the setting of expectant management of preeclampsia.

    Science.gov (United States)

    McKinney, David; Boyd, Heather; Langager, Amanda; Oswald, Michael; Pfister, Abbey; Warshak, Carri R

    2016-03-01

Fetal growth restriction is a common complication of preeclampsia. Expectant management for qualifying patients has been found to have acceptable maternal safety while improving neonatal outcomes. Whether fetal growth restriction influences the duration of latency during expectant management of preeclampsia is unknown. The objective of the study was to determine whether fetal growth restriction is associated with a reduced interval to delivery in women with preeclampsia being expectantly managed prior to 34 weeks. We performed a retrospective cohort of singleton, live-born, nonanomalous deliveries at the University of Cincinnati Medical Center between 2008 and 2013. Patients were included in our analysis if they were diagnosed with preeclampsia prior to 34 completed weeks and if the initial management plan was to pursue expectant management beyond administration of steroids for fetal lung maturity. Two study groups were determined based on the presence or absence of fetal growth restriction. Patients were delivered when they developed persistent neurological symptoms, severe hypertension refractory to medical therapy, renal insufficiency, nonreassuring fetal status, pulmonary edema, or hemolysis, elevated liver enzymes, and low platelet (HELLP) syndrome, or when they reached 37 weeks if they remained stable without any other indication for delivery. Our primary outcome was the interval from diagnosis of preeclampsia to delivery, measured in days. Secondary outcomes included indications for delivery, rates of induction and cesarean delivery, development of severe morbidities of preeclampsia, and select neonatal outcomes. We performed a multivariate logistic regression analysis comparing those with fetal growth restriction with those with normally grown fetuses to determine whether there is an association between fetal growth restriction and a shortened interval to delivery, neonatal intensive care unit admission, prolonged neonatal stay, and neonatal mortality. 
A total of 851 patients met

  9. Choosing the right rehabilitation setting after herniated disc surgery: Motives, motivations and expectations from the patients' perspective.

    Science.gov (United States)

    Löbner, Margrit; Stein, Janine; Luppa, Melanie; Konnopka, Alexander; Meisel, Hans Jörg; Günther, Lutz; Meixensberger, Jürgen; Stengler, Katarina; Angermeyer, Matthias C; König, Hans-Helmut; Riedel-Heller, Steffi G

    2017-01-01

This study aims to investigate (1) motives, motivations and expectations regarding the choice of a specific rehabilitation setting after herniated disc surgery and (2) how rehabilitation-related motivations and expectations are associated with rehabilitation outcome (ability to work, health-related quality of life and satisfaction with rehabilitation) three months after disc surgery. The longitudinal cohort study refers to 452 disc surgery patients participating in a subsequent rehabilitation. Baseline interviews took place during the acute hospital stay (pre-rehabilitation), follow-up interviews three months later (post-rehabilitation). Binary logistic regression and multiple linear regression analyses were applied. (1) Motives, motivations and expectations: Inpatient rehabilitation (IPR) patients stated "less effort/stress" (40.9%), more "relaxation and recreation" (39.1%) and greater "intensity of care and treatment" (37.0%) regarding their setting preference, whereas outpatient rehabilitation (OPR) patients indicated "family reasons" (45.3%), the wish for "staying in familiar environment" (35.9%) as well as "job-related reasons" (11.7%) as most relevant. IPR patients showed significantly higher motivation/expectation scores regarding regeneration and job-related aspects. (2) For example, patients with fewer motivations/expectations to achieve improvements regarding "physical burden" showed a better health-related quality of life and greater satisfaction with rehabilitation (OR = .806; p < .05). Rehabilitation-related motivations and expectations differed substantially between IPR and OPR patients before rehabilitation and were significantly associated with rehabilitation outcome. Taking motivational and expectation-related aspects into account may help to improve allocation procedures for different rehabilitation settings and may improve rehabilitation success.

  10. Standard-Setting Methods as Measurement Processes

    Science.gov (United States)

    Nichols, Paul; Twing, Jon; Mueller, Canda D.; O'Malley, Kimberly

    2010-01-01

    Some writers in the measurement literature have been skeptical of the meaningfulness of achievement standards and described the standard-setting process as blatantly arbitrary. We argue that standard setting is more appropriately conceived of as a measurement process similar to student assessment. The construct being measured is the panelists'…

  11. The Influence of Attention Set, Working Memory Capacity, and Expectations on Inattentional Blindness.

    Science.gov (United States)

    Kreitz, Carina; Furley, Philip; Memmert, Daniel; Simons, Daniel J

    2016-04-01

    The probability of inattentional blindness, the failure to notice an unexpected object when attention is engaged on some primary task, is influenced by contextual factors like task demands, features of the unexpected object, and the observer's attention set. However, predicting who will notice an unexpected object and who will remain inattentionally blind has proven difficult, and the evidence that individual differences in cognition affect noticing remains ambiguous. We hypothesized that greater working memory capacity might modulate the effect of attention sets on noticing because working memory is associated with the ability to focus attention selectively. People with greater working memory capacity might be better able to attend selectively to target items, thereby increasing the chances of noticing unexpected objects that were similar to the attended items while decreasing the odds of noticing unexpected objects that differed from the attended items. Our study (N = 120 participants) replicated evidence that task-induced attention sets modulate noticing but found no link between noticing and working memory capacity. Our results are largely consistent with the idea that individual differences in working memory capacity do not predict noticing of unexpected objects in an inattentional blindness task. © The Author(s) 2015.

  12. Choosing the right rehabilitation setting after herniated disc surgery: Motives, motivations and expectations from the patients' perspective.

    Directory of Open Access Journals (Sweden)

    Margrit Löbner

    Full Text Available This study aims to investigate (1) motives, motivations and expectations regarding the choice for a specific rehabilitation setting after herniated disc surgery and (2) how rehabilitation-related motivations and expectations are associated with rehabilitation outcome (ability to work, health-related quality of life and satisfaction with rehabilitation) three months after disc surgery. The longitudinal cohort study refers to 452 disc surgery patients participating in a subsequent rehabilitation. Baseline interviews took place during the acute hospital stay (pre-rehabilitation), follow-up interviews three months later (post-rehabilitation). Binary logistic regression and multiple linear regression analyses were applied. (1) Motives, motivations and expectations: Inpatient rehabilitation (IPR) patients stated "less effort/stress" (40.9%), more "relaxation and recreation" (39.1%) and greater "intensity of care and treatment" (37.0%) regarding their setting preference, whereas outpatient rehabilitation (OPR) patients indicated "family reasons" (45.3%), the wish for "staying in familiar environment" (35.9%) as well as "job-related reasons" (11.7%) as most relevant. IPR patients showed significantly higher motivation/expectation scores regarding regeneration (p < .001), health (p < .05), coping (p < .001), retirement/job (p < .01), psychological burden (p < .05) and physical burden (p < .001) compared to OPR patients. (2) Associations with rehabilitation outcome: Besides other factors (e.g. age, gender and educational level), rehabilitation-related motivations/expectations were significantly associated with rehabilitation outcome measures. For example, patients with less motivations/expectations to achieve improvements regarding "physical burden" showed a better health-related quality of life (p < .01) three months after disc surgery. Less motivations/expectations to achieve improvements regarding "psychological burden" was linked to a better mental health status (p < .001) and a…

  13. 'Sleeping with the enemy?' Expectations and reality in imaging children in the emergency setting

    International Nuclear Information System (INIS)

    Frush, Donald P.; Frush, Karen S.

    2008-01-01

    As an introduction to the ALARA conference titled "Building Bridges between Radiology and Emergency Medicine: Consensus Conference on Imaging Safety and Quality for Children in the Emergency Setting," it is important for us to understand the landscapes of both the pediatric radiology and emergency medicine subspecialties. Recognizing potentially different practice patterns, including perspectives on pediatric care, as well as shared and sometimes unique professional pressures, can help us identify common concerns and problems and facilitate the development of strategies aimed at correcting these issues. (orig.)

  14. Almost Free Modules Set-Theoretic Methods

    CERN Document Server

    Eklof, PC

    1990-01-01

    This is an extended treatment of the set-theoretic techniques which have transformed the study of abelian group and module theory over the last 15 years. Part of the book is new work which does not appear elsewhere in any form. In addition, a large body of material which has appeared previously (in scattered and sometimes inaccessible journal articles) has been extensively reworked and in many cases given new and improved proofs. The set theory required is carefully developed with algebraists in mind, and the independence results are derived from explicitly stated axioms. The book contains exercises…

  15. Gradient augmented level set method for phase change simulations

    Science.gov (United States)

    Anumolu, Lakshman; Trujillo, Mario F.

    2018-01-01

    A numerical method for the simulation of two-phase flow with phase change based on the Gradient-Augmented-Level-set (GALS) strategy is presented. Sharp capturing of the vaporization process is enabled by: i) identification of the vapor-liquid interface, Γ (t), at the subgrid level, ii) discontinuous treatment of thermal physical properties (except for μ), and iii) enforcement of mass, momentum, and energy jump conditions, where the gradients of the dependent variables are obtained at Γ (t) and are consistent with their analytical expression, i.e. no local averaging is applied. Treatment of the jump in velocity and pressure at Γ (t) is achieved using the Ghost Fluid Method. The solution of the energy equation employs the sub-grid knowledge of Γ (t) to discretize the temperature Laplacian using second-order one-sided differences, i.e. the numerical stencil completely resides within each respective phase. To carefully evaluate the benefits or disadvantages of the GALS approach, the standard level set method is implemented and compared against the GALS predictions. The results show the expected trend that interface identification and transport are predicted noticeably better with GALS over the standard level set. This benefit carries over to the prediction of the Laplacian and temperature gradients in the neighborhood of the interface, which are directly linked to the calculation of the vaporization rate. However, when combining the calculation of interface transport and reinitialization with two-phase momentum and energy, the benefits of GALS are to some extent neutralized, and the causes for this behavior are identified and analyzed. Overall the additional computational costs associated with GALS are almost the same as those using the standard level set technique.

  16. A comparison of human elements and nonhuman elements in private health care settings: customers' perceptions and expectations.

    Science.gov (United States)

    Mohd Suki, Norazah; Chwee Lian, Jennifer Chiam; Suki, Norbayah Mohd

    2009-01-01

    In today's highly competitive health care environment, many private health care settings are now looking into customer service indicators to learn customers' perceptions and determine whether they are meeting customers' expectations, in order to ensure that their customers are satisfied with the services. This research paper aims to investigate whether the human elements were more important than the nonhuman elements in private health care settings. We used the internationally renowned SERVQUAL five-dimension model plus three additional dimensions of the human element (courtesy, communication, and understanding of customers) when evaluating health care services. A total of 191 respondents from three private health care settings in the Klang Valley region of Malaysia were investigated. Descriptive statistics were calculated with the Statistical Package for the Social Sciences (SPSS) computer program, version 15. Interestingly, the results suggested that customers nowadays have very high expectations, especially when it comes to the treatment they are receiving. Overall, the research indicated that the human elements were more important than the nonhuman elements in private health care settings. Hospital management should look further to improve on areas that have been highlighted. Implications for management practice and directions for future research are discussed.
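The SERVQUAL-style comparison the record describes reduces to a gap score per dimension (mean perception minus mean expectation). A minimal sketch with invented ratings on a 7-point scale, not the study's data; the dimension names merely echo those mentioned above:

```python
# Illustrative SERVQUAL-style gap analysis (hypothetical ratings, not study data).
# Gap = mean perception - mean expectation per dimension; a negative gap means
# expectations exceed the perceived service level.

def gap_scores(expectations, perceptions):
    """Return {dimension: mean perception - mean expectation}, rounded to 2 dp."""
    gaps = {}
    for dim in expectations:
        e = sum(expectations[dim]) / len(expectations[dim])
        p = sum(perceptions[dim]) / len(perceptions[dim])
        gaps[dim] = round(p - e, 2)
    return gaps

expectations = {"tangibles": [6, 7, 6], "reliability": [7, 7, 6], "courtesy": [6, 6, 7]}
perceptions  = {"tangibles": [5, 6, 6], "reliability": [6, 6, 5], "courtesy": [6, 7, 7]}
gaps = gap_scores(expectations, perceptions)
print(gaps)
```

A dimension with a strongly negative gap (here "reliability") is where management attention would be directed first.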

  17. Set-theoretic methods in control

    CERN Document Server

    Blanchini, Franco

    2015-01-01

    The second edition of this monograph describes the set-theoretic approach for the control and analysis of dynamic systems, both from a theoretical and practical standpoint.  This approach is linked to fundamental control problems, such as Lyapunov stability analysis and stabilization, optimal control, control under constraints, persistent disturbance rejection, and uncertain systems analysis and synthesis.  Completely self-contained, this book provides a solid foundation of mathematical techniques and applications, extensive references to the relevant literature, and numerous avenues for further theoretical study. All the material from the first edition has been updated to reflect the most recent developments in the field, and a new chapter on switching systems has been added.  Each chapter contains examples, case studies, and exercises to allow for a better understanding of theoretical concepts by practical application. The mathematical language is kept to the minimum level necessary for the adequate…

  18. Methods to Increase Educational Effectiveness in an Adult Correctional Setting.

    Science.gov (United States)

    Kuster, Byron

    1998-01-01

    A correctional educator reflects on methods that improve instructional effectiveness. These include teacher-student collaboration, clear goals, student accountability, positive classroom atmosphere, high expectations, and mutual respect. (SK)

  19. Changes in nursing students' expectations of nursing clinical faculties' competences: A longitudinal, mixed methods study.

    Science.gov (United States)

    Lovrić, Robert; Prlić, Nada; Milutinović, Dragana; Marjanac, Igor; Žvanut, Boštjan

    2017-12-01

    Changes in nursing students' expectations of their clinical nursing faculty competences over the course of time are an insufficiently researched phenomenon. To explore what competences BSc nursing students expect from their clinical faculties during their clinical training, and whether their expectations changed during their three-year studies. Furthermore, to survey factors which influenced their expectations and whether the fulfilment levels of their expectations influenced their feelings, learning, and behaviour. A two-phase, mixed-methods design was used. The Higher Nursing Education Institution in Osijek, Croatia, European Union. A cohort of 34 BSc nursing students, who were followed over the course of their three-year studies. In Phase I, in each year, prior to their clinical training, participants responded to the same modified Nursing Clinical Teacher Effectiveness Inventory questionnaire about their expectations of clinical faculties' competences (52 items representing six categories of competences). In Phase II, seven days after their graduation, participants wrote reflections on the aforementioned expectations during their studies. The results show that clinical faculties' evaluation of students was the category in which participants had the highest expectations in all three years. Results of the Wilcoxon signed rank test indicate a significant increase of participants' expectations in all categories of clinical nursing faculties' competences during their study. Participants' reflections confirm these results and indicate that the actual competences and behaviour of clinical faculties have the most significant effects on the change in these expectations. Participants reported that expectations, if fulfilled, facilitate their learning and motivation for better performance. BSc nursing students' expectations of clinical nursing faculty competences represent an important concept, as they obviously determine the quality of faculty practice. Hence, they should be…

  20. Segmenting the Parotid Gland using Registration and Level Set Methods

    DEFF Research Database (Denmark)

    Hollensen, Christian; Hansen, Mads Fogtmann; Højgaard, Liselotte

    The method was evaluated on a test set consisting of 8 corresponding data sets. The attained total volume Dice coefficient and mean Hausdorff distance were 0.61 ± 0.20 and 15.6 ± 7.4 mm, respectively. The method has potential for improvement which could be exploited before clinical introduction.

  1. Identifying Heterogeneities in Subsurface Environment using the Level Set Method

    Energy Technology Data Exchange (ETDEWEB)

    Lei, Hongzhuan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lu, Zhiming [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vesselinov, Velimir Valentinov [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-25

    These are slides from a presentation on identifying heterogeneities in subsurface environment using the level set method. The slides start with the motivation, then explain Level Set Method (LSM), the algorithms, some examples are given, and finally future work is explained.
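The core idea behind the level set method named in these slides can be sketched in a few lines: an interface is represented implicitly as the zero crossing of a function φ and is moved by evolving φ itself. A generic 1-D advection toy under the equation φ_t + F|φ_x| = 0 with constant speed F, not the subsurface application from the presentation:

```python
# Minimal 1-D level-set sketch (illustrative, not from the slides): the
# interface is the zero crossing of phi; evolving phi_t + F*|phi_x| = 0
# with constant speed F moves the interface at speed F.

N, dx, F, dt, steps = 200, 0.01, 1.0, 0.005, 100
x = [i * dx for i in range(N)]
phi = [xi - 0.3 for xi in x]          # signed distance; zero crossing at x = 0.3

for _ in range(steps):
    new = phi[:]
    for i in range(1, N):
        dphi = (phi[i] - phi[i - 1]) / dx   # upwind difference, valid for F > 0
        new[i] = phi[i] - dt * F * dphi
    phi = new

# locate the zero crossing: the interface should have advected F*dt*steps = 0.5
interface = next(x[i] for i in range(N - 1) if phi[i] <= 0 < phi[i + 1])
print(round(interface, 2))   # expect ~0.8
```

The practical appeal, as in the slides' application, is that topological changes of the interface (merging, splitting) need no special handling because only φ is updated.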

  2. The method of expected number of deaths, 1786-1886-1986.

    Science.gov (United States)

    Keiding, N

    1987-04-01

    "The method of expected number of deaths is an integral part of standardization of vital rates, which is one of the oldest statistical techniques. The expected number of deaths was calculated in 18th century actuarial mathematics...but the method seems to have been forgotten, and was reinvented in connection with 19th century studies of geographical and occupational variations of mortality.... It is noted that standardization of rates is intimately connected to the study of relative mortality, and a short description of very recent developments in the methodology of that area is included." (SUMMARY IN FRE) excerpt

  3. Application of Expectation Maximization Method for Purchase Decision-Making Support in Welding Branch

    Directory of Open Access Journals (Sweden)

    Kujawińska Agnieszka

    2016-06-01

    Full Text Available The article presents a study of applying the proposed method of cluster analysis to support purchasing decisions in the welding industry. The authors analyze the usefulness of the non-hierarchical Expectation-Maximization (EM) method in the selection of material (212 combinations of flux and wire melt) for the Submerged Arc Welding (SAW) process. The proposed approach to cluster analysis proves useful in supporting purchase decisions.
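As an illustration of the EM algorithm named in this record, here is a minimal expectation-maximization fit of a two-component 1-D Gaussian mixture in plain Python. The data are synthetic, not the 212 flux/wire combinations, and real EM clustering would run over multi-dimensional feature vectors:

```python
import math, random

# Minimal Expectation-Maximization for a two-component 1-D Gaussian mixture:
# the E-step computes each component's responsibility for each point, the
# M-step re-estimates weights, means, and standard deviations.

random.seed(0)
data = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(6, 1) for _ in range(300)]

mu, sigma, w = [0.5, 5.5], [1.0, 1.0], [0.5, 0.5]   # initial guesses

def pdf(x, m, s):
    return math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

for _ in range(30):
    # E-step: responsibilities (soft cluster assignments)
    r = [[w[k] * pdf(x, mu[k], sigma[k]) for k in range(2)] for x in data]
    r = [[rk / sum(row) for rk in row] for row in r]
    # M-step: weighted re-estimation of the parameters
    for k in range(2):
        nk = sum(row[k] for row in r)
        w[k] = nk / len(data)
        mu[k] = sum(row[k] * x for row, x in zip(r, data)) / nk
        sigma[k] = math.sqrt(sum(row[k] * (x - mu[k]) ** 2 for row, x in zip(r, data)) / nk)

print([round(m, 1) for m in sorted(mu)])   # means recovered near 0 and 6
```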

  4. A mixed methods study to understand patient expectations for antibiotics for an upper respiratory tract infection

    Directory of Open Access Journals (Sweden)

    Christina Gaarslev

    2016-10-01

    Full Text Available Abstract Background Antimicrobial resistance is a public health challenge supplemented by inappropriate prescribing, especially for an upper respiratory tract infection in primary care. Patient/carer expectations have been identified as one of the main drivers for inappropriate antibiotics prescribing by primary care physicians. The aim of this study was to understand who is more likely to expect an antibiotic for an upper respiratory tract infection from their doctor and the reasons underlying it. Methods This study used a sequential mixed methods approach: a nationally representative cross sectional survey (n = 1509) and four focus groups. The outcome of interest was expectation and demand for an antibiotic from a doctor when presenting with a cold or flu. Results The study found 19.5 % of survey respondents reported that they would expect the doctor to prescribe antibiotics for a cold or flu. People younger than 65 years of age, those who never attended university and those speaking a language other than English at home were more likely to expect or demand antibiotics for a cold or flu. People who knew that ‘antibiotics don’t kill viruses’ and agreed that ‘taking an antibiotic when one is not needed means they won’t work in the future’ were less likely to expect or demand antibiotics. The main reasons for expecting antibiotics were believing that antibiotics are an effective treatment for a cold or flu and that they shortened the duration and potential deterioration of their illness. The secondary reason centered around the value or return on investment for visiting a doctor when feeling unwell. Conclusion Our study found that patients do not appear to feel they have a sufficiently strong incentive to consider the impact of their immediate use of antibiotics on antimicrobial resistance. The issue of antibiotic resistance needs to be explained and reframed as a more immediate health issue with dire consequences to ensure the success of future health campaigns.

  5. "Expectations to Change" ((E2C): A Participatory Method for Facilitating Stakeholder Engagement with Evaluation Findings

    Science.gov (United States)

    Adams, Adrienne E.; Nnawulezi, Nkiru A.; Vandenberg, Lela

    2015-01-01

    From a utilization-focused evaluation perspective, the success of an evaluation is rooted in the extent to which the evaluation was used by stakeholders. This paper details the "Expectations to Change" (E2C) process, an interactive, workshop-based method designed to engage primary users with their evaluation findings as a means of…

  6. A level set method for multiple sclerosis lesion segmentation.

    Science.gov (United States)

    Zhao, Yue; Guo, Shuxu; Luo, Min; Shi, Xue; Bilello, Michel; Zhang, Shaoxiang; Li, Chunming

    2018-06-01

    In this paper, we present a level set method for multiple sclerosis (MS) lesion segmentation from FLAIR images in the presence of intensity inhomogeneities. We use a three-phase level set formulation of segmentation and bias field estimation to segment MS lesions, normal tissue regions (including GM and WM), CSF, and the background from FLAIR images. To save computational load, we derive a two-phase formulation from the original multi-phase level set formulation to segment the MS lesions and normal tissue regions. The derived method inherits the desirable ability to precisely locate object boundaries of the original level set method, which simultaneously performs segmentation and estimation of the bias field to deal with intensity inhomogeneity. Experimental results demonstrate the advantages of our method over other state-of-the-art methods in terms of segmentation accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Volume Sculpting Using the Level-Set Method

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2002-01-01

    In this paper, we propose the use of the Level-Set Method as the underlying technology of a volume sculpting system. The main motivation is that this leads to a very generic technique for deformation of volumetric solids. In addition, our method preserves a distance field volume representation. A scaling window is used to adapt the Level-Set Method to local deformations and to allow the user to control the intensity of the tool. Level-Set based tools have been implemented in an interactive sculpting system, and we show sculptures created using the system.

  8. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H^1 penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results of the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.

  9. Multiattribute Grey Target Decision Method Based on Soft Set Theory

    Directory of Open Access Journals (Sweden)

    Xia Wang

    2014-01-01

    Full Text Available With respect to multiattribute decision-making problems in which the evaluation attribute sets differ and the evaluating values of alternatives are interval grey numbers, a multiattribute grey target decision-making method for differing attribute sets was proposed. The concept of a grey soft set was defined, and its "AND" operation was specified by combining it with the intersection operation of grey numbers. The expression of the new grey soft set over the attribute sets considered by all decision makers was obtained by applying the "AND" operation of grey soft sets, and the weights of the synthetic attributes were determined. The alternatives were ranked according to the distance from the bull's-eye of each alternative under the synthetic attribute sets. A green supplier selection problem is used to demonstrate the effectiveness of the proposed model.

  10. A parametric level-set method for partially discrete tomography

    NARCIS (Netherlands)

    A. Kadu (Ajinkya); T. van Leeuwen (Tristan); K.J. Batenburg (Joost)

    2017-01-01

    textabstractThis paper introduces a parametric level-set method for tomographic reconstruction of partially discrete images. Such images consist of a continuously varying background and an anomaly with a constant (known) grey-value. We express the geometry of the anomaly using a level-set function,

  11. Integration of educational methods and physical settings: Design ...

    African Journals Online (AJOL)

    ... setting without having an architectural background. The theoretical framework of the research allows designers to consider key features and users' possible activities in High/ Scope settings and shape their designs accordingly. Keywords: daily activity; design; High/Scope education; interior space; teaching method ...

  12. A mixed methods study to understand patient expectations for antibiotics for an upper respiratory tract infection.

    Science.gov (United States)

    Gaarslev, Christina; Yee, Melissa; Chan, Georgi; Fletcher-Lartey, Stephanie; Khan, Rabia

    2016-01-01

    Antimicrobial resistance is a public health challenge supplemented by inappropriate prescribing, especially for an upper respiratory tract infection in primary care. Patient/carer expectations have been identified as one of the main drivers for inappropriate antibiotics prescribing by primary care physicians. The aim of this study was to understand who is more likely to expect an antibiotic for an upper respiratory tract infection from their doctor and the reasons underlying it. This study used a sequential mixed methods approach: a nationally representative cross sectional survey (n = 1509) and four focus groups. The outcome of interest was expectation and demand for an antibiotic from a doctor when presenting with a cold or flu. The study found 19.5 % of survey respondents reported that they would expect the doctor to prescribe antibiotics for a cold or flu. People younger than 65 years of age, those who never attended university and those speaking a language other than English at home were more likely to expect or demand antibiotics for a cold or flu. People who knew that 'antibiotics don't kill viruses' and agreed that 'taking an antibiotic when one is not needed means they won't work in the future' were less likely to expect or demand antibiotics. The main reasons for expecting antibiotics were believing that antibiotics are an effective treatment for a cold or flu and that they shortened the duration and potential deterioration of their illness. The secondary reason centered around the value or return on investment for visiting a doctor when feeling unwell. Our study found that patients do not appear to feel they have a sufficiently strong incentive to consider the impact of their immediate use of antibiotics on antimicrobial resistance. The issue of antibiotic resistance needs to be explained and reframed as a more immediate health issue with dire consequences to ensure the success of future health campaigns.

  13. Methods for setting objectives and mission breakdown structure

    DEFF Research Database (Denmark)

    Riis, Eva

    2013-01-01

    Projects should create value for the organisations that instigate them or that will make use of their outcomes. This requires there be greater attention paid to defining the value expected of them, and it intensifies the need to formulate clear project objectives. Both tasks are complicated by the lack of a common terminology when setting objectives. This paper will propose concepts and a tool that should have a natural place in every project manager's toolbox.

  14. A comparison of different methods for decomposition of changes in expectation of life at birth and differentials in life expectancy at birth

    Directory of Open Access Journals (Sweden)

    P. K. Murthy

    2005-04-01

    Full Text Available Several methods have been proposed to decompose the difference between two life expectancies at birth into contributions by different age groups. In this study an attempt has been made to compare different methods with the Chandra Sekar (1949) method. The methodologies suggested by Arriaga, Lopez and Ruzicka, and Pollard have been extended. It is shown that all three methods, and also the Chandra Sekar method, in their modified (symmetrical) form produce the same result as those of the United Nations, Pollard, Andreev and Pressat. Finally, it is suggested to use the symmetric formulae of the above methods because the percentage contribution of the total of the interaction terms to the difference in life expectancy at birth is observed to be very negligible.
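A hedged numeric sketch of the kind of decomposition this record compares: the gap between two life expectancies is split into age-group contributions by replacing one age band's death rate at a time, and averaging the two replacement orders gives a symmetric variant in the spirit of the formulae discussed above. The rates and the simple abridged life table below are invented for illustration, not the study's data or its exact formulae:

```python
import math

# Decompose a difference in life expectancy at birth (e0) into age-band
# contributions by stepwise rate replacement, symmetrized over both orders.
# Stepwise replacement telescopes, so the contributions sum exactly to the
# total e0 difference (no interaction residual).

WIDTHS = [1, 14, 35, 20, 30]    # years in age bands 0, 1-14, 15-49, 50-69, 70-99

def e0(rates):
    """e0 from band death rates; deaths assumed mid-band, nobody survives 100."""
    alive, years = 1.0, 0.0
    for n, m in zip(WIDTHS, rates):
        dying = alive * (1 - math.exp(-m * n))
        years += (alive - dying) * n + dying * n / 2
        alive -= dying
    return years

def contributions(r1, r2):
    both = []
    for order in (range(len(WIDTHS)), reversed(range(len(WIDTHS)))):
        work, contr = list(r1), [0.0] * len(WIDTHS)
        for i in order:
            before = e0(work)
            work[i] = r2[i]
            contr[i] = e0(work) - before
        both.append(contr)
    return [(a + b) / 2 for a, b in zip(*both)]   # symmetric average

r1 = [0.030, 0.002, 0.004, 0.015, 0.080]
r2 = [0.010, 0.001, 0.003, 0.012, 0.070]
c = contributions(r1, r2)
print([round(v, 3) for v in c])
print(round(sum(c), 6), round(e0(r2) - e0(r1), 6))  # contributions sum to the gap
```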

  15. Recursive expectation-maximization clustering: A method for identifying buffering mechanisms composed of phenomic modules

    Science.gov (United States)

    Guo, Jingyu; Tian, Dehua; McKinney, Brett A.; Hartman, John L.

    2010-06-01

    Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance…

  16. Comparing Methods of Calculating Expected Annual Damage in Urban Pluvial Flood Risk Assessments

    DEFF Research Database (Denmark)

    Skovgård Olsen, Anders; Zhou, Qianqian; Linde, Jens Jørgen

    2015-01-01

    Estimating the expected annual damage (EAD) due to flooding in an urban area is of great interest for urban water managers and other stakeholders. It is a strong indicator for a given area showing how vulnerable it is to flood risk and how much can be gained by implementing e.g., climate change adaptation measures. This study identifies and compares three different methods for estimating the EAD based on unit costs of flooding of urban assets. One of these methods was used in previous studies and calculates the EAD based on a few extreme events by assuming a log-linear relationship between the cost of an event and the corresponding return period. This method is compared to methods that are either more complicated or require more calculations. The choice of method by which the EAD is calculated appears to be of minor importance. At all three case study areas it seems more important that there is a shift…
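The log-linear method this record mentions can be sketched as follows: damages for a few modeled events are interpolated log-linearly in return period and integrated over annual exceedance probability. All numbers below are invented for illustration, and this is only one plausible reading of such a method, not the paper's exact procedure:

```python
import math

# Sketch of expected annual damage (EAD) from a few modeled flood events,
# assuming event damage varies log-linearly with return period T.
# EAD = integral of damage over annual exceedance probability p = 1/T.

events = [(2, 0.1), (10, 0.8), (100, 2.5)]   # (return period [yr], damage [M€])

def damage(T):
    """Piecewise log-linear interpolation of damage vs. return period."""
    for (T1, d1), (T2, d2) in zip(events, events[1:]):
        if T <= T2 or (T2, d2) == events[-1]:
            frac = (math.log(T) - math.log(T1)) / (math.log(T2) - math.log(T1))
            return d1 + frac * (d2 - d1)

# trapezoidal integration over p between the most and least frequent events
steps = 10000
p_max, p_min = 1 / events[0][0], 1 / events[-1][0]
h = (p_max - p_min) / steps
ps = [p_min + i * h for i in range(steps + 1)]
ds = [damage(1 / p) for p in ps]
ead = sum((d0 + d1) / 2 * h for d0, d1 in zip(ds, ds[1:]))
print(round(ead, 3))   # M€ per year, ignoring events beyond the tabulated range
```

Truncating the integral at the tabulated range understates the EAD slightly; how the tails are handled is one of the modeling choices such comparisons examine.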

  17. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time… with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all…

  18. Coordinator Role Mobility Method for Increasing the Life Expectancy of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Jurenoks Aleksejs

    2017-05-01

    Full Text Available The general problem of wireless sensor network nodes is their low-power batteries, which significantly limit the life expectancy of a network. Nowadays the technical solutions related to energy resource management are being rapidly developed and integrated into the daily lives of people. Energy resource management systems use sensor networks for receiving and processing information in real time. The present paper proposes using a coordinator role mobility method for controlling the routing processes for energy balancing in nodes, which provides dynamic network reconfiguration possibilities. The method is designed to operate fully in the background and can be integrated into any existing working system.
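The energy-balancing intuition behind a mobile coordinator role can be sketched with a toy model (ours, not the paper's protocol): the coordinator drains its battery faster than ordinary nodes, so periodically handing the role to the node with the most remaining energy raises the minimum residual energy across the network.

```python
# Toy model: one coordinator per round paying coord_cost, everyone else node_cost.
def run(energies, rounds, coord_cost=5.0, node_cost=1.0):
    energies = list(energies)
    for _ in range(rounds):
        # hand the coordinator role to the node with the most remaining energy
        coord = max(range(len(energies)), key=lambda i: energies[i])
        for i in range(len(energies)):
            energies[i] -= coord_cost if i == coord else node_cost
    return energies

# Fixed coordinator at node 0 versus the rotating role, 4 nodes, 20 rounds.
static = [100.0 - 5.0 * 20] + [100.0 - 1.0 * 20] * 3
mobile = run([100.0] * 4, rounds=20)
print(min(static), min(mobile))   # 0.0 vs 60.0: rotation raises the minimum
```

With a fixed coordinator the network dies when node 0 is drained; rotation spreads the coordinator cost evenly, which is the life-expectancy gain the abstract describes.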

  19. Efficiency of Choice Set Generation Methods for Bicycle Routes

    DEFF Research Database (Denmark)

    Halldórsdóttir, Katrín; Rieser-Schüssler, Nadine; W. Axhausen, Kay

    …behaviour, observed choices and alternatives composing the choice set of each cyclist are necessary. However, generating the alternative choice sets can prove challenging. This paper analyses the efficiency of various choice set generation methods for bicycle routes in order to contribute to our… …travelling information with GPS loggers, compared to self-reported RP data, is more accurate geographic locations and routes. Also, the GPS traces give more reliable information on times and prevent trip underreporting, and it is possible to collect information on many trips by the same person without…

  20. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    International Nuclear Information System (INIS)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-01-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method. (paper)

  1. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  2. A Review of Methods for Analysis of the Expected Value of Information.

    Science.gov (United States)

    Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca

    2017-10-01

    In recent years, value-of-information analysis has become more widespread in health economic evaluations, specifically as a tool to guide further research and perform probabilistic sensitivity analysis. This is partly due to methodological advancements allowing for the fast computation of a typical summary known as the expected value of partial perfect information (EVPPI). A recent review discussed some approximation methods for calculating the EVPPI, but as the research has been active over the intervening years, that review does not discuss some key estimation methods. Therefore, this paper presents a comprehensive review of these new methods. We begin by providing the technical details of these computation methods. We then present two case studies in order to compare the estimation performance of these new methods. We conclude that a method based on nonparametric regression is the best option for calculating the EVPPI in terms of accuracy, computational time, and ease of implementation. This means that the EVPPI can now be used practically in health economic evaluations, especially as all the methods are developed in parallel with R functions and a web app to aid practitioners.
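A minimal sketch of the regression-based EVPPI estimator on a toy two-strategy decision model (the model, parameter names and sample size are our assumptions, not the paper's case studies): each strategy's simulated net benefit is regressed on the parameter of interest, and the EVPPI is the mean of the per-sample maximum of the fitted values minus the maximum of the overall means.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy decision model: two strategies with net benefits NB0 = 0 and
# NB1 = theta1 + theta2, where theta1 is the parameter of interest and
# theta2 is a nuisance parameter.
n = 100_000
theta1 = rng.normal(size=n)
theta2 = rng.normal(size=n)
nb = np.column_stack([np.zeros(n), theta1 + theta2])

# Regression-based estimator: regress each strategy's net benefit on theta1,
# then EVPPI = E[max_d fitted_d(theta1)] - max_d E[NB_d].
fitted = np.empty_like(nb)
for d in range(nb.shape[1]):
    coef = np.polyfit(theta1, nb[:, d], deg=1)
    fitted[:, d] = np.polyval(coef, theta1)

evppi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
print(round(evppi, 3))   # analytic value for this toy model is E[max(0, theta1)] ≈ 0.399
```

A linear fit suffices here because the conditional expectation really is linear in theta1; the nonparametric-regression methods the paper reviews replace the `polyfit` step with a flexible smoother.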

  3. Optimizing distance-based methods for large data sets

    Science.gov (United States)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n²). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
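The memory argument can be illustrated with a small sketch (our code, not the authors' algorithm): distance-based indices such as the D&O-Index or the M-function only need the distribution of bilateral distances, so streaming one row of distances at a time into a fixed-size histogram avoids ever storing the n × n distance matrix.

```python
import numpy as np

rng = np.random.default_rng(2)

def distance_histogram(xy, bins, d_max):
    """Histogram of all pairwise distances without the n x n matrix;
    peak memory is O(n + bins) rather than O(n^2)."""
    edges = np.linspace(0.0, d_max, bins + 1)
    hist = np.zeros(bins, dtype=np.int64)
    for i in range(len(xy) - 1):
        # distances from point i to all later points: each pair counted once
        d = np.hypot(*(xy[i + 1:] - xy[i]).T)
        hist += np.histogram(d, bins=edges)[0]
    return hist, edges

xy = rng.random((500, 2))                        # 500 random "firm" locations
hist, _ = distance_histogram(xy, bins=20, d_max=np.sqrt(2.0))
print(hist.sum())                                # 500 * 499 / 2 = 124750 pairs
```

The running time is still quadratic in n, but the fixed-size histogram is exactly what kernel-smoothed indices consume, which is the key to scaling these methods to millions of points.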

  4. Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method

    Directory of Open Access Journals (Sweden)

    Mohd Izhan Mohd Yusoff

    2013-01-01

    Full Text Available Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use Gaussian mixed model, which is a probabilistic model normally used in speech recognition to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. Firstly, we look at a mechanism for the determination of the initial number of Gaussian components and the choice of the initial values of the algorithm using the kernel method. We show via simulation that the technique improves the performance of the algorithm. Secondly, we developed a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criteria. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way to introduce a comprehensive method for detecting fraud calls in future work.
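A compact sketch of the underlying EM iteration for a one-dimensional Gaussian mixture (illustrative only: percentile-based starting means stand in for the paper's kernel-based initialisation, and the AIC is computed as the order-selection criterion the authors use):

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D two-component Gaussian mixture fitted by EM.
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])

def em_gmm(x, K, iters=200):
    w = np.full(K, 1.0 / K)
    mu = np.percentile(x, np.linspace(10, 90, K))   # spread-out starting means
    var = np.full(K, x.var())
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation of weights, means and variances
        nk = resp.sum(axis=0)
        w, mu = nk / len(x), (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    loglik = np.log(dens.sum(axis=1)).sum()
    return loglik, np.sort(mu)

ll2, mu2 = em_gmm(x, 2)
aic2 = 2 * (3 * 2 - 1) - 2 * ll2   # parameters per model: K-1 weights, K means, K variances
print(mu2)                          # estimated means, close to the true (-2, 3)
```

Comparing `aic2` against the AIC of fits with other K values selects the model order, mirroring the paper's second contribution.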

  5. A local level set method based on a finite element method for unstructured meshes

    International Nuclear Information System (INIS)

    Ngo, Long Cu; Choi, Hyoung Gwon

    2016-01-01

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time
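A toy one-dimensional, structured-grid sketch can illustrate the two ingredients named above, narrow-band updating and direct re-initialization (the paper itself works with a finite element method on unstructured meshes; every parameter here is our simplified stand-in):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
phi = x - 0.3                       # signed distance function, interface at x = 0.3
u, dt, band = 1.0, 0.002, 0.15      # advection speed, time step, band half-width

for step in range(100):             # total travel: u * dt * 100 = 0.2
    mask = np.abs(phi) < band       # narrow band around the zero level set
    dphi = np.zeros_like(phi)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx          # first-order upwind (u > 0)
    phi = np.where(mask, phi - dt * u * dphi, phi)  # update band nodes only
    if (step + 1) % 10 == 0:
        # direct re-initialization: rebuild the signed distance function
        # from the current interface location
        xi = x[np.argmin(np.abs(phi))]
        phi = x - xi

print(x[np.argmin(np.abs(phi))])    # interface has moved to ~0.5
```

Without the periodic re-initialization the frozen nodes at the band edge would eventually block the interface, which is why the advection and re-initialization steps are coupled in the paper's algorithm.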

  6. A local level set method based on a finite element method for unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Long Cu; Choi, Hyoung Gwon [School of Mechanical Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-12-15

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.

  7. Method for calculating annual energy efficiency improvement of TV sets

    International Nuclear Information System (INIS)

    Varman, M.; Mahlia, T.M.I.; Masjuki, H.H.

    2006-01-01

    The popularization of 24 h pay-TV, interactive video games, web-TV, VCD and DVD is poised to have a large impact on overall TV electricity consumption in Malaysia. Following this increased consumption, energy efficiency standards present a highly effective measure for decreasing electricity consumption in the residential sector. The main problem in setting an energy efficiency standard is identifying the annual efficiency improvement, due to the lack of time-series statistical data available in developing countries. This study attempts to present a method of calculating the annual energy efficiency improvement for TV sets, which can be used for implementing energy efficiency standards for TV sets in Malaysia and other developing countries. Although the presented result is only an approximation, it is one practical way of establishing an energy standard. Furthermore, the method can be used for other appliances without any major modification
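If efficiency figures are available for two points in time, the implied constant annual improvement rate follows from compound growth; the figures below are hypothetical, not the study's data:

```python
# If a TV efficiency index improved from eff0 to eff1 over n years, the
# implied constant annual improvement rate r satisfies eff1 = eff0 * (1 + r)**n.
def annual_improvement(eff0, eff1, years):
    return (eff1 / eff0) ** (1.0 / years) - 1.0

r = annual_improvement(100.0, 120.0, 10)   # hypothetical: index 100 -> 120 in 10 years
print(round(r * 100, 2))                   # ~1.84 % per year
```

Projecting this rate forward is one simple way to set a future standard level when, as the abstract notes, long time series are unavailable.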

  8. Method for calculating annual energy efficiency improvement of TV sets

    Energy Technology Data Exchange (ETDEWEB)

    Varman, M. [Department of Mechanical Engineering, University of Malaya, Lembah Pantai, 50603 Kuala Lumpur (Malaysia); Mahlia, T.M.I. [Department of Mechanical Engineering, University of Malaya, Lembah Pantai, 50603 Kuala Lumpur (Malaysia)]. E-mail: indra@um.edu.my; Masjuki, H.H. [Department of Mechanical Engineering, University of Malaya, Lembah Pantai, 50603 Kuala Lumpur (Malaysia)

    2006-10-15

    The popularization of 24 h pay-TV, interactive video games, web-TV, VCD and DVD is poised to have a large impact on overall TV electricity consumption in Malaysia. Following this increased consumption, energy efficiency standards present a highly effective measure for decreasing electricity consumption in the residential sector. The main problem in setting an energy efficiency standard is identifying the annual efficiency improvement, due to the lack of time-series statistical data available in developing countries. This study attempts to present a method of calculating the annual energy efficiency improvement for TV sets, which can be used for implementing energy efficiency standards for TV sets in Malaysia and other developing countries. Although the presented result is only an approximation, it is one practical way of establishing an energy standard. Furthermore, the method can be used for other appliances without any major modification.

  9. A Level Set Discontinuous Galerkin Method for Free Surface Flows

    DEFF Research Database (Denmark)

    Grooss, Jesper; Hesthaven, Jan

    2006-01-01

    We present a discontinuous Galerkin method on a fully unstructured grid for the modeling of unsteady incompressible fluid flows with free surfaces. The surface is modeled by embedding and represented by a level set. We discuss the discretization of the flow equations and the level set equation…

  10. A Memory and Computation Efficient Sparse Level-Set Method

    NARCIS (Netherlands)

    Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.

    Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the…

  11. Parent and Staff Expectations for Continuity of Home Practices in the Child Care Setting for Families with Diverse Cultural Backgrounds

    Science.gov (United States)

    De Gioia, Katey

    2009-01-01

    The use of childcare services for very young children (birth to three years) has increased dramatically in the past two decades (Department of Families, Community Services and Indigenous Affairs, 2004). This article investigates the expectations for cultural continuity of caregiving practices (with particular emphasis on sleep and feeding) between…

  12. Use of decision criteria based on expected values to support decision-making in a production assurance and safety setting

    International Nuclear Information System (INIS)

    Aven, T.; Flage, R.

    2009-01-01

    We consider decision problems related to production assurance and safety. The issue is to what extent we should use decision criteria based on expected values, such as the expected net present value (E[NPV]) and the expected cost per expected number of saved lives (ICAF), to guide the decision. Such criteria are recognised as practical tools for supporting decision-making under uncertainty, but is uncertainty adequately taken into account by these criteria? Based on the prevailing practice and the existing literature, we conclude that there is a need for a clarification of the rationale of these criteria. Adjustments of the standard approaches have been suggested to reflect risks and uncertainties, but can cautionary and precautionary concerns be replaced by formulae and mechanical procedures? These issues are discussed in the present paper, particularly addressing the company level. We argue that the search for such formulae and procedures should be replaced by a more balanced perspective acknowledging that there will always be a need for management review and judgment beyond the realm of the analyses. Most of the suggested adjustments of the E[NPV] and ICAF approaches should be avoided. They add more confusion than value.
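For concreteness, the two expected-value criteria discussed here can be computed as follows (all figures are invented for illustration; the paper's point is precisely that such numbers should not replace managerial judgment):

```python
# E[NPV]: expected yearly benefit B discounted at rate r over T years,
# against an up-front investment cost C.
def expected_npv(cost, benefit, rate, years):
    return -cost + sum(benefit / (1 + rate) ** t for t in range(1, years + 1))

# ICAF: expected cost per expected number of saved lives.
def icaf(cost, expected_lives_saved):
    return cost / expected_lives_saved

enpv = expected_npv(cost=10e6, benefit=1.5e6, rate=0.05, years=10)
print(round(enpv / 1e6, 2))        # 1.58: E[NPV] in millions
print(icaf(10e6, 0.5))             # 20000000.0 per statistical life saved
```

Both numbers collapse distributions into single expectations, which is exactly the feature the authors argue cannot by itself capture cautionary and precautionary concerns.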

  13. Reconstruction of electrical impedance tomography (EIT) images based on the expectation maximum (EM) method.

    Science.gov (United States)

    Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-11-01

    Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. The image reconstruction for EIT is an inverse problem, which is both non-linear and ill-posed. The traditional regularization method cannot avoid introducing negative values in the solution. The negativity of the solution produces artifacts in reconstructed images in the presence of noise. A statistical method, namely, the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed to the non-negatively constrained likelihood minimization problem. The solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses the strategies of choosing parameters. Simulation and experimental results indicate that the reconstructed images with higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
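The paper solves the non-negatively constrained likelihood with a gradient projection-reduced Newton scheme; as a simpler illustration of why an EM formulation avoids negative values, below is the classic multiplicative EM (Richardson-Lucy) update for a toy non-negative linear system (the system and all settings are ours, not an EIT model):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy non-negative inverse problem b = A x with x >= 0.
A = rng.random((40, 20))
x_true = rng.random(20)
b = A @ x_true

x = np.ones(20)                          # strictly positive start
col_sums = A.T @ np.ones(40)
for _ in range(2000):
    # multiplicative EM update: a ratio of positive quantities, so the
    # iterate can never become negative
    x *= (A.T @ (b / (A @ x))) / col_sums

rel = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
print(x.min() >= 0.0, rel)               # non-negative iterate, small residual
```

Because non-negativity is built into the update itself, no projection step is needed, which is the structural advantage the EM formulation brings to problems like EIT.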

  14. Landscape democracy, three sets of values, and the connoisseur method

    DEFF Research Database (Denmark)

    Arler, Finn; Mellqvist, Helena

    2015-01-01

    The European Landscape Convention has brought up the question of democracy in relation to landscape transformation, but without a clear definition of democracy. This paper conceptualises democracy in relation to three main sets of values related to self-determination, co-determination and respect for argument. It examines various methods that have been used to try to make landscape decisions more democratic. In the last part of the paper the connoisseur method is introduced. This method emphasises stakeholder participation in deliberative processes with a particular focus on place-based knowledge…

  15. A mixed methods analysis of experiences and expectations among early-career medical oncologists in Australia.

    Science.gov (United States)

    Wong, W K Tim; Kirby, Emma; Broom, Alex; Sibbritt, David; Francis, Kay; Karapetis, Christos S; Karikios, Deme; Harrup, Rosemary; Lwin, Zarnie

    2018-01-26

    A viable and sustainable medical oncology profession is integral for meeting the increasing demand for quality cancer care. The aim of this study was to explore the workforce-related experiences, perceptions and career expectations of early-career medical oncologists in Australia. A mixed-methods design, including a survey (n = 170) and nested qualitative semistructured interviews (n = 14) with early-career medical oncologists. Recruitment was through the Medical Oncology Group of Australia. Qualitative data were thematically analyzed and for the survey results, logistic regression modeling was conducted. Early-career medical oncologists experienced uncertainty regarding their future employment opportunities. The competitive job market has made them cautious about securing a preferred job leading to a perceived need to improve their qualifications through higher degree training and research activities. The following themes and trends were identified from the qualitative and quantitative analyses: age, career stage and associated early-career uncertainty; locale, professional competition and training preferences; participation in research and evolving professional expectations; and workload and career development opportunities as linked to career uncertainty. Perceived diminished employment opportunities in the medical oncology profession, and shifting expectations to be "more qualified," have increased uncertainty among junior medical oncologists in terms of their future career prospects. Structural factors relating to adequate funding of medical oncology positions may facilitate or inhibit progressive change in the workforce and its sustainability. Workforce planning and strategies informed by findings from this study will be necessary in ensuring that both the needs of cancer patients and of medical oncologists are met. © 2018 John Wiley & Sons Australia, Ltd.

  16. A deep level set method for image segmentation

    OpenAIRE

    Tang, Min; Valipour, Sepehr; Zhang, Zichen Vincent; Cobzas, Dana; MartinJagersand

    2017-01-01

    This paper proposes a novel image segmentation approach that integrates fully convolutional networks (FCNs) with a level set model. Compared with an FCN, the integrated method can incorporate smoothing and prior information to achieve an accurate segmentation. Furthermore, rather than using the level set model as a post-processing tool, we integrate it into the training phase to fine-tune the FCN. This allows the use of unlabeled data during training in a semi-supervised setting. Using two types o…

  17. A working-set framework for sequential convex approximation methods

    DEFF Research Database (Denmark)

    Stolpe, Mathias

    2008-01-01

    We present an active-set algorithmic framework intended as an extension to existing implementations of sequential convex approximation methods for solving nonlinear inequality constrained programs. The framework is independent of the choice of approximations and the stabilization technique used to guarantee global convergence of the method. The algorithm works directly on the nonlinear constraints in the convex sub-problems and solves a sequence of relaxations of the current sub-problem. The algorithm terminates with the optimal solution to the sub-problem after solving a finite number of relaxations.

  18. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains

    DEFF Research Database (Denmark)

    Tataru, Paula Cristina; Hobolth, Asger

    2011-01-01

    BACKGROUND: Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. RESULTS: We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned… …of the algorithms is available at www.birc.au.dk/~paula/. CONCLUSIONS: We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually…
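One of the algorithms compared in the paper diagonalises the rate matrix (the EVD approach); a minimal sketch for a two-state chain (our rate matrix and time point, not the paper's data) computes the expected time spent in each state conditional on the endpoints, using the closed-form integral available after eigendecomposition:

```python
import numpy as np

# Toy 2-state rate matrix and time interval (distinct real eigenvalues assumed).
Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])
t = 1.5

lam, U = np.linalg.eig(Q)
Uinv = np.linalg.inv(U)
P = U @ np.diag(np.exp(lam * t)) @ Uinv          # transition probabilities P(t)

def expected_time(c, a, b):
    """Expected time in state c on [0, t], given X_0 = a and X_t = b."""
    E = np.zeros_like(Q)
    E[c, c] = 1.0
    Acc = Uinv @ E @ U
    # J[i, j] = integral_0^t exp(lam_i * s) * exp(lam_j * (t - s)) ds
    L1, L2 = np.meshgrid(lam, lam, indexing="ij")
    with np.errstate(divide="ignore", invalid="ignore"):
        J = np.where(np.isclose(L1, L2),
                     t * np.exp(L1 * t),
                     (np.exp(L1 * t) - np.exp(L2 * t)) / (L1 - L2))
    I_int = U @ (Acc * J) @ Uinv                 # integral exp(Qs) E exp(Q(t-s)) ds
    return I_int[a, b] / P[a, b]

times = [expected_time(c, 0, 1) for c in (0, 1)]
print(round(sum(times), 6))                      # the times sum to t = 1.5
```

The sum-to-t check follows because the state indicators sum to the identity, so the conditional expected times must partition the interval; the paper's UNI and EXPM alternatives compute the same quantities without diagonalisation.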

  19. GenoSets: visual analytic methods for comparative genomics.

    Directory of Open Access Journals (Sweden)

    Aurora A Cain

    Full Text Available Many important questions in biology are, fundamentally, comparative, and this extends to our analysis of a growing number of sequenced genomes. Existing genomic analysis tools are often organized around literal views of genomes as linear strings. Even when information is highly condensed, these views grow cumbersome as larger numbers of genomes are added. Data aggregation and summarization methods from the field of visual analytics can provide abstracted comparative views, suitable for sifting large multi-genome datasets to identify critical similarities and differences. We introduce a software system for visual analysis of comparative genomics data. The system automates the process of data integration, and provides the analysis platform to identify and explore features of interest within these large datasets. GenoSets borrows techniques from business intelligence and visual analytics to provide a rich interface of interactive visualizations supported by a multi-dimensional data warehouse. In GenoSets, visual analytic approaches are used to enable querying based on orthology, functional assignment, and taxonomic or user-defined groupings of genomes. GenoSets links this information together with coordinated, interactive visualizations for both detailed and high-level categorical analysis of summarized data. GenoSets has been designed to simplify the exploration of multiple genome datasets and to facilitate reasoning about genomic comparisons. Case examples are included showing the use of this system in the analysis of 12 Brucella genomes. GenoSets software and the case study dataset are freely available at http://genosets.uncc.edu. We demonstrate that the integration of genomic data using a coordinated multiple view approach can simplify the exploration of large comparative genomic data sets, and facilitate reasoning about comparisons and features of interest.

  20. Comparing Methods of Calculating Expected Annual Damage in Urban Pluvial Flood Risk Assessments

    Directory of Open Access Journals (Sweden)

    Anders Skovgård Olsen

    2015-01-01

    Full Text Available Estimating the expected annual damage (EAD due to flooding in an urban area is of great interest for urban water managers and other stakeholders. It is a strong indicator for a given area showing how vulnerable it is to flood risk and how much can be gained by implementing e.g., climate change adaptation measures. This study identifies and compares three different methods for estimating the EAD based on unit costs of flooding of urban assets. One of these methods was used in previous studies and calculates the EAD based on a few extreme events by assuming a log-linear relationship between cost of an event and the corresponding return period. This method is compared to methods that are either more complicated or require more calculations. The choice of method by which the EAD is calculated appears to be of minor importance. At all three case study areas it seems more important that there is a shift in the damage costs as a function of the return period. The shift occurs approximately at the 10 year return period and can perhaps be related to the design criteria for sewer systems. Further, it was tested if the EAD estimation could be simplified by assuming a single unit cost per flooded area. The results indicate that within each catchment this may be a feasible approach. However the unit costs varies substantially between different case study areas. Hence it is not feasible to develop unit costs that can be used to calculate EAD, most likely because the urban landscape is too heterogeneous.
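The core EAD computation described here amounts to integrating event damage over annual exceedance probability; a sketch with an invented damage-cost curve (not the study's data, and ignoring the tail beyond the largest return period):

```python
import numpy as np

# Hypothetical event damage D(T) for a set of return periods T; the EAD is
# the integral of damage over the annual exceedance probability p = 1/T.
T = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])    # return periods, years
D = np.array([0.0, 0.1, 0.4, 1.0, 2.5, 6.0, 10.0]) * 1e6  # damage per event

p = 1.0 / T
idx = np.argsort(p)                  # integrate from rare to frequent events
ps, Ds = p[idx], D[idx]
ead = np.sum(0.5 * (Ds[:-1] + Ds[1:]) * np.diff(ps))      # trapezoidal rule
print(round(ead))                    # 465000 for these numbers
```

The log-linear few-event method mentioned in the abstract replaces the tabulated curve with a fitted log-linear damage-return-period relationship before performing the same integration.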

  1. SET: A Pupil Detection Method Using Sinusoidal Approximation

    Directory of Open Access Journals (Sweden)

    Amir-Homayoun eJavadi

    2015-04-01

    Full Text Available Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as ‘SET’) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (‘Natural’) and images of less challenging indoor scenes (‘CASIA-Iris-Thousand’). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (‘DLL’) which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk).

  2. [Teaching methods for clinical settings: a literature review].

    Science.gov (United States)

    Brugnolli, Anna; Benaglio, Carla

    2017-01-01

    Teaching methods for clinical settings: a review. The teaching process during internship requires several methods to promote the acquisition of complex skills such as relational, decisional and planning abilities alongside technical ones. The aim of this review is to describe effective teaching methods that promote the learning of relational, decisional and planning skills, drawing on a literature review of the teaching methods that have proven most effective, are most appreciated by students, and are most frequently used in Italian nursing schools. Clinical teaching is a central element in transforming the clinical experiences of an internship into professional competences. Students are gradually brought to become more independent because they are offered opportunities to practice in real contexts, to receive feedback, to have positive role models and to become more autonomous: all elements that facilitate and potentiate learning. Clinical teaching should be based on a variety of methods, and students value a gradual progression, both in clinical experiences and in teaching strategies, from closely supervised methods towards methods oriented to reflection on clinical practice and self-directed learning.

  3. Basis set approach in the constrained interpolation profile method

    International Nuclear Information System (INIS)

    Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.

    2003-07-01

    We propose a simple polynomial basis-set that is easily extendable to any desired higher-order accuracy. This method is based on the Constrained Interpolation Profile (CIP) method and the profile is chosen so that the subgrid scale solution approaches the real solution by the constraints from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, this solution quickly converges. 3rd and 5th order polynomials are tested on the one-dimensional Schroedinger equation and are proved to give solutions a few orders of magnitude higher in accuracy than conventional methods for lower-lying eigenstates. (author)
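The constraint idea can be illustrated on a single grid cell (a simplified sketch of the cubic CIP profile, with our own variable names): the sub-grid cubic is pinned down by the function value and its spatial derivative at both cell ends, which is what keeps the sub-grid solution consistent with the derivative of the master equation.

```python
import numpy as np

def cip_cubic(x0, x1, f0, f1, g0, g1):
    """Cubic p on [x0, x1] with p(x0)=f0, p(x1)=f1, p'(x0)=g0, p'(x1)=g1."""
    h = x1 - x0
    # four constraints on the coefficients of a*s^3 + b*s^2 + c*s + d, s = x - x0
    M = np.array([[0.0,      0.0,    0.0, 1.0],   # p(x0)  = f0
                  [h**3,     h**2,   h,   1.0],   # p(x1)  = f1
                  [0.0,      0.0,    1.0, 0.0],   # p'(x0) = g0
                  [3*h**2,   2*h,    1.0, 0.0]])  # p'(x1) = g1
    a, b, c, d = np.linalg.solve(M, [f0, f1, g0, g1])
    return lambda x: ((a*(x - x0) + b)*(x - x0) + c)*(x - x0) + d

# The profile reproduces a cubic exactly: test with f(x) = x^3 - 2x.
f  = lambda x: x**3 - 2*x
fp = lambda x: 3*x**2 - 2
p = cip_cubic(0.0, 1.0, f(0.0), f(1.0), fp(0.0), fp(1.0))
print(abs(p(0.5) - f(0.5)) < 1e-12)   # True
```

Raising the polynomial order simply adds further derivative constraints, which is the sense in which the basis set is "easily extendable to any desired higher-order accuracy".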

  4. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown…

  5. Three randomized trials of maternal influenza immunization in Mali, Nepal, and South Africa: Methods and expectations.

    Science.gov (United States)

    Omer, Saad B; Richards, Jennifer L; Madhi, Shabir A; Tapia, Milagritos D; Steinhoff, Mark C; Aqil, Anushka R; Wairagkar, Niteen

    2015-07-31

    Influenza infection in pregnancy can have adverse impacts on maternal, fetal, and infant outcomes. Influenza vaccination in pregnancy is an appealing strategy to protect pregnant women and their infants. The Bill & Melinda Gates Foundation is supporting three large, randomized trials in Nepal, Mali, and South Africa evaluating the efficacy and safety of maternal immunization to prevent influenza disease in pregnant women and their infants <6 months of age. Results from these individual studies are expected in 2014 and 2015. While the results from the three maternal immunization trials are likely to strengthen the evidence base regarding the impact of influenza immunization in pregnancy, expectations for these results should be realistic. For example, evidence from previous influenza vaccine studies - conducted in general, non-pregnant populations - suggests substantial geographic and year-to-year variability in influenza incidence and vaccine efficacy/effectiveness. Since the evidence generated from the three maternal influenza immunization trials will be complementary, in this paper we present a side-by-side description of the three studies as well as the similarities and differences between these trials in terms of study location, design, outcome evaluation, and laboratory and epidemiological methods. We also describe the likely remaining knowledge gap after the results from these trials become available along with a description of the analyses that will be conducted when the results from these individual data are pooled. Moreover, we highlight that additional research on logistics of seasonal influenza vaccine supply, surveillance and strain matching, and optimal delivery strategies for pregnant women will be important for informing global policy related to maternal influenza immunization. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Life Expectancies Applied to Specific Statuses: a History of the Indicators and the Methods of Calculation {Population, 3, 1998)

    OpenAIRE

    N. Brouard; J.-M. Robine; E. Cambois

    1999-01-01

    Cambois (Emmanuelle), Robine (Jean-Marie), Brouard (Nicolas).- Life Expectancies Applied to Specific Statuses: A History of the Indicators and the Methods of Calculation. Indicators of life expectancy applied to specific statuses, such as the state of health or professional status, were introduced at the end of the 1930s and are currently the object of renewed interest. Because they relate mortality to different domains (health, professional activity), applied life expectancies reflect simultan...

  7. Analysis method set up to check against adulterated export honey

    International Nuclear Information System (INIS)

    Lyon, G.L.

    2001-01-01

    Over the past few years, North America has experienced occasional problems with the adulteration of honey, mainly by additions of other, cheaper sugars to increase bulk and lower production costs. The main addition was usually high fructose corn syrup, which has a chemical composition similar to that of honey. As a consequence of this type of adulteration, a method for its detection was developed using isotope ratio mass spectrometry (IRMS). This was later refined to be more sensitive and is now specified as an Official Test. The Institute of Geological and Nuclear Sciences has now set up the analysis method to the international criteria at the Rafter Stable Isotope Laboratory in Lower Hutt. 2 refs

  8. A comparison of cosegregation analysis methods for the clinical setting.

    Science.gov (United States)

    Rañola, John Michael O; Liu, Quanhui; Rosenthal, Elisabeth A; Shirts, Brian H

    2018-04-01

    Quantitative cosegregation analysis can help evaluate the pathogenicity of genetic variants. However, genetics professionals without statistical training often use simple methods, reporting only qualitative findings. We evaluate the potential utility of quantitative cosegregation in the clinical setting by comparing three methods. One thousand pedigrees each were simulated for benign and pathogenic variants in BRCA1 and MLH1 using United States historical demographic data to produce pedigrees similar to those seen in the clinic. These pedigrees were analyzed using two robust methods, full likelihood Bayes factors (FLB) and cosegregation likelihood ratios (CSLR), and a simpler method, counting meioses. Both FLB and CSLR outperform counting meioses when dealing with pathogenic variants, though counting meioses is not far behind. For benign variants, FLB and CSLR greatly outperform counting meioses, which is unable to generate evidence for benign variants. Comparing FLB and CSLR, we find that the two methods perform similarly, indicating that quantitative results from either of these methods could be combined in multifactorial calculations. Combining quantitative information will be important, as isolated use of cosegregation in single families will yield classification for less than 1% of variants. To encourage wider use of robust cosegregation analysis, we present a website (http://www.analyze.myvariant.org) which implements the CSLR, FLB, and counting meioses methods for ATM, BRCA1, BRCA2, CHEK2, MEN1, MLH1, MSH2, MSH6, and PMS2. We also present an R package, CoSeg, which performs the CSLR analysis on any gene with user supplied parameters. Future variant classification guidelines should allow nuanced inclusion of cosegregation evidence against pathogenicity.
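
    The simplest of the three methods, counting meioses, can be sketched in a few lines (an illustrative toy; the interpretation thresholds below are assumptions for the example, not the paper's calibration):

```python
def counting_meioses_lr(n):
    """Counting-meioses likelihood ratio: each informative meiosis in
    which the variant cosegregates with disease halves the chance of
    seeing the pattern by coincidence, so LR = 2**n."""
    return 2.0 ** n

def interpret(lr):
    """Illustrative (assumed) thresholds for the example."""
    if lr >= 16.0:
        return "strong"
    if lr >= 4.0:
        return "supporting"
    return "insufficient"

# Five informative meioses give LR = 32 -- qualitatively strong
# cosegregation evidence. Note the paper's point, however: such counts
# cannot generate evidence *against* pathogenicity for benign variants.
print(counting_meioses_lr(5), interpret(counting_meioses_lr(5)))
```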

  9. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains.

    Science.gov (United States)

    Tataru, Paula; Hobolth, Asger

    2011-12-05

    Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid, or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.

  10. Comparison of methods for calculating conditional expectations of sufficient statistics for continuous time Markov chains

    Directory of Open Access Journals (Sweden)

    Tataru Paula

    2011-12-01

    Full Text Available Abstract Background Continuous time Markov chains (CTMCs) are a widely used model for describing the evolution of DNA sequences on the nucleotide, amino acid, or codon level. The sufficient statistics for CTMCs are the time spent in a state and the number of changes between any two states. In applications, past evolutionary events (exact times and types of changes) are inaccessible and the past must be inferred from DNA sequence data observed in the present. Results We describe and implement three algorithms for computing linear combinations of expected values of the sufficient statistics, conditioned on the end-points of the chain, and compare their performance with respect to accuracy and running time. The first algorithm is based on an eigenvalue decomposition of the rate matrix (EVD), the second on uniformization (UNI), and the third on integrals of matrix exponentials (EXPM). The implementation in R of the algorithms is available at http://www.birc.au.dk/~paula/. Conclusions We use two different models to analyze the accuracy and eight experiments to investigate the speed of the three algorithms. We find that they have similar accuracy and that EXPM is the slowest method. Furthermore we find that UNI is usually faster than EVD.
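
    The conditional expectations being compared can be illustrated with a naive numerical version of the EXPM flavour (a sketch, not the authors' R implementation): the expected time spent in state j, conditioned on the endpoints, is a time integral of products of transition probabilities, evaluated here by trapezoidal quadrature.

```python
import numpy as np
from scipy.linalg import expm

def expected_time_in_state(Q, T, a, b, j, n=501):
    """E[time spent in state j on [0, T] | X(0)=a, X(T)=b] for a CTMC
    with rate matrix Q, via trapezoidal quadrature of the
    matrix-exponential integral (the 'EXPM' approach, naively)."""
    s = np.linspace(0.0, T, n)
    vals = np.array([expm(Q * si)[a, j] * expm(Q * (T - si))[j, b] for si in s])
    w = np.full(n, T / (n - 1))      # trapezoid weights
    w[0] *= 0.5
    w[-1] *= 0.5
    return float(vals @ w) / expm(Q * T)[a, b]

# Two-state chain with rate 1 in each direction; the conditional
# occupation times over all states must sum to the interval length T.
Q = np.array([[-1.0, 1.0], [1.0, -1.0]])
T = 1.0
t0 = expected_time_in_state(Q, T, 0, 0, 0)
t1 = expected_time_in_state(Q, T, 0, 0, 1)
print(t0, t1)
```

    UNI and EVD compute the same expectations through different decompositions of e^{Qt}; the identity t0 + t1 = T is a convenient sanity check for any of the three.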

  11. A method to derive fixed budget results from expected optimisation times

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Jansen, Thomas; Witt, Carsten

    2013-01-01

    At last year's GECCO a novel perspective for theoretical performance analysis of evolutionary algorithms and other randomised search heuristics was introduced that concentrates on the expected function value after a pre-defined number of steps, called budget. This is significantly different from the common perspective where the expected optimisation time is analysed. While there is a huge body of work and a large collection of tools for the analysis of the expected optimisation time, the new fixed budget perspective introduces new analytical challenges. Here it is shown how results on the expected optimisation time that are strengthened by deviation bounds can be systematically turned into fixed budget results. We demonstrate our approach by considering the (1+1) EA on LeadingOnes and significantly improving previous results. We prove that deviating from the expected time by an additive term of ω(n3...
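
    The concrete setting of the improved result, the (1+1) EA on LeadingOnes, is easy to state; the sketch below (an illustrative implementation, not taken from the paper) adopts the fixed budget perspective by reporting the best fitness after a pre-defined number of evaluations rather than waiting for the optimum.

```python
import random

def leading_ones(x):
    """LeadingOnes fitness: number of consecutive ones from the left."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def one_plus_one_ea(n, budget, seed=0):
    """(1+1) EA with standard bit mutation (rate 1/n) on LeadingOnes,
    returning the best fitness found within a fixed budget of
    fitness evaluations."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best = leading_ones(x)
    for _ in range(budget):
        y = [b ^ 1 if rng.random() < 1.0 / n else b for b in x]
        fy = leading_ones(y)
        if fy >= best:               # accept ties, as in the standard (1+1) EA
            x, best = y, fy
    return best

print(one_plus_one_ea(20, 5000, seed=1))
```

    Since the expected optimisation time on LeadingOnes is Θ(n²), a budget well above that returns the optimum almost surely; fixed budget results characterise the expected fitness for budgets below this threshold.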

  12. What do you mean "drunk"? Convergent validation of multiple methods of mapping alcohol expectancy memory networks.

    Science.gov (United States)

    Reich, Richard R; Ariel, Idan; Darkes, Jack; Goldman, Mark S

    2012-09-01

    The configuration and activation of memory networks have been theorized as mechanisms that underlie the often observed link between alcohol expectancies and drinking. A key component of this network is the expectancy "drunk." The memory network configuration of "drunk" was mapped by using cluster analysis of data gathered from the paired-similarities task (PST) and the Alcohol Expectancy Multi-Axial Assessment (AEMAX). A third task, the free associates task (FA), assessed participants' strongest alcohol expectancy associates and was used as a validity check for the cluster analyses. Six hundred forty-seven 18-19-year-olds completed these measures and a measure of alcohol consumption at baseline assessment for a 5-year longitudinal study. For both the PST and AEMAX, "drunk" clustered with mainly negative and sedating effects (e.g., "sick," "dizzy," "sleepy") in lighter drinkers and with more positive and arousing effects (e.g., "happy," "horny," "outgoing") in heavier drinkers, showing that the cognitive organization of expectancies reflected drinker type (and might influence the choice to drink). Consistent with the cluster analyses, in participants who gave "drunk" as an FA response, heavier drinkers rated the word as more positive and arousing than lighter drinkers. Additionally, gender did not account for the observed drinker-type differences. These results support the notion that for some emerging adults, drinking may be linked to what they mean by the word "drunk." PsycINFO Database Record (c) 2012 APA, all rights reserved.

  13. Methods for evaluating cervical range of motion in trauma settings

    Directory of Open Access Journals (Sweden)

    Voss Sarah

    2012-08-01

    Full Text Available Abstract Immobilisation of the cervical spine is a common procedure following traumatic injury. This is often precautionary as the actual incidence of spinal injury is low. Nonetheless, stabilisation of the head and neck is an important part of pre-hospital care due to the catastrophic damage that may follow if further unrestricted movement occurs in the presence of an unstable spinal injury. Currently available collars are limited by the potential for inadequate immobilisation and complications caused by pressure on the patient’s skin, restricted airway access and compression of the jugular vein. Alternative approaches to cervical spine immobilisation are being considered, and the investigation of these new methods requires a standardised approach to the evaluation of neck movement. This review summarises the research methods and scientific technology that have been used to assess and measure cervical range of motion, and which are likely to underpin future research in this field. A systematic search of international literature was conducted to evaluate the methodologies used to assess the extremes of movement that can be achieved in six domains. 34 papers were included in the review. These studies used a range of methodologies, but study quality was generally low. Laboratory investigations and biomechanical studies have gradually given way to methods that more accurately reflect the real-life situations in which cervical spine immobilisation occurs. Latterly, new approaches using virtual reality and simulation have been developed. Coupled with modern electromagnetic tracking technology this has considerable potential for effective application in future research. However, use of these technologies in real life settings can be problematic and more research is needed.

  14. Topology optimization of hyperelastic structures using a level set method

    Science.gov (United States)

    Chen, Feifei; Wang, Yiqiang; Wang, Michael Yu; Zhang, Y. F.

    2017-12-01

    Soft rubberlike materials, due to their inherent compliance, are finding widespread implementation in a variety of applications ranging from assistive wearable technologies to soft material robots. Structural design of such soft and rubbery materials necessitates the consideration of large nonlinear deformations and hyperelastic material models to accurately predict their mechanical behaviour. In this paper, we present an effective level set-based topology optimization method for the design of hyperelastic structures that undergo large deformations. The method incorporates both geometric and material nonlinearities where the strain and stress measures are defined within the total Lagrange framework and the hyperelasticity is characterized by the widely-adopted Mooney-Rivlin material model. A shape sensitivity analysis is carried out, in the strict sense of the material derivative, where the high-order terms involving the displacement gradient are retained to ensure the descent direction. As the design velocity enters into the shape derivative in terms of its gradient and divergence terms, we develop a discrete velocity selection strategy. The whole optimization implementation undergoes a two-step process, where the linear optimization is first performed and its optimized solution serves as the initial design for the subsequent nonlinear optimization. It turns out that this operation could efficiently alleviate the numerical instability and facilitate the optimization process. To demonstrate the validity and effectiveness of the proposed method, three compliance minimization problems are studied and their optimized solutions present significant mechanical benefits of incorporating the nonlinearities, in terms of remarkable enhancement in not only the structural stiffness but also the critical buckling load.

  15. Rough sets selected methods and applications in management and engineering

    CERN Document Server

    Peters, Georg; Ślęzak, Dominik; Yao, Yiyu

    2012-01-01

    Introduced in the early 1980s, Rough Set Theory has become an important part of soft computing in the last 25 years. This book provides a practical, context-based analysis of rough set theory, with each chapter exploring a real-world application of Rough Sets.
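
    The core construction of rough set theory, the lower and upper approximations of a target set with respect to a partition into indiscernibility classes, can be sketched in a few lines (a textbook-style illustration, not an example from the book):

```python
def approximations(partition, target):
    """Rough-set lower and upper approximations of a target set, given a
    partition of the universe into indiscernibility classes."""
    target = set(target)
    lower, upper = set(), set()
    for block in partition:
        block = set(block)
        if block <= target:
            lower |= block           # blocks certainly inside the target
        if block & target:
            upper |= block           # blocks possibly inside the target
    return lower, upper

# Universe {1..6} partitioned by some attribute; target set {1, 2, 3}.
partition = [{1, 2}, {3, 4}, {5, 6}]
lower, upper = approximations(partition, {1, 2, 3})
print(lower)   # {1, 2}
print(upper)   # {1, 2, 3, 4}
```

    The boundary region, upper minus lower, is exactly where the available attributes cannot decide membership; this is the "roughness" that the applications in the book exploit.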

  16. Point and interval forecasts of mortality rates and life expectancy: A comparison of ten principal component methods

    Directory of Open Access Journals (Sweden)

    Han Lin Shang

    2011-07-01

    Full Text Available Using the age- and sex-specific data of 14 developed countries, we compare the point and interval forecast accuracy and bias of ten principal component methods for forecasting mortality rates and life expectancy. The ten methods are variants and extensions of the Lee-Carter method. Based on one-step forecast errors, the weighted Hyndman-Ullah method provides the most accurate point forecasts of mortality rates and the Lee-Miller method is the least biased. For the accuracy and bias of life expectancy, the weighted Hyndman-Ullah method performs the best for female mortality and the Lee-Miller method for male mortality. While all methods underestimate variability in mortality rates, the more complex Hyndman-Ullah methods are more accurate than the simpler methods. The weighted Hyndman-Ullah method provides the most accurate interval forecasts for mortality rates, while the robust Hyndman-Ullah method provides the best interval forecast accuracy for life expectancy.
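
    The common baseline that all ten variants extend is the Lee-Carter model, log m(x,t) = a_x + b_x k_t, fitted by a rank-one SVD and forecast by a random walk with drift in the period index k_t. A minimal sketch on synthetic data (illustrative, not any of the compared implementations):

```python
import numpy as np

def lee_carter_fit(log_m):
    """Fit log m(x,t) = a_x + b_x * k_t by SVD, with the usual
    identification constraints sum(b) = 1 and sum(k) = 0."""
    a = log_m.mean(axis=1)                       # average age pattern
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                  # age response, sums to 1
    k = s[0] * Vt[0] * U[:, 0].sum()             # period index
    return a, b, k

def forecast_k(k, h):
    """Random-walk-with-drift forecast of k_t, h steps ahead."""
    drift = (k[-1] - k[0]) / (len(k) - 1)
    return k[-1] + drift * np.arange(1, h + 1)

# Synthetic data that satisfies the model exactly is recovered exactly.
ages, years = 5, 10
a_true = np.linspace(-3.0, -1.0, ages)
b_true = np.full(ages, 1.0 / ages)
k_true = np.linspace(9.0, -9.0, years)           # declining mortality trend
log_m = a_true[:, None] + np.outer(b_true, k_true)
a, b, k = lee_carter_fit(log_m)
print(forecast_k(k, 3))
```

    The compared variants differ mainly in how the residual terms and the forecast of k_t are handled (e.g. weighting, robustness, adjustment of jump-off rates), not in this basic decomposition.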

  17. Predictive Active Set Selection Methods for Gaussian Processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2012-01-01

    We propose an active set selection framework for Gaussian process classification for cases when the dataset is large enough to render its inference prohibitive. Our scheme consists of a two step alternating procedure of active set update rules and hyperparameter optimization based upon marginal likelihood ... high impact to the classifier decision process while removing those that are less relevant. We introduce two active set rules based on different criteria, the first one prefers a model with interpretable active set parameters whereas the second puts computational complexity first, thus a model with active set parameters that directly control its complexity. We also provide both theoretical and empirical support for our active set selection strategy being a good approximation of a full Gaussian process classifier. Our extensive experiments show that our approach can compete with state...

  18. Doctors' use of mobile devices in the clinical setting: a mixed methods study.

    Science.gov (United States)

    Nerminathan, Arany; Harrison, Amanda; Phelps, Megan; Alexander, Shirley; Scott, Karen M

    2017-03-01

    Mobile device use has become almost ubiquitous in daily life and therefore includes use by doctors in clinical settings. There has been little study of doctors' patterns of use, of the impact on doctors in the workplace, or of how negatively or positively mobile devices impact at the point of care. To explore how doctors use mobile devices in the clinical setting and understand drivers for use. A mixed methods study was used with doctors in a paediatric and adult teaching hospital in 2013. A paper-based survey examined mobile device usage data by doctors in the clinical setting. Focus groups explored doctors' reasons for using or refraining from using mobile devices in the clinical setting, and their attitudes about others' use. The survey, completed by 109 doctors, showed that 91% owned a smartphone and 88% used their mobile devices frequently in the clinical setting. Trainees were more likely than consultants to use their mobile devices for learning and accessing information related to patient care, as well as for personal communication unrelated to work. Focus group data highlighted a range of factors that influenced doctors to use personal mobile devices in the clinical setting, including convenience for medical photography, and factors that limited use. Distraction in the clinical setting due to use of mobile devices was a key issue. Personal experience and confidence in using mobile devices affected their use, and was guided by role modelling and expectations within a medical team. Doctors use mobile devices to enhance efficiency in the workplace. In the current environment, doctors are making their own decisions based on balancing the risks and benefits of using mobile devices in the clinical setting. There is a need for guidelines around acceptable and ethical use that is patient-centred and that respects patient privacy. © 2016 Royal Australasian College of Physicians.

  19. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    OpenAIRE

    Zhang, Lijuan; Li, Dongming; Su, Wei; Yang, Jinhua; Jiang, Yutong

    2014-01-01

    To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm, improved by the EM algorithm, which jointly deconvolves multiframe adaptive optics images based on expectation-maximization theory. Firstly, we make a mathematical model for the degenerate multiframe adaptive optics images. The model for the point-spread function varying with time is deduced based on phase error. The AO images are denoised using the image power spectral density and support constrain...

  20. Localized atomic basis set in the projector augmented wave method

    DEFF Research Database (Denmark)

    Larsen, Ask Hjorth; Vanin, Marco; Mortensen, Jens Jørgen

    2009-01-01

    We present an implementation of localized atomic-orbital basis sets in the projector augmented wave (PAW) formalism within the density-functional theory. The implementation in the real-space GPAW code provides a complementary basis set to the accurate but computationally more demanding grid...

  1. Comparison of Scalar Expectancy Theory (SET) and the Learning-to-Time (LeT) model in a successive temporal bisection task.

    Science.gov (United States)

    Arantes, Joana

    2008-06-01

    The present research tested the generality of the "context effect" previously reported in experiments using temporal double bisection tasks [e.g., Arantes, J., Machado, A. Context effects in a temporal discrimination task: Further tests of the Scalar Expectancy Theory and Learning-to-Time models. J. Exp. Anal. Behav., in press]. Pigeons learned two temporal discriminations in which all the stimuli appear successively: 1s (red) vs. 4s (green) and 4s (blue) vs. 16s (yellow). Then, two tests were conducted to compare predictions of two timing models, Scalar Expectancy Theory (SET) and the Learning-to-Time (LeT) model. In one test, two psychometric functions were obtained by presenting pigeons with intermediate signal durations (1-4s and 4-16s). Results were mixed. In the critical test, pigeons were exposed to signals ranging from 1 to 16s and followed by the green or the blue key. Whereas SET predicted that the relative response rate to each of these keys should be independent of the signal duration, LeT predicted that the relative response rate to the green key (compared with the blue key) should increase with the signal duration. Results were consistent with LeT's predictions, showing that the context effect is obtained even when subjects do not need to make a choice between two keys presented simultaneously.

  2. Random polynomials and expected complexity of bisection methods for real solving

    DEFF Research Database (Denmark)

    Emiris, Ioannis Z.; Galligo, André; Tsigaridas, Elias

    2010-01-01

    ..., and by Edelman and Kostlan, in order to estimate the real root separation of degree d polynomials with i.i.d. coefficients that follow two zero-mean normal distributions: for SO(2) polynomials, the i-th coefficient has variance binomial(d, i), whereas for Weyl polynomials its variance is 1/i!. By applying results from ... The second part of the paper shows that the expected number of real roots of a degree d polynomial in the Bernstein basis is √(2d) ± O(1), when the coefficients are i.i.d. variables with moderate standard deviation. Our paper concludes with experimental results which corroborate our analysis.

  3. Evaluation of two-phase flow solvers using Level Set and Volume of Fluid methods

    Science.gov (United States)

    Bilger, C.; Aboukhedr, M.; Vogiatzaki, K.; Cant, R. S.

    2017-09-01

    Two principal methods have been used to simulate the evolution of two-phase immiscible flows of liquid and gas separated by an interface. These are the Level-Set (LS) method and the Volume of Fluid (VoF) method. Both methods attempt to represent the very sharp interface between the phases and to deal with the large jumps in physical properties associated with it. Both methods have their own strengths and weaknesses. For example, the VoF method is known to be prone to excessive numerical diffusion, while the basic LS method has some difficulty in conserving mass. Major progress has been made in remedying these deficiencies, and both methods have now reached a high level of physical accuracy. Nevertheless, there remains an issue, in that each of these methods has been developed by different research groups, using different codes and most importantly the implementations have been fine tuned to tackle different applications. Thus, it remains unclear what are the remaining advantages and drawbacks of each method relative to the other, and what might be the optimal way to unify them. In this paper, we address this gap by performing a direct comparison of two current state-of-the-art variations of these methods (LS: RCLSFoam and VoF: interPore) and implemented in the same code (OpenFoam). We subject both methods to a pair of benchmark test cases while using the same numerical meshes to examine a) the accuracy of curvature representation, b) the effect of tuning parameters, c) the ability to minimise spurious velocities and d) the ability to tackle fluids with very different densities. For each method, one of the test cases is chosen to be fairly benign while the other test case is expected to present a greater challenge. The results indicate that both methods can be made to work well on both test cases, while displaying different sensitivity to the relevant parameters.

  4. Novel methods and expected run II performance of ATLAS track reconstruction in dense environments

    CERN Document Server

    Jansky, Roland Wolfgang; The ATLAS collaboration

    2015-01-01

    Detailed understanding and optimal track reconstruction performance of ATLAS in the core of high pT objects is paramount for a number of techniques such as jet energy and mass calibration, jet flavour tagging, and hadronic tau identification as well as measurements of physics quantities like jet fragmentation functions. These dense environments are characterized by charged particle separations on the order of the granularity of ATLAS’s inner detector. With the insertion of a new innermost layer in this tracking detector, which allows measurements closer to the interaction point, and an increase in the centre of mass energy, these difficult environments will become even more relevant in Run II, such as in searches for heavy resonances. Novel algorithmic developments to the ATLAS track reconstruction software targeting these topologies as well as the expected improved performance will be presented.

  5. Testing the scalar expectancy theory (SET) and the learning-to-time model (LeT) in a double bisection task.

    Science.gov (United States)

    Machado, Armando; Pata, Paulo

    2005-02-01

    Two theories of timing, scalar expectancy theory (SET) and learning-to-time (LeT), make substantially different assumptions about what animals learn in temporal tasks. In a test of these assumptions, pigeons learned two temporal discriminations. On Type 1 trials, they learned to choose a red key after a 1-sec signal and a green key after a 4-sec signal; on Type 2 trials, they learned to choose a blue key after a 4-sec signal and a yellow key after either an 8-sec signal (Group 8) or a 16-sec signal (Group 16). Then, the birds were exposed to signals 1 sec, 4 sec, and 16 sec in length and given a choice between novel key combinations (red or green vs. blue or yellow). The choice between the green key and the blue key was of particular significance because both keys were associated with the same 4-sec signal. Whereas SET predicted no effect of the test signal duration on choice, LeT predicted that preference for green would increase monotonically with the length of the signal but would do so faster for Group 8 than for Group 16. The results were consistent with LeT, but not with SET.

  6. Analytical one parameter method for PID motion controller settings

    NARCIS (Netherlands)

    van Dijk, Johannes; Aarts, Ronald G.K.M.

    2012-01-01

    In this paper analytical expressions for PID-controllers settings for electromechanical motion systems are presented. It will be shown that by an adequate frequency domain oriented parametrization, the parameters of a PID-controller are analytically dependent on one variable only, the cross-over

  7. Level set methods for inverse scattering—some recent developments

    International Nuclear Information System (INIS)

    Dorn, Oliver; Lesselier, Dominique

    2009-01-01

    We give an update on recent techniques which use a level set representation of shapes for solving inverse scattering problems, completing in this manner the exposition made in (Dorn and Lesselier 2006 Inverse Problems 22 R67) and (Dorn and Lesselier 2007 Deformable Models (New York: Springer) pp 61–90), and bringing it closer to the current state of the art

  8. [The new methods in gerontology for life expectancy prediction of the indigenous population of Yugra].

    Science.gov (United States)

    Gavrilenko, T V; Es'kov, V M; Khadartsev, A A; Khimikova, O I; Sokolova, A A

    2014-01-01

    The behavior of the state vector of the human cardiovascular system in different age groups was investigated using methods from the theory of chaos and self-organization and methods of classical statistics. Observations were made on the indigenous people of the North of the Russian Federation. Using methods from the theory of chaos and self-organization, differences were shown in the quasi-attractor parameters of the state vector of the cardiovascular system of these people. A comparison with the results obtained by classical statistics was made.

  9. Development of short questionnaire to measure an extended set of role expectation conflict, coworker support and work-life balance: The new job stress scale

    Directory of Open Access Journals (Sweden)

    Abhishek Shukla

    2016-12-01

    Full Text Available This study aimed to investigate the reliability and validity of a new version of the job stress scale, which measures an extended set of psychosocial stressors by adding new scales to the current version of the job stress scale. Additional scales were extensively collected from theoretical job stress models and similar questionnaires from different countries. Items were tested in the workplace and refined through a pilot survey (n = 400) to examine the reliability and construct validity. Most scales showed acceptable levels of internal consistency, intra-class reliability, and test–retest reliability. Factor analysis and correlation analysis showed that these scales fit the theoretical expectations. These findings provide evidence that the new job stress scale is reliable and valid, although confirmatory analyses should be examined in future studies. The new job stress scale is a useful instrument for organizations and academicians to evaluate job stress in the modern Indian workplace.
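
    Internal consistency of the kind reported for such scales is commonly summarised by Cronbach's alpha; a minimal sketch on synthetic responses (illustrative only, not the study's scale or data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency: items is an
    (n_respondents, n_items) array of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total score
    return k / (k - 1) * (1.0 - item_var / total_var)

# Perfectly parallel items give alpha = 1; adding noise lowers it.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
parallel = np.repeat(trait, 4, axis=1)
noisy = parallel + rng.normal(scale=1.0, size=parallel.shape)
print(cronbach_alpha(parallel), cronbach_alpha(noisy))
```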

  10. Hope Matters: Developing and Validating a Measure of Future Expectations Among Young Women in a High HIV Prevalence Setting in Rural South Africa (HPTN 068).

    Science.gov (United States)

    Abler, Laurie; Hill, Lauren; Maman, Suzanne; DeVellis, Robert; Twine, Rhian; Kahn, Kathleen; MacPhail, Catherine; Pettifor, Audrey

    2017-07-01

    Hope is a future expectancy characterized by an individual's perception that a desirable future outcome can be achieved. Though scales exist to measure hope, they may have limited relevance in low resource, high HIV prevalence settings. We developed and validated a hope scale among young women living in rural South Africa. We conducted formative interviews to identify the key elements of hope. Using items developed from these interviews, we administered the hope scale to 2533 young women enrolled in an HIV-prevention trial. Women endorsed scale items highly and the scale proved to be unidimensional in the sample. Hope scores were significantly correlated with hypothesized psychosocial correlates with the exception of life stressors. Overall, our hope measure was found to have excellent reliability and to show encouraging preliminary indications of validity in this population. This study presents a promising measure to assess hope among young women in South Africa.

  11. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    Directory of Open Access Journals (Sweden)

    Lijuan Zhang

    2014-01-01

    Full Text Available To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm, improved by the EM algorithm, that jointly processes multiple frames of adaptive optics images based on expectation-maximization theory. Firstly, we build a mathematical model for the degraded multiframe adaptive optics images. The function model is deduced for the point spread with time based on phase error. The AO images are denoised using the image power spectral density and a support constraint. Secondly, the EM algorithm is improved by combining the AO imaging system parameters with a regularization technique. A cost function for the joint deconvolution of multiframe AO images is given, and the optimization model for its parameter estimation is built. Lastly, image-restoration experiments on both simulated images and real AO images are performed to verify the recovery effect of our algorithm. The experimental results show that, compared with the Wiener-IBD and RL-IBD algorithms, our algorithm reduces the number of iterations by 14.3% and improves the estimation accuracy. The model distinguishes the PSF of the AO images and recovers the observed target images clearly.
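The paper's multiframe, regularized variant is not reproduced here, but its core is the classic expectation-maximization (Richardson-Lucy) deconvolution update. A minimal single-frame sketch, with a toy point-source image and Gaussian PSF of my own choosing:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=50):
    """Classic single-frame EM (Richardson-Lucy) deconvolution loop."""
    psf_flip = psf[::-1, ::-1]                     # mirrored PSF for the E-step
    est = np.full_like(blurred, blurred.mean())    # flat positive initial guess
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)  # data / model prediction
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est

# Toy test: a single point source blurred by a normalized Gaussian PSF.
g = np.exp(-np.arange(-3, 4) ** 2 / 2.0)
psf = np.outer(g, g) / np.outer(g, g).sum()
truth = np.zeros((32, 32))
truth[16, 16] = 1.0
est = richardson_lucy(fftconvolve(truth, psf, mode="same"), psf)
```

The multiplicative update keeps the estimate nonnegative and concentrates flux back onto the point source as iterations proceed.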

  12. Comparative evaluation of different methods of setting hygienic standards

    International Nuclear Information System (INIS)

    Ramzaev, P.V.; Rodionova, L.F.; Mashneva, N.I.

    1978-01-01

    Long-term experiments were carried out on white mice and rats to study the relative importance of various procedures used in setting hygienic standards for exposure to adverse factors. A variety of radionuclides and chemical substances were tested and the sensitivities to them of various indices of the bodily state were determined. For each index, statistically significant minimal effective concentrations of substances were established

  13. Extension and Enhancement Methods for Setting Data Quality Objectives

    Energy Technology Data Exchange (ETDEWEB)

    D. Goodman

    2000-03-01

    The project developed statistical tools for the application of decision theory and operations research methods, including cost-benefit analysis, to the DQO process for environmental clean up. A pilot study was conducted, using these methods at the Hanford site, to estimate vadose zone contamination plumes under the SX tank farm, and to help plan further sampling.

  14. Extension and Enhancement Methods for Setting Data Quality Objectives

    International Nuclear Information System (INIS)

    Goodman, D.

    2000-01-01

    The project developed statistical tools for the application of decision theory and operations research methods, including cost-benefit analysis, to the DQO process for environmental clean up. A pilot study was conducted, using these methods at the Hanford site, to estimate vadose zone contamination plumes under the SX tank farm, and to help plan further sampling

  15. Interacting Psycho-economic Expectations Ratios with Equity/debt Realities Suggests a Crisis Warning Method

    Directory of Open Access Journals (Sweden)

    Barry Thornton

    2011-12-01

    Full Text Available The recent April 2011 meeting of the G20 countries considered the possible development of a global early warning system to avoid any future financial crisis. Psycho-economic factors are strong drivers of greed, fear and non-rational behavior, and experience shows that they should not be excluded from such a project. Rational, logical behavior in attitudes and actions was an assumption in most financial models prior to the advent of the 2008 crisis. In recent years there has been increasing interest in relating financial activity to phenomena in physics, turbulence and neurology, and recent fMRI experiments show that cortical interactions for decisions are affected by previous experience. We use an extension of the two Lotka-Volterra (LV) interactive equations used in a model for the 2008 crisis, but now with fluctuation theory from chemical physics to couple the two previously used heterogeneous interacting agents: the psycho-economic ratio CE of investor expectations (favourable/unfavourable) and the reality ratio of equity/debt. The model provides a variable, M, for uncertainties in CE arising from the ability of the economy to affect the financial sector. A condition obtained for keeping rates of change in M small, to avoid divergence of spontaneous fluctuations, provides a quantifiable time-dependent entity which can act as a warning of impending crisis. The conditional expression appears to be related to an extension of Ohm's law as in a recently discovered "chip" with memory, the memristor. A possible role for subthreshold legacies in CE from the previous crisis appears plausible and related to recent neurological findings.
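The coupled Lotka-Volterra pair that such models build on can be sketched generically; the coefficients, initial conditions, and the mapping of variables to the two ratios below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def lotka_volterra(x0, y0, a, b, c, d, dt=0.001, steps=20000):
    """Forward-Euler integration of the classic interacting pair
    dx/dt = x (a - b y),  dy/dt = y (-c + d x)."""
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        x, y = x + dt * x * (a - b * y), y + dt * y * (-c + d * x)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

# x ~ expectations ratio CE, y ~ equity/debt ratio (illustrative roles only).
xs, ys = lotka_volterra(1.5, 1.0, a=1.0, b=1.0, c=1.0, d=1.0)
```

Started away from the equilibrium (c/d, a/b) = (1, 1), both variables cycle around it, which is the oscillatory boom-bust behavior the model exploits.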

  16. Vacuum expectation value of the stress tensor in an arbitrary curved background: The covariant point-separation method

    International Nuclear Information System (INIS)

    Christensen, S.M.

    1976-01-01

    A method known as covariant geodesic point separation is developed to calculate the vacuum expectation value of the stress tensor for a massive scalar field in an arbitrary gravitational field. The vacuum expectation value will diverge because the stress-tensor operator is constructed from products of field operators evaluated at the same space-time point. To remedy this problem, one of the field operators is taken to a nearby point. The resultant vacuum expectation value is finite and may be expressed in terms of the Hadamard elementary function. This function is calculated using a curved-space generalization of Schwinger's proper-time method for calculating the Feynman Green's function. The expression for the Hadamard function is written in terms of the biscalar of geodetic interval which gives a measure of the square of the geodesic distance between the separated points. Next, using a covariant expansion in terms of the tangent to the geodesic, the stress tensor may be expanded in powers of the length of the geodesic. Covariant expressions for each divergent term and for certain terms in the finite portion of the vacuum expectation value of the stress tensor are found. The properties, uses, and limitations of the results are discussed
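Schematically, the point-separation construction described above can be written as follows; this is a compressed sketch in standard notation, not the paper's full covariant expressions:

```latex
G^{(1)}(x,x') = \langle 0 \,|\, \{\phi(x),\,\phi(x')\} \,|\, 0 \rangle,
\qquad
\langle 0 \,|\, T_{\mu\nu}(x) \,|\, 0 \rangle
  = \lim_{x' \to x} \mathcal{D}_{\mu\nu}(x,x')\,
    \tfrac{1}{2}\, G^{(1)}(x,x'),
\qquad
\sigma = \tfrac{1}{2}\,\sigma_{;\mu}\,\sigma^{;\mu},
```

where \(\mathcal{D}_{\mu\nu}\) is the second-order differential operator read off from the classical stress tensor and \(\sigma(x,x')\) is the biscalar of geodetic interval, equal to half the square of the geodesic distance between the separated points; the divergences to be isolated appear as inverse powers of \(\sigma\) in the coincidence limit \(x' \to x\).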

  17. Methods of mathematical modeling using polynomials of algebra of sets

    Science.gov (United States)

    Kazanskiy, Alexandr; Kochetkov, Ivan

    2018-03-01

    The article deals with the construction of discrete mathematical models for solving applied problems arising from the operation of building structures. Security issues in modern high-rise buildings are extremely serious and relevant, and there is no doubt that interest in them will only increase. The territory of a building is divided into zones that must be kept under observation. Zones can overlap and have different priorities. Such situations can be described using formulas of the algebra of sets. The formulas can be programmed, which makes it possible to work with them using computer models.
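Overlapping observation zones with priorities map directly onto set algebra. A minimal sketch; the zone contents and priority values below are hypothetical:

```python
# Zones are sets of monitored cells (grid coordinates); priorities are per zone.
zone_a = {(0, 0), (0, 1), (1, 0)}        # e.g. entrance area, priority 2
zone_b = {(1, 0), (1, 1)}                # e.g. stairwell, priority 1

overlap = zone_a & zone_b                # cells covered by both zones
covered = zone_a | zone_b                # every cell under observation
only_a = zone_a - zone_b                 # cells needing only zone A's regime

# A cell's effective priority is the maximum over the zones containing it.
zones = [(zone_a, 2), (zone_b, 1)]
priority = {cell: max(p for z, p in zones if cell in z) for cell in covered}
```

Because union, intersection, and difference are closed over sets, any zone formula from the article can be composed from these three operators.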

  18. Device and Method for Gathering Ensemble Data Sets

    Science.gov (United States)

    Racette, Paul E. (Inventor)

    2014-01-01

    An ensemble detector uses calibrated noise references to produce ensemble sets of data from which properties of non-stationary processes may be extracted. The ensemble detector comprising: a receiver; a switching device coupled to the receiver, the switching device configured to selectively connect each of a plurality of reference noise signals to the receiver; and a gain modulation circuit coupled to the receiver and configured to vary a gain of the receiver based on a forcing signal; whereby the switching device selectively connects each of the plurality of reference noise signals to the receiver to produce an output signal derived from the plurality of reference noise signals and the forcing signal.

  19. Further tests of the Scalar Expectancy Theory (SET) and the Learning-to-Time (LeT) model in a temporal bisection task.

    Science.gov (United States)

    Machado, Armando; Arantes, Joana

    2006-06-01

    To contrast two models of timing, Scalar Expectancy Theory (SET) and Learning to Time (LeT), pigeons were exposed to a double temporal bisection procedure. On half of the trials, they learned to choose a red key after a 1-s signal and a green key after a 4-s signal; on the other half of the trials, they learned to choose a blue key after a 4-s signal and a yellow key after a 16-s signal. This was Phase A of an ABA design. On Phase B, the pigeons were divided into two groups and exposed to a new bisection task in which the signals ranged from 1 to 16 s and the choice keys were blue and green. One group was reinforced for choosing blue after 1-s signals and green after 16-s signals and the other group was reinforced for the opposite mapping (green after 1-s signals and blue after 16-s signals). Whereas SET predicted no differences between the groups, LeT predicted that the former group would learn the new discrimination faster than the latter group. The results were consistent with LeT. Finally, the pigeons returned to Phase A. Only LeT made specific predictions regarding the reacquisition of the four temporal discriminations. These predictions were only partly consistent with the results.
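A SET-style choice rule for temporal bisection can be simulated in a few lines. The scalar noise level and the nearest-anchor-on-a-log-scale decision rule below are illustrative assumptions, not the exact formulation of either model:

```python
import numpy as np

rng = np.random.default_rng(1)

def bisection_choice(t, t_short=1.0, t_long=4.0, gamma=0.3, n=1000):
    """Fraction of 'long' choices for a signal of duration t, under a
    scalar-timing rule: perceived duration ~ Normal(t, gamma * t) (the
    scalar property), and the subject picks the anchor duration that is
    nearer on a log scale."""
    perceived = np.clip(rng.normal(t, gamma * t, size=n), 1e-6, None)
    nearer_long = (np.abs(np.log(perceived) - np.log(t_long))
                   < np.abs(np.log(perceived) - np.log(t_short)))
    return nearer_long.mean()

p_short, p_mid, p_long = (bisection_choice(t) for t in (1.0, 2.0, 4.0))
```

The indifference point of this rule falls at the geometric mean of the anchors (2 s for a 1-s/4-s pair), the signature result of bisection experiments.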

  20. Application of multivariate statistical methods in analyzing expectation surveys in Central Bank of Nigeria

    OpenAIRE

    Raymond, Ogbuka Obinna

    2017-01-01

    In analyzing survey data, most researchers and analysts make use of statistical methods with straight forward statistical approaches. More common, is the use of one‐way, two‐way or multi‐way tables, and graphical displays such as bar charts, line charts, etc. A brief overview of these approaches and a good discussion on aspects needing attention during the data analysis process can be found in Wilson & Stern (2001). In most cases however, analysis procedures that go beyond simp...

  1. [Which learning methods are expected for ultrasound training? Blended learning on trial].

    Science.gov (United States)

    Röhrig, S; Hempel, D; Stenger, T; Armbruster, W; Seibel, A; Walcher, F; Breitkreutz, R

    2014-10-01

    Current teaching methods in graduate and postgraduate training often consist of frontal presentations. In ultrasound education especially, not only knowledge but also sensorimotor and visual skills need to be taught, which requires new learning methods. This study examined which types of teaching methods are preferred by participants in ultrasound training courses before, during and after the course by analyzing a blended learning concept. It also investigated how much time trainees are willing to spend on such activities. A survey was conducted at the end of a certified ultrasound training course. Participants were asked to complete a questionnaire based on a visual analogue scale (VAS) in which three categories were defined: category (1) vote for acceptance with a two-thirds majority (VAS 67-100%), category (2) simple acceptance (VAS 50-67%) and category (3) rejection (VAS below 50%). Participants voted for a web-based learning program with interactive elements, short presentations (less than 20 min) incorporating interaction with the audience, hands-on sessions in small groups, an alternation between presentations and hands-on sessions, live demonstrations and quizzes. For post-course learning, interactive and media-assisted approaches were preferred, such as e-learning, films of the presentations and the possibility to stay in contact with instructors in order to discuss the results. Participants also voted for maintaining a logbook for documentation of results. The results of this study indicate the need for interactive learning concepts and blended learning activities. Directors of ultrasound courses may consider these aspects and are encouraged to develop sustainable learning pathways.

  2. Hanford Site groundwater monitoring: Setting, sources and methods

    International Nuclear Information System (INIS)

    Hartman, M.J.

    2000-01-01

    Groundwater monitoring is conducted on the Hanford Site to meet the requirements of the Resource Conservation and Recovery Act of 1976 (RCRA); Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA); U.S. Department of Energy (DOE) orders; and the Washington Administrative Code. Results of monitoring are published annually (e.g., PNNL-11989). To reduce the redundancy of these annual reports, background information that does not change significantly from year to year has been extracted from the annual report and published in this companion volume. This report includes a description of groundwater monitoring requirements, site hydrogeology, and waste sites that have affected groundwater quality or that require groundwater monitoring. Monitoring networks and methods for sampling, analysis, and interpretation are summarized. Vadose zone monitoring methods and statistical methods also are described. Whenever necessary, updates to information contained in this document will be published in future groundwater annual reports

  3. Hanford Site groundwater monitoring: Setting, sources and methods

    Energy Technology Data Exchange (ETDEWEB)

    M.J. Hartman

    2000-04-11

    Groundwater monitoring is conducted on the Hanford Site to meet the requirements of the Resource Conservation and Recovery Act of 1976 (RCRA); Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA); U.S. Department of Energy (DOE) orders; and the Washington Administrative Code. Results of monitoring are published annually (e.g., PNNL-11989). To reduce the redundancy of these annual reports, background information that does not change significantly from year to year has been extracted from the annual report and published in this companion volume. This report includes a description of groundwater monitoring requirements, site hydrogeology, and waste sites that have affected groundwater quality or that require groundwater monitoring. Monitoring networks and methods for sampling, analysis, and interpretation are summarized. Vadose zone monitoring methods and statistical methods also are described. Whenever necessary, updates to information contained in this document will be published in future groundwater annual reports.

  4. Methods for extracellular vesicles isolation in a hospital setting

    Directory of Open Access Journals (Sweden)

    Matías eSáenz-Cuesta

    2015-02-01

    Full Text Available Research on extracellular vesicles (EVs) has been rising during the last decade. However, there is no clear consensus on the most accurate protocol to isolate and analyze them. Besides, most current protocols are difficult to implement in a hospital setting because they are very time consuming or require specific infrastructure. Thus, our aim was to compare five different protocols (two medium-speed differential centrifugation protocols; commercial polymeric precipitation (ExoQuick); acid precipitation; and ultracentrifugation) for blood and urine samples to determine the most suitable one for the isolation of EVs. Nanoparticle tracking analysis, flow cytometry, western blot, electron microscopy and spectrophotometry were used to characterize basic aspects of EVs such as concentration, size distribution, cell origin, transmembrane markers and RNA concentration. The highest EV concentrations were obtained using the ExoQuick protocol, followed by both differential centrifugation protocols, while the ultracentrifugation and acid-precipitation protocols yielded considerably lower EV concentrations. The five protocols isolated EVs of similar characteristics regarding markers and RNA concentration; however, the standard protocol recovered only small EVs, and EVs isolated with ExoQuick were difficult to analyze by western blot. The RNA concentrations obtained from urine-derived EVs were similar to those obtained from blood-derived ones, despite the urine EV concentration being 10 to 20 times lower. We consider that medium-speed differential centrifugation is suitable for a hospital setting because it requires the simplest infrastructure and recovers a higher concentration of EVs than the standard protocol. A workflow from sampling to characterization of EVs is proposed.

  5. Wheel set run profile renewing method effectiveness estimation

    OpenAIRE

    Somov, Dmitrij; Bazaras, Žilvinas; Žukauskaite, Orinta

    2010-01-01

    At all repair enterprises, only the geometric parameters of the wheel profile are restored after each grinding, despite the decreased wear resistance of the rim. A tendency for the working edge of the wheel rim to wear down during service is observed, which forces the acquisition of new wheels. This is related to the considerable increase in axle loads and train speeds, and also to imperfections in the method of repairing the wheel working edge.

  6. A level set method for vapor bubble dynamics

    NARCIS (Netherlands)

    Can, E.; Prosperetti, Andrea

    2012-01-01

    This paper describes a finite-difference computational method suitable for the simulation of vapor–liquid (or gas–liquid) flows in which the dynamical effects of the vapor can be approximated by a time-dependent, spatially uniform pressure acting on the interface. In such flows it is not necessary

  7. Utility of qualitative methods in a clinical setting: perinatal care in the Western Province.

    Science.gov (United States)

    Jayasuriya, V

    2012-03-01

    A peculiar paradox observed in previous studies of antenatal care is that patients are satisfied with the services despite an obvious lack of basic facilities. Qualitative methods were used to describe the experience of perinatal care in the Western province, with the objective of demonstrating the application of this method in a clinical setting. This paper used a 'naturalistic' approach of qualitative methods. In-depth interviews conducted with 20 postnatal mothers delivering in tertiary care institutions in the Western province were tape-recorded, transcribed and content analysed. To ensure objectivity and validity of results, the principal investigator received only the anonymised data to prevent any prejudices or pre-conceptions affecting the results. The main themes emerging from the text demonstrated 'naïve trust' in the carer and a state of 'hero worship' in which patients were distanced and therefore unable and unwilling to query the decisions made by the carers. This is similar to a state of patient-carer relationship described in a published model known as guarded alliance, in which the relationship develops through four phases based on the level of trust and confidence in the relationship. This state explains not only why patients fail to recognise and report any deficiencies in the services but also their need to justify the behaviour of caregivers even when it amounts to incompetence and negligence. Qualitative methods allow the researcher to capture experience in its 'natural' form rather than based on pre-determined protocols or plans, which may be limited by our own understanding and expectations and therefore unable to explain many idiosyncrasies of the programmes. This paper argues favourably for the use of qualitative methods in other clinical settings.

  8. Steepest descent method for set-valued locally accretive mappings

    International Nuclear Information System (INIS)

    Chidume, C.E.

    1993-05-01

    Let E be a real q-uniformly smooth Banach space. Suppose T is a set-valued locally strongly accretive map with open domain D(T) in E and that 0 ∈ Tx has a solution x* in D(T). Then there exists a neighbourhood B in D(T) of x* and a real number r_1 > 0 such that for any r > r_1 and some real sequence {c_n}, any initial guess x_1 ∈ B and any single-valued selection T_0 of T, the sequence {x_n} generated from x_1 by x_{n+1} = x_n - c_n T_0 x_n, n ≥ 1, remains in D(T) and converges strongly to x*, with ||x_n - x*|| = O(n^(-(q-1)/q)). A related result deals with iterative approximation of a solution of the equation f ∈ x + Ax when A is a locally accretive map. Our theorems generalize important known results and resolve a problem of interest. (author). 39 refs
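The iteration in the abstract can be illustrated in the simplest setting, the real line with q = 2, where accretive means monotone. The map T below is a hypothetical strongly monotone example with zero at x* = 1, not one from the paper, and c_n = 1/(n+1) is one admissible step sequence:

```python
def steepest_descent(T, x1, steps=200):
    """Scalar sketch of x_{n+1} = x_n - c_n T(x_n) with c_n = 1/(n+1)."""
    x = x1
    for n in range(1, steps + 1):
        x = x - (1.0 / (n + 1)) * T(x)
    return x

# Hypothetical strongly monotone (accretive) map: T(x) = x^3 + x - 2, zero at 1.
root = steepest_descent(lambda x: x**3 + x - 2.0, x1=1.5)
```

The divergent step-size sum with vanishing steps is what drives the iterate to the zero of T from any nearby initial guess.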

  9. Thermal oil recovery method using self-contained wind-electric sets

    Science.gov (United States)

    Belsky, A. A.; Korolyov, I. A.

    2018-05-01

    The paper reviews challenges associated with the efficiency of thermal methods of stimulating productive oil strata. The concept of using electrothermal complexes powered by wind-electric generators (WEG) for this purpose is proposed and justified; their operating principles, main advantages and disadvantages, as well as a schematic and technical solution for implementing enhanced oil extraction, are considered. A mathematical model for finding the operating characteristics of the WEG is presented and its main energy parameters are determined. The adequacy of the mathematical model is confirmed by laboratory bench tests at nominal parameters.

  10. Ready for goal setting? Process evaluation of a patient-specific goal-setting method in physiotherapy.

    Science.gov (United States)

    Stevens, Anita; Köke, Albère; van der Weijden, Trudy; Beurskens, Anna

    2017-08-31

    Patient participation and goal setting appear to be difficult in daily physiotherapy practice, and practical methods are lacking. An existing patient-specific instrument, Patient-Specific Complaints (PSC), was therefore optimized into a new Patient-Specific Goal-setting method (PSG). The aims of this study were to examine the feasibility of the PSG in daily physiotherapy practice, and to explore the potential impact of the new method. We conducted a process evaluation within a non-controlled intervention study. Community-based physiotherapists were instructed on how to work with the PSG in three group training sessions. The PSG is a six-step method embedded across the physiotherapy process, in which patients are stimulated to participate in the goal-setting process by: identifying problematic activities, prioritizing them, scoring their abilities, setting goals, planning and evaluating. Quantitative and qualitative data were collected among patients and physiotherapists by recording consultations and assessing patient files, questionnaires and written reflection reports. Data were collected from 51 physiotherapists and 218 patients, and 38 recordings and 219 patient files were analysed. The PSG steps were performed as intended, but the 'setting goals' and 'planning treatment' steps were not performed in detail. The patients and physiotherapists were positive about the method, and the physiotherapists perceived increased patient participation. They became aware of the importance of engaging patients in a dialogue, instead of focusing on gathering information. The lack of integration in the electronic patient system was a major barrier to optimal use in practice. Although the self-reported actual use of the PSG, i.e. informing and involving patients, and client-centred competences had improved, this was not completely confirmed by the objectively observed behaviour. The PSG is a feasible method and tends to increase patient participation in goal setting.

  11. Application of an expectation maximization method to the reconstruction of X-ray-tube spectra from transmission data

    International Nuclear Information System (INIS)

    Endrizzi, M.; Delogu, P.; Oliva, P.

    2014-01-01

    An expectation maximization method is applied to the reconstruction of X-ray tube spectra from transmission measurements in the energy range 7–40 keV. A semiconductor single-photon counting detector, ionization chambers and a scintillator-based detector are used for the experimental measurement of the transmission. The number of iterations required to reach an approximate solution is estimated on the basis of the measurement error, according to the discrepancy principle. The effectiveness of the stopping rule is studied on simulated data and validated with experiments. The quality of the reconstruction depends on the information available on the source itself and the possibility to add this knowledge to the solution process is investigated. The method can produce good approximations provided that the amount of noise in the data can be estimated. - Highlights: • An expectation maximization method was used together with the discrepancy principle. • The discrepancy principle is a suitable criterion for stopping the iteration. • The method can be applied to a variety of detectors/experimental conditions. • The minimum information required is the amount of noise that affects the data. • Improved results are achieved by inserting more information when available
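The reconstruction problem above can be sketched as a discrete multiplicative EM (MLEM) iteration on a transmission model: measured transmission d_i through thickness t_i is a weighted sum over energy bins, d_i = sum_j exp(-mu_j t_i) w_j. The bin count, attenuation values, and noiseless data below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

mu = np.array([1.0, 0.4, 0.1])          # attenuation per energy bin (1/mm), hypothetical
t = np.linspace(0.0, 10.0, 15)          # absorber thicknesses (mm)
A = np.exp(-np.outer(t, mu))            # system matrix: A[i, j] = exp(-mu_j * t_i)

w_true = np.array([0.2, 0.5, 0.3])      # spectral weights to be recovered
d = A @ w_true                          # noiseless transmission measurements

w = np.full(3, 1.0 / 3.0)               # flat initial spectrum estimate
for _ in range(20000):                  # multiplicative EM (MLEM) update
    w = w * (A.T @ (d / (A @ w))) / A.sum(axis=0)
```

The update keeps the weights nonnegative by construction, and in the noiseless, identifiable case it converges to the true spectrum; with real, noisy data, stopping early (as the paper's discrepancy principle does) regularizes the solution.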

  12. Elliptical tiling method to generate a 2-dimensional set of templates for gravitational wave search

    International Nuclear Information System (INIS)

    Arnaud, Nicolas; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Cavalier, Fabien; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Porter, Edward K.

    2003-01-01

    Searching for a signal depending on unknown parameters in a noisy background with matched filtering techniques always requires an analysis of the data with several templates in parallel in order to ensure a proper match between the filter and the real waveform. The key feature of such an implementation is the design of the filter bank, which must be small to limit the computational cost while keeping the detection efficiency as high as possible. This paper presents a geometrical method that allows one to cover the corresponding physical parameter space by a set of ellipses, each of them being associated with a given template. After the description of the main characteristics of the algorithm, the method is applied in the field of gravitational wave (GW) data analysis, for the search of damped sine signals. Such waveforms are expected to be produced during the de-excitation phase of black holes - the so-called 'ringdown' signals - and are also encountered in some numerically computed supernova signals. First, the number of templates N computed by the method is similar to its analytical estimation, despite the overlaps between neighbor templates and the border effects. Moreover, N is small enough to test for the first time the performances of the set of templates for different choices of the minimal match MM, the parameter used to define the maximal allowed loss of signal-to-noise ratio (SNR) due to the mismatch between real signals and templates. The main result of this analysis is that the fraction of SNR recovered is on average much higher than MM, which dramatically decreases the mean percentage of false dismissals. Indeed, it goes well below its estimated value of 1-MM^3 used as input of the algorithm. Thus, as this feature should be common to any tiling algorithm, it seems possible to reduce the constraint on the value of MM - and indeed the number of templates and the computing power - without losing as many events as expected on average.
This should be of great

  13. Multi-phase flow monitoring with electrical impedance tomography using level set based method

    International Nuclear Information System (INIS)

    Liu, Dong; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn

    2015-01-01

    Highlights: • LSM has been used for shape reconstruction to monitor multi-phase flow using EIT. • The multi-phase level set model for conductivity is represented by two level set functions. • LSM handles topological merging and breaking naturally during the evolution process. • To reduce the computational time, a narrowband technique was applied. • Use of the narrowband technique and an optimization approach results in an efficient and fast method. - Abstract: In this paper, a level set-based reconstruction scheme is applied to multi-phase flow monitoring using electrical impedance tomography (EIT). The proposed scheme involves applying a narrowband level set method to solve the inverse problem of finding the interface between regions having different conductivity values. The multi-phase level set model for the conductivity distribution inside the domain is represented by two level set functions. The key principle of the level set-based method is to implicitly represent the shape of the interface as the zero level set of a higher dimensional function and then solve a set of partial differential equations. The level set-based scheme handles topological merging and breaking naturally during the evolution process. It also offers several advantages compared to the traditional pixel-based approach. The level set-based method for multi-phase flow is tested with numerical and experimental data. It is found that the level set-based method has better reconstruction performance than the pixel-based method.

  14. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin; Matevosyan, Norayr; Wolfram, Marie-Therese

    2011-01-01

    analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its

  15. Employment goals, expectations, and migration intentions of nursing graduates in a Canadian border city: a mixed methods study.

    Science.gov (United States)

    Freeman, Michelle; Baumann, Andrea; Akhtar-Danesh, Noori; Blythe, Jennifer; Fisher, Anita

    2012-12-01

    Internationally, nurse migration in border cities has received little attention. Nurses who graduate from nursing programs in Canadian border communities have the option of working in Canada or the United States. They are able to cross the international border each day as commuter migrants returning to their home country after work. Despite recent investment by Canada to increase the number of nursing students, the migration intentions of graduating nurses and the factors influencing their decision making has not been explored. The objective of this study is to explore the migration intentions of a graduating class of baccalaureate nursing students in a Canadian border community and the factors influencing their decision making. An explanatory sequential mixed methods design was used. In the first quantitative phase, data was collected by a web-based self-report survey. In the qualitative phase, semi-structured interviews were conducted. Data collection took place between February and July 2011. The response rate to the survey was 40.9% (n=115). Eighty-six percent of graduates preferred to work in Canada although two thirds identified that they were considering migrating for work outside of Canada. Knowing a nurse who worked in the US (Michigan) influenced intention to migrate and living in a border community was a strong predictor of migration. Migrants had significantly higher expectations that their economic, professional development, healthy work environment, adventure and autonomy values would be met in another country than Canada. Evidence from the interviews revealed that clinical instructors and clinical experiences played a significant role in framing students' perceptions of the work environment, influencing their choice of specialty, and where they secured their first job. The value-expectancy framework offered a novel approach to identifying job factors driving migration intentions. 
The study offered a snapshot of the graduates' perception of the work

  16. Transport and diffusion of material quantities on propagating interfaces via level set methods

    CERN Document Server

    Adalsteinsson, D

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies.
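
The upwind update described in the abstract can be illustrated in one dimension. The following is a minimal, hypothetical sketch (not the authors' code): a signed-distance function is advected with a first-order upwind difference, and its zero level set (the "interface") moves at the prescribed speed.

```python
import numpy as np

# 1-D sketch of the upwind scheme used in level set methods: advect the level
# set function phi under a constant speed u, so the zero crossing moves at u.
def advect_upwind(phi, u, dx, dt, steps):
    phi = phi.copy()
    for _ in range(steps):
        dphi = np.empty_like(phi)
        if u > 0:
            dphi[1:] = (phi[1:] - phi[:-1]) / dx   # backward (upwind) difference
            dphi[0] = dphi[1]                      # simple inflow extrapolation
        else:
            dphi[:-1] = (phi[1:] - phi[:-1]) / dx  # forward (upwind) difference
            dphi[-1] = dphi[-2]
        phi -= dt * u * dphi
    return phi

x = np.linspace(0.0, 1.0, 101)
phi0 = x - 0.3            # signed distance; zero level set at x = 0.3
u, dx = 1.0, x[1] - x[0]
dt = 0.5 * dx / abs(u)    # CFL-stable time step
phi = advect_upwind(phi0, u, dx, dt, steps=40)
# interface = zero crossing; it should have moved by u*steps*dt = 0.2
ix = int(np.argmin(np.abs(phi)))
print(round(x[ix], 2))
```

The abstract's point is that quantities *carried* by the interface need extra model equations beyond this plain advection step.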

  17. Transport and diffusion of material quantities on propagating interfaces via level set methods

    International Nuclear Information System (INIS)

    Adalsteinsson, David; Sethian, J.A.

    2003-01-01

    We develop theory and numerical algorithms to apply level set methods to problems involving the transport and diffusion of material quantities in a level set framework. Level set methods are computational techniques for tracking moving interfaces; they work by embedding the propagating interface as the zero level set of a higher dimensional function, and then approximate the solution of the resulting initial value partial differential equation using upwind finite difference schemes. The traditional level set method works in the trace space of the evolving interface, and hence disregards any parameterization in the interface description. Consequently, material quantities on the interface which themselves are transported under the interface motion are not easily handled in this framework. We develop model equations and algorithmic techniques to extend the level set method to include these problems. We demonstrate the accuracy of our approach through a series of test examples and convergence studies

  18. Neutron fluence-to-dose equivalent conversion factors: a comparison of data sets and interpolation methods

    International Nuclear Information System (INIS)

    Sims, C.S.; Killough, G.G.

    1983-01-01

    Various segments of the health physics community advocate the use of different sets of neutron fluence-to-dose equivalent conversion factors as a function of energy and different methods of interpolation between discrete points in those data sets. The major data sets and interpolation methods are used to calculate the spectrum average fluence-to-dose equivalent conversion factors for five spectra associated with the various shielded conditions of the Health Physics Research Reactor. The results obtained by use of the different data sets and interpolation methods are compared and discussed. (author)
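
The comparison described above can be made concrete with a toy calculation. All numbers below are invented for illustration (they are not a published conversion-factor set): tabulated factors are interpolated both linearly and log-log, then combined into a fluence-weighted spectrum average, showing that the interpolation rule alone changes the result.

```python
import numpy as np

# Hypothetical fluence-to-dose-equivalent conversion factors vs. energy.
E_tab = np.array([0.1, 1.0, 10.0])        # MeV, tabulated energies
h_tab = np.array([9.0, 40.0, 60.0])       # conversion factor (arbitrary units)

def interp_linear(E):
    return np.interp(E, E_tab, h_tab)

def interp_loglog(E):
    # interpolate linearly in (log E, log h) space
    return np.exp(np.interp(np.log(E), np.log(E_tab), np.log(h_tab)))

E_spec = np.array([0.3, 0.5, 2.0, 5.0])   # spectrum sample energies
w = np.array([0.2, 0.3, 0.3, 0.2])        # fluence weights (sum to 1)

# spectrum-averaged conversion factors under the two interpolation rules
h_lin = np.sum(w * interp_linear(E_spec))
h_log = np.sum(w * interp_loglog(E_spec))
print(round(h_lin, 1), round(h_log, 1))
```

Even on this toy table the two rules disagree by roughly 10%, which is the kind of discrepancy the paper quantifies for real data sets and spectra.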

  19. Solving Fokker-Planck Equations on Cantor Sets Using Local Fractional Decomposition Method

    Directory of Open Access Journals (Sweden)

    Shao-Hong Yan

    2014-01-01

    Full Text Available The local fractional decomposition method is applied to approximate the solutions of Fokker-Planck equations on Cantor sets with the local fractional derivative. The obtained results show that the present method is very effective and simple for solving differential equations on Cantor sets.

  20. How countries cope with competing demands and expectations: perspectives of different stakeholders on priority setting and resource allocation for health in the era of HIV and AIDS

    Directory of Open Access Journals (Sweden)

    Jenniskens Françoise

    2012-12-01

    Full Text Available Abstract Background Health systems have experienced unprecedented stress in recent years, and as yet no consensus has emerged as to how to deal with the multiple burden of disease in the context of HIV and AIDS and other competing health priorities. Priority setting is essential, yet this is a complex, multifaceted process. Drawing on a study conducted in five African countries, this paper explores different stakeholders' perceptions of health priorities, how priorities are defined in practice, the process of resource allocation for HIV and Health and how different stakeholders perceive this. Methods A sub-analysis was conducted of selected data from a wider qualitative study that explored the interactions between health systems and HIV and AIDS responses in five sub-Saharan countries (Burkina Faso, the Democratic Republic of Congo, Ghana, Madagascar and Malawi). Key background documents were analysed and semi-structured interviews (n = 258) and focus group discussions (n = 45) were held with representatives of communities, health personnel, decision makers, civil society representatives and development partners at both national and district level. Results Health priorities were expressed either in terms of specific health problems and diseases or gaps in service delivery requiring a strengthening of the overall health system. In all five countries study respondents (with the exception of community members in Ghana) identified malaria and HIV as the two top health priorities. Community representatives were more likely to report concerns about accessibility of services and quality of care. National level respondents often referred to wider systemic challenges in relation to achieving the Millennium Development Goals (MDGs). Indeed, actual priority setting was heavily influenced by international agendas (e.g. MDGs) and by the ways in which development partners were supporting national strategic planning processes. At the same time, multi

  1. Comparative study on gene set and pathway topology-based enrichment methods.

    Science.gov (United States)

    Bayerlová, Michaela; Jung, Klaus; Kramer, Frank; Klemm, Florian; Bleckmann, Annalen; Beißbarth, Tim

    2015-10-22

    Enrichment analysis is a popular approach to identify pathways or sets of genes which are significantly enriched in the context of differentially expressed genes. The traditional gene set enrichment approach considers a pathway as a simple gene list, disregarding any knowledge of gene or protein interactions. In contrast, the new group of so-called pathway topology-based methods integrates the topological structure of a pathway into the analysis. We comparatively investigated gene set and pathway topology-based enrichment approaches, considering three gene set and four topological methods. These methods were compared in two extensive simulation studies and on a benchmark of 36 real datasets, providing the same pathway input data for all methods. In the benchmark data analysis, both types of methods showed a comparable ability to detect enriched pathways. The first simulation study was conducted with KEGG pathways, which showed considerable gene overlaps between each other. In this study with original KEGG pathways, none of the topology-based methods outperformed the gene set approach. Therefore, a second simulation study was performed on non-overlapping pathways created by unique gene IDs. Here, methods accounting for pathway topology reached higher accuracy than the gene set methods; however, their sensitivity was lower. We conducted one of the first comprehensive comparative works on evaluating gene set against pathway topology-based enrichment methods. The topological methods showed better performance in the simulation scenarios with non-overlapping pathways; however, they were not conclusively better in the other scenarios. This suggests that the simple gene set approach might be sufficient to detect an enriched pathway under realistic circumstances. Nevertheless, more extensive studies and further benchmark data are needed to systematically evaluate these methods and to assess what gain and cost pathway topology information introduces into enrichment analysis. 
Both
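
For readers unfamiliar with the baseline being compared, the traditional gene set (over-representation) approach reduces to a hypergeometric tail probability. The following is a minimal sketch with invented counts, implemented with the standard library to keep it self-contained:

```python
from math import comb

# Over-representation test: out of N genes, K belong to the pathway; n genes
# are differentially expressed, k of which fall in the pathway. Enrichment
# p-value = P(X >= k) under the hypergeometric null (random gene selection).
def hypergeom_sf(k, N, K, n):
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

N, K, n, k = 20000, 100, 500, 12   # illustrative counts, expected overlap 2.5
p = hypergeom_sf(k, N, K, n)
print(p < 0.01)
```

Note how the pathway enters only as a gene count; the topology-based methods discussed above additionally weight how the overlapping genes are wired together.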

  2. Benchmarking Data Sets for the Evaluation of Virtual Ligand Screening Methods: Review and Perspectives.

    Science.gov (United States)

    Lagarde, Nathalie; Zagury, Jean-François; Montes, Matthieu

    2015-07-27

    Virtual screening methods are commonly used nowadays in drug discovery processes. However, to ensure their reliability, they have to be carefully evaluated. The evaluation of these methods is often realized in a retrospective way, notably by studying the enrichment of benchmarking data sets. To this purpose, numerous benchmarking data sets were developed over the years, and the resulting improvements led to the availability of high quality benchmarking data sets. However, some points still have to be considered in the selection of the active compounds, decoys, and protein structures to obtain optimal benchmarking data sets.

  3. Transition of Care from the Emergency Department to the Outpatient Setting: A Mixed-Methods Analysis

    Directory of Open Access Journals (Sweden)

    Chad S. Kessler

    2018-02-01

    Full Text Available Introduction: The goal of this study was to characterize current practices in the transition of care between the emergency department and primary care setting, with an emphasis on the use of the electronic medical record (EMR). Methods: Using literature review and a modified Delphi technique, we created and tested a pilot survey to evaluate for face and content validity. The final survey was then administered face-to-face at eight different clinical sites across the country. A total of 52 emergency physicians (EP) and 49 primary care physicians (PCP) were surveyed and analyzed. We performed quantitative analysis using the chi-square test. Two independent coders performed a qualitative analysis, classifying answers by pre-defined themes (inter-rater reliability > 80%). Participants' answers could cross several pre-defined themes within a given question. Results: EPs were more likely to prefer telephone communication compared with PCPs (30/52 [57.7%] vs. 3/49 [6.1%], P < 0.0001), whereas PCPs were more likely to prefer using the EMR for discharge communication compared with EPs (33/49 [67.4%] vs. 13/52 [25%], p < 0.0001). EPs were more likely to report not needing to communicate with a PCP when a patient had a benign condition (23/52 [44.2%] vs. 2/49 [4.1%], p < 0.0001), but were more likely to communicate if the patient required urgent follow-up prior to discharge from the ED (33/52 [63.5%] vs. 20/49 [40.8%], p = 0.029). When discussing barriers to effective communication, 51/98 (52%) stated communication logistics, followed by 49/98 (50%) who reported setting/environmental constraints and 32/98 (32%) who stated EMR access was a significant barrier. Conclusion: Significant differences exist between EPs and PCPs in the transition of care process. EPs preferred telephone contact synchronous to the encounter whereas PCPs preferred using the EMR asynchronous to the encounter. 
Providers believe EP-to-PCP contact is important for improving patient care, but report varied
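
The quantitative comparisons above use chi-square tests on 2x2 tables. As a sketch, the first reported contrast (30/52 EPs vs. 3/49 PCPs preferring telephone contact) can be re-checked with a Pearson chi-square (no continuity correction, 1 degree of freedom); for 1 df the p-value has the closed form erfc(sqrt(stat/2)):

```python
from math import erfc, sqrt

# Pearson chi-square for a 2x2 table [[a, b], [c, d]] (counts from the
# abstract: yes/no telephone preference for 52 EPs and 49 PCPs).
def chi2_2x2(a, b, c, d):
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = erfc(sqrt(stat / 2))   # survival function of chi-square with 1 df
    return stat, p

stat, p = chi2_2x2(30, 22, 3, 46)
print(p < 0.0001)
```

The result reproduces the reported significance level (P < 0.0001) for this comparison.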

  4. A simple mass-conserved level set method for simulation of multiphase flows

    Science.gov (United States)

    Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.

    2018-04-01

    In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for the mass loss or offset the mass increase. The source or sink term is derived analytically by applying the mass conservation principle with the level set equation and the continuity equation of the flow field. Since only a source term is introduced, the application of the present method is as simple as the original level set method, but it can guarantee overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method has the capability of accurately capturing the interface and keeping the mass conservation. Then, the proposed method is further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and Rayleigh-Taylor instability with high Reynolds number. Numerical results show that the mass is well conserved by the present method.
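
The core idea, a source or sink term that offsets mass drift, can be sketched in one dimension. This toy version (not the paper's analytical derivation) measures the mass error of the region where phi < 0 and applies a uniform correction to phi until the initial mass is recovered:

```python
import numpy as np

# mass = size of the region enclosed by the interface (phi < 0);
# a sharp Heaviside is used here for simplicity.
def mass(phi, dx):
    return np.sum(phi < 0) * dx

# Uniform source/sink correction: shift phi in proportion to the mass error.
def correct_mass(phi, m_target, dx, rate=0.5, iters=50):
    phi = phi.copy()
    for _ in range(iters):
        dm = mass(phi, dx) - m_target   # current mass error
        phi += rate * dm                # sink if too large, source if too small
    return phi

x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
phi_exact = np.abs(x - 0.5) - 0.2       # interface at 0.3 and 0.7, mass 0.4
phi_lossy = phi_exact + 0.05            # numerical drift shrank the region
phi_fixed = correct_mass(phi_lossy, 0.4, dx)
print(abs(mass(phi_lossy, dx) - 0.4) > 0.05,
      abs(mass(phi_fixed, dx) - 0.4) < 0.03)
```

In the paper the corresponding term is derived from the continuity equation rather than iterated numerically, but the conservation mechanism is the same.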

  5. Indications and expectations for neuropsychological assessment in routine epilepsy care: Report of the ILAE Neuropsychology Task Force, Diagnostic Methods Commission, 2013-2017.

    Science.gov (United States)

    Wilson, Sarah J; Baxendale, Sallie; Barr, William; Hamed, Sherifa; Langfitt, John; Samson, Séverine; Watanabe, Masako; Baker, Gus A; Helmstaedter, Christoph; Hermann, Bruce P; Smith, Mary-Lou

    2015-05-01

    The International League Against Epilepsy (ILAE) Diagnostic Methods Commission charged the Neuropsychology Task Force with the job of developing a set of recommendations to address the following questions: (1) What is the role of a neuropsychological assessment? (2) Who should do a neuropsychological assessment? (3) When should people with epilepsy be referred for a neuropsychological assessment? and (4) What should be expected from a neuropsychological assessment? The recommendations have been broadly written for health care clinicians in established epilepsy settings as well as those setting up new services. They are based on a detailed survey of neuropsychological assessment practices across international epilepsy centers, and formal ranking of specific recommendations for advancing clinical epilepsy care generated by specialist epilepsy neuropsychologists from around the world. They also incorporate the latest research findings to establish minimum standards for training and practice, reflecting the many roles of neuropsychological assessment in the routine care of children and adults with epilepsy. The recommendations endorse routine screening of cognition, mood, and behavior in new-onset epilepsy, and describe the range of situations when more detailed, formal neuropsychological assessment is indicated. They identify a core set of cognitive and psychological domains that should be assessed to provide an objective account of an individual's cognitive, emotional, and psychosocial functioning, including factors likely contributing to deficits identified on qualitative and quantitative examination. The recommendations also endorse routine provision of feedback to patients, families, and clinicians about the implications of the assessment results, including specific clinical recommendations of what can be done to improve a patient's cognitive or psychosocial functioning and alleviate the distress of any difficulties identified. 
By canvassing the breadth and depth

  6. A LEVEL SET BASED SHAPE OPTIMIZATION METHOD FOR AN ELLIPTIC OBSTACLE PROBLEM

    KAUST Repository

    Burger, Martin

    2011-04-01

    In this paper, we construct a level set method for an elliptic obstacle problem, which can be reformulated as a shape optimization problem. We provide a detailed shape sensitivity analysis for this reformulation and a stability result for the shape Hessian at the optimal shape. Using the shape sensitivities, we construct a geometric gradient flow, which can be realized in the context of level set methods. We prove the convergence of the gradient flow to an optimal shape and provide a complete analysis of the level set method in terms of viscosity solutions. To our knowledge this is the first complete analysis of a level set method for a nonlocal shape optimization problem. Finally, we discuss the implementation of the methods and illustrate its behavior through several computational experiments. © 2011 World Scientific Publishing Company.

  7. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    Science.gov (United States)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for detection of dental caries, a bacterial disease that destroys the tooth structure. In our approach, we have devised a new segmentation method that combines the advantages of the fuzzy C-means algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm to reduce the influence of noise on each of these algorithms, to facilitate level set manipulation, and to lead to more robust segmentation. The sensitivity and specificity confirm the effectiveness of the proposed method for caries detection.
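
The FCM half of the hybrid can be sketched compactly: alternate fuzzy membership and centroid updates (fuzzifier m = 2) until the centroids stabilize. The data below are synthetic 1-D intensities, not dental X-ray values, and the initialization is deterministic for reproducibility:

```python
import numpy as np

def fcm_two_clusters(data, m=2.0, iters=100):
    centers = np.array([data.min(), data.max()], dtype=float)  # deterministic init
    for _ in range(iters):
        d = np.abs(data[None, :] - centers[:, None]) + 1e-12   # 2 x n distances
        u = (1.0 / d) ** (2.0 / (m - 1.0))
        u /= u.sum(axis=0)                                     # memberships sum to 1
        centers = (u ** m @ data) / (u ** m).sum(axis=1)       # weighted centroids
    return np.sort(centers)

data = np.array([0.1, 0.12, 0.15, 0.9, 0.95, 1.0])
centers = fcm_two_clusters(data)
print(centers[0] < 0.3, centers[1] > 0.7)
```

In the hybrid method, the fuzzy partition produced by this step initializes the level set contour, which then refines the segment boundary.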

  8. A High-Performance Parallel FDTD Method Enhanced by Using SSE Instruction Set

    Directory of Open Access Journals (Sweden)

    Dau-Chyrh Chang

    2012-01-01

    Full Text Available We introduce a hardware acceleration technique for the parallel finite difference time domain (FDTD) method using the SSE (Streaming SIMD (single instruction, multiple data) Extensions) instruction set. Applying the SSE instruction set to the parallel FDTD method achieves a significant improvement in simulation performance. The benchmarks of the SSE acceleration on both a multi-CPU workstation and a computer cluster have demonstrated the advantages of vector arithmetic logic unit (VALU) acceleration over GPU acceleration. Several engineering applications are employed to demonstrate the performance of the parallel FDTD method enhanced by the SSE instruction set.
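
SSE accelerates FDTD because the leapfrog update applies identical arithmetic to packed single-precision floats. The NumPy sketch below (an illustration, not the paper's implementation) writes a 1-D FDTD update in normalized units as whole-array operations, the data-parallel form that maps directly onto SIMD lanes:

```python
import numpy as np

nx, steps = 200, 150
ez = np.zeros(nx, dtype=np.float32)       # E field at integer grid points
hy = np.zeros(nx - 1, dtype=np.float32)   # H field at half grid points
c = np.float32(0.5)                       # Courant number

for t in range(steps):
    hy += c * (ez[1:] - ez[:-1])          # update H from the curl of E
    ez[1:-1] += c * (hy[1:] - hy[:-1])    # update E from the curl of H
    # soft Gaussian source injected at the center of the grid
    ez[nx // 2] += np.float32(np.exp(-((t - 30) / 10.0) ** 2))

print(float(np.max(np.abs(ez))) > 0.1)    # a pulse is propagating
```

In the SSE version, each of these whole-array additions becomes a loop of packed 4-wide single-precision instructions over aligned field arrays.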

  9. Robust fault detection of linear systems using a computationally efficient set-membership method

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Bak, Thomas

    2014-01-01

    In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measureme...... is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods....
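
The interval outer-approximation idea can be sketched for a scalar linear system. Everything below (dynamics, bounds, measurements) is invented for illustration: the predicted output interval is computed from the model and the noise/disturbance bounds, and a measurement falling outside it is flagged as a fault.

```python
# x+ = a*x + b*u + w with |w| <= wb, measured as y = x + v with |v| <= vb.
# Propagate an interval [lo, hi] containing the true state (assumes a > 0),
# flag samples inconsistent with it, and tighten the set on consistent data.
def detect_faults(a, b, wb, vb, x0, us, ys):
    lo, hi = x0, x0
    alarms = []
    for u, y in zip(us, ys):
        lo, hi = a * lo + b * u - wb, a * hi + b * u + wb  # prediction step
        consistent = (lo - vb) <= y <= (hi + vb)           # output interval check
        alarms.append(not consistent)
        if consistent:                                     # measurement update
            lo, hi = max(lo, y - vb), min(hi, y + vb)
    return alarms

us = [1.0] * 6
ys = [0.25, 0.38, 0.44, 2.5, 0.48, 0.49]   # sample 4 is inconsistent (faulty)
print(detect_faults(a=0.5, b=0.25, wb=0.02, vb=0.05, x0=0.0, us=us, ys=ys))
```

The paper's method plays this game with interval vectors for multivariable systems, which is what makes it cheap and trivially parallelizable compared to zonotope representations.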

  10. Factors Analysis And Profit Achievement For Trading Company By Using Rough Set Method

    Directory of Open Access Journals (Sweden)

    Muhammad Ardiansyah Sembiring

    2017-06-01

    Full Text Available This research analyzes the financial reports of a trading company, which are intimately related to the factors that determine the company's profit. The result of this research is new knowledge in the form of rules and an assessment of their performance. The analysis follows the data mining process and uses the rough set method to evaluate the performance of the results. This research will assist company managers in drawing intact and objective conclusions. The rough set method defines the rule discovery process, starting from the formation of the decision system, equivalence classes, the discernibility matrix, the discernibility matrix modulo D, reduction, and general rules. The rough set method is an effective model for performing analysis in the company. Keywords: Data Mining, General Rules, Profit, Rough Set.
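
The rough set notions the analysis relies on can be shown on a toy decision system (the records below are illustrative, not the paper's financial data): objects are grouped into equivalence classes by their condition attributes, and a decision class such as "profit = high" gets lower and upper approximations.

```python
# Each record: (condition attributes..., decision attribute).
records = {
    1: ("high_sales", "low_cost", "high"),
    2: ("high_sales", "low_cost", "high"),
    3: ("high_sales", "high_cost", "low"),
    4: ("low_sales", "low_cost", "high"),
    5: ("low_sales", "low_cost", "low"),   # indiscernible from 4, different decision
}

def approximations(records, target):
    classes = {}
    for obj, (*cond, _) in records.items():
        classes.setdefault(tuple(cond), []).append(obj)
    lower, upper = set(), set()
    for objs in classes.values():
        decisions = {records[o][-1] for o in objs}
        if decisions == {target}:
            lower.update(objs)   # class certainly inside the target concept
        if target in decisions:
            upper.update(objs)   # class possibly inside the target concept
    return lower, upper

lower, upper = approximations(records, "high")
print(sorted(lower), sorted(upper))
```

Objects 4 and 5 share condition attributes but disagree on profit, so they fall in the boundary region; rules generated from the lower approximation are certain, those from the boundary only possible.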

  11. Method of nuclear reactor control using a variable temperature load dependent set point

    International Nuclear Information System (INIS)

    Kelly, J.J.; Rambo, G.E.

    1982-01-01

    A method and apparatus for controlling a nuclear reactor in response to a variable average reactor coolant temperature set point is disclosed. The set point is dependent upon the percent of full power load demand. A manually-actuated ''droop mode'' of control is provided whereby the reactor coolant temperature is allowed to drop below the set point temperature a predetermined amount, wherein the control is switched from reactor control rods exclusively to feedwater flow.
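
The control law described, a set point that varies with load plus a droop allowance, can be sketched as follows. All temperatures and the droop band are illustrative placeholders, not plant parameters from the patent:

```python
# Load-dependent coolant temperature set point: rises linearly from a no-load
# value to a full-load value with percent of full power load demand.
def temperature_setpoint(load_pct, t_zero=557.0, t_full=588.0):
    return t_zero + (t_full - t_zero) * load_pct / 100.0

# In droop mode, rod control tolerates a fixed undershoot below the set point
# before acting; otherwise rods act on any undershoot.
def rod_control_active(t_avg, load_pct, droop_mode=False, droop_band=5.0):
    allowed_drop = droop_band if droop_mode else 0.0
    return t_avg < temperature_setpoint(load_pct) - allowed_drop

print(temperature_setpoint(50.0))                        # 572.5
print(rod_control_active(570.0, 50.0, droop_mode=True))  # within the droop band
```

Within the droop band, temperature regulation is left to feedwater flow rather than the control rods, matching the switch-over described in the abstract.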

  12. A Body of Work Standard-Setting Method with Construct Maps

    Science.gov (United States)

    Wyse, Adam E.; Bunch, Michael B.; Deville, Craig; Viger, Steven G.

    2014-01-01

    This article describes a novel variation of the Body of Work method that uses construct maps to overcome problems of transparency, rater inconsistency, and scores gaps commonly occurring with the Body of Work method. The Body of Work method with construct maps was implemented to set cut-scores for two separate K-12 assessment programs in a large…

  13. Evolutionary Expectations

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    The concept of evolutionary expectations descends from cue learning psychology, synthesizing ideas on rational expectations with ideas on bounded rationality, to provide support for these ideas simultaneously. Evolutionary expectations are rational, but within cognitive bounds. Moreover, they are correlated among people who share environments because these individuals satisfice within their cognitive bounds by using cues in order of validity, as opposed to using cues arbitrarily. Any difference in expectations thereby arises from differences in cognitive ability, because two individuals with identical cognitive bounds will perceive business opportunities identically. In addition, because cues provide information about latent causal structures of the environment, changes in causality must be accompanied by changes in cognitive representations if adaptation is to be maintained.

  14. Unequal Expectations

    DEFF Research Database (Denmark)

    Karlson, Kristian Bernt

    In Chapters II and III, constituting the substantive contribution of the dissertation, I examine the process through which students form expectations for their educational futures. Focusing on the causes rather than the consequences of educational expectations, I argue that students shape their expectations in response to the signals about their academic performance they receive from institutionalized performance indicators in schools. Chapter II considers ... of the relation between the self and educational prospects; evaluations that are socially bounded in that students take their family's social position into consideration when forming their educational expectations. One important consequence of this learning process is that equally talented students tend to make ... The dissertation also discusses the role of causal inference in social science and the potential of its findings to inform educational policy.

  15. State of the art on nuclear heating measurement methods and expected improvements in zero power research reactors

    International Nuclear Information System (INIS)

    Le Guillou, M.; Gruel, A.; Destouches, C.; Blaise, P.

    2017-01-01

    The paper focuses on the recent methodological advances suitable for nuclear heating measurements in zero power research reactors. This bibliographical work is part of an experimental approach currently in progress at CEA Cadarache, aiming at optimizing photon heating measurements in low-power research reactors. It provides an overview of the application fields of the most widely used detectors, namely thermoluminescent dosimeters (TLDs) and optically stimulated luminescent dosimeters. Starting from the methodology currently implemented at CEA, the expected improvements relate to the experimental determination of the neutron component, which is a key point conditioning the accuracy of photon heating measurements in a mixed n-γ field. A recently developed methodology based on the use of 7Li and 6Li-enriched TLDs, pre-calibrated both in photon and neutron fields, is a promising approach to deconvolute the two components of nuclear heating. We also investigate the different methods of optical fiber dosimetry, with a view to assess the feasibility of online photon heating measurements, whose primary benefit is to overcome constraints related to the withdrawal of dosimeters from the reactor immediately after irradiation. Moreover, a fibered setup could allow measuring the instantaneous dose rate during irradiation, as well as the delayed photon dose after reactor shutdown. Some insights from potential further developments are given. Obviously, any improvement of the technique has to lead to a measurement uncertainty at least equal to that of the currently used methodology (∼5% at 1 σ). (authors)
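
The two-dosimeter deconvolution idea reduces to a small linear system: each TLD reading is modeled as R = D_gamma + k * D_neutron, where k is a calibrated relative neutron sensitivity that is small for the 7Li-enriched chip and large for the 6Li-enriched one. Sensitivities and readings below are invented for illustration:

```python
# Solve the 2x2 system
#   r_li7 = D_g + k7 * D_n    (7Li chip: nearly neutron-insensitive)
#   r_li6 = D_g + k6 * D_n    (6Li chip: strongly neutron-sensitive)
# for the photon dose D_g and neutron dose D_n by elimination.
def deconvolute(r_li7, r_li6, k7=0.05, k6=2.0):
    d_n = (r_li6 - r_li7) / (k6 - k7)
    d_g = r_li7 - k7 * d_n
    return d_g, d_n

d_g, d_n = deconvolute(r_li7=10.4, r_li6=26.0)
print(round(d_g, 2), round(d_n, 2))   # 10.0 8.0
```

Real practice adds calibration uncertainties on k7 and k6, which propagate into the quoted ~5% (1 σ) measurement uncertainty.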

  16. State of the art on nuclear heating measurement methods and expected improvements in zero power research reactors

    Directory of Open Access Journals (Sweden)

    Le Guillou Mael

    2017-01-01

    Full Text Available The paper focuses on the recent methodological advances suitable for nuclear heating measurements in zero power research reactors. This bibliographical work is part of an experimental approach currently in progress at CEA Cadarache, aiming at optimizing photon heating measurements in low-power research reactors. It provides an overview of the application fields of the most widely used detectors, namely thermoluminescent dosimeters (TLDs) and optically stimulated luminescent dosimeters. Starting from the methodology currently implemented at CEA, the expected improvements relate to the experimental determination of the neutron component, which is a key point conditioning the accuracy of photon heating measurements in a mixed n–γ field. A recently developed methodology based on the use of 7Li and 6Li-enriched TLDs, precalibrated both in photon and neutron fields, is a promising approach to deconvolute the two components of nuclear heating. We also investigate the different methods of optical fiber dosimetry, with a view to assess the feasibility of online photon heating measurements, whose primary benefit is to overcome constraints related to the withdrawal of dosimeters from the reactor immediately after irradiation. Moreover, a fibered setup could allow measuring the instantaneous dose rate during irradiation, as well as the delayed photon dose after reactor shutdown. Some insights from potential further developments are given. Obviously, any improvement of the technique has to lead to a measurement uncertainty at least equal to that of the currently used methodology (∼5% at 1σ).

  17. Novel multiple criteria decision making methods based on bipolar neutrosophic sets and bipolar neutrosophic graphs

    OpenAIRE

    Muhammad, Akram; Musavarah, Sarwar

    2016-01-01

    In this research study, we introduce the concept of bipolar neutrosophic graphs. We present the dominating and independent sets of bipolar neutrosophic graphs. We describe novel multiple criteria decision making methods based on bipolar neutrosophic sets and bipolar neutrosophic graphs. We also develop an algorithm for computing domination in bipolar neutrosophic graphs.

  18. Application of Local Fractional Series Expansion Method to Solve Klein-Gordon Equations on Cantor Sets

    Directory of Open Access Journals (Sweden)

    Ai-Min Yang

    2014-01-01

    Full Text Available We use the local fractional series expansion method to solve the Klein-Gordon equations on Cantor sets within the local fractional derivatives. The analytical solutions within the nondifferentiable terms are discussed. The obtained results show the simplicity and efficiency of the present technique with application to problems of linear differential equations on Cantor sets.

  19. Personal Goal Setting and Quality of Life: A Mixed Methods Study of Adult Professionals

    Science.gov (United States)

    Ingraham, Frank

    2017-01-01

    This mixed methods study was designed to examine the potential impactful relationship between personal goal setting and the quality of life satisfaction (built upon the Goal Setting Theory of motivation and performance). The study aimed to determine how influential the goal achievement process is (or is not) regarding personal fulfillment and…

  20. Gene set analysis: limitations in popular existing methods and proposed improvements.

    Science.gov (United States)

    Mishra, Pashupati; Törönen, Petri; Leino, Yrjö; Holm, Liisa

    2014-10-01

    Gene set analysis is the analysis of a set of genes that collectively contribute to a biological process. Most popular gene set analysis methods are based on empirical P-values that require a large number of permutations. Despite the numerous gene set analysis methods developed in the past decade, the most popular methods still suffer from serious limitations. We present a gene set analysis method (mGSZ) based on the Gene Set Z-scoring function (GSZ) and asymptotic P-values. Asymptotic P-value calculation requires fewer permutations, and thus speeds up the gene set analysis process. We compare the GSZ-scoring function with seven popular gene set scoring functions and show that GSZ stands out as the best scoring function. In addition, we show improved performance of the GSA method when the max-mean statistic is replaced by the GSZ scoring function. We demonstrate the importance of both gene and sample permutations by showing the consequences in the absence of one or the other. A comparison of asymptotic and empirical methods of P-value estimation demonstrates a clear advantage of asymptotic P-values over empirical P-values. We show that mGSZ outperforms the state-of-the-art methods based on two different evaluations. We compared mGSZ results with permutation and rotation tests and show that rotation does not improve our asymptotic P-values. We also propose well-known asymptotic distribution models for three of the compared methods. mGSZ is available as an R package from cran.r-project.org. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
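
The GSZ function itself is more elaborate, but the generic z-scoring idea with an asymptotic p-value can be sketched as follows (a plain competitive z-test in the spirit of such methods, not mGSZ itself; all scores are synthetic):

```python
from math import erfc, sqrt
from statistics import mean, pstdev

# Compare the mean differential-expression score of a gene set against the
# background distribution of all gene scores; the asymptotic normal p-value
# replaces the empirical (permutation-based) one.
def gene_set_z(all_scores, set_scores):
    mu, sigma = mean(all_scores), pstdev(all_scores)
    z = (mean(set_scores) - mu) * sqrt(len(set_scores)) / sigma
    p = erfc(abs(z) / sqrt(2))   # two-sided asymptotic p-value
    return z, p

background = [0.1 * (i % 21) - 1.0 for i in range(1000)]   # scores in [-1, 1]
gene_set = [0.8, 0.9, 0.7, 1.0, 0.85, 0.75]                # coherently upregulated
z, p = gene_set_z(background, gene_set)
print(z > 3, p < 0.01)
```

The abstract's point is that such closed-form tail probabilities need far fewer (or no) permutations than estimating the same tail empirically.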

  1. Wu’s Characteristic Set Method for SystemVerilog Assertions Verification

    Directory of Open Access Journals (Sweden)

    Xinyan Gao

    2013-01-01

    Full Text Available We propose a verification solution based on the characteristic set of Wu's method towards SystemVerilog assertion checking over digital circuit systems. We define a suitable subset of SVAs so that an efficient polynomial modeling mechanism for both circuit descriptions and assertions can be applied. We present an algorithm framework based on the algebraic representations using characteristic sets of polynomial systems. This symbolic algebraic approach is a useful supplement to the existing verification methods based on simulation.
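
The polynomial modeling step can be given a toy flavor: logic gates become polynomials over {0, 1}, and an assertion holds iff its polynomial vanishes on every consistent valuation of the circuit. (Wu's method establishes this symbolically via characteristic sets and pseudo-division; the sketch below just enumerates valuations to show the encoding.)

```python
from itertools import product

# Standard polynomial encodings of Boolean gates over {0, 1}.
AND = lambda x, y: x * y
OR  = lambda x, y: x + y - x * y
XOR = lambda x, y: x + y - 2 * x * y
NOT = lambda x: 1 - x

# An assertion polynomial must evaluate to 0 on every input valuation.
def assertion_holds(n_inputs, circuit, assertion):
    return all(assertion(*vals, circuit(*vals)) == 0
               for vals in product((0, 1), repeat=n_inputs))

# Circuit: half-adder sum bit s = XOR(a, b).
# Assertion: s equals AND(OR(a, b), NOT(AND(a, b))); encoded as a difference.
circuit = lambda a, b: XOR(a, b)
assertion = lambda a, b, s: s - AND(OR(a, b), NOT(AND(a, b)))
print(assertion_holds(2, circuit, assertion))
```

In the paper's framework, the zero-decision is made by reducing the assertion polynomial modulo the characteristic set of the circuit polynomials instead of enumerating inputs.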

  2. RESEARCH PRIORITY-SETTING IN PAPUA NEW GUINEA: POLICIES, METHODS AND PRACTICALITIES

    OpenAIRE

    Omuru, Eric; Kingwell, Ross S.

    2000-01-01

    Agricultural research priority-setting at best promotes the effective and efficient use of scarce research resources. Firstly, this paper reviews the priority-setting methods used in Papua New Guinea for agricultural R&D and examines the practicalities of implementing these and other methods. Secondly, it reports on key factors affecting the strategic directions for agricultural R&D in Papua New Guinea. These factors include: (i) the long-term trends in international crop prices; (ii) l...

  3. Treinamento esfincteriano: métodos, expectativas dos pais e morbidades associadas Toilet training: methods, parental expectations and associated dysfunctions

    Directory of Open Access Journals (Sweden)

    Denise M. Mota

    2008-02-01

    Full Text Available OBJECTIVE: To review the scientific and lay literature on toilet training, covering parents' expectations, the methods available for acquiring sphincter control, and associated morbidities. SOURCES: Publications from 1960 to 2007, retrieved from the MEDLINE, Cochrane Collaboration, ERIC, Web of Science, LILACS, SciELO and Google databases; searches of related articles, article references, authors and pediatric societies. A total of 473 articles were examined and 85 were selected. SUMMARY OF THE FINDINGS: Parents hold unrealistic expectations about the age for diaper removal, without taking child development into account. Training strategies have not changed in recent decades, and the training age has been postponed in most countries. Training methods are rarely used. Early initiation of toilet training and stressful events during the training period can prolong the process. A higher frequency of enuresis, urinary tract infection, voiding dysfunction, constipation, encopresis and refusal to go to the bathroom is observed in children with inadequate training. Lay literature for parents is abundant and adequate, distributed through books and the Internet, but not widely available to the Brazilian population. Only three international pediatric societies provide guidelines on toilet training. CONCLUSIONS: Sphincter control has been postponed in most countries. The existing training methods date from past decades, are little used by mothers and given little attention by pediatricians; inadequate training may be one of the causative factors of voiding and intestinal disorders, which cause problems for children and their families.

  4. Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2012-01-01

Full Text Available The divide and conquer method is a typical granular computing method using multiple levels of abstraction and granulation. So far, although some achievements based on the divide and conquer method have been made in rough set theory, systematic methods for knowledge reduction based on divide and conquer are still absent. In this paper, knowledge reduction approaches based on the divide and conquer method, under an equivalence relation and under a tolerance relation, are presented, respectively. After that, a systematic approach, named the abstract process for knowledge reduction based on the divide and conquer method in rough set theory, is proposed. Based on the presented approach, two algorithms for knowledge reduction, an algorithm for attribute reduction and an algorithm for attribute value reduction, are presented. Experimental evaluations test the methods on UCI data sets and KDDCUP99 data sets. The experimental results illustrate that the proposed approaches process large data sets efficiently with good recognition rates, compared with KNN, SVM, C4.5, Naive Bayes, and CART.
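The reduction step described in this record can be illustrated with a minimal Python sketch (not the paper's algorithms): equivalence classes are built by hash-grouping rows on their attribute values rather than by pairwise comparison, and an attribute is dropped when the remaining ones still discern the decision. The table layout and all names are hypothetical.

```python
def partition(table, attrs):
    """Group row indices into equivalence classes induced by `attrs`
    (hash-grouping on the attribute tuple instead of comparing all
    pairs, in the spirit of divide and conquer)."""
    classes = {}
    for i, row in enumerate(table):
        classes.setdefault(tuple(row[a] for a in attrs), []).append(i)
    return list(classes.values())

def is_consistent(table, attrs, decision):
    """True if every equivalence class maps to a single decision value."""
    return all(len({table[i][decision] for i in cls}) == 1
               for cls in partition(table, attrs))

def attribute_reduct(table, attrs, decision):
    """Greedily drop attributes that remain dispensable."""
    reduct = list(attrs)
    for a in attrs:
        trial = [x for x in reduct if x != a]
        if trial and is_consistent(table, trial, decision):
            reduct = trial
    return reduct
```

On a toy decision table where the decision depends only on attribute `a`, the sketch drops `b` and returns `["a"]`.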

  5. Timetable-based simulation method for choice set generation in large-scale public transport networks

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Anderson, Marie Karen; Nielsen, Otto Anker

    2016-01-01

The composition and size of the choice sets are key to the correct estimation of and prediction by route choice models. While the existing literature has paid a great deal of attention to the generation of path choice sets for private transport problems, the same does not apply to public transport problems. This study proposes a timetable-based simulation method for generating path choice sets in a multimodal public transport network. Moreover, this study illustrates the feasibility of its implementation by applying the method to reproduce 5131 real-life trips in the Greater Copenhagen Area and to assess the choice set quality in a complex multimodal transport network. Results illustrate the applicability of the algorithm and the relevance of the utility specification chosen for the reproduction of real-life path choices. Moreover, results show that the level of stochasticity used in choice set…

  6. Multi-person detection and tracking based on a hierarchical level-set method

    Science.gov (United States)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information to detect objects effectively. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture and shape information. These features are enrolled in a covariance matrix as a region descriptor. The present method is fully automated, without the need to manually specify the initial level-set contour. It is based on combined person detection and background subtraction methods. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries and inhibit region fusion. The computational cost of the level set is reduced by using a narrow band technique. Many experiments are performed on challenging video sequences and show the effectiveness of the proposed method.

  7. A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets

    Science.gov (United States)

    Carrig, Madeline M.; Manrique-Vallier, Daniel; Ranby, Krista W.; Reiter, Jerome P.; Hoyle, Rick H.

    2015-01-01

    Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches. PMID:26257437

  8. The Symbolic Violence of Setting: A Bourdieusian Analysis of Mixed Methods Data on Secondary Students' Views about Setting

    Science.gov (United States)

    Archer, Louise; Francis, Becky; Miller, Sarah; Taylor, Becky; Tereshchenko, Antonina; Mazenod, Anna; Pepper, David; Travers, Mary-Claire

    2018-01-01

    "Setting" is a widespread practice in the UK, despite little evidence of its efficacy and substantial evidence of its detrimental impact on those allocated to the lowest sets. Taking a Bourdieusian approach, we propose that setting can be understood as a practice through which the social and cultural reproduction of dominant power…

  9. Great Expectations

    NARCIS (Netherlands)

    Dickens, Charles

    2005-01-01

    One of Dickens's most renowned and enjoyable novels, Great Expectations tells the story of Pip, an orphan boy who wishes to transcend his humble origins and finds himself unexpectedly given the opportunity to live a life of wealth and respectability. Over the course of the tale, in which Pip

  10. Local Fractional Series Expansion Method for Solving Wave and Diffusion Equations on Cantor Sets

    Directory of Open Access Journals (Sweden)

    Ai-Min Yang

    2013-01-01

    Full Text Available We proposed a local fractional series expansion method to solve the wave and diffusion equations on Cantor sets. Some examples are given to illustrate the efficiency and accuracy of the proposed method to obtain analytical solutions to differential equations within the local fractional derivatives.

  11. An Investigation of Undefined Cut Scores with the Hofstee Standard-Setting Method

    Science.gov (United States)

    Wyse, Adam E.; Babcock, Ben

    2017-01-01

    This article provides an overview of the Hofstee standard-setting method and illustrates several situations where the Hofstee method will produce undefined cut scores. The situations where the cut scores will be undefined involve cases where the line segment derived from the Hofstee ratings does not intersect the score distribution curve based on…

  12. A Mapmark method of standard setting as implemented for the National Assessment Governing Board.

    Science.gov (United States)

    Schulz, E Matthew; Mitzel, Howard C

    2011-01-01

This article describes a Mapmark standard setting procedure, developed under contract with the National Assessment Governing Board (NAGB). The procedure enhances the bookmark method with spatially representative item maps, holistic feedback, and an emphasis on independent judgment. A rationale for these enhancements, and the bookmark method, is presented, followed by a detailed description of the materials and procedures used in a meeting to set standards for the 2005 National Assessment of Educational Progress (NAEP) in Grade 12 mathematics. The use of difficulty-ordered content domains to provide holistic feedback is a particularly novel feature of the method. Process evaluation results comparing Mapmark to Angoff-based methods previously used for NAEP standard setting are also presented.
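The geometry of the Hofstee compromise discussed in record 11, including the undefined case, can be sketched in a few lines (a simplified illustration, not the article's procedure): panelists supply minimum/maximum cut scores and failure rates, and the cut score is where the line from (k_min, f_max) to (k_max, f_min) crosses the observed failure-rate curve.

```python
def hofstee_cut(scores, k_min, k_max, f_min, f_max, step=0.1):
    """Scan the Hofstee line from (k_min, f_max) to (k_max, f_min) and
    return the cut score where it meets the observed failure-rate
    curve; return None when the two never intersect, i.e. the
    undefined-cut-score situation."""
    n = len(scores)

    def fail_rate(c):                     # share of examinees scoring below c
        return sum(s < c for s in scores) / n

    # no intersection if the curve lies entirely above or below the line
    if fail_rate(k_min) > f_max or fail_rate(k_max) < f_min:
        return None
    c = k_min
    while c <= k_max:
        t = (c - k_min) / (k_max - k_min)
        line = f_max + t * (f_min - f_max)   # the line decreases with c
        if fail_rate(c) >= line:             # the rising curve meets the line
            return round(c, 3)
        c += step
    return None
```

With scores uniform on 0–100 and panel judgments (k_min=50, k_max=70, f_min=0.2, f_max=0.6), the crossing falls near 53; shrinking the acceptable failure-rate band below the observed curve yields `None`.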

  13. An accurate conservative level set/ghost fluid method for simulating turbulent atomization

    International Nuclear Information System (INIS)

    Desjardins, Olivier; Moureau, Vincent; Pitsch, Heinz

    2008-01-01

    This paper presents a novel methodology for simulating incompressible two-phase flows by combining an improved version of the conservative level set technique introduced in [E. Olsson, G. Kreiss, A conservative level set method for two phase flow, J. Comput. Phys. 210 (2005) 225-246] with a ghost fluid approach. By employing a hyperbolic tangent level set function that is transported and re-initialized using fully conservative numerical schemes, mass conservation issues that are known to affect level set methods are greatly reduced. In order to improve the accuracy of the conservative level set method, high order numerical schemes are used. The overall robustness of the numerical approach is increased by computing the interface normals from a signed distance function reconstructed from the hyperbolic tangent level set by a fast marching method. The convergence of the curvature calculation is ensured by using a least squares reconstruction. The ghost fluid technique provides a way of handling the interfacial forces and large density jumps associated with two-phase flows with good accuracy, while avoiding artificial spreading of the interface. Since the proposed approach relies on partial differential equations, its implementation is straightforward in all coordinate systems, and it benefits from high parallel efficiency. The robustness and efficiency of the approach is further improved by using implicit schemes for the interface transport and re-initialization equations, as well as for the momentum solver. The performance of the method is assessed through both classical level set transport tests and simple two-phase flow examples including topology changes. It is then applied to simulate turbulent atomization of a liquid Diesel jet at Re=3000. The conservation errors associated with the accurate conservative level set technique are shown to remain small even for this complex case
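The hyperbolic-tangent profile at the heart of the conservative level set technique, as commonly written following the Olsson–Kreiss formulation the abstract cites, can be sketched in a few lines (parameter names are illustrative):

```python
import math

def signed_distance_circle(x, y, cx=0.0, cy=0.0, r=1.0):
    """Signed distance to a circle: negative inside, positive outside."""
    return math.hypot(x - cx, y - cy) - r

def conservative_level_set(phi, eps):
    """Map a signed distance phi to the hyperbolic-tangent profile
    psi = 0.5 * (tanh(phi / (2 * eps)) + 1), which varies smoothly
    from 0 (one phase) to 1 (the other) across an interface of
    thickness O(eps); the interface itself sits at psi = 0.5."""
    return 0.5 * (math.tanh(phi / (2.0 * eps)) + 1.0)
```

Because psi is bounded and nearly constant away from the interface, transporting it conservatively preserves the enclosed "mass" far better than transporting the unbounded signed distance directly, which is the motivation given in the paper.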

  14. Effect of the absolute statistic on gene-sampling gene-set analysis methods.

    Science.gov (United States)

    Nam, Dougu

    2017-06-01

    Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, the simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is the highly inflated false-positive rate. In this paper, the effect of absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing the bidirectional changes in each gene set. Here, it is shown that incorporating the absolute gene statistic in gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was investigated by power, false-positive rate, and receiver operating curve for a number of simulated and real datasets. The performances of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests were also compared and discussed.
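The effect described here can be sketched with a toy gene-sampling test (an illustration, not the paper's implementation): the set score is the mean gene statistic, the null distribution is built by sampling random gene sets of the same size, and `absolute=True` switches to the absolute statistic. All data and names are made up.

```python
import random

def gene_set_pvalue(gene_stats, set_genes, n_perm=2000, absolute=True, seed=0):
    """Gene-sampling test: compare the set's mean statistic with the
    means of randomly sampled gene sets of equal size. With
    absolute=True the |statistic| is used, which dampens the inflated
    false positives the paper attributes to signed statistics."""
    rng = random.Random(seed)
    stat = abs if absolute else float
    observed = sum(stat(gene_stats[g]) for g in set_genes) / len(set_genes)
    genes = list(gene_stats)
    hits = 0
    for _ in range(n_perm):
        sample = rng.sample(genes, len(set_genes))
        null = sum(stat(gene_stats[g]) for g in sample) / len(set_genes)
        if null >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)       # add-one permutation p-value
```

For a gene set with strong but bidirectional changes (half +5, half −5), the signed mean cancels to zero and the set looks unremarkable, while the absolute statistic detects it, which mirrors the bidirectional-change argument in the abstract.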

  15. Automatic segmentation of Leishmania parasite in microscopic images using a modified CV level set method

    Science.gov (United States)

    Farahi, Maria; Rabbani, Hossein; Talebi, Ardeshir; Sarrafzadeh, Omid; Ensafi, Shahab

    2015-12-01

Visceral leishmaniasis is a parasitic disease that affects the liver, spleen and bone marrow. According to a World Health Organization report, definitive diagnosis is possible only by direct observation of the Leishman body in microscopic images taken from bone marrow samples. We utilize morphological operations and the CV level set method to segment Leishman bodies in digital color microscopic images captured from bone marrow samples. A linear contrast stretching method is used for image enhancement, and a morphological method is applied to determine the parasite regions and wipe out unwanted objects. Modified global and local CV level set methods are proposed for segmentation, and a shape-based stopping factor is used to hasten the algorithm. Manual segmentation is considered as ground truth to evaluate the proposed method. The method was tested on 28 samples and achieved a mean segmentation error of 10.90% for the global model and 9.76% for the local model.

  16. Setting health research priorities using the CHNRI method: IV. Key conceptual advances

    Directory of Open Access Journals (Sweden)

    Igor Rudan

    2016-06-01

Full Text Available The Child Health and Nutrition Research Initiative (CHNRI) started as an initiative of the Global Forum for Health Research in Geneva, Switzerland. Its aim was to develop a method that could assist priority setting in health research investments. The first version of the CHNRI method was published in 2007–2008. The aim of this paper is to summarize the history of the development of the CHNRI method and its key conceptual advances.

  17. Experiments expectations

    OpenAIRE

    Gorini, B; Meschi, E

    2014-01-01

This paper presents the experiments' expectations and constraints regarding the commissioning procedure and the running conditions for the 2015 data-taking period. Views on the various beam parameters for the p-p period, such as beam energy, maximum pileup, bunch spacing and the luminosity limitation in IP2 and IP8, are discussed. The goals and constraints of the 2015 physics program are also presented, including the heavy-ion period as well as the special...

  18. An Alternative to the Carlson-Parkin Method for the Quantification of Qualitative Inflation Expectations: Evidence from the Ifo World Economic Survey

    OpenAIRE

    Henzel, Steffen; Wollmershäuser, Timo

    2005-01-01

    This paper presents a new methodology for the quantification of qualitative survey data. Traditional conversion methods, such as the probability approach of Carlson and Parkin (1975) or the time-varying parameters model of Seitz (1988), require very restrictive assumptions concerning the expectations formation process of survey respondents. Above all, the unbiasedness of expectations, which is a necessary condition for rationality, is imposed. Our approach avoids these assumptions. The novelt...

  19. Comparison of different methods for the solution of sets of linear equations

    International Nuclear Information System (INIS)

    Bilfinger, T.; Schmidt, F.

    1978-06-01

The application of conjugate-gradient methods as novel general iterative methods for solving sets of linear equations with symmetric system matrices prompted this paper, in which these methods are compared with the conventional, variously accelerated Gauss-Seidel iteration. In addition, the direct Cholesky method was included in the comparison. The studies mainly concerned memory requirements, computing time, speed of convergence, and accuracy under different conditions of the system matrices, which also reveal the sensitivity of the methods to truncation errors. (orig.)
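For reference, the conjugate-gradient iteration compared in this report can be written in a few lines for a small symmetric positive-definite system (a textbook sketch, not the authors' code):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Plain conjugate-gradient iteration for a symmetric
    positive-definite system A x = b (dense lists, no libraries)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x, with x = 0
    p = r[:]                       # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

In exact arithmetic the method converges in at most n steps for an n-by-n system, which is the property that made it attractive against the accelerated Gauss-Seidel iterations studied here.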

  20. Studying learning in the healthcare setting: the potential of quantitative diary methods.

    Science.gov (United States)

    Ciere, Yvette; Jaarsma, Debbie; Visser, Annemieke; Sanderman, Robbert; Snippe, Evelien; Fleer, Joke

    2015-08-01

Quantitative diary methods are longitudinal approaches that involve the repeated measurement of aspects of people's experience of daily life. In this article, we outline the main characteristics and applications of quantitative diary methods and discuss how their use may further research in the field of medical education. Quantitative diary methods offer several methodological advantages, such as measuring aspects of learning with great detail, accuracy and authenticity. Moreover, they enable researchers to study how and under which conditions learning in the health care setting occurs and in which ways learning can be promoted. Hence, quantitative diary methods may contribute to theory development and the optimization of teaching methods in medical education.

  1. Use of simulated data sets to evaluate the fidelity of metagenomic processing methods

    Energy Technology Data Exchange (ETDEWEB)

    Mavromatis, Konstantinos; Ivanova, Natalia; Barry, Kerri; Shapiro, Harris; Goltsman, Eugene; McHardy, Alice C.; Rigoutsos, Isidore; Salamov, Asaf; Korzeniewski, Frank; Land, Miriam; Lapidus, Alla; Grigoriev, Igor; Richardson, Paul; Hugenholtz, Philip; Kyrpides, Nikos C.

    2006-12-01

    Metagenomics is a rapidly emerging field of research for studying microbial communities. To evaluate methods presently used to process metagenomic sequences, we constructed three simulated data sets of varying complexity by combining sequencing reads randomly selected from 113 isolate genomes. These data sets were designed to model real metagenomes in terms of complexity and phylogenetic composition. We assembled sampled reads using three commonly used genome assemblers (Phrap, Arachne and JAZZ), and predicted genes using two popular gene-finding pipelines (fgenesb and CRITICA/GLIMMER). The phylogenetic origins of the assembled contigs were predicted using one sequence similarity-based (BLAST hit distribution) and two sequence composition-based (PhyloPythia, oligonucleotide frequencies) binning methods. We explored the effects of the simulated community structure and method combinations on the fidelity of each processing step by comparison to the corresponding isolate genomes. The simulated data sets are available online to facilitate standardized benchmarking of tools for metagenomic analysis.

  3. How countries cope with competing demands and expectations: perspectives of different stakeholders on priority setting and resource allocation for health in the era of HIV and AIDS.

    Science.gov (United States)

    Jenniskens, Françoise; Tiendrebeogo, Georges; Coolen, Anne; Blok, Lucie; Kouanda, Seni; Sataru, Fuseini; Ralisimalala, Andriamampianina; Mwapasa, Victor; Kiyombo, Mbela; Plummer, David

    2012-12-11

    Health systems have experienced unprecedented stress in recent years, and as yet no consensus has emerged as to how to deal with the multiple burden of disease in the context of HIV and AIDS and other competing health priorities. Priority setting is essential, yet this is a complex, multifaceted process. Drawing on a study conducted in five African countries, this paper explores different stakeholders' perceptions of health priorities, how priorities are defined in practice, the process of resource allocation for HIV and Health and how different stakeholders perceive this. A sub-analysis was conducted of selected data from a wider qualitative study that explored the interactions between health systems and HIV and AIDS responses in five sub-Saharan countries (Burkina Faso, the Democratic Republic of Congo, Ghana, Madagascar and Malawi). Key background documents were analysed and semi-structured interviews (n = 258) and focus group discussions (n = 45) were held with representatives of communities, health personnel, decision makers, civil society representatives and development partners at both national and district level. Health priorities were expressed either in terms of specific health problems and diseases or gaps in service delivery requiring a strengthening of the overall health system. In all five countries study respondents (with the exception of community members in Ghana) identified malaria and HIV as the two top health priorities. Community representatives were more likely to report concerns about accessibility of services and quality of care. National level respondents often referred to wider systemic challenges in relation to achieving the Millennium Development Goals (MDGs). Indeed, actual priority setting was heavily influenced by international agendas (e.g. MDGs) and by the ways in which development partners were supporting national strategic planning processes. At the same time, multi-stakeholder processes were increasingly used to identify

  4. Set simulation of a turbulent arc by Monte-Carlo method

    International Nuclear Information System (INIS)

    Zhukov, M.F.; Devyatov, B.N.; Nazaruk, V.I.

    1982-01-01

A method for simulating turbulent arc fluctuations is suggested, based on a probabilistic set description of conducting-channel displacements over the plane net nodes, taking into account the turbulent eddies that cause non-uniformity of the displacement field. The problem is treated in terms of random set theory. Methods to control the displacements by varying the local displacement sets are described. A local-set approach to turbulent arc simulation is used for a statistical study of the evolution of the arc form in a turbulent gas flow. The method involves numerical experiments on a computer. Various ways to solve the problem of controlling the geometric form of an arc column on a model are described. Also considered is the organization of physical experiments to obtain the information required for the identification of local sets. The suggested method of applying mathematical experiments is associated with the principles of an operational game. (author)

  5. Level Set Projection Method for Incompressible Navier-Stokes on Arbitrary Boundaries

    KAUST Repository

    Williams-Rioux, Bertrand

    2012-01-12

A second-order level set projection method for the incompressible Navier-Stokes equations is proposed to solve flow around arbitrary geometries. We used a rectilinear grid with collocated, cell-centered velocity and pressure. An explicit Godunov procedure is used to address the nonlinear advection terms, and an implicit Crank-Nicholson method to update viscous effects. An approximate pressure projection is implemented at the end of the time stepping, using multigrid as a conventional fast iterative method. The level set method developed by Osher and Sethian [17] is implemented to address real momentum and pressure boundary conditions by the advection of a distance function, as proposed by Aslam [3]. Numerical results for the Strouhal number and drag coefficients validated the model with good accuracy for flow over a cylinder in the parallel shedding regime (47 < Re < 180). Simulations for an array of cylinders and an oscillating cylinder were performed, with the latter demonstrating our method's ability to handle dynamic boundary conditions.

  6. Application of the level set method for multi-phase flow computation in fusion engineering

    International Nuclear Information System (INIS)

    Luo, X-Y.; Ni, M-J.; Ying, A.; Abdou, M.

    2006-01-01

Numerical simulation of multi-phase flow is essential to evaluate the feasibility of a liquid protection scheme for the power plant chamber. The level set method is one of the best methods for computing and analyzing the motion of interfaces in multi-phase flow. This paper presents a general formulation of the second-order projection method combined with the level set method to simulate unsteady incompressible multi-phase flow, with or without phase change, as encountered in fusion science and engineering. A third-order ENO scheme and a second-order semi-implicit Crank-Nicholson scheme are used to update the convective and diffusion terms. The numerical results show that this method can handle complex deformation of the interface; the effect of liquid-vapor phase change will be included in future work.

  7. Robust boundary detection of left ventricles on ultrasound images using ASM-level set method.

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Li, Hong; Teng, Yueyang; Kang, Yan

    2015-01-01

The level set method has been widely used in medical image analysis, but it has difficulties when used to segment left ventricular (LV) boundaries on echocardiography images, because the boundaries are not very distinct and the signal-to-noise ratio of echocardiography images is not very high. In this paper, we introduce the Active Shape Model (ASM) into the traditional level set method to enforce shape constraints. It improves the accuracy of boundary detection and makes the evolution more efficient. Experiments conducted on real cardiac ultrasound image sequences show a positive and promising result.

  8. Comparative Analysis of Fuzzy Set Defuzzification Methods in the Context of Ecological Risk Assessment

    Directory of Open Access Journals (Sweden)

    Užga-Rebrovs Oļegs

    2017-12-01

Full Text Available Fuzzy inference systems are widely used in various areas of human activity. Their most widespread use lies in the fuzzy control of technical devices of different kinds. Another use of fuzzy inference systems is the modelling and assessment of various risks when objective initial data are insufficient or missing. Fuzzy inference concludes with the defuzzification of the resulting fuzzy sets, and a large number of techniques for implementing this procedure are available. The paper presents a comparative analysis of several widespread fuzzy set defuzzification methods and proposes the most appropriate ones in the context of ecological risk assessment.
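Two of the widespread defuzzification techniques such a comparison typically covers, centre of gravity and mean of maxima, can be sketched on a sampled membership function (an illustration; the paper's exact method list is not reproduced here):

```python
def centroid(xs, mu):
    """Centre-of-gravity defuzzification of a sampled fuzzy set:
    membership-weighted average of the support points."""
    return sum(x * m for x, m in zip(xs, mu)) / sum(mu)

def mean_of_maxima(xs, mu):
    """Average of the support points where membership peaks."""
    peak = max(mu)
    maxima = [x for x, m in zip(xs, mu) if m == peak]
    return sum(maxima) / len(maxima)
```

On a skewed fuzzy set such as mu = [0, 0.5, 1.0, 0.25, 0] over xs = [0, 1, 2, 3, 4], the two methods disagree (about 1.857 versus 2.0), which is exactly the kind of divergence that motivates a comparative analysis.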

  9. Best Practice Life Expectancy

    DEFF Research Database (Denmark)

    Medford, Anthony

    2017-01-01

Background: Whereas the rise in human life expectancy has been extensively studied, the evolution of maximum life expectancies, i.e., the rise in best-practice life expectancy in a group of populations, has not been examined to the same extent. The linear rise in best-practice life expectancy has been reported previously by various authors. Though remarkable, this is simply an empirical observation. Objective: We examine best-practice life expectancy more formally by using extreme value theory. Methods: Extreme value distributions are fit to the time series (1900 to 2012) of maximum life expectancies at birth and age 65, for both sexes, using data from the Human Mortality Database and the United Nations. Conclusions: Generalized extreme value distributions offer a theoretically justified way to model best-practice life expectancies. Using this framework one can straightforwardly obtain…
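As an illustration of the extreme-value machinery referred to here, a method-of-moments fit of a Gumbel distribution (the light-tailed member of the generalized extreme value family) can be sketched as follows; the authors fit full GEV distributions, so this is only a simplified stand-in, and the sample numbers are made up:

```python
import math
import statistics

def fit_gumbel(maxima):
    """Method-of-moments fit of a Gumbel distribution to a series of
    observed maxima; returns (location mu, scale beta)."""
    beta = statistics.stdev(maxima) * math.sqrt(6) / math.pi
    mu = statistics.mean(maxima) - 0.5772156649 * beta  # Euler-Mascheroni
    return mu, beta

def gumbel_quantile(mu, beta, p):
    """Level not exceeded with probability p (e.g. p = 0.99)."""
    return mu - beta * math.log(-math.log(p))
```

Given a fitted location and scale, high quantiles of the maximum, the kind of "how high can best practice go" statement the framework enables, follow directly from the closed-form quantile function.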

  10. A simplified approach to the PROMETHEE method for priority setting in management of mine action projects

    Directory of Open Access Journals (Sweden)

    Marko Mladineo

    2016-12-01

Full Text Available In the last 20 years, priority setting in mine action, i.e. in humanitarian demining, has become an increasingly important topic. Given that mine action projects require management and decision-making based on a multi-criteria approach, multi-criteria decision-making methods like PROMETHEE and AHP have been used worldwide for priority setting. However, in mine action, where the stakeholders in the priority-setting process are project managers, local politicians, leaders of humanitarian organizations, and the like, applying these methods can be difficult. Therefore, a specialized web-based decision support system (Web DSS for priority setting, developed as part of the FP7 project TIRAMISU, has been extended with a module for developing custom priority-setting scenarios in an exceptionally easy, user-friendly way. The idea behind this research is to simplify multi-criteria analysis based on the PROMETHEE method. A simplified PROMETHEE method is therefore presented, based on statistical analysis for automated suggestion of parameters such as preference function thresholds, interactive selection of criteria weights, and easy input of criteria evaluations. The result is a web-based DSS that can be applied worldwide for priority setting in mine action. Additionally, the management of mine action projects is supported by modules that provide spatial data based on a geographic information system (GIS. In this paper, the benefits and limitations of the simplified PROMETHEE method are presented using a case study involving mine action projects, and proposals are given for further research.
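The core PROMETHEE II computation that such a simplified approach builds on can be sketched as follows (a minimal illustration using the "usual" preference function; the paper's automated threshold suggestions and GIS modules are not reproduced):

```python
def promethee_net_flows(actions, weights, prefer=None):
    """PROMETHEE II net outranking flows with the simplest ('usual')
    preference function P(d) = 1 if d > 0 else 0.
    `actions` maps a name to its list of criterion scores
    (higher is better); `weights` should sum to 1."""
    names = list(actions)
    n = len(names)
    prefer = prefer or (lambda d: 1.0 if d > 0 else 0.0)

    def pi(a, b):   # weighted preference of action a over action b
        return sum(w * prefer(x - y)
                   for w, x, y in zip(weights, actions[a], actions[b]))

    flows = {}
    for a in names:
        plus = sum(pi(a, b) for b in names if b != a) / (n - 1)
        minus = sum(pi(b, a) for b in names if b != a) / (n - 1)
        flows[a] = plus - minus     # net flow ranks the actions
    return flows
```

Ranking candidate demining projects then reduces to sorting by net flow; the simplification proposed in the paper concerns how the thresholds and weights feeding this computation are chosen, not the flow formula itself.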

  11. The Interval-Valued Triangular Fuzzy Soft Set and Its Method of Dynamic Decision Making

    Directory of Open Access Journals (Sweden)

    Xiaoguo Chen

    2014-01-01

Full Text Available A concept of the interval-valued triangular fuzzy soft set is presented, and operations such as "AND," "OR," intersection, union and complement are defined. Some of their properties are then discussed and several conclusions drawn. A dynamic decision-making model is built on the definition of the interval-valued triangular fuzzy soft set, in which period weights are determined by the exponential decay method. The arithmetic weighted average operator of the interval-valued triangular fuzzy soft set is given following the idea of aggregation, thereby aggregating interval-valued triangular fuzzy soft sets from different time periods into a collective interval-valued triangular fuzzy soft set. Formulas for the selection and decision values of the different objects are given, so that optimal decision making is achieved according to the decision values. Finally, the steps of the method are summarized, and an example is given to illustrate its application.
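The exponential-decay period weights and the component-wise aggregation idea can be sketched for plain triangular fuzzy numbers (the paper works with interval-valued ones; the decay constant and all names are illustrative):

```python
import math

def decay_weights(n_periods, lam=0.5):
    """Exponential-decay period weights: the most recent period gets
    the largest weight; weights are normalized to sum to 1."""
    raw = [math.exp(-lam * (n_periods - 1 - k)) for k in range(n_periods)]
    total = sum(raw)
    return [r / total for r in raw]

def aggregate_triangular(series, weights):
    """Component-wise weighted average of triangular fuzzy numbers
    given as (lower, mode, upper) triples, one per period."""
    return tuple(sum(w * tfn[i] for w, tfn in zip(weights, series))
                 for i in range(3))
```

Aggregating the per-period evaluations this way collapses a time series of fuzzy assessments into a single collective one, from which selection and decision values can then be computed as in the paper.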

  12. Novel method of finding extreme edges in a convex set of N-dimension vectors

    Science.gov (United States)

    Hu, Chia-Lun J.

    2001-11-01

    As we published in the last few years, for a binary neural network pattern recognition system to learn a given mapping {Um → Vm, m = 1 to M}, where Um is an N-dimension analog (pattern) vector and Vm is a P-bit binary (classification) vector, the if-and-only-if (IFF) condition that this network can learn this mapping is that each i-set in {Ymi, m = 1 to M} (where Ymi ≡ Vmi·Um and Vmi = +1 or -1 is the i-th bit of Vm; i = 1 to P, so there are P such sets) is POSITIVELY, LINEARLY, INDEPENDENT or PLI. We have shown that this PLI condition is MORE GENERAL than the convexity condition applied to a set of N-vectors. In the design of old learning machines, we know that if a set of N-dimension analog vectors forms a convex set, and if the machine can learn the boundary vectors (or extreme edges) of this set, then it can definitely learn the inside vectors contained in this POLYHEDRON CONE. This paper reports a new method and new algorithm to find the boundary vectors of a convex set of ND analog vectors.
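To make "extreme edges of a convex set" concrete, here is a classical 2-D stand-in: Andrew's monotone chain, which returns exactly the hull vertices (the extreme points) of a planar point set. This is not the paper's N-dimensional algorithm, only a familiar low-dimensional illustration of the concept.

```python
def convex_hull_2d(points):
    """Extreme points (convex hull vertices) of a 2-D point set via
    Andrew's monotone chain. Interior points are discarded."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):  # z-component of (a-o) x (b-o)
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:                      # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop duplicates

hull = convex_hull_2d([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

A machine that learns only the four corner vectors of this square can, by the convexity argument in the abstract, classify the interior point `(1, 1)` as well.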

  13. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases including the 3D drop breakup in an impulsively accelerated free stream, and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  14. Reconstruction of thin electromagnetic inclusions by a level-set method

    International Nuclear Information System (INIS)

    Park, Won-Kwang; Lesselier, Dominique

    2009-01-01

    In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of the concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first one describes location and shape, and the other one describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-square cost functional. Several numerical experiments, on noiseless as well as noisy data, illustrate how the proposed method behaves.

  15. Improvement of training set structure in fusion data cleaning using Time-Domain Global Similarity method

    International Nuclear Information System (INIS)

    Liu, J.; Lan, T.; Qin, H.

    2017-01-01

    Traditional data cleaning identifies dirty data by classifying original data sequences, which is a class-imbalanced problem since the proportion of incorrect data is much less than the proportion of correct ones for most diagnostic systems in Magnetic Confinement Fusion (MCF) devices. When using machine learning algorithms to classify diagnostic data based on class-imbalanced training set, most classifiers are biased towards the major class and show very poor classification rates on the minor class. By transforming the direct classification problem about original data sequences into a classification problem about the physical similarity between data sequences, the class-balanced effect of Time-Domain Global Similarity (TDGS) method on training set structure is investigated in this paper. Meanwhile, the impact of improved training set structure on data cleaning performance of TDGS method is demonstrated with an application example in EAST POlarimetry-INTerferometry (POINT) system.

  16. Level set methods for detonation shock dynamics using high-order finite elements

    Energy Technology Data Exchange (ETDEWEB)

    Dobrev, V. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Grogan, F. C. [Univ. of California, San Diego, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kolev, T. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rieben, R [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tomov, V. Z. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-26

    Level set methods are a popular approach to modeling evolving interfaces. We present a level set advection solver in two and three dimensions using the discontinuous Galerkin method with high-order finite elements. During evolution, the level set function is reinitialized to a signed distance function to maintain accuracy. Our approach leads to stable front propagation and convergence on high-order, curved, unstructured meshes. The ability of the solver to implicitly track moving fronts lends itself to a number of applications; in particular, we highlight applications to high-explosive (HE) burn and detonation shock dynamics (DSD). We provide results for two- and three-dimensional benchmark problems as well as applications to DSD.
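The reinitialization idea mentioned above — resetting the level set function to a signed distance function without moving its zero crossings — can be shown in one dimension. Production codes (including, presumably, the solver described here) evolve a reinitialization PDE on high-order meshes; this direct recomputation is only a sketch of the target property.

```python
def reinitialize_1d(phi, x):
    """Reset a 1-D level set function to a signed distance function,
    keeping its zero crossings fixed (direct recomputation for clarity)."""
    zeros = []
    for i in range(len(phi) - 1):            # locate zero crossings
        if phi[i] == 0:
            zeros.append(x[i])
        elif phi[i] * phi[i + 1] < 0:        # sign change: interpolate
            t = phi[i] / (phi[i] - phi[i + 1])
            zeros.append(x[i] + t * (x[i + 1] - x[i]))
    if phi[-1] == 0:
        zeros.append(x[-1])
    sign = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    # magnitude = distance to nearest zero crossing, sign preserved
    return [sign(p) * min(abs(xi - z) for z in zeros)
            for p, xi in zip(phi, x)]

x = [0.0, 0.5, 1.0, 1.5, 2.0]
phi = [-3.0, -1.0, 1.0, 3.0, 5.0]   # interface (zero crossing) at x = 0.75
sd = reinitialize_1d(phi, x)        # gradient magnitude is now 1 everywhere
```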

  17. Quality of Gaussian basis sets: direct optimization of orbital exponents by the method of conjugate gradients

    International Nuclear Information System (INIS)

    Kari, R.E.; Mezey, P.G.; Csizmadia, I.G.

    1975-01-01

    Expressions are given for calculating the energy gradient vector in the exponent space of Gaussian basis sets and a technique to optimize orbital exponents using the method of conjugate gradients is described. The method is tested on the (9s5p) Gaussian basis space and optimum exponents are determined for the carbon atom. The analysis of the results shows that the calculated one-electron properties converge more slowly to their optimum values than the total energy converges to its optimum value. In addition, basis sets approximating the optimum total energy very well can still be markedly improved for the prediction of one-electron properties. For smaller basis sets, this improvement does not warrant the necessary expense.
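The core conjugate-gradient iteration used for such gradient-driven optimizations can be sketched on a quadratic model problem. The actual exponent optimization minimizes the electronic energy over exponent space; here a small symmetric positive definite system stands in for it, since minimizing E(x) = ½xᵀAx − bᵀx with conjugate gradients is the textbook form of the method.

```python
def cg_minimize_quadratic(A, b, x0, tol=1e-10, max_iter=100):
    """Method of conjugate gradients for E(x) = 0.5 x^T A x - b^T x
    (A symmetric positive definite); the gradient is A x - b."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = list(x0)
    r = [bi - ri for bi, ri in zip(b, matvec(A, x))]   # residual = -gradient
    p = list(r)                                        # first search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        if rs < tol:
            break
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)                        # exact line search
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]  # conjugate dir
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg_minimize_quadratic(A, b, [0.0, 0.0])   # minimizer solves A x = b
```

For an n-dimensional quadratic, exact arithmetic converges in at most n iterations, which is what makes the method attractive when each energy/gradient evaluation is expensive.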

  18. A topology optimization method based on the level set method for the design of negative permeability dielectric metamaterials

    DEFF Research Database (Denmark)

    Otomori, Masaki; Yamada, Takayuki; Izui, Kazuhiro

    2012-01-01

    This paper presents a level set-based topology optimization method for the design of negative permeability dielectric metamaterials. Metamaterials are artificial materials that display extraordinary physical properties that are unavailable with natural materials. The aim of the formulated...... optimization problem is to find optimized layouts of a dielectric material that achieve negative permeability. The presence of grayscale areas in the optimized configurations critically affects the performance of metamaterials, positively as well as negatively, but configurations that contain grayscale areas...... are highly impractical from an engineering and manufacturing point of view. Therefore, a topology optimization method that can obtain clear optimized configurations is desirable. Here, a level set-based topology optimization method incorporating a fictitious interface energy is applied to a negative...

  19. Assessing data quality and the variability of source data verification auditing methods in clinical research settings.

    Science.gov (United States)

    Houston, Lauren; Probst, Yasmine; Martin, Allison

    2018-05-18

    Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations and ensure high-quality data. However, clinical trial guidelines are non-specific in regards to recommended frequency, timing and nature of data audits. The absence of a well-defined data quality definition and method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods to monitor data quality in a clinical research setting. The scientific databases MEDLINE, Scopus and Science Direct were searched for English language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using a SDV auditing method. In total 15 publications were included. The nature and extent of SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%) and collection frequency (6-24 months). Methods for coding, classifying and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main source of reported error. Repeated SDV audits using the same dataset demonstrated ∼40% improvement in data accuracy and completeness over time. No description was given in regards to what determines poor data quality in clinical trials. A wide range of SDV auditing methods are reported in the published literature though no uniform SDV auditing method could be determined for "best practice" in clinical trials. Published audit methodology articles are warranted for the development of a standardised SDV auditing method to monitor data quality in clinical research settings. Copyright © 2018. Published by Elsevier Inc.

  20. A perturbation method for dark solitons based on a complete set of the squared Jost solutions

    International Nuclear Information System (INIS)

    Ao Shengmei; Yan Jiaren

    2005-01-01

    A perturbation method for dark solitons is developed, which is based on the construction and the rigorous proof of the complete set of squared Jost solutions. The general procedure solving the adiabatic solution of the perturbed nonlinear Schroedinger equation, the time-evolution equation of dark soliton parameters and a formula for calculating the first-order correction are given. The method can also overcome the difficulties resulting from the non-vanishing boundary condition.
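For orientation, one common normalization of the perturbed defocusing NLS supporting dark solitons is shown below; the paper's exact coefficients and perturbation term may differ, so this is an assumed standard form, not the authors' equation.

```latex
% Perturbed defocusing nonlinear Schroedinger equation (one common
% normalization); dark solitons sit on a non-vanishing background rho:
i\,q_t + \tfrac{1}{2}\,q_{xx} - |q|^2 q = \epsilon\,R[q],
\qquad |q| \to \rho \quad \text{as } |x| \to \infty .
```

The non-vanishing boundary condition is what makes the completeness proof for the squared Jost solutions delicate, and it is the basis in which the perturbation εR[q] is expanded.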

  1. Finite test sets development method for test execution of safety critical software

    International Nuclear Information System (INIS)

    Shin, Sung Min; Kim, Hee Eun; Kang, Hyun Gook; Lee, Sung Jiun

    2014-01-01

    The V and V method has been utilized for safety critical software, while SRGM is difficult to apply because of the lack of failure occurrence data in the development phase. For safety critical software, moreover, failure data cannot be gathered after installation in a real plant, given the severe consequences of failure. Therefore, to complement the V and V method, a test-based method needs to be developed. Some studies on test-based reliability quantification methods for safety critical software have been conducted in the nuclear field. These studies provide useful guidance on generating test sets. An important concept of this guidance is that the test sets represent 'trajectories' (a series of successive values for the input variables of a program that occur during the operation of the software over time) in the space of inputs to the software. Actually, the inputs to the software depend on the state of the plant at that time, and these inputs form a new internal state of the software by changing the values of some variables. In other words, the internal state of the software at a specific time depends on the history of past inputs. Here the internal state of the software that can be changed by past inputs is named the Context of Software (CoS). In a certain CoS, a software failure occurs when a fault is triggered by some inputs. To cover the failure occurrence mechanism of software, preceding research insists that the inputs take trajectory form. However, this approach has two critical problems. One is the length of the trajectory input. An input trajectory should be long enough to cover the failure mechanism, but what length is enough is not clear. What is worse, to cover some accident scenarios, one set of inputs must represent dozens of hours of successive values. The other problem is the number of tests needed. To satisfy a target reliability with a reasonable confidence level, a very large number of test sets is required. Development of this number of test sets is a herculean

  2. A scalable method for identifying frequent subtrees in sets of large phylogenetic trees.

    Science.gov (United States)

    Ramu, Avinash; Kahveci, Tamer; Burleigh, J Gordon

    2012-10-03

    We consider the problem of finding the maximum frequent agreement subtrees (MFASTs) in a collection of phylogenetic trees. Existing methods for this problem often do not scale beyond datasets with around 100 taxa. Our goal is to address this problem for datasets with over a thousand taxa and hundreds of trees. We develop a heuristic solution that aims to find MFASTs in sets of many, large phylogenetic trees. Our method works in multiple phases. In the first phase, it identifies small candidate subtrees from the set of input trees which serve as the seeds of larger subtrees. In the second phase, it combines these small seeds to build larger candidate MFASTs. In the final phase, it performs a post-processing step that ensures that we find a frequent agreement subtree that is not contained in a larger frequent agreement subtree. We demonstrate that this heuristic can easily handle data sets with 1000 taxa, greatly extending the estimation of MFASTs beyond current methods. Although this heuristic does not guarantee to find all MFASTs or the largest MFAST, it found the MFAST in all of our synthetic datasets where we could verify the correctness of the result. It also performed well on large empirical data sets. Its performance is robust to the number and size of the input trees. Overall, this method provides a simple and fast way to identify strongly supported subtrees within large phylogenetic hypotheses.
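The seed-finding phase described above can be caricatured by counting how often each clade (the leaf set below an internal node) recurs across the input trees. This is only a simplified analogue — the real MFAST problem also requires checking agreement/compatibility and extending seeds into maximal subtrees — and the tiny "trees" here are hypothetical clade sets, not real phylogenies.

```python
from collections import Counter

def frequent_clades(trees, min_support):
    """Count clades (frozensets of leaf labels) across trees and keep those
    occurring in at least min_support trees -- a simplified analogue of the
    candidate-seed phase for frequent agreement subtrees."""
    counts = Counter()
    for clades in trees:               # each tree is represented by its clades
        for clade in set(clades):
            counts[clade] += 1
    return {c for c, n in counts.items() if n >= min_support}

# hypothetical trees over taxa A-D, each given by its nontrivial clades
t1 = {frozenset("AB"), frozenset("ABC")}
t2 = {frozenset("AB"), frozenset("ABD")}
t3 = {frozenset("AB"), frozenset("ABC")}
seeds = frequent_clades([t1, t2, t3], min_support=2)
```

Clades passing the support threshold would then serve as seeds to be combined into larger candidate agreement subtrees in the second phase.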

  3. Convex Coverage Set Methods for Multi-Objective Collaborative Decision Making

    NARCIS (Netherlands)

    Roijers, D.M.; Lomuscio, A.; Scerri, P.; Bazzan, A.; Huhns, M.

    2014-01-01

    My research is aimed at finding efficient coordination methods for multi-objective collaborative multi-agent decision theoretic planning. Key to coordinating efficiently in these settings is exploiting loose couplings between agents. We proposed two algorithms for the case in which the agents need

  4. Review of the different methods to derive average spacing from resolved resonance parameters sets

    International Nuclear Information System (INIS)

    Fort, E.; Derrien, H.; Lafond, D.

    1979-12-01

    The average spacing of resonances is an important parameter for statistical model calculations, especially concerning non fissile nuclei. The different methods to derive this average value from resonance parameters sets have been reviewed and analyzed in order to tentatively detect their respective weaknesses and propose recommendations. Possible improvements are suggested

  5. The black-body radiation inversion problem, its instability and a new universal function set method

    International Nuclear Information System (INIS)

    Ye, JiPing; Ji, FengMin; Wen, Tao; Dai, Xian-Xi; Dai, Ji-Xin; Evenson, William E.

    2006-01-01

    The black-body radiation inversion (BRI) problem is ill-posed and requires special techniques to achieve stable solutions. In this Letter, the universal function set method (UFS), is developed in BRI. An improved unique existence theorem is proposed. Asymptotic behavior control (ABC) is introduced. A numerical example shows that practical calculations are possible with UFS

  6. Differentiability properties of the efficient (u,q2)-set in the Markowitz portfolio selection method

    NARCIS (Netherlands)

    Kriens, J.; Strijbosch, L.W.G.; Vörös, J.

    1994-01-01

    The set of efficient (u,q2)-combinations in the (u,q2)-plane of the Markowitz portfolio selection method consists of a series of strictly convex parabolas. In the transition points from one parabola to the next one, the curve may be non-differentiable. The article gives necessary and sufficient

  7. Setting health research priorities using the CHNRI method: VII. A review of the first 50 applications of the CHNRI method.

    Science.gov (United States)

    Rudan, Igor; Yoshida, Sachiyo; Chan, Kit Yee; Sridhar, Devi; Wazny, Kerri; Nair, Harish; Sheikh, Aziz; Tomlinson, Mark; Lawn, Joy E; Bhutta, Zulfiqar A; Bahl, Rajiv; Chopra, Mickey; Campbell, Harry; El Arifeen, Shams; Black, Robert E; Cousens, Simon

    2017-06-01

    Several recent reviews of the methods used to set research priorities have identified the CHNRI method (acronym derived from the "Child Health and Nutrition Research Initiative") as an approach that clearly became popular and widely used over the past decade. In this paper we review the first 50 examples of application of the CHNRI method, published between 2007 and 2016, and summarize the most important messages that emerged from those experiences. We conducted a literature review to identify the first 50 examples of application of the CHNRI method in chronological order. We searched Google Scholar, PubMed and so-called grey literature. Initially, between 2007 and 2011, the CHNRI method was mainly used for setting research priorities to address global child health issues, although the first cases of application outside this field (eg, mental health, disabilities and zoonoses) were also recorded. Since 2012 the CHNRI method was used more widely, expanding into the topics such as adolescent health, dementia, national health policy and education. The majority of the exercises were focused on issues that were only relevant to low- and middle-income countries, and national-level applications are on the rise. The first CHNRI-based articles adhered to the five recommended priority-setting criteria, but by 2016 more than two-thirds of all conducted exercises departed from recommendations, modifying the CHNRI method to suit each particular exercise. This was done not only by changing the number of criteria used, but also by introducing some entirely new criteria (eg, "low cost", "sustainability", "acceptability", "feasibility", "relevance" and others). The popularity of the CHNRI method in setting health research priorities can be attributed to several key conceptual advances that have addressed common concerns. The method is systematic in nature, offering an acceptable framework for handling many research questions. It is also transparent and replicable, because it

  8. Online monitoring of oil film using electrical capacitance tomography and level set method

    International Nuclear Information System (INIS)

    Xue, Q.; Ma, M.; Sun, B. Y.; Cui, Z. Q.; Wang, H. X.

    2015-01-01

    In the application of oil-air lubrication systems, electrical capacitance tomography (ECT) provides a promising way for monitoring oil film in the pipelines by reconstructing cross sectional oil distributions in real time. In the case of a small diameter pipe and thin oil film, however, the thickness of the oil film is hard to observe visually since the interface of oil and air is not obvious in the reconstructed images. And the existence of artifacts in the reconstructions has seriously influenced the effectiveness of image segmentation techniques such as the level set method. Besides, the level set method is also unavailable for online monitoring due to its low computation speed. To address these problems, a modified level set method is developed: a distance regularized level set evolution formulation is extended to image two-phase flow online using an ECT system, a narrowband image filter is defined to eliminate the influence of artifacts, and, considering the continuity of the oil distribution variation, the detected oil-air interface of a former image can be used as the initial contour for the detection of the subsequent frame; thus, the propagation from the initial contour to the boundary can be greatly accelerated, making it possible for real time tracking. To test the feasibility of the proposed method, an oil-air lubrication facility with a 4 mm inner diameter pipe is measured in normal operation using an 8-electrode ECT system. Both simulation and experiment results indicate that the modified level set method is capable of visualizing the oil-air interface accurately online.

  9. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    International Nuclear Information System (INIS)

    Anderson, Ryan B.; Bell, James F.; Wiens, Roger C.; Morris, Richard V.; Clegg, Samuel M.

    2012-01-01

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO2 at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ∼ 3 wt.%. The statistical significance of these improvements was ∼ 85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and specifically
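The grouping step shared by methods (2), (3) and (5) — k-means clustering before fitting a separate regression model per cluster — can be sketched on toy scalar "spectra". The real study clusters full LIBS spectra and fits PLS2 models per cluster; this pure-Python k-means on made-up 1-D features only illustrates the partitioning.

```python
def kmeans(points, k, iters=20):
    """Plain k-means on scalar features -- the grouping step used before
    training a separate per-cluster regression model."""
    # spread initial centers across the sorted data (simple heuristic)
    centers = sorted(points)[::max(1, len(points) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assign to nearest center
            i = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]   # recompute means
                   for i, c in enumerate(clusters)]
    return centers, clusters

# hypothetical 1-D spectral features forming two obvious groups
data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
centers, clusters = kmeans(data, k=2)
```

Each cluster would then get its own PLS2 model trained only on the training spectra assigned to it.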

  10. Setting health research priorities using the CHNRI method: V. Quantitative properties of human collective knowledge.

    Science.gov (United States)

    Rudan, Igor; Yoshida, Sachiyo; Wazny, Kerri; Chan, Kit Yee; Cousens, Simon

    2016-06-01

    The CHNRI method for setting health research priorities has crowdsourcing as the major component. It uses the collective opinion of a group of experts to generate, assess and prioritize between many competing health research ideas. It is difficult to compare the accuracy of human individual and collective opinions in predicting uncertain future outcomes before the outcomes are known. However, this limitation does not apply to existing knowledge, which is an important component underlying opinion. In this paper, we report several experiments to explore the quantitative properties of human collective knowledge and discuss their relevance to the CHNRI method. We conducted a series of experiments in groups of about 160 (range: 122-175) undergraduate Year 2 medical students to compare their collective knowledge to their individual knowledge. We asked them to answer 10 questions on each of the following: (i) an area in which they have a degree of expertise (undergraduate Year 1 medical curriculum); (ii) an area in which they likely have some knowledge (general knowledge); and (iii) an area in which they are not expected to have any knowledge (astronomy). We also presented them with 20 pairs of well-known celebrities and asked them to identify the older person of the pair. In all these experiments our goal was to examine how the collective answer compares to the distribution of students' individual answers. When answering the questions in their own area of expertise, the collective answer (the median) was in the top 20.83% of the most accurate individual responses; in general knowledge, it was in the top 11.93%; and in an area with no expertise, the group answer was in the top 7.02%. However, the collective answer based on mean values fared much worse, ranging from top 75.60% to top 95.91%. Also, when confronted with guessing the older of the two celebrities, the collective response was correct in 18/20 cases (90%), while the 8 most successful individuals among the

  11. Using Machine Learning Methods Jointly to Find Better Set of Rules in Data Mining

    Directory of Open Access Journals (Sweden)

    SUG Hyontai

    2017-01-01

    Full Text Available Rough set-based data mining algorithms are among the widely accepted machine learning technologies because of their strong mathematical background and capability of finding optimal rules based on given data sets only, without room for prejudiced views to be inserted on the data. But, because the algorithms find rules very precisely, we may confront the overfitting problem. On the other hand, association rule algorithms find rules of association, where the association resides between sets of items in a database. The algorithms find itemsets that occur more often than a given minimum support, so that they can find the itemsets in reasonable time even for very large databases when the minimum support is supplied appropriately. In order to overcome the overfitting problem in rough set-based algorithms, we first find large itemsets, and then select attributes that cover the large itemsets. By using the selected attributes only, we may find a better set of rules based on rough set theory. Results from experiments support our suggested method.
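The "large itemset" step can be sketched with brute-force support counting over a tiny transaction database, followed by collecting the attributes that cover the frequent itemsets. This exhaustive enumeration is only viable for small item universes (real systems use Apriori-style pruning), and the database here is hypothetical.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support, max_len=2):
    """Brute-force support counting: return every itemset up to max_len
    items whose support meets min_support."""
    items = sorted({i for t in transactions for i in t})
    result = {}
    for n in range(1, max_len + 1):
        for combo in combinations(items, n):
            support = sum(1 for t in transactions if set(combo) <= t)
            if support >= min_support:
                result[combo] = support
    return result

# hypothetical transaction database over attributes a, b, c
db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
freq = frequent_itemsets(db, min_support=2)
attrs = {i for combo in freq for i in combo}   # attributes covering large itemsets
```

Rough set rule induction would then be run on `attrs` only, trading some precision for less overfitting.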

  12. Benchmarking Methods and Data Sets for Ligand Enrichment Assessment in Virtual Screening

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2014-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. “analogue bias”, “artificial enrichment” and “false negative”. In addition, we introduced our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylase (HDAC) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The Leave-One-Out Cross-Validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased in terms of property matching, ROC curves and AUCs. PMID:25481478
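The ROC AUC used to judge enrichment on benchmarking sets has a simple rank-sum reading: it is the probability that a randomly chosen active scores above a randomly chosen decoy. A minimal computation (with made-up scores, not HDAC data):

```python
def roc_auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a random active (label 1) outranks a random decoy (label 0),
    counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = roc_auc([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0])   # perfect separation
```

An unbiased benchmarking set should yield AUC near 0.5 for a null (random) scoring method; "artificial enrichment" shows up as inflated AUCs even for trivial property-based scores.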

  13. Benchmarking methods and data sets for ligand enrichment assessment in virtual screening.

    Science.gov (United States)

    Xia, Jie; Tilahun, Ermias Lemma; Reid, Terry-Elinor; Zhang, Liangren; Wang, Xiang Simon

    2015-01-01

    Retrospective small-scale virtual screening (VS) based on benchmarking data sets has been widely used to estimate ligand enrichments of VS approaches in the prospective (i.e. real-world) efforts. However, the intrinsic differences of benchmarking sets to the real screening chemical libraries can cause biased assessment. Herein, we summarize the history of benchmarking methods as well as data sets and highlight three main types of biases found in benchmarking sets, i.e. "analogue bias", "artificial enrichment" and "false negative". In addition, we introduce our recent algorithm to build maximum-unbiased benchmarking sets applicable to both ligand-based and structure-based VS approaches, and its implementations to three important human histone deacetylases (HDACs) isoforms, i.e. HDAC1, HDAC6 and HDAC8. The leave-one-out cross-validation (LOO CV) demonstrates that the benchmarking sets built by our algorithm are maximum-unbiased as measured by property matching, ROC curves and AUCs. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. A multilevel, level-set method for optimizing eigenvalues in shape design problems

    International Nuclear Information System (INIS)

    Haber, E.

    2004-01-01

    In this paper, we consider optimal design problems that involve shape optimization. The goal is to determine the shape of a certain structure such that it is either as rigid or as soft as possible. To achieve this goal we combine two new ideas for an efficient solution of the problem. First, we replace the eigenvalue problem with an approximation by using inverse iteration. Second, we use a level set method but rather than propagating the front we use constrained optimization methods combined with multilevel continuation techniques. Combining these two ideas we obtain a robust and rapid method for the solution of the optimal design problem

  15. Microwave imaging of dielectric cylinder using level set method and conjugate gradient algorithm

    International Nuclear Information System (INIS)

    Grayaa, K.; Bouzidi, A.; Aguili, T.

    2011-01-01

    In this paper, we propose a computational method for microwave imaging of a dielectric cylinder, based on combining the level set technique and the conjugate gradient algorithm. By measuring the scattered field, we try to retrieve the shape, localisation and permittivity of the object. The forward problem is solved by the moment method, while the inverse problem is reformulated as an optimization problem and solved by the proposed scheme. It is found that the proposed method is able to give good reconstruction quality in terms of the reconstructed shape and permittivity.

  16. Systems-based biological concordance and predictive reproducibility of gene set discovery methods in cardiovascular disease.

    Science.gov (United States)

    Azuaje, Francisco; Zheng, Huiru; Camargo, Anyela; Wang, Haiying

    2011-08-01

    The discovery of novel disease biomarkers is a crucial challenge for translational bioinformatics. Demonstration of both their classification power and reproducibility across independent datasets are essential requirements to assess their potential clinical relevance. Small datasets and multiplicity of putative biomarker sets may explain lack of predictive reproducibility. Studies based on pathway-driven discovery approaches have suggested that, despite such discrepancies, the resulting putative biomarkers tend to be implicated in common biological processes. Investigations of this problem have been mainly focused on datasets derived from cancer research. We investigated the predictive and functional concordance of five methods for discovering putative biomarkers in four independently-generated datasets from the cardiovascular disease domain. A diversity of biosignatures was identified by the different methods. However, we found strong biological process concordance between them, especially in the case of methods based on gene set analysis. With a few exceptions, we observed lack of classification reproducibility using independent datasets. Partial overlaps between our putative sets of biomarkers and the primary studies exist. Despite the observed limitations, pathway-driven or gene set analysis can predict potentially novel biomarkers and can jointly point to biomedically-relevant underlying molecular mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Setting Priorities in Global Child Health Research Investments: Guidelines for Implementation of the CHNRI Method

    Science.gov (United States)

    Rudan, Igor; Gibson, Jennifer L.; Ameratunga, Shanthi; El Arifeen, Shams; Bhutta, Zulfiqar A.; Black, Maureen; Black, Robert E.; Brown, Kenneth H.; Campbell, Harry; Carneiro, Ilona; Chan, Kit Yee; Chandramohan, Daniel; Chopra, Mickey; Cousens, Simon; Darmstadt, Gary L.; Gardner, Julie Meeks; Hess, Sonja Y.; Hyder, Adnan A.; Kapiriri, Lydia; Kosek, Margaret; Lanata, Claudio F.; Lansang, Mary Ann; Lawn, Joy; Tomlinson, Mark; Tsai, Alexander C.; Webster, Jayne

    2008-01-01

    This article provides detailed guidelines for the implementation of a systematic method for setting priorities in health research investments that was recently developed by the Child Health and Nutrition Research Initiative (CHNRI). The target audience for the proposed method comprises international agencies, large research funding donors, and national governments and policy-makers. The process has the following steps: (i) selecting the managers of the process; (ii) specifying the context and risk management preferences; (iii) discussing criteria for setting health research priorities; (iv) choosing a limited set of the most useful and important criteria; (v) developing means to assess the likelihood that proposed health research options will satisfy the selected criteria; (vi) systematic listing of a large number of proposed health research options; (vii) pre-scoring check of all competing health research options; (viii) scoring of health research options using the chosen set of criteria; (ix) calculating intermediate scores for each health research option; (x) obtaining further input from the stakeholders; (xi) adjusting intermediate scores taking into account the values of stakeholders; (xii) calculating overall priority scores and assigning ranks; (xiii) performing an analysis of agreement between the scorers; (xiv) linking computed research priority scores with investment decisions; (xv) feedback and revision. The CHNRI method is a flexible process that enables the prioritization of health research investments at any level: institutional, regional, national, international, or global. PMID:19090596
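
    The scoring arithmetic in steps (viii)-(xii) can be sketched in a few lines. This is an illustrative reduction, not the official CHNRI tool: expert answers per criterion (1 = yes, 0.5 = undecided, 0 = no) are averaged into intermediate scores, then combined under hypothetical stakeholder weights into an overall priority score.

```python
# Hypothetical sketch of the CHNRI scoring arithmetic (steps viii-xii).
# Experts answer each criterion with 1 (yes), 0.5 (undecided) or 0 (no);
# intermediate scores are per-criterion averages, and stakeholder weights
# adjust the criteria before the overall priority score is computed.

from statistics import mean

def chnri_priority(expert_answers, criterion_weights):
    """expert_answers: {criterion: [answers in {0, 0.5, 1}]},
    criterion_weights: {criterion: stakeholder weight}."""
    intermediate = {c: mean(a) for c, a in expert_answers.items()}  # step ix
    total_w = sum(criterion_weights.values())
    # steps xi-xii: weight-adjusted overall priority score
    return sum(intermediate[c] * criterion_weights[c] for c in intermediate) / total_w

answers = {
    "answerability":  [1, 1, 0.5, 1],
    "effectiveness":  [0.5, 0.5, 1, 0],
    "deliverability": [1, 0.5, 0.5, 1],
}
weights = {"answerability": 1.0, "effectiveness": 1.5, "deliverability": 0.5}
print(round(chnri_priority(answers, weights), 3))  # → 0.667
```

    Research options would then be ranked by this score (step xii); the criterion names and weights above are invented for illustration.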

  18. The Interval-Valued Triangular Fuzzy Soft Set and Its Method of Dynamic Decision Making

    OpenAIRE

    Xiaoguo Chen; Hong Du; Yue Yang

    2014-01-01

    A concept of interval-valued triangular fuzzy soft set is presented, and some operations of “AND,” “OR,” intersection, union and complement, and so forth are defined. Then some relative properties are discussed and several conclusions are drawn. A dynamic decision making model is built based on the definition of interval-valued triangular fuzzy soft set, in which period weight is determined by the exponential decay method. The arithmetic weighted average operator of interval-valued triangular...

  19. A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets

    Directory of Open Access Journals (Sweden)

    Vilius Matiukas

    2011-08-01

    Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D-laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from their unorganized point sets is common to many diverse areas, including computer graphics, computer vision, computational geometry and reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces. The article evaluates and contrasts the three alternatives. Article in English.

  20. Plan-for-Gov[IT] - Planning for Governance of IT Method: use of the Techniques of "Text Retrieval" for mapping the expected support needs from IT Area to serve of the Corporation's Core-Business expectations

    Directory of Open Access Journals (Sweden)

    Altino José Mentzingen De Moraes

    2013-04-01

    Full Text Available In accordance with the philosophy of IT Governance (and as defined by the authors listed below), IT - Information Technology - must be integrated into its Corporation's Strategic Planning process, so that its actions align with the Core-Business and achieve the results expected of the IT Area. The question is how IT can, in a methodical and direct way, interpret the expectations expressed in the Strategic Planning (a component of Corporate Governance) as practical actions addressed to its Area, supported by an adequate tool kit of Frameworks (Models) focused on the implementation of IT Governance, with the subsequent creation of the Effectiveness Indicators needed to monitor how successfully the actions of IT remain aligned with the Business. The result of this work is the proposal of Text Retrieval techniques, and their subsequent validation as a plausible resource for actual use, to help the Governance of IT in its primary task of assisting the Corporation's Core-Business. The proposed method, named Plan-for-Gov[IT] - Planning for Governance of IT Method, can be automated by using the "word finding" features of word processors or of other software products with the same purpose.

  1. Community expectations

    International Nuclear Information System (INIS)

    Kraemer, L.

    2004-01-01

    Historically, the relationship between the nuclear generator and the local community has been one of stability and co-operation. However, in more recent times (2000-2003) the nuclear landscape has seen several major issues that directly affect the local nuclear host communities. - The association's mandate is to be supportive of the nuclear industry through ongoing dialogue, mutual cooperation and education, - To strengthen community representation with the nuclear industry and politically through networking with other nuclear host communities. As a result of these issues, the Mayors of a number of communities started having informal meetings to discuss the issues at hand and how they affect their constituents. These meetings led to the official formation of the CANHC with representation from: In Canada it is almost impossible to discuss decommissioning and dismantling of Nuclear Facilities without also discussing Nuclear Waste disposal, for reasons that I will soon make clear. I would also like to briefly touch on how and why the expectations of communities may differ by geography and circumstance. (author)

  2. Evaluation of three methods for hemoglobin measurement in a blood donor setting

    Directory of Open Access Journals (Sweden)

    Jacob Rosenblit

    1999-05-01

    Full Text Available CONTEXT: The hemoglobin (Hb) level is the most-used parameter for screening blood donors for the presence of anemia. One of the most-used methods for measuring Hb levels is based on photometric detection of cyanmethemoglobin; as an alternative to this technology, HemoCue has developed a photometric method based on the determination of azide methemoglobin. OBJECTIVE: To evaluate the performance of three methods for hemoglobin (Hb) determination in a blood bank setting. DESIGN: Prospective study utilizing blood samples to compare methods for Hb determination. SETTING: Hemotherapy Service of the Hospital Israelita Albert Einstein, a private institution in the tertiary health care system. SAMPLE: Serial blood samples were collected from 259 individuals during the period from March to June 1996. MAIN MEASUREMENTS: Test performances and their comparisons were assessed by the analysis of coefficients of variation (CV), linear regression and mean differences. RESULTS: The CVs for the three methods were: Coulter 0.68%, Cobas 0.82% and HemoCue 0.69%. There was no difference between the mean Hb determinations for the three methods (p>0.05). The Coulter and Cobas methods showed the best agreement, and the HemoCue method gave a lower Hb determination when compared to both the Coulter and Cobas methods. However, pairs of methods involving the HemoCue seem to have narrower limits of agreement (± 0.78 and ± 1.02) than the Coulter and Cobas combination (± 1.13). CONCLUSION: The three methods provide good agreement for hemoglobin determination.
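
    The statistics this study relies on - coefficient of variation, mean differences, and limits of agreement between method pairs - are straightforward to compute. The sketch below uses invented Hb values, not the study's 259 samples.

```python
# Illustrative comparison statistics for two Hb measurement methods:
# coefficient of variation (CV), mean difference, and Bland-Altman-style
# limits of agreement (mean difference ± 1.96 SD of the differences).
# All data below are hypothetical, not the study's measurements.

from statistics import mean, stdev

def cv_percent(values):
    return 100.0 * stdev(values) / mean(values)

def limits_of_agreement(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    m, s = mean(diffs), stdev(diffs)
    return m - 1.96 * s, m + 1.96 * s

coulter = [13.1, 14.2, 12.8, 15.0, 13.6]   # hypothetical Hb values, g/dL
hemocue = [12.9, 14.0, 12.9, 14.6, 13.4]

lo, hi = limits_of_agreement(coulter, hemocue)
print(f"CV (Coulter): {cv_percent(coulter):.2f}%")
print(f"limits of agreement: [{lo:.2f}, {hi:.2f}]")
```

    Narrower limits of agreement, as reported for the HemoCue pairs, mean the two methods' readings differ less from sample to sample.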

  3. A level set method for cupping artifact correction in cone-beam CT

    International Nuclear Information System (INIS)

    Xie, Shipeng; Li, Haibo; Ge, Qi; Li, Chunming

    2015-01-01

    Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts

  4. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods, which rely on the unit normal vector, Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the direction normal to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches due to the need for a finer resolution in the vicinity of the interface in comparison with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt on steep gradients in the solution while retaining a predetermined order of accuracy.

  5. Setting health research priorities using the CHNRI method: III. Involving stakeholders

    Directory of Open Access Journals (Sweden)

    Sachiyo Yoshida

    2016-06-01

    Full Text Available Setting health research priorities is a complex and value-driven process. The introduction of the Child Health and Nutrition Research Initiative (CHNRI) method has made the process of setting research priorities more transparent and inclusive, but much of the process remains in the hands of funders and researchers, as described in the previous two papers in this series. However, the value systems of numerous other important stakeholders, particularly those on the receiving end of health research products, are very rarely addressed in any process of priority setting. Inclusion of a larger and more diverse group of stakeholders in the process would result in a better reflection of the system of values of the broader community, resulting in recommendations that are more legitimate and acceptable.

  6. A mass conserving level set method for detailed numerical simulation of liquid atomization

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Kun; Shao, Changxiao [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China); Yang, Yue [State Key Laboratory of Turbulence and Complex Systems, Peking University, Beijing 100871 (China); Fan, Jianren, E-mail: fanjr@zju.edu.cn [State Key Laboratory of Clean Energy Utilization, Zhejiang University, Hangzhou 310027 (China)

    2015-10-01

    An improved mass conserving level set method for detailed numerical simulations of liquid atomization is developed to address the issue of mass loss in the existing level set method. This method introduces a mass remedy procedure based on the local curvature at the interface, and in principle, can ensure the absolute mass conservation of the liquid phase in the computational domain. Three benchmark cases, including Zalesak's disk, a drop deforming in a vortex field, and the binary drop head-on collision, are simulated to validate the present method, and the excellent agreement with exact solutions or experimental results is achieved. It is shown that the present method is able to capture the complex interface with second-order accuracy and negligible additional computational cost. The present method is then applied to study more complex flows, such as a drop impacting on a liquid film and the swirling liquid sheet atomization, which again, demonstrates the advantages of mass conservation and the capability to represent the interface accurately.

  7. Formal Analysis of SET and NSL Protocols Using the Interpretation Functions-Based Method

    Directory of Open Access Journals (Sweden)

    Hanane Houmani

    2012-01-01

    Full Text Available Most applications on the Internet, such as e-banking and e-commerce, use the SET and the NSL protocols to protect the communication channel between the client and the server. It is therefore crucial to ensure that these protocols respect some security properties such as confidentiality, authentication, and integrity. In this paper, we analyze the SET and the NSL protocols with respect to the confidentiality (secrecy) property. To perform this analysis, we use the interpretation functions-based method. The main idea behind the interpretation functions-based technique is to give sufficient conditions that allow one to guarantee that a cryptographic protocol respects the secrecy property. The flexibility of the proposed conditions allows the verification of daily-life protocols such as SET and NSL. Also, this method can be used under different assumptions, such as a variety of intruder abilities, including algebraic properties of cryptographic primitives. The NSL protocol, for instance, is analyzed with and without the homomorphism property. We also show, using the SET protocol, the usefulness of this approach for correcting weaknesses and problems discovered during the analysis.

  8. Two Surface-Tension Formulations For The Level Set Interface-Tracking Method

    International Nuclear Information System (INIS)

    Shepel, S.V.; Smith, B.L.

    2005-01-01

    The paper describes a comparative study of two surface-tension models for the Level Set interface tracking method. In both models, the surface tension is represented as a body force, concentrated near the interface, but the technical implementation of the two options is different. The first is based on a traditional Level Set approach, in which the surface tension is distributed over a narrow band around the interface using a smoothed Delta function. In the second model, which is based on the integral form of the fluid-flow equations, the force is imposed only in those computational cells through which the interface passes. Both models have been incorporated into the Finite-Element/Finite-Volume Level Set method, previously implemented into the commercial Computational Fluid Dynamics (CFD) code CFX-4. A critical evaluation of the two models, undertaken in the context of four standard Level Set benchmark problems, shows that the first model, based on the smoothed Delta function approach, is the more general, and more robust, of the two. (author)

  9. Variational Level Set Method for Two-Stage Image Segmentation Based on Morphological Gradients

    Directory of Open Access Journals (Sweden)

    Zemin Ren

    2014-01-01

    Full Text Available We use the variational level set method and transition region extraction techniques to achieve the image segmentation task. The proposed scheme consists of two steps. We first develop a novel algorithm to extract the transition region based on the morphological gradient. After this, we integrate the transition region into a variational level set framework and develop a novel geometric active contour model, which includes an external energy based on the transition region and a fractional order edge indicator function. The external energy is used to drive the zero level set toward the desired image features, such as object boundaries. Due to this external energy, the proposed model allows for more flexible initialization. The fractional order edge indicator function is incorporated into the length regularization term to diminish the influence of noise. Moreover, an internal energy is added into the proposed model to penalize the deviation of the level set function from a signed distance function. The resulting evolution of the level set function is the gradient flow that minimizes the overall energy functional. The proposed model has been applied to both synthetic and real images with promising results.

  10. Fault Diagnosis Method of Polymerization Kettle Equipment Based on Rough Sets and BP Neural Network

    Directory of Open Access Journals (Sweden)

    Shu-zhi Gao

    2013-01-01

    Full Text Available The polyvinyl chloride (PVC) polymerization production process is a typical complex controlled object, with features such as nonlinearity, multiple variables, strong coupling, and large time delays. Aiming at the real-time fault diagnosis and optimized monitoring requirements of the large-scale key polymerization equipment of the PVC production process, a real-time fault diagnosis strategy is proposed based on rough sets theory with an improved discernibility matrix and BP neural networks. The improved discernibility matrix is adopted for attribute reduction of the rough sets in order to decrease the input dimensionality of the fault characteristics effectively. A Levenberg-Marquardt BP neural network is trained to diagnose the polymerization faults according to the reduced decision table, which realizes the nonlinear mapping from the fault symptom set to the polymerization fault set. Simulation experiments are carried out with industrial historical data to show the effectiveness of the proposed rough set neural network fault diagnosis method. The proposed strategy greatly increases the accuracy rate and efficiency of the polymerization fault diagnosis system.
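
    The discernibility-matrix reduction described here can be illustrated on a toy decision table. The greedy set cover below stands in for the paper's improved reduction algorithm, and all data are invented.

```python
# Minimal sketch (toy data, not the paper's algorithm) of rough-set
# attribute reduction with a discernibility matrix: for each pair of
# objects with different decisions, record which condition attributes
# distinguish them; a reduct must hit every such entry. A greedy cover
# approximates the minimal reduct here.

from itertools import combinations

def discernibility_matrix(table, decision):
    entries = []
    for (r1, d1), (r2, d2) in combinations(zip(table, decision), 2):
        if d1 != d2:
            entries.append({i for i, (a, b) in enumerate(zip(r1, r2)) if a != b})
    return [e for e in entries if e]

def greedy_reduct(entries):
    reduct, uncovered = set(), list(entries)
    while uncovered:
        # pick the attribute hitting the most remaining entries
        best = max(range(max(map(max, uncovered)) + 1),
                   key=lambda i: sum(i in e for e in uncovered))
        reduct.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return reduct

# toy fault-symptom table: rows = samples, columns = condition attributes
symptoms = [[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1]]
faults   = ["ok", "fault", "ok", "fault"]
entries = discernibility_matrix(symptoms, faults)
print(sorted(greedy_reduct(entries)))  # → [1]: attribute 1 alone suffices
```

    The reduced attribute set (here a single column) would then form the input layer of the BP network, which is the dimensionality reduction the abstract describes.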

  11. Phase-of-flight method for setting the accelerating fields in the ion linear accelerator

    International Nuclear Information System (INIS)

    Dvortsov, S.V.; Lomize, L.G.

    1983-01-01

    For setting the amplitudes and phases of the accelerating fields in multiresonator ion accelerators, the Δt procedure is presently used. The determination and setting of the two unknown RF-field parameters (amplitude and phase) in the n-th resonator is made according to two increments of the particle time-of-flight, measured experimentally: the change Δt1 of the particle time-of-flight in the n-th resonator when the field in that resonator is switched on, and the change Δt2 of the time-of-flight in the (n+1)-th resonator, without RF-field, when the accelerating field in the n-th resonator is switched on. When approaching the accelerator exit, the particle energy increases, the relative energy increment decreases, and the accuracy of the setting decreases. To enhance the accuracy of accelerating field setting in a linear ion accelerator, a phase-of-flight method is developed, in which only the measured time-of-flight increment Δt in one resonator (the one in which the change of amplitude and phase is performed) is used for setting the accelerating fields. Results of a simulation of point bunch motion in the IYaI AN USSR linear accelerator are presented.

  12. A level-set method for two-phase flows with soluble surfactant

    Science.gov (United States)

    Xu, Jian-Jun; Shi, Weidong; Lai, Ming-Chih

    2018-01-01

    A level-set method is presented for solving two-phase flows with soluble surfactant. The Navier-Stokes equations are solved along with the bulk surfactant and the interfacial surfactant equations. In particular, the convection-diffusion equation for the bulk surfactant on the irregular moving domain is solved by using a level-set based diffusive-domain method. A conservation law for the total surfactant mass is derived, and a re-scaling procedure for the surfactant concentrations is proposed to compensate for the surfactant mass loss due to numerical diffusion. The whole numerical algorithm is easy for implementation. Several numerical simulations in 2D and 3D show the effects of surfactant solubility on drop dynamics under shear flow.
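
    The mass re-scaling procedure mentioned above admits a one-line sketch: concentrations are multiplied by the ratio of the conserved total mass to the current discrete mass. This is a schematic reading of the idea, not the authors' implementation.

```python
# Hedged sketch of surfactant mass re-scaling: after a transport step,
# numerical diffusion changes the total discrete mass, so concentrations
# are scaled by (target mass / current mass) to restore conservation.
# Values below are invented for illustration.

def rescale_mass(conc, cell_volumes, target_mass):
    current = sum(c * v for c, v in zip(conc, cell_volumes))
    return [c * (target_mass / current) for c in conc]

conc = [0.9, 1.0, 0.95, 1.05]   # concentrations after a transport step
vol = [0.25] * 4                # uniform cell volumes
m0 = 1.0                        # total surfactant mass at t = 0
fixed = rescale_mass(conc, vol, m0)
print(sum(c * v for c, v in zip(fixed, vol)))  # total mass restored to m0
```

    A uniform multiplicative correction like this preserves the spatial distribution of the concentration field while repairing the global mass budget.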

  13. New Multi-Criteria Group Decision-Making Method Based on Vague Set Theory

    OpenAIRE

    Kuo-Sui Lin

    2016-01-01

    In light of the deficiencies and limitations of existing score functions, Lin has proposed a more effective and reasonable new score function for measuring vague values. Using Lin’s score function and a new weighted aggregation score function, an algorithm for a multi-criteria group decision-making method is proposed to solve vague set based group decision-making problems under vague environments. Finally, a numerical example is presented to show the effectiveness of the proposed multi-...

  14. Set up of a method for the adjustment of resonance parameters on integral experiments

    International Nuclear Information System (INIS)

    Blaise, P.

    1996-01-01

    Resonance parameters for actinides play a significant role in the neutronic characteristics of all reactor types. All the major integral parameters strongly depend on the nuclear data of the isotopes in the resonance-energy regions.The author sets up a method for the adjustment of resonance parameters taking into account the self-shielding effects and restricting the cross section deconvolution problem to a limited energy region. (N.T.)

  15. Group supervision in a private setting: Practice and method for theory and practice in psychotherapy

    Directory of Open Access Journals (Sweden)

    Graziana Mangiacavallo

    2015-05-01

    Full Text Available This report aims to describe the experience of a supervision group in a private setting. The group consists of professional psychotherapists led by the most experienced practitioner, who shares clinical reasoning on psychotherapy with younger colleagues. The report presents the supervision group as a method and showcases its features. The supervision group becomes a container of professional experiences that speak of the new way of doing psychotherapy.

  16. Comparing simple root phenotyping methods on a core set of rice genotypes.

    Science.gov (United States)

    Shrestha, R; Al-Shugeairy, Z; Al-Ogaidi, F; Munasinghe, M; Radermacher, M; Vandenhirtz, J; Price, A H

    2014-05-01

    Interest in belowground plant growth is increasing, especially in relation to arguments that shallow-rooted cultivars are efficient at exploiting soil phosphorus while deep-rooted ones will access water at depth. However, methods for assessing roots in large numbers of plants are diverse and direct comparisons of methods are rare. Three methods for measuring root growth traits were evaluated for utility in discriminating rice cultivars: soil-filled rhizotrons, hydroponics and soil-filled pots whose bottom was sealed with a non-woven fabric (a potential method for assessing root penetration ability). A set of 38 rice genotypes including the OryzaSNP set of 20 cultivars, additional parents of mapping populations and products of marker-assisted selection for root QTLs were assessed. A novel method of image analysis for assessing rooting angles from rhizotron photographs was employed. The non-woven fabric was the easiest yet least discriminatory method, while the rhizotron was highly discriminatory and allowed the most traits to be measured but required more than three times the labour of the other methods. The hydroponics was both easy and discriminatory, allowed temporal measurements, but is most likely to suffer from artefacts. Image analysis of rhizotrons compared favourably to manual methods for discriminating between cultivars. Previous observations that cultivars from the indica subpopulation have shallower rooting angles than aus or japonica cultivars were confirmed in the rhizotrons, and indica and temperate japonicas had lower maximum root lengths in rhizotrons and hydroponics. It is concluded that rhizotrons are the preferred method for root screening, particularly since root angles can be assessed. © 2013 German Botanical Society and The Royal Botanical Society of the Netherlands.

  17. Beyond the hype: deep neural networks outperform established methods using a ChEMBL bioactivity benchmark set.

    Science.gov (United States)

    Lenselink, Eelke B; Ten Dijke, Niels; Bongers, Brandon; Papadatos, George; van Vlijmen, Herman W T; Kowalczyk, Wojtek; IJzerman, Adriaan P; van Westen, Gerard J P

    2017-08-14

    The increase of publicly available bioactivity data in recent years has fueled and catalyzed research in chemogenomics, data mining, and modeling approaches. As a direct result, over the past few years a multitude of different methods have been reported and evaluated, such as target fishing, nearest neighbor similarity-based methods, and Quantitative Structure Activity Relationship (QSAR)-based protocols. However, such studies are typically conducted on different datasets, using different validation strategies, and different metrics. In this study, different methods were compared using one single standardized dataset obtained from ChEMBL, which is made available to the public, using standardized metrics (BEDROC and Matthews Correlation Coefficient). Specifically, the performance of Naïve Bayes, Random Forests, Support Vector Machines, Logistic Regression, and Deep Neural Networks was assessed using QSAR and proteochemometric (PCM) methods. All methods were validated using both a random split validation and a temporal validation, with the latter being a more realistic benchmark of expected prospective execution. Deep Neural Networks are the top-performing classifiers, highlighting the added value of Deep Neural Networks over other more conventional methods. Moreover, the best method ('DNN_PCM') performed significantly better, at almost one standard deviation above the mean performance. Furthermore, multi-task and PCM implementations were shown to improve performance over single-task Deep Neural Networks. Conversely, target prediction performed almost two standard deviations below the mean performance. Random Forests, Support Vector Machines, and Logistic Regression performed around the mean performance. Finally, using an ensemble of DNNs, alongside additional tuning, enhanced the relative performance by another 27% (compared with the unoptimized 'DNN_PCM'). Here, a standardized set to test and evaluate different machine learning algorithms in the context of multi
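
    One of the standardized metrics named above, the Matthews Correlation Coefficient, is easy to reproduce. The sketch below uses toy labels rather than the ChEMBL benchmark.

```python
# Self-contained sketch of the Matthews Correlation Coefficient (MCC)
# for binary classification, computed from the confusion-matrix counts.
# The labels below are invented, not from the ChEMBL benchmark.

from math import sqrt

def mcc(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
print(mcc(y_true, y_pred))  # → 0.5
```

    MCC ranges from -1 to +1 and, unlike raw accuracy, stays informative on the imbalanced active/inactive splits typical of bioactivity data, which is presumably why the study adopted it.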

  18. Approaches, tools and methods used for setting priorities in health research in the 21st century

    Science.gov (United States)

    Yoshida, Sachiyo

    2016-01-01

    Background Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. Methods To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001–2014. Results A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), the James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method (priorities were set. A further 19% used a combination of expert panel interview and focus group discussion (“consultation process”) but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. Conclusion The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well-defined structure – such as the CHNRI method, the James Lind Alliance method and the Combined Approach Matrix – it is likely that the Delphi method and non-replicable consultation processes will gradually be

  19. Angular quadrature sets for the streaming ray method in x-y geometry

    International Nuclear Information System (INIS)

    England, R.; Filippone, W.L.

    1983-01-01

    Streaming ray (SR) computations normally employ a set of specially selected ray directions. For x-y geometry, these directions are not uniformly spaced in the azimuthal angle, nor do they conform to any of the standard quadrature sets in current use. For simplicity, all previous SR computations used uniform angular weights. This note investigates two methods--a bisection scheme and a Fourier scheme--for selecting more appropriate azimuthal angular weights. In the bisection scheme, the azimuthal weight assigned to an SR direction is half the angular spread (in the x-y plane) between its two adjacent ray directions. In the Fourier method, the weights are chosen such that the number of terms in a Fourier series exactly integrable on the interval (0, 2π) is maximized. Several sample calculations have been performed. While both the Fourier and bisection weights showed significant advantage over the uniform weights used previously, the Fourier scheme appears to be the best method. Lists of bisection and Fourier weights are given for quadrature sets containing 4, 8, 12, ..., 60 azimuthal SR directions
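
    The bisection scheme has a direct implementation: each direction's azimuthal weight is half the spread between its two neighbours, with periodic wrap-around. The directions below are illustrative, not an actual SR quadrature set.

```python
# Sketch of the bisection weighting rule: each streaming-ray direction
# receives half the azimuthal spread between its two neighbours, with
# periodic wrap-around at 2π (angles assumed sorted on [0, 2π);
# the directions below are illustrative, not a real SR set).

from math import pi, isclose

def bisection_weights(angles):
    n = len(angles)
    w = []
    for i in range(n):
        prev_a = angles[i - 1] - (2 * pi if i == 0 else 0)
        next_a = angles[(i + 1) % n] + (2 * pi if i == n - 1 else 0)
        w.append((next_a - prev_a) / 2)
    return w

angles = [0.0, 0.5, 1.5, 3.0, 4.5, 5.5]   # nonuniform azimuthal directions
w = bisection_weights(angles)
print(isclose(sum(w), 2 * pi))            # weights partition the full circle
```

    By construction the weights sum to 2π regardless of how unevenly the directions are spaced, which is the property uniform weights lack for nonuniform SR direction sets.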

  20. GSMA: Gene Set Matrix Analysis, An Automated Method for Rapid Hypothesis Testing of Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Chris Cheadle

    2007-01-01

    Full Text Available Background: Microarray technology has become highly valuable for identifying complex global changes in gene expression patterns. The assignment of functional information to these complex patterns remains a challenging task in effectively interpreting data and correlating results across experiments, projects and laboratories. Methods which allow the rapid and robust evaluation of multiple functional hypotheses increase the power of individual researchers to data mine gene expression data more efficiently. Results: We have developed gene set matrix analysis (GSMA) as a useful method for the rapid testing of group-wise up- or downregulation of gene expression simultaneously for multiple lists of genes (gene sets) against entire distributions of gene expression changes (datasets) for single or multiple experiments. The utility of GSMA lies in its flexibility to rapidly poll gene sets related by known biological function, or as designated solely by the end-user, against large numbers of datasets simultaneously. Conclusions: GSMA provides a simple and straightforward method for hypothesis testing in which genes are tested by groups across multiple datasets for patterns of expression enrichment.
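
    A group-wise test of this kind can be approximated by asking whether a gene set's expression changes are shifted relative to the whole dataset. The z-score sketch below is a simplification (GSMA's exact statistic may differ) with invented fold changes.

```python
# Simplified sketch of a group-wise enrichment test: compare a gene set's
# mean expression change against the whole dataset via a z-score of the
# set mean. Toy values; this is not GSMA's exact statistic.

from statistics import mean, stdev

def set_enrichment_z(dataset_changes, gene_set):
    """dataset_changes: {gene: log2 fold change}; gene_set: list of genes."""
    all_vals = list(dataset_changes.values())
    mu, sigma = mean(all_vals), stdev(all_vals)
    set_vals = [dataset_changes[g] for g in gene_set if g in dataset_changes]
    return (mean(set_vals) - mu) / (sigma / len(set_vals) ** 0.5)

changes = {"g1": 0.1, "g2": -0.2, "g3": 2.1, "g4": 1.8, "g5": 0.0, "g6": -0.1}
z = set_enrichment_z(changes, ["g3", "g4"])   # candidate up-regulated set
print(z > 1.5)                                 # set mean is shifted upward
```

    Polling many gene sets against many datasets, as GSMA does, amounts to evaluating such a statistic over every (gene set, dataset) pair of the matrix.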

  1. Deriving the probability of a linear opinion pooling method being superior to a set of alternatives

    International Nuclear Information System (INIS)

    Bolger, Donnacha; Houlding, Brett

    2017-01-01

    Linear opinion pools are a common method for combining a set of distinct opinions into a single succinct opinion, often to be used in a decision making task. In this paper we consider a method, termed the Plug-in approach, for determining the weights to be assigned in this linear pool, in a manner that can be deemed as rational in some sense, while incorporating multiple forms of learning over time into its process. The environment that we consider is one in which every source in the pool is herself a decision maker (DM), in contrast to the more common setting in which expert judgments are amalgamated for use by a single DM. We discuss a simulation study that was conducted to show the merits of our technique, and demonstrate how theoretical probabilistic arguments can be used to exactly quantify the probability of this technique being superior (in terms of a probability density metric) to a set of alternatives. Illustrations are given of simulated proportions converging to these true probabilities in a range of commonly used distributional cases. - Highlights: • A novel context for combination of expert opinion is provided. • A dynamic reliability assessment method is stated, justified by properties and a data study. • The theoretical grounding underlying the data-driven justification is explored. • We conclude with areas for expansion and further relevant research.
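
    A linear opinion pool itself is just a weighted average of the sources' probability distributions; the Plug-in weighting scheme studied in the paper is not reproduced here. A minimal sketch with hypothetical opinions:

```python
def linear_opinion_pool(opinions, weights):
    """Combine discrete probability distributions (dicts over the same
    outcomes) into a single distribution via a weighted average."""
    total = sum(weights)
    pooled = {}
    for opinion, w in zip(opinions, weights):
        for outcome, p in opinion.items():
            pooled[outcome] = pooled.get(outcome, 0.0) + (w / total) * p
    return pooled

# Two sources with different opinions; the weights here are arbitrary
# stand-ins for whatever a scheme like the Plug-in approach would assign.
p1 = {"success": 0.8, "failure": 0.2}
p2 = {"success": 0.4, "failure": 0.6}
print(linear_opinion_pool([p1, p2], [0.75, 0.25]))
```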

  2. Exponential operations and aggregation operators of interval neutrosophic sets and their decision making methods.

    Science.gov (United States)

    Ye, Jun

    2016-01-01

    An interval neutrosophic set (INS) is a subclass of a neutrosophic set and a generalization of an interval-valued intuitionistic fuzzy set, and then the characteristics of INS are independently described by the interval numbers of its truth-membership, indeterminacy-membership, and falsity-membership degrees. However, the exponential parameters (weights) of all the existing exponential operational laws of INSs and the corresponding exponential aggregation operators are crisp values in interval neutrosophic decision making problems. As a supplement, this paper firstly introduces new exponential operational laws of INSs, where the bases are crisp values or interval numbers and the exponents are interval neutrosophic numbers (INNs), which are basic elements in INSs. Then, we propose an interval neutrosophic weighted exponential aggregation (INWEA) operator and a dual interval neutrosophic weighted exponential aggregation (DINWEA) operator based on these exponential operational laws and introduce comparative methods based on cosine measure functions for INNs and dual INNs. Further, we develop decision-making methods based on the INWEA and DINWEA operators. Finally, a practical example on the selecting problem of global suppliers is provided to illustrate the applicability and rationality of the proposed methods.

  3. An Extended TOPSIS Method for the Multiple Attribute Decision Making Problems Based on Interval Neutrosophic Set

    Directory of Open Access Journals (Sweden)

    Pingping Chi

    2013-03-01

    Full Text Available The interval neutrosophic set (INS) can more easily express incomplete, indeterminate and inconsistent information, and TOPSIS is one of the most commonly used and effective methods for multiple attribute decision making; in general, however, it can only process attribute values given as crisp numbers. In this paper, we have extended TOPSIS to INSs: with respect to multiple attribute decision making problems in which the attribute weights are unknown and the attribute values take the form of INSs, we propose an expanded TOPSIS method. Firstly, the definition of INS and the operational laws are given, and the distance between INSs is defined. Then, the attribute weights are determined based on the maximizing deviation method and an extended TOPSIS method is developed to rank the alternatives. Finally, an illustrative example is given to verify the developed approach and to demonstrate its practicality and effectiveness.
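
    For orientation, the classical crisp-number TOPSIS that the paper extends can be sketched as follows. This is our own illustration of the ranking skeleton only; the paper's interval neutrosophic distances and maximizing-deviation weights are not reproduced:

```python
import math

def topsis(matrix, weights, benefit):
    """Classical TOPSIS on crisp scores: vector-normalise each criterion,
    apply weights, find the ideal and anti-ideal solutions, and rank
    alternatives by relative closeness to the ideal."""
    m, n = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Three hypothetical alternatives, two benefit criteria and one cost criterion.
scores = topsis([[7, 9, 9], [8, 7, 8], [9, 6, 8]],
                weights=[0.5, 0.3, 0.2],
                benefit=[True, True, False])
print(scores)
```

    The higher the closeness score, the better the alternative; the interval neutrosophic extension replaces the crisp entries and Euclidean distances with INS operations.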

  4. Approaches, tools and methods used for setting priorities in health research in the 21(st) century.

    Science.gov (United States)

    Yoshida, Sachiyo

    2016-06-01

    Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001-2014. A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method. A further 19% used a combination of expert panel interview and focus group discussion ("consultation process") but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well-defined structure - such as the CHNRI method, James Lind Alliance Method and Combined Approach Matrix - it is likely that the Delphi method and non-replicable consultation processes will gradually be replaced by these emerging tools, which offer more

  5. Setting health research priorities using the CHNRI method: IV. Key conceptual advances.

    Science.gov (United States)

    Rudan, Igor

    2016-06-01

    Child Health and Nutrition Research Initiative (CHNRI) started as an initiative of the Global Forum for Health Research in Geneva, Switzerland. Its aim was to develop a method that could assist priority setting in health research investments. The first version of the CHNRI method was published in 2007-2008. The aim of this paper was to summarize the history of the development of the CHNRI method and its key conceptual advances. The guiding principle of the CHNRI method is to expose the potential of many competing health research ideas to reduce disease burden and inequities that exist in the population in a feasible and cost-effective way. The CHNRI method introduced three key conceptual advances that led to its increased popularity in comparison to other priority-setting methods and processes. First, it proposed a systematic approach to listing a large number of possible research ideas, using the "4D" framework (description, delivery, development and discovery research) and a well-defined "depth" of proposed research ideas (research instruments, avenues, options and questions). Second, it proposed a systematic approach for discriminating between many proposed research ideas based on a well-defined context and criteria. The five "standard" components of the context are the population of interest, the disease burden of interest, geographic limits, time scale and the preferred style of investing with respect to risk. The five "standard" criteria proposed for prioritization between research ideas are answerability, effectiveness, deliverability, maximum potential for disease burden reduction and the effect on equity. However, both the context and the criteria can be flexibly changed to meet the specific needs of each priority-setting exercise. Third, it facilitated consensus development through measuring collective optimism on each component of each research idea among a larger group of experts using a simple scoring system. This enabled the use of the knowledge of

  6. Uncertainties must be expected. Stochastic methods support power plant control; Mit Unsicherheiten ist zu rechnen. Stochastische Methoden unterstuetzen Kraftwerkssteuerung

    Energy Technology Data Exchange (ETDEWEB)

    Syben, Olaf; Dehery, Francois-Regis [ProCom GmbH, Aachen (Germany)

    2012-07-01

    Uncertainties must be considered for successful portfolio management in the energy markets. This is not only a structural aspect of the increasingly complex interdependences between raw materials, technical assets and economic considerations, but also concerns the narrowing of degrees of freedom over time, which demands ever faster decision processes. Stochastic methods make it possible to judge uncertainties even in complex planning problems. Integration of the mathematical methods in an efficient IT environment ensures that a verifiable decision basis is available at any time. (orig./AKB)

  7. A method for patient set-up guidance in radiotherapy using augmented reality

    International Nuclear Information System (INIS)

    Talbot, J.; Meyer, J.; Watts, R.; Grasset, R.

    2009-01-01

    Full text: A system for patient set-up in external beam radiotherapy was developed using Augmented Reality (AR). Live images of the linac treatment couch and patient were obtained with video cameras and displayed on a nearby monitor. A 3D model of the patient's external contour was obtained from planning CT data, and AR tracking software was used to superimpose the model onto the video images in the correct position for treatment. Throughout set-up and treatment, the user can view the monitor and visually confirm that the patient is positioned correctly. To ensure that the virtual contour was displayed in the correct position, a process was devised to register the coordinates of the linac with the camera images. A cube with AR tracking markers attached to its faces was constructed for alignment with the isocentre using room lasers or cone-beam CT. The performance of the system was investigated in a clinical environment by using it to position an anthropomorphic phantom without the aid of additional set-up methods. The positioning errors were determined by means of CBCT and image registration. The translational set-up errors were found to be less than 2.4 mm and the rotational errors less than 0.3°. This proof-of-principle study has demonstrated the feasibility of using AR for patient position and pose guidance.

  8. Development of a set of benchmark problems to verify numerical methods for solving burnup equations

    International Nuclear Information System (INIS)

    Lago, Daniel; Rahnema, Farzad

    2017-01-01

    Highlights: • Description transmutation chain benchmark problems. • Problems for validating numerical methods for solving burnup equations. • Analytical solutions for the burnup equations. • Numerical solutions for the burnup equations. - Abstract: A comprehensive set of transmutation chain benchmark problems for numerically validating methods for solving burnup equations was created. These benchmark problems were designed to challenge both traditional and modern numerical methods used to solve the complex set of ordinary differential equations used for tracking the change in nuclide concentrations over time due to nuclear phenomena. Given the development of most burnup solvers is done for the purpose of coupling with an established transport solution method, these problems provide a useful resource in testing and validating the burnup equation solver before coupling for use in a lattice or core depletion code. All the relevant parameters for each benchmark problem are described. Results are also provided in the form of reference solutions generated by the Mathematica tool, as well as additional numerical results from MATLAB.
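
    The simplest transmutation-chain problem of this kind is a two-nuclide Bateman chain, which has a closed-form solution against which a numerical integrator can be checked. A sketch of that idea (our own toy example, not one of the paper's benchmark problems):

```python
import math

def rk4_decay_chain(n1_0, n2_0, lam1, lam2, t, steps=1000):
    """Integrate the two-nuclide Bateman equations
       dN1/dt = -lam1*N1,  dN2/dt = lam1*N1 - lam2*N2
    with classic fourth-order Runge-Kutta."""
    def f(n1, n2):
        return -lam1 * n1, lam1 * n1 - lam2 * n2
    h = t / steps
    n1, n2 = n1_0, n2_0
    for _ in range(steps):
        k1 = f(n1, n2)
        k2 = f(n1 + h / 2 * k1[0], n2 + h / 2 * k1[1])
        k3 = f(n1 + h / 2 * k2[0], n2 + h / 2 * k2[1])
        k4 = f(n1 + h * k3[0], n2 + h * k3[1])
        n1 += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        n2 += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return n1, n2

def bateman_n2(n1_0, lam1, lam2, t):
    """Analytical Bateman solution for the daughter nuclide (lam1 != lam2)."""
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

lam1, lam2, t = 0.3, 0.1, 5.0
num = rk4_decay_chain(1.0, 0.0, lam1, lam2, t)[1]
ref = bateman_n2(1.0, lam1, lam2, t)
print(abs(num - ref))  # numerical-vs-analytical discrepancy
```

    The paper's benchmark set extends this idea to long, stiff chains where such analytical references become essential for validating a burnup solver.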

  9. Failure Mode and Effect Analysis using Soft Set Theory and COPRAS Method

    Directory of Open Access Journals (Sweden)

    Ze-Ling Wang

    2017-01-01

    Full Text Available Failure mode and effect analysis (FMEA) is a risk management technique frequently applied to enhance system performance and safety. In recent years, many researchers have shown an intense interest in improving FMEA due to inherent weaknesses associated with the classical risk priority number (RPN) method. In this study, we develop a new risk ranking model for FMEA based on soft set theory and the COPRAS method, which can deal with the limitations and enhance the performance of the conventional FMEA. First, trapezoidal fuzzy soft sets are adopted to manage FMEA team members' linguistic assessments of failure modes. Then, a modified COPRAS method is utilized for determining the ranking order of the failure modes recognized in FMEA. Especially, we treat the risk factors as interdependent and employ the Choquet integral to obtain the aggregate risk of failures in the new FMEA approach. Finally, a practical FMEA problem is analyzed via the proposed approach to demonstrate its applicability and effectiveness. The result shows that the FMEA model developed in this study outperforms the traditional RPN method and provides a more reasonable risk assessment of failure modes.
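
    For orientation, the classical crisp COPRAS ranking that the fuzzy-soft-set model builds on can be sketched as follows. This is our own illustration with made-up numbers; the paper's trapezoidal fuzzy soft sets and Choquet integral are not reproduced:

```python
def copras(matrix, weights, benefit):
    """Classical (crisp) COPRAS: sum-normalise each criterion, split the
    weighted sums into benefit (S+) and cost (S-) parts, then compute the
    relative significance Q of each alternative."""
    m, n = len(matrix), len(matrix[0])
    col_sum = [sum(matrix[i][j] for i in range(m)) for j in range(n)]
    d = [[weights[j] * matrix[i][j] / col_sum[j] for j in range(n)] for i in range(m)]
    s_plus = [sum(d[i][j] for j in range(n) if benefit[j]) for i in range(m)]
    s_minus = [sum(d[i][j] for j in range(n) if not benefit[j]) for i in range(m)]
    total_minus = sum(s_minus)
    inv_sum = sum(1.0 / s for s in s_minus)
    return [s_plus[i] + total_minus / (s_minus[i] * inv_sum) for i in range(m)]

# Three hypothetical failure modes scored on one benefit criterion
# (detectability) and two cost criteria (severity, occurrence rate).
q = copras([[75, 5, 0.30], [60, 3, 0.20], [85, 6, 0.35]],
           weights=[0.5, 0.25, 0.25],
           benefit=[True, False, False])
print(q)
```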

  10. Clustering and training set selection methods for improving the accuracy of quantitative laser induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Ryan B., E-mail: randerson@astro.cornell.edu [Cornell University Department of Astronomy, 406 Space Sciences Building, Ithaca, NY 14853 (United States); Bell, James F., E-mail: Jim.Bell@asu.edu [Arizona State University School of Earth and Space Exploration, Bldg.: INTDS-A, Room: 115B, Box 871404, Tempe, AZ 85287 (United States); Wiens, Roger C., E-mail: rwiens@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States); Morris, Richard V., E-mail: richard.v.morris@nasa.gov [NASA Johnson Space Center, 2101 NASA Parkway, Houston, TX 77058 (United States); Clegg, Samuel M., E-mail: sclegg@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663 MS J565, Los Alamos, NM 87545 (United States)

    2012-04-15

    We investigated five clustering and training set selection methods to improve the accuracy of quantitative chemical analysis of geologic samples by laser induced breakdown spectroscopy (LIBS) using partial least squares (PLS) regression. The LIBS spectra were previously acquired for 195 rock slabs and 31 pressed powder geostandards under 7 Torr CO₂ at a stand-off distance of 7 m at 17 mJ per pulse to simulate the operational conditions of the ChemCam LIBS instrument on the Mars Science Laboratory Curiosity rover. The clustering and training set selection methods, which do not require prior knowledge of the chemical composition of the test-set samples, are based on grouping similar spectra and selecting appropriate training spectra for the partial least squares (PLS2) model. These methods were: (1) hierarchical clustering of the full set of training spectra and selection of a subset for use in training; (2) k-means clustering of all spectra and generation of PLS2 models based on the training samples within each cluster; (3) iterative use of PLS2 to predict sample composition and k-means clustering of the predicted compositions to subdivide the groups of spectra; (4) soft independent modeling of class analogy (SIMCA) classification of spectra, and generation of PLS2 models based on the training samples within each class; (5) use of Bayesian information criteria (BIC) to determine an optimal number of clusters and generation of PLS2 models based on the training samples within each cluster. The iterative method and the k-means method using 5 clusters showed the best performance, improving the absolute quadrature root mean squared error (RMSE) by ≈3 wt.%. The statistical significance of these improvements was ≈85%. Our results show that although clustering methods can modestly improve results, a large and diverse training set is the most reliable way to improve the accuracy of quantitative LIBS. In particular, additional sulfate standards and
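
    Method (2) above, clustering the spectra and then training a separate model per cluster, can be sketched with a plain k-means. PLS2 itself is omitted; the example only demonstrates the clustering step on our own toy data, not real LIBS spectra:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on lists of floats (stand-ins for LIBS spectra).
    Returns the final centroids and the clusters of points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Recompute each centroid as the mean of its cluster (keep the old
        # centroid if the cluster is empty).
        centroids = [[sum(dim) / len(cl) for dim in zip(*cl)] if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids, clusters

# Two well-separated synthetic "spectral" groups.
random.seed(1)
group_a = [[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(20)]
group_b = [[random.gauss(5, 0.1), random.gauss(5, 0.1)] for _ in range(20)]
centroids, clusters = kmeans(group_a + group_b, k=2)
print(sorted(len(c) for c in clusters))
```

    In the paper's workflow, a separate PLS2 regression would then be trained on the training samples within each recovered cluster.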

  11. Numerical Modelling of Three-Fluid Flow Using The Level-set Method

    Science.gov (United States)

    Li, Hongying; Lou, Jing; Shang, Zhi

    2014-11-01

    This work presents a numerical model for simulation of three-fluid flow involving two different moving interfaces. These interfaces are captured using the level-set method via two different level-set functions. A combined formulation with only one set of conservation equations for the whole physical domain, consisting of the three different immiscible fluids, is employed. Numerical solution is performed on a fixed mesh using the finite volume method. Surface tension effect is incorporated using the Continuum Surface Force model. Validation of the present model is made against available results for stratified flow and rising bubble in a container with a free surface. Applications of the present model are demonstrated by a variety of three-fluid flow systems including (1) three-fluid stratified flow, (2) two-fluid stratified flow carrying the third fluid in the form of drops and (3) simultaneous rising and settling of two drops in a stationary third fluid. The work is supported by a Thematic and Strategic Research from A*STAR, Singapore (Ref. #: 1021640075).

  12. Study of different effectives on wind energy by using mathematical methods and rough set theory

    International Nuclear Information System (INIS)

    Marrouf, A.A.

    2009-01-01

    Analysis of data plays an important role in all fields of life; a huge amount of data results from experimental work in all scientific and social sciences. The analysis of these data was traditionally carried out by statistical methods, and its representation depended on classical Euclidean geometric concepts. In the 21st century, new directions for data analysis have been started in applications. These directions depend basically on modern mathematical theories. The quality of data and information can be characterized as interfering, and one is often unable to distinguish between its vocabularies. Topological methods are the most compatible with this process of analysis for making decisions. At the end of the 20th century, a new topological method appeared, known as the Rough Set Theory Approach; it does not depend on external suppositions (it "lets the data speak") and is suited to all types of data. The theory was originated by Pawlak in 1982 [48] as a result of a long-term program of fundamental research on logical properties of information systems, carried out by him and a group of logicians from the Polish Academy of Sciences and the University of Warsaw, Poland. Various real-life applications of rough sets have shown their usefulness in many domains, such as civil engineering, medical data analysis, generation of a cement kiln control algorithm from observation of the stoker's actions, vibration analysis, aircraft pilot performance evaluation, hydrology, pharmacology, image processing and ecology. The Variable Precision Rough Set (VPRS) model was proposed by W. Ziarko [80]. It is a generalization of the rough set model, aimed at handling uncertain information, and is directly derived from the original model without any additional assumptions. Topology is a mathematical tool to study information systems and variable precision rough sets. Ziarko presumed that the notion of variable precision rough sets depends on special types of topological spaces. In this space, the families of
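
    The basic rough-set machinery referred to above, Pawlak's lower and upper approximations of a target set with respect to a partition of the universe, can be sketched as:

```python
def approximations(equiv_classes, target):
    """Pawlak lower and upper approximations of a target set under an
    equivalence relation given as a partition (list of blocks)."""
    lower, upper = set(), set()
    for block in equiv_classes:
        if block <= target:        # block entirely inside the target
            lower |= block
        if block & target:         # block overlaps the target
            upper |= block
    return lower, upper

U = {1, 2, 3, 4, 5, 6}
partition = [{1, 2}, {3, 4}, {5, 6}]   # indiscernibility classes
X = {1, 2, 3}                          # concept to approximate
low, up = approximations(partition, X)
print(sorted(low), sorted(up))  # the boundary region is up - low
```

    VPRS relaxes the strict inclusion test in the lower approximation to a majority-inclusion threshold, which is where the "variable precision" enters.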

  13. Reliability Quantification Method for Safety Critical Software Based on a Finite Test Set

    International Nuclear Information System (INIS)

    Shin, Sung Min; Kim, Hee Eun; Kang, Hyun Gook; Lee, Seung Jun

    2014-01-01

    Software inside a digitalized system plays a very important role because its malfunction may cause irreversible consequences and affect the whole system as a common cause failure. However, test-based reliability quantification methods for safety critical software have limitations caused by difficulties in developing input sets in the form of trajectories, i.e., series of successive values of variables. To address these limitations, this study proposed another method which conducts the test using combinations of single values of variables. To substitute combinations of variable values for the trajectory form of input, the possible range of each variable should be identified. For this purpose, the assigned range of each variable, logical relations between variables, plant dynamics under certain situations, and characteristics of obtaining information from digital devices are considered. The feasibility of the proposed method was confirmed through an application to the Reactor Protection System (RPS) software trip logic

  14. Level set method for optimal shape design of MRAM core. Micromagnetic approach

    International Nuclear Information System (INIS)

    Melicher, Valdemar; Cimrak, Ivan; Keer, Roger van

    2008-01-01

    We aim at optimizing the shape of the magnetic core in MRAM memories. The evolution of the magnetization during the writing process is described by the Landau-Lifshitz equation (LLE). The actual shape of the core in one cell is characterized by the coefficient γ. The cost functional f=f(γ) expresses the quality of the writing process, having in mind the competition between the full-select and the half-select element. We derive an explicit form of the derivative F=∂f/∂γ which allows for the use of gradient-type methods for the actual computation of the optimized shape (e.g., the steepest descent method). The level set method (LSM) is employed for the representation of the piecewise constant coefficient γ

  15. New method of three-dimensional reconstruction from two-dimensional MR data sets

    International Nuclear Information System (INIS)

    Wrazidlo, W.; Schneider, S.; Brambs, H.J.; Richter, G.M.; Kauffmann, G.W.; Geiger, B.; Fischer, C.

    1989-01-01

    In medical diagnosis and therapy, cross-sectional images are obtained by means of US, CT, or MR imaging. The authors propose a new solution to the problem of constructing a shape over a set of cross-sectional contours from two-dimensional (2D) MR data sets. The authors' method reduces the problem of constructing a shape over the cross sections to one of constructing a sequence of partial shapes, each of them connecting two cross sections lying on adjacent planes. The solution makes use of the Delaunay triangulation, which is isomorphic in that specific situation. The authors compute this Delaunay triangulation. Shape reconstruction is then achieved section by section by pruning the Delaunay triangulations

  16. The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method

    DEFF Research Database (Denmark)

    Rezanova, Natalia Jurjevna; Ryan, David

    2010-01-01

    The need to recover a train driver schedule occurs during major disruptions in the daily railway operations. Based on data from the Danish passenger railway operator DSB S-tog A/S, a solution method to the train driver recovery problem (TDRP) is developed. The TDRP is formulated as a set partitioning problem. We define a disruption neighbourhood by identifying a small set of drivers and train tasks directly affected by the disruption. Based on the disruption neighbourhood, the TDRP model is formed and solved. If the TDRP solution provides a feasible recovery for the drivers within ... branching strategy using the depth-first search of the Branch & Bound tree. The LP relaxation of the TDRP possesses strong integer properties. We present test scenarios generated from the historical real-life operations data of DSB S-tog A/S. The numerical results show that all but one tested instances ...

  17. Set up of analytical methods for evaluation of specifications of recombinant Hepatitis-B vaccine

    Directory of Open Access Journals (Sweden)

    Daram M

    2009-06-01

    Full Text Available Background: Hepatitis B vaccination has been included in routine immunization of all individuals according to WHO recommendations since 1991. Despite successful coverage, 3-5% of recipients fail to mount a desirable protection level of Ab. Vaccine failure results from: emergence of mutation, immune failure of individuals, decrease in vaccine potency, etc. The quality of Hepatitis B vaccine should be evaluated by a reliable method. Methods: The amount of vaccine antigen was measured through the in vitro assay of Hepatitis B vaccines, which consists of multiple dilutions of the reference material and samples. The preparations were evaluated by Elisa to determine the amount of HBsAg. The data were analyzed by parallel-line analysis software. The in vivo assay was performed by inoculating multiple doses of the reference and sample preparations in Balb/c mice. A control group was also inoculated with vaccine matrix. Four weeks later, the mice sera were evaluated to determine the presence of antibodies against Hepatitis B by the Elisa method. The data were analyzed by Probit analysis software. Results: Both methods were set up in our laboratory, by which different batches of Hepatitis B vaccine were evaluated. It was observed that the in vivo and in vitro methods provide comparable results. Therefore we can use the in vitro method for routine testing of HB vaccine quality control. Conclusion: The in vitro method can be used in place of the in vivo method because of its time and cost-effectiveness. Moreover, since no animals are used in the in vitro method, it complies well with the 3R concept (Reduction, Refinement, and Replacement of animal testing) and the current tendency to use alternative methods.

  18. A method of setting limits for the purpose of quality assurance

    International Nuclear Information System (INIS)

    Sanghangthum, Taweap; Suriyapee, Sivalee; Kim, Gwe-Ya; Pawlicki, Todd

    2013-01-01

    The result from any assurance measurement needs to be checked against some limits for acceptability. There are two types of limits; those that define clinical acceptability (action limits) and those that are meant to serve as a warning that the measurement is close to the action limits (tolerance limits). Currently, there is no standard procedure to set these limits. In this work, we propose an operational procedure to set tolerance limits and action limits. The approach to establish the limits is based on techniques of quality engineering using control charts and a process capability index. The method is different for tolerance limits and action limits, with action limits being categorized into those that are specified and unspecified. The procedure is to first ensure process control using the I-MR control charts. Then, the tolerance limits are set equal to the control chart limits on the I chart. Action limits are determined using the Cpm process capability index with the requirement that the process must be in control. The limits from the proposed procedure are compared to an existing or conventional method. Four examples are investigated: two of volumetric modulated arc therapy (VMAT) point dose quality assurance (QA) and two of routine linear accelerator output QA. The tolerance limits range from about 6% larger to 9% smaller than conventional action limits for VMAT QA cases. For the linac output QA, tolerance limits are about 60% smaller than conventional action limits. The operational procedure described in this work is based on established quality management tools and will provide a systematic guide to set up tolerance and action limits for different equipment and processes. (paper)
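
    The two quantities the procedure relies on, individuals (I) chart control limits and the Cpm capability index, can be sketched as follows. This is a generic quality-engineering illustration with made-up QA data, not the paper's exact action-limit formula:

```python
import math

def imr_limits(data):
    """Individuals (I) chart limits: x-bar +/- 2.66 * mean moving range.
    The constant 2.66 is 3/d2 with d2 = 1.128 for a moving range of 2."""
    xbar = sum(data) / len(data)
    moving_ranges = [abs(a - b) for a, b in zip(data[1:], data[:-1])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return xbar - 2.66 * mr_bar, xbar + 2.66 * mr_bar

def cpm(data, lsl, usl, target):
    """Taguchi capability index Cpm, which penalises deviation from target:
    Cpm = (USL - LSL) / (6 * sqrt(s^2 + (mean - target)^2))."""
    mu = sum(data) / len(data)
    var = sum((x - mu) ** 2 for x in data) / (len(data) - 1)
    tau = math.sqrt(var + (mu - target) ** 2)
    return (usl - lsl) / (6 * tau)

# Hypothetical percent-dose-difference QA measurements.
qa = [0.2, -0.1, 0.4, 0.0, -0.3, 0.1, 0.2, -0.2, 0.3, 0.0]
print(imr_limits(qa))                    # candidate tolerance limits
print(round(cpm(qa, -3.0, 3.0, 0.0), 2))  # capability against +/-3% spec
```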

  19. New approaches to wipe sampling methods for antineoplastic and other hazardous drugs in healthcare settings.

    Science.gov (United States)

    Connor, Thomas H; Smith, Jerome P

    2016-09-01

    At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition it will provide some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug-safe handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine, less costly, and provide a shorter response time than classical analytical techniques now in use.

  20. Developing a digital photography-based method for dietary analysis in self-serve dining settings.

    Science.gov (United States)

    Christoph, Mary J; Loman, Brett R; Ellison, Brenna

    2017-07-01

    Current population-based methods for assessing dietary intake, including food frequency questionnaires, food diaries, and 24-h dietary recall, are limited in their ability to objectively measure food intake. Digital photography has been identified as a promising addition to these techniques but has rarely been assessed in self-serve settings. We utilized digital photography to examine university students' food choices and consumption in a self-serve dining hall setting. Research assistants took pre- and post-photos of students' plates during lunch and dinner to assess selection (presence), servings, and consumption of MyPlate food groups. Four coders rated the same set of approximately 180 meals for inter-rater reliability analyses; approximately 50 additional meals were coded twice by each coder to assess intra-rater agreement. Inter-rater agreement on the selection, servings, and consumption of food groups was high at 93.5%; intra-rater agreement was similarly high with an average of 95.6% agreement. Coders achieved the highest rates of agreement in assessing if a food group was present on the plate (95-99% inter-rater agreement, depending on food group) and estimating the servings of food selected (81-98% inter-rater agreement). Estimating consumption, particularly for items such as beans and cheese that were often in mixed dishes, was more challenging (77-94% inter-rater agreement). Results suggest that the digital photography method presented is feasible for large studies in real-world environments and can provide an objective measure of food selection, servings, and consumption with a high degree of agreement between coders; however, to make accurate claims about the state of dietary intake in all-you-can-eat, self-serve settings, researchers will need to account for the possibility of diners taking multiple trips through the serving line. Copyright © 2017 Elsevier Ltd. All rights reserved.
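
    Inter-rater agreement of the kind reported above can be computed as simple percent agreement; a chance-corrected alternative is Cohen's kappa (the study reports percent agreement; kappa is our addition, shown on hypothetical ratings):

```python
def percent_agreement(r1, r2):
    """Fraction of items on which two coders gave the same rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two coders:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(r1)
    po = percent_agreement(r1, r2)
    categories = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (po - pe) / (1 - pe)

# Hypothetical "food group present on plate" codes from two coders.
coder1 = ["present", "present", "absent", "present", "absent", "absent"]
coder2 = ["present", "present", "absent", "absent", "absent", "absent"]
print(percent_agreement(coder1, coder2))
print(round(cohens_kappa(coder1, coder2), 2))
```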

  1. Finite test sets development method for test execution of safety critical software

    International Nuclear Information System (INIS)

    El-Bordany Ayman; Yun, Won Young

    2014-01-01

It reads inputs, computes new states, and updates outputs on each scan cycle. Korea Nuclear Instrumentation and Control System (KNICS) has recently developed a fully digitalized Reactor Protection System (RPS) based on PLD. As a digital system, this RPS is equipped with dedicated software. The reliability of this software is crucial to NPP safety, where a malfunction may cause irreversible consequences and affect the whole system as a Common Cause Failure (CCF). To guarantee the reliability of the whole system, the reliability of this software needs to be quantified. There are three representative methods for software reliability quantification, namely the Verification and Validation (V and V) quality-based method, the Software Reliability Growth Model (SRGM), and the test-based method. An important concept of the guidance is that the test sets represent 'trajectories' (a series of successive values for the input variables of a program that occur during the operation of the software over time) in the space of inputs to the software. Actually, the inputs to the software depend on the state of the plant at that time, and these inputs form a new internal state of the software by changing the values of some variables. In other words, the internal state of the software at a specific time depends on the history of past inputs. Here the internal state of the software that can be changed by past inputs is named the Context of Software (CoS). In a certain CoS, a software failure occurs when a fault is triggered by some inputs. To cover the failure occurrence mechanism of software, preceding research insists that the inputs should be in trajectory form. However, in this approach, there are two critical problems. One is the length of the trajectory input. The input trajectory should be long enough to cover the failure mechanism, but how long is enough is not clear. What is worse, to cover some accident scenarios, one set of inputs should represent dozens of hours of successive values

  2. RS-SNP: a random-set method for genome-wide association studies

    Directory of Open Access Journals (Sweden)

    Mukherjee Sayan

    2011-03-01

Full Text Available Abstract Background The typical objective of Genome-wide association (GWA) studies is to identify single-nucleotide polymorphisms (SNPs) and corresponding genes with the strongest evidence of association (the 'most-significant SNPs/genes' approach). Borrowing ideas from micro-array data analysis, we propose a new method, named RS-SNP, for detecting sets of genes enriched in SNPs moderately associated to the phenotype. RS-SNP assesses whether the number of significant SNPs, with p-value P ≤ α, belonging to a given SNP set is statistically significant. The rationale of the proposed method is that two kinds of null hypotheses are taken into account simultaneously. In the first null model the genotype and the phenotype are assumed to be independent random variables, and the null distribution is the probability that the number of significant SNPs in the set is greater than that observed by chance. The second null model assumes that the number of significant SNPs in a set depends on the size of the set and not on the identity of the SNPs in it. Statistical significance is assessed using non-parametric permutation tests. Results We applied RS-SNP to the Crohn's disease (CD) data set collected by the Wellcome Trust Case Control Consortium (WTCCC) and compared the results with GENGEN, an approach recently proposed in the literature. The enrichment analysis using RS-SNP and the set of pathways contained in the MSigDB C2 CP pathway collection highlighted 86 pathways rich in SNPs weakly associated to CD. Of these, 47 were also indicated to be significant by GENGEN. Similar results were obtained using the MSigDB C5 pathway collection. Many of the pathways found to be enriched by RS-SNP have a well-known connection to CD and often with inflammatory diseases. Conclusions The proposed method is a valuable alternative to other techniques for enrichment analysis of SNP sets. It is well founded from a theoretical and statistical perspective. Moreover, the experimental comparison with GENGEN highlights that it is
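The second null model described above can be sketched as a simple permutation test: draw random SNP sets of the same size and count how often they contain at least as many significant SNPs as the observed set. This is an illustrative reconstruction under stated assumptions, not the authors' code; the function name and defaults are invented for the sketch.

```python
import numpy as np

def rs_snp_enrichment(pvalues, set_idx, alpha=0.05, n_perm=2000, seed=0):
    """Is the count of significant SNPs (p <= alpha) inside a SNP set
    larger than expected for random sets of the same size?  Returns
    the observed count and a one-sided permutation p-value."""
    rng = np.random.default_rng(seed)
    sig = np.asarray(pvalues) <= alpha
    set_idx = np.asarray(set_idx)
    observed = int(sig[set_idx].sum())
    null = np.empty(n_perm)
    for i in range(n_perm):
        # random set of the same size, drawn from all genotyped SNPs
        draw = rng.choice(sig.size, size=set_idx.size, replace=False)
        null[i] = sig[draw].sum()
    # add-one correction keeps the p-value away from exactly zero
    p_value = (1 + (null >= observed).sum()) / (n_perm + 1)
    return observed, p_value
```

A set whose SNPs are all significant while the genome-wide significance rate is low yields a small p-value, mirroring the enrichment logic described in the abstract.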

  3. An improved level set method for brain MR images segmentation and bias correction.

    Science.gov (United States)

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on an observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term, which is incorporated into the level set framework. Our method is able to capture bias of quite general profiles. Moreover, it is robust to initialization, and thereby allows fully automated applications. The proposed method has been used for images of various modalities with promising results.

  4. Incremental Knowledge Acquisition for WSD: A Rough Set and IL based Method

    Directory of Open Access Journals (Sweden)

    Xu Huang

    2015-07-01

Full Text Available Word sense disambiguation (WSD) is one of the tricky tasks in natural language processing (NLP), as it needs to take into full account all the complexities of language. Because WSD involves discovering semantic structures from unstructured text, automatic knowledge acquisition of word sense is profoundly difficult. To acquire knowledge about Chinese multi-sense verbs, we introduce an incremental machine learning method which combines the rough set method and instance-based learning. First, the context of a multi-sense verb is extracted into a table; its sense is annotated by a skilled human and stored in the same table. In this way, a decision table is formed, and rules can then be extracted within the framework of attribute-value reduction of rough sets. Instances not entailed by any rule are treated as outliers. When new instances are added to the decision table, only the newly added instances and the outliers need to be learned further; thus incremental learning is fulfilled. Experiments show the scale of the decision table can be reduced dramatically by this method without performance decline.
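The decision-table step can be illustrated with a toy sketch (hypothetical data and function name, not the authors' implementation): context tuples that always receive one sense yield a rule, while conflicting tuples are kept aside as outliers for further learning.

```python
from collections import defaultdict

def extract_rules(decision_table):
    """Toy rule extraction from a WSD decision table.

    decision_table: list of (context, sense) pairs, where context is a
    tuple of attribute values.  A context that always maps to a single
    sense yields a rule; conflicting contexts become outliers."""
    senses_by_context = defaultdict(set)
    for context, sense in decision_table:
        senses_by_context[context].add(sense)
    rules = {c: next(iter(s))
             for c, s in senses_by_context.items() if len(s) == 1}
    outliers = [c for c, s in senses_by_context.items() if len(s) > 1]
    return rules, outliers
```

New instances then only need to be checked against the existing rules; only uncovered or conflicting ones trigger re-learning, which is the incremental idea the abstract describes.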

  5. A method for statistical comparison of data sets and its uses in analysis of nuclear physics data

    International Nuclear Information System (INIS)

    Bityukov, S.I.; Smirnova, V.V.; Krasnikov, N.V.; Maksimushkina, A.V.; Nikitenko, A.N.

    2014-01-01

The authors propose a method for statistical comparison of two data sets, based on the method of statistical comparison of histograms. As an estimator of the quality of the decision made, it is proposed to use a value which can be interpreted as the probability that the decision (that the data sets differ) is correct [ru]
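A common way to compare two binned data sets, in the spirit of the histogram comparison mentioned (the authors' exact estimator is not reproduced here), is the two-sample chi-square statistic for histograms with possibly different totals:

```python
import numpy as np

def compare_histograms(h1, h2):
    """Two-sample chi-square statistic for histograms of (possibly)
    different total counts; returns the statistic and the number of
    degrees of freedom (non-empty bins minus one)."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    n1, n2 = h1.sum(), h2.sum()
    mask = (h1 + h2) > 0            # skip bins empty in both histograms
    num = (n2 * h1[mask] - n1 * h2[mask]) ** 2
    den = n1 * n2 * (h1[mask] + h2[mask])
    return (num / den).sum(), int(mask.sum()) - 1
```

Identical histograms give a statistic of zero; completely disjoint ones give a large value, which can then be converted to a probability via the chi-square distribution.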

  6. CUDA based Level Set Method for 3D Reconstruction of Fishes from Large Acoustic Data

    DEFF Research Database (Denmark)

    Sharma, Ojaswa; Anton, François

    2009-01-01

    Acoustic images present views of underwater dynamics, even in high depths. With multi-beam echo sounders (SONARs), it is possible to capture series of 2D high resolution acoustic images. 3D reconstruction of the water column and subsequent estimation of fish abundance and fish species identificat...... of suppressing threshold and show its convergence as the evolution proceeds. We also present a GPU based streaming computation of the method using NVIDIA's CUDA framework to handle large volume data-sets. Our implementation is optimised for memory usage to handle large volumes....

  7. Some free boundary problems in potential flow regime using a level set based method

    Energy Technology Data Exchange (ETDEWEB)

    Garzon, M.; Bobillo-Ares, N.; Sethian, J.A.

    2008-12-09

Recent advances in the field of fluid mechanics with moving fronts are linked to the use of Level Set Methods, a versatile mathematical technique to follow free boundaries which undergo topological changes. A challenging class of problems in this context are those related to the solution of a partial differential equation posed on a moving domain, in which the boundary condition for the PDE solver has to be obtained from a partial differential equation defined on the front. This is the case of potential flow models with moving boundaries. Moreover the fluid front will possibly be carrying some material substance which will diffuse in the front and be advected by the front velocity, as for example the use of surfactants to lower surface tension. We present a Level Set based methodology to embed these partial differential equations defined on the front in a complete Eulerian framework, fully avoiding the tracking of fluid particles and its known limitations. To show the advantages of this approach in the field of Fluid Mechanics we present in this work one particular application: the numerical approximation of a potential flow model to simulate the evolution and breaking of a solitary wave propagating over a sloping bottom, and compare the level set based algorithm with previous front tracking models.

  8. AN EFFICIENT DATA MINING METHOD TO FIND FREQUENT ITEM SETS IN LARGE DATABASE USING TR-FCTM

    Directory of Open Access Journals (Sweden)

    Saravanan Suba

    2016-01-01

Full Text Available Mining association rules in large databases is one of the most popular data mining techniques for business decision makers. Discovering frequent item sets is the core process in association rule mining. Numerous algorithms are available in the literature to find frequent patterns. Apriori and FP-tree are the most common methods for finding frequent items. Apriori finds significant frequent items using candidate generation, at the cost of a larger number of database scans. FP-tree uses two database scans to find significant frequent items without using candidate generation. The proposed TR-FCTM (Transaction Reduction - Frequency Count Table Method) discovers significant frequent items by generating full candidates once to form a frequency count table with one database scan. Experimental results show that TR-FCTM outperforms both Apriori and FP-tree.
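The single-scan idea can be sketched as follows (an illustrative reconstruction; TR-FCTM's actual candidate generation is more involved than this single-item count): one pass builds the frequency count table, after which infrequent items are pruned and the transactions reduced.

```python
from collections import Counter

def frequency_count_table(transactions, min_support):
    """One database scan builds the frequency count table of single
    items; infrequent items are pruned and each transaction reduced,
    shrinking the search space for longer candidate item sets."""
    counts = Counter(item for t in transactions for item in t)
    frequent = {item for item, c in counts.items() if c >= min_support}
    reduced = [[item for item in t if item in frequent]
               for t in transactions]
    return counts, frequent, reduced
```

The reduced transactions are what a subsequent pass over candidate pairs, triples, etc. would operate on, which is where the "transaction reduction" in the method's name pays off.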

  9. Studying the properties of Variational Data Assimilation Methods by Applying a Set of Test-Examples

    DEFF Research Database (Denmark)

    Thomsen, Per Grove; Zlatev, Zahari

    2007-01-01

The variational data assimilation methods can successfully be used in different fields of science and engineering. An attempt to utilize available sets of observations in the efforts to improve (i) the models used to study different phenomena and (ii) the model results is systematically carried out when...... forward and backward computations are carried out by using the model under consideration and its adjoint equations (both the model and its adjoint are defined by systems of differential equations). The major difficulty is caused by the huge increase of the computational load (normally by a factor more than 100)...... The components of the data assimilation method (numerical algorithms for solving differential equations, splitting procedures and optimization algorithms) have been studied by using these tests. The presentation will include results from testing carried out in the study.

  10. Numerical simulation of overflow at vertical weirs using a hybrid level set/VOF method

    Science.gov (United States)

    Lv, Xin; Zou, Qingping; Reeve, Dominic

    2011-10-01

This paper presents the application of a newly developed free surface flow model to practical, yet challenging, overflow problems for weirs. Since the model takes advantage of the strengths of both the level set and volume of fluid methods and solves the Navier-Stokes equations on an unstructured mesh, it is capable of resolving the time evolution of very complex vortical motions, air entrainment and pressure variations due to violent deformations following overflow of the weir crest. In the present study, two different types of vertical weir, namely broad-crested and sharp-crested, are considered for validation purposes. The calculated overflow parameters such as pressure head distributions, velocity distributions, and water surface profiles are compared against experimental data as well as numerical results available in the literature. A very good quantitative agreement has been obtained. The numerical model thus offers a good alternative to traditional experimental methods in the study of weir problems.

  11. A combined single-multiphase flow formulation of the premixing phase using the level set method

    International Nuclear Information System (INIS)

    Leskovar, M.; Marn, J.

    1999-01-01

The premixing phase of a steam explosion covers the interaction of the melt jet or droplets with the water prior to any steam explosion occurring. To get a better insight into the hydrodynamic processes during the premixing phase, besides hot premixing experiments, where water evaporation is significant, cold isothermal premixing experiments are also performed. The specialty of isothermal premixing experiments is that three phases are involved: the water, the air and the spheres phase, but only the spheres phase mixes with the other two phases, whereas the water and air phases do not mix and remain separated by a free surface. Our idea therefore was to treat the isothermal premixing process with a combined single-multiphase flow model. In this combined model the water and air phases are treated as a single phase with discontinuous phase properties at the water-air interface, whereas the spheres are treated as usual with a multiphase flow model, where the spheres represent the dispersed phase and the common water-air phase represents the continuous phase. The common water-air phase was described with the front capturing method based on the level set formulation. In the level set formulation, the boundary of two-fluid interfaces is modeled as the zero set of a smooth signed normal distance function defined on the entire physical domain. The boundary is then updated by solving a nonlinear equation of the Hamilton-Jacobi type on the whole domain. With this single-multiphase flow model the Queos isothermal premixing experiment Q08 has been simulated. A numerical analysis using different treatments of the water-air interface (level set, high-resolution and upwind) has been performed for the incompressible and compressible case, and the results were compared to experimental measurements. (author)

  12. Topology optimization in acoustics and elasto-acoustics via a level-set method

    Science.gov (United States)

    Desai, J.; Faure, A.; Michailidis, G.; Parry, G.; Estevez, R.

    2018-04-01

    Optimizing the shape and topology (S&T) of structures to improve their acoustic performance is quite challenging. The exact position of the structural boundary is usually of critical importance, which dictates the use of geometric methods for topology optimization instead of standard density approaches. The goal of the present work is to investigate different possibilities for handling topology optimization problems in acoustics and elasto-acoustics via a level-set method. From a theoretical point of view, we detail two equivalent ways to perform the derivation of surface-dependent terms and propose a smoothing technique for treating problems of boundary conditions optimization. In the numerical part, we examine the importance of the surface-dependent term in the shape derivative, neglected in previous studies found in the literature, on the optimal designs. Moreover, we test different mesh adaptation choices, as well as technical details related to the implicit surface definition in the level-set approach. We present results in two and three-space dimensions.

  13. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    Science.gov (United States)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J.; Spadinger, Ingrid

    2016-01-01

The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest.

  14. Validation of non-rigid point-set registration methods using a porcine bladder pelvic phantom

    International Nuclear Information System (INIS)

    Zakariaee, Roja; Hamarneh, Ghassan; Brown, Colin J; Spadinger, Ingrid

    2016-01-01

The problem of accurate dose accumulation in fractionated radiotherapy treatment for highly deformable organs, such as bladder, has garnered increasing interest over the past few years. However, more research is required in order to find a robust and efficient solution and to increase the accuracy over the current methods. The purpose of this study was to evaluate the feasibility and accuracy of utilizing non-rigid (affine or deformable) point-set registration in accumulating dose in bladders of different sizes and shapes. A pelvic phantom was built to house an ex vivo porcine bladder with fiducial landmarks adhered onto its surface. Four different volume fillings of the bladder were used (90, 180, 360 and 480 cc). The performance of MATLAB implementations of five different methods was compared in aligning the bladder contour point-sets. The approaches evaluated were coherent point drift (CPD), Gaussian mixture model, shape context, thin-plate spline robust point matching (TPS-RPM) and finite iterative closest point (ICP-finite). The evaluation metrics included registration runtime, target registration error (TRE), root-mean-square error (RMS) and Hausdorff distance (HD). The reference (source) dataset was alternated through all four point-sets, in order to study the effect of reference volume on the registration outcomes. While all deformable algorithms provided reasonable registration results, CPD provided the best TRE values (6.4 mm), and TPS-RPM yielded the best mean RMS and HD values (1.4 and 6.8 mm, respectively). ICP-finite was the fastest technique and TPS-RPM, the slowest. (paper)
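One of the baselines evaluated, iterative closest point, can be sketched in a minimal rigid 2D form (an illustrative reconstruction, not the MATLAB ICP-finite implementation used in the study): alternate nearest-neighbour matching with a closed-form Kabsch (SVD) alignment.

```python
import numpy as np

def icp(src, dst, iters=20):
    """Minimal rigid 2D ICP: nearest-neighbour correspondences
    alternated with a closed-form Kabsch (SVD) alignment."""
    src = np.asarray(src, dtype=float).copy()
    dst = np.asarray(dst, dtype=float)
    for _ in range(iters):
        # nearest dst point for every src point
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        match = dst[d2.argmin(axis=1)]
        # optimal rotation between centred clouds via SVD
        mu_s, mu_m = src.mean(axis=0), match.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (match - mu_m))
        R = (U @ Vt).T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = (U @ Vt).T
        src = (src - mu_s) @ R.T + mu_m
    return src
```

Deformable variants such as CPD and TPS-RPM replace the rigid SVD step with non-rigid transformation models, which is why they handle the bladder's shape changes better at the cost of runtime.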

  15. Bioethics education in clinical settings: theory and practice of the dilemma method of moral case deliberation.

    Science.gov (United States)

    Stolper, Margreet; Molewijk, Bert; Widdershoven, Guy

    2016-07-22

    Moral Case Deliberation is a specific form of bioethics education fostering professionals' moral competence in order to deal with their moral questions. So far, few studies focus in detail on Moral Case Deliberation methodologies and their didactic principles. The dilemma method is a structured and frequently used method in Moral Case Deliberation that stimulates methodological reflection and reasoning through a systematic dialogue on an ethical issue experienced in practice. In this paper we present a case-study of a Moral Case Deliberation with the dilemma method in a health care institution for people with an intellectual disability, describing the theoretical background and the practical application of the dilemma method. The dilemma method focuses on moral experiences of participants concerning a concrete dilemma in practice. By an in-depth description of each of the steps of the deliberation process, we elucidate the educational value and didactics of this specific method. The didactics and methodical steps of the dilemma method both supported and structured the dialogical reflection process of the participants. The process shows that the participants learned to recognize the moral dimension of the issue at stake and were able to distinguish various perspectives and reasons in a systematic manner. The facilitator played an important role in the learning process of the participants, by assisting them in focusing on and exploring moral aspects of the case. The reflection and learning process, experienced by the participants, shows competency-based characteristics. The role of the facilitator is that of a Socratic teacher with specific knowledge and skills, fostering reflection, inquiry and dialogue. The specific didactics of the dilemma method is well suited for teaching bioethics in clinical settings. The dilemma method follows an inductive learning approach through a dialogical moral inquiry in which participants develop not only knowledge but also skills

  16. Reservoir characterisation by a binary level set method and adaptive multiscale estimation

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Lars Kristian

    2006-01-15

The main focus of this work is on estimation of the absolute permeability as a solution of an inverse problem. We have considered both a single-phase and a two-phase flow model. Two novel approaches have been introduced and tested numerically for solving the inverse problems. The first approach is a multi scale zonation technique, which is treated in Paper A. The purpose of the work in this paper is to find a coarse scale solution based on production data from wells. In the suggested approach, the robustness of an already developed method, the adaptive multi scale estimation (AME), has been improved by utilising information from several candidate solutions generated by a stochastic optimizer. The new approach also suggests a way of combining a stochastic and a gradient search method, which in general is a problematic issue. The second approach is a piecewise constant level set approach and is applied in Papers B, C, D and E. Paper B considers the stationary single-phase problem, while Papers C, D and E use a two-phase flow model. In the two-phase flow problem we have utilised information from both production data in wells and spatially distributed data gathered from seismic surveys. Due to the higher content of information provided by the spatially distributed data, we search for solutions on a slightly finer scale than one typically does with only production data included. The applied level set method is suitable for reconstruction of fields with a supposedly known facies-type of solution. That is, the solution should be close to piecewise constant. This information is utilised through a strong restriction of the number of constant levels in the estimate. On the other hand, the flexibility in the geometries of the zones is much larger for this method than in a typical zonation approach, for example the multi scale approach applied in Paper A. In all these papers, the numerical studies are done on synthetic data sets. An advantage of synthetic data studies is that the true

  17. Formulation of improved basis sets for the study of polymer dynamics through diffusion theory methods.

    Science.gov (United States)

    Gaspari, Roberto; Rapallo, Arnaldo

    2008-06-28

In this work a new method is proposed for the choice of basis functions in diffusion theory (DT) calculations. This method, named the hybrid basis approach (HBA), combines the two previously adopted techniques, the long time sorting procedure (LTSP) and the maximum correlation approximation (MCA); the first emphasizes contributions from the long time dynamics, while the latter is based on the local correlations along the chain. In order to fulfill this task, the HBA procedure employs a first order basis set corresponding to a high order MCA one and generates upper order approximations according to LTSP. A test of the method is made first on a melt of cis-1,4-polyisoprene decamers, where HBA and LTSP are compared in terms of efficiency. Both convergence properties and numerical stability are improved by the use of the HBA basis set, whose performance is evaluated on local dynamics, by computing the correlation times of selected bond vectors along the chain, and on global ones, through the eigenvalues of the diffusion operator L. Further use of the DT with a HBA basis set has been made on a 71-mer of syndiotactic trans-1,2-polypentadiene in toluene solution, whose dynamical properties have been computed with a high order calculation and compared to the "numerical experiment" provided by the molecular dynamics (MD) simulation in explicit solvent. The necessary equilibrium averages have been obtained by a vacuum trajectory of the chain, where solvent effects on conformational properties have been reproduced with a proper screening of the nonbonded interactions, corresponding to a definite value of the mean radius of gyration of the polymer in vacuum. Results show a very good agreement between DT calculations and the MD numerical experiment. This suggests a further use of DT methods with the necessary input quantities obtained solely from knowledge of some experimental values, i.e., the mean radius of gyration of the chain and the viscosity of the solution, and by a suitable vacuum

  18. The use of principal component, discriminate and rough sets analysis methods of radiological data

    International Nuclear Information System (INIS)

    Seddeek, M.K.; Kozae, A.M.; Sharshar, T.; Badran, H.M.

    2006-01-01

In this work, computational methods of finding clusters of multivariate data points were explored using principal component analysis (PCA), discriminant analysis (DA) and rough set analysis (RSA) methods. The variables were the concentrations of four natural isotopes and the texture characteristics of 100 sand samples from the coast of North Sinai, Egypt. Beach and dune sands are the two types of samples included. These methods were used to reduce the dimensionality of multivariate data and as classification and clustering methods. The results showed that the classification of sands in the environment of North Sinai is dependent upon the radioactivity contents of the naturally occurring radioactive materials and not upon the characteristics of the sand. The application of DA enables the creation of a classification rule for sand type, and it revealed that samples with highly negative values of the first score have the highest contamination of black sand. PCA revealed that radioactivity concentrations alone can be considered to predict the classification of other samples. The results of RSA showed that only one of the concentrations of 238U, 226Ra and 232Th, together with the 40K content, can characterize the clusters along with the characteristics of the sand. Both PCA and RSA result in the following conclusion: 238U, 226Ra and 232Th behave similarly. RSA revealed that one or two of them may not be considered without affecting the body of knowledge
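The PCA step used for dimensionality reduction can be sketched as follows (a generic implementation, not the authors' code): centre the variables and project onto the leading principal axes, keeping track of the variance each axis explains.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Centre the variables and project onto the leading principal
    axes (right singular vectors of the centred data matrix)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T
    explained = (S ** 2) / (S ** 2).sum()   # variance ratio per axis
    return scores, explained[:n_components]
```

Plotting the first two score columns is the usual way such radiological samples are inspected for clusters, as done with the first-score values mentioned in the abstract.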

  19. Set-up and methods for SiPM Photo-Detection Efficiency measurements

    International Nuclear Information System (INIS)

    Zappalà, G.; Acerbi, F.; Ferri, A.; Gola, A.; Paternoster, G.; Zorzi, N.; Piemonte, C.

    2016-01-01

In this work, a compact set-up and three different methods to measure the Photo-Detection Efficiency (PDE) of Silicon Photomultipliers (SiPMs) and Single-Photon Avalanche Diodes (SPADs) are presented. The methods, based on either continuous or pulsed light illumination, are discussed in detail and compared in terms of measurement precision and time. For SiPMs, these methods have the feature of minimizing the effect of both primary and correlated noise on the PDE estimation. The PDE of SiPMs (produced at FBK, Trento, Italy) was measured in a range from UV to NIR, obtaining similar results with all the methods. Furthermore, the advantages of measuring, when possible, the PDE of SPADs (of the same technology and with the same layout as a single SiPM cell) instead of larger devices are also discussed, and a direct comparison between measurement results is shown. Using a SPAD, it is possible to reduce the measurement complexity and uncertainty, since the correlated noise sources are reduced with respect to the SiPM case.
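A standard trick for SiPM PDE measurements, plausibly related to the correlated-noise immunity mentioned above (this specific estimator is an assumption, not taken from the paper), is the Poisson zero-count method: only empty events are counted, so crosstalk and afterpulsing cannot bias the estimate.

```python
import math

def pde_from_zero_counts(p0_light, p0_dark, n_incident):
    """Poisson zero-count PDE estimate: the mean number of detected
    photons is mu = -ln P(0|light) + ln P(0|dark); dividing by the
    mean number of incident photons gives the PDE.  Counting only
    empty events makes the estimate immune to crosstalk and
    afterpulsing, which only add counts to non-empty events."""
    mu_detected = -math.log(p0_light) + math.log(p0_dark)
    return mu_detected / n_incident
```

The dark-run zero probability term subtracts the dark-count contribution in the exponent rather than in the count itself, which is what keeps the estimator unbiased by primary noise.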

  20. Developing and setting up optical methods to study the speckle patterns created by optical beam smoothing

    International Nuclear Information System (INIS)

    Surville, J.

    2005-12-01

We have developed three main optical methods to study the speckles generated by a smoothed laser source. The first method addresses the measurement of the temporal and spatial correlation functions of the source, with a modified Michelson interferometer. The second is a pump-probe technique created to capture a picture of a speckle pattern generated at a set time. The third is an evolution of the second method, dedicated to time-frequency coding thanks to a frequency-chirped probe pulse. Thus, the speckles can be followed in time and their motion can be described. With these three methods, the average size and duration of the speckles can be measured. It is also possible to measure the size and duration of each of them and, mostly, their velocity in a given direction. All the results obtained have been compared with the different existing theories. We show that the statistical distributions of the measured speckles' size and intensity agree satisfactorily with theoretical values
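The average speckle size measurement can be illustrated with a toy autocorrelation estimate (a hypothetical sketch, not the instrumentation described): the half-width of the normalized autocovariance of an intensity profile gives a characteristic speckle scale.

```python
import numpy as np

def speckle_size(profile):
    """Characteristic speckle size of a 1D intensity profile,
    estimated as the first lag at which the normalized autocovariance
    falls below one half."""
    x = np.asarray(profile, dtype=float)
    x = x - x.mean()                    # remove the mean intensity
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac = ac / ac[0]                     # normalize to ac[0] = 1
    return int(np.argmax(ac < 0.5))
```

The same half-width idea applies along the time axis of a chirp-coded measurement to get the average speckle duration.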

  1. Validation of five minimally obstructive methods to estimate physical activity energy expenditure in young adults in semi-standardized settings

    DEFF Research Database (Denmark)

    Schneller, Mikkel Bo; Pedersen, Mogens Theisen; Gupta, Nidhi

    2015-01-01

    We compared the accuracy of five objective methods, including two newly developed methods combining accelerometry and activity type recognition (Acti4), against indirect calorimetry, to estimate total energy expenditure (EE) of different activities in semi-standardized settings. Fourteen particip...

  2. The use of qualitative methods to inform Delphi surveys in core outcome set development.

    Science.gov (United States)

    Keeley, T; Williamson, P; Callery, P; Jones, L L; Mathers, J; Jones, J; Young, B; Calvert, M

    2016-05-04

    Core outcome sets (COS) help to minimise bias in trials and facilitate evidence synthesis. Delphi surveys are increasingly being used as part of a wider process to reach consensus about what outcomes should be included in a COS. Qualitative research can be used to inform the development of Delphi surveys. This is an advance in the field of COS development and one which is potentially valuable; however, little guidance exists for COS developers on how best to use qualitative methods and what the challenges are. This paper aims to provide early guidance on the potential role and contribution of qualitative research in this area. We hope the ideas we present will be challenged, critiqued and built upon by others exploring the role of qualitative research in COS development. This paper draws upon the experiences of using qualitative methods in the pre-Delphi stage of the development of three different COS. Using these studies as examples, we identify some of the ways that qualitative research might contribute to COS development, the challenges in using such methods and areas where future research is required. Qualitative research can help to identify what outcomes are important to stakeholders; facilitate understanding of why some outcomes may be more important than others, determine the scope of outcomes; identify appropriate language for use in the Delphi survey and inform comparisons between stakeholder data and other sources, such as systematic reviews. Developers need to consider a number of methodological points when using qualitative research: specifically, which stakeholders to involve, how to sample participants, which data collection methods are most appropriate, how to consider outcomes with stakeholders and how to analyse these data. A number of areas for future research are identified. Qualitative research has the potential to increase the research community's confidence in COS, although this will be dependent upon using rigorous and appropriate

  3. Setting the light conditions for measuring root transparency for age-at-death estimation methods.

    Science.gov (United States)

    Adserias-Garriga, Joe; Nogué-Navarro, Laia; Zapico, Sara C; Ubelaker, Douglas H

    2018-03-01

    Age-at-death estimation is one of the main goals in forensic identification, being an essential parameter to determine the biological profile, narrowing the possibility of identification in cases involving missing persons and unidentified bodies. The study of dental tissues has long been considered a proper tool for age estimation, with several age estimation methods based on them. Dental age estimation methods can be divided into three categories: tooth formation and development, post-formation changes, and histological changes. While tooth formation and growth changes are important for fetal and infant consideration, once dental and skeletal growth is complete, post-formation or biochemical changes can be applied. Lamendin et al. in J Forensic Sci 37:1373-1379, (1992) developed an adult age estimation method based on root transparency and periodontal recession. The regression formula demonstrated its accuracy for individuals between 40 and 70 years of age. Later on, Prince and Ubelaker in J Forensic Sci 47(1):107-116, (2002) evaluated the effects of ancestry and sex and incorporated root height into the equation, developing four new regression formulas for males and females of African and European ancestry. Even though root transparency is a key element in the method, the conditions for measuring it have not been established. The aim of the present study is to determine the light conditions, measured in lumens, that offer the greatest accuracy when applying the Lamendin et al. method as modified by Prince and Ubelaker. The results should also be taken into account in the application of other age estimation methodologies that use root transparency to estimate age-at-death.
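The regression approach described above can be sketched in a few lines. The coefficients below (0.18, 0.42, 25.53) are the ones commonly cited for the original Lamendin et al. (1992) formula and are included purely for illustration; the Prince and Ubelaker ancestry- and sex-specific equations use different coefficients and add root height as a separate term.

```python
def lamendin_age(periodontosis_mm: float, transparency_mm: float,
                 root_height_mm: float) -> float:
    """Estimate age-at-death (years) from single-rooted tooth measurements.

    Uses the regression commonly cited from Lamendin et al. (1992):
        age = 0.18 * P + 0.42 * T + 25.53
    where P and T are periodontosis and transparency heights expressed
    as percentages of root height. Illustrative only.
    """
    p = periodontosis_mm * 100.0 / root_height_mm
    t = transparency_mm * 100.0 / root_height_mm
    return 0.18 * p + 0.42 * t + 25.53

# hypothetical measurements, in millimetres
age = lamendin_age(periodontosis_mm=2.0, transparency_mm=5.0, root_height_mm=14.0)
```

Since transparency height enters the percentage term T directly, any systematic error in measuring transparency under poor lighting propagates linearly into the age estimate, which is why standardising the light conditions matters.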

  4. [Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie

    At present, there are no accurate and quantitative methods for determining cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses the whole-heart ultrasound image sequence and segments the left and right atria and left and right ventricles in each frame. After the segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities across the image sequence are thereby obtained. The area change curves of the four cavities are then extracted, yielding the synchronization information of the four cavities. Because of the low SNR of ultrasound images, the boundary lines of the cardiac cavities are vague, so the extraction of cardiac contours remains a challenging problem. Therefore, ASM model information is added to the traditional level set method to constrain the curve evolution process. According to the experimental results, the improved method increases the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly according to the area changes. The synchronization of the four cavities of the heart is estimated based on the area changes and the volume changes.

  5. Developing Common Set of Weights with Considering Nondiscretionary Inputs and Using Ideal Point Method

    Directory of Open Access Journals (Sweden)

    Reza Kiani Mavi

    2013-01-01

    Full Text Available Data envelopment analysis (DEA) is used to evaluate the performance of decision making units (DMUs) with multiple inputs and outputs in a homogeneous group. The acquired relative efficiency score for each decision making unit lies between zero and one, and a number of units may share an equal efficiency score of one. DEA successfully divides them into two categories: efficient DMUs and inefficient DMUs. A ranking for inefficient DMUs is given, but DEA does not provide further information about the efficient DMUs. One of the popular methods for evaluating and ranking DMUs is the common set of weights (CSW) method. We generate a CSW model that considers nondiscretionary inputs, which are beyond the control of DMUs, using the ideal point method. The main idea of this approach is to minimize the distance between each evaluated decision making unit and the ideal decision making unit (the ideal point). Using an empirical example, we put our proposed model to the test by applying it to the data of 20 bank branches and rank their efficient units.
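A minimal sketch of the ideal-point idea, under simplifying assumptions (a linear total-deviation objective, toy data, no nondiscretionary-input treatment; the paper's exact model may differ): find one common set of weights that keeps every DMU's efficiency at most 1 while minimising the total distance of all DMUs from the ideal efficiency of 1.

```python
import numpy as np
from scipy.optimize import linprog

def common_weights(X, Y, eps=1e-6):
    """One common set of weights (v for inputs, u for outputs) minimising
    the total deviation sum_i (v.x_i - u.y_i) from the ideal point, subject
    to every DMU's efficiency u.y_i / v.x_i being at most 1."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    # objective: sum_i (v.x_i - u.y_i), linear in the weight vector [v, u]
    c = np.concatenate([X.sum(axis=0), -Y.sum(axis=0)])
    # feasibility: u.y_i - v.x_i <= 0 for every DMU i
    A_ub = np.hstack([-X, Y])
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(eps, None)] * (m + s))
    v, u = res.x[:m], res.x[m:]
    return (Y @ u) / (X @ v)   # common-weight efficiency of each DMU

# toy data: 4 bank branches, 2 inputs, 1 output (illustrative numbers)
X = [[10, 5], [8, 4], [12, 7], [9, 6]]
Y = [[100], [90], [110], [80]]
eff = common_weights(X, Y)
```

Because all DMUs are scored with the same weights, the resulting efficiencies give a full ranking, including among the DEA-efficient units.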

  6. The Visual Matrix Method: Imagery and Affect in a Group-Based Research Setting

    Directory of Open Access Journals (Sweden)

    Lynn Froggett

    2015-07-01

    Full Text Available The visual matrix is a method for researching shared experience, stimulated by sensory material relevant to a research question. It is led by imagery, visualization and affect, which in the matrix take precedence over discourse. The method enables the symbolization of imaginative and emotional material, which might not otherwise be articulated, and allows "unthought" dimensions of experience to emerge into consciousness in a participatory setting. We describe the process of the matrix with reference to the study "Public Art and Civic Engagement" (FROGGETT, MANLEY, ROY, PRIOR & DOHERTY, 2014), in which it was developed and tested. Subsequently, examples of its use in other contexts are provided. Both the matrix and post-matrix discussions are described, as is the interpretive process that follows. Theoretical sources are highlighted: its origins in social dreaming; the atemporal, associative nature of the thinking during and after the matrix, which we describe through the Deleuzian idea of the rhizome; and the hermeneutic analysis, which draws from object relations theory and the Lorenzerian tradition of scenic understanding. The matrix has been conceptualized as a "scenic rhizome" to account for its distinctive quality and hybrid origins in research practice. The scenic rhizome operates as a "third" between participants and the "objects" of contemplation. We suggest that some of the drawbacks of other group-based methods are avoided in the visual matrix, namely the tendency for inter-personal dynamics to dominate the event. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs150369

  7. A Novel Method for Predicting Anisakid Nematode Infection of Atlantic Cod Using Rough Set Theory.

    Science.gov (United States)

    Wąsikowska, Barbara; Sobecka, Ewa; Bielat, Iwona; Legierko, Monika; Więcaszek, Beata

    2018-03-01

    Atlantic cod (Gadus morhua L.) is one of the most important fish species in the fisheries industries of many countries; however, these fish are often infected with parasites. The detection of pathogenic larval nematodes is usually performed in fish processing facilities by visual examination using candling or by digesting muscles in artificial digestive juices, but these methods are both time- and labor-intensive. This article presents an innovative approach to the analysis of cod parasites from both the Atlantic and Baltic Sea areas through the application of rough set theory, one of the methods of artificial intelligence, for the prediction of food safety in a food production chain. The parasitological examinations focused on nematode larvae pathogenic to humans, e.g., Anisakis simplex, Contracaecum osculatum, and Pseudoterranova decipiens. The analysis allowed the identification of protocols with which it is possible to make preliminary estimates of the quantity and quality of parasites found in cod catches before detailed analyses are performed. The results indicate that the method used can be an effective analytical tool for these types of data. To achieve this goal, a database is needed that contains the patterns of intensity of parasite infections and the conditions of commercial fish species in different localities across their distributions.
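The rough-set machinery behind such predictions rests on lower and upper approximations of a concept by attribute-indiscernibility classes: catches that share the same condition-attribute values are indistinguishable, and a concept such as "heavily infected" is bracketed between what is certainly and what is possibly inside it. A self-contained sketch with toy data and hypothetical attributes:

```python
from collections import defaultdict

def approximations(objects, target):
    """Rough-set lower and upper approximations of `target`.

    objects: dict mapping object id -> tuple of condition-attribute values
             (e.g. catch locality, season); ids with identical tuples are
             indiscernible from one another.
    target:  set of object ids forming the concept of interest
             (e.g. 'infected with pathogenic larvae').
    """
    classes = defaultdict(set)
    for obj, signature in objects.items():
        classes[signature].add(obj)
    lower, upper = set(), set()
    for eq in classes.values():
        if eq <= target:
            lower |= eq    # every indiscernible object is in the concept
        if eq & target:
            upper |= eq    # at least one indiscernible object is
    return lower, upper

# hypothetical records: fish id -> (locality, season)
fish = {1: ('Baltic', 'spring'), 2: ('Baltic', 'spring'),
        3: ('Atlantic', 'autumn'), 4: ('Atlantic', 'spring')}
infected = {1, 3}
lower, upper = approximations(fish, infected)
```

The gap between the two approximations (here, the Baltic spring catches) is exactly the boundary region where the attributes alone cannot decide infection status, which is what limits a preliminary, pre-laboratory estimate.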

  8. [Proposal of a method for collective analysis of work-related accidents in the hospital setting].

    Science.gov (United States)

    Osório, Claudia; Machado, Jorge Mesquita Huet; Minayo-Gomez, Carlos

    2005-01-01

    The article presents a method for the analysis of work-related accidents in hospitals, with the double aim of analyzing accidents in light of actual work activity and enhancing the vitality of the various professions that comprise hospital work. This process involves both research and intervention, combining knowledge output with training of health professionals, fostering expanded participation by workers in managing their daily work. The method consists of stimulating workers to recreate the situation in which a given accident occurred, shifting themselves to the position of observers of their own work. In the first stage of analysis, workers are asked to show the work analyst how the accident occurred; in the second stage, the work accident victim and analyst jointly record the described series of events in a diagram; in the third, the resulting record is re-discussed and further elaborated; in the fourth, the work accident victim and analyst evaluate and implement measures aimed to prevent the accident from recurring. The article concludes by discussing the method's possibilities and limitations in the hospital setting.

  9. Setting health research priorities using the CHNRI method: I. Involving funders

    Directory of Open Access Journals (Sweden)

    Igor Rudan

    2016-06-01

    Full Text Available In 2007 and 2008, the World Health Organization's Department for Child and Adolescent Health and Development commissioned five large research priority setting exercises using the CHNRI (Child Health and Nutrition Research Initiative method. The aim was to define research priorities related to the five major causes of child deaths for the period up to the year 2015. The selected causes were childhood pneumonia, diarrhoea, birth asphyxia, neonatal infections and preterm birth/low birth weight. The criteria used for prioritization in all five exercises were the “standard” CHNRI criteria: answerability, effectiveness, deliverability, potential for mortality burden reduction and the effect on equity. Having completed the exercises, the WHO officers were left with another question: how “fundable” were the identified priorities, i.e. how attractive were they to research funders?

  10. Methods for intraoperative, sterile pose-setting of patient-specific microstereotactic frames

    Science.gov (United States)

    Vollmann, Benjamin; Müller, Samuel; Kundrat, Dennis; Ortmaier, Tobias; Kahrs, Lüder A.

    2015-03-01

    This work proposes new methods for a microstereotactic frame based on bone cement fixation. Microstereotactic frames are under investigation for minimally invasive temporal bone surgery, e.g. cochlear implantation, and for deep brain stimulation, where products are already on the market. The correct pose of the microstereotactic frame is adjusted either outside or inside the operating room, and the frame is then used for, e.g., drill or electrode guidance. We present a patient-specific, disposable frame that allows intraoperative, sterile pose-setting. The key idea of our approach is bone cement between two plates that cures while the plates are positioned in the desired pose by a mechatronic system. This paper includes new designs of microstereotactic frames, a system for alignment and first measurements to analyze accuracy and applicable load.

  11. Auditing local methods for quality assurance in radiotherapy using the same set of predefined treatment plans

    Directory of Open Access Journals (Sweden)

    Enrica Seravalli

    2018-01-01

    Full Text Available Background and purpose: Local implementation of plan-specific quality assurance (QA) methods for intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) treatment plans may vary because of dissimilarities in procedures, equipment and software. The purpose of this work is to detect possible differences between local QA findings and those of an audit, using the same set of treatment plans. Methods: A pre-defined set of clinical plans was devised and imported into each participating institute’s treatment planning system for dose computation. The dose distribution was measured using an ionisation chamber, radiochromic film and an ionisation chamber array. The centres performed their own QA, which was compared to the audit findings. The agreement/disagreement between the audit and the institute QA results was assessed, along with the differences between the dose distributions measured by the audit team and computed by the institute. Results: For the majority of the cases, the results of the audit were in agreement with the institute QA findings: ionisation chamber: 92%, array: 88%, film: 76% of the total measurements. In only a few of these cases did the evaluated measurements fail for both: ionisation chamber: 2%, array: 4%, film: 0% of the total measurements. Conclusion: Using predefined treatment plans, we found that in approximately 80% of the evaluated measurements the results of local QA of IMRT and VMAT plans were in line with the findings of the audit. However, the percentage of agreement/disagreement depended on the characteristics of the measurement equipment used and on the analysis metric. Keywords: Quality assurance, Dosimetry audit, IMRT, VMAT, QA devices

  12. Developing an objective evaluation method to estimate diabetes risk in community-based settings.

    Science.gov (United States)

    Kenya, Sonjia; He, Qing; Fullilove, Robert; Kotler, Donald P

    2011-05-01

    Exercise interventions often aim to affect abdominal obesity and glucose tolerance, two significant risk factors for type 2 diabetes. Because of limited financial and clinical resources in community and university-based environments, intervention effects are often measured with interviews or questionnaires and correlated with weight loss or body fat indicated by body bioimpedance analysis (BIA). However, self-reported assessments are subject to high levels of bias and low levels of reliability. Because obesity and body fat are correlated with diabetes at different levels in various ethnic groups, data reflecting changes in weight or fat do not necessarily indicate changes in diabetes risk. To determine how exercise interventions affect diabetes risk in community and university-based settings, improved evaluation methods are warranted. We compared a noninvasive, objective measurement technique, regional BIA, with whole-body BIA for its ability to assess abdominal obesity and predict glucose tolerance in 39 women. To determine regional BIA's utility in predicting glucose, we tested the association between the regional BIA method and blood glucose levels. Regional BIA estimates of abdominal fat area were significantly correlated (r = 0.554, P < 0.003) with fasting glucose. When waist circumference and family history of diabetes were added to abdominal fat in multiple regression models, the association with glucose increased further (r = 0.701, P < 0.001). Regional BIA estimates of abdominal fat may predict fasting glucose better than whole-body BIA, as well as provide an objective assessment of changes in diabetes risk achieved through physical activity interventions in community settings.
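The two analysis steps described, a single-predictor correlation followed by a multiple regression that adds waist circumference and family history, can be sketched on synthetic data. All numbers below are illustrative stand-ins, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 39                                          # matches the study's sample size
fat = rng.normal(100.0, 20.0, n)                # abdominal fat area (cm^2), synthetic
waist = 0.3 * fat + rng.normal(60.0, 5.0, n)    # waist circumference (cm), synthetic
history = rng.integers(0, 2, n).astype(float)   # family history of diabetes (0/1)
glucose = 0.2 * fat + 0.3 * waist + 5.0 * history + rng.normal(0.0, 5.0, n)

# step 1: single-predictor association (cf. r = 0.554 in the abstract)
r = np.corrcoef(fat, glucose)[0, 1]

# step 2: multiple regression adding waist and family history (cf. r = 0.701)
X = np.column_stack([np.ones(n), fat, waist, history])
beta, *_ = np.linalg.lstsq(X, glucose, rcond=None)
multiple_r = np.corrcoef(X @ beta, glucose)[0, 1]
```

Because the single predictor is also in the multiple model, the multiple correlation can never fall below the single-predictor correlation; whether the increase is meaningful is what the study's significance tests address.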

  13. Design and methods for evaluating an early childhood obesity prevention program in the childcare center setting

    Directory of Open Access Journals (Sweden)

    Natale Ruby

    2013-01-01

    Full Text Available Abstract Background Many unhealthy dietary and physical activity habits that foster the development of obesity are established by the age of five. Approximately 70 percent of children in the United States are currently enrolled in early childcare facilities, making this an ideal setting in which to implement and evaluate childhood obesity prevention efforts. We describe here the methods for conducting an obesity prevention randomized trial in the child care setting. Methods/design A randomized, controlled obesity prevention trial is currently being conducted over a three-year period (2010-present). The sample consists of 28 low-income, ethnically diverse child care centers with 1105 children (60% Hispanic, 15% Haitian, 12% Black, 2% non-Hispanic White; 71% of caregivers were born outside of the US). The purpose is to test the efficacy of a parent and teacher role-modeling intervention on children’s nutrition and physical activity behaviors. The Healthy Caregivers-Healthy Children (HC2) intervention arm schools received a combination of (1) a daily curriculum for teachers/parents (the nutritional gatekeepers); (2) a daily curriculum for children; (3) technical assistance with meal and snack menu modifications, such as including more fresh and less canned produce; and (4) creation of a center policy on dietary requirements for meals and snacks, physical activity and screen time. Control arm schools received an attention control safety curriculum. Major outcome measures include pre-post changes in child body mass index percentile and z-score; fruit, vegetable and other nutritious food intake; amount of physical activity; and parental nutrition and physical activity knowledge, attitudes, and beliefs, defined by intentions and behaviors. All measures were administered at the beginning and end of the school year for years one and two of the study, for a total of 4 longitudinal assessment time points.

  14. Analyzing Planck and low redshift data sets with advanced statistical methods

    Science.gov (United States)

    Eifler, Tim

    The recent ESA/NASA Planck mission has provided a key data set to constrain cosmology that is most sensitive to physics of the early Universe, such as inflation and primordial Non-Gaussianity (Planck 2015 results XIII). In combination with cosmological probes of the Large-Scale Structure (LSS), the Planck data set is a powerful source of information to investigate late-time phenomena (Planck 2015 results XIV), e.g. the accelerated expansion of the Universe, the impact of baryonic physics on the growth of structure, and the alignment of galaxies in their dark matter halos. It is the main objective of this proposal to re-analyze the archival Planck data, 1) with different, more recently developed statistical methods for cosmological parameter inference, and 2) to combine Planck and ground-based observations in an innovative way. We will make the corresponding analysis framework publicly available and believe that it will set a new standard for future CMB-LSS analyses. Advanced statistical methods, such as the Gibbs sampler (Jewell et al 2004, Wandelt et al 2004), have been critical in the analysis of Planck data. More recently, Approximate Bayesian Computation (ABC; see Weyant et al 2012, Akeret et al 2015, Ishida et al 2015 for cosmological applications) has matured into an interesting tool for cosmological likelihood analyses. It circumvents several assumptions that enter the standard Planck (and most LSS) likelihood analyses, most importantly the assumption that the likelihood of the CMB observables is a multivariate Gaussian. Beyond applying new statistical methods to Planck data in order to cross-check and validate existing constraints, we plan to combine Planck and DES data in a new and innovative way and run multi-probe likelihood analyses of CMB and LSS observables. The complexity of multi-probe likelihood analyses scales (non-linearly) with the level of correlations amongst the individual probes that are included. For the multi
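The ABC idea mentioned above, rejection sampling that replaces likelihood evaluation with a simulate-and-compare step, can be sketched in a few lines on a toy Gaussian model; the model, tolerance and sample counts below are all illustrative assumptions:

```python
import random

def abc_rejection(observed, prior_sampler, simulate, eps, n_draws=20000):
    """Approximate Bayesian Computation by rejection: keep a parameter draw
    when its simulated summary statistic lies within `eps` of the observed
    one. No likelihood is evaluated, which is the point: ABC sidesteps the
    multivariate-Gaussian likelihood assumption discussed above."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if abs(simulate(theta) - observed) < eps:
            accepted.append(theta)
    return accepted

random.seed(1)
# toy model: the summary statistic is a sample mean ~ Normal(theta, 1/sqrt(50))
observed = 0.5                                       # pretend true theta = 0.5
prior = lambda: random.uniform(-2.0, 2.0)            # flat prior on theta
simulate = lambda theta: random.gauss(theta, 50 ** -0.5)
posterior = abc_rejection(observed, prior, simulate, eps=0.1)
estimate = sum(posterior) / len(posterior)
```

The accepted draws approximate the posterior; shrinking `eps` sharpens the approximation at the cost of a lower acceptance rate, which is the practical trade-off in real CMB-LSS applications.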

  15. Logarithmic Similarity Measure between Interval-Valued Fuzzy Sets and Its Fault Diagnosis Method

    Directory of Open Access Journals (Sweden)

    Zhikang Lu

    2018-02-01

    Full Text Available Fault diagnosis is an important task for the normal operation and maintenance of equipment. In many real situations, the diagnosis data cannot provide deterministic values and are usually imprecise or uncertain. Thus, interval-valued fuzzy sets (IVFSs) are very suitable for expressing imprecise or uncertain fault information in real problems. However, the existing literature scarcely deals with fault diagnosis problems involving IVFSs, such as those of gasoline engines and steam turbines. Meanwhile, the similarity measure is one of the important tools in fault diagnosis. Therefore, this paper proposes a new similarity measure of IVFSs based on a logarithmic function, and a corresponding fault diagnosis method, for the first time. By the logarithmic similarity measure between the fault knowledge and some diagnosis-testing samples with interval-valued fuzzy information, and its relation indices, we can determine the fault type and the ranking order of faults corresponding to the relation indices. Then, the misfire fault diagnosis of a gasoline engine and the vibrational fault diagnosis of a turbine are presented to demonstrate the simplicity and effectiveness of the proposed diagnosis method. The fault diagnosis results for the gasoline engine and steam turbine show that the proposed method not only gives the main fault types of the gasoline engine and steam turbine but also provides useful information for multi-fault analyses and for predicting future fault trends. Hence, the logarithmic similarity measure and its fault diagnosis method are the main contributions of this study, and they provide a useful new way to perform fault diagnosis with interval-valued fuzzy information.
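One plausible logarithmic similarity of this kind (an illustrative form only; the paper's exact definition may differ) scores 1 for identical interval-valued sets and decays with the interval-endpoint differences, and the fault type is then read off as the fault pattern with the highest similarity to the test sample:

```python
import math

def log_similarity(A, B):
    """Logarithmic similarity between two interval-valued fuzzy sets.

    A, B: lists of (lower, upper) membership intervals, all values in [0, 1].
    Illustrative form:
        S = 1 - (1 / 2n) * sum(log2(1 + |dl|) + log2(1 + |du|))
    where dl, du are the lower/upper endpoint differences. S = 1 for
    identical sets, and S >= 0 because log2(1 + d) <= 1 when d <= 1.
    """
    n = len(A)
    total = sum(math.log2(1 + abs(al - bl)) + math.log2(1 + abs(au - bu))
                for (al, au), (bl, bu) in zip(A, B))
    return 1 - total / (2 * n)

# hypothetical fault-knowledge patterns vs. one diagnosis-testing sample
fault1 = [(0.2, 0.4), (0.7, 0.9)]    # e.g. 'misfire' signature
fault2 = [(0.0, 0.1), (0.1, 0.3)]    # e.g. 'normal' signature
sample = [(0.25, 0.45), (0.65, 0.85)]
```

Here the sample is far closer to `fault1` than to `fault2`, so a diagnosis rule that picks the maximum similarity would flag the first fault type; the full ranking of similarities is what supports multi-fault analysis.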

  16. The development of a patient-specific method for physiotherapy goal setting: a user-centered design.

    Science.gov (United States)

    Stevens, Anita; Köke, Albère; van der Weijden, Trudy; Beurskens, Anna

    2018-08-01

    To deliver client-centered care, physiotherapists need to identify the patients' individual treatment goals. However, practical tools for involving patients in goal setting are lacking. The purpose of this study was to improve the frequently used Patient-Specific Complaints instrument in Dutch physiotherapy, and to develop it into a feasible method to improve physiotherapy goal setting. An iterative user-centered design was conducted in co-creation with physiotherapists and patients, in three phases. Their needs and preferences were identified by means of group meetings and questionnaires. The new method was tested in several field tests in physiotherapy practices. Four main objectives for improvement were formulated: clear instructions for the administration procedure, targeted use across the physiotherapy process, client-activating communication skills, and a client-centered attitude of the physiotherapist. A theoretical goal-setting framework and elements of shared decision making were integrated into the new method, called the Patient-Specific Goal-setting method, together with a practical training course. The user-centered approach resulted in a goal-setting method that is fully integrated into the physiotherapy process. The new goal-setting method contributes to a more structured approach to goal setting and enables patient participation and goal-oriented physiotherapy. Before large-scale implementation, its feasibility in physiotherapy practice needs to be investigated. Implications for rehabilitation Involving patients and physiotherapists in the development and testing of a goal-setting method increases the likelihood of its feasibility in practice. The integration of a goal-setting method into the physiotherapy process offers the opportunity to focus more fully on the patient's goals. Patients should be informed about the aim of every step of the goal-setting process in order to increase their awareness and involvement.
Training physiotherapists to use a patient

  17. Governance of professional nursing practice in a hospital setting: a mixed methods study

    Directory of Open Access Journals (Sweden)

    José Luís Guedes dos Santos

    2015-12-01

    Full Text Available Objective: to elaborate an interpretative model for the governance of professional nursing practice in a hospital setting. Method: a mixed methods study with concurrent triangulation strategy, using data from a cross-sectional study with 106 nurses and a Grounded Theory study with 63 participants. The quantitative data were collected through the Brazilian Nursing Work Index - Revised and underwent descriptive statistical analysis. Qualitative data were obtained from interviews and analyzed through initial, selective and focused coding. Results: based on the results obtained with the Brazilian Nursing Work Index - Revised, it is possible to state that nurses perceived that they had autonomy, control over the environment, good relationships with physicians and organizational support for nursing governance. The governance of the professional nursing practice is based on the management of nursing care and services carried out by the nurses. To perform these tasks, nurses aim to get around the constraints of the organizational support and develop management knowledge and skills. Conclusion: it is important to reorganize the structures and processes of nursing governance, especially the support provided by the organization for the management practices of nurses.

  18. Modeling of Two-Phase Flow in Rough-Walled Fracture Using Level Set Method

    Directory of Open Access Journals (Sweden)

    Yunfeng Dai

    2017-01-01

    Full Text Available To describe accurately the flow characteristics of fracture-scale displacements of immiscible fluids, an incompressible two-phase (crude oil and water) flow model incorporating interfacial forces and nonzero contact angles is developed. The roughness of the two-dimensional synthetic rough-walled fractures is controlled with different fractal dimension parameters. Described by the Navier–Stokes equations, the moving interface between crude oil and water is tracked using the level set method. The method accounts for differences in the densities and viscosities of crude oil and water and includes the effect of interfacial force. The wettability of the rough fracture wall is taken into account by defining the contact angle and slip length. The curve of invasion pressure versus water volume fraction is generated by modeling two-phase flow during a sudden drainage. The volume fraction of water retained in the rough-walled fracture is calculated by integrating the water volume and dividing by the total cavity volume of the fracture while the two-phase flow is quasistatic. The effects of the invasion pressure of crude oil, the roughness of the fracture wall, and the wettability of the wall on two-phase flow in a rough-walled fracture are evaluated.
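Independent of the full Navier–Stokes coupling, the core level-set idea, advecting a signed distance function whose zero contour is the fluid-fluid interface, can be sketched in one dimension with an assumed constant interface speed (the paper's solver computes this speed from the flow field instead):

```python
import numpy as np

# 1-D sketch of level-set interface tracking: phi < 0 in the invading
# (oil) phase, phi > 0 ahead of it; the interface is the zero level set.
nx = 400
dx = 1.0 / nx
x = np.arange(nx) * dx
u = 0.3                      # interface speed (assumed constant here)
phi = x - 0.2                # signed distance; interface starts at x = 0.2

dt = 0.5 * dx / u            # CFL-stable time step
steps = 300
for _ in range(steps):
    # first-order upwind discretisation of phi_t + u * phi_x = 0 (u > 0)
    phi[1:] = phi[1:] - dt * u * (phi[1:] - phi[:-1]) / dx
    phi[0] = phi[1] - dx     # inflow boundary: keep unit slope

interface = x[np.argmin(np.abs(phi))]   # expect roughly 0.2 + u * steps * dt
```

Because the interface is implicit in `phi`, topology changes (fingering, snap-off in a rough fracture) need no special handling, which is the main reason level-set methods suit this problem.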

  19. Measurement of thermally ablated lesions in sonoelastographic images using level set methods

    Science.gov (United States)

    Castaneda, Benjamin; Tamez-Pena, Jose Gerardo; Zhang, Man; Hoyt, Kenneth; Bylund, Kevin; Christensen, Jared; Saad, Wael; Strang, John; Rubens, Deborah J.; Parker, Kevin J.

    2008-03-01

    The capability of sonoelastography to detect lesions based on elasticity contrast can be applied to monitor the creation of thermally ablated lesions. Currently, segmentation of lesions depicted in sonoelastographic images is performed manually, which can be a time-consuming process and prone to significant intra- and inter-observer variability. This work presents a semi-automated segmentation algorithm for sonoelastographic data. The user starts by planting a seed in the perceived center of the lesion. Fast marching methods use this information to create an initial estimate of the lesion. Subsequently, level set methods refine its final shape by attaching the segmented contour to edges in the image while maintaining smoothness. The algorithm is applied to in vivo sonoelastographic images of twenty-five thermally ablated lesions created in porcine livers. The estimated area is compared to results from manual segmentation and gross pathology images. Results show that the algorithm outperforms manual segmentation in accuracy and in inter- and intra-observer variability. The processing time per image is significantly reduced.

  20. Foreshock search over a long duration using a method of setting appropriate criteria

    Science.gov (United States)

    Toyomoto, Y.; Kawakata, H.; Hirano, S.; Doi, I.

    2016-12-01

    Recently, small foreshocks have been detected using cross-correlation techniques (e.g., Bouchon et al., 2011), in which a foreshock is identified when the cross-correlation coefficient (CC) exceeds a certain threshold. For some shallow intraplate earthquakes, foreshocks whose hypocenters were estimated to be adjacent to the main shock hypocenter were detected from several tens of minutes before the main shock occurrence (Doi and Kawakata, 2012; 2013). At least two problems remain in the cross-correlation techniques employed. First, previous studies on foreshocks used data whose durations are at most a month (Kato et al., 2013); this is insufficient to check whether such events occurred only before the main shock. Second, CC is used as the detection criterion without considering the validity of the threshold. In this study, we search for foreshocks of an M 5.4 earthquake in central Nagano prefecture in Japan on June 30, 2011, using as a template the vertical-component waveform at station N.MWDH (Hi-net) from one of the cataloged foreshocks (M 1). We calculate CC between the template and continuous waveforms of the same component at the same station for two years before the main shock occurrence, and we try to overcome the problems mentioned above. We find that the histogram of CC is well modeled by a normal distribution, which is similar to previous studies on tremors (e.g., Ohta and Ide, 2008). According to the model, the expected number of misdetections is less than 1 when CC > 0.63. Therefore, we regard a waveform as due to a foreshock when CC > 0.63. As a result, foreshocks are detected only within the thirteen hours immediately before the main shock occurrence for the two years. By setting an appropriate threshold, we conclude that foreshocks just before the main shock occurrence are not stationary events. Acknowledgments: We use continuous waveform records of NIED high sensitivity seismograph network in Japan (Hi-net) and the JMA
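The thresholding step generalises: fit a normal distribution to the CC histogram, then pick the smallest threshold whose expected number of chance exceedances over all correlation windows stays below one. A sketch of that computation (the window count and fitted parameters below are assumptions for illustration, not the study's values):

```python
from statistics import NormalDist

def cc_threshold(mu, sigma, n_windows, expected_false=1.0):
    """CC detection threshold such that, under a fitted normal model of
    the CC histogram, the expected number of chance exceedances over
    n_windows correlation windows stays below `expected_false`."""
    p = expected_false / n_windows          # allowed per-window tail probability
    return NormalDist(mu, sigma).inv_cdf(1.0 - p)

# e.g. assume ~6.3e7 sliding correlation windows over two years and a
# fitted N(0, 0.08) model of the background CC values
thr = cc_threshold(0.0, 0.08, 6.3e7)
```

Lengthening the search period raises the number of windows and hence the threshold, which is exactly why a threshold validated for a one-month search cannot simply be reused for a two-year search.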

  1. On the modeling of bubble evolution and transport using coupled level-set/CFD method

    International Nuclear Information System (INIS)

    Bartlomiej Wierzbicki; Steven P Antal; Michael Z Podowski

    2005-01-01

    Full text of publication follows: The ability to predict the shape of gas/liquid/solid interfaces is important for various multiphase flow and heat transfer applications. Specific issues of interest to nuclear reactor thermal-hydraulics include the evolution of the shape of bubbles attached to solid surfaces during nucleation, bubble/surface interactions in complex geometries, etc. Additional problems, making the overall task even more complicated, are associated with the effect of material properties that may be significantly altered by the addition of minute amounts of impurities, such as surfactants or nano-particles. The present paper is concerned with the development of an innovative approach to model the time-dependent shape of gas/liquid interfaces in the presence of solid walls. The proposed approach combines a modified level-set method with an advanced CFD code, NPHASE. The coupled numerical solver can be used to simulate the evolution of gas/liquid interfaces in two-phase flows for a variety of geometries and flow conditions, from individual bubbles to free surfaces (stratified flows). The issues discussed in the full paper will include: a description of the novel aspects of the proposed level-set-based method, an overview of the NPHASE code modeling framework, and a description of the coupling method between these two elements of the overall model. Particular attention will be given to the consistency and completeness of the model formulation for the interfacial phenomena near the liquid/gas/solid triple line, and to the impact of the proposed numerical approach on the accuracy and consistency of predictions. The accuracy will be measured in terms of both the calculated shape of the interfaces and the gas and liquid velocity fields around the interfaces and in the entire computational domain. The results of model testing and validation will also be shown in the full paper.
The situations analyzed will include: bubbles of different sizes and varying

  2. Using the expected detection delay to assess the performance of different multivariate statistical process monitoring methods for multiplicative and drift faults.

    Science.gov (United States)

    Zhang, Kai; Shardt, Yuri A W; Chen, Zhiwen; Peng, Kaixiang

    2017-03-01

The expected detection delay (EDD) index for measuring the performance of multivariate statistical process monitoring (MSPM) methods for constant additive faults was recently developed. This paper, based on a statistical investigation of the T²- and Q-test statistics, extends the EDD index to the multiplicative and drift fault cases and uses it to assess the performance of common MSPM methods that adopt these two test statistics. Based on how they use the measurement space, these methods can be divided into two groups: those which consider the complete measurement space, for example, principal component analysis-based methods, and those which only consider some subspace that reflects changes in key performance indicators, such as partial least squares-based methods. Furthermore, a generic form in which they use the T²- and Q-test statistics is given. With the extended EDD index, the performance of these methods in detecting drift and multiplicative faults is assessed using both numerical simulations and the Tennessee Eastman process.
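The T²/Q split described above can be sketched in a few lines. The following is an illustrative PCA monitor on synthetic data; the variable count, retained components, and fault pattern are assumptions for the sketch, not the paper's Tennessee Eastman setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "normal operation" data: 500 samples of 5 correlated variables
# (illustrative only; not the paper's data).
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))
mean, std = X.mean(axis=0), X.std(axis=0)
Xs = (X - mean) / std

# PCA via SVD; keep k principal components.
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
k = 2
P = Vt[:k].T                          # loadings of the principal subspace
lam = (s[:k] ** 2) / (len(Xs) - 1)    # retained eigenvalues

def t2_q(x):
    """T^2 in the principal subspace and Q (squared prediction error) in the residual."""
    xs = (x - mean) / std
    t = P.T @ xs
    t2 = float(t @ (t / lam))         # Hotelling's T^2
    r = xs - P @ t                    # residual part
    return t2, float(r @ r)           # (T^2, Q)

# Nominal sample versus the same sample with a multiplicative fault on variable 3.
t2_nominal, q_nominal = t2_q(X[0])
t2_fault, q_fault = t2_q(X[0] * np.array([1.0, 1.0, 3.0, 1.0, 1.0]))
```

A multiplicative fault that scales one variable tends to inflate Q, while excursions inside the principal subspace show up in T²; the EDD index then measures how many samples pass, on average, before the chosen statistic first exceeds its control limit.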

  3. Methods for sampling geographically mobile female traders in an East African market setting

    Science.gov (United States)

    Achiro, Lillian; Kwena, Zachary A.; McFarland, Willi; Neilands, Torsten B.; Cohen, Craig R.; Bukusi, Elizabeth A.; Camlin, Carol S.

    2018-01-01

Background: The role of migration in the spread of HIV in sub-Saharan Africa is well documented. Yet migration and HIV research have often focused on HIV risks to male migrants and their partners, or migrants overall, often failing to measure the risks to women via their direct involvement in migration. Inconsistent measures of mobility, gender biases in those measures, and limited data sources for sex-specific population-based estimates of mobility have contributed to a paucity of research on the HIV prevention and care needs of migrant and highly mobile women. This study addresses an urgent need for novel methods for developing probability-based, systematic samples of highly mobile women, focusing on a population of female traders operating out of one of the largest open-air markets in East Africa. Our method involves three stages: (1) identification and mapping of all market stall locations using Global Positioning System (GPS) coordinates; (2) using female market vendor stall GPS coordinates to build the sampling frame using replicates; and (3) using maps and GPS data for recruitment of study participants. Results: The locations of 6,390 vendor stalls were mapped using GPS. Of these, 4,064 stalls occupied by women (63.6%) were used to draw four replicates of 128 stalls each, and a fifth replicate of 15 pre-selected random alternates, for a total of 527 stalls assigned to one of five replicates. Staff visited 323 stalls from the first three replicates and from these successfully recruited 306 female vendors into the study, for a participation rate of 94.7%. Mobilization strategies and involving traders association representatives in participant recruitment were critical to the study's success. Conclusion: The study's high participation rate suggests that this geospatial sampling method holds promise for the development of probability-based samples in other settings that serve as transport hubs for highly mobile populations. PMID:29324780
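The frame-building step (stage 2) is easy to sketch: draw a simple random sample from the mapped female-occupied stalls and partition it into fixed-size replicates plus a pool of alternates. The stall identifiers below are hypothetical; only the counts mirror the abstract:

```python
import random

random.seed(42)

# Sampling frame: mapped stalls occupied by women (IDs are illustrative).
female_stalls = [f"stall_{i}" for i in range(4064)]

def draw_replicates(frame, n_replicates=4, size=128, n_alternates=15):
    """Draw one simple random sample and split it into fixed-size
    replicates plus a pool of pre-selected random alternates."""
    sample = random.sample(frame, n_replicates * size + n_alternates)
    replicates = [sample[i * size:(i + 1) * size] for i in range(n_replicates)]
    alternates = sample[n_replicates * size:]
    return replicates, alternates

replicates, alternates = draw_replicates(female_stalls)
```

Because `random.sample` draws without replacement, no stall appears twice across the replicates, and field teams can work through the replicates in order until the target sample size is reached.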

  4. Design and methods for evaluating an early childhood obesity prevention program in the childcare center setting.

    Science.gov (United States)

    Natale, Ruby; Scott, Stephanie Hapeman; Messiah, Sarah E; Schrack, Maria Mesa; Uhlhorn, Susan B; Delamater, Alan

    2013-01-28

Many unhealthy dietary and physical activity habits that foster the development of obesity are established by the age of five. Approximately 70 percent of children in the United States are currently enrolled in early childcare facilities, making this an ideal setting in which to implement and evaluate childhood obesity prevention efforts. We describe here the methods for conducting an obesity prevention randomized trial in the child care setting. A randomized, controlled obesity prevention trial is currently being conducted over a three-year period (2010-present). The sample consists of 28 low-income, ethnically diverse child care centers with 1105 children (the sample is 60% Hispanic, 15% Haitian, 12% Black, 2% non-Hispanic White, and 71% of caregivers were born outside of the US). The purpose is to test the efficacy of a parent and teacher role-modeling intervention on children's nutrition and physical activity behaviors. The Healthy Caregivers-Healthy Children (HC2) intervention arm schools received a combination of (1) a daily curriculum for teachers/parents (the nutritional gatekeepers); (2) a daily curriculum for children; (3) technical assistance with meal and snack menu modifications, such as including more fresh and less canned produce; and (4) creation of a center policy on dietary requirements for meals and snacks, physical activity, and screen time. Control arm schools received an attention control safety curriculum. Major outcome measures include pre-post changes in child body mass index percentile and z score; fruit, vegetable, and other nutritious food intake; amount of physical activity; and parental nutrition and physical activity knowledge, attitudes, and beliefs, defined by intentions and behaviors. All measures were administered at the beginning and end of the school year for years one and two of the study, for a total of four longitudinal assessment time points. Although few attempts have been made to prevent obesity

  5. Subjective expected utility without preferences

    OpenAIRE

    Bouyssou , Denis; Marchant , Thierry

    2011-01-01

    This paper proposes a theory of subjective expected utility based on primitives only involving the fact that an act can be judged either "attractive" or "unattractive". We give conditions implying that there are a utility function on the set of consequences and a probability distribution on the set of states such that attractive acts have a subjective expected utility above some threshold. The numerical representation that is obtained has strong uniqueness properties.
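In symbols, the representation result can be sketched as follows (the notation here is assumed for illustration: $u$ a utility function on consequences, $p$ a subjective probability on the set of states $S$, and $\theta$ the threshold):

```latex
a \text{ is attractive} \quad\Longleftrightarrow\quad \sum_{s \in S} p(s)\, u\bigl(a(s)\bigr) \;\ge\; \theta
```

The primitives are only the attractive/unattractive judgments; the utility, probability, and threshold are derived objects whose strong uniqueness properties the paper establishes.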

  6. Anomalous vacuum expectation values

    International Nuclear Information System (INIS)

    Suzuki, H.

    1986-01-01

The anomalous vacuum expectation value is defined as the expectation value of a quantity that vanishes by means of the field equations. Although this value is expected to vanish in quantum systems, regularization in general produces a finite value. Calculation of this anomalous vacuum expectation value can be carried out in the general framework of field theory. The result is derived by subtraction of divergences and by zeta-function regularization. Various anomalies are contained in these anomalous vacuum expectation values. This method is useful for deriving not only the conformal, chiral, and gravitational anomalies but also the supercurrent anomaly. The supercurrent anomaly is obtained in the case of N = 1 supersymmetric Yang-Mills theory in four, six, and ten dimensions. The original forms of the energy-momentum tensor and the supercurrent have anomalies in their conservation laws, but modifying these quantities to be on-shell equivalent to the originals removes the anomalies from their conservation laws and instead gives rise to anomalous traces.

  7. The structure and material composition of ossified aortic valves identified using a set of scientific methods

    Science.gov (United States)

    Zeman, Antonín; Šmíd, Michal; Havelcová, Martina; Coufalová, Lucie; Kučková, Štěpánka; Velčovská, Martina; Hynek, Radovan

    2013-11-01

Degenerative aortic stenosis has become a common and dangerous disease in recent decades. This disease leads to the mineralization of aortic valves, their gradual thickening, and loss of functionality. We carried out a detailed assessment of the proportion and composition of inorganic and organic components in ossified aortic valves, using a set of analytical methods applied in science: polarized light microscopy, scanning electron microscopy, X-ray fluorescence, X-ray diffraction, gas chromatography/mass spectrometry, and liquid chromatography-tandem mass spectrometry. The sample valves showed the occurrence of phosphorus and calcium in the form of phosphate and calcium carbonate, hydroxyapatite, fluorapatite, and hydroxy-fluorapatite, with the content of inorganic components varying from 65 to 90 wt% according to the phased development of the degenerative disability. The outer layers of the plaque contained an organic component with peptide bonds, fatty acids, proteins, and cholesterol. The results show a correlation between the formation of fluorapatite in aortic valves and in other parts of the human body associated with bone formation.

  8. Governance of professional nursing practice in a hospital setting: a mixed methods study.

    Science.gov (United States)

    dos Santos, José Luís Guedes; Erdmann, Alacoque Lorenzini

    2015-01-01

    To elaborate an interpretative model for the governance of professional nursing practice in a hospital setting. A mixed methods study with concurrent triangulation strategy, using data from a cross-sectional study with 106 nurses and a Grounded Theory study with 63 participants. The quantitative data were collected through the Brazilian Nursing Work Index - Revised and underwent descriptive statistical analysis. Qualitative data were obtained from interviews and analyzed through initial, selective and focused coding. Based on the results obtained with the Brazilian Nursing Work Index - Revised, it is possible to state that nurses perceived that they had autonomy, control over the environment, good relationships with physicians and organizational support for nursing governance. The governance of the professional nursing practice is based on the management of nursing care and services carried out by the nurses. To perform these tasks, nurses aim to get around the constraints of the organizational support and develop management knowledge and skills. It is important to reorganize the structures and processes of nursing governance, especially the support provided by the organization for the management practices of nurses.

  9. A comparison of methods to separate treatment from self-selection effects in an online banking setting

    NARCIS (Netherlands)

    Gensler, S.; Leeflang, P.S.H.; Skiera, B.

    The literature discusses several methods to control for self-selection effects but provides little guidance on which method to use in a setting with a limited number of variables. The authors theoretically compare and empirically assess the performance of different matching methods and instrumental

  10. Different methods and settings for glucose monitoring for gestational diabetes during pregnancy.

    Science.gov (United States)

    Raman, Puvaneswary; Shepherd, Emily; Dowswell, Therese; Middleton, Philippa; Crowther, Caroline A

    2017-10-29

Incidence of gestational diabetes mellitus (GDM) is increasing worldwide. Blood glucose monitoring plays a crucial part in maintaining glycaemic control in women with GDM and is generally recommended by healthcare professionals. There are several different methods for monitoring blood glucose, which can be carried out in different settings (e.g. at home versus in hospital). The objective of this review is to compare the effects of different methods and settings for glucose monitoring for women with GDM on maternal and fetal, neonatal, child and adult outcomes, and on the use and costs of health care. We searched the Cochrane Pregnancy and Childbirth Group Trials Register (30 September 2016) and reference lists of retrieved studies. We included randomised controlled trials (RCTs) or quasi-randomised controlled trials (qRCTs) comparing different methods (such as timings and frequencies) or settings, or both, for blood glucose monitoring for women with GDM. Two authors independently assessed study eligibility and risk of bias, and extracted data. Data were checked for accuracy. We assessed the quality of the evidence for the main comparisons using GRADE, for: primary outcomes for mothers, that is, hypertensive disorders of pregnancy, caesarean section, and type 2 diabetes; primary outcomes for children, that is, large-for-gestational age, perinatal mortality, death or serious morbidity composite, and childhood/adulthood neurosensory disability; secondary outcomes for mothers, that is, induction of labour, perineal trauma, postnatal depression, and postnatal weight retention or return to pre-pregnancy weight; and secondary outcomes for children, that is, neonatal hypoglycaemia, childhood/adulthood adiposity, and childhood/adulthood type 2 diabetes. We included 11 RCTs (10 RCTs; one qRCT) that randomised 1272 women with GDM in upper-middle or high-income countries; we considered these to be at a moderate to high risk of bias. We assessed the RCTs under five comparisons. For outcomes assessed using

  11. Laplace transform series expansion method for solving the local fractional heat-transfer equation defined on Cantor sets

    Directory of Open Access Journals (Sweden)

    Sun Huan

    2016-01-01

In this paper, we use the Laplace transform series expansion method to find the analytical solution of the local fractional heat-transfer equation defined on Cantor sets via local fractional calculus.

  12. Study on intelligence fault diagnosis method for nuclear power plant equipment based on rough set and fuzzy neural network

    International Nuclear Information System (INIS)

    Liu Yongkuo; Xia Hong; Xie Chunli; Chen Zhihui; Chen Hongxia

    2007-01-01

Rough set theory and fuzzy neural networks are combined so as to take full advantage of both. Based on the knowledge-reduction technology of the rough set method, simple rules are drawn from a large amount of initial data and used to set up a fuzzy neural network with a better topological structure, improved learning speed, accurate judgment, strong fault tolerance, and greater practicality. In order to test the validity of the method, the inverted U-tube break accident of the steam generator, among others, is used as an example, and many simulation experiments are performed. The test results show that it is feasible to apply this intelligent fault diagnosis method based on rough sets and fuzzy neural networks to nuclear power plant equipment, and that the method is simple and convenient, with a small calculation amount and reliable results. (authors)

  13. Analysis of a cost effective method of establishing and maintaining torque switch settings in Limitorque operators

    International Nuclear Information System (INIS)

    Huskey, D.R.

    1991-01-01

The arrival of Generic Letter 89-10 came as no surprise to many in the industry. The surprise was the apparent focus of the letter in light of the past history and experiences of the industry. Indeed, even the Attachment 1 list of '33' deficiencies reflects the true picture more accurately than the letter itself. From reviewing the 'GL-33', one can see that the vast majority of problems are maintenance or training related, or some combination of both. Hence, the bulk of solutions to these problems should also be focused on maintenance and on training of maintenance and operations personnel. The one or two problems associated with 'stem thrust' or torque, however, are what the apparent bulk of the efforts to respond to the generic letter will ultimately entail. The reason for this focus by the NRC seems to stem from some limited blow-down tests, which are still under review and are not necessarily representative of the majority of valves in the industry, coupled with repeated claims by certain diagnostic vendors as to the extent of the stem thrust problem based on their test results at the time. Hindsight now reflects that the problems were largely methodology and interpretation errors in the data, as opposed to real problems with the motor-operated valves (MOVs) themselves. As a result, however, the letter was issued, and the timing has put many utilities in a rather awkward position. First-generation stem thrust measuring technology has proven to be full of pitfalls and methodology problems, and is expensive on top of it all. State-of-the-art equipment is still undergoing growing pains. The purpose of this paper is to describe briefly a technique in use at a number of stations to set and maintain torque switch set points, i.e., the use of a torque wrench combined with an ohmmeter to determine the torque switch trip point. At least one station is utilizing the method to trend the spring pack relaxation phenomenon, which Limitorque has been investigating.

  14. Theoretical Frameworks, Methods, and Procedures for Conducting Phenomenological Studies in Educational Settings

    Directory of Open Access Journals (Sweden)

    Pelin Yüksel

    2015-01-01

The main purposes of phenomenological research are to seek reality from individuals' narratives of their experiences and feelings, and to produce in-depth descriptions of the phenomenon. Phenomenological research studies in educational settings generally embody the lived experience, perception, and feelings of participants about a phenomenon. This study aims to provide a general framework for researchers who are interested in phenomenological studies, especially in educational settings. Additionally, the study provides a guide for researchers on how to conduct phenomenological research and how to collect and analyze phenomenal data. The first part of the paper explains the underpinnings of the research methodology, consisting of the methodological framework and key phenomenological concepts. The second part provides guidance for phenomenological research in education settings, focusing particularly on the phenomenological data collection procedure and phenomenological data analysis methods. Keywords: Phenomenology, phenomenological inquiry, phenomenological data analysis

  15. A Text Matching Method to Facilitate the Validation of Frequent Order Sets Obtained Through Data Mining

    OpenAIRE

    Che, Chengjian; Rocha, Roberto A.

    2006-01-01

    In order to compare order sets discovered using a data mining algorithm with existing order sets, we developed an order matching tool based on Oracle Text. The tool includes both automated searching and manual review processes. The comparison between the automated process and the manual review process indicates that the sensitivity of the automated matching is 81% and the specificity is 84%.

  16. Homogeneity analysis with k sets of variables: An alternating least squares method with optimal scaling features

    NARCIS (Netherlands)

    van der Burg, Eeke; de Leeuw, Jan; Verdegaal, Renée

    1988-01-01

Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple

  17. Computerized detection of multiple sclerosis candidate regions based on a level set method using an artificial neural network

    International Nuclear Information System (INIS)

    Kuwazuru, Junpei; Magome, Taiki; Arimura, Hidetaka; Yamashita, Yasuo; Oki, Masafumi; Toyofuku, Fukai; Kakeda, Shingo; Yamamoto, Daisuke

    2010-01-01

Yamamoto et al. developed a system for the computer-aided detection of multiple sclerosis (MS) candidate regions. In the level set method of their proposed approach, they employed a constant threshold value for the edge indicator function related to the speed function of the level set method. However, it would be more appropriate to adjust the threshold value for each MS candidate region, because the edge magnitudes of MS candidates differ from each other. The purpose of this study was to develop a computerized detection of MS candidate regions in MR images based on a level set method using an artificial neural network (ANN). To adjust the threshold value for the edge indicator function in the level set method to each true positive (TP) and false positive (FP) region, we constructed an ANN. The ANN could provide a suitable threshold value for each candidate region in the proposed level set method so that TP regions can be segmented and FP regions can be removed. Our proposed method detected MS regions at a sensitivity of 82.1% with 0.204 FPs per slice, and the similarity index of MS candidate regions was 0.717 on average. (author)

  18. On some methods for improving time of reachability sets computation for the dynamic system control problem

    Science.gov (United States)

    Zimovets, Artem; Matviychuk, Alexander; Ushakov, Vladimir

    2016-12-01

The paper presents two different approaches to reducing the computation time of reachability sets. The first approach uses different data structures for storing the reachability sets in computer memory for calculation in single-threaded mode. The second approach is based on parallel algorithms that build on the data structures from the first approach. Within the framework of this paper, a parallel algorithm for approximate reachability set calculation on a computer with SMP architecture is proposed. The results of numerical modelling are presented in the form of tables which demonstrate the high efficiency of parallel computing technology and also show how computing time depends on the data structure used.

  19. Local and global recoding methods for anonymizing set-valued data

    KAUST Repository

    Terrovitis, Manolis; Mamoulis, Nikos; Kalnis, Panos

    2010-01-01

    In this paper, we study the problem of protecting privacy in the publication of set-valued data. Consider a collection of supermarket transactions that contains detailed information about items bought together by individuals. Even after removing all

  20. Evaluating quality of patient care communication in integrated care settings: a mixed methods approach

    NARCIS (Netherlands)

    Gulmans, J.; Gulmans, J.; Vollenbroek-Hutten, Miriam Marie Rosé; van Gemert-Pijnen, Julia E.W.C.; van Harten, Willem H.

    2007-01-01

    Background. Owing to the involvement of multiple professionals from various institutions, integrated care settings are prone to suboptimal patient care communication. To assure continuity, communication gaps should be identified for targeted improvement initiatives. However, available assessment

  1. Theoretical Frameworks, Methods, and Procedures for Conducting Phenomenological Studies in Educational Settings

    OpenAIRE

    Pelin Yüksel; Soner Yıldırım

    2015-01-01

    The main purposes of phenomenological research are to seek reality from individuals’ narratives of their experiences and feelings, and to produce in-depth descriptions of the phenomenon. Phenomenological research studies in educational settings generally embody lived experience, perception, and feelings of participants about a phenomenon. This study aims to provide a general framework for researchers who are interested in phenomenological studies especially in educational setting. Additionall...

  2. Composites Similarity Analysis Method Based on Knowledge Set in Composites Quality Control

    OpenAIRE

    Li Haifeng

    2016-01-01

Composites similarity analysis is an important part of composites review: it not only supports re-checking during review, but also helps composites applicants keep abreast of relevant research progress and avoid duplication. This paper mainly studies the composites similarity model used in composites review. Drawing on practical experience of composites management and based on the author's knowledge set theory, the paper analyzes in depth the knowledge set representation of composites knowledge, impr...

  3. A New 3D Object Pose Detection Method Using LIDAR Shape Set.

    Science.gov (United States)

    Kim, Jung-Un; Kang, Hang-Bong

    2018-03-16

In object detection systems for autonomous driving, LIDAR sensors provide very useful information. However, problems occur because the object representation is greatly distorted by changes in distance. To solve this problem, we propose a LIDAR shape set that reconstructs the shape surrounding the object more clearly by using the LIDAR point information projected on the object. The LIDAR shape set restores object shape edges from a bird's eye view by filtering LIDAR points projected on a 2D pixel-based front view. In this study, we use this shape set for two purposes. The first is to supplement the shape set with a LIDAR feature map, and the second is to divide the entire shape set according to the gradient of the depth and density to create 2D and 3D bounding box proposals for each object. We present a multimodal fusion framework that classifies objects and restores the 3D pose of each object using enhanced feature maps and shape-based proposals. The network structure consists of a VGG-based object classifier that receives multiple inputs and a LIDAR-based Region Proposal Network (RPN) that identifies object poses. It works in a very intuitive and efficient manner and can be extended to classes other than vehicles. Our approach has outperformed the latest studies conducted with KITTI data sets in object classification accuracy (Average Precision, AP) and 3D pose restoration accuracy (3D bounding box recall rate).

  4. Local Fractional Variational Iteration and Decomposition Methods for Wave Equation on Cantor Sets within Local Fractional Operators

    Directory of Open Access Journals (Sweden)

    Dumitru Baleanu

    2014-01-01

We perform a comparison between the local fractional variational iteration and decomposition methods applied to the wave equation on Cantor sets. The operators are taken in the local sense. The results illustrate the significant features of the two methods, both of which are very effective and straightforward for solving differential equations with local fractional derivatives.

  5. An unbiased method to build benchmarking sets for ligand-based virtual screening and its application to GPCRs.

    Science.gov (United States)

    Xia, Jie; Jin, Hongwei; Liu, Zhenming; Zhang, Liangren; Wang, Xiang Simon

    2014-05-27

Benchmarking data sets have become common in recent years for the purpose of virtual screening, though the main focus has been placed on structure-based virtual screening (SBVS) approaches. Due to the lack of crystal structures, there is great need for unbiased benchmarking sets to evaluate various ligand-based virtual screening (LBVS) methods for important drug targets such as G protein-coupled receptors (GPCRs). To date, ready-to-apply data sets for LBVS are fairly limited, and the direct usage of benchmarking sets designed for SBVS could introduce biases into the evaluation of LBVS. Herein, we propose an unbiased method to build benchmarking sets for LBVS and validate it on a multitude of GPCR targets. To be more specific, our method can (1) ensure chemical diversity of ligands, (2) maintain the physicochemical similarity between ligands and decoys, (3) make the decoys dissimilar in chemical topology to all ligands to avoid false negatives, and (4) maximize the spatial random distribution of ligands and decoys. We evaluated the quality of our Unbiased Ligand Set (ULS) and Unbiased Decoy Set (UDS) using three common LBVS approaches, with Leave-One-Out (LOO) Cross-Validation (CV) and a metric of average AUC of the ROC curves. Our method has greatly reduced the "artificial enrichment" and "analogue bias" of a published GPCRs benchmarking set, i.e., the GPCR Ligand Library (GLL)/GPCR Decoy Database (GDD). In addition, we addressed an important issue about the ratio of decoys per ligand and found that for a range of 30 to 100 it does not affect the quality of the benchmarking set, so we kept the original ratio of 39 from the GLL/GDD.
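Criteria (2) and (3) above amount to a property-matched, topology-filtered decoy search, which can be sketched as follows. All names, descriptor counts, and the 0.5 similarity cutoff are assumptions for illustration (only the 39-decoys-per-ligand ratio follows the abstract); a real implementation would use actual physicochemical descriptors and chemical fingerprints:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-ins: rows are molecules, columns are scaled
# physicochemical properties (hypothetical, e.g. MW, logP, HBD, HBA).
ligand_props = rng.normal(size=(20, 4))
candidate_props = rng.normal(size=(5000, 4))

# Hypothetical fingerprint bit-vectors for topological (dis)similarity.
ligand_fps = rng.integers(0, 2, size=(20, 64)).astype(bool)
candidate_fps = rng.integers(0, 2, size=(5000, 64)).astype(bool)

def tanimoto(a, b):
    """Tanimoto similarity between two boolean fingerprint vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def pick_decoys(lig_idx, n_decoys=39, max_sim=0.5):
    """For one ligand: rank candidates by physicochemical closeness, then keep
    only those topologically dissimilar to *all* ligands (criterion 3)."""
    dist = np.linalg.norm(candidate_props - ligand_props[lig_idx], axis=1)
    decoys = []
    for c in np.argsort(dist):
        if all(tanimoto(candidate_fps[c], lf) < max_sim for lf in ligand_fps):
            decoys.append(int(c))
            if len(decoys) == n_decoys:
                break
    return decoys

decoys_for_first = pick_decoys(0)
```

Sorting by property distance enforces physicochemical similarity (reducing "artificial enrichment"), while the Tanimoto filter against every ligand reduces the risk that a supposed decoy is in fact an undiscovered active (a false negative).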

  6. Setting health research priorities using the CHNRI method: VI. Quantitative properties of human collective opinion.

    Science.gov (United States)

    Yoshida, Sachiyo; Rudan, Igor; Cousens, Simon

    2016-06-01

Crowdsourcing has become an increasingly important tool to address many problems, from government elections in democracies and stock market prices to modern online tools such as TripAdvisor or the Internet Movie Database (IMDB). The CHNRI method (the acronym of the Child Health and Nutrition Research Initiative) for setting health research priorities has crowdsourcing as its major component, which it uses to generate, assess and prioritize between many competing health research ideas. We conducted a series of analyses using data from a group of 91 scorers to explore the quantitative properties of their collective opinion. We were interested in the stability of their collective opinion as the sample size increases from 15 to 90. From a pool of 91 scorers who took part in a previous CHNRI exercise, we used sampling with replacement to generate multiple random samples of different sizes. First, for each sample generated, we identified the top 20 ranked research ideas, among the 205 that were proposed and scored, and calculated the concordance with the ranking generated by the 91 original scorers. Second, we used rank correlation coefficients to compare the ranks assigned to all 205 proposed research ideas when samples of different sizes are used. We also analysed the original pool of 91 scorers to look for evidence of scoring variations based on scorers' characteristics. The sample sizes investigated ranged from 15 to 90. The concordance for the top 20 scored research ideas increased with sample sizes up to about 55 experts. At this point, the median level of concordance stabilized at 15/20 top ranked questions (75%), with the interquartile range also generally stable (14-16). There was little further increase in overlap when the sample size increased from 55 to 90. When analysing the ranking of all 205 ideas, the rank correlation coefficient increased as the sample size increased, with a median correlation of 0.95 reached at the sample size of 45 experts (median of the rank
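The resampling experiment described above is straightforward to reproduce in outline. The scores below are synthetic stand-ins (uniform random, not the CHNRI data), so only the qualitative behaviour — concordance with the full-pool ranking rising as the sample grows — carries over:

```python
import random

random.seed(7)

# Synthetic stand-in for the CHNRI data: 205 research ideas, each scored
# by 91 experts (values are random; not the study's scores).
N_IDEAS, POOL = 205, 91
scores = [[random.random() for _ in range(N_IDEAS)] for _ in range(POOL)]

def top20(expert_indices):
    """Rank ideas by mean score over the chosen experts; return the top-20 set."""
    means = [sum(scores[e][i] for e in expert_indices) / len(expert_indices)
             for i in range(N_IDEAS)]
    order = sorted(range(N_IDEAS), key=lambda i: means[i], reverse=True)
    return set(order[:20])

reference = top20(range(POOL))  # ranking from the full pool of 91 scorers

def concordance(sample_size, draws=50):
    """Median overlap (out of 20) between resampled top-20 lists and the reference,
    sampling experts with replacement as in the study."""
    overlaps = sorted(
        len(top20([random.randrange(POOL) for _ in range(sample_size)]) & reference)
        for _ in range(draws))
    return overlaps[draws // 2]

small, large = concordance(15), concordance(55)
```

With the real CHNRI scores, the abstract reports the median overlap stabilising at about 15/20 once roughly 55 scorers are sampled.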

  7. Comparison of Deep Learning With Multiple Machine Learning Methods and Metrics Using Diverse Drug Discovery Data Sets.

    Science.gov (United States)

    Korotcov, Alexandru; Tkachenko, Valery; Russo, Daniel P; Ekins, Sean

    2017-12-04

    Machine learning methods have been applied to many data sets in pharmaceutical research for several decades. The relative ease and availability of fingerprint-type molecular descriptors paired with Bayesian methods resulted in the widespread use of this approach for a diverse array of end points relevant to drug discovery. Deep learning is the latest machine learning algorithm attracting attention for many pharmaceutical applications, from docking to virtual screening. Deep learning is based on an artificial neural network with multiple hidden layers and has found considerable traction for many artificial intelligence applications. We have previously suggested the need for a comparison of different machine learning methods with deep learning across an array of varying data sets that is applicable to pharmaceutical research. End points relevant to pharmaceutical research include absorption, distribution, metabolism, excretion, and toxicity (ADME/Tox) properties, as well as activity against pathogens and drug discovery data sets. In this study, we have used data sets for solubility, probe-likeness, hERG, KCNQ1, bubonic plague, Chagas, tuberculosis, and malaria to compare different machine learning methods using FCFP6 fingerprints. These data sets represent whole-cell screens, individual proteins, and physicochemical properties, as well as a data set with a complex end point. Our aim was to assess whether deep learning offered any improvement in testing when assessed using an array of metrics including AUC, F1 score, Cohen's kappa, Matthews correlation coefficient, and others. Based on ranked normalized scores for the metrics and data sets, deep neural networks (DNN) ranked higher than SVM, which in turn ranked higher than all the other machine learning methods. Visualizing these properties for training and test sets using radar-type plots indicates when models are inferior or perhaps overtrained. These results also suggest the need for assessing deep learning further.
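
The metrics named in the abstract can be computed directly from a binary confusion matrix. A minimal pure-Python sketch; the helper name and toy counts are illustrative, and nonzero denominators are assumed:

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """F1 score, Matthews correlation coefficient, and Cohen's kappa
    from the four cells of a binary confusion matrix (nonzero
    denominators assumed)."""
    n = tp + fp + fn + tn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    po = (tp + tn) / n                      # observed agreement
    pe = ((tp + fp) * (tp + fn)             # agreement expected by chance
          + (fn + tn) * (fp + tn)) / n**2
    kappa = (po - pe) / (1 - pe)
    return f1, mcc, kappa

f1, mcc, kappa = binary_metrics(tp=40, fp=10, fn=10, tn=40)
```

On this symmetric toy matrix the scores are F1 = 0.8 and MCC = kappa = 0.6; on imbalanced data the three diverge, which is one reason the study reports several metrics side by side.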

  8. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial

    Directory of Open Access Journals (Sweden)

    Nasrin Jiryaee

    2015-01-01

    Background: An intervention to increase physical activity should be designed around the resources of the health care setting and be acceptable to the target group. This study was designed to assess and compare the effect of a goal-setting strategy with a group education method on increasing the physical activity of mothers of children aged 1 to 5. Materials and Methods: Mothers who had at least one child of 1-5 years were randomized into two groups. The effects of (1) a goal-setting strategy and (2) a group education method on increasing physical activity were assessed and compared 1 month and 3 months after the intervention. Also, the weight, height, body mass index (BMI), waist and hip circumference, and well-being were compared between the two groups before and after the intervention. Results: Physical activity level increased significantly after the intervention in the goal-setting group and was significantly different between the two groups after the intervention (P < 0.05). BMI, waist circumference, hip circumference, and well-being score were significantly different in the goal-setting group after the intervention. In the group education method, only the well-being score improved significantly (P < 0.05). Conclusion: Our study presented the effects of using the goal-setting strategy to boost physical activity, improving the state of well-being and decreasing BMI, waist, and hip circumference.

  9. An electromagnetism-like method for the maximum set splitting problem

    Directory of Open Access Journals (Sweden)

    Kratica Jozef

    2013-01-01

    In this paper, an electromagnetism-like approach (EM) for solving the maximum set splitting problem (MSSP) is applied. A hybrid approach, consisting of movement based on attraction-repulsion mechanisms combined with a proposed scaling technique, directs EM to promising search regions. A fast implementation of the local search procedure additionally improves the efficiency of the overall EM system. The performance of the proposed EM approach is evaluated on two classes of instances from the literature: minimum hitting set and Steiner triple systems. The results show that, except in one case, EM reaches optimal solutions on minimum hitting set instances with up to 500 elements and 50,000 subsets. It also reaches all optimal/best-known solutions for Steiner triple systems.

  10. A new acoustic method to determine the setting time of calcium sulfate bone cement mixed with antibiotics.

    Science.gov (United States)

    Cooper, J J; Brayford, M J; Laycock, P A

    2014-08-01

    A new method is described which can be used to determine the setting times of small amounts of high value bone cements. The test was developed to measure how the setting times of a commercially available synthetic calcium sulfate cement (Stimulan, Biocomposites, UK) in two forms (standard and Rapid Cure) varies with the addition of clinically relevant antibiotics. The importance of being able to accurately quantify these setting times is discussed. The results demonstrate that this new method, which is shown to correlate to the Vicat needle, gives reliable and repeatable data with additional benefits expressed in the article. The majority of antibiotics mixed were found to retard the setting reaction of the calcium sulfate cement.

  11. A new acoustic method to determine the setting time of calcium sulfate bone cement mixed with antibiotics

    International Nuclear Information System (INIS)

    Cooper, J J; Brayford, M J; Laycock, P A

    2014-01-01

    A new method is described which can be used to determine the setting times of small amounts of high value bone cements. The test was developed to measure how the setting times of a commercially available synthetic calcium sulfate cement (Stimulan, Biocomposites, UK) in two forms (standard and Rapid Cure) varies with the addition of clinically relevant antibiotics. The importance of being able to accurately quantify these setting times is discussed. The results demonstrate that this new method, which is shown to correlate to the Vicat needle, gives reliable and repeatable data with additional benefits expressed in the article. The majority of antibiotics mixed were found to retard the setting reaction of the calcium sulfate cement. (paper)

  12. An improved method for setting upper limits with small numbers of events

    International Nuclear Information System (INIS)

    Swartz, M.L.

    1990-01-01

    We note that most experimental searches for rare phenomena actually measure the ratio of the number of event candidates to the number of some normalizing events. These measurements are most naturally interpreted within the framework of binomial or trinomial statistics. We present a general expression, based upon a classical treatment, that accounts for statistical normalization errors and incorporates expected background rates. The solutions of this expression converge to the standard Poisson values when the number of normalizing events is larger than a few hundred. (orig.)
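
The binomial framing above can be made concrete for the simplest case (no background, no normalization error; the paper's general expression also folds those in). A sketch of the classical construction: solve for the proportion whose lower tail probability equals 1 − CL.

```python
import math

def binomial_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def upper_limit(k, n, cl=0.90, tol=1e-10):
    """Classical (Clopper-Pearson style) upper limit on the binomial
    proportion p, given k event candidates out of n normalizing events.
    Found by bisection on the monotone binomial CDF."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if binomial_cdf(k, n, mid) > 1 - cl:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For k = 0 and n = 100 this gives p of roughly 0.0228, i.e. about 2.28 expected events, already close to the familiar Poisson 90% upper limit of 2.30 events, consistent with the convergence noted in the abstract.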

  13. Shape Reconstruction of Thin Electromagnetic Inclusions via Boundary Measurements: Level-Set Method Combined with the Topological Derivative

    Directory of Open Access Journals (Sweden)

    Won-Kwang Park

    2013-01-01

    An inverse problem for reconstructing arbitrary-shaped thin penetrable electromagnetic inclusions concealed in a homogeneous material is considered in this paper. For this purpose, the level-set evolution method is adopted. The topological derivative concept is incorporated in order to evaluate the evolution speed of the level-set functions. The results of the corresponding numerical simulations with and without noise are presented in this paper.

  14. Comparing of goal setting strategy with group education method to increase physical activity level: A randomized trial.

    Science.gov (United States)

    Jiryaee, Nasrin; Siadat, Zahra Dana; Zamani, Ahmadreza; Taleban, Roya

    2015-10-01

    Designing an intervention to increase physical activity should be based on the health care setting's resources and be acceptable to the target group. This study was designed to assess and compare the effect of a goal-setting strategy with a group education method on increasing the physical activity of mothers of children aged 1 to 5. Mothers who had at least one child of 1-5 years were randomized into two groups. The effects of (1) a goal-setting strategy and (2) a group education method on increasing physical activity were assessed and compared 1 month and 3 months after the intervention. Also, the weight, height, body mass index (BMI), waist and hip circumference, and well-being were compared between the two groups before and after the intervention. Physical activity level increased significantly after the intervention in the goal-setting group and was significantly different between the two groups after the intervention (P < 0.05). BMI, waist circumference, hip circumference, and well-being score were significantly different in the goal-setting group after the intervention. In the group education method, only the well-being score improved significantly (P < 0.05). Our study presented the effects of using the goal-setting strategy to boost physical activity, improving the state of well-being and decreasing BMI, waist, and hip circumference.

  15. A novel method of evaluation of three heat-moisture exchangers in six different ventilator settings

    NARCIS (Netherlands)

    Unal, N.; Kanhai, J. K.; Buijk, S. L.; Pompe, J. C.; Holland, W. P.; Gültuna, I.; Ince, C.; Saygin, B.; Bruining, H. A.

    1998-01-01

    The purpose of this study was to assess and compare the humidification, heating, and resistance properties of three commercially available heat-moisture exchangers (HMEs). To mimic clinical conditions, a previously validated, new, realistic experimental set-up and measurement protocol was used.

  16. A Review of Functional Analysis Methods Conducted in Public School Classroom Settings

    Science.gov (United States)

    Lloyd, Blair P.; Weaver, Emily S.; Staubitz, Johanna L.

    2016-01-01

    The use of functional behavior assessments (FBAs) to address problem behavior in classroom settings has increased as a result of education legislation and long-standing evidence supporting function-based interventions. Although functional analysis remains the standard for identifying behavior--environment functional relations, this component is…

  17. Methods for optimizing over the efficient and weakly efficient sets of an affine fractional vector optimization program

    DEFF Research Database (Denmark)

    Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen

    2010-01-01

    Both the efficient and weakly efficient sets of an affine fractional vector optimization problem are, in general, neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses a Lagrangian bound coupled with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising.

  18. A method for partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    International Nuclear Information System (INIS)

    Barbee, David L; Holden, James E; Nickles, Robert J; Jeraj, Robert; Flynn, Ryan T

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated …
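
The EM restoration at the core of this kind of correction can be sketched in one dimension with Richardson-Lucy iterations, a standard EM deconvolution for imaging. The spatially varying PSF and the correction-matrix stopping rule of the paper are not reproduced here; the fixed Gaussian kernel, fixed iteration count, and toy "lesion" profile are assumptions of this sketch:

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Direct convolution with zero padding at the boundaries."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

def richardson_lucy(blurred, kernel, iterations=100):
    """EM (Richardson-Lucy) deconvolution; kernel assumed symmetric."""
    estimate = [1.0] * len(blurred)
    for _ in range(iterations):
        predicted = convolve(estimate, kernel)
        ratio = [b / p if p > 1e-12 else 0.0
                 for b, p in zip(blurred, predicted)]
        correction = convolve(ratio, kernel)   # symmetric kernel = its mirror
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

truth = [0.0] * 20 + [10.0] * 5 + [0.0] * 20   # hot "lesion" on cold background
kernel = gaussian_kernel(sigma=2.0, radius=6)
blurred = convolve(truth, kernel)               # partial volume spill-out
restored = richardson_lucy(blurred, kernel)     # peak recovers toward 10
```

The blurred peak falls well below the true value of 10 (the spill-out the abstract describes), and the EM iterations pull it back up, which is the mechanism behind the reported increases in maximum observed sphere activity.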

  19. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    Science.gov (United States)

    Yokoi, Kensuke

    2009-07-01

    We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, simulating the interaction between a moving solid object and an interfacial flow requires at least two functions (level set functions) to distinguish three materials. In such simulations, the two functions in general overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolve the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.
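
The level set idea referenced here can be sketched in one dimension: the interface is never tracked explicitly but carried as the zero crossing of a function phi, here advected with a first-order upwind scheme. The multi-phase coupling and active-contour correction of the paper are beyond this sketch; grid sizes and speeds are illustrative:

```python
def advect(phi, u, dx, dt, steps):
    """First-order upwind advection of the level set function (u > 0)."""
    for _ in range(steps):
        new = phi[:]
        for i in range(1, len(phi)):
            new[i] = phi[i] - u * dt / dx * (phi[i] - phi[i - 1])
        phi = new
    return phi

def zero_crossing(phi, dx):
    """Position of the interface: linear interpolation of phi's zero."""
    for i in range(len(phi) - 1):
        if phi[i] <= 0.0 < phi[i + 1] or phi[i] >= 0.0 > phi[i + 1]:
            return dx * (i + phi[i] / (phi[i] - phi[i + 1]))
    return None

dx, dt, u = 0.01, 0.005, 1.0               # CFL number = u*dt/dx = 0.5
phi = [i * dx - 0.3 for i in range(101)]   # signed distance, interface at x = 0.3
phi = advect(phi, u, dx, dt, steps=40)     # advect for t = 0.2
```

Starting from an interface at x = 0.3 and advecting at unit speed for t = 0.2, the recovered zero crossing lands at x of about 0.5, illustrating how topology and position are read off implicitly from phi.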

  20. Evaluation of Different Methods for Identification of Structural Alerts Using Chemical Ames Mutagenicity Data Set as a Benchmark.

    Science.gov (United States)

    Yang, Hongbin; Li, Jie; Wu, Zengrui; Li, Weihua; Liu, Guixia; Tang, Yun

    2017-06-19

    Identification of structural alerts for toxicity is useful in drug discovery and other fields such as environmental protection. With structural alerts, researchers can quickly identify potential toxic compounds and learn how to modify them. Hence, it is important to determine structural alerts from a large number of compounds quickly and accurately. There are already many methods reported for identification of structural alerts. However, how to evaluate those methods is a problem. In this paper, we tried to evaluate four of the methods for monosubstructure identification with three indices including accuracy rate, coverage rate, and information gain to compare their advantages and disadvantages. The Kazius' Ames mutagenicity data set was used as the benchmark, and the four methods were MoSS (graph-based), SARpy (fragment-based), and two fingerprint-based methods including Bioalerts and the fingerprint (FP) method we previously used. The results showed that Bioalerts and FP could detect key substructures with high accuracy and coverage rates because they allowed unclosed rings and wildcard atom or bond types. However, they also resulted in redundancy so that their predictive performance was not as good as that of SARpy. SARpy was competitive in predictive performance in both training set and external validation set. These results might be helpful for users to select appropriate methods and further development of methods for identification of structural alerts.
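
The three indices can be made concrete for a single candidate alert. A minimal sketch, where "accuracy rate" is read as precision among flagged compounds and "coverage rate" as the fraction of toxic compounds flagged (my reading of the abstract; the flags below are toy data, not real Ames fingerprints):

```python
import math

def entropy(pos, neg):
    """Shannon entropy (bits) of a two-class count pair."""
    total = pos + neg
    h = 0.0
    for c in (pos, neg):
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h

def score_alert(has_alert, is_toxic):
    """Accuracy rate, coverage rate, and information gain of one alert."""
    n = len(has_alert)
    pairs = list(zip(has_alert, is_toxic))
    tp = sum(1 for a, t in pairs if a and t)
    fp = sum(1 for a, t in pairs if a and not t)
    fn = sum(1 for a, t in pairs if not a and t)
    tn = n - tp - fp - fn
    accuracy = tp / (tp + fp)        # precision among flagged compounds
    coverage = tp / (tp + fn)        # fraction of toxic compounds flagged
    h0 = entropy(tp + fn, fp + tn)   # label entropy before the split
    h1 = ((tp + fp) / n * entropy(tp, fp)
          + (fn + tn) / n * entropy(fn, tn))
    return accuracy, coverage, h0 - h1   # information gain of the split

acc, cov, gain = score_alert([1, 1, 1, 1, 0, 0, 0, 0],
                             [1, 1, 1, 0, 0, 0, 0, 1])
```

A high-accuracy, low-coverage alert is precise but misses many toxic compounds; the information gain folds both error directions into one number, which is why the paper uses all three indices together.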

  1. Method research of fault diagnosis based on rough set for nuclear power plant

    International Nuclear Information System (INIS)

    Chen Zhihui; Xia Hong

    2005-01-01

    Nuclear power equipment fault features are complicated and uncertain. Rough set theory can express and deal with vagueness and uncertainty, so it can be introduced into nuclear power fault diagnosis to analyze and process historical data and discover rules characterizing fault features. The rough set treatment proceeds in steps: data preprocessing, attribute reduction, attribute value reduction, and rule generation. According to the definition and properties of the discernibility matrix, the matrix can be used in the reduction algorithm for both attribute and attribute value reduction, which reduces algorithmic complexity and simplifies programming. This algorithm is applied to nuclear power fault diagnosis to generate diagnostic rules. Using these rules, five kinds of modeled faults were diagnosed correctly. (authors)
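
The discernibility-matrix step described above can be sketched directly: for each pair of objects with different decisions, the matrix entry is the set of condition attributes that distinguish them, and attributes appearing as singleton entries form the core of any reduct. The fault rows below are toy symptoms, not real plant data:

```python
def discernibility_matrix(rows, decision):
    """Entry (i, j): attributes on which objects i and j differ,
    for pairs whose decision (fault class) differs."""
    n = len(rows)
    matrix = {}
    for i in range(n):
        for j in range(i + 1, n):
            if decision[i] != decision[j]:
                matrix[(i, j)] = {a for a in rows[i]
                                  if rows[i][a] != rows[j][a]}
    return matrix

def core_attributes(matrix):
    """Singleton entries are indispensable: they belong to every reduct."""
    return {next(iter(e)) for e in matrix.values() if len(e) == 1}

rows = [
    {"flow": "low",  "temp": "high", "press": "low"},
    {"flow": "low",  "temp": "low",  "press": "low"},
    {"flow": "high", "temp": "high", "press": "low"},
]
decision = ["leak", "normal", "pump_fault"]
m = discernibility_matrix(rows, decision)
core = core_attributes(m)
```

Here `press` never discerns any pair, so it can be dropped, while `flow` and `temp` each appear as a singleton entry and are indispensable; diagnosis rules would then be generated over the reduced attribute set.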

  2. First-principle modelling of forsterite surface properties: Accuracy of methods and basis sets.

    Science.gov (United States)

    Demichelis, Raffaella; Bruno, Marco; Massaro, Francesco R; Prencipe, Mauro; De La Pierre, Marco; Nestola, Fabrizio

    2015-07-15

    The seven main crystal surfaces of forsterite (Mg2SiO4) were modeled using various Gaussian-type basis sets, and several formulations for the exchange-correlation functional within the density functional theory (DFT). The recently developed pob-TZVP basis set provides the best results for all properties that are strongly dependent on the accuracy of the wavefunction. Convergence on the structure and on the basis set superposition error-corrected surface energy can be reached also with poorer basis sets. The effect of adopting different DFT functionals was assessed. All functionals give the same stability order for the various surfaces. Surfaces do not exhibit any major structural differences when optimized with different functionals, except for higher energy orientations where major rearrangements occur around the Mg sites at the surface or subsurface. When dispersions are not accounted for, all functionals provide similar surface energies. The inclusion of empirical dispersions raises the energy of all surfaces by a nearly systematic value proportional to the scaling factor s of the dispersion formulation. An estimation for the surface energy is provided through adopting C6 coefficients that are more suitable than the standard ones to describe O-O interactions in minerals. A 2 × 2 supercell of the most stable surface (010) was optimized. No surface reconstruction was observed. The resulting structure and surface energy show no difference with respect to those obtained when using the primitive cell. This result validates the (010) surface model here adopted, that will serve as a reference for future studies on adsorption and reactivity of water and carbon dioxide at this interface. © 2015 Wiley Periodicals, Inc.

  3. A Method of Forming the Optimal Set of Disjoint Path in Computer Networks

    Directory of Open Access Journals (Sweden)

    As'ad Mahmoud As'ad ALNASER

    2017-04-01

    This work provides a short analysis of multipath routing algorithms. A modified algorithm for forming the maximum set of disjoint paths, taking their metrics into account, is offered. Paths are optimized by reconfiguring them with an adjacent dead-end (deadlock) path. Reconfigurations are carried out within subgraphs that include only the vertices of the main path and of an adjacent dead-end path. This reduces the search space for forming an optimal path and the time complexity of its formation.

  4. Computing the dynamics of biomembranes by combining conservative level set and adaptive finite element methods

    OpenAIRE

    Laadhari , Aymen; Saramito , Pierre; Misbah , Chaouqi

    2014-01-01

    The numerical simulation of the deformation of vesicle membranes under simple shear external fluid flow is considered in this paper. A new saddle-point approach is proposed for the imposition of the fluid incompressibility and the membrane inextensibility constraints, through Lagrange multipliers defined in the fluid and on the membrane respectively. Using a level set formulation, the problem is approximated by mixed finite elements combined with an automatic adaptive ...

  5. On piecewise constant level-set (PCLS) methods for the identification of discontinuous parameters in ill-posed problems

    International Nuclear Information System (INIS)

    De Cezaro, A; Leitão, A; Tai, X-C

    2013-01-01

    We investigate level-set-type methods for solving ill-posed problems with discontinuous (piecewise constant) coefficients. The goal is to identify the level sets as well as the level values of an unknown parameter function on a model described by a nonlinear ill-posed operator equation. The PCLS approach is used here to parametrize the solution of a given operator equation in terms of an L2 level-set function, i.e. the level-set function itself is assumed to be a piecewise constant function. Two distinct methods are proposed for computing stable solutions of the resulting ill-posed problem: the first is based on Tikhonov regularization, while the second is based on the augmented Lagrangian approach with total variation penalization. Classical regularization results (Engl H W et al 1996 Mathematics and its Applications (Dordrecht: Kluwer)) are derived for the Tikhonov method. On the other hand, for the augmented Lagrangian method, we succeed in proving the existence of (generalized) Lagrangian multipliers in the sense of Rockafellar and Wets (1998 Grundlehren der Mathematischen Wissenschaften (Berlin: Springer)). Numerical experiments are performed for a 2D inverse potential problem (Hettlich F and Rundell W 1996 Inverse Problems 12 251–66), demonstrating the capabilities of both methods for solving this ill-posed problem in a stable way (complicated inclusions are recovered without any a priori geometrical information on the unknown parameter). (paper)

  6. LEGO: a novel method for gene set over-representation analysis by incorporating network-based gene weights.

    Science.gov (United States)

    Dong, Xinran; Hao, Yun; Wang, Xiao; Tian, Weidong

    2016-01-11

    Pathway or gene set over-representation analysis (ORA) has become a routine task in functional genomics studies. However, currently widely used ORA tools employ statistical methods such as Fisher's exact test that reduce a pathway into a list of genes, ignoring the constitutive functional non-equivalent roles of genes and the complex gene-gene interactions. Here, we develop a novel method named LEGO (functional Link Enrichment of Gene Ontology or gene sets) that takes into consideration these two types of information by incorporating network-based gene weights in ORA analysis. In three benchmarks, LEGO achieves better performance than Fisher and three other network-based methods. To further evaluate LEGO's usefulness, we compare LEGO with five gene expression-based and three pathway topology-based methods using a benchmark of 34 disease gene expression datasets compiled by a recent publication, and show that LEGO is among the top-ranked methods in terms of both sensitivity and prioritization for detecting target KEGG pathways. In addition, we develop a cluster-and-filter approach to reduce the redundancy among the enriched gene sets, making the results more interpretable to biologists. Finally, we apply LEGO to two lists of autism genes, and identify relevant gene sets to autism that could not be found by Fisher.
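
The baseline that LEGO improves on, classical ORA of one gene set via the one-sided Fisher's exact (hypergeometric) test, can be sketched without any gene weights:

```python
import math

def ora_pvalue(hit, draw, succ, total):
    """One-sided over-representation p-value P(X >= hit), where
    X ~ Hypergeometric: `draw` study genes sampled without replacement
    from `total` genes, of which `succ` belong to the pathway."""
    p = 0.0
    for k in range(hit, min(draw, succ) + 1):
        p += (math.comb(succ, k) * math.comb(total - succ, draw - k)
              / math.comb(total, draw))
    return p
```

For example, drawing 2 study genes from a universe of 4 in which 2 belong to the pathway, and hitting both, gives p = 1/6. LEGO's point is that this test treats all pathway genes as interchangeable; the network-based weights break that symmetry.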

  7. The Television Framing Methods of the National Basketball Association: An Agenda-Setting Application.

    Science.gov (United States)

    Fortunato, John A.

    2001-01-01

    Identifies and analyzes the exposure and portrayal framing methods that are utilized by the National Basketball Association (NBA). Notes that key informant interviews provide insight into the exposure framing method and reveal two portrayal instruments: cameras and announcers; and three framing strategies: depicting the NBA as a team game,…

  8. Studying learning in the healthcare setting: the potential of quantitative diary methods

    NARCIS (Netherlands)

    Ciere, Yvette; Jaarsma, Debbie; Visser, Annemieke; Sanderman, Robbert; Snippe, Evelien; Fleer, Joke

    2015-01-01

    Quantitative diary methods are longitudinal approaches that involve the repeated measurement of aspects of people's experience of daily life. In this article, we outline the main characteristics and applications of quantitative diary methods and discuss how their use may further research in the …

  9. A method to evaluate performance reliability of individual subjects in laboratory research applied to work settings.

    Science.gov (United States)

    1978-10-01

    This report presents a method that may be used to evaluate the reliability of performance of individual subjects, particularly in applied laboratory research. The method is based on analysis of variance of a tasks-by-subjects data matrix, with all sc...

  10. Set problem teaching methods used in the course of Control and protection systems of nuclear reactors

    International Nuclear Information System (INIS)

    Korolev, V.V.

    1995-01-01

    Some results of pedagogical investigations on the application of the set problem teaching method at the Obninsk Institute of Nuclear Power Engineering are presented. The method aims at improving the quality of training of operation and maintenance personnel for nuclear power plants.

  11. Combining evidence and values in priority setting: testing the balance sheet method in a low-income country.

    Science.gov (United States)

    Makundi, Emmanuel; Kapiriri, Lydia; Norheim, Ole Frithjof

    2007-09-24

    Procedures for priority setting need to incorporate both scientific evidence and public values. The aim of this study was to test a model for priority setting which incorporates both scientific evidence and public values, to explore the use of evidence by a selection of stakeholders, and to study the reasons for the relative ranking of health care interventions in a setting of extreme resource scarcity. A systematic search for and assessment of relevant evidence for priority setting in a low-income country was performed, and a balance sheet was developed according to Eddy's explicit method. Eight group interviews (n = 85) used a modified nominal group technique for eliciting individual and group rankings of a given set of health interventions. The study procedure made it possible to compare the groups' rankings before and after all the evidence was provided to participants. A rank deviation is significant if the rank order of the same intervention differs by two or more points on the ordinal scale. A comparison between the initial rank and the final rank (before deliberation) showed a rank deviation of 67%. The difference between the initial rank and the final rank after discussion and voting gave a rank deviation of 78%. Evidence-based and deliberative decision-making does change priorities significantly in an experimental setting. Our use of the balance sheet method was meant as a demonstration project, but if properly developed it could be feasible for health planners, experts, and health workers, although more work is needed before it can be used with laypersons.
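
The rank-deviation measure can be sketched directly: an intervention counts as moved when its rank shifts by two or more places between two rankings, and the reported percentage is the moved fraction. The intervention labels below are illustrative, not the study's actual list:

```python
def rank_deviation(before, after, threshold=2):
    """Fraction of items whose rank changed by `threshold` or more
    places between two orderings of the same items."""
    pos_before = {item: i for i, item in enumerate(before)}
    pos_after = {item: i for i, item in enumerate(after)}
    moved = [x for x in before
             if abs(pos_before[x] - pos_after[x]) >= threshold]
    return len(moved) / len(before)

before = ["ITN", "IPT", "measles_vacc", "ORS", "ARV", "vitA"]
after  = ["IPT", "measles_vacc", "ITN", "vitA", "ORS", "ARV"]
rd = rank_deviation(before, after)
```

Here two of the six items move by two or more places, giving a rank deviation of about 33%; the study's 67% and 78% figures arise from the same computation over its own intervention list.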

  12. A SEMI-AUTOMATIC RULE SET BUILDING METHOD FOR URBAN LAND COVER CLASSIFICATION BASED ON MACHINE LEARNING AND HUMAN KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    H. Y. Gu

    2017-09-01

    A classification rule set, consisting of features and decision rules, is important for land cover classification. The selection of features and decision rules is usually based on an iterative trial-and-error approach, as often utilized in GEOBIA; however, this is time-consuming and generalizes poorly. This study puts forward a rule set building method for land cover classification based on human knowledge and machine learning. Machine learning is used to build rule sets efficiently, avoiding the iterative trial-and-error approach; human knowledge is used to remedy the insufficient use of prior knowledge in existing machine learning methods and to improve the versatility of the rule sets. A two-step workflow is introduced: first, an initial rule set is built based on Random Forest and a CART decision tree; second, the initial rule set is analyzed and validated against human knowledge, with thresholds determined using statistical confidence intervals. The test site is located in the city of Potsdam, and the TOP, DSM, and ground truth data were utilized. The results show that the method can determine a rule set for land cover classification semi-automatically, and that there are static features for the different land cover classes.

  13. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved quantum particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time overcurrent protection as an example. Reliability, selectivity, quick action, and flexibility of the relay protection are selected as the four requirements from which the optimization targets are established, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting value results of the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and that it is suitable for optimizing setting values in the relay protection of the whole power system.
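
The particle swarm family that the improved quantum variant builds on can be sketched in plain form. The objective below is a stand-in sphere function, not a real protection-setting cost, and the constants are conventional PSO defaults rather than the paper's:

```python
import random

def pso(objective, dim, bounds, n_particles=30, iters=200, seed=1):
    """Plain particle swarm optimization (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]                 # personal bests
    pbest_f = [objective(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            f = objective(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

best, best_f = pso(lambda p: sum(t * t for t in p), dim=3, bounds=(-5.0, 5.0))
```

In a setting-value application the position vector would hold the relay settings and the objective would penalize violations of reliability, selectivity, quick action, and flexibility; the quantum variant replaces the velocity update with a probabilistic position rule to improve the search ability the abstract reports.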

  14. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

    A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movement of the etched surface. Meanwhile, the Monte Carlo method, accelerated by a ray tracing algorithm, incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged and neutral particles are used to determine their contributions to the etching rate. Effects such as the scalloping effect and the lag effect are investigated in simulations and experiments. In addition, quantitative analyses are conducted to measure the simulation error. Finally, this simulator will serve as an accurate prediction tool for some MEMS fabrication processes.

  15. Methods of measurement of integral and differential linearity distortions of spectrometry sets

    International Nuclear Information System (INIS)

    Fuan, Jacques; Grimont, Bernard; Marin, Roland; Richard, Jean-Pierre

    1969-05-01

    The objective of this document is to describe different measurement methods and, more particularly, to present software for processing the obtained results in order to avoid interpretation by the investigator. In the first part, the authors define the parameters of integral and differential linearity, outline their importance in measurements performed by spectrometry, and describe the use of these parameters. In the second part, they propose various methods for measuring these linearity parameters, report experimental applications of these methods, and compare the obtained results.
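
For an analog-to-digital converter in a spectrometry chain, these two parameters are commonly defined from per-channel widths: differential nonlinearity (DNL) as each channel's relative width deviation, and integral nonlinearity (INL) as the running sum of those deviations. A minimal sketch with toy widths (the document's own definitions may differ in detail):

```python
def linearity(widths):
    """DNL and INL from measured per-channel widths, relative to the
    mean channel width (ideal = uniform channels)."""
    mean_w = sum(widths) / len(widths)
    dnl = [w / mean_w - 1.0 for w in widths]
    inl, acc = [], 0.0
    for d in dnl:
        acc += d
        inl.append(acc)        # cumulative deviation up to this channel
    return dnl, inl

widths = [1.0, 1.1, 0.9, 1.0]  # channel 1 is 10% wide, channel 2 is 10% narrow
dnl, inl = linearity(widths)
```

A wide channel followed by a narrow one cancels in the INL but not in the DNL, which is why spectrometry work reports both: DNL distorts individual peak areas, INL distorts the energy calibration.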

  16. The Expanded FindCore Method for Identification of a Core Atom Set for Assessment of Protein Structure Prediction

    Science.gov (United States)

    Snyder, David A.; Grullon, Jennifer; Huang, Yuanpeng J.; Tejero, Roberto; Montelione, Gaetano T.

    2014-01-01

    Maximizing the scientific impact of NMR-based structure determination requires robust and statistically sound methods for assessing the precision of NMR-derived structures. In particular, a method to define a core atom set for calculating superimpositions and validating structure predictions is critical to the use of NMR-derived structures as targets in the CASP competition. FindCore (D.A. Snyder and G.T. Montelione PROTEINS 2005;59:673–686) is a superimposition independent method for identifying a core atom set, and partitioning that set into domains. However, as FindCore optimizes superimposition by sensitively excluding not-well-defined atoms, the FindCore core may not comprise all atoms suitable for use in certain applications of NMR structures, including the CASP assessment process. Adapting the FindCore approach to assess predicted models against experimental NMR structures in CASP10 required modification of the FindCore method. This paper describes conventions and a standard protocol to calculate an “Expanded FindCore” atom set suitable for validation and application in biological and biophysical contexts. A key application of the Expanded FindCore method is to identify a core set of atoms in the experimental NMR structure for which it makes sense to validate predicted protein structure models. We demonstrate the application of this Expanded FindCore method in characterizing well-defined regions of 18 NMR-derived CASP10 target structures. The Expanded FindCore protocol defines “expanded core atom sets” that match an expert’s intuition of which parts of the structure are sufficiently well-defined to use in assessing CASP model predictions. We also illustrate the impact of this analysis on the CASP GDT assessment scores. PMID:24327305
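
The superimposition-independent idea behind FindCore can be sketched in miniature. The published method partitions a distance-variance matrix and refines domains iteratively; the toy version below (an assumption-laden simplification, not the FindCore algorithm) simply keeps atoms whose interatomic distances are stable across the conformer ensemble.

```python
# Sketch of a superimposition-independent "core" selection: an atom is
# core if its distances to other atoms vary little across the models.
import itertools
import statistics

def core_atoms(models, cutoff=1.0):
    """models: list of conformers, each a list of (x, y, z) per atom.
    Returns indices of atoms with low mean interatomic distance variance."""
    n = len(models[0])
    var_sum = [0.0] * n
    for i, j in itertools.combinations(range(n), 2):
        dists = [((m[i][0] - m[j][0])**2 + (m[i][1] - m[j][1])**2 +
                  (m[i][2] - m[j][2])**2) ** 0.5 for m in models]
        v = statistics.pvariance(dists)         # spread of this pair distance
        var_sum[i] += v
        var_sum[j] += v
    mean_var = [s / (n - 1) for s in var_sum]
    return [i for i, v in enumerate(mean_var) if v < cutoff]
```

For example, with two models in which atoms 0-2 are rigid and atom 3 moves, only atoms 0-2 survive the cutoff.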

  17. Application of activation methods on the Dubna experimental transmutation set-ups.

    Science.gov (United States)

    Stoulos, S; Fragopoulou, M; Adloff, J C; Debeauvais, M; Brandt, R; Westmeier, W; Krivopustov, M; Sosnin, A; Papastefanou, C; Zamani, M; Manolopoulou, M

    2003-02-01

    High spallation neutron fluxes were produced by irradiating massive heavy targets with proton beams in the GeV range. The experiments were performed at the Dubna High Energy Laboratory using the Nuclotron accelerator. Two different experimental set-ups were used to produce neutron spectra convenient for transmutation of radioactive waste by (n,x) reactions. Neutron spectra can be reproduced from activation measurements by means of theoretical analysis. Thermal-epithermal and fast-super-fast neutron fluxes were estimated using the 197Au and 238U (n,gamma) and (n,2n) reactions, respectively. Depleted uranium transmutation rates were also studied in both experiments.

  18. Approaches and methods for eutrophication target setting in the Baltic Sea region

    Energy Technology Data Exchange (ETDEWEB)

    Carstensen, J.; Andersen, J.; Dromph, K. [and others

    2013-06-01

    This report describes the outcome of the project 'Review of the ecological targets for eutrophication of the HELCOM BSAP', also known as HELCOM TARGREV. The objective of HELCOM TARGREV has been to revise the scientific basis underlying the ecological targets for eutrophication, placing much emphasis on providing a strengthened data and information basis for the setting of quantitative targets. The results are likely to form the information basis on which decisions will be made regarding the review and, if necessary, revision of the maximum allowable inputs (MAI) of nutrients under the Baltic Sea Action Plan, including the provisional country-wise allocation reduction targets (CART).

  19. Setting up Information Literacy Workshops in School Libraries: Imperatives, Principles and Methods

    Directory of Open Access Journals (Sweden)

    Reza Mokhtarpour

    2010-09-01

    Full Text Available While much professional literature has discussed at length the importance of information literacy in school libraries in the ICT-dominated era, few works have dealt with the nature and mode of its implementation or offered a road map. The strategy emphasized in this paper is to hold information literacy sessions through effective workshops. While explaining why such workshops are essential for enhancing information literacy skills, the paper presents the most important principles and stages for setting up such workshops in a step-by-step manner.

  20. Calculations of wavefunctions and energies of electron system in Coulomb potential by variational method without a basis set

    International Nuclear Information System (INIS)

    Bykov, V.P.; Gerasimov, A.V.

    1992-08-01

    A new variational method without a basis set for calculating the eigenvalues and eigenfunctions of Hamiltonians is suggested. An extension of this method to Coulomb potentials is given. The energy and charge distribution of the two-electron system are calculated for different values of the nuclear charge Z. It is shown that at small Z the Coulomb forces disintegrate the electron cloud into two clots. (author). 3 refs, 4 figs, 1 tab
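
The variational principle the record relies on can be shown with a textbook one-parameter example (this illustrates the principle only, not the paper's basis-set-free scheme). For a single electron in the Coulomb potential -Z/r (atomic units), the trial function exp(-a*r) gives the energy expectation E(a) = a^2/2 - Z*a, minimized at a = Z with E = -Z^2/2, the exact ground state.

```python
# One-parameter variational calculation for a hydrogen-like atom
# (atomic units). A simple grid scan stands in for a real minimizer.

def energy(a, Z):
    """<psi|H|psi>/<psi|psi> for the trial function exp(-a*r)."""
    return 0.5 * a * a - Z * a

def minimize(Z, a_lo=0.01, a_hi=5.0, n=100000):
    """Scan the variational parameter; return (E_min, a_min)."""
    step = (a_hi - a_lo) / n
    return min((energy(a_lo + k * step, Z), a_lo + k * step)
               for k in range(n + 1))

E, a = minimize(Z=1.0)   # expect E near -0.5 hartree at a near 1
```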

  1. Action research as a method for changing patient education practice in a clinical diabetes setting

    DEFF Research Database (Denmark)

    Voigt, Jane Rohde; Hansen, Ulla M.; Glindorf, Mette

    2014-01-01

    Action research is potentially a useful method for changing clinical practice by involving practitioners in the process of change. The aim of this study was to explore the utility of action research in bridging the gap between research and practice. Diabetes educators in collaboration with researchers developed and implemented a participatory, group-based diabetes education program in a diabetes clinic in the Danish health care system. The research process included a variety of qualitative methods: workshops, classroom observations, video recordings and semi-structured interviews. These methods aimed at obtaining contextual sensitivity, allowing dynamic interactions with educators and people with diabetes. Despite challenges, the study demonstrates how action research methods contribute to the development and change of diabetes education practice while simultaneously adding knowledge to the action...

  2. Group Decision-Making for Hesitant Fuzzy Sets Based on Characteristic Objects Method

    Directory of Open Access Journals (Sweden)

    Shahzad Faizi

    2017-07-01

    Full Text Available There are many real-life problems that, because of the need to involve a wide domain of knowledge, are beyond a single expert. This is especially true for complex problems. Therefore, it is usually necessary to allocate more than one expert to a decision process. In such situations, we can observe an increasing importance of uncertainty. In this paper, the Multi-Criteria Decision-Making (MCDM) method called the Characteristic Objects Method (COMET) is extended to solve problems of Multi-Criteria Group Decision-Making (MCGDM) in a hesitant fuzzy environment. It is a completely new idea for solving problems of group decision-making under uncertainty. In this approach, we use L-R-type Generalized Fuzzy Numbers (GFNs) to get the degree of hesitancy for an alternative under a certain criterion. Therefore, the classical COMET method was adapted to work with GFNs in group decision-making problems. The proposed extension is presented in detail, along with the necessary background information. Finally, an illustrative numerical example is provided to elaborate the proposed method with respect to the support of a decision process. The presented extension of the COMET method, as opposed to other group decision-making methods, is completely free of the rank reversal phenomenon, which is identified as one of the most important MCDM challenges.
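
The classical (crisp) COMET inference step the paper extends can be sketched as follows. Characteristic objects are the Cartesian product of characteristic values per criterion, an expert assigns each object a preference, and an alternative's score is the preference-weighted product of triangular memberships. The hesitant-fuzzy/GFN machinery of the paper is deliberately omitted; this is a minimal illustration only.

```python
# Minimal crisp COMET scoring sketch (not the paper's MCGDM extension).
from itertools import product

def tri(x, left, peak, right):
    """Triangular membership degree of x in (left, peak, right)."""
    if x == peak:
        return 1.0
    if left < x < peak:
        return (x - left) / (peak - left)
    if peak < x < right:
        return (right - x) / (right - peak)
    return 0.0

def comet_score(alt, char_values, preference):
    """alt: tuple of crisp criterion values; char_values: sorted characteristic
    values per criterion; preference: dict mapping each characteristic object
    (a tuple of characteristic values) to its expert-assigned preference."""
    score = 0.0
    for co in product(*char_values):
        mu = 1.0
        for x, cv, vals in zip(alt, co, char_values):
            i = vals.index(cv)
            left = vals[i - 1] if i > 0 else cv     # shoulder at domain edge
            right = vals[i + 1] if i < len(vals) - 1 else cv
            mu *= tri(x, left, cv, right)
        score += mu * preference[co]
    return score
```

With two criteria on [0, 1] and preferences 0, 0.5, 0.5, 1 for the four characteristic objects, the alternative (0.5, 0.5) scores 0.5 by bilinear interpolation.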

  3. Review of radiological scoring methods of osteoporotic vertebral fractures for clinical and research settings

    Energy Technology Data Exchange (ETDEWEB)

    Oei, Ling [Erasmus Medical Center, Department of Internal Medicine, Rotterdam (Netherlands); Erasmus Medical Center, Department of Epidemiology, P.O. Box 2040 Ee21-75, CA, Rotterdam (Netherlands); Netherlands Genomics Initiative (NGI)-sponsored Netherlands Consortium for Healthy Aging (NCHA), Rotterdam (Netherlands); Erasmus Medical Center, Departments of Internal Medicine and Epidemiology, P.O. Box 2040 Ee21-83, CA, Rotterdam (Netherlands); Rivadeneira, Fernando [Erasmus Medical Center, Department of Internal Medicine, Rotterdam (Netherlands); Erasmus Medical Center, Department of Epidemiology, P.O. Box 2040 Ee21-75, CA, Rotterdam (Netherlands); Netherlands Genomics Initiative (NGI)-sponsored Netherlands Consortium for Healthy Aging (NCHA), Rotterdam (Netherlands); Erasmus Medical Center, Departments of Internal Medicine and Epidemiology, P.O. Box 2040 Ee5-79, CA, Rotterdam (Netherlands); Ly, Felisia; Breda, Stephan J. [Erasmus Medical Center, Department of Internal Medicine, Rotterdam (Netherlands); Erasmus Medical Center, Department of Epidemiology, P.O. Box 2040 Ee21-75, CA, Rotterdam (Netherlands); Erasmus Medical Center, Departments of Internal Medicine and Epidemiology, P.O. Box 2040 Ee21-83, CA, Rotterdam (Netherlands); Zillikens, M.C. [Erasmus Medical Center, Department of Internal Medicine, Rotterdam (Netherlands); Netherlands Genomics Initiative (NGI)-sponsored Netherlands Consortium for Healthy Aging (NCHA), Rotterdam (Netherlands); Erasmus Medical Center, Department of Internal Medicine, ' s Gravendijkwal 230, CE, Rotterdam (Netherlands); Hofman, Albert [Erasmus Medical Center, Department of Epidemiology, P.O. Box 2040 Ee21-75, CA, Rotterdam (Netherlands); Netherlands Genomics Initiative (NGI)-sponsored Netherlands Consortium for Healthy Aging (NCHA), Rotterdam (Netherlands); Uitterlinden, Andre G. [Erasmus Medical Center, Department of Internal Medicine, Rotterdam (Netherlands); Erasmus Medical Center, Department of Epidemiology, P.O. 
Box 2040 Ee21-75, CA, Rotterdam (Netherlands); Netherlands Genomics Initiative (NGI)-sponsored Netherlands Consortium for Healthy Aging (NCHA), Rotterdam (Netherlands); Erasmus Medical Center, Departments of Internal Medicine and Epidemiology, P.O. Box 2040 Ee5-75B, CA, Rotterdam (Netherlands); Krestin, Gabriel P.; Oei, Edwin H.G. [Erasmus Medical Center, Department of Radiology, ' s Gravendijkwal 230, CE, Rotterdam (Netherlands)

    2013-02-15

    Osteoporosis is the most common metabolic bone disease; vertebral fractures are the most common osteoporotic fractures. Several radiological scoring methods using different criteria for osteoporotic vertebral fractures exist. Quantitative morphometry (QM) uses ratios derived from direct vertebral body height measurements to define fractures. Semi-quantitative (SQ) visual grading is performed according to height and area reduction. The algorithm-based qualitative (ABQ) method introduced a scheme to systematically rule out non-fracture deformities and diagnoses osteoporotic vertebral fractures based on endplate depression. The concordance across methods is currently a matter of debate. This article reviews the most commonly applied standardised radiographic scoring methods for osteoporotic vertebral fractures, attaining an impartial perspective of benefits and limitations. It provides image examples and discusses aspects that facilitate large-scale application, such as automated image analysis software and different imaging investigations. It also reviews the implications of different fracture definitions for scientific research and clinical practice. Accurate standardised scoring methods for assessing osteoporotic vertebral fractures are crucial, considering that differences in definition will have implications for patient care and scientific research. Evaluation of the feasibility and concordance among methods will allow establishing their benefits and limitations, and most importantly, optimise their effectiveness for widespread application. (orig.)
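
The quantitative-morphometry (QM) idea reviewed above can be sketched numerically: anterior, middle, and posterior vertebral body heights are measured and their ratios graded with threshold criteria. The thresholds below follow the commonly cited Genant 20/25/40 % convention; as the review stresses, exact criteria differ between published methods, so this is an illustration, not any one method.

```python
# Height-ratio grading sketch for osteoporotic vertebral fractures.

def qm_grade(ha, hm, hp):
    """ha, hm, hp: anterior, middle, posterior heights (same units).
    Returns (deformity type, grade 0-3) from the worst height ratio."""
    ratios = {"wedge": ha / hp,            # anterior loss
              "biconcave": hm / hp,        # mid-body (endplate) loss
              "crush": hp / max(ha, hm)}   # posterior loss
    kind, r = min(ratios.items(), key=lambda kv: kv[1])
    loss = 1.0 - r
    if loss < 0.20:
        return ("normal", 0)
    if loss < 0.25:
        return (kind, 1)   # mild
    if loss < 0.40:
        return (kind, 2)   # moderate
    return (kind, 3)       # severe

# e.g. ~30 % anterior height loss relative to posterior -> moderate wedge
```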

  4. Review of radiological scoring methods of osteoporotic vertebral fractures for clinical and research settings

    International Nuclear Information System (INIS)

    Oei, Ling; Rivadeneira, Fernando; Ly, Felisia; Breda, Stephan J.; Zillikens, M.C.; Hofman, Albert; Uitterlinden, Andre G.; Krestin, Gabriel P.; Oei, Edwin H.G.

    2013-01-01

    Osteoporosis is the most common metabolic bone disease; vertebral fractures are the most common osteoporotic fractures. Several radiological scoring methods using different criteria for osteoporotic vertebral fractures exist. Quantitative morphometry (QM) uses ratios derived from direct vertebral body height measurements to define fractures. Semi-quantitative (SQ) visual grading is performed according to height and area reduction. The algorithm-based qualitative (ABQ) method introduced a scheme to systematically rule out non-fracture deformities and diagnoses osteoporotic vertebral fractures based on endplate depression. The concordance across methods is currently a matter of debate. This article reviews the most commonly applied standardised radiographic scoring methods for osteoporotic vertebral fractures, attaining an impartial perspective of benefits and limitations. It provides image examples and discusses aspects that facilitate large-scale application, such as automated image analysis software and different imaging investigations. It also reviews the implications of different fracture definitions for scientific research and clinical practice. Accurate standardised scoring methods for assessing osteoporotic vertebral fractures are crucial, considering that differences in definition will have implications for patient care and scientific research. Evaluation of the feasibility and concordance among methods will allow establishing their benefits and limitations, and most importantly, optimise their effectiveness for widespread application. (orig.)

  5. Evaluating patient care communication in integrated care settings: application of a mixed method approach in cerebral palsy programs

    NARCIS (Netherlands)

    Gulmans, J.; Gulmans, J.; Vollenbroek-Hutten, Miriam Marie Rosé; van Gemert-Pijnen, Julia E.W.C.; van Harten, Willem H.

    2009-01-01

    Objective. In this study, we evaluated patient care communication in the integrated care setting of children with cerebral palsy in three Dutch regions in order to identify relevant communication gaps experienced by both parents and involved professionals. - Design. A three-step mixed method

  6. Comparison of some dispersion-corrected and traditional functionals with CCSD(T) and MP2 ab initio methods: Dispersion, induction, and basis set superposition error

    Science.gov (United States)

    Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.

    2012-10-01

    We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge-transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parametrized to treat dispersion. We observed significant differences in high-level wavefunction calculations in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energy for fitting the parameters rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states.
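
The basis set superposition error (BSSE) discussed in this record is commonly estimated with the Boys-Bernardi counterpoise correction, in which each monomer is recomputed in the full dimer basis. The bookkeeping is simple arithmetic; the energies in the usage example are invented numbers purely to show the sign conventions.

```python
# Counterpoise bookkeeping for a dimer AB (all energies in hartree).

def counterpoise(E_AB, E_A_dimer_basis, E_B_dimer_basis, E_A, E_B):
    """Return (raw interaction energy, CP-corrected interaction energy, BSSE)."""
    dE_raw = E_AB - E_A - E_B                        # uncorrected
    dE_cp = E_AB - E_A_dimer_basis - E_B_dimer_basis  # counterpoise-corrected
    bsse = dE_cp - dE_raw   # positive: the raw result was artificially overbound
    return dE_raw, dE_cp, bsse

# hypothetical illustration: monomers gain 0.0020 + 0.0015 hartree of
# spurious stabilization from the partner's basis functions
raw, cp, bsse = counterpoise(-153.1070, -76.5520, -76.5515, -76.5500, -76.5500)
```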

  7. Comparison of methods of alert acknowledgement by critical care clinicians in the ICU setting

    Directory of Open Access Journals (Sweden)

    Andrew M. Harrison

    2017-03-01

    Full Text Available Background Electronic Health Record (EHR)-based sepsis alert systems have failed to demonstrate improvements in clinically meaningful endpoints. However, the effect of implementation barriers on the success of new sepsis alert systems is rarely explored. Objective To test the hypothesis that time to severe sepsis alert acknowledgement by critical care clinicians in the ICU setting would be reduced using an EHR-based alert acknowledgement system compared to a text paging-based system. Study Design In one arm of this simulation study, real alerts for patients in the medical ICU were delivered to critical care clinicians through the EHR. In the other arm, simulated alerts were delivered through text paging. The primary outcome was time to alert acknowledgement. The secondary outcomes were a structured, mixed quantitative/qualitative survey and an informal group interview. Results The alert acknowledgement rate was 3% (N = 148) for the severe sepsis alert system and 51% (N = 156) for simulated severe sepsis alerts through traditional text paging. Time to alert acknowledgement was a median of 274 min (N = 5) for the severe sepsis alert system and a median of 2 min (N = 80) for text paging. The response rate from the EHR-based alert system was insufficient to compare primary measures. However, secondary measures revealed important barriers. Conclusion Alert fatigue, interruption, human error, and information overload are barriers to alert and simulation studies in the ICU setting.

  8. On Time with Minimal Expected Cost!

    DEFF Research Database (Denmark)

    David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand

    2014-01-01

    (Priced) timed games are two-player quantitative games involving an environment assumed to be completely antagonistic. Classical analysis consists in the synthesis of strategies ensuring safety, time-bounded or cost-bounded reachability objectives. Assuming a randomized environment, the (priced) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing reachability strategies that will both ensure worst-case time bounds as well as provide (near-)minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, and exhibits several orders of magnitude improvements w...
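
The expected-cost side of the objective above can be sketched with standard value iteration for minimum expected cost to reach a goal state in a small finite MDP. The paper's contribution, combining this with worst-case time-bound guarantees on an infinite-state priced timed game, is far beyond this toy example.

```python
# Value iteration for minimum expected reachability cost in a finite MDP.

def value_iteration(states, actions, goal, sweeps=1000):
    """actions[s] -> list of (cost, [(prob, next_state), ...]).
    Returns dict of minimal expected cost-to-goal per state."""
    V = {s: 0.0 if s == goal else float("inf") for s in states}
    for _ in range(sweeps):
        for s in states:
            if s == goal:
                continue
            # Bellman update: best action = immediate cost + expected future cost
            V[s] = min((c + sum(p * V[t] for p, t in dist)
                        for c, dist in actions[s]), default=float("inf"))
    return V
```

For example, from state "a" one action costs 1 and reaches the goal with probability 0.5 (else stays in "a"), while another costs 3 and reaches the goal surely; the optimal expected cost solves V = 1 + 0.5*V, i.e. V("a") = 2.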

  9. Using digital photography in a clinical setting: a valid, accurate, and applicable method to assess food intake.

    Science.gov (United States)

    Winzer, Eva; Luger, Maria; Schindler, Karin

    2018-06-01

    Regular monitoring of food intake is rarely integrated into clinical routine. Therefore, the aim was to examine the validity, accuracy, and applicability of an appropriate, quick, and easy-to-use tool for recording food intake in a clinical setting. Two digital photography methods, the postMeal method with a picture after the meal and the pre-postMeal method with a picture before and after the meal, and the visual estimation method (plate diagram; PD) were compared against the reference method (weighed food records; WFR). A total of 420 dishes from lunch (7 weeks) were estimated with both photography methods and the visual method. Validity, applicability, accuracy, and precision of the estimation methods, and additionally food waste, macronutrient composition, and energy content were examined. Tests of validity revealed stronger correlations for the photography methods (postMeal: r = 0.971, p < 0.001; pre-postMeal: r = 0.995, p < 0.001) than for the visual estimation method (r = 0.810; p < 0.001). The pre-postMeal method showed smaller variability (bias < 1 g) and also smaller overestimation and underestimation; it accurately and precisely estimated portion sizes in all food items. Furthermore, total food waste was 22% for lunch over the study period. The highest food waste was observed in salads and the lowest in desserts. The pre-postMeal digital photography method is valid, accurate, and applicable for monitoring food intake in a clinical setting, enabling quantitative and qualitative dietary assessment. Thus, nutritional care might be initiated earlier. The method might also be advantageous for quantitative and qualitative evaluation of food waste, with a resulting reduction in costs.
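
The validity measures this record reports (correlation against weighed records, and a bias below 1 g) can be computed for any estimation method as follows. The portion-weight data in the example are invented, purely to show the calculation.

```python
# Pearson correlation and mean bias of estimated vs. weighed intake (grams).

def validate(estimated, weighed):
    """Return (Pearson r, mean bias = estimated minus weighed)."""
    n = len(estimated)
    me = sum(estimated) / n
    mw = sum(weighed) / n
    cov = sum((e - me) * (w - mw) for e, w in zip(estimated, weighed))
    se = sum((e - me) ** 2 for e in estimated) ** 0.5
    sw = sum((w - mw) ** 2 for w in weighed) ** 0.5
    r = cov / (se * sw)
    bias = sum(e - w for e, w in zip(estimated, weighed)) / n
    return r, bias

# hypothetical portion weights for four dishes
r, bias = validate([100, 150, 210, 300], [102, 148, 205, 305])
```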

  10. Comparing two sampling methods to engage hard-to-reach communities in research priority setting

    Directory of Open Access Journals (Sweden)

    Melissa A. Valerio

    2016-10-01

    Full Text Available Abstract Background Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. Methods In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method, or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention as well as priorities from both communities' stakeholders on mean ratings of their ideas based on importance and feasibility for implementation in their community. Results Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85 %) consented, 52 (95 %) attended the first meeting, and 36 (65 %) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90 %) consented, 36 (58 %) attended the first meeting, and 26 (42 %) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). Despite differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for management of chronic pain. Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), which was higher for the purposive/convenience sampling group, and for city improvements/transportation services (P = 0.004), which was higher for the snowball sampling group.

  11. Difficulties experienced in setting and achieving goals by participants of a falls prevention programme: a mixed-methods evaluation.

    Science.gov (United States)

    Haas, Romi; Mason, Wendy; Haines, Terry P

    2014-01-01

    To evaluate the ability of participants of a falls prevention programme to set and achieve goals. The study used a prospective longitudinal design and a mixed-methods approach to data collection. Study participants were (1) 220 older adults participating in a 15-week combined exercise and education falls prevention programme and (2) 9 practitioners (3 home-care nurses, 5 community workers, and an exercise physiologist) involved in delivering the programme. Data from goal-setting forms were analyzed, and descriptive statistics were used to determine the number of appropriate goals set and achieved. Data were analyzed according to programme setting (home- or group-based) and whether or not participants were classified as being from a Culturally and Linguistically Diverse (CALD) background in the Australian context. Semi-structured interviews with programme practitioners were thematically analyzed. A total of 144 respondents (n=75 CALD group, n=41 non-CALD group, n=6 CALD home, n=22 non-CALD home) set 178 goals. Only 101 (57%) goals could be evaluated according to achievement, because participants set goals that focused on health state instead of behaviour, set goals not relevant to falls prevention, used inappropriate constructs to measure goal achievement, and either did not review their goals or dropped out of the programme before goal review. Of these 101 goals, 64 were achieved. Practitioners described their own difficulties in understanding the process of setting health behaviour goals along with communication, cultural, and logistic difficulties. Both CALD and non-CALD participants and those participating in both group- and home-based programmes experienced difficulty in setting and achieving goals to facilitate behaviour change for falls prevention. 
Data suggest that home-based participants had more difficulty in setting goals than their group-based counterparts and, to a lesser extent, that CALD participants experienced more difficulty in setting goals than

  12. Alternative Method for the Mass Rearing of Lutzomyia (Lutzomyia) cruzi (Diptera: Psychodidae) in a Laboratory Setting.

    Science.gov (United States)

    Oliveira, E F; Fernandes, W S; Oshiro, E T; Oliveira, A G; Galati, E A B

    2015-09-01

    The understanding of the transmission dynamics of Leishmania spp. Ross as well as the epidemiology and spread of leishmaniasis is related to parasite-vector-host interactions. These interactions can be studied using specimens of a sand fly population reared in the laboratory, exposing individuals to experimental infection for the investigation of vector competence and parameters of the vectorial capacity of the species. The present study sought to describe an alternative method for the implantation of a Lutzomyia (Lutzomyia) cruzi colony with wild specimens captured in the municipality of Corumbá, Brazil. With Method 1, engorged females were individualized for oviposition. The eggs were transferred to an acrylic petri dish with a layer of plaster on the bottom, on which food was placed after hatching of the first larvae. With Method 2, females were kept in groups for oviposition in containers, in which soil and food were placed on their bottom for the larvae. In addition, the exposure time of the larvae to light was reduced in comparison with Method 1. With Method 2, a significantly greater number of specimens of Lu. cruzi was obtained. The ratio between the number of emerged adults and the females followed for oviposition was 0.42 with Method 1 and 2.75 with Method 2. The optimization of the rearing conditions for Lu. cruzi will enable the establishment of a colony providing a sufficient number of specimens to develop experimental infection by Leishmania as well as vectorial competence and some parameters of the vectorial capacity of this sand fly. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Comparing two sampling methods to engage hard-to-reach communities in research priority setting.

    Science.gov (United States)

    Valerio, Melissa A; Rodriguez, Natalia; Winkler, Paula; Lopez, Jaime; Dennison, Meagen; Liang, Yuanyuan; Turner, Barbara J

    2016-10-28

    Effective community-partnered and patient-centered outcomes research needs to address community priorities. However, optimal sampling methods to engage stakeholders from hard-to-reach, vulnerable communities to generate research priorities have not been identified. In two similar rural, largely Hispanic communities, a community advisory board guided recruitment of stakeholders affected by chronic pain using a different method in each community: 1) snowball sampling, a chain-referral method, or 2) purposive sampling to recruit diverse stakeholders. In both communities, three groups of stakeholders attended a series of three facilitated meetings to orient, brainstorm, and prioritize ideas (9 meetings/community). Using mixed methods analysis, we compared stakeholder recruitment and retention as well as priorities from both communities' stakeholders on mean ratings of their ideas based on importance and feasibility for implementation in their community. Of 65 eligible stakeholders in one community recruited by snowball sampling, 55 (85 %) consented, 52 (95 %) attended the first meeting, and 36 (65 %) attended all 3 meetings. In the second community, the purposive sampling method was supplemented by convenience sampling to increase recruitment. Of 69 stakeholders recruited by this combined strategy, 62 (90 %) consented, 36 (58 %) attended the first meeting, and 26 (42 %) attended all 3 meetings. Snowball sampling recruited more Hispanics and disabled persons (all P < 0.05). Despite differing recruitment strategies, stakeholders from the two communities identified largely similar ideas for research, focusing on non-pharmacologic interventions for management of chronic pain. Ratings on importance and feasibility for community implementation differed only on the importance of massage services (P = 0.045), which was higher for the purposive/convenience sampling group, and for city improvements/transportation services (P = 0.004), which was higher for the snowball sampling group. In each of the two similar hard-to-reach communities, a community advisory board partnered with researchers

  14. A set packing inspired method for real-time junction train routing

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    2013-01-01

    Efficiently coordinating the often large number of interdependent, timetabled train movements on a railway junction, while satisfying a number of operational requirements, is one of the most important problems faced by a railway company. The most critical variant of the problem arises on a daily basis at major railway junctions where disruptions to rail traffic make the planned schedule/routing infeasible and rolling stock planners are forced to re-schedule/re-route trains in order to recover feasibility. The dynamic nature of the problem means that good solutions must be obtained quickly. In this paper we describe a set packing inspired formulation of this problem and develop a branch-and-price based solution approach. A real life test instance arising in Germany and supplied by the major German railway company, Deutsche Bahn, indicates the efficiency of the proposed approach by confirming...
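
The set packing view of junction routing can be sketched in miniature: each candidate route (a train-route pair with the track sections it occupies) is a set of resources, and the task is to pick the largest conflict-free collection. The paper solves this with branch-and-price; the brute force below (with invented route and section names) is viable only for toy instances.

```python
# Toy maximum set packing over candidate train routes.
from itertools import combinations

def disjoint(combo):
    """True if no two routes in the combination share a track section."""
    used = set()
    for _, res in combo:
        if used & res:
            return False
        used |= res
    return True

def best_packing(routes):
    """routes: list of (name, set of occupied sections).
    Returns the names in a largest conflict-free subset."""
    best = ()
    for r in range(1, len(routes) + 1):
        for combo in combinations(routes, r):
            if disjoint(combo) and r > len(best):
                best = combo
    return [name for name, _ in best]
```

Here "t2-west" conflicts with both its neighbours, so the optimum packs the other three routes.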

  15. A Set Packing Inspired Method for Real-Time Junction Train Routing

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Larsen, Jesper; Ehrgott, Matthias

    Efficiently coordinating the often large number of interdependent, timetabled train movements on a railway junction, while satisfying a number of operational requirements, is one of the most important problems faced by a railway company. The most critical variant of the problem arises on a daily basis at major railway junctions where disruptions to rail traffic make the planned schedule/routing infeasible and rolling stock planners are forced to reschedule/re-route trains in order to recover feasibility. The dynamic nature of the problem means that good solutions must be obtained quickly. In this paper we describe a set packing inspired formulation of this problem and develop a branch-and-price based solution approach. A real life test instance arising in Germany and supplied by the major German railway company, Deutsche Bahn, indicates the efficiency of the proposed approach by confirming...

  16. Local and global recoding methods for anonymizing set-valued data

    KAUST Repository

    Terrovitis, Manolis

    2010-06-10

    In this paper, we study the problem of protecting privacy in the publication of set-valued data. Consider a collection of supermarket transactions that contains detailed information about items bought together by individuals. Even after removing all personal characteristics of the buyer, which can serve as links to his identity, the publication of such data is still subject to privacy attacks from adversaries who have partial knowledge about the set. Unlike most previous works, we do not distinguish data as sensitive and non-sensitive, but we consider them both as potential quasi-identifiers and potential sensitive data, depending on the knowledge of the adversary. We define a new version of the k-anonymity guarantee, the k^m-anonymity, to limit the effects of the data dimensionality, and we propose efficient algorithms to transform the database. Our anonymization model relies on generalization instead of suppression, which is the most common practice in related works on such data. We develop an algorithm that finds the optimal solution, however, at a high cost that makes it inapplicable for large, realistic problems. Then, we propose a greedy heuristic, which performs generalizations in an Apriori, level-wise fashion. The heuristic scales much better and in most of the cases finds a solution close to the optimal. Finally, we investigate the application of techniques that partition the database and perform anonymization locally, aiming at the reduction of the memory consumption and further scalability. A thorough experimental evaluation with real datasets shows that a vertical partitioning approach achieves excellent results in practice. © 2010 Springer-Verlag.
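The k^m-anonymity guarantee described above requires that any combination of up to m items an adversary might know appears in at least k transactions. A minimal sketch of a checker for this guarantee (this is not the paper's Apriori-style generalization algorithm; the function name and the toy data are illustrative):

```python
from itertools import combinations
from collections import Counter

def km_anonymity_violations(transactions, k, m):
    """Return item combinations of size <= m that occur in fewer than k
    transactions, i.e. the combinations that break k^m-anonymity."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for size in range(1, m + 1):
            for combo in combinations(items, size):
                counts[combo] += 1
    return {combo: n for combo, n in counts.items() if n < k}

transactions = [
    {"milk", "bread"},
    {"milk", "bread"},
    {"milk", "beer"},
]
# With k=2, m=2: any combination involving "beer" occurs only once,
# so an adversary who knows a buyer purchased beer can re-identify them.
violations = km_anonymity_violations(transactions, k=2, m=2)
```

Generalization (e.g. replacing "beer" with a broader category such as "beverage" shared by more transactions) is then applied until no violations remain.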

  17. Extending the charge-flipping method towards structure solution from incomplete data sets

    Czech Academy of Sciences Publication Activity Database

    Palatinus, Lukáš; Steurer, W.; Chapuis, G.

    2007-01-01

    Roč. 40, - (2007), s. 456-462 ISSN 0021-8898 Institutional research plan: CEZ:AV0Z10100521 Keywords : ab initio structure solution * density modification * maximum entropy method * intensity extrapolation Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 3.629, year: 2007

  18. A new method for fatigue life prediction based on the Thick Level Set approach

    NARCIS (Netherlands)

    Voormeeren, L.O.; van der Meer, F.P.; Maljaars, J.; Sluys, L.J.

    2017-01-01

    The last decade has seen a growing interest in cohesive zone models for fatigue applications. These cohesive zone models often suffer from a lack of generality and applying them typically requires calibrating a large number of model-specific parameters. To improve on these issues a new method has

  19. A new method for fatigue life prediction based on the Thick Level set approach

    NARCIS (Netherlands)

    Voormeeren, L.O.; Meer, F.P. van der; Maljaars, J.; Sluys, L.J.

    2017-01-01

    The last decade has seen a growing interest in cohesive zone models for fatigue applications. These cohesive zone models often suffer from a lack of generality and applying them typically requires calibrating a large number of model-specific parameters. To improve on these issues a new method has

  20. Teaching to Think: Applying the Socratic Method outside the Law School Setting

    Science.gov (United States)

    Peterson, Evan

    2009-01-01

    An active learning process has the potential to provide students with educational benefits above and beyond what they might receive from more traditional, passive approaches. The Socratic Method is a unique approach to active learning that facilitates critical thinking, open-mindedness, and teamwork. By posing a series of guided questions to students, an…

  1. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    Science.gov (United States)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
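The fire-perimeter evolution described above solves the level-set equation φ_t + R|∇φ| = 0, where the perimeter is the zero contour of φ and R is the local rate of spread. A deliberately simplified, first-order upwind sketch of one time step may clarify the mechanics (the paper's WRF-Fire algorithm uses fifth-order WENO and third-order Runge-Kutta instead; the grid size, rate, and time step here are illustrative):

```python
import numpy as np

def advance_front(phi, rate, dx, dt):
    """One first-order step of phi_t + rate * |grad phi| = 0 using Godunov
    upwinding for an outward-propagating front (rate >= 0). Boundaries are
    treated periodically via np.roll for simplicity."""
    dpx = (np.roll(phi, -1, axis=0) - phi) / dx   # forward difference in x
    dmx = (phi - np.roll(phi, 1, axis=0)) / dx    # backward difference in x
    dpy = (np.roll(phi, -1, axis=1) - phi) / dx
    dmy = (phi - np.roll(phi, 1, axis=1)) / dx
    grad = np.sqrt(
        np.maximum(np.maximum(dmx, 0.0), np.maximum(-dpx, 0.0)) ** 2
        + np.maximum(np.maximum(dmy, 0.0), np.maximum(-dpy, 0.0)) ** 2
    )
    return phi - dt * rate * grad

# Signed-distance initialisation: circular fire perimeter of radius 10 cells.
n, dx = 64, 1.0
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt((X - 32.0) ** 2 + (Y - 32.0) ** 2) - 10.0
phi = advance_front(phi, rate=1.0, dx=dx, dt=0.5)  # front expands by ~0.5 cells
```

Because φ is a signed distance (|∇φ| = 1), each step shifts the perimeter outward by rate·dt; the abstract's point is that low-order schemes like this one accumulate significant rate-of-spread error on realistic grids.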

  2. Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method

    Science.gov (United States)

    Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao

    2016-09-01

    To provide an accurate surface defect inspection method and make the automation of a robust image region-of-interest (ROI) delineation strategy a reality on the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system extends to surface quality inspection for strip, billet, slab surfaces, and so on. In this work we take into account the complementary advantages of two common machine vision (MV) systems: line array CCD traditional scanning imaging (LS-imaging) and area array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively; the model introduces upper and lower approximation sets for ROI definition, by which the boundary region can be delineated through an RFC region competitive classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, allowing AI algorithms and powerful ROI delineation strategies to be applied automatically in the MV inspection field.

  3. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, Po Jen; Lai, S. K., E-mail: sklai@coll.phy.ncu.edu.tw [Complex Liquids Laboratory, Department of Physics, National Central University, Chungli 320, Taiwan and Molecular Science and Technology Program, Taiwan International Graduate Program, Academia Sinica, Taipei 115, Taiwan (China); Rapallo, Arnaldo [Istituto per lo Studio delle Macromolecole (ISMAC) Consiglio Nazionale delle Ricerche (CNR), via E. Bassini 15, C.A.P 20133 Milano (Italy)

    2014-03-14

    Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit

  4. Peptide dynamics by molecular dynamics simulation and diffusion theory method with improved basis sets

    International Nuclear Information System (INIS)

    Hsu, Po Jen; Lai, S. K.; Rapallo, Arnaldo

    2014-01-01

    Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strength of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises if the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and with longer persistence length, which have different local and global dynamical properties against the above-mentioned systems. Important examples of this kind of molecular chains are the proteins, so that a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter is given considerable attention in the literature and studied also as an isolated fragment in water solution where both experimental and theoretical efforts have indicated the propensity of the system to form β turns or α helices, but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit

  5. Career Expectations of Accounting Students

    Science.gov (United States)

    Elam, Dennis; Mendez, Francis

    2010-01-01

    The demographic make-up of accounting students is dramatically changing. This study sets out to measure how well the profession is ready to accommodate what may be very different needs and expectations of this new generation of students. Non-traditional students are becoming more and more of a tradition in the current college classroom.…

  6. LAMP-B: a Fortran program set for the lattice cell analysis by collision probability method

    International Nuclear Information System (INIS)

    Tsuchihashi, Keiichiro

    1979-02-01

    Nature of physical problem solved: LAMP-B solves an integral transport equation by the collision probability method for a wide variety of lattice cell geometries: spherical, plane and cylindrical lattice cells; square and hexagonal arrays of pin rods; annular clusters and square clusters. LAMP-B produces homogenized constants for multi- and/or few-group diffusion theory programs. Method of solution: LAMP-B performs an exact numerical integration to obtain the collision probabilities. Restrictions on the complexity of the problem: Not more than 68 groups in the fast group calculation, and not more than 20 regions in the resonance integral calculation. Typical running time: It varies with the number of energy groups and the selection of the geometry. Unusual features of the program: Any constituent subprogram, or any combination of them, can be used, so partial use of this program is available. (author)

  7. High Energy Beam Impacts on Beam Intercepting Devices: Advanced Numerical Methods and Experimental Set-up

    CERN Document Server

    Bertarelli, A; Carra, F; Cerutti, F; Dallocchio, A; Mariani, N; Timmins, M; Peroni, L; Scapin, M

    2011-01-01

    Beam Intercepting Devices are potentially exposed to severe accidental events triggered by direct impacts of energetic particle beams. State-of-the-art numerical methods are required to simulate the behaviour of affected components. A review of the different dynamic response regimes is presented, along with an indication of the most suited tools to treat each of them. The consequences on LHC tungsten collimators of a number of beam abort scenarios were extensively studied, resorting to a novel category of numerical explicit methods, named Hydrocodes. Full shower simulations were performed providing the energy deposition distribution. Structural dynamics and shock wave propagation analyses were carried out with varying beam parameters, identifying important thresholds for collimator operation, ranging from the onset of permanent damage up to catastrophic failure. Since the main limitation of these tools lies in the limited information available on constitutive material models under extreme conditions, a dedica...

  8. High Energy Beam Impacts on Beam Intercepting Devices: Advanced Numerical Methods and Experimental Set-Up

    CERN Document Server

    Bertarelli, A; Carra, F; Cerutti, F; Dallocchio, A; Mariani, N; Timmins, M; Peroni, L; Scapin, M

    2011-01-01

    Beam Intercepting Devices are potentially exposed to severe accidental events triggered by direct impacts of energetic particle beams. State-of-the-art numerical methods are required to simulate the behaviour of affected components. A review of the different dynamic response regimes is presented, along with an indication of the most suited tools to treat each of them. The consequences on LHC tungsten collimators of a number of beam abort scenarios were extensively studied, resorting to a novel category of numerical explicit methods, named Hydrocodes. Full shower simulations were performed providing the energy deposition distribution. Structural dynamics and shock wave propagation analyses were carried out with varying beam parameters, identifying important thresholds for collimator operation, ranging from the onset of permanent damage up to catastrophic failure. Since the main limitation of these tools lies in the limited information available on constitutive material models under extreme conditions, a dedica...

  9. Multiscale optical simulation settings: challenging applications handled with an iterative ray-tracing FDTD interface method.

    Science.gov (United States)

    Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian

    2016-03-20

    We show that with an appropriate combination of two optical simulation techniques, classical ray-tracing and the finite-difference time-domain (FDTD) method, an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.

  10. A method for manufacturing superior set yogurt under reduced oxygen conditions.

    Science.gov (United States)

    Horiuchi, H; Inoue, N; Liu, E; Fukui, M; Sasaki, Y; Sasaki, T

    2009-09-01

    The yogurt starters Lactobacillus delbrueckii ssp. bulgaricus and Streptococcus thermophilus are well-known facultatively anaerobic bacteria that can grow in oxygenated environments. We found that they removed dissolved oxygen (DO) in a yogurt mix as the fermentation progressed and that they began to produce acid actively after the DO concentration in the yogurt mix was reduced to 0 mg/kg, suggesting that the DO retarded the production of acid. Yogurt fermentation was carried out at 43 or 37 degrees C both after the DO reduction treatment and without prior treatment. Nitrogen gas was mixed and dispersed into the yogurt mix after inoculation with yogurt starter culture to reduce the DO concentration in the yogurt mix. The treatment that reduced DO concentration in the yogurt mix to approximately 0 mg/kg beforehand caused the starter culture LB81 used in this study to enter into the exponential growth phase earlier. Furthermore, the combination of reduced DO concentration in the yogurt mix beforehand and incubation at a lower temperature (37 degrees C) resulted in a superior set yogurt with a smooth texture and strong curd structure.

  11. Molecular-based mycobacterial identification in a clinical laboratory setting: a comparison of two methods.

    LENUS (Irish Health Repository)

    O'Donnell, N

    2012-01-01

    Many mycobacterial species are pathogenic to humans, with infection occurring worldwide. Infection with Mycobacterium tuberculosis is a well-described global phenomenon, but other mycobacterial species are increasingly shown to be the cause of both pulmonary and extrapulmonary infection and are managed differently from M. tuberculosis infection. Rapid and accurate differentiation of mycobacterial species is, therefore, critical to guide timely and appropriate therapeutic and public health management. This study evaluates two commercially available DNA strip assays, the Genotype Common Mycobacteria (CM) assay (Hain Lifescience, Nehren, Germany) and the Speed-oligo Mycobacteria assay (Vircell, Spain) for their usefulness in a clinical laboratory setting. Both assays were evaluated on 71 clinical mycobacterial isolates, previously identified using Gen-Probe AccuProbe and through a UK mycobacteriology reference laboratory, as well as 29 non-mycobacterial isolates. Concordant results were obtained for 98% of isolates using both assays. The sensitivity was 97% (95% confidence interval [CI]: 93.3-100%) for the CM assay and 98.6% (95% CI: 95.9-100%) for the Speed-oligo assay. Overall, both assays proved to be useful tools for rapid and sensitive mycobacterial species identification, although interpretation of results was easier with the CM assay. Finally, results were available within one day, compared to current identification times which range between seven days and four weeks.

  12. The Ethics of Setting Course Expectations to Manipulate Student Evaluations of Teaching Effectiveness in Higher Education: An Examination of the Ethical Dilemmas Created by the Use of SETEs and a Proposal for Further Study and Analysis

    Science.gov (United States)

    Neal, Catherine S.; Elliott, Teressa

    2009-01-01

    Because student evaluations of teaching effectiveness (SETEs) are an important and widely used tool in the evaluation and reward systems for faculty members in higher education, a discussion and analysis of the ethical problems that may arise as a result of the conflict created by expectations of performance is provided. This discussion…

  13. Assessing the reliability of the borderline regression method as a standard setting procedure for objective structured clinical examination

    Directory of Open Access Journals (Sweden)

    Sara Mortaz Hejri

    2013-01-01

    Background: One of the methods used for standard setting is the borderline regression method (BRM). This study aims to assess the reliability of the BRM when the pass-fail standard in an objective structured clinical examination (OSCE) is calculated by averaging the BRM standards obtained for each station separately. Materials and Methods: In nine OSCE stations with direct observation, the examiners gave each student a checklist score and a global score. Using a linear regression model for each station, we calculated the checklist score cut-off from the regression equation, with the global scale cut-off set at 2. The OSCE pass-fail standard was defined as the average of all stations' standards. To determine the reliability, the root mean square error (RMSE) was calculated. The R2 coefficient and the inter-grade discrimination were calculated to assess the quality of the OSCE. Results: The mean total test score was 60.78. The OSCE pass-fail standard and its RMSE were 47.37 and 0.55, respectively. The R2 coefficients ranged from 0.44 to 0.79. The inter-grade discrimination score varied greatly among stations. Conclusion: The RMSE of the standard was very small, indicating that the BRM is a reliable method of setting a standard for an OSCE, with the advantage of providing data for quality assurance.
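The borderline regression calculation described in the abstract can be sketched as follows: for each station, regress checklist scores on global ratings and read off the checklist score predicted at the borderline global rating (2 in this study); the test-level standard is then the mean of the per-station cut-offs. A minimal illustration with hypothetical scores:

```python
def station_cutoff(checklist, global_rating, borderline=2.0):
    """Borderline regression: fit checklist = a + b * global by least squares
    and return the checklist score predicted at the borderline global rating."""
    n = len(checklist)
    mean_x = sum(global_rating) / n
    mean_y = sum(checklist) / n
    sxx = sum((x - mean_x) ** 2 for x in global_rating)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(global_rating, checklist))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept + slope * borderline

# Hypothetical station data: global ratings 1-5, checklist scores 0-100.
g = [1, 2, 3, 4, 5]
c = [30, 45, 60, 75, 90]
cutoff = station_cutoff(c, g)   # 45.0 for this exactly linear toy data
# The OSCE pass-fail standard is the mean of the cut-offs over all stations.
```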

  14. A practical method of predicting client revisit intention in a hospital setting.

    Science.gov (United States)

    Lee, Kyun Jick

    2005-01-01

    Data mining (DM) models are an alternative to traditional statistical methods for examining whether higher customer satisfaction leads to higher revisit intention. This study used satisfaction data from 906 outpatients, collected in a nationwide survey of face-to-face interviews conducted by professional interviewers in South Korea in 1998. Analyses showed that the relationship between overall satisfaction with hospital services and outpatients' revisit intention, with word-of-mouth recommendation as an intermediate variable, was nonlinear. The five strongest predictors of revisit intention were overall satisfaction, intention to recommend to others, awareness of hospital promotion, satisfaction with the physician's kindness, and satisfaction with the treatment level.

  15. The Train Driver Recovery Problem - a Set Partitioning Based Model and Solution Method

    DEFF Research Database (Denmark)

    Rezanova, Natalia Jurjevna; Ryan, David

    The need to recover a train driver schedule occurs during major disruptions in the daily railway operations. Using data from the train driver schedule of the Danish passenger railway operator DSB S-tog A/S, a solution method to the Train Driver Recovery Problem (TDRP) is developed. The TDRP...... the depth-first search of the Branch & Bound tree. Preliminary results are encouraging, showing that nearly all tested real-life instances produce integer solutions to the LP relaxation and solutions are found within a few seconds....

  16. Application of biclustering of gene expression data and gene set enrichment analysis methods to identify potentially disease causing nanomaterials

    Directory of Open Access Journals (Sweden)

    Andrew Williams

    2015-12-01

    Background: The presence of diverse types of nanomaterials (NMs) in commerce is growing at an exponential pace. As a result, human exposure to these materials in the environment is inevitable, necessitating the need for rapid and reliable toxicity testing methods to accurately assess the potential hazards associated with NMs. In this study, we applied biclustering and gene set enrichment analysis methods to derive essential features of the altered lung transcriptome following exposure to NMs that are associated with lung-specific diseases. Several datasets from public microarray repositories describing pulmonary diseases in mouse models following exposure to a variety of substances were examined, and functionally related biclusters of genes showing similar expression profiles were identified. The identified biclusters were then used to conduct a gene set enrichment analysis on pulmonary gene expression profiles derived from mice exposed to nano-titanium dioxide (nano-TiO2), carbon black (CB) or carbon nanotubes (CNTs) to determine the disease significance of these data-driven gene sets. Results: Biclusters representing inflammation (chemokine activity), DNA binding, cell cycle, apoptosis, reactive oxygen species (ROS) and fibrosis processes were identified. All of the NM studies were significant with respect to the bicluster related to chemokine activity (DAVID; FDR p-value = 0.032). The bicluster related to pulmonary fibrosis was enriched in studies investigating toxicity induced by CNTs and CB, suggesting the potential for these materials to induce lung fibrosis. The pro-fibrogenic potential of CNTs is well established. Although CB has not been shown to induce fibrosis, it induces stronger inflammatory, oxidative stress and DNA damage responses than nano-TiO2 particles. Conclusion: The results of the analysis correctly identified all NMs to be inflammogenic and only CB and CNTs as potentially fibrogenic. In addition to identifying several

  17. Cost-effectiveness thresholds: methods for setting and examples from around the world.

    Science.gov (United States)

    Santos, André Soares; Guerra-Junior, Augusto Afonso; Godman, Brian; Morton, Alec; Ruas, Cristina Mariano

    2018-06-01

    Cost-effectiveness thresholds (CETs) are used to judge if an intervention represents sufficient value for money to merit adoption in healthcare systems. The study was motivated by the Brazilian context of HTA, where meetings are being conducted to decide on the definition of a threshold. Areas covered: An electronic search was conducted on Medline (via PubMed), Lilacs (via BVS) and ScienceDirect, followed by a complementary search of the references of included studies, Google Scholar and conference abstracts. Cost-effectiveness thresholds are usually calculated through three different approaches: the willingness-to-pay method, representative of welfare economics; the precedent method, based on the value of an already funded technology; and the opportunity cost method, which links the threshold to the volume of health displaced. Most jurisdictions have never formally adopted an explicit threshold. Some countries have defined thresholds, with some flexibility to consider other factors; elsewhere, an implicit threshold can be inferred by studying previously funded cases. Expert commentary: CETs have had an important role as a 'bridging concept' between the world of academic research and the 'real world' of healthcare prioritization. The definition of a cost-effectiveness threshold is paramount for the construction of a transparent and efficient Health Technology Assessment system.
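However the threshold is derived, it is applied by comparing it against a technology's incremental cost-effectiveness ratio (ICER), the extra cost per extra unit of health gained relative to the comparator. A minimal sketch with hypothetical costs and QALY gains:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: additional cost per additional
    unit of effect (e.g. per QALY gained) versus the comparator."""
    return (cost_new - cost_old) / (effect_new - effect_old)

def is_cost_effective(cost_new, effect_new, cost_old, effect_old, threshold):
    """A technology is judged cost-effective when its ICER does not exceed
    the cost-effectiveness threshold (willingness to pay per unit of effect)."""
    return icer(cost_new, effect_new, cost_old, effect_old) <= threshold

# Hypothetical example: a new drug costs 12,000 more than standard care
# and yields 0.5 extra QALYs, giving an ICER of 24,000 per QALY.
ratio = icer(cost_new=20000, effect_new=1.5, cost_old=8000, effect_old=1.0)
decision = is_cost_effective(20000, 1.5, 8000, 1.0, threshold=30000)  # adopted
```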

  18. Experiences with and expectations of maternity waiting homes in Luapula Province, Zambia: a mixed-methods, cross-sectional study with women, community groups and stakeholders.

    Science.gov (United States)

    Chibuye, Peggy S; Bazant, Eva S; Wallon, Michelle; Rao, Namratha; Fruhauf, Timothee

    2018-01-25

    Luapula Province has the highest maternal mortality and one of the lowest facility-based births in Zambia. The distance to facilities limits facility-based births for women in rural areas. In 2013, the government incorporated maternity homes into the health system at the community level to increase facility-based births and reduce maternal mortality. To examine the experiences with maternity homes, formative research was undertaken in four districts of Luapula Province to assess women's and community's needs, use patterns, collaboration between maternity homes, facilities and communities, and promising practices and models in Central and Lusaka Provinces. A cross-sectional, mixed-methods design was used. In Luapula Province, qualitative data were collected through 21 focus group discussions with 210 pregnant women, mothers, elderly women, and Safe Motherhood Action Groups (SMAGs) and 79 interviews with health workers, traditional leaders, couples and partner agency staff. Health facility assessment tools, service abstraction forms and registers from 17 facilities supplied quantitative data. Additional qualitative data were collected from 26 SMAGs and 10 health workers in Central and Lusaka Provinces to contextualise findings. Qualitative transcripts were analysed thematically using Atlas-ti. Quantitative data were analysed descriptively using Stata. Women who used maternity homes recognized the advantages of facility-based births. However, women and community groups requested better infrastructure, services, food, security, privacy, and transportation. SMAGs led the construction of maternity homes and advocated the benefits to women and communities in collaboration with health workers, but management responsibilities of the homes remained unassigned to SMAGs or staff. Community norms often influenced women's decisions to use maternity homes. Successful maternity homes in Central Province also relied on SMAGs for financial support, but the sustainability of these

  19. Dynamic-thresholding level set: a novel computer-aided volumetry method for liver tumors in hepatic CT images

    Science.gov (United States)

    Cai, Wenli; Yoshida, Hiroyuki; Harris, Gordon J.

    2007-03-01

    Measurement of the volume of focal liver tumors, called liver tumor volumetry, is indispensable for assessing the growth of tumors and for monitoring the response of tumors to oncology treatments. Traditional edge models, such as the maximum gradient and zero-crossing methods, often fail to detect the accurate boundary of a fuzzy object such as a liver tumor. As a result, the computerized volumetry based on these edge models tends to differ from manual segmentation results performed by physicians. In this study, we developed a novel computerized volumetry method for fuzzy objects, called dynamic-thresholding level set (DT level set). An optimal threshold value computed from a histogram tends to shift, relative to the theoretical threshold value obtained from a normal distribution model, toward a smaller region in the histogram. We thus designed a mobile shell structure, called a propagating shell, which is a thick region encompassing the level set front. The optimal threshold calculated from the histogram of the shell drives the level set front toward the boundary of a liver tumor. When the volume ratio between the object and the background in the shell approaches one, the optimal threshold value best fits the theoretical threshold value and the shell stops propagating. Application of the DT level set to 26 hepatic CT cases with 63 biopsy-confirmed hepatocellular carcinomas (HCCs) and metastases showed that the computer measured volumes were highly correlated with those of tumors measured manually by physicians. Our preliminary results showed that DT level set was effective and accurate in estimating the volumes of liver tumors detected in hepatic CT images.
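The core idea above, recomputing an "optimal" histogram threshold from a thin shell of voxels around the level-set front, can be sketched as follows. Here Otsu's between-class-variance criterion stands in for the paper's optimal-threshold computation, and the synthetic image, φ field, and shell width are illustrative:

```python
import numpy as np

def otsu_threshold(values, nbins=64):
    """Between-class-variance-maximising (Otsu) threshold, one common way of
    computing an 'optimal' threshold from a histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)                       # class-0 weight at each cut
    w1 = w0[-1] - w0                           # class-1 weight at each cut
    cum_mass = np.cumsum(hist * centers)
    m0 = cum_mass / np.maximum(w0, 1e-12)      # class means
    m1 = (cum_mass[-1] - cum_mass) / np.maximum(w1, 1e-12)
    var_between = np.where((w0 > 0) & (w1 > 0), w0 * w1 * (m0 - m1) ** 2, 0.0)
    return centers[int(np.argmax(var_between))]

def shell_threshold(image, phi, width=3.0):
    """Threshold computed only from intensities inside the propagating shell:
    the band of voxels within `width` of the level-set front (phi == 0)."""
    shell = np.abs(phi) < width
    return otsu_threshold(image[shell])

# Synthetic test image: bright "tumour" (200) inside a circle, dark (50) outside.
yy, xx = np.mgrid[0:32, 0:32]
phi = np.sqrt((xx - 16.0) ** 2 + (yy - 16.0) ** 2) - 8.0
image = np.where(phi < 0, 200.0, 50.0)
thr = shell_threshold(image, phi, width=3.0)   # separates 50 from 200
```

In the full DT level-set method this threshold is recomputed as the shell propagates, so the front tracking adapts to the local histogram of the fuzzy boundary.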

  20. A statistical method for predicting sound absorbing property of porous metal materials by using quartet structure generation set

    International Nuclear Information System (INIS)

    Guan, Dong; Wu, Jiu Hui; Jing, Li

    2015-01-01

    Highlights: • A random internal morphology and structure generation-growth method, termed the quartet structure generation set (QSGS), has been utilized, based on stochastic cluster growth theory, for numerically generating various microstructures of porous metal materials. • Effects of different parameters such as thickness and porosity on the sound absorption performance of the generated structures are studied by the present method, and the obtained results are validated by an empirical model as well. • This method could be utilized to guide the design and fabrication of sound-absorbing porous metal materials. - Abstract: In this paper, a statistical method for predicting the sound absorption properties of porous metal materials is presented. To reflect the stochastic distribution characteristics of porous metal materials, a random internal morphology and structure generation-growth method, termed the quartet structure generation set (QSGS), has been utilized, based on stochastic cluster growth theory, for numerically generating various microstructures of porous metal materials. Then, by using the transfer-function approach along with the QSGS tool, we investigate the sound absorbing performance of porous metal materials with complex stochastic geometries. The statistical method has been validated by the good agreement among the numerical results from this method, a previous empirical model, and the corresponding experimental data. Furthermore, the effects of different parameters such as thickness and porosity on the sound absorption performance of the generated structures are studied by the present method, and the obtained results are validated by an empirical model as well. Therefore, the present method is a reliable and robust method for predicting the sound absorption performance of porous metal materials, and could be utilized to guide the design and fabrication of sound-absorbing porous metal materials.
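A minimal 2-D sketch of a QSGS-style generation-growth process; the seeding fraction, growth probability, and 4-connected periodic growth rule are all assumptions for illustration, not the paper's parameterization:

```python
import numpy as np

def qsgs_2d(shape=(64, 64), seed_frac=0.01, grow_p=0.3, solid_frac=0.3, seed=0):
    """Toy generation-growth: randomly seed solid cells, then let them grow
    onto 4-connected neighbours (periodic boundaries) until the target solid
    fraction, i.e. 1 - porosity, is reached."""
    rng = np.random.default_rng(seed)
    solid = rng.random(shape) < seed_frac
    if not solid.any():                      # guarantee at least one seed
        solid[shape[0] // 2, shape[1] // 2] = True
    target = int(solid_frac * solid.size)
    while solid.sum() < target:
        # cells adjacent to the current solid phase
        nb = (np.roll(solid, 1, 0) | np.roll(solid, -1, 0) |
              np.roll(solid, 1, 1) | np.roll(solid, -1, 1))
        # each candidate cell solidifies with probability grow_p
        grown = nb & ~solid & (rng.random(shape) < grow_p)
        solid |= grown
    return solid

m = qsgs_2d()
porosity = 1.0 - m.mean()
print(round(porosity, 2))
```

The generated boolean field can then be fed to an acoustic solver; directional growth probabilities would give anisotropic microstructures.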

  1. Comparison of three methods for registration of abdominal/pelvic volume data sets from functional-anatomic scans

    Science.gov (United States)

    Mahmoud, Faaiza; Ton, Anthony; Crafoord, Joakim; Kramer, Elissa L.; Maguire, Gerald Q., Jr.; Noz, Marilyn E.; Zeleznik, Michael P.

    2000-06-01

    The purpose of this work was to evaluate three volumetric registration methods in terms of technique, user-friendliness and time requirements. CT and SPECT data from 11 patients were interactively registered using: a 3D method involving only affine transformation; a mixed 3D - 2D non-affine (warping) method; and a 3D non-affine (warping) method. In the first method, representative isosurfaces are generated from the anatomical images. Registration proceeds through translation, rotation, and scaling in all three space variables. The resulting isosurfaces are fused and quantitative measurements are possible. In the second method, the 3D volumes are rendered co-planar by performing an oblique projection. Corresponding landmark pairs are chosen on matching axial slice sets. A polynomial warp is then applied. This method has undergone extensive validation and was used to evaluate the results. The third method employs visualization tools. The data model allows images to be localized within two separate volumes. Landmarks are chosen on separate slices. Polynomial warping coefficients are generated and data points from one volume are moved to the corresponding new positions. The two landmark methods were the least time-consuming (10 to 30 minutes from start to finish), but did demand a good knowledge of anatomy. The affine method was tedious and required a fair understanding of 3D geometry.
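The landmark-based polynomial warp used by the second and third methods can be sketched as a least-squares fit. The landmark coordinates and the constant-plus-linear-plus-quadratic basis below are hypothetical stand-ins, not the paper's actual warp model:

```python
import numpy as np

# Hypothetical corresponding landmark pairs (x, y, z) picked on matching slices.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 100.0, size=(12, 3))
# Stand-in deformation: an affine move plus a mild quadratic term.
dst = src @ np.diag([1.05, 0.95, 1.0]) + np.array([3.0, -2.0, 1.0]) + 0.001 * src**2

def poly_basis(pts):
    """Design matrix with constant, linear and pure quadratic terms."""
    return np.hstack([np.ones((len(pts), 1)), pts, pts**2])

# Least-squares fit of the warping coefficients, one column per output axis.
coef, *_ = np.linalg.lstsq(poly_basis(src), dst, rcond=None)

# Moving source points with the fitted warp should reproduce the targets.
resid = np.abs(poly_basis(src) @ coef - dst).max()
print(resid)  # the stand-in deformation lies in the basis, so resid is ~0
```

With real anatomical landmarks the residual would be nonzero and would measure how well the chosen polynomial order captures the deformation.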

  2. An accurate anisotropic adaptation method for solving the level set advection equation

    International Nuclear Information System (INIS)

    Bui, C.; Dapogny, C.; Frey, P.

    2012-01-01

    In the present paper, a mesh adaptation process for solving the advection equation on a fully unstructured computational mesh is introduced, with particular interest in the case where the equation implicitly describes an evolving surface. The process mainly relies on a numerical scheme based on the method of characteristics. Although low order, this scheme lends itself to a thorough theoretical analysis. It gives rise to an anisotropic error estimate which enjoys a very natural interpretation in terms of the Hausdorff distance between the exact and approximated surfaces. The computational mesh is then adapted according to the metric supplied by this estimate. The whole process achieves good accuracy as far as the interface resolution is concerned. Some numerical features are discussed and several classical examples are presented and commented on in two and three dimensions. (authors)
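A minimal 1-D sketch of the method of characteristics (semi-Lagrangian form) for the level set advection equation; the grid, speed, and piecewise-linear interpolation are illustrative assumptions, and the scheme is low order, as the abstract notes:

```python
import numpy as np

# 1-D level set advection phi_t + u * phi_x = 0 by the method of
# characteristics: the new value at x is the old value at the foot of the
# characteristic, x - u*dt, obtained here by linear interpolation.
n, u, dt, steps = 200, 1.0, 0.01, 50
x = np.linspace(0.0, 2.0, n)
phi = np.abs(x - 0.5) - 0.2              # signed distance to [0.3, 0.7]

for _ in range(steps):
    phi = np.interp(x - u * dt, x, phi)  # low-order semi-Lagrangian step

# The zero level set (the implicit interface) should have translated right
# by u*dt*steps = 0.5, i.e. from {0.3, 0.7} to roughly {0.8, 1.2}.
cross = x[np.where(np.diff(np.sign(phi)) != 0)[0]]
print(cross)
```

The interpolation error of this step is exactly what an anisotropic error estimate would steer the mesh adaptation to control near the interface.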

  3. Role Models and Teachers: medical students' perception of teaching-learning methods in clinical settings, a qualitative study from Sri Lanka.

    Science.gov (United States)

    Jayasuriya-Illesinghe, Vathsala; Nazeer, Ishra; Athauda, Lathika; Perera, Jennifer

    2016-02-09

    Medical education research in general, and research focusing on clinical settings in particular, has been a low priority in South Asia. This explorative study from 3 medical schools in Sri Lanka, a South Asian country, describes undergraduate medical students' experiences during their final year clinical training with the aim of understanding the teaching-learning experiences. Using qualitative methods we conducted an exploratory study. Twenty-eight graduates from 3 medical schools participated in individual interviews. Interview recordings were transcribed verbatim and analyzed using the qualitative content analysis method. Emergent themes revealed 2 types of teaching-learning experiences: role modeling and purposive teaching. In role modeling, students were expected to observe teachers while they conducted their clinical work; however, this method failed to create positive learning experiences. The clinical teachers who predominantly used this method appeared to be 'figurative' role models and were not perceived as modeling professional behaviors. In contrast, purposeful teaching allowed dedicated time for teacher-student interactions, and teachers who created these learning experiences were more likely to be seen as 'true' role models. Students' responses and reciprocations to these interactions were influenced by their perception of teachers' behaviors, attitudes, and the type of teaching-learning situations created for them. Making a distinction between role modeling and purposeful teaching is important for students in clinical training settings. Clinical teachers' awareness of their own manifest professional characteristics, attitudes, and behaviors could help create better teaching-learning experiences. Moreover, broader systemic reforms are needed to address the prevailing culture of teaching by humiliation and subordination.

  4. Modifying alcohol expectancies of Hispanic children: examining the effects of expectancy-targeted, developmentally congruous prevention videos.

    Science.gov (United States)

    Weinstein, Allison; Lisman, Stephen A; Johnson, Matthew D

    2015-03-01

    Children's expectations about the effects of alcohol consumption are known to predict the amount of alcohol they consume as adults. Previous research has used videotaped interventions to modify children's alcohol expectancies and found that puppet actors had the expected effect of decreasing children's positive alcohol expectancies, whereas adult actors did not. The current study sought to enhance the methods and outcomes of previous research by developing brief prevention videos that focus on pre-selected negative and sedating alcohol expectancies and include youth actors and age-relevant scenarios. Using a 2 × 2 factorial design (actor's age [youth or adult] × scenario relevance [youth or adult]), we examined the alcohol expectancies of 183 Hispanic third-, fourth-, and fifth-grade students (50% girls) in a public school setting. Expectancies were assessed before the intervention, immediately following it, and 1 month later. The intervention consisted of four 8-minute videos based on beliefs associated with expectancies related to low alcohol consumption, plus a control-group video about school bus safety. Positive alcohol expectancies were significantly lower directly after the intervention than at baseline. At 1-month follow-up, this effect had decreased but was still significant. The current study adds to existing findings that expectancies can be modified in children using interventions that are extremely brief, low-cost, and linked to research in children's cognitive and social development. In addition, it appears that children of different ages and genders respond differently to varying components of prevention media.

  5. Physicians' and Pharmacists' Experience and Expectations of the ...

    African Journals Online (AJOL)

    Purpose: To investigate physicians' and pharmacists' experience and expectations of the roles of pharmacists in hospital setting in Macau for the development of physician-pharmacist collaborative working relationship (CWR). Methods: A survey was conducted to address the research questions. The study population ...

  6. Exploring the expectations, needs and experiences of general practitioners and nurses towards a proactive and structured care program for frail older patients: a mixed-method study

    NARCIS (Netherlands)

    Valerie ten Dam; Mattijs Numans; Nienke Bleijenberg; Bas Steunenberg; Niek de Wit; Prof. Dr. Marieke J. Schuurmans; Irene Drubbel

    2013-01-01

    Aim. To report the expectations and experiences of general practitioners and practice nurses regarding the U-CARE programme, to gain a better understanding of the barriers and facilitators in providing proactive, structured care to frail older people and to determine whether implementation is

  7. Comparison of combinatorial clustering methods on pharmacological data sets represented by machine learning-selected real molecular descriptors.

    Science.gov (United States)

    Rivera-Borroto, Oscar Miguel; Marrero-Ponce, Yovani; García-de la Vega, José Manuel; Grau-Ábalo, Ricardo del Corazón

    2011-12-27

    Cluster algorithms play an important role in diversity-related tasks of modern chemoinformatics, with the widest applications being in pharmaceutical industry drug discovery programs. The performance of these grouping strategies depends on various factors such as molecular representation, mathematical method, algorithmic technique, and the statistical distribution of data. For this reason, introduction and comparison of new methods are necessary in order to find the model that best fits the problem at hand. Earlier comparative studies report Ward's algorithm using fingerprints for molecular description as generally superior in this field. However, problems still remain: other types of numerical descriptions have been little exploited, the current descriptor selection strategy is trial-and-error-driven, and no previous comparative studies considering a broader domain of combinatorial methods for grouping chemoinformatic data sets have been conducted. In this work, a comparison between combinatorial methods is performed, with five of them being novel in cheminformatics. The experiments are carried out using eight data sets that are well established and validated in the medicinal chemistry literature. Each drug data set was represented by real molecular descriptors selected by machine learning techniques, which are consistent with the neighborhood principle. Statistical analysis of the results demonstrates that pharmacological activities of the eight data sets can be modeled with a few families of 2D and 3D molecular descriptors, avoiding classification problems associated with the presence of nonrelevant features. Three out of five of the proposed cluster algorithms show superior performance over most classical algorithms and are similar (or slightly superior in the most optimistic sense) to Ward's algorithm. The usefulness of these algorithms is also assessed in a comparative experiment to potent QSAR and machine learning classifiers, where they perform
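As a point of reference for the Ward's-algorithm baseline the study compares against, hierarchical minimum-variance clustering of a descriptor matrix can be run with SciPy. The synthetic two-class "descriptor" data below are an assumption for illustration, not one of the paper's eight data sets:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical descriptor matrix: two activity classes that differ in the
# means of a few machine-learning-selected real-valued descriptors.
rng = np.random.default_rng(0)
active = rng.normal(0.0, 1.0, size=(20, 5))
inactive = rng.normal(3.0, 1.0, size=(20, 5))
X = np.vstack([active, inactive])

Z = linkage(X, method="ward")                 # Ward's minimum-variance criterion
labels = fcluster(Z, t=2, criterion="maxclust")

# With this separation the two recovered clusters match the two classes.
print(labels[:20], labels[20:])
```

Any of the combinatorial alternatives discussed in the record would be evaluated on the same footing: cluster the descriptor matrix, then score how well clusters align with pharmacological activity.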

  8. African swine fever in Uganda: qualitative evaluation of three surveillance methods with implications for other resource-poor settings

    OpenAIRE

    Chenais, Erika; Sternberg Lewerin, Susanna; Boqvist, Sofia; Emanuelson, Ulf; Aliro, Tonny; Tejler, Emma; Cocca, Giampaolo; Masembe, Charles; Ståhl, Karl

    2015-01-01

    Animal diseases impact negatively on households and on national economies. In low-income countries, this pertains especially to socio-economic effects on household level. To control animal diseases and mitigate their impact, it is necessary to understand the epidemiology of the disease in its local context. Such understanding, gained through disease surveillance, is often lacking in resource-poor settings. Alternative surveillance methods have been developed to overcome some of the hurdles ob...

  10. The South Asian heart lifestyle intervention (SAHELI) study to improve cardiovascular risk factors in a community setting: Design and methods

    OpenAIRE

    Kandula, Namratha R.; Patel, Yasin; Dave, Swapna; Seguil, Paola; Kumar, Santosh; Baker, David W.; Spring, Bonnie; Siddique, Juned

    2013-01-01

    Disseminating and implementing evidence-based, cardiovascular disease (CVD) prevention lifestyle interventions in community settings and in ethnic minority populations is a challenge. We describe the design and methods for the South Asian heart lifestyle intervention (SAHELI) study, a pilot study designed to determine the feasibility and initial efficacy of a culturally-targeted, community-based lifestyle intervention to improve physical activity and diet behaviors among medically underserved...

  11. Accuracy Verification of Image-Matching in a Setting Method for the Stem during Total Hip Arthroplasty

    OpenAIRE

    Kubota, Yosuke; Sakamoto, Makoto; Kobayashi, Koichi; Koga, Yoshio; Tanabe, Yuji

    2010-01-01

    Currently, stem insertion during total hip arthroplasty (THA) is not well controlled. The present study investigated a method for improving stem setting in accordance with preoperative planning using a three-dimensional (3-D) computed tomography (CT) model of the femur and RGB images of the excised femoral head. We utilized three femoral heads removed during THA and modeled each head using three spherical acrylic markers. Each femoral head was osteotomized using a parallel jig and three recta...

  12. The Standard Days Method(®): efficacy, satisfaction and demand at regular family planning service delivery settings in Turkey.

    Science.gov (United States)

    Kursun, Zerrin; Cali, Sanda; Sakarya, Sibel

    2014-06-01

    To evaluate the demand, efficacy, and satisfaction concerning the Standard Days Method(®) (SDM; a fertility awareness method) as an option presented among other contraceptive methods at regular service delivery settings. The survey group consisted of 993 women who presented at the primary care units in Umraniye District of Istanbul, Turkey, between 1 October 2006 and 31 March 2008, and started to use a new method. Women were enrolled until reaching a limit of 250 new users for each method, or expiration of the six-month registration period. Participants were followed for up to one year of method use. The characteristics of women who chose the SDM were similar to those of participants who opted for other methods. The most common reasons for selecting it were that it is natural and causes no side effects. Fifty-one percent used the SDM for the full year, compared to 71% who chose an intrauterine device (IUD). Continuation rates were significantly lower for all other methods. During the one-year follow-up period, 12% of SDM-, 7% of pill-, 7% of condom-, 3% of monthly injection-, 1% of quarterly injection-, and 0.5% of IUD users became pregnant. The SDM had relatively high continuation rates and relatively good levels of satisfaction among participants and their husbands. It should be mentioned among the routinely offered contraceptive methods.

  13. CubeAid - an interactive method of quickly analyzing 3-dimensional gamma-ray data sets

    Energy Technology Data Exchange (ETDEWEB)

    Kuehner, J A; Waddington, J C; Prevost, D [McMaster Univ., Hamilton, ON (Canada)

    1992-08-01

    With the advent of highly efficient gamma detector arrays capable of producing significant 4- and 5-fold data, a new challenge will be to develop appropriate data analysis techniques. One method may be to exploit the relatively fast analysis possible using three-dimensional (3D) analysis of sorted higher-fold data, as can be done using CubeAid software running on a personal computer (PC). This paper describes some of the capabilities of CubeAid. The main idea is to construct and use a 3D array (a cube) of triple data of dimensions suitable to the capability of a PC using VGA mode or higher. So far (as of the time of the conference), the authors had used a cube of edge size 640, and typically 2 or 3 keV per channel. In order to make data extraction fast, and to reduce disk space, a symmetrized 1/2 cube was used, the depth dimension having been compressed. In making this cube, sorting was first done into a symmetrized 1/6 cube from tape to a VAX hard disk. 2 figs.
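The symmetrized-cube idea can be sketched directly: sorting each triple's energy channels (so that i ≤ j ≤ k) maps every permutation of an event to a single wedge of the cube, which is what allows the 1/6- and 1/2-cube storage reductions. The channel count and synthetic events below are assumptions, not the CubeAid code:

```python
import numpy as np

nch = 64                      # cube edge in energy channels (CubeAid used 640)
cube = np.zeros((nch, nch, nch), dtype=np.uint32)

# Hypothetical triple-coincidence events: three gamma-ray energies per event.
rng = np.random.default_rng(2)
events = rng.integers(0, nch, size=(10_000, 3))

# Sorting each triple sends all 6 permutations of (e1, e2, e3) to the same
# wedge i <= j <= k, so only that wedge of the cube ever gets filled.
for e1, e2, e3 in np.sort(events, axis=1):
    cube[e1, e2, e3] += 1

print(int(cube.sum()))        # every event is stored exactly once
```

Gating then amounts to slicing this array: fixing two energies and reading out the third axis gives a gated 1-D spectrum.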

  14. System and method for the adaptive mapping of matrix data to sets of polygons

    Science.gov (United States)

    Burdon, David (Inventor)

    2003-01-01

    A system and method for converting bitmapped data, for example, weather data or thermal imaging data, to polygons is disclosed. The conversion of the data into polygons creates smaller data files. The invention is adaptive in that it allows for a variable degree of fidelity of the polygons. Matrix data is obtained. A color value is obtained. The color value is a variable used in the creation of the polygons. A list of cells to check is determined based on the color value. The list of cells to check is examined in order to determine a boundary list. The boundary list is then examined to determine vertices. The determination of the vertices is based on a prescribed maximum distance. When drawn, the ordered list of vertices creates polygons which depict the cell data. The data files which include the vertices for the polygons are much smaller than the corresponding cell data files. The fidelity of the polygon representation can be adjusted by repeating the logic with varying fidelity values to achieve a given maximum file size or a maximum number of vertices per polygon.
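The prescribed-maximum-distance vertex determination resembles Ramer-Douglas-Peucker decimation. The sketch below (recursive splitting on the point farthest from the chord) is an illustrative stand-in, not the patented implementation:

```python
import numpy as np

def simplify(points, max_dist):
    """Keep a vertex only if dropping it would make the polyline deviate
    from the boundary by more than max_dist (the 'fidelity' knob)."""
    points = np.asarray(points, dtype=float)
    if len(points) < 3:
        return points
    a, b = points[0], points[-1]
    ab = b - a
    p = points[1:-1]
    # perpendicular distance of interior points to the chord a-b
    d = np.abs(ab[0] * (p[:, 1] - a[1]) - ab[1] * (p[:, 0] - a[0]))
    d /= np.linalg.norm(ab) + 1e-12
    i = int(np.argmax(d)) + 1
    if d[i - 1] <= max_dist:
        return np.array([a, b])          # chord is close enough: 2 vertices
    left = simplify(points[: i + 1], max_dist)
    right = simplify(points[i:], max_dist)
    return np.vstack([left[:-1], right])  # drop the duplicated split point

# Boundary of a coarse quarter-circle, as if traced from cell data.
t = np.linspace(0, np.pi / 2, 50)
boundary = np.c_[10 * np.cos(t), 10 * np.sin(t)]

coarse = simplify(boundary, max_dist=2.0)
fine = simplify(boundary, max_dist=0.1)
print(len(coarse), len(fine))  # fewer vertices at lower fidelity
```

Rerunning with progressively larger `max_dist` until the vertex count fits a budget mirrors the patent's repeat-with-varying-fidelity loop.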

  15. The impact of individual expectations and expectation conflicts on virtual teams

    NARCIS (Netherlands)

    Bosch-Sijtsema, Petra

    Virtual teams are characterized by geographical dispersion, organizational and cultural heterogeneity, and members who have little shared history and weak lateral relationships. The literature notes the importance of expectations in virtual settings, but individual expectations of virtual team members

  16. Method and basis set dependence of anharmonic ground state nuclear wave functions and zero-point energies: Application to SSSH

    Science.gov (United States)

    Kolmann, Stephen J.; Jordan, Meredith J. T.

    2010-02-01

    One of the largest remaining errors in thermochemical calculations is the determination of the zero-point energy (ZPE). The fully coupled, anharmonic ZPE and ground state nuclear wave function of the SSSH radical are calculated using quantum diffusion Monte Carlo on interpolated potential energy surfaces (PESs) constructed using a variety of method and basis set combinations. The ZPE of SSSH, which is approximately 29 kJ mol-1 at the CCSD(T)/6-31G∗ level of theory, has a 4 kJ mol-1 dependence on the treatment of electron correlation. The anharmonic ZPEs are consistently 0.3 kJ mol-1 lower in energy than the harmonic ZPEs calculated at the Hartree-Fock and MP2 levels of theory, and 0.7 kJ mol-1 lower in energy at the CCSD(T)/6-31G∗ level of theory. Ideally, for sub-kJ mol-1 thermochemical accuracy, ZPEs should be calculated using correlated methods with as big a basis set as practicable. The ground state nuclear wave function of SSSH also has significant method and basis set dependence. The analysis of the nuclear wave function indicates that SSSH is localized to a single symmetry equivalent global minimum, despite having sufficient ZPE to be delocalized over both minima. As part of this work, modifications to the interpolated PES construction scheme of Collins and co-workers are presented.
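For context, the harmonic ZPE quoted in kJ mol-1 is just half the sum of the vibrational quanta. The sketch below uses hypothetical wavenumbers (not the actual CCSD(T)/6-31G∗ frequencies of SSSH) to show the unit conversion:

```python
# Harmonic zero-point energy from vibrational wavenumbers:
#   ZPE = (1/2) * h * c * N_A * sum(nu_i), converted to kJ/mol.
# The wavenumbers are hypothetical stand-ins for the 3N - 6 = 6 modes of SSSH.
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e10       # speed of light, cm/s (wavenumbers are in cm^-1)
NA = 6.02214076e23      # Avogadro constant, 1/mol

wavenumbers_cm = [2600.0, 870.0, 680.0, 450.0, 320.0, 210.0]
zpe_kj_mol = 0.5 * h * c * NA * sum(wavenumbers_cm) / 1000.0
print(round(zpe_kj_mol, 1))  # ~30.7, the same order as the ~29 kJ/mol quoted
```

The anharmonic corrections discussed in the record (a few tenths of a kJ/mol) would be subtracted from this harmonic estimate.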

  18. Comparison of national health research priority-setting methods and characteristics in Latin America and the Caribbean, 2002 - 2012

    Directory of Open Access Journals (Sweden)

    Ludovic Reveiz

    2013-07-01

    OBJECTIVE: To compare health research priority-setting methods and characteristics among countries in Latin America and the Caribbean during 2002 - 2012. METHODS: This was a systematic review that identified national health research policies and priority agendas through a search of ministry and government databases related to health care institutions. PubMed, LILACS, the Health Research Web, and others were searched for the period from January 2002 - February 2012. The study excluded research organized by governmental institutions and specific national strategies on particular disease areas. Priority-setting methods were compared to the "nine common themes for good practice in health research priorities." National health research priorities were compared to those of the World Health Organization's Millennium Development Goals (MDGs). RESULTS: Of the 18 Latin American countries assessed, 13 had documents that established national health research priorities; in addition, the Caribbean Health Research Council had a research agenda for its 19 constituents. These 14 reports varied widely in terms of objectives, content, dissemination, and implementation; most provided a list of strategic areas, suggestions, and/or sub-priorities for each country; however, few proposed specific research topics and questions. CONCLUSIONS: Future reports could be improved by including more details on the comprehensive approach employed to identify priorities, on the information-gathering process, and on practices to be undertaken after priorities are set. There is a need to improve the quality of the methodologies utilized and to coordinate regional efforts as countries strive to meet the MDGs.

  19. Expecting the unexpected

    DEFF Research Database (Denmark)

    Mcneill, Ilona M.; Dunlop, Patrick D.; Heath, Jonathan B.

    2013-01-01

    People who live in wildfire-prone communities tend to form their own hazard-related expectations, which may influence their willingness to prepare for a fire. Past research has already identified two important expectancy-based factors associated with people's intentions to prepare for a natural hazard … and measured actual rather than intended preparedness. In addition, we tested the relation between preparedness and two additional threat-related expectations: the expectation that one can rely on an official warning and the expectation of encountering obstacles (e.g., the loss of utilities) during a fire …

  20. Evaluation of Parallel Level Sets and Bowsher's Method as Segmentation-Free Anatomical Priors for Time-of-Flight PET Reconstruction.

    Science.gov (United States)

    Schramm, Georg; Holler, Martin; Rezaei, Ahmadreza; Vunckx, Kathleen; Knoll, Florian; Bredies, Kristian; Boada, Fernando; Nuyts, Johan

    2018-02-01

    In this article, we evaluate Parallel Level Sets (PLS) and Bowsher's method as segmentation-free anatomical priors for regularized brain positron emission tomography (PET) reconstruction. We derive the proximity operators for two PLS priors and use the EM-TV algorithm in combination with the first order primal-dual algorithm by Chambolle and Pock to solve the non-smooth optimization problem for PET reconstruction with PLS regularization. In addition, we compare the performance of two PLS versions against the symmetric and asymmetric Bowsher priors with quadratic and relative difference penalty function. For this aim, we first evaluate reconstructions of 30 noise realizations of simulated PET data derived from a real hybrid positron emission tomography/magnetic resonance imaging (PET/MR) acquisition in terms of regional bias and noise. Second, we evaluate reconstructions of a real brain PET/MR data set acquired on a GE Signa time-of-flight PET/MR in a similar way. The reconstructions of simulated and real 3D PET/MR data show that all priors were superior to post-smoothed maximum likelihood expectation maximization with ordered subsets (OSEM) in terms of bias-noise characteristics in different regions of interest where the PET uptake follows anatomical boundaries. Our implementation of the asymmetric Bowsher prior showed slightly superior performance compared with the two versions of PLS and the symmetric Bowsher prior. At very high regularization weights, all investigated anatomical priors suffer from the transfer of non-shared gradients.

  1. Degeneracy relations in QCD and the equivalence of two systematic all-orders methods for setting the renormalization scale

    Directory of Open Access Journals (Sweden)

    Huan-Yu Bi

    2015-09-01

    The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC scale-setting procedure for existing high order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the Rδ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio R_{e+e−} and the Higgs partial width Γ(H→bb̄). Both methods lead to the same resummed ('conformal') series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {βi}-terms in the pQCD expansion are taken into account. We also show that special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.

  2. Active-Set Reduced-Space Methods with Nonlinear Elimination for Two-Phase Flow Problems in Porous Media

    KAUST Repository

    Yang, Haijian

    2016-07-26

    Fully implicit methods are drawing more attention in scientific and engineering applications because they allow large time steps in extreme-scale simulations. When using a fully implicit method to solve two-phase flow problems in porous media, one major challenge is the solution of the resultant nonlinear system at each time step. To solve such nonlinear systems, traditional nonlinear iterative methods, such as the class of Newton methods, often fail to achieve the desired convergence rate due to the high nonlinearity of the system and/or the violation of the boundedness requirement of the saturation. In this paper, we reformulate the two-phase model as a variational inequality that naturally ensures the physical feasibility of the saturation variable. The variational inequality is then solved by an active-set reduced-space method with a nonlinear elimination preconditioner to remove the highly nonlinear components that often cause the nonlinear iteration to fail to converge. To validate the effectiveness of the proposed method, we compare it with the classical implicit pressure-explicit saturation method for two-phase flow problems with strong heterogeneity. The numerical results show that our nonlinear solver overcomes the often severe limits on the time step associated with existing methods, delivers superior convergence performance, and reduces the total computing time by more than one order of magnitude.
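The bound-constraint idea behind the variational-inequality reformulation can be illustrated on a toy obstacle problem: projected Gauss-Seidel enforces the bound at every sweep, much as the active-set method keeps the saturation physically feasible throughout the iteration. Everything below (the 1-D Poisson surrogate, the bound value) is an assumption for illustration, not the paper's reservoir model:

```python
import numpy as np

# Toy variational inequality: -u'' = f on (0,1), u(0) = u(1) = 0, subject to
# u <= ub, with ub standing in for the saturation bound. Projected
# Gauss-Seidel solves each pointwise equation and then projects onto the
# feasible set, so every iterate stays physically admissible.
n = 99
h = 1.0 / (n + 1)
f = np.full(n, 2.0)     # unconstrained solution would be x*(1-x), max 0.25
ub = 0.15               # bound is active wherever x*(1-x) would exceed it
u = np.zeros(n)

for _ in range(5000):
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0
        right = u[i + 1] if i < n - 1 else 0.0
        u[i] = min((h * h * f[i] + left + right) / 2.0, ub)  # solve, project

active = int((u >= ub - 1e-9).sum())
print(u.max(), active)  # the bound is attained on a band of interior nodes
```

The set of nodes where the bound is attained is precisely the "active set"; a reduced-space Newton method would solve the nonlinear equations only on the remaining (inactive) unknowns.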

  4. Numerical simulation of interface movement in gas-liquid two-phase flows with Level Set method

    International Nuclear Information System (INIS)

    Li Huixiong; Chinese Academy of Sciences, Beijing; Deng Sheng; Chen Tingkuan; Zhao Jianfu; Wang Fei

    2005-01-01

    Numerical simulation of gas-liquid two-phase flow and heat transfer has attracted attention for a long time, but it remains difficult owing to the inherent complexity of gas-liquid two-phase flow, which arises from the existence of moving interfaces with topology changes. This paper reports the effort and the latest advances made by the authors, with special emphasis on methods for solving the advection equation of the Level Set function, which is used to capture the moving interfaces in gas-liquid two-phase flows. Three different schemes, i.e. a simple finite difference scheme, the Superbee-TVD scheme and the 5th-order WENO scheme, in combination with the Runge-Kutta method, are applied to solve the advection equation of the Level Set function. A numerical procedure based on the well-verified SIMPLER method is employed to solve the momentum equations of the two-phase flow. The three schemes are used to simulate the movement of four typical interfaces under five typical flow conditions. Analysis of the numerical results shows that the 5th-order WENO scheme and the Superbee-TVD scheme are much better than the simple finite difference scheme, and that the 5th-order WENO scheme is the best for solving the advection equation of the Level Set function. The 5th-order WENO scheme will therefore be employed as the main scheme for the advection equation of the Level Set function in future numerical studies of gas-liquid two-phase flows. (authors)
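The simplest of the three schemes compared above — a first-order upwind finite difference for the level-set advection equation ∂φ/∂t + u ∂φ/∂x = 0 — can be sketched in one dimension. This is only an illustration of the scheme family, not the authors' 2D solver; the grid, velocity, and periodic boundary are assumptions:

```python
def advect_level_set(phi, u, dx, dt, steps):
    """First-order upwind advection of a level-set function:
    d(phi)/dt + u * d(phi)/dx = 0, with a periodic boundary."""
    n = len(phi)
    c = u * dt / dx                # CFL number; the scheme is stable for |c| <= 1
    for _ in range(steps):
        if u >= 0:
            phi = [phi[i] - c * (phi[i] - phi[i - 1]) for i in range(n)]
        else:
            phi = [phi[i] - c * (phi[(i + 1) % n] - phi[i]) for i in range(n)]
    return phi

def zero_crossing(phi, dx):
    """Locate the interface (zero level set) by linear interpolation."""
    for i in range(len(phi) - 1):
        if phi[i] <= 0.0 < phi[i + 1]:
            return (i + (-phi[i]) / (phi[i + 1] - phi[i])) * dx
    return None

# Signed-distance initial condition: interface at x = 0.25, advected with
# u = 1 for t = 0.25, so the interface should arrive at x = 0.5.
n, dx, dt = 100, 0.01, 0.005
phi0 = [i * dx - 0.25 for i in range(n)]
x_interface = zero_crossing(advect_level_set(phi0, 1.0, dx, dt, 50), dx)
```

For a linear (signed-distance) profile the upwind scheme translates the interface exactly; on curved profiles its numerical diffusion is what motivates the TVD and WENO schemes the abstract favours.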

  5. Effect of a uniform magnetic field on dielectric two-phase bubbly flows using the level set method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Hadidi, A.; Nimvari, M.E.

    2012-01-01

    In this study, the behavior of a single bubble in a dielectric viscous fluid under a uniform magnetic field has been simulated numerically using the Level Set method in two-phase bubbly flow. The two-phase bubbly flow was considered to be laminar and homogeneous. Deformation of the bubble was considered to be due to buoyancy and magnetic forces induced by the externally applied magnetic field. A computer code was developed to solve the problem, covering the flow field, the interface of the two phases, and the magnetic field. The Finite Volume method was applied using the SIMPLE algorithm to discretize the governing equations. Using this algorithm enables calculation of the pressure parameter, which had been eliminated by previous researchers because of the complexity of the two-phase flow. The finite difference method was used to solve the magnetic field equation. The results of the present study agree well with the existing experimental data and numerical results. These results show that the magnetic field affects and controls the shape, size, velocity, and location of the bubble. - Highlights: ► The behavior of a single bubble in a dielectric viscous fluid was simulated numerically. ► A uniform magnetic field was used to study the bubble's behavior. ► Deformation of the bubble was captured using the Level Set method. ► The magnetic field affects the shape, size, velocity, and location of the bubble.

  6. Shadow analysis of soil surface roughness compared to the chain set method and direct measurement of micro-relief

    Directory of Open Access Journals (Sweden)

    R. García Moreno

    2010-08-01

    Soil surface roughness (SSR) expresses soil susceptibility to wind and water erosion and plays an important role in the development and maintenance of soil biota. Several methods have been developed to characterise SSR based on different ways of acquiring data. Because the main problems with these methods involve the use and handling of equipment in the field, the present study aims to fill the need for a method of measuring SSR that is more reliable, lower-cost and more convenient in the field than traditional field methods. Shadow analysis, which interprets micro-topographic shadows, is based on the principle that there is a direct relationship between soil surface roughness and the shadows cast by soil structures under fixed sunlight conditions. SSR was calculated with shadow analysis in the laboratory using hemispheres of different diameters with a diverse distribution of known altitudes over a surface area of 1 m2.

    Data obtained from the shadow analysis were compared to data obtained with the chain method and with simulation of the micro-relief. The results show a relationship among the SSR values calculated using the different methods. To further test the method, shadow analysis was used to measure the SSR in a sandy clay loam field tilled with different tools (chisel, tiller and roller) and in a control, on 4 m2 plots divided into subplots of 1 m2. The measurements were compared to data obtained using the chain set and pin meter methods. For each of the three methods, the measured SSR was highest when the chisel was used, followed by the tiller, the roller, and finally the control. Shadow analysis is shown to be a reliable method that does not disturb the measured surface, is easy to handle and analyse, and shortens the time involved in field operations by a factor of 4 to 20 compared to well-known techniques such as the chain set and pin meter methods.
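The chain-set method used here as a benchmark reduces to a simple ratio: a chain of length L1 draped over the surface spans a shorter horizontal distance L2 the rougher the surface, giving a roughness index Cr = (1 - L2/L1) x 100. A minimal sketch for a discretised height profile (the profile data below are illustrative):

```python
import math

def chain_roughness(heights, dx):
    """Chain-method roughness index Cr = (1 - L2/L1) * 100, where L1 is the
    length of the profile draped over the surface (the chain length) and L2
    is its horizontal span.  heights are sampled every dx along a transect."""
    L1 = sum(math.hypot(dx, h2 - h1) for h1, h2 in zip(heights, heights[1:]))
    L2 = dx * (len(heights) - 1)
    return (1.0 - L2 / L1) * 100.0

smooth = chain_roughness([0.0] * 11, dx=1.0)             # flat surface -> 0 %
rough = chain_roughness([0.0, 1.0] * 5 + [0.0], dx=1.0)  # sawtooth micro-relief
```

For the sawtooth every segment has length sqrt(2)·dx, so Cr = (1 - 1/sqrt(2)) x 100 ≈ 29.3 %, while a perfectly flat transect scores 0.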

  7. Methods to characterize environmental settings of stream and groundwater sampling sites for National Water-Quality Assessment

    Science.gov (United States)

    Nakagaki, Naomi; Hitt, Kerie J.; Price, Curtis V.; Falcone, James A.

    2012-01-01

    Characterization of natural and anthropogenic features that define the environmental settings of sampling sites for streams and groundwater, including drainage basins and groundwater study areas, is an essential component of water-quality and ecological investigations being conducted as part of the U.S. Geological Survey's National Water-Quality Assessment program. Quantitative characterization of environmental settings, combined with physical, chemical, and biological data collected at sampling sites, contributes to understanding the status of, and influences on, water-quality and ecological conditions. To support studies for the National Water-Quality Assessment program, a geographic information system (GIS) was used to develop a standard set of methods to consistently characterize the sites, drainage basins, and groundwater study areas across the nation. This report describes three methods used for characterization (simple overlay, area-weighted areal interpolation, and land-cover-weighted areal interpolation) and their appropriate applications to geographic analyses that have different objectives and data constraints. In addition, this document records the GIS thematic datasets that are used for the Program's national design and data analyses.
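The two areal-interpolation methods named above boil down to weighted averages over the overlaps between source zones (e.g. counties) and a target zone (e.g. a drainage basin). A minimal sketch under assumed inputs — the function names and example figures are illustrative, not the report's datasets:

```python
def area_weighted(overlaps):
    """Area-weighted areal interpolation: estimate a target-zone value from
    source zones, weighting each source value by its area of overlap with
    the target.  overlaps: list of (source_value, overlap_area) pairs."""
    total = sum(area for _, area in overlaps)
    return sum(value * area for value, area in overlaps) / total

def land_cover_weighted(overlaps):
    """Variant weighting each overlap by area times the fraction of relevant
    land cover within it (e.g. agricultural land for a fertilizer-use rate).
    overlaps: list of (source_value, overlap_area, cover_fraction) tuples."""
    total = sum(area * frac for _, area, frac in overlaps)
    return sum(value * area * frac for value, area, frac in overlaps) / total

# A hypothetical basin split 25 % / 75 % between two source counties:
v1 = area_weighted([(10.0, 1.0), (30.0, 3.0)])
# Same basin, but the relevant land cover sits mostly in the first county:
v2 = land_cover_weighted([(10.0, 1.0, 0.8), (30.0, 3.0, 0.1)])
```

The land-cover weighting pulls the estimate toward the county that actually contains the land use driving the attribute, which is the point of the third method.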

  8. Comparison of national health research priority-setting methods and characteristics in Latin America and the Caribbean, 2002-2012.

    Science.gov (United States)

    Reveiz, Ludovic; Elias, Vanessa; Terry, Robert F; Alger, Jackeline; Becerra-Posada, Francisco

    2013-07-01

    To compare health research priority-setting methods and characteristics among countries in Latin America and the Caribbean during 2002-2012. This was a systematic review that identified national health research policies and priority agendas through a search of ministry and government databases related to health care institutions. PubMed, LILACS, the Health Research Web, and others were searched for the period from January 2002 to February 2012. The study excluded research organized by governmental institutions and specific national strategies on particular disease areas. Priority-setting methods were compared to the "nine common themes for good practice in health research priorities." National health research priorities were compared to the Millennium Development Goals (MDGs). Of the 18 Latin American countries assessed, 13 had documents that established national health research priorities; in addition, the Caribbean Health Research Council had a research agenda for its 19 constituents. Together, these 14 reports varied widely in terms of objectives, content, dissemination, and implementation; most provided a list of strategic areas, suggestions, and/or sub-priorities for each country, but few proposed specific research topics and questions. Future reports could be improved by including more details on the comprehensive approach employed to identify priorities, on the information-gathering process, and on practices to be undertaken after priorities are set. There is a need to improve the quality of the methodologies utilized and to coordinate regional efforts as countries strive to meet the MDGs.

  9. Laparoscopic inguinal hernia repair by the hook method in emergency setting in children presenting with incarcerated inguinal hernia.

    Science.gov (United States)

    Chan, Kin Wai Edwin; Lee, Kim Hung; Tam, Yuk Him; Sihoe, Jennifer Dart Yin; Cheung, Sing Tak; Mou, Jennifer Wai Cheung

    2011-10-01

    The development of laparoscopic hernia repair has provided an alternative approach to the management of incarcerated inguinal hernia in children. Different laparoscopic techniques for hernia repair have been described. Here we review the role of laparoscopic hernia repair using the hook method in the emergency setting for incarcerated inguinal hernias in children. A retrospective review was conducted of all children who presented with incarcerated inguinal hernia and underwent laparoscopic hernia repair using the hook method in the emergency setting between 2004 and 2010. A total of 15 boys and 1 girl were included, with a mean age of 30 ± 36 months (range, 4 months to 12 years). The hernia was successfully reduced after sedation in 7 children and after general anesthesia in 4 children. In 5 children, the hernia was reduced by a combined manual and laparoscopic-assisted approach. Emergency laparoscopic inguinal hernia repair using the hook method was performed after reduction of the hernia. The presence of preperitoneal fluid secondary to recent incarceration facilitated the dissection of the preperitoneal space by the hernia hook. All children underwent successful reduction and hernia repair. The median operative time was 37 minutes. There was no postoperative complication. The median hospital stay was 3 days. At a median follow-up of 40 months, there was no recurrence of the hernia or testicular atrophy. Emergency laparoscopic inguinal hernia repair by the hook method is safe and feasible. Easier preperitoneal dissection was experienced, and repair of a contralateral patent processus vaginalis can be performed in the same setting. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Setting the Stage with Geometry: Lessons & Worksheets to Build Skills in Measuring Perimeter, Area, Surface Area, and Volume. Poster/Teaching Guide. Expect the Unexpected with Math®

    Science.gov (United States)

    Actuarial Foundation, 2013

    2013-01-01

    "Setting the Stage with Geometry" is a new math program aligned with the National Council of Teachers of Mathematics (NCTM) standards that is designed to help students in grades 6-8 build and reinforce basic geometry skills for measuring 2D and 3D shapes. Developed by The Actuarial Foundation, this program seeks to provide skill-building math…

  11. A degenerate primer MOB typing (DPMT) method to classify gamma-proteobacterial plasmids in clinical and environmental settings.

    Directory of Open Access Journals (Sweden)

    Andrés Alvarado

    Transmissible plasmids are responsible for the spread of genetic determinants, such as antibiotic resistance or virulence traits, causing a large ecological and epidemiological impact. Transmissible plasmids, either conjugative or mobilizable, have in common the presence of a relaxase gene. Relaxases were previously classified in six protein families according to their phylogeny. Degenerate primers hybridizing to coding sequences of conserved amino acid motifs were designed to amplify related relaxase genes from γ-Proteobacterial plasmids. Specificity and sensitivity of a selected set of 19 primer pairs were first tested using a collection of 33 reference relaxases, representing the diversity of γ-Proteobacterial plasmids. The validated set was then applied to the analysis of two plasmid collections obtained from clinical isolates. The relaxase screening method, which we call "Degenerate Primer MOB Typing" or DPMT, detected not only most known Inc/Rep groups, but also a plethora of plasmids not previously assigned to any Inc group or Rep-type.
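A degenerate primer encodes a set of concrete sequences through IUPAC ambiguity codes (R = A/G, Y = C/T, N = any base, and so on), and its degeneracy is the product of the number of alternatives at each position. A minimal sketch of this expansion (the example primers are illustrative, not the DPMT primer set):

```python
from itertools import product

# IUPAC nucleotide ambiguity codes
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand(primer):
    """All concrete sequences a degenerate primer represents."""
    return ["".join(seq) for seq in product(*(IUPAC[b] for b in primer))]

def degeneracy(primer):
    """Number of distinct sequences encoded by the primer."""
    d = 1
    for b in primer:
        d *= len(IUPAC[b])
    return d

variants = expand("AR")        # -> ["AA", "AG"]
d = degeneracy("ATGNR")        # 1 * 1 * 1 * 4 * 2 = 8
```

Keeping the degeneracy moderate is what lets a single primer pair anneal to the conserved motif across a whole relaxase family without losing specificity.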

  12. Automatic Enhancement of the Reference Set for Multi-Criteria Sorting in The Frame of Theseus Method

    Directory of Open Access Journals (Sweden)

    Fernandez Eduardo

    2014-05-01

    Some recent works have established the importance of handling abundant reference information in multi-criteria sorting problems. More valid information allows a better characterization of the agent’s assignment policy, which can lead to improved decision support. However, information for enhancing the reference set may sometimes not be available, or may be too expensive. This paper explores an automatic mode of enhancing the reference set in the framework of the THESEUS multi-criteria sorting method. Some performance measures are defined in order to test the results of the enhancement. Several theoretical arguments and practical experiments are provided here, supporting a basic advantage of the automatic enhancement: a reduction of the vagueness measure that improves the THESEUS accuracy, without additional effort from the decision agent. The experiments suggest that the errors coming from inadequate automatic assignments can be kept at a manageable level.

  13. Comparison of different statistical methods for estimation of extreme sea levels with wave set-up contribution

    Science.gov (United States)

    Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme

    2013-04-01

    Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with a wave set-up contribution are estimated here for one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is computation of the joint probability of simultaneous wave height and still sea level; the second is interpretation of those joint probabilities to assess the sea level for a given return period. Two approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is a multivariate extreme value distribution of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte-Carlo simulation, in which the estimation is more accurate but requires more computation time, and classical ocean engineering design contours of inverse-FORM type, which are simpler and allow a more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare the results from the two approaches and the two methods. To be able to use both the Monte-Carlo simulation and the design contour method, the wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach over the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contour method, which is an alternative when the computation of sea levels is too complex for the Monte-Carlo simulation method.
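The Monte-Carlo route described above can be sketched as: draw many synthetic events of tide, surge, and wave height; add an empirical wave set-up term; then read the return level off the empirical distribution. All the distributions and the 20 % set-up fraction below are illustrative placeholders, not the paper's fitted models for Cherbourg:

```python
import random

def simulate_sea_levels(n_events, seed=1):
    """Draw synthetic per-event sea levels as tide + surge + wave set-up.

    The distributions and the 0.2 * Hs set-up formula are assumptions made
    for illustration only."""
    rng = random.Random(seed)
    levels = []
    for _ in range(n_events):
        tide = rng.uniform(-2.0, 2.0)      # astronomical tide (m)
        surge = rng.expovariate(2.0)       # meteorological surge (m)
        hs = rng.expovariate(0.5)          # significant wave height (m)
        levels.append(tide + surge + 0.2 * hs)
    return levels

def return_level(levels, events_per_year, T):
    """Empirical level exceeded on average once every T years."""
    p_exceed = 1.0 / (T * events_per_year)
    ranked = sorted(levels)
    k = int((1.0 - p_exceed) * len(ranked))
    return ranked[min(k, len(ranked) - 1)]

levels = simulate_sea_levels(200_000)
r10 = return_level(levels, events_per_year=706, T=10)    # ~2 tidal events/day
r100 = return_level(levels, events_per_year=706, T=100)
```

The 100-year level necessarily sits at or above the 10-year level; in the paper's setting the interesting part is replacing these placeholder marginals with the fitted joint (logistic or conditional) model before sampling.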

  14. Communication skills training in dementia care: a systematic review of effectiveness, training content, and didactic methods in different care settings.

    Science.gov (United States)

    Eggenberger, Eva; Heimerl, Katharina; Bennett, Michael I

    2013-03-01

    Caring for and caring about people with dementia require specific communication skills. Healthcare professionals and family caregivers usually receive little training to enable them to meet the communicative needs of people with dementia. This review identifies existent interventions to enhance communication in dementia care in various care settings. We searched MEDLINE, AMED, EMBASE, PsychINFO, CINAHL, The Cochrane Library, Gerolit, and Web of Science for scientific articles reporting interventions in both English and German. An intervention was defined as communication skills training by means of face-to-face interaction with the aim of improving basic communicative skills. Both professional and family caregivers were included. The effectiveness of such training was analyzed. Different types of training were defined. Didactic methods, training content, and additional organizational features were qualitatively examined. This review included 12 trials totaling 831 persons with dementia, 519 professional caregivers, and 162 family caregivers. Most studies were carried out in the USA, the UK, and Germany. Eight studies took place in nursing homes; four studies were located in a home-care setting. No studies could be found in an acute-care setting. We provide a list of basic communicative principles for good communication in dementia care. Didactic methods included lectures, hands-on training, group discussions, and role-play. This review shows that communication skills training in dementia care significantly improves the quality of life and wellbeing of people with dementia and increases positive interactions in various care settings. Communication skills training shows significant impact on professional and family caregivers' communication skills, competencies, and knowledge. Additional organizational features improve the sustainability of communication interventions.

  15. Evaluating the construct of triage acuity against a set of reference vignettes developed via modified Delphi method.

    Science.gov (United States)

    Twomey, Michèle; Wallis, Lee A; Myers, Jonathan E

    2014-07-01

    To evaluate the construct of triage acuity as measured by the South African Triage Scale (SATS) against a set of reference vignettes. A modified Delphi method was used to develop a set of reference vignettes. Delphi participants completed a 2-round consensus-building process, and independently assigned triage acuity ratings to 100 written vignettes unaware of the ratings given by others. Triage acuity ratings were summarised for all vignettes, and only those that reached 80% consensus during round 2 were included in the reference set. Triage ratings for the reference vignettes given by two independent experts using the SATS were compared with the ratings given by the international Delphi panel. Measures of sensitivity, specificity, associated percentages for over-triage/under-triage were used to evaluate the construct of triage acuity (as measured by the SATS) by examining the association between the ratings by the two experts and the international panel. On completion of the Delphi process, 42 of the 100 vignettes reached 80% consensus on their acuity rating and made up the reference set. On average, over all acuity levels, sensitivity was 74% (CI 64% to 82%), specificity 92% (CI 87% to 94%), under-triage occurred 14% (CI 8% to 23%) and over-triage 12% (CI 8% to 23%) of the time. The results of this study provide an alternative to evaluating triage scales against the construct of acuity as measured with the SATS. This method of using 80% consensus vignettes may, however, systematically bias the validity estimate towards better performance. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
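The agreement measures used in this evaluation — over-/under-triage rates plus per-level sensitivity and specificity against the Delphi reference — can be sketched as a small one-vs-rest tally. The six vignette ratings below are hypothetical, not the study's data:

```python
def triage_agreement(reference, rated):
    """Compare acuity ratings (integers, higher = more urgent) against a
    reference standard.  Returns over-triage and under-triage rates plus
    one-vs-rest (sensitivity, specificity) for each reference acuity level."""
    n = len(reference)
    over = sum(r > ref for ref, r in zip(reference, rated)) / n
    under = sum(r < ref for ref, r in zip(reference, rated)) / n
    per_level = {}
    for lv in sorted(set(reference)):
        tp = sum(ref == lv and r == lv for ref, r in zip(reference, rated))
        fn = sum(ref == lv and r != lv for ref, r in zip(reference, rated))
        fp = sum(ref != lv and r == lv for ref, r in zip(reference, rated))
        tn = n - tp - fn - fp
        per_level[lv] = (tp / (tp + fn), tn / (tn + fp))  # (sensitivity, specificity)
    return over, under, per_level

# Six hypothetical vignettes on a 3-level acuity scale:
over, under, stats = triage_agreement([1, 1, 2, 2, 3, 3], [1, 2, 2, 2, 3, 2])
```

Here one vignette is over-triaged and one under-triaged (1/6 each), and the middle level shows perfect sensitivity but reduced specificity because it absorbs misratings from both neighbours.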

  16. Expectations for a scientific collaboratory

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    2003-01-01

    In the past decade, a number of scientific collaboratories have emerged, yet adoption of scientific collaboratories remains limited. Meeting expectations is one factor that influences adoption of innovations, including scientific collaboratories. This paper investigates the expectations scientists have with respect to scientific collaboratories. Interviews were conducted with 17 scientists who work in a variety of settings and have a range of experience conducting and managing scientific research. Results indicate that scientists expect a collaboratory to: support their strategic plans; facilitate management of the scientific process; have a positive or neutral impact on scientific outcomes; provide advantages and disadvantages for scientific task execution; and provide personal conveniences when collaborating across distances. These results both confirm existing knowledge and raise new issues for the design…

  17. Determining health expectancies

    National Research Council Canada - National Science Library

    Robine, Jean-Marie

    2003-01-01

    Contents include: Introduction (Jean-Marie Robine); 1. Increase in Life Expectancy and Concentration of Ages at Death (France Meslé and Jacques Vallin); 2. Compression of Morbidity…

  18. Developing adaptive interventions for adolescent substance use treatment settings: protocol of an observational, mixed-methods project.

    Science.gov (United States)

    Grant, Sean; Agniel, Denis; Almirall, Daniel; Burkhart, Q; Hunter, Sarah B; McCaffrey, Daniel F; Pedersen, Eric R; Ramchand, Rajeev; Griffin, Beth Ann

    2017-12-19

    Over 1.6 million adolescents in the United States meet criteria for substance use disorders (SUDs). While there are promising treatments for SUDs, adolescents respond to these treatments differentially in part based on the setting in which treatments are delivered. One way to address such individualized response to treatment is through the development of adaptive interventions (AIs): sequences of decision rules for altering treatment based on an individual's needs. This protocol describes a project with the overarching goal of beginning the development of AIs that provide recommendations for altering the setting of an adolescent's substance use treatment. This project has three discrete aims: (1) explore the views of various stakeholders (parents, providers, policymakers, and researchers) on deciding the setting of substance use treatment for an adolescent based on individualized need, (2) generate hypotheses concerning candidate AIs, and (3) compare the relative effectiveness among candidate AIs and non-adaptive interventions commonly used in everyday practice. This project uses a mixed-methods approach. First, we will conduct an iterative stakeholder engagement process, using RAND's ExpertLens online system, to assess the importance of considering specific individual needs and clinical outcomes when deciding the setting for an adolescent's substance use treatment. Second, we will use results from the stakeholder engagement process to analyze an observational longitudinal data set of 15,656 adolescents in substance use treatment, supported by the Substance Abuse and Mental Health Services Administration, using the Global Appraisal of Individual Needs questionnaire. We will utilize methods based on Q-learning regression to generate hypotheses about candidate AIs. Third, we will use robust statistical methods that aim to appropriately handle casemix adjustment on a large number of covariates (marginal structural modeling and inverse probability of treatment weights…

  19. Life expectancy in bipolar disorder

    DEFF Research Database (Denmark)

    Kessing, Lars Vedel; Vradi, Eleni; Andersen, Per Kragh

    2015-01-01

    OBJECTIVE: Life expectancy in patients with bipolar disorder has been reported to be decreased by 11 to 20 years. These calculations are based on data for individuals at the age of 15 years. However, this may be misleading for patients with bipolar disorder in general, as most patients have a later onset of illness. The aim of the present study was to calculate the remaining life expectancy for patients of different ages with a diagnosis of bipolar disorder. METHODS: Using nationwide registers of all inpatient and outpatient contacts with all psychiatric hospitals in Denmark from 1970 to 2012, we … The difference between the remaining life expectancy in bipolar disorder and that of the general population decreased with age, indicating that patients with bipolar disorder start losing life-years during early and mid-adulthood. CONCLUSIONS: Life expectancy in bipolar disorder is decreased substantially, but less so than previously…
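Remaining life expectancy at a given age, as used in this study, is conventionally computed from age-specific death probabilities via a period life table. A minimal sketch with an artificial constant hazard (the mortality figures are illustrative, not the Danish register data):

```python
def remaining_life_expectancy(qx, age):
    """Remaining life expectancy at `age` from annual death probabilities
    (qx[i] = probability of dying between exact ages i and i+1), assuming
    deaths occur on average mid-year (a_x = 0.5), as in a standard period
    life table."""
    survivors = 1.0
    person_years = 0.0
    for q in qx[age:]:
        deaths = survivors * q
        person_years += survivors - 0.5 * deaths  # years lived this interval
        survivors -= deaths
    return person_years

# Constant hazard q = 0.2: e_x = (1 - 0.5*q)/q = 4.5 years at every age,
# since a constant hazard is memoryless.
qx = [0.2] * 300
e0 = remaining_life_expectancy(qx, 0)
e40 = remaining_life_expectancy(qx, 40)
```

With real schedules the interesting quantity is the one the abstract describes: the gap between this value for patients and for the general population, evaluated at each attained age rather than only at age 15.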

  20. Water exchange method for colonoscopy: learning curve of an experienced colonoscopist in a U.S. community practice setting.

    Science.gov (United States)

    Fischer, Leonard S; Lumsden, Antoinette; Leung, Felix W

    2012-07-01

    Water exchange colonoscopy has been reported to reduce examination discomfort and to provide salvage cleansing in unsedated or minimally sedated patients. The prolonged insertion time and perceived difficulty of insertion associated with water exchange have been cited as barriers to its widespread use. Objective: to assess the feasibility of learning and using the water exchange method of colonoscopy in a U.S. community practice setting. Setting: a quality improvement program in nonacademic community endoscopy centers. Patients: patients undergoing sedated diagnostic, surveillance, or screening colonoscopy. After direct coaching by a knowledgeable trainer, an experienced colonoscopist initiated colonoscopy using the water method. Whenever >5 min elapsed without advancing the colonoscope, conversion to air insufflation was made to ensure timely completion of the examination. Intervention: the water method. Main outcome measure: intention-to-treat (ITT) cecal intubation rate (CIR). Female patients had a significantly higher rate of past abdominal surgery and a significantly lower ITT CIR. The ITT CIR showed a progressive increase over time in both males and females, to 85-90%. Mean insertion time was maintained at 9 to 10 min. The overall CIR was 99%. Use of water exchange did not preclude cecal intubation upon conversion to usual air insufflation in sedated patients examined by an experienced colonoscopist. With practice, the ITT CIR increased over time in both male and female patients. Larger volumes of water exchanged were associated with a higher ITT CIR and better quality scores of bowel preparation. The data suggest that learning water exchange by a busy colonoscopist in a community practice setting is feasible and that outcomes conform to accepted quality standards.

  1. A new method for setting guidelines to protect human health from agricultural exposure by using chlorpyrifos as an example

    Directory of Open Access Journals (Sweden)

    Dung Tri Phung

    2015-05-01

    Introduction and objectives. Guidelines set by various agencies for the control and management of chlorpyrifos cover a wide range of values, reflecting difficulties in the procedures for their development. To overcome these difficulties, a new method to set guidelines was developed. Published data derived from epidemiological investigations of human populations were used to develop a dose-response relationship for chlorpyrifos, allowing the calculation of threshold values which can be used as guidelines. Materials and method. Data from the scientific literature on human populations were collected to evaluate the adverse response doses for a range of health effects. The cumulative frequency distribution (CFD) of the minimum levels of adverse effects, measured in terms of the Lifetime Average Daily Dose (LADD_D) and the Absorbed Daily Dose for neurological (ADD_DN) and non-neurological effects, was used. Results. Linear regression equations were fitted to the CFD plots, giving R² values of 0.93 and 0.86, indicating a normal distribution of the data. Using these CFD plots, the chronic and acute threshold values for chlorpyrifos exposure were calculated at the 5% cumulative frequency level, giving values of 0.5 µg/kg/d and 3 µg/kg/d respectively. Conclusions. Guidelines set using this technique, at 0.5 µg/kg/d for chronic and 3 µg/kg/d for acute exposure, provide an alternative to the currently used biological-endpoint-and-safety-factor method.
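The fitting step described above — regress cumulative frequency against dose and read off the dose at the 5% level — can be sketched with ordinary least squares on log-transformed doses. The synthetic dose list and the log10 transform are assumptions for illustration, not the paper's epidemiological data:

```python
import math

def fit_line(xs, ys):
    """Ordinary least squares fit of ys ~ a + b * xs; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def threshold_dose(doses, percentile=5.0):
    """Fit the cumulative frequency (%) of adverse-effect doses against
    log10(dose) and return the dose at the given cumulative-frequency level."""
    xs = sorted(math.log10(d) for d in doses)
    n = len(xs)
    ys = [100.0 * (i + 1) / n for i in range(n)]  # empirical CFD in percent
    a, b = fit_line(xs, ys)
    return 10.0 ** ((percentile - a) / b)

# Synthetic, evenly log-spaced minimum adverse-effect doses (illustrative):
doses = [10.0 ** (k / 10.0) for k in range(1, 11)]
guideline = threshold_dose(doses)   # dose at the 5 % cumulative frequency
```

On this synthetic set the CFD is exactly linear in log dose, so the 5% threshold comes out at 10^0.05; with real data the fit quality (the abstract's R² values) indicates how well the linear-CFD assumption holds.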

  2. Expected Utility Illustrated: A Graphical Analysis of Gambles with More than Two Possible Outcomes

    Science.gov (United States)

    Chen, Frederick H.

    2010-01-01

    The author presents a simple geometric method to graphically illustrate the expected utility from a gamble with more than two possible outcomes. This geometric result gives economics students a simple visual aid for studying expected utility theory and enables them to analyze a richer set of decision problems under uncertainty compared to what…
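The computation the graphical device illustrates — the expected utility of a gamble with more than two outcomes, and the certainty equivalent it implies — is a short calculation. The three-outcome gamble and the square-root utility below are illustrative assumptions, not the article's example:

```python
import math

def expected_utility(outcomes, probs, u):
    """Expected utility of a gamble with any number of outcomes."""
    assert abs(sum(probs) - 1.0) < 1e-12, "probabilities must sum to 1"
    return sum(p * u(w) for w, p in zip(outcomes, probs))

# A three-outcome gamble evaluated with the risk-averse utility u(w) = sqrt(w):
outcomes, probs = [0.0, 4.0, 9.0], [0.2, 0.5, 0.3]
eu = expected_utility(outcomes, probs, math.sqrt)   # 0.2*0 + 0.5*2 + 0.3*3
ce = eu ** 2                                        # certainty equivalent: u(ce) = eu
risk_premium = sum(p * w for w, p in zip(outcomes, probs)) - ce
```

Because u is concave, the certainty equivalent falls below the gamble's expected value, and the gap (the risk premium) is exactly the quantity a diagram of this kind lets students read off geometrically.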

  3. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-01-01

    Background Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Methods Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. Results A ‘Rich Picture’ was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. Conclusions When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further

  4. Evaluation of medical students' expectations for multimedia teaching materials: Illustration by an original method using the evaluation of a web site on cardiovascular rehabilitation.

    Science.gov (United States)

    Casillas, J-M; Gremeaux, V

    2012-02-01

    Different multimedia tools have been developed to help medical students prepare for the National Ranking Examination (NRE), making it difficult to choose among them. No study has specifically evaluated students' expectations regarding these materials. Our aims were to learn how medical students in Dijon assessed a website dedicated to cardiovascular rehabilitation, and to collect their suggestions in order to meet their expectations and the goals of second-cycle medical studies. Eighteen second-cycle students evaluated, in a semi-directed manner and in an ecological situation, a website originally designed for the national cardiovascular rehabilitation curriculum of the Diploma of Specialty Studies (DES) for physical medicine and rehabilitation (PM&R) residents. Students also had to fill out a pretest and a posttest (5 MCQs). The overall quality of the site was deemed satisfactory (65.6 ± 7.7 points/100). Medical information was rated better than non-medical data and the site's design (84.8 ± 8.1, 61.1 ± 20 and 64.4 ± 14.9/100, respectively). Students found the site useful for understanding the items related to cardiovascular rehabilitation, although they judged it not completely in line with the NRE goals. The average score increased significantly between the pretest and posttest (6.8 ± 0.8 vs. 5 ± 1.5/8, p < …) […] learning for the NRE. These elements could serve as building grounds for a future version of this website. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  5. Performance appraisal of expectations

    Directory of Open Access Journals (Sweden)

    Russkikh G.A.

    2016-11-01

    Full Text Available This article provides basic concepts for teachers on how to assess and achieve planned student expectations; it describes the functions and elements of expectations and the nature of external and internal assessment, presents a technology for assessing results, and gives recommendations on creating diagnostic assignments.

  6. Spiking the expectancy profiles

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Loui, Psyche; Vuust, Peter

    Melodic expectations are generated with different degrees of certainty. Given distributions of expectedness ratings for multiple continuations of each context, as obtained with the probe-tone paradigm, this certainty can be quantified in terms of Shannon entropy. Because expectations arise from s...
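The entropy quantification described here can be sketched directly (the rating distributions below are invented):

```python
from math import log2

def shannon_entropy(probs):
    """Shannon entropy (bits) of a probability distribution, e.g. the
    normalised expectedness ratings for the possible continuations of a
    melodic context: low entropy means a sharply peaked (certain)
    expectancy profile, high entropy a flat (uncertain) one."""
    return -sum(p * log2(p) for p in probs if p > 0)

peaked = shannon_entropy([0.7, 0.1, 0.1, 0.1])    # confident expectation
flat = shannon_entropy([0.25, 0.25, 0.25, 0.25])  # maximal uncertainty: 2 bits
```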

  7. Using mixed methods to evaluate efficacy and user expectations of a virtual reality-based training system for upper-limb recovery in patients after stroke: a study protocol for a randomised controlled trial.

    Science.gov (United States)

    Schuster-Amft, Corina; Eng, Kynan; Lehmann, Isabelle; Schmid, Ludwig; Kobashi, Nagisa; Thaler, Irène; Verra, Martin L; Henneke, Andrea; Signer, Sandra; McCaskey, Michael; Kiper, Daniel

    2014-09-06

    In recent years, virtual reality has been introduced to neurorehabilitation, in particular with the intention of improving upper-limb training options and facilitating motor function recovery. The proposed study incorporates a quantitative part and a qualitative part, termed a mixed-methods approach: (1) a quantitative investigation of the efficacy of virtual reality training compared with conventional therapy for upper-limb motor function, (2a) a qualitative investigation of patients' experiences and expectations of virtual reality training, and (2b) a qualitative investigation of therapists' experiences using the virtual reality training system in the therapy setting. At three participating clinics, 60 patients at least 6 months after stroke onset will be randomly allocated to an experimental virtual reality group (EG) or to a control group that will receive conventional physiotherapy or occupational therapy (16 sessions, 45 minutes each, over the course of 4 weeks). Using custom data gloves, patients' finger and arm movements will be displayed in real time on a monitor, and they will move and manipulate objects in various virtual environments. A blinded assessor will test patients' motor and cognitive performance twice before, once during, and twice after the 4-week intervention. The primary outcome measure is the Box and Block Test. Secondary outcome measures are the Chedoke-McMaster Stroke Assessment (hand, arm and shoulder pain subscales), the Chedoke-McMaster Arm and Hand Activity Inventory, the Line Bisection Test, the Stroke Impact Scale, the Mini-Mental State Examination and the Extended Barthel Index. Semistructured face-to-face interviews will be conducted with patients in the EG after the intervention ends, focusing on the patients' expectations and experiences regarding the virtual reality training. Therapists' perspectives on virtual reality training will be reviewed in three focus groups comprising four to six occupational

  8. African Swine Fever in Uganda: Qualitative Evaluation of Three Surveillance Methods with Implications for Other Resource-Poor Settings.

    Science.gov (United States)

    Chenais, Erika; Sternberg-Lewerin, Susanna; Boqvist, Sofia; Emanuelson, Ulf; Aliro, Tonny; Tejler, Emma; Cocca, Giampaolo; Masembe, Charles; Ståhl, Karl

    2015-01-01

    Animal diseases impact negatively on households and on national economies. In low-income countries, this pertains especially to socio-economic effects on household level. To control animal diseases and mitigate their impact, it is necessary to understand the epidemiology of the disease in its local context. Such understanding, gained through disease surveillance, is often lacking in resource-poor settings. Alternative surveillance methods have been developed to overcome some of the hurdles obstructing surveillance. The objective of this study was to evaluate and qualitatively compare three methods for surveillance of acute infectious diseases using African swine fever in northern Uganda as an example. Report-driven outbreak investigations, participatory rural appraisals (PRAs), and a household survey using a smartphone application were evaluated. All three methods had good disease-detecting capacity, and each of them detected many more outbreaks compared to those reported to the World Organization for Animal Health during the same time period. Apparent mortality rates were similar for the three methods although highest for the report-driven outbreak investigations, followed by the PRAs, and then the household survey. The three methods have different characteristics and the method of choice will depend on the surveillance objective. The optimal situation might be achieved by a combination of the methods: outbreak detection via smartphone-based real-time surveillance, outbreak investigation for collection of biological samples, and a PRA for a better understanding of the epidemiology of the specific outbreak. All three methods require initial investments and continuous efforts. The sustainability of the surveillance system should, therefore, be carefully evaluated before making such investments.

  9. African swine fever in Uganda: qualitative evaluation of three surveillance methods with implications for other resource-poor settings

    Directory of Open Access Journals (Sweden)

    Erika eChenais

    2015-10-01

    Full Text Available Animal diseases impact negatively on households and on national economies. In low-income countries this pertains especially to socio-economic effects on household level. To control animal diseases and mitigate their impact, it is necessary to understand the epidemiology of the disease in its local context. Such understanding, gained through disease surveillance, is often lacking in resource-poor settings. Alternative surveillance methods have been developed to overcome some of the hurdles obstructing surveillance. The objective of this study was to evaluate and qualitatively compare three methods for surveillance of acute infectious diseases, using African swine fever (ASF) in northern Uganda as an example. Report-driven outbreak investigations, participatory rural appraisals (PRAs), and a household survey using a smartphone application were evaluated. All three methods had good disease-detecting capacity, and each of them detected many more outbreaks compared to those reported to the World Organization for Animal Health (OIE) during the same time period. Apparent mortality rates were similar for the three methods although highest for the report-driven outbreak investigations, followed by the PRAs, and then the household survey. The three methods have different characteristics and the method of choice will depend on the surveillance objective. The optimal situation might be achieved by a combination of the methods: outbreak detection via smartphone-based real-time surveillance, outbreak investigation for collection of biological samples, and a PRA for a better understanding of the epidemiology of the specific outbreak. All three methods require initial investments and continuous efforts. The sustainability of the surveillance system should therefore be carefully evaluated before making such investments.

  10. Nursing Education Interventions for Managing Acute Pain in Hospital Settings: A Systematic Review of Clinical Outcomes and Teaching Methods.

    Science.gov (United States)

    Drake, Gareth; de C Williams, Amanda C

    2017-02-01

    The objective of this review was to examine the effects of nursing education interventions on clinical outcomes for acute pain management in hospital settings, relating interventions to health care behavior change theory. Three databases were searched for nursing education interventions from 2002 to 2015 in acute hospital settings with clinical outcomes reported. Methodological quality was rated as strong, moderate, or weak using the Effective Public Health Practice Project Quality Assessment Tool for quantitative studies. The 12 eligible studies used varied didactic and interactive teaching methods. Several studies had weaknesses attributable to selection biases, uncontrolled confounders, and lack of blinding of outcome assessors. No studies made reference to behavior change theory in their design. Eight of the 12 studies investigated nursing documentation of pain assessment as the main outcome, with the majority reporting positive effects of education interventions on nursing pain assessment. Of the remaining studies, two reported mixed findings on patient self-report of pain scores as the key measure, one reported improvements in patient satisfaction with pain management after a nursing intervention, and one study found an increase in nurses' delivery of a relaxation treatment following an intervention. Improvements in design and evaluation of nursing education interventions are suggested, drawing on behavior change theory and emphasizing the relational, contextual, and emotionally demanding nature of nursing pain management in hospital settings. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  11. Research on the Method of Setting Waiting Area for Non-motor Vehicle at Signal Control Intersection

    Directory of Open Access Journals (Sweden)

    Wang Yun Xia

    2018-01-01

    Full Text Available The electric bicycle has become an indispensable component of the transportation system. Traffic organization and channelization design at signal-controlled intersections is often not refined enough to meet the current traffic demand of non-motor vehicles: traffic rules are unclear and visibility is poor, so the traffic safety situation for non-motor vehicles is not optimistic. It is therefore necessary to study traffic organization methods based on the demands of non-motor vehicles, which can provide a theoretical basis for traffic administration departments in policy-making and traffic design. This article focuses on the method of setting a waiting area for non-motor vehicles at signal-controlled intersections, including its advantages, disadvantages and applicable conditions.

  12. A New Method of Multiattribute Decision-Making Based on Interval-Valued Hesitant Fuzzy Soft Sets and Its Application

    Directory of Open Access Journals (Sweden)

    Yan Yang

    2017-01-01

    Full Text Available Combining interval-valued hesitant fuzzy soft sets (IVHFSSs) with a new comparative law, we propose a new method that can effectively solve multiattribute decision-making (MADM) problems. Firstly, a characteristic function of two interval values and a new comparative law for interval-valued hesitant fuzzy elements (IVHFEs), based on the possibility degree, are proposed. Then, we define two important notions for IVHFSSs based on the new comparative law: the interval-valued hesitant fuzzy soft quasi-subset and soft quasi-equality. Finally, an algorithm is presented to solve MADM problems. We also use the proposed method to evaluate the importance of the major components of a well-drilling mud pump.
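The possibility-degree comparison of interval values that the method builds on can be sketched as follows, using one standard formula from the interval-comparison literature (the paper's own comparative law for IVHFEs may differ in detail):

```python
def possibility_degree(a, b):
    """Possibility degree p(a >= b) for two intervals a = (a_lo, a_hi) and
    b = (b_lo, b_hi), using the common formula
    p(a >= b) = min(max((a_hi - b_lo) / (len_a + len_b), 0), 1).
    It satisfies complementarity: p(a >= b) + p(b >= a) = 1."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    len_a = a_hi - a_lo
    len_b = b_hi - b_lo
    if len_a + len_b == 0:  # both are degenerate point intervals
        return 1.0 if a_lo >= b_lo else 0.0
    return min(max((a_hi - b_lo) / (len_a + len_b), 0.0), 1.0)

# Ranking two interval-valued membership degrees:
p_ab = possibility_degree((0.4, 0.8), (0.3, 0.6))  # degree to which a >= b
p_ba = possibility_degree((0.3, 0.6), (0.4, 0.8))
```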

  13. Barriers in implementing evidence-informed health decisions in rural rehabilitation settings: a mixed methods pilot study.

    Science.gov (United States)

    Prakash, V; Hariohm, K; Balaganapathy, M

    2014-08-01

    Literature on the barriers to implementing research findings in physiotherapy practice is often urban-centric, relying on self-report based on hypothetical patient scenarios. The objective of this study was to investigate the barriers encountered by physiotherapists trained in evidence-informed practice when managing "real world" patients in rural rehabilitation settings. A mixed-methods research design was used. Physiotherapists working in rural outpatient rehabilitation settings participated in the study. In the first phase, we asked all participants (N = 5) to maintain a log book for a 4-week period to record questions that arose during their routine clinical encounters, and to follow the first four of the five steps of evidence-informed practice (ask, access, appraise and apply). In the second phase (after 4 weeks), we conducted semistructured, direct interviews with the participants, exploring their experiences in implementing the evidence-informed clinical decisions made during the study period. At the end of 4 weeks, 30 questions were recorded. For 17 questions the participants found evidence, but they applied that evidence in their practice in only 9 instances. Being generalist practitioners, participants reported a lack of outcome measures specific to their patients as a barrier more often than time constraints. Practice setting, lack of patient-centered research and the evidence-informed practice competency of physiotherapists can be significant barriers to implementing evidence-informed health decisions in rural rehabilitation settings. © 2014 Chinese Cochrane Center, West China Hospital of Sichuan University and Wiley Publishing Asia Pty Ltd.

  14. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings.

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-08-01

    Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. A 'Rich Picture' was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration. 
Published by the BMJ Publishing Group
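The CART targeting mentioned in this record can be illustrated with a single Gini-impurity split, the basic step by which classification trees partition patients into risk groups (the feature values and outcomes below are invented):

```python
def gini(labels):
    """Gini impurity of a list of 0/1 outcome labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(values, labels):
    """Exhaustively find the threshold on one feature that minimises the
    size-weighted Gini impurity of the two resulting child groups."""
    n = len(values)
    best = (None, gini(labels))
    for t in sorted(set(values)):
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical risk factor (e.g. a weight z-score) vs adverse outcome (1 = event):
threshold, impurity = best_split([-2.1, -1.8, -0.4, 0.2, 0.9, 1.3],
                                 [1, 1, 1, 0, 0, 0])
```

CART builds a full tree by applying this split search recursively to each child group; the resulting leaves are the patient risk groups at which interventions can be targeted.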

  15. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    Science.gov (United States)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.
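The outer loop this record describes, a genetic algorithm exploring initial-guess parameters with the level-set inversion's final misfit as fitness, can be sketched as below. The inner level-set inversion is replaced here by a toy quadratic residual, and all GA settings (population size, mutation scale) are illustrative, not the study's:

```python
import random

def total_residual(params):
    """Stand-in for the expensive inner step: run the level-set inversion
    from the initial-guess field encoded by `params` and return the final
    misfit between simulated and observed heads.  Replaced here by a toy
    quadratic with its minimum at (1.0, -2.0)."""
    x, y = params
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

def genetic_minimise(fitness, bounds, pop_size=30, generations=60, seed=0):
    """Minimal elitist GA: keep the best half, refill with mutated
    midpoint crossovers of random elite pairs."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]                          # selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # crossover
            child = [c + rng.gauss(0, 0.1) for c in child]    # mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = genetic_minimise(total_residual, [(-5, 5), (-5, 5)])
```

In the study's setting each fitness evaluation is itself a full inversion, which is why the authors parallelise the GA.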

  16. An Approximate Method for Solving Optimal Control Problems for Discrete Systems Based on Local Approximation of an Attainability Set

    Directory of Open Access Journals (Sweden)

    V. A. Baturin

    2017-03-01

    Full Text Available An optimal control problem for discrete systems is considered. A method of successive improvements is suggested, along with a modernization based on expanding the main structures of the core algorithm with respect to a parameter. The idea of the method is based on a local approximation of the attainability set, which is described by the zeros of the Bellman function in a special optimal control problem: starting from the end point of the trajectory, one must find a path that minimizes the norm of the deviation from the initial state. If the initial point belongs to the attainability set of the original controlled system, the value of the Bellman function is zero; otherwise it is greater than zero. For this special problem the Bellman equation is considered, and the Bellman function is approximated by quadratic terms. Along an admissible trajectory this approximation yields nothing, because the Bellman function and its expansion coefficients are zero there. A special trick is therefore used: an additional variable is introduced that characterizes the degree of deviation of the system from the initial state, producing an expanded original chain. A nonzero initial condition is selected for the new variable, so the resulting trajectory lies outside the attainability set and the corresponding Bellman function is greater than zero, which allows a non-trivial approximation. As a result of these procedures, algorithms of successive improvement are designed; relaxation conditions for the algorithms and the necessary conditions of optimality are also obtained.

  17. Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the minimum data set for home care

    Directory of Open Access Journals (Sweden)

    Hirdes John P

    2005-01-01

    Full Text Available Abstract Background There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did

  18. Risk adjustment methods for Home Care Quality Indicators (HCQIs) based on the minimum data set for home care

    Science.gov (United States)

    Dalby, Dawn M; Hirdes, John P; Fries, Brant E

    2005-01-01

    Background There has been increasing interest in enhancing accountability in health care. As such, several methods have been developed to compare the quality of home care services. These comparisons can be problematic if client populations vary across providers and no adjustment is made to account for these differences. The current paper explores the effects of risk adjustment for a set of home care quality indicators (HCQIs) based on the Minimum Data Set for Home Care (MDS-HC). Methods A total of 22 home care providers in Ontario and the Winnipeg Regional Health Authority (WRHA) in Manitoba, Canada, gathered data on their clients using the MDS-HC. These assessment data were used to generate HCQIs for each agency and for the two regions. Three types of risk adjustment methods were contrasted: a) client covariates only; b) client covariates plus an "Agency Intake Profile" (AIP) to adjust for ascertainment and selection bias by the agency; and c) client covariates plus the intake Case Mix Index (CMI). Results The mean age and gender distribution in the two populations was very similar. Across the 19 risk-adjusted HCQIs, Ontario CCACs had a significantly higher AIP adjustment value for eight HCQIs, indicating a greater propensity to trigger on these quality issues on admission. On average, Ontario had unadjusted rates that were 0.3% higher than the WRHA. Following risk adjustment with the AIP covariate, Ontario rates were, on average, 1.5% lower than the WRHA. In the WRHA, individual agencies were likely to experience a decline in their standing, whereby they were more likely to be ranked among the worst performers following risk adjustment. The opposite was true for sites in Ontario. Conclusions Risk adjustment is essential when comparing quality of care across providers when home care agencies provide services to populations with different characteristics. While such adjustment had a relatively small effect for the two regions, it did substantially affect the
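The agency-level adjustment these records describe can be illustrated with indirect standardisation, one common risk-adjustment scheme; the study's AIP- and CMI-based models are more elaborate, and the counts below are invented:

```python
def risk_adjusted_rate(observed_events, expected_events, reference_rate):
    """Indirectly standardised rate: (observed / expected) * reference rate.

    `expected_events` is what a client-covariate model predicts for this
    agency's case mix; an agency with fewer events than expected ends up
    with an adjusted rate below the reference rate, and vice versa."""
    return observed_events / expected_events * reference_rate

# Agency with 24 triggered HCQI events where the covariate model expected 30,
# against an overall regional rate of 10%:
adjusted = risk_adjusted_rate(24, 30, 0.10)  # below 0.10: better than expected
```

This is why rankings can shift after adjustment, as the abstract notes for the WRHA and Ontario sites: an agency with a low raw rate but an even lower expected rate moves down the league table.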

  19. A novel method for predicting activity of cis-regulatory modules, based on a diverse training set.

    Science.gov (United States)

    Yang, Wei; Sinha, Saurabh

    2017-01-01

    With the rapid emergence of technologies for locating cis-regulatory modules (CRMs) genome-wide, the next pressing challenge is to assign precise functions to each CRM, i.e. to determine the spatiotemporal domains or cell-types where it drives expression. A popular approach to this task is to model the typical k-mer composition of a set of CRMs known to drive a common expression pattern, and assign that pattern to other CRMs exhibiting a similar k-mer composition. This approach does not rely on prior knowledge of transcription factors relevant to the CRM or their binding motifs, and is thus more widely applicable than motif-based methods for predicting CRM activity, but is also prone to false positive predictions. We present a novel strategy to improve the above-mentioned approach: to predict if a CRM drives a specific gene expression pattern, assess not only how similar the CRM is to other CRMs with similar activity but also to CRMs with distinct activities. We use a state-of-the-art statistical method to quantify a CRM's sequence similarity to many different training sets of CRMs, and employ a classification algorithm to integrate these similarity scores into a single prediction of the CRM's activity. This strategy is shown to significantly improve CRM activity prediction over current approaches. Our implementation of the new method, called IMMBoost, is freely available as source code at https://github.com/weiyangedward/IMMBoost. Contact: sinhas@illinois.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
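The k-mer composition idea the abstract builds on can be sketched with a plain cosine similarity between k-mer frequency profiles. IMMBoost itself scores sequences with statistical sequence models and integrates the scores with a boosted classifier; this is only the underlying intuition, with toy sequences:

```python
from collections import Counter
from math import sqrt

def kmer_profile(seq, k=3):
    """Normalised k-mer frequency vector of a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two sparse k-mer profiles."""
    dot = sum(p[k] * q[k] for k in p if k in q)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm

# Score a candidate CRM against members of two toy "training sets":
crm = "ACGTACGTGG"
training_set = ["ACGTACGTAC", "TTTTGGGGCC"]
scores = [cosine_similarity(kmer_profile(crm), kmer_profile(s)) for s in training_set]
```

The paper's strategy corresponds to computing such scores against *many* training sets, of both matching and distinct activities, and letting a classifier weigh them, rather than thresholding a single similarity.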

  20. Research on goal-setting method of rock drivage based on the balance among period, cost and quality

    Institute of Scientific and Technical Information of China (English)

    SHAN Ren-liang; CAI Wei-ling; WANG Yu-bao; LI Dong-gang; CHEN Xiang

    2012-01-01

    On the basis of an analysis of the disadvantages of the original goal-setting for rock drivage, this paper defines the "life cycle quality". Using project management theory and the Cobb-Douglas function, a "quality-cost and quality-price curve model" and a "total cost-period prediction model" are built. A goal-setting method balancing the quality, cost and period of rock drivage (hereinafter the "three goals balance method") is then constructed by deriving the "life cycle cost" from the "life cycle quality" through the quality-cost and quality-price curve model, and determining the period from the "life cycle cost" through the total cost-period prediction model. "Value contribution", the value contributed to a mine by rock drivage, was identified in the process of constructing the quality-cost and quality-price curve model. In an industrial test of the research results in coal mine A, staff footage efficiency improved by 24.24%, the period shortened by 14.3%, the "life cycle cost" dropped by 2.09%, the "life cycle quality price" improved by 3.29%, and value contribution increased by 25.3%. The results show that the new goal-setting method, based on coal mine profit maximization, can ensure the construction period while realizing cost and quality objectives and optimizing the balance among them; rewarding excavation teams by "value contribution" combines organizational and personal goals and significantly raises employees' work efficiency.
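The Cobb-Douglas link between quality and its cost and period inputs can be sketched generically; the functional form below is the standard Cobb-Douglas production function, and the scale and exponents are invented for illustration, not the paper's fitted values:

```python
def cobb_douglas_quality(cost, period, A=1.0, alpha=0.6, beta=0.4):
    """Toy Cobb-Douglas relation: quality produced from cost and period
    inputs, Q = A * C**alpha * T**beta (parameters are illustrative)."""
    return A * cost ** alpha * period ** beta

def cost_for_quality(quality, period, A=1.0, alpha=0.6, beta=0.4):
    """Invert the relation for the cost needed to reach a target quality
    at a given period -- the 'life cycle quality -> life cycle cost' step."""
    return (quality / (A * period ** beta)) ** (1 / alpha)

# Round trip: the cost that produced a given quality is recovered.
q = cobb_douglas_quality(100.0, 12.0)
cost = cost_for_quality(q, 12.0)
```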

  1. Life expectancy and education

    DEFF Research Database (Denmark)

    Hansen, Casper Worm; Strulik, Holger

    2017-01-01

    …we find that US states with higher mortality rates from cardiovascular disease prior to the 1970s experienced greater increases in adult life expectancy and higher education enrollment. Our estimates suggest that a one-standard-deviation higher treatment intensity is associated with an increase in adult life expectancy of 0.37 years and 0.07–0.15 more years of higher education.

  2. Expected Classification Accuracy

    Directory of Open Access Journals (Sweden)

    Lawrence M. Rudner

    2005-08-01

    Full Text Available Every time we make a classification based on a test score, we should expect some number of misclassifications. Some examinees whose true ability is within a score range will have observed scores outside of that range. A procedure for providing a classification table of true and expected scores is developed for polytomously scored items under item response theory and applied to state assessment data. A simplified procedure for estimating the table entries is also presented.
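The table of true versus observed classifications can be approximated by simulation under a simple measurement model. The paper derives the table analytically under item response theory; the normal ability model, cut score and standard error of measurement below are illustrative assumptions:

```python
import random

def expected_classification_table(n=100000, cut=0.0, sem=0.5, seed=1):
    """Monte Carlo table of true vs observed pass/fail classifications.

    True ability is standard normal; the observed score is the true score
    plus normal error with standard error of measurement `sem`."""
    rng = random.Random(seed)
    table = {("pass", "pass"): 0, ("pass", "fail"): 0,
             ("fail", "pass"): 0, ("fail", "fail"): 0}
    for _ in range(n):
        true = rng.gauss(0, 1)
        observed = true + rng.gauss(0, sem)
        key = ("pass" if true >= cut else "fail",
               "pass" if observed >= cut else "fail")
        table[key] += 1
    return table

table = expected_classification_table()
# Expected classification accuracy: share of examinees classified consistently.
accuracy = (table[("pass", "pass")] + table[("fail", "fail")]) / 100000
```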

  3. Expected utility without utility

    OpenAIRE

    Castagnoli, E.; Licalzi, M.

    1996-01-01

    This paper advances an interpretation of Von Neumann–Morgenstern’s expected utility model for preferences over lotteries which does not require the notion of a cardinal utility over prizes and can be phrased entirely in the language of probability. According to it, the expected utility of a lottery can be read as the probability that this lottery outperforms another given independent lottery. The implications of this interpretation for some topics and models in decision theory are considered....
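The interpretation the paper advances, reading the expected utility of a lottery as the probability that it outperforms another independent lottery, can be sketched by Monte Carlo (the lotteries and prizes below are invented):

```python
import random

def prob_outperforms(lottery_a, lottery_b, trials=100000, seed=7):
    """Monte Carlo estimate of P(prize of A > prize of B) for two
    independent lotteries, each given as a [(prize, probability), ...] list."""
    rng = random.Random(seed)

    def draw(lottery):
        r, cum = rng.random(), 0.0
        for prize, p in lottery:
            cum += p
            if r < cum:
                return prize
        return lottery[-1][0]  # guard against rounding in the probabilities

    wins = sum(draw(lottery_a) > draw(lottery_b) for _ in range(trials))
    return wins / trials

# A pays 10 with probability 0.5 (else 0); B pays 4 for sure.
# A outperforms B exactly when A pays out, so the probability is 0.5.
p = prob_outperforms([(10, 0.5), (0, 0.5)], [(4, 1.0)])
```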

  4. Expected years ever married

    Directory of Open Access Journals (Sweden)

    Ryohei Mogi

    2018-04-01

    Full Text Available Background: In the second half of the 20th century, remarkable marriage changes were seen: a great proportion of the population never marrying, a high average age at first marriage, and large variance in first marriage timing. Although it is theoretically possible to separate these three elements, disentangling them analytically remains a challenge. Objective: This study's goal is to answer the following questions: Which of the three effects, nonmarriage, delayed marriage, or expansion, has the most impact on nuptiality changes? How does the most influential factor differ across time periods, birth cohorts, and countries? Methods: To quantify nuptiality changes over time, we define the measure 'expected years ever married' (EYEM). We illustrate the use of EYEM, looking at time trends in 15 countries (six countries for cohort analysis), and decompose these trends into three components: scale (the changes in the proportion of never married, i.e., nonmarriage), location (the changes in timing of first marriage, i.e., delayed marriage), and variance (the changes in the standard deviation of first marriage age, i.e., expansion). We used population counts by sex, age, and marital status from national statistical offices and the United Nations database. Results: Results show that delayed marriage is the most influential factor in period EYEM's changes, while nonmarriage has recently begun to contribute to the change in North and West Europe and Canada. Period and cohort analyses complement each other. Conclusions: This study introduces a new index of nuptiality and decomposes its change into the contribution of three components: scale, location, and variance. The decomposition steps presented here offer an open possibility for more elaborate parametric marriage models.
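EYEM works like a life-table expectancy: summing the proportion ever married over single-year ages gives the expected number of years spent in the ever-married state over that age range. A minimal sketch, with toy proportions and mortality ignored (the paper's measure is more refined):

```python
def expected_years_ever_married(prop_ever_married_by_age):
    """Sum over single-year ages of the proportion ever married: the
    expected years an average person spends ever-married over the age
    range covered (a simplification that ignores mortality)."""
    return sum(prop_ever_married_by_age)

# Hypothetical proportions ever married at ages 15-19:
eyem = expected_years_ever_married([0.00, 0.02, 0.05, 0.10, 0.18])
```

A rise in nonmarriage lowers the schedule's ceiling (scale), delayed marriage shifts it to older ages (location), and expansion flattens its rise (variance); each changes EYEM in a distinct way, which is what the paper's decomposition separates.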

  5. Measuring Risk When Expected Losses Are Unbounded

    Directory of Open Access Journals (Sweden)

    Alejandro Balbás

    2014-09-01

    Full Text Available This paper proposes a new method to introduce coherent risk measures for risks with infinite expectation, such as those characterized by some Pareto distributions. Extensions of the conditional value at risk, the weighted conditional value at risk and other examples are given. Actuarial applications are analyzed, such as extensions of the expected value premium principle when expected losses are unbounded.
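As a baseline for the measures being extended, the standard empirical conditional value at risk is the average loss in the worst (1 − α) tail of the sample. This is the classical estimator, not the paper's unbounded-expectation extension; the loss data are hypothetical:

```python
def empirical_cvar(losses, alpha=0.95):
    """Average of the losses at or beyond the empirical alpha-quantile."""
    s = sorted(losses)
    k = int(alpha * len(s))        # index of the empirical alpha-quantile
    tail = s[k:] or s[-1:]         # guard against an empty tail
    return sum(tail) / len(tail)

# Illustrative loss sample with a heavy upper tail.
losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 100]
cvar_90 = empirical_cvar(losses, alpha=0.9)   # mean of the worst 10%
```

For Pareto-type losses with infinite expectation this tail average diverges as the sample grows, which is precisely the situation the paper's extended measures are designed to handle.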

  6. Validity of verbal autopsy method to determine causes of death among adults in the urban setting of Ethiopia

    Science.gov (United States)

    2012-01-01

    Background Verbal autopsy has been widely used to estimate causes of death in settings with inadequate vital registries, but little is known about its validity. This analysis was part of the Addis Ababa Mortality Surveillance Program and examined the validity of verbal autopsy for determining causes of death, compared with hospital medical records, among adults in the urban setting of Ethiopia. Methods This validation study compared the verbal autopsy final diagnosis with the hospital diagnosis, taken as a “gold standard”. In public and private hospitals of Addis Ababa, 20,152 adult deaths (15 years and above) were recorded between 2007 and 2010. Within the same period, a verbal autopsy was conducted for 4,776 adult deaths, of which 1,356 had occurred in one of the Addis Ababa hospitals. The verbal autopsy and hospital data sets were then merged using the following variables: full name of the deceased, sex, address, age, and place and date of death. We calculated sensitivity, specificity, and positive predictive values with 95% confidence intervals. Results After merging, a total of 335 adult deaths were captured. For communicable diseases, the sensitivity, specificity, and positive predictive value of the verbal autopsy diagnosis were 79%, 78%, and 68%, respectively. For non-communicable diseases, sensitivity of the verbal autopsy diagnoses was 69%, specificity 78%, and positive predictive value 79%. Regarding injury, sensitivity was 70%, specificity 98%, and positive predictive value 83%. Higher sensitivity was achieved for HIV/AIDS and tuberculosis, but with lower specificity and relatively more false positives. Conclusion These findings may indicate the potential of verbal autopsy to provide cost-effective information to guide policy on the double burden of communicable and non-communicable diseases among adults in Ethiopia. Thus, a well-structured verbal autopsy method, followed by qualified physician review, could be capable of providing reasonable cause-of-death information.
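The three validation statistics follow directly from a 2×2 cross-tabulation of verbal autopsy (VA) diagnoses against the hospital gold standard. The counts below are hypothetical, chosen so the metrics land near the abstract's communicable-disease figures:

```python
def va_validity(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value for one
    cause-of-death category: VA diagnosis vs. hospital gold standard."""
    sensitivity = tp / (tp + fn)  # VA detects the cause when the hospital recorded it
    specificity = tn / (tn + fp)  # VA rules out the cause when the hospital did
    ppv = tp / (tp + fp)          # proportion of VA positives confirmed by hospital
    return sensitivity, specificity, ppv

# Hypothetical counts for a communicable-disease category (not the study data).
sens, spec, ppv = va_validity(tp=79, fp=37, fn=21, tn=131)
```

The trade-off noted for HIV/AIDS and tuberculosis is visible in this arithmetic: raising `tp` at the cost of more `fp` increases sensitivity while depressing specificity and PPV.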

  8. Development of a core outcome set for orthodontic trials using a mixed-methods approach: protocol for a multicentre study.

    Science.gov (United States)

    Tsichlaki, Aliki; O'Brien, Kevin; Johal, Ama; Marshman, Zoe; Benson, Philip; Colonio Salazar, Fiorella B; Fleming, Padhraig S

    2017-08-04

    Orthodontic treatment is commonly undertaken in young people, with over 40% of children in the UK needing treatment and currently one third having treatment, at a cost to the National Health Service in England and Wales of £273 million each year. Most current research about orthodontic care does not consider what patients truly feel about, or want, from treatment, and a diverse range of outcomes is being used with little consistency between studies. This study aims to address these problems, using established methodology to develop a core outcome set for use in future clinical trials of orthodontic interventions in children and young people. This is a mixed-methods study incorporating four distinct stages. The first stage will include a scoping review of the scientific literature to identify primary and secondary outcome measures that have been used in previous orthodontic clinical trials. The second stage will involve qualitative interviews and focus groups with orthodontic patients aged 10 to 16 years to determine what outcomes are important to them. The outcomes elicited from these two stages will inform the third stage of the study in which a long-list of outcomes will be ranked in terms of importance using electronic Delphi surveys involving clinicians and patients. The final stage of the study will involve face-to-face consensus meetings with all stakeholders to discuss and agree on the outcome measures that should be included in the final core outcome set. This research will help to inform patients, parents, clinicians and commissioners about outcomes that are important to young people undergoing orthodontic treatment. Adoption of the core outcome set in future clinical trials of orthodontic treatment will make it easier for results to be compared, contrasted and combined. This should translate into improved decision-making by all stakeholders involved. The project has been registered on the Core Outcome Measures in Effectiveness Trials (COMET) website.

  9. Expectations, Bond Yields and Monetary Policy

    DEFF Research Database (Denmark)

    Chun, Albert Lee

    2011-01-01

    Expectations about inflation, output growth, and the anticipated path of monetary policy actions contain important information for explaining movements in bond yields. Estimates from a forward-looking monetary policy rule suggest that the central bank exhibits a preemptive response to inflationary expectations. Models of this type may provide traders and policymakers with a new set of tools for formally assessing the reaction of bond yields to shifts in market expectations.

  10. Understanding low uptake of contraceptives in resource-limited settings: a mixed-methods study in rural Burundi.

    Science.gov (United States)

    Ndayizigiye, M; Fawzi, M C Smith; Lively, C Thompson; Ware, N C

    2017-03-15

    Family planning can reduce deaths, improve health, and facilitate economic development in resource-limited settings. Yet, modern contraceptive methods are often underused. This mixed-methods study, conducted in rural Burundi, sought to explain low uptake of contraceptives by identifying utilization barriers. Results may inform development of family planning interventions in Burundi and elsewhere. We investigated uptake of contraceptives among women of reproductive age in two rural districts of Burundi, using an explanatory sequential, mixed-methods research design. We first assessed availability and utilization rates of modern contraceptives through a facility-based survey in 39 health clinics. Barriers to uptake of contraceptives were then explored through qualitative interviews (N = 10) and focus groups (N = 7). Contraceptives were generally available in the 39 clinics studied, yet uptake of family planning averaged only 2.96%. Greater uptake was positively associated with the number of health professionals engaged and trained in family planning service provision, and with the number of different types of contraceptives available. Four uptake barriers were identified: (1) lack of providers to administer contraception, (2) lack of fit between available and preferred contraceptive methods, (3) a climate of fear surrounding contraceptive use, and (4) provider refusal to offer family planning services. Where resources are scarce, availability of modern contraceptives alone will likely not ensure uptake. Interventions addressing multiple uptake barriers simultaneously have the greatest chance of success. In rural Burundi, examples are community distribution of contraceptive methods, public information campaigns, improved training for health professionals and community health workers, and strengthening of the health infrastructure.

  11. A decision making method based on interval type-2 fuzzy sets: An approach for ambulance location preference

    Directory of Open Access Journals (Sweden)

    Lazim Abdullah

    2018-01-01

    Full Text Available Selecting the best location to deploy an ambulance is one of the important decisions to be made in improving emergency medical services. The selection requires both quantitative and qualitative evaluation. The fuzzy set based approach is one of the well-known theories that help decision makers handle fuzziness, uncertainty in decision making, and vagueness of information. This paper proposes a new decision making method, Interval Type-2 Fuzzy Simple Additive Weighting (IT2 FSAW), to deal with uncertainty and vagueness. The new IT2 FSAW is applied to establish a preference in ambulance location. The decision making framework defines four criteria and five alternatives of ambulance location preference. Four experts attached to a Malaysian government hospital and a university medical center were interviewed to provide linguistic evaluations prior to analysis with the new IT2 FSAW. Implementation of the proposed method in the case of ambulance location preference suggests that the ‘road network’ is the best alternative for ambulance location. The results indicate that the proposed method offers a consensus solution for handling the vague and qualitative criteria of ambulance location preference.
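The crisp (type-1) version of simple additive weighting that underlies IT2 FSAW ranks alternatives by a weighted sum of normalized criterion ratings; IT2 FSAW replaces the crisp numbers with interval type-2 fuzzy sets. The weights, ratings, and alternative names below are hypothetical, not the experts' actual evaluations:

```python
# Hypothetical weights for four criteria and crisp ratings (0-10) for two
# candidate ambulance locations.
weights = [0.4, 0.3, 0.2, 0.1]
ratings = {
    "road network":  [9, 8, 7, 8],
    "hospital gate": [7, 9, 6, 7],
}

def saw_score(r, w, scale=10.0):
    # Benefit-criterion normalization: rating divided by the maximum possible rating.
    return sum(wi * ri / scale for wi, ri in zip(w, r))

best = max(ratings, key=lambda a: saw_score(ratings[a], weights))
```

With these illustrative numbers the weighted scores are 0.82 and 0.74, so `best` is `"road network"`, mirroring the form (though not the data) of the paper's conclusion.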

  12. A comparative study of reinitialization approaches of the level set method for simulating free-surface flows

    Energy Technology Data Exchange (ETDEWEB)

    Sufyan, Muhammad; Ngo, Long Cu; Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)

    2016-04-15

    Unstructured grids were used to compare the performance of a direct reinitialization scheme with those of two reinitialization approaches based on the solution of a hyperbolic partial differential equation (PDE). Moving-interface problems were solved in the context of a finite element method. A least-squares weighted residual method was used to discretize the advection equation of the level set method. The benchmark problems of the rotating Zalesak's disk, a time-reversed single vortex, and two-dimensional sloshing were examined. Numerical results showed that the direct reinitialization scheme performed better than the PDE-based reinitialization approaches in terms of mass conservation, dissipation and dispersion error, and computational time. In the case of sloshing, numerical results were found to be in good agreement with existing experimental data. The direct reinitialization approach consumed considerably less CPU time than the PDE-based simulations for 20 time periods of sloshing. This approach was stable, accurate, and efficient for all the problems considered in this study.
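The distinction between the two strategies can be illustrated in one dimension: a direct scheme simply rebuilds the level set function as a signed distance to its interpolated zero crossings, rather than marching a hyperbolic PDE to steady state. This is a didactic 1D sketch with illustrative grid values, not the paper's unstructured finite element implementation:

```python
def direct_reinitialize(phi, x):
    """Rebuild phi as a signed distance to its linearly interpolated zero
    crossings (1D sketch of a direct reinitialization scheme).
    Assumes phi changes sign at least once on the grid."""
    zeros = []
    for i in range(len(phi) - 1):
        if phi[i] * phi[i + 1] < 0:           # sign change: interface between nodes
            t = phi[i] / (phi[i] - phi[i + 1])
            zeros.append(x[i] + t * (x[i + 1] - x[i]))
    return [min(abs(xi - z) for z in zeros) * (1.0 if p >= 0 else -1.0)
            for xi, p in zip(x, phi)]

# A distorted level set function whose zero crossing sits at x = 1.5.
x = [0.0, 1.0, 2.0, 3.0]
phi = [-5.0, -0.5, 0.5, 9.0]
redistanced = direct_reinitialize(phi, x)  # [-1.5, -0.5, 0.5, 1.5]
```

The zero crossing (and hence the interface, and the enclosed mass) is preserved exactly, which is one reason direct schemes can outperform PDE-based ones on mass conservation.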

  13. A Bayesian method to mine spatial data sets to evaluate the vulnerability of human beings to catastrophic risk.

    Science.gov (United States)

    Li, Lianfa; Wang, Jinfeng; Leung, Hareton; Zhao, Sisi

    2012-06-01

    Vulnerability of human beings exposed to a catastrophic disaster is affected by multiple factors that include hazard intensity, environment, and individual characteristics. The traditional approach to vulnerability assessment, based on the aggregate-area method and unsupervised learning, cannot incorporate spatial information; thus, vulnerability can be only roughly assessed. In this article, we propose Bayesian network (BN) and spatial analysis techniques to mine spatial data sets to evaluate the vulnerability of human beings. In our approach, spatial analysis is leveraged to preprocess the data; for example, kernel density analysis (KDA) and accumulative road cost surface modeling (ARCSM) are employed to quantify the influence of geofeatures on vulnerability and relate such influence to spatial distance. The knowledge- and data-based BN provides a consistent platform to integrate a variety of factors, including those extracted by KDA and ARCSM, to model vulnerability uncertainty. We also consider the model's uncertainty and use the Bayesian model average and Occam's Window to average the multiple models obtained by our approach, for robust prediction of risk and vulnerability. We compare our approach with other probabilistic models in a case study of seismic risk and conclude that our approach is a good means of mining spatial data sets for evaluating vulnerability. © 2012 Society for Risk Analysis.
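Of the spatial preprocessing steps mentioned, kernel density analysis is the simplest to illustrate: it converts discrete geofeature locations into a smooth density surface that a BN node can consume. The 1D Gaussian kernel, bandwidth, and coordinates below are illustrative assumptions, not the study's configuration:

```python
import math

def gaussian_kde(points, x, bandwidth=1.0):
    """1D Gaussian kernel density estimate at x (sketch of the KDA step)."""
    h, n = bandwidth, len(points)
    norm = n * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in points) / norm

# Hypothetical geofeature locations along a transect (km).
features = [1.0, 1.5, 2.0, 8.0]
near_cluster = gaussian_kde(features, 1.5)   # inside the cluster of features
far_point = gaussian_kde(features, 8.0)      # near an isolated feature
```

The density is higher inside the cluster than at the isolated feature, which is exactly the kind of spatial signal the aggregate-area approach discards.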

  14. Do Students Expect Compensation for Wage Risk?

    Science.gov (United States)

    Schweri, Juerg; Hartog, Joop; Wolter, Stefan C.

    2011-01-01

    We use a unique data set about the wage distribution that Swiss students expect for themselves ex ante, deriving parametric and non-parametric measures to capture expected wage risk. These wage risk measures are unfettered by the heterogeneity which handicapped the use of actual market wage dispersion as a risk measure in earlier studies. Students in…

  15. A nonparametric statistical method for determination of a confidence interval for the mean of a set of results obtained in a laboratory intercomparison

    International Nuclear Information System (INIS)

    Veglia, A.

    1981-08-01

    In cases where sets of data are obviously not normally distributed, the application of a nonparametric method for the estimation of a confidence interval for the mean seems more suitable than other methods, because such a method requires few assumptions about the population of data. A two-step statistical method is proposed which can be applied to any set of analytical results: elimination of outliers by a nonparametric method based on Tchebycheff's inequality, and determination of a confidence interval for the mean by a nonparametric method based on the binomial distribution. The method is appropriate only for samples of size n >= 10.
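The binomial-based second step has a well-known analogue for the median: a distribution-free interval built from order statistics, whose coverage comes from the Binomial(n, 1/2) distribution. This sketch targets the median rather than the mean treated in the abstract, so it illustrates the principle rather than reproducing the report's procedure:

```python
import math
from itertools import accumulate

def median_ci(data, conf=0.95):
    """Distribution-free confidence interval (x_(j), x_(n-j+1)) for the median,
    with coverage P(j <= B <= n-j) for B ~ Binomial(n, 1/2)."""
    s, n = sorted(data), len(data)
    # Cumulative Binomial(n, 0.5) probabilities, cdf[i] = P(B <= i).
    cdf = list(accumulate(math.comb(n, i) * 0.5 ** n for i in range(n + 1)))
    j = 1
    # Narrow the interval while coverage stays at or above the target.
    while j + 1 <= n - j and cdf[n - (j + 1)] - cdf[j] >= conf:
        j += 1
    return s[j - 1], s[n - j]

interval = median_ci(list(range(1, 11)))  # (2, 9): ~97.9% coverage for n = 10
```

Like the report's method, the interval uses only ranks, so it needs no distributional assumption beyond independent sampling, and it is only meaningful for samples of moderate size (here too, roughly n >= 10 for a useful 95% interval).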

  16. An approach for maximizing the smallest eigenfrequency of structure vibration based on piecewise constant level set method

    Science.gov (United States)

    Zhang, Zhengfang; Chen, Weifeng

    2018-05-01

    Maximization of the smallest eigenfrequency of the linearized elasticity system with an area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to represent the two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to the PCLS function takes a nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of the original material region until the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.

  17. TAGUCHI METHOD FOR THREE-STAGE ASSEMBLY FLOW SHOP SCHEDULING PROBLEM WITH BLOCKING AND SEQUENCE-DEPENDENT SET UP TIMES

    Directory of Open Access Journals (Sweden)

    AREF MALEKI-DARONKOLAEI

    2013-10-01

    Full Text Available This article considers a three-stage assembly flowshop scheduling problem minimizing the weighted sum of mean completion time and makespan, with sequence-dependent setup times at the first stage and blocking times between stages. To tackle this NP-hard problem, two meta-heuristic algorithms are presented. The novelty of our approach is to develop a variable neighborhood search algorithm (VNS) alongside the well-known simulated annealing (SA) for the problem. Furthermore, to enhance the performance of the SA, its parameters are optimized using the Taguchi method, whereas the VNS has only a single parameter, which is set without Taguchi tuning. The computational results show that the proposed VNS is better than SA in mean and standard deviation for all problem sizes, whereas SA outperforms VNS in CPU time.
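A generic swap-neighborhood simulated annealing of the kind used as the second meta-heuristic can be sketched as follows. The toy cost function, cooling schedule, and parameter values are placeholders, not the Taguchi-tuned settings of the paper:

```python
import math
import random

def anneal(cost, start, t0=10.0, cooling=0.995, iters=3000, seed=1):
    """Simulated annealing over job permutations with swap moves."""
    rng = random.Random(seed)
    cur, best = start[:], start[:]
    for step in range(iters):
        t = max(t0 * cooling ** step, 1e-9)
        cand = cur[:]
        i, j = rng.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]            # swap two jobs
        delta = cost(cand) - cost(cur)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            cur = cand                                 # accept (possibly uphill)
        if cost(cur) < cost(best):
            best = cur[:]
    return best

# Toy surrogate objective: total displacement of jobs from an ideal sequence
# (stands in for the weighted completion-time/makespan objective).
cost = lambda p: sum(abs(job - pos) for pos, job in enumerate(p))
start = [5, 3, 1, 0, 4, 2]
best = anneal(cost, start)
```

The acceptance rule (take any improving swap, take a worsening swap with probability exp(-delta/t)) is what the Taguchi design tunes in the paper, via `t0`, the cooling rate, and the iteration budget.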

  18. Computational Fluid Dynamics Analysis of Cold Plasma Plume Mixing with Blood Using Level Set Method Coupled with Heat Transfer

    Directory of Open Access Journals (Sweden)

    Mehrdad Shahmohammadi Beni

    2017-06-01

    Full Text Available Cold plasmas have been proposed for the treatment of leukemia. In the present work, conceptual designs of mixing chambers that increase the contact between the two fluids (plasma and blood) through the addition of obstacles within rectangular-block-shaped chambers are proposed, and the dynamic mixing between the plasma and blood is studied using the level set method coupled with heat transfer. Enhancement of mixing between blood and plasma in the presence of obstacles was demonstrated. Continuous tracking of fluid mixing with determination of temperature distributions was enabled by the present model, which would be a useful tool for the future development of cold plasma devices for the treatment of blood-related diseases such as leukemia.

  19. Expectations and Experiences of Information Literacy Instruction

    Directory of Open Access Journals (Sweden)

    Saga Pohjola-Ahlin

    2016-11-01

    Full Text Available In May 2016, 48 third-semester undergraduate students enrolled in the physiotherapy program at Karolinska Institutet in Sweden were given three sets of questionnaires: before the information literacy instruction (ILI) started, at the end of the first session, and a week after, at the end of the second and last session. The aim of this small-scale pilot study was to shed some light on students’ motivation to attend ILI, how they value the sessions afterwards, and how they assess their learning outcome. Furthermore, it was an attempt to do a “students’ user experience study” in a pedagogical setting, with the intention of evaluating and improving teaching in ILI to meet student expectations. The average response rate for the three questionnaires was 92%. The results show that students’ expectations were similar to the actual content of ILI, and that the students were satisfied with their own learning outcome. Both motivation and the sense of relevance received higher scores after students attended ILI. Motivation rose from 7.4 to 8.12 out of 10. This is positive because a high level of motivation often improves the learning outcome (Schunk, 2012). When asked which areas most needed improvement in order to further enhance their learning outcome, the most common responses were “the pedagogy” and “my own achievement”. It would be interesting to start collaborating with a group of students in order to explore new methods and learning activities.

  20. An optimized process flow for rapid segmentation of cortical bones of the craniofacial skeleton using the level-set method.

    Science.gov (United States)

    Szwedowski, T D; Fialkov, J; Pakdel, A; Whyne, C M

    2013-01-01

    Accurate representation of skeletal structures is essential for quantifying structural integrity, for developing accurate models, for improving patient-specific implant design, and in image-guided surgery applications. The complex morphology of the thin cortical structures of the craniofacial skeleton (CFS) represents a significant challenge with respect to accurate bony segmentation. This technical study presents optimized processing steps to segment the three-dimensional (3D) geometry of thin cortical bone structures from CT images. In this procedure, anisotropic filtering and a connected-components scheme were utilized to isolate and enhance the internal boundaries between craniofacial cortical and trabecular bone. Subsequently, the shell-like nature of cortical bone was exploited using boundary-tracking level-set methods with optimized parameters determined from a large-scale sensitivity analysis. The process was applied to clinical CT images acquired from two cadaveric CFSs. The accuracy of the automated segmentations was determined based on their volumetric concurrencies with visually optimized manual segmentations, without statistical appraisal. The full CFSs demonstrated volumetric concurrencies of 0.904 and 0.719; accuracy increased to concurrencies of 0.936 and 0.846 when considering only the maxillary region. The highly automated approach presented here is able to segment the cortical shell and trabecular boundaries of the CFS in clinical CT images. The results indicate that initial scan resolution and cortical-trabecular bone contrast may impact performance. Future application of these steps to larger data sets will enable the determination of the method's sensitivity to differences in image quality and CFS morphology.
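The reported accuracy metric, volumetric concurrency between an automated and a manual segmentation, is an overlap ratio of voxel sets. The abstract does not state whether a Jaccard- or Dice-style overlap was used, so the Jaccard form and the toy voxel sets below are assumptions for illustration:

```python
def volumetric_concurrency(auto_voxels, manual_voxels):
    """Jaccard overlap of two voxel index sets: |A ∩ M| / |A ∪ M|."""
    a, m = set(auto_voxels), set(manual_voxels)
    return len(a & m) / len(a | m)

# Toy (i, j, k) voxel index sets for an automated and a manual segmentation.
auto = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)}
manual = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)}
overlap = volumetric_concurrency(auto, manual)  # 3 shared / 5 total = 0.6
```

A value of 1.0 would mean voxel-for-voxel agreement with the manual reference, which frames the study's 0.7-0.94 range.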

  1. Reconstruction of gene regulatory modules from RNA silencing of IFN-α modulators: experimental set-up and inference method.

    Science.gov (United States)

    Grassi, Angela; Di Camillo, Barbara; Ciccarese, Francesco; Agnusdei, Valentina; Zanovello, Paola; Amadori, Alberto; Finesso, Lorenzo; Indraccolo, Stefano; Toffolo, Gianna Maria

    2016-03-12

    Inference of gene regulation from expression data may help to unravel regulatory mechanisms involved in complex diseases or in the action of specific drugs. A challenging task for many researchers working in the field of systems biology is to build up an experiment with a limited budget and produce a dataset suitable for reconstructing putative regulatory modules worthy of biological validation. Here, we focus on small-scale gene expression screens and we introduce a novel experimental set-up and a customized method of analysis to make inferences on regulatory modules starting from genetic perturbation data, e.g. knockdown and overexpression data. To illustrate the utility of our strategy, it was applied to produce and analyze a dataset of quantitative real-time RT-PCR data, in which the interferon-α (IFN-α) transcriptional response in endothelial cells is investigated by RNA silencing of two candidate IFN-α modulators, STAT1 and IFIH1. A putative regulatory module was reconstructed by our method, revealing an intriguing feed-forward loop, in which STAT1 regulates IFIH1 and they both negatively regulate IFNAR1. STAT1 regulation of IFNAR1 was the object of experimental validation at the protein level. A detailed description of the experimental set-up and of the analysis procedure is reported, with the intent of inspiring other scientists who want to realize similar experiments to reconstruct gene regulatory modules starting from perturbations of possible regulators. Application of our approach to the study of IFN-α transcriptional response modulators in endothelial cells has led to many interesting novel findings and new biological hypotheses worthy of validation.

  2. Interactions between lean management and the psychosocial work environment in a hospital setting - a multi-method study.

    Science.gov (United States)

    Ulhassan, Waqar; von Thiele Schwarz, Ulrica; Thor, Johan; Westerlund, Hugo

    2014-10-22

    As health care struggles to meet increasing demands with limited resources, Lean has become a popular management approach. It has mainly been studied in relation to health care performance, and the empirical evidence as to how Lean affects the psychosocial work environment has been contradictory. This study examines the interaction between Lean and the psychosocial work environment using a comprehensive model that takes Lean implementation information, as well as Lean theory and the particular context, into consideration. The psychosocial work environment was measured twice with the Copenhagen Psychosocial Questionnaire (COPSOQ) employee survey during Lean implementations, in May-June 2010 (T1) (n = 129) and November-December 2011 (T2) (n = 131), at three units (an Emergency Department (ED), Ward-I and Ward-II). Information based on qualitative data analysis of the Lean implementations and context from a previous paper was used to predict expected change patterns in the psychosocial work environment from T1 to T2, which were subsequently compared with the COPSOQ data through linear regression analysis. Between T1 and T2, qualitative information showed a well-organized and steady Lean implementation on Ward-I with active employee participation, a partial Lean implementation on Ward-II with employees not seeing a clear need for such an intervention, and deterioration in already implemented Lean activities at the ED, due to the declining interest of top management. Quantitative data analysis showed a significant relation between the expected and actual results regarding changes in the psychosocial work environment. Ward-I showed major improvements, especially related to job control and social support; the ED showed a major decline, with some exceptions; and Ward-II also showed improvements similar to Ward-I. The results suggest that Lean may have a positive impact on the psychosocial work environment given that it is properly implemented.

  3. Accurate prediction of complex free surface flow around a high speed craft using a single-phase level set method

    Science.gov (United States)

    Broglia, Riccardo; Durante, Danilo

    2017-11-01

    This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high Reynolds number free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution, and dynamic boundary conditions are enforced directly on the interface. This approach allows accurate prediction of the evolution of the free surface even in the presence of violent breaking wave phenomena, maintaining a sharp interface without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves, and plunging events. The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for the purposes of validation.

  4. The TRICLOBS Dynamic Multi-Band Image Data Set for the Development and Evaluation of Image Fusion Methods.

    Directory of Open Access Journals (Sweden)

    Alexander Toet

    Full Text Available The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area there is still only a small number of static multiband test images available for the development and evaluation of new image fusion and enhancement methods; moreover, dynamic multiband imagery is also currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set containing sixteen registered visual (0.4-0.7 μm), near-infrared (NIR, 0.7-1.0 μm) and long-wave infrared (LWIR, 8-14 μm) motion sequences. They represent different military and civilian surveillance scenarios registered in three different scenes. Scenes include (military and civilian) people who are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included in the scenes. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the visual, NIR and LWIR parts of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed RGB (false color) frames. The TRICLOBS data set enables the development and evaluation of (both static and dynamic) image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes. The color statistics derived from these photographs can serve as a reference for remapping the false-color imagery to a natural color appearance.
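The false-color construction is simply a per-pixel stacking of the three coregistered sensor frames into the R, G, and B channels. The channel assignment below (visual to R, NIR to G, LWIR to B) and the toy frames are assumptions; the abstract states only that each sensor feeds one RGB channel:

```python
# Toy 2x2 coregistered sensor frames, values normalized to [0, 1].
visual = [[0.2, 0.8], [0.5, 0.1]]
nir    = [[0.6, 0.3], [0.9, 0.4]]
lwir   = [[0.1, 0.7], [0.2, 0.6]]

# Stack the three bands per pixel into an (R, G, B) false-color frame.
false_color = [[(visual[r][c], nir[r][c], lwir[r][c])
                for c in range(len(visual[0]))]
               for r in range(len(visual))]
```

A color remapping procedure of the kind the data set supports would then transform these raw (R, G, B) triples toward the statistics of the daytime color photographs.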

  5. Sex and life expectancy.

    Science.gov (United States)

    Seifarth, Joshua E; McGowan, Cheri L; Milne, Kevin J

    2012-12-01

    A sexual dimorphism in human life expectancy has existed in almost every country for as long as records have been kept. Although human life expectancy has increased each year, females still live longer, on average, than males. Undoubtedly, the reasons for the sex gap in life expectancy are multifaceted, and it has been discussed from both sociological and biological perspectives. However, even if biological factors make up only a small percentage of the determinants of the sex difference in this phenomenon, parity in average life expectancy should not be anticipated. The aim of this review is to highlight biological mechanisms that may underlie the sexual dimorphism in life expectancy. Using PubMed, ISI Web of Knowledge, and Google Scholar, as well as cited and citing reference histories of articles through August 2012, English-language articles were identified, read, and synthesized into categories that could account for biological sex differences in human life expectancy. The examination of biological mechanisms accounting for the female-based advantage in human life expectancy has been an active area of inquiry; however, it is still difficult to prove the relative importance of any one factor. Nonetheless, biological differences between the sexes do exist and include differences in genetic and physiological factors such as progressive skewing of X chromosome inactivation, telomere attrition, mitochondrial inheritance, hormonal and cellular responses to stress, immune function, and metabolic substrate handling among others. These factors may account for at least a part of the female advantage in human life expectancy. Despite noted gaps in sex equality, higher body fat percentages and lower physical activity levels globally at all ages, a sex-based gap in life expectancy exists in nearly every country for which data exist. There are several biological mechanisms that may contribute to explaining why females live longer than males on average.

  6. Advanced quadrature sets and acceleration and preconditioning techniques for the discrete ordinates method in parallel computing environments

    Science.gov (United States)

    Longoni, Gianluca

    In the nuclear science and engineering field, radiation transport calculations play a key role in the design and optimization of nuclear devices. The linear Boltzmann equation describes the angular, energy and spatial variations of the particle or radiation distribution. The discrete ordinates method (SN) is the most widely used technique for solving the linear Boltzmann equation. However, for realistic problems, the memory and computing time require the use of supercomputers. This research is devoted to the development of new formulations for the SN method, especially for highly angular dependent problems, in parallel environments. The present research work addresses two main issues affecting the accuracy and performance of SN transport theory methods: quadrature sets and acceleration techniques. New advanced quadrature techniques which allow for large numbers of angles with a capability for local angular refinement have been developed. These techniques have been integrated into the 3-D SN PENTRAN (Parallel Environment Neutral-particle TRANsport) code and applied to highly angular dependent problems, such as CT-Scan devices, that are widely used to obtain detailed 3-D images for industrial/medical applications. In addition, the accurate simulation of core physics and shielding problems with strong heterogeneities and transport effects requires the numerical solution of the transport equation. In general, the convergence rate of the solution methods for the transport equation is reduced for large problems with optically thick regions and scattering ratios approaching unity. To remedy this situation, new acceleration algorithms based on the Even-Parity Simplified SN (EP-SSN) method have been developed. A new stand-alone code system, PENSSn (Parallel Environment Neutral-particle Simplified SN), has been developed based on the EP-SSN method. The code is designed for parallel computing environments with spatial, angular and hybrid (spatial/angular) domain
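For readers unfamiliar with the term, a "quadrature set" is the set of discrete directions and weights that replaces the angular integral in the transport equation with a weighted sum. A minimal sketch for 1-D slab geometry, using a textbook Gauss-Legendre set rather than the advanced locally refined sets this thesis develops:

```python
import numpy as np

def sn_quadrature(n):
    """Gauss-Legendre S_N quadrature set for 1-D slab geometry.

    Returns the n direction cosines mu_m and weights w_m used to replace
    the angular integral over mu in [-1, 1] by sum_m w_m * psi(mu_m).
    The weights sum to 2, the measure of the interval [-1, 1].
    """
    mu, w = np.polynomial.legendre.leggauss(n)
    return mu, w

def angular_moment(f, n=8):
    """Approximate the integral of f(mu) over mu in [-1, 1] with an S_n set."""
    mu, w = sn_quadrature(n)
    return float(np.sum(w * f(mu)))
```

Since an n-point Gauss-Legendre set integrates polynomials up to degree 2n - 1 exactly, `angular_moment(lambda m: m**2, 8)` recovers the exact value 2/3 to machine precision.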

  7. Performance expectation plan

    Energy Technology Data Exchange (ETDEWEB)

    Ray, P.E.

    1998-09-04

    This document outlines the significant accomplishments of fiscal year 1998 for the Tank Waste Remediation System (TWRS) Project Hanford Management Contract (PHMC) team. Opportunities for improvement to better meet some performance expectations have been identified. The PHMC has performed at an excellent level in administration of leadership, planning, and technical direction. The contractor has met and made notable improvement of attaining customer satisfaction in mission execution. This document includes the team's recommendation that the PHMC TWRS Performance Expectation Plan evaluation rating for fiscal year 1998 be an Excellent.

  8. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    2017-01-01

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions ... of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...

  9. The Qualitative Expectations Hypothesis

    DEFF Research Database (Denmark)

    Frydman, Roman; Johansen, Søren; Rahbek, Anders

    We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions ... of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...

  10. Flexible and scalable methods for quantifying stochastic variability in the era of massive time-domain astronomical data sets

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Brandon C. [Department of Physics, Broida Hall, University of California, Santa Barbara, CA 93106-9530 (United States); Becker, Andrew C. [Department of Astronomy, University of Washington, P.O. Box 351580, Seattle, WA 98195-1580 (United States); Sobolewska, Malgosia [Nicolaus Copernicus Astronomical Center, Bartycka 18, 00-716, Warsaw (Poland); Siemiginowska, Aneta [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Uttley, Phil [Astronomical Institute Anton Pannekoek, University of Amsterdam, Postbus 94249, 1090 GE Amsterdam (Netherlands)

    2014-06-10

    We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.
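The "sum of Lorentzian functions" property follows from the rational form of the CARMA power spectrum: the PSD is the squared modulus of the moving-average polynomial divided by that of the autoregressive polynomial, evaluated at s = 2*pi*i*f. A minimal sketch of that formula; the coefficient ordering convention here is an assumption, as it varies between references:

```python
import numpy as np

def carma_psd(freqs, alpha, beta, sigma):
    """Power spectral density of a CARMA(p, q) process.

    PSD(f) = sigma^2 * |beta(2*pi*i*f)|^2 / |alpha(2*pi*i*f)|^2,
    where alpha and beta are the autoregressive and moving-average
    polynomials, given as coefficient lists [c_0, c_1, ..., c_k] in
    increasing powers of s (a convention assumed for this sketch).
    """
    s = 2j * np.pi * np.asarray(freqs, dtype=np.complex128)
    num = np.polyval(list(beta)[::-1], s)   # MA polynomial evaluated at s
    den = np.polyval(list(alpha)[::-1], s)  # AR polynomial evaluated at s
    return sigma ** 2 * np.abs(num) ** 2 / np.abs(den) ** 2
```

For CARMA(1, 0), i.e. `alpha=[a0, 1.0]` and `beta=[1.0]`, this reduces to a single Lorentzian, sigma^2 / (a0^2 + (2*pi*f)^2); higher-order models superpose several such terms, which is what makes the family so flexible.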

  11. Budget- and Priority-Setting Criteria at State Health Agencies in Times of Austerity: A Mixed-Methods Study

    Science.gov (United States)

    Resnick, Beth; Kass, Nancy; Sellers, Katie; Young, Jessica; Bernet, Patrick; Jarris, Paul

    2014-01-01

    Objectives. We examined critical budget and priority criteria for state health agencies to identify likely decision-making factors, pressures, and opportunities in times of austerity. Methods. We have presented findings from a 2-stage, mixed-methods study with state public health leaders regarding public health budget- and priority-setting processes. In stage 1, we conducted hour-long interviews in 2011 with 45 health agency executive and division or bureau leaders from 6 states. Stage 2 was an online survey of 207 executive and division or bureau leaders from all state health agencies (66% response rate). Results. Respondents identified 5 key criteria: whether a program was viewed as “mission critical,” the seriousness of the consequences of not funding the program, financing considerations, external directives and mandates, and the magnitude of the problem the program addressed. Conclusions. We have presented empirical findings on criteria used in state health agency budgetary decision-making. These criteria suggested a focus and interest on core public health and the largest public health problems with the most serious ramifications. PMID:24825212

  12. Flexible and scalable methods for quantifying stochastic variability in the era of massive time-domain astronomical data sets

    International Nuclear Information System (INIS)

    Kelly, Brandon C.; Becker, Andrew C.; Sobolewska, Malgosia; Siemiginowska, Aneta; Uttley, Phil

    2014-01-01

    We present the use of continuous-time autoregressive moving average (CARMA) models as a method for estimating the variability features of a light curve, and in particular its power spectral density (PSD). CARMA models fully account for irregular sampling and measurement errors, making them valuable for quantifying variability, forecasting and interpolating light curves, and variability-based classification. We show that the PSD of a CARMA model can be expressed as a sum of Lorentzian functions, which makes them extremely flexible and able to model a broad range of PSDs. We present the likelihood function for light curves sampled from CARMA processes, placing them on a statistically rigorous foundation, and we present a Bayesian method to infer the probability distribution of the PSD given the measured light curve. Because calculation of the likelihood function scales linearly with the number of data points, CARMA modeling scales to current and future massive time-domain data sets. We conclude by applying our CARMA modeling approach to light curves for an X-ray binary, two active galactic nuclei, a long-period variable star, and an RR Lyrae star in order to illustrate their use, applicability, and interpretation.

  13. The South Asian Heart Lifestyle Intervention (SAHELI) study to improve cardiovascular risk factors in a community setting: design and methods.

    Science.gov (United States)

    Kandula, Namratha R; Patel, Yasin; Dave, Swapna; Seguil, Paola; Kumar, Santosh; Baker, David W; Spring, Bonnie; Siddique, Juned

    2013-11-01

    Disseminating and implementing evidence-based, cardiovascular disease (CVD) prevention lifestyle interventions in community settings and in ethnic minority populations is a challenge. We describe the design and methods for the South Asian Heart Lifestyle Intervention (SAHELI) study, a pilot study designed to determine the feasibility and initial efficacy of a culturally-targeted, community-based lifestyle intervention to improve physical activity and diet behaviors among medically underserved South Asians (SAs). Participants with at least one CVD risk factor will be randomized to either a lifestyle intervention or a control group. Participants in both groups will be screened in a community setting and receive a primary care referral after randomization. Intervention participants will receive 6weeks of group classes, followed by 12weeks of individual telephone support where they will be encouraged to initiate and maintain a healthy lifestyle goal. Control participants will receive their screening results and monthly mailings on CVD prevention. Primary outcomes will be changes in moderate/vigorous physical activity and saturated fat intake between baseline, 3-, and 6-month follow-up. Secondary outcomes will be changes in weight, clinical risk factors, primary care visits, self-efficacy, and social support. This study will be one of the first to pilot-test a lifestyle intervention for SAs, one of the fastest growing racial/ethnic groups in the U.S. and one with disparate CVD risk. Results of this pilot study will provide preliminary data about the efficacy of a lifestyle intervention on CVD risk in SAs and inform community-engaged CVD prevention efforts in an increasingly diverse U.S. population. © 2013.

  14. Identification of growth phases and influencing factors in cultivations with AGE1.HN cells using set-based methods.

    Directory of Open Access Journals (Sweden)

    Steffen Borchers

    Full Text Available Production of bio-pharmaceuticals in cell culture, such as mammalian cells, is challenging. Mathematical models can provide support to the analysis, optimization, and the operation of production processes. In particular, unstructured models are suited for these purposes, since they can be tailored to particular process conditions. To this end, growth phases and the most relevant factors influencing cell growth and product formation have to be identified. Due to noisy and erroneous experimental data, unknown kinetic parameters, and the large number of combinations of influencing factors, currently there are only limited structured approaches to tackle these issues. We outline a structured set-based approach to identify different growth phases and the factors influencing cell growth and metabolism. To this end, measurement uncertainties are taken explicitly into account to bound the time-dependent specific growth rate based on the observed increase of the cell concentration. Based on the bounds on the specific growth rate, we can identify qualitatively different growth phases and (in-)validate hypotheses on the factors influencing cell growth and metabolism. We apply the approach to a mammalian suspension cell line (AGE1.HN). We show that growth in batch culture can be divided into two main growth phases. The initial phase is characterized by exponential growth dynamics, which can be described consistently by a relatively simple unstructured and segregated model. The subsequent phase is characterized by a decrease in the specific growth rate, which, as shown, results from substrate limitation and the pH of the medium. An extended model is provided which describes the observed dynamics of cell growth and main metabolites, and the corresponding kinetic parameters as well as their confidence intervals are estimated. The study is complemented by an uncertainty and outlier analysis. Overall, we demonstrate utility of set-based methods for analyzing cell

  15. Identification of growth phases and influencing factors in cultivations with AGE1.HN cells using set-based methods.

    Science.gov (United States)

    Borchers, Steffen; Freund, Susann; Rath, Alexander; Streif, Stefan; Reichl, Udo; Findeisen, Rolf

    2013-01-01

    Production of bio-pharmaceuticals in cell culture, such as mammalian cells, is challenging. Mathematical models can provide support to the analysis, optimization, and the operation of production processes. In particular, unstructured models are suited for these purposes, since they can be tailored to particular process conditions. To this end, growth phases and the most relevant factors influencing cell growth and product formation have to be identified. Due to noisy and erroneous experimental data, unknown kinetic parameters, and the large number of combinations of influencing factors, currently there are only limited structured approaches to tackle these issues. We outline a structured set-based approach to identify different growth phases and the factors influencing cell growth and metabolism. To this end, measurement uncertainties are taken explicitly into account to bound the time-dependent specific growth rate based on the observed increase of the cell concentration. Based on the bounds on the specific growth rate, we can identify qualitatively different growth phases and (in-)validate hypotheses on the factors influencing cell growth and metabolism. We apply the approach to a mammalian suspension cell line (AGE1.HN). We show that growth in batch culture can be divided into two main growth phases. The initial phase is characterized by exponential growth dynamics, which can be described consistently by a relatively simple unstructured and segregated model. The subsequent phase is characterized by a decrease in the specific growth rate, which, as shown, results from substrate limitation and the pH of the medium. An extended model is provided which describes the observed dynamics of cell growth and main metabolites, and the corresponding kinetic parameters as well as their confidence intervals are estimated. The study is complemented by an uncertainty and outlier analysis. Overall, we demonstrate utility of set-based methods for analyzing cell growth and
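The core set-based step, bounding the specific growth rate from interval-valued cell-concentration measurements, can be sketched as follows. The pairing of measurement extremes is the standard worst-case interval bound; the paper's actual bounding scheme may differ in detail:

```python
import math

def growth_rate_bounds(t1, t2, x1_lo, x1_hi, x2_lo, x2_hi):
    """Guaranteed interval for the mean specific growth rate mu on [t1, t2].

    The cell concentrations are only known to lie in intervals,
    x(t1) in [x1_lo, x1_hi] and x(t2) in [x2_lo, x2_hi]. Since
    mu = ln(x(t2) / x(t1)) / (t2 - t1) for exponential growth, the
    worst-case pairings of the interval endpoints bound mu from
    below and above.
    """
    dt = t2 - t1
    mu_min = math.log(x2_lo / x1_hi) / dt  # slowest growth consistent with the data
    mu_max = math.log(x2_hi / x1_lo) / dt  # fastest growth consistent with the data
    return mu_min, mu_max
```

If, say, the concentration doubles between two sampling times up to +/-10% measurement uncertainty, the true specific growth rate is guaranteed to lie in the returned interval around ln(2); a growth phase transition is then detectable as a time window where such intervals no longer overlap.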

  16. Behavior, Expectations and Status

    Science.gov (United States)

    Webster, Jr, Murray; Rashotte, Lisa Slattery

    2010-01-01

    We predict effects of behavior patterns and status on performance expectations and group inequality using an integrated theory developed by Fisek, Berger and Norman (1991). We next test those predictions using new experimental techniques we developed to control behavior patterns as independent variables. In a 10-condition experiment, predictions…

  17. Life Expectancy in 2040

    DEFF Research Database (Denmark)

    Canudas-Romo, Vladimir; DuGoff, Eva H; Wu, Albert W.

    2016-01-01

    We use expert clinical and public health opinion to estimate likely changes in the prevention and treatment of important disease conditions and how they will affect future life expectancy. Focus groups were held including clinical and public health faculty with expertise in the six leading causes...

  18. Examination of 2 times 8.5 Gy method as palliative therapy of the case that convalescence is expected for a short term

    International Nuclear Information System (INIS)

    Saito, Akira; Onishi, Hiroshi; Aoki, Shinichi; Araya, Masayuki

    2008-01-01

    The objective of this study was to report on the clinical course of cases in which the 8.5 Gy x 2 method was used as a palliative irradiation method at our hospital. There were 21 cases in which irradiation with 8.5 Gy x 2 was used at our hospital from June 2004 to March 2006. These included 15 male cases and 6 female cases. The ages of the subjects ranged from 49 to 89 years (median value: 65 years of age). Karnofsky performance status (KPS) ranged from 50 to 90% (median value: 70%). The disorders (symptoms) included 7 cases of mediastinal lymph node metastasis (respiratory discomfort, coughing, and hemosputum), 4 cases of esophageal cancer (dysphagia), 5 cases of lung tumors (hemosputum and superior vena cava (SVC) syndrome), 1 case each of bone infiltration by soft tissue tumors in the abdomen and in the extremities (pain), 2 cases of abdominal lymph node metastasis (jaundice and pain), and 1 case of hepatocellular carcinoma (HCC) biliary infiltration (jaundice). Ten-MV X-rays were used in all cases. The treatment plan was carried out using CT simulation. Irradiation with 8.5 Gy was used twice. The site attributable to the symptoms was defined as gross tumor volume (GTV), and the region sufficiently containing GTV was defined as planning target volume (PTV). In 2 cases, the general condition of the patients worsened prior to the second irradiation, and therefore irradiation was discontinued. A total of 19 cases were treated with irradiation. A temporary improvement of the symptoms was observed in 12 cases. Early-stage adverse events (NCI-CTC ver. 2, grade 2 or higher) included 2 cases of grade 2 esophagitis, and 1 case of grade 2 nausea. Late-stage adverse events could not be evaluated. A temporary improvement of the symptoms was observed in about half of the cases. There were no grade 3 or higher early-stage adverse events. It is believed that this palliative irradiation method is acceptable if long-term irradiation is impossible. (author)
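The abstract reports physical doses only. As context, a standard radiobiology check on such a hypofractionated schedule is the biologically effective dose (BED) of the linear-quadratic model; this calculation is not part of the paper, and alpha/beta = 10 Gy is a conventional tumor-tissue assumption:

```python
def biologically_effective_dose(n, d, alpha_beta):
    """Biologically effective dose of n fractions of d Gy each,
    under the linear-quadratic model:

        BED = n * d * (1 + d / (alpha/beta))
    """
    return n * d * (1.0 + d / alpha_beta)

# Two fractions of 8.5 Gy with alpha/beta = 10 Gy (assumed tumor value):
bed = biologically_effective_dose(2, 8.5, 10.0)  # = 31.45 Gy
```

This gives about 31.5 Gy for the reported 2 x 8.5 Gy schedule, illustrating why a few large fractions can deliver a biologically meaningful palliative dose in a short overall treatment time.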

  19. Acceptability of self-collection sampling for HPV-DNA testing in low-resource settings: a mixed methods approach.

    Science.gov (United States)

    Bansil, Pooja; Wittet, Scott; Lim, Jeanette L; Winkler, Jennifer L; Paul, Proma; Jeronimo, Jose

    2014-06-12

    Vaginal self-sampling with HPV-DNA tests is a promising primary screening method for cervical cancer. However, women's experiences, concerns and the acceptability of such tests in low-resource settings remain unknown. In India, Nicaragua, and Uganda, a mixed-method design was used to collect data from surveys (N = 3,863), qualitative interviews (N = 72; 20 providers and 52 women) and focus groups (N = 30 women) on women's and providers' experiences with self-sampling, women's opinions of sampling at home, and their future needs. Among surveyed women, 90% provided a self-collected sample. Of these, 75% reported it was easy, although 52% were initially concerned about hurting themselves and 24% were worried about not getting a good sample. Most surveyed women preferred self-sampling (78%). However it was not clear if they responded to the privacy of self-sampling or the convenience of avoiding a pelvic examination, or both. In follow-up interviews, most women reported that they didn't mind self-sampling, but many preferred to have a provider collect the vaginal sample. Most women also preferred clinic-based screening (as opposed to home-based self-sampling), because the sample could be collected by a provider, women could receive treatment if needed, and the clinic was sanitary and provided privacy. Self-sampling acceptability was higher when providers prepared women through education, allowed women to examine the collection brush, and were present during the self-collection process. Among survey respondents, aids that would facilitate self-sampling in the future were: staff help (53%), additional images in the illustrated instructions (31%), and a chance to practice beforehand with a doll/model (26%). Self- and vaginal sampling are widely acceptable among women in low-resource settings. Providers have a unique opportunity to educate and prepare women for self-sampling and be flexible in accommodating women's preference for self-sampling.

  20. Establishing midwifery in low-resource settings: guidance from a mixed-methods evaluation of the Afghanistan midwifery education program.

    Science.gov (United States)

    Zainullah, Partamin; Ansari, Nasratullah; Yari, Khalid; Azimi, Mahmood; Turkmani, Sabera; Azfar, Pashtoon; LeFevre, Amnesty; Mungia, Jaime; Gubin, Rehana; Kim, Young-Mi; Bartlett, Linda

    2014-10-01

    The shortage of skilled birth attendants has been a key factor in the high maternal and newborn mortality in Afghanistan. Efforts to strengthen pre-service midwifery education in Afghanistan have increased the number of midwives from 467 in 2002 to 2954 in 2010. We analyzed the costs and graduate performance outcomes of the two types of pre-service midwifery education programs in Afghanistan that were either established or strengthened between 2002 and 2010 to guide future program implementation and share lessons learned. We performed a mixed-methods evaluation of selected midwifery schools between June 2008 and November 2010. This paper focuses on the evaluation's quantitative methods, which included (a) an assessment of a sample of midwifery school graduates (n=138) to measure their competencies in six clinical skills; (b) prospective documentation of the actual clinical practices of a subsample of these graduates (n=26); and (c) a costing analysis to estimate the resources required to educate students enrolled in these programs. For the clinical competency assessment and clinical practices components, two Institutes for Health Sciences (IHS) schools and six Community Midwifery Education (CME) schools; for the costing analysis, a different set of nine schools (two IHS, seven CME), all of which were funded by the US Agency for International Development. Midwives who had graduated from either IHS or CME schools. CME graduates (n=101) achieved an overall mean competency score of 63.2% (59.9-66.6%) on the clinical competency assessment compared to 57.3% (49.9-64.7%) for IHS graduates (n=37). Reproductive health activities accounted for 76% of midwives' time over an average of three months. Approximately 1% of childbirths required referral or resulted in maternal death. On the basis of known costs for the programs, the estimated cost of graduating a class with 25 students averaged US$298,939, or US$10,784 per graduate. The pre-service midwifery education experience of