WorldWideScience

Sample records for maximized selectivity reliable

  1. Assessment of the Maximal Split-Half Coefficient to Estimate Reliability

    Science.gov (United States)

    Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun

    2010-01-01

    The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…
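
    As a rough illustration of the statistic just described, the sketch below enumerates every equal-size split of a small item pool, applies the Spearman-Brown correction to each half-half correlation, and keeps the largest value. It is a minimal sketch under assumed conditions (an even number of items, hypothetical respondent data), not code from the study, and exhaustive enumeration is only practical for short scales.

```python
# Minimal sketch: maximal split-half reliability for a k-item scale
# (k assumed even; data are hypothetical).
from itertools import combinations

import numpy as np

def maximal_split_half(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) score matrix."""
    n, k = items.shape
    all_items = set(range(k))
    best = -np.inf
    # Enumerate every way to put k/2 items into the first half
    # (each split appears twice, which is harmless for a maximum).
    for half in combinations(range(k), k // 2):
        a = items[:, list(half)].sum(axis=1)
        b = items[:, list(all_items - set(half))].sum(axis=1)
        r = np.corrcoef(a, b)[0, 1]
        best = max(best, 2 * r / (1 + r))   # Spearman-Brown step-up
    return best

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(100, 6))  # 100 respondents, 6 Likert items
print(round(maximal_split_half(scores), 3))
```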

  2. Maximally reliable Markov chains under energy constraints.

    Science.gov (United States)

    Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam

    2009-07-01

    Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
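
    A compact way to see why a longer irreversible chain is more reliable (a worked special case added here for orientation, not taken from the paper): if each of the n states is left at the same rate λ, the total signal-generation time is Erlang distributed and its relative variability falls as 1/√n, so adding states shrinks the timing noise.

    $$T=\sum_{i=1}^{n}T_i,\qquad T_i\stackrel{\text{i.i.d.}}{\sim}\mathrm{Exp}(\lambda)\;\Rightarrow\;\mathbb{E}[T]=\frac{n}{\lambda},\quad \operatorname{Var}(T)=\frac{n}{\lambda^{2}},\quad \mathrm{CV}(T)=\frac{\sqrt{\operatorname{Var}(T)}}{\mathbb{E}[T]}=\frac{1}{\sqrt{n}}.$$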

  3. A method of bias correction for maximal reliability with dichotomous measures.

    Science.gov (United States)

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  4. Maximal network reliability for a stochastic power transmission network

    International Nuclear Information System (INIS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2011-01-01

    Many studies regarded a power transmission network as a binary-state network and constructed it with several arcs and vertices to evaluate network reliability. In practice, the power transmission network should be stochastic because each arc (transmission line) combined with several physical lines is multistate. Network reliability is the probability that the network can transmit d units of electric power from a power plant (source) to a high voltage substation at a specific area (sink). This study focuses on searching for the optimal transmission line assignment to the power transmission network such that network reliability is maximized. A genetic algorithm based method integrating the minimal paths and the Recursive Sum of Disjoint Products is developed to solve this assignment problem. A real power transmission network is adopted to demonstrate the computational efficiency of the proposed method while comparing with the random solution generation approach.

  5. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    Science.gov (United States)

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    A myoelectric controlled prosthetic hand requires machine-based identification of hand gestures using the surface electromyogram (sEMG) recorded from the forearm muscles. This study observed that a sub-set of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the muscles of the forearm while subjects performed hand gestures and was then classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated from a series of confusion matrices. When using all ten gestures, the sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected that gave sensitivity and specificity greater than 95% (96.5% and 99.3%): Hand open, Hand close, Little finger flexion, Ring finger flexion, Middle finger flexion and Thumb flexion. This work has shown that reliable myoelectric-based human computer interface systems require careful selection of the gestures to be recognized; without such selection, the reliability is poor.
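
    The per-gesture sensitivity and specificity quoted above can be recovered from a confusion matrix in the usual one-vs-rest way; the sketch below is a generic illustration with a hypothetical three-gesture matrix (the study's PNM index itself is defined in the paper and is not reproduced here).

```python
# Per-class sensitivity and specificity from a multi-class confusion matrix,
# computed one-vs-rest; the 3x3 matrix below is hypothetical.
import numpy as np

def sens_spec(cm: np.ndarray):
    total = cm.sum()
    results = []
    for c in range(cm.shape[0]):
        tp = cm[c, c]
        fn = cm[c, :].sum() - tp
        fp = cm[:, c].sum() - tp
        tn = total - tp - fn - fp
        results.append((tp / (tp + fn), tn / (tn + fp)))
    return results

cm = np.array([[48, 1, 1],
               [3, 45, 2],
               [2, 4, 44]])
for g, (se, sp) in enumerate(sens_spec(cm)):
    print(f"gesture {g}: sensitivity {se:.2f}, specificity {sp:.2f}")
```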

  6. Techniques to maximize software reliability in radiation fields

    International Nuclear Information System (INIS)

    Eichhorn, G.; Piercey, R.B.

    1986-01-01

    Microprocessor system failures due to memory corruption by single event upsets (SEUs) and/or latch-up in RAM or ROM memory are common in environments with high radiation flux. Traditional methods to harden microcomputer systems against SEUs and memory latch-up have usually involved expensive large-scale hardware redundancy. Such systems offer higher reliability, but they tend to be more complex and non-standard. At the Space Astronomy Laboratory the authors have developed general programming techniques for producing software which is resistant to such memory failures. These techniques, which may be applied to standard off-the-shelf hardware as well as custom designs, include an implementation of the Maximally Redundant Software (MRS) model, error detection algorithms, and memory verification and management.

  7. The reliability and validity of fatigue measures during short-duration maximal-intensity intermittent cycling.

    Science.gov (United States)

    Glaister, Mark; Stone, Michael H; Stewart, Andrew M; Hughes, Michael; Moir, Gavin L

    2004-08-01

    The purpose of the present study was to assess the reliability and validity of fatigue measures, as derived from 4 separate formulae, during tests of repeat sprint ability. On separate days over a 3-week period, 2 groups of 7 recreationally active men completed 6 trials of 1 of 2 maximal (20 x 5 seconds) intermittent cycling tests with contrasting recovery periods (10 or 30 seconds). All trials were conducted on a friction-braked cycle ergometer, and fatigue scores were derived from measures of mean power output for each sprint. Apart from formula 1, which calculated fatigue from the percentage difference in mean power output between the first and last sprint, all remaining formulae produced fatigue scores that showed a reasonably good level of test-retest reliability in both intermittent test protocols (intraclass correlation range: 0.78-0.86; 95% likely range of true values: 0.54-0.97). Although between-protocol differences in the magnitude of the fatigue scores suggested good construct validity, within-protocol differences highlighted limitations with each formula. Overall, the results support the use of the percentage decrement score as the most valid and reliable measure of fatigue during brief maximal intermittent work.
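
    The percentage decrement score favoured by the authors is commonly computed from all sprints rather than only the first and last; the sketch below follows that common definition (a reading of decrement scoring in general, not code from the study), with hypothetical power values.

```python
# Percentage decrement score for a repeat-sprint test:
# 100 * (1 - total work / ideal work), where "ideal" assumes every sprint
# equalled the best sprint. Power values are hypothetical.
def percent_decrement(mean_powers):
    total = sum(mean_powers)
    ideal = max(mean_powers) * len(mean_powers)
    return 100.0 * (1.0 - total / ideal)

powers = [820, 810, 790, 775, 760, 742]  # mean power per sprint (W)
print(f"decrement = {percent_decrement(powers):.1f}%")
```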

  8. Reliability of surface electromyography activity of gluteal and hamstring muscles during sub-maximal and maximal voluntary isometric contractions.

    Science.gov (United States)

    Bussey, Melanie D; Aldabe, Daniela; Adhia, Divya; Mani, Ramakrishnan

    2018-04-01

    Normalizing to a reference signal is essential when analysing and comparing electromyography signals across or within individuals. However, studies have shown that MVC testing may not be as reliable in persons with acute and chronic pain. The purpose of this study was to compare the test-retest reliability of the muscle activity in the biceps femoris and gluteus maximus between a novel sub-MVC and standard MVC protocols. This study utilized a single individual repeated measures design with 12 participants performing multiple trials of both the sub-MVC and MVC tasks on two separate days. The participant position in the prone leg raise task was standardised with an ultrasonic sensor to improve task precision between trials/days. Day-to-day and trial-to-trial reliability of the maximal muscle activity was examined using ICC and SEM. Day-to-day and trial-to-trial reliability of the EMG activity in the BF and GM were high (0.70-0.89) to very high (≥0.90) for both test procedures. %SEM was <5-10% for both tests on a given day but higher in the day-to-day comparisons. The lower amplitude of the sub-MVC is a likely contributor to increased %SEM (8-13%) in the day-to-day comparison. The findings show that the sub-MVC modified prone double leg raise results in GM and BF EMG measures similar in reliability and precision to the standard MVC tasks. Therefore, the modified prone double leg raise may be a useful substitute for traditional MVC testing for normalizing EMG signals of the BF and GM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Reliability criteria selection for integrated resource planning

    International Nuclear Information System (INIS)

    Ruiu, D.; Ye, C.; Billinton, R.; Lakhanpal, D.

    1993-01-01

    A study was conducted on the selection of a generating system reliability criterion that ensures a reasonable continuity of supply while minimizing the total costs to utility customers. The study was conducted using the Institute of Electrical and Electronics Engineers (IEEE) reliability test system as the study system. The study inputs and results for conditions and load forecast data, new supply resources data, demand-side management resource data, resource planning criterion, criterion value selection, supply side development, integrated resource development, and best criterion values are tabulated and discussed. Preliminary conclusions are drawn as follows. In the case of integrated resource planning, the selection of the best value for a given type of reliability criterion can be done using methods similar to those used for supply side planning. The reliability criteria values previously used for supply side planning may not be economically justified when integrated resource planning is used. Utilities may have to revise and adopt new, and perhaps lower, supply reliability criteria for integrated resource planning. More complex reliability criteria, such as energy-related indices, which take into account the magnitude, frequency and duration of the expected interruptions, are better adapted than the simpler capacity-based reliability criteria such as loss of load expectation. 7 refs., 5 figs., 10 tabs

  10. Diurnal variation and reliability of the urine lactate concentration after maximal exercise.

    Science.gov (United States)

    Nikolaidis, Stefanos; Kosmidis, Ioannis; Sougioultzis, Michail; Kabasakalis, Athanasios; Mougios, Vassilis

    2018-01-01

    The postexercise urine lactate concentration is a novel valid exercise biomarker, which has exhibited satisfactory reliability in the morning hours under controlled water intake. The aim of the present study was to investigate the diurnal variation of the postexercise urine lactate concentration and its reliability in the afternoon hours. Thirty-two healthy children (11 boys and 21 girls) and 23 adults (13 men and 10 women) participated in the study. All participants performed two identical sessions of eight 25 m bouts of maximal freestyle swimming executed every 2 min with passive recovery in between. These sessions were performed in the morning and afternoon and were separated by 3-4 days. Adults performed an additional afternoon session that was also separated by 3-4 days. All swimmers drank 500 mL of water before and another 500 mL after each test. Capillary blood and urine samples were collected before and after each test for lactate determination. Urine creatinine, urine density and body water content were also measured. The intraclass correlation coefficient was used as a reliability index between the morning and afternoon tests, as well as between the afternoon test and retest. Swimming performance and body water content exhibited excellent reliability in both children and adults. The postexercise blood lactate concentration did not show diurnal variation, showing a good reliability between the morning and afternoon tests, as well as high reliability between the afternoon test and retest. The postexercise urine density and lactate concentration were affected by time of day. However, when lactate was normalized to creatinine, it exhibited excellent reliability in children and good-to-high reliability in adults. The postexercise urine lactate concentration showed high reliability between the afternoon test and retest, independent of creatinine normalization. The postexercise blood and urine lactate concentrations were significantly correlated in all

  11. Application of the maximal covering location problem to habitat reserve site selection: a review

    Science.gov (United States)

    Stephanie A. Snyder; Robert G. Haight

    2016-01-01

    The Maximal Covering Location Problem (MCLP) is a classic model from the location science literature which has found wide application. One important application is to a fundamental problem in conservation biology, the Maximum Covering Species Problem (MCSP), which identifies land parcels to protect to maximize the number of species represented in the selected sites. We...
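
    For orientation, a standard integer-programming statement of the species-covering variant (a generic textbook formulation, not necessarily the exact model reviewed) is:

    $$\max \sum_{i\in I} y_i \quad\text{s.t.}\quad y_i \le \sum_{j\in N_i} x_j \;\;\forall i\in I,\qquad \sum_{j\in J} x_j \le p,\qquad x_j,\,y_i \in\{0,1\},$$

    where x_j = 1 if parcel j is protected, y_i = 1 if species i is covered, N_i is the set of parcels in which species i occurs, and p is the number of parcels that can be selected.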

  12. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available A wind turbine generator output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. Reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. From a reliability point of view, to get optimum reliability in power generation, it is desirable to select a wind turbine generator which is best suited for a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
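
    A minimal sketch of the monthly energy estimate described above, assuming a two-parameter Weibull wind-speed model and using the cube root of the mean cubed speed; the power curve, efficiency factor and site parameters are hypothetical stand-ins, not values from the paper.

```python
# Sketch: monthly wind energy from a Weibull wind model via the cubic mean
# cube root of wind speed. All turbine/site numbers are hypothetical.
import math

def cubic_mean_cube_root(c, k):
    """Weibull scale c (m/s) and shape k -> (E[v^3]) ** (1/3)."""
    return (c ** 3 * math.gamma(1 + 3 / k)) ** (1 / 3)

def turbine_power(v, v_in=3.5, v_rated=12.0, v_out=25.0, p_rated=2.0e6,
                  rho=1.225, area=math.pi * 45.0 ** 2, efficiency=0.40):
    """Very simplified power curve (W): cubic below rated, capped at rated."""
    if v < v_in or v > v_out:
        return 0.0
    return min(0.5 * rho * area * efficiency * v ** 3, p_rated)

v_eq = cubic_mean_cube_root(c=7.5, k=2.0)   # hypothetical site statistics
hours = 30 * 24
print(f"equivalent speed {v_eq:.2f} m/s, "
      f"monthly energy ~ {turbine_power(v_eq) * hours / 1e6:.0f} MWh")
```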

  13. Test-retest reliability of maximal leg muscle power and functional performance measures in patients with severe osteoarthritis (OA)

    DEFF Research Database (Denmark)

    Villadsen, Allan; Roos, Ewa M.; Overgaard, Søren

    Abstract : Purpose To evaluate the reliability of single-joint and multi-joint maximal leg muscle power and functional performance measures in patients with severe OA. Background Muscle power, taking both strength and velocity into account, is a more functional measure of lower extremity muscle...... and scheduled for unilateral total hip (n=9) or knee (n=11) replacement. Patients underwent a test battery on two occasions separated by approximately one week (range 7 to 11 days). Muscle power was measured using: 1. A linear encoder, unilateral lower limb isolated single-joint dynamic movement, e.g. knee...... flexion 2. A leg extension press, unilateral multi-joint knee and hip extension Functional performance was measured using: 1. 20 m walk usual pace 2. 20 m walk maximal pace 3. 5 times chair stands 4. Maximal number of knee bends/30sec Pain was measured on a VAS prior to and after conducting the entire...

  14. Intrarater Reliability of Muscle Strength and Hamstring to Quadriceps Strength Imbalance Ratios During Concentric, Isometric, and Eccentric Maximal Voluntary Contractions Using the Isoforce Dynamometer.

    Science.gov (United States)

    Mau-Moeller, Anett; Gube, Martin; Felser, Sabine; Feldhege, Frank; Weippert, Matthias; Husmann, Florian; Tischer, Thomas; Bader, Rainer; Bruhn, Sven; Behrens, Martin

    2017-08-17

    To determine intrasession and intersession reliability of strength measurements and hamstrings to quadriceps strength imbalance ratios (H/Q ratios) using the new isoforce dynamometer. Repeated measures. Exercise science laboratory. Thirty healthy subjects (15 females, 15 males, 27.8 years). Coefficient of variation (CV) and intraclass correlation coefficients (ICC) were calculated for (1) strength parameters, that is, peak torque, mean work, and mean power for concentric and eccentric maximal voluntary contractions; isometric maximal voluntary torque (IMVT); rate of torque development (RTD), and (2) H/Q ratios, that is, conventional concentric, eccentric, and isometric H/Q ratios (Hcon/Qcon at 60 deg/s, 120 deg/s, and 180 deg/s, Hecc/Qecc at -60 deg/s and Hiso/Qiso) and functional eccentric antagonist to concentric agonist H/Q ratios (Hecc/Qcon and Hcon/Qecc). High reliability: CV <10%, ICC >0.90; moderate reliability: CV between 10% and 20%, ICC between 0.80 and 0.90; low reliability: CV >20%, ICC <0.80. (1) Strength parameters: (a) high intrasession reliability for concentric, eccentric, and isometric measurements, (b) moderate-to-high intersession reliability for concentric and eccentric measurements and IMVT, and (c) moderate-to-high intrasession reliability but low intersession reliability for RTD. (2) H/Q ratios: (a) moderate-to-high intrasession reliability for conventional ratios, (b) high intrasession reliability for functional ratios, (c) higher intersession reliability for Hcon/Qcon and Hiso/Qiso (moderate to high) than Hecc/Qecc (low to moderate), and (d) higher intersession reliability for conventional H/Q ratios (low to high) than functional H/Q ratios (low to moderate). The results have confirmed the reliability of strength parameters and the most frequently used H/Q ratios.

  15. Reliable Path Selection Problem in Uncertain Traffic Network after Natural Disaster

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2013-01-01

    Full Text Available After a natural disaster, especially a large-scale disaster with widely affected areas, vast quantities of relief materials are often needed. In the meantime, the traffic network is often uncertain because of the disaster. In this paper, we assume that the edges in the network are either connected or blocked, and that the connection probability of each edge is known. In order to ensure the arrival of these supplies at the affected areas, it is important to select a reliable path. A reliable path selection model is formulated, and two algorithms for solving this model are presented. Then, an adjustable reliable path selection model is proposed for the case when an edge of the selected reliable path is broken. The corresponding algorithms are shown to be efficient both theoretically and numerically.
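
    One simple way to realize the "most reliable path" idea when edges fail independently (a generic sketch, not the paper's algorithms): maximizing the product of connection probabilities is equivalent to running a shortest-path search on edge weights -log p, as below with a toy network.

```python
# Most reliable s-t path when edges survive independently with probability p:
# maximize prod(p_e)  <=>  minimize sum(-log p_e). Plain Dijkstra, toy data.
import heapq
import math

def most_reliable_path(edges, s, t):
    graph = {}
    for u, v, p in edges:
        graph.setdefault(u, []).append((v, -math.log(p)))
        graph.setdefault(v, []).append((u, -math.log(p)))
    dist, prev, queue = {s: 0.0}, {}, [(0.0, s)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == t:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(queue, (d + w, v))
    path, node = [t], t
    while node != s:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[t])

edges = [("depot", "a", 0.9), ("a", "town", 0.8),
         ("depot", "b", 0.7), ("b", "town", 0.95)]
print(most_reliable_path(edges, "depot", "town"))  # path and its reliability
```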

  16. Reliability of Maximal Strength Testing in Novice Weightlifters

    Science.gov (United States)

    Loehr, James A.; Lee, Stuart M. C.; Feiveson, Alan H.; Ploutz-Snyder, Lori L.

    2009-01-01

    The one repetition maximum (1RM) is a criterion measure of muscle strength. However, the reliability of 1RM testing in novice subjects has received little attention. Understanding this information is crucial to accurately interpret changes in muscle strength. To evaluate the test-retest reliability of a squat (SQ), heel raise (HR), and deadlift (DL) 1RM in novice subjects. Twenty healthy males (31 ± 5 y, 179.1 ± 6.1 cm, 81.4 ± 10.6 kg) with no weight training experience in the previous six months participated in four 1RM testing sessions, with each session separated by 5-7 days. SQ and HR 1RM were conducted using a Smith machine; DL 1RM was assessed using free weights. Session 1 was considered a familiarization and was not included in the statistical analyses. Repeated measures analysis of variance with Tukey's post-hoc tests were used to detect between-session differences in 1RM (p ≤ 0.05). Test-retest reliability was evaluated by intraclass correlation coefficients (ICC). During Session 2, the SQ and DL 1RM (SQ: 90.2 ± 4.3, DL: 75.9 ± 3.3 kg) were less than Session 3 (SQ: 95.3 ± 4.1, DL: 81.5 ± 3.5 kg) and Session 4 (SQ: 96.6 ± 4.0, DL: 82.4 ± 3.9 kg), but there were no differences between Session 3 and Session 4. HR 1RM measured during Session 2 (150.1 ± 3.7 kg) and Session 3 (152.5 ± 3.9 kg) were not different from one another, but both were less than Session 4 (157.5 ± 3.8 kg). The reliability (ICC) of 1RM measures for Sessions 2-4 were 0.88, 0.83, and 0.87, for SQ, HR, and DL, respectively. When considering only Sessions 3 and 4, the reliability was 0.93, 0.91, and 0.86 for SQ, HR, and DL, respectively. One familiarization session and 2 test sessions (for SQ and DL) were required to obtain excellent reliability (ICC greater than or equal to 0.90) in 1RM values with novice subjects. We were unable to attain this level of reliability following 3 HR testing sessions; therefore, additional sessions may be required to obtain an

  17. Reliability of maximal isometric knee strength testing with modified hand-held dynamometry in patients awaiting total knee arthroplasty: useful in research and individual patient settings? A reliability study

    Directory of Open Access Journals (Sweden)

    Koblbauer Ian FH

    2011-10-01

    Full Text Available Abstract Background Patients undergoing total knee arthroplasty (TKA) often experience strength deficits both pre- and post-operatively. As these deficits may have a direct impact on functional recovery, strength assessment should be performed in this patient population. For these assessments, reliable measurements should be used. This study aimed to determine the inter- and intrarater reliability of hand-held dynamometry (HHD) in measuring isometric knee strength in patients awaiting TKA. Methods To determine interrater reliability, 32 patients (81.3% female) were assessed by two examiners. Patients were assessed consecutively by both examiners on the same individual test dates. To determine intrarater reliability, a subgroup (n = 13) was again assessed by the examiners within four weeks of the initial testing procedure. Maximal isometric knee flexor and extensor strength were tested using a modified Citec hand-held dynamometer. Both the affected and unaffected knee were tested. Reliability was assessed using the Intraclass Correlation Coefficient (ICC). In addition, the Standard Error of Measurement (SEM) and the Smallest Detectable Difference (SDD) were used to determine reliability. Results In both the affected and unaffected knee, the inter- and intrarater reliability were good for knee flexors (ICC range 0.76-0.94) and excellent for knee extensors (ICC range 0.92-0.97). However, measurement error was high, displaying SDD ranges between 21.7% and 36.2% for interrater reliability and between 19.0% and 57.5% for intrarater reliability. Overall, measurement error was higher for the knee flexors than for the knee extensors. Conclusions Modified HHD appears to be a reliable strength measure, producing good to excellent ICC values for both inter- and intrarater reliability in a group of TKA patients. High SEM and SDD values, however, indicate high measurement error for individual measures. This study demonstrates that a modified HHD is appropriate to
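
    The SEM and SDD figures quoted above are conventionally derived from the ICC and the between-subject spread roughly as follows (standard definitions, given here for orientation rather than quoted from the article):

    $$\mathrm{SEM}=\mathrm{SD}\sqrt{1-\mathrm{ICC}},\qquad \mathrm{SDD}=1.96\sqrt{2}\,\mathrm{SEM},\qquad \mathrm{SDD\%}=\frac{\mathrm{SDD}}{\bar{x}}\times 100.$$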

  18. The reliability of a maximal isometric hip strength and simultaneous surface EMG screening protocol in elite, junior rugby league athletes.

    Science.gov (United States)

    Charlton, Paula C; Mentiplay, Benjamin F; Grimaldi, Alison; Pua, Yong-Hao; Clark, Ross A

    2017-02-01

    Firstly, to describe the reliability of assessing maximal isometric strength of the hip abductor and adductor musculature using a hand-held dynamometry (HHD) protocol with simultaneous wireless surface electromyographic (sEMG) evaluation of the gluteus medius (GM) and adductor longus (AL). Secondly, to describe the correlation between isometric strength recorded with the HHD protocol and a laboratory standard isokinetic device. Reliability and correlational study. A sample of 24 elite, male, junior rugby league athletes, age 16-20 years, participated in repeated HHD and isometric Kin-Com (KC) strength testing with simultaneous sEMG assessment, on average (range) 6 (5-7) days apart, by a single assessor. Strength tests included: unilateral hip abduction (ABD) and adduction (ADD), and bilateral ADD assessed with squeeze (SQ) tests in 0 and 45° of hip flexion. HHD demonstrated good to excellent inter-session reliability for all outcome measures (ICC (2,1) =0.76-0.91) and good to excellent association with the laboratory reference KC (ICC (2,1) =0.80-0.88). Whilst intra-session, inter-trial reliability of EMG activation and co-activation outcome measures ranged from moderate to excellent (ICC (2,1) =0.70-0.94), inter-session reliability was poor for all ICC (2,1) values. Isometric strength testing of the hip ABD and ADD musculature using HHD may be measured reliably in elite, junior rugby league athletes. Due to the poor inter-session reliability of sEMG measures, it is not recommended for athlete screening purposes if using the techniques implemented in this study. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  19. Reliable Portfolio Selection Problem in Fuzzy Environment: An mλ Measure Based Approach

    Directory of Open Access Journals (Sweden)

    Yuan Feng

    2017-04-01

    Full Text Available This paper investigates a fuzzy portfolio selection problem with guaranteed reliability, in which fuzzy variables are used to capture the uncertain returns of different securities. To effectively handle the fuzziness in a mathematical way, a new expected value operator and variance of fuzzy variables are defined based on the mλ measure, which is a linear combination of the possibility measure and necessity measure to balance the pessimism and optimism in the decision-making process. To formulate the reliable portfolio selection problem, we particularly adopt the expected total return and standard variance of the total return to evaluate the reliability of the investment strategies, producing three risk-guaranteed reliable portfolio selection models. To solve the proposed models, an effective genetic algorithm is designed to generate an approximate optimal solution to the considered problem. Finally, numerical examples are given to show the performance of the proposed models and algorithm.
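
    From the description above, the mλ measure and its associated expected value can be written roughly as follows (a paraphrase of the stated construction, not the paper's exact notation):

    $$m_{\lambda}(A)=\lambda\,\mathrm{Pos}(A)+(1-\lambda)\,\mathrm{Nec}(A),\quad \lambda\in[0,1],\qquad E_{m_{\lambda}}[\xi]=\int_{0}^{+\infty} m_{\lambda}(\xi\ge r)\,\mathrm{d}r-\int_{-\infty}^{0} m_{\lambda}(\xi\le r)\,\mathrm{d}r.$$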

  20. Maximizing Energy Savings Reliability in BC Hydro Industrial Demand-side Management Programs: An Assessment of Performance Incentive Models

    Science.gov (United States)

    Gosman, Nathaniel

    of alternative performance incentive program models to manage DSM risk in BC. Three performance incentive program models were assessed and compared to BC Hydro's current large industrial DSM incentive program, Power Smart Partners -- Transmission Project Incentives, itself a performance incentive-based program. Together, the selected program models represent a continuum of program design and implementation in terms of the schedule and level of incentives provided, the duration and rigour of measurement and verification (M&V), energy efficiency measures targeted and involvement of the private sector. A multi-criteria assessment framework was developed to rank the capacity of each program model to manage BC large industrial DSM risk factors. DSM risk management rankings were then compared to program cost-effectiveness, targeted energy savings potential in BC and survey results from BC industrial firms on the program models. The findings indicate that the reliability of DSM energy savings in the BC large industrial sector can be maximized through performance incentive program models that: (1) offer incentives jointly for capital and low-cost operations and maintenance (O&M) measures, (2) allow flexible lead times for project development, (3) utilize rigorous M&V methods capable of measuring variable load, process-based energy savings, (4) use moderate contract lengths that align with effective measure life, and (5) integrate energy management software tools capable of providing energy performance feedback to customers to maximize the persistence of energy savings. While this study focuses exclusively on the BC large industrial sector, the findings of this research have applicability to all energy utilities serving large, energy intensive industrial sectors.

  1. Advances in ranking and selection, multiple comparisons, and reliability methodology and applications

    CERN Document Server

    Balakrishnan, N; Nagaraja, HN

    2007-01-01

    S. Panchapakesan has made significant contributions to ranking and selection and has published in many other areas of statistics, including order statistics, reliability theory, stochastic inequalities, and inference. Written in his honor, the twenty invited articles in this volume reflect recent advances in these areas and form a tribute to Panchapakesan's influence and impact on these areas. Thematically organized, the chapters cover a broad range of topics from: Inference; Ranking and Selection; Multiple Comparisons and Tests; Agreement Assessment; Reliability; and Biostatistics. Featuring

  2. EVALUATION OF HUMAN RELIABILITY IN SELECTED ACTIVITIES IN THE RAILWAY INDUSTRY

    Directory of Open Access Journals (Sweden)

    Erika SUJOVÁ

    2016-07-01

    Full Text Available The article focuses on the evaluation of human reliability in the human–machine system in the railway industry. Based on a survey of a train dispatcher and of selected activities, we have identified risk factors affecting the dispatcher's work and evaluated the level of their influence on the reliability and safety of performed activities. The research took place at the authors' workplace between 2012 and 2013. A survey method was used. With its help, the authors were able to identify, for selected work activities of the train dispatcher, the risk factors that affect his/her work and to evaluate the seriousness of their influence on the reliability and safety of performed activities. Amongst the most important findings are reports of unclear and complicated internal regulations and work processes, a feeling of being overworked, and fear for one's safety at small, insufficiently protected stations.

  3. Improving inspection reliability through operator selection and training

    International Nuclear Information System (INIS)

    McGrath, Bernard; Carter, Luke

    2013-01-01

    A number of years ago the UK's Health and Safety Executive sponsored a series of three PANI projects investigating the application of manual ultrasonics, which endeavoured to establish the necessary steps that ensure a reliable inspection is performed. The results of the three projects were each reported separately on completion and also presented at a number of international conferences. This paper summarises the results of these projects from the point of view of operator performance. The correlation of operator ultrasonic performance with the results of aptitude tests is presented, along with observations on the impact of training and qualifications of the operators. The results lead to conclusions on how the selection and training of operators could be modified to improve the reliability of inspections.

  4. Crossover and maximal fat-oxidation points in sedentary healthy subjects: methodological issues.

    Science.gov (United States)

    Gmada, N; Marzouki, H; Haboubi, M; Tabka, Z; Shephard, R J; Bouhlel, E

    2012-02-01

    Our study aimed to assess the influence of protocol on the crossover point and maximal fat-oxidation (LIPOX(max)) values in sedentary, but otherwise healthy, young men. Maximal oxygen intake was assessed in 23 subjects, using a progressive maximal cycle ergometer test. Twelve sedentary males (aged 20.5±1.0 years) whose directly measured maximal aerobic power (MAP) values were lower than their theoretical maximal values (tMAP) were selected from this group. These individuals performed, in random sequence, three submaximal graded exercise tests, separated by three-day intervals; work rates were based on the tMAP in one test and on MAP in the remaining two. The third test was used to assess the reliability of data. Heart rate, respiratory parameters, blood lactate, the crossover point and LIPOX(max) values were measured during each of these tests. The crossover point and LIPOX(max) values were significantly lower when the testing protocol was based on tMAP rather than on MAP (P<0.001). Responses at 30, 40, 50 and 60% of maximal aerobic power also differed significantly when work rates were derived from tMAP rather than MAP (P<0.001). During the first 5 min of recovery, EPOC(5 min) and blood lactate were significantly correlated (r=0.89; P<0.001). Our data show that, to assess the crossover point and LIPOX(max) values for research purposes, the protocol must be based on the measured MAP rather than on a theoretical value. Such a determination should improve individualization of training for initially sedentary subjects. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
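
    For background, fat and carbohydrate oxidation rates in such protocols are typically estimated from indirect calorimetry with stoichiometric equations of the Frayn type, LIPOX(max) being the power output at which the fat-oxidation rate peaks and the crossover point the intensity at which the carbohydrate contribution to energy exceeds the fat contribution (standard background, not the paper's specific computation):

    $$\dot m_{\mathrm{fat}}\ (\mathrm{g/min})\approx 1.67\,\dot V\mathrm{O}_2-1.67\,\dot V\mathrm{CO}_2,\qquad \dot m_{\mathrm{CHO}}\ (\mathrm{g/min})\approx 4.55\,\dot V\mathrm{CO}_2-3.21\,\dot V\mathrm{O}_2,$$

    with gas exchange expressed in L/min.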

  5. Inclusive Fitness Maximization:An Axiomatic Approach

    OpenAIRE

    Okasha, Samir; Weymark, John; Bossert, Walter

    2014-01-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of qu...

  6. Selective maintenance of multi-state systems with structural dependence

    International Nuclear Information System (INIS)

    Dao, Cuong D.; Zuo, Ming J.

    2017-01-01

    This paper studies the selective maintenance problem for multi-state systems with structural dependence. Each component can be in one of multiple working levels and several maintenance actions are possible to a component in a maintenance break. The components structurally form multiple hierarchical levels and dependence groups. A directed graph is used to represent the precedence relations of components in the system. A selective maintenance optimization model is developed to maximize the system reliability in the next mission under time and cost constraints. A backward search algorithm is used to determine the assembly sequence for a selective maintenance scenario. The maintenance model helps maintenance managers in determining the best combination of maintenance activities to maximize the probability of successfully completing the next mission. Examples showing the use of the proposed method are presented. - Highlights: • A selective maintenance model for multi-state systems is proposed considering both economic and structural dependence. • Structural dependence is modeled as precedence relationship when disassembling components for maintenance. • Resources for disassembly and maintenance are evaluated using a backward search algorithm. • Maintenance strategies with and without structural dependence are analyzed. • Ignoring structural dependence may lead to over-estimation of system reliability.

  7. Reliability of pedigree-based and genomic evaluations in selected populations

    NARCIS (Netherlands)

    Gorjanc, G.; Bijma, P.; Hickey, J.M.

    2015-01-01

    Background: Reliability is an important parameter in breeding. It measures the precision of estimated breeding values (EBV) and, thus, potential response to selection on those EBV. The precision of EBV is commonly measured by relating the prediction error variance (PEV) of EBV to the base population
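
    The "precision of EBV" mentioned here is conventionally expressed through the prediction error variance as (standard quantitative-genetics relation, stated for orientation):

    $$r^{2}=1-\frac{\mathrm{PEV}}{\sigma_{A}^{2}},$$

    where σ_A² is the base-population additive genetic variance.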

  8. The reliability of randomly selected final year pharmacy students in ...

    African Journals Online (AJOL)

    Employing ANOVA, factorial experimental analysis, and the theory of error, reliability studies were conducted on the assessment of the drug product chloroquine phosphate tablets. The G–Study employed equal numbers of the factors for uniform control, and involved three analysts (randomly selected final year Pharmacy ...

  9. Material Selection for Cable Gland to Improved Reliability of the High-hazard Industries

    Science.gov (United States)

    Vashchuk, S. P.; Slobodyan, S. M.; Deeva, V. S.; Vashchuk, D. S.

    2018-01-01

    The sealed cable glands (SCG) are available to ensure the safe connection of sheathed single wires for hazardous production facilities (nuclear power plants and others), the same as pilot cables, control cables, radio-frequency cables, et al. In this paper, we investigate the specifics of the material selection of SCG with the express aim of hazardous man-made facilities. We discuss the safe working conditions for cable glands. The research indicates that sintered powdered-metal cables provide reliability growth due to their properties. A number of studies have demonstrated the verification of material selection. On the face of it, we make findings indicating that double-glazed sealed units could enhance reliability. We evaluated sample reliability under fire conditions, seismic load, and pressure containment failure. We used mineral-insulated thermocouple cable samples.

  10. Feasibility and reliability of digital imaging for estimating food selection and consumption from students' packed lunches.

    Science.gov (United States)

    Taylor, Jennifer C; Sutter, Carolyn; Ontai, Lenna L; Nishina, Adrienne; Zidenberg-Cherr, Sheri

    2018-01-01

    Although increasing attention is placed on the quality of foods in children's packed lunches, few studies have examined the capacity of observational methods to reliably determine both what is selected and consumed from these lunches. The objective of this project was to assess the feasibility and inter-rater reliability of digital imaging for determining selection and consumption from students' packed lunches, by adapting approaches previously applied to school lunches. Study 1 assessed feasibility and reliability of data collection among a sample of packed lunches (n = 155), while Study 2 further examined reliability in a larger sample of packed (n = 386) as well as school (n = 583) lunches. Based on the results from Study 1, it was feasible to collect and code most items in packed lunch images; missing data were most commonly attributed to packaging that limited visibility of contents. Across both studies, there was satisfactory reliability for determining food types selected, quantities selected, and quantities consumed in the eight food categories examined (weighted kappa coefficients 0.68-0.97 for packed lunches, 0.74-0.97 for school lunches), with lowest reliability for estimating condiments and meats/meat alternatives in packed lunches. In extending methods predominately applied to school lunches, these findings demonstrate the capacity of digital imaging for the objective estimation of selection and consumption from both school and packed lunches. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research.

    Science.gov (United States)

    Koo, Terry K; Li, Mae Y

    2016-06-01

    Intraclass correlation coefficient (ICC) is a widely used reliability index in test-retest, intrarater, and interrater reliability analyses. This article introduces the basic concept of ICC in the context of reliability analysis. There are 10 forms of ICCs. Because each form involves distinct assumptions in their calculation and will lead to different interpretations, researchers should explicitly specify the ICC form they used in their calculation. A thorough review of the research design is needed in selecting the appropriate form of ICC to evaluate reliability. The best practice of reporting ICC should include software information, "model," "type," and "definition" selections. When coming across an article that includes ICC, readers should first check whether information about the ICC form has been reported and if an appropriate ICC form was used. Based on the 95% confidence interval of the ICC estimate, values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.90 are indicative of poor, moderate, good, and excellent reliability, respectively. This article provides a practical guideline for clinical researchers to choose the correct form of ICC and suggests the best practice of reporting ICC parameters in scientific publications. This article also gives readers an appreciation for what to look for when coming across ICC while reading an article.
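
    As a companion to the guideline, the sketch below computes one common form, the two-way random-effects, absolute-agreement, single-measurement ICC (often written ICC(2,1)), directly from ANOVA mean squares and applies the interpretation bands quoted above. The data matrix is hypothetical, and the bands are applied to the point estimate for brevity, whereas the guideline bases interpretation on the 95% confidence interval.

```python
# ICC(2,1): two-way random effects, absolute agreement, single measurement,
# from ANOVA mean squares. Rows = subjects, columns = raters/sessions.
# Data are hypothetical; interpretation bands follow the guideline above.
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = ((x - x.mean(axis=1, keepdims=True)
                - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = resid / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def band(icc):
    return ("poor" if icc < 0.5 else "moderate" if icc < 0.75
            else "good" if icc < 0.9 else "excellent")

scores = np.array([[9., 2., 5.], [6., 1., 3.], [8., 4., 6.],
                   [7., 1., 2.], [10., 5., 6.], [6., 2., 4.]])
icc = icc_2_1(scores)
print(f"ICC(2,1) = {icc:.2f} ({band(icc)})")
```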

  12. A strong response to selection on mass-independent maximal metabolic rate without a correlated response in basal metabolic rate

    DEFF Research Database (Denmark)

    Wone, B W M; Madsen, Per; Donovan, E R

    2015-01-01

    Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selection...... on mass-independent basal metabolic rate (BMR). Then we tested for responses to selection in MMR and correlated responses to selection in BMR. In other lines, we antagonistically selected for mice with a combination of high mass-independent MMR and low mass-independent BMR. All selection protocols...... and data analyses included body mass as a covariate, so effects of selection on the metabolic rates are mass adjusted (that is, independent of effects of body mass). The selection lasted eight generations. Compared with controls, MMR was significantly higher (11.2%) in lines selected for increased MMR...

  13. Inclusive fitness maximization: An axiomatic approach.

    Science.gov (United States)

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-07

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Intersession reliability of self-selected and narrow stance balance testing in older adults.

    Science.gov (United States)

    Riemann, Bryan L; Piersol, Kelsey

    2017-10-01

    Despite the common practice of using force platforms to assess balance of older adults, few investigations have examined the reliability of postural screening tests in this population. We sought to determine the test-retest reliability of self-selected and narrow stance balance testing with eyes open and eyes closed in healthy older adults. Thirty older adults (>65 years) completed 45 s trials of eyes open and eyes closed stability tests using self-selected and narrow stances on two separate days (1.9 ± .7 days). Average medial-lateral center of pressure velocity was computed. The ICC results ranged from .74 to .86, and no significant systematic changes were found between days (P > .05). Test-retest reliability of eyes open and closed balance testing using self-selected and narrow stances in older adults was established, which should provide a foundation for the development of fall risk screening tests.

  15. Making literature reviews more reliable through application of lessons from systematic reviews.

    Science.gov (United States)

    Haddaway, N R; Woodcock, P; Macura, B; Collins, A

    2015-12-01

    Review articles can provide valuable summaries of the ever-increasing volume of primary research in conservation biology. Where findings may influence important resource-allocation decisions in policy or practice, there is a need for a high degree of reliability when reviewing evidence. However, traditional literature reviews are susceptible to a number of biases during the identification, selection, and synthesis of included studies (e.g., publication bias, selection bias, and vote counting). Systematic reviews, pioneered in medicine and translated into conservation in 2006, address these issues through a strict methodology that aims to maximize transparency, objectivity, and repeatability. Systematic reviews will always be the gold standard for reliable synthesis of evidence. However, traditional literature reviews remain popular and will continue to be valuable where systematic reviews are not feasible. Where traditional reviews are used, lessons can be taken from systematic reviews and applied to traditional reviews in order to increase their reliability. Certain key aspects of systematic review methods that can be used in a context-specific manner in traditional reviews include focusing on mitigating bias; increasing transparency, consistency, and objectivity, and critically appraising the evidence and avoiding vote counting. In situations where conducting a full systematic review is not feasible, the proposed approach to reviewing evidence in a more systematic way can substantially improve the reliability of review findings, providing a time- and resource-efficient means of maximizing the value of traditional reviews. These methods are aimed particularly at those conducting literature reviews where systematic review is not feasible, for example, for graduate students, single reviewers, or small organizations. © 2015 Society for Conservation Biology.

  16. [Employees in high-reliability organizations: systematic selection of personnel as a final criterion].

    Science.gov (United States)

    Oubaid, V; Anheuser, P

    2014-05-01

    Employees represent an important safety factor in high-reliability organizations. The combination of clear organizational structures, a nonpunitive safety culture, and psychological personnel selection guarantee a high level of safety. The cockpit personnel selection process of a major German airline is presented in order to demonstrate a possible transferability into medicine and urology.

  17. Revenue-Maximizing Radio Access Technology Selection with Net Neutrality Compliance in Heterogeneous Wireless Networks

    Directory of Open Access Journals (Sweden)

    Elissar Khloussy

    2018-01-01

    Full Text Available The net neutrality principle states that users should have equal access to all Internet content and that Internet Service Providers (ISPs) should not practice differentiated treatment on any of the Internet traffic. While net neutrality aims to restrain any kind of discrimination, it also grants exemption to a certain category of traffic known as specialized services (SS), by allowing the ISP to dedicate part of the resources for the latter. In this work, we consider a heterogeneous LTE/WiFi wireless network and we investigate revenue-maximizing Radio Access Technology (RAT) selection strategies that are net neutrality-compliant, with exemption granted to SS traffic. Our objective is to find out how the bandwidth reservation for SS traffic would be made in a way that allows maximizing the revenue while being in compliance with net neutrality and how the choice of the ratio of reserved bandwidth would affect the revenue. The results show that reserving bandwidth for SS traffic in one RAT (LTE) can achieve higher revenue. On the other hand, when the capacity is reserved across both LTE and WiFi, higher social benefit in terms of number of admitted users can be realized, as well as lower blocking probability for the Internet access traffic.

  18. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....

  19. Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events

    Science.gov (United States)

    DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Hydrometeorological events (i.e. floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of the phenomenon. In these forecasts, the probability of the event, over some lead time, is estimated based on some model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform some risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Due to this requisite for reliable forecasts to perform effective risk management, this study takes a renewed look at reliability assessment in event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (Brier Score, Reliability Diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-Binomial distribution is the most exact model available for the verification framework, and therefore maximizes one's ability to distinguish between a reliable and unreliable forecast. Application of this verification system was also examined within a real forecasting case study, highlighting the additional statistical power provided with the use of the Poisson-Binomial distribution.
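
    A minimal sketch of the counting idea described above: under a perfectly reliable forecast, the number of events observed across a set of forecasts follows a Poisson-Binomial distribution with the issued probabilities as parameters, which can be built by a simple convolution and used for an exact test. The probabilities and outcomes below are hypothetical.

```python
# Poisson-Binomial check of forecast reliability: the count of observed events
# under a reliable forecast is Poisson-Binomial with the forecast probabilities
# as parameters. Forecasts and observations below are hypothetical.
import numpy as np

def poisson_binomial_pmf(probs):
    pmf = np.array([1.0])
    for p in probs:
        pmf = np.convolve(pmf, [1.0 - p, p])   # add one Bernoulli trial
    return pmf

forecast_p = [0.1, 0.3, 0.8, 0.5, 0.2, 0.7, 0.4, 0.9]
observed   = [0,   0,   1,   1,   0,   1,   0,   1]   # 4 events occurred

pmf = poisson_binomial_pmf(forecast_p)
k = sum(observed)
# Exact two-sided p-value: total probability of counts no more likely than k.
p_value = pmf[pmf <= pmf[k]].sum()
print(f"P(K = {k}) = {pmf[k]:.3f}, two-sided p = {p_value:.3f}")
```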

  20. Tri-maximal vs. bi-maximal neutrino mixing

    International Nuclear Information System (INIS)

    Scott, W.G.

    2000-01-01

    It is argued that data from atmospheric and solar neutrino experiments point strongly to tri-maximal or bi-maximal lepton mixing. While ('optimised') bi-maximal mixing gives an excellent a posteriori fit to the data, tri-maximal mixing is an a priori hypothesis, which is not excluded, taking account of terrestrial matter effects

  1. Optimal design of water supply networks for enhancing seismic reliability

    International Nuclear Information System (INIS)

    Yoo, Do Guen; Kang, Doosun; Kim, Joong Hoon

    2016-01-01

    The goal of the present study is to construct a reliability evaluation model of a water supply system taking seismic hazards into consideration and to present techniques to enhance the hydraulic reliability of the design. To maximize seismic reliability with limited budgets, an optimal design model is developed using an optimization technique called harmony search (HS). The model is applied to actual water supply systems to determine pipe diameters that can maximize seismic reliability. The reliabilities of the optimal design and existing designs were compared and analyzed. The optimal design would both enhance reliability by approximately 8.9% and have a construction cost approximately 1.3% lower than the current pipe construction cost. In addition, reinforcing the durability of individual pipes without considering the system produced ineffective results in terms of both cost and reliability. Therefore, to increase the supply ability of the entire system, optimized pipe diameter combinations should be derived. Hydraulic stability under normal conditions and available demand under abnormal conditions could be maximally secured if systems were configured through the optimal design. - Highlights: • We construct a seismic reliability evaluation model of a water supply system. • We present techniques to enhance hydraulic reliability at the design stage. • The harmony search algorithm is applied in the optimal design process. • The proposed optimal design improves reliability by about 9%. • Deriving optimized pipe diameter combinations is indispensable.
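
    A bare-bones harmony search loop of the kind named in the abstract is sketched below; the candidate diameters and the reliability and cost functions are hypothetical placeholders rather than the study's hydraulic and seismic models, so the sketch shows only the algorithmic skeleton.

```python
# Skeleton of harmony search (HS) for choosing pipe diameters that maximize a
# reliability score under a budget. Objective and cost are hypothetical
# placeholders, not a hydraulic/seismic model.
import random

DIAMETERS = [100, 150, 200, 250, 300, 350, 400]   # candidate diameters (mm)
N_PIPES, HMS, HMCR, PAR, ITERS, BUDGET = 8, 10, 0.9, 0.3, 2000, 14000

def cost(sol):            # placeholder: cost grows with diameter
    return sum(5 * d for d in sol)

def reliability(sol):     # placeholder: larger pipes score higher
    return sum(sol) / (max(DIAMETERS) * len(sol))

def fitness(sol):         # penalize budget violations
    return reliability(sol) - 1e-3 * max(0.0, cost(sol) - BUDGET)

random.seed(1)
memory = [[random.choice(DIAMETERS) for _ in range(N_PIPES)] for _ in range(HMS)]
for _ in range(ITERS):
    new = []
    for j in range(N_PIPES):
        if random.random() < HMCR:                 # harmony memory consideration
            val = random.choice(memory)[j]
            if random.random() < PAR:              # pitch adjustment
                i = DIAMETERS.index(val) + random.choice((-1, 1))
                val = DIAMETERS[min(max(i, 0), len(DIAMETERS) - 1)]
        else:                                      # random selection
            val = random.choice(DIAMETERS)
        new.append(val)
    worst = min(range(HMS), key=lambda i: fitness(memory[i]))
    if fitness(new) > fitness(memory[worst]):
        memory[worst] = new

best = max(memory, key=fitness)
print(best, round(reliability(best), 3), cost(best))
```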

  2. Gamma prior distribution selection for Bayesian analysis of failure rate and reliability

    International Nuclear Information System (INIS)

    Waler, R.A.; Johnson, M.M.; Waterman, M.S.; Martz, H.F. Jr.

    1977-01-01

    It is assumed that the phenomenon under study is such that the time-to-failure may be modeled by an exponential distribution with failure-rate parameter, lambda. For Bayesian analyses of the assumed model, the family of gamma distributions provides conjugate prior models for lambda. Thus, an experimenter needs to select a particular gamma model to conduct a Bayesian reliability analysis. The purpose of this paper is to present a methodology which can be used to translate engineering information, experience, and judgment into a choice of a gamma prior distribution. The proposed methodology assumes that the practicing engineer can provide percentile data relating to either the failure rate or the reliability of the phenomenon being investigated. For example, the methodology will select the gamma prior distribution which conveys an engineer's belief that the failure rate, lambda, simultaneously satisfies the probability statements, P(lambda less than 1.0 × 10⁻³) = 0.50 and P(lambda less than 1.0 × 10⁻⁵) = 0.05. That is, two percentiles provided by an engineer are used to determine a gamma prior model which agrees with the specified percentiles. For those engineers who prefer to specify reliability percentiles rather than the failure-rate percentiles illustrated above, one can use the induced negative-log gamma prior distribution which satisfies the probability statements, P(R(t₀) less than 0.99) = 0.50 and P(R(t₀) less than 0.99999) = 0.95 for some operating time t₀. Also, the paper includes graphs for selected percentiles which assist an engineer in applying the methodology.
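
    The percentile-matching step described above can be reproduced numerically by finding the gamma shape and scale whose CDF passes through the two stated points; the brief SciPy sketch below is an independent illustration of the idea, not the report's own procedure.

```python
# Fit a gamma prior for a failure rate lambda so that
# P(lambda < 1e-3) = 0.50 and P(lambda < 1e-5) = 0.05 (the example above).
from scipy.optimize import brentq
from scipy.stats import gamma

target_ratio = 1e-5 / 1e-3           # ratio of the 5th percentile to the median

def ratio_gap(a):                     # the quantile ratio depends on shape only
    return gamma.ppf(0.05, a) / gamma.ppf(0.50, a) - target_ratio

shape = brentq(ratio_gap, 0.05, 5.0)            # solve for the shape
scale = 1e-3 / gamma.ppf(0.50, shape)           # then match the median
print(f"shape = {shape:.3f}, scale = {scale:.3e}")
print("check:", gamma.cdf(1e-3, shape, scale=scale),
      gamma.cdf(1e-5, shape, scale=scale))
```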

  3. The Impact Analysis of Psychological Reliability of Population Pilot Study For Selection of Particular Reliable Multi-Choice Item Test in Foreign Language Research Work

    Directory of Open Access Journals (Sweden)

    Seyed Hossein Fazeli

    2010-10-01

    Full Text Available The purpose of the research described in the current study is to examine psychological reliability, its importance and application, and in particular to analyse the impact of the psychological reliability of a population pilot study on the selection of a reliable multiple-choice item test in foreign language research work. The population for subject recruitment comprised all second-semester undergraduate students (both male and female) at a large university in Iran who study English as a compulsory paper. In Iran, English is taught as a foreign language.

  4. Gamma prior distribution selection for Bayesian analysis of failure rate and reliability

    International Nuclear Information System (INIS)

    Waller, R.A.; Johnson, M.M.; Waterman, M.S.; Martz, H.F. Jr.

    1976-07-01

    It is assumed that the phenomenon under study is such that the time-to-failure may be modeled by an exponential distribution with failure rate lambda. For Bayesian analyses of the assumed model, the family of gamma distributions provides conjugate prior models for lambda. Thus, an experimenter needs to select a particular gamma model to conduct a Bayesian reliability analysis. The purpose of this report is to present a methodology that can be used to translate engineering information, experience, and judgment into a choice of a gamma prior distribution. The proposed methodology assumes that the practicing engineer can provide percentile data relating to either the failure rate or the reliability of the phenomenon being investigated. For example, the methodology will select the gamma prior distribution which conveys an engineer's belief that the failure rate lambda simultaneously satisfies the probability statements, P(lambda less than 1.0 x 10^-3) equals 0.50 and P(lambda less than 1.0 x 10^-5) equals 0.05. That is, two percentiles provided by an engineer are used to determine a gamma prior model which agrees with the specified percentiles. For those engineers who prefer to specify reliability percentiles rather than the failure rate percentiles illustrated above, it is possible to use the induced negative-log gamma prior distribution which satisfies the probability statements, P(R(t_0) less than 0.99) equals 0.50 and P(R(t_0) less than 0.99999) equals 0.95, for some operating time t_0. The report also includes graphs for selected percentiles which assist an engineer in applying the procedure. 28 figures, 16 tables

  5. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    Full Text Available In the present paper, we have considered the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and have discussed two different models. In the first model the reliabilities of the subsystems are considered as different objectives. In the second model the cost and the time spent on repairing the components are considered as two different objectives. These two models are formulated as multi-objective Nonlinear Programming Problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model: we define the membership functions of each objective function, transform them into equivalent linear membership functions by a first-order Taylor series and, finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  6. Towards higher reliability of CMS computing facilities

    International Nuclear Information System (INIS)

    Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A

    2012-01-01

    The CMS experiment has adopted a computing system where resources are distributed worldwide across more than 50 sites. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and their capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with the description of monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact on improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites on which to conduct workflows, in order to maximize workflow efficiency. The performance of the sites against these tests during the first years of LHC running is also reviewed.

  7. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation.

    Science.gov (United States)

    Chen, Qing; Zhang, Jinxiu; Hu, Ze

    2017-02-23

    This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The periodicity and predictability of the satellites' relative positions are incorporated in the link cost metric, which provides a selection criterion for choosing the most reliable data routing paths. In addition, a cooperative work model with reliability is proposed for emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCN scenario is tested through numerical simulations of topology stability in terms of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime.

  8. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation

    Directory of Open Access Journals (Sweden)

    Qing Chen

    2017-02-01

    Full Text Available This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The periodicity and predictability of the satellites' relative positions are incorporated in the link cost metric, which provides a selection criterion for choosing the most reliable data routing paths. In addition, a cooperative work model with reliability is proposed for emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCN scenario is tested through numerical simulations of topology stability in terms of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime.

  9. MHTGR thermal performance envelopes: Reliability by design

    International Nuclear Information System (INIS)

    Etzel, K.T.; Howard, W.W.; Zgliczynski, J.B.

    1992-05-01

    This document discusses thermal performance envelopes, which are used to specify steady-state design requirements for the systems of the Modular High Temperature Gas-Cooled Reactor to maximize plant performance reliability with optimized design. The thermal performance envelopes are constructed around the expected operating point, accounting for uncertainties in actual plant as-built parameters and plant operation. The components are then designed to perform successfully at all points within the envelope. As a result, plant reliability is maximized by accounting for component thermal performance variation in the design. The design is optimized by providing a means to determine required margins in a disciplined and visible fashion.

  10. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    Science.gov (United States)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.

  11. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Korea, Republic of)

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.
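
    The sketch below is a generic mutual-information ranking with a collinearity filter, intended only to illustrate the idea behind such selection schemes; it is not the authors' MIMRCV algorithm, and the correlation threshold and synthetic data are assumptions.

      import numpy as np
      from sklearn.feature_selection import mutual_info_regression

      def mi_select(X, y, n_keep=10, collinearity_threshold=0.9):
          # rank descriptors by mutual information with the property of interest
          mi = mutual_info_regression(X, y)
          order = np.argsort(mi)[::-1]
          selected = []
          for j in order:
              # skip a candidate that is nearly collinear with an already selected descriptor
              if any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > collinearity_threshold
                     for k in selected):
                  continue
              selected.append(j)
              if len(selected) == n_keep:
                  break
          return selected

      # tiny synthetic demonstration: 100 compounds, 20 descriptors
      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 20))
      y = X[:, 0] ** 2 + X[:, 3] + rng.normal(scale=0.1, size=100)
      print(mi_select(X, y, n_keep=5))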

  12. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.

  13. Inter-tester reliability of selected clinical tests for long-lasting temporomandibular disorders.

    Science.gov (United States)

    Julsvoll, Elisabeth Heggem; Vøllestad, Nina Køpke; Opseth, Gro; Robinson, Hilde Stendal

    2017-09-01

    Clinical tests used to examine patients with temporomandibular disorders vary in methodological quality, and some are not tested for reliability. The purpose of this cross-sectional study was to evaluate inter-tester reliability of clinical tests and a cluster of tests used to examine patients with long-lasting temporomandibular disorders. Forty patients with pain in the temporomandibular area treated by health professionals were included. They were aged 18-70 years and had 65 symptomatic (33 right/32 left) and 15 asymptomatic joints. Two manual therapists examined all participants with selected tests. Percentage agreement and the kappa coefficient (k) with 95% confidence interval (CI) were used to evaluate the tests with categorical outcomes. For tests with continuous outcomes, the relative inter-tester reliability was assessed by the intraclass correlation coefficient (ICC(3,1), 95% CI) and the absolute reliability was calculated by the smallest detectable change (SDC). The best reliability among single tests was found for the dental stick test, the joint-sound test (k = 0.80-1.0) and range of mouth-opening (ICC(3,1) (95% CI) = 0.97 (0.95-0.98) and SDC = 4 mm). The reliability of the cluster of tests was excellent with both four and five positive tests out of seven. The reliability was good to excellent for the clinical tests and the cluster of tests when performed by experienced therapists. The tests are feasible for use in the clinical setting. They require no advanced equipment and are easy to perform.
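
    For readers unfamiliar with the agreement statistics quoted above, here is a minimal sketch of Cohen's kappa for a dichotomous test and of a smallest detectable change derived from an ICC, using the common SDC = 1.96 x sqrt(2) x SEM convention; the data and numbers are hypothetical, not values from the study.

      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      # hypothetical dichotomous outcomes (positive/negative) recorded by two testers
      tester_1 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
      tester_2 = [1, 1, 0, 1, 1, 1, 0, 0, 0, 1]
      kappa = cohen_kappa_score(tester_1, tester_2)

      def smallest_detectable_change(icc, sd_between_subjects):
          sem = sd_between_subjects * np.sqrt(1.0 - icc)   # standard error of measurement
          return 1.96 * np.sqrt(2.0) * sem                 # common SDC convention (an assumption here)

      print(kappa, smallest_detectable_change(icc=0.97, sd_between_subjects=11.0))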

  14. Maximizing Crossbred Performance through Purebred Genomic Selection

    DEFF Research Database (Denmark)

    Esfandyari, Hadi; Sørensen, Anders Christian; Bijma, Pieter

    Genomic selection (GS) can be used to select purebreds for crossbred performance (CP). As dominance is the likely genetic basis of heterosis, explicitly including dominance in the GS model may be beneficial for selection of purebreds for CP, when estimating allelic effects from pure line data. Th...

  15. A mechanism of extreme growth and reliable signaling in sexually selected ornaments and weapons.

    Science.gov (United States)

    Emlen, Douglas J; Warren, Ian A; Johns, Annika; Dworkin, Ian; Lavine, Laura Corley

    2012-08-17

    Many male animals wield ornaments or weapons of exaggerated proportions. We propose that increased cellular sensitivity to signaling through the insulin/insulin-like growth factor (IGF) pathway may be responsible for the extreme growth of these structures. We document how rhinoceros beetle horns, a sexually selected weapon, are more sensitive to nutrition and more responsive to perturbation of the insulin/IGF pathway than other body structures. We then illustrate how enhanced sensitivity to insulin/IGF signaling in a growing ornament or weapon would cause heightened condition sensitivity and increased variability in expression among individuals--critical properties of reliable signals of male quality. The possibility that reliable signaling arises as a by-product of the growth mechanism may explain why trait exaggeration has evolved so many different times in the context of sexual selection.

  16. Reliability of shade selection using an intraoral spectrophotometer.

    Science.gov (United States)

    Witkowski, Siegbert; Yajima, Nao-Daniel; Wolkewitz, Martin; Strub, Jorge R

    2012-06-01

    In this study, we evaluate the accuracy and reproducibility of human tooth shade selection using a digital spectrophotometer. Variability among examiners and illumination conditions was tested for possible influence on measurement reproducibility. Fifteen intact anterior teeth of 15 subjects were evaluated for their shade using a digital spectrophotometer (Crystaleye, Olympus, Tokyo, Japan) by two examiners under the same light conditions representing a dental laboratory situation. Each examiner performed the measurement ten times on the labial surface of each tooth, covering three evaluation sides (cervical, body, incisal). International Commission on Illumination color space values for L* (lightness), a* (red/green), and b* (yellow/blue) were obtained from each evaluated side. Examiner 2 repeated the measurements of the same subjects under different light conditions (i.e., a dental unit with a chairside lamp). To describe measurement precision, the mean color difference from the mean metric was used. The computed confidence interval (CI) value 5.228 (4.6598-5.8615) reflected the validity of the measurements. Least square mean analysis of the values obtained by examiners 1 and 2 or under different illumination conditions revealed no statistically significant differences (CI = 95%). Within the limits of the present study, the accuracy and reproducibility of dental shade selection using the tested spectrophotometer with respect to examiner and illumination conditions reflected the reliability of this device. This study suggests that the tested spectrophotometer can be recommended for the clinical application of shade selection.

  17. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    Full Text Available In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom, otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.

  18. Derivative pricing based on local utility maximization

    OpenAIRE

    Jan Kallsen

    2002-01-01

    This paper discusses a new approach to contingent claim valuation in general incomplete market models. We determine the neutral derivative price which occurs if investors maximize their local utility and if derivative demand and supply are balanced. We also introduce the sensitivity process of a contingent claim. This process quantifies the reliability of the neutral derivative price and it can be used to construct price bounds. Moreover, it allows one to calibrate market models in order to be co...

  19. Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters

    Science.gov (United States)

    Zubair, Suleiman; Fisal, Norsheila

    2014-01-01

    The need for implementing reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by their low overhead and efficiency in reliable data transfer in traditional wireless sensor networks, this potential is yet to be utilized for viable routing options in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by the creation of virtual clusters based on spectrum correlation, from which the next hop choice is made based on link quality. The design maximizes the use of idle listening and receiver contention prioritization for energy efficiency, the avoidance of routing hot spots and stability. The validation result, which closely follows the simulation result, shows that the developed scheme makes greater advancement towards the sink than the usual route-selection decisions of relevant ad hoc on-demand distance vector operations, while ensuring channel quality. Further simulation results have shown the enhanced reliability, lower latency and energy efficiency of the presented scheme. PMID:24854362

  20. Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters

    Directory of Open Access Journals (Sweden)

    Suleiman Zubair

    2014-05-01

    Full Text Available The need for implementing reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by their low overhead and efficiency in reliable data transfer in traditional wireless sensor networks, this potential is yet to be utilized for viable routing options in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by the creation of virtual clusters based on spectrum correlation, from which the next hop choice is made based on link quality. The design maximizes the use of idle listening and receiver contention prioritization for energy efficiency, the avoidance of routing hot spots and stability. The validation result, which closely follows the simulation result, shows that the developed scheme makes greater advancement towards the sink than the usual route-selection decisions of relevant ad hoc on-demand distance vector operations, while ensuring channel quality. Further simulation results have shown the enhanced reliability, lower latency and energy efficiency of the presented scheme.

  1. Phenomenology of maximal and near-maximal lepton mixing

    International Nuclear Information System (INIS)

    Gonzalez-Garcia, M. C.; Pena-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.

    2001-01-01

    The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 - 2 sin^2(θ_ex) and quantify the present experimental status for |ε|. The constraint on ν_e mixing comes from solar neutrino experiments: we find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10^-8 eV^2 ≲ Δm^2 ≲ 10^-7 eV^2. In the mass ranges Δm^2 ≳ 1.5x10^-5 eV^2 and 4x10^-10 eV^2 ≲ Δm^2 ≲ 10^-7 eV^2, the full interval of |ε| is allowed. We also discuss ν_e mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.

  2. Evaluation and comparison of alternative fleet-level selective maintenance models

    International Nuclear Information System (INIS)

    Schneider, Kellie; Richard Cassady, C.

    2015-01-01

    Fleet-level selective maintenance refers to the process of identifying the subset of maintenance actions to perform on a fleet of repairable systems when the maintenance resources allocated to the fleet are insufficient for performing all desirable maintenance actions. The original fleet-level selective maintenance model is designed to maximize the probability that all missions in a future set are completed successfully. We extend this model in several ways. First, we consider a cost-based optimization model and show that a special case of this model maximizes the expected value of the number of successful missions in the future set. We also consider the situation in which one or more of the future missions may be canceled. These models and the original fleet-level selective maintenance optimization models are nonlinear. Therefore, we also consider an alternative model in which the objective function can be linearized. We show that the alternative model is a good approximation to the other models. - Highlights: • Investigate nonlinear fleet-level selective maintenance optimization models. • A cost based model is used to maximize the expected number of successful missions. • Another model is allowed to cancel missions if reliability is sufficiently low. • An alternative model has an objective function that can be linearized. • We show that the alternative model is a good approximation to the other models

  3. Addressing the reliability issues of intelligent well systems

    International Nuclear Information System (INIS)

    Drakeley, Brian; Douglas, Neil

    2000-01-01

    New Technology receives its fair share of 'risk aversion' in both good and not-so-good economic times from oil and gas operators evaluating application opportunities. This paper presents details of a strategy developed and implemented to bring to market an Intelligent Well system designed from day one to maximize system reliability, while offering the customer a high degree of choice in system functionality. A team of engineers and scientists skilled in all aspects of Reliability Analysis and Assessment analyzed the Intelligent Well system under development, gathered reliability performance data from other sources and, using various analytical techniques, developed matrices of system survival probability estimates for various scenarios. Interaction with the system and design engineers has been an on-going process as designs are modified to maximize reliability predictions and extensive qualification test programs are developed from the component level to the overall system level. The techniques used in the development project will be presented. A comparative model now exists that facilitates the evaluation of future design alternatives and also contains databases that can be readily updated with actual field data, etc. (author)

  4. Maximizing crossbred performance through purebred genomic selection

    DEFF Research Database (Denmark)

    Esfandyari, Hadi; Sørensen, Anders Christian; Bijma, Piter

    2015-01-01

    Background In livestock production, many animals are crossbred, with two distinct advantages: heterosis and breed complementarity. Genomic selection (GS) can be used to select purebred parental lines for crossbred performance (CP). Dominance being the likely genetic basis of heterosis, explicitly...

  5. Peer-review for selection of oral presentations for conferences: Are we reliable?

    Science.gov (United States)

    Deveugele, Myriam; Silverman, Jonathan

    2017-11-01

    Although peer-review for journal submission, grant applications and conference submissions has been called 'a cornerstone of science', and even 'the gold standard for evaluating scientific merit', publications on this topic remain scarce. Research that has investigated peer-review reveals several issues and criticisms concerning bias, poor quality review, unreliability and inefficiency. The most important weakness of the peer review process is the inconsistency between reviewers, leading to inadequate inter-rater reliability. This paper reports the reliability of ratings for a large international conference and suggests possible solutions to overcome the problem. In 2016, during the International Conference on Communication in Healthcare, organized by EACH: International Association for Communication in Healthcare, a calibration exercise was proposed and feedback was reported back to the participants of the exercise. Most abstracts, as well as most peer-reviewers, receive and give scores around the median. Contrary to the general assumption that there are high and low scorers, in this group only 3 peer-reviewers could be identified with a high mean score, while 7 had a low mean score. Only 2 reviewers gave only high ratings (4 and 5). Of the eight abstracts included in this exercise, only one abstract received a high mean score and one a low mean score. Nevertheless, both these abstracts received both low and high scores; all other abstracts received all possible scores. Peer-review of submissions for conferences is, in accordance with the literature, unreliable. New and creative methods will be needed to give the participants of a conference what they really deserve: a more reliable selection of the best abstracts. More raters per abstract improve the inter-rater reliability; training of reviewers could be helpful; providing feedback to reviewers can lead to less inter-rater disagreement; fostering negative peer-review (rejecting the inappropriate submissions) rather than a ...

  6. Efficient and reliable characterization of the corticospinal system using transcranial magnetic stimulation.

    Science.gov (United States)

    Kukke, Sahana N; Paine, Rainer W; Chao, Chi-Chao; de Campos, Ana C; Hallett, Mark

    2014-06-01

    The purpose of this study is to develop a method to reliably characterize multiple features of the corticospinal system in a more efficient manner than typically done in transcranial magnetic stimulation studies. Forty transcranial magnetic stimulation pulses of varying intensity were given over the first dorsal interosseous motor hot spot in 10 healthy adults. The first dorsal interosseous motor-evoked potential size was recorded during rest and activation to create recruitment curves. The Boltzmann sigmoidal function was fit to the data, and parameters relating to maximal motor-evoked potential size, curve slope, and stimulus intensity leading to half-maximal motor-evoked potential size were computed from the curve fit. Good to excellent test-retest reliability was found for all corticospinal parameters at rest and during activation with 40 transcranial magnetic stimulation pulses. Through the use of curve fitting, important features of the corticospinal system can be determined with fewer stimuli than typically used for the same information. Determining the recruitment curve provides a basis to understand the state of the corticospinal system and select subject-specific parameters for transcranial magnetic stimulation testing quickly and without unnecessary exposure to magnetic stimulation. This method can be useful in individuals who have difficulty in maintaining stillness, including children and patients with motor disorders.
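
    A minimal curve-fitting sketch of the approach described above, assuming the standard Boltzmann form MEP(s) = MEPmax / (1 + exp((s50 - s)/k)); the stimulus intensities and MEP amplitudes below are synthetic illustrative values, not data from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def boltzmann(s, mep_max, s50, k):
          # recruitment curve: maximal MEP size, half-maximal intensity s50, slope parameter k
          return mep_max / (1.0 + np.exp((s50 - s) / k))

      intensity = np.array([30, 35, 40, 45, 50, 55, 60, 65, 70], dtype=float)  # % stimulator output (synthetic)
      mep = np.array([0.05, 0.08, 0.20, 0.60, 1.40, 2.10, 2.60, 2.80, 2.90])   # mV (synthetic)

      params, _ = curve_fit(boltzmann, intensity, mep, p0=[mep.max(), 50.0, 5.0])
      mep_max, s50, k = params
      print(mep_max, s50, k)   # the three corticospinal parameters read off the fitted curve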

  7. Preparation of methodology for reliability analysis of selected digital segments of the instrumentation and control systems of NPPs. Pt. 1

    International Nuclear Information System (INIS)

    Hustak, S.; Patrik, M.; Babic, P.

    2000-12-01

    The report is structured as follows: (i) Introduction; (ii) Important notions relating to the safety and dependability of software systems for nuclear power plants (selected notions from IAEA Technical Report No. 397; safety aspects of software application; reliability/dependability aspects of digital systems); (iii) Peculiarities of digital systems and ways to a dependable performance of the required function (failures in the system and principles of defence against them; ensuring resistance of digital systems against failures at various hardware and software levels); (iv) The issue of analytical procedures to assess the safety and reliability of safety-related digital systems (safety and reliability assessment at an early stage of the project; general framework of reliability analysis of complex systems; choice of an appropriate quantitative measure of software reliability); (v) Selected qualitative and quantitative information about the reliability of digital systems (the use of relations between the incidence of various types of faults); and (vi) Conclusions and recommendations. (P.A.)

  8. Sensor Selection and Data Validation for Reliable Integrated System Health Management

    Science.gov (United States)

    Garg, Sanjay; Melcher, Kevin J.

    2008-01-01

    For new access to space systems with challenging mission requirements, effective implementation of integrated system health management (ISHM) must be available early in the program to support the design of systems that are safe, reliable, and highly autonomous. Early ISHM availability is also needed to promote design for affordable operations; increased knowledge of functional health provided by ISHM supports construction of more efficient operations infrastructure. Lack of early ISHM inclusion in the system design process could result in retrofitting health management systems to augment and expand operational and safety requirements, thereby increasing program cost and risk due to increased instrumentation and computational complexity. Having the right sensors generating the required data to perform condition assessment, such as fault detection and isolation, with a high degree of confidence is critical to reliable operation of ISHM. Also, the data being generated by the sensors need to be qualified to ensure that the assessments made by the ISHM are not based on faulty data. NASA Glenn Research Center has been developing technologies for sensor selection and data validation as part of the FDDR (Fault Detection, Diagnosis, and Response) element of the Upper Stage project of the Ares 1 launch vehicle development. This presentation will provide an overview of the GRC approach to sensor selection and data quality validation and will present recent results from applications that are representative of the complexity of propulsion systems for access to space vehicles. A brief overview of the sensor selection and data quality validation approaches is provided below. The NASA GRC-developed Systematic Sensor Selection Strategy (S4) is a model-based procedure for systematically and quantitatively selecting an optimal sensor suite to provide overall health assessment of a host system. S4 can be logically partitioned into three major subdivisions: the knowledge base, the down-select ...

  9. Maximal Bell's inequality violation for non-maximal entanglement

    International Nuclear Information System (INIS)

    Kobayashi, M.; Khanna, F.; Mann, A.; Revzen, M.; Santana, A.

    2004-01-01

    Bell's inequality violation (BIQV) for correlations of polarization is studied for a product state of two two-mode squeezed vacuum (TMSV) states. The violation allowed is shown to attain its maximal limit for all values of the squeezing parameter, ζ. We show via an explicit example that a state whose entanglement is not maximal allows maximal BIQV. The Wigner function of the state is non-negative and the average value of either polarization is nil.

  10. Reliability: How much is it worth? Beyond its estimation or prediction, the (net) present value of reliability

    International Nuclear Information System (INIS)

    Saleh, J.H.; Marais, K.

    2006-01-01

    In this article, we link an engineering concept, reliability, to a financial and managerial concept, net present value, by exploring the impact of a system's reliability on its revenue generation capability. The framework here developed for non-repairable systems quantitatively captures the value of reliability from a financial standpoint. We show that traditional present value calculations of engineering systems do not account for system reliability and thus over-estimate a system's worth, which can lead to flawed investment decisions. It is therefore important to involve reliability engineers upfront, before investment decisions are made in technical systems. In addition, the analyses here developed help designers identify the optimal level of reliability that maximizes a system's net present value--the financial value reliability provides to the system minus the cost to achieve this level of reliability. Although we recognize that there are numerous considerations driving the specification of an engineering system's reliability, we contend that the financial analysis of reliability here developed should be made available to decision-makers to support in part, or at least be factored into, the system reliability specification.
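
    As a toy illustration of the trade-off described above (an assumed revenue and cost model, not the authors' framework), the sketch below picks the reliability level that maximizes expected revenue minus the cost of achieving that reliability.

      import numpy as np

      def net_value(reliability, revenue_if_working=100.0, cost_scale=40.0, cost_exponent=3.0):
          expected_revenue = reliability * revenue_if_working            # revenue only if the system survives
          achievement_cost = cost_scale * reliability ** cost_exponent   # cost rises steeply towards 1.0
          return expected_revenue - achievement_cost

      r_grid = np.linspace(0.50, 0.999, 500)
      best = r_grid[np.argmax([net_value(r) for r in r_grid])]
      print(best, net_value(best))   # reliability level with the highest net value under these assumptions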

  11. Thermal performance envelopes for MHTGRs - Reliability by design

    International Nuclear Information System (INIS)

    Etzel, K.T.; Howard, W.W.; Zgliczynski, J.

    1992-01-01

    Thermal performance envelopes are used to specify steady-state design requirements for the systems of the modular high-temperature gas-cooled reactor (MHTGR) to maximize plant performance reliability with optimized design. The thermal performance envelopes are constructed around the expected operating point to account for uncertainties in actual plant as-built parameters and plant operation. The components are then designed to perform successfully at all points within the envelope. As a result, plant reliability is maximized by accounting for component thermal performance variation in the design. The design is optimized by providing a means to determine required margins in a disciplined and visible fashion. This is accomplished by coordinating these requirements with the various system and component designers in the early stages of the design, applying the principles of total quality management. The design is challenged by the more complex requirements associated with a range of operating conditions, but in return, a high probability of delivering reliable performance throughout the plant life is ensured.

  12. Semi-structured interview is a reliable and feasible tool for selection of doctors for general practice specialist training.

    Science.gov (United States)

    Isaksen, Jesper Hesselbjerg; Hertel, Niels Thomas; Kjær, Niels Kristian

    2013-09-01

    In order to optimise the selection process for admission to specialist training in family medicine, we developed a new design for structured applications and selection interviews. The design contains semi-structured interviews, which combine individualised elements from the applications with standardised behaviour-based questions. This paper describes the design of the tool, and offers reflections concerning its acceptability, reliability and feasibility. We used a combined quantitative and qualitative evaluation method. Ratings obtained by the applicants in two selection rounds were analysed for reliability and generalisability using the GENOVA programme. Applicants and assessors were randomly selected for individual semi-structured in-depth interviews. The qualitative data were analysed in accordance with the grounded theory method. Quantitative analysis yielded a high Cronbach's alpha of 0.97 for the first round and 0.90 for the second round, and a G coefficient of the first round of 0.74 and of the second round of 0.40. Qualitative analysis demonstrated high acceptability and fairness, and it improved the assessors' judgment. Applicants reported concerns about loss of personality and some anxiety. The applicants' ability to reflect on their competences was important. The developed selection tool demonstrated an acceptable level of reliability, but only moderate generalisability. The users found that the tool provided a high degree of acceptability; it is a feasible and useful tool for selection of doctors for specialist training if combined with work-based assessment. Studies on the benefits and drawbacks of this tool compared with other selection models are relevant.
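
    For reference, Cronbach's alpha for a rating round can be computed directly from its definition, as in the sketch below; the score matrix is hypothetical and the generalisability (G) coefficient from the GENOVA analysis is not reproduced here.

      import numpy as np

      def cronbach_alpha(scores):
          """scores: applicants x assessors matrix of interview ratings."""
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]                                # number of assessors (items)
          item_var = scores.var(axis=0, ddof=1).sum()        # sum of per-assessor variances
          total_var = scores.sum(axis=1).var(ddof=1)         # variance of applicants' total scores
          return (k / (k - 1.0)) * (1.0 - item_var / total_var)

      ratings = np.array([[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [1, 2, 2]])   # hypothetical ratings
      print(cronbach_alpha(ratings))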

  13. Maximizing and customer loyalty: Are maximizers less loyal?

    Directory of Open Access Journals (Sweden)

    Linda Lai

    2011-06-01

    Full Text Available Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.

  14. A strong response to selection on mass-independent maximal metabolic rate without a correlated response in basal metabolic rate.

    Science.gov (United States)

    Wone, B W M; Madsen, P; Donovan, E R; Labocha, M K; Sears, M W; Downs, C J; Sorensen, D A; Hayes, J P

    2015-04-01

    Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selection on mass-independent basal metabolic rate (BMR). Then we tested for responses to selection in MMR and correlated responses to selection in BMR. In other lines, we antagonistically selected for mice with a combination of high mass-independent MMR and low mass-independent BMR. All selection protocols and data analyses included body mass as a covariate, so effects of selection on the metabolic rates are mass adjusted (that is, independent of effects of body mass). The selection lasted eight generations. Compared with controls, MMR was significantly higher (11.2%) in lines selected for increased MMR, and BMR was slightly, but not significantly, higher (2.5%). Compared with controls, MMR was significantly higher (5.3%) in antagonistically selected lines, and BMR was slightly, but not significantly, lower (4.2%). Analysis of breeding values revealed no positive genetic trend for elevated BMR in high-MMR lines. A weak positive genetic correlation was detected between MMR and BMR. That weak positive genetic correlation supports the aerobic capacity model for the evolution of endothermy in the sense that it fails to falsify a key model assumption. Overall, the results suggest that at least in these mice there is significant capacity for independent evolution of metabolic traits. Whether that is true in the ancestral animals that evolved endothermy remains an important but unanswered question.

  15. Quantized hopfield networks for reliability optimization

    International Nuclear Information System (INIS)

    Nourelfath, Mustapha; Nahas, Nabil

    2003-01-01

    The use of neural networks in the reliability optimization field is rare. This paper presents an application of a recent kind of neural network to a reliability optimization problem for a series system with multiple-choice constraints incorporated at each subsystem, to maximize the system reliability subject to the system budget. The problem is formulated as a nonlinear binary integer programming problem and characterized as an NP-hard problem. Our design of a neural network to solve this problem efficiently is based on a quantized Hopfield network. This network allows us to obtain optimal design solutions very frequently and much more quickly than other Hopfield networks.
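
    The underlying combinatorial problem can be stated compactly; the brute-force sketch below (not the paper's quantized Hopfield network) picks one option per subsystem of a series system so as to maximize reliability within a budget, using hypothetical data.

      from itertools import product

      # per subsystem: candidate (reliability, cost) options -- hypothetical values
      choices = [
          [(0.90, 2), (0.95, 4), (0.99, 7)],
          [(0.85, 1), (0.92, 3), (0.97, 6)],
          [(0.88, 2), (0.96, 5)],
      ]
      budget = 12

      best = None
      for combo in product(*choices):
          cost = sum(c for _, c in combo)
          if cost > budget:
              continue                                   # infeasible under the budget
          reliability = 1.0
          for r, _ in combo:
              reliability *= r                           # series system: product of component reliabilities
          if best is None or reliability > best[0]:
              best = (reliability, cost, combo)

      print(best)                                        # best reliability, its cost and the chosen options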

  16. Rate Adaptive Selective Segment Assignment for Reliable Wireless Video Transmission

    Directory of Open Access Journals (Sweden)

    Sajid Nazir

    2012-01-01

    Full Text Available A reliable video communication system is proposed based on the data partitioning feature of H.264/AVC, which is used to create a layered stream, and on LT codes for erasure protection. The proposed scheme, termed rate adaptive selective segment assignment (RASSA), is an adaptive low-complexity solution to varying channel conditions. A comparison of the results of the proposed scheme is also provided for slice-partitioned H.264/AVC data. Simulation results show the competitiveness of the proposed scheme compared to optimized unequal and equal error protection solutions. The simulation results also demonstrate that high visual quality video transmission can be maintained despite the adverse effects of varying channel conditions and that the number of decoding failures can be reduced.

  17. A Selective Role for Dopamine in Learning to Maximize Reward But Not to Minimize Effort: Evidence from Patients with Parkinson's Disease.

    Science.gov (United States)

    Skvortsova, Vasilisa; Degos, Bertrand; Welter, Marie-Laure; Vidailhet, Marie; Pessiglione, Mathias

    2017-06-21

    Instrumental learning is a fundamental process through which agents optimize their choices, taking into account various dimensions of available options such as the possible reward or punishment outcomes and the costs associated with potential actions. Although the implication of dopamine in learning from choice outcomes is well established, less is known about its role in learning the action costs such as effort. Here, we tested the ability of patients with Parkinson's disease (PD) to maximize monetary rewards and minimize physical efforts in a probabilistic instrumental learning task. The implication of dopamine was assessed by comparing performance ON and OFF prodopaminergic medication. In a first sample of PD patients (n = 15), we observed that reward learning, but not effort learning, was selectively impaired in the absence of treatment, with a significant interaction between learning condition (reward vs effort) and medication status (OFF vs ON). These results were replicated in a second, independent sample of PD patients (n = 20) using a simplified version of the task. According to Bayesian model selection, the best account for medication effects in both studies was a specific amplification of reward magnitude in a Q-learning algorithm. These results suggest that learning to avoid physical effort is independent from dopaminergic circuits and strengthen the general idea that dopaminergic signaling amplifies the effects of reward expectation or obtainment on instrumental behavior. SIGNIFICANCE STATEMENT Theoretically, maximizing reward and minimizing effort could involve the same computations and therefore rely on the same brain circuits. Here, we tested whether dopamine, a key component of reward-related circuitry, is also implicated in effort learning. We found that patients suffering from dopamine depletion due to Parkinson's disease were selectively impaired in reward learning, but not effort learning. Moreover, anti-parkinsonian medication restored the ...
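
    A schematic version of the winning model class (a Q-learning rule in which reward magnitude is multiplied by a gain parameter) is sketched below; the task probabilities, learning rate, temperature and gain are illustrative assumptions, not the estimates reported in the study.

      import math
      import random

      def simulate_reward_learning(n_trials=200, alpha=0.3, beta=3.0, reward_gain=1.5):
          """Two-option task in which option 0 pays off more often; reward_gain mimics a
          medication-like amplification of reward magnitude in the learning update."""
          q = [0.0, 0.0]
          for _ in range(n_trials):
              p0 = 1.0 / (1.0 + math.exp(-beta * (q[0] - q[1])))   # softmax choice rule
              choice = 0 if random.random() < p0 else 1
              reward = 1.0 if random.random() < (0.8 if choice == 0 else 0.2) else 0.0
              outcome = reward_gain * reward                        # amplified reward magnitude
              q[choice] += alpha * (outcome - q[choice])            # Q-learning update
          return q

      print(simulate_reward_learning())   # learned option values under the assumed parameters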

  18. A standard for test reliability in group research.

    Science.gov (United States)

    Ellis, Jules L

    2013-03-01

    Many authors adhere to the rule that test reliabilities should be at least .70 or .80 in group research. This article introduces a new standard according to which reliabilities can be evaluated. This standard is based on the costs or time of the experiment and of administering the test. For example, if test administration costs are 7% of the total experimental costs, the efficient value of the reliability is .93. If the actual reliability of a test is equal to this efficient reliability, the test size maximizes the statistical power of the experiment, given the costs. As a standard in experimental research, it is proposed that the reliability of the dependent variable be close to the efficient reliability. Adhering to this standard will enhance the statistical power and reduce the costs of experiments.
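
    Assuming the efficient reliability equals one minus the test's share of the total experimental cost (consistent with the 7% / .93 example quoted above; the full derivation is in the article), the standard can be checked as follows.

      def efficient_reliability(test_cost, total_cost):
          # efficient reliability under the stated assumption: 1 - (cost share of the test)
          return 1.0 - test_cost / total_cost

      print(efficient_reliability(test_cost=7.0, total_cost=100.0))   # 0.93, matching the example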

  19. System reliability analysis using dominant failure modes identified by selective searching technique

    International Nuclear Information System (INIS)

    Kim, Dong-Seok; Ok, Seung-Yong; Song, Junho; Koh, Hyun-Moo

    2013-01-01

    The failure of a redundant structural system is often described by innumerable system failure modes such as combinations or sequences of local failures. An efficient approach is proposed to identify dominant failure modes in the space of random variables, and then perform system reliability analysis to compute the system failure probability. To identify dominant failure modes in the decreasing order of their contributions to the system failure probability, a new simulation-based selective searching technique is developed using a genetic algorithm. The system failure probability is computed by a multi-scale matrix-based system reliability (MSR) method. Lower-scale MSR analyses evaluate the probabilities of the identified failure modes and their statistical dependence. A higher-scale MSR analysis evaluates the system failure probability based on the results of the lower-scale analyses. Three illustrative examples demonstrate the efficiency and accuracy of the approach through comparison with existing methods and Monte Carlo simulations. The results show that the proposed method skillfully identifies the dominant failure modes, including those neglected by existing approaches. The multi-scale MSR method accurately evaluates the system failure probability with statistical dependence fully considered. The decoupling between the failure mode identification and the system reliability evaluation allows for effective applications to larger structural systems.

  20. The behavioral economics of consumer brand choice: patterns of reinforcement and utility maximization.

    Science.gov (United States)

    Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C

    2004-06-30

    Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy- and premium-priced brands. Based on the analysis of buying patterns of 80 consumers for 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize.

  1. Comparison of maximal voluntary isometric contraction and hand-held dynamometry in measuring muscle strength of patients with progressive lower motor neuron syndrome

    NARCIS (Netherlands)

    Visser, J.; Mans, E.; de Visser, M.; van den Berg-Vos, R. M.; Franssen, H.; de Jong, J. M. B. V.; van den Berg, L. H.; Wokke, J. H. J.; de Haan, R. J.

    2003-01-01

    Context. Maximal voluntary isometric contraction, a method for quantitatively assessing muscle strength, has proven to be reliable, accurate and sensitive in amyotrophic lateral sclerosis. Hand-held dynamometry is less expensive and more quickly applicable than maximal voluntary isometric contraction.

  2. Network reliability assessment using a cellular automata approach

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Moreno, Jose Ali

    2002-01-01

    Two cellular automata (CA) models that evaluate the s-t connectedness and shortest path in a network are presented. CA-based algorithms enhance the performance of classical algorithms, since they allow a more reliable and straightforward parallel implementation, resulting in a dynamic network evaluation where changes in the connectivity and/or link costs can readily be incorporated, avoiding recalculation from scratch. The paper also demonstrates how these algorithms can be applied for network reliability evaluation (based on a Monte Carlo approach) and for finding the s-t path with maximal reliability.
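
    Independently of the cellular-automata machinery, the Monte Carlo use mentioned above can be sketched as follows: sample link survivals and count the fraction of samples in which s and t remain connected; the small network and sample size below are assumptions.

      import random
      import networkx as nx

      def st_reliability(edges, s, t, n_samples=10000):
          """edges: (u, v, survival_probability) triples; returns the estimated P(s-t connected)."""
          connected = 0
          for _ in range(n_samples):
              g = nx.Graph()
              g.add_nodes_from([s, t])
              g.add_edges_from((u, v) for u, v, p in edges if random.random() < p)
              if nx.has_path(g, s, t):
                  connected += 1
          return connected / n_samples

      # small hypothetical network with per-link survival probabilities
      links = [(1, 2, 0.9), (1, 3, 0.9), (2, 3, 0.8), (2, 4, 0.9), (3, 4, 0.9)]
      print(st_reliability(links, s=1, t=4))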

  3. Improving accuracy of overhanging structures for selective laser melting through reliability characterization of single track formation on thick powder beds

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2016-01-01

    Repeatability and reproducibility of parts produced by selective laser melting is a standing issue and, coupled with a lack of standardized quality control, presents a major hindrance to the maturing of selective laser melting as an industrial-scale process. Consequently, numerical process modelling has been adopted towards improving the predictability of the outputs from the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially in components having overhanging structures. In this paper, a systematic approach towards establishing the reliability of overhanging structure production by selective laser melting has been adopted. A calibrated, fast, multiscale thermal model is used to simulate the single track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using the same processing parameters...

  4. A decision theoretic framework for profit maximization in direct marketing

    NARCIS (Netherlands)

    Muus, L.; van der Scheer, H.; Wansbeek, T.J.; Montgomery, A.; Franses, P.H.B.F.

    2002-01-01

    One of the most important issues facing a firm involved in direct marketing is the selection of addresses from a mailing list. When the parameters of the model describing consumers' reaction to a mailing are known, addresses for a future mailing can be selected in a profit-maximizing way. Usually,

  5. Principles of maximally classical and maximally realistic quantum ...

    Indian Academy of Sciences (India)

    Principles of maximally classical and maximally realistic quantum mechanics. S M ROY. Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Abstract. Recently Auberson, Mahoux, Roy and Singh have proved a long standing conjecture of Roy and Singh: In 2N-dimensional phase space, ...

  6. Profit maximization mitigates competition

    DEFF Research Database (Denmark)

    Dierker, Egbert; Grodal, Birgit

    1996-01-01

    We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution of profits among consumers fully into account and partial equilibrium analysis suffices.

  7. Implications of maximal Jarlskog invariant and maximal CP violation

    International Nuclear Information System (INIS)

    Rodriguez-Jauregui, E.; Universidad Nacional Autonoma de Mexico

    2001-04-01

    We argue here why the CP violating phase Φ in the quark mixing matrix is maximal, that is, Φ = 90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non-commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP violating phase Φ. We find that all squared moduli of the quark mixing elements have a singular point when the CP violation phase Φ takes the value Φ = 90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that the CP violating phase Φ is maximal in order to let nature treat democratically all of the quark mixing matrix moduli. (orig.)

  8. Reliability-Based Structural Optimization of Wave Energy Converters

    Directory of Open Access Journals (Sweden)

    Simon Ambühl

    2014-12-01

    Full Text Available More and more wave energy converter (WEC) concepts are reaching prototype level. Once the prototype level is reached, the next step in order to further decrease the levelized cost of energy (LCOE) is optimizing the overall system with a focus on structural and maintenance (inspection) costs, as well as on the harvested power from the waves. The target of a fully-developed WEC technology is not maximizing its power output, but minimizing the resulting LCOE. This paper presents a methodology to optimize the structural design of WECs based on a reliability-based optimization problem and the intent to maximize the investor's benefits by maximizing the difference between income (e.g., from selling electricity) and the expected expenses (e.g., structural building costs or failure costs). Furthermore, different development levels, like prototype or commercial devices, may have different main objectives and will be located at different locations, as well as receive various subsidies. These points should be accounted for when performing structural optimizations of WECs. An illustrative example on the gravity-based foundation of the Wavestar device is performed showing how structural design can be optimized taking target reliability levels and different structural failure modes due to extreme loads into account.

  9. Estimation of maximal oxygen uptake without exercise testing in Korean healthy adult workers.

    Science.gov (United States)

    Jang, Tae-Won; Park, Shin-Goo; Kim, Hyoung-Ryoul; Kim, Jung-Man; Hong, Young-Seoub; Kim, Byoung-Gwon

    2012-08-01

    Maximal oxygen uptake is generally accepted as the most valid and reliable index of cardiorespiratory fitness and functional aerobic capacity. The exercise test for measuring maximal oxygen uptake is unsuitable for screening tests in public health examinations, because of the potential risks of exercise exertion and time demands. We designed this study to determine whether work-related physical activity is a potential predictor of maximal oxygen uptake, and to develop a maximal oxygen uptake equation using a non-exercise regression model for the cardiorespiratory fitness test in Korean adult workers. Study subjects were adult workers of small-sized companies in Korea. Subjects with a history of disease such as hypertension, diabetes, asthma and angina were excluded. In total, 217 adult subjects (113 men of 21-55 years old and 104 women of 20-64 years old) were included. A self-report questionnaire survey was conducted on study subjects, and the maximal oxygen uptake of each subject was measured with the exercise test. Statistical analysis was carried out to develop an equation for estimating maximal oxygen uptake. The predictors for estimating maximal oxygen uptake included age, gender, body mass index, smoking, leisure-time physical activity and the factors representing work-related physical activity. Work-related physical activity was identified to be a predictor of maximal oxygen uptake. Moreover, the equation showed high validity according to the statistical analysis. The equation for estimating maximal oxygen uptake developed in the present study could be used as a screening test for assessing cardiorespiratory fitness in Korean adult workers.
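
    The record describes a non-exercise linear regression for estimating maximal oxygen uptake. The sketch below shows the general shape of such an equation; the coefficients and the activity scales are hypothetical placeholders, not the equation actually derived in the study.

```python
import numpy as np

# Hypothetical coefficients for illustration only; the study derives its own
# equation, which is not reproduced here.
COEF = {"intercept": 50.0, "age": -0.25, "male": 6.0, "bmi": -0.55,
        "smoker": -2.0, "leisure_activity": 1.2, "work_activity": 1.5}

def predict_vo2max(age, male, bmi, smoker, leisure_activity, work_activity):
    """Non-exercise estimate of maximal oxygen uptake (ml·kg^-1·min^-1, assumed units)."""
    return (COEF["intercept"] + COEF["age"] * age + COEF["male"] * male
            + COEF["bmi"] * bmi + COEF["smoker"] * smoker
            + COEF["leisure_activity"] * leisure_activity
            + COEF["work_activity"] * work_activity)

print(round(predict_vo2max(age=40, male=1, bmi=24, smoker=0,
                           leisure_activity=2, work_activity=3), 1))
```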

  10. "The Theory was Beautiful Indeed": Rise, Fall and Circulation of Maximizing Methods in Population Genetics (1930-1980).

    Science.gov (United States)

    Grodwohl, Jean-Baptiste

    2017-08-01

    Describing the theoretical population geneticists of the 1960s, Joseph Felsenstein reminisced: "our central obsession was finding out what function evolution would try to maximize. Population geneticists used to think, following Sewall Wright, that mean relative fitness, W, would be maximized by natural selection" (Felsenstein 2000). The present paper describes the genesis, diffusion and fall of this "obsession", by giving a biography of the mean fitness function in population genetics. This modeling method devised by Sewall Wright in the 1930s found its heyday in the late 1950s and early 1960s, in the wake of Motoo Kimura's and Richard Lewontin's works. It seemed a reliable guide in the mathematical study of deterministic effects (the study of natural selection in populations of infinite size, with no drift), leading to powerful generalizations presenting law-like properties. Progress in population genetics theory, it then seemed, would come from the application of this method to the study of systems with several genes. This ambition came to a halt in the context of the influential objections made by the Australian mathematician Patrick Moran in 1963. These objections triggered a controversy between mathematically- and biologically-inclined geneticists, which affected both the formal standards and the aims of population genetics as a science. Over the course of the 1960s, the mean fitness method withered with the ambition of developing the deterministic theory. The mathematical theory became increasingly complex. Kimura re-focused his modeling work on the theory of random processes; as a result of his computer simulations, Lewontin became the staunchest critic of maximizing principles in evolutionary biology. The mean fitness method then migrated to other research areas, being refashioned and used in evolutionary quantitative genetics and behavioral ecology.

  11. Maximizers versus satisficers

    Directory of Open Access Journals (Sweden)

    Andrew M. Parker

    2007-12-01

    Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.

  12. Quantitative approaches for profit maximization in direct marketing

    NARCIS (Netherlands)

    van der Scheer, H.R.

    1998-01-01

    An effective direct marketing campaign aims at selecting those targets, offer and communication elements - at the right time - that maximize the net profits. The list of individuals to be mailed, i.e. the targets, is considered to be the most important component. Therefore, a large amount of direct

  13. Resource allocation for maximizing prediction accuracy and genetic gain of genomic selection in plant breeding: a simulation experiment.

    Science.gov (United States)

    Lorenz, Aaron J

    2013-03-01

    Allocating resources between population size and replication affects both genetic gain through phenotypic selection and quantitative trait loci detection power and effect estimation accuracy for marker-assisted selection (MAS). It is well known that because alleles are replicated across individuals in quantitative trait loci mapping and MAS, more resources should be allocated to increasing population size compared with phenotypic selection. Genomic selection is a form of MAS using all marker information simultaneously to predict individual genetic values for complex traits and has widely been found superior to MAS. No studies have explicitly investigated how resource allocation decisions affect success of genomic selection. My objective was to study the effect of resource allocation on response to MAS and genomic selection in a single biparental population of doubled haploid lines by using computer simulation. Simulation results were compared with previously derived formulas for the calculation of prediction accuracy under different levels of heritability and population size. Response of prediction accuracy to resource allocation strategies differed between genomic selection models (ridge regression best linear unbiased prediction [RR-BLUP], BayesCπ) and multiple linear regression using ordinary least-squares estimation (OLS), leading to different optimal resource allocation choices between OLS and RR-BLUP. For OLS, it was always advantageous to maximize population size at the expense of replication, but a high degree of flexibility was observed for RR-BLUP. Prediction accuracy of doubled haploid lines included in the training set was much greater than of those excluded from the training set, so there was little benefit to phenotyping only a subset of the lines genotyped. Finally, observed prediction accuracies in the simulation compared well to calculated prediction accuracies, indicating these theoretical formulas are useful for making resource allocation
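
    The trade-off described here, spending a fixed phenotyping budget on more lines versus more replicates, can be explored with a commonly cited approximation for genomic prediction accuracy, r = sqrt(N·h² / (N·h² + Me)), combined with the entry-mean heritability implied by the number of replicates. The sketch below uses that approximation; the value of Me, the budget and the plot-level heritability are assumptions, and this is not the exact formula set used in the study.

```python
import numpy as np

def prediction_accuracy(n_lines, reps, h2_plot, m_e=100, budget=1000):
    """Expected genomic prediction accuracy under a fixed plot budget.

    Uses the approximation r = sqrt(N*h2 / (N*h2 + Me)), with entry-mean
    heritability h2 = h2_plot*reps / (h2_plot*reps + 1 - h2_plot).
    Me (effective number of chromosome segments) and the budget are assumptions.
    """
    if n_lines * reps > budget:
        return np.nan  # allocation exceeds the phenotyping budget
    h2_mean = h2_plot * reps / (h2_plot * reps + 1.0 - h2_plot)
    return np.sqrt(n_lines * h2_mean / (n_lines * h2_mean + m_e))

# Trade-off: many lines with one rep vs. fewer lines with more reps.
for n, r in [(1000, 1), (500, 2), (250, 4)]:
    print(n, r, round(prediction_accuracy(n, r, h2_plot=0.3), 3))
```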

  14. Developing maximal neuromuscular power: Part 1--biological basis of maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-01-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances, the ability to generate maximal muscular power. Part 1 focuses on the factors that affect maximal power production, while part 2, which will follow in a forthcoming edition of Sports Medicine, explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability of the neuromuscular system to generate maximal power is affected by a range of interrelated factors. Maximal muscular power is defined and limited by the force-velocity relationship and affected by the length-tension relationship. The ability to generate maximal power is influenced by the type of muscle action involved and, in particular, the time available to develop force, storage and utilization of elastic energy, interactions of contractile and elastic elements, potentiation of contractile and elastic filaments as well as stretch reflexes. Furthermore, maximal power production is influenced by morphological factors including fibre type contribution to whole muscle area, muscle architectural features and tendon properties as well as neural factors including motor unit recruitment, firing frequency, synchronization and inter-muscular coordination. In addition, acute changes in the muscle environment (i.e. alterations resulting from fatigue, changes in hormone milieu and muscle temperature) impact the ability to generate maximal power. Resistance training has been shown to impact each of these neuromuscular factors in quite specific ways. Therefore, an understanding of the biological basis of maximal power production is essential for developing training programmes that effectively enhance maximal power production in the human.

  15. Rolling Bearing Fault Diagnosis Using Modified Neighborhood Preserving Embedding and Maximal Overlap Discrete Wavelet Packet Transform with Sensitive Features Selection

    Directory of Open Access Journals (Sweden)

    Fei Dong

    2018-01-01

    Full Text Available In order to enhance the performance of bearing fault diagnosis and classification, feature extraction and feature dimensionality reduction have become more important. The original statistical feature set was calculated from single-branch reconstruction vibration signals obtained by using the maximal overlap discrete wavelet packet transform (MODWPT). In order to reduce redundant information in the original statistical feature set, feature selection by adjusted rand index and sum of within-class mean deviations (FSASD) was proposed to select fault-sensitive features. Furthermore, a modified feature dimensionality reduction method, supervised neighborhood preserving embedding with label information (SNPEL), was proposed to realize low-dimensional representations of the high-dimensional feature space. Finally, vibration signals collected from two experimental test rigs were employed to evaluate the performance of the proposed procedure. The results demonstrate the effectiveness, adaptability, and superiority of the proposed procedure, which can serve as an intelligent bearing fault diagnosis system.

  16. Semi-structured interview is a reliable and feasible tool for selection of doctors for general practice specialist training

    DEFF Research Database (Denmark)

    Isaksen, Jesper; Hertel, Niels Thomas; Kjær, Niels Kristian

    2013-01-01

    In order to optimise the selection process for admission to specialist training in family medicine, we developed a new design for structured applications and selection interviews. The design contains semi-structured interviews, which combine individualised elements from the applications with standardised behaviour-based questions. This paper describes the design of the tool, and offers reflections concerning its acceptability, reliability and feasibility...

  17. Comparison of Critical Power and W' Derived From 2 or 3 Maximal Tests.

    Science.gov (United States)

    Simpson, Len Parker; Kordi, Mehdi

    2017-07-01

    Typically, accessing the asymptote (critical power; CP) and curvature constant (W') parameters of the hyperbolic power-duration relationship requires multiple constant-power exhaustive-exercise trials spread over several visits. However, more recently single-visit protocols and personal power meters have been used. This study investigated the practicality of using a 2-trial, single-visit protocol in providing reliable CP and W' estimates. Eight trained cyclists underwent 3- and 12-min maximal-exercise trials in a single session to derive (2-trial) CP and W' estimates. On a separate occasion a 5-min trial was performed, providing a 3rd trial to calculate (3-trial) CP and W'. There were no differences in CP (283 ± 66 vs 282 ± 65 W) or W' (18.72 ± 6.21 vs 18.27 ± 6.29 kJ) obtained from either the 2-trial or 3-trial method, respectively. After 2 familiarization sessions (completing a 3- and a 12-min trial on both occasions), both CP and W' remained reliable over additional separate measurements. The current study demonstrates that after 2 familiarization sessions, reliable CP and W' parameters can be obtained from trained cyclists using only 2 maximal-exercise trials. These results offer practitioners a practical, time-efficient solution for incorporating power-duration testing into applied athlete support.
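
    With only two maximal trials, CP and W' can be obtained from the linear work-time model, total work = CP·t + W', by solving the two resulting equations. A minimal sketch with made-up trial data (not the study's values):

```python
def cp_wprime_two_trials(p_short, t_short, p_long, t_long):
    """Estimate critical power (W) and W' (J) from two maximal trials
    using the linear work-time model: total work = CP*t + W'.

    p_*: mean power of each trial (W); t_*: duration (s).
    """
    work_short = p_short * t_short
    work_long = p_long * t_long
    cp = (work_long - work_short) / (t_long - t_short)
    w_prime = work_short - cp * t_short
    return cp, w_prime

# Illustrative numbers only: a 3-min and a 12-min maximal trial.
cp, w_prime = cp_wprime_two_trials(p_short=380, t_short=180, p_long=300, t_long=720)
print(round(cp), round(w_prime / 1000, 1))  # ~273 W, ~19.2 kJ
```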

  18. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components

    Science.gov (United States)

    Gupta, R. K.; Bhunia, A. K.; Roy, D.

    2009-10-01

    In this paper, we have considered the problem of constrained redundancy allocation of series system with interval valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by penalty function technique and solved by an advanced GA for integer variables with interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples and the results of the series redundancy allocation problem with fixed value of reliability of the components have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of our developed GA with respect to the different GA parameters.
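
    A stripped-down illustration of the penalized-fitness idea: the series system's reliability is evaluated as an interval from the component reliability bounds, and a budget violation is subtracted as a penalty. The numbers are invented, and a crude random search stands in for the paper's GA with tournament selection, uniform crossover and mutation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Interval-valued component reliabilities [lower, upper] and per-unit costs
# for a 3-subsystem series system (illustrative numbers only).
r_lo = np.array([0.70, 0.80, 0.75])
r_hi = np.array([0.80, 0.90, 0.85])
cost = np.array([2.0, 3.0, 2.5])
budget = 25.0

def system_reliability_interval(n):
    """Series system of parallel-redundant subsystems with interval reliabilities."""
    lo = np.prod(1.0 - (1.0 - r_lo) ** n)
    hi = np.prod(1.0 - (1.0 - r_hi) ** n)
    return lo, hi

def penalized_fitness(n, penalty=10.0):
    """Centre of the reliability interval minus a penalty for budget violation."""
    lo, hi = system_reliability_interval(n)
    violation = max(0.0, np.dot(cost, n) - budget)
    return 0.5 * (lo + hi) - penalty * violation

# Crude random search as a stand-in for the paper's GA operators.
best_n, best_fit = None, -np.inf
for _ in range(5000):
    n = rng.integers(1, 6, size=3)
    fit = penalized_fitness(n)
    if fit > best_fit:
        best_n, best_fit = n, fit

print(best_n, system_reliability_interval(best_n))
```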

  19. Maximizers versus satisficers

    OpenAIRE

    Andrew M. Parker; Wandi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...

  20. Development of reliability-based safety enhancement technology

    International Nuclear Information System (INIS)

    Kim, Kil Yoo; Han, Sang Hoon; Jang, Seung Cherl

    2002-04-01

    This project aims to develop critical technologies and the necessary reliability DB for maximizing the economics of NPP operation while maintaining safety by using risk (or reliability) information. Toward this goal, firstly the four critical technologies (Risk Informed Tech. Spec. Optimization, Risk Informed Inservice Testing, On-line Maintenance, Maintenance Rule) for RIR and A have been developed. Secondly, KIND (Korea Information System for Nuclear Reliability Data) has been developed. Using KIND, YGN 3,4 and UCN 3,4 component reliability DBs have been established. A reactor trip history DB for all NPPs in Korea has also been developed and analyzed. Finally, a detailed reliability analysis of RPS/ESFAS for KNSP has been performed. With the results of this analysis, a sensitivity analysis has also been performed to optimize the AOT/STI of the tech. spec. A statistical analysis procedure and computer code have been developed for the set point drift analysis.

  1. Maximal frustration as an immunological principle.

    Science.gov (United States)

    de Abreu, F Vistulo; Mostardinha, P

    2009-03-06

    A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.

  2. Determination of the exercise intensity that elicits maximal fat oxidation in individuals with obesity.

    Science.gov (United States)

    Dandanell, Sune; Præst, Charlotte Boslev; Søndergård, Stine Dam; Skovborg, Camilla; Dela, Flemming; Larsen, Steen; Helge, Jørn Wulff

    2017-04-01

    Maximal fat oxidation (MFO) and the exercise intensity that elicits MFO (FatMax) are commonly determined by indirect calorimetry during graded exercise tests in both obese and normal-weight individuals. However, no protocol has been validated in individuals with obesity. Thus, the aims were to develop a graded exercise protocol for determination of FatMax in individuals with obesity, and to test validity and inter-method reliability. Fat oxidation was assessed over a range of exercise intensities in 16 individuals (age: 28 (26-29) years; body mass index: 36 (35-38) kg·m⁻²; 95% confidence interval) on a cycle ergometer. The graded exercise protocol was validated against a short continuous exercise (SCE) protocol, in which FatMax was determined from fat oxidation at rest and during 10 min of continuous exercise at 35%, 50%, and 65% of maximal oxygen uptake. Intraclass and Pearson correlation coefficients between the protocols were 0.75 and 0.72 and the within-subject coefficient of variation (CV) was 5 (3-7)%. A Bland-Altman plot revealed a bias of -3% points of maximal oxygen uptake (limits of agreement: -12 to 7). A tendency towards a systematic difference (p = 0.06) was observed, where FatMax occurred at 42 (40-44)% and 45 (43-47)% of maximal oxygen uptake with the graded and the SCE protocol, respectively. In conclusion, there was a high-excellent correlation and a low CV between the 2 protocols, suggesting that the graded exercise protocol has a high inter-method reliability. However, considerable intra-individual variation and a trend towards systematic difference between the protocols reveal that further optimization of the graded exercise protocol is needed to improve validity.
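
    Fat oxidation rates at each stage of such a graded test are typically derived from VO2 and VCO2 via indirect calorimetry; FatMax is then the intensity at which the rate peaks. The sketch below uses a widely used Frayn-type stoichiometric equation as an assumption (the paper's exact calculation is not reproduced), with invented test data.

```python
import numpy as np

def fat_oxidation(vo2, vco2):
    """Fat oxidation rate (g/min) from VO2 and VCO2 (L/min), using a
    Frayn-type stoichiometric equation with protein oxidation ignored.
    The coefficients are an assumption here, not taken from the paper."""
    return 1.67 * vo2 - 1.67 * vco2

# Illustrative graded-test data: intensity (% of VO2max), VO2 and VCO2 (L/min).
intensity = np.array([35, 50, 65, 80])
vo2 = np.array([1.2, 1.7, 2.2, 2.8])
vco2 = np.array([1.0, 1.35, 1.9, 2.7])

rates = fat_oxidation(vo2, vco2)
fat_max = intensity[np.argmax(rates)]
print(rates.round(2), fat_max)  # FatMax is the intensity with the highest fat oxidation
```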

  3. Reliability of maximal mitochondrial oxidative phosphorylation in permeabilized fibers from the vastus lateralis employing high-resolution respirometry

    DEFF Research Database (Denmark)

    Cardinale, Daniele A; Gejl, Kasper D; Ørtenblad, Niels

    2018-01-01

    The purpose was to assess the impact of various factors on methodological errors associated with measurement of maximal oxidative phosphorylation (OXPHOS) in human skeletal muscle determined by high-resolution respirometry in saponin-permeabilized fibers. Biopsies were collected from 25 men...

  4. Cardiorespiratory Coordination in Repeated Maximal Exercise

    Directory of Open Access Journals (Sweden)

    Sergi Garcia-Retortillo

    2017-06-01

    Full Text Available Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC
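
    A rough sketch of the analysis pattern described here: standardize the selected cardiorespiratory variables, run a PCA, count the retained components, and summarize the eigenvalue spectrum with an entropy measure. The retention criterion, the entropy definition and the synthetic data are assumptions for illustration, not the study's exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Stand-in data: time series of 6 cardiorespiratory variables during one test
# (FeO2, FeCO2, ventilation, SBP, DBP, HR); real recordings would replace this.
X = rng.normal(size=(300, 6))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]         # induce some coordination

pca = PCA().fit(StandardScaler().fit_transform(X))
eigvals = pca.explained_variance_
n_pcs = int(np.sum(eigvals > 1.0))              # Kaiser-style criterion (assumption)

p = eigvals / eigvals.sum()
entropy = -np.sum(p * np.log(p))                # entropy of the eigenvalue spectrum

print(n_pcs, round(pca.explained_variance_ratio_[0], 2), round(entropy, 3))
```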

  5. Entropy maximization

    Indian Academy of Sciences (India)

    Abstract. It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, …, k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of...
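
    A small numerical illustration of the result: on a finite support with a single moment constraint, the entropy maximizer has the Gibbs form p(x) ∝ exp(c·h(x)), and the constant c can be found by matching the constraint. The support and target mean below are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq

# Finite support and a single moment constraint E[h(X)] = lam, with h(x) = x.
x = np.arange(1, 7)          # faces of a die
lam = 4.5                    # target mean

def gibbs(c):
    w = np.exp(c * x)
    return w / w.sum()

# The maximum-entropy pmf has the form p(x) ∝ exp(c*x); choose c to match the mean.
c = brentq(lambda c: gibbs(c) @ x - lam, -5, 5)
p = gibbs(c)
print(round(c, 4), p.round(4), round(p @ x, 4))
```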

  6. Reliability and Validity of a Submaximal Warm-up Test for Monitoring Training Status in Professional Soccer Players.

    Science.gov (United States)

    Rabbani, Alireza; Kargarfard, Mehdi; Twist, Craig

    2018-02-01

    Rabbani, A, Kargarfard, M, and Twist, C. Reliability and validity of a submaximal warm-up test for monitoring training status in professional soccer players. J Strength Cond Res 32(2): 326-333, 2018-Two studies were conducted to assess the reliability and validity of a submaximal warm-up test (SWT) in professional soccer players. For the reliability study, 12 male players performed an SWT over 3 trials, with 1 week between trials. For the validity study, 14 players of the same team performed an SWT and a 30-15 intermittent fitness test (30-15IFT) 7 days apart. Week-to-week reliability in selected heart rate (HR) responses (exercise heart rate [HRex], heart rate recovery [HRR] expressed as the number of beats recovered within 1 minute [HRR60s], and HRR expressed as the mean HR during 1 minute [HRpost1]) was determined using the intraclass correlation coefficient (ICC) and typical error of measurement expressed as coefficient of variation (CV). The relationships between HR measures derived from the SWT and the maximal speed reached at the 30-15IFT (VIFT) were used to assess validity. The range for ICC and CV values was 0.83-0.95 and 1.4-7.0% in all HR measures, respectively, with the HRex as the most reliable HR measure of the SWT. Inverse large (r = -0.50 and 90% confidence limits [CLs] [-0.78 to -0.06]) and very large (r = -0.76 and CL, -0.90 to -0.45) relationships were observed between HRex and HRpost1 with VIFT in relative (expressed as the % of maximal HR) measures, respectively. The SWT is a reliable and valid submaximal test to monitor high-intensity intermittent running fitness in professional soccer players. In addition, the test's short duration (5 minutes) and simplicity mean that it can be used regularly to assess training status in high-level soccer players.
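
    Week-to-week reliability statistics of the kind reported here can be computed directly from paired trials. The sketch below computes the typical error, the CV and Pearson's r for two trials of a heart-rate measure; the data are invented, and the ICC reported in the study is not reproduced here.

```python
import numpy as np

def week_to_week_reliability(trial1, trial2):
    """Typical error, coefficient of variation (%), and Pearson r for a
    test-retest pair (the study also reports ICC, not computed here)."""
    trial1, trial2 = np.asarray(trial1, float), np.asarray(trial2, float)
    diff = trial2 - trial1
    typical_error = diff.std(ddof=1) / np.sqrt(2)
    cv_percent = 100 * typical_error / np.mean([trial1, trial2])
    r = np.corrcoef(trial1, trial2)[0, 1]
    return typical_error, cv_percent, r

# Illustrative HRex values (% of maximal HR) for 8 players over two trials.
t1 = np.array([86, 88, 84, 90, 87, 85, 89, 83])
t2 = np.array([87, 87, 85, 91, 86, 86, 88, 84])
print(tuple(round(v, 2) for v in week_to_week_reliability(t1, t2)))
```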

  7. The selection of field component reliability data for use in nuclear safety studies

    International Nuclear Information System (INIS)

    Coxson, B.A.; Tabaie, Mansour

    1990-01-01

    The paper reviews the user requirements for field component failure data in nuclear safety studies, and the capability of various data sources to satisfy these requirements. Aspects such as estimating the population of items exposed to failure, incompleteness, and under-reporting problems are discussed. The paper takes as an example the selection of component reliability data for use in the Pre-Operational Safety Report (POSR) for Sizewell 'B' Power Station, where field data has in many cases been derived from equipment other than that to be procured and operated on site. The paper concludes that the main quality sought in the available data sources for such studies is the ability to examine failure narratives in component reliability data systems for equipment performing comparable duties to the intended plant application. The main benefit brought about in the last decade is the interactive access to data systems which are adequately structured with regard to the equipment covered, and also provide a text-searching capability of quality-controlled event narratives. (author)

  8. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability.

    Science.gov (United States)

    Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M

    2014-12-01

    Abstract Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered in both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICCs of PBDI and SIM showed acceptable levels of reliability. Findings on the content validity and reliability of these new instruments are promising with regard to realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.

  9. Reliability issues : a Canadian perspective

    International Nuclear Information System (INIS)

    Konow, H.

    2004-01-01

    A Canadian perspective of power reliability issues was presented. Reliability depends on adequacy of supply and a framework for standards. The challenges facing the electric power industry include new demand, plant replacement and exports. It is expected that demand will be 670 TWh by 2020, with 205 TWh coming from new plants. Canada will require an investment of $150 billion to meet this demand and the need is comparable in the United States. As trade grows, the challenge becomes a continental issue and investment in the bi-national transmission grid will be essential. The 5 point plan of the Canadian Electricity Association is to: (1) establish an investment climate to ensure future electricity supply, (2) move government and industry towards smart and effective regulation, (3) work to ensure a sustainable future for the next generation, (4) foster innovation and accelerate skills development, and (5) build on the strengths of an integrated North American system to maximize opportunity for Canadians. The CEA's 7 measures that enhance North American reliability were listed with emphasis on its support for a self-governing international organization for developing and enforcing mandatory reliability standards. The CEA also supports the creation of a binational Electric Reliability Organization (ERO) to identify and solve reliability issues in the context of a bi-national grid. tabs., figs

  10. Entropy Maximization

    Indian Academy of Sciences (India)

    It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, …, k the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...

  11. Maximizing Selective Cleavages at Aspartic Acid and Proline Residues for the Identification of Intact Proteins

    Science.gov (United States)

    Foreman, David J.; Dziekonski, Eric T.; McLuckey, Scott A.

    2018-04-01

    A new approach for the identification of intact proteins has been developed that relies on the generation of relatively few abundant products from specific cleavage sites. This strategy is intended to complement standard approaches that seek to generate many fragments relatively non-selectively. Specifically, this strategy seeks to maximize selective cleavage at aspartic acid and proline residues via collisional activation of precursor ions formed via electrospray ionization (ESI) under denaturing conditions. A statistical analysis of the SWISS-PROT database was used to predict the number of arginine residues for a given intact protein mass and predict a m/z range where the protein carries a similar charge to the number of arginine residues thereby enhancing cleavage at aspartic acid residues by limiting proton mobility. Cleavage at aspartic acid residues is predicted to be most favorable in the m/z range of 1500-2500, a range higher than that normally generated by ESI at low pH. Gas-phase proton transfer ion/ion reactions are therefore used for precursor ion concentration from relatively high charge states followed by ion isolation and subsequent generation of precursor ions within the optimal m/z range via a second proton transfer reaction step. It is shown that the majority of product ion abundance is concentrated into cleavages C-terminal to aspartic acid residues and N-terminal to proline residues for ions generated by this process. Implementation of a scoring system that weights both ion fragment type and ion fragment area demonstrated identification of standard proteins, ranging in mass from 8.5 to 29.0 kDa.

  12. Derating design for optimizing reliability and cost with an application to liquid rocket engines

    International Nuclear Information System (INIS)

    Kim, Kyungmee O.; Roh, Taeseong; Lee, Jae-Woo; Zuo, Ming J.

    2016-01-01

    Derating is the operation of an item at a stress that is lower than its rated design value. Previous research has indicated that reliability can be increased from operational derating. In order to derate an item in field operation, however, an engineer must rate the design of the item at a stress level higher than the operational stress level, which increases the item's nominal failure rate and development costs. At present, there is no model available to quantify the cost and reliability that considers the design uprating as well as the operational derating. In this paper, we establish the reliability expression in terms of the derating level assuming that the nominal failure rate is constant with time for a fixed rated design value. The total development cost is expressed in terms of the rated design value and the number of tests necessary to demonstrate the reliability requirement. The properties of the optimal derating level are explained for maximizing the reliability or for minimizing the cost. As an example, the proposed model is applied to the design of liquid rocket engines. - Highlights: • Modeled the effect of derating design on the reliability and the development cost. • Discovered that derating design may reduce the cost of reliability demonstration test. • Optimized the derating design parameter for reliability maximization or cost minimization.

  13. Maximally incompatible quantum observables

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)

    2014-05-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  14. Maximally incompatible quantum observables

    International Nuclear Information System (INIS)

    Heinosaari, Teiko; Schultz, Jussi; Toigo, Alessandro; Ziman, Mario

    2014-01-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  15. Strategic defense and attack for reliability systems

    International Nuclear Information System (INIS)

    Hausken, Kjell

    2008-01-01

    This article illustrates a method by which arbitrarily complex series/parallel reliability systems can be analyzed. The method is illustrated with the series-parallel and parallel-series systems. Analytical expressions are determined for the investments and utilities of the defender and the attacker, which depend on their unit costs of investment for each component, the contest intensity for each component, and their evaluations of the value of system functionality. For a series-parallel system, infinitely many components in parallel benefit the defender maximally regardless of the finite number of parallel subsystems in series. Conversely, infinitely many components in series benefit the attacker maximally regardless of the finite number of components in parallel in each subsystem. For a parallel-series system, the results are opposite. With equivalent components, equal unit costs for defender and attacker, equal intensity for all components, and equally many components in series and parallel, the defender always prefers the series-parallel system rather than the parallel-series system, and the converse holds for the attacker. Hence from the defender's perspective, ceteris paribus, the series-parallel system is more reliable, and has fewer 'cut sets' or failure modes
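
    The component-level building block in this literature is a ratio-form contest success function, with the outcome of defense d against attack a governed by a contest intensity m; this particular functional form is an assumption here. A minimal sketch of how such component probabilities roll up into a series-parallel system reliability, with invented investment levels:

```python
import numpy as np

def component_survival(d, a, m):
    """Ratio-form contest success function: probability the defender keeps
    a component working given defense d, attack a, contest intensity m."""
    return d**m / (d**m + a**m)

def series_parallel_reliability(defense, attack, m):
    """Series of parallel subsystems: each row of `defense`/`attack` is one
    subsystem; a subsystem works if at least one of its components survives."""
    p = component_survival(np.asarray(defense, float), np.asarray(attack, float), m)
    subsystem_ok = 1.0 - np.prod(1.0 - p, axis=1)
    return np.prod(subsystem_ok)

# Two subsystems in series, each with two components in parallel (toy numbers).
defense = [[1.0, 1.0], [2.0, 0.5]]
attack = [[1.0, 0.5], [1.0, 1.0]]
print(round(series_parallel_reliability(defense, attack, m=1.0), 3))
```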

  16. Relay Selections for Security and Reliability in Mobile Communication Networks over Nakagami-m Fading Channels

    Directory of Open Access Journals (Sweden)

    Hongji Huang

    2017-01-01

    Full Text Available This paper studies relay selection schemes in a mobile communication system over a Nakagami-m channel. To make efficient use of licensed spectrum, both a single relay selection (SRS) scheme and a multirelay selection (MRS) scheme over the Nakagami-m channel are proposed. Also, the intercept probability (IP) and outage probability (OP) of the proposed SRS and MRS for communication links depending on realistic spectrum sensing are derived. Furthermore, this paper assesses the performance of the conventional direct transmission scheme to compare with the proposed SRS and MRS ones based on the Nakagami-m channel, and the security-reliability trade-off (SRT) performance of the proposed schemes and the conventional schemes is well investigated. The SRT of the proposed SRS and MRS schemes is demonstrated to be better than that of the direct transmission scheme over the Nakagami-m channel, which can protect communication transmissions against eavesdropping attacks. Additionally, simulation results show that our proposed relay selection schemes achieve better SRT performance than that of conventional direct transmission over the Nakagami-m channel.
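
    A simplified Monte Carlo illustration of single relay selection over Nakagami-m fading: draw fading amplitudes for each relay, pick the strongest main channel, and estimate the outage probability against a target rate. The SNR, rate and number of relays are assumptions, and the paper's intercept-probability analysis is not reproduced.

```python
import numpy as np
from scipy.stats import nakagami

m, n_relays, n_trials = 2.0, 4, 100_000
snr_db, rate = 10.0, 1.0           # average SNR (dB) and target rate (bit/s/Hz): assumptions
snr = 10 ** (snr_db / 10)

# Nakagami-m fading amplitudes for the relay->destination links of each trial.
h = nakagami.rvs(m, size=(n_trials, n_relays), random_state=42)
gain = h ** 2

# Single relay selection (SRS): pick the relay with the strongest main channel.
best_gain = gain.max(axis=1)
capacity = np.log2(1 + snr * best_gain)
outage_probability = np.mean(capacity < rate)
print(round(outage_probability, 5))
```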

  17. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models

  18. Developing maximal neuromuscular power: part 2 - training considerations for improving maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-02-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and

  19. Influence of previous experience on resistance training on reliability of one-repetition maximum test.

    Science.gov (United States)

    Ritti-Dias, Raphael Mendes; Avelar, Ademar; Salvador, Emanuel Péricles; Cyrino, Edilson Serpeloni

    2011-05-01

    The 1-repetition maximum test (1RM) has been widely used to assess maximal strength. However, to improve accuracy in assessing maximal strength, several sessions of the 1RM test are recommended. The aim of this study was to analyze the influence of previous resistance training experience on the reliability of 1RM test. Thirty men were assigned to the following 2 groups according to their previous resistance training experience: no previous resistance training experience (NOEXP) and more than 24 months of resistance training experience (EXP). All subjects performed the 1RM tests in bench press and squat in 4 sessions on distinct days. There was a significant session × group effect in bench press (F = 3.09; p reliability of the 1RM test is influenced by the subject's previous experience in resistance training. Subjects without experience in resistance training require more practice and familiarization and show greater increases in maximal strength between sessions than subjects with previous experience in resistance training.

  20. Laboratory and Field-Based Evaluation of Short-Term Effort with Maximal Intensity in Individuals with Intellectual Disabilities

    Directory of Open Access Journals (Sweden)

    Lencse-Mucha Judit

    2015-12-01

    Full Text Available Results of previous studies have not indicated clearly which tests should be used to assess short-term efforts of people with intellectual disabilities. Thus, the aim of the present study was to evaluate laboratory and field-based tests of short-term effort with maximal intensity of subjects with intellectual disabilities. Twenty four people with intellectual disability, who trained soccer, participated in this study. The 30 s Wingate test and additionally an 8 s test with maximum intensity were performed on a bicycle ergometer. The fatigue index, maximal and mean power, relative maximal and relative mean power were measured. Overall, nine field-based tests were conducted: 5, 10 and 20 m sprints, a 20 m shuttle run, a seated medicine ball throw, a bent arm hang test, a standing broad jump, sit-ups and a hand grip test. The reliability of the 30 s and 8 s Wingate tests for subjects with intellectual disability was confirmed. Significant correlation was observed for mean power between the 30 s and 8 s tests on the bicycle ergometer at a moderate level (r > 0.4). Moreover, significant correlations were indicated between the results of laboratory tests and field tests, such as the 20 m sprint, the 20 m shuttle run, the standing long jump and the medicine ball throw. The strongest correlation was in the medicine ball throw. The 30 s Wingate test is a reliable test assessing maximal effort in subjects with intellectual disability. The results of this research confirmed that the 8 s test on a bicycle ergometer had a moderate correlation with the 30 s Wingate test in this population, thus, this comparison needs further investigation to examine alternativeness of the 8 s to 30 s Wingate tests. The non-laboratory tests could be used to indirectly assess performance in short-term efforts with maximal intensity.

  1. Laboratory and Field-Based Evaluation of Short-Term Effort with Maximal Intensity in Individuals with Intellectual Disabilities

    Science.gov (United States)

    Lencse-Mucha, Judit; Molik, Bartosz; Marszałek, Jolanta; Kaźmierska-Kowalewska, Kalina; Ogonowska-Słodownik, Anna

    2015-01-01

    Results of previous studies have not indicated clearly which tests should be used to assess short-term efforts of people with intellectual disabilities. Thus, the aim of the present study was to evaluate laboratory and field-based tests of short-term effort with maximal intensity of subjects with intellectual disabilities. Twenty four people with intellectual disability, who trained soccer, participated in this study. The 30 s Wingate test and additionally an 8 s test with maximum intensity were performed on a bicycle ergometer. The fatigue index, maximal and mean power, relative maximal and relative mean power were measured. Overall, nine field-based tests were conducted: 5, 10 and 20 m sprints, a 20 m shuttle run, a seated medicine ball throw, a bent arm hang test, a standing broad jump, sit-ups and a hand grip test. The reliability of the 30 s and 8 s Wingate tests for subjects with intellectual disability was confirmed. Significant correlation was observed for mean power between the 30 s and 8 s tests on the bicycle ergometer at a moderate level (r >0.4). Moreover, significant correlations were indicated between the results of laboratory tests and field tests, such as the 20 m sprint, the 20 m shuttle run, the standing long jump and the medicine ball throw. The strongest correlation was in the medicine ball throw. The 30 s Wingate test is a reliable test assessing maximal effort in subjects with intellectual disability. The results of this research confirmed that the 8 s test on a bicycle ergometer had a moderate correlation with the 30 s Wingate test in this population, thus, this comparison needs further investigation to examine alternativeness of the 8 s to 30 s Wingate tests. The non-laboratory tests could be used to indirectly assess performance in short-term efforts with maximal intensity. PMID:26834874

  2. Reliability enhancement of portal frame structure by finite element synthesis

    International Nuclear Information System (INIS)

    Nakagiri, S.

    1989-01-01

    The stochastic finite element methods have been applied to the evaluation of structural response and reliability of uncertain structural systems. The structural reliability index of the advanced first-order second moment (AFOSM) method is a candidate measure for assessing structural safety and reliability. The reliability index can be evaluated when a baseline design of the structures under interest is proposed and the covariance matrix of the probabilistic variables is acquired to represent uncertainties involved in the structural systems. The reliability index thus evaluated is not assured to be the largest one for the structure. There remains a possibility to enhance the structural reliability for the given covariance matrix by changing the baseline design. From such a viewpoint of structural optimization, some ideas have been proposed to maximize the reliability or to minimize the failure probability of uncertain structural systems. A method of changing the design is proposed to increase the reliability index from its baseline value to another desired value. The reliability index in this paper is calculated mainly by the method of Lagrange multipliers
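
    For the textbook special case of a linear limit state g = R − S with independent normal resistance and load, the AFOSM reliability index reduces to a closed form, which gives a feel for the quantity being maximized. A minimal sketch with illustrative moments:

```python
import numpy as np
from scipy.stats import norm

def afosm_beta_linear(mu_r, sigma_r, mu_s, sigma_s):
    """Reliability index for the linear limit state g = R - S with
    independent normal resistance R and load S (textbook special case)."""
    beta = (mu_r - mu_s) / np.sqrt(sigma_r**2 + sigma_s**2)
    return beta, norm.cdf(-beta)    # failure probability Pf = Phi(-beta)

beta, pf = afosm_beta_linear(mu_r=100.0, sigma_r=10.0, mu_s=60.0, sigma_s=15.0)
print(round(beta, 3), f"{pf:.2e}")  # larger beta -> higher reliability
```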

  3. High-resolution imaging of expertise reveals reliable object selectivity in the fusiform face area related to perceptual performance.

    Science.gov (United States)

    McGugin, Rankin Williams; Gatenby, J Christopher; Gore, John C; Gauthier, Isabel

    2012-10-16

    The fusiform face area (FFA) is a region of human cortex that responds selectively to faces, but whether it supports a more general function relevant for perceptual expertise is debated. Although both faces and objects of expertise engage many brain areas, the FFA remains the focus of the strongest modular claims and the clearest predictions about expertise. Functional MRI studies at standard-resolution (SR-fMRI) have found responses in the FFA for nonface objects of expertise, but high-resolution fMRI (HR-fMRI) in the FFA [Grill-Spector K, et al. (2006) Nat Neurosci 9:1177-1185] and neurophysiology in face patches in the monkey brain [Tsao DY, et al. (2006) Science 311:670-674] reveal no reliable selectivity for objects. It is thus possible that FFA responses to objects with SR-fMRI are a result of spatial blurring of responses from nonface-selective areas, potentially driven by attention to objects of expertise. Using HR-fMRI in two experiments, we provide evidence of reliable responses to cars in the FFA that correlate with behavioral car expertise. Effects of expertise in the FFA for nonface objects cannot be attributed to spatial blurring beyond the scale at which modular claims have been made, and within the lateral fusiform gyrus, they are restricted to a small area (200 mm(2) on the right and 50 mm(2) on the left) centered on the peak of face selectivity. Experience with a category may be sufficient to explain the spatially clustered face selectivity observed in this region.

  4. Quantification of colour Doppler activity in the wrist in patients with rheumatoid arthritis - the reliability of different methods for image selection and evaluation

    DEFF Research Database (Denmark)

    Ellegaard, K.; Torp-Pedersen, S.; Lund, H.

    2008-01-01

    Purpose: The amount of colour Doppler activity in the inflamed synovium is used to quantify inflammatory activity. The measurements may vary due to image selection, quantification method, and point in the cardiac cycle. This study investigated the test-retest reliability of ultrasound colour Doppler measurements in the wrist of patients with rheumatoid arthritis (RA) using different selection and quantification methods. Materials and Methods: 14 patients with RA had their wrist scanned twice by the same investigator with an interval of 30 minutes. The images for analysis were selected either... ...was obtained when the images were selected guided by colour Doppler and the subsequent quantification was done in an area defined by anatomical structures. With this method, the intra-class coefficient ICC (2,1) was 0.95 and the within-subject SD (SW) was 0.017, indicating good reliability. In contrast, poor...

  5. Reliability analysis based on the losses from failures.

    Science.gov (United States)

    Todinov, M T

    2006-04-01

    early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages to cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different number of failures occurring during a specified time interval (e.g., during one year). The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
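
    The NPV cash-flow idea sketched in this record can be illustrated with a small Monte Carlo: simulate the number of failures per year, price each failure, and discount. The failure rate, cost per failure and discount rate below are invented, and the simple Poisson/fixed-cost model is an assumption rather than the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(3)

def npv_losses(failure_rate, cost_per_failure, horizon_years, discount_rate, n_sims):
    """Monte Carlo NPV of losses from failures: Poisson failure counts per
    year, fixed loss per failure (illustrative assumptions), discounted."""
    years = np.arange(1, horizon_years + 1)
    counts = rng.poisson(failure_rate, size=(n_sims, horizon_years))
    discounted = counts * cost_per_failure / (1 + discount_rate) ** years
    return discounted.sum(axis=1)

npv = npv_losses(failure_rate=0.8, cost_per_failure=250_000,
                 horizon_years=10, discount_rate=0.07, n_sims=50_000)
# The full distribution reveals variability that an expected-value model hides.
print(round(npv.mean()), round(np.percentile(npv, 95)))
```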

  6. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    Science.gov (United States)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

    Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.

  7. Improving preimplantation genetic diagnosis (PGD) reliability by selection of sperm donor with the most informative haplotype.

    Science.gov (United States)

    Malcov, Mira; Gold, Veronica; Peleg, Sagit; Frumkin, Tsvia; Azem, Foad; Amit, Ami; Ben-Yosef, Dalit; Yaron, Yuval; Reches, Adi; Barda, Shimi; Kleiman, Sandra E; Yogev, Leah; Hauser, Ron

    2017-04-26

    This study aims to describe a novel strategy that increases the accuracy and reliability of PGD in patients using sperm donation by pre-selecting a donor whose haplotype does not overlap the carrier's. A panel of 4-9 informative polymorphic markers, flanking the mutation in carriers of autosomal dominant/X-linked disorders, was tested in the DNA of sperm donors before PGD. Whenever the lengths of the donors' repeats overlapped those of the women, additional donors' DNA samples were analyzed. The donor that demonstrated the minimal overlap with the patient was selected for IVF. In 8 out of 17 carriers the markers of the initially chosen donors overlapped the patients' alleles, and 2-8 additional sperm donors for each patient were haplotyped. The selection of additional sperm donors increased the number of informative markers and reduced the misdiagnosis risk from 6.00 ± 7.48% to 0.48 ± 0.68%. The PGD results were confirmed and no misdiagnosis was detected. Our study demonstrates that pre-selecting a sperm donor whose haplotype has minimal overlap with the female's haplotype is critical for reducing the misdiagnosis risk and ensuring a reliable PGD. This strategy may contribute to preventing the transmission of affected IVF-PGD embryos using a simple and economical procedure. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. DNA testing of donors was approved by the institutional Helsinki committee (registration number 319-08TLV, 2008). The present study was approved by the institutional Helsinki committee (registration number 0385-13TLV, 2013).

  8. The value of reliability

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Karlström, Anders

    2010-01-01

    We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless of the form of the standardised distribution of trip durations. This insight provides a unification of the scheduling model and models that include the standard deviation of trip duration directly as an argument in the cost or utility function. The results generalise approximately to the case where the mean...

  9. Innovations in Agriculture in Oregon: Farmers Irrigation District Improves Water Quality, Maximizes Water Conservation, and Generates Clean, Renewable Energy

    Science.gov (United States)

    The Hood River Farmers Irrigation District used $36.2 million in CWSRF loans for a multiple-year endeavor to convert the open canal system to a piped, pressurized irrigation system to maximize water conservation and restore reliable water delivery to crops

  10. Scyllac equipment reliability analysis

    International Nuclear Information System (INIS)

    Gutscher, W.D.; Johnson, K.J.

    1975-01-01

    Most of the failures in Scyllac can be related to crowbar trigger cable faults. A new cable has been designed, procured, and is currently undergoing evaluation. When the new cable has been proven, it will be worked into the system as quickly as possible without causing too much additional down time. The cable-tip problem may not be easy or even desirable to solve. A tightly fastened permanent connection that maximizes contact area would be more reliable than the plug-in type of connection in use now, but it would make system changes and repairs much more difficult. The balance of the failures have such a low occurrence rate that they do not cause much down time and no major effort is underway to eliminate them. Even though Scyllac was built as an experimental system and has many thousands of components, its reliability is very good. Because of this the experiment has been able to progress at a reasonable pace

  11. AUC-Maximizing Ensembles through Metalearning.

    Science.gov (United States)

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
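
    A minimal sketch of the general idea of AUC-maximizing metalearning: choose ensemble weights over base-learner predictions so that the AUC of the weighted combination is maximized. The sketch below uses simulated cross-validated predictions and a derivative-free optimizer (differential evolution), since AUC is a piecewise-constant objective; the Super Learner framework and the specific metalearners evaluated in the paper are not reproduced here.

      import numpy as np
      from scipy.optimize import differential_evolution
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      n, p = 2000, 3
      y = rng.integers(0, 2, n)                                  # binary outcome
      # simulated cross-validated predictions from three base learners of varying quality
      Z = np.column_stack([y + rng.normal(0, s, n) for s in (0.8, 1.2, 2.0)])

      def neg_auc(w):
          w = np.abs(w) / (np.abs(w).sum() + 1e-12)              # project weights onto the simplex
          return -roc_auc_score(y, Z @ w)                        # AUC of the weighted ensemble

      # derivative-free search, because the AUC surface is non-smooth in the weights
      res = differential_evolution(neg_auc, bounds=[(0, 1)] * p, seed=0, tol=1e-6)
      w = np.abs(res.x) / np.abs(res.x).sum()
      print("AUC-maximizing weights:", np.round(w, 3))
      print("ensemble AUC: %.3f  vs best single learner: %.3f"
            % (-res.fun, max(roc_auc_score(y, Z[:, j]) for j in range(p))))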

  12. Test-retest reliability of selected items of the Health Behaviour in School-aged Children (HBSC) survey questionnaire in Beijing, China

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2010-08-01

    Full Text Available Abstract Background Children's health and health behaviour are essential for their development and it is important to obtain abundant and accurate information to understand young people's health and health behaviour. The Health Behaviour in School-aged Children (HBSC) study is among the first large-scale international surveys on adolescent health through self-report questionnaires. So far, more than 40 countries in Europe and North America have been involved in the HBSC study. The purpose of this study is to assess the test-retest reliability of selected items in the Chinese version of the HBSC survey questionnaire in a sample of adolescents in Beijing, China. Methods A sample of 95 male and female students aged 11 or 15 years old participated in a test and retest with a three-week interval. Student identity numbers of respondents were utilized to permit matching of test-retest questionnaires. 23 items concerning physical activity, sedentary behaviour, sleep and substance use were evaluated by using the percentage of response shifts and the single-measure Intraclass Correlation Coefficients (ICC) with 95% confidence intervals (CI) for all respondents and stratified by gender and age. Items on substance use were only evaluated for school children aged 15 years old. Results The percentage of no response shift between test and retest varied from 32% for the item on computer use at weekends to 92% for the three items on smoking. Of all the 23 items evaluated, 6 items (26%) showed moderate reliability, 12 items (52%) displayed substantial reliability and 4 items (17%) indicated almost perfect reliability. No gender or age group difference in test-retest reliability was found except for a few items on sedentary behaviour. Conclusions The overall findings of this study suggest that most selected indicators in the HBSC survey questionnaire have satisfactory test-retest reliability for the students in Beijing. Further test-retest studies in a large...

  13. Optimized Interface Diversity for Ultra-Reliable Low Latency Communication (URLLC)

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Liu, Rongkuan; Popovski, Petar

    2017-01-01

    An important ingredient of the future 5G systems will be Ultra-Reliable Low-Latency Communication (URLLC). A way to offer URLLC without intervention in the baseband/PHY layer design is to use interface diversity and integrate multiple communication interfaces, each interface based on a different technology. Our approach is to use rateless codes to seamlessly distribute coded payload and redundancy data across multiple available communication interfaces. We formulate an optimization problem to find the payload allocation weights that maximize the reliability at specific target latency values...

  14. Failure database and tools for wind turbine availability and reliability analyses. The application of reliability data for selected wind turbines

    DEFF Research Database (Denmark)

    Kozine, Igor; Christensen, P.; Winther-Jensen, M.

    2000-01-01

    The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine components and subsystems and wind turbines as a whole, as well as for assessing wind turbine availability while ranking the ... similar safety systems. The database was established with the Microsoft Access Database Management System; the software for reliability and availability assessments was created with Visual Basic. ... the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting WTB availability and the reliability of all the components and systems, especially the safety system. The report consists of a description of the theoretical...

  15. Is CP violation maximal

    International Nuclear Information System (INIS)

    Gronau, M.

    1984-01-01

    Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase convention ambiguity is overcome by introducing a CP violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation when assuming that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references

  16. Shareholder, stakeholder-owner or broad stakeholder maximization

    OpenAIRE

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating stakeholder-owner. Maximization of shareholder value is a special case of owner-maximization, and only under quite restrictive assumptions is shareholder maximization larger than or equal to stakeholder-owner...

  17. Informative sensor selection and learning for prediction of lower limb kinematics using generative stochastic neural networks.

    Science.gov (United States)

    Eunsuk Chong; Taejin Choi; Hyungmin Kim; Seung-Jong Kim; Yoha Hwang; Jong Min Lee

    2017-07-01

    We propose a novel approach for selecting useful input sensors as well as learning a mathematical model for predicting lower limb joint kinematics. We applied a feature selection method based on mutual information, called variational information maximization, which has been reported as the state-of-the-art work among information-based feature selection methods. The main difficulty in applying the method is estimating a reliable probability density of the input and output data, especially when the data are high dimensional and real-valued. We addressed this problem by applying a generative stochastic neural network called the restricted Boltzmann machine, through which we could perform sampling-based probability estimation. The mutual information between inputs and outputs is evaluated in each backward sensor elimination step, and the least informative sensor is removed along with its network connections. The entire network is fine-tuned by maximizing conditional likelihood in each step. Experimental results are shown for 4 healthy subjects walking at various speeds, with 64 sensor measurements recorded from electromyogram, acceleration, and foot-pressure sensors attached to both lower limbs, for predicting hip and knee joint angles. For a test set of walking at arbitrary speed, our results show that the suggested method can select informative sensors while maintaining good prediction accuracy.
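
    The backward-elimination idea can be sketched compactly. The example below substitutes scikit-learn's nearest-neighbour mutual information estimator for the RBM-based density estimation described in the record, uses synthetic sensor data, and omits the per-step fine-tuning of the predictive network; it only illustrates the ranking-and-removal loop.

      import numpy as np
      from sklearn.feature_selection import mutual_info_regression

      rng = np.random.default_rng(0)
      n, d = 1000, 8                                   # samples, candidate sensors
      X = rng.normal(size=(n, d))
      y = 1.5 * X[:, 0] - 0.8 * X[:, 2] + 0.1 * rng.normal(size=n)   # synthetic joint-angle target

      sensors = list(range(d))
      while len(sensors) > 3:                          # keep the 3 most informative sensors
          mi = mutual_info_regression(X[:, sensors], y, random_state=0)
          worst = sensors[int(np.argmin(mi))]          # least informative remaining sensor
          sensors.remove(worst)
          # (in the paper, the predictive network is fine-tuned after each removal)
      print("selected sensors:", sensors)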

  18. The risk function approach to profit maximizing estimation in direct mailing

    NARCIS (Netherlands)

    Muus, Lars; Scheer, Hiek van der; Wansbeek, Tom

    1999-01-01

    When the parameters of the model describing consumers' reaction to a mailing are known, addresses for a future mailing can be selected in a profit-maximizing way. Usually, these parameters are unknown and have to be estimated. Standard estimation methods are based on a quadratic loss function. In the present...

  19. Neuron selection based on deflection coefficient maximization for the neural decoding of dexterous finger movements.

    Science.gov (United States)

    Kim, Yong-Hee; Thakor, Nitish V; Schieber, Marc H; Kim, Hyoung-Nam

    2015-05-01

    Future generations of brain-machine interface (BMI) will require more dexterous motion control such as hand and finger movements. Since a population of neurons in the primary motor cortex (M1) area is correlated with finger movements, neural activities recorded in the M1 area are used to reconstruct an intended finger movement. In a BMI system, decoding discrete finger movements from a large number of input neurons does not guarantee a higher decoding accuracy in spite of the increase in computational burden. Hence, we hypothesize that selecting neurons important for coding dexterous flexion/extension of finger movements would improve the BMI performance. In this paper, two metrics are presented to quantitatively measure the importance of each neuron, based on Bayes risk minimization and deflection coefficient maximization in a statistical decision problem. Since motor cortical neurons are active with movements of several different fingers, the proposed method is more suitable for discrete decoding of flexion-extension finger movements than previous methods for decoding reaching movements. In particular, the proposed metrics yielded high decoding accuracies across all subjects and also in the case of including six combined two-finger movements. Although our data acquisition and analysis were done off-line as post-processing, our results point to the significance of highly coding neurons in improving BMI performance.
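
    As an illustration of ranking neurons by a deflection-style metric, the sketch below uses one common definition of the deflection coefficient, (mu1 - mu0)^2 / var0, on simulated firing rates; the exact metric definitions and the Bayes-risk formulation used in the paper are not reproduced, and all data are synthetic.

      import numpy as np

      rng = np.random.default_rng(0)
      n_neurons, n_trials = 20, 200
      labels = rng.integers(0, 2, n_trials)                 # two finger-movement classes
      rates = rng.poisson(5, (n_trials, n_neurons)).astype(float)
      rates[labels == 1, :5] += rng.poisson(4, (np.sum(labels == 1), 5))  # 5 informative neurons

      def deflection(x, y):
          """Deflection coefficient (mu1 - mu0)^2 / var0 for one neuron's firing rates."""
          mu0, mu1 = x[y == 0].mean(), x[y == 1].mean()
          var0 = x[y == 0].var() + 1e-12
          return (mu1 - mu0) ** 2 / var0

      scores = np.array([deflection(rates[:, j], labels) for j in range(n_neurons)])
      ranking = np.argsort(scores)[::-1]                    # most important neurons first
      print("neurons ranked by deflection coefficient:", ranking[:8])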

  20. Task-oriented maximally entangled states

    International Nuclear Information System (INIS)

    Agrawal, Pankaj; Pradhan, B

    2010-01-01

    We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.

  1. Reliability analysis for dynamic configurations of systems with three failure modes

    International Nuclear Information System (INIS)

    Pham, Hoang

    1999-01-01

    Analytical models for computing the reliability of dynamic configurations of systems, such as majority and k-out-of-n, assuming that units and systems are subject to three types of failures: stuck-at-0, stuck-at-1, and stuck-at-x are presented in this paper. Formulas for determining the optimal design policies that maximize the reliability of dynamic k-out-of-n configurations subject to three types of failures are defined. The comparisons of the reliability modeling functions are also obtained. The optimum system size and threshold value k that minimize the expected cost of dynamic k-out-of-n configurations are also determined
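
    For reference, the classical k-out-of-n:G reliability (a single failure mode, independent identical components) can be computed directly from a binomial sum; the paper's stuck-at-0/1/x analysis generalizes this basic model. A minimal sketch:

      from math import comb

      def k_out_of_n_reliability(k, n, p):
          """Probability that at least k of n i.i.d. components (each working with
          probability p) function -- the classical k-out-of-n:G model, without the
          stuck-at-0/1/x failure modes treated in the paper."""
          return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

      # example: 2-out-of-3 majority system with component reliability 0.9
      print(k_out_of_n_reliability(2, 3, 0.9))   # 0.972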

  2. Reference ranges and reliability of transabdominal ultrasonographic renal dimensions in thoroughbred horses.

    Science.gov (United States)

    Draper, Alexandra C E; Bowen, I Mark; Hallowell, Gayle D

    2012-01-01

    The aims of this study were to establish a normal reference range (mean ± 2 SD) and assess the reliability of renal dimensions obtained using transabdominal ultrasonography in Thoroughbred horses (n = 7). A minimum of three ultrasonographic cineloops were obtained from each intercostal space and the left paralumbar fossa by two observers daily for three consecutive days. Renal length, width, and thickness and cortex, medulla, and pelvic dimensions were obtained. Measurements were undertaken by both observers, who were unaware of prior measurements, to assess reproducibility, and were measured on three separate occasions to evaluate short-term measurement repeatability. Measurements from images obtained by both operators were compared to evaluate image repeatability. The left kidney was consistently identified in the left 15th-17th intercostal space and the paralumbar fossa, with maximal length in the 16th intercostal space (12.7 ± 2.0 cm) and maximal width in the paralumbar fossa (7.9 ± 1.1 cm). The right kidney was consistently identified in the right 15th-17th intercostal space, with maximal length and maximal width in the 15th intercostal space (16.0 ± 0.7 cm and 7.9 ± 1.0 cm). Reproducibility, image repeatability, and measurement repeatability were good to excellent, although less so for the smaller structures. There were no differences in renal dimensions between horses. Overall, renal ultrasonography was reliable and a normal reference range for Thoroughbred horses was established. Renal dimensions vary between rib spaces. As repeatability and reproducibility were excellent for renal length and width, it may be prudent to use those measurements in rib spaces where parameters were maximal. © 2011 Veterinary Radiology & Ultrasound.

  3. FLOUTING MAXIMS IN INDONESIA LAWAK KLUB CONVERSATION

    Directory of Open Access Journals (Sweden)

    Rahmawati Sukmaningrum

    2017-04-01

    Full Text Available This study aims to identify the types of maxims flouted in the conversation in the famous comedy show Indonesia Lawak Club. Likewise, it also tries to reveal the speakers' intention in flouting the maxims in the conversation during the show. The writers use a descriptive qualitative method in conducting this research. The data are taken from the dialogue of Indonesia Lawak Club and then analyzed based on Grice's cooperative principles. The researchers read the dialogue's transcripts, identify the maxims, and interpret the data to find the speakers' intention for flouting the maxims in the communication. The results show that there are four types of maxims flouted in the dialogue: maxim of quality (23%), maxim of quantity (11%), maxim of manner (31%), and maxim of relevance (35%). Flouting the maxims in the conversations is intended to make the speakers feel uncomfortable with the conversation, show arrogance, show disagreement or agreement, and ridicule other speakers.

  4. VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS

    Directory of Open Access Journals (Sweden)

    Desak Putu Eka Pratiwi

    2015-07-01

    Full Text Available Maxim is a principle that must be obeyed by all participants textually and interpersonally in order to have a smooth communication process. Conversational maxims are divided into four types, namely maxim of quality, maxim of quantity, maxim of relevance, and maxim of manner. Violation of a maxim may occur in a conversation in which the information the speaker has is not delivered well to the speaking partner. Violation of a maxim in a conversation will result in an awkward impression. Examples of violation include information that is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate the maxims to create unique and controversial advertisements. This study aims to examine the violation of maxims in conversations in TV ads. The source of data in this research is food advertisements aired on TV media. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented with an informal method. The results of this study show an interesting fact: the violation of maxims in the conversations found in the advertisements actually makes the advertisements very attractive and gives them a high value.

  5. Finding Maximal Quasiperiodicities in Strings

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Pedersen, Christian N. S.

    2000-01-01

    Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log^2 n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.

  6. Objective method to report planner-independent skin/rib maximal dose in balloon-based high dose rate (HDR) brachytherapy for breast cancer

    International Nuclear Information System (INIS)

    Kim, Yongbok; Trombetta, Mark G.

    2011-01-01

    Purpose: An objective method was proposed and compared with a manual selection method to determine the planner-independent skin and rib maximal dose in balloon-based high dose rate (HDR) brachytherapy planning. Methods: The maximal dose to skin and rib was objectively extracted from a dose volume histogram (DVH) of the skin and rib volumes. A virtual skin volume was produced by expanding the skin surface in three dimensions (3D) external to the breast with a certain thickness in the planning computed tomography (CT) images. Therefore, the maximal dose to this volume occurs on the skin surface, the same as with a conventional manual selection method. The rib was also delineated in the planning CT images and its maximal dose was extracted from its DVH. The absolute (Abdiff = |Dmax(Man) - Dmax(DVH)|) and relative (Rediff[%] = 100 x |Dmax(Man) - Dmax(DVH)| / Dmax(DVH)) maximal skin and rib dose differences between the manual selection method (Dmax(Man)) and the objective method (Dmax(DVH)) were measured for 50 balloon-based HDR (25 MammoSite and 25 Contura) patients. Results: The average ± standard deviation of the maximal dose difference was 1.67% ± 1.69% of the prescribed dose (PD). No statistical difference was observed between MammoSite and Contura patients for either the Abdiff or the Rediff[%] values. However, a statistically significant difference in Abdiff was observed between the higher dose range (Dmax > 90%) and the lower dose range (Dmax < 90%): 2.16% ± 1.93% vs 1.19% ± 1.25%, with a p value of 0.0049. However, the Rediff[%] analysis eliminated the inverse square factor and there was no statistically significant difference (p value = 0.8931) between the high and low dose ranges. Conclusions: The objective method, using volumetric information of skin and rib, can determine the planner-independent maximal dose compared with the manual selection method. However, the difference was <2% of PD, on average, if appropriate attention is paid to selecting a manual dose point in the 3D planning CT images.

  7. Between-day reliability of a method for non-invasive estimation of muscle composition.

    Science.gov (United States)

    Simunič, Boštjan

    2012-08-01

    Tensiomyography is a method for valid and non-invasive estimation of skeletal muscle fibre type composition. The validity of selected temporal tensiomyographic measures has been well established recently; there is, however, no evidence regarding the method's between-day reliability. Therefore, it is the aim of this paper to establish the between-day repeatability of tensiomyographic measures in three skeletal muscles. For three consecutive days, 10 healthy male volunteers (mean ± SD: age 24.6 ± 3.0 years; height 177.9 ± 3.9 cm; weight 72.4 ± 5.2 kg) were examined in a supine position. Four temporal measures (delay, contraction, sustain, and half-relaxation time) and maximal amplitude were extracted from the displacement-time tensiomyogram. A reliability analysis was performed with calculations of bias, random error, coefficient of variation (CV), standard error of measurement, and intra-class correlation coefficient (ICC) with a 95% confidence interval. The analysis demonstrated excellent agreement (ICCs were over 0.94 in 14 out of 15 tested parameters). However, lower CV was observed in half-relaxation time, presumably because of the specifics of the parameter definition itself. These data indicate that for the three muscles tested, tensiomyographic measurements were reproducible across consecutive test days. Furthermore, we identified the most likely origin of the lowest reliability detected in half-relaxation time. Copyright © 2012 Elsevier Ltd. All rights reserved.
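
    The reliability statistics named above (ICC, CV, standard error of measurement) can be computed from a subjects-by-sessions table of measurements. The sketch below assumes the ICC(2,1) two-way random-effects, absolute-agreement formulation and one common SEM definition; the paper's exact choices may differ, and the data are simulated.

      import numpy as np

      def icc_2_1(data):
          """ICC(2,1): two-way random effects, absolute agreement, single measure.
          data: (n subjects) x (k sessions) array."""
          n, k = data.shape
          grand = data.mean()
          msr = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)      # subjects mean square
          msc = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)      # sessions mean square
          sse = ((data - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
          mse = sse / ((n - 1) * (k - 1))                                   # residual mean square
          icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
          sem = np.sqrt(mse)                              # one common SEM definition
          cv = 100 * sem / grand                          # within-subject CV (%)
          return icc, sem, cv

      rng = np.random.default_rng(0)
      subject_effect = rng.normal(30, 5, (10, 1))          # 10 subjects, true level around 30 ms
      data = subject_effect + rng.normal(0, 1.5, (10, 3))  # 3 test days with measurement noise
      print("ICC(2,1)=%.2f  SEM=%.2f  CV=%.1f%%" % icc_2_1(data))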

  8. The relative reliability of actively participating and passively observing raters in a simulation-based assessment for selection to specialty training in anaesthesia.

    Science.gov (United States)

    Roberts, M J; Gale, T C E; Sice, P J A; Anderson, I R

    2013-06-01

    Selection to specialty training is a high-stakes assessment demanding valuable consultant time. In one initial entry level and two higher level anaesthesia selection centres, we investigated the feasibility of using staff participating in simulation scenarios, rather than observing consultants, to rate candidate performance. We compared participant and observer scores using four different outcomes: inter-rater reliability; score distributions; correlation of candidate rankings; and percentage of candidates whose selection might be affected by substituting participants' for observers' ratings. Inter-rater reliability between observers was good (correlation coefficient 0.73-0.96) but lower between participants (correlation coefficient 0.39-0.92), particularly at higher level where participants also rated candidates more favourably than did observers. Station rank orderings were strongly correlated between the rater groups at entry level (rho 0.81, p training posts available. We conclude that using participating raters is feasible at initial entry level only. Anaesthesia © 2013 The Association of Anaesthetists of Great Britain and Ireland.

  9. Shareholder, stakeholder-owner or broad stakeholder maximization

    DEFF Research Database (Denmark)

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating ... including the shareholders of a company. Although it may be the ultimate goal for Corporate Social Responsibility to achieve this kind of maximization, broad stakeholder maximization is quite difficult to give a precise definition. There is no one-dimensional measure to add different stakeholder benefits ... not traded on the market, and therefore there is no possibility for practical application. Broad stakeholder maximization instead in practical applications becomes satisfying certain stakeholder demands, so that the practical application will be stakeholder-owner maximization under constraints defined...

  10. On the maximal superalgebras of supersymmetric backgrounds

    International Nuclear Information System (INIS)

    Figueroa-O'Farrill, Jose; Hackett-Jones, Emily; Moutsopoulos, George; Simon, Joan

    2009-01-01

    In this paper we give a precise definition of the notion of a maximal superalgebra of certain types of supersymmetric supergravity backgrounds, including the Freund-Rubin backgrounds, and propose a geometric construction extending the well-known construction of its Killing superalgebra. We determine the structure of maximal Lie superalgebras and show that there is a finite number of isomorphism classes, all related via contractions from an orthosymplectic Lie superalgebra. We use the structure theory to show that maximally supersymmetric waves do not possess such a maximal superalgebra, but that the maximally supersymmetric Freund-Rubin backgrounds do. We perform the explicit geometric construction of the maximal superalgebra of AdS 4 X S 7 and find that it is isomorphic to osp(1|32). We propose an algebraic construction of the maximal superalgebra of any background asymptotic to AdS 4 X S 7 and we test this proposal by computing the maximal superalgebra of the M2-brane in its two maximally supersymmetric limits, finding agreement.

  11. Reliability and Validity of Dual-Task Mobility Assessments in People with Chronic Stroke

    Science.gov (United States)

    Yang, Lei; He, Chengqi; Pang, Marco Yiu Chung

    2016-01-01

    Background The ability to perform a cognitive task while walking simultaneously (dual-tasking) is important in real life. However, the psychometric properties of dual-task walking tests have not been well established in stroke. Objective To assess the test-retest reliability, concurrent and known-groups validity of various dual-task walking tests in people with chronic stroke. Design Observational measurement study with a test-retest design. Methods Eighty-eight individuals with chronic stroke participated. The testing protocol involved four walking tasks (walking forward at self-selected and maximal speed, walking backward at self-selected speed, and crossing over obstacles) performed simultaneously with each of the three attention-demanding tasks (verbal fluency, serial 3 subtractions or carrying a cup of water). For each dual-task condition, the time taken to complete the walking task, the correct response rate (CRR) of the cognitive task, and the dual-task effect (DTE) for the walking time and CRR were calculated. Forty-six of the participants were tested twice within 3–4 days to establish test-retest reliability. Results The walking time in various dual-task assessments demonstrated good to excellent reliability [intraclass correlation coefficient ICC(2,1) = 0.70–0.93; relative minimal detectable change at the 95% confidence level (MDC95%) = 29%–45%]. The reliability of the CRR (ICC(2,1) = 0.58–0.81) and the DTE in walking time (ICC(2,1) = 0.11–0.80) was more varied. The reliability of the DTE in CRR (ICC(2,1) = −0.31 to 0.40) was poor to fair. The walking time and CRR obtained in various dual-task walking tests were moderately to strongly correlated with those of the dual-task Timed-up-and-Go test, thus demonstrating good concurrent validity. None of the tests could discriminate fallers (those who had sustained at least one fall in the past year) from non-fallers. Limitation The results are generalizable to community-dwelling individuals with chronic stroke only...
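
    The dual-task effect (DTE) mentioned above is conventionally the percentage change from single-task to dual-task performance; sign conventions differ between studies. A minimal sketch with made-up walking times:

      def dual_task_effect(single, dual):
          """Dual-task effect as percentage change relative to single-task performance.
          For walking time, a positive value means slower (worse) dual-task walking;
          sign conventions vary between studies."""
          return 100.0 * (dual - single) / single

      # example: walking time 10.2 s alone vs 12.9 s while doing serial subtractions
      print("DTE = %.1f%%" % dual_task_effect(10.2, 12.9))   # about 26.5%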

  12. Reliability estimates for selected sensors in fusion applications

    International Nuclear Information System (INIS)

    Cadwallader, L.C.

    1996-09-01

    This report presents the results of a study to define several types of sensors in use, the qualitative reliability (failure modes) and quantitative reliability (average failure rates) for these types of process sensors. Temperature, pressure, flow, and level sensors are discussed for water coolant and for cryogenic coolants. The failure rates that have been found are useful for risk assessment and safety analysis. Repair times and calibration intervals are also given when found in the literature. All of these values can also be useful to plant operators and maintenance personnel. Designers may be able to make use of these data when planning systems. The final chapter in this report discusses failure rates for several types of personnel safety sensors, including ionizing radiation monitors, toxic and combustible gas detectors, humidity sensors, and magnetic field sensors. These data could be useful to industrial hygienists and other safety professionals when designing or auditing for personnel safety

  13. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

    Directory of Open Access Journals (Sweden)

    Bruce Weaver

    2014-09-01

    Full Text Available Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well-known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
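
    Outside SPSS, the same EM-based approach can be sketched directly: estimate the mean vector and covariance matrix of incomplete multivariate-normal data by EM, then convert to a correlation matrix for use in factor analysis or reliability analysis. The following compact textbook EM implementation is an illustration only, not the SPSS MVA procedure or the macros described in the record.

      import numpy as np

      def em_mvn(X, n_iter=100):
          """EM estimates of mean and covariance for multivariate normal data
          with missing values (np.nan). A compact textbook implementation."""
          X = np.asarray(X, float)
          n, p = X.shape
          miss = np.isnan(X)
          mu = np.nanmean(X, axis=0)
          sigma = np.diag(np.nanvar(X, axis=0))
          for _ in range(n_iter):
              Xhat = X.copy()
              corr = np.zeros((p, p))                       # sum of conditional covariances
              for i in range(n):
                  m, o = miss[i], ~miss[i]
                  if not m.any():
                      continue
                  soo_inv = np.linalg.inv(sigma[np.ix_(o, o)])
                  smo = sigma[np.ix_(m, o)]
                  # E-step: conditional mean of the missing block given the observed block
                  Xhat[i, m] = mu[m] + smo @ soo_inv @ (X[i, o] - mu[o])
                  # and its conditional covariance, accumulated for the M-step
                  corr[np.ix_(m, m)] += sigma[np.ix_(m, m)] - smo @ soo_inv @ smo.T
              # M-step: update mean and covariance from the completed data
              mu = Xhat.mean(axis=0)
              d = Xhat - mu
              sigma = (d.T @ d + corr) / n
          return mu, sigma

      # toy example: 200 cases, 3 variables, roughly 20% of values missing at random
      rng = np.random.default_rng(0)
      Z = rng.multivariate_normal([0, 0, 0], [[1, .6, .3], [.6, 1, .5], [.3, .5, 1]], 200)
      Z[rng.random(Z.shape) < 0.2] = np.nan
      mu, sigma = em_mvn(Z)
      corr = sigma / np.sqrt(np.outer(np.diag(sigma), np.diag(sigma)))
      print(np.round(corr, 2))       # EM correlation matrix, usable as factor-analysis input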

  14. Modeling of the thermal physical process and study on the reliability of linear energy density for selective laser melting

    Directory of Open Access Journals (Sweden)

    Zhaowei Xiang

    2018-06-01

    Full Text Available A finite element model considering volume shrinkage and the powder-to-dense process of the powder layer in selective laser melting (SLM) is established. A comparison between models that consider and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is an effective method and has better accuracy with respect to the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating rate and cooling rate increase with increasing scan speed at constant laser power and increase with increasing laser power at constant scan speed as well. The simulation and experimental results reveal that linear energy density is not always reliable when used as a design parameter in SLM. Keywords: Selective laser melting, Volume shrinkage, Powder-to-dense process, Numerical modeling, Thermal analysis, Linear energy density

  15. Maximally multipartite entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio

    2008-06-01

    We introduce the notion of maximally multipartite entangled states of n qubits as a generalization of the bipartite case. These pure states have a bipartite entanglement that does not depend on the bipartition and is maximal for all possible bipartitions. They are solutions of a minimization problem. Examples for small n are investigated, both analytically and numerically.

  16. The Reliability of Individualized Load-Velocity Profiles.

    Science.gov (United States)

    Banyard, Harry G; Nosaka, K; Vernon, Alex D; Haff, G Gregory

    2017-11-15

    This study examined the reliability of peak velocity (PV), mean propulsive velocity (MPV), and mean velocity (MV) in the development of load-velocity profiles (LVP) in the full-depth free-weight back squat performed with maximal concentric effort. Eighteen resistance-trained men performed a baseline one-repetition maximum (1RM) back squat trial and three subsequent 1RM trials used for reliability analyses, with a 48-hour interval between trials. 1RM trials comprised lifts from six relative loads including 20, 40, 60, 80, 90, and 100% 1RM. Individualized LVPs for PV, MPV, or MV were derived from loads that were highly reliable based on the following criteria: intra-class correlation coefficient (ICC) >0.70, coefficient of variation (CV) ≤10%, and Cohen's d effect size (ES) below a predefined threshold. No significant differences were found (p > 0.05) between trials, movement velocities, or between linear regression versus second-order polynomial fits. PV(20-100%), MPV(20-90%), and MV(20-90%) are reliable and can be utilized to develop LVPs using linear regression. Conceptually, LVPs can be used to monitor changes in movement velocity and employed as a method for adjusting sessional training loads according to daily readiness.
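
    An individualized load-velocity profile of the kind described above is a straight-line fit of velocity against relative load; once fitted, it can be inverted to prescribe the load expected to produce a target velocity. A minimal sketch with invented data:

      import numpy as np

      # one athlete's mean concentric velocities (m/s) at relative loads (%1RM); invented values
      loads = np.array([20, 40, 60, 80, 90, 100], float)
      velocity = np.array([1.30, 1.10, 0.88, 0.62, 0.50, 0.38])

      slope, intercept = np.polyfit(loads, velocity, 1)      # linear LVP fit
      print("LVP: v = %.4f * load + %.2f" % (slope, intercept))

      # daily load prescription: what %1RM should produce a target velocity of 0.70 m/s?
      target_v = 0.70
      load_for_target = (target_v - intercept) / slope
      print("prescribe about %.0f%% of 1RM" % load_for_target)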

  17. Maximally Symmetric Composite Higgs Models.

    Science.gov (United States)

    Csáki, Csaba; Ma, Teng; Shu, Jing

    2017-09-29

    Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.

  18. Selected Methods for Increasing the Reliability of Electronic Security Systems

    Directory of Open Access Journals (Sweden)

    Paś Jacek

    2015-11-01

    Full Text Available The article presents issues related to different methods of increasing the reliability of electronic security systems (ESS), for example a fire alarm system (SSP). Reliability of the SSP, in the descriptive sense, is its capacity to carry out the preset function (e.g. fire protection of an airport, a port, a logistics base, etc.) at a certain time and under certain (e.g. environmental) conditions, despite possible non-compliance of a specific subset of the system's elements. An analysis of the available literature on ESS-SSP shows no studies on methods for increasing reliability (several works address similar topics, but with respect to burglary and robbery, i.e. intrusion). The analysis is based on the set of all paths in the system that preserve the suitability of the SSP for the mentioned fire-event scenario (elements critical for security).

  19. Self-Tuning Method for Increased Obstacle Detection Reliability Based on Internet of Things LiDAR Sensor Models.

    Science.gov (United States)

    Castaño, Fernando; Beruvides, Gerardo; Villalonga, Alberto; Haber, Rodolfo E

    2018-05-10

    On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of LiDAR sensors network for obstacle detection in the 'Internet of Things' (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique, and an error-based prediction model library that is composed of a multilayer perceptron neural network, and k-nearest neighbors and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors that are required to increase sensor reliability for obstacle localization tasks. In addition, a IoT driving assistance user scenario, connecting a five LiDAR sensor network is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds.
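
    A toy sketch of the Q-learning component described above: treat the number of active sensors as the action, and let the reward trade off detection success against sensor cost. The reward model and all numbers below are invented for illustration; the record's framework couples the learning to Webots simulations, point-cloud clustering, and error-prediction models, none of which are reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      actions = [1, 2, 3, 4, 5]                 # number of LiDAR sensors to activate
      q = np.zeros(len(actions))
      alpha, eps = 0.1, 0.2

      def reward(n_sensors):
          """Toy reward: detection succeeds more often with more sensors,
          but each extra sensor has a fixed cost (illustrative values only)."""
          p_detect = 1 - 0.35 ** n_sensors
          detected = rng.random() < p_detect
          return (1.0 if detected else -1.0) - 0.08 * n_sensors

      for step in range(5000):                  # single-state (bandit-style) Q-learning
          a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(q))
          q[a] += alpha * (reward(actions[a]) - q[a])

      print("learned Q-values:", np.round(q, 2))
      print("chosen network size:", actions[int(np.argmax(q))], "sensors")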

  20. Value maximizing maintenance policies under general repair

    International Nuclear Information System (INIS)

    Marais, Karen B.

    2013-01-01

    One class of maintenance optimization problems considers the notion of general repair maintenance policies where systems are repaired or replaced on failure. In each case the optimality is based on minimizing the total maintenance cost of the system. These cost-centric optimizations ignore the value dimension of maintenance and can lead to maintenance strategies that do not maximize system value. This paper applies these ideas to the general repair optimization problem using a semi-Markov decision process, discounted cash flow techniques, and dynamic programming to identify the value-optimal actions for any given time and system condition. The impact of several parameters on maintenance strategy, such as operating cost and revenue, system failure characteristics, repair and replacement costs, and the planning time horizon, is explored. This approach provides a quantitative basis on which to base maintenance strategy decisions that contribute to system value. These decisions are different from those suggested by traditional cost-based approaches. The results show (1) how the optimal action for a given time and condition changes as replacement and repair costs change, and identifies the point at which these costs become too high for profitable system operation; (2) that for shorter planning horizons it is better to repair, since there is no time to reap the benefits of increased operating profit and reliability; (3) how the value-optimal maintenance policy is affected by the system's failure characteristics, and hence whether it is worthwhile to invest in higher reliability; and (4) the impact of the repair level on the optimal maintenance policy. -- Highlights: •Provides a quantitative basis for maintenance strategy decisions that contribute to system value. •Shows how the optimal action for a given condition changes as replacement and repair costs change. •Shows how the optimal policy is affected by the system's failure characteristics. •Shows when it is
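
    The value-centric framing (operating revenue plus maintenance costs, discounted over a horizon) can be illustrated with a plain discounted value iteration over condition states with operate/repair/replace actions. This is a simplified stand-in for the paper's semi-Markov decision process; all transition probabilities and cash flows below are invented.

      import numpy as np

      states = range(4)                 # 0 = good ... 3 = failed
      actions = ["operate", "repair", "replace"]
      gamma = 0.95                      # one-period discount factor

      def step(s, a):
          """Return a list of (probability, next_state, cash_flow). Illustrative numbers."""
          if a == "replace":
              return [(1.0, 0, -100.0)]
          if a == "repair":                              # general (imperfect) repair
              return [(0.8, max(s - 1, 0), -30.0), (0.2, s, -30.0)]
          if s == 3:                                     # a failed system earns nothing
              return [(1.0, 3, 0.0)]
          degrade = 0.3 + 0.1 * s
          revenue = 50.0 - 5.0 * s                       # profit falls as condition worsens
          return [(1 - degrade, s, revenue), (degrade, s + 1, revenue)]

      v = np.zeros(4)
      for _ in range(500):                               # value iteration
          v = np.array([max(sum(p * (c + gamma * v[s2]) for p, s2, c in step(s, a))
                            for a in actions) for s in states])

      policy = [max(actions, key=lambda a: sum(p * (c + gamma * v[s2])
                                               for p, s2, c in step(s, a))) for s in states]
      print("state values:", np.round(v, 1))
      print("value-optimal policy:", policy)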

  1. Effects of ethnicity on the relationship between vertical jump and maximal power on a cycle ergometer

    Directory of Open Access Journals (Sweden)

    Rouis Majdi

    2016-06-01

    Full Text Available The aim of this study was to verify the impact of ethnicity on the maximal power-vertical jump relationship. Thirty-one healthy males, sixteen Caucasian (age: 26.3 ± 3.5 years; body height: 179.1 ± 5.5 cm; body mass: 78.1 ± 9.8 kg) and fifteen Afro-Caribbean (age: 24.4 ± 2.6 years; body height: 178.9 ± 5.5 cm; body mass: 77.1 ± 10.3 kg), completed three sessions during which vertical jump height and maximal power of the lower limbs were measured. The results showed that the values of vertical jump height and maximal power were higher for Afro-Caribbean participants (62.92 ± 6.7 cm and 14.70 ± 1.75 W∙kg-1) than for Caucasian ones (52.92 ± 4.4 cm and 12.75 ± 1.36 W∙kg-1). Moreover, very high reliability indices were obtained on vertical jump (e.g. 0.95 < ICC < 0.98) and maximal power performance (e.g. 0.75 < ICC < 0.97). However, multiple linear regression analysis showed that, for a given value of maximal power, the Afro-Caribbean participants jumped 8 cm higher than the Caucasians. Together, these results confirmed that ethnicity impacted the maximal power-vertical jump relationship over three sessions. In the current context of cultural diversity, the use of vertical jump performance as a predictor of muscular power should be considered with caution when dealing with populations of different ethnic origins.

  2. Maximal quantum Fisher information matrix

    International Nuclear Information System (INIS)

    Chen, Yu; Yuan, Haidong

    2017-01-01

    We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)

  3. Test-retest reliability of a handheld dynamometer for measurement of isometric cervical muscle strength.

    Science.gov (United States)

    Vannebo, Katrine Tranaas; Iversen, Vegard Moe; Fimland, Marius Steiro; Mork, Paul Jarle

    2018-03-02

    There is a lack of test-retest reliability studies of measurements of cervical muscle strength, taking into account gender and possible learning effects. To investigate test-retest reliability of measurement of maximal isometric cervical muscle strength by handheld dynamometry. Thirty women (age 20-58 years) and 28 men (age 20-60 years) participated in the study. Maximal isometric strength (neck flexion, neck extension, and right/left lateral flexion) was measured on three separate days at least five days apart by one evaluator. Intra-rater consistency tended to improve from day 1-2 measurements to day 2-3 measurements in both women and men. In women, the intra-class correlation coefficients (ICC) for day 2 to day 3 measurements were 0.91 (95% confidence interval [CI], 0.82-0.95) for neck flexion, 0.88 (95% CI, 0.76-0.94) for neck extension, 0.84 (95% CI, 0.68-0.92) for right lateral flexion, and 0.89 (95% CI, 0.78-0.95) for left lateral flexion. The corresponding ICCs among men were 0.86 (95% CI, 0.72-0.93) for neck flexion, 0.93 (95% CI, 0.85-0.97) for neck extension, 0.82 (95% CI, 0.65-0.91) for right lateral flexion and 0.73 (95% CI, 0.50-0.87) for left lateral flexion. This study describes a reliable and easy-to-administer test for assessing maximal isometric cervical muscle strength.

  4. Understanding Violations of Gricean Maxims in Preschoolers and Adults

    Directory of Open Access Journals (Sweden)

    Mako Okanda

    2015-07-01

    Full Text Available This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.

  5. Problematics of Reliability of Road Rollers

    Science.gov (United States)

    Stawowiak, Michał; Kuczaj, Mariusz

    2018-06-01

    This article addresses the reliability of road rollers used by a selected roadworks company. Information is presented on how the road rollers are serviced and how servicing affects their reliability. Attention was paid to the implementation of the maintenance plan with regard to the machines' operational time. The reliability of the road rollers was analyzed by determining and interpreting readiness coefficients.

  6. solveME: fast and reliable solution of nonlinear ME models

    DEFF Research Database (Denmark)

    Yang, Laurence; Ma, Ding; Ebrahim, Ali

    2016-01-01

    Background: Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models...

  7. Reliability and validity of two isometric squat tests.

    Science.gov (United States)

    Blazevich, Anthony J; Gill, Nicholas; Newton, Robert U

    2002-05-01

    The purpose of the present study was first to examine the reliability of isometric squat (IS) and isometric forward hack squat (IFHS) tests to determine if repeated measures on the same subjects yielded reliable results. The second purpose was to examine the relation between isometric and dynamic measures of strength to assess validity. Fourteen male subjects performed maximal IS and IFHS tests on 2 occasions and 1 repetition maximum (1-RM) free-weight squat and forward hack squat (FHS) tests on 1 occasion. The 2 tests were found to be highly reliable (intraclass correlation coefficient [ICC](IS) = 0.97 and ICC(IFHS) = 1.00). There was a strong relation between average IS and 1-RM squat performance, and between IFHS and 1-RM FHS performance (r(squat) = 0.77, r(FHS) = 0.76; p squat and FHS test performances (r squat and FHS test performance can be attributed to differences in the movement patterns of the tests

  8. Reliability and Availability of Cloud Computing

    CERN Document Server

    Bauer, Eric

    2012-01-01

    A holistic approach to service reliability and availability of cloud computing Reliability and Availability of Cloud Computing provides IS/IT system and solution architects, developers, and engineers with the knowledge needed to assess the impact of virtualization and cloud computing on service reliability and availability. It reveals how to select the most appropriate design for reliability diligence to assure that user expectations are met. Organized in three parts (basics, risk analysis, and recommendations), this resource is accessible to readers of diverse backgrounds and experience le

  9. Mechanically braked elliptical Wingate test: modification considerations, load optimization, and reliability.

    Science.gov (United States)

    Ozkaya, Ozgur; Colakoglu, Muzaffer; Kuzucu, Erinc O; Yildiztepe, Engin

    2012-05-01

    The 30-second, all-out Wingate test evaluates anaerobic performance using an upper or lower body cycle ergometer (cycle Wingate test). A recent study showed that using a modified electromagnetically braked elliptical trainer for Wingate testing (EWT) leads to greater power outcomes because of larger muscle group recruitment. The main purpose of this study was to modify an elliptical trainer using an easily understandable mechanical brake system instead of an electromagnetically braked modification. Our secondary aim was to determine a proper test load for the EWT to reveal the most efficient anaerobic test outcomes such as peak power (PP), average power (AP), minimum power (MP), power drop (PD), and fatigue index ratio (FI%) and to evaluate the retest reliability of the selected test load. Delta lactate responses (ΔLa) were also analyzed to confirm the anaerobic performance of the athletes. Thirty healthy and well-trained male university athletes were selected to participate in the study. By analysis of variance, an 18% body mass workload yielded significantly greater test outcomes (PP = 19.5 ± 2.4 W·kg, AP = 13.7 ± 1.7 W·kg, PD = 27.9 ± 5 W·s, FI% = 58.4 ± 3.3%, and ΔLa = 15.4 ± 1.7 mM) than the other (12-24% body mass) tested loads. The mechanically braked modification of an elliptical trainer successfully estimated anaerobic power and capacity. A workload of 18% body mass was optimal for measuring maximal and reliable anaerobic power outcomes. Anaerobic testing using an EWT may be more useful to athletes and coaches than traditional cycle ergometers because a greater proportion of muscle groups are worked during exercise on an elliptical trainer.

  10. Aerospace reliability applied to biomedicine.

    Science.gov (United States)

    Lalli, V. R.; Vargo, D. J.

    1972-01-01

    An analysis is presented that indicates that the reliability and quality assurance methodology selected by NASA to minimize failures in aerospace equipment can be applied directly to biomedical devices to improve hospital equipment reliability. The Space Electric Rocket Test project is used as an example of NASA application of reliability and quality assurance (R&QA) methods. By analogy a comparison is made to show how these same methods can be used in the development of transducers, instrumentation, and complex systems for use in medicine.

  11. Cryogenic Selective Surfaces

    Data.gov (United States)

    National Aeronautics and Space Administration — Selective surfaces have wavelength dependent emissivity/absorption. These surfaces can be designed to reflect solar radiation, while maximizing infrared emittance,...

  12. Durability and Reliability of Large Diameter HDPE Pipe for Water Main Applications (Web Report 4485)

    Science.gov (United States)

    Research validates HDPE as a suitable material for use in municipal piping systems, and more research may help users maximize their understanding of its durability and reliability. Overall, corrosion resistance, hydraulic efficiency, flexibility, abrasion resistance, toughness, f...

  13. On maximal massive 3D supergravity

    OpenAIRE

    Bergshoeff , Eric A; Hohm , Olaf; Rosseel , Jan; Townsend , Paul K

    2010-01-01

    ABSTRACT We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.

  14. MAXIMIZING THE BENEFITS OF ERP SYSTEMS

    Directory of Open Access Journals (Sweden)

    Paulo André da Conceição Menezes

    2010-04-01

    Full Text Available The ERP (Enterprise Resource Planning) systems have been consolidated in companies of different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied in different phases, such as the strategic priorities and strategic planning defined as ERP Strategy; the business processes review and ERP selection in the pre-implementation phase; the project management and ERP adaptation in the implementation phase; as well as the ERP revision and integration efforts in the post-implementation phase. Through rigorous use of case study methodology, this research led to developing and testing a framework for maximizing the benefits of ERP systems, and seeks to contribute to the generation of ERP initiatives that optimize their performance.

  15. Application of reliability methods in Ontario Hydro

    International Nuclear Information System (INIS)

    Jeppesen, R.; Ravishankar, T.J.

    1985-01-01

    Ontario Hydro have established a reliability program in support of its substantial nuclear program. Application of the reliability program to achieve both production and safety goals is described. The value of such a reliability program is evident in the record of Ontario Hydro's operating nuclear stations. The factors which have contributed to the success of the reliability program are identified as line management's commitment to reliability; selective and judicious application of reliability methods; establishing performance goals and monitoring the in-service performance; and collection, distribution, review and utilization of performance information to facilitate cost-effective achievement of goals and improvements. (orig.)

  16. THE DEVELOPMENT OF AN INSTRUMENT FOR MEASURING THE UNDERSTANDING OF PROFIT-MAXIMIZING PRINCIPLES.

    Science.gov (United States)

    MCCORMICK, FLOYD G.

    THE PURPOSE OF THE STUDY WAS TO DEVELOP AN INSTRUMENT FOR MEASURING PROFIT-MAXIMIZING PRINCIPLES IN FARM MANAGEMENT WITH IMPLICATIONS FOR VOCATIONAL AGRICULTURE. PRINCIPLES WERE IDENTIFIED FROM LITERATURE SELECTED BY AGRICULTURAL ECONOMISTS. FORTY-FIVE MULTIPLE-CHOICE QUESTIONS WERE REFINED ON THE BASIS OF RESULTS OF THREE PRETESTS AND…

  17. Towards early software reliability prediction for computer forensic tools (case study).

    Science.gov (United States)

    Abu Talib, Manar

    2016-01-01

    Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
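
    A common way to realize this kind of architecture-based prediction is a Cheung-style model: scale the control-transfer probabilities between components by per-component reliabilities and sum over all execution paths from entry to exit. The sketch below uses invented transition probabilities and reliabilities and omits the COSMIC-FFP functional sizing that the record's method incorporates.

      import numpy as np

      # control-flow transition probabilities between four components (component 3 is the
      # exit). All numbers are illustrative only.
      P = np.array([[0.0, 0.7, 0.3, 0.0],
                    [0.0, 0.0, 0.6, 0.4],
                    [0.0, 0.2, 0.0, 0.8],
                    [0.0, 0.0, 0.0, 0.0]])
      R = np.array([0.999, 0.995, 0.990, 0.998])   # per-component reliabilities

      # Cheung-style model: scale each transition by the reliability of the component
      # being left, then sum over all execution paths from the entry to the exit.
      M = R[:, None] * P
      S = np.linalg.inv(np.eye(4) - M)
      system_reliability = S[0, 3] * R[3]          # reach the exit and execute it correctly
      print("architecture-based reliability estimate: %.4f" % system_reliability)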

  18. Maximal Entanglement in High Energy Physics

    Directory of Open Access Journals (Sweden)

    Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli

    2017-11-01

    Full Text Available We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows reproducing QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.

  19. Maximal Inequalities for Dependent Random Variables

    DEFF Research Database (Denmark)

    Hoffmann-Jorgensen, Jorgen

    2016-01-01

    Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2,... be random variables with partial sums S_k = X_1 + ... + X_k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M_n = max(1 (...

  20. An ethical justification of profit maximization

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2010-01-01

    In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit-maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain forms of profit (and utility) maximizing actions are ruled out, e.g., by behavioural norms or formal institutions.

  1. Optimally Fortifying Logic Reliability through Criticality Ranking

    Directory of Open Access Journals (Sweden)

    Yu Bai

    2015-02-01

    Full Text Available With CMOS technology aggressively scaling towards the 22-nm node, modern FPGA devices face tremendous aging-induced reliability challenges due to bias temperature instability (BTI and hot carrier injection (HCI. This paper presents a novel anti-aging technique at the logic level that is both scalable and applicable for VLSI digital circuits implemented with FPGA devices. The key idea is to prolong the lifetime of FPGA-mapped designs by strategically elevating the VDD values of some LUTs based on their modular criticality values. Although the idea of scaling VDD in order to improve either energy efficiency or circuit reliability has been explored extensively, our study distinguishes itself by approaching this challenge through an analytical procedure, therefore being able to maximize the overall reliability of the target FPGA design by rigorously modeling the BTI-induced device reliability and optimally solving the VDD assignment problem. Specifically, we first develop a systematic framework to analytically model the reliability of an FPGA LUT (look-up table, which consists of both RAM memory bits and associated switching circuit. We also, for the first time, establish the relationship between signal transition density and a LUT’s reliability in an analytical way. This key observation further motivates us to define the modular criticality as the product of signal transition density and the logic observability of each LUT. Finally, we analytically prove, for the first time, that the optimal way to improve the overall reliability of a whole FPGA device is to fortify individual LUTs according to their modular criticality. To the best of our knowledge, this work is the first to draw such a conclusion.
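    The following sketch illustrates the criticality-ranking idea in the abstract: each LUT's modular criticality is taken as the product of its signal transition density and logic observability, and the most critical LUTs are fortified (assigned an elevated VDD) first. The LUT list, the numeric values, and the greedy one-unit-per-LUT budget are illustrative assumptions; the paper itself derives the VDD assignment analytically rather than greedily.

```python
# Hypothetical LUTs with (signal transition density, logic observability).
luts = {
    "lut_a": (0.42, 0.90),
    "lut_b": (0.10, 0.95),
    "lut_c": (0.35, 0.20),
    "lut_d": (0.28, 0.60),
}

def modular_criticality(density, observability):
    """Modular criticality as defined above: transition density x observability."""
    return density * observability

def fortify(luts, budget):
    """Greedy stand-in for the optimal assignment: elevate VDD for the most
    critical LUTs first, spending one (assumed) budget unit per LUT."""
    ranked = sorted(luts, key=lambda n: modular_criticality(*luts[n]), reverse=True)
    return ranked[:budget]

print(fortify(luts, budget=2))   # -> ['lut_a', 'lut_d']
```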

  2. Does mental exertion alter maximal muscle activation?

    Directory of Open Access Journals (Sweden)

    Vianney eRozand

    2014-09-01

    Full Text Available Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed in a randomized order three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.

  3. Determination of the exercise intensity that elicits maximal fat oxidation in individuals with obesity

    DEFF Research Database (Denmark)

    Jørgensen, Sune Dandanell; Præst, Charlotte Boslev; Søndergård, Stine Dam

    2017-01-01

    The graded exercise protocol was validated against a short continuous exercise (SCE) protocol, in which FatMax was determined from fat oxidation at rest and during 10-min continuous exercise at 35, 50 and 65% of maximal oxygen uptake (VO2max). Intraclass and Pearson correlation coefficients between… VO2max with the graded and the SCE protocol, respectively. In conclusion, there was a high-excellent correlation and a low CV between the two protocols, suggesting that the graded exercise protocol has a high inter-method reliability. However, considerable intra-individual variation and a trend…

  4. Seven Reliability Indices for High-Stakes Decision Making: Description, Selection, and Simple Calculation

    Science.gov (United States)

    Smith, Stacey L.; Vannest, Kimberly J.; Davis, John L.

    2011-01-01

    The reliability of data is a critical issue in decision-making for practitioners in the school. Percent Agreement and Cohen's kappa are the two most widely reported indices of inter-rater reliability; however, a recent Monte Carlo study on the reliability of multi-category scales found other indices to be more trustworthy given the type of data…
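    For reference, the two indices named above can be computed directly from paired ratings; the sketch below uses hypothetical two-rater, two-category data and is not tied to the simulation in the study.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which the two raters assign the same category."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(r1)
    po = percent_agreement(r1, r2)                                # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / n ** 2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical ratings of 10 observation intervals by two raters
rater1 = ["on", "on", "off", "on", "off", "on", "on", "off", "on", "off"]
rater2 = ["on", "off", "off", "on", "off", "on", "on", "off", "on", "on"]
print(percent_agreement(rater1, rater2))          # 0.8
print(round(cohens_kappa(rater1, rater2), 3))     # about 0.565
```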

  5. INDICATORS OF MAXIMAL FLEXOR FORCE OF LEFT AND RIGHT HAND FOR THE POLICE SELECTION CRITERIA PURPOSES

    Directory of Open Access Journals (Sweden)

    Milivoj Dopsaj

    2006-06-01

    factor the right hand participated with 95.8% force, while the left hand participated with 95.0% force. We have therefore demonstrated that, among the tested population, measurement of right hand force is the more representative estimate of the given variable. Based on the distribution of results for right hand force, as a function of the isolated cluster criterion, the distribution of the tested population across Clusters 1-7 is as follows: 18.53%, 27.94%, 24.62%, 17.98%, 8.02%, 2.63%, 0.28%, respectively. The value of the bordering minimum for right hand force of Cluster 2 is 56.87 DaN, which represents 18.5‰ (percentile) of the tested population. With regard to the tested policemen population between 19 and 24 years of age, the right hand grip force is the test of choice for estimation of maximal hand flexor force. The value of the inflexion point (point of separation with regard to the selection criterion – acceptable/unacceptable) is at the level of 56.87 DaN for the right hand grip force, which is placed at 18.5‰ (percentile) of the tested population.

  6. Reliability and validity of the test of incremental respiratory endurance measures of inspiratory muscle performance in COPD.

    Science.gov (United States)

    Formiga, Magno F; Roach, Kathryn E; Vital, Isabel; Urdaneta, Gisel; Balestrini, Kira; Calderon-Candelario, Rafael A; Campos, Michael A; Cahalin, Lawrence P

    2018-01-01

    The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Test-retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test-retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP and ID, but not MIP. The TIRE measures of MIP, SMIP and ID have excellent test-retest reliability and demonstrated known-groups validity in subjects with COPD. SMIP and ID also demonstrated evidence of moderate convergent validity and appear to be more stable measures in this patient population than the traditional MIP.
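    A test-retest ICC of the kind reported above can be computed from a subjects-by-sessions score matrix via the usual two-way ANOVA decomposition; the sketch below implements ICC(2,1) on hypothetical SMIP-like scores (the formula is standard, the data are made up).

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `scores` is an (n subjects x k sessions) array of repeated measurements."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-subject SS
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-session SS
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical SMIP-like scores for 5 subjects measured in 2 sessions
smip = [[310, 305], [250, 255], [410, 400], [180, 185], [290, 295]]
print(round(icc_2_1(smip), 3))   # close to 1 for highly repeatable scores
```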

  7. Selection and reporting of statistical methods to assess reliability of a diagnostic test: Conformity to recommended methods in a peer-reviewed journal

    International Nuclear Information System (INIS)

    Park, Ji Eun; Sung, Yu Sub; Han, Kyung Hwa

    2017-01-01

    To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology is necessary

  8. Selection and reporting of statistical methods to assess reliability of a diagnostic test: Conformity to recommended methods in a peer-reviewed journal

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ji Eun; Sung, Yu Sub [Dept. of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Han, Kyung Hwa [Dept. of Radiology, Research Institute of Radiological Science, Yonsei University College of Medicine, Seoul (Korea, Republic of); and others

    2017-11-15

    To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology is necessary.

  9. Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review.

    Science.gov (United States)

    Ratter, Julia; Radlinger, Lorenz; Lucas, Cees

    2014-09-01

    Are submaximal and maximal exercise tests reliable, valid and acceptable in people with chronic pain, fibromyalgia and fatigue disorders? Systematic review of studies of the psychometric properties of exercise tests. People older than 18 years with chronic pain, fibromyalgia and chronic fatigue disorders. Studies of the measurement properties of tests of physical capacity in people with chronic pain, fibromyalgia or chronic fatigue disorders were included. Studies were required to report: reliability coefficients (intraclass correlation coefficient, alpha reliability coefficient, limits of agreements and Bland-Altman plots); validity coefficients (intraclass correlation coefficient, Spearman's correlation, Kendall's tau coefficient, Pearson's correlation); or dropout rates. Fourteen studies were eligible: none had low risk of bias, 10 had unclear risk of bias and four had high risk of bias. The included studies evaluated: Åstrand test; modified Åstrand test; Lean body mass-based Åstrand test; submaximal bicycle ergometer test following a protocol other than the Åstrand test; 2-km walk test; 5-minute, 6-minute and 10-minute walk tests; shuttle walk test; and modified symptom-limited Bruce treadmill test. None of the studies assessed maximal exercise tests. Where they had been tested, reliability and validity were generally high. Dropout rates were generally acceptable. The 2-km walk test was not recommended in fibromyalgia. Moderate evidence was found for reliability, validity and acceptability of submaximal exercise tests in patients with chronic pain, fibromyalgia or chronic fatigue. There is no evidence about maximal exercise tests in patients with chronic pain, fibromyalgia and chronic fatigue. Copyright © 2014. Published by Elsevier B.V.

  10. On maximal surfaces in asymptotically flat space-times

    International Nuclear Information System (INIS)

    Bartnik, R.; Chrusciel, P.T.; O Murchadha, N.

    1990-01-01

    Existence of maximal and 'almost maximal' hypersurfaces in asymptotically flat space-times is established under boundary conditions weaker than those considered previously. We show in particular that every vacuum evolution of asymptotically flat data for the Einstein equations can be foliated by slices maximal outside a spatially compact set and that every (strictly) stationary asymptotically flat space-time can be foliated by maximal hypersurfaces. Amongst other uniqueness results, we show that maximal hypersurfaces can be used to 'partially fix' an asymptotic Poincare group. (orig.)

  11. Insulin resistance and maximal oxygen uptake

    DEFF Research Database (Denmark)

    Seibaek, Marie; Vestergaard, Henrik; Burchardt, Hans

    2003-01-01

    BACKGROUND: Type 2 diabetes, coronary atherosclerosis, and physical fitness all correlate with insulin resistance, but the relative importance of each component is unknown. HYPOTHESIS: This study was undertaken to determine the relationship between insulin resistance, maximal oxygen uptake, and the presence of either diabetes or ischemic heart disease. METHODS: The study population comprised 33 patients with and without diabetes and ischemic heart disease. Insulin resistance was measured by a hyperinsulinemic euglycemic clamp; maximal oxygen uptake was measured during a bicycle exercise test. RESULTS: There was a strong correlation between maximal oxygen uptake and insulin-stimulated glucose uptake (r = 0.7, p = 0.001), and maximal oxygen uptake was the only factor of importance for determining insulin sensitivity in a model which also included the presence of diabetes and ischemic heart disease. CONCLUSION…

  12. Improving Stochastic Communication Network Performance: Reliability vs. Throughput

    Science.gov (United States)

    1991-12-01

    increased to one, 2) arc survivabilities will be increased in increments of one tenth, and 3) the costs to increase arc survivabilities were equal and... This reliability value is then used to maximize the associated expected flow. For Network A, a budget of 800 produces a tradeoff point at (.58, 37)... Network B, for a budget of 2000, which allows a network reliability of one to be achieved, and a budget of 1200, which allows for a maximum 57

  13. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to detect welded joints. However, load levels, environmental magnetic field, and measurement noises make the MMM data dispersive and bring difficulty to quantitative evaluation. In order to promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along the longitudinal and horizontal lines by TSC-2M-8 instrument in the tensile fatigue experiments. The X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect the hidden crack earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at early and hidden damage stages. Considering the dispersion of MMM data, the K_vs statistical law is investigated, which shows that K_vs obeys a Gaussian distribution. So K_vs is the suitable MMM parameter to establish a reliability model of welded joints. At last, the original quantitative MMM reliability model is first presented based on the improved stress strength interference theory. It is shown that the reliability degree R gradually decreases with the decreasing of the residual life ratio T, and the maximal error between prediction reliability degree R1 and verification reliability degree R2 is 9.15%. This presented method provides a novel tool of reliability testing and evaluating in practical engineering for welded joints.
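    The stress-strength interference idea underlying the model can be sketched for the Gaussian case: with independent normally distributed strength and stress (standing in here for the Gaussian K_vs statistics mentioned above), reliability is the probability that strength exceeds stress. The numeric values below are placeholders, not data from the welded-joint experiments.

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def interference_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Stress-strength interference with independent Gaussian stress and
    strength: reliability R = P(strength > stress)."""
    beta = (mu_strength - mu_stress) / sqrt(sd_strength ** 2 + sd_stress ** 2)
    return normal_cdf(beta)

# Placeholder Gaussian parameters standing in for the K_vs-based quantities
print(round(interference_reliability(120.0, 15.0, 80.0, 10.0), 4))   # ~0.9868
```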

  14. Reliability of ultrasound for measurement of selected foot structures.

    Science.gov (United States)

    Crofts, G; Angin, S; Mickle, K J; Hill, S; Nester, C J

    2014-01-01

    Understanding the relationship between the lower leg muscles, foot structures and function is essential to explain how disease or injury may relate to changes in foot function and clinical pathology. The aim of this study was to investigate the inter-operator reliability of an ultrasound protocol to quantify features of: rear, mid and forefoot sections of the plantar fascia (PF); flexor hallucis brevis (FHB); flexor digitorum brevis (FDB); abductor hallucis (AbH); flexor digitorum longus (FDL); flexor hallucis longus (FHL); tibialis anterior (TA); and peroneus longus and brevis (PER). A sample of 6 females and 4 males (mean age 29.1 ± 7.2 years, mean BMI 25.5 ± 4.8) was recruited from a university student and staff population. Scans were obtained using a portable Venue 40 musculoskeletal ultrasound system (GE Healthcare UK) with a 5-13 MHz wideband linear array probe with a 12.7 mm × 47.1mm footprint by two operators in the same scanning session. Intraclass Correlation Coefficients (ICC) values for muscle thickness (ICC range 0.90-0.97), plantar fascia thickness (ICC range 0.94-0.98) and cross sectional muscle measurements (ICC range 0.91-0.98) revealed excellent inter-operator reliability. The limits of agreement, relative to structure size, ranged from 9.0% to 17.5% for muscle thickness, 11.0-18.0% for plantar fascia, and 11.0-26.0% for cross sectional area measurements. The ultrasound protocol implemented in this work has been shown to be reliable. It therefore offers the opportunity to quantify the structures concerned and better understand their contributions to foot function. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.

  15. POLITENESS MAXIM OF MAIN CHARACTER IN SECRET FORGIVEN

    Directory of Open Access Journals (Sweden)

    Sang Ayu Isnu Maharani

    2017-06-01

    Full Text Available The Maxim of Politeness is an interesting subject to be discussed, since politeness has been instilled in us from childhood. We are obliged to be polite to anyone, either in speaking or in acting. Somehow we manage to show politeness in our spoken expression even though our intention might not be so polite. For example, we must appreciate others' opinions although we may object to them. In this article the analysis of politeness is based on the maxims proposed by Leech, who proposed six types of politeness maxim. The discussion shows that the main characters (Kristen and Kami) use all types of maxim in their conversation. The most commonly used are the approbation maxim and the agreement maxim.

  16. Reliability-oriented multi-resource allocation in a stochastic-flow network

    International Nuclear Information System (INIS)

    Hsieh, C.-C.; Lin, M.-H.

    2003-01-01

    A stochastic-flow network consists of a set of nodes, including source nodes which supply various resources and sink nodes at which resource demands take place, and a collection of arcs whose capacities have multiple operational states. The network reliability of such a stochastic-flow network is the probability that resources can be successfully transmitted from source nodes through multi-capacitated arcs to sink nodes. Although the evaluation schemes of network reliability in stochastic-flow networks have been extensively studied in the literature, how to allocate various resources at source nodes in a reliable manner remains unanswered. In this study, a resource allocation problem in a stochastic-flow network is formulated that aims to determine the optimal resource allocation policy at source nodes, subject to given resource demands at sink nodes, such that the network reliability of the stochastic-flow network is maximized, and an algorithm for computing the optimal resource allocation is proposed that incorporates the principle of minimal path vectors. A numerical example is given to illustrate the proposed algorithm.
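    As a small illustration of the quantity being maximized, the sketch below estimates the reliability of a toy two-path stochastic-flow network by Monte Carlo sampling of the multi-state arc capacities; the exact minimal-path-vector evaluation used in the paper is replaced here by direct simulation, and the network and capacity distributions are invented for the example.

```python
import random

# Hypothetical source->sink network with two arc-disjoint paths (a1-a2, b1-b2).
# Each arc has a discrete capacity distribution {capacity: probability}.
arcs = {
    "a1": {0: 0.05, 1: 0.15, 2: 0.80},
    "a2": {0: 0.05, 2: 0.95},
    "b1": {0: 0.10, 1: 0.30, 3: 0.60},
    "b2": {0: 0.05, 3: 0.95},
}

def sample_capacity(dist, rng):
    """Draw one capacity state from a discrete distribution."""
    u, acc = rng.random(), 0.0
    for cap, p in dist.items():
        acc += p
        if u <= acc:
            return cap
    return cap   # guard against floating-point round-off

def network_reliability(demand, n_runs=100_000, seed=0):
    """Estimate P(max source->sink flow >= demand). For two disjoint paths the
    deliverable flow is the sum over paths of each path's bottleneck capacity."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_runs):
        c = {name: sample_capacity(d, rng) for name, d in arcs.items()}
        flow = min(c["a1"], c["a2"]) + min(c["b1"], c["b2"])
        ok += flow >= demand
    return ok / n_runs

print(network_reliability(demand=3))
```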

  17. Non-Weight-Bearing and Weight-Bearing Ultrasonography of Select Foot Muscles in Young, Asymptomatic Participants: A Descriptive and Reliability Study.

    Science.gov (United States)

    Battaglia, Patrick J; Mattox, Ross; Winchester, Brett; Kettner, Norman W

    The primary aim of this study was to determine the reliability of diagnostic ultrasound imaging for select intrinsic foot muscles using both non-weight-bearing and weight-bearing postures. Our secondary aim was to describe the change in muscle cross-sectional area (CSA) and dorsoplantar thickness when bearing weight. An ultrasound examination was performed with a linear ultrasound transducer operating between 9 and 12 MHz. Long-axis and short-axis ultrasound images of the abductor hallucis, flexor digitorum brevis, and quadratus plantae were obtained in both the non-weight-bearing and weight-bearing postures. Two examiners independently collected ultrasound images to allow for interexaminer and intraexaminer reliability calculation. The change in muscle CSA and dorsoplantar thickness when bearing weight was also studied. There were 26 participants (17 female) with a mean age of 25.5 ± 3.8 years and a mean body mass index of 28.0 ± 7.8 kg/m². Inter-examiner reliability was excellent when measuring the muscles in short axis (intraclass correlation coefficient >0.75) and fair to good in long axis (intraclass correlation coefficient >0.4). Intraexaminer reliability was excellent for the abductor hallucis and flexor digitorum brevis and ranged from fair to good to excellent for the quadratus plantae. Bearing weight did not reduce interexaminer or intraexaminer reliability. All muscles exhibited a significant increase in CSA when bearing weight. This is the first report to describe weight-bearing diagnostic ultrasound of the intrinsic foot muscles. Ultrasound imaging is reliable when imaging these muscles bearing weight. Furthermore, muscle CSA increases in the weight-bearing posture. Copyright © 2016. Published by Elsevier Inc.

  18. Reliability of the spent fuel identification for flask loading procedure used by COGEMA for fuel transport to La Hague

    International Nuclear Information System (INIS)

    Eid, M.; Zachar, M.; Pretesacque, P.

    1991-01-01

    The Spent Fuel Identification for Flask Loading (SFIFL) procedure designed by COGEMA is analysed and its reliability calculated. The reliability of the procedure is defined as the probability of transporting only approved fuel elements for a given number of shipments. The procedure describes a non-coherent system. A non-coherent system is one in which two successive failures could result in a success, from the system mission point of view. A technique that describes the system with the help of its maximal cuts (states) is used for calculations. A maximal cut containing more than one failure can split into two cuts (sub-states). Cuts splitting will enable us to analyse, in a systematic way, non-coherent systems with independent basic components. (author)

  19. Reliability of the spent fuel identification for flask loading procedure used by COGEMA for fuel transport to La Hague

    International Nuclear Information System (INIS)

    Eid, M.; Zachar, M.; Pretesacque, P.

    1990-01-01

    The Spent Fuel Identification for Flask Loading, SFIFL, procedure designed by COGEMA is analysed and its reliability is calculated. The reliability of the procedure is defined as the probability of transporting only approved fuel elements for a given number of shipments. The procedure describes a non-coherent system. A non-coherent system is one in which two successive failures could result in a success, from the system mission point of view. A technique that describes the system with the help of its maximal cuts (states) is used for calculations. A maximal cut containing more than one failure can split into two cuts (sub-states). Cuts splitting will enable us to analyse, in a systematic way, non-coherent systems with independent basic components. (author)

  20. Reliability on the move: safety and reliability in transportation

    International Nuclear Information System (INIS)

    Guy, G.B.

    1989-01-01

    The development of transportation has been a significant factor in the development of civilisation as a whole. Our technical ability to move people and goods now seems virtually limitless when one considers, for example, the achievements of the various space programmes. Yet our current achievements rely heavily on high standards of safety and reliability from equipment and the human component of transportation systems. Recent failures have highlighted our dependence on equipment and human reliability. This book represents the proceedings of the 1989 Safety and Reliability Society symposium held at Bath on 11-12 October 1989. The structure of the book follows the structure of the symposium itself and the papers selected represent current thinking in the wide field of transportation; the areas of rail (6 papers, three on railway signalling), air including space (two papers), road (one paper), road and rail (two papers) and sea (three papers) are covered. There are four papers concerned with general transport issues. Three papers concerned with the transport of radioactive materials are indexed separately. (author)

  1. Reliability of thermal interface materials: A review

    International Nuclear Information System (INIS)

    Due, Jens; Robinson, Anthony J.

    2013-01-01

    Thermal interface materials (TIMs) are used extensively to improve thermal conduction across two mating parts. They are particularly crucial in electronics thermal management since excessive junction-to-ambient thermal resistances can cause elevated temperatures which can negatively influence device performance and reliability. Of particular interest to electronic package designers is the thermal resistance of the TIM layer at the end of its design life. Estimations of this allow the package to be designed to perform adequately over its entire useful life. To this end, TIM reliability studies have been performed using accelerated stress tests. This paper reviews the body of work which has been performed on TIM reliability. It focuses on the various test methodologies with commentary on the results which have been obtained for the different TIM materials. Based on the information available in the open literature, a test procedure is proposed for TIM selection based on beginning and end of life performance. - Highlights: ► This paper reviews the body of work which has been performed on TIM reliability. ► Test methodologies for reliability testing are outlined. ► Reliability results for the different TIM materials are discussed. ► A test procedure is proposed for TIM selection based on beginning-of-life and end-of-life performance.

  2. Human reliability assessors guide: an overview

    International Nuclear Information System (INIS)

    Humphreys, P.

    1988-01-01

    The Human Reliability Assessors Guide 1 provides a review of techniques currently available for the quantification of Human Error Probabilities. The Guide has two main objectives. The first is to provide a clear and comprehensive description of eight major techniques which can be used to assess human reliability. This is supplemented by case studies taken from practical applications of each technique to industrial problems. The second objective is to provide practical guidelines for the selection of techniques. The selection process is aided by reference to a set of criteria against which each of the eight techniques has been evaluated. Utilising the criteria and critiques, a selection method is presented. This is designed to assist the potential user in choosing the technique, or combination of techniques, most suited to answering the user's requirements. For each of the eight selected techniques, a summary of the origins of the technique is provided, together with a method description, detailed case studies, abstracted case studies and supporting references. (author)

  3. Reliabilities of genomic estimated breeding values in Danish Jersey

    DEFF Research Database (Denmark)

    Thomasen, Jørn Rind; Guldbrandtsen, Bernt; Su, Guosheng

    2012-01-01

    In order to optimize the use of genomic selection in breeding plans, it is essential to have reliable estimates of the genomic breeding values. This study investigated reliabilities of direct genomic values (DGVs) in the Jersey population estimated by three different methods. The validation methods were (i) fivefold cross-validation and (ii) validation on the most recent 3 years of bulls. The reliability of DGV was assessed using squared correlations between DGV and deregressed proofs (DRPs). In the recent 3-year validation model, estimated reliabilities were also used to assess the reliabilities of DGV. The data set consisted of 1003 Danish Jersey bulls with conventional estimated breeding values (EBVs) for 14 different traits included in the Nordic selection index. The bulls were genotyped for Single-nucleotide polymorphism (SNP) markers using the Illumina 54 K chip. A Bayesian method was used...

  4. Natural maximal νμ-ντ mixing

    International Nuclear Information System (INIS)

    Wetterich, C.

    1999-01-01

    The naturalness of maximal mixing between muon- and tau-neutrinos is investigated. A spontaneously broken nonabelian generation symmetry can explain a small parameter which governs the deviation from maximal mixing. In many cases all three neutrino masses are almost degenerate. Maximal ν μ -ν τ -mixing suggests that the leading contribution to the light neutrino masses arises from the expectation value of a heavy weak triplet rather than from the seesaw mechanism. In this scenario the deviation from maximal mixing is predicted to be less than about 1%. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  5. Gaussian maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7 .

  6. Gaussian maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-01-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  7. Comparative pharmacokinetic profiles of tectorigenin in rat plasma by UPLC-MS/MS after oral administration of Iris tectorum Maxim extract and pure tectoridin.

    Science.gov (United States)

    Yang, Min; Yang, Xiaolin; An, Jinmeng; Xiao, Wei; Wang, Zhenzhong; Huang, Wenzhe; Yang, Zhonglin; Li, Fei

    2015-10-10

    Iris tectorum Maxim, a well-known herbal medicine, has long been used in China for the treatment of inflammation, cough, and pharyngitis. Tectoridin, the main active ingredient of Iris tectorum Maxim, is often used for its quality control. This study aimed to analyze the pharmacokinetic profile of tectorigenin (the metabolite of tectoridin) after oral administration of I. tectorum Maxim extract, and to compare the pharmacokinetic characterization of tectorigenin after oral administration of I. tectorum Maxim extract (ITME) and pure tectoridin (PT) in rats. In addition, a simple, reliable and sensitive UPLC-MS/MS method was developed for determination of tectorigenin in rat plasma, using kaempferol as internal standard. The processed samples were separated on a Poroshell 120 SB-C₁₈ column and detected by positive electrospray ionization in multiple reaction monitoring (MRM) mode. The method validation results indicated that the established method was simple, specific and reliable. The pharmacokinetic results showed that the plasma concentration of tectorigenin in the ITME group was much higher than that of the PT group (p<0.01). Moreover, compared to the PT group, the t₁/₂ value and AUC(0-∞) value were also notably increased in the ITME group (p<0.01). In conclusion, potential interaction exists between the chemical components in ITME, and the co-existing components in ITME could notably promote the absorption of tectoridin in rats; however, the exact compound(s) which enhance the absorption of tectoridin should be investigated in a future study. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Utility maximization and mode of payment

    NARCIS (Netherlands)

    Koning, R.H.; Ridder, G.; Heijmans, R.D.H.; Pollock, D.S.G.; Satorra, A.

    2000-01-01

    The implications of stochastic utility maximization in a model of choice of payment are examined. Three types of compatibility with utility maximization are distinguished: global compatibility, local compatibility on an interval, and local compatibility on a finite set of points.

  9. Multi-Objective Distribution Network Operation Based on Distributed Generation Optimal Placement Using New Antlion Optimizer Considering Reliability

    Directory of Open Access Journals (Sweden)

    KHANBABAZADEH Javad

    2016-10-01

    Full Text Available Distribution network designers and operators are trying to deliver electrical energy with high reliability and quality to their subscribers. Because of the high losses in distribution systems, using distributed generation can improve reliability, reduce losses and improve the voltage profile of the distribution network. Therefore, choosing the location of these resources, and also determining the amount of their generated power so as to maximize the benefits of this type of resource, is an important issue which is discussed from different points of view today. In this paper, a new multi-objective optimal location and sizing of distributed generation resources is performed to maximize its benefits on the 33-bus distribution test network, considering reliability and using a new Antlion Optimizer (ALO). The benefits for DG are considered as system loss reduction, system reliability improvement, benefits from the sale of electricity, and voltage profile improvement. For each of the mentioned benefits, the ALO algorithm is used to optimize the location and sizing of distributed generation resources. In order to verify the proposed approach, the obtained results have been analyzed and compared with the results of the particle swarm optimization (PSO) algorithm. The results show that the ALO performs better than PSO in solving this optimization problem.

  10. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Baran, Ismet; Tutum, Cem Celal; Hattel, Jesper Henri

    2013-01-01

    In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative...
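    For orientation, the FORM machinery referred to above reduces, in standard normal space, to locating the design point of a limit state function and reading off the reliability index; the sketch below runs the Hasofer-Lind/Rackwitz-Fiessler iteration on a simple hypothetical limit state rather than the CDOCE/Tmax limit states of the pultrusion model.

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def grad(g, u, h=1e-6):
    """Central finite-difference gradient of g at u."""
    out = []
    for i in range(len(u)):
        up, dn = list(u), list(u)
        up[i] += h
        dn[i] -= h
        out.append((g(up) - g(dn)) / (2 * h))
    return out

def form(g, n_vars, iters=50):
    """Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space:
    returns the reliability index beta and the failure probability."""
    u = [0.1] * n_vars
    for _ in range(iters):
        gval, gvec = g(u), grad(g, u)
        norm2 = sum(c * c for c in gvec)
        scale = (sum(c * x for c, x in zip(gvec, u)) - gval) / norm2
        u = [scale * c for c in gvec]
    beta = sqrt(sum(x * x for x in u))
    return beta, 1.0 - phi(beta)

# Hypothetical limit state (failure when g < 0) in reduced variables; it stands
# in for, but is not, the CDOCE/Tmax limit states of the paper.
g = lambda u: 3.0 - u[0] - 0.5 * u[1] ** 2
beta, pf = form(g, n_vars=2)
print(round(beta, 3), f"{pf:.2e}")   # beta ~ 2.24, Pf ~ 1.3e-2
```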

  11. Survival associated pathway identification with group Lp penalized global AUC maximization

    Directory of Open Access Journals (Sweden)

    Liu Zhenqiu

    2010-08-01

    Full Text Available Abstract It has been demonstrated that genes in a cell do not act independently. They interact with one another to complete certain biological processes or to implement certain molecular functions. How to incorporate biological pathways or functional groups into the model and identify survival-associated gene pathways is still a challenging problem. In this paper, we propose a novel iterative gradient based method for survival analysis with group Lp penalized global AUC summary maximization. Unlike LASSO, the Lp (p < 1) … We first extend Lp for individual gene identification to a group Lp penalty for pathway selection, and then develop a novel iterative gradient algorithm for penalized global AUC summary maximization (IGGAUCS). This method incorporates the genetic pathways into global AUC summary maximization and identifies survival-associated pathways instead of individual genes. The tuning parameters are determined using 10-fold cross validation with training data only. The prediction performance is evaluated using test data. We apply the proposed method to survival outcome analysis with gene expression profiles and identify multiple pathways simultaneously. Experimental results with simulation and gene expression data demonstrate that the proposed procedures can be used for identifying important biological pathways that are related to survival phenotype and for building a parsimonious model for predicting the survival times.
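    The "global AUC summary" being maximized builds on the empirical (Mann-Whitney) AUC, which can be computed directly from risk scores and event labels; the sketch below shows this basic building block on toy data and is not the authors' penalized estimator.

```python
def empirical_auc(scores, labels):
    """Mann-Whitney estimate of the AUC: the fraction of (event, non-event)
    pairs that the risk score ranks correctly (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy risk scores and event indicators
print(empirical_auc([0.9, 0.8, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0]))   # ~0.833
```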

  12. Reliability-Based Structural Optimization of Wave Energy Converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kramer, Morten; Sørensen, John Dalsgaard

    2014-01-01

    More and more wave energy converter (WEC) concepts are reaching prototype level. Once the prototype level is reached, the next step in order to further decrease the levelized cost of energy (LCOE) is optimizing the overall system with a focus on structural and maintenance (inspection) costs, as well as on the harvested power from the waves. The target of a fully-developed WEC technology is not maximizing its power output, but minimizing the resulting LCOE. This paper presents a methodology to optimize the structural design of WECs based on a reliability-based optimization problem...

  13. Effects of a shade-matching light and background color on reliability in tooth shade selection.

    Science.gov (United States)

    Najafi-Abrandabadi, Siamak; Vahidi, Farhad; Janal, Malvin N

    2018-01-01

    The purpose of this study was to evaluate the effects of a shade-matching light (Rite-Lite-2, AdDent) and different viewing backgrounds on reliability in a test of shade tab matching. Four members of the Prosthodontic faculty matched 10 shade tabs selected for a range of shades against the shade guide. All raters were tested for color blindness and were calibrated prior to the study. Matching took place under four combinations of conditions: with operatory light or the shade-matching light, and using either a pink or a blue background. Reliability was quantified with the kappa statistic, separately for agreement of value, hue, and chroma for each shade tab. In general, raters showed fair to moderate levels of agreement when judging the value of the shade tabs, but could not agree on the hue and chroma of the stimuli. The pink background led to higher levels of agreement than the blue background, and the shade-matching light improved agreement when used in conjunction with the pink but not the blue background. Moderate levels of agreement were found in matching shade tab value. Agreement was generally better when using the pink rather than the blue background, regardless of light source. The use of the shade-matching light tended to amplify the advantage of the pink background.

  14. Selected Problems of Sensitivity and Reliability of a Jack-Up Platform

    Directory of Open Access Journals (Sweden)

    Rozmarynowski Bogdan

    2018-03-01

    Full Text Available The paper deals with sensitivity and reliability applications to numerical studies of an off-shore platform model. Structural parameters and sea conditions are referred to the Baltic jack-up drilling platform. The study aims at the influence of particular basic variables on static and dynamic response as well as the probability of failure due to water waves and wind loads. The paper presents the sensitivity approach to a generalized eigenvalue problem and the evaluation of the performance functions. The first order time-invariant problems of structural reliability analysis are under concern.

  15. Activity versus outcome maximization in time management.

    Science.gov (United States)

    Malkoc, Selin A; Tonietto, Gabriela N

    2018-04-30

    Feeling time-pressed has become ubiquitous. Time management strategies have emerged to help individuals fit in more of their desired and necessary activities. We provide a review of these strategies. In doing so, we distinguish between two, often competing, motives people have in managing their time: activity maximization and outcome maximization. The emerging literature points to an important dilemma: a given strategy that maximizes the number of activities might be detrimental to outcome maximization. We discuss such factors that might hinder performance in work tasks and enjoyment in leisure tasks. Finally, we provide theoretically grounded recommendations that can help balance these two important goals in time management. Published by Elsevier Ltd.

  16. Cost analysis of reliability investigations

    International Nuclear Information System (INIS)

    Schmidt, F.

    1981-01-01

    Taking Epstein's testing theory as a basis, premisses are formulated for the selection of cost-optimized reliability inspection plans. Using an example, the expected testing costs and inspection time periods of various inspection plan types, standardized on the basis of the exponential distribution, are compared. It can be shown that sequential reliability tests usually involve lower costs than failure- or time-fixed tests. The most 'costly' test is to be expected with the inspection plan type NOt. (orig.) [de

  17. Reliability of goniometry in Labrador Retrievers.

    Science.gov (United States)

    Jaegger, Gayle; Marcellin-Little, Denis J; Levine, David

    2002-07-01

    To evaluate the reliability of goniometry by comparing goniometric measurements with radiographic measurements and evaluate the effects of sedation on range of joint motion. 16 healthy adult Labrador Retrievers. 3 investigators blindly and independently measured range of motion of the carpus, elbow, shoulder, tarsus, stifle, and hip joints of 16 Labrador Retrievers in triplicate before and after dogs were sedated. Radiographs of all joints in maximal flexion and extension were made while the dogs were sedated. Goniometric measurements were compared with radiographic measurements. The influence of sedation and the intra- and intertester variability were evaluated; 95% confidence intervals for all ranges of motion were determined. Results of goniometric and radiographic measurements were not significantly different. Results of measurements made by the 3 investigators were not significantly different. Multiple measurements made by 1 investigator varied from 1 to 6 degrees (median, 3 degrees) depending on the joint. Sedation did not influence the range of motion of the evaluated joints. Goniometry is a reliable and objective method for determining range of motion of joints in healthy Labrador Retrievers.

  18. Issues in cognitive reliability

    International Nuclear Information System (INIS)

    Woods, D.D.; Hitchler, M.J.; Rumancik, J.A.

    1984-01-01

    This chapter examines some problems in current methods to assess reactor operator reliability at cognitive tasks and discusses new approaches to solve these problems. The two types of human failures are errors in the execution of an intention and errors in the formation/selection of an intention. Topics considered include the types of description, error correction, cognitive performance and response time, the speed-accuracy tradeoff function, function based task analysis, and cognitive task analysis. One problem of human reliability analysis (HRA) techniques in general is the question of what the units of behavior are whose reliability is to be determined. A second problem for HRA is that people often detect and correct their errors. The use of function based analysis, which maps the problem space for plant control, is recommended.

  19. The Reliability of Anthropometric Measurements Used Preoperatively in Aesthetic Breast Surgery.

    Science.gov (United States)

    Isaac, Kathryn V; Murphy, Blake D; Beber, Brett; Brown, Mitchell

    2016-04-01

    Patient outcomes in aesthetic breast surgery are highly dependent on breast measurements used in preoperative planning. The purpose of this study is to determine the reliability of anthropometric breast measurements. Four raters measured 28 women using 7 measurements: sternal notch to nipple distance (Sn-N), nipple to midline (N-M), nipple to inframammary-fold distance under maximal stretch (N-IMF), breast base width (BW), soft tissue pinch thickness of the upper pole (STPT:UP), STPT at the inframammary fold (STPT:IMF), and anterior pull skin stretch (APSS). Reliability was assessed using intra-class correlation coefficients (ICCs). Inter-rater reliability was excellent for Sn-N, N-M, and BW (ICC = 0.94, 0.90, and 0.76, respectively) and was good for N-IMF (ICC = 0.70). The STPT:UP, STPT:IMF, and APSS measurements were not reliable between raters. Intra-rater reliability was excellent for Sn-N, N-M, and BW for all raters (all ICC > 0.75). The N-IMF intra-rater reliability was excellent in senior raters (ICC > 0.75) and good in junior raters (ICC > 0.6). The STPT:UP, STPT:IMF, and APSS measurements showed fair or poor reliability for most raters. Dynamic measurements including APSS, STPT:UP, and STPT:IMF are unreliable. N-IMF is the only reliable dynamic measurement, and its reliability improves with increasing clinical experience. The variable reliability of preoperative measurements must be considered in the planning of aesthetic breast surgery. Level of Evidence: 4 (Diagnostic). © 2015 The American Society for Aesthetic Plastic Surgery, Inc. Reprints and permission: journals.permissions@oup.com.

  20. AVC: Selecting discriminative features on basis of AUC by maximizing variable complementarity.

    Science.gov (United States)

    Sun, Lei; Wang, Jun; Wei, Jinmao

    2017-03-14

    The Receiver Operator Characteristic (ROC) curve is well-known in evaluating classification performance in biomedical field. Owing to its superiority in dealing with imbalanced and cost-sensitive data, the ROC curve has been exploited as a popular metric to evaluate and find out disease-related genes (features). The existing ROC-based feature selection approaches are simple and effective in evaluating individual features. However, these approaches may fail to find real target feature subset due to their lack of effective means to reduce the redundancy between features, which is essential in machine learning. In this paper, we propose to assess feature complementarity by a trick of measuring the distances between the misclassified instances and their nearest misses on the dimensions of pairwise features. If a misclassified instance and its nearest miss on one feature dimension are far apart on another feature dimension, the two features are regarded as complementary to each other. Subsequently, we propose a novel filter feature selection approach on the basis of the ROC analysis. The new approach employs an efficient heuristic search strategy to select optimal features with highest complementarities. The experimental results on a broad range of microarray data sets validate that the classifiers built on the feature subset selected by our approach can get the minimal balanced error rate with a small amount of significant features. Compared with other ROC-based feature selection approaches, our new approach can select fewer features and effectively improve the classification performance.
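    A minimal sketch of the complementarity measurement described above: for each misclassified instance, find its nearest miss along one feature and record how far apart the pair is along another feature, then average. The data, the set of "misclassified" indices and the exact distance used are illustrative assumptions rather than the authors' estimator.

```python
import numpy as np

def complementarity(X, y, misclassified, f, g):
    """For each misclassified instance, find its nearest miss (closest sample
    of a different class) along feature f and measure the separation along
    feature g; a larger average separation suggests f and g complement each
    other. A toy reading of the idea, not the authors' exact estimator."""
    total = 0.0
    for i in misclassified:
        others = np.where(y != y[i])[0]
        nearest_miss = others[np.argmin(np.abs(X[others, f] - X[i, f]))]
        total += abs(X[nearest_miss, g] - X[i, g])
    return total / len(misclassified)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))                 # hypothetical expression matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # toy class labels
assumed_misclassified = [3, 7, 12]           # indices assumed misclassified
print(round(complementarity(X, y, assumed_misclassified, f=0, g=1), 3))
```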

  1. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity … as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code.
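    As background for the quantity being maximized, the entropy rate of a fixed Markov chain can be computed from its transition matrix and stationary distribution; the sketch below does only that for a hypothetical three-state chain, whereas the paper maximizes this kind of entropy over all chains admitted by an Interval Markov Chain specification.

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate (bits/step) of an ergodic Markov chain with transition
    matrix P: H = sum_i pi_i * H(P[i, :]), pi being the stationary distribution."""
    P = np.asarray(P, dtype=float)
    vals, vecs = np.linalg.eig(P.T)                       # left eigenvectors of P
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])  # eigenvalue-1 vector
    pi = pi / pi.sum()
    safe = np.where(P > 0, P, 1.0)                        # log(1)=0 for zero entries
    row_entropy = -(P * np.log2(safe)).sum(axis=1)
    return float(pi @ row_entropy)

# Hypothetical three-state attacker/observation model
P = [[0.5, 0.5, 0.0],
     [0.1, 0.6, 0.3],
     [0.2, 0.2, 0.6]]
print(round(entropy_rate(P), 3))
```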

  2. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity … as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...

  3. HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL

    CERN Document Server

    HR Division

    2000-01-01

    Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000. Reimbursement maximal: The revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN. Fixed contributions: The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999); voluntarily insured no longer dependent child: 326,- (was 321...

  4. On the maximal diphoton width

    CERN Document Server

    Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo

    2016-01-01

    Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\gamma\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  5. Chemically Functionalized Arrays Comprising Micro- and Nano-Electro-Mechanical Systems for Reliable and Selective Characterization of Tank Waste

    International Nuclear Information System (INIS)

    Sepaniak, Michael J.

    2008-01-01

    Innovative technologies for sensory and selective chemical monitoring of hazardous wastes present in storage tanks are of continued importance to the environment. This multifaceted research program exploits the unique characteristics of micro- and nano-fabricated cantilever-based micro-electro-mechanical systems (MEMS) and nano-electro-mechanical systems (NEMS) in chemical sensing. Significant progress was made in the tasks listed in the work plan for DOE EMSP project 'Hybrid Micro-Electro-Mechanical Systems for Highly Reliable and Selective Characterization of Tank Waste'. These tasks are listed below in modified form, followed by the report on progress. (1) Deposit chemically selective phases on model MEMS devices with nanostructured surface layers to identify optimal technological approaches. (2) Monitor mechanical (deflection) and optical (SERS) responses of the created MEMS to organic and inorganic species in aqueous environments. (3) Explore and compare different approaches to immobilization of selective phases on the thermal detectors. (4) Demonstrate improvements in selectivity and sensitivity to model pollutants due to the implemented technologies of nanostructuring and multi-mode read-out. (5) Demonstrate detection of different analytes on a single hybrid MEMS. (6) Implement the use of differential pairs of cantilever sensors (coated and reference) with the associated detector electronics, which is expected to have enhanced sensitivity with a low-noise, low-drift response. (7) Develop methods to create differential arrays and test their effectiveness at creating distinctive differential responses.

  6. Engineering Design Handbook: Development Guide for Reliability. Part Three. Reliability Prediction

    Science.gov (United States)

    1976-01-01

    to t is pa(t) = 1 - qa(t) (10-6). This is the reliability of being closed, defined for this interval. 2. The probability that a contact will be open...Monte Carlo simulation. Few people can know all about all available programs. Specialists can assist in selecting a few from the many available.

  7. Towards Reliable, Scalable, and Energy Efficient Cognitive Radio Systems

    KAUST Repository

    Sboui, Lokman

    2017-11-01

    The cognitive radio (CR) concept is expected to be adopted along with many technologies to meet the requirements of the next generation of wireless and mobile systems, 5G. Consequently, it is important to determine the performance of CR systems with respect to these requirements. In this thesis, after briefly describing the 5G requirements, we present three main directions in which we aim to enhance the CR performance. The first direction is reliability. We study the achievable rate of a multiple-input multiple-output (MIMO) relay-assisted CR under two scenarios: an unmanned aerial vehicle (UAV) one-way relaying (OWR) and a fixed two-way relaying (TWR). We propose special linear precoding schemes that enable the secondary user (SU) to take advantage of the primary-free channel eigenmodes. We study the SU rate sensitivity to the relay power, the relay gain, the UAV altitude, the number of antennas and the line-of-sight availability. The second direction is scalability. We first study a multiple access channel (MAC) scenario with multiple SUs. We propose a particular linear precoding and SU selection scheme maximizing their sum-rate. We show that the proposed scheme provides a significant sum-rate improvement as the number of SUs increases. Secondly, we expand our scalability study to cognitive cellular networks. We propose a low-complexity algorithm for base station activation/deactivation and dynamic spectrum management maximizing the profits of primary and secondary networks subject to green constraints. We show that our proposed algorithms achieve performance close to that obtained with the exhaustive search method. The third direction is energy efficiency (EE). We present a novel power allocation scheme based on maximizing the EE of both single-input single-output (SISO) and MIMO systems. We solve a non-convex problem and derive explicit expressions for the corresponding optimal power. When the instantaneous channel is not available, we

  8. Energy efficiency and SINR maximization beamformers for cognitive radio utilizing sensing information

    KAUST Repository

    Alabbasi, Abdulrahman

    2014-06-01

    In this paper, we consider a cognitive radio multi-input multi-output environment in which we adapt our beamformer to maximize both energy efficiency and signal-to-interference-plus-noise ratio (SINR) metrics. Our design considers an underlay communication using adaptive beamforming schemes combined with sensing information to achieve an optimal energy-efficient system. The proposed schemes maximize the energy efficiency and SINR metrics subject to cognitive radio and quality-of-service constraints. Since the energy-efficiency optimization problem is not convex, we transform it into a standard semi-definite programming (SDP) form to guarantee a global optimal solution. An analytical solution is provided for one scheme, while the other scheme is left in a standard SDP form. Selected numerical results are used to quantify the impact of the sensing information on the proposed schemes compared to the benchmark ones.

  9. BRBN-T validation: adaptation of the Selective Reminding Test and Word List Generation

    Directory of Open Access Journals (Sweden)

    Mariana Rigueiro Neves

    2015-10-01

    Objective: This study aims to present the adaptation of the Selective Reminding Test (SRT) and Word List Generation (WLG) to the Portuguese population, within the validation of the Brief Repeatable Battery of Neuropsychological Tests (BRBN-T) for multiple sclerosis (MS) patients. Method: 66 healthy participants (54.5% female) recruited from the community volunteered to participate in this study. Results: A combination of procedures from Classical Test Theory (CTT) and Item Response Theory (IRT) was applied to item analysis and selection. For each SRT list, 12 words were selected, and 3 letters were chosen for WLG, to constitute the final versions of these tests for the Portuguese population. Conclusion: The combination of CTT and IRT maximized the decision-making process in the adaptation of the SRT and WLG to a different culture and language (Portuguese). The relevance of this study lies in the production of reliable standardized neuropsychological tests, so that they can be used to facilitate a more rigorous monitoring of the evolution of MS, as well as of any therapeutic effects and cognitive rehabilitation.

  10. Comparison of three protocols for measuring the maximal respiratory pressures

    Directory of Open Access Journals (Sweden)

    Isabela Maria B. Sclauser Pessoa

    Introduction: To avoid the selection of submaximal efforts during the assessment of maximal inspiratory and expiratory pressures (MIP and MEP), some reproducibility criteria have been suggested. The criteria that stand out are those proposed by the American Thoracic Society (ATS) and European Respiratory Society (ERS) and by the Brazilian Thoracic Association (BTA). However, no studies were found that compared these criteria or assessed the combination of both protocols. Objectives: To assess the pressure values selected and the number of maneuvers required to achieve maximum performance using the reproducibility criteria proposed by the ATS/ERS, the BTA and the present study. Materials and method: 113 healthy subjects (43.04 ± 16.94 years) of both genders were assessed according to the criteria proposed by the ATS/ERS, the BTA and the present study. Descriptive statistics were used for analysis, followed by ANOVA for repeated measures and post hoc LSD or by the Friedman test and post hoc Wilcoxon, according to the data distribution. Results: The criterion proposed by the present study resulted in a significantly higher number of maneuvers (MIP and MEP – median and 25%-75% interquartile range: 5[5-6], 4[3-5] and 3[3-4] for the present study criterion, BTA and ATS/ERS, respectively; p < 0.01) and higher pressure values (MIP – mean and 95% confidence interval: 103[91.43-103.72], 100[97.19-108.83] and 97.6[94.06-105.95]; MEP – median and 25%-75% interquartile range: 124.2[101.4-165.9], 123.3[95.4-153.8] and 118.4[95.5-152.7]; p < 0.05). Conclusion: The proposed criterion resulted in the selection of pressure values closer to the individual's maximal capacity. This new criterion should be considered in future studies concerning MIP and MEP measurements.

  11. Dopaminergic balance between reward maximization and policy complexity

    Directory of Open Access Journals (Sweden)

    Naama eParush

    2011-05-01

    Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement-learning signal and a pseudo-temperature signal controlling the general level of basal ganglia excitability and motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamo-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, actions are selected with probabilities according to their estimated values and the pseudo-temperature, but these probabilities additionally vary with the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.

  12. Fault-tolerant embedded system design and optimization considering reliability estimation uncertainty

    International Nuclear Information System (INIS)

    Wattanapongskorn, Naruemon; Coit, David W.

    2007-01-01

    In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly. Therefore, for reliability analysis studies and system optimization, it is meaningful to consider component reliability estimates as random variables with associated estimation uncertainty. In this new research, the system design process is formulated as a multiple-objective optimization problem to maximize an estimate of system reliability and also to minimize the variance of the reliability estimate. The two objectives are combined by penalizing the variance for prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing system redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with or without redundancy. For many design problems, multiple functionally equivalent software versions have failure correlation even if they have been independently developed. The failure correlation may result from faults in the software specification, faults from a voting algorithm, and/or related faults from any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied in solving this optimization problem, and reasonable and interesting results are obtained and discussed.
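    A rough sketch of the penalized objective described above (illustrative only; the record's actual models cover N-Version Programming and Recovery Block architectures with correlated failures) combines a system reliability estimate with a penalty on the variance of that estimate, here for a hypothetical series system of redundant components with first-order variance propagation:

```python
import numpy as np

def penalized_objective(r_hat, r_var, redundancy, penalty=2.0):
    """System reliability estimate minus a penalty on its estimation variance.
    r_hat, r_var : per-component reliability estimates and their variances
    redundancy   : number of parallel copies of each component (series stages)"""
    stage_rel = 1.0 - (1.0 - r_hat) ** redundancy         # parallel redundancy per stage
    sys_rel = np.prod(stage_rel)                          # stages combined in series
    d_stage = redundancy * (1.0 - r_hat) ** (redundancy - 1)
    # first-order (delta-method) propagation of the estimation uncertainty
    sys_var = np.sum((sys_rel / stage_rel * d_stage) ** 2 * r_var)
    return sys_rel - penalty * sys_var

r_hat = np.array([0.95, 0.90, 0.92])
r_var = np.array([1e-4, 4e-4, 2e-4])
print(penalized_objective(r_hat, r_var, redundancy=np.array([2, 3, 2])))
```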

  13. Reliability of visual and instrumental color matching.

    Science.gov (United States)

    Igiel, Christopher; Lehmann, Karl Martin; Ghinea, Razvan; Weyhrauch, Michael; Hangx, Ysbrand; Scheller, Herbert; Paravina, Rade D

    2017-09-01

    The aim of this investigation was to evaluate intra-rater and inter-rater reliability of visual and instrumental shade matching. Forty individuals with normal color perception participated in this study. The right maxillary central incisor of a teaching model was prepared and restored with 10 feldspathic all-ceramic crowns of different shades. A shade matching session consisted of the observer (rater) visually selecting the best match by using VITA classical A1-D4 (VC) and VITA Toothguide 3D Master (3D) shade guides and the VITA Easyshade Advance intraoral spectrophotometer (ES) to obtain both VC and 3D matches. Three shade matching sessions were held with 4 to 6 weeks between sessions. Intra-rater reliability was assessed based on the percentage of agreement for the three sessions for the same observer, whereas the inter-rater reliability was calculated as mean percentage of agreement between different observers. The Fleiss' Kappa statistical analysis was used to evaluate visual inter-rater reliability. The mean intra-rater reliability for the visual shade selection was 64(11) for VC and 48(10) for 3D. The corresponding ES values were 96(4) for both VC and 3D. The percentages of observers who matched the same shade with VC and 3D were 55(10) and 43(12), respectively, while corresponding ES values were 88(8) for VC and 92(4) for 3D. The results for visual shade matching exhibited a high to moderate level of inconsistency for both intra-rater and inter-rater comparisons. The VITA Easyshade Advance intraoral spectrophotometer exhibited significantly better reliability compared with visual shade selection. This study evaluates the ability of observers to consistently match the same shade visually and with a dental spectrophotometer in different sessions. The intra-rater and inter-rater reliability (agreement of repeated shade matching) of visual and instrumental tooth color matching strongly suggest the use of color matching instruments as a supplementary tool in
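    The inter-rater analysis above relies on Fleiss' kappa; a small self-contained sketch of that statistic (with a fabricated ratings matrix, not the study's data) is:

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa; counts[i, j] = number of raters assigning item i to category j."""
    n = counts.sum(axis=1)[0]                       # raters per item (assumed equal)
    p_j = counts.sum(axis=0) / counts.sum()         # overall category proportions
    P_i = np.sum(counts * (counts - 1), axis=1) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)
    return (P_bar - P_e) / (1.0 - P_e)

# 5 crowns rated by 4 observers into 3 shade categories (made-up numbers)
counts = np.array([[4, 0, 0],
                   [3, 1, 0],
                   [0, 4, 0],
                   [1, 1, 2],
                   [0, 0, 4]])
print(round(fleiss_kappa(counts), 3))
```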

  14. Development in structural systems reliability theory

    Energy Technology Data Exchange (ETDEWEB)

    Murotsu, Y

    1986-07-01

    This paper is concerned with two topics on structural systems reliability theory. One covers automatic generation of failure mode equations, identification of stochastically dominant failure modes, and reliability assessment of redundant structures. Reduced stiffness matrices and equivalent nodal forces representing the failed elements are introduced for expressing the safety of the elements, using a matrix method. Dominant failure modes are systematically selected by a branch-and-bound technique and heuristic operations. The other discusses the various optimum design problems based on the reliability concept. Those problems are interpreted through a solution to a multi-objective optimization problem.

  15. Development in structural systems reliability theory

    International Nuclear Information System (INIS)

    Murotsu, Y.

    1986-01-01

    This paper is concerned with two topics on structural systems reliability theory. One covers automatic generation of failure mode equations, identification of stochastically dominant failure modes, and reliability assessment of redundant structures. Reduced stiffness matrices and equivalent nodal forces representing the failed elements are introduced for expressing the safety of the elements, using a matrix method. Dominant failure modes are systematically selected by a branch-and-bound technique and heuristic operations. The other discusses the various optimum design problems based on the reliability concept. Those problems are interpreted through a solution to a multi-objective optimization problem. (orig.)

  16. Maximizing Consensus in Portfolio Selection in Multicriteria Group Decision Making

    NARCIS (Netherlands)

    Michael, Emmerich T. M.; Deutz, A.H.; Li, L.; Asep, Maulana A.; Yevseyeva, I.

    2016-01-01

    This paper deals with a scenario of decision making where a moderator selects a (sub)set (aka portfolio) of decision alternatives from a larger set. The larger the number of decision makers who agree on a solution in the portfolio, the more successful the moderator is. We assume that decision makers

  17. Modeling of the thermal physical process and study on the reliability of linear energy density for selective laser melting

    Science.gov (United States)

    Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu

    2018-06-01

    A finite element model considering volume shrinkage with the powder-to-dense process of the powder layer in selective laser melting (SLM) is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is effective and more accurately captures the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating rate and cooling rate increase with increasing scan speed at constant laser power, and likewise with increasing laser power at constant scan speed. The simulation and experimental results reveal that linear energy density is not always reliable when used as a design parameter in SLM.

  18. Managing the Public Sector Research and Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization

    Science.gov (United States)

    2016-09-01

    PUBLIC SECTOR RESEARCH & DEVELOPMENT PORTFOLIO SELECTION PROCESS: A CASE STUDY OF QUANTITATIVE SELECTION AND OPTIMIZATION, by Jason A. Schwartz...describing how public sector organizations can implement a research and development (R&D) portfolio optimization strategy to maximize the cost

  19. Reliability constrained generation expansion planning with consideration of wind farms uncertainties in deregulated electricity market

    International Nuclear Information System (INIS)

    Hemmati, Reza; Hooshmand, Rahmat-Allah; Khodabakhshian, Amin

    2013-01-01

    Highlights: • Generation expansion planning is presented in a deregulated electricity market. • Wind farm uncertainty is modeled in the problem. • The profit of each GENCO is maximized while safe operation of the system is maintained. • The slave sector is managed as an optimization program and solved using the PSO technique. • The master sector is considered in a pool market and the Cournot model is used to simulate it. - Abstract: This paper addresses reliability-constrained generation expansion planning (GEP) in the presence of wind farm uncertainty in a deregulated electricity market. The proposed GEP aims at maximizing the expected profit of all generation companies (GENCOs), while considering security and reliability constraints such as reserve margin and loss of load expectation (LOLE). Wind farm uncertainty is also considered in the planning, and GENCOs determine their plans in the presence of this uncertainty. The uncertainty is modeled by a probability distribution function (PDF), and Monte-Carlo simulation (MCS) is used to insert the uncertainty into the problem. The proposed GEP is a constrained, nonlinear, mixed-integer optimization problem and is solved using the particle swarm optimization (PSO) method. In this paper, the electricity market structure is modeled as a pool market. Simulation results verify the effectiveness and validity of the proposed planning for maximizing GENCO profits in the presence of wind farm uncertainties in the electricity market.
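    Purely as an illustration of the Monte-Carlo step mentioned above (not the paper's planning model), wind-farm uncertainty can be inserted into an expected-profit evaluation by sampling wind output from an assumed distribution and averaging the profit of a candidate plan over the samples; all numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_profit(wind_mw, thermal_mw, price=60.0, wind_cost=20.0,
                    thermal_cost=45.0, demand=500.0, samples=10_000):
    cf = np.clip(rng.weibull(2.0, samples) * 0.35, 0.0, 1.0)   # sampled wind capacity factor
    wind_gen = np.minimum(wind_mw * cf, demand)
    thermal_gen = np.minimum(thermal_mw, demand - wind_gen)
    revenue = price * (wind_gen + thermal_gen)
    cost = wind_cost * wind_gen + thermal_cost * thermal_gen
    return float(np.mean(revenue - cost))                       # expected hourly profit

print(expected_profit(wind_mw=200.0, thermal_mw=400.0))
```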

  20. Strategies for maximizing clinical effectiveness in the treatment of schizophrenia.

    Science.gov (United States)

    Tandon, Rajiv; Targum, Steven D; Nasrallah, Henry A; Ross, Ruth

    2006-11-01

    The ultimate clinical objective in the treatment of schizophrenia is to enable affected individuals to lead maximally productive and personally meaningful lives. As with other chronic diseases that lack a definitive cure, the individual's service/recovery plan must include treatment interventions directed towards decreasing manifestations of the illness, rehabilitative services directed towards enhancing adaptive skills, and social support mobilization aimed at optimizing function and quality of life. In this review, we provide a conceptual framework for considering approaches for maximizing the effectiveness of the array of treatments and other services towards promoting recovery of persons with schizophrenia. We discuss pharmacological, psychological, and social strategies that decrease the burden of the disease of schizophrenia on affected individuals and their families while adding the least possible burden of treatment. In view of the multitude of treatments necessary to optimize outcomes for individuals with schizophrenia, effective coordination of these services is essential. In addition to providing best possible clinical assessment and pharmacological treatment, the psychiatrist must function as an effective leader of the treatment team. To do so, however, the psychiatrist must be knowledgeable about the range of available services, must have skills in clinical-administrative leadership, and must accept the responsibility of coordinating the planning and delivery of this multidimensional array of treatments and services. Finally, the effectiveness of providing optimal individualized treatment/rehabilitation is best gauged by measuring progress on multiple effectiveness domains. Approaches for efficient and reliable assessment are discussed.

  1. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, either referring to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed

  2. Formula I(1) and I(2): Race Tracks for Likelihood Maximization Algorithms of I(1) and I(2) Cointegrated VAR Models

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2017-11-01

    This paper provides some test cases, called circuits, for the evaluation of Gaussian likelihood maximization algorithms of the cointegrated vector autoregressive model. Both I(1) and I(2) models are considered. The performance of algorithms is compared first in terms of effectiveness, defined as the ability to find the overall maximum. The next step is to compare their efficiency and reliability across experiments. The aim of the paper is to commence a collective learning project by the profession on the actual properties of algorithms for cointegrated vector autoregressive model estimation, in order to improve their quality and, as a consequence, also the reliability of empirical research.

  3. Introduction to quality and reliability engineering

    CERN Document Server

    Jiang, Renyan

    2015-01-01

    This book presents the state-of-the-art in quality and reliability engineering from a product life cycle standpoint. Topics in reliability include reliability models, life data analysis and modeling, design for reliability and accelerated life testing, while topics in quality include design for quality, acceptance sampling and supplier selection, statistical process control, production tests such as screening and burn-in, warranty and maintenance. The book provides comprehensive insights into two closely related subjects, and includes a wealth of examples and problems to enhance reader comprehension and link theory and practice. All numerical examples can be easily solved using Microsoft Excel. The book is intended for senior undergraduate and post-graduate students in related engineering and management programs such as mechanical engineering, manufacturing engineering, industrial engineering and engineering management programs, as well as for researchers and engineers in the quality and reliability fields. D...

  4. Selection of reliable reference genes in Caenorhabditis elegans for analysis of nanotoxicity.

    Science.gov (United States)

    Zhang, Yanqiong; Chen, Dongliang; Smith, Michael A; Zhang, Baohong; Pan, Xiaoping

    2012-01-01

    Despite the rapid development and application of a wide range of manufactured metal oxide nanoparticles (NPs), the understanding of the potential risks of using NPs is less complete, especially at the molecular level. The nematode Caenorhabditis elegans (C. elegans) has been emerging as an environmental model to study the molecular mechanisms of environmental contamination, using standard genetic tools such as real-time quantitative PCR (RT-qPCR). The most important factor that may affect the accuracy of RT-qPCR is the choice of appropriate genes for normalization. In this study, we selected 13 reference gene candidates (act-1, cdc-42, pmp-3, eif-3.C, actin, act-2, csq-1, Y45F10D.4, tba-1, mdh-1, ama-1, F35G12.2, and rbd-1) and tested their expression stability under different doses of nano-copper oxide (CuO: 0, 1, 10, and 50 µg/mL) using RT-qPCR. Four algorithms, geNorm, NormFinder, BestKeeper, and the comparative ΔCt method, were employed to evaluate the expression stability of these 13 candidates. As a result, tba-1, Y45F10D.4 and pmp-3 were the most stable and may be used as reference genes in future studies of nanoparticle-induced genetic responses using C. elegans.

  5. A Criterion to Identify Maximally Entangled Four-Qubit State

    International Nuclear Information System (INIS)

    Zha Xinwei; Song Haiyang; Feng Feng

    2011-01-01

    Paolo Facchi, et al. [Phys. Rev. A 77 (2008) 060304(R)] presented a maximally multipartite entangled state (MMES). Here, we give a criterion for the identification of maximally entangled four-qubit states. Using this criterion, we not only identify some existing maximally entangled four-qubit states in the literature, but also find several new maximally entangled four-qubit states as well. (general)

  6. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data well. In particular, we consider the parameter estimation algorithm for the SRGM with a normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is simple to implement in a software application. A numerical experiment investigates the fitting ability of the SRGMs with normal distributions through 16 types of failure time data collected in real software projects.
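    The record above estimates the model with an EM algorithm; as a much simpler stand-in (an assumed model form, not the authors' code), the corresponding mean value function m(t) = a * Phi((t - mu) / sigma) can be fitted to cumulative failure counts by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def mean_value(t, a, mu, sigma):
    """Expected cumulative number of failures by time t for a normal-based SRGM."""
    return a * norm.cdf((t - mu) / sigma)

# hypothetical cumulative failure counts observed at the end of each week
t = np.arange(1, 17, dtype=float)
failures = np.array([2, 5, 9, 15, 22, 30, 37, 44, 50, 54,
                     57, 59, 61, 62, 62, 63], dtype=float)

(a, mu, sigma), _ = curve_fit(mean_value, t, failures, p0=[70.0, 8.0, 4.0])
print(f"expected total faults a = {a:.1f}, mu = {mu:.1f} weeks, sigma = {sigma:.1f} weeks")
```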

  7. Ecological neighborhoods as a framework for umbrella species selection

    Science.gov (United States)

    Stuber, Erica F.; Fontaine, Joseph J.

    2018-01-01

    Umbrella species are typically chosen because they are expected to confer protection for other species assumed to have similar ecological requirements. Despite its popularity and substantial history, the value of the umbrella species concept has come into question because umbrella species chosen using heuristic methods, such as body or home range size, are not acting as adequate proxies for the metrics of interest: species richness or population abundance in a multi-species community for which protection is sought. How species associate with habitat across ecological scales has important implications for understanding population size and species richness, and therefore may be a better proxy for choosing an umbrella species. We determined the spatial scales of ecological neighborhoods important for predicting abundance of 8 potential umbrella species breeding in Nebraska using Bayesian latent indicator scale selection in N-mixture models accounting for imperfect detection. We compare the conservation value measured as collective avian abundance under different umbrella species selected following commonly used criteria and selected based on identifying spatial land cover characteristics within ecological neighborhoods that maximize collective abundance. Using traditional criteria to select an umbrella species resulted in sub-maximal expected collective abundance in 86% of cases compared to selecting an umbrella species based on land cover characteristics that maximized collective abundance directly. We conclude that directly assessing the expected quantitative outcomes, rather than ecological proxies, is likely the most efficient method to maximize the potential for conservation success under the umbrella species concept.

  8. Criterion validity and reliability of a smartphone delivered sub-maximal fitness test for people with type 2 diabetes

    DEFF Research Database (Denmark)

    Brinklov, Cecilie Fau; Thorsen, Ida Kær; Karstoft, Kristian

    2016-01-01

    Background: Prevention of multi-morbidities following non-communicable diseases requires a systematic registration of adverse modifiable risk factors, including low physical fitness. The aim of the study was to establish criterion validity and reliability of a smartphone app (InterWalk) delivered....... The algorithm was validated using leave-one-out cross validation. Test-retest reliability was tested in a subset of participants (N = 10). Results: The overall VO2peak prediction of the algorithm (R2) was 0.60 and 0.45 when the smartphone was placed in the pockets of the pants and jacket, respectively (p ... calorimetry and the acceleration (vector magnitude) from the smartphone was obtained. The vector magnitude was used to predict VO2peak along with the co-variates weight, height and sex. The validity of the algorithm was tested when the smartphone was placed in the right pocket of the pants or jacket...
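    The validation scheme named above (leave-one-out cross-validation of a VO2peak prediction from accelerometry plus weight, height and sex) can be sketched as follows; the data and the linear model form are invented, since the InterWalk algorithm itself is not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n = 40
X = np.column_stack([
    rng.normal(1.2, 0.3, n),     # acceleration vector magnitude during the walk test
    rng.normal(85, 15, n),       # weight (kg)
    rng.normal(1.72, 0.09, n),   # height (m)
    rng.integers(0, 2, n),       # sex (0/1)
])
vo2 = 8 + 10 * X[:, 0] - 0.05 * X[:, 1] + 6 * X[:, 2] + 2 * X[:, 3] + rng.normal(0, 1.5, n)

pred = cross_val_predict(LinearRegression(), X, vo2, cv=LeaveOneOut())
print(f"leave-one-out R^2 = {r2_score(vo2, pred):.2f}")
```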

  9. Technology success: Integration of power plant reliability and effective maintenance

    International Nuclear Information System (INIS)

    Ferguson, K.

    2008-01-01

    The nuclear power generation sector has a tradition of utilizing technology as a key attribute for advancement. Companies that own, manage, and operate nuclear power plants can be expected to continue to rely on technology as a vital element of success. Inherent in the operations of the nuclear power industry in many parts of the world is the close connection between efficiency of power plant operations and successful business survival. The relationship among power plant availability, reliability of systems and components, and viability of the enterprise is more evident than ever. Technology decisions need to be made in a way that reflects business strategies and work processes, as well as the needs of stakeholders and authorities. Such rigor is needed to address overarching concerns such as power plant life extension and license renewal, new plant orders, outage management, plant safety, inventory management, etc. With respect to power plant reliability in particular, the prudent leveraging of technology as a key to future success is vital. A dominant concern is effective asset management as physical plant assets age. Many plants are in, or are entering, a situation in which system and component design life and margins are converging such that failure threats can come into play with increasing frequency. Wisely selected technologies can be vital to the identification of emerging threats to reliable performance of key plant features and to initiating effective maintenance actions and investments that can sustain or enhance current reliability in a cost-effective manner. This attention to detail is vital to investment in new plants as well. This paper and presentation will address (1) specific technology successes in place at power plants, including nuclear, that integrate attention to attaining high plant reliability and effective maintenance actions, as well as (2) complementary actions that maximize technology success. In addition, the range of benefits that accrue as a result of

  10. Reliability of near-infrared spectroscopy for measuring biceps brachii oxygenation during sustained and repeated isometric contractions.

    Science.gov (United States)

    Muthalib, Makii; Millet, Guillaume Y; Quaresima, Valentina; Nosaka, Kazunori

    2010-01-01

    We examine the test-retest reliability of biceps brachii tissue oxygenation index (TOI) parameters measured by near-infrared spectroscopy during a 10-s sustained and a 30-repetition (1-s contraction, 1-s relaxation) isometric contraction task at 30% of maximal voluntary contraction (30% MVC) and maximal (100% MVC) intensities. Eight healthy men (23 to 33 yr) were tested in three sessions separated by 3 h and 24 h, and the within-subject reliability of torque and of each TOI parameter was determined by Bland-Altman ±2 SD limits of agreement plots and the coefficient of variation (CV). No significant (P>0.05) differences between the three sessions were found for mean values of torque and TOI parameters during the sustained and repeated tasks at both contraction intensities. All TOI parameters were within the ±2 SD limits of agreement. The CVs for torque integral were similar between the sustained and repeated task at both intensities (4 to 7%); however, the CVs for TOI parameters during the sustained and repeated task were lower for 100% MVC (7 to 11%) than for 30% MVC (22 to 36%). It is concluded that the reliability of the biceps brachii NIRS parameters during both sustained and repeated isometric contraction tasks is acceptable.
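    The two reliability statistics used above, Bland-Altman ±2 SD limits of agreement and the within-subject coefficient of variation, can be sketched as follows with fabricated paired session values:

```python
import numpy as np

def bland_altman_limits(x1, x2):
    """Mean difference +/- 2 SD limits of agreement between two sessions."""
    diff = x1 - x2
    return diff.mean() - 2 * diff.std(ddof=1), diff.mean() + 2 * diff.std(ddof=1)

def within_subject_cv(x1, x2):
    """Within-subject coefficient of variation (%) from paired measurements."""
    pair = np.stack([x1, x2], axis=1)
    sd_w = np.sqrt(np.mean(pair.var(axis=1, ddof=1)))
    return 100 * sd_w / pair.mean()

session1 = np.array([62.0, 58.5, 71.2, 65.4, 60.1, 69.8, 63.3, 67.0])  # e.g. TOI (%)
session2 = np.array([60.8, 59.9, 69.5, 66.2, 61.7, 68.4, 64.1, 65.2])
print(bland_altman_limits(session1, session2))
print(f"CV = {within_subject_cv(session1, session2):.1f}%")
```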

  11. Vacua of maximal gauged D=3 supergravities

    International Nuclear Information System (INIS)

    Fischbacher, T; Nicolai, H; Samtleben, H

    2002-01-01

    We analyse the scalar potentials of maximal gauged three-dimensional supergravities, which reveal a surprisingly rich structure. In contrast to maximal supergravities in dimensions D≥4, all these theories possess a maximally supersymmetric (N=16) ground state with negative cosmological constant Λ < 0, except for the SO(4,4)² gauged theory, whose maximally supersymmetric ground state has Λ = 0. We compute the mass spectra of bosonic and fermionic fluctuations around these vacua and identify the unitary irreducible representations of the relevant background (super)isometry groups to which they belong. In addition, we find several stationary points which are not maximally supersymmetric, and determine their complete mass spectra as well. In particular, we show that there are analogues of all stationary points found in higher dimensions, among them de Sitter (dS) vacua in the theories with noncompact gauge groups SO(5,3)² and SO(4,4)², as well as anti-de Sitter (AdS) vacua in the compact gauged theory preserving 1/4 and 1/8 of the supersymmetries. All the dS vacua have tachyonic instabilities, whereas there do exist nonsupersymmetric AdS vacua which are stable, again in contrast to the D≥4 theories.

  12. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks.

    Science.gov (United States)

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-04-19

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, because the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which maximizes energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on the final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy is enhanced while suitable packet sizes are determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.

  13. CASSAVA BREEDING II: PHENOTYPIC CORRELATIONS THROUGH THE DIFFERENT STAGES OF SELECTION

    Directory of Open Access Journals (Sweden)

    Orlando Joaqui Barandica

    2016-12-01

    Breeding cassava relies on phenotypic recurrent selection that takes advantage of the vegetative propagation of this crop. Successive stages of selection (single row trial, SRT; preliminary yield trial, PYT; advanced yield trial, AYT; and uniform yield trial, UYT) gradually reduce the number of genotypes as the plot size, number of replications and locations increase. An important feature of this scheme is that, because of the clonal reproduction of cassava, the same identical genotypes are evaluated throughout these four successive stages of selection. For this study, data from 14 years (more than 30,000 data points) of evaluation in a sub-humid tropical environment were consolidated for a meta-analysis. Correlation coefficients for fresh root yield (FRY), dry matter content (DMC), harvest index (HIN) and plant type score (PTS) along the different stages of selection were estimated. DMC and PTS measured in different trials showed the highest correlation coefficients, indicating a relatively good repeatability. HIN had an intermediate repeatability, whereas FRY had the lowest value. The association between HIN and FRY was lower than expected, suggesting that HIN in early stages was not reliable as indirect selection for FRY in later stages. There was a consistent decrease in the average performance of clones grown in PYTs compared with the earlier evaluation of the same genotypes in SRTs. A feasible explanation for this trend is the impact of the environment on the physiological and nutritional status of the planting material and/or epigenetic effects. The usefulness of HIN is questioned, as measuring this variable takes considerable effort at harvest time. DMC and FRY showed a weak positive association in the SRT (r = 0.21) but a clearly negative one in the UYT (r = -0.42). The change in the relationship between these variables is the result of selection. In later stages of selection, the plant is forced to maximize productivity on a dry weight basis.

  14. Reliability of the Emergency Severity Index: Meta-analysis

    Directory of Open Access Journals (Sweden)

    Amir Mirhaghi

    2015-01-01

    Objectives: Although triage systems based on the Emergency Severity Index (ESI) have many advantages in terms of simplicity and clarity, previous research has questioned their reliability in practice. Therefore, the aim of this meta-analysis was to determine the reliability of ESI triage scales. Methods: This meta-analysis was performed in March 2014. Electronic research databases were searched and articles conforming to the Guidelines for Reporting Reliability and Agreement Studies were selected. Two researchers independently examined the selected abstracts. Data were extracted in the following categories: version of scale (latest/older), participants (adult/paediatric), raters (nurse, physician or expert), method of reliability (intra/inter-rater), reliability statistics (weighted/unweighted kappa) and the origin and publication year of the study. The effect size was obtained by the Z-transformation of reliability coefficients. Data were pooled with random-effects models and a meta-regression was performed based on the method of moments estimator. Results: A total of 19 studies from six countries were included in the analysis. The pooled coefficient for the ESI triage scales was substantial at 0.791 (95% confidence interval: 0.787‒0.795). Agreement was higher with the latest and adult versions of the scale and among expert raters, compared to agreement with older and paediatric versions of the scales and with other groups of raters, respectively. Conclusion: ESI triage scales showed an acceptable level of overall reliability. However, ESI scales require more development in order to see full agreement from all rater groups. Further studies concentrating on other aspects of reliability assessment are needed.
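    The pooling step described above (Fisher z-transformation of each study's coefficient, a random-effects combination, and back-transformation) can be sketched as below; the study coefficients, sample sizes, and the moment estimator of between-study variance are illustrative assumptions rather than the meta-analysis data:

```python
import numpy as np

r = np.array([0.82, 0.76, 0.85, 0.79, 0.74])       # per-study reliability coefficients
n = np.array([120, 85, 200, 60, 150])              # per-study sample sizes

z = np.arctanh(r)                                  # Fisher z-transform
v = 1.0 / (n - 3)                                  # approximate within-study variance
w = 1.0 / v
Q = np.sum(w * (z - np.sum(w * z) / np.sum(w)) ** 2)
tau2 = max(0.0, (Q - (len(z) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1.0 / (v + tau2)                            # random-effects weights
pooled = np.tanh(np.sum(w_re * z) / np.sum(w_re))  # back-transform to a coefficient
print(f"pooled coefficient = {pooled:.3f}")
```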

  15. Optimal reliability allocation for large software projects through soft computing techniques

    DEFF Research Database (Denmark)

    Madsen, Henrik; Albeanu, Grigore; Popentiu-Vladicescu, Florin

    2012-01-01

    or maximizing the system reliability subject to budget constraints. These kinds of optimization problems were considered both in deterministic and stochastic frameworks in literature. Recently, the intuitionistic-fuzzy optimization approach was considered as a soft computing successful modelling approach....... Firstly, a review on existing soft computing approaches to optimization is given. The main section extends the results considering self-organizing migrating algorithms for solving intuitionistic-fuzzy optimization problems attached to complex fault-tolerant software architectures which proved...

  16. Validity, Reliability, and Performance Determinants of a New Job-Specific Anaerobic Work Capacity Test for the Norwegian Navy Special Operations Command.

    Science.gov (United States)

    Angeltveit, Andreas; Paulsen, Gøran; Solberg, Paul A; Raastad, Truls

    2016-02-01

    Operators in Special Operation Forces (SOF) have a particularly demanding profession where physical and psychological capacities can be challenged to the extremes. The diversity of physical capacities needed depends on the mission. Consequently, tests used to monitor SOF operators' physical fitness should cover a broad range of physical capacities. Whereas tests for strength and aerobic endurance are established, there is no test for specific anaerobic work capacity described in the literature. The purpose of this study was therefore to evaluate the reliability and validity, and to identify performance determinants, of a new test developed for testing specific anaerobic work capacity in SOF operators. Nineteen active young students were included in the concurrent validity part of the study. The students performed the evacuation (EVAC) test 3 times and the results were compared for reliability and with performance in the Wingate cycle test, 300-m sprint, and a maximal accumulated oxygen deficit (MAOD) test. In part II of the study, 21 Norwegian Navy Special Operations Command operators conducted the EVAC test, anthropometric measurements, a dual x-ray absorptiometry scan, leg press, isokinetic knee extensions, a maximal oxygen uptake test, and a countermovement jump (CMJ) test. The EVAC test showed good reliability after 1 familiarization trial (intraclass correlation = 0.89; coefficient of variation = 3.7%). The EVAC test correlated well with the Wingate test (r = -0.68), 300-m sprint time (r = 0.51), and 300-m mean power (W) (r = -0.67). No significant correlation was found with the MAOD test. In part II of the study, height, body mass, lean body mass, isokinetic knee extension torque, maximal oxygen uptake, and maximal power in a CMJ were significantly correlated with performance in the EVAC test. The EVAC test is a reliable and valid test of anaerobic work capacity for SOF operators, and muscle mass, leg strength, and leg power seem to be the most important determinants

  17. Reliability of the Handgrip Strength Test in Elderly Subjects With Parkinson Disease.

    Science.gov (United States)

    Villafañe, Jorge H; Valdes, Kristin; Buraschi, Riccardo; Martinelli, Marco; Bissolotti, Luciano; Negrini, Stefano

    2016-03-01

    The handgrip strength test is widely used by clinicians; however, little has been investigated about its reliability when used in subjects with Parkinson disease (PD). The purpose of this study was to investigate the test-retest reliability of the handgrip strength test for subjects with PD. The PD group consisted of 15 patients, and the control group consisted of 15 healthy subjects. Each participant performed 3 pain-free maximal isometric contractions with each hand on 2 occasions, 1 week apart. The intraclass correlation coefficient (ICC), standard error of measurement (SEM), and 95% limits of agreement (LOA) were calculated. A 2-way analysis of variance (ANOVA) was conducted to determine the differences between sides and groups. Test-retest reliability of grip strength measurements was excellent for the dominant (ICC = 0.97; P = .001) and non-dominant (ICC = 0.98; P = .001) hands of participants with PD, and for the dominant (ICC = 0.99; P = .001) and non-dominant (ICC = 0.99; P = .001) hands of the healthy group. The Jamar hand dynamometer had fair to excellent test-retest reliability for testing grip strength in participants with PD.

  18. Revealing kinetics and state-dependent binding properties of IKur-targeting drugs that maximize atrial fibrillation selectivity

    Science.gov (United States)

    Ellinwood, Nicholas; Dobrev, Dobromir; Morotti, Stefano; Grandi, Eleonora

    2017-09-01

    The KV1.5 potassium channel, which underlies the ultra-rapid delayed-rectifier current (IKur) and is predominantly expressed in atria vs. ventricles, has emerged as a promising target to treat atrial fibrillation (AF). However, while numerous KV1.5-selective compounds have been screened, characterized, and tested in various animal models of AF, evidence of antiarrhythmic efficacy in humans is still lacking. Moreover, current guidelines for pre-clinical assessment of candidate drugs heavily rely on steady-state concentration-response curves or IC50 values, which can overlook adverse cardiotoxic effects. We sought to investigate the effects of kinetics and state-dependent binding of IKur-targeting drugs on atrial electrophysiology in silico and reveal the ideal properties of IKur blockers that maximize anti-AF efficacy and minimize pro-arrhythmic risk. To this aim, we developed a new Markov model of IKur that describes KV1.5 gating based on experimental voltage-clamp data in atrial myocytes from patient right-atrial samples in normal sinus rhythm. We extended the IKur formulation to account for state-specificity and kinetics of KV1.5-drug interactions and incorporated it into our human atrial cell model. We simulated 1- and 3-Hz pacing protocols in drug-free conditions and with a [drug] equal to the IC50 value. The effects of binding and unbinding kinetics were determined by examining permutations of the forward (kon) and reverse (koff) binding rates to the closed, open, and inactivated states of the KV1.5 channel. We identified a subset of ideal drugs exhibiting anti-AF electrophysiological parameter changes at fast pacing rates (effective refractory period prolongation), while having little effect on normal sinus rhythm (limited action potential prolongation). Our results highlight that accurately accounting for channel interactions with drugs, including kinetics and state-dependent binding, is critical for developing safer and more effective pharmacological anti
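    To make the notion of kinetic, state-dependent binding concrete, the toy model below (a drastically reduced sketch, not the published KV1.5 Markov model) has closed/open/inactivated gating states and lets the drug bind only the open state with rates kon*[drug] and koff; all rate constants are placeholders:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, drug, kon, koff):
    C, O, I, OB = y                 # closed, open, inactivated, open-blocked
    a, b = 2.0, 1.0                 # closed<->open rates (1/ms), placeholders
    ki, kr = 0.2, 0.05              # open->inactivated and recovery rates
    dC = b * O - a * C
    dO = a * C + kr * I + koff * OB - (b + ki + kon * drug) * O
    dI = ki * O - kr * I
    dOB = kon * drug * O - koff * OB
    return [dC, dO, dI, dOB]

# [drug] = 1 (arbitrary units), fast binding, slow unbinding
sol = solve_ivp(rhs, [0, 200], [1.0, 0.0, 0.0, 0.0],
                args=(1.0, 0.5, 0.01), max_step=1.0)
print(f"fraction drug-blocked at 200 ms: {sol.y[3, -1]:.2f}")
```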

  19. Application of Expectation Maximization Method for Purchase Decision-Making Support in Welding Branch

    Directory of Open Access Journals (Sweden)

    Kujawińska Agnieszka

    2016-06-01

    The article presents a study applying the proposed method of cluster analysis to support purchasing decisions in the welding industry. The authors analyze the usefulness of the non-hierarchical Expectation Maximization (EM) method in the selection of material (212 combinations of flux and wire melt) for the SAW (Submerged Arc Welding) process. The proposed approach to cluster analysis proves useful in supporting purchase decisions.
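    A minimal sketch of the EM clustering step referred to above, using scikit-learn's GaussianMixture on invented feature vectors for 212 flux-wire combinations (the welding data themselves are not reproduced here):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# hypothetical features per flux-wire combination:
# price index, deposition rate (kg/h), impact toughness (J)
X = np.vstack([
    rng.normal([1.0, 4.0, 40.0], [0.1, 0.5, 5.0], size=(70, 3)),
    rng.normal([1.6, 6.5, 55.0], [0.1, 0.5, 5.0], size=(70, 3)),
    rng.normal([2.3, 5.0, 70.0], [0.1, 0.5, 5.0], size=(72, 3)),
])

gm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)
labels = gm.predict(X)
print(np.bincount(labels))     # cluster sizes among the 212 combinations
```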

  20. Sex differences in autonomic function following maximal exercise.

    Science.gov (United States)

    Kappus, Rebecca M; Ranadive, Sushant M; Yan, Huimin; Lane-Cordova, Abbi D; Cook, Marc D; Sun, Peng; Harvey, I Shevon; Wilund, Kenneth R; Woods, Jeffrey A; Fernhall, Bo

    2015-01-01

    Heart rate variability (HRV), blood pressure variability (BPV), and heart rate recovery (HRR) are measures that provide insight regarding autonomic function. Maximal exercise can affect autonomic function, and it is unknown whether there are sex differences in autonomic recovery following exercise. Therefore, the purpose of this study was to determine sex differences in several measures of autonomic function and their response following maximal exercise. Seventy-one (31 male and 40 female) healthy, nonsmoking, sedentary normotensive subjects between the ages of 18 and 35 underwent measurements of HRV and BPV at rest and following a maximal exercise bout. HRR was measured at minutes one and two following maximal exercise. Males had significantly greater HRR following maximal exercise at both minute one and two; however, the significance between sexes was eliminated when controlling for VO2 peak. Males had significantly higher resting BPV low-frequency (LF) values compared to females and did not significantly change following exercise, whereas females had significantly increased BPV-LF values following acute maximal exercise. Although males and females exhibited a significant decrease in both HRV-LF and HRV-high frequency (HF) with exercise, females had significantly higher HRV-HF values following exercise. Males had a significantly higher HRV-LF/HF ratio at rest; however, both males and females significantly increased their HRV-LF/HF ratio following exercise. Pre-menopausal females exhibit a cardioprotective autonomic profile compared to age-matched males due to lower resting sympathetic activity and faster vagal reactivation following maximal exercise. Acute maximal exercise is a sufficient autonomic stressor to demonstrate sex differences in the critical post-exercise recovery period.

  1. Eccentric exercise decreases maximal insulin action in humans

    DEFF Research Database (Denmark)

    Asp, Svend; Daugaard, J R; Kristiansen, S

    1996-01-01

    subjects participated in two euglycaemic clamps, performed in random order. One clamp was preceded 2 days earlier by one-legged eccentric exercise (post-eccentric exercise clamp (PEC)) and one was without the prior exercise (control clamp (CC)). 2. During PEC the maximal insulin-stimulated glucose uptake...... for all three clamp steps used (P maximal activity of glycogen synthase was identical in the two thighs for all clamp steps. 3. The glucose infusion rate (GIR......) necessary to maintain euglycaemia during maximal insulin stimulation was lower during PEC compared with CC (15.7%, 81.3 +/- 3.2 vs. 96.4 +/- 8.8 mumol kg-1 min-1, P maximal...

  2. Maximize x(a - x)

    Science.gov (United States)

    Lange, L. H.

    1974-01-01

    Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)

  3. People consider reliability and cost when verifying their autobiographical memories.

    Science.gov (United States)

    Wade, Kimberley A; Nash, Robert A; Garry, Maryanne

    2014-02-01

    Because memories are not always accurate, people rely on a variety of strategies to verify whether the events that they remember really did occur. Several studies have examined which strategies people tend to use, but none to date has asked why people opt for certain strategies over others. Here we examined the extent to which people's beliefs about the reliability and the cost of different strategies would determine their strategy selection. Subjects described a childhood memory and then suggested strategies they might use to verify the accuracy of that memory. Next, they rated the reliability and cost of each strategy, and the likelihood that they might use it. Reliability and cost each predicted strategy selection, but a combination of the two ratings provided even greater predictive value. Cost was significantly more influential than reliability, which suggests that a tendency to seek and to value "cheap" information more than reliable information could underlie many real-world memory errors. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Reliability engineering for nuclear and other high technology systems

    International Nuclear Information System (INIS)

    Lakner, A.A.; Anderson, R.T.

    1985-01-01

    This book is written for the reliability instructor, program manager, system engineer, design engineer, reliability engineer, nuclear regulator, probability risk assessment (PRA) analyst, general manager and others who are involved in system hardware acquisition, design and operation and are concerned with plant safety and operational cost-effectiveness. It provides criteria, guidelines and comprehensive engineering data affecting reliability; it covers the key aspects of system reliability as it relates to conceptual planning, cost tradeoff decisions, specification, contractor selection, design, test and plant acceptance and operation. It treats reliability as an integrated methodology, explicitly describing life cycle management techniques as well as the basic elements of a total hardware development program, including: reliability parameters and design improvement attributes, reliability testing, reliability engineering and control. It describes how these elements can be defined during procurement, and implemented during design and development to yield reliable equipment. (author)

  5. Test-Retest Reliability of Isokinetic Knee Strength Measurements in Children Aged 8 to 10 Years.

    Science.gov (United States)

    Fagher, Kristina; Fritzson, Annelie; Drake, Anna Maria

    Isokinetic dynamometry is a useful tool to objectively assess muscle strength of children and adults in athletic and rehabilitative settings. This study examined test-retest reliability of isokinetic knee strength measurements in children aged 8 to 10 years and defined limits for the minimum difference (MD) in strength that indicates a clinically important change. Isokinetic knee strength measurements (using the Biodex System 4) in children will provide reliable results. Descriptive laboratory study. In 22 healthy children, 5 maximal concentric (CON) knee extensor (KE) and knee flexor (KF) contractions at 2 angular velocities (60 deg/s and 180 deg/s) and 5 maximal eccentric (ECC) KE/KF contractions at 60 deg/s were assessed 7 days apart. The intraclass correlation coefficient, ICC(2,1), was used to examine relative reliability, and the MD was calculated on the basis of standard error of measurement. ICCs for CON KE/KF peak torque measurements were fair to excellent (range, 0.49-0.81). The MD% values for CON KE and KF ranged from 31% to 37% at 60 deg/s and from 34% to 39% at 180 deg/s. ICCs in the ECC mode were good (range, 0.60-0.70), but associated MD% values were high (>50%). There was no systematic error for CON KE/KF and ECC KE strength measurements at 60 deg/s, but systematic error was found for all other measurements. The dynamometer provides a reliable analysis of isokinetic CON knee strength measurements at 60 deg/s in children aged 8 to 10 years. Measurements at 180 deg/s and in the ECC mode were not reliable, indicating a need for more familiarization prior to testing. The MD values may help clinicians to determine whether a change in knee strength is due to error or intervention.
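
    For context, the minimum difference (MD) reported in records like this one is conventionally derived from the standard error of measurement (SEM). A minimal sketch of that calculation, assuming the common formulation SEM = SD·sqrt(1 - ICC) and MD = 1.96·sqrt(2)·SEM (the study's exact computation may differ):

    ```python
    import math

    def minimum_difference(sd_between_subjects: float, icc: float, z: float = 1.96) -> dict:
        """Standard error of measurement and minimum (detectable) difference.

        sd_between_subjects: pooled SD of the measure across subjects
        icc: test-retest intraclass correlation coefficient
        z: coverage factor (1.96 for a 95% confidence level)
        """
        sem = sd_between_subjects * math.sqrt(1.0 - icc)   # SEM = SD * sqrt(1 - ICC)
        md = z * math.sqrt(2.0) * sem                       # accounts for error in both test and retest
        return {"SEM": sem, "MD": md}

    # Hypothetical example: peak torque SD = 20 Nm, ICC = 0.80
    print(minimum_difference(20.0, 0.80))
    ```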

  6. Energy Efficiency and SINR Maximization Beamformers for Spectrum Sharing With Sensing Information

    KAUST Repository

    Alabbasi, Abdulrahman

    2014-09-01

    In this paper, we consider a cognitive radio multi-input-multi-output environment, in which we adapt our beamformer to maximize both energy efficiency (EE) and signal-to-interference-plus-noise ratio (SINR) metrics. Our design considers an underlaying communication using adaptive beamforming schemes combined with sensing information to achieve optimal energy-efficient systems. The proposed schemes maximize EE and SINR metrics subject to cognitive radio and quality-of-service constraints. The analysis of the proposed schemes is classified into two categories based on knowledge of the secondary-transmitter-to-primary-receiver channel. Since the optimizations of EE and SINR problems are not convex problems, we transform them into a standard semidefinite programming (SDP) form to guarantee that the optimal solutions are global. An analytical solution is provided for one scheme, while the second scheme is left in a standard SDP form. Selected numerical results are used to quantify the impact of the sensing information on the proposed schemes compared to the benchmark ones.
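
    As a simplified illustration of SINR-maximizing beamforming (ignoring the cognitive-radio, sensing and energy-efficiency aspects treated in the paper), the receive weight that maximizes the generalized Rayleigh quotient |w^H h|^2 / (w^H R w) is proportional to R^{-1} h:

    ```python
    import numpy as np

    def max_sinr_beamformer(h: np.ndarray, R: np.ndarray) -> np.ndarray:
        """Weight vector maximizing |w^H h|^2 / (w^H R w) for channel h and
        interference-plus-noise covariance R (classic MVDR-type solution)."""
        w = np.linalg.solve(R, h)          # w proportional to R^{-1} h
        return w / np.linalg.norm(w)       # normalization does not change the SINR

    # Toy example with 4 antennas (hypothetical channel and covariance)
    rng = np.random.default_rng(0)
    h = rng.normal(size=4) + 1j * rng.normal(size=4)
    A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    R = A @ A.conj().T + np.eye(4)         # Hermitian positive definite
    w = max_sinr_beamformer(h, R)
    sinr = abs(w.conj() @ h) ** 2 / np.real(w.conj() @ R @ w)
    print(f"achieved SINR: {sinr:.2f}")
    ```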

  7. Effect of knee angle on neuromuscular assessment of plantar flexor muscles: A reliability study

    Science.gov (United States)

    Cornu, Christophe; Jubeau, Marc

    2018-01-01

    Introduction This study aimed to determine the intra- and inter-session reliability of neuromuscular assessment of plantar flexor (PF) muscles at three knee angles. Methods Twelve young adults were tested for three knee angles (90°, 30° and 0°) and at three time points separated by 1 hour (intra-session) and 7 days (inter-session). Electrical (H reflex, M wave) and mechanical (evoked and maximal voluntary torque, activation level) parameters were measured on the PF muscles. Intraclass correlation coefficients (ICC) and coefficients of variation were calculated to determine intra- and inter-session reliability. Results The mechanical measurements presented excellent (ICC>0.75) intra- and inter-session reliabilities regardless of the knee angle considered. The reliability of electrical measurements was better for the 90° knee angle compared to the 0° and 30° angles. Conclusions Changes in the knee angle may influence the reliability of neuromuscular assessments, which indicates the importance of considering the knee angle to collect consistent outcomes on the PF muscles. PMID:29596480
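
    For reference, the ICC values used in reliability studies such as this one can be computed from a subjects-by-sessions score matrix; below is a minimal sketch of the two-way random-effects ICC(2,1) in the Shrout-Fleiss mean-squares formulation (assumed here; the authors' software may differ):

    ```python
    import numpy as np

    def icc_2_1(scores: np.ndarray) -> float:
        """ICC(2,1) for an (n subjects) x (k sessions) matrix of scores."""
        n, k = scores.shape
        grand = scores.mean()
        ms_r = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subjects
        ms_c = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-sessions
        sse = ((scores - scores.mean(axis=1, keepdims=True)
                       - scores.mean(axis=0, keepdims=True) + grand) ** 2).sum()
        ms_e = sse / ((n - 1) * (k - 1))                                   # residual
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    # Hypothetical test-retest torques (Nm) for 5 subjects measured twice
    torque = np.array([[100, 104], [88, 86], [120, 118], [95, 101], [110, 108]], float)
    print(f"ICC(2,1) = {icc_2_1(torque):.3f}")
    ```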

  8. Design for Reliability of Power Electronics in Renewable Energy Systems

    DEFF Research Database (Denmark)

    Ma, Ke; Yang, Yongheng; Wang, Huai

    2014-01-01

    Power electronics is the enabling technology for maximizing the power captured from renewable electrical generation, e.g., the wind and solar technology, and also for an efficient integration into the grid. Therefore, it is important that the power electronics are reliable and do not have too many...... failures during operation which otherwise will increase cost for operation, maintenance and reputation. Typically, power electronics in renewable electrical generation has to be designed for 20–30 years of operation, and in order to do that, it is crucial to know about the mission profile of the power...... electronics technology as well as to know how the power electronics technology is loaded in terms of temperature and other stressors relevant to reliability. Hence, this chapter will show the basics of power electronics technology for renewable energy systems, describe the mission profile of the technology...

  9. Geometric mean for subspace selection.

    Science.gov (United States)

    Tao, Dacheng; Li, Xuelong; Wu, Xindong; Maybank, Stephen J

    2009-02-01

    Subspace selection approaches are powerful tools in pattern classification and data visualization. One of the most important subspace approaches is the linear dimensionality reduction step in the Fisher's linear discriminant analysis (FLDA), which has been successfully employed in many fields such as biometrics, bioinformatics, and multimedia information management. However, the linear dimensionality reduction step in FLDA has a critical drawback: for a classification task with c classes, if the dimension of the projected subspace is strictly lower than c - 1, the projection to a subspace tends to merge those classes, which are close together in the original feature space. If separate classes are sampled from Gaussian distributions, all with identical covariance matrices, then the linear dimensionality reduction step in FLDA maximizes the mean value of the Kullback-Leibler (KL) divergences between different classes. Based on this viewpoint, the geometric mean for subspace selection is studied in this paper. Three criteria are analyzed: 1) maximization of the geometric mean of the KL divergences, 2) maximization of the geometric mean of the normalized KL divergences, and 3) the combination of 1 and 2. Preliminary experimental results based on synthetic data, UCI Machine Learning Repository, and handwriting digits show that the third criterion is a potential discriminative subspace selection method, which significantly reduces the class separation problem in comparing with the linear dimensionality reduction step in FLDA and its several representative extensions.
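
    To make the first criterion concrete, here is a minimal sketch of evaluating the geometric mean of pairwise KL divergences in a projected subspace, under the simplifying assumption of Gaussian classes with a shared covariance (an illustration of the idea, not the authors' implementation):

    ```python
    import numpy as np
    from itertools import combinations

    def geo_mean_kl(W: np.ndarray, means: list, cov: np.ndarray) -> float:
        """Geometric mean of pairwise KL divergences between Gaussian classes
        N(mu_i, cov) after projection by W (columns span the subspace)."""
        S = W.T @ cov @ W                      # shared covariance in the subspace
        S_inv = np.linalg.inv(S)
        kls = []
        for i, j in combinations(range(len(means)), 2):
            d = W.T @ (means[i] - means[j])
            kls.append(0.5 * d @ S_inv @ d)    # KL for equal-covariance Gaussians
        return float(np.exp(np.mean(np.log(kls))))

    # Toy example: three classes in 3-D projected onto 2 dimensions
    means = [np.array([0., 0., 0.]), np.array([2., 0., 1.]), np.array([0., 2., -1.])]
    cov = np.eye(3)
    W = np.array([[1., 0.], [0., 1.], [0., 0.]])
    print(f"geometric mean of KL divergences: {geo_mean_kl(W, means, cov):.3f}")
    ```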

  10. Maximizing Lifetime of Wireless Sensor Networks with Mobile Sink Nodes

    Directory of Open Access Journals (Sweden)

    Yourong Chen

    2014-01-01

    Full Text Available In order to maximize network lifetime and balance energy consumption when sink nodes can move, maximizing lifetime of wireless sensor networks with mobile sink nodes (MLMS is researched. The movement path selection method of sink nodes is proposed. Modified subtractive clustering method, k-means method, and nearest neighbor interpolation method are used to obtain the movement paths. The lifetime optimization model is established under flow constraint, energy consumption constraint, link transmission constraint, and other constraints. The model is solved from the perspective of static and mobile data gathering of sink nodes. Subgradient method is used to solve the lifetime optimization model when one sink node stays at one anchor location. Geometric method is used to evaluate the amount of gathering data when sink nodes are moving. Finally, all sensor nodes transmit data according to the optimal data transmission scheme. Sink nodes gather the data along the shortest movement paths. Simulation results show that MLMS can prolong network lifetime, balance node energy consumption, and reduce data gathering latency under appropriate parameters. Under certain conditions, it outperforms Ratio_w, TPGF, RCC, and GRND.

  11. Resolution of GSI B-56 - Emergency diesel generator reliability

    International Nuclear Information System (INIS)

    Serkiz, A.W.

    1989-01-01

    The need for an emergency diesel generator (EDG) reliability program has been established by 10 CFR Part 50, Section 50.63, Loss of All Alternating Current Power, which requires that licensees assess their station blackout coping and recovery capability. EDGs are the principal emergency ac power sources for avoiding a station blackout. Regulatory Guide 1.155, Station Blackout, identifies a need for (1) a nuclear unit EDG reliability level of at least 0.95, and (2) an EDG reliability program to monitor and maintain the required EDG reliability levels. NUMARC-8700, Guidelines and Technical Bases for NUMARC Initiatives Addressing Station Blackout at Light Water Reactors, also provides guidance on such needs. The resolution of GSI B-56, Diesel Reliability will be accomplished by issuing Regulatory Guide 1.9, Rev. 3, Selection, Design, Qualification, Testing, and Reliability of Diesel Generator Units Used as Onsite Electric Power Systems at Nuclear Plants. This revision will integrate into a single regulatory guide pertinent guidance previously addressed in R.G. 1.9, Rev. 2, R.G. 1.108, and Generic Letter 84-15. R.G. 1.9 has been expanded to define the principal elements of an EDG reliability program for monitoring and maintaining EDG reliability levels selected for SBO. In addition, alert levels and corrective actions have been defined to detect a deteriorating situation for all EDGs assigned to a particular nuclear unit, as well as an individual problem EDG

  12. Improving fMRI reliability in presurgical mapping for brain tumours.

    Science.gov (United States)

    Stevens, M Tynan R; Clarke, David B; Stroink, Gerhard; Beyea, Steven D; D'Arcy, Ryan Cn

    2016-03-01

    Functional MRI (fMRI) is becoming increasingly integrated into clinical practice for presurgical mapping. Current efforts are focused on validating data quality, with reliability being a major factor. In this paper, we demonstrate the utility of a recently developed approach that uses receiver operating characteristic-reliability (ROC-r) to: (1) identify reliable versus unreliable data sets; (2) automatically select processing options to enhance data quality; and (3) automatically select individualised thresholds for activation maps. Presurgical fMRI was conducted in 16 patients undergoing surgical treatment for brain tumours. Within-session test-retest fMRI was conducted, and ROC-reliability of the patient group was compared to a previous healthy control cohort. Individually optimised preprocessing pipelines were determined to improve reliability. Spatial correspondence was assessed by comparing the fMRI results to intraoperative cortical stimulation mapping, in terms of the distance to the nearest active fMRI voxel. The average ROC-r reliability for the patients was 0.58±0.03, as compared to 0.72±0.02 in healthy controls. For the patient group, this increased significantly to 0.65±0.02 by adopting optimised preprocessing pipelines. Co-localisation of the fMRI maps with cortical stimulation was significantly better for more reliable versus less reliable data sets (8.3±0.9 vs 29±3 mm, respectively). We demonstrated ROC-r analysis for identifying reliable fMRI data sets, choosing optimal postprocessing pipelines, and selecting patient-specific thresholds. Data sets with higher reliability also showed closer spatial correspondence to cortical stimulation. ROC-r can thus identify poor fMRI data at time of scanning, allowing for repeat scans when necessary. ROC-r analysis provides optimised and automated fMRI processing for improved presurgical mapping. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence

  13. Utility Maximization in Nonconvex Wireless Systems

    CERN Document Server

    Brehmer, Johannes

    2012-01-01

    This monograph formulates a framework for modeling and solving utility maximization problems in nonconvex wireless systems. First, a model for utility optimization in wireless systems is defined. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed. The development is based on a careful examination of the properties that are required for the application of each method. The focus is on problems whose initial formulation does not allow for a solution by standard convex methods. Solution approaches that take into account the nonconvexities inherent to wireless systems are discussed in detail. The monograph concludes with two case studies that demonstrate the application of the proposed framework to utility maximization in multi-antenna broadcast channels.

  14. Reliability and Validity of Selected PROMIS Measures in People with Rheumatoid Arthritis.

    Directory of Open Access Journals (Sweden)

    Susan J Bartlett

    Full Text Available To evaluate the reliability and validity of 11 PROMIS measures to assess symptoms and impacts identified as important by people with rheumatoid arthritis (RA). Consecutive patients (N = 177) in an observational study completed PROMIS computer adapted tests (CATs) and a short form (SF) assessing pain, fatigue, physical function, mood, sleep, and participation. We assessed test-retest reliability and internal consistency using correlation and Cronbach's alpha. We assessed convergent validity by examining Pearson correlations between PROMIS measures and existing measures of similar domains and known groups validity by comparing scores across disease activity levels using ANOVA. Participants were mostly female (82%) and white (83%) with mean (SD) age of 56 (13) years; 24% had ≤ high school, 29% had RA ≤ 5 years with 13% ≤ 2 years, and 22% were disabled. PROMIS Physical Function, Pain Interference and Fatigue instruments correlated moderately to strongly (rho's ≥ 0.68) with corresponding PROs. Test-retest reliability ranged from .725-.883, and Cronbach's alpha from .906-.991. A dose-response relationship with disease activity was evident in Physical Function with similar trends in other scales except Anger. These data provide preliminary evidence of reliability and construct validity of PROMIS CATs to assess RA symptoms and impacts, and feasibility of use in clinical care. PROMIS instruments captured the experiences of RA patients across the broad continuum of RA symptoms and function, especially at low disease activity levels. Future research is needed to evaluate performance in relevant subgroups, assess responsiveness and identify clinically meaningful changes.

  15. Selection of reliable reference genes in Caenorhabditis elegans for analysis of nanotoxicity.

    Directory of Open Access Journals (Sweden)

    Yanqiong Zhang

    Full Text Available Despite rapid development and application of a wide range of manufactured metal oxide nanoparticles (NPs), the understanding of the potential risks of using NPs is less complete, especially at the molecular level. The nematode Caenorhabditis elegans (C. elegans) has been emerging as an environmental model to study the molecular mechanisms of environmental contamination, using standard genetic tools such as real-time quantitative PCR (RT-qPCR). The most important factor that may affect the accuracy of RT-qPCR is the choice of appropriate genes for normalization. In this study, we selected 13 reference gene candidates (act-1, cdc-42, pmp-3, eif-3.C, actin, act-2, csq-1, Y45F10D.4, tba-1, mdh-1, ama-1, F35G12.2, and rbd-1) to test their expression stability under different doses of nano-copper oxide (CuO; 0, 1, 10, and 50 µg/mL) using RT-qPCR. Four algorithms, geNorm, NormFinder, BestKeeper, and the comparative ΔCt method, were employed to evaluate the expression of these 13 candidates. As a result, tba-1, Y45F10D.4 and pmp-3 were the most reliable, and may be used as reference genes in future studies of nanoparticle-induced genetic responses using C. elegans.

  16. Diet selection in a molluscivore shorebird across Western Europe : does it show short- or long-term intake rate-maximization?

    NARCIS (Netherlands)

    Quaintenne, Gwenael; van Gils, Jan A.; Bocher, Pierrick; Dekinga, Anne; Piersma, Theunis; Webb, Tom

    1. Studies of diet choice usually assume maximization of energy intake. The well-known 'contingency model' (CM) additionally assumes that foraging animals only spend time searching or handling prey. Despite considerable empirical support, there are many foraging contexts in which the CM fails, but

  17. Summary of the preparation of methodology for digital system reliability analysis for PSA purposes

    International Nuclear Information System (INIS)

    Hustak, S.; Babic, P.

    2001-12-01

    The report is structured as follows: Specific features of and requirements for the digital part of NPP Instrumentation and Control (I and C) systems (Computer-controlled digital technologies and systems of the NPP I and C system; Specific types of digital technology failures and preventive provisions; Reliability requirements for the digital parts of I and C systems; Safety requirements for the digital parts of I and C systems; Defence-in-depth). Qualitative analyses of NPP I and C system reliability and safety (Introductory system analysis; Qualitative requirements for and proof of NPP I and C system reliability and safety). Quantitative reliability analyses of the digital parts of I and C systems (Selection of a suitable quantitative measure of digital system reliability; Selected qualitative and quantitative findings regarding digital system reliability; Use of relations among the occurrences of the various types of failure). Mathematical section in support of the calculation of the various types of indices (Boolean reliability models, Markovian reliability models). Example of digital system analysis (Description of a selected protective function and the relevant digital part of the I and C system; Functional chain examined, its components and fault tree). (P.A.)
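
    As a minimal illustration of the Markovian reliability models mentioned above, the following sketch computes the availability of a generic two-state repairable component (illustrative rates, not the report's specific I and C model):

    ```python
    import numpy as np

    # Two-state Markov model of a repairable digital module:
    # state 0 = operating, state 1 = failed; constant failure rate lam, repair rate mu.
    lam, mu = 1e-4, 1e-1          # per hour (illustrative values only)

    # Steady-state availability follows from the balance equation lam * P0 = mu * P1.
    availability = mu / (lam + mu)
    print(f"steady-state availability: {availability:.6f}")

    # Transient availability (starting in the operating state), from the
    # Chapman-Kolmogorov solution of the two-state chain.
    t = np.linspace(0.0, 100.0, 5)                        # hours
    A_t = mu / (lam + mu) + lam / (lam + mu) * np.exp(-(lam + mu) * t)
    for ti, ai in zip(t, A_t):
        print(f"A({ti:5.1f} h) = {ai:.6f}")
    ```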

  18. Differences and implications in biogeochemistry from maximizing entropy production locally versus globally

    Directory of Open Access Journals (Sweden)

    J. J. Vallino

    2011-06-01

    Full Text Available In this manuscript we investigate the use of the maximum entropy production (MEP) principle for modeling biogeochemical processes that are catalyzed by living systems. Because of novelties introduced by the MEP approach, many questions need to be answered and techniques developed in the application of MEP to describe biological systems that are responsible for energy and mass transformations on a planetary scale. In previous work we introduced the importance of integrating entropy production over time to distinguish abiotic from biotic processes under transient conditions. Here we investigate the ramifications of modeling biological systems involving one or more spatial dimensions. When modeling systems over space, entropy production can be maximized either locally at each point in space asynchronously or globally over the system domain synchronously. We use a simple two-box model inspired by two-layer ocean models to illustrate the differences in local versus global entropy maximization. Synthesis and oxidation of biological structure is modeled using two autocatalytic reactions that account for changes in community kinetics using a single parameter each. Our results show that entropy production can be increased if maximized over the system domain rather than locally, which has important implications regarding how biological systems organize and supports the hypothesis for multiple levels of selection and cooperation in biology for the dissipation of free energy.

  19. A reliability program for emergency diesel generators at nuclear power plants: Program structure

    International Nuclear Information System (INIS)

    Lofgren, E.V.; DeMoss, G.M.; Fragola, J.R.; Appignani, P.L.; Delarche, G.; Boccio, J.

    1988-04-01

    The purpose of this report is to provide technical guidelines for NRC staff use in the development of positions for evaluating emergency diesel generator (EDG) reliability programs. Such reviews will likely result following resolution of USI A-44 and GSI B-56. The diesel generator reliability program is a management system for achieving and maintaining a selected (or target) level of reliability. This can be achieved by: (1) understanding the factors that control the EDG reliability and (2) then applying reliability and maintenance techniques in the proper proportion to achieve selected performance goals. The concepts and guidelines discussed in this report are concepts and approaches that have been successful in applications where high levels of reliability must be maintained. Both an EDG reliability program process and a set of review items for NRC use are provided. The review items represent a checklist for reviewing EDG reliability programs. They do not, in themselves, constitute a reliability program. Rather, the review items are those distinctive features of a reliability program that must be present for the program to be effective

  20. Throughput maximization for buffer-aided hybrid half-/full-duplex relaying with self-interference

    KAUST Repository

    Khafagy, Mohammad Galal

    2015-06-01

    In this work, we consider a two-hop cooperative setting where a source communicates with a destination through an intermediate relay node with a buffer. Unlike the existing body of work on buffer-aided half-duplex relaying, we consider a hybrid half-/full-duplex relaying scenario with loopback interference in the full-duplex mode. Depending on the channel outage and buffer states that are assumed available at the transmitters, the source and relay may either transmit simultaneously or revert to orthogonal transmission. Specifically, a joint source/relay scheduling and relaying mode selection mechanism is proposed to maximize the end-to-end throughput. The throughput maximization problem is converted to a linear program where the exact global optimal solution is efficiently obtained via standard convex/linear numerical optimization tools. Finally, the theoretical findings are corroborated with event-based simulations to provide the necessary performance validation.
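
    The conversion to a linear program can be illustrated with a toy time-sharing version of the problem (hypothetical per-mode rates, not the paper's exact formulation): choose the fraction of time spent in each transmission mode so that the relay buffer stays balanced while relay-to-destination throughput is maximized.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical per-mode rates (bits/s/Hz): [source->relay, relay->destination]
    modes = np.array([
        [2.0, 0.0],   # half-duplex: source transmits only
        [0.0, 1.5],   # half-duplex: relay transmits only
        [1.2, 1.0],   # full-duplex with loopback interference (both rates reduced)
    ])
    r_sr, r_rd = modes[:, 0], modes[:, 1]

    # Variables: time-sharing fractions t_m >= 0.
    # Maximize end-to-end throughput sum_m t_m * r_rd_m (linprog minimizes, so negate).
    c = -r_rd
    A_eq = np.vstack([np.ones(3),          # fractions sum to 1
                      r_sr - r_rd])        # long-run buffer balance: arrivals = departures
    b_eq = np.array([1.0, 0.0])
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3)
    print("time shares:", np.round(res.x, 3), " throughput:", round(-res.fun, 3))
    ```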

  1. Reliable Ant Colony Routing Algorithm for Dual-Channel Mobile Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    YongQiang Li

    2018-01-01

    Full Text Available For the problem of poor link reliability caused by high-speed dynamic changes and congestion owing to low network bandwidth in ad hoc networks, an ant colony routing algorithm, based on reliable path under dual-channel condition (DSAR), is proposed. First, dual-channel communication mode is used to improve network bandwidth, and a hierarchical network model is proposed to optimize the dual-layer network. Thus, we reduce network congestion and communication delay. Second, a comprehensive reliable path selection strategy is designed, and the reliable path is selected ahead of time to reduce the probability of routing restart. Finally, the ant colony algorithm is used to improve the adaptability of the routing algorithm to changes of network topology. Simulation results show that DSAR improves the reliability of routing, packet delivery, and throughput.

  2. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization.

    Science.gov (United States)

    Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for

  3. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    Directory of Open Access Journals (Sweden)

    Yoanna Arlina Kurnianingsih

    2015-05-01

    Full Text Available We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61 to 80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic

  4. Component reliability for electronic systems

    CERN Document Server

    Bajenescu, Titu-Marius I

    2010-01-01

    The main reason for the premature breakdown of today's electronic products (computers, cars, tools, appliances, etc.) is the failure of the components used to build these products. Today professionals are looking for effective ways to minimize the degradation of electronic components to help ensure longer-lasting, more technically sound products and systems. This practical book offers engineers specific guidance on how to design more reliable components and build more reliable electronic systems. Professionals learn how to optimize a virtual component prototype, accurately monitor product reliability during the entire production process, and add the burn-in and selection procedures that are the most appropriate for the intended applications. Moreover, the book helps system designers ensure that all components are correctly applied, margins are adequate, wear-out failure modes are prevented during the expected duration of life, and system interfaces cannot lead to failure.

  5. Reliability of application of inspection procedures

    Energy Technology Data Exchange (ETDEWEB)

    Murgatroyd, R A

    1988-12-31

    This document deals with the reliability of application of inspection procedures. A method to ensure that the inspection of defects thanks to fracture mechanics is reliable is described. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the possibility of error. It appears that it is essential that inspection procedures should be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors should be optimised. (TEC). 3 refs.

  6. Reliability of application of inspection procedures

    International Nuclear Information System (INIS)

    Murgatroyd, R.A.

    1988-01-01

    This document deals with the reliability of application of inspection procedures. A method to ensure that the inspection of defects thanks to fracture mechanics is reliable is described. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the possibility of error. It appears that it is essential that inspection procedures should be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors should be optimised. (TEC)

  7. Maximally Informative Observables and Categorical Perception

    OpenAIRE

    Tsiang, Elaine

    2012-01-01

    We formulate the problem of perception in the framework of information theory, and prove that categorical perception is equivalent to the existence of an observable that has the maximum possible information on the target of perception. We call such an observable maximally informative. Regardless of whether categorical perception is real, maximally informative observables can form the basis of a theory of perception. We conclude with the implications of such a theory for the problem of speech per...

  8. Study of selected problems of reliability of the supply chain in the trading company

    Directory of Open Access Journals (Sweden)

    2010-06-01

    Full Text Available The paper presents the problems of the reliability of the supply chain as a whole as a function of the reliability of its elements. Different variants of reserving channels (primary and reserve ones) and issues connected with their switching are discussed.
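
    A minimal sketch of the kind of calculation involved when a reserve channel backs up a primary one, assuming a cold-standby reserve, an imperfect switching device and no failures while in standby (the paper's own models and notation are not reproduced here):

    ```python
    def standby_channel_reliability(r_primary: float, r_reserve: float, r_switch: float) -> float:
        """Reliability of a primary channel backed by a cold-standby reserve channel.

        The system works if the primary channel works, or if the primary fails but the
        switching device and the reserve channel both work.
        """
        return r_primary + (1.0 - r_primary) * r_switch * r_reserve

    # Hypothetical mission reliabilities
    print(standby_channel_reliability(r_primary=0.90, r_reserve=0.85, r_switch=0.95))  # ~0.981
    ```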

  9. Reliability of nuclear power plants and equipment

    International Nuclear Information System (INIS)

    1985-01-01

    The standard sets out the general principles, a list of reliability indexes, and requirements for their selection. Reliability indexes of nuclear power plants include the simple indexes of fail-safe operation, life and maintainability, and of storage capability. All terms and notions are explained, and the methods of evaluating the indexes are briefly listed - statistical, and calculational-experimental. The dates when the standard comes into force in the individual CMEA countries are given. (M.D.)

  10. Reliability of bounce drop jump parameters within elite male rugby players.

    Science.gov (United States)

    Costley, Lisa; Wallace, Eric; Johnston, Michael; Kennedy, Rodney

    2017-07-25

    The aims of the study were to investigate the number of familiarisation sessions required to establish reliability of the bounce drop jump (BDJ) and subsequent reliability once familiarisation is achieved. Seventeen trained male athletes completed 4 BDJs in 4 separate testing sessions. Force-time data from a 20 cm BDJ was obtained using two force plates (ensuring ground contact < 250 ms). Subjects were instructed to 'jump for maximal height and minimal contact time' while the best and average of four jumps were compared. A series of performance variables were assessed in both eccentric and concentric phases including jump height, contact time, flight time, reactive strength index (RSI), peak power, rate of force development (RFD) and actual dropping height (ADH). Reliability was assessed using the intraclass correlation coefficient (ICC) and coefficient of variation (CV) while familiarisation was assessed using a repeated measures analysis of variance (ANOVA). The majority of DJ parameters exhibited excellent reliability with no systematic bias evident, while the average of 4 trials provided greater reliability. With the exception of vertical stiffness (CV: 12.0 %) and RFD (CV: 16.2 %) all variables demonstrated low within subject variation (CV range: 3.1 - 8.9 %). Relative reliability was very poor for ADH, with heights ranging from 14.87 - 29.85 cm. High levels of reliability can be obtained from the BDJ with the exception of vertical stiffness and RFD, however, extreme caution must be taken when comparing DJ results between individuals and squads due to large discrepancies between actual drop height and platform height.

  11. 78 FR 69018 - Improving the Resiliency of Mobile Wireless Communications Networks; Reliability and Continuity...

    Science.gov (United States)

    2013-11-18

    ... consumers value overall network reliability and quality in selecting mobile wireless service providers, they...-125] Improving the Resiliency of Mobile Wireless Communications Networks; Reliability and Continuity... (Reliability NOI) in 2011 to ``initiate a comprehensive examination of issues regarding the reliability...

  12. Reliability of maximal isometric knee strength testing with modified hand-held dynamometry in patients awaiting total knee arthroplasty: useful in research and individual patient settings? A reliability study

    NARCIS (Netherlands)

    Koblbauer, Ian F. H.; Lambrecht, Yannick; van der Hulst, Micheline L. M.; Neeter, Camille; Engelbert, Raoul H. H.; Poolman, Rudolf W.; Scholtes, Vanessa A.

    2011-01-01

    Patients undergoing total knee arthroplasty (TKA) often experience strength deficits both pre- and post-operatively. As these deficits may have a direct impact on functional recovery, strength assessment should be performed in this patient population. For these assessments, reliable measurements

  13. Reliability of maximal isometric knee strength testing with modified hand-held dynamometry in patients awaiting total knee arthroplasty: useful in research and individual patient settings? A reliability study

    NARCIS (Netherlands)

    Koblbauer, I.F.H.; Lambrecht, Y.; van der Hulst, M.L.M.; Neeter, C.; Engelbert, R.H.H.; Poolman, R.W.; Scholtes, V.A.

    2011-01-01

    Background: Patients undergoing total knee arthroplasty (TKA) often experience strength deficits both pre- and post-operatively. As these deficits may have a direct impact on functional recovery, strength assessment should be performed in this patient population. For these assessments, reliable

  14. Maximally Entangled Multipartite States: A Brief Survey

    International Nuclear Information System (INIS)

    Enríquez, M; Wintrowicz, I; Życzkowski, K

    2016-01-01

    The problem of identifying maximally entangled quantum states of composite quantum systems is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of maximally entangled state depends on the measure used. (paper)

  15. Corporate Social Responsibility and Profit Maximizing Behaviour

    OpenAIRE

    Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta

    2005-01-01

    We examine the behavior of a profit maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers' SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer driven) corporate social responsibility is an optimal choice compatible with profit maximizing behavior.

  16. Reliability and validity of selected measures associated with increased fall risk in females over the age of 45 years with distal radius fracture - A pilot study.

    Science.gov (United States)

    Mehta, Saurabh P; MacDermid, Joy C; Richardson, Julie; MacIntyre, Norma J; Grewal, Ruby

    2015-01-01

    Clinical measurement. This study examined test-retest reliability and convergent/divergent construct validity of selected tests and measures that assess balance impairment, fear of falling (FOF), impaired physical activity (PA), and lower extremity muscle strength (LEMS) in females >45 years of age after distal radius fracture (DRF). Twenty-one female participants with DRF were assessed on two occasions. Timed Up and Go, Functional Reach, and One Leg Standing tests assessed balance impairment. Shortened Falls Efficacy Scale, Activity-specific Balance Confidence scale, and Fall Risk Perception Questionnaire assessed FOF. International Physical Activity Questionnaire and Rapid Assessment of Physical Activity were administered to assess PA level. Chair stand test and isometric muscle strength testing for hip and knee assessed LEMS. Intraclass correlation coefficients (ICC) examined the test-retest reliability of the measures. Pearson correlation coefficients (r) examined concurrent relationships between the measures. The results demonstrated fair to excellent test-retest reliability (ICC between 0.50 and 0.96) and low to moderate concordance between the measures (low if r ≤ 0.4; moderate if r = 0.4-0.7). The results provide preliminary estimates of test-retest reliability and convergent/divergent construct validity of selected measures associated with increased risk for falling in females >45 years of age after DRF. Further research directions to advance knowledge regarding fall risk assessment in the DRF population have been identified. Copyright © 2015 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.

  17. Multivariate performance reliability prediction in real-time

    International Nuclear Information System (INIS)

    Lu, S.; Lu, H.; Kolarik, W.J.

    2001-01-01

    This paper presents a technique for predicting system performance reliability in real-time considering multiple failure modes. The technique includes on-line multivariate monitoring and forecasting of selected performance measures and conditional performance reliability estimates. The performance measures across time are treated as a multivariate time series. A state-space approach is used to model the multivariate time series. Recursive forecasting is performed by adopting Kalman filtering. The predicted mean vectors and covariance matrix of performance measures are used for the assessment of system survival/reliability with respect to the conditional performance reliability. The technique and modeling protocol discussed in this paper provide a means to forecast and evaluate the performance of an individual system in a dynamic environment in real-time. The paper also presents an example to demonstrate the technique
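
    A minimal sketch of the recursive forecasting step described above, using a generic linear-Gaussian state-space recursion with illustrative matrices (the paper's model structure and parameters are not reproduced):

    ```python
    import numpy as np

    def kalman_step(x, P, z, A, H, Q, R):
        """One Kalman filter cycle: forecast the state, then update with measurement z."""
        # Forecast (time update)
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Update (measurement update)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_pred, P_pred, x_new, P_new

    # Two correlated performance measures observed directly (illustrative values)
    A = np.array([[0.98, 0.02], [0.00, 0.97]])       # state transition
    H = np.eye(2)                                     # both measures observed
    Q, R = 0.01 * np.eye(2), 0.05 * np.eye(2)
    x, P = np.array([1.0, 0.8]), np.eye(2)
    z = np.array([0.97, 0.78])
    x_pred, P_pred, x, P = kalman_step(x, P, z, A, H, Q, R)
    print("one-step forecast:", np.round(x_pred, 3))  # feeds the conditional reliability estimate
    ```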

  18. Guinea pig maximization test

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner

    1985-01-01

    Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...

  19. Gradient Dynamics and Entropy Production Maximization

    Science.gov (United States)

    Janečka, Adam; Pavelka, Michal

    2018-01-01

    We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs: how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of a dissipation potential and entropy; it automatically satisfies Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has a statistical interpretation. Entropy production maximization is based on knowledge of free energy (or another thermodynamic potential) and entropy production. It also leads to the linear Onsager reciprocal relations and it has proven successful in thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium, and we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. Besides, a commonly used but not often mentioned step in the entropy production maximization is pinpointed and the condition of incompressibility is incorporated into gradient dynamics.

  20. On Maximal Non-Disjoint Families of Subsets

    Directory of Open Access Journals (Sweden)

    Yu. A. Zuev

    2017-01-01

    Full Text Available The paper studies maximal non-disjoint families of subsets of a finite set. Non-disjointness means that any two subsets of a family have a nonempty intersection. The maximality is expressed by the fact that adding a new subset to the family cannot increase its power without violating the non-disjointness condition. Studying the properties of such families is an important section of extremal set theory. Along with purely combinatorial interest, the problems considered here play an important role in informatics, anti-noise coding, and cryptography. In 1961 this problem saw the light of day in the Erdos, Ko and Rado paper, which established the maximum power of a non-disjoint family of subsets of equal power. In 1974 the Erdos and Kleitman publication estimated the number of maximal non-disjoint families of subsets without requiring the equality of their power. These authors failed to establish an asymptotics of the logarithm of the number of such families when the power of a basic finite set tends to infinity. However, they suggested such an asymptotics as a hypothesis. A.D. Korshunov in two publications in 2003 and 2005 established the asymptotics for the number of non-disjoint families of subsets of arbitrary powers without the maximality condition on these families. The basis for the approach used in the paper to study the families of subsets is their description in the language of Boolean functions. A one-to-one correspondence between a family of subsets and a Boolean function is established by the fact that the characteristic vectors of subsets of a family are considered to be the unit sets of a Boolean function. The main theoretical result of the paper is that the maximal non-disjoint families are in one-to-one correspondence with the monotonic self-dual Boolean functions. When estimating the number of maximal non-disjoint families, this allowed us to use the result of A.A. Sapozhenko, who established the asymptotics of the number of the
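
    A small standard example (not taken from the paper) may help illustrate the correspondence with monotone self-dual Boolean functions:

    ```latex
    \[
      \mathcal{F} \;=\; \bigl\{\{1,2\},\,\{1,3\},\,\{2,3\},\,\{1,2,3\}\bigr\}
      \quad\text{over the ground set } \{1,2,3\}
    \]
    % F is non-disjoint (any two members intersect) and maximal: adding the empty set
    % or any singleton creates a disjoint pair. Its characteristic vectors are exactly
    % the unit sets of the majority function
    \[
      f(x_1,x_2,x_3) \;=\; x_1 x_2 \lor x_1 x_3 \lor x_2 x_3,
    \]
    % which is monotone and self-dual, matching the correspondence described above.
    ```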

  1. Inquiry in bibliography some of the bustan`s maxim

    Directory of Open Access Journals (Sweden)

    sajjad rahmatian

    2016-12-01

    Full Text Available Sa`di is one of those poets who gave a special place to preaching and guiding people, and among his works the text of the Bustan is devoted throughout to advice and maxims on various legal and ethical subjects. In composing this work and expressing its moral points, Sa`di was surely influenced, directly or indirectly, by earlier sources and possibly drew on their content. The main purpose of this article is to review the basis and sources of the Bustan's maxims and to show which texts and works influenced Sa`di when he expressed the maxims of this work. To this end, the sources that are devoted more or less to aphorisms were searched and examined in order to discover and extract traces of their moral and didactic content in Sa`di's work. Among the most important findings of this study are the indirect effect of some Pahlavi books of maxims (such as the maxims of Azarbad Marespandan and the book of maxims of Bozorgmehr) and the observation that Sa`di was directly influenced by the moral and ethical works of poets and writers before him; of these, his debt to the maxims of Abu Shakur Balkhi, Ferdowsi and Keikavus is remarkable and noteworthy.

  2. Can monkeys make investments based on maximized pay-off?

    Directory of Open Access Journals (Sweden)

    Sophie Steelandt

    2011-03-01

    Full Text Available Animals can maximize benefits, but it is not known if they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by using different decision rules for each partner quality. A single Tonkean macaque succeeded in investing a maximal amount to one experimenter and a minimal amount to the other. The fact that only one out of 21 subjects learned to maximize benefits in adapting investment according to experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible.

  3. Developing Reliable Life Support for Mars

    Science.gov (United States)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human tended closed system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and
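
    The spares-sizing argument above can be made concrete with a standard Poisson model; this is a generic sketch under an assumed constant failure rate and hypothetical numbers, not an agency sizing method:

    ```python
    from scipy.stats import poisson

    def spares_needed(failure_rate_per_hour: float, mission_hours: float, goal: float) -> int:
        """Smallest number of spares s such that P(failures <= s) >= goal,
        assuming failures follow a Poisson process with a constant rate."""
        expected_failures = failure_rate_per_hour * mission_hours
        s = 0
        while poisson.cdf(s, expected_failures) < goal:
            s += 1
        return s

    # Hypothetical water-processor pump: 1 failure per 10,000 h, 2.5-year mission
    rate, hours = 1e-4, 2.5 * 365 * 24
    for goal in (0.90, 0.99, 0.999):
        print(f"goal {goal:.3f}: {spares_needed(rate, hours, goal)} spares")
    ```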

  4. Maximal lattice free bodies, test sets and the Frobenius problem

    DEFF Research Database (Denmark)

    Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt

    Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral m...... method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune....

  5. Disk Density Tuning of a Maximal Random Packing.

    Science.gov (United States)

    Ebeida, Mohamed S; Rushdi, Ahmad A; Awad, Muhammad A; Mahmoud, Ahmed H; Yan, Dong-Ming; English, Shawn A; Owens, John D; Bajaj, Chandrajit L; Mitchell, Scott A

    2016-08-01

    We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.

  6. AN APPRAISAL OF INSTRUCTIONAL UNITS TO ENHANCE STUDENT UNDERSTANDING OF PROFIT-MAXIMIZING PRINCIPLES. RESEARCH SERIES IN AGRICULTURAL EDUCATION.

    Science.gov (United States)

    BARKER, RICHARD L.; BENDER, RALPH E.

    TWENTY-TWO SELECTED OHIO VOCATIONAL AGRICULTURE TEACHERS AND 262 JUNIOR AND SENIOR VOCATIONAL AGRICULTURE STUDENTS PARTICIPATED IN A STUDY TO MEASURE THE RELATIVE EFFECTIVENESS OF NEWLY DEVELOPED INSTRUCTIONAL UNITS DESIGNED TO ENHANCE STUDENT UNDERSTANDING OF PROFIT-MAXIMIZING PRINCIPLES IN FARM MANAGEMENT. FARM MANAGEMENT WAS TAUGHT IN THE…

  7. Agreement and Reliability of Functional Performance and Muscle Power in Patients with Advanced Osteoarthritis of the Hip or Knee

    DEFF Research Database (Denmark)

    Villadsen, Allan; Roos, Ewa M; Overgaard, Søren

    2012-01-01

    OBJECTIVE: The purpose of this study was to test the reproducibility and clinical feasibility of three functional performance measures and five single-joint or multijoint muscle power measures. DESIGN: Twenty patients with a mean age of 68.7 ± 7.2 yrs with severe hip or knee osteoarthritis were assessed for test-retest reliability and agreement on two occasions 1 wk apart. The outcomes were maximal single-joint muscle power (hip extension/abduction and knee extension/flexion), maximal muscle power during multijoint leg extension press, and functional performance measures (20-m walk, five-time repeated chair stands, and repeated unilateral knee bending). RESULTS: For single-joint and multijoint maximal peak power and functional performance measures, we demonstrated poor (CVws, approximately 25%, single-joint hip extension) and moderate (CVws, approximately 15%, multijoint leg extension press......

  8. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
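
    A toy sketch of the fixation-selection rule described above: greedily fixate wherever the expected reduction in total uncertainty is largest, with an assumed Gaussian resolution fall-off on a one-dimensional 'image' (an illustration of the principle, not the authors' model):

    ```python
    import numpy as np

    def choose_fixations(uncertainty: np.ndarray, n_fixations: int, sigma: float = 3.0):
        """Greedy rule: fixate where the expected reduction in total uncertainty is largest."""
        positions = np.arange(len(uncertainty))
        u = uncertainty.astype(float).copy()
        # Resolution gain falls off with distance from the fixated location.
        gain = np.exp(-0.5 * ((positions[None, :] - positions[:, None]) / sigma) ** 2)
        fixations = []
        for _ in range(n_fixations):
            expected_reduction = gain @ u              # per candidate fixation location
            best = int(np.argmax(expected_reduction))  # maximize information gained
            fixations.append(best)
            u *= 1.0 - gain[best]                      # uncertainty resolved near the fixation
        return fixations

    # Hypothetical uncertainty profile with two interesting regions
    u0 = np.concatenate([np.full(10, 0.2), np.full(5, 1.0), np.full(10, 0.2), np.full(5, 0.8)])
    print("fixation sequence:", choose_fixations(u0, n_fixations=3))
    ```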

  9. A reliability analysis tool for SpaceWire network

    Science.gov (United States)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft. Therefore, it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of the SpaceWire network. According to the function division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task leads to the system reliability matrix, and the reliability of the network system can be deduced by integrating all of these reliability indexes in the matrix. With this method, we develop a reliability analysis tool for the SpaceWire network based on VC, in which the computation schemes for the reliability matrix and the multi-path-task reliability are also implemented. By using this tool, we analyze several cases on typical architectures. The analytic results indicate that a redundancy architecture has better reliability performance than the basic one. In practice, a dual redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. Finally, this reliability analysis tool will have a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.

  10. On the way towards a generalized entropy maximization procedure

    International Nuclear Information System (INIS)

    Bagci, G. Baris; Tirnakli, Ugur

    2009-01-01

    We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results in the exponential stationary distribution asymptotically for q ∈ (0,1] in contrast to the stationary distribution of the inverse power law obtained through the ordinary entropy maximization procedure. Another result of the generalized entropy maximization procedure is that one can naturally obtain all the possible stationary distributions associated with the Tsallis entropies by employing either ordinary or q-generalized Fourier transforms in the averaging procedure.
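
    For reference, the two entropy families involved are usually defined as follows (standard definitions with the Boltzmann constant set to 1; both recover the Boltzmann-Gibbs-Shannon entropy as q → 1):

    ```latex
    \[
      S_q^{\mathrm{R\'enyi}} \;=\; \frac{1}{1-q}\,\ln\!\sum_i p_i^{\,q},
      \qquad
      S_q^{\mathrm{Tsallis}} \;=\; \frac{1-\sum_i p_i^{\,q}}{q-1},
      \qquad
      \lim_{q\to 1} S_q \;=\; -\sum_i p_i \ln p_i .
    \]
    ```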

  11. Violating Bell inequalities maximally for two d-dimensional systems

    International Nuclear Information System (INIS)

    Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin

    2006-01-01

    We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |Ψ⟩_app, which violate the Bell inequality more strongly than the maximally entangled state but are somewhat close to these eigenvectors, is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information.
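
    For the familiar d = 2 case, the maximal eigenvalue of the Bell (CHSH) operator can be checked numerically. The measurement settings below are the standard choice that saturates the Tsirelson bound, not the asymmetric d-dimensional construction of the paper:

        import numpy as np

        # Pauli matrices
        sz = np.array([[1, 0], [0, -1]], dtype=complex)
        sx = np.array([[0, 1], [1, 0]], dtype=complex)

        # Alice measures A1 = sz, A2 = sx; Bob's settings are rotated by 45 degrees.
        A1, A2 = sz, sx
        B1 = (sz + sx) / np.sqrt(2)
        B2 = (sz - sx) / np.sqrt(2)

        # CHSH Bell operator: B = A1(x)B1 + A1(x)B2 + A2(x)B1 - A2(x)B2
        B = (np.kron(A1, B1) + np.kron(A1, B2)
             + np.kron(A2, B1) - np.kron(A2, B2))

        print("maximal eigenvalue:", np.linalg.eigvalsh(B).max())   # ~ 2*sqrt(2), the Tsirelson bound
        print("classical bound   :", 2.0)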

  12. Information maximization explains the emergence of complex cell-like neurons

    Directory of Open Access Journals (Sweden)

    Takuma Tanaka

    2013-11-01

    We propose models and a method to qualitatively explain the receptive field properties of complex cells in the primary visual cortex. We apply a learning method based on the information maximization principle in a feedforward network, which comprises an input layer of image patches, simple cell-like first-output-layer neurons, and second-output-layer neurons (Model 1). The information maximization results in the emergence of complex cell-like receptive field properties in the second-output-layer neurons. After learning, second-output-layer neurons receive connection weights of equal magnitude from two first-output-layer neurons with sign-inverted receptive fields. The second-output-layer neurons replicate the phase invariance and iso-orientation suppression. Furthermore, on the basis of these results, we examine a simplified model showing the emergence of complex cell-like receptive fields (Model 2). We show that after learning, the output neurons of this model exhibit iso-orientation suppression, cross-orientation facilitation, and end stopping, which are similar to those found in complex cells. These properties of model neurons suggest that complex cells in the primary visual cortex become selective to features composed of edges to increase the variability of the output.

  13. Several submaximal exercise tests are reliable, valid and acceptable in people with chronic pain, fibromyalgia or chronic fatigue: a systematic review

    NARCIS (Netherlands)

    Ratter, Julia; Radlinger, Lorenz; Lucas, Cees

    2014-01-01

    Are submaximal and maximal exercise tests reliable, valid and acceptable in people with chronic pain, fibromyalgia and fatigue disorders? Systematic review of studies of the psychometric properties of exercise tests. People older than 18 years with chronic pain, fibromyalgia and chronic fatigue

  14. Reliability data bases: the current picture

    International Nuclear Information System (INIS)

    Fragola, J.R.

    1985-01-01

    The paper addresses specific advances in nuclear power plant reliability data base development, presents a critical review of a select set of relevant data bases, and suggests future data development needs required for risk assessment techniques to reach their full potential.

  15. Real-time topic-aware influence maximization using preprocessing.

    Science.gov (United States)

    Chen, Wei; Lin, Tian; Yang, Cheng

    2016-01-01

    Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes, based on a certain influence diffusion model, is maximized. Topic-aware influence diffusion models have been recently proposed to address the issue that influence between a pair of users is often topic-dependent, and that the information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate, providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.
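
    The preprocessing algorithms themselves are not spelled out in the abstract; as a minimal sketch of the baseline they aim to avoid re-running, greedy influence maximization under a (topic-mixed) independent cascade model repeatedly adds the seed with the largest estimated marginal spread. The graph, topic mixture and probabilities below are illustrative assumptions:

        import random

        def simulate_ic(graph, seeds, trials=200):
            """Estimate influence spread under the independent cascade model.
            graph: dict node -> list of (neighbor, activation_probability)."""
            total = 0
            for _ in range(trials):
                active, frontier = set(seeds), list(seeds)
                while frontier:
                    nxt = []
                    for u in frontier:
                        for v, p in graph.get(u, []):
                            if v not in active and random.random() < p:
                                active.add(v)
                                nxt.append(v)
                    frontier = nxt
                total += len(active)
            return total / trials

        def greedy_seeds(graph, k, trials=200):
            """Baseline greedy seed selection by marginal gain in estimated spread."""
            seeds = []
            for _ in range(k):
                base = simulate_ic(graph, seeds, trials) if seeds else 0.0
                best, best_gain = None, -1.0
                for u in graph:
                    if u in seeds:
                        continue
                    gain = simulate_ic(graph, seeds + [u], trials) - base
                    if gain > best_gain:
                        best, best_gain = u, gain
                seeds.append(best)
            return seeds

        # Toy example: each edge probability is a weighted mixture of two topics.
        mix = (0.7, 0.3)                                   # hypothetical topic mixture
        topic_p = {("a", "b"): (0.9, 0.1), ("b", "c"): (0.2, 0.8),
                   ("a", "c"): (0.5, 0.5), ("c", "d"): (0.6, 0.4)}
        graph = {"d": []}
        for (u, v), (p1, p2) in topic_p.items():
            graph.setdefault(u, []).append((v, mix[0] * p1 + mix[1] * p2))

        random.seed(0)
        print(greedy_seeds(graph, k=2))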

  16. A least squares approach for efficient and reliable short-term versus long-term optimization

    DEFF Research Database (Denmark)

    Christiansen, Lasse Hjuler; Capolei, Andrea; Jørgensen, John Bagterp

    2017-01-01

    The uncertainties related to long-term forecasts of oil prices impose significant financial risk on ventures of oil production. To minimize risk, oil companies are inclined to maximize profit over short-term horizons ranging from months to a few years. In contrast, conventional production optimization maximizes long-term profits over horizons that span more than a decade. To address this challenge, the oil literature has introduced short-term versus long-term optimization. Ideally, this problem is solved by a posteriori multi-objective optimization methods that generate an approximation… the balance between the objectives, leaving an unfulfilled potential to increase profits. To promote efficient and reliable short-term versus long-term optimization, this paper introduces a natural way to characterize desirable Pareto points and proposes a novel least squares (LS) method. Unlike hierarchical…

  17. Reserve selection with land market feedbacks.

    Science.gov (United States)

    Butsic, Van; Lewis, David J; Radeloff, Volker C

    2013-01-15

    How best to site reserves is a leading question for conservation biologists. Recently, reserve selection has emphasized efficient conservation: maximizing conservation goals given the reality of limited conservation budgets, and this work indicates that land markets can potentially undermine the conservation benefits of reserves by increasing property values and development probabilities near reserves. Here we propose a reserve selection methodology which optimizes conservation given both a budget constraint and land market feedbacks, using a combination of econometric models along with stochastic dynamic programming. We show that amenity-based feedbacks can be accounted for in optimal reserve selection by choosing property price and land development models which exogenously estimate the effects of reserve establishment. In our empirical example, we use previously estimated models of land development and property prices to select parcels to maximize coarse woody debris along 16 lakes in Vilas County, WI, USA. Using each lake as an independent experiment, we find that including land market feedbacks in the reserve selection algorithm has only small effects on conservation efficacy. Likewise, we find that in our setting heuristic (minloss and maxgain) algorithms perform nearly as well as the optimal selection strategy. We emphasize that land market feedbacks can be included in optimal reserve selection; the extent to which this improves reserve placement will likely vary across landscapes.

  18. Sprint running: how changes in step frequency affect running mechanics and leg spring behaviour at maximal speed.

    Science.gov (United States)

    Monte, Andrea; Muollo, Valentina; Nardello, Francesca; Zamparo, Paola

    2017-02-01

    The purpose of this study was to investigate the changes in selected biomechanical variables in 80-m maximal sprint runs while imposing changes in step frequency (SF), and to investigate whether these adaptations differ based on gender and training level. A total of 40 athletes (10 elite men and 10 women, 10 intermediate men and 10 women) participated in this study; they were requested to perform 5 trials at maximal running speed (RS): at the self-selected frequency (SF_s) and at ±15% and ±30% of SF_s. Contact time (CT) and flight time (FT) as well as step length (SL) decreased with increasing SF, while k_vert increased with it. At SF_s, k_leg was the lowest (a 20% decrease at ±30% SF_s), while RS was the largest (a 12% decrease at ±30% SF_s). Only small changes (1.5%) in maximal vertical force (F_max) were observed as a function of SF, but maximum leg spring compression (ΔL) was largest at SF_s and decreased by about 25% at ±30% SF_s. Significant differences in F_max, Δy, k_leg and k_vert were observed as a function of skill and gender (P < 0.001). Our results indicate that RS is optimised at SF_s and that, while k_vert follows the changes in SF, k_leg is lowest at SF_s.

  19. El culto de Maximón en Guatemala

    OpenAIRE

    Pédron‑Colombani, Sylvie

    2009-01-01

    This article focuses on the figure of Maximón, a syncretic deity of Guatemala, in a context in which popular Catholicism is being displaced by Protestant churches. This hybrid divinity, onto which Catholic saints such as Judas Iscariot or the Mayan god Mam are grafted, allows Maximón to be appropriated by distinct segments of the population (both indigenous and mestizo). It likewise allows him to serve as a symbol of veiled social protest when Maximón is associated with figures…

  20. Reliability assessment of distribution system with the integration of renewable distributed generation

    International Nuclear Information System (INIS)

    Adefarati, T.; Bansal, R.C.

    2017-01-01

    Highlights:
    • Addresses impacts of renewable DG on the reliability of the distribution system.
    • Multi-objective formulation for maximizing the cost saving with integration of DG.
    • Uses a Markov model to study the stochastic characteristics of the major components.
    • The investigation is done using a modified RBTS bus test distribution system.
    • The proposed approach is useful for electric utilities to enhance the reliability.

    Abstract: Recent studies have shown that renewable energy resources will contribute substantially to future energy generation owing to the rapid depletion of fossil fuels. Wind and solar energy resources are major sources of renewable energy that have the ability to reduce the energy crisis and the greenhouse gases emitted by conventional power plants. Reliability assessment is one of the key indicators to measure the impact of renewable distributed generation (DG) units in distribution networks and to minimize the cost associated with power outages. This paper presents a comprehensive reliability assessment of a distribution system that satisfies the consumer load requirements with the penetration of wind turbine generator (WTG), electric storage system (ESS) and photovoltaic (PV) units. A Markov model is proposed to assess the stochastic characteristics of the major components of the renewable DG resources as well as their influence on the reliability of a conventional distribution system. The results obtained from the case studies demonstrate the effectiveness of using WTG, ESS and PV to enhance the reliability of the conventional distribution system.
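
    The component Markov model is not reproduced in the abstract; as a minimal sketch of the kind of model involved, a two-state (up/down) continuous-time Markov chain yields the steady-state availability of a unit such as a WTG or PV converter. The failure and repair rates below are illustrative placeholders:

        import numpy as np

        # Two-state Markov component model: state 0 = up, state 1 = down.
        # lam = failure rate (1/h), mu = repair rate (1/h); illustrative values only.
        lam, mu = 1e-4, 1e-2

        # Generator matrix of the continuous-time Markov chain.
        Q = np.array([[-lam, lam],
                      [  mu, -mu]])

        # Steady-state probabilities satisfy pi @ Q = 0 with the entries summing to 1.
        A = np.vstack([Q.T, np.ones(2)])
        b = np.array([0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)

        print(f"steady-state availability: {pi[0]:.5f}")          # long-run fraction of time up
        print(f"closed form mu/(lam+mu):   {mu / (lam + mu):.5f}")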

  1. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps.

    Science.gov (United States)

    Varikuti, Deepthi P; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T; Eickhoff, Simon B

    2017-04-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that gray matter masking improved the reliability of connectivity estimates, whereas denoising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources.

  2. Resting-state test-retest reliability of a priori defined canonical networks over different preprocessing steps

    Science.gov (United States)

    Varikuti, Deepthi P.; Hoffstaedter, Felix; Genon, Sarah; Schwender, Holger; Reid, Andrew T.; Eickhoff, Simon B.

    2016-01-01

    Resting-state functional connectivity analysis has become a widely used method for the investigation of human brain connectivity and pathology. The measurement of neuronal activity by functional MRI, however, is impeded by various nuisance signals that reduce the stability of functional connectivity. Several methods exist to address this predicament, but little consensus has yet been reached on the most appropriate approach. Given the crucial importance of reliability for the development of clinical applications, we here investigated the effect of various confound removal approaches on the test-retest reliability of functional-connectivity estimates in two previously defined functional brain networks. Our results showed that grey matter masking improved the reliability of connectivity estimates, whereas de-noising based on principal components analysis reduced it. We additionally observed that refraining from using any correction for global signals provided the best test-retest reliability, but failed to reproduce anti-correlations between what have been previously described as antagonistic networks. This suggests that improved reliability can come at the expense of potentially poorer biological validity. Consistent with this, we observed that reliability was proportional to the retained variance, which presumably included structured noise, such as reliable nuisance signals (for instance, noise induced by cardiac processes). We conclude that compromises are necessary between maximizing test-retest reliability and removing variance that may be attributable to non-neuronal sources. PMID:27550015

  3. Self-consistent collective-coordinate method for ''maximally-decoupled'' collective subspace and its boson mapping: Quantum theory of ''maximally-decoupled'' collective motion

    International Nuclear Information System (INIS)

    Marumori, T.; Sakata, F.; Maskawa, T.; Une, T.; Hashimoto, Y.

    1983-01-01

    The main purpose of this paper is to develop a full quantum theory, which is capable by itself of determining a ''maximally-decoupled'' collective motion. The paper is divided into two parts. In the first part, the motivation and basic idea of the theory are explained, and the ''maximal-decoupling condition'' on the collective motion is formulated within the framework of the time-dependent Hartree-Fock theory, in a general form called the invariance principle of the (time-dependent) Schrodinger equation. In the second part, it is shown that when we positively utilize the invariance principle, we can construct a full quantum theory of the ''maximally-decoupled'' collective motion. This quantum theory is shown to be a generalization of the kinematical boson-mapping theories developed so far, in such a way that the dynamical ''maximal-decoupling condition'' on the collective motion is automatically satisfied.

  4. Maximal feeding with active prey-switching: A kill-the-winner functional response and its effect on global diversity and biogeography

    Science.gov (United States)

    Vallina, S. M.; Ward, B. A.; Dutkiewicz, S.; Follows, M. J.

    2014-01-01

    Predators' switching towards the most abundant prey is a mechanism that stabilizes population dynamics and helps overcome competitive exclusion of species in food webs. Current formulations of active prey-switching, however, display non-maximal feeding, in which the predators' total ingestion decays exponentially with the number of prey species (i.e. the diet breadth) even though the total prey biomass stays constant. We analyse three previously published multi-species functional responses which have either active switching or maximal feeding, but not both. We identify the cause of this apparent incompatibility and describe a kill-the-winner formulation that combines active switching with maximal feeding. Active switching is shown to be a community response in which some predators become prey-selective, and the formulations with maximal or non-maximal feeding are implicitly assuming different food web configurations. Global simulations using a marine ecosystem model with 64 phytoplankton species belonging to 4 major functional groups show that the species richness and biogeography of phytoplankton are very sensitive to the choice of the functional response for grazing. The phytoplankton biogeography reflects the balance between the competitive abilities for nutrient uptake and the degree of apparent competition which occurs indirectly between species that share a common predator species. The phytoplankton diversity significantly increases when active switching is combined with maximal feeding through predator-mediated coexistence.
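
    The abstract does not spell out the kill-the-winner formula. As a rough sketch under common assumptions, one way to combine maximal feeding (total ingestion depends only on total prey biomass) with active switching (grazing effort allocated in proportion to the square of each prey's biomass) is shown below; the functional forms and parameter values are illustrative, not the exact formulation of the paper:

        # Sketch of a grazing response that combines maximal feeding with active
        # prey-switching ("kill the winner"). Functional forms and parameters are
        # illustrative assumptions, not the published formulation.

        def grazing(prey, g_max=1.0, k_half=1.0):
            """Return per-prey ingestion rates (per unit predator biomass)."""
            total = sum(prey)
            if total == 0.0:
                return [0.0] * len(prey)
            # Maximal feeding: total ingestion saturates with *total* prey biomass,
            # independent of how many prey species share it.
            total_ingestion = g_max * total / (k_half + total)
            # Active switching: effort directed at prey i in proportion to P_i^2,
            # so abundant prey are grazed disproportionately hard.
            weights = [p * p for p in prey]
            wsum = sum(weights)
            return [total_ingestion * w / wsum for w in weights]

        # Adding more (equally abundant) prey species keeps total ingestion constant.
        for prey in ([2.0], [1.0, 1.0], [0.5, 0.5, 0.5, 0.5]):
            rates = grazing(prey)
            print(len(prey), "prey species -> total ingestion", round(sum(rates), 4))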

  5. Maximizing performance in supercritical fluid chromatography using low-density mobile phases.

    Science.gov (United States)

    Gritti, Fabrice; Fogwill, Michael; Gilar, Martin; Jarrell, Joseph A

    2016-10-14

    The performance of a 3.0 mm × 150 mm column packed with 1.8 μm fully porous HSS SB-C18 particles was investigated in supercritical fluid chromatography (SFC) with low-density, highly expansible carbon dioxide. These conditions are selected for the analysis of semi-volatile compounds. Elevated temperatures (>100 °C) were then combined with low column back pressures (<100 bar). In this work, the inlet temperature of pure carbon dioxide was set at 107 °C, the active back pressure regulator (ABPR) pressure was fixed at 100 bar, and the flow rate was set at 2.1 mL/min at 12 °C (liquefied carbon dioxide) and at an inlet column pressure close to 300 bar. Nine n-alkylbenzenes (from benzene to octadecylbenzene) were injected under linear (no sample overload) conditions. The severe steepness of the temperature gradients across the column diameter was predicted from a simplified heat transfer model. Such conditions dramatically lower the column performance by affecting the symmetry of the peak shape. In order to cope with this problem, three different approaches were experimentally tested: (1) decoupling and properly selecting the inlet eluent temperature with respect to the oven temperature, (2) partially insulating the column thermally using polyethylene aerogel, and (3) applying a high vacuum (10^-5 Torr, provided by a turbo-molecular pump) in a housing chamber surrounding the whole column body. The results reveal that (1) the column efficiency can be maximized by properly selecting the difference between the eluent and the oven temperatures, (2) merely wrapping the column with an excellent insulating material is insufficient to fully eliminate heat exchange by conduction and the undesirable radial density gradients across the column i.d., and (3) complete thermal insulation of the SFC column under high vacuum maximizes the column efficiency by maintaining the integrity of the peak shape.

  6. Major inter-personal variation in the increase and maximal level of 25-hydroxy vitamin D induced by UVB

    DEFF Research Database (Denmark)

    Datta, Pameli; Philipsen, Peter A.; Olsen, Peter

    2016-01-01

    Vitamin D influences skeletal health as well as other aspects of human health. Even when the most obvious sources of variation, such as solar UVB exposure, latitude, season, clothing habits, skin pigmentation and ethnicity, are selected for, variation in the serum 25-hydroxy vitamin D (25(OH)D) response to UVB remains extensive and unexplained. Our study assessed the inter-personal variation in the 25(OH)D response to UVR and the maximal obtainable 25(OH)D level in 22 healthy participants (220 samples) with similar skin pigmentation during winter with negligible ambient UVB. Participants received identical UVB doses on identical body areas until a maximal level of 25(OH)D was reached. Major inter-personal variation in both the maximal obtainable UVB-induced 25(OH)D level (range 85–216 nmol l⁻¹, mean 134 nmol l⁻¹) and the total increase in 25(OH)D (range 3–139 nmol l⁻¹, mean 48 nmol l⁻¹) was found.

  7. Optimisation or satiation, testing diet selection rules in goats

    NARCIS (Netherlands)

    Jansen, D.A.W.A.M.; Langevelde, van F.; Boer, de W.F.; Kirkman, K.P.

    2007-01-01

    Several hypotheses have been formulated to explain diet selection by herbivores, focusing on the maximization of nutrient intake, the minimization of plant secondary compounds, or the satiety hypothesis. This research aimed at studying diet selection revealing which chemical characteristics of

  8. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

    method based uncertainty and reliability analysis. The reliability of the scanning paths are established using cumulative probability distribution functions for process output criteria such as sample density, thermal homogeneity, etc. A customized genetic algorithm is used along with the simulation model...

  9. Demonstration of reliability centered maintenance

    International Nuclear Information System (INIS)

    Schwan, C.A.; Morgan, T.A.

    1991-04-01

    Reliability centered maintenance (RCM) is an approach to preventive maintenance planning and evaluation that has been used successfully by other industries, most notably the airlines and military. Now EPRI is demonstrating RCM in the commercial nuclear power industry. Just completed are large-scale, two-year demonstrations at Rochester Gas & Electric (Ginna Nuclear Power Station) and Southern California Edison (San Onofre Nuclear Generating Station). Both demonstrations were begun in the spring of 1988. At each plant, RCM was performed on 12 to 21 major systems. Both demonstrations determined that RCM is an appropriate means to optimize a PM program and improve nuclear plant preventive maintenance on a large scale. Such favorable results had been suggested by three earlier EPRI pilot studies at Florida Power & Light, Duke Power, and Southern California Edison. EPRI selected the Ginna and San Onofre sites because, together, they represent a broad range of utility and plant size, plant organization, plant age, and histories of availability and reliability. Significant steps in each demonstration included: selecting and prioritizing plant systems for RCM evaluation; performing the RCM evaluation steps on selected systems; evaluating the RCM recommendations by a multi-disciplinary task force; implementing the RCM recommendations; establishing a system to track and verify the RCM benefits; and establishing procedures to update the RCM bases and recommendations with time (a living program). 7 refs., 1 tab

  10. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book starts with the question of what reliability is, covering the origin of reliability problems, the definition of reliability, and the use of reliability. It also deals with probability and the calculation of reliability, the reliability function and failure rate, probability distributions of reliability, assumption of MTBF, process of probability distribution, down time, maintainability and availability, breakdown maintenance and preventive maintenance, design of reliability, design of reliability for prediction and statistics, reliability testing, reliability data, and the design and management of reliability.

  11. A COMPARISON STUDY OF DIFFERENT MARKER SELECTION METHODS FOR SPECTRAL-SPATIAL CLASSIFICATION OF HYPERSPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    D. Akbari

    2015-12-01

    An effective approach based on the Minimum Spanning Forest (MSF), grown from markers automatically selected using Support Vector Machines (SVM), has been proposed for spectral-spatial classification of hyperspectral images by Tarabalka et al. This paper aims at improving this approach by using image segmentation to integrate spatial information into the marker selection process. In this study, the markers are extracted from the classification maps obtained by both SVM and segmentation algorithms, and are then used to build the MSF. The segmentation algorithms are watershed, expectation maximization (EM) and hierarchical clustering. These algorithms are used in parallel and independently to segment the image. Moreover, for each region of the segmentation map, the pixels of the class with the largest population in the classification map are kept. Lastly, the most reliable classified pixels are chosen from among the existing pixels as markers. Two benchmark urban hyperspectral datasets are used for evaluation: Washington DC Mall and Berlin. The results of our experiments indicate that, compared to the original MSF approach, marker selection using segmentation algorithms leads to more accurate classification maps.

  12. Reliability evaluation of a natural circulation system

    International Nuclear Information System (INIS)

    Jafari, Jalil; D'Auria, Francesco; Kazeminejad, Hossein; Davilu, Hadi

    2003-01-01

    This paper discusses a reliability study performed with reference to a passive thermohydraulic natural circulation (NC) system, named TTL-1. A methodology based on probabilistic techniques has been applied with the main purpose of optimizing the system design. The obtained results have been adopted to estimate the thermal-hydraulic reliability (TH-R) of the same system. A total of 29 relevant parameters (including nominal values and plausible ranges of variation) affecting the design and the NC performance of the TTL-1 loop are identified, and a probability of occurrence is assigned to each value based on expert judgment. Following procedures established for the uncertainty evaluation of thermal-hydraulic system code results, 137 system configurations have been selected and each configuration has been analyzed via the Relap5 best-estimate code. The reference system configuration and the failure criteria derived from the 'mission' of the passive system are adopted for the evaluation of the system TH-R. Four different definitions of a less-than-unity 'reliability value' (where unity represents the maximum achievable reliability) are proposed for the performance of the selected passive system. Such a system is normally considered fully reliable, i.e. a reliability value equal to one, in typical Probabilistic Safety Assessment (PSA) applications in nuclear reactor safety. The two 'point' TH-R values for the considered NC system were found to be 0.70 and 0.85, i.e. values comparable with the reliability of a pump installed in an 'equivalent' forced circulation (active) system having the same 'mission'. The design optimization study was completed by a regression analysis addressing the output of the 137 calculations: heat losses, undetected leakage, loop length, riser diameter, and equivalent diameter of the test section were found to be the most important parameters leading to the optimal system design and affecting the TH-R. As added values for this work, the comparison has
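
    The computational details are not given in the abstract; as a minimal sketch of the general idea, a thermal-hydraulic reliability value can be estimated by sampling uncertain design and operating parameters, evaluating (or emulating) the system model for each configuration, and taking the fraction of configurations that satisfy the mission criterion. The parameter ranges and the surrogate success criterion below are purely illustrative, not the Relap5 analyses of the study:

        import random

        random.seed(0)

        def sample_configuration():
            # Illustrative ranges for a few of the uncertain design parameters.
            return {
                "heat_loss_frac": random.uniform(0.00, 0.15),   # fraction of power lost
                "loop_length_m":  random.uniform(8.0, 14.0),
                "riser_diam_m":   random.uniform(0.05, 0.12),
            }

        def mission_success(cfg):
            # Toy surrogate: a buoyancy-driven flow index must exceed a threshold.
            flow_index = (1.0 - cfg["heat_loss_frac"]) * cfg["riser_diam_m"] / cfg["loop_length_m"] ** 0.5
            return flow_index > 0.020

        n = 10_000
        successes = sum(mission_success(sample_configuration()) for _ in range(n))
        print(f"estimated TH reliability value: {successes / n:.3f}")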

  13. Opportunistic relaying in multipath and slow fading channel: Relay selection and optimal relay selection period

    KAUST Repository

    Sungjoon Park

    2011-11-01

    In this paper we present opportunistic relay communication strategies of decode and forward relaying. The channel that we are considering includes pathloss, shadowing, and fast fading effects. We find a simple outage probability formula for opportunistic relaying in the channel, and validate the results by comparing it with the exact outage probability. Also, we suggest a new relay selection algorithm that incorporates shadowing. We consider a protocol of broadcasting the channel gain of the previously selected relay. This saves resources in slow fading channel by reducing collisions in relay selection. We further investigate the optimal relay selection period to maximize the throughput while avoiding selection overhead. © 2011 IEEE.

  14. Analysis of operating reliability of WWER-1000 unit

    International Nuclear Information System (INIS)

    Bortlik, J.

    1985-01-01

    The nuclear power unit was divided into 33 technological units. Input data for the reliability analysis were surveys of operating results obtained from the IAEA information system and certain reliability indexes of technological equipment determined using the Bayes formula. The missing reliability data for technological equipment were taken from the basic variant. The fault tree of the WWER-1000 unit was determined for the peak event, defined as the impossibility of reaching 100%, 75% or 50% of rated power. The periods of nuclear power plant operation with reduced output owing to defects, and the respective times needed to repair the equipment, were observed. The calculation of the availability of the WWER-1000 unit was made for different variant situations. Certain operating reliability indexes of the WWER-1000 unit, which are the result of a detailed reliability analysis, are tabulated for selected variants. (E.S.)

  15. Human reliability assessment and probabilistic risk assessment

    International Nuclear Information System (INIS)

    Embrey, D.E.; Lucas, D.A.

    1989-01-01

    Human reliability assessment (HRA) is used within Probabilistic Risk Assessment (PRA) to identify the human errors (both of omission and commission) which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. A variety of HRA techniques exist and the selection of an appropriate one is often difficult. This paper reviews a number of available HRA techniques and discusses their strengths and weaknesses. The techniques reviewed include: decompositional methods, time-reliability curves and systematic expert judgement techniques. (orig.)

  16. Learning curves for mutual information maximization

    International Nuclear Information System (INIS)

    Urbanczik, R.

    2003-01-01

    An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed [S. Becker and G. Hinton, Nature (London) 355, 161 (1992)]. For a generic data model, I show that in the large sample limit the structure in the data is recognized by mutual information maximization. For a more restricted model, where the networks are similar to perceptrons, I calculate the learning curves for zero-temperature Gibbs learning. These show that convergence can be rather slow, and a way of regularizing the procedure is considered

  17. Breakdown of maximality conjecture in continuous phase transitions

    International Nuclear Information System (INIS)

    Mukamel, D.; Jaric, M.V.

    1983-04-01

    A Landau-Ginzburg-Wilson model associated with a single irreducible representation is constructed which exhibits an ordered phase whose symmetry group is not a maximal isotropy subgroup of the symmetry group of the disordered phase. This example disproves the maximality conjecture suggested in numerous previous studies. Below the (continuous) transition, the order parameter points along a direction which varies with the temperature and with the other parameters which define the model. An extension of the maximality conjecture to reducible representations was postulated in the context of the Higgs symmetry-breaking mechanism. Our model can also be extended to provide a counterexample in these cases. (author)

  18. Maximizers versus satisficers: Decision-making styles, competence, and outcomes

    OpenAIRE

    Andrew M. Parker; Wändi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...

  19. IMNN: Information Maximizing Neural Networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). As compressing large data sets vastly simplifies both frequentist and Bayesian inference, important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  20. Neutrino mass textures with maximal CP violation

    International Nuclear Information System (INIS)

    Aizawa, Ichiro; Kitabayashi, Teruyuki; Yasue, Masaki

    2005-01-01

    We show three types of neutrino mass textures, which give maximal CP violation as well as maximal atmospheric neutrino mixing. These textures are described by six real mass parameters: one specified by two complex flavor neutrino masses and two constrained ones and the others specified by three complex flavor neutrino masses. In each texture, we calculate mixing angles and masses, which are consistent with observed data, as well as Majorana CP phases

  1. Comparison of changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction

    OpenAIRE

    Jung, Halim; Jung, Sangwoo; Joo, Sunghee; Song, Changho

    2016-01-01

    [Purpose] The purpose of this study was to compare changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. [Subjects] Thirty healthy adults participated in this study (15 men and 15 women). [Methods] All participants performed a bridge exercise and abdominal curl-up during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. Pelvic floor mobility...

  2. Validation and selection of ODE based systems biology models: how to arrive at more reliable decisions.

    Science.gov (United States)

    Hasdemir, Dicle; Hoefsloot, Huub C J; Smilde, Age K

    2015-07-08

    Most ordinary differential equation (ODE) based modeling studies in systems biology involve a hold-out validation step for model validation. In this framework a pre-determined part of the data is used as validation data and is therefore not used for estimating the parameters of the model. The model is assumed to be validated if the model predictions on the validation dataset show good agreement with the data. Model selection between alternative model structures can also be performed in the same setting, based on the predictive power of the model structures on the validation dataset. However, the drawbacks associated with this approach are usually underestimated. We have carried out simulations using a recently published High Osmolarity Glycerol (HOG) pathway model from S. cerevisiae to demonstrate these drawbacks. We have shown that it is very important how the data is partitioned and which part of the data is used for validation purposes. The hold-out validation strategy leads to biased conclusions, since it can lead to different validation and selection decisions when different partitioning schemes are used. Furthermore, finding sensible partitioning schemes that would lead to reliable decisions is heavily dependent on the biology and on unknown model parameters, which turns the problem into a paradox. This brings the need for alternative validation approaches that offer flexible partitioning of the data. For this purpose, we have introduced a stratified random cross-validation (SRCV) approach that successfully overcomes these limitations. SRCV leads to more stable decisions for both validation and selection which are not biased by underlying biological phenomena. Furthermore, it is less dependent on the specific noise realization in the data. Therefore, it proves to be a promising alternative to the standard hold-out validation strategy.
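
    The exact SRCV partitioning scheme is not described in the abstract; the sketch below only illustrates the general idea of stratified, randomized folds for time-course data, where strata are consecutive time blocks so that every fold samples the whole time range. It is an assumed illustration, not necessarily the authors' procedure:

        import random

        def stratified_folds(time_points, n_folds=4, seed=0):
            """Assign time-point indices to folds so each fold spans all strata."""
            rng = random.Random(seed)
            idx = list(range(len(time_points)))
            # Strata: consecutive blocks of n_folds time points each.
            strata = [idx[i:i + n_folds] for i in range(0, len(idx), n_folds)]
            folds = [[] for _ in range(n_folds)]
            for block in strata:
                rng.shuffle(block)
                for k, i in enumerate(block):
                    folds[k % n_folds].append(i)
            return folds

        times = list(range(20))          # e.g. 20 measurement times of one observable
        for k, fold in enumerate(stratified_folds(times)):
            print(f"fold {k}: validation time indices {sorted(fold)}")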

  3. Reliability optimization of series-parallel systems with a choice of redundancy strategies using a genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Tavakkoli-Moghaddam, R. [Department of Industrial Engineering, Faculty of Engineering, University of Tehran, P.O. Box 11365/4563, Tehran (Iran, Islamic Republic of); Department of Mechanical Engineering, The University of British Columbia, Vancouver (Canada)], E-mail: tavakoli@ut.ac.ir; Safari, J. [Department of Industrial Engineering, Science and Research Branch, Islamic Azad University, Tehran (Iran, Islamic Republic of)], E-mail: jalalsafari@pideco.com; Sassani, F. [Department of Mechanical Engineering, The University of British Columbia, Vancouver (Canada)], E-mail: sassani@mech.ubc.ca

    2008-04-15

    This paper proposes a genetic algorithm (GA) for a redundancy allocation problem for the series-parallel system when the redundancy strategy can be chosen for individual subsystems. The majority of the solution methods for general redundancy allocation problems assume that the redundancy strategy for each subsystem is predetermined and fixed. In general, active redundancy has received more attention in the past. However, in practice both active and cold-standby redundancies may be used within a particular system design, and the choice of the redundancy strategy becomes an additional decision variable. Thus, the problem is to select the best redundancy strategy, component, and redundancy level for each subsystem in order to maximize the system reliability under system-level constraints. This belongs to the NP-hard class of problems. Due to its complexity, it is difficult to solve such a problem optimally using traditional optimization tools. It is demonstrated in this paper that GA is an efficient method for solving this type of problems. Finally, computational results for a typical scenario are presented and the robustness of the proposed algorithm is discussed.
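
    As a minimal sketch (not the authors' GA), a chromosome can encode, for each subsystem, a redundancy level and a strategy (active or cold-standby with perfect switching), with fitness equal to the series-system reliability of feasible designs. Failure rates, costs, budget and GA settings below are illustrative assumptions:

        import math
        import random

        T = 100.0                                  # mission time
        subsystems = [                             # (failure rate, unit cost); illustrative
            (0.004, 3.0), (0.002, 5.0), (0.006, 2.0)
        ]
        BUDGET = 40.0

        def subsystem_reliability(lam, n, strategy):
            p = math.exp(-lam * T)                 # single-component reliability
            if strategy == "active":
                return 1.0 - (1.0 - p) ** n        # hot parallel redundancy
            # Cold standby with perfect switching: Erlang survival (Poisson sum).
            return p * sum((lam * T) ** k / math.factorial(k) for k in range(n))

        def fitness(chrom):
            cost = sum(n * c for (lam, c), (n, s) in zip(subsystems, chrom))
            if cost > BUDGET:
                return 0.0                         # infeasible designs are discarded
            r = 1.0
            for (lam, c), (n, s) in zip(subsystems, chrom):
                r *= subsystem_reliability(lam, n, s)
            return r

        def random_chrom():
            return [(random.randint(1, 4), random.choice(["active", "standby"]))
                    for _ in subsystems]

        def mutate(chrom):
            out = []
            for n, s in chrom:
                if random.random() < 0.3:
                    n = max(1, min(4, n + random.choice([-1, 1])))
                if random.random() < 0.3:
                    s = "active" if s == "standby" else "standby"
                out.append((n, s))
            return out

        random.seed(1)
        pop = [random_chrom() for _ in range(40)]
        for _ in range(100):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:10]                     # simple truncation selection
            pop = parents + [mutate(random.choice(parents)) for _ in range(30)]

        best = max(pop, key=fitness)
        print("best design:", best, "reliability:", round(fitness(best), 5))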

  4. Reliability optimization of series-parallel systems with a choice of redundancy strategies using a genetic algorithm

    International Nuclear Information System (INIS)

    Tavakkoli-Moghaddam, R.; Safari, J.; Sassani, F.

    2008-01-01

    This paper proposes a genetic algorithm (GA) for a redundancy allocation problem for the series-parallel system when the redundancy strategy can be chosen for individual subsystems. The majority of the solution methods for general redundancy allocation problems assume that the redundancy strategy for each subsystem is predetermined and fixed. In general, active redundancy has received more attention in the past. However, in practice both active and cold-standby redundancies may be used within a particular system design, and the choice of the redundancy strategy becomes an additional decision variable. Thus, the problem is to select the best redundancy strategy, component, and redundancy level for each subsystem in order to maximize the system reliability under system-level constraints. This belongs to the NP-hard class of problems. Due to its complexity, it is difficult to solve such a problem optimally using traditional optimization tools. It is demonstrated in this paper that GA is an efficient method for solving this type of problems. Finally, computational results for a typical scenario are presented and the robustness of the proposed algorithm is discussed.

  5. Reliability and responsiveness of dynamic contrast-enhanced magnetic resonance imaging in rheumatoid arthritis

    DEFF Research Database (Denmark)

    Axelsen, M.B.; Poggenborg, R.P.; Stoltenberg, M.

    2013-01-01

    Objectives: To investigate the responsiveness to treatment and the reliability of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in rheumatoid arthritis (RA) knee joints. Methods: DCE-MRI was performed in 12 clinically active RA knee joints before and 1, 7, 30, and 180 days after intraarticular injection with 80 mg methylprednisolone. Using semi-automated image processing software, DCE-MRI parameters, including the initial rate of enhancement (IRE) and maximal enhancement (ME), were generated for three regions of interest (ROIs): ‘Whole slice’, ‘Quick ROI’, and ‘Precise ROI’. The smallest detectable difference (SDD), the smallest detectable change (SDC), and intra- and inter-reader intraclass correlation coefficients (ICCs) were used to assess the reliability of DCE-MRI. Responsiveness to treatment was assessed by the standardized response mean (SRM). Results: In all patients...

  6. Phenolic compounds from Glycyrrhiza pallidiflora Maxim. and their cytotoxic activity.

    Science.gov (United States)

    Shults, Elvira E; Shakirov, Makhmut M; Pokrovsky, Mikhail A; Petrova, Tatijana N; Pokrovsky, Andrey G; Gorovoy, Petr G

    2017-02-01

    Twenty-one phenolic compounds (1-21), including dihydrocinnamic acid, isoflavonoids, flavonoids, coumestans, pterocarpans, chalcones, an isoflavan and an isoflaven, were isolated from the roots of Glycyrrhiza pallidiflora Maxim. Phloretinic acid (1), chrysin (6), 9-methoxycoumestan (8), isoglycyrol (9), 6″-O-acetylanonin (19) and 6″-O-acetylwistin (21) were isolated from G. pallidiflora for the first time. Isoflavonoid acetylglycosides 19 and 21 might be artefacts produced during the EtOAc fractionation of the whole extract. Compounds 2-4, 10, 11, 19 and 21 were evaluated for their cytotoxic activity against model cancer cell lines (CEM-13, MT-4, U-937) using conventional MTT assays. The isoflavonoid calycosin (4) showed the best potency against human T-cell leukaemia MT-4 cells (CTD₅₀ 2.9 μM). The pterocarpans medicarpin (10) and homopterocarpin (11) exhibit anticancer activity in the micromolar range, with selectivity towards the human monocyte cell line U-937. The isoflavan (3R)-vestitol (16) was highly selective towards the lymphoblastoid leukaemia cell line CEM-13 and was more active than the drug doxorubicin.

  7. Maximally-localized position, Euclidean path-integral, and thermodynamics in GUP quantum mechanics

    Science.gov (United States)

    Bernardo, Reginald Christian S.; Esguerra, Jose Perico H.

    2018-04-01

    In dealing with quantum mechanics at very high energies, it is essential to adapt to a quasiposition representation using the maximally-localized states because of the generalized uncertainty principle. In this paper, we look at maximally-localized states as eigenstates of the operator ξ = X + iβP, which we refer to as the maximally-localized position. We calculate the overlap between maximally-localized states and show that the identity operator can be expressed in terms of the maximally-localized states. Furthermore, we show that the maximally-localized position is diagonal in momentum space and that the maximally-localized position and its adjoint satisfy commutation and anti-commutation relations reminiscent of the harmonic oscillator commutation and anti-commutation relations. As an application, we use the maximally-localized position in developing the Euclidean path-integral and introduce the compact form of the propagator for maximal localization. The free-particle momentum-space propagator and the propagator for maximal localization are analytically evaluated up to quadratic order in β. Finally, we obtain a path-integral expression for the partition function of a thermodynamic system using the maximally-localized states. The partition function of a gas of noninteracting particles is evaluated. At temperatures exceeding the Planck energy, we obtain the gas's maximum internal energy N/(2β) and recover the zero heat capacity of an ideal gas.

  8. Why firms should not always maximize profits

    OpenAIRE

    Kolstad, Ivar

    2006-01-01

    Though corporate social responsibility (CSR) is on the agenda of most major corporations, corporate executives still largely support the view that corporations should maximize the returns to their owners. There are two lines of defence for this position. One is the Friedmanian view that maximizing owner returns is the corporate social responsibility of corporations. The other is a position voiced by many executives, that CSR and profits go together. This paper argues that the first position i...

  9. Reliability of the Superimposed-Burst Technique in Patients With Patellofemoral Pain: A Technical Report.

    Science.gov (United States)

    Norte, Grant E; Frye, Jamie L; Hart, Joseph M

    2015-11-01

    The superimposed-burst (SIB) technique is commonly used to quantify central activation failure after knee-joint injury, but its reliability has not been established in pathologic cohorts. To assess within-session and between-sessions reliability of the SIB technique in patients with patellofemoral pain. Descriptive laboratory study. University laboratory. A total of 10 patients with self-reported patellofemoral pain (1 man, 9 women; age = 24.1 ± 3.8 years, height = 167.8 ± 15.2 cm, mass = 71.6 ± 17.5 kg) and 10 healthy control participants (3 men, 7 women; age = 27.4 ± 5.0 years, height = 173.5 ± 9.9 cm, mass = 78.2 ± 16.5 kg) volunteered. Participants were assessed at 6 intervals spanning 21 days. Intraclass correlation coefficients (ICCs [3,3]) were used to assess reliability. Quadriceps central activation ratio, knee-extension maximal voluntary isometric contraction force, and SIB force. The quadriceps central activation ratio was highly reliable within session (ICC [3,3] = 0.97) and between sessions through day 21 (ICC [3,3] = 0.90-0.95). Acceptable reliability of knee extension (ICC [3,3] = 0.75-0.91) and SIB force (ICC [3,3] = 0.77-0.89) was observed through day 21. The SIB technique was reliable for clinical research up to 21 days in patients with patellofemoral pain.
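
    The abstract does not restate the central activation ratio formula; in the SIB technique it is commonly computed as the maximal voluntary force divided by the (larger) force produced while the electrical burst is superimposed on that contraction. A minimal sketch with illustrative force values, not data from the study:

        def central_activation_ratio(mvic_force, sib_force):
            """CAR = voluntary force / force with superimposed burst (<= 1.0).
            Values close to 1.0 indicate little central activation failure."""
            return mvic_force / sib_force

        # Illustrative knee-extension forces in newtons.
        mvic = 520.0           # maximal voluntary isometric contraction
        sib = 565.0            # force during the superimposed electrical burst
        print(f"CAR = {central_activation_ratio(mvic, sib):.3f}")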

  10. Quality and reliability of technical systems. 2. rev. and enlarged ed.

    International Nuclear Information System (INIS)

    Birolini, A.

    1988-01-01

    The work comprises, besides the definition of fundamentals, mathematical methods and tables, a detailed compilation of theory, practice and management in the field of quality assurance and reliability. Complete chapters are dedicated in particular to reliability analyses, the selection and qualification of electronic components, maintenance analyses in the development phase, quality assurance of software, reliability and availability of repairable visual display units, statistical quality control, and the improvement of quality and reliability in the production phase of electronic components. (DG) With 152 figs., 58 tabs., 92 examples [de]

  11. Maximizing band gaps in plate structures

    DEFF Research Database (Denmark)

    Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard

    2006-01-01

    Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated theoretically and experimentally and the issue of finite size effects is addressed.

  12. Finding Maximal Pairs with Bounded Gap

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Lyngsø, Rune B.; Pedersen, Christian N. S.

    1999-01-01

    In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper and lower bounded interval in time O(n log n + z), where z is the number of reported pairs. If the upper bound is removed, the time reduces to O(n + z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well-known methods for finding tandem repeats.

  13. The key kinematic determinants of undulatory underwater swimming at maximal velocity.

    Science.gov (United States)

    Connaboy, Chris; Naemi, Roozbeh; Brown, Susan; Psycharakis, Stelios; McCabe, Carla; Coleman, Simon; Sanders, Ross

    2016-01-01

    The optimisation of undulatory underwater swimming is highly important in competitive swimming performance. Nineteen kinematic variables were identified from previous research undertaken to assess undulatory underwater swimming performance. The purpose of the present study was to determine which kinematic variables were key to the production of maximal undulatory underwater swimming velocity. Kinematic data at maximal undulatory underwater swimming velocity were collected from 17 skilled swimmers. A series of separate backward-elimination analysis of covariance models was produced with cycle frequency and cycle length as dependent variables (DVs) and participant as a fixed factor, as including cycle frequency and cycle length would explain 100% of the maximal swimming velocity variance. The covariates identified in the cycle-frequency and cycle-length models were used to form the saturated model for maximal swimming velocity. The final parsimonious model identified three covariates (maximal knee joint angular velocity, maximal ankle angular velocity and knee range of movement) as determinants of the variance in maximal swimming velocity (adjusted r² = 0.929). However, when participant was removed as a fixed factor there was a large reduction in explained variance (adjusted r² = 0.397) and only maximal knee joint angular velocity continued to contribute significantly, highlighting its importance to the production of maximal swimming velocity. The reduction in explained variance suggests an emphasis on inter-individual differences in undulatory underwater swimming technique and/or anthropometry. Future research should examine the efficacy of other anthropometric, kinematic and coordination variables to better understand the production of maximal swimming velocity and consider the importance of individual undulatory underwater swimming techniques when interpreting the data.

  14. Kinetic theory in maximal-acceleration invariant phase space

    International Nuclear Information System (INIS)

    Brandt, H.E.

    1989-01-01

    A vanishing directional derivative of a scalar field along particle trajectories in maximal acceleration invariant phase space is identical in form to the ordinary covariant Vlasov equation in curved spacetime in the presence of both gravitational and nongravitational forces. A natural foundation is thereby provided for a covariant kinetic theory of particles in maximal-acceleration invariant phase space. (orig.)

  15. Half-maximal supersymmetry from exceptional field theory

    Energy Technology Data Exchange (ETDEWEB)

    Malek, Emanuel [Arnold Sommerfeld Center for Theoretical Physics, Department fuer Physik, Ludwig-Maximilians-Universitaet Muenchen (Germany)

    2017-10-15

    We study D ≥ 4-dimensional half-maximal flux backgrounds using exceptional field theory. We define the relevant generalised structures and also find the integrability conditions which give warped half-maximal Minkowski_D and AdS_D vacua. We then show how to obtain consistent truncations of type II / 11-dimensional SUGRA which break half the supersymmetry. Such truncations can be defined on backgrounds admitting exceptional generalised SO(d - 1 - N) structures, where d = 11 - D, and N is the number of vector multiplets obtained in the lower-dimensional theory. Our procedure yields the most general embedding tensors satisfying the linear constraint of half-maximal gauged SUGRA. We use this to prove that all D ≥ 4 half-maximal warped AdS_D and Minkowski_D vacua of type II / 11-dimensional SUGRA admit a consistent truncation keeping only the gravitational supermultiplet. We also show how to obtain heterotic double field theory from exceptional field theory and comment on the M-theory / heterotic duality. In five dimensions, we find a new SO(5, N) double field theory with a (6 + N)-dimensional extended space. Its section condition has one solution corresponding to 10-dimensional N = 1 supergravity and another yielding six-dimensional N = (2, 0) SUGRA.

  16. Decision analysis for conservation breeding: Maximizing production for reintroduction of whooping cranes

    Science.gov (United States)

    Smith, Des H.V.; Converse, Sarah J.; Gibson, Keith; Moehrenschlager, Axel; Link, William A.; Olsen, Glenn H.; Maguire, Kelly

    2011-01-01

    Captive breeding is key to management of severely endangered species, but maximizing captive production can be challenging because of poor knowledge of species breeding biology and the complexity of evaluating different management options. In the face of uncertainty and complexity, decision-analytic approaches can be used to identify optimal management options for maximizing captive production. Building decision-analytic models requires iterations of model conception, data analysis, model building and evaluation, identification of remaining uncertainty, further research and monitoring to reduce uncertainty, and integration of new data into the model. We initiated such a process to maximize captive production of the whooping crane (Grus americana), the world's most endangered crane, which is managed through captive breeding and reintroduction. We collected 15 years of captive breeding data from 3 institutions and used Bayesian analysis and model selection to identify predictors of whooping crane hatching success. The strongest predictor, and that with clear management relevance, was incubation environment. The incubation period of whooping crane eggs is split across two environments: crane nests and artificial incubators. Although artificial incubators are useful for allowing breeding pairs to produce multiple clutches, our results indicate that crane incubation is most effective at promoting hatching success. Hatching probability increased the longer an egg spent in a crane nest, from 40% hatching probability for eggs receiving 1 day of crane incubation to 95% for those receiving 30 days (time incubated in each environment varied independently of total incubation period). Because birds will lay fewer eggs when they are incubating longer, a tradeoff exists between the number of clutches produced and egg hatching probability. We developed a decision-analytic model that estimated 16 to be the optimal number of days of crane incubation needed to maximize the number of

  17. Dynamic reliability assessment and prediction for repairable systems with interval-censored data

    International Nuclear Information System (INIS)

    Peng, Yizhen; Wang, Yu; Zi, YanYang; Tsui, Kwok-Leung; Zhang, Chuhua

    2017-01-01

    The ‘Test, Analyze and Fix’ process is widely applied to improve the reliability of a repairable system. In this process, dynamic reliability assessment for the system has received a great deal of attention. Due to instrument malfunctions, staff omissions and imperfect inspection strategies, field reliability data are often subject to interval censoring, which makes dynamic reliability assessment a difficult task. Most traditional methods treat this kind of data as multiple normally distributed variables or assume the missing mechanism is missing-at-random, which may cause a large bias in parameter estimation. This paper proposes a novel method to evaluate and predict the dynamic reliability of a repairable system subject to interval censoring. First, a multiple imputation strategy based on the assumption that the reliability growth trend follows a nonhomogeneous Poisson process is developed to derive the distributions of missing data. Second, a new order statistic model that can transform the dependent variables into independent variables is developed to simplify the imputation procedure. The unknown parameters of the model are iteratively inferred by the Monte Carlo expectation maximization (MCEM) algorithm. Finally, to verify the effectiveness of the proposed method, a simulation and a real case study of a gas pipeline compressor system are implemented. - Highlights: • A new multiple imputation strategy was developed to derive the PDF of missing data. • A new order statistic model was developed to simplify the imputation procedure. • The parameters of the order statistic model were iteratively inferred by MCEM. • A real case study was conducted to verify the effectiveness of the proposed method.
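    As a side illustration of the working assumption named above (reliability growth following a nonhomogeneous Poisson process), the sketch below simulates failure times from a power-law NHPP by inverting its mean function. The parameter values are arbitrary, and this is not the paper's multiple-imputation or MCEM procedure.

```python
import numpy as np

# Minimal sketch: simulate failure times of a repairable system whose reliability
# growth follows a power-law (Crow-AMSAA) nonhomogeneous Poisson process.
# Parameter values are arbitrary; this is not the paper's MCEM procedure.

rng = np.random.default_rng(0)

def simulate_nhpp_power_law(beta, lam, t_end):
    """Draw failure times on (0, t_end] with intensity u(t) = lam * beta * t**(beta - 1).

    Inversion method: if N(t) is an NHPP with mean function m(t) = lam * t**beta,
    then m(T_i) are the arrival times of a unit-rate homogeneous Poisson process.
    """
    times, m = [], 0.0
    while True:
        m += rng.exponential(1.0)          # next arrival of the unit-rate process
        t = (m / lam) ** (1.0 / beta)      # invert the mean function
        if t > t_end:
            return np.array(times)
        times.append(t)

# beta < 1 means the failure intensity decreases over time (reliability growth)
failures = simulate_nhpp_power_law(beta=0.6, lam=2.0, t_end=1000.0)
print(f"{failures.size} failures; last at t = {failures[-1]:.1f}")
```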

  18. Maximal slicing of D-dimensional spherically symmetric vacuum spacetime

    International Nuclear Information System (INIS)

    Nakao, Ken-ichi; Abe, Hiroyuki; Yoshino, Hirotaka; Shibata, Masaru

    2009-01-01

    We study the foliation of a D-dimensional spherically symmetric black-hole spacetime with D≥5 by two kinds of one-parameter families of maximal hypersurfaces: a reflection-symmetric foliation with respect to the wormhole slot and a stationary foliation that has an infinitely long trumpetlike shape. As in the four-dimensional case, the foliations by the maximal hypersurfaces avoid the singularity irrespective of the dimensionality. This indicates that the maximal slicing condition will be useful for simulating higher-dimensional black-hole spacetimes in numerical relativity. For the case of D=5, we present analytic solutions of the intrinsic metric, the extrinsic curvature, the lapse function, and the shift vector for the foliation by the stationary maximal hypersurfaces. These data will be useful for checking five-dimensional numerical-relativity codes based on the moving puncture approach.

  19. Left ventricle expands maximally preceding end-diastole. Radionuclide ventriculography study

    International Nuclear Information System (INIS)

    Horinouchi, Osamu

    2002-01-01

    It has been considered that the left ventricle (LV) expands maximally at end-diastole. However, is this exactly the case? This study aimed to determine whether the maximal expansion of the LV coincides with the peak of the R wave on the electrocardiogram. Thirty-three angina pectoris patients with normal LV motion were examined using radionuclide ventriculography. Data were obtained from every 30 ms backward frame from the peak of the R wave. In all patients the time of maximal expansion preceded the peak of the R wave. The intervals from the peak of the R wave and from the onset of the P wave to maximal expansion of the LV were 105±29 ms and 88±25 ms, respectively. This period corresponds to the timing of maximal excursion of the mitral valve caused by atrial contraction, and the centripetal motion of the LV without loss of volume before end-diastole may be explained by the movement of the mitral valve toward closure. These findings suggest that the LV expands maximally between the P and R waves, after atrial contraction and preceding the peak of the R wave that is conventionally regarded as end-diastole. (author)

  20. Basics of Bayesian reliability estimation from attribute test data

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Waller, R.A.

    1975-10-01

    The basic notions of Bayesian reliability estimation from attribute lifetest data are presented in an introductory and expository manner. Both Bayesian point and interval estimates of the probability of surviving the lifetest, the reliability, are discussed. The necessary formulas are simply stated, and examples are given to illustrate their use. In particular, a binomial model in conjunction with a beta prior model is considered. Particular attention is given to the procedure for selecting an appropriate prior model in practice. Empirical Bayes point and interval estimates of reliability are discussed and examples are given. 7 figures, 2 tables
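    A minimal sketch of the binomial-likelihood, beta-prior setup described above: the posterior over the survival probability is again a beta distribution, from which a Bayesian point estimate and credible interval follow directly. The prior parameters and test outcome below are invented for illustration.

```python
from scipy import stats

# Minimal sketch of the beta-binomial model described above: a binomial attribute
# lifetest with a Beta prior on the survival probability (reliability).
# Prior parameters and test outcome are made-up numbers for illustration.

a0, b0 = 2.0, 1.0          # Beta(a0, b0) prior on reliability R
n, survivors = 20, 18      # attribute lifetest: n units tested, 18 survived

a_post, b_post = a0 + survivors, b0 + (n - survivors)    # conjugate update
posterior = stats.beta(a_post, b_post)

point_estimate = posterior.mean()                        # Bayesian point estimate
lo, hi = posterior.interval(0.90)                        # 90% credible interval
print(f"R ~ {point_estimate:.3f}, 90% interval ({lo:.3f}, {hi:.3f})")
```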

  1. Shoulder muscle endurance: the development of a standardized and reliable protocol

    Directory of Open Access Journals (Sweden)

    Roy Jean-Sébastien

    2011-01-01

    Full Text Available Abstract Background Shoulder muscle fatigue has been proposed as a possible link to explain the association between repetitive arm use and the development of rotator cuff disorders. To our knowledge, no standardized clinical endurance protocol has been developed to evaluate the effects of muscle fatigue on shoulder function. Such a test could improve clinical examination of individuals with shoulder disorders. Therefore, the purpose of this study was to establish a reliable protocol for objective assessment of shoulder muscle endurance. Methods An endurance protocol was developed on a stationary dynamometer (Biodex System 3). The endurance protocol was performed in isotonic mode with the resistance set at 50% of each subject's peak torque as measured for shoulder external (ER) and internal rotation (IR). Each subject performed 60 continuous repetitions of IR/ER rotation. The endurance protocol was performed by 36 healthy individuals on two separate occasions at least two days apart. Maximal isometric shoulder strength tests were performed before and after the fatigue protocol to evaluate the effects of the endurance protocol and its reliability. Paired t-tests were used to evaluate the reduction in shoulder strength due to the protocol, while intraclass correlation coefficients (ICC) and minimal detectable change (MDC) were used to evaluate its reliability. Results Maximal isometric strength was significantly decreased after the endurance protocol (P < 0.05), with good test–retest reliability (ICC > 0.84). Conclusions Changes in muscular performance observed during and after the muscular endurance protocol suggest that the protocol did result in muscular fatigue. Furthermore, this study established that the resultant effects of fatigue of the proposed isotonic protocol were reproducible over time. The protocol was performed without difficulty by all volunteers and took less than 10 minutes to perform, suggesting that it might be feasible for clinical practice. This protocol could be used to induce
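    For readers who want to reproduce the reliability statistics named above, the sketch below computes a two-way random-effects ICC(2,1), the standard error of measurement, and the minimal detectable change for a generic two-session test-retest design. The score matrix is fabricated; it is not the study's data.

```python
import numpy as np

# Sketch of the test-retest statistics used above (ICC, SEM, MDC) for a generic
# two-session design. The score matrix is fabricated for illustration only.

def icc_2_1(x):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1)."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape                                   # subjects x sessions
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)    # between subjects
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)    # between sessions
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

scores = np.array([[52, 50], [61, 63], [47, 45], [70, 72],
                   [58, 57], [65, 66], [55, 52], [49, 50]], dtype=float)  # e.g. torque, N*m
icc = icc_2_1(scores)
sem = scores.std(ddof=1) * np.sqrt(1.0 - icc)        # standard error of measurement
mdc95 = 1.96 * np.sqrt(2.0) * sem                    # minimal detectable change (95%)
print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.2f}, MDC95 = {mdc95:.2f}")
```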

  2. Reliability-Aware Cooperative Node Sleeping and Clustering in Duty-Cycled Sensors Networks

    Directory of Open Access Journals (Sweden)

    Jeungeun Song

    2018-01-01

    Full Text Available Duty-cycled sensor networks provide a new perspective for improving the energy efficiency and reliability of multi-hop cooperative sensor networks. In this paper, we consider the energy-efficient cooperative node sleeping and clustering problems in cooperative sensor networks where clusters of relay nodes jointly transmit sensory data to the next hop. Our key idea for guaranteeing reliability is to exploit the on-demand number of cooperative nodes, facilitating the prediction of personalized end-to-end (ETE) reliability. Namely, a novel reliability-aware cooperative routing (RCR) scheme is proposed to select k cooperative nodes at every hop (RCR-selection). After selecting k cooperative nodes at every hop, all of the non-cooperative nodes go into sleep status. In order to solve the cooperative node clustering problem, we propose the RCR-based optimal relay assignment and cooperative data delivery (RCR-delivery) scheme to provide low-communication-overhead data transmission and an optimal duty cycle for a given number of cooperative nodes when the network is dynamic, which enables some of the cooperative nodes to switch into idle status for further energy saving. Through extensive OPNET-based simulations, we show that the proposed scheme significantly outperforms existing geographic routing and beaconless geographic routing schemes in wireless sensor networks with a highly dynamic wireless channel, and controls energy consumption while ETE reliability is effectively guaranteed.

  3. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    International Nuclear Information System (INIS)

    Dong, Feifei; Liu, Yong; Su, Han; Zou, Rui; Guo, Huaicheng

    2015-01-01

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Therefore, optimal decision-making modeling in watershed load reduction suffers from the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under impractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to

  4. Reliability-oriented multi-objective optimal decision-making approach for uncertainty-based watershed load reduction

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Feifei [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Liu, Yong, E-mail: yongliu@pku.edu.cn [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Institute of Water Sciences, Peking University, Beijing 100871 (China); Su, Han [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China); Zou, Rui [Tetra Tech, Inc., 10306 Eaton Place, Ste 340, Fairfax, VA 22030 (United States); Yunnan Key Laboratory of Pollution Process and Management of Plateau Lake-Watershed, Kunming 650034 (China); Guo, Huaicheng [College of Environmental Science and Engineering, Key Laboratory of Water and Sediment Sciences (MOE), Peking University, Beijing 100871 (China)

    2015-05-15

    Water quality management and load reduction are subject to inherent uncertainties in watershed systems and competing decision objectives. Therefore, optimal decision-making modeling in watershed load reduction suffers from the following challenges: (a) it is difficult to obtain absolutely “optimal” solutions, and (b) decision schemes may be vulnerable to failure. The probability that solutions are feasible under uncertainties is defined as reliability. A reliability-oriented multi-objective (ROMO) decision-making approach was proposed in this study for optimal decision making with stochastic parameters and multiple decision reliability objectives. Lake Dianchi, one of the three most eutrophic lakes in China, was examined as a case study for optimal watershed nutrient load reduction to restore lake water quality. This study aimed to maximize reliability levels from considerations of cost and load reductions. The Pareto solutions of the ROMO optimization model were generated with the multi-objective evolutionary algorithm, demonstrating schemes representing different biases towards reliability. The Pareto fronts of six maximum allowable emission (MAE) scenarios were obtained, which indicated that decisions may be unreliable under impractical load reduction requirements. A decision scheme identification process was conducted using the back propagation neural network (BPNN) method to provide a shortcut for identifying schemes at specific reliability levels for decision makers. The model results indicated that the ROMO approach can offer decision makers great insights into reliability tradeoffs and can thus help them to avoid ineffective decisions. - Highlights: • Reliability-oriented multi-objective (ROMO) optimal decision approach was proposed. • The approach can avoid specifying reliability levels prior to optimization modeling. • Multiple reliability objectives can be systematically balanced using Pareto fronts. • Neural network model was used to

  5. Maximization of regional probabilities using Optimal Surface Graphs

    DEFF Research Database (Denmark)

    Arias Lorza, Andres M.; Van Engelen, Arna; Petersen, Jens

    2018-01-01

    Purpose: We present a segmentation method that maximizes regional probabilities enclosed by coupled surfaces using an Optimal Surface Graph (OSG) cut approach. This OSG cut determines the globally optimal solution given a graph constructed around an initial surface. While most methods for vessel...... wall segmentation only use edge information, we show that maximizing regional probabilities using an OSG improves the segmentation results. We applied this to automatically segment the vessel wall of the carotid artery in magnetic resonance images. Methods: First, voxel-wise regional probability maps...... were obtained using a Support Vector Machine classifier trained on local image features. Then, the OSG segments the regions which maximizes the regional probabilities considering smoothness and topological constraints. Results: The method was evaluated on 49 carotid arteries from 30 subjects...

  6. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology whereas this has yet to be fully achieved for large scale structures. Structural loading variants over the half-time of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions are considered which enter this problem. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)

  7. Node-pair reliability of network systems with small distances between adjacent nodes

    International Nuclear Information System (INIS)

    Malinowski, Jacek

    2007-01-01

    A new method for computing the node-pair reliability of network systems modeled by random graphs with nodes arranged in sequence is presented. It is based on a recursive algorithm using the 'sliding window' technique, the window being composed of several consecutive nodes. In a single step, the connectivity probabilities for all nodes included in the window are found. Subsequently, the window is moved one node forward. This process is repeated until, in the last step, the window reaches the terminal node. The connectivity probabilities found at that point are used to compute the node-pair reliability of the network system considered. The algorithm is designed especially for graphs with small distances between adjacent nodes, where the distance between two nodes is defined as the absolute value of the difference between the nodes' numbers. The maximal distance between any two adjacent nodes is denoted by Γ(G), where G symbolizes a random graph. If Γ(G)=2 then the method can be applied to directed as well as undirected graphs whose nodes and edges are subject to failure. This is important in view of the fact that many algorithms computing network reliability are designed for graphs with failure-prone edges and reliable nodes. If Γ(G)=3 then the method's applicability is limited to undirected graphs with reliable nodes. The main asset of the presented algorithm is its low numerical complexity, O(n), where n denotes the number of nodes
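    For comparison purposes only, the sketch below estimates node-pair reliability by brute-force Monte Carlo simulation on a small sequence-like graph with failure-prone nodes and edges. It is not the recursive sliding-window algorithm described above, and the survival probabilities and example graph are arbitrary.

```python
import random
import networkx as nx

# Brute-force Monte Carlo baseline for node-pair (two-terminal) reliability with
# failure-prone nodes and edges -- a cross-check, NOT the sliding-window algorithm
# described above. Survival probabilities below are arbitrary.

def node_pair_reliability_mc(g, s, t, p_node, p_edge, trials=10_000, seed=1):
    """Estimate P(s and t are connected) when each node/edge works independently.
    The terminal nodes s and t are treated as perfectly reliable."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        up_nodes = {v for v in g.nodes if v in (s, t) or rng.random() < p_node}
        h = nx.Graph()
        h.add_nodes_from(up_nodes)
        h.add_edges_from((u, v) for u, v in g.edges
                         if u in up_nodes and v in up_nodes and rng.random() < p_edge)
        hits += nx.has_path(h, s, t)
    return hits / trials

g = nx.path_graph(10)                                 # nodes arranged in a sequence
g.add_edges_from((i, i + 2) for i in range(8))        # extra edges with distance <= 2
print(node_pair_reliability_mc(g, 0, 9, p_node=0.95, p_edge=0.9))
```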

  8. Singularity Structure of Maximally Supersymmetric Scattering Amplitudes

    DEFF Research Database (Denmark)

    Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy

    2014-01-01

    We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic ...... singularities and is free of any poles at infinity—properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA)....

  9. Identities on maximal subgroups of GLn(D)

    International Nuclear Information System (INIS)

    Kiani, D.; Mahdavi-Hezavehi, M.

    2002-04-01

    Let D be a division ring with centre F. Assume that M is a maximal subgroup of GL_n(D), n≥1, such that Z(M) is algebraic over F. Group identities on M and polynomial identities on the F-linear hull F[M] are investigated. It is shown that if F[M] is a PI-algebra, then [D:F] < ∞. Assume now that N is a subnormal subgroup of GL_n(D) and M is a maximal subgroup of N. If M satisfies a group identity, it is shown that M is abelian-by-finite. (author)

  10. Adaptive maximal poisson-disk sampling on surfaces

    KAUST Repository

    Yan, Dongming

    2012-01-01

    In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which is the key ingredient of the adaptive maximal Poisson-disk sampling framework. Moreover, we adapt the presented sampling framework for remeshing applications. Several novel and efficient operators are developed for improving the sampling/meshing quality over the state-of-the-art. © 2012 ACM.

  11. HiRel: Hybrid Automated Reliability Predictor (HARP) integrated reliability tool system, (version 7.0). Volume 3: HARP Graphics Oriented (GO) input user's guide

    Science.gov (United States)

    Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell

    1994-01-01

    The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.

  12. Validity and reliability of a low-cost digital dynamometer for measuring isometric strength of lower limb.

    Science.gov (United States)

    Romero-Franco, Natalia; Jiménez-Reyes, Pedro; Montaño-Munuera, Juan A

    2017-11-01

    Lower limb isometric strength is a key parameter to monitor the training process or recognise muscle weakness and injury risk. However, valid and reliable methods to evaluate it often require high-cost tools. The aim of this study was to analyse the concurrent validity and reliability of a low-cost digital dynamometer for measuring isometric strength in lower limb. Eleven physically active and healthy participants performed maximal isometric strength for: flexion and extension of ankle, flexion and extension of knee, flexion, extension, adduction, abduction, internal and external rotation of hip. Data obtained by the digital dynamometer were compared with the isokinetic dynamometer to examine its concurrent validity. Data obtained by the digital dynamometer from 2 different evaluators and 2 different sessions were compared to examine its inter-rater and intra-rater reliability. Intra-class correlation (ICC) for validity was excellent in every movement (ICC > 0.9). Intra and inter-tester reliability was excellent for all the movements assessed (ICC > 0.75). The low-cost digital dynamometer demonstrated strong concurrent validity and excellent intra and inter-tester reliability for assessing isometric strength in the main lower limb movements.

  13. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book covers reliability engineering: the definition and importance of reliability; the development of reliability engineering; the failure rate and failure probability density function and their types; the constant failure rate (CFR) and the exponential distribution; the increasing failure rate (IFR) with the normal and Weibull distributions; maintainability and availability; reliability testing and reliability estimation for the exponential, normal and Weibull distribution types; reliability sampling tests; system reliability; design for reliability; and functional failure analysis by FTA.
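    As a small illustration of the distribution models listed above, the sketch below evaluates the reliability function and hazard (failure) rate for the constant-failure-rate (exponential) and Weibull models; the parameter values are arbitrary examples, not taken from the book.

```python
import numpy as np

# Textbook quantities mentioned above: reliability R(t) and hazard rate for the
# constant-failure-rate (exponential) and Weibull models. Parameters are arbitrary.

def exp_reliability(t, lam):
    return np.exp(-lam * t)                          # CFR model: hazard is constant, lam

def weibull_reliability(t, eta, beta):
    return np.exp(-(t / eta) ** beta)                # eta: scale, beta: shape

def weibull_hazard(t, eta, beta):
    return (beta / eta) * (t / eta) ** (beta - 1)    # increasing if beta > 1 (IFR)

t = np.linspace(1.0, 1000.0, 5)
print("exponential R(t):", exp_reliability(t, lam=1e-3))
print("Weibull R(t):    ", weibull_reliability(t, eta=800.0, beta=2.0))
print("Weibull hazard:  ", weibull_hazard(t, eta=800.0, beta=2.0))
```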

  14. Is the β phase maximal?

    International Nuclear Information System (INIS)

    Ferrandis, Javier

    2005-01-01

    The current experimental determination of the absolute values of the CKM elements indicates that 2|V_ub/(V_cb V_us)| = (1 - z), with z given by z = 0.19 ± 0.14. This fact implies that, irrespective of the form of the quark Yukawa matrices, the measured value of the SM CP phase β is approximately the maximum allowed by the measured absolute values of the CKM elements. This is β = (π/6 - z/3) for γ = (π/3 + z/3), which implies α = π/2. Alternatively, assuming that β is exactly maximal and using the experimental measurement sin(2β) = 0.726 ± 0.037, the phase γ is predicted to be γ = (π/2 - β) = 66.3° ± 1.7°. The maximality of β, if confirmed by near-future experiments, may give us some clues as to the origin of CP violation.

  15. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    Science.gov (United States)

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
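    The sketch below is a naive sequential baseline in the same spirit as the augmentation approach described above: start from a spanning forest (which is chordal) and repeatedly add edges of G that keep the subgraph chordal until no more can be added. It relies on networkx's chordality test, is far slower than the parallel algorithm in the paper, and is not identical to it; the random test graph is arbitrary.

```python
import networkx as nx

# Naive baseline in the spirit of "compute a spanning chordal subgraph, then
# augment it": keep adding edges of G while the subgraph stays chordal.
# Illustration only -- not the parallel augmentation algorithm described above.

def maximal_chordal_subgraph(g):
    sub = nx.Graph()
    sub.add_nodes_from(g.nodes)
    sub.add_edges_from(nx.minimum_spanning_edges(g, data=False))   # forests are chordal
    changed = True
    while changed:                      # repeat passes until no edge can be added
        changed = False
        for u, v in g.edges:
            if not sub.has_edge(u, v):
                sub.add_edge(u, v)
                if nx.is_chordal(sub):
                    changed = True      # keep the edge
                else:
                    sub.remove_edge(u, v)
    return sub

g = nx.gnp_random_graph(30, 0.3, seed=42)
sub = maximal_chordal_subgraph(g)
print(g.number_of_edges(), "->", sub.number_of_edges(), "edges; chordal:", nx.is_chordal(sub))
```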

  16. Reliability assessment of selected indicators of tree health

    Science.gov (United States)

    Pawel M. Lech

    2000-01-01

    The measurements of electrical resistance of near-cambium tissues, selected biometric features of needles and shoots, and the annual radial increment as well as visual estimates of crown defoliation were performed on about 100 Norway spruce trees in three 60- to 70-year-old stands located in the Western Sudety Mountains. The defoliation, electrical resistance, and...

  17. Intraoperative use of indocyanine green angiography for selecting the more reliable perforator of the anterolateral thigh flap: A comparison study.

    Science.gov (United States)

    La Padula, Simone; Hersant, Barbara; Meningaud, Jean Paul

    2018-03-30

    Anatomical variability of the anterolateral thigh (ALT) flap perforators has been reported. The aim of this study was to assess whether the use of intraoperative indocyanine green angiography (iICGA) can help surgeons choose the best ALT flap perforator to preserve. A retrospective study was conducted in 28 patients with open tibial fracture following a road traffic crash who had undergone ALT flap reconstruction. Patients were classified into two groups: the ICGA group (iICGA was used to select the more reliable perforator) and the control group. The mean tissue loss size of the ICGA group (n = 13, 11 men and 2 women, mean age: 52 ± 6 years) was 16.6 cm × 12.2 cm. The mean defect size of the control group (n = 15, 14 men and 1 woman, mean age: 50 ± 5.52 years) was 15.3 cm × 11.1 cm. Statistical analysis was performed to analyze and compare the results. ICGA allowed preserving only the most functional perforator, i.e., the one providing the best ALT flap perfusion, in 10 of the 13 cases (77%). ICGA allowed a significant reduction in operative time (160 ± 23 vs. 202 ± 48 minutes; P < .001). One case of distal necrosis was observed in the ICGA group (mean follow-up 12.3 months), while partial skin necrosis occurred in three cases in the control group (mean follow-up 13.1 months); P = .35. No additional coverage was required and successful bone healing was observed in both groups. These findings suggest that iICGA is an effective method for selecting the most reliable ALT flap perforators and reducing operative time. © 2018 Wiley Periodicals, Inc.

  18. Evaluation of Reliability in Risk-Constrained Scheduling of Autonomous Microgrids with Demand Response and Renewable Resources

    DEFF Research Database (Denmark)

    Vahedipour-Dahraie, Mostafa; Anvari-Moghaddam, Amjad; Guerrero, Josep M.

    2018-01-01

    Uncertain natures of the renewable energy resources and consumers’ participation in demand response (DR) programs have introduced new challenges to the energy and reserve scheduling of microgrids, particularly in the autonomous mode. In this paper, a risk-constrained stochastic framework...... is presented to maximize the expected profit of a microgrid operator under uncertainties of renewable resources, demand load and electricity price. In the proposed model, the trade-off between maximizing the operator’s expected profit and the risk of getting low profits in undesired scenarios is modeled...... of microgrid. Moreover, the impacts of different VOLL and risk aversion parameters on system reliability are illustrated. Extensive simulation results are also presented to illustrate the impact of risk aversion on system security issues with and without DR. Numerical results demonstrate the advantages......

  19. Using wind plant data to increase reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Peters, Valerie A. (Sandia National Laboratories, Livermore, CA); Ogilvie, Alistair B.; McKenney, Bridget L.

    2011-01-01

    Operators interested in improving reliability should begin with a focus on the performance of the wind plant as a whole. To then understand the factors which drive individual turbine performance, which together comprise the plant performance, it is necessary to track a number of key indicators. Analysis of these key indicators can reveal the type, frequency, and cause of failures and will also identify their contributions to overall plant performance. The ideal approach to using data to drive good decisions includes first determining which critical decisions can be based on data. When those required decisions are understood, then the analysis required to inform those decisions can be identified, and finally the data to be collected in support of those analyses can be determined. Once equipped with high-quality data and analysis capabilities, the key steps to data-based decision making for reliability improvements are to isolate possible improvements, select the improvements with largest return on investment (ROI), implement the selected improvements, and finally to track their impact.

  20. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  1. Quantum speedup in solving the maximal-clique problem

    Science.gov (United States)

    Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang

    2018-03-01

    The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to, respectively, O(√(2^n)) and O(n^2). With respect to oracle-related quantum algorithms for the NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.
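    For a sense of the classical baseline the quantum algorithm is compared against, the snippet below enumerates the maximal cliques of a toy graph with networkx and reports a maximum one; exhaustive approaches like this scale exponentially in the worst case, which is the gap the proposed quadratic speedup targets. The example graph is arbitrary and this is not the quantum procedure.

```python
import networkx as nx

# Classical baseline for the problem described above: enumerate maximal cliques
# of a small graph and report a maximum one. Worst-case exponential time.

g = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)])
largest = max(nx.find_cliques(g), key=len)    # find_cliques yields maximal cliques
print("a maximum clique:", largest)           # e.g. [1, 2, 3]
```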

  2. Plastic packaged microcircuits: Quality, reliability, and cost issues

    Science.gov (United States)

    Pecht, Michael G.; Agarwal, Rakesh; Quearry, Dan

    1993-12-01

    Plastic encapsulated microcircuits (PEMs) find their main application in commercial and telecommunication electronics. The advantages of PEMs in cost, size, weight, performance, and market lead-time, have attracted 97% of the market share of worldwide microcircuit sales. However, PEMs have always been resisted in US Government and military applications due to the perception that PEM reliability is low. This paper surveys plastic packaging with respect to the issues of reliability, market lead-time, performance, cost, and weight as a means to guide part-selection and system-design.

  3. Muscle mitochondrial capacity exceeds maximal oxygen delivery in humans

    DEFF Research Database (Denmark)

    Boushel, Robert Christopher; Gnaiger, Erich; Calbet, Jose A L

    2011-01-01

    Across a wide range of species and body mass a close matching exists between maximal conductive oxygen delivery and mitochondrial respiratory rate. In this study we investigated in humans how closely in-vivo maximal oxygen consumption (VO(2) max) is matched to state 3 muscle mitochondrial respira...

  4. Class 1-Selective Histone Deacetylase (HDAC) Inhibitors Enhance HIV Latency Reversal while Preserving the Activity of HDAC Isoforms Necessary for Maximal HIV Gene Expression.

    Science.gov (United States)

    Zaikos, Thomas D; Painter, Mark M; Sebastian Kettinger, Nadia T; Terry, Valeri H; Collins, Kathleen L

    2018-03-15

    Combinations of drugs that affect distinct mechanisms of HIV latency aim to induce robust latency reversal leading to cytopathicity and elimination of the persistent HIV reservoir. Thus far, attempts have focused on combinations of protein kinase C (PKC) agonists and pan-histone deacetylase inhibitors (HDIs) despite the knowledge that HIV gene expression is regulated by class 1 histone deacetylases. We hypothesized that class 1-selective HDIs would promote more robust HIV latency reversal in combination with a PKC agonist than pan-HDIs because they preserve the activity of proviral factors regulated by non-class 1 histone deacetylases. Here, we show that class 1-selective agents used alone or with the PKC agonist bryostatin-1 induced more HIV protein expression per infected cell. In addition, the combination of entinostat and bryostatin-1 induced viral outgrowth, whereas bryostatin-1 combinations with pan-HDIs did not. When class 1-selective HDIs were used in combination with pan-HDIs, the amount of viral protein expression and virus outgrowth resembled that of pan-HDIs alone, suggesting that pan-HDIs inhibit robust gene expression induced by class 1-selective HDIs. Consistent with this, pan-HDI-containing combinations reduced the activity of NF-κB and Hsp90, two cellular factors necessary for potent HIV protein expression, but did not significantly reduce overall cell viability. An assessment of viral clearance from in vitro cultures indicated that maximal protein expression induced by class 1-selective HDI treatment was crucial for reservoir clearance. These findings elucidate the limitations of current approaches and provide a path toward more effective strategies to eliminate the HIV reservoir. IMPORTANCE Despite effective antiretroviral therapy, HIV evades eradication in a latent form that is not affected by currently available drug regimens. Pharmacologic latency reversal that leads to death of cellular reservoirs has been proposed as a strategy for

  5. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals

    Science.gov (United States)

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics the AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in neuron population, as well as the number of ion channels in each neuron that maximizes the energy efficiency. The energy efficiency also depends on the characters of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of the neural systems when energy use is constrained.

  6. Reliability-based assessment of polyethylene pipe creep lifetime

    International Nuclear Information System (INIS)

    Khelif, Rabia; Chateauneuf, Alaa; Chaoui, Kamel

    2007-01-01

    Lifetime management of underground pipelines is mandatory for safe hydrocarbon transmission and distribution systems. The use of high-density polyethylene tubes subjected to internal pressure, external loading and environmental variations requires a reliability study in order to define the service limits and the optimal operating conditions. In service, the time-dependent phenomena, especially creep, take place during the pipe lifetime, leading to significant strength reduction. In this work, the reliability-based assessment of pipe lifetime models is carried out, in order to propose a probabilistic methodology for lifetime model selection and to determine the pipe safety levels as well as the most important parameters for pipeline reliability. This study is enhanced by parametric analysis on pipe configuration, gas pressure and operating temperature

  7. Reliability-based assessment of polyethylene pipe creep lifetime

    Energy Technology Data Exchange (ETDEWEB)

    Khelif, Rabia [LaMI-UBP and IFMA, Campus de Clermont-Fd, Les Cezeaux, BP 265, 63175 Aubiere Cedex (France); LR3MI, Departement de Genie Mecanique, Universite Badji Mokhtar, BP 12, Annaba 23000 (Algeria)], E-mail: rabia.khelif@ifma.fr; Chateauneuf, Alaa [LGC-University Blaise Pascal, Campus des Cezeaux, BP 206, 63174 Aubiere Cedex (France)], E-mail: alaa.chateauneuf@polytech.univ-bpclermont.fr; Chaoui, Kamel [LR3MI, Departement de Genie Mecanique, Universite Badji Mokhtar, BP 12, Annaba 23000 (Algeria)], E-mail: chaoui@univ-annaba.org

    2007-12-15

    Lifetime management of underground pipelines is mandatory for safe hydrocarbon transmission and distribution systems. The use of high-density polyethylene tubes subjected to internal pressure, external loading and environmental variations requires a reliability study in order to define the service limits and the optimal operating conditions. In service, the time-dependent phenomena, especially creep, take place during the pipe lifetime, leading to significant strength reduction. In this work, the reliability-based assessment of pipe lifetime models is carried out, in order to propose a probabilistic methodology for lifetime model selection and to determine the pipe safety levels as well as the most important parameters for pipeline reliability. This study is enhanced by parametric analysis on pipe configuration, gas pressure and operating temperature.

  8. MAXIMIZING SOCIAL VALUE IN THE HOTEL ONLINE ENVIRONMENT USING AN ANALYTIC HIERARCHY PROCESS

    Directory of Open Access Journals (Sweden)

    Carmen Păunescu

    2018-03-01

    Full Text Available The paper analyses the possibilities that hoteliers have to create and maximize the social value of their online platforms, in terms of their functionality and usage, in order to improve sales and increase hotels’ performance. It also discusses the opportunities that hotel managers can take to improve the hotel online decision-making strategy so as to convert visitors into actual customers more effectively. Although social value creation on online platforms has been well researched in the specialized literature, recent research has not examined the ways the online social value can be maximized and put into effective commercial use. The paper reviews the dimensions and characteristics of the hotel online environment by integrating literature analysis and field research practices. It employs the analytic hierarchy process method to analyse key elements of the hotel online environment that can serve as a focal point for value creation. The literature review and field research conducted pinpoint three possibilities for creating online social value: (a) building online trust, (b) ensuring high quality of the online service, and (c) providing an effective online communication experience. The results give a deeper understanding of the potential areas of the hotel online environment where social value can be obtained, and they demonstrate the applicability of the analytic hierarchy process method for evaluating and selecting strategies for online social value creation. At the same time, the paper provides new valuable insights to hoteliers, which might support their decisions to improve the business by proactively incorporating strategies for online social value maximization.
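    A minimal sketch of the analytic hierarchy process step used in the paper: derive priority weights for the three value-creation possibilities from a reciprocal pairwise-comparison matrix (principal eigenvector) and check the consistency ratio. The pairwise judgments in A are invented for illustration, not the paper's survey data.

```python
import numpy as np

# AHP sketch: priority weights from a pairwise comparison matrix plus a
# consistency check. The judgments below are invented (Saaty 1-9 scale).

criteria = ["online trust", "online service quality", "online communication"]
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # reciprocal pairwise judgments

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # principal eigenvector -> priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
cr = ci / 0.58                           # random index for n = 3 is 0.58
for c, w in zip(criteria, weights):
    print(f"{c}: {w:.3f}")
print(f"consistency ratio = {cr:.3f} (commonly accepted if < 0.1)")
```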

  9. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. Enumerating all maximal frequent subtrees in collections of phylogenetic trees.

    Science.gov (United States)

    Deepak, Akshay; Fernández-Baca, David

    2014-01-01

    A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees.

  11. Softly Broken Lepton Numbers: an Approach to Maximal Neutrino Mixing

    International Nuclear Information System (INIS)

    Grimus, W.; Lavoura, L.

    2001-01-01

    We discuss models where the U(1) symmetries of lepton numbers are responsible for maximal neutrino mixing. We pay particular attention to an extension of the Standard Model (SM) with three right-handed neutrino singlets in which we require that the three lepton numbers L_e, L_μ, and L_τ be separately conserved in the Yukawa couplings, but assume that they are softly broken by the Majorana mass matrix M_R of the neutrino singlets. In this framework, where lepton-number breaking occurs at a scale much higher than the electroweak scale, deviations from family lepton number conservation are calculable, i.e., finite, and lepton mixing stems exclusively from M_R. We show that in this framework either maximal atmospheric neutrino mixing or maximal solar neutrino mixing or both can be imposed by invoking symmetries. In this way those maximal mixings are stable against radiative corrections. The model which achieves maximal (or nearly maximal) solar neutrino mixing assumes that there are two different scales in M_R and that the lepton number L̄ = L_e − L_μ − L_τ is conserved in between them. We work out the difference between this model and the conventional scenario where (approximate) L̄ invariance is imposed directly on the mass matrix of the light neutrinos. (author)

  12. Reliability and validity of the test of incremental respiratory endurance measures of inspiratory muscle performance in COPD

    Directory of Open Access Journals (Sweden)

    Formiga MF

    2018-05-01

    Full Text Available Magno F Formiga,1,2 Kathryn E Roach,1 Isabel Vital,3 Gisel Urdaneta,3 Kira Balestrini,3 Rafael A Calderon-Candelario,3,4 Michael A Campos,3,4,* Lawrence P Cahalin1,* 1Department of Physical Therapy, University of Miami Miller School of Medicine, Coral Gables, FL, USA; 2CAPES Foundation, Ministry of Education of Brazil, Brasilia, Brazil; 3Pulmonary Section, Miami Veterans Administration Medical Center, Miami, FL, USA; 4Division of Pulmonary, Allergy, Critical Care and Sleep Medicine, University of Miami Miller School of Medicine, Miami, FL, USA *These authors contributed equally to this work Purpose: The Test of Incremental Respiratory Endurance (TIRE) provides a comprehensive assessment of inspiratory muscle performance by measuring maximal inspiratory pressure (MIP) over time. The integration of MIP over inspiratory duration (ID) provides the sustained maximal inspiratory pressure (SMIP). Evidence on the reliability and validity of these measurements in COPD is not currently available. Therefore, we assessed the reliability, responsiveness and construct validity of the TIRE measures of inspiratory muscle performance in subjects with COPD. Patients and methods: Test–retest reliability, known-groups and convergent validity assessments were implemented simultaneously in 81 male subjects with mild to very severe COPD. TIRE measures were obtained using the portable PrO2 device, following standard guidelines. Results: All TIRE measures were found to be highly reliable, with SMIP demonstrating the strongest test–retest reliability with a nearly perfect intraclass correlation coefficient (ICC) of 0.99, while MIP and ID clustered closely together behind SMIP with ICC values of about 0.97. Our findings also demonstrated known-groups validity of all TIRE measures, with SMIP and ID yielding larger effect sizes when compared to MIP in distinguishing between subjects of different COPD status. Finally, our analyses confirmed convergent validity for both SMIP

  13. Maritime energy and security: Synergistic maximization or necessary tradeoffs?

    International Nuclear Information System (INIS)

    Nyman, Elizabeth

    2017-01-01

    Offshore energy is big business. The traditional source of maritime energy, offshore petroleum and gas, has been on the rise since a reliable method of extraction was discovered in the mid-20th century. Lately, it has been joined by offshore wind and tidal power as alternative “green” sources of maritime energy. Yet all of this has implications for maritime environmental regimes as well, as maritime energy extraction/generation can have a negative effect on the ocean environment. This paper considers two major questions surrounding maritime energy and environmental concerns. First, how and why do these two concerns, maritime energy and environmental protection, play against each other? Second, how can states both secure their energy and environmental securities in the maritime domain? Maximizing maritime energy output necessitates some environmental costs and vice versa, but these costs vary with the type of offshore energy technology used and with the extent to which states are willing to expend effort to protect both environmental and energy security. - Highlights: • Security is a complicated concept with several facets including energy and environmental issues. • Offshore energy contributes to energy supply but can have environmental and monitoring costs. • Understanding the contribution of offshore energy to security depends on which security facet is deemed most important.

  14. Learning reliable manipulation strategies without initial physical models

    Science.gov (United States)

    Christiansen, Alan D.; Mason, Matthew T.; Mitchell, Tom M.

    1990-01-01

    A description is given of a robot, possessing limited sensory and effectory capabilities but no initial model of the effects of its actions on the world, that acquires such a model through exploration, practice, and observation. By acquiring an increasingly correct model of its actions, it generates increasingly successful plans to achieve its goals. In an apparently nondeterministic world, achieving reliability requires the identification of reliable actions and a preference for using such actions. Furthermore, by selecting its training actions carefully, the robot can significantly improve its learning rate.

  15. Maximal heart rate does not limit cardiovascular capacity in healthy humans

    DEFF Research Database (Denmark)

    Munch, G D W; Svendsen, J H; Damsgaard, R

    2014-01-01

    In humans, maximal aerobic power (VO2 max ) is associated with a plateau in cardiac output (Q), but the mechanisms regulating the interplay between maximal heart rate (HRmax) and stroke volume (SV) are unclear. To evaluate the effect of tachycardia and elevations in HRmax on cardiovascular function...... and capacity during maximal exercise in healthy humans, 12 young male cyclists performed incremental cycling and one-legged knee-extensor exercise (KEE) to exhaustion with and without right atrial pacing to increase HR. During control cycling, Q and leg blood flow increased up to 85% of maximal workload (WLmax...... and RAP (P healthy...

  16. Power Converters Maximize Outputs Of Solar Cell Strings

    Science.gov (United States)

    Frederick, Martin E.; Jermakian, Joel B.

    1993-01-01

    Microprocessor-controlled dc-to-dc power converters have been devised to maximize the power transferred from solar photovoltaic strings to storage batteries and other electrical loads. The converters help in utilizing large solar photovoltaic arrays most effectively with respect to cost, size, and weight. The main points of the invention are: a single controller is used to control and optimize any number of "dumb" tracker units and strings independently; power out of the converters is maximized; and the controller in the system is a microprocessor.
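    To illustrate how a microprocessor-based tracker can maximize the power drawn from a string, the sketch below runs a generic perturb-and-observe maximum-power-point-tracking loop against a toy photovoltaic curve. This is a textbook hill-climbing scheme, not the specific NASA controller described above; the toy P-V model, step size, and starting point are assumptions.

```python
# Generic perturb-and-observe MPPT loop, included only to illustrate how a
# microprocessor-controlled converter can maximize string output. This is a
# textbook scheme, not the specific NASA controller described in the record.

def p_and_o_step(v_meas, i_meas, state, dv=0.1):
    """One MPPT iteration: keep nudging the voltage set-point in the direction
    that increased power on the previous step, reverse it otherwise."""
    p = v_meas * i_meas
    if p < state["p_prev"]:
        state["direction"] *= -1              # last perturbation reduced power: reverse
    state["p_prev"] = p
    return v_meas + state["direction"] * dv   # new set-point for the dc-to-dc converter

def toy_string_current(v):
    """Toy P-V curve with its maximum-power point near 17 V (demo only)."""
    if v <= 0:
        return 0.0
    p = 120.0 * (1.0 - ((v - 17.0) / 17.0) ** 2)    # watts
    return max(0.0, p) / v

state = {"p_prev": 0.0, "direction": +1}
v = 12.0
for _ in range(200):
    v = p_and_o_step(v, toy_string_current(v), state)
print(f"settled near v = {v:.1f} V (true maximum-power point is 17 V in this toy model)")
```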

  17. No Mikheyev-Smirnov-Wolfenstein Effect in Maximal Mixing

    OpenAIRE

    Harrison, P. F.; Perkins, D. H.; Scott, W. G.

    1996-01-01

    We investigate the possible influence of the MSW effect on the expectations for the solar neutrino experiments in the maximal mixing scenario suggested by the atmospheric neutrino data. A direct numerical calculation of matter induced effects in the Sun shows that the naive vacuum predictions are left completely undisturbed in the particular case of maximal mixing, so that the MSW effect turns out to be unobservable. We give a qualitative explanation of this result.

  18. Robot-Assisted End-Effector-Based Stair Climbing for Cardiopulmonary Exercise Testing: Feasibility, Reliability, and Repeatability.

    Science.gov (United States)

    Stoller, Oliver; Schindelholz, Matthias; Hunt, Kenneth J

    2016-01-01

    Neurological impairments can limit the implementation of conventional cardiopulmonary exercise testing (CPET) and cardiovascular training strategies. A promising approach to provoke cardiovascular stress while facilitating task-specific exercise in people with disabilities is feedback-controlled robot-assisted end-effector-based stair climbing (RASC). The aim of this study was to evaluate the feasibility, reliability, and repeatability of augmented RASC-based CPET in able-bodied subjects, with a view towards future research and applications in neurologically impaired populations. Twenty able-bodied subjects performed a familiarisation session and 2 consecutive incremental CPETs using augmented RASC. Outcome measures focussed on standard cardiopulmonary performance parameters and on accuracy of work rate tracking (RMSEP-root mean square error). Criteria for feasibility were cardiopulmonary responsiveness and technical implementation. Relative and absolute test-retest reliability were assessed by intraclass correlation coefficients (ICC), standard error of the measurement (SEM), and minimal detectable change (MDC). Mean differences, limits of agreement, and coefficients of variation (CoV) were estimated to assess repeatability. All criteria for feasibility were achieved. Mean V'O2peak was 106±9% of predicted V'O2max and mean HRpeak was 99±3% of predicted HRmax. 95% of the subjects achieved at least 1 criterion for V'O2max, and the detection of the sub-maximal ventilatory thresholds was successful (ventilatory anaerobic threshold 100%, respiratory compensation point 90% of the subjects). Excellent reliability was found for peak cardiopulmonary outcome measures (ICC ≥ 0.890, SEM ≤ 0.60%, MDC ≤ 1.67%). Repeatability for the primary outcomes was good (CoV ≤ 0.12). RASC-based CPET with feedback-guided exercise intensity demonstrated comparable or higher peak cardiopulmonary performance variables relative to predicted values, achieved the criteria for V'O2max

  19. Robot-Assisted End-Effector-Based Stair Climbing for Cardiopulmonary Exercise Testing: Feasibility, Reliability, and Repeatability.

    Directory of Open Access Journals (Sweden)

    Oliver Stoller

    Full Text Available Neurological impairments can limit the implementation of conventional cardiopulmonary exercise testing (CPET) and cardiovascular training strategies. A promising approach to provoke cardiovascular stress while facilitating task-specific exercise in people with disabilities is feedback-controlled robot-assisted end-effector-based stair climbing (RASC). The aim of this study was to evaluate the feasibility, reliability, and repeatability of augmented RASC-based CPET in able-bodied subjects, with a view towards future research and applications in neurologically impaired populations. Twenty able-bodied subjects performed a familiarisation session and 2 consecutive incremental CPETs using augmented RASC. Outcome measures focussed on standard cardiopulmonary performance parameters and on accuracy of work rate tracking (RMSEP-root mean square error). Criteria for feasibility were cardiopulmonary responsiveness and technical implementation. Relative and absolute test-retest reliability were assessed by intraclass correlation coefficients (ICC), standard error of the measurement (SEM), and minimal detectable change (MDC). Mean differences, limits of agreement, and coefficients of variation (CoV) were estimated to assess repeatability. All criteria for feasibility were achieved. Mean V'O2peak was 106±9% of predicted V'O2max and mean HRpeak was 99±3% of predicted HRmax. 95% of the subjects achieved at least 1 criterion for V'O2max, and the detection of the sub-maximal ventilatory thresholds was successful (ventilatory anaerobic threshold 100%, respiratory compensation point 90% of the subjects). Excellent reliability was found for peak cardiopulmonary outcome measures (ICC ≥ 0.890, SEM ≤ 0.60%, MDC ≤ 1.67%). Repeatability for the primary outcomes was good (CoV ≤ 0.12). RASC-based CPET with feedback-guided exercise intensity demonstrated comparable or higher peak cardiopulmonary performance variables relative to predicted values, achieved the criteria for V'O2

  20. Single maximal versus combination punch kinematics.

    Science.gov (United States)

    Piorkowski, Barry A; Lees, Adrian; Barton, Gabor J

    2011-03-01

    The aim of this study was to determine the influence of punch type (Jab, Cross, Lead Hook and Reverse Hook) and punch modality (Single maximal, 'In-synch' and 'Out of synch' combination) on punch speed and delivery time. Ten competition-standard volunteers performed punches with markers placed on their anatomical landmarks for 3D motion capture with an eight-camera optoelectronic system. Speed and duration between key moments were computed. There were significant differences in contact speed between punch types (F(2.18, 84.87) = 105.76, p = 0.001), with Lead and Reverse Hooks developing greater speed than Jab and Cross. There were significant differences in contact speed between punch modalities (F(2.64, 102.87) = 23.52, p = 0.001), with the Single maximal (M ± SD: 9.26 ± 2.09 m/s) higher than 'Out of synch' (7.49 ± 2.32 m/s), 'In-synch' left (8.01 ± 2.35 m/s) or right lead (7.97 ± 2.53 m/s). Delivery times were significantly lower for Jab and Cross than Hook. Times were significantly lower 'In-synch' than in a Single maximal or 'Out of synch' combination mode. It is concluded that a defender may have more evasion time than previously reported. This research could be of use to performers and coaches when considering training preparations.

  1. Formation Control for the MAXIM Mission

    Science.gov (United States)

    Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.

    2004-01-01

    Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the groundwork for linear formation control designs.
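
    For orientation only: in the simplest setting, a two-body circular reference orbit with mean motion n, linearized relative-motion equations take the familiar Hill-Clohessy-Wiltshire form reproduced below. The MAXIM analysis develops the analogous linearization for an n-body gravitational field, so this block is background rather than the paper's result.

    ```latex
    \ddot{x} - 2n\dot{y} - 3n^{2}x = 0, \qquad
    \ddot{y} + 2n\dot{x} = 0, \qquad
    \ddot{z} + n^{2}z = 0,
    ```

    where x, y and z are the radial, along-track and cross-track components of the relative position.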

  2. Reliability models for a nonrepairable system with heterogeneous components having a phase-type time-to-failure distribution

    International Nuclear Information System (INIS)

    Kim, Heungseob; Kim, Pansoo

    2017-01-01

    This research paper presents practical stochastic models for designing and analyzing the time-dependent reliability of nonrepairable systems. The models are formulated for nonrepairable systems with heterogeneous components having phase-type time-to-failure distributions by a structured continuous time Markov chain (CTMC). The versatility of phase-type distributions enhances the flexibility and practicality of the models, allowing reliability studies to go beyond the limitations of previous approaches. This study attempts to solve a redundancy allocation problem (RAP) by using these new models. The implications of mixing components, redundancy levels, and redundancy strategies are simultaneously considered to maximize the reliability of a system. An imperfect switching case in a standby redundant system is also considered. Furthermore, the experimental results for a well-known RAP benchmark problem are presented to demonstrate the approximating error of the previous reliability function for a standby redundant system and the usefulness of the current research. - Highlights: • Phase-type time-to-failure distribution is used for components. • Reliability model for nonrepairable system is developed using Markov chain. • System is composed of heterogeneous components. • Model provides the exact value of standby system reliability, not an approximation. • Redundancy allocation problem is used to show usefulness of this model.
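
    To make the phase-type machinery concrete: if a component's time to failure follows PH(α, T), with initial distribution α over the transient states and sub-generator T of an absorbing CTMC, its reliability is R(t) = α·exp(Tt)·1. The Python sketch below evaluates this formula for an invented two-phase example; it illustrates the formula only and is not the paper's system model.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def ph_reliability(alpha, T, t):
        """Reliability R(t) = alpha * expm(T t) * 1 for a phase-type time to failure."""
        ones = np.ones(T.shape[0])
        return float(alpha @ expm(T * t) @ ones)

    # Illustrative two-phase example (not the paper's system)
    alpha = np.array([1.0, 0.0])            # component starts in phase 1
    T = np.array([[-0.5,  0.5],             # leave phase 1 at rate 0.5 per unit time
                  [ 0.0, -0.2]])            # absorb (fail) from phase 2 at rate 0.2
    for t in (1.0, 5.0, 10.0):
        print(t, ph_reliability(alpha, T, t))
    ```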

  3. Conversational Implicature of Peanuts Comic Strip Based on Grice’s Maxim Theory

    Directory of Open Access Journals (Sweden)

    Muhartoyo Muhartoyo

    2013-04-01

    Full Text Available This article discusses conversational implicature that occurs in Peanuts comic strips. The objectives of this study are to find out the implied meaning in the conversations between Charlie Brown and Lucy van Pelt and between Lucy van Pelt and Linus van Pelt, and to evaluate the occurrence of maxim flouting and maxim violating in those conversations in relation to the four maxims of quantity, quality, relation, and manner. Likewise, this study attempts to find out the reason for using conversational implicature in a comic strip. The writers use a qualitative method with library research based on Grice’s maxim theory to analyze the conversational implicature. Based on the analysis, it can be concluded that all 14 comic strips analyzed generate conversational implicature, since all the characters breach the rules of the maxims. The result of this analysis shows that flouting the maxim of manner has the highest occurrence of conversational implicature, and the least frequent are flouting the maxim of relation and violating the maxim of quantity. Moreover, the writers conclude that, to make communication successful, the speaker and the hearer should ideally cooperate in the conversation by speaking explicitly so that the hearer can grasp the meaning, as the goal of communication is to deliver a message to the hearer.

  4. Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System.

    Science.gov (United States)

    Cazettes, Fanny; Fischer, Brian J; Peña, Jose L

    2016-02-17

    Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior. Copyright © 2016 the authors 0270-6474/16/362101-10$15.00/0.

  5. Comparison of changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction.

    Science.gov (United States)

    Jung, Halim; Jung, Sangwoo; Joo, Sunghee; Song, Changho

    2016-01-01

    [Purpose] The purpose of this study was to compare changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. [Subjects] Thirty healthy adults participated in this study (15 men and 15 women). [Methods] All participants performed a bridge exercise and abdominal curl-up during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. Pelvic floor mobility was evaluated as the distance from the bladder base using ultrasound. [Results] According to exercise method, bridge exercise and abdominal curl-ups led to significantly different pelvic floor mobility. The pelvic floor muscle was elevated during the abdominal drawing-in maneuver and descended during maximal expiration. Finally, pelvic floor muscle mobility was greater during abdominal curl-up than during the bridge exercise. [Conclusion] According to these results, the abdominal drawing-in maneuver induced pelvic floor muscle contraction, and pelvic floor muscle contraction was greater during the abdominal curl-up than during the bridge exercise.

  6. Reliability modeling of Clinch River breeder reactor electrical shutdown systems

    International Nuclear Information System (INIS)

    Schatz, R.A.; Duetsch, K.L.

    1974-01-01

    The initial simulation of the probabilistic properties of the Clinch River Breeder Reactor Plant (CRBRP) electrical shutdown systems is described. A model of the reliability (and availability) of the systems is presented utilizing Success State and continuous-time, discrete state Markov modeling techniques as significant elements of an overall reliability assessment process capable of demonstrating the achievement of program goals. This model is examined for its sensitivity to safe/unsafe failure rates, sybsystem redundant configurations, test and repair intervals, monitoring by reactor operators; and the control exercised over system reliability by design modifications and the selection of system operating characteristics. (U.S.)

  7. Descriptive Analysis on Flouting and Hedging of Conversational Maxims in the “Post Grad” Movie

    Directory of Open Access Journals (Sweden)

    Nastiti Rokhmania

    2012-11-01

    Full Text Available This research is focused on analyzing the flouting and hedging of conversational maxims in utterances used by the main characters in the “Post Grad” movie. Conversational maxims are the rules of the cooperative principle, categorized into four categories: Maxim of Quality, Maxim of Quantity, Maxim of Relevance, and Maxim of Manner. If these maxims are observed in conversations, the conversations can go smoothly. However, people often break the maxims overtly (flouting maxims) and sometimes break the maxims secretly (hedging maxims) when they make a conversation. This research is conducted using a descriptive qualitative method based on the theory known as Grice’s Maxims. The data are in the form of utterances used by the characters in the “Post Grad” movie. The data analysis reveals some findings covering the formulated research question. The maxims are flouted when the speaker breaks some conversational maxims by using utterances in the form of rhetorical strategies, such as tautology, metaphor, hyperbole, irony, and rhetorical question. On the other hand, conversational maxims are also hedged when the information is not totally accurate or not clearly stated but seems informative, well-founded, and relevant.

  8. System reliability, performance and trust in adaptable automation.

    Science.gov (United States)

    Chavaillaz, Alain; Wastell, David; Sauer, Jürgen

    2016-01-01

    The present study examined the effects of reduced system reliability on operator performance and automation management in an adaptable automation environment. 39 operators were randomly assigned to one of three experimental groups: low (60%), medium (80%), and high (100%) reliability of automation support. The support system provided five incremental levels of automation which operators could freely select according to their needs. After 3 h of training on a simulated process control task (AutoCAMS) in which the automation worked infallibly, operator performance and automation management were measured during a 2.5-h testing session. Trust and workload were also assessed through questionnaires. Results showed that although reduced system reliability resulted in lower levels of trust towards automation, there were no corresponding differences in the operators' reliance on automation. While operators showed overall a noteworthy ability to cope with automation failure, there were, however, decrements in diagnostic speed and prospective memory with lower reliability. Copyright © 2015. Published by Elsevier Ltd.

  9. Maximal and anaerobic threshold cardiorespiratory responses during deepwater running

    Directory of Open Access Journals (Sweden)

    Ana Carolina Kanitz

    2014-12-01

    Full Text Available DOI: http://dx.doi.org/10.5007/1980-0037.2015v17n1p41   Aquatic exercises provide numerous benefits to the health of their practitioners. To secure these benefits, it is essential to have prescriptions appropriate to the needs of each individual and, therefore, it is important to study the cardiorespiratory responses of different activities in this environment. Thus, the aim of this study was to compare the cardiorespiratory responses at the anaerobic threshold (AT) between maximal deep-water running (DWR) and maximal treadmill running (TMR). In addition, two methods of determining the AT (the heart rate deflection point [HRDP] and the ventilatory method [VM]) were compared in the two evaluated protocols. Twelve young women performed the two maximal protocols. Two-factor ANOVA for repeated measures with a post-hoc Bonferroni test was used (α < 0.05). Significantly higher values of maximal heart rate and maximal oxygen uptake (TMR: 33.7 ± 3.9; DWR: 22.5 ± 4.1 ml·kg−1·min−1) were found in TMR compared to DWR. Furthermore, no significant differences were found between the methods for determining the AT (TMR: VM: 28.1 ± 5.3, HRDP: 26.6 ± 5.5 ml·kg−1·min−1; DWR: VM: 18.7 ± 4.8, HRDP: 17.8 ± 4.8 ml·kg−1·min−1). The results indicate that a specific maximal test for the trained modality should be conducted and that the HRDP can be used as a simple and practical method of determining the AT, based on which the training intensity can be determined.

  10. Effect of pillow size preference on extensor digitorum communis muscle strength and electromyographic activity during maximal contraction in healthy individuals: A pilot study

    Directory of Open Access Journals (Sweden)

    Jia-Chi Wang

    2015-03-01

    Conclusion: The results suggest that anatomical body measurements are not good predictors of optimal pillow height. As EDC muscle strength is affected by pillow height preference, maximal EDC muscle strength may be a useful complement for selecting the optimal pillow size.

  11. Reliability 'H' scheme of HV/MV substations

    Directory of Open Access Journals (Sweden)

    Perić Dragoslav M.

    2015-01-01

    Full Text Available Substations (HV/MV) connect transmission and distribution systems with consumers of electric energy. The selective search method was used for the calculation of substation reliability, where all arrangement elements were grouped into blocks. The subject of the analysis was H-arrangements comprising air-insulated switchgear on the high-voltage side of HV/MV substations, with different numbers of feeder and transformer bays and a diverse scope of installed switching equipment. Failure rate and failure duration were used as the main reliability indices of HV/MV substation equipment. A large number of arrangements were classified into groups, and within a group the arrangements were ranked with the use of multiple criteria. It is shown that the reliability of electricity transit depends on the equipment of the fields used for transit, which favors sparsely equipped fields. On the other hand, the reliability of transforming the full power depends mostly on the equipment in the coupling field. It is essential that the coupling field contains at least two disconnectors. Installing a switch in the coupling field is meaningful only with appropriate protection, because it then further improves reliability. Conclusions are drawn for the phased construction and expansion of the single-pole diagram with an additional field for the transmission line.

  12. Performance Analysis of Selective Decode-and-Forward Multinode Incremental Relaying with Maximal Ratio Combining

    KAUST Repository

    Hadjtaieb, Amir

    2013-09-12

    In this paper, we propose an incremental multinode relaying protocol with arbitrary N-relay nodes that allows an efficient use of the channel spectrum. The destination combines the received signals from the source and the relays using maximal ratio combining (MRC). The transmission ends successfully once the accumulated signal-to-noise ratio (SNR) exceeds a predefined threshold. The number of relays participating in the transmission is adapted to the channel conditions based on the feedback from the destination. The use of incremental relaying allows a higher spectral efficiency to be obtained. Moreover, the symbol error probability (SEP) performance is enhanced by using MRC at the relays. The use of MRC at the relays implies that each relay overhears the signals from the source and all previous relays and combines them using MRC. The proposed protocol differs from most existing relaying protocols in that it combines both incremental relaying and MRC at the relays for a multinode topology. Our analyses for a decode-and-forward mode show that: (i) compared to existing multinode relaying schemes, the proposed scheme can essentially achieve the same SEP performance but with a smaller average number of time slots; (ii) compared to schemes without MRC at the relays, the proposed scheme can achieve approximately a 3 dB gain.
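
    The central mechanism, in which one more relay contributes per time slot until the MRC-accumulated SNR at the destination crosses the threshold, can be illustrated with a small Monte Carlo sketch. This is a simplified illustration assuming i.i.d. exponentially distributed link SNRs (Rayleigh fading) and error-free decoding at the relays; it is not the SEP analysis of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def incremental_mrc_slots(n_relays, mean_snr, threshold, n_trials=100_000):
        """Average number of time slots until the accumulated (MRC) SNR exceeds the threshold."""
        slots_used = np.zeros(n_trials)
        for i in range(n_trials):
            acc = rng.exponential(mean_snr)      # slot 1: direct source-to-destination link
            slots = 1
            # further slots: relays forward one by one until the threshold is met or relays run out
            while acc < threshold and slots <= n_relays:
                acc += rng.exponential(mean_snr)
                slots += 1
            slots_used[i] = slots
        return slots_used.mean()

    # Invented parameters, purely illustrative
    print(incremental_mrc_slots(n_relays=3, mean_snr=5.0, threshold=10.0))
    ```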

  13. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    International Nuclear Information System (INIS)

    Nutku, Yavuz

    2003-01-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems

  14. Maximally flat radiation patterns of a circular aperture

    Science.gov (United States)

    Minkovich, B. M.; Mints, M. Ia.

    1989-08-01

    The paper presents an explicit solution to the problems of maximizing the area utilization coefficient and of obtaining the best approximation (on the average) of a sectorial Pi-shaped radiation pattern of an antenna with a circular aperture when Butterworth conditions are imposed on the approximating pattern with the aim of flattening it. Constraints on the choice of admissible minimum and maximum antenna dimensions are determined which make possible the synthesis of maximally flat patterns with small sidelobes.

  15. On Maximally Dissipative Shock Waves in Nonlinear Elasticity

    OpenAIRE

    Knowles, James K.

    2010-01-01

    Shock waves in nonlinearly elastic solids are, in general, dissipative. We study the following question: among all plane shock waves that can propagate with a given speed in a given one-dimensional nonlinearly elastic bar, which one—if any—maximizes the rate of dissipation? We find that the answer to this question depends strongly on the qualitative nature of the stress-strain relation characteristic of the given material. When maximally dissipative shocks do occur, they propagate according t...

  16. The Large Margin Mechanism for Differentially Private Maximization

    OpenAIRE

    Chaudhuri, Kamalika; Hsu, Daniel; Song, Shuang

    2014-01-01

    A basic problem in the design of privacy-preserving algorithms is the private maximization problem: the goal is to pick an item from a universe that (approximately) maximizes a data-dependent function, all under the constraint of differential privacy. This problem has been used as a sub-routine in many privacy-preserving algorithms for statistics and machine-learning. Previous algorithms for this problem are either range-dependent---i.e., their utility diminishes with the size of the universe...
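
    For readers unfamiliar with the private maximization setting, the classical baseline against which such algorithms are measured is the exponential mechanism, which returns item i with probability proportional to exp(ε·q(i)/(2Δ)), where q is the data-dependent score and Δ its sensitivity. The Python sketch below implements that baseline only; it is not the large margin mechanism proposed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def exponential_mechanism(scores, epsilon, sensitivity):
        """Pick an index with probability proportional to exp(eps * score / (2 * sensitivity))."""
        scores = np.asarray(scores, dtype=float)
        logits = epsilon * scores / (2.0 * sensitivity)
        logits -= logits.max()                 # subtract max for numerical stability
        probs = np.exp(logits)
        probs /= probs.sum()
        return rng.choice(len(scores), p=probs)

    # Counting-query style scores (sensitivity 1): the best item is usually, but not always, returned
    print(exponential_mechanism([10, 42, 37, 5], epsilon=0.5, sensitivity=1.0))
    ```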

  17. Factors Influencing the Reliability of the Glasgow Coma Scale: A Systematic Review.

    Science.gov (United States)

    Reith, Florence Cm; Synnot, Anneliese; van den Brande, Ruben; Gruen, Russell L; Maas, Andrew Ir

    2017-06-01

    The Glasgow Coma Scale (GCS) characterizes patients with diminished consciousness. In a recent systematic review, we found overall adequate reliability across different clinical settings, but reliability estimates varied considerably between studies, and methodological quality of studies was overall poor. Identifying and understanding factors that can affect its reliability is important, in order to promote high standards for clinical use of the GCS. The aim of this systematic review was to identify factors that influence reliability and to provide an evidence base for promoting consistent and reliable application of the GCS. A comprehensive literature search was undertaken in MEDLINE, EMBASE, and CINAHL from 1974 to July 2016. Studies assessing the reliability of the GCS in adults or describing any factor that influences reliability were included. Two reviewers independently screened citations, selected full texts, and undertook data extraction and critical appraisal. Methodological quality of studies was evaluated with the consensus-based standards for the selection of health measurement instruments checklist. Data were synthesized narratively and presented in tables. Forty-one studies were included for analysis. Factors identified that may influence reliability are education and training, the level of consciousness, and type of stimuli used. Conflicting results were found for experience of the observer, the pathology causing the reduced consciousness, and intubation/sedation. No clear influence was found for the professional background of observers. Reliability of the GCS is influenced by multiple factors and as such is context dependent. This review points to the potential for improvement from training and education and standardization of assessment methods, for which recommendations are presented. Copyright © 2017 by the Congress of Neurological Surgeons.

  18. Enumerating all maximal frequent subtrees in collections of phylogenetic trees

    Science.gov (United States)

    2014-01-01

    Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474

  19. Pace's Maxims for Homegrown Library Projects. Coming Full Circle

    Science.gov (United States)

    Pace, Andrew K.

    2005-01-01

    This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…

  20. Gravitational collapse of charged dust shell and maximal slicing condition

    International Nuclear Information System (INIS)

    Maeda, Keiichi

    1980-01-01

    The maximal slicing condition is, qualitatively, a good time-coordinate condition for following gravitational collapse in numerical calculations. The analytic solution for gravitational collapse under the maximal slicing condition is given in the case of a spherical charged dust shell, and the behavior of the time slices under this coordinate condition is investigated. It is concluded that under the maximal slicing condition the gravitational collapse can be followed until the radius of the shell decreases to about 0.7 x (the radius of the event horizon). (author)

  1. A definition of maximal CP-violation

    International Nuclear Information System (INIS)

    Roos, M.

    1985-01-01

    The unitary matrix of quark flavour mixing is parametrized in a general way, permitting a mathematically natural definition of maximal CP violation. Present data turn out to violate this definition by 2-3 standard deviations. (orig.)

  2. Deconstructing facts and frames in energy research: Maxims for evaluating contentious problems

    International Nuclear Information System (INIS)

    Sovacool, Benjamin K.; Brown, Marilyn A.

    2015-01-01

    In this article, we argue that assumptions and values can play a combative, corrosive role in the generation of objective energy analysis. We then propose six maxims for energy analysts and researchers. Our maxim of information asks readers to keep up to date on trends in energy resources and technology. Our maxim of inclusivity asks readers to involve citizens and other public actors more in energy decisions. Our maxim of symmetry asks readers to keep their analysis of energy technologies centered always on both technology and society. Our maxim of reflexivity asks readers to be self-aware of their own assumptions. Our maxim of prudence asks readers to make energy decisions that are ethical or at least informed. Our maxim of agnosticism asks readers to look beyond a given energy technology to the services it provides and recognize that many systems can provide a desired service. We conclude that decisions in energy are justified by, if not predicated on, beliefs—beliefs which may or may not be supported by objective data, constantly blurring the line between fact, fiction, and frames. - Highlights: • Assumptions and values can play a combative, corrosive role in the generation of objective energy analysis. • Decisions in energy are justified by, if not predicated on, beliefs. • We propose six maxims for energy analysts and researchers.

  3. Throughput Maximization for Cognitive Radio Networks Using Active Cooperation and Superposition Coding

    KAUST Repository

    Hamza, Doha R.

    2015-02-13

    We propose a three-message superposition coding scheme in a cognitive radio relay network exploiting active cooperation between primary and secondary users. The primary user is motivated to cooperate by substantial benefits it can reap from this access scenario. Specifically, the time resource is split into three transmission phases: The first two phases are dedicated to primary communication, while the third phase is for the secondary’s transmission. We formulate two throughput maximization problems for the secondary network subject to primary user rate constraints and per-node power constraints with respect to the time durations of primary transmission and the transmit power of the primary and the secondary users. The first throughput maximization problem assumes a partial power constraint such that the secondary power dedicated to primary cooperation, i.e. for the first two communication phases, is fixed apriori. In the second throughput maximization problem, a total power constraint is assumed over the three phases of communication. The two problems are difficult to solve analytically when the relaying channel gains are strictly greater than each other and strictly greater than the direct link channel gain. However, mathematically tractable lowerbound and upperbound solutions can be attained for the two problems. For both problems, by only using the lowerbound solution, we demonstrate significant throughput gains for both the primary and the secondary users through this active cooperation scheme. We find that most of the throughput gains come from minimizing the second phase transmission time since the secondary nodes assist the primary communication during this phase. Finally, we demonstrate the superiority of our proposed scheme compared to a number of reference schemes that include best relay selection, dual-hop routing, and an interference channel model.

  4. Throughput Maximization for Cognitive Radio Networks Using Active Cooperation and Superposition Coding

    KAUST Repository

    Hamza, Doha R.; Park, Kihong; Alouini, Mohamed-Slim; Aissa, Sonia

    2015-01-01

    We propose a three-message superposition coding scheme in a cognitive radio relay network exploiting active cooperation between primary and secondary users. The primary user is motivated to cooperate by substantial benefits it can reap from this access scenario. Specifically, the time resource is split into three transmission phases: The first two phases are dedicated to primary communication, while the third phase is for the secondary’s transmission. We formulate two throughput maximization problems for the secondary network subject to primary user rate constraints and per-node power constraints with respect to the time durations of primary transmission and the transmit power of the primary and the secondary users. The first throughput maximization problem assumes a partial power constraint such that the secondary power dedicated to primary cooperation, i.e. for the first two communication phases, is fixed apriori. In the second throughput maximization problem, a total power constraint is assumed over the three phases of communication. The two problems are difficult to solve analytically when the relaying channel gains are strictly greater than each other and strictly greater than the direct link channel gain. However, mathematically tractable lowerbound and upperbound solutions can be attained for the two problems. For both problems, by only using the lowerbound solution, we demonstrate significant throughput gains for both the primary and the secondary users through this active cooperation scheme. We find that most of the throughput gains come from minimizing the second phase transmission time since the secondary nodes assist the primary communication during this phase. Finally, we demonstrate the superiority of our proposed scheme compared to a number of reference schemes that include best relay selection, dual-hop routing, and an interference channel model.

  5. An Uncertain Wage Contract Model with Adverse Selection and Moral Hazard

    Directory of Open Access Journals (Sweden)

    Xiulan Wang

    2014-01-01

    it can be characterized as an uncertain variable. Moreover, the employee's effort is unobservable to the employer, and the employee can select her effort level to maximize her utility. Thus, an uncertain wage contract model with adverse selection and moral hazard is established to maximize the employer's expected profit. And the model analysis mainly focuses on the equivalent form of the proposed wage contract model and the optimal solution to this form. The optimal solution indicates that both the employee's effort level and the wage increase with the employee's ability. Lastly, a numerical example is given to illustrate the effectiveness of the proposed model.

  6. Maximal isometric strength of the cervical musculature in 100 healthy volunteers

    DEFF Research Database (Denmark)

    Jordan, A; Mehlsen, J; Bülow, P M

    1999-01-01

    A descriptive study involving maximal isometric strength measurements of the cervical musculature.

  7. Validity and reliability of simple measurement device to assess the velocity of the barbell during squats.

    Science.gov (United States)

    Lorenzetti, Silvio; Lamparter, Thomas; Lüthy, Fabian

    2017-12-06

    The velocity of a barbell can provide important insights on the performance of athletes during strength training. The aim of this work was to assess the validity and reliability of four simple measurement devices that were compared to 3D motion capture measurements during squatting. Nine participants were assessed while performing 2 × 5 traditional squats with a weight of 70% of the 1 repetition maximum and ballistic squats with a weight of 25 kg. Simultaneously, data were recorded from three linear position transducers (T-FORCE, Tendo Power and GymAware), an accelerometer-based system (Myotest) and a 3D motion capture system (Vicon) as the gold standard. Correlations between the simple measurement devices and 3D motion capture of the mean and the maximal velocity of the barbell, as well as the time to maximal velocity, were calculated. The correlations during traditional squats were significant and very high (r = 0.932 to 0.990, p < 0.001), whereas measurement during ballistic squats was less accurate. All the linear position transducers were able to assess squat performance, particularly during traditional squats and especially in terms of mean velocity and time to maximal velocity.
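
    The quantities compared across devices (mean velocity, maximal velocity and time to maximal velocity) can all be derived from a sampled displacement signal by numerical differentiation. The sketch below uses a synthetic displacement trace and an assumed 1000 Hz sampling rate purely for illustration; it is not the processing pipeline of any of the tested devices.

    ```python
    import numpy as np

    def barbell_velocity_metrics(displacement_m, fs_hz):
        """Mean velocity, peak velocity and time to peak velocity of the upward (concentric) phase."""
        v = np.gradient(displacement_m, 1.0 / fs_hz)   # central-difference velocity in m/s
        v_up = v[v > 0]                                 # keep the upward (concentric) portion
        t = np.arange(len(displacement_m)) / fs_hz
        i_peak = np.argmax(v)
        return v_up.mean(), v.max(), t[i_peak]

    # Synthetic half-cosine lift: 0.6 m of displacement over 0.8 s, sampled at 1000 Hz
    fs = 1000
    t = np.arange(0, 0.8, 1 / fs)
    x = 0.6 * (1 - np.cos(np.pi * t / 0.8)) / 2
    print(barbell_velocity_metrics(x, fs))
    ```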

  8. Test-retest reliability of jump execution variables using mechanography: a comparison of jump protocols.

    Science.gov (United States)

    Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N

    2018-05-01

    Mechanography during the vertical jump may enhance screening and determining mechanistic causes underlying physical performance changes. Utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump with squat depth self-selected and controlled using a goniometer to 80º knee flexion. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.

  9. Nonadditive entropy maximization is inconsistent with Bayesian updating

    Science.gov (United States)

    Pressé, Steve

    2014-11-01

    The maximum entropy method—used to infer probabilistic models from data—is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.
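
    For notation only, the objects contrasted in this abstract are the additive (Shannon) entropy, its nonadditive (Tsallis-type) generalization, and Bayesian updating; the standard definitions are recalled below as background rather than as part of the paper's argument.

    ```latex
    S[p] = -\sum_i p_i \ln p_i, \qquad
    S_q[p] = \frac{1 - \sum_i p_i^{\,q}}{q - 1} \;\xrightarrow{\; q \to 1 \;}\; S[p], \qquad
    p(\theta \mid D) \propto p(D \mid \theta)\, p(\theta).
    ```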

  10. Les Ambivalences du Silence: Les "Maximes" de la Rochefoucauld Par Quatre Chemins

    Science.gov (United States)

    Turcat, Eric

    2012-01-01

    Maxims are famous for their moral pronouncements, yet La Rochefoucauld's "Maximes" (1678) have become infamous for offering little moral guidance. Morally ambivalent at best, the "Maximes" are also less known for their other forms of ambivalence, whether rhetorical, psychological, anthropological or linguistic. Such are…

  11. Reliability of a structured interview for admission to an emergency medicine residency program.

    Science.gov (United States)

    Blouin, Danielle

    2010-10-01

    Interviews are most important in resident selection. Structured interviews are more reliable than unstructured ones. We sought to measure the interrater reliability of a newly designed structured interview during the selection process to an Emergency Medicine residency program. The critical incident technique was used to extract the desired dimensions of performance. The interview tool consisted of 7 clinical scenarios and 1 global rating. Three trained interviewers marked each candidate on all scenarios without discussing candidates' responses. Interitem consistency and estimates of variance were computed. Twenty-eight candidates were interviewed. The generalizability coefficient was 0.67. Removing the central tendency ratings increased the coefficient to 0.74. Coefficients of interitem consistency ranged from 0.64 to 0.74. The structured interview tool provided good although suboptimal interrater reliability. Increasing the number of scenarios improves reliability as does applying differential weights to the rating scale anchors. The latter would also facilitate the identification of those candidates with extreme ratings.

  12. FLOUTS OF THE COOPERATIVE PRINCIPLE MAXIMS IN SBY’S PRESIDENTIAL INTERVIEWS

    Directory of Open Access Journals (Sweden)

    Fahrus Zaman Fadhly

    2012-12-01

    Full Text Available This paper analyzed the presidential interviews of the President of the Republic of Indonesia, Susilo Bambang Yudoyono (SBY), based on Grice’s theory of the Cooperative Principle (CP). This study employed a qualitative research design, and the data were three transcripts of interview discourse between SBY and eight Indonesian journalists obtained through the official presidential website: http://www.presidentsby.info. The research investigated the ways SBY flouted the CP maxims in his presidential interviews and what the functions of the flouts were. The research revealed that SBY flouted all the CP maxims and that the maxim of Quantity was frequently flouted. Meanwhile, there were four ways used by SBY in flouting the CP maxims, i.e. hedging, indirectness, open answer and detailed element. The functions of the flouts were face-saving acts (FSA), self-protection, awareness, politeness, interestingness, control of information, elaboration and ignorance. This research also revealed that Grice’s CP maxims are not universal.

  13. Evaluation of anti-hyperglycemic effect of Actinidia kolomikta (Maxim. et Rur.) Maxim. root extract.

    Science.gov (United States)

    Hu, Xuansheng; Cheng, Delin; Wang, Linbo; Li, Shuhong; Wang, Yuepeng; Li, Kejuan; Yang, Yingnan; Zhang, Zhenya

    2015-05-01

    This study aimed to evaluate the anti-hyperglycemic effect of ethanol extract from Actinidia kolomikta (Maxim. et Rur.) Maxim. root (AKE). An in vitro evaluation was performed by using rat intestinal α-glucosidase (maltase and sucrase), the key enzymes linked with type 2 diabetes. An in vivo evaluation was also performed by loading maltose, sucrose, or glucose to normal rats. As a result, AKE showed concentration-dependent inhibition effects on rat intestinal maltase and rat intestinal sucrase, with IC(50) values of 1.83 and 1.03 mg/mL, respectively. In normal rats loaded with maltose, sucrose or glucose, administration of AKE significantly reduced postprandial hyperglycemia, similar to acarbose used as an anti-diabetic drug. High contents of total phenolics (80.49 ± 0.05 mg GAE/g extract) and total flavonoids (430.69 ± 0.91 mg RE/g extract) were detected in AKE. In conclusion, AKE possessed anti-hyperglycemic effects, and the possible mechanisms were associated with its inhibition of α-glucosidase and the improvement of insulin release and/or insulin sensitivity. The anti-hyperglycemic activity possessed by AKE may be attributable to its high contents of phenolic and flavonoid compounds.

  14. A reliable computational workflow for the selection of optimal screening libraries.

    Science.gov (United States)

    Gilad, Yocheved; Nadassy, Katalin; Senderowitz, Hanoch

    2015-01-01

    The experimental screening of compound collections is a common starting point in many drug discovery projects. Successes of such screening campaigns critically depend on the quality of the screened library. Many libraries are currently available from different vendors yet the selection of the optimal screening library for a specific project is challenging. We have devised a novel workflow for the rational selection of project-specific screening libraries. The workflow accepts as input a set of virtual candidate libraries and applies the following steps to each library: (1) data curation; (2) assessment of ADME/T profile; (3) assessment of the number of promiscuous binders/frequent HTS hitters; (4) assessment of internal diversity; (5) assessment of similarity to known active compound(s) (optional); (6) assessment of similarity to in-house or otherwise accessible compound collections (optional). For ADME/T profiling, Lipinski's and Veber's rule-based filters were implemented and a new blood brain barrier permeation model was developed and validated (85 and 74 % success rate for training set and test set, respectively). Diversity and similarity descriptors which demonstrated best performances in terms of their ability to select either diverse or focused sets of compounds from three databases (Drug Bank, CMC and CHEMBL) were identified and used for diversity and similarity assessments. The workflow was used to analyze nine common screening libraries available from six vendors. The results of this analysis are reported for each library providing an assessment of its quality. Furthermore, a consensus approach was developed to combine the results of these analyses into a single score for selecting the optimal library under different scenarios. We have devised and tested a new workflow for the rational selection of screening libraries under different scenarios. The current workflow was implemented using the Pipeline Pilot software yet due to the usage of generic
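
    Step (2) of the workflow, the rule-based ADME/T profiling, can be sketched with open-source tooling. The workflow itself was implemented in Pipeline Pilot, so the RDKit snippet below is only an assumed, illustrative equivalent of Lipinski's rule of five combined with Veber's criteria.

    ```python
    from rdkit import Chem
    from rdkit.Chem import Descriptors

    def passes_lipinski_veber(smiles):
        """Rule-of-five (Lipinski) plus Veber (rotatable bonds, TPSA) filter."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return False
        lipinski = (Descriptors.MolWt(mol) <= 500
                    and Descriptors.MolLogP(mol) <= 5
                    and Descriptors.NumHDonors(mol) <= 5
                    and Descriptors.NumHAcceptors(mol) <= 10)
        veber = (Descriptors.NumRotatableBonds(mol) <= 10
                 and Descriptors.TPSA(mol) <= 140)
        return lipinski and veber

    print(passes_lipinski_veber("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True
    ```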

  15. Reliability of measuring hip abductor strength following total knee arthroplasty using a hand-held dynamometer.

    Science.gov (United States)

    Schache, Margaret B; McClelland, Jodie A; Webster, Kate E

    2016-01-01

    To investigate the test-retest reliability of measuring hip abductor strength in patients with total knee arthroplasty (TKA) using a hand-held dynamometer (HHD) with two different types of resistance: belt and manual resistance. Test-retest reliability of 30 subjects (17 female, 13 male, 71.9 ± 7.4 years old), 9.2 ± 2.7 days post TKA was measured using belt and therapist resistance. Retest reliability was calculated with intra-class coefficients (ICC3,1) and 95% confidence intervals (CI) for both the group average and the individual scores. A paired t-test assessed whether a difference existed between the belt and therapist methods of resistance. ICCs were 0.82 and 0.80 for the belt and therapist resisted methods, respectively. Hip abductor strength increases of 8 N (14%) for belt resisted and 14 N (17%) for therapist resisted measurements of the group average exceeded the 95% CI and may represent real change. For individuals, hip abductor strength increases of 33 N (72%) (belt resisted) and 57 N (79%) (therapist resisted) could be interpreted as real change. Hip abductor strength can be reliably measured using HHD in the clinical setting with the described protocol. Belt resistance demonstrated slightly higher test-retest reliability. Reliable measurement of hip abductor muscle strength in patients with TKA is important to ensure deficiencies are addressed in rehabilitation programs and function is maximized. Hip abductor strength can be reliably measured with a hand-held dynamometer in the clinical setting using manual or belt resistance.

  16. To the problem of reliability of high-voltage accelerators for industrial purposes

    International Nuclear Information System (INIS)

    Al'bertinskij, B.I.; Svin'in, M.P.; Tsepakin, S.G.

    1979-01-01

    Statistical data characterizing the reliability of ELECTRON and AVRORA-2 type accelerators are presented. The mean time to failure of the main accelerator units was used as the reliability index. The analysis of accelerator failures allowed a number of conclusions to be drawn. The high failure rate is connected with inadequate training of the servicing personnel and a natural period of equipment adjustment. The mathematical analysis of the failure rate showed that the main responsibility for the insufficiently high reliability rests with the selenium diodes employed in the high-voltage power supply. Substitution of selenium diodes by silicon ones increases the time between failures. It is shown that accumulation and processing of operational statistical data will permit more accurate prediction of the reliability of produced high-voltage accelerators, make it possible to cope with the problems of planning optimal, in-time preventive inspections and repair, and to select optimal safety factors and test procedures.

  17. DIRAC: reliable data management for LHCb

    International Nuclear Information System (INIS)

    Smith, A C; Tsaregorodtsev, A

    2008-01-01

    DIRAC, LHCb's Grid Workload and Data Management System, utilizes WLCG resources and middleware components to perform distributed computing tasks satisfying LHCb's Computing Model. The Data Management System (DMS) handles data transfer and data access within LHCb. Its scope ranges from the output of the LHCb Online system to Grid-enabled storage for all data types. It supports metadata for these files in replica and bookkeeping catalogues, allowing dataset selection and localization. The DMS controls the movement of files in a redundant fashion whilst providing utilities for accessing all metadata. To do these tasks effectively the DMS requires complete self integrity between its components and external physical storage. The DMS provides highly redundant management of all LHCb data to leverage available storage resources and to manage transient errors in underlying services. It provides data driven and reliable distribution of files as well as reliable job output upload, utilizing VO Boxes at LHCb Tier1 sites to prevent data loss. This paper presents several examples of mechanisms implemented in the DMS to increase reliability, availability and integrity, highlighting successful design choices and limitations discovered

  18. Bipartite Bell Inequality and Maximal Violation

    International Nuclear Information System (INIS)

    Li Ming; Fei Shaoming; Li-Jost Xian-Qing

    2011-01-01

    We present new Bell inequalities for arbitrary dimensional bipartite quantum systems. The maximal violation of the inequalities is computed. The Bell inequality is capable of detecting quantum entanglement of both pure and mixed quantum states more effectively. (general)

  19. Use of reliability engineering in development and manufacturing of metal parts

    International Nuclear Information System (INIS)

    Khan, A.; Iqbal, M.A.; Asif, M.

    2005-01-01

    Reliability engineering predicts failure modes and weak links before a system is built, rather than relying on after-the-fact failure case studies. Reliability analysis supports manufacturing economy, assembly accuracy and qualification by testing, leading to the production of metal parts in the aerospace industry. This methodology also minimizes the performance constraints in any requirement for the application of metal components in aerospace systems. Reliability engineering predicts the life of the parts under loading conditions, whether dynamic or static. Reliability predictions can help engineers make decisions about component design, materials selection and qualification under applied stress levels. Two methods of reliability prediction, i.e. Part Stress Analysis and Parts Count, have been used in this study. In this paper we discuss how these two methods can be used to measure the reliability of a system during development phases, including the effect of environmental and operational variables. Equations are used to estimate the reliability of each type of component and are then integrated to obtain the system-level reliability used in the analysis. (author)
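
    As background on the Parts Count approach named above: the system failure rate is taken as the sum of the quantity- and quality-weighted component failure rates, from which MTBF and mission reliability follow under an exponential model. The sketch below uses invented failure-rate data purely for illustration; it is not the parts list or data of the paper.

    ```python
    import math

    # Hypothetical parts list: (name, quantity, base failure rate per 1e6 h, quality factor)
    parts = [
        ("fastener",   12, 0.02, 1.0),
        ("bracket",     4, 0.05, 1.0),
        ("weld joint",  8, 0.10, 1.5),
    ]

    lam_total = sum(n * lam * pi_q for _, n, lam, pi_q in parts)   # failures per 1e6 h
    mtbf_h = 1e6 / lam_total
    mission_hours = 5000
    reliability = math.exp(-lam_total * mission_hours / 1e6)       # R(t) = exp(-lambda * t)
    print(f"lambda = {lam_total:.3f}/1e6 h, MTBF = {mtbf_h:.0f} h, R({mission_hours} h) = {reliability:.4f}")
    ```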

  20. Maximally efficient protocols for direct secure quantum communication

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Anindita [Department of Physics and Materials Science Engineering, Jaypee Institute of Information Technology, A-10, Sector-62, Noida, UP-201307 (India); Department of Physics and Center for Astroparticle Physics and Space Science, Bose Institute, Block EN, Sector V, Kolkata 700091 (India); Pathak, Anirban, E-mail: anirban.pathak@jiit.ac.in [Department of Physics and Materials Science Engineering, Jaypee Institute of Information Technology, A-10, Sector-62, Noida, UP-201307 (India); RCPTM, Joint Laboratory of Optics of Palacky University and Institute of Physics of Academy of Science of the Czech Republic, Faculty of Science, Palacky University, 17. Listopadu 12, 77146 Olomouc (Czech Republic)

    2012-10-01

    Two protocols for deterministic secure quantum communication (DSQC) using GHZ-like states have been proposed. It is shown that one of these protocols is maximally efficient and that can be modified to an equivalent protocol of quantum secure direct communication (QSDC). Security and efficiency of the proposed protocols are analyzed and compared. It is shown that dense coding is sufficient but not essential for DSQC and QSDC protocols. Maximally efficient QSDC protocols are shown to be more efficient than their DSQC counterparts. This additional efficiency arises at the cost of message transmission rate. -- Highlights: ► Two protocols for deterministic secure quantum communication (DSQC) are proposed. ► One of the above protocols is maximally efficient. ► It is modified to an equivalent protocol of quantum secure direct communication (QSDC). ► It is shown that dense coding is sufficient but not essential for DSQC and QSDC protocols. ► Efficient QSDC protocols are always more efficient than their DSQC counterparts.

  1. An intuitionistic fuzzy optimization approach to vendor selection problem

    Directory of Open Access Journals (Sweden)

    Prabjot Kaur

    2016-09-01

    Full Text Available Selecting the right vendor is an important business decision made by any organization. The decision involves multiple criteria, and if the objectives vary in preference and scope, then the nature of the decision becomes multiobjective. In this paper, a vendor selection problem has been formulated as an intuitionistic fuzzy multiobjective optimization in which an appropriate number of vendors is to be selected and orders allocated to them. The multiobjective problem includes three objectives: minimizing the net price, maximizing the quality, and maximizing the on-time deliveries, subject to the suppliers' constraints. The objective function and the demand are treated as intuitionistic fuzzy sets. An intuitionistic fuzzy set is able to handle uncertainty with additional degrees of freedom. The intuitionistic fuzzy optimization (IFO) problem is converted into a crisp linear form and solved using the optimization software TORA. The advantage of IFO is that it gives better results than fuzzy or crisp optimization. The proposed approach is illustrated by a numerical example.
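
    For comparison, the crisp core of such a model is a linear program: allocate order quantities across vendors to trade off price, quality and on-time delivery under demand and capacity constraints. The sketch below solves a weighted-sum crisp version with invented data; it does not reproduce the intuitionistic fuzzy treatment of the objectives and demand described in the paper.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Invented data for three vendors
    price   = np.array([10.0, 12.0, 11.0])   # cost per unit
    quality = np.array([0.80, 0.95, 0.90])   # fraction of acceptable units
    ontime  = np.array([0.90, 0.85, 0.95])   # fraction of on-time deliveries
    capacity = np.array([400, 300, 500])
    demand = 800

    # Weighted single objective: minimize price while rewarding quality and on-time delivery
    w_price, w_qual, w_time = 1.0, 5.0, 5.0
    c = w_price * price - w_qual * quality - w_time * ontime

    res = linprog(c,
                  A_eq=[np.ones(3)], b_eq=[demand],          # meet total demand exactly
                  bounds=list(zip(np.zeros(3), capacity)))   # respect vendor capacities
    print(res.x)   # order quantity allocated to each vendor
    ```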

  2. Mission Reliability Estimation for Repairable Robot Teams

    Science.gov (United States)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the

  3. Chamaebatiaria millefolium (Torr.) Maxim.: fernbush

    Science.gov (United States)

    Nancy L. Shaw; Emerenciana G. Hurd

    2008-01-01

    Fernbush - Chamaebatiaria millefolium (Torr.) Maxim. - the only species in its genus, is endemic to the Great Basin, Colorado Plateau, and adjacent areas of the western United States. It is an upright, generally multistemmed, sweetly aromatic shrub 0.3 to 2 m tall. Bark of young branches is brown and becomes smooth and gray with age. Leaves are leathery, alternate,...

  4. Generalized Yosida Approximations Based on Relatively A-Maximal m-Relaxed Monotonicity Frameworks

    Directory of Open Access Journals (Sweden)

    Heng-you Lan

    2013-01-01

    Full Text Available We introduce and study a new notion of relatively A-maximal m-relaxed monotonicity framework and discuss some properties of a new class of generalized relatively resolvent operator associated with the relatively A-maximal m-relaxed monotone operator and the new generalized Yosida approximations based on relatively A-maximal m-relaxed monotonicity framework. Furthermore, we give some remarks to show that the theory of the new generalized relatively resolvent operator and Yosida approximations associated with relatively A-maximal m-relaxed monotone operators generalizes most of the existing notions on (relatively maximal monotone mappings in Hilbert as well as Banach space and can be applied to study variational inclusion problems and first-order evolution equations as well as evolution inclusions.

  5. Reliable selection of earthquake ground motions for performance-based design

    DEFF Research Database (Denmark)

    Katsanos, Evangelos; Sextos, A.G.

    2016-01-01

    A decision support process is presented to accommodate selecting and scaling of earthquake motions as required for the time domain analysis of structures. Prequalified code-compatible suites of seismic motions are provided through a multi-criterion approach to satisfy prescribed reduced variability...... of the method, by being subjected to numerous suites of motions that were highly ranked according to both the proposed approach (δsv-sc) and the conventional index (δconv), already used by most existing code-based earthquake records selection and scaling procedures. The findings reveal the superiority...

  6. Contributions of leaf photosynthetic capacity, leaf angle and self-shading to the maximization of net photosynthesis in Acer saccharum: a modelling assessment.

    Science.gov (United States)

    Posada, Juan M; Sievänen, Risto; Messier, Christian; Perttunen, Jari; Nikinmaa, Eero; Lechowicz, Martin J

    2012-08-01

    Plants are expected to maximize their net photosynthetic gains and efficiently use available resources, but the fundamental principles governing trade-offs in suites of traits related to resource-use optimization remain uncertain. This study investigated whether Acer saccharum (sugar maple) saplings could maximize their net photosynthetic gains through a combination of crown structure and foliar characteristics that let all leaves maximize their photosynthetic light-use efficiency (ε). A functional-structural model, LIGNUM, was used to simulate individuals of different leaf area index (LAI(ind)) together with a genetic algorithm to find distributions of leaf angle (L(A)) and leaf photosynthetic capacity (A(max)) that maximized net carbon gain at the whole-plant level. Saplings grown in either the open or in a forest gap were simulated with A(max) either unconstrained or constrained to an upper value consistent with reported values for A(max) in A. saccharum. It was found that total net photosynthetic gain was highest when whole-plant PPFD absorption and leaf ε were simultaneously maximized. Maximization of ε required simultaneous adjustments in L(A) and A(max) along gradients of PPFD in the plants. When A(max) was constrained to a maximum, plants growing in the open maximized their PPFD absorption but not ε because PPFD incident on leaves was higher than the PPFD at which ε(max) was attainable. Average leaf ε in constrained plants nonetheless improved with increasing LAI(ind) because of an increase in self-shading. It is concluded that there are selective pressures for plants to simultaneously maximize both PPFD absorption at the scale of the whole individual and ε at the scale of leaves, which requires a highly integrated response between L(A), A(max) and LAI(ind). The results also suggest that to maximize ε plants have evolved mechanisms that co-ordinate the L(A) and A(max) of individual leaves with PPFD availability.

  7. Applications of maximally concentrating optics for solar energy collection

    Science.gov (United States)

    O'Gallagher, J.; Winston, R.

    1985-11-01

    A new family of optical concentrators based on a general nonimaging design principle for maximizing the geometric concentration, C, for radiation within a given acceptance half-angle ±θ_a has been developed. The maximum limit exceeds by factors of 2 to 10 that attainable by systems using focusing optics. The wide acceptance angles permitted using these techniques have several unique advantages for solar concentrators including the elimination of the diurnal tracking requirement at intermediate concentrations (up to ~10×), collection of circumsolar and some diffuse radiation, and relaxed tolerances. Because of these advantages, these types of concentrators have applications in solar energy wherever concentration is desired, e.g. for a wide variety of both thermal and photovoltaic uses. The basic principles of nonimaging optical design are reviewed. Selected configurations for thermal collector applications are discussed and the use of nonimaging elements as secondary concentrators is illustrated in the context of higher concentration applications.
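
    For orientation, the ideal (étendue-limited) concentration attainable for a given acceptance half-angle follows the well-known sine limit; the small helper below merely evaluates it, with the refractive-index argument included as an optional generalization.

      import math

      def max_concentration(theta_a_deg, n=1.0, three_d=True):
          """Thermodynamic limit on geometric concentration for acceptance half-angle
          theta_a (degrees) with the absorber immersed in a medium of refractive index n.
          2D (trough) limit: n / sin(theta_a); 3D (cone) limit: (n / sin(theta_a))**2."""
          s = n / math.sin(math.radians(theta_a_deg))
          return s**2 if three_d else s

      # A non-tracking collector accepting +/- 30 degrees still concentrates ~4x in 3D:
      print(max_concentration(30.0))                 # 4.0
      print(max_concentration(30.0, three_d=False))  # 2.0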

  8. Soft computing approach for reliability optimization: State-of-the-art survey

    International Nuclear Information System (INIS)

    Gen, Mitsuo; Yun, Young Su

    2006-01-01

    In the broadest sense, reliability is a measure of performance of systems. As systems have grown more complex, the consequences of their unreliable behavior have become severe in terms of cost, effort, lives, etc., and the interest in assessing system reliability and the need for improving the reliability of products and systems have become very important. Most solution methods for reliability optimization assume that systems have redundancy components in series and/or parallel systems and alternative designs are available. Reliability optimization problems concentrate on optimal allocation of redundancy components and optimal selection of alternative designs to meet system requirement. In the past two decades, numerous reliability optimization techniques have been proposed. Generally, these techniques can be classified as linear programming, dynamic programming, integer programming, geometric programming, heuristic method, Lagrangean multiplier method and so on. A Genetic Algorithm (GA), as a soft computing approach, is a powerful tool for solving various reliability optimization problems. In this paper, we briefly survey GA-based approach for various reliability optimization problems, such as reliability optimization of redundant system, reliability optimization with alternative design, reliability optimization with time-dependent reliability, reliability optimization with interval coefficients, bicriteria reliability optimization, and reliability optimization with fuzzy goals. We also introduce the hybrid approaches for combining GA with fuzzy logic, neural network and other conventional search techniques. Finally, we have some experiments with an example of various reliability optimization problems using hybrid GA approach
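
    As a hedged illustration of the GA-based approaches surveyed here, the sketch below solves a tiny series-system redundancy-allocation problem with a penalty for exceeding the budget; the component reliabilities, costs, budget and GA settings are all invented, and real implementations add the hybrid elements (fuzzy goals, local search, neural components) discussed in the survey.

      import random

      r = [0.90, 0.85, 0.95]      # component reliability per subsystem (hypothetical)
      c = [4.0, 6.0, 5.0]         # component cost per subsystem (hypothetical)
      BUDGET, MAX_RED = 40.0, 4   # cost budget and maximum redundancy level

      def fitness(x):
          cost = sum(ci * xi for ci, xi in zip(c, x))
          if cost > BUDGET:                      # penalize infeasible allocations
              return 0.0
          rel = 1.0
          for ri, xi in zip(r, x):               # parallel redundancy inside each stage,
              rel *= 1 - (1 - ri) ** xi          # stages connected in series
          return rel

      def ga(pop_size=30, generations=200):
          pop = [[random.randint(1, MAX_RED) for _ in r] for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness, reverse=True)
              parents = pop[: pop_size // 2]
              children = []
              while len(children) < pop_size - len(parents):
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, len(r))          # one-point crossover
                  child = a[:cut] + b[cut:]
                  if random.random() < 0.2:                  # mutation
                      i = random.randrange(len(r))
                      child[i] = random.randint(1, MAX_RED)
                  children.append(child)
              pop = parents + children
          best = max(pop, key=fitness)
          return best, fitness(best)

      print(ga())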

  9. New advances in human reliability using the EPRI HRA calculator

    International Nuclear Information System (INIS)

    Julius, J. A.; Grobbelaar, J. F.

    2006-01-01

    This paper describes new advances in human reliability associated with the integration of HRA methods, lessons learned during the first few years of operation of the EPRI HRA / PRA Tools Users Group, and application of human reliability techniques in areas beyond the more traditional Level 1 internal events PRA. This paper is organized as follows. 1. EPRI HRA Users Group Overview (mission, membership, activities, approach) 2. HRA Methods Currently Used (selection, integration, and addressing dependencies) 3. New Advances in HRA Methods 4. Conclusions. (authors)

  10. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling... Hardware can be repaired by spare modules, which is not the case for software... Preventive maintenance is very important

  11. Reliability tests and guidelines for B-mode ultrasound assessment of central adiposity.

    Science.gov (United States)

    Stoner, Lee; Chinn, Victoria; Cornwall, Jon; Meikle, Grant; Page, Rachel; Lambrick, Danielle; Faulkner, James

    2015-11-01

    Ultrasound represents a validated and relatively inexpensive diagnostic device for assessing central adiposity; however, widespread adoption has been impeded by the lack of reliable standard operating procedures. To examine the reliability of, and describe guidelines for, ultrasound-derived recording of intra-abdominal fat thickness (IAT) and maximal preperitoneal fat thickness (PFT). Ultrasound scans were obtained from 20 adults (50% female, 26 ± 7 years, 24·5 kg/m²) on three different mornings. IAT was assessed 2 cm above the umbilicus (transverse plane) measuring from linea alba to: (i) anterior aorta, (ii) posterior aorta and (iii) anterior aspect of the vertebral column. PFT was measured from linea alba to visceral peritoneum in (i) sagittal and (ii) transverse planes, immediately over and inferior to the xiphi-sternum, respectively. For IAT, the criterion intraclass correlation coefficient (ICC) of 0·75 was exceeded for measurements to anterior aorta (0·95), posterior aorta (0·94) and vertebra (0·96). The reliability coefficient expressed as a percentage of the mean (RC%) was lowest (better) for measurement to vertebrae (9·8%). For PFT, mean thickness was comparable for sagittal (1·74 cm) and transverse (1·76 cm) planes; ICC values were also comparable for both planes (0·98 vs. 0·98, respectively), as were RC% (7·5% vs. 7·1%, respectively). IAT assessments to the vertebra were marginally more reliable than those to other structures. While PFT assessments were equally reliable for both measurement planes, precise probe placement was easier for the sagittal plane. Based on these findings, guidelines for the reliable measurement of central adiposity using ultrasound are presented. © 2015 Stichting European Society for Clinical Investigation Journal Foundation.

  12. Mutually Unbiased Maximally Entangled Bases for the Bipartite System C^d ⊗ C^{dk}

    Science.gov (United States)

    Nan, Hua; Tao, Yuan-Hong; Wang, Tian-Jiao; Zhang, Jun

    2016-10-01

    The construction of maximally entangled bases for the bipartite system C^d ⊗ C^d is discussed first, and some mutually unbiased bases with maximally entangled bases are given, where 2 ≤ d ≤ 5. Moreover, we study a systematic way of constructing mutually unbiased maximally entangled bases for the bipartite system C^d ⊗ C^{dk}.

  13. Maximal near-field radiative heat transfer between two plates

    Science.gov (United States)

    Nefzaoui, Elyes; Ezzahri, Younès; Drévillon, Jérémie; Joulain, Karl

    2013-09-01

    Near-field radiative transfer is a promising way to significantly and simultaneously enhance both thermo-photovoltaic (TPV) devices' power densities and efficiencies. A parametric study of Drude and Lorentz models' performances in maximizing near-field radiative heat transfer between two semi-infinite planes separated by nanometric distances at room temperature is presented in this paper. Optimal parameters of these models that provide optical properties maximizing the radiative heat flux are reported and compared to real materials usually considered in similar studies, silicon carbide and heavily doped silicon in this case. Results are obtained by exact and approximate (in the extreme near-field regime and the electrostatic limit hypothesis) calculations. The two methods are compared in terms of accuracy and CPU resources consumption. Their differences are explained according to a mesoscopic description of near-field radiative heat transfer. Finally, the frequently assumed hypothesis which states a maximal radiative heat transfer when the two semi-infinite planes are of identical materials is numerically confirmed. Its subsequent practical constraints are then discussed. The presented results highlight relevant paths to follow in order to choose or design materials maximizing nano-TPV device performances.

  14. Ant system for reliability optimization of a series system with multiple-choice and budget constraints

    International Nuclear Information System (INIS)

    Nahas, Nabil; Nourelfath, Mustapha

    2005-01-01

    Many researchers have shown that insect colonies behavior can be seen as a natural model of collective problem solving. The analogy between the way ants look for food and combinatorial optimization problems has given rise to a new computational paradigm, which is called ant system. This paper presents an application of ant system in a reliability optimization problem for a series system with multiple-choice constraints incorporated at each subsystem, to maximize the system reliability subject to the system budget. The problem is formulated as a nonlinear binary integer programming problem and characterized as an NP-hard problem. This problem is solved by developing and demonstrating a problem-specific ant system algorithm. In this algorithm, solutions of the reliability optimization problem are repeatedly constructed by considering the trace factor and the desirability factor. A local search is used to improve the quality of the solutions obtained by each ant. A penalty factor is introduced to deal with the budget constraint. Simulations have shown that the proposed ant system is efficient with respect to the quality of solutions and the computing time
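
    The probabilistic construction step described above (combining a trace factor with a desirability factor) might look like the following sketch; the pheromone values, desirabilities and exponents are illustrative, and a complete ant system would add pheromone evaporation and deposition, the local search and the budget penalty.

      import random

      def choose_component(pheromone, desirability, alpha=1.0, beta=2.0):
          """Pick one design choice for a subsystem with probability proportional to
          (trace factor)^alpha * (desirability factor)^beta."""
          weights = [t**alpha * d**beta for t, d in zip(pheromone, desirability)]
          total = sum(weights)
          u, acc = random.random() * total, 0.0
          for idx, w in enumerate(weights):
              acc += w
              if u <= acc:
                  return idx
          return len(weights) - 1

      # Hypothetical subsystem with three candidate components:
      tau = [1.0, 1.4, 0.7]                          # pheromone trail (trace factor)
      eta = [0.90 / 4.0, 0.95 / 6.0, 0.85 / 3.0]     # e.g. reliability-per-cost desirability
      print(choose_component(tau, eta))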

  15. Grice’s Conversational Implicature: A Pragmatics Analysis of Selected Poems of Audre Lorde

    Directory of Open Access Journals (Sweden)

    Adaoma Igwedibia

    2017-12-01

    Full Text Available A number of works have been done by scholars on the study and interpretation of Audre Lorde’s poetry, especially through the lens of literary and critical analysis. However, Lorde’s poems have not been analyzed pragmatically. A lot may have been written about Lorde’s poetry, but there is absolutely no evidence of a pragmatics study of her work. Lorde is the author of many poems that have been studied in various theoretical dimensions, but none have been done with reference to their pragmatics implications. The problem which this research recognizes, therefore, is that Lorde’s poems, especially the ones under the present study, have not been studied and interpreted using Grice’s theory of Conversational Implicature (Cooperative Principle which is comprised the four maxims: the maxims of Quantity, Quality, Manner and Relation. This study seeks to discover the extent to which these maxims  could be applied to the reading of the selected poems of Lorde. It also seeks to ascertain the degree to which Lorde’s selected poems violate or adhere to these maxims. The study has found that Audre Lorde in some of her poems, violates the maxims as well as adheres to them both in the same breath.

  16. A novel reliability evaluation method for large engineering systems

    Directory of Open Access Journals (Sweden)

    Reda Farag

    2016-06-01

    Full Text Available A novel reliability evaluation method for large nonlinear engineering systems excited by dynamic loading applied in time domain is presented. For this class of problems, the performance functions are expected to be functions of time and implicit in nature. Available first- or second-order reliability methods (FORM/SORM) will be challenged to estimate the reliability of such systems. Because of its inefficiency, the classical Monte Carlo simulation (MCS) method also cannot be used for large nonlinear dynamic systems. In the proposed approach, only tens instead of hundreds or thousands of deterministic evaluations at intelligently selected points are used to extract the reliability information. A hybrid approach, consisting of the stochastic finite element method (SFEM) developed by the author and his research team using FORM, the response surface method (RSM), an interpolation scheme, and advanced factorial schemes, is proposed. The method is clarified with the help of several numerical examples.

  17. Reliability-based decision making for selection of ready-mix concrete supply using stochastic superiority and inferiority ranking method

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Ongkowijoyo, Citra Satria

    2015-01-01

    Corporate competitiveness is heavily influenced by the information acquired, processed, utilized and transferred by professional staff involved in the supply chain. This paper develops a decision aid for selecting on-site ready-mix concrete (RMC) unloading type in decision making situations involving multiple stakeholders and evaluation criteria. The uncertainty of criteria weights set by expert judgment can be transformed in random ways based on the probabilistic virtual-scale method within a prioritization matrix. The ranking is performed by grey relational grade systems considering stochastic criteria weight based on individual preference. Application of the decision aiding model in actual RMC case confirms that the method provides a robust and effective tool for facilitating decision making under uncertainty.
    - Highlights:
    • This study models decision aiding method to assess ready-mix concrete unloading type.
    • Applying Monte Carlo simulation to virtual-scale method achieves a reliable process.
    • Individual preference ranking method enhances the quality of global decision making.
    • Robust stochastic superiority and inferiority ranking obtains reasonable results

  18. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP), and the repair (correction) process, a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Nevertheless, applications of the model to inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model application to a software reliability analysis

  19. Maximally entangled mixed states of two atoms trapped inside an optical cavity

    International Nuclear Information System (INIS)

    Li Shangbin; Xu Jingbo

    2009-01-01

    In some off-resonant cases, the reduced density matrix of two atoms symmetrically coupled with an optical cavity can, to a very good approximation, approach maximally entangled mixed states or maximal-Bell-violation mixed states in the course of their evolution. The influence of phase decoherence on the generation of a maximally entangled mixed state is also discussed

  20. DATMAN: A reliability data analysis program using Bayesian updating

    International Nuclear Information System (INIS)

    Becker, M.; Feltus, M.A.

    1996-01-01

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular for system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. System reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing the tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits the distribution that best describes the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately
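
    The Bayesian updating that DATMAN automates can be illustrated with the standard gamma-Poisson conjugate pair for a constant failure rate; the prior parameters and the observed failure data below are made up for the sketch.

      # Gamma(a, b) prior on a constant failure rate lambda, Poisson failure counts.
      # The posterior after observing `failures` in `hours` of operation is
      # Gamma(a + failures, b + hours); its mean is the updated rate estimate.

      def update_failure_rate(a_prior, b_prior, failures, hours):
          a_post = a_prior + failures
          b_post = b_prior + hours
          return a_post, b_post, a_post / b_post   # shape, rate, posterior mean

      # Hypothetical prior (roughly 2 failures per 10,000 h) updated with new plant data:
      a, b, mean_rate = update_failure_rate(2.0, 10_000.0, failures=1, hours=8_760.0)
      print(f"posterior mean failure rate: {mean_rate:.2e} per hour")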

  1. Training and Maintaining System-Wide Reliability in Outcome Management.

    Science.gov (United States)

    Barwick, Melanie A; Urajnik, Diana J; Moore, Julia E

    2014-01-01

    The Child and Adolescent Functional Assessment Scale (CAFAS) is widely used for outcome management, for providing real-time client- and program-level data, and for the monitoring of evidence-based practices. Methods of reliability training and the assessment of rater drift are critical for service decision-making within organizations and systems of care. We assessed two approaches for CAFAS training: external technical assistance and internal technical assistance. To this end, we sampled 315 practitioners trained by the external technical assistance approach from 2,344 Ontario practitioners who had achieved reliability on the CAFAS. To assess the internal technical assistance approach as a reliable alternative training method, 140 practitioners trained internally were selected from the same pool of certified raters. Reliabilities were high for practitioners trained by both the external technical assistance and internal technical assistance approaches (.909-.995, .915-.997, respectively). One- and three-year estimates showed some drift on several scales. High and consistent reliabilities over time and training method have implications for CAFAS training of behavioral health care practitioners, and the maintenance of CAFAS as a global outcome management tool in systems of care.

  2. IIB solutions with N>28 Killing spinors are maximally supersymmetric

    International Nuclear Information System (INIS)

    Gran, U.; Gutowski, J.; Papadopoulos, G.; Roest, D.

    2007-01-01

    We show that all IIB supergravity backgrounds which admit more than 28 Killing spinors are maximally supersymmetric. In particular, we find that for all N>28 backgrounds the supercovariant curvature vanishes, and that the quotients of maximally supersymmetric backgrounds either preserve all 32 or N<29 supersymmetries

  3. Reliability-Based Robustness Analysis for a Croatian Sports Hall

    DEFF Research Database (Denmark)

    Čizmar, Dean; Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a probabilistic approach for structural robustness assessment for a timber structure built a few years ago. The robustness analysis is based on a structural reliability based framework for robustness and a simplified mechanical system modelling of a timber truss system....... A complex timber structure with a large number of failure modes is modelled with only a few dominant failure modes. First, a component based robustness analysis is performed based on the reliability indices of the remaining elements after the removal of selected critical elements. The robustness...... is expressed and evaluated by a robustness index. Next, the robustness is assessed using system reliability indices where the probabilistic failure model is modelled by a series system of parallel systems....

  4. Reliability analysis techniques for the design engineer

    International Nuclear Information System (INIS)

    Corran, E.R.; Witt, H.H.

    1980-01-01

    A fault tree analysis package is described that eliminates most of the housekeeping tasks involved in proceeding from the initial construction of a fault tree to the final stage of presenting a reliability analysis in a safety report. It is suitable for designers with relatively little training in reliability analysis and computer operation. Users can rapidly investigate the reliability implications of various options at the design stage, and evolve a system which meets specified reliability objectives. Later independent review is thus unlikely to reveal major shortcomings necessitating modification and project delays. The package operates interactively, allowing the user to concentrate on the creative task of developing the system fault tree, which may be modified and displayed graphically. For preliminary analysis, system data can be derived automatically from a generic data bank. As the analysis proceeds, improved estimates of critical failure rates and test and maintenance schedules can be inserted. The computations are standard: identification of minimal cut-sets, estimation of reliability parameters, and ranking of the effect of the individual component failure modes and system failure modes on these parameters. The user can vary the fault trees and data on-line, and print selected data for preferred systems in a form suitable for inclusion in safety reports. A case history is given - that of the HIFAR containment isolation system. (author)
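
    Once minimal cut-sets are identified, the standard first-order (rare-event) estimate of system unavailability is a simple sum of cut-set probabilities; the component unavailabilities and cut-sets in the sketch below are hypothetical.

      # Each minimal cut-set is a set of basic events; assuming independent events and
      # small probabilities, system unavailability is approximately the sum over
      # cut-sets of the product of the event probabilities (rare-event approximation).

      q = {"valve_fails": 1e-3, "pump_fails": 5e-4,
           "sensor_fails": 2e-3, "operator_error": 1e-2}
      minimal_cut_sets = [{"valve_fails", "pump_fails"},
                          {"sensor_fails", "operator_error"}]

      def system_unavailability(cut_sets, event_prob):
          total = 0.0
          for cs in cut_sets:
              p = 1.0
              for event in cs:
                  p *= event_prob[event]
              total += p
          return total

      print(f"Q_system ~= {system_unavailability(minimal_cut_sets, q):.2e}")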

  5. Violations of Grice's Maxims in The Prince and the Pauper Movie

    Directory of Open Access Journals (Sweden)

    Antonius Waget

    2016-12-01

    Full Text Available Proper responses must be provided by interlocutors to make conversation productive and meaningful. However, interlocutors do not always provide proper responses because they do not even know the rules of conversation. Grice coined four maxims as general rules to govern daily conversation: the maxims of Quantity, Quality, Relevance, and Manner. Conversation occurs in real daily interaction and also in the arts, including movies. The Prince and the Pauper movie is one such medium of human daily conversation. Some parts of the movie contain violations of Grice's maxims by the characters. Based on this background, the writer intends to explore violations of Grice's maxims in the movie and analyze the purposes of the violations. To achieve these objectives, the writer formulates two research problems: (1) Which of Grice's maxims are violated by the addressees in The Prince and the Pauper movie? (2) For what purposes do the addressees violate the maxims? The basis of this research is a movie script as a document; thus, the writer uses document analysis as the method of this research. Grounded on the analysis, the writer finds that the characters in the movie dialogues, especially the Prince, Tom Canty, the King, and the Earl of Hertford, violate all four of Grice's maxims. When failing to provide sufficient information, telling lies to their addressers, providing irrelevant glosses, and failing to be true, brief, univocal, and orderly, they respectively violate the maxims of Quantity, Quality, Relevance, and Manner. Moreover, the writer finds that the characters violate the maxims in order to mislead their counterparts, be polite, save face, avoid discussion, and communicate self-interest. DOI: https://doi.org/10.24071/llt.2015.180101

  6. Safety and reliability in the 90s: will past experience or prediction meet our needs?

    International Nuclear Information System (INIS)

    Walter, M.H.; Cox, R.F.

    1990-01-01

    Twenty-six papers are presented in the proceedings of the 1990 Safety and Reliability Society Symposium. The papers selected provide current thinking on improved methods for identification, quantification and management of risks based on the safety culture developed across a range of industries during the last decade. In particular organizational and management factors feature in a large number of the papers. Two papers on the safety of all the operating plants at Sellafield's irradiated nuclear fuel handling and reprocessing site and the selection of field component reliability data for use in nuclear safety studies are selected and indexed separately. (author)

  7. Reliability evaluation of a port oil transportation system in variable operation conditions

    International Nuclear Information System (INIS)

    Soszynska, Joanna

    2006-01-01

    The semi-Markov model of the system operation processes is proposed and its selected parameters are determined. The series 'm out of k_n' multi-state system is considered and its reliability and risk characteristics are found. Next, the joint model of the system operation process and the system multi-state reliability and risk is constructed. Moreover, reliability and risk evaluation of the multi-state series 'm out of k_n' system in its operation process is applied to the port oil transportation system

  8. A new measurement of workload in Web application reliability assessment

    Directory of Open Access Journals (Sweden)

    CUI Xia

    2015-02-01

    Full Text Available Web applications have become popular in many areas of social life, and it is increasingly important to study their reliability. In this paper, the definition of Web application failure is first given, followed by the definition of Web application reliability. By analyzing data in IIS server logs and selecting the corresponding usage and information-delivery failure data, the paper studies the feasibility of Web application reliability assessment from the perspective of the Web software system, based on IIS server logs. Because the usage of a Web site often shows certain regularities, a new measurement of workload for Web application reliability assessment is proposed. In this method, the unit is removed by a weighted-average technique, and the weights are assessed by setting an objective function and optimizing it. Finally, an experiment was conducted for validation. The results show that the assessment of Web application reliability based on the new workload measure is better.

  9. Inter-expert and intra-expert reliability in sleep spindle scoring

    DEFF Research Database (Denmark)

    Wendt, Sabrina Lyngbye; Welinder, Peter; Sørensen, Helge Bjarup Dissing

    2015-01-01

    Objectives To measure the inter-expert and intra-expert agreement in sleep spindle scoring, and to quantify how many experts are needed to build a reliable dataset of sleep spindle scorings. Methods The EEG dataset was comprised of 400 randomly selected 115 s segments of stage 2 sleep from 110...... with higher reliability than the estimation of spindle duration. Reliability of sleep spindle scoring can be improved by using qualitative confidence scores, rather than a dichotomous yes/no scoring system. Conclusions We estimate that 2–3 experts are needed to build a spindle scoring dataset...... with ‘substantial’ reliability (κ: 0.61–0.8), and 4 or more experts are needed to build a dataset with ‘almost perfect’ reliability (κ: 0.81–1). Significance Spindle scoring is a critical part of sleep staging, and spindles are believed to play an important role in development, aging, and diseases of the nervous...

  10. Neolignan and phenylpropanoid compounds from the fruits of Illicium simonsii Maxim.

    Science.gov (United States)

    Zhuang, Peng-Yu; Chen, Ming-Hua; Wang, Ya-Nan; Wang, Xiao-Xia; Feng, Ya-Jing; Zhang, Dan-Yang

    2018-01-04

    One new sesqui-neolignan compound, namely, sesqui-illisimonan A (1), one new neolignan, illisimonan A (2), and one new phenylpropanoid compound, illisimoid A (3), were isolated from the fruits of Illicium simonsii Maxim. The structures and absolute configurations of these compounds were determined by extensive spectroscopic methods, including NMR, circular dichroism and calculated electronic circular dichroism. The antioxidant activities of compounds 1-3 were also evaluated. Vitamin E was selected as the positive control (IC50 = 49.73 ± 0.88 μM). Compounds 1 and 2 exhibited in vitro antioxidant activity with IC50 values of 55.76 ± 1.30 and 59.36 ± 0.50 μM, respectively. However, compound 3 did not show obvious antioxidant activity.

  11. Maximization of Tsallis entropy in the combinatorial formulation

    International Nuclear Information System (INIS)

    Suyari, Hiroki

    2010-01-01

    This paper presents the mathematical reformulation for maximization of the Tsallis entropy S_q in the combinatorial sense. More concretely, we generalize the original derivation of the Maxwell-Boltzmann distribution law to Tsallis statistics by means of the corresponding generalized multinomial coefficient. Our results reveal that maximization of S_{2-q} under the usual expectation or of S_q under the q-average using the escort expectation are naturally derived from the combinatorial formulations for Tsallis statistics with respective combinatorial dualities, that is, one for additive duality and the other for multiplicative duality.
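
    For readers less familiar with the notation, the quantities referred to above are, in their standard form (written here with the Boltzmann constant set to 1), the Tsallis entropy, the escort distribution and the q-average:

      S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
      \qquad
      P_i^{(q)} = \frac{p_i^{\,q}}{\sum_j p_j^{\,q}},
      \qquad
      \langle A \rangle_q = \sum_i A_i \, P_i^{(q)},

    and S_q reduces to the Boltzmann-Gibbs entropy -\sum_i p_i \ln p_i in the limit q -> 1.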

  12. reliability reliability

    African Journals Online (AJOL)

    eobe

    RELIABILITY .... V, given by the code of practice. However, checks must .... an optimization procedure over the failure domain F corresponding .... of Concrete Members based on Utility Theory. Technical ...

  13. Selective arteriography in femoral head fractures

    Energy Technology Data Exchange (ETDEWEB)

    Mannella, P; Galeotti, R; Borrelli, M; Benea, G; Massari, L; Chiarelli, G M

    1986-01-01

    The choice between conservative and radical operation in case of femoral neck fractures is very important because it is the determining factor for a successful therapy. In case of epiphysial necrosis, an endoprosthesis as well as an osteosynthesis will be carried out. Selective arteriography of the medial circumflex artery represents the most reliable study to establish, immediately after the fracture, the possible presence of a post-traumatic ischemic necrosis. Angiography, as a reliable diagnostic tool, has to be carried out in the most selective way and needs the image subtraction technique. The authors report their preliminary results on the reliability of angiography in femoral epiphyseal ischemic necrosis, diagnosed by comparing the results of angiography with the Wood's light test carried out on the surgically removed femoral head. 18 refs.

  14. Effect of Selected Light Spectra on the Growth of Chlorella spp ...

    African Journals Online (AJOL)

    The possibility for simultaneous production of chemical and electrical energies from a single microalgae cultivation plant is opening a new chapter in the efficient use of resources to maximize biomass productivity. In the current study, the effect of selected monochromatic lights (blue, red and pink) from spectrally selective ...

  15. Development of Probabilistic Reliability Models of Photovoltaic System Topologies for System Adequacy Evaluation

    Directory of Open Access Journals (Sweden)

    Ahmad Alferidi

    2017-02-01

    Full Text Available The contribution of solar power in electric power systems has been increasing rapidly due to its environmentally friendly nature. Photovoltaic (PV) systems contain solar cell panels, power electronic converters, high power switching and often transformers. These components collectively play an important role in shaping the reliability of PV systems. Moreover, the power output of PV systems is variable, so it cannot be controlled as easily as conventional generation due to the unpredictable nature of weather conditions. Therefore, solar power has a different influence on generating system reliability compared to conventional power sources. Recently, different PV system designs have been constructed to maximize the output power of PV systems. These different designs are commonly adopted based on the scale of a PV system. Large-scale grid-connected PV systems are generally connected in a centralized or a string structure. Central and string PV schemes are different in terms of connecting the inverter to PV arrays. Micro-inverter systems are recognized as a third PV system topology. It is therefore important to evaluate the reliability contribution of PV systems under these topologies. This work utilizes a probabilistic technique to develop a power output model for a PV generation system. A reliability model is then developed for a PV integrated power system in order to assess the reliability and energy contribution of the solar system to meet overall system demand. The developed model is applied to a small isolated power unit to evaluate system adequacy and capacity level of a PV system considering the three topologies.
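
    The adequacy-evaluation idea (combining a probabilistic PV output model with component availability and the system load) can be sketched as a small Monte Carlo loop; the plant size, availabilities, toy irradiance model and load level below are placeholders rather than values from the study.

      import random

      N_PANEL_STRINGS, STRING_KW = 20, 5.0      # hypothetical plant: 20 strings x 5 kW
      A_STRING, A_INVERTER = 0.98, 0.97         # availabilities (placeholders)
      LOAD_KW = 60.0

      def sampled_pv_output(central_inverter=True):
          """One Monte Carlo sample of deliverable PV power."""
          irradiance = max(0.0, random.gauss(0.6, 0.25))     # per-unit, toy weather model
          strings_up = sum(random.random() < A_STRING for _ in range(N_PANEL_STRINGS))
          power = strings_up * STRING_KW * min(irradiance, 1.0)
          if central_inverter:                               # single inverter = single point of failure
              power *= (random.random() < A_INVERTER)
          return power

      def loss_of_load_probability(samples=100_000):
          short = sum(sampled_pv_output() < LOAD_KW for _ in range(samples))
          return short / samples

      print(f"LOLP (toy model): {loss_of_load_probability():.3f}")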

  16. New Interval-Valued Intuitionistic Fuzzy Behavioral MADM Method and Its Application in the Selection of Photovoltaic Cells

    Directory of Open Access Journals (Sweden)

    Xiaolu Zhang

    2016-10-01

    Full Text Available As one of the emerging renewable resources, the use of photovoltaic cells holds promise for offering clean and plentiful energy. The selection of the best photovoltaic cell for a promoter plays a significant role in maximizing income, minimizing costs and conferring high maturity and reliability, which is a typical multiple attribute decision making (MADM) problem. Although many prominent MADM techniques have been developed, most of them are designed to select the optimal alternative under the hypothesis that the decision maker or expert is completely rational and the decision data are represented by crisp values. However, in the selection processes of photovoltaic cells the decision maker is usually boundedly rational and the ratings of alternatives are usually imprecise and vague. To address these kinds of complex and common issues, in this paper we develop a new interval-valued intuitionistic fuzzy behavioral MADM method. We employ interval-valued intuitionistic fuzzy numbers (IVIFNs) to express the imprecise ratings of alternatives, and we construct LINMAP-based nonlinear programming models to identify the reference points under IVIFN contexts, which avoids the subjective randomness of selecting the reference points. Finally we develop a prospect theory-based ranking method to identify the optimal alternative, which takes fully into account the decision maker's behavioral characteristics such as reference dependence, diminishing sensitivity and loss aversion in the decision making process.

  17. Maximal hypersurfaces and foliations of constant mean curvature in general relativity

    International Nuclear Information System (INIS)

    Marsden, J.E.; Tipler, F.J.; Texas Univ., Austin

    1980-01-01

    We prove theorems on existence, uniqueness and smoothness of maximal and constant mean curvature compact spacelike hypersurfaces in globally hyperbolic spacetimes. The uniqueness theorem for maximal hypersurfaces of Brill and Flaherty, which assumed matter everywhere, is extended to spacetimes that are vacuum and non-flat or that satisfy a generic-type condition. In this connection we show that under general hypotheses, a spatially closed universe with a maximal hypersurface must be a Wheeler universe; i.e. be closed in time as well. The existence of Lipschitz achronal maximal volume hypersurfaces under the hypothesis that candidate hypersurfaces are bounded away from the singularity is proved. This hypothesis is shown to be valid in two cases of interest: when the singularities are of strong curvature type, and when the singularity is a single ideal point. Some properties of these maximal volume hypersurfaces and difficulties with Avez' original arguments are discussed. The difficulties involve the possibility that the maximal volume hypersurface can be null on certain portions; we present an incomplete argument which suggests that these hypersurfaces are always smooth, but prove that an a priori bound on the second fundamental form does imply smoothness. An extension of the perturbation theorem of Choquet-Bruhat, Fischer and Marsden is given and conditions under which local foliations by constant mean curvature hypersurfaces can be extended to global ones are obtained. (orig.)

  18. Maximal near-field radiative heat transfer between two plates

    OpenAIRE

    Nefzaoui, Elyes; Ezzahri, Younès; Drevillon, Jérémie; Joulain, Karl

    2013-01-01

    International audience; Near-field radiative transfer is a promising way to significantly and simultaneously enhance both thermo-photovoltaic (TPV) devices power densities and efficiencies. A parametric study of Drude and Lorentz models performances in maximizing near-field radiative heat transfer between two semi-infinite planes separated by nanometric distances at room temperature is presented in this paper. Optimal parameters of these models that provide optical properties maximizing the r...

  19. Validity And Reliability Of The Stages Cycling Power Meter.

    Science.gov (United States)

    Granier, Cyril; Hausswirth, Christophe; Dorel, Sylvain; Yann, Le Meur

    2017-09-06

    This study aimed to determine the validity and the reliability of the Stages power meter crank system (Boulder, United States) during several laboratory cycling tasks. Eleven trained participants completed laboratory cycling trials on an indoor cycle fitted with SRM Professional and Stages systems. The trials consisted of an incremental test at 100W, 200W, 300W, 400W and four 7s sprints. The level of pedaling asymmetry was determined for each cycling intensity during a similar protocol completed on a Lode Excalibur Sport ergometer. The reliability of Stages and SRM power meters was compared by repeating the incremental test during a test-retest protocol on a Cyclus 2 ergometer. Over power ranges of 100-1250W the Stages system produced trivial to small differences compared to the SRM (standardized typical error values of 0.06, 0.24 and 0.08 for the incremental, sprint and combined trials, respectively). A large correlation was reported between the difference in power output (PO) between the two systems and the level of pedaling asymmetry (r=0.58, p system according to the level of pedaling asymmetry provided only marginal improvements in PO measures. The reliability of the Stages power meter at the sub-maximal intensities was similar to the SRM Professional model (coefficient of variation: 2.1 and 1.3% for Stages and SRM, respectively). The Stages system is a suitable device for PO measurements, except when a typical error of measurement power ranges of 100-1250W is expected.

  20. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension \

  1. Uncountably many maximizing measures for a dense subset of continuous functions

    Science.gov (United States)

    Shinoda, Mao

    2018-05-01

    Ergodic optimization aims to single out dynamically invariant Borel probability measures which maximize the integral of a given ‘performance’ function. For a continuous self-map of a compact metric space and a dense set of continuous functions, we show the existence of uncountably many ergodic maximizing measures. We also show that, for a topologically mixing subshift of finite type and a dense set of continuous functions there exist uncountably many ergodic maximizing measures with full support and positive entropy.

  2. The reliability of the Glasgow Coma Scale: a systematic review.

    Science.gov (United States)

    Reith, Florence C M; Van den Brande, Ruben; Synnot, Anneliese; Gruen, Russell; Maas, Andrew I R

    2016-01-01

    The Glasgow Coma Scale (GCS) provides a structured method for assessment of the level of consciousness. Its derived sum score is applied in research and adopted in intensive care unit scoring systems. Controversy exists on the reliability of the GCS. The aim of this systematic review was to summarize evidence on the reliability of the GCS. A literature search was undertaken in MEDLINE, EMBASE and CINAHL. Observational studies that assessed the reliability of the GCS, expressed by a statistical measure, were included. Methodological quality was evaluated with the consensus-based standards for the selection of health measurement instruments checklist and its influence on results considered. Reliability estimates were synthesized narratively. We identified 52 relevant studies that showed significant heterogeneity in the type of reliability estimates used, patients studied, setting and characteristics of observers. Methodological quality was good (n = 7), fair (n = 18) or poor (n = 27). In good quality studies, kappa values were ≥0.6 in 85%, and all intraclass correlation coefficients indicated excellent reliability. Poor quality studies showed lower reliability estimates. Reliability for the GCS components was higher than for the sum score. Factors that may influence reliability include education and training, the level of consciousness and type of stimuli used. Only 13% of studies were of good quality and inconsistency in reported reliability estimates was found. Although the reliability was adequate in good quality studies, further improvement is desirable. From a methodological perspective, the quality of reliability studies needs to be improved. From a clinical perspective, a renewed focus on training/education and standardization of assessment is required.

  3. Reliability Approach of a Compressor System using Reliability Block ...

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... This paper presents a reliability analysis of such a system using reliability ... Keywords-compressor system, reliability, reliability block diagram, RBD .... the same structure has been kept with the three subsystems: air flow, oil flow and .... and Safety in Engineering Design", Springer, 2009. [3] P. O'Connor ...

  4. Max-AUC feature selection in computer-aided detection of polyps in CT colonography.

    Science.gov (United States)

    Xu, Jian-Wu; Suzuki, Kenji

    2014-03-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level.
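
    A minimal greedy variant of the idea (plain forward selection scored directly by cross-validated AUC with an SVM, rather than the full floating SFFS procedure of the paper) might look like the following; the dataset is synthetic and the SVM settings are illustrative.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                                 weights=[0.9, 0.1], random_state=0)

      def auc(feature_idx):
          clf = SVC(kernel="rbf", gamma="scale", C=1.0)
          return cross_val_score(clf, X[:, feature_idx], y, cv=5, scoring="roc_auc").mean()

      selected, remaining = [], list(range(X.shape[1]))
      best_auc = 0.0
      while remaining:
          scores = {j: auc(selected + [j]) for j in remaining}
          j_best = max(scores, key=scores.get)
          if scores[j_best] <= best_auc:        # stop when AUC no longer improves
              break
          best_auc = scores[j_best]
          selected.append(j_best)
          remaining.remove(j_best)

      print("selected features:", selected, "cv AUC:", round(best_auc, 3))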

  5. Effect of Maximal Versus Supra-Maximal Exhausting Race on Lipid Peroxidation, Antioxidant Activity and Muscle-Damage Biomarkers in Long-Distance and Middle-Distance Runners.

    Science.gov (United States)

    Mohamed, Said; Lamya, Ncir; Hamda, Mansour

    2016-03-01

    Exhausting physical exercise increases lipid peroxidation and causes important muscle damages. The human body tries to mitigate these adverse effects by mobilizing its antioxidant defenses. This study aims to investigate the effect of a maximal versus supra-maximal race sustained until exhaustion on lipid peroxidation, antioxidant activity and muscle-damage biomarkers in trained (i.e. long-distance and middle-distance runners) and sedentary subjects. The study has been carried out on 8 middle-distance runners (MDR), 9 long-distance runners (LDR), and 8 sedentary subjects (SS). Each subject has undergone two exhaustive running tests, the first one is an incremental event (VAMEVAL test), the second one is a constant supra-maximal intensity test (limited-time test). Blood samples were collected at rest and immediately after each test. A significant increase in malondialdehyde (MDA) concentrations was observed in SS and MDR after the VAMEVAL test and in LDR after the Limited-Time test. A significant difference was also observed between LDR and the other two groups after the VAMEVAL test, and between LDR and MDR after the Limited-Time test. Significant modifications, notably, in myoglobin, CK, LDH, IL-6, TNF-α, and TAS were likewise noted but depending on the race-type and the sportive specialty. Maximal and supra-maximal races induce a significant increase in lipid peroxidation and cause non-negligible inflammation and muscle damage. These effects were relatively related to the physical exercise type and the sportive specialty.

  6. Maximal Abelian sets of roots

    CERN Document Server

    Lawther, R

    2018-01-01

    In this work the author lets Φ be an irreducible root system, with Coxeter group W. He considers subsets of Φ which are abelian, meaning that no two roots in the set have sum in Φ ∪ {0}. He classifies all maximal abelian sets (i.e., abelian sets properly contained in no other) up to the action of W: for each W-orbit of maximal abelian sets we provide an explicit representative X, identify the (setwise) stabilizer W_X of X in W, and decompose X into W_X-orbits. Abelian sets of roots are closely related to abelian unipotent subgroups of simple algebraic groups, and thus to abelian p-subgroups of finite groups of Lie type over fields of characteristic p. Parts of the work presented here have been used to confirm the p-rank of E_8(p^n), and (somewhat unexpectedly) to obtain for the first time the 2-ranks of the Monster and Baby Monster sporadic groups, together with the double cover of the latter. Root systems of classical type are dealt with quickly here; the vast majority of the present work con...

  7. Maximizing Function through Intelligent Robot Actuator Control

    Data.gov (United States)

    National Aeronautics and Space Administration — Maximizing Function through Intelligent Robot Actuator Control Successful missions to Mars and beyond will only be possible with the support of high-performance...

  8. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers, and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  9. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth mo

  10. Associations of maximal strength and muscular endurance test scores with cardiorespiratory fitness and body composition.

    Science.gov (United States)

    Vaara, Jani P; Kyröläinen, Heikki; Niemi, Jaakko; Ohrankämmen, Olli; Häkkinen, Arja; Kocay, Sheila; Häkkinen, Keijo

    2012-08-01

    The purpose of the present study was to assess the relationships between maximal strength and muscular endurance test scores additionally to previously widely studied measures of body composition and maximal aerobic capacity. 846 young men (25.5 ± 5.0 yrs) participated in the study. Maximal strength was measured using isometric bench press, leg extension and grip strength. Muscular endurance tests consisted of push-ups, sit-ups and repeated squats. An indirect graded cycle ergometer test was used to estimate maximal aerobic capacity (V(O2)max). Body composition was determined with bioelectrical impedance. Moreover, waist circumference (WC) and height were measured and body mass index (BMI) calculated. Maximal bench press was positively correlated with push-ups (r = 0.61, p strength (r = 0.34, p strength correlated positively (r = 0.36-0.44, p test scores were related to maximal aerobic capacity and body fat content, while fat free mass was associated with maximal strength test scores and thus is a major determinant for maximal strength. A contributive role of maximal strength to muscular endurance tests could be identified for the upper, but not the lower extremities. These findings suggest that push-up test is not only indicative of body fat content and maximal aerobic capacity but also maximal strength of upper body, whereas repeated squat test is mainly indicative of body fat content and maximal aerobic capacity, but not maximal strength of lower extremities.

  11. Reliability Testing Using the Vehicle Durability Simulator

    Science.gov (United States)

    2017-11-20

    techniques are employed to reduce test and simulation time. Through application of these processes and techniques the reliability characteristics...remote parameter control (RPC) software. The software is specifically designed for the data collection, analysis, and simulation processes outlined in...the selection process for determining the desired runs for simulation . 4.3 Drive File Development. After the data have been reviewed and

  12. Cycle length maximization in PWRs using empirical core models

    International Nuclear Information System (INIS)

    Okafor, K.C.; Aldemir, T.

    1987-01-01

    The problem of maximizing cycle length in nuclear reactors through optimal fuel and poison management has been addressed by many investigators. An often-used neutronic modeling technique is to find correlations between the state and control variables to describe the response of the core to changes in the control variables. In this study, a set of linear correlations, generated by two-dimensional diffusion-depletion calculations, is used to find the enrichment distribution that maximizes cycle length for the initial core of a pressurized water reactor (PWR). These correlations (a) incorporate the effect of composition changes in all the control zones on a given fuel assembly and (b) are valid for a given range of control variables. The advantage of using such correlations is that the cycle length maximization problem can be reduced to a linear programming problem
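
    The 'linear correlations plus optimization' idea can be illustrated with a toy three-zone enrichment problem solved as a linear program; the correlation coefficients, enrichment bounds and peaking-factor limit below are invented for the sketch and are not taken from the paper.

      import numpy as np
      from scipy.optimize import linprog

      # Toy model: cycle length (days) ~ a0 + a.dot(e) and peaking factor ~ b0 + b.dot(e),
      # where e is the enrichment (w/o U-235) in three radial zones.
      a0, a = 200.0, np.array([55.0, 40.0, 25.0])      # hypothetical correlation coefficients
      b0, b = 1.10, np.array([0.12, 0.05, -0.04])
      PEAKING_LIMIT = 1.45
      bounds = [(1.8, 4.2)] * 3                        # allowed zone enrichments

      # linprog minimizes, so minimize -a.dot(e); the constant a0 does not affect the optimum.
      res = linprog(-a, A_ub=[b], b_ub=[PEAKING_LIMIT - b0], bounds=bounds, method="highs")
      print("optimal zone enrichments:", res.x)
      print("predicted cycle length  :", a0 + a @ res.x)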

  13. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    Science.gov (United States)

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high, one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009), combined with some of these strategies, may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, insofar as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated in a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the issues of model selection uncertainty and model averaging transcend the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.
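
    One common way to implement the model-averaging idea discussed above is to weight per-model estimates by Akaike weights; the sketch below (with assumed AIC values and reliability estimates, not results from the paper) shows the arithmetic.

```python
# Illustrative sketch with assumed numbers: averaging reliability estimates from
# several candidate hierarchical models using Akaike weights.
import numpy as np

aic = np.array([1012.3, 1010.8, 1015.6])        # hypothetical per-model AIC values
reliability = np.array([0.78, 0.81, 0.74])      # hypothetical per-model reliability estimates

delta = aic - aic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()                        # Akaike weights sum to one

r_avg = np.sum(weights * reliability)
print("Akaike weights:", np.round(weights, 3))
print(f"model-averaged reliability = {r_avg:.3f}")
```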

  14. Reliability evaluation of a port oil transportation system in variable operation conditions

    Energy Technology Data Exchange (ETDEWEB)

    Soszynska, Joanna [Department of Mathematics, Gdynia Maritime University, ul. Morska 83, 81-225 Gdynia (Poland)]. E-mail: joannas@am.gdynia.pl

    2006-04-15

    The semi-Markov model of the system operation processes is proposed and its selected parameters are determined. The series 'm out of k_n' multi-state system is considered and its reliability and risk characteristics are found. Next, the joint model of the system operation process and the system multi-state reliability and risk is constructed. Moreover, reliability and risk evaluation of the multi-state series 'm out of k_n' system in its operation process is applied to the port oil transportation system.
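
    The sketch below illustrates the basic 'm out of n' building block in a much simpler setting than the paper's multi-state semi-Markov model: binary-state, independent, identical components, with component reliability and (m, n) values chosen only for demonstration.

```python
# Minimal sketch with illustrative parameters: reliability of an 'm out of n'
# system of independent, identical components, and a simple series combination.
from math import comb

def m_out_of_n_reliability(m: int, n: int, p: float) -> float:
    """Probability that at least m of n independent components (reliability p) work."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

# Two such subsystems in series (a simplified stand-in for the series structure).
r1 = m_out_of_n_reliability(2, 3, 0.95)
r2 = m_out_of_n_reliability(3, 5, 0.90)
print(f"subsystem reliabilities: {r1:.4f}, {r2:.4f}; series system: {r1 * r2:.4f}")
```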

  15. Modified personal interviews: resurrecting reliable personal interviews for admissions?

    Science.gov (United States)

    Hanson, Mark D; Kulasegaram, Kulamakan Mahan; Woods, Nicole N; Fechtig, Lindsey; Anderson, Geoff

    2012-10-01

    Traditional admissions personal interviews provide flexible faculty-student interactions but are plagued by low inter-interview reliability. Axelson and Kreiter (2009) retrospectively showed that multiple independent sampling (MIS) may improve the reliability of personal interviews; thus, the authors incorporated MIS into the admissions process for medical students applying to the University of Toronto's Leadership Education and Development Program (LEAD). They examined the reliability and resource demands of this modified personal interview (MPI) format. In 2010-2011, LEAD candidates submitted written applications, which were used to screen for participation in the MPI process. Selected candidates completed four brief (10-12 minute) independent MPIs, each with a different interviewer. The authors blueprinted the MPI questions to (i.e., aligned them with) leadership attributes, and interviewers assessed candidates' eligibility on a five-point Likert-type scale. The authors analyzed inter-interview reliability using generalizability theory. Sixteen candidates submitted applications; 10 proceeded to the MPI stage. Reliability of the written application components was 0.75. The MPI process had an overall inter-interview reliability of 0.79. The correlation between the written application and MPI scores was 0.49. A decision study showed acceptable reliability of 0.74 with only three MPIs scored using one global rating. Furthermore, a traditional admissions interview format would take 66% more time than the MPI format. The MPI format, used during the LEAD admissions process, achieved high reliability with minimal faculty resources. The MPI format's reliability and effective resource use were made possible by MIS and the employment of expert interviewers. MPIs may be useful for other admissions tasks.
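
    A decision study of the kind reported above projects reliability as a function of the number of interviews; the sketch below uses the standard Spearman-Brown-type G-theory formula with an assumed single-interview coefficient (not the study's variance components) purely to illustrate the calculation.

```python
# Hedged sketch: projecting inter-interview reliability for the mean of n
# parallel interviews. The single-interview coefficient is an assumption chosen
# only for illustration.
def projected_reliability(rho_single: float, n_interviews: int) -> float:
    """Reliability of the mean of n parallel interviews (Spearman-Brown form)."""
    return n_interviews * rho_single / (1 + (n_interviews - 1) * rho_single)

rho_1 = 0.49  # assumed single-interview reliability, for illustration only
for n in (1, 2, 3, 4):
    print(f"{n} MPIs -> projected reliability {projected_reliability(rho_1, n):.2f}")
```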

  16. Strategy to maximize maintenance operation

    OpenAIRE

    Espinoza, Michael

    2005-01-01

    This project presents a strategic analysis to maximize maintenance operations at Alcan Kitimat Works in British Columbia. The project studies the role of maintenance in improving overall maintenance performance. It provides strategic alternatives and specific recommendations addressing Kitimat Works' key strategic issues and problems. A comprehensive industry and competitive analysis identifies the industry structure and its competitive forces. In the mature aluminium industry, the bargain...

  17. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering, including quality and reliability, reliability data, the importance of reliability engineering, reliability measures, the Poisson process (goodness-of-fit tests and the Poisson arrival model), reliability estimation (e.g., for the exponential distribution), reliability of systems, availability, preventive maintenance (replacement policies, minimal repair policies, shock models, spares, group maintenance, and periodic inspection), analysis of common cause failures, and models of repair effects.
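
    As a small illustration of the exponential reliability estimation mentioned in the book's outline, the sketch below (with made-up failure times) computes the maximum-likelihood failure rate and the corresponding reliability function.

```python
# Small sketch (not from the book): maximum-likelihood estimation of a constant
# failure rate from complete failure-time data, and the exponential reliability
# function R(t) = exp(-lambda * t). Failure times are assumed values.
import numpy as np

failure_times = np.array([120.0, 340.0, 95.0, 410.0, 230.0])   # hours, illustrative
lam_hat = len(failure_times) / failure_times.sum()              # MLE of the failure rate

def reliability(t: float, lam: float) -> float:
    """Probability of surviving beyond time t under an exponential model."""
    return float(np.exp(-lam * t))

print(f"estimated failure rate = {lam_hat:.5f} per hour")
print(f"R(100 h) = {reliability(100.0, lam_hat):.3f}")
```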

  18. Maximizing the Range of a Projectile.

    Science.gov (United States)

    Brown, Ronald A.

    1992-01-01

    Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
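
    For reference, the standard calculus-free derivation (our notation, not quoted from the article) is:

```latex
% Range of a projectile launched at speed v and angle \theta over level ground:
R(\theta) = \frac{v^{2}\sin(2\theta)}{g}.
% Since \sin(2\theta) \le 1, the range is maximal when 2\theta = 90^{\circ}:
R_{\max} = \frac{v^{2}}{g} \quad \text{at} \quad \theta = 45^{\circ}.
```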

  19. Maximization of eigenvalues using topology optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2000-01-01

    to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency; one example...
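
    A toy version of the SIMP-penalized eigenfrequency computation is sketched below; it uses a small spring-mass chain rather than the paper's finite-element model, and the penalization exponent, stiffness bounds, and densities are illustrative assumptions.

```python
# Rough sketch (our own toy example, not the paper's model): SIMP interpolation
# of element stiffness on a fixed-free spring-mass chain, then a generalized
# eigenvalue solve for the first eigenfrequency. All parameters are assumptions.
import numpy as np
from scipy.linalg import eigh

n_el = 5
x = np.array([1.0, 0.9, 0.3, 0.8, 1.0])    # element "densities" in [0, 1]
p, E0, E_min = 3.0, 1.0, 1e-3
k_el = E_min + x**p * (E0 - E_min)          # SIMP stiffness interpolation

# Assemble stiffness K and lumped mass M for a chain of n_el springs fixed at one end.
K = np.zeros((n_el, n_el))
for e, k in enumerate(k_el):
    if e == 0:
        K[0, 0] += k                        # first spring attaches to the fixed support
    else:
        K[e-1:e+1, e-1:e+1] += k * np.array([[1, -1], [-1, 1]])
M = np.diag(x * 1.0 + 1e-6)                 # mass proportional to density (keeps M nonsingular)

eigvals = eigh(K, M, eigvals_only=True)     # generalized eigenvalues, ascending
omega1 = np.sqrt(eigvals[0])
print(f"first eigenfrequency (rad/s, arbitrary units): {omega1:.3f}")
```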

  20. Investing American Recovery and Reinvestment Act Funds to Advance Capability, Reliability, and Performance in NASA Wind Tunnels

    Science.gov (United States)

    Sydnor, George H.

    2010-01-01

    The National Aeronautics and Space Administration's (NASA) Aeronautics Test Program (ATP) is implementing five significant ground-based test facility projects across the nation with funding provided by the American Recovery and Reinvestment Act (ARRA). The projects were selected as the best candidates within the constraints of the ARRA and the strategic plan of ATP. They are a combination of much-needed large-scale maintenance, reliability, and system upgrades plus the creation of new test beds for upcoming research programs. The projects are: 1.) Re-activation of a large compressor to provide a second source for compressed air and vacuum to the Unitary Plan Wind Tunnel at the Ames Research Center (ARC), 2.) Addition of high-altitude ice crystal generation at the Glenn Research Center Propulsion Systems Laboratory Test Cell 3, 3.) New refrigeration system and tunnel heat exchanger for the Icing Research Tunnel at the Glenn Research Center, 4.) Technical viability improvements for the National Transonic Facility at the Langley Research Center, and 5.) Modifications to conduct Environmentally Responsible Aviation and Rotorcraft research at the 14 x 22 Subsonic Tunnel at Langley Research Center. The selection rationale, problem statement, and technical solution summary for each project are given here. The benefits and challenges of the ARRA-funded projects are discussed. Indirectly, this opportunity provides the advantage of developing experience in NASA's workforce in large projects and maintaining corporate knowledge in that very unique capability. It is envisioned that improved facilities will attract a larger user base, and capabilities needed for current and future research efforts will offer revenue growth and future operational stability. Several of the chosen projects will maximize wind tunnel reliability and maintainability by using newer, proven technologies in place of older and obsolete equipment and processes. The projects will meet NASA's goal of