WorldWideScience

Sample records for maximized selectivity reliable

  1. Maximally reliable Markov chains under energy constraints.

    Science.gov (United States)

    Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam

    2009-07-01

    Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
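    The core result — that a memoryless cascade is most reliable when it steps irreversibly through identical states — can be illustrated numerically: the completion time of an n-step irreversible chain with equal rates is Erlang-distributed, so its coefficient of variation falls as 1/√n. A minimal sketch (function names are ours, not the paper's):

```python
import random
import statistics

def passage_time_cv(n_states: int, n_trials: int = 20000) -> float:
    """Monte Carlo coefficient of variation of the total time to step
    irreversibly through n_states identical exponential states
    (i.e., an Erlang(n_states) distribution)."""
    rng = random.Random(0)
    times = [sum(rng.expovariate(1.0) for _ in range(n_states))
             for _ in range(n_trials)]
    return statistics.stdev(times) / statistics.mean(times)

# Theory: the CV of an Erlang(n) sum is 1/sqrt(n), so signal
# variability falls as irreversible states are added.
for n in (1, 4, 16):
    print(n, round(passage_time_cv(n), 3), round(n ** -0.5, 3))
```

    More states means lower variability, which is exactly why an energy budget that caps the number of irreversible transitions caps the attainable reliability.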

  2. Maximal network reliability for a stochastic power transmission network

    International Nuclear Information System (INIS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2011-01-01

    Many studies have regarded a power transmission network as a binary-state network, constructing it from several arcs and vertices to evaluate network reliability. In practice, the power transmission network should be treated as stochastic because each arc (a transmission line composed of several physical lines) is multistate. Network reliability is the probability that the network can transmit d units of electric power from a power plant (source) to a high-voltage substation in a specific area (sink). This study focuses on searching for the optimal transmission line assignment to the power transmission network such that network reliability is maximized. A genetic algorithm based method integrating minimal paths and the Recursive Sum of Disjoint Products is developed to solve this assignment problem. A real power transmission network is adopted to demonstrate the computational efficiency of the proposed method in comparison with a random solution generation approach.

  3. Techniques to maximize software reliability in radiation fields

    International Nuclear Information System (INIS)

    Eichhorn, G.; Piercey, R.B.

    1986-01-01

    Microprocessor system failures due to memory corruption by single event upsets (SEUs) and/or latch-up in RAM or ROM memory are common in environments with high radiation flux. Traditional methods to harden microcomputer systems against SEUs and memory latch-up have usually involved expensive large-scale hardware redundancy. Such systems offer higher reliability, but they tend to be more complex and non-standard. At the Space Astronomy Laboratory the authors have developed general programming techniques for producing software that is resistant to such memory failures. These techniques, which may be applied to standard off-the-shelf hardware as well as custom designs, include an implementation of the Maximally Redundant Software (MRS) model, error detection algorithms, and memory verification and management.

  4. Assessment of the Maximal Split-Half Coefficient to Estimate Reliability

    Science.gov (United States)

    Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun

    2010-01-01

    The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…
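    The statistic under discussion can be sketched directly from its definition: enumerate every split of the items into two halves, correlate the half-scores, apply the Spearman-Brown correction, and keep the largest value. A minimal illustration (implementation and names are ours):

```python
from itertools import combinations
import statistics

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def maximal_split_half(scores):
    """scores: per-person lists of item scores (persons x items).
    Tries every split of the items into two halves, applies the
    Spearman-Brown correction 2r/(1+r), and returns the largest
    resulting split-half reliability estimate."""
    k = len(scores[0])
    items = range(k)
    best = float("-inf")
    for half in combinations(items, k // 2):
        other = [i for i in items if i not in half]
        a = [sum(p[i] for i in half) for p in scores]
        b = [sum(p[i] for i in other) for p in scores]
        r = pearson(a, b)
        best = max(best, 2 * r / (1 + r))
    return best
```

    Choosing the maximum over all splits is precisely what makes the estimator optimistic, which motivates the bias concern Osburn and others raised.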

  5. Maximizing crossbred performance through purebred genomic selection

    DEFF Research Database (Denmark)

    Esfandyari, Hadi; Sørensen, Anders Christian; Bijma, Piter

    2015-01-01

    Background In livestock production, many animals are crossbred, with two distinct advantages: heterosis and breed complementarity. Genomic selection (GS) can be used to select purebred parental lines for crossbred performance (CP). Dominance being the likely genetic basis of heterosis, explicitly...

  6. Maximizing Crossbred Performance through Purebred Genomic Selection

    DEFF Research Database (Denmark)

    Esfandyari, Hadi; Sørensen, Anders Christian; Bijma, Pieter

    Genomic selection (GS) can be used to select purebreds for crossbred performance (CP). As dominance is the likely genetic basis of heterosis, explicitly including dominance in the GS model may be beneficial for selection of purebreds for CP, when estimating allelic effects from pure line data. Th...

  7. Reliability of Maximal Strength Testing in Novice Weightlifters

    Science.gov (United States)

    Loehr, James A.; Lee, Stuart M. C.; Feiveson, Alan H.; Ploutz-Snyder, Lori L.

    2009-01-01

    The one repetition maximum (1RM) is a criterion measure of muscle strength. However, the reliability of 1RM testing in novice subjects has received little attention. Understanding this information is crucial to accurately interpret changes in muscle strength. To evaluate the test-retest reliability of a squat (SQ), heel raise (HR), and deadlift (DL) 1RM in novice subjects. Twenty healthy males (31 ± 5 y, 179.1 ± 6.1 cm, 81.4 ± 10.6 kg) with no weight training experience in the previous six months participated in four 1RM testing sessions, with each session separated by 5-7 days. SQ and HR 1RM were conducted using a Smith machine; DL 1RM was assessed using free weights. Session 1 was considered a familiarization and was not included in the statistical analyses. Repeated measures analysis of variance with Tukey's post-hoc tests were used to detect between-session differences in 1RM (p < 0.05). Test-retest reliability was evaluated by intraclass correlation coefficients (ICC). During Session 2, the SQ and DL 1RM (SQ: 90.2 ± 4.3, DL: 75.9 ± 3.3 kg) were less than Session 3 (SQ: 95.3 ± 4.1, DL: 81.5 ± 3.5 kg) and Session 4 (SQ: 96.6 ± 4.0, DL: 82.4 ± 3.9 kg), but there were no differences between Session 3 and Session 4. HR 1RM measured during Session 2 (150.1 ± 3.7 kg) and Session 3 (152.5 ± 3.9 kg) were not different from one another, but both were less than Session 4 (157.5 ± 3.8 kg). The reliability (ICC) of 1RM measures for Sessions 2-4 was 0.88, 0.83, and 0.87 for SQ, HR, and DL, respectively. When considering only Sessions 3 and 4, the reliability was 0.93, 0.91, and 0.86 for SQ, HR, and DL, respectively. One familiarization session and 2 test sessions (for SQ and DL) were required to obtain excellent reliability (ICC ≥ 0.90) in 1RM values with novice subjects. We were unable to attain this level of reliability following 3 HR testing sessions; therefore, additional sessions may be required to obtain an
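    The ICCs reported above can, in principle, be reproduced from a subjects × sessions score matrix via a two-way ANOVA decomposition. A minimal sketch of the absolute-agreement, single-measure form commonly labelled ICC(2,1) (implementation ours; the study does not state which ICC form it used):

```python
def icc_2_1(scores):
    """scores: per-subject lists of session scores (subjects x sessions).
    Two-way random-effects, absolute-agreement, single-measure ICC."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_r = ss_rows / (n - 1)                                      # between-subjects
    ms_c = ss_cols / (k - 1)                                      # between-sessions
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))   # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

    A systematic session effect (e.g., every subject improving between sessions, as in the SQ and DL data) lowers this absolute-agreement form even when subjects keep their rank order.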

  8. Reliability of surface electromyography activity of gluteal and hamstring muscles during sub-maximal and maximal voluntary isometric contractions.

    Science.gov (United States)

    Bussey, Melanie D; Aldabe, Daniela; Adhia, Divya; Mani, Ramakrishnan

    2018-04-01

    Normalizing to a reference signal is essential when analysing and comparing electromyography signals across or within individuals. However, studies have shown that MVC testing may not be as reliable in persons with acute and chronic pain. The purpose of this study was to compare the test-retest reliability of the muscle activity in the biceps femoris and gluteus maximus between a novel sub-MVC and standard MVC protocols. This study utilized a single-individual repeated measures design with 12 participants performing multiple trials of both the sub-MVC and MVC tasks on two separate days. The participant position in the prone leg raise task was standardised with an ultrasonic sensor to improve task precision between trials/days. Day-to-day and trial-to-trial reliability of the maximal muscle activity was examined using ICC and SEM. Day-to-day and trial-to-trial reliability of the EMG activity in the BF and GM was high (0.70-0.89) to very high (≥0.90) for both test procedures. %SEM was <5-10% for both tests on a given day but higher in the day-to-day comparisons. The lower amplitude of the sub-MVC is a likely contributor to the increased %SEM (8-13%) in the day-to-day comparison. The findings show that the sub-MVC modified prone double leg raise yields GM and BF EMG measures similar in reliability and precision to the standard MVC tasks. Therefore, the modified prone double leg raise may be a useful substitute for traditional MVC testing when normalizing EMG signals of the BF and GM. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. A method of bias correction for maximal reliability with dichotomous measures.

    Science.gov (United States)

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  10. Reliability criteria selection for integrated resource planning

    International Nuclear Information System (INIS)

    Ruiu, D.; Ye, C.; Billinton, R.; Lakhanpal, D.

    1993-01-01

    A study was conducted on the selection of a generating system reliability criterion that ensures a reasonable continuity of supply while minimizing the total costs to utility customers. The study was conducted using the Institute of Electrical and Electronics Engineers (IEEE) reliability test system as the study system. The study inputs and results for conditions and load forecast data, new supply resources data, demand-side management resource data, resource planning criterion, criterion value selection, supply side development, integrated resource development, and best criterion values are tabulated and discussed. Preliminary conclusions are drawn as follows. In the case of integrated resource planning, the selection of the best value for a given type of reliability criterion can be done using methods similar to those used for supply side planning. The reliability criteria values previously used for supply side planning may not be economically justified when integrated resource planning is used. Utilities may have to revise and adopt new, and perhaps lower, supply reliability criteria for integrated resource planning. More complex reliability criteria, such as energy-related indices, which take into account the magnitude, frequency and duration of the expected interruptions, are better adapted than simpler capacity-based reliability criteria such as loss of load expectation. 7 refs., 5 figs., 10 tabs
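    The capacity-based criterion mentioned last, loss of load expectation (LOLE), is simple to state: the expected number of hours in a period for which available generation falls short of load. A sketch under assumed inputs (a precomputed capacity-outage probability table, which real studies build recursively from individual unit outage rates):

```python
def loss_of_load_expectation(capacity_outage_probs, hourly_loads, installed_mw):
    """LOLE in hours per period. capacity_outage_probs maps
    MW on outage -> probability of that outage state; an hour
    contributes the probability that outage exceeds the margin."""
    lole = 0.0
    for load in hourly_loads:
        margin = installed_mw - load
        lole += sum(p for outage, p in capacity_outage_probs.items()
                    if outage > margin)
    return lole

# Illustrative table: one 50 MW unit with a 10% forced outage rate.
table = {0: 0.9, 50: 0.1}
print(loss_of_load_expectation(table, [60, 40], installed_mw=100))
```

    Energy-based indices extend this by also weighting the magnitude and duration of each shortfall, which is why the study calls them better adapted for integrated resource planning.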

  11. The reliability and validity of fatigue measures during short-duration maximal-intensity intermittent cycling.

    Science.gov (United States)

    Glaister, Mark; Stone, Michael H; Stewart, Andrew M; Hughes, Michael; Moir, Gavin L

    2004-08-01

    The purpose of the present study was to assess the reliability and validity of fatigue measures, as derived from 4 separate formulae, during tests of repeat sprint ability. On separate days over a 3-week period, 2 groups of 7 recreationally active men completed 6 trials of 1 of 2 maximal (20 x 5 seconds) intermittent cycling tests with contrasting recovery periods (10 or 30 seconds). All trials were conducted on a friction-braked cycle ergometer, and fatigue scores were derived from measures of mean power output for each sprint. Apart from formula 1, which calculated fatigue from the percentage difference in mean power output between the first and last sprint, all remaining formulae produced fatigue scores that showed a reasonably good level of test-retest reliability in both intermittent test protocols (intraclass correlation range: 0.78-0.86; 95% likely range of true values: 0.54-0.97). Although between-protocol differences in the magnitude of the fatigue scores suggested good construct validity, within-protocol differences highlighted limitations with each formula. Overall, the results support the use of the percentage decrement score as the most valid and reliable measure of fatigue during brief maximal intermittent work.
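    The percentage decrement score favoured by the authors is conventionally computed as the shortfall of total output relative to an "ideal" test in which the best sprint is repeated every time. A minimal sketch of that common definition:

```python
def percentage_decrement(sprint_powers):
    """Percentage decrement score for a repeated-sprint test:
    100 * (1 - total output / ideal output), where the ideal test
    repeats the best single sprint every time. 0% means no fatigue."""
    best = max(sprint_powers)
    total = sum(sprint_powers)
    ideal = best * len(sprint_powers)
    return 100.0 * (1.0 - total / ideal)

# A steady decline across five sprints of mean power (W):
print(round(percentage_decrement([800, 780, 760, 740, 720]), 2))
```

    Unlike a first-versus-last comparison (formula 1 above), this score uses every sprint, which is one reason it proves more reliable.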

  12. Diurnal variation and reliability of the urine lactate concentration after maximal exercise.

    Science.gov (United States)

    Nikolaidis, Stefanos; Kosmidis, Ioannis; Sougioultzis, Michail; Kabasakalis, Athanasios; Mougios, Vassilis

    2018-01-01

    The postexercise urine lactate concentration is a novel valid exercise biomarker, which has exhibited satisfactory reliability in the morning hours under controlled water intake. The aim of the present study was to investigate the diurnal variation of the postexercise urine lactate concentration and its reliability in the afternoon hours. Thirty-two healthy children (11 boys and 21 girls) and 23 adults (13 men and 10 women) participated in the study. All participants performed two identical sessions of eight 25 m bouts of maximal freestyle swimming executed every 2 min with passive recovery in between. These sessions were performed in the morning and afternoon and were separated by 3-4 days. Adults performed an additional afternoon session that was also separated by 3-4 days. All swimmers drank 500 mL of water before and another 500 mL after each test. Capillary blood and urine samples were collected before and after each test for lactate determination. Urine creatinine, urine density and body water content were also measured. The intraclass correlation coefficient was used as a reliability index between the morning and afternoon tests, as well as between the afternoon test and retest. Swimming performance and body water content exhibited excellent reliability in both children and adults. The postexercise blood lactate concentration did not show diurnal variation, showing a good reliability between the morning and afternoon tests, as well as high reliability between the afternoon test and retest. The postexercise urine density and lactate concentration were affected by time of day. However, when lactate was normalized to creatinine, it exhibited excellent reliability in children and good-to-high reliability in adults. The postexercise urine lactate concentration showed high reliability between the afternoon test and retest, independent of creatinine normalization. The postexercise blood and urine lactate concentrations were significantly correlated in all

  13. Maximizing Consensus in Portfolio Selection in Multicriteria Group Decision Making

    NARCIS (Netherlands)

    Michael, Emmerich T. M.; Deutz, A.H.; Li, L.; Asep, Maulana A.; Yevseyeva, I.

    2016-01-01

    This paper deals with a scenario of decision making where a moderator selects a (sub)set (aka portfolio) of decision alternatives from a larger set. The larger the number of decision makers who agree on a solution in the portfolio the more successful the moderator is. We assume that decision makers

  14. Application of the maximal covering location problem to habitat reserve site selection: a review

    Science.gov (United States)

    Stephanie A. Snyder; Robert G. Haight

    2016-01-01

    The Maximal Covering Location Problem (MCLP) is a classic model from the location science literature which has found wide application. One important application is to a fundamental problem in conservation biology, the Maximum Covering Species Problem (MCSP), which identifies land parcels to protect to maximize the number of species represented in the selected sites. We...
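    The MCSP is typically solved as an integer program, but a greedy heuristic conveys the structure of the problem and carries the classic (1 − 1/e) approximation guarantee for maximum coverage. A sketch with illustrative data and names of our own:

```python
def greedy_max_cover(site_species, p):
    """Greedy heuristic for maximum coverage: choose p sites, each
    time taking the site that covers the most not-yet-covered species.
    site_species maps a site name to the set of species it protects."""
    covered, chosen = set(), []
    for _ in range(p):
        best = max(site_species,
                   key=lambda s: len(site_species[s] - covered))
        chosen.append(best)
        covered |= site_species[best]
    return chosen, covered

# Three candidate parcels protecting overlapping species sets:
sites = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}
print(greedy_max_cover(sites, p=2))
```

    The exact MCLP formulation instead maximizes covered species subject to a budget of p selected sites, solved with an ILP solver; the greedy version is only a lower bound on that optimum.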

  15. Test-retest reliability of maximal leg muscle power and functional performance measures in patients with severe osteoarthritis (OA)

    DEFF Research Database (Denmark)

    Villadsen, Allan; Roos, Ewa M.; Overgaard, Søren

    Abstract : Purpose To evaluate the reliability of single-joint and multi-joint maximal leg muscle power and functional performance measures in patients with severe OA. Background Muscle power, taking both strength and velocity into account, is a more functional measure of lower extremity muscle...... and scheduled for unilateral total hip (n=9) or knee (n=11) replacement. Patients underwent a test battery on two occasions separated by approximately one week (range 7 to 11 days). Muscle power was measured using: 1. A linear encoder, unilateral lower limb isolated single-joint dynamic movement, e.g. knee...... flexion 2. A leg extension press, unilateral multi-joint knee and hip extension Functional performance was measured using: 1. 20 m walk usual pace 2. 20 m walk maximal pace 3. 5 times chair stands 4. Maximal number of knee bends/30sec Pain was measured on a VAS prior to and after conducting the entire...

  16. Maximizing Energy Savings Reliability in BC Hydro Industrial Demand-side Management Programs: An Assessment of Performance Incentive Models

    Science.gov (United States)

    Gosman, Nathaniel

    of alternative performance incentive program models to manage DSM risk in BC. Three performance incentive program models were assessed and compared to BC Hydro's current large industrial DSM incentive program, Power Smart Partners -- Transmission Project Incentives, itself a performance incentive-based program. Together, the selected program models represent a continuum of program design and implementation in terms of the schedule and level of incentives provided, the duration and rigour of measurement and verification (M&V), energy efficiency measures targeted and involvement of the private sector. A multi-criteria assessment framework was developed to rank the capacity of each program model to manage BC large industrial DSM risk factors. DSM risk management rankings were then compared to program cost-effectiveness, targeted energy savings potential in BC and survey results from BC industrial firms on the program models. The findings indicate that the reliability of DSM energy savings in the BC large industrial sector can be maximized through performance incentive program models that: (1) offer incentives jointly for capital and low-cost operations and maintenance (O&M) measures, (2) allow flexible lead times for project development, (3) utilize rigorous M&V methods capable of measuring variable load, process-based energy savings, (4) use moderate contract lengths that align with effective measure life, and (5) integrate energy management software tools capable of providing energy performance feedback to customers to maximize the persistence of energy savings. While this study focuses exclusively on the BC large industrial sector, the findings of this research have applicability to all energy utilities serving large, energy intensive industrial sectors.

  17. The reliability of a maximal isometric hip strength and simultaneous surface EMG screening protocol in elite, junior rugby league athletes.

    Science.gov (United States)

    Charlton, Paula C; Mentiplay, Benjamin F; Grimaldi, Alison; Pua, Yong-Hao; Clark, Ross A

    2017-02-01

    Firstly, to describe the reliability of assessing maximal isometric strength of the hip abductor and adductor musculature using a hand-held dynamometry (HHD) protocol with simultaneous wireless surface electromyographic (sEMG) evaluation of the gluteus medius (GM) and adductor longus (AL). Secondly, to describe the correlation between isometric strength recorded with the HHD protocol and a laboratory standard isokinetic device. Reliability and correlational study. A sample of 24 elite, male, junior, rugby league athletes, age 16-20 years, participated in repeated HHD and isometric Kin-Com (KC) strength testing with simultaneous sEMG assessment, on average (range) 6 (5-7) days apart by a single assessor. Strength tests included unilateral hip abduction (ABD) and adduction (ADD), and bilateral ADD assessed with squeeze (SQ) tests in 0 and 45° of hip flexion. HHD demonstrated good to excellent inter-session reliability for all outcome measures (ICC (2,1) = 0.76-0.91) and good to excellent association with the laboratory reference KC (ICC (2,1) = 0.80-0.88). Whilst intra-session, inter-trial reliability of EMG activation and co-activation outcome measures ranged from moderate to excellent (ICC (2,1) = 0.70-0.94), inter-session reliability was poor (all ICC (2,1) < 0.70). Isometric strength testing of the hip ABD and ADD musculature using HHD may be measured reliably in elite, junior rugby league athletes. Due to the poor inter-session reliability of sEMG measures, it is not recommended for athlete screening purposes if using the techniques implemented in this study. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  18. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from the Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. From a reliability point of view, to get optimum reliability in power generation, it is desirable to select a wind turbine generator which is best suited for a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
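    The "cubic mean cube root" quantity has a closed form under a Weibull wind-speed model: E[v³] = c³·Γ(1 + 3/k), so the cubic mean cube root is c·Γ(1 + 3/k)^(1/3). A sketch of the monthly mean-power computation (air density, rotor area and power coefficient below are illustrative placeholders, not values from the paper):

```python
import math

def cubic_mean_cube_root(shape_k: float, scale_c: float) -> float:
    """Cubic mean cube root of a Weibull(k, c) wind speed:
    (E[v^3])^(1/3), with E[v^3] = c^3 * Gamma(1 + 3/k)."""
    return scale_c * math.gamma(1.0 + 3.0 / shape_k) ** (1.0 / 3.0)

def mean_power_w(shape_k, scale_c, rho=1.225, rotor_area_m2=1.0, cp=0.4):
    """Mean wind power (W) through a rotor. Using the cubic mean cube
    root captures E[0.5*rho*A*Cp*v^3] exactly, unlike plugging in the
    plain mean wind speed."""
    v3 = cubic_mean_cube_root(shape_k, scale_c) ** 3
    return 0.5 * rho * rotor_area_m2 * cp * v3
```

    Because E[v³] exceeds (E[v])³, using the plain mean wind speed would systematically understate available power, which is why the cubic mean cube root is the natural site-matching statistic.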

  19. The reliability of randomly selected final year pharmacy students in ...

    African Journals Online (AJOL)

    Employing ANOVA, factorial experimental analysis, and the theory of error, reliability studies were conducted on the assessment of the drug product chloroquine phosphate tablets. The G–Study employed equal numbers of the factors for uniform control, and involved three analysts (randomly selected final year Pharmacy ...

  20. Revenue-Maximizing Radio Access Technology Selection with Net Neutrality Compliance in Heterogeneous Wireless Networks

    Directory of Open Access Journals (Sweden)

    Elissar Khloussy

    2018-01-01

    Full Text Available The net neutrality principle states that users should have equal access to all Internet content and that Internet Service Providers (ISPs) should not practice differentiated treatment on any of the Internet traffic. While net neutrality aims to restrain any kind of discrimination, it also grants an exemption to a certain category of traffic known as specialized services (SS), by allowing the ISP to dedicate part of the resources for the latter. In this work, we consider a heterogeneous LTE/WiFi wireless network and investigate revenue-maximizing Radio Access Technology (RAT) selection strategies that are net neutrality-compliant, with an exemption granted to SS traffic. Our objective is to find out how the bandwidth reservation for SS traffic should be made in a way that allows maximizing the revenue while complying with net neutrality, and how the choice of the ratio of reserved bandwidth affects the revenue. The results show that reserving bandwidth for SS traffic in one RAT (LTE) can achieve higher revenue. On the other hand, when the capacity is reserved across both LTE and WiFi, higher social benefit in terms of the number of admitted users can be realized, as well as a lower blocking probability for the Internet access traffic.

  1. Intrarater Reliability of Muscle Strength and Hamstring to Quadriceps Strength Imbalance Ratios During Concentric, Isometric, and Eccentric Maximal Voluntary Contractions Using the Isoforce Dynamometer.

    Science.gov (United States)

    Mau-Moeller, Anett; Gube, Martin; Felser, Sabine; Feldhege, Frank; Weippert, Matthias; Husmann, Florian; Tischer, Thomas; Bader, Rainer; Bruhn, Sven; Behrens, Martin

    2017-08-17

    To determine intrasession and intersession reliability of strength measurements and hamstrings to quadriceps strength imbalance ratios (H/Q ratios) using the new isoforce dynamometer. Repeated measures. Exercise science laboratory. Thirty healthy subjects (15 females, 15 males, 27.8 years). Coefficient of variation (CV) and intraclass correlation coefficients (ICC) were calculated for (1) strength parameters, that is, peak torque, mean work, and mean power for concentric and eccentric maximal voluntary contractions, isometric maximal voluntary torque (IMVT), and rate of torque development (RTD), and (2) H/Q ratios, that is, conventional concentric, eccentric, and isometric H/Q ratios (Hcon/Qcon at 60 deg/s, 120 deg/s, and 180 deg/s, Hecc/Qecc at -60 deg/s, and Hiso/Qiso) and functional eccentric antagonist to concentric agonist H/Q ratios (Hecc/Qcon and Hcon/Qecc). High reliability: CV < 10%, ICC > 0.90; moderate reliability: CV between 10% and 20%, ICC between 0.80 and 0.90; low reliability: CV > 20%, ICC < 0.80. (1) Strength parameters: (a) high intrasession reliability for concentric, eccentric, and isometric measurements, (b) moderate-to-high intersession reliability for concentric and eccentric measurements and IMVT, and (c) moderate-to-high intrasession reliability but low intersession reliability for RTD. (2) H/Q ratios: (a) moderate-to-high intrasession reliability for conventional ratios, (b) high intrasession reliability for functional ratios, (c) higher intersession reliability for Hcon/Qcon and Hiso/Qiso (moderate to high) than Hecc/Qecc (low to moderate), and (d) higher intersession reliability for conventional H/Q ratios (low to high) than functional H/Q ratios (low to moderate). The results have confirmed the reliability of strength parameters and the most frequently used H/Q ratios.

  2. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    Science.gov (United States)

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    Myoelectric controlled prosthetic hands require machine-based identification of hand gestures using the surface electromyogram (sEMG) recorded from the forearm muscles. This study observed that a subset of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the muscles of the forearm while subjects performed hand gestures, and the recordings were then classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated by a series of confusion matrices. When using all ten gestures, the sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected and these gave sensitivity and specificity greater than 95% (96.5% and 99.3%): Hand open, Hand close, Little finger flexion, Ring finger flexion, Middle finger flexion and Thumb flexion. This work has shown that reliable myoelectric-based human computer interface systems require careful selection of the gestures to be recognized; without such selection, the reliability is poor.
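    The sensitivity and specificity figures above derive from per-gesture confusion-matrix counts. The PNM index itself is the authors' own weighting, but the underlying per-class quantities can be sketched as follows (implementation ours):

```python
def per_class_sensitivity_specificity(confusion):
    """confusion[i][j] = number of trials of true gesture i classified
    as gesture j. Returns a (sensitivity, specificity) pair per gesture,
    treating each gesture one-vs-rest."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    stats = []
    for g in range(n):
        tp = confusion[g][g]
        fn = sum(confusion[g]) - tp                        # missed g
        fp = sum(confusion[i][g] for i in range(n)) - tp   # others called g
        tn = total - tp - fn - fp
        stats.append((tp / (tp + fn), tn / (tn + fp)))
    return stats
```

    Ranking gestures by such per-class scores and keeping only the top subset is the essence of the selection step; PNM combines the positive and negative performance into a single index before ranking.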

  3. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason of which is the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
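    The Monte Carlo step described above can be sketched generically: sample the uncertain process inputs, run the (calibrated) process model on each sample, and read off the fraction of outcomes within specification as a reliability proxy. The surrogate model and all numbers below are illustrative placeholders, not the paper's calibrated finite-volume thermal model:

```python
import random

def reliability_estimate(process_model, param_dists, spec,
                         n_samples=10000, seed=1):
    """Monte Carlo reliability sketch: sample inputs from Gaussian
    uncertainty distributions {name: (mean, sd)}, push them through
    process_model, and return the fraction of runs meeting spec."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_samples):
        params = {name: rng.gauss(mu, sigma)
                  for name, (mu, sigma) in param_dists.items()}
        if spec(process_model(**params)):
            ok += 1
    return ok / n_samples

# Hypothetical linear surrogate for melt-track width (um) as a function
# of laser power (W) and scan speed (mm/s); coefficients are made up.
def track_width_um(power, speed):
    return 50.0 + 0.6 * power - 0.04 * speed

rel = reliability_estimate(
    track_width_um,
    {"power": (200.0, 10.0), "speed": (1000.0, 50.0)},
    spec=lambda w: 120.0 <= w <= 140.0,
)
```

    In the paper's setting, the sensitivity of the output range to each input's uncertainty is what the correlation coefficients quantify; the spec-passing fraction is one simple way to turn those ranges into a single reliability number.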

  4. Improving inspection reliability through operator selection and training

    International Nuclear Information System (INIS)

    McGrath, Bernard; Carter, Luke

    2013-01-01

    A number of years ago the UK's Health and Safety Executive sponsored a series of three PANI projects investigating the application of manual ultrasonics, which endeavoured to establish the necessary steps to ensure that a reliable inspection is performed. The results of the three projects were each reported separately on completion and also presented at a number of international conferences. This paper summarises the results of these projects from the point of view of operator performance. The correlation of operator ultrasonic performance with the results of aptitude tests is presented, along with observations on the impact of training and qualifications of the operators. The results lead to conclusions on how the selection and training of operators could be modified to improve the reliability of inspections.

  5. Neuron selection based on deflection coefficient maximization for the neural decoding of dexterous finger movements.

    Science.gov (United States)

    Kim, Yong-Hee; Thakor, Nitish V; Schieber, Marc H; Kim, Hyoung-Nam

    2015-05-01

    Future generations of brain-machine interfaces (BMI) will require more dexterous motion control, such as hand and finger movements. Since a population of neurons in the primary motor cortex (M1) is correlated with finger movements, neural activities recorded in the M1 area are used to reconstruct an intended finger movement. In a BMI system, decoding discrete finger movements from a large number of input neurons does not guarantee higher decoding accuracy, despite the increased computational burden. Hence, we hypothesize that selecting neurons important for coding dexterous flexion/extension finger movements would improve BMI performance. In this paper, two metrics are presented to quantitatively measure the importance of each neuron, based on Bayes risk minimization and deflection coefficient maximization in a statistical decision problem. Since motor cortical neurons are active with movements of several different fingers, the proposed method is more suitable for discrete decoding of flexion-extension finger movements than previous methods for decoding reaching movements. In particular, the proposed metrics yielded high decoding accuracies across all subjects, including the case of six combined two-finger movements. While our data acquisition and analysis were performed off-line with post processing, our results point to the significance of highly coding neurons in improving BMI performance.
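    Deflection-coefficient ranking can be sketched for a two-class decision (e.g., flexion vs. extension): score each neuron by D = (μ₁ − μ₀)² / σ₀², computed from its firing rates under the two classes, and keep the top-scoring neurons. This is the textbook two-hypothesis form of the deflection coefficient; the paper's metrics generalize it to multiple finger movements:

```python
import statistics

def deflection_coefficient(rates_class0, rates_class1):
    """Deflection coefficient of one neuron's firing rate as a
    detection statistic: (mean1 - mean0)^2 / var0.
    Larger values mean the two classes are easier to separate."""
    m0 = statistics.mean(rates_class0)
    m1 = statistics.mean(rates_class1)
    v0 = statistics.variance(rates_class0)
    return (m1 - m0) ** 2 / v0

def rank_neurons(per_neuron_rates):
    """per_neuron_rates: list of (rates_class0, rates_class1) pairs,
    one per neuron. Returns neuron indices sorted by decreasing
    deflection coefficient, i.e., most informative first."""
    scored = [(deflection_coefficient(r0, r1), i)
              for i, (r0, r1) in enumerate(per_neuron_rates)]
    return [i for _, i in sorted(scored, reverse=True)]
```

    Decoding with only the top-ranked neurons is what the paper argues improves accuracy while cutting the computational burden of using the full population.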

  6. Maximizing Selective Cleavages at Aspartic Acid and Proline Residues for the Identification of Intact Proteins

    Science.gov (United States)

    Foreman, David J.; Dziekonski, Eric T.; McLuckey, Scott A.

    2018-04-01

    A new approach for the identification of intact proteins has been developed that relies on the generation of relatively few abundant products from specific cleavage sites. This strategy is intended to complement standard approaches that seek to generate many fragments relatively non-selectively. Specifically, this strategy seeks to maximize selective cleavage at aspartic acid and proline residues via collisional activation of precursor ions formed by electrospray ionization (ESI) under denaturing conditions. A statistical analysis of the SWISS-PROT database was used to predict the number of arginine residues for a given intact protein mass and to predict an m/z range where the protein carries a charge similar to the number of arginine residues, thereby enhancing cleavage at aspartic acid residues by limiting proton mobility. Cleavage at aspartic acid residues is predicted to be most favorable in the m/z range of 1500-2500, a range higher than that normally generated by ESI at low pH. Gas-phase proton transfer ion/ion reactions are therefore used for precursor ion concentration from relatively high charge states, followed by ion isolation and subsequent generation of precursor ions within the optimal m/z range via a second proton transfer reaction step. It is shown that the majority of product ion abundance is concentrated into cleavages C-terminal to aspartic acid residues and N-terminal to proline residues for ions generated by this process. Implementation of a scoring system that weights both ion fragment type and ion fragment area demonstrated identification of standard proteins ranging in mass from 8.5 to 29.0 kDa.

  7. A strong response to selection on mass-independent maximal metabolic rate without a correlated response in basal metabolic rate

    DEFF Research Database (Denmark)

    Wone, B W M; Madsen, Per; Donovan, E R

    2015-01-01

    Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selection...... on mass-independent basal metabolic rate (BMR). Then we tested for responses to selection in MMR and correlated responses to selection in BMR. In other lines, we antagonistically selected for mice with a combination of high mass-independent MMR and low mass-independent BMR. All selection protocols...... and data analyses included body mass as a covariate, so effects of selection on the metabolic rates are mass adjusted (that is, independent of effects of body mass). The selection lasted eight generations. Compared with controls, MMR was significantly higher (11.2%) in lines selected for increased MMR...

  8. INDICATORS OF MAXIMAL FLEXOR FORCE OF LEFT AND RIGHT HAND FOR THE POLICE SELECTION CRITERIA PURPOSES

    Directory of Open Access Journals (Sweden)

    Milivoj Dopsaj

    2006-06-01

    factor the right hand participated with 95.8% force, while the left hand participated with 95.0% force. We have therefore demonstrated that, among the tested population, measurement of right hand force is the more representative estimation of the given variable. Based on the distribution of results for right hand force, as a function of the isolated cluster criterion, the distribution of the tested population across Clusters 1-7 is as follows: 18.53%, 27.94%, 24.62%, 17.98%, 8.02%, 2.63%, and 0.28%, respectively. The value of the bordering minimum for right hand force of Cluster 2 is 56.87 DaN, which represents 18.5‰ (percentile) of the tested population. For the tested population of policemen between 19 and 24 years of age, the right hand grip force result is the test of choice for estimation of maximal hand flexor force. The value of the inflexion point (the point of separation with regard to the selection criterion, acceptable/unacceptable) is at the level of 56.87 DaN for the right hand grip force, placed at 18.5‰ (percentile) of the tested population.

  9. Reliability of shade selection using an intraoral spectrophotometer.

    Science.gov (United States)

    Witkowski, Siegbert; Yajima, Nao-Daniel; Wolkewitz, Martin; Strub, Jorge R

    2012-06-01

    In this study, we evaluated the accuracy and reproducibility of human tooth shade selection using a digital spectrophotometer. Variability among examiners and illumination conditions was tested for possible influence on measurement reproducibility. Fifteen intact anterior teeth of 15 subjects were evaluated for their shade using a digital spectrophotometer (Crystaleye, Olympus, Tokyo, Japan) by two examiners under the same light conditions, representing a dental laboratory situation. Each examiner performed the measurement ten times on the labial surface of each tooth, covering three evaluation sites (cervical, body, incisal). International Commission on Illumination color space values for L* (lightness), a* (red/green), and b* (yellow/blue) were obtained from each evaluated site. Examiner 2 repeated the measurements of the same subjects under different light conditions (i.e., a dental unit with a chairside lamp). To describe measurement precision, the mean color difference from the mean metric was used. The computed confidence interval (CI) value of 5.228 (4.6598-5.8615) represented the validity of the measurements. Least square mean analysis of the values obtained by examiners 1 and 2, or under different illumination conditions, revealed no statistically significant differences (CI = 95%). Within the limits of the present study, the accuracy and reproducibility of dental shade selection using the tested spectrophotometer, with respect to examiner and illumination conditions, reflected the reliability of this device. This study suggests that the tested spectrophotometer can be recommended for the clinical application of shade selection.

  10. Fuzzy Goal Programming Approach in Selective Maintenance Reliability Model

    Directory of Open Access Journals (Sweden)

    Neha Gupta

    2013-12-01

    In the present paper, we consider the allocation problem of repairable components for a parallel-series system as a multi-objective optimization problem and discuss two different models. In the first model, the reliabilities of the subsystems are considered as the different objectives. In the second model, the cost and the time spent on repairing the components are considered as two different objectives. These two models are formulated as multi-objective nonlinear programming problems (MONLPP), and a fuzzy goal programming method is used to work out the compromise allocation in the multi-objective selective maintenance reliability model. We define the membership function of each objective, transform the membership functions into equivalent linear membership functions by a first-order Taylor series, and finally, by forming a fuzzy goal programming model, obtain a desired compromise allocation of maintenance components. A numerical example is also worked out to illustrate the computational details of the method.

  11. Rate Adaptive Selective Segment Assignment for Reliable Wireless Video Transmission

    Directory of Open Access Journals (Sweden)

    Sajid Nazir

    2012-01-01

    A reliable video communication system is proposed based on the data partitioning feature of H.264/AVC, used to create a layered stream, and on LT codes for erasure protection. The proposed scheme, termed rate adaptive selective segment assignment (RASSA), is an adaptive low-complexity solution for varying channel conditions. A comparison of the results of the proposed scheme is also provided for slice-partitioned H.264/AVC data. Simulation results show the competitiveness of the proposed scheme compared to optimized unequal and equal error protection solutions. The simulation results also demonstrate that high visual quality video transmission can be maintained despite the adverse effects of varying channel conditions and that the number of decoding failures can be reduced.

  12. Reliability of maximal mitochondrial oxidative phosphorylation in permeabilized fibers from the vastus lateralis employing high-resolution respirometry

    DEFF Research Database (Denmark)

    Cardinale, Daniele A; Gejl, Kasper D; Ørtenblad, Niels

    2018-01-01

    The purpose was to assess the impact of various factors on methodological errors associated with measurement of maximal oxidative phosphorylation (OXPHOS) in human skeletal muscle determined by high-resolution respirometry in saponin-permeabilized fibers. Biopsies were collected from 25 men...

  13. Criterion validity and reliability of a smartphone delivered sub-maximal fitness test for people with type 2 diabetes

    DEFF Research Database (Denmark)

    Brinklov, Cecilie Fau; Thorsen, Ida Kær; Karstoft, Kristian

    2016-01-01

    Background: Prevention of multi-morbidities following non-communicable diseases requires a systematic registration of adverse modifiable risk factors, including low physical fitness. The aim of the study was to establish criterion validity and reliability of a smartphone app (InterWalk) delivered....... The algorithm was validated using leave-one-out cross validation. Test-retest reliability was tested in a subset of participants (N = 10). Results: The overall VO2peak prediction of the algorithm (R2) was 0.60 and 0.45 when the smartphone was placed in the pockets of the pants and jacket, respectively (p ... calorimetry and the acceleration (vector magnitude) from the smartphone was obtained. The vector magnitude was used to predict VO2peak along with the co-variates weight, height and sex. The validity of the algorithm was tested when the smartphone was placed in the right pocket of the pants or jacket...

  14. Maximal cardiorespiratory fitness testing in individuals with chronic stroke with cognitive impairment: practice test effects and test-retest reliability.

    Science.gov (United States)

    Olivier, Charles; Doré, Jean; Blanchet, Sophie; Brooks, Dina; Richards, Carol L; Martel, Guy; Robitaille, Nancy-Michelle; Maltais, Désirée B

    2013-11-01

    To evaluate, for individuals with chronic stroke and cognitive impairment, (1) the effects of a practice test on peak cardiorespiratory fitness test results; (2) cardiorespiratory fitness test-retest reliability; and (3) the relationship between individual practice test effects and cognitive impairment. Cross-sectional. Rehabilitation center. A convenience sample of 21 persons (men [n=12] and women [n=9]; age range, 48-81y; 44.9±36.2mo poststroke) with cognitive impairments who had sufficient lower limb function to perform the test. Not applicable. Peak oxygen consumption (Vo(2)peak, ml·kg(-1)·min(-1)). Test-retest reliability of Vo(2)peak was excellent (intraclass correlation coefficient model 2,1 [ICC2,1]=.94; 95% confidence interval [CI], .86-.98). A paired t test showed no significant group difference in Vo(2)peak between 2 symptom-limited cardiorespiratory fitness tests performed 1 week apart on a semirecumbent cycle ergometer (test 2-test 1 difference, -.32ml·kg(-1)·min(-1); 95% CI, -.69 to 1.33ml·kg(-1)·min(-1); P=.512). Individual test-retest differences in Vo(2)peak were, however, positively related to general cognitive function as measured by the Mini-Mental State Examination (ρ=.485). Vo(2)peak can thus be reliably measured in this group without a practice test. General cognitive function, however, may influence the effect of a practice test, in that those with lower general cognitive function appear to respond differently to a practice test than those with higher cognitive function. Copyright © 2013 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  15. AVC: Selecting discriminative features on the basis of AUC by maximizing variable complementarity.

    Science.gov (United States)

    Sun, Lei; Wang, Jun; Wei, Jinmao

    2017-03-14

    The Receiver Operating Characteristic (ROC) curve is well known for evaluating classification performance in the biomedical field. Owing to its superiority in dealing with imbalanced and cost-sensitive data, the ROC curve has been exploited as a popular metric to evaluate and identify disease-related genes (features). The existing ROC-based feature selection approaches are simple and effective in evaluating individual features. However, these approaches may fail to find the real target feature subset due to their lack of effective means to reduce redundancy between features, which is essential in machine learning. In this paper, we propose to assess feature complementarity by measuring the distances between misclassified instances and their nearest misses on the dimensions of pairwise features. If a misclassified instance and its nearest miss on one feature dimension are far apart on another feature dimension, the two features are regarded as complementary to each other. Subsequently, we propose a novel filter feature selection approach on the basis of ROC analysis. The new approach employs an efficient heuristic search strategy to select optimal features with the highest complementarities. Experimental results on a broad range of microarray data sets validate that classifiers built on the feature subset selected by our approach achieve the minimal balanced error rate with a small number of significant features. Compared with other ROC-based feature selection approaches, our new approach selects fewer features and effectively improves classification performance.
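As a rough sketch of the per-feature ROC scoring that such filter methods start from (the complementarity search itself is omitted), the AUC of a single feature can be computed via the Mann-Whitney rank-sum identity. The toy matrix below is hypothetical, and ties are not handled:

```python
import numpy as np

def feature_auc(x, y):
    """AUC of one feature against binary labels y (1 = positive class),
    via the Mann-Whitney rank-sum identity (ties not handled)."""
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)
    n_pos = int(y.sum())
    n_neg = len(y) - n_pos
    u = ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Toy expression matrix: 4 samples x 2 genes; gene 0 separates the classes.
X = np.array([[0.1, 5.0], [0.2, 1.0], [0.9, 4.0], [0.8, 2.0]])
y = np.array([0, 0, 1, 1])
aucs = np.array([feature_auc(X[:, j], y) for j in range(X.shape[1])])
scores = np.abs(aucs - 0.5)        # informative in either direction
ranking = np.argsort(scores)[::-1]
```

A pure AUC ranking like this evaluates each gene in isolation, which is exactly the redundancy problem the abstract's complementarity measure is designed to address.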

  16. Selected Methods for Increasing the Reliability of Electronic Security Systems

    Directory of Open Access Journals (Sweden)

    Paś Jacek

    2015-11-01

    The article presents issues related to different methods of increasing the reliability of electronic security systems (ESS), for example a fire alarm system (SSP). Reliability of the SSP, in the descriptive sense, is its capacity to perform a preset function (e.g. fire protection of an airport, a port, a logistics base, etc.) at a certain time and under certain conditions (e.g. environmental), despite the possible failure of a specific subset of elements of this system. An analysis of the available literature on ESS-SSP shows no studies on methods of increasing reliability (several works address similar topics, but with respect to burglary and robbery, i.e. intrusion, systems). The analysis is based on the set of all paths in the system that preserve the suitability of the SSP for the mentioned fire-event scenario (devices critical for security).

  17. Performance Analysis of Selective Decode-and-Forward Multinode Incremental Relaying with Maximal Ratio Combining

    KAUST Repository

    Hadjtaieb, Amir

    2013-09-12

    In this paper, we propose an incremental multinode relaying protocol with an arbitrary number N of relay nodes that allows an efficient use of the channel spectrum. The destination combines the received signals from the source and the relays using maximal ratio combining (MRC). The transmission ends successfully once the accumulated signal-to-noise ratio (SNR) exceeds a predefined threshold. The number of relays participating in the transmission is adapted to the channel conditions based on feedback from the destination. The use of incremental relaying allows a higher spectral efficiency to be obtained. Moreover, the symbol error probability (SEP) performance is enhanced by using MRC at the relays. The use of MRC at the relays implies that each relay overhears the signals from the source and all previous relays and combines them using MRC. The proposed protocol differs from most existing relaying protocols in that it combines both incremental relaying and MRC at the relays for a multinode topology. Our analyses for a decode-and-forward mode show that: (i) compared to existing multinode relaying schemes, the proposed scheme can essentially achieve the same SEP performance but with a smaller average number of time slots; (ii) compared to schemes without MRC at the relays, the proposed scheme can achieve approximately a 3 dB gain.
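The core incremental-MRC mechanism, accumulating per-slot SNR at the destination until a threshold is reached, can be sketched with a small Monte Carlo simulation. Per-slot SNRs are assumed i.i.d. exponential (Rayleigh fading); this is a simplification of the paper's model (it ignores, for example, decoding outcomes at the relays):

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_slots(mean_snr, threshold, n_relays, trials=20000):
    """Average number of time slots (source + relays) until the MRC-accumulated
    SNR at the destination exceeds the threshold, capped at 1 + n_relays slots.
    Per-slot SNRs are i.i.d. exponential, i.e. Rayleigh fading."""
    snrs = rng.exponential(mean_snr, size=(trials, 1 + n_relays))
    cum = np.cumsum(snrs, axis=1)            # MRC accumulates SNR additively
    reached = cum >= threshold
    # first slot where the threshold is met, else all 1 + n_relays slots are used
    slots = np.where(reached.any(axis=1), reached.argmax(axis=1) + 1, 1 + n_relays)
    return slots.mean()

slots_low = avg_slots(mean_snr=2.0, threshold=0.5, n_relays=3)   # good channel
slots_high = avg_slots(mean_snr=2.0, threshold=8.0, n_relays=3)  # poor channel
```

The gap between `slots_low` and `slots_high` illustrates the spectral-efficiency argument: in good conditions most transmissions finish after the source's own slot, so relays are rarely invoked.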

  18. A strong response to selection on mass-independent maximal metabolic rate without a correlated response in basal metabolic rate.

    Science.gov (United States)

    Wone, B W M; Madsen, P; Donovan, E R; Labocha, M K; Sears, M W; Downs, C J; Sorensen, D A; Hayes, J P

    2015-04-01

    Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selection on mass-independent basal metabolic rate (BMR). Then we tested for responses to selection in MMR and correlated responses to selection in BMR. In other lines, we antagonistically selected for mice with a combination of high mass-independent MMR and low mass-independent BMR. All selection protocols and data analyses included body mass as a covariate, so effects of selection on the metabolic rates are mass adjusted (that is, independent of effects of body mass). The selection lasted eight generations. Compared with controls, MMR was significantly higher (11.2%) in lines selected for increased MMR, and BMR was slightly, but not significantly, higher (2.5%). Compared with controls, MMR was significantly higher (5.3%) in antagonistically selected lines, and BMR was slightly, but not significantly, lower (4.2%). Analysis of breeding values revealed no positive genetic trend for elevated BMR in high-MMR lines. A weak positive genetic correlation was detected between MMR and BMR. That weak positive genetic correlation supports the aerobic capacity model for the evolution of endothermy in the sense that it fails to falsify a key model assumption. Overall, the results suggest that at least in these mice there is significant capacity for independent evolution of metabolic traits. Whether that is true in the ancestral animals that evolved endothermy remains an important but unanswered question.

  19. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed, and their respective advantages and disadvantages are considered. Practical advice on how to assess reliability using open source software is provided.

  20. Reliability of maximal isometric knee strength testing with modified hand-held dynamometry in patients awaiting total knee arthroplasty: useful in research and individual patient settings? A reliability study

    Directory of Open Access Journals (Sweden)

    Koblbauer Ian FH

    2011-10-01

    Background: Patients undergoing total knee arthroplasty (TKA) often experience strength deficits both pre- and post-operatively. As these deficits may have a direct impact on functional recovery, strength assessment should be performed in this patient population. For these assessments, reliable measurements should be used. This study aimed to determine the inter- and intrarater reliability of hand-held dynamometry (HHD) in measuring isometric knee strength in patients awaiting TKA. Methods: To determine interrater reliability, 32 patients (81.3% female) were assessed by two examiners. Patients were assessed consecutively by both examiners on the same individual test dates. To determine intrarater reliability, a subgroup (n = 13) was again assessed by the examiners within four weeks of the initial testing procedure. Maximal isometric knee flexor and extensor strength were tested using a modified Citec hand-held dynamometer. Both the affected and unaffected knee were tested. Reliability was assessed using the intraclass correlation coefficient (ICC). In addition, the standard error of measurement (SEM) and the smallest detectable difference (SDD) were used to determine reliability. Results: In both the affected and unaffected knee, the inter- and intrarater reliability were good for knee flexors (ICC range 0.76-0.94) and excellent for knee extensors (ICC range 0.92-0.97). However, measurement error was high, displaying SDD ranges between 21.7% and 36.2% for interrater reliability and between 19.0% and 57.5% for intrarater reliability. Overall, measurement error was higher for the knee flexors than for the knee extensors. Conclusions: Modified HHD appears to be a reliable strength measure, producing good to excellent ICC values for both inter- and intrarater reliability in a group of TKA patients. High SEM and SDD values, however, indicate high measurement error for individual measures. This study demonstrates that a modified HHD is appropriate to
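The ICC(2,1) statistic used in this and several other reliability studies above can be computed from a subjects-by-raters table via two-way ANOVA mean squares. A sketch with hypothetical strength values (newtons); the formula is the standard Shrout-Fleiss ICC(2,1), not the authors' exact software:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random-effects model, absolute agreement, single
    measurement. Rows are subjects, columns are raters/sessions."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between raters
    resid = data - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical: two raters measuring knee extensor strength in four patients.
ratings = np.array([[112.0, 110.0], [95.0, 97.0], [140.0, 138.0], [80.0, 83.0]])
icc = icc_2_1(ratings)
```

Because ICC(2,1) penalizes systematic rater offsets (via the MSC term), it is the appropriate form when absolute agreement between examiners matters, as in the interrater design described above.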

  1. Selection for life-history traits to maximize population growth in an invasive marine species

    DEFF Research Database (Denmark)

    Jaspers, Cornelia; Marty, Lise; Kiørboe, Thomas

    2018-01-01

    Species establishing outside their natural range, negatively impacting local ecosystems, are of increasing global concern. They often display life-history features characteristic for r-selected populations with fast growth and high reproduction rates to achieve positive population growth rates (r...

  2. Reliability estimates for selected sensors in fusion applications

    International Nuclear Information System (INIS)

    Cadwallader, L.C.

    1996-09-01

    This report presents the results of a study defining several types of process sensors in use, along with the qualitative reliability (failure modes) and quantitative reliability (average failure rates) of these sensor types. Temperature, pressure, flow, and level sensors are discussed for water coolant and for cryogenic coolants. The failure rates that have been found are useful for risk assessment and safety analysis. Repair times and calibration intervals are also given when found in the literature. All of these values can also be useful to plant operators and maintenance personnel, and designers may be able to make use of these data when planning systems. The final chapter in this report discusses failure rates for several types of personnel safety sensors, including ionizing radiation monitors, toxic and combustible gas detectors, humidity sensors, and magnetic field sensors. These data could be useful to industrial hygienists and other safety professionals when designing or auditing for personnel safety.

  3. Maximizing mandibular prosthesis stability utilizing linear occlusion, occlusal plane selection, and centric recording.

    Science.gov (United States)

    Williamson, Richard A; Williamson, Anne E; Bowley, John; Toothaker, Randy

    2004-03-01

    The stability of mandibular complete dentures may be improved by reducing the transverse forces on the denture base through linear (noninterceptive) occlusion, selecting an occlusal plane that reduces horizontal vectors of force at occlusal contact, and utilizing a central bearing intraoral gothic arch tracing to record jaw relations. This article is intended to acquaint the reader with one technique for providing stable complete denture prostheses using the aforementioned materials, devices, and procedures.

  4. Rolling Bearing Fault Diagnosis Using Modified Neighborhood Preserving Embedding and Maximal Overlap Discrete Wavelet Packet Transform with Sensitive Features Selection

    Directory of Open Access Journals (Sweden)

    Fei Dong

    2018-01-01

    In order to enhance the performance of bearing fault diagnosis and classification, feature extraction and feature dimensionality reduction have become more important. The original statistical feature set was calculated from single-branch reconstruction vibration signals obtained using the maximal overlap discrete wavelet packet transform (MODWPT). In order to reduce the redundant information of the original statistical feature set, feature selection by adjusted Rand index and sum of within-class mean deviations (FSASD) was proposed to select fault-sensitive features. Furthermore, a modified feature dimensionality reduction method, supervised neighborhood preserving embedding with label information (SNPEL), was proposed to realize low-dimensional representations of the high-dimensional feature space. Finally, vibration signals collected from two experimental test rigs were employed to evaluate the performance of the proposed procedure. The results show the effectiveness, adaptability, and superiority of the proposed procedure, which can serve as an intelligent bearing fault diagnosis system.

  5. Reliability of ultrasound for measurement of selected foot structures.

    Science.gov (United States)

    Crofts, G; Angin, S; Mickle, K J; Hill, S; Nester, C J

    2014-01-01

    Understanding the relationship between the lower leg muscles, foot structures and function is essential to explain how disease or injury may relate to changes in foot function and clinical pathology. The aim of this study was to investigate the inter-operator reliability of an ultrasound protocol to quantify features of: rear, mid and forefoot sections of the plantar fascia (PF); flexor hallucis brevis (FHB); flexor digitorum brevis (FDB); abductor hallucis (AbH); flexor digitorum longus (FDL); flexor hallucis longus (FHL); tibialis anterior (TA); and peroneus longus and brevis (PER). A sample of 6 females and 4 males (mean age 29.1 ± 7.2 years, mean BMI 25.5 ± 4.8) was recruited from a university student and staff population. Scans were obtained using a portable Venue 40 musculoskeletal ultrasound system (GE Healthcare UK) with a 5-13 MHz wideband linear array probe with a 12.7 mm × 47.1 mm footprint by two operators in the same scanning session. Intraclass Correlation Coefficient (ICC) values for muscle thickness (ICC range 0.90-0.97), plantar fascia thickness (ICC range 0.94-0.98) and cross sectional muscle measurements (ICC range 0.91-0.98) revealed excellent inter-operator reliability. The limits of agreement, relative to structure size, ranged from 9.0% to 17.5% for muscle thickness, 11.0-18.0% for plantar fascia, and 11.0-26.0% for cross sectional area measurements. The ultrasound protocol implemented in this work has been shown to be reliable. It therefore offers the opportunity to quantify the structures concerned and better understand their contributions to foot function. Crown Copyright © 2013. Published by Elsevier B.V. All rights reserved.

  6. Resource allocation for maximizing prediction accuracy and genetic gain of genomic selection in plant breeding: a simulation experiment.

    Science.gov (United States)

    Lorenz, Aaron J

    2013-03-01

    Allocating resources between population size and replication affects both genetic gain through phenotypic selection and quantitative trait loci (QTL) detection power and effect estimation accuracy for marker-assisted selection (MAS). It is well known that because alleles are replicated across individuals in QTL mapping and MAS, more resources should be allocated to increasing population size compared with phenotypic selection. Genomic selection is a form of MAS that uses all marker information simultaneously to predict individual genetic values for complex traits and has widely been found superior to MAS. No studies have explicitly investigated how resource allocation decisions affect the success of genomic selection. My objective was to study the effect of resource allocation on response to MAS and genomic selection in a single biparental population of doubled haploid lines by using computer simulation. Simulation results were compared with previously derived formulas for the calculation of prediction accuracy under different levels of heritability and population size. The response of prediction accuracy to resource allocation strategies differed between genomic selection models (ridge regression best linear unbiased prediction [RR-BLUP], BayesCπ) and multiple linear regression using ordinary least-squares estimation (OLS), leading to different optimal resource allocation choices between OLS and RR-BLUP. For OLS, it was always advantageous to maximize population size at the expense of replication, but a high degree of flexibility was observed for RR-BLUP. Prediction accuracy of doubled haploid lines included in the training set was much greater than that of lines excluded from the training set, so there was little benefit to phenotyping only a subset of the lines genotyped. Finally, observed prediction accuracies in the simulation compared well to calculated prediction accuracies, indicating these theoretical formulas are useful for making resource allocation
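A toy version of such a resource-allocation simulation can be sketched as follows, assuming doubled-haploid genotypes coded ±1, a fixed phenotyping budget split between line number and replication, and an RR-BLUP-style ridge predictor evaluated in-sample. All parameters are hypothetical and the shrinkage constant is a rough heuristic, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(2)

def accuracy(n_lines, n_reps, n_markers=200, h2=0.3):
    """Correlation of predicted and true genetic values for one allocation
    of a fixed phenotyping budget (n_lines * n_reps plots)."""
    X = rng.choice([-1.0, 1.0], size=(n_lines, n_markers))  # DH marker genotypes
    beta = rng.normal(0.0, 1.0, n_markers)                  # true marker effects
    g = X @ beta                                            # true genetic values
    var_e = g.var() * (1 - h2) / h2
    # replicated phenotypes are averaged, shrinking error variance by n_reps
    y = g + rng.normal(0.0, np.sqrt(var_e / n_reps), n_lines)
    lam = n_markers * (1 - h2) / (h2 * n_reps)              # heuristic shrinkage
    b_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_markers), X.T @ y)
    return np.corrcoef(X @ b_hat, g)[0, 1]

budget = 400  # total phenotyping plots
results = {(n, budget // n): accuracy(n, budget // n) for n in (400, 200, 100)}
```

Comparing the entries of `results` mirrors the abstract's question: whether spending the budget on more lines or on more replicates yields the higher prediction accuracy under a ridge-type model.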

  7. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    now is directed to other related problems, such as tools for model selection, multiversion programming, and software fault tolerance modeling. Hardware can be repaired by spare modules, which is not the case for software. Preventive maintenance is very important for hardware.

  8. Reliability assessment of selected indicators of tree health

    Science.gov (United States)

    Pawel M. Lech

    2000-01-01

    The measurements of electrical resistance of near-cambium tissues, selected biometric features of needles and shoots, and the annual radial increment as well as visual estimates of crown defoliation were performed on about 100 Norway spruce trees in three 60- to 70-year-old stands located in the Western Sudety Mountains. The defoliation, electrical resistance, and...

  9. Selecting reliable and robust freshwater macroalgae for biomass applications.

    Directory of Open Access Journals (Sweden)

    Rebecca J Lawton

    Intensive cultivation of freshwater macroalgae is likely to increase with the development of an algal biofuels industry and algal bioremediation. However, target freshwater macroalgae species suitable for large-scale intensive cultivation have not yet been identified. Therefore, as a first step to identifying target species, we compared the productivity, growth and biochemical composition of three species representative of key freshwater macroalgae genera across a range of cultivation conditions. We then selected a primary target species and assessed its competitive ability against other species over a range of stocking densities. Oedogonium had the highest productivity (8.0 g ash free dry weight m⁻² day⁻¹), lowest ash content (3-8%), lowest water content (fresh weight:dry weight ratio of 3.4), highest carbon content (45%) and highest bioenergy potential (higher heating value 20 MJ/kg) compared to Cladophora and Spirogyra. The higher productivity of Oedogonium relative to Cladophora and Spirogyra was consistent when algae were cultured with and without the addition of CO₂ across three aeration treatments. Therefore, Oedogonium was selected as our primary target species. The competitive ability of Oedogonium was assessed by growing it in bi-cultures and polycultures with Cladophora and Spirogyra over a range of stocking densities. Cultures were initially stocked with equal proportions of each species, but after three weeks of growth the proportion of Oedogonium had increased to at least 96% (±7 S.E.) in Oedogonium-Spirogyra bi-cultures, 86% (±16 S.E.) in Oedogonium-Cladophora bi-cultures and 82% (±18 S.E.) in polycultures. The high productivity, bioenergy potential and competitive dominance of Oedogonium make this species an ideal freshwater macroalgal target for large-scale production and a valuable biomass source for bioenergy applications. These results demonstrate that freshwater macroalgae are thus far an under-utilised feedstock with

  10. Reliable Path Selection Problem in Uncertain Traffic Network after Natural Disaster

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2013-01-01

    Full Text Available After a natural disaster, especially a large-scale disaster with wide affected areas, vast quantities of relief materials are often needed. At the same time, the traffic network is uncertain because of damage from the disaster. In this paper, we assume that each edge in the network is either connected or blocked, and that the connection probability of each edge is known. To ensure the arrival of supplies at the affected areas, it is important to select a reliable path. A reliable path selection model is formulated, and two algorithms for solving this model are presented. An adjustable reliable path selection model is then proposed for the case where an edge of the selected reliable path is broken. The corresponding algorithms are shown to be efficient both theoretically and numerically.
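The abstract does not spell out the model, but under its stated assumption of known, independent edge connection probabilities, one standard formulation of reliable path selection is to maximize the product of edge probabilities along the path, which reduces to a shortest-path problem on weights −log p. A minimal sketch of that reduction (the graph layout and function name are illustrative, not from the paper):

```python
import heapq
import math

def most_reliable_path(edges, source, target):
    """Find the path maximizing the product of edge connection
    probabilities, via Dijkstra's algorithm on weights -log(p)."""
    graph = {}
    for u, v, p in edges:  # undirected edges (u, v, connection probability)
        graph.setdefault(u, []).append((v, p))
        graph.setdefault(v, []).append((u, p))
    # heap entries: (accumulated -log probability, node, path so far)
    queue = [(0.0, source, [source])]
    best = {source: 0.0}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return math.exp(-cost), path  # (path reliability, node sequence)
        if cost > best.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, p in graph.get(node, []):
            nc = cost - math.log(p)
            if nc < best.get(nxt, float("inf")):
                best[nxt] = nc
                heapq.heappush(queue, (nc, nxt, path + [nxt]))
    return 0.0, []
```

For example, with edges A-B (0.9), B-C (0.9) and A-C (0.7), the two-hop route A-B-C is preferred, since 0.9 × 0.9 = 0.81 exceeds the direct edge's 0.7.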

  11. Advances in ranking and selection, multiple comparisons, and reliability methodology and applications

    CERN Document Server

    Balakrishnan, N; Nagaraja, HN

    2007-01-01

    S. Panchapakesan has made significant contributions to ranking and selection and has published in many other areas of statistics, including order statistics, reliability theory, stochastic inequalities, and inference. Written in his honor, the twenty invited articles in this volume reflect recent advances in these areas and form a tribute to Panchapakesan's influence and impact on these areas. Thematically organized, the chapters cover a broad range of topics from: Inference; Ranking and Selection; Multiple Comparisons and Tests; Agreement Assessment; Reliability; and Biostatistics. Featuring

  12. Reliable Portfolio Selection Problem in Fuzzy Environment: An mλ Measure Based Approach

    Directory of Open Access Journals (Sweden)

    Yuan Feng

    2017-04-01

    Full Text Available This paper investigates a fuzzy portfolio selection problem with guaranteed reliability, in which fuzzy variables are used to capture the uncertain returns of different securities. To handle the fuzziness in a mathematical way, a new expected value operator and variance of fuzzy variables are defined based on the mλ measure, a linear combination of the possibility measure and the necessity measure that balances pessimism and optimism in the decision-making process. To formulate the reliable portfolio selection problem, we adopt the expected total return and the standard variance of the total return to evaluate the reliability of the investment strategies, producing three risk-guaranteed reliable portfolio selection models. To solve the proposed models, an effective genetic algorithm is designed to generate an approximate optimal solution to the considered problem. Finally, numerical examples are given to show the performance of the proposed models and algorithm.

  13. Feasibility and reliability of digital imaging for estimating food selection and consumption from students' packed lunches.

    Science.gov (United States)

    Taylor, Jennifer C; Sutter, Carolyn; Ontai, Lenna L; Nishina, Adrienne; Zidenberg-Cherr, Sheri

    2018-01-01

    Although increasing attention is placed on the quality of foods in children's packed lunches, few studies have examined the capacity of observational methods to reliably determine both what is selected and what is consumed from these lunches. The objective of this project was to assess the feasibility and inter-rater reliability of digital imaging for determining selection and consumption from students' packed lunches, by adapting approaches previously applied to school lunches. Study 1 assessed feasibility and reliability of data collection among a sample of packed lunches (n = 155), while Study 2 further examined reliability in a larger sample of packed (n = 386) as well as school (n = 583) lunches. Based on the results from Study 1, it was feasible to collect and code most items in packed lunch images; missing data were most commonly attributed to packaging that limited visibility of contents. Across both studies, there was satisfactory reliability for determining food types selected, quantities selected, and quantities consumed in the eight food categories examined (weighted kappa coefficients 0.68-0.97 for packed lunches, 0.74-0.97 for school lunches), with lowest reliability for estimating condiments and meats/meat alternatives in packed lunches. In extending methods predominantly applied to school lunches, these findings demonstrate the capacity of digital imaging for the objective estimation of selection and consumption from both school and packed lunches. Copyright © 2017 Elsevier Ltd. All rights reserved.
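The weighted kappa coefficients reported above summarize agreement between two coders on ordinal categories. As an illustration only (the abstract does not state the weighting scheme; linear weights are assumed here), Cohen's linearly weighted kappa can be computed from two coders' assignments like this:

```python
def weighted_kappa(r1, r2, categories):
    """Linearly weighted Cohen's kappa for two raters' ordinal codes.
    `categories` lists the ordered categories; r1, r2 are the two
    raters' codes for the same items."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # observed mean weighted disagreement (distance between categories)
    obs = sum(abs(idx[a] - idx[b]) for a, b in zip(r1, r2)) / n
    # marginal category frequencies for each rater
    p1 = [sum(1 for a in r1 if a == c) / n for c in categories]
    p2 = [sum(1 for b in r2 if b == c) / n for c in categories]
    # chance-expected weighted disagreement from the marginals
    exp = sum(p1[i] * p2[j] * abs(i - j)
              for i in range(k) for j in range(k))
    return 1 - obs / exp if exp else 1.0
```

Perfect agreement yields kappa = 1, and near-diagonal disagreements are penalized less than distant ones, which is why weighted kappa suits ordinal quantity estimates.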

  14. [Employees in high-reliability organizations: systematic selection of personnel as a final criterion].

    Science.gov (United States)

    Oubaid, V; Anheuser, P

    2014-05-01

    Employees represent an important safety factor in high-reliability organizations. The combination of clear organizational structures, a nonpunitive safety culture, and psychological personnel selection guarantee a high level of safety. The cockpit personnel selection process of a major German airline is presented in order to demonstrate a possible transferability into medicine and urology.

  15. Revealing kinetics and state-dependent binding properties of IKur-targeting drugs that maximize atrial fibrillation selectivity

    Science.gov (United States)

    Ellinwood, Nicholas; Dobrev, Dobromir; Morotti, Stefano; Grandi, Eleonora

    2017-09-01

    The KV1.5 potassium channel, which underlies the ultra-rapid delayed-rectifier current (IKur) and is predominantly expressed in atria vs. ventricles, has emerged as a promising target to treat atrial fibrillation (AF). However, while numerous KV1.5-selective compounds have been screened, characterized, and tested in various animal models of AF, evidence of antiarrhythmic efficacy in humans is still lacking. Moreover, current guidelines for pre-clinical assessment of candidate drugs heavily rely on steady-state concentration-response curves or IC50 values, which can overlook adverse cardiotoxic effects. We sought to investigate the effects of kinetics and state-dependent binding of IKur-targeting drugs on atrial electrophysiology in silico and reveal the ideal properties of IKur blockers that maximize anti-AF efficacy and minimize pro-arrhythmic risk. To this aim, we developed a new Markov model of IKur that describes KV1.5 gating based on experimental voltage-clamp data in atrial myocytes from patient right-atrial samples in normal sinus rhythm. We extended the IKur formulation to account for state-specificity and kinetics of KV1.5-drug interactions and incorporated it into our human atrial cell model. We simulated 1- and 3-Hz pacing protocols in drug-free conditions and with a [drug] equal to the IC50 value. The effects of binding and unbinding kinetics were determined by examining permutations of the forward (kon) and reverse (koff) binding rates to the closed, open, and inactivated states of the KV1.5 channel. We identified a subset of ideal drugs exhibiting anti-AF electrophysiological parameter changes at fast pacing rates (effective refractory period prolongation), while having little effect on normal sinus rhythm (limited action potential prolongation). 
Our results highlight that accurately accounting for channel interactions with drugs, including kinetics and state-dependent binding, is critical for developing safer and more effective pharmacological anti

  16. Material Selection for Cable Gland to Improved Reliability of the High-hazard Industries

    Science.gov (United States)

    Vashchuk, S. P.; Slobodyan, S. M.; Deeva, V. S.; Vashchuk, D. S.

    2018-01-01

    Sealed cable glands (SCG) are used to ensure the safe connection of sheathed single-wire cables at hazardous production facilities (nuclear power plants and others), as well as pilot cables, control cables, radio-frequency cables, and the like. In this paper, we investigate the specifics of material selection for SCG aimed specifically at hazardous man-made facilities. We discuss the safe working conditions for cable glands. The research indicates that cables made of sintered powdered metals improve reliability owing to their material properties. A number of studies have demonstrated the verification of the material selection. Our findings indicate that double-glazed sealed units could enhance reliability. We evaluated sample reliability under fire conditions, seismic load, and pressure containment failure. The samples used were mineral-insulated thermocouple cables.

  17. EVALUATION OF HUMAN RELIABILITY IN SELECTED ACTIVITIES IN THE RAILWAY INDUSTRY

    Directory of Open Access Journals (Sweden)

    Erika SUJOVÁ

    2016-07-01

    Full Text Available The article focuses on evaluation of human reliability in the human-machine system in the railway industry. The research took place at the authors' workplace between 2012 and 2013 using a survey method. Based on a survey of a train dispatcher and of selected work activities, the authors identified risk factors affecting the dispatcher's work and evaluated the seriousness of their influence on the reliability and safety of the performed activities. Among the most important findings are unclear and complicated internal regulations and work processes, a feeling of being overworked, and fear for one's safety at small, insufficiently protected stations.

  18. Reliability of pedigree-based and genomic evaluations in selected populations

    NARCIS (Netherlands)

    Gorjanc, G.; Bijma, P.; Hickey, J.M.

    2015-01-01

    Background: Reliability is an important parameter in breeding. It measures the precision of estimated breeding values (EBV) and, thus, potential response to selection on those EBV. The precision of EBV is commonly measured by relating the prediction error variance (PEV) of EBV to the base population

  19. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research.

    Science.gov (United States)

    Koo, Terry K; Li, Mae Y

    2016-06-01

    Intraclass correlation coefficient (ICC) is a widely used reliability index in test-retest, intrarater, and interrater reliability analyses. This article introduces the basic concept of ICC in the context of reliability analysis. There are 10 forms of ICC. Because each form involves distinct assumptions in its calculation and will lead to different interpretations, researchers should explicitly specify the ICC form they used in their calculation. A thorough review of the research design is needed in selecting the appropriate form of ICC to evaluate reliability. The best practice of reporting ICC should include software information, "model," "type," and "definition" selections. When coming across an article that includes ICC, readers should first check whether information about the ICC form has been reported and if an appropriate ICC form was used. Based on the 95% confidence interval of the ICC estimate, values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.90 are indicative of poor, moderate, good, and excellent reliability, respectively. This article provides a practical guideline for clinical researchers to choose the correct form of ICC and suggests the best practice of reporting ICC parameters in scientific publications. This article also gives readers an appreciation for what to look for when coming across ICC while reading an article.
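The 10 ICC forms differ in their underlying ANOVA mean squares. As an illustration of one common form only, ICC(3,1) (two-way mixed effects, single rater, consistency) can be computed from a subjects-by-raters table as below; the interpretation bands from the article are applied here to the point estimate, whereas the article itself recommends judging reliability from the 95% confidence interval:

```python
def icc_3_1(ratings):
    """ICC(3,1): two-way mixed effects, single rater, consistency.
    `ratings` has one row per subject, one column per rater."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    subj_means = [sum(row) / k for row in ratings]
    rater_means = [sum(row[j] for row in ratings) / n for j in range(k)]
    # sums of squares from the two-way ANOVA decomposition
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in subj_means)
    ss_cols = n * sum((m - grand) ** 2 for m in rater_means)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

def interpret(icc):
    """Koo & Li bands, applied here to the point estimate."""
    if icc < 0.5:
        return "poor"
    if icc < 0.75:
        return "moderate"
    if icc < 0.9:
        return "good"
    return "excellent"
```

Because ICC(3,1) measures consistency, a rater who is systematically offset by a constant still yields an ICC of 1; an absolute-agreement form such as ICC(2,1) would penalize that offset, which is exactly why the form must be reported.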

  20. Intersession reliability of self-selected and narrow stance balance testing in older adults.

    Science.gov (United States)

    Riemann, Bryan L; Piersol, Kelsey

    2017-10-01

    Despite the common practice of using force platforms to assess balance in older adults, few investigations have examined the reliability of postural screening tests in this population. We sought to determine the test-retest reliability of self-selected and narrow stance balance testing with eyes open and eyes closed in healthy older adults. Thirty older adults (>65 years) completed 45 s trials of eyes open and eyes closed stability tests using self-selected and narrow stances on two separate days (1.9 ± 0.7 days apart). Average medial-lateral center of pressure velocity was computed. The ICC results ranged from .74 to .86, and no significant systematic changes between sessions were evident. Intersession reliability of eyes open and closed balance testing using self-selected and narrow stances in older adults was thus established, which should provide a foundation for the development of fall risk screening tests.

  1. Entropy maximization

    Indian Academy of Sciences (India)

    Abstract. It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f hᵢ dμ = λᵢ for i = 1, 2, …, k the maximizer of entropy is an f₀ that is proportional to exp(∑ cᵢhᵢ) for some choice of cᵢ. An extension of this to a continuum of.

  2. Entropy Maximization

    Indian Academy of Sciences (India)

    It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f hᵢ dμ = λᵢ for i = 1, 2, …, k the maximizer of entropy is an f₀ that is proportional to exp(∑ cᵢhᵢ) for some choice of cᵢ. An extension of this to a continuum of ...
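The characterization in these two records can be checked numerically on a finite support, which stands in here for the measure-theoretic setting of the papers: subject to a single moment constraint ∑ f(x)h(x) = λ, the entropy maximizer has the exponential-family form f ∝ exp(c·h), and the multiplier c can be found by bisection because the constrained mean is increasing in c. A small sketch:

```python
import math

def maxent_pmf(support, h, target):
    """Maximum-entropy pmf on a finite support subject to
    sum_x f(x) h(x) = target; the maximizer is f(x) proportional
    to exp(c * h(x)) for a multiplier c found by bisection."""
    def weights(c):
        a = [c * h(x) for x in support]
        m = max(a)  # subtract the max before exponentiating, for stability
        return [math.exp(ai - m) for ai in a]

    def mean_for(c):
        w = weights(c)
        z = sum(w)
        return sum(wi * h(x) for wi, x in zip(w, support)) / z

    lo, hi = -50.0, 50.0  # bracket for the multiplier c
    for _ in range(200):  # the constrained mean is increasing in c
        mid = (lo + hi) / 2
        if mean_for(mid) < target:
            lo = mid
        else:
            hi = mid
    w = weights((lo + hi) / 2)
    z = sum(w)
    return [wi / z for wi in w]
```

With h(x) = x on {0, …, 10} and the unconstrained mean 5 as target, c = 0 and the maximizer is uniform; lowering the target mean tilts the exponential weights toward small x.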

  3. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.

  4. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets, soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.
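The MIMRCV algorithm itself is not spelled out in these abstracts; the following is a generic sketch of the underlying idea only — rank descriptors by mutual information with the target property and, when a candidate is collinear with an already-selected descriptor, skip it in favor of the next-ranked one. The histogram MI estimator, the correlation threshold, and the function names are illustrative assumptions:

```python
import math

def mutual_info(x, y, bins=8):
    """Histogram estimate of mutual information between two
    continuous variables, in nats."""
    def bin_of(v, lo, hi):
        return 0 if hi == lo else min(bins - 1, int((v - lo) / (hi - lo) * bins))
    n = len(x)
    bx = [bin_of(v, min(x), max(x)) for v in x]
    by = [bin_of(v, min(y), max(y)) for v in y]
    pxy, px, py = {}, {}, {}
    for a, b in zip(bx, by):
        pxy[a, b] = pxy.get((a, b), 0) + 1
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
    return sum((c / n) * math.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def pearson(x, y):
    """Pearson correlation, used here as the collinearity measure."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy) if sx and sy else 0.0

def select_descriptors(X, y, k, collinear_r=0.9):
    """Greedy MI ranking that skips candidates collinear with an
    already-selected descriptor (a simplified stand-in for MIMRCV)."""
    order = sorted(range(len(X)), key=lambda i: mutual_info(X[i], y),
                   reverse=True)
    chosen = []
    for i in order:
        if len(chosen) == k:
            break
        if all(abs(pearson(X[i], X[j])) < collinear_r for j in chosen):
            chosen.append(i)
    return chosen
```

Given two perfectly collinear descriptors, only one of the pair survives selection, and the freed slot goes to the next most informative variable.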

  5. Inter-tester reliability of selected clinical tests for long-lasting temporomandibular disorders.

    Science.gov (United States)

    Julsvoll, Elisabeth Heggem; Vøllestad, Nina Køpke; Opseth, Gro; Robinson, Hilde Stendal

    2017-09-01

    Clinical tests used to examine patients with temporomandibular disorders vary in methodological quality, and some are not tested for reliability. The purpose of this cross-sectional study was to evaluate inter-tester reliability of clinical tests and a cluster of tests used to examine patients with long-lasting temporomandibular disorders. Forty patients with pain in the temporomandibular area treated by health professionals were included. They were between 18 and 70 years of age and had 65 symptomatic (33 right/32 left) and 15 asymptomatic joints. Two manual therapists examined all participants with the selected tests. Percentage agreement and the kappa coefficient (k) with 95% confidence interval (CI) were used to evaluate the tests with categorical outcomes. For tests with continuous outcomes, the relative inter-tester reliability was assessed by the intraclass correlation coefficient (ICC₃,₁, 95% CI) and the absolute reliability was calculated by the smallest detectable change (SDC). The best reliability among single tests was found for the dental stick test, the joint-sound test (k = 0.80-1.0) and range of mouth-opening (ICC₃,₁ (95% CI) = 0.97 (0.95-0.98) and SDC = 4 mm). The reliability of the cluster of tests was excellent with both four and five positive tests out of seven. The reliability was good to excellent for the clinical tests and the cluster of tests when performed by experienced therapists. The tests are feasible for use in the clinical setting. They require no advanced equipment and are easy to perform.

  6. A mechanism of extreme growth and reliable signaling in sexually selected ornaments and weapons.

    Science.gov (United States)

    Emlen, Douglas J; Warren, Ian A; Johns, Annika; Dworkin, Ian; Lavine, Laura Corley

    2012-08-17

    Many male animals wield ornaments or weapons of exaggerated proportions. We propose that increased cellular sensitivity to signaling through the insulin/insulin-like growth factor (IGF) pathway may be responsible for the extreme growth of these structures. We document how rhinoceros beetle horns, a sexually selected weapon, are more sensitive to nutrition and more responsive to perturbation of the insulin/IGF pathway than other body structures. We then illustrate how enhanced sensitivity to insulin/IGF signaling in a growing ornament or weapon would cause heightened condition sensitivity and increased variability in expression among individuals--critical properties of reliable signals of male quality. The possibility that reliable signaling arises as a by-product of the growth mechanism may explain why trait exaggeration has evolved so many different times in the context of sexual selection.

  7. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    Science.gov (United States)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.
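The full method (a genetic algorithm over carrier selections combined with minimal paths and the Recursive Sum of Disjoint Products) is beyond a short sketch, but the reliability evaluation for one fixed carrier selection can be illustrated on parallel routes, where the capacity of each chosen carrier is a discrete random variable. The network shape, names, and data layout below are illustrative assumptions, not the paper's formulation:

```python
from itertools import product

def route_reliability(selection, demand):
    """Probability that total delivered capacity over parallel routes
    meets demand, by enumerating the discrete capacity states of the
    carrier chosen for each route.  `selection` maps each route to a
    list of (capacity, probability) pairs for its chosen carrier."""
    rel = 0.0
    dists = list(selection.values())
    for combo in product(*dists):  # one capacity state per route
        prob = 1.0
        total = 0
        for cap, p in combo:
            prob *= p
            total += cap
        if total >= demand:
            rel += prob
    return rel
```

An optimizer in the spirit of the paper would search over which carrier to assign to each route so as to maximize this probability; exhaustive enumeration is only viable for small examples, which is why the authors resort to a genetic algorithm with disjoint-products bounding.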

  8. Failure database and tools for wind turbine availability and reliability analyses. The application of reliability data for selected wind turbines

    DEFF Research Database (Denmark)

    Kozine, Igor; Christensen, P.; Winther-Jensen, M.

    2000-01-01

    The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine components and subsystems and wind turbines as a whole, as well as for assessing wind turbine availability while ranking the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting wind turbine (WTB) availability and the reliability of all the components and systems, especially the safety system. The database was established with the Microsoft Access Database Management System; the software for reliability and availability assessments was created with Visual Basic. The report includes a description of the theoretical basis of the work.

  9. The Impact Analysis of Psychological Reliability of Population Pilot Study For Selection of Particular Reliable Multi-Choice Item Test in Foreign Language Research Work

    Directory of Open Access Journals (Sweden)

    Seyed Hossein Fazeli

    2010-10-01

    Full Text Available The purpose of the research described in the current study is to examine psychological reliability, its importance and application, and in particular to analyze the impact of a population pilot study on the selection of a reliable multiple-choice item test in foreign language research work. The population for subject recruitment was all second-semester undergraduate students (both male and female) at a large university in Iran who study English as a compulsory paper. In Iran, English is taught as a foreign language.

  10. Peer-review for selection of oral presentations for conferences: Are we reliable?

    Science.gov (United States)

    Deveugele, Myriam; Silverman, Jonathan

    2017-11-01

    Although peer-review for journal submissions, grant applications and conference submissions has been called 'a cornerstone of science', and even 'the gold standard for evaluating scientific merit', publications on this topic remain scarce. Research that has investigated peer-review reveals several issues and criticisms concerning bias, poor quality review, unreliability and inefficiency. The most important weakness of the peer-review process is the inconsistency between reviewers, leading to inadequate inter-rater reliability. We report the reliability of ratings for a large international conference and suggest possible solutions to overcome the problem. In 2016, during the International Conference on Communication in Healthcare, organized by EACH: International Association for Communication in Healthcare, a calibration exercise was run and feedback was reported back to the participants of the exercise. Most abstracts, as well as most peer-reviewers, receive and give scores around the median. Contrary to the general assumption that there are high and low scorers, in this group only 3 peer-reviewers could be identified with a high mean score, while 7 had a low mean score. Only 2 reviewers gave only high ratings (4 and 5). Of the eight abstracts included in this exercise, only one received a high mean score and one a low mean score. Nevertheless, both these abstracts received both low and high scores; all other abstracts received all possible scores. Peer-review of submissions for conferences is, in accordance with the literature, unreliable. New and creative methods will be needed to give the participants of a conference what they really deserve: a more reliable selection of the best abstracts.
More raters per abstract improves the inter-rater reliability; training of reviewers could be helpful; providing feedback to reviewers can lead to less inter-rater disagreement; fostering negative peer-review (rejecting the inappropriate submissions) rather than a

  11. Gamma prior distribution selection for Bayesian analysis of failure rate and reliability

    International Nuclear Information System (INIS)

    Waler, R.A.; Johnson, M.M.; Waterman, M.S.; Martz, H.F. Jr.

    1977-01-01

    It is assumed that the phenomenon under study is such that the time-to-failure may be modeled by an exponential distribution with failure-rate parameter, lambda. For Bayesian analyses of the assumed model, the family of gamma distributions provides conjugate prior models for lambda. Thus, an experimenter needs to select a particular gamma model to conduct a Bayesian reliability analysis. The purpose of this paper is to present a methodology which can be used to translate engineering information, experience, and judgment into a choice of a gamma prior distribution. The proposed methodology assumes that the practicing engineer can provide percentile data relating to either the failure rate or the reliability of the phenomenon being investigated. For example, the methodology will select the gamma prior distribution which conveys an engineer's belief that the failure rate, lambda, simultaneously satisfies the probability statements, P(lambda less than 1.0 x 10⁻³) = 0.50 and P(lambda less than 1.0 x 10⁻⁵) = 0.05. That is, two percentiles provided by an engineer are used to determine a gamma prior model which agrees with the specified percentiles. For those engineers who prefer to specify reliability percentiles rather than the failure-rate percentiles illustrated above, one can use the induced negative-log gamma prior distribution which satisfies the probability statements, P(R(t₀) less than 0.99) = 0.50 and P(R(t₀) less than 0.99999) = 0.95 for some operating time t₀. Also, the paper includes graphs for selected percentiles which assist an engineer in applying the methodology

  12. Gamma prior distribution selection for Bayesian analysis of failure rate and reliability

    International Nuclear Information System (INIS)

    Waller, R.A.; Johnson, M.M.; Waterman, M.S.; Martz, H.F. Jr.

    1976-07-01

    It is assumed that the phenomenon under study is such that the time-to-failure may be modeled by an exponential distribution with failure rate lambda. For Bayesian analyses of the assumed model, the family of gamma distributions provides conjugate prior models for lambda. Thus, an experimenter needs to select a particular gamma model to conduct a Bayesian reliability analysis. The purpose of this report is to present a methodology that can be used to translate engineering information, experience, and judgment into a choice of a gamma prior distribution. The proposed methodology assumes that the practicing engineer can provide percentile data relating to either the failure rate or the reliability of the phenomenon being investigated. For example, the methodology will select the gamma prior distribution which conveys an engineer's belief that the failure rate lambda simultaneously satisfies the probability statements, P(lambda less than 1.0 x 10⁻³) equals 0.50 and P(lambda less than 1.0 x 10⁻⁵) equals 0.05. That is, two percentiles provided by an engineer are used to determine a gamma prior model which agrees with the specified percentiles. For those engineers who prefer to specify reliability percentiles rather than the failure rate percentiles illustrated above, it is possible to use the induced negative-log gamma prior distribution which satisfies the probability statements, P(R(t₀) less than 0.99) equals 0.50 and P(R(t₀) less than 0.99999) equals 0.95, for some operating time t₀. The report also includes graphs for selected percentiles which assist an engineer in applying the procedure. 28 figures, 16 tables
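The percentile-matching step in these two reports can be sketched numerically. Because the gamma family is closed under scale changes, the ratio of two quantiles depends only on the shape parameter, so one can bisect on the shape to match the quantile ratio and then set the scale from either percentile. The series-based incomplete gamma function and the bracketing constants below are implementation assumptions, not from the reports:

```python
import math

def reg_lower_gamma(a, x):
    """Regularized lower incomplete gamma P(a, x): the CDF of a
    Gamma(shape=a, scale=1) variable, via the standard power series."""
    if x <= 0:
        return 0.0
    term = math.exp(a * math.log(x) - x - math.lgamma(a + 1))
    total, denom = term, a
    while term > total * 1e-15:
        denom += 1.0
        term *= x / denom
        total += term
    return min(total, 1.0)

def gamma_quantile(a, p, hi=200.0):
    """Invert the CDF by bisection (adequate for shapes up to ~10)."""
    lo = 0.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if reg_lower_gamma(a, mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def fit_gamma_prior(x1, p1, x2, p2):
    """Find (shape, scale) with P(lambda < x1) = p1 and
    P(lambda < x2) = p2, for x1 > x2.  By scale invariance the
    quantile ratio pins down the shape alone."""
    target = x1 / x2
    lo, hi = 0.05, 10.0  # shape bracket; the ratio falls as shape grows
    for _ in range(60):
        mid = (lo + hi) / 2
        ratio = gamma_quantile(mid, p1) / gamma_quantile(mid, p2)
        if ratio > target:
            lo = mid
        else:
            hi = mid
    shape = (lo + hi) / 2
    scale = x1 / gamma_quantile(shape, p1)
    return shape, scale
```

For the worked example in the abstracts, fit_gamma_prior(1.0e-3, 0.50, 1.0e-5, 0.05) returns a gamma prior whose CDF reproduces both probability statements to within numerical tolerance.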

  13. System reliability analysis using dominant failure modes identified by selective searching technique

    International Nuclear Information System (INIS)

    Kim, Dong-Seok; Ok, Seung-Yong; Song, Junho; Koh, Hyun-Moo

    2013-01-01

    The failure of a redundant structural system is often described by innumerable system failure modes such as combinations or sequences of local failures. An efficient approach is proposed to identify dominant failure modes in the space of random variables, and then perform system reliability analysis to compute the system failure probability. To identify dominant failure modes in the decreasing order of their contributions to the system failure probability, a new simulation-based selective searching technique is developed using a genetic algorithm. The system failure probability is computed by a multi-scale matrix-based system reliability (MSR) method. Lower-scale MSR analyses evaluate the probabilities of the identified failure modes and their statistical dependence. A higher-scale MSR analysis evaluates the system failure probability based on the results of the lower-scale analyses. Three illustrative examples demonstrate the efficiency and accuracy of the approach through comparison with existing methods and Monte Carlo simulations. The results show that the proposed method skillfully identifies the dominant failure modes, including those neglected by existing approaches. The multi-scale MSR method accurately evaluates the system failure probability with statistical dependence fully considered. The decoupling between the failure mode identification and the system reliability evaluation allows for effective applications to larger structural systems

  14. Nuclease Target Site Selection for Maximizing On-target Activity and Minimizing Off-target Effects in Genome Editing

    Science.gov (United States)

    Lee, Ciaran M; Cradick, Thomas J; Fine, Eli J; Bao, Gang

    2016-01-01

    The rapid advancement in targeted genome editing using engineered nucleases such as ZFNs, TALENs, and CRISPR/Cas9 systems has resulted in a suite of powerful methods that allows researchers to target any genomic locus of interest. A complementary set of design tools has been developed to aid researchers with nuclease design, target site selection, and experimental validation. Here, we review the various tools available for target selection in designing engineered nucleases, and for quantifying nuclease activity and specificity, including web-based search tools and experimental methods. We also elucidate challenges in target selection, especially in predicting off-target effects, and discuss future directions in precision genome editing and its applications. PMID:26750397

  15. Preparation of methodology for reliability analysis of selected digital segments of the instrumentation and control systems of NPPs. Pt. 1

    International Nuclear Information System (INIS)

    Hustak, S.; Patrik, M.; Babic, P.

    2000-12-01

    The report is structured as follows: (i) Introduction; (ii) Important notions relating to the safety and dependability of software systems for nuclear power plants (selected notions from IAEA Technical Report No. 397; safety aspects of software application; reliability/dependability aspects of digital systems); (iii) Peculiarities of digital systems and ways to a dependable performance of the required function (failures in the system and principles of defence against them; ensuring resistance of digital systems against failures at various hardware and software levels); (iv) The issue of analytical procedures to assess the safety and reliability of safety-related digital systems (safety and reliability assessment at an early stage of the project; general framework of reliability analysis of complex systems; choice of an appropriate quantitative measure of software reliability); (v) Selected qualitative and quantitative information about the reliability of digital systems; the use of relations between the incidence of various types of faults); and (vi) Conclusions and recommendations. (P.A.)

  16. Improving preimplantation genetic diagnosis (PGD) reliability by selection of sperm donor with the most informative haplotype.

    Science.gov (United States)

    Malcov, Mira; Gold, Veronica; Peleg, Sagit; Frumkin, Tsvia; Azem, Foad; Amit, Ami; Ben-Yosef, Dalit; Yaron, Yuval; Reches, Adi; Barda, Shimi; Kleiman, Sandra E; Yogev, Leah; Hauser, Ron

    2017-04-26

The study aims to describe a novel strategy that increases the accuracy and reliability of PGD in patients using sperm donation by pre-selecting a donor whose haplotype does not overlap the carrier's. A panel of 4-9 informative polymorphic markers, flanking the mutation in carriers of autosomal dominant/X-linked disorders, was tested in DNA of sperm donors before PGD. Whenever the lengths of donors' repeats overlapped those of the women, additional donors' DNA samples were analyzed. The donor that demonstrated the minimal overlap with the patient was selected for IVF. In 8 out of 17 carriers the markers of the initially chosen donors overlapped the patients' alleles, and 2-8 additional sperm donors for each patient were haplotyped. The selection of additional sperm donors increased the number of informative markers and reduced the misdiagnosis risk from 6.00% ± 7.48 to 0.48% ± 0.68. The PGD results were confirmed and no misdiagnosis was detected. Our study demonstrates that pre-selecting a sperm donor whose haplotype has minimal overlap with the female's haplotype is critical for reducing the misdiagnosis risk and ensuring a reliable PGD. This strategy may help prevent the transmission of affected embryos in IVF-PGD using a simple and economical procedure. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. DNA testing of donors was approved by the institutional Helsinki committee (registration number 319-08TLV, 2008). The present study was approved by the institutional Helsinki committee (registration number 0385-13TLV, 2013).
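The donor pre-selection step described above reduces to a simple overlap count over polymorphic markers. A minimal sketch, with made-up marker names and repeat lengths rather than the study's actual panel:

```python
# Hypothetical sketch: pick the sperm donor whose polymorphic-marker alleles
# overlap least with the carrier's, maximizing informative markers for PGD.
# Marker names and allele lengths below are illustrative, not from the study.

def informative_markers(carrier, donor):
    """Count markers where the donor's repeat lengths do not overlap the carrier's."""
    return sum(1 for m in carrier
               if m in donor and not (set(carrier[m]) & set(donor[m])))

def select_donor(carrier, donors):
    """Return the donor id with the most informative (non-overlapping) markers."""
    return max(donors, key=lambda d: informative_markers(carrier, donors[d]))

carrier = {"D1S1": (150, 154), "D1S2": (200, 204), "D1S3": (98, 102)}
donors = {
    "donor_A": {"D1S1": (150, 158), "D1S2": (210, 214), "D1S3": (98, 106)},
    "donor_B": {"D1S1": (160, 164), "D1S2": (210, 218), "D1S3": (104, 108)},
}
print(select_donor(carrier, donors))  # donor_B: all three markers informative
```

Donor A shares an allele with the carrier at two of the three markers, so only donor B leaves every marker informative.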

  17. Relay Selections for Security and Reliability in Mobile Communication Networks over Nakagami-m Fading Channels

    Directory of Open Access Journals (Sweden)

    Hongji Huang

    2017-01-01

Full Text Available This paper studies relay selection schemes in a mobile communication system over the Nakagami-m channel. To make efficient use of licensed spectrum, both a single relay selection (SRS) scheme and a multirelay selection (MRS) scheme over the Nakagami-m channel are proposed. Also, the intercept probability (IP) and outage probability (OP) of the proposed SRS and MRS schemes for communication links depending on realistic spectrum sensing are derived. Furthermore, the paper evaluates the performance of the conventional direct transmission scheme for comparison with the proposed SRS and MRS schemes over the Nakagami-m channel, and the security-reliability trade-off (SRT) performance of the proposed and conventional schemes is investigated. The SRT of the proposed SRS and MRS schemes is shown to be better than that of the direct transmission scheme over the Nakagami-m channel, which helps protect communication transmissions against eavesdropping attacks. Simulation results confirm that the proposed relay selection schemes achieve better SRT performance than conventional direct transmission over the Nakagami-m channel.
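The outage-probability comparison between direct transmission and single relay selection can be illustrated with a Monte Carlo sketch (an illustration only; the paper derives closed-form expressions). The squared envelope of a Nakagami-m link is gamma-distributed, which is all the simulation needs; the m, SNR, and rate values below are arbitrary:

```python
import numpy as np

# Illustrative Monte Carlo estimate (not the paper's analysis): outage
# probability of direct transmission vs. single relay selection (SRS) over
# Nakagami-m fading. The power gain of a Nakagami-m link ~ Gamma(m, omega/m).
rng = np.random.default_rng(0)

def outage_prob(m, omega, snr_db, rate, n_links=1, trials=200_000):
    snr = 10 ** (snr_db / 10)
    g = rng.gamma(shape=m, scale=omega / m, size=(trials, n_links))
    best = g.max(axis=1)                  # SRS picks the strongest relay link
    capacity = np.log2(1 + snr * best)    # bits/s/Hz on the selected link
    return np.mean(capacity < rate)       # fraction of trials in outage

p_direct = outage_prob(m=2, omega=1.0, snr_db=5, rate=2.0, n_links=1)
p_srs    = outage_prob(m=2, omega=1.0, snr_db=5, rate=2.0, n_links=4)
print(f"direct: {p_direct:.3f}, SRS over 4 relays: {p_srs:.3f}")
```

With independent, identically faded links, selecting the best of four drives the outage probability down to roughly the fourth power of the single-link value, which the simulation reproduces.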

  18. Maximizing Wellness in Successful Aging and Cancer Coping: The Importance of Family Communication from a Socioemotional Selectivity Theoretical Perspective

    OpenAIRE

    Fisher, Carla L.; Nussbaum, Jon F.

    2015-01-01

    Interpersonal communication is a fundamental part of being and key to health. Interactions within family are especially critical to wellness across time. Family communication is a central means of adaptation to stress, coping, and successful aging. Still, no theoretical argument in the discipline exists that prioritizes kin communication in health. Theoretical advances can enhance interventions and policies that improve family life. This article explores socioemotional selectivity theory (SST...

  19. reliability reliability

    African Journals Online (AJOL)

    eobe


  20. Reliability of maximal isometric knee strength testing with modified hand-held dynamometry in patients awaiting total knee arthroplasty: useful in research and individual patient settings? A reliability study

    NARCIS (Netherlands)

    Koblbauer, Ian F. H.; Lambrecht, Yannick; van der Hulst, Micheline L. M.; Neeter, Camille; Engelbert, Raoul H. H.; Poolman, Rudolf W.; Scholtes, Vanessa A.

    2011-01-01

    Patients undergoing total knee arthroplasty (TKA) often experience strength deficits both pre- and post-operatively. As these deficits may have a direct impact on functional recovery, strength assessment should be performed in this patient population. For these assessments, reliable measurements

  1. Reliability of maximal isometric knee strength testing with modified hand-held dynamometry in patients awaiting total knee arthroplasty: useful in research and individual patient settings? A reliability study

    NARCIS (Netherlands)

    Koblbauer, I.F.H.; Lambrecht, Y.; van der Hulst, M.L.M.; Neeter, C.; Engelbert, R.H.H.; Poolman, R.W.; Scholtes, V.A.

    2011-01-01

    Background: Patients undergoing total knee arthroplasty (TKA) often experience strength deficits both pre- and post-operatively. As these deficits may have a direct impact on functional recovery, strength assessment should be performed in this patient population. For these assessments, reliable

  2. Sensor Selection and Data Validation for Reliable Integrated System Health Management

    Science.gov (United States)

    Garg, Sanjay; Melcher, Kevin J.

    2008-01-01

For new access to space systems with challenging mission requirements, effective implementation of integrated system health management (ISHM) must be available early in the program to support the design of systems that are safe, reliable, and highly autonomous. Early ISHM availability is also needed to promote design for affordable operations; increased knowledge of functional health provided by ISHM supports construction of more efficient operations infrastructure. Lack of early ISHM inclusion in the system design process could result in retrofitting health management systems to augment and expand operational and safety requirements, thereby increasing program cost and risk due to increased instrumentation and computational complexity. Having the right sensors generating the required data to perform condition assessment, such as fault detection and isolation, with a high degree of confidence is critical to reliable operation of ISHM. Also, the data being generated by the sensors need to be qualified to ensure that the assessments made by the ISHM are not based on faulty data. NASA Glenn Research Center has been developing technologies for sensor selection and data validation as part of the FDDR (Fault Detection, Diagnosis, and Response) element of the Upper Stage project of the Ares 1 launch vehicle development. This presentation will provide an overview of the GRC approach to sensor selection and data quality validation and will present recent results from applications that are representative of the complexity of propulsion systems for access to space vehicles. A brief overview of the sensor selection and data quality validation approaches is provided below. The NASA GRC developed Systematic Sensor Selection Strategy (S4) is a model-based procedure for systematically and quantitatively selecting an optimal sensor suite to provide overall health assessment of a host system. 
S4 can be logically partitioned into three major subdivisions: the knowledge base, the down-select
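The down-select subdivision lends itself to a set-cover style illustration. The greedy sketch below is not the actual S4 algorithm, and the sensor names, fault modes, and costs are invented; it only shows the flavor of quantitatively trimming a sensor suite while retaining full fault coverage:

```python
# Hedged sketch of a sensor down-select step in the spirit of S4 (not the
# actual NASA GRC algorithm): greedily pick a low-cost sensor suite that
# still detects every fault mode at least once.

def greedy_downselect(detects, costs):
    """detects: sensor -> set of fault modes it can observe."""
    needed = set().union(*detects.values())
    chosen, covered = [], set()
    while covered != needed:
        # best marginal fault coverage per unit cost among remaining sensors
        s = max((s for s in detects if s not in chosen),
                key=lambda s: len(detects[s] - covered) / costs[s])
        chosen.append(s)
        covered |= detects[s]
    return chosen

# Invented example: four candidate sensors, four fault modes F1..F4.
detects = {"p_chamber": {"F1", "F2"}, "t_turbine": {"F2", "F3"},
           "flow_fuel": {"F1", "F3", "F4"}, "vib_pump": {"F4"}}
costs = {"p_chamber": 1.0, "t_turbine": 1.0, "flow_fuel": 2.0, "vib_pump": 0.5}
print(greedy_downselect(detects, costs))
```

The greedy ratio rule is a classic set-cover heuristic; a real down-select would weigh detection confidence and isolation, not just coverage counts.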

  3. Maximizing Wellness in Successful Aging and Cancer Coping: The Importance of Family Communication from a Socioemotional Selectivity Theoretical Perspective

    Science.gov (United States)

    Fisher, Carla L.; Nussbaum, Jon F.

    2015-01-01

    Interpersonal communication is a fundamental part of being and key to health. Interactions within family are especially critical to wellness across time. Family communication is a central means of adaptation to stress, coping, and successful aging. Still, no theoretical argument in the discipline exists that prioritizes kin communication in health. Theoretical advances can enhance interventions and policies that improve family life. This article explores socioemotional selectivity theory (SST), which highlights communication in our survival. Communication partner choice is based on one's time perspective, which affects our prioritization of goals to survive—goals sought socially. This is a first test of SST in a family communication study on women's health and aging. More than 300 women of varying ages and health status participated. Two time factors, later adulthood and late-stage breast cancer, lead women to prioritize family communication. Findings provide a theoretical basis for prioritizing family communication issues in health reform. PMID:26997920

  4. Maximizing Wellness in Successful Aging and Cancer Coping: The Importance of Family Communication from a Socioemotional Selectivity Theoretical Perspective.

    Science.gov (United States)

    Fisher, Carla L; Nussbaum, Jon F

    Interpersonal communication is a fundamental part of being and key to health. Interactions within family are especially critical to wellness across time. Family communication is a central means of adaptation to stress, coping, and successful aging. Still, no theoretical argument in the discipline exists that prioritizes kin communication in health. Theoretical advances can enhance interventions and policies that improve family life. This article explores socioemotional selectivity theory (SST), which highlights communication in our survival. Communication partner choice is based on one's time perspective, which affects our prioritization of goals to survive-goals sought socially. This is a first test of SST in a family communication study on women's health and aging. More than 300 women of varying ages and health status participated. Two time factors, later adulthood and late-stage breast cancer, lead women to prioritize family communication. Findings provide a theoretical basis for prioritizing family communication issues in health reform.

  5. Effects of a shade-matching light and background color on reliability in tooth shade selection.

    Science.gov (United States)

    Najafi-Abrandabadi, Siamak; Vahidi, Farhad; Janal, Malvin N

    2018-01-01

    The purpose of this study was to evaluate the effects of a shade-matching light (Rite-Lite-2, AdDent) and different viewing backgrounds on reliability in a test of shade tab matching. Four members of the Prosthodontic faculty matched 10 shade tabs selected for a range of shades against the shade guide. All raters were tested for color blindness and were calibrated prior to the study. Matching took place under four combinations of conditions: with operatory light or the shade-matching light, and using either a pink or a blue background. Reliability was quantified with the kappa statistic, separately for agreement of value, hue, and chroma for each shade tab. In general, raters showed fair to moderate levels of agreement when judging the value of the shade tabs, but could not agree on the hue and chroma of the stimuli. The pink background led to higher levels of agreement than the blue background, and the shade-matching light improved agreement when used in conjunction with the pink but not the blue background. Moderate levels of agreement were found in matching shade tab value. Agreement was generally better when using the pink rather than the blue background, regardless of light source. The use of the shade-matching light tended to amplify the advantage of the pink background.

  6. The selection of field component reliability data for use in nuclear safety studies

    International Nuclear Information System (INIS)

    Coxson, B.A.; Tabaie, Mansour

    1990-01-01

    The paper reviews the user requirements for field component failure data in nuclear safety studies, and the capability of various data sources to satisfy these requirements. Aspects such as estimating the population of items exposed to failure, incompleteness, and under-reporting problems are discussed. The paper takes as an example the selection of component reliability data for use in the Pre-Operational Safety Report (POSR) for Sizewell 'B' Power Station, where field data has in many cases been derived from equipment other than that to be procured and operated on site. The paper concludes that the main quality sought in the available data sources for such studies is the ability to examine failure narratives in component reliability data systems for equipment performing comparable duties to the intended plant application. The main benefit brought about in the last decade is the interactive access to data systems which are adequately structured with regard to the equipment covered, and also provide a text-searching capability of quality-controlled event narratives. (author)

  7. A Selective Role for Dopamine in Learning to Maximize Reward But Not to Minimize Effort: Evidence from Patients with Parkinson's Disease.

    Science.gov (United States)

    Skvortsova, Vasilisa; Degos, Bertrand; Welter, Marie-Laure; Vidailhet, Marie; Pessiglione, Mathias

    2017-06-21

Instrumental learning is a fundamental process through which agents optimize their choices, taking into account various dimensions of available options such as the possible reward or punishment outcomes and the costs associated with potential actions. Although the implication of dopamine in learning from choice outcomes is well established, less is known about its role in learning the action costs such as effort. Here, we tested the ability of patients with Parkinson's disease (PD) to maximize monetary rewards and minimize physical efforts in a probabilistic instrumental learning task. The implication of dopamine was assessed by comparing performance ON and OFF prodopaminergic medication. In a first sample of PD patients (n = 15), we observed that reward learning, but not effort learning, was selectively impaired in the absence of treatment, with a significant interaction between learning condition (reward vs effort) and medication status (OFF vs ON). These results were replicated in a second, independent sample of PD patients (n = 20) using a simplified version of the task. According to Bayesian model selection, the best account for medication effects in both studies was a specific amplification of reward magnitude in a Q-learning algorithm. These results suggest that learning to avoid physical effort is independent from dopaminergic circuits and strengthen the general idea that dopaminergic signaling amplifies the effects of reward expectation or obtainment on instrumental behavior. SIGNIFICANCE STATEMENT Theoretically, maximizing reward and minimizing effort could involve the same computations and therefore rely on the same brain circuits. Here, we tested whether dopamine, a key component of reward-related circuitry, is also implicated in effort learning. We found that patients suffering from dopamine depletion due to Parkinson's disease were selectively impaired in reward learning, but not effort learning. 
Moreover, anti-parkinsonian medication restored the
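The winning model family, reward-magnitude amplification inside Q-learning, can be sketched in a few lines. The learning rate and gain values are illustrative, not the paper's fitted parameters:

```python
# Minimal sketch of reward-magnitude amplification in Q-learning: medication
# ON is modeled as a gain kappa > 1 applied to reward outcomes, while effort
# outcomes would be passed through unscaled (amplify=False).
def q_update(q_old, outcome, alpha=0.3, kappa=1.0, amplify=True):
    """One Q-learning step: q <- q + alpha * (kappa * outcome - q)."""
    target = kappa * outcome if amplify else outcome
    return q_old + alpha * (target - q_old)

# ON medication (kappa = 1.5) vs OFF (kappa = 1.0), learning a 1-unit reward:
q_on = q_off = 0.0
for _ in range(10):
    q_on = q_update(q_on, 1.0, kappa=1.5)
    q_off = q_update(q_off, 1.0, kappa=1.0)
print(q_on, q_off)  # ON converges toward 1.5, OFF toward 1.0
```

The asymmetry is the point: the same update applied to effort costs with `amplify=False` is unaffected by kappa, matching the reported dissociation.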

  8. Semi-structured interview is a reliable and feasible tool for selection of doctors for general practice specialist training

    DEFF Research Database (Denmark)

    Isaksen, Jesper; Hertel, Niels Thomas; Kjær, Niels Kristian

    2013-01-01

In order to optimise the selection process for admission to specialist training in family medicine, we developed a new design for structured applications and selection interviews. The design contains semi-structured interviews, which combine individualised elements from the applications with standardised behaviour-based questions. This paper describes the design of the tool, and offers reflections concerning its acceptability, reliability and feasibility.

  9. Validation and selection of ODE based systems biology models: how to arrive at more reliable decisions.

    Science.gov (United States)

    Hasdemir, Dicle; Hoefsloot, Huub C J; Smilde, Age K

    2015-07-08

Most ordinary differential equation (ODE) based modeling studies in systems biology involve a hold-out validation step for model validation. In this framework a pre-determined part of the data is used as validation data and, therefore, is not used for estimating the parameters of the model. The model is assumed to be validated if the model predictions on the validation dataset show good agreement with the data. Model selection between alternative model structures can also be performed in the same setting, based on the predictive power of the model structures on the validation dataset. However, drawbacks associated with this approach are usually underestimated. We have carried out simulations by using a recently published High Osmolarity Glycerol (HOG) pathway from S. cerevisiae to demonstrate these drawbacks. We have shown that it is very important how the data is partitioned and which part of the data is used for validation purposes. The hold-out validation strategy leads to biased conclusions, since it can lead to different validation and selection decisions when different partitioning schemes are used. Furthermore, finding sensible partitioning schemes that would lead to reliable decisions is heavily dependent on the biology and unknown model parameters, which turns the problem into a paradox. This brings the need for alternative validation approaches that offer flexible partitioning of the data. For this purpose, we have introduced a stratified random cross-validation (SRCV) approach that successfully overcomes these limitations. SRCV leads to more stable decisions for both validation and selection which are not biased by underlying biological phenomena. Furthermore, it is less dependent on the specific noise realization in the data. Therefore, it proves to be a promising alternative to the standard hold-out validation strategy.
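The core of a stratified random cross-validation scheme like the SRCV described above is that fold assignment is randomized *within* each stratum, so every fold sees the same mix of experimental conditions. A minimal sketch with invented stratum labels:

```python
import numpy as np

# Hedged sketch of stratified random fold assignment (an illustration of the
# general SRCV idea, not the paper's exact procedure): shuffle each stratum
# separately, then deal its members round-robin across the k folds.
def stratified_folds(strata, k, rng):
    folds = np.empty(len(strata), dtype=int)
    for s in np.unique(strata):
        idx = np.flatnonzero(strata == s)
        rng.shuffle(idx)
        folds[idx] = np.arange(len(idx)) % k   # round-robin within the stratum
    return folds

rng = np.random.default_rng(1)
strata = np.array(["osmotic"] * 40 + ["control"] * 40)  # two conditions
folds = stratified_folds(strata, k=4, rng=rng)
# each of the 4 folds now holds 10 osmotic and 10 control samples
```

Holding out one such fold at a time keeps every validation set representative of all conditions, avoiding the partition-dependence of a single hold-out split.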

  10. Seven Reliability Indices for High-Stakes Decision Making: Description, Selection, and Simple Calculation

    Science.gov (United States)

    Smith, Stacey L.; Vannest, Kimberly J.; Davis, John L.

    2011-01-01

    The reliability of data is a critical issue in decision-making for practitioners in the school. Percent Agreement and Cohen's kappa are the two most widely reported indices of inter-rater reliability, however, a recent Monte Carlo study on the reliability of multi-category scales found other indices to be more trustworthy given the type of data…
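The two indices named above can be computed directly; a minimal sketch for two raters over nominal categories (the ratings are invented):

```python
# Percent Agreement and Cohen's kappa for two raters over the same items.
from collections import Counter

def percent_agreement(r1, r2):
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    n = len(r1)
    p_o = percent_agreement(r1, r2)                # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_e = sum(c1[c] * c2[c] for c in c1) / n**2    # chance agreement
    return (p_o - p_e) / (1 - p_e)

r1 = ["A", "A", "B", "B", "C", "C", "A", "B"]
r2 = ["A", "A", "B", "C", "C", "C", "A", "A"]
print(percent_agreement(r1, r2), round(cohens_kappa(r1, r2), 3))
```

Kappa is lower than raw agreement here because it discounts the matches two raters would produce by chance alone, which is exactly why it is preferred for high-stakes decisions.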

  11. Selection of reliable reference genes in Caenorhabditis elegans for analysis of nanotoxicity.

    Directory of Open Access Journals (Sweden)

    Yanqiong Zhang

Full Text Available Despite rapid development and application of a wide range of manufactured metal oxide nanoparticles (NPs), the understanding of the potential risks of using NPs is less complete, especially at the molecular level. The nematode Caenorhabditis elegans (C. elegans) has been emerging as an environmental model to study the molecular mechanism of environmental contaminations, using standard genetic tools such as the real-time quantitative PCR (RT-qPCR). The most important factor that may affect the accuracy of RT-qPCR is the choice of appropriate genes for normalization. In this study, we selected 13 reference gene candidates (act-1, cdc-42, pmp-3, eif-3.C, actin, act-2, csq-1, Y45F10D.4, tba-1, mdh-1, ama-1, F35G12.2, and rbd-1) to test their expression stability under different doses of nano-copper oxide (CuO: 0, 1, 10, and 50 µg/mL) using RT-qPCR. Four algorithms, geNorm, NormFinder, BestKeeper, and the comparative ΔCt method, were employed to evaluate these 13 candidates' expression stability. As a result, tba-1, Y45F10D.4 and pmp-3 were the most reliable, and may be used as reference genes in future studies of nanoparticle-induced genetic responses using C. elegans.

  12. Selection of reliable reference genes in Caenorhabditis elegans for analysis of nanotoxicity.

    Science.gov (United States)

    Zhang, Yanqiong; Chen, Dongliang; Smith, Michael A; Zhang, Baohong; Pan, Xiaoping

    2012-01-01

Despite rapid development and application of a wide range of manufactured metal oxide nanoparticles (NPs), the understanding of the potential risks of using NPs is less complete, especially at the molecular level. The nematode Caenorhabditis elegans (C. elegans) has been emerging as an environmental model to study the molecular mechanism of environmental contaminations, using standard genetic tools such as the real-time quantitative PCR (RT-qPCR). The most important factor that may affect the accuracy of RT-qPCR is the choice of appropriate genes for normalization. In this study, we selected 13 reference gene candidates (act-1, cdc-42, pmp-3, eif-3.C, actin, act-2, csq-1, Y45F10D.4, tba-1, mdh-1, ama-1, F35G12.2, and rbd-1) to test their expression stability under different doses of nano-copper oxide (CuO: 0, 1, 10, and 50 µg/mL) using RT-qPCR. Four algorithms, geNorm, NormFinder, BestKeeper, and the comparative ΔCt method, were employed to evaluate these 13 candidates' expression stability. As a result, tba-1, Y45F10D.4 and pmp-3 were the most reliable, and may be used as reference genes in future studies of nanoparticle-induced genetic responses using C. elegans.
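Of the four algorithms named, the comparative ΔCt method is the simplest to sketch: rank candidates by the mean standard deviation of their pairwise Ct differences across samples. The gene names below echo the study, but the Ct values are invented:

```python
import numpy as np

# Sketch of the comparative delta-Ct method: a gene whose Ct difference
# against every other candidate stays constant across samples is stable.
# Lower mean SD of pairwise differences = more stable reference gene.
def delta_ct_stability(ct):                     # ct: gene -> Ct per sample
    genes = list(ct)
    score = {}
    for g in genes:
        sds = [np.std(np.asarray(ct[g]) - np.asarray(ct[h]))
               for h in genes if h != g]
        score[g] = float(np.mean(sds))
    return sorted(score, key=score.get)          # most stable first

ct = {"tba-1": [20.1, 20.2, 20.0, 20.1],         # nearly constant
      "pmp-3": [22.3, 22.4, 22.2, 22.3],         # tracks tba-1 closely
      "act-1": [18.0, 19.5, 17.2, 20.4]}         # varies across doses
print(delta_ct_stability(ct))                    # act-1 ranks last
```

A full analysis would combine this ranking with geNorm, NormFinder, and BestKeeper scores, as the study did, since the four methods weight variability differently.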

  13. Class 1-Selective Histone Deacetylase (HDAC) Inhibitors Enhance HIV Latency Reversal while Preserving the Activity of HDAC Isoforms Necessary for Maximal HIV Gene Expression.

    Science.gov (United States)

    Zaikos, Thomas D; Painter, Mark M; Sebastian Kettinger, Nadia T; Terry, Valeri H; Collins, Kathleen L

    2018-03-15

    Combinations of drugs that affect distinct mechanisms of HIV latency aim to induce robust latency reversal leading to cytopathicity and elimination of the persistent HIV reservoir. Thus far, attempts have focused on combinations of protein kinase C (PKC) agonists and pan-histone deacetylase inhibitors (HDIs) despite the knowledge that HIV gene expression is regulated by class 1 histone deacetylases. We hypothesized that class 1-selective HDIs would promote more robust HIV latency reversal in combination with a PKC agonist than pan-HDIs because they preserve the activity of proviral factors regulated by non-class 1 histone deacetylases. Here, we show that class 1-selective agents used alone or with the PKC agonist bryostatin-1 induced more HIV protein expression per infected cell. In addition, the combination of entinostat and bryostatin-1 induced viral outgrowth, whereas bryostatin-1 combinations with pan-HDIs did not. When class 1-selective HDIs were used in combination with pan-HDIs, the amount of viral protein expression and virus outgrowth resembled that of pan-HDIs alone, suggesting that pan-HDIs inhibit robust gene expression induced by class 1-selective HDIs. Consistent with this, pan-HDI-containing combinations reduced the activity of NF-κB and Hsp90, two cellular factors necessary for potent HIV protein expression, but did not significantly reduce overall cell viability. An assessment of viral clearance from in vitro cultures indicated that maximal protein expression induced by class 1-selective HDI treatment was crucial for reservoir clearance. These findings elucidate the limitations of current approaches and provide a path toward more effective strategies to eliminate the HIV reservoir. IMPORTANCE Despite effective antiretroviral therapy, HIV evades eradication in a latent form that is not affected by currently available drug regimens. Pharmacologic latency reversal that leads to death of cellular reservoirs has been proposed as a strategy for

  14. Improving accuracy of overhanging structures for selective laser melting through reliability characterization of single track formation on thick powder beds

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2016-01-01

Repeatability and reproducibility of parts produced by selective laser melting is a standing issue, and coupled with a lack of standardized quality control presents a major hindrance towards maturing of selective laser melting as an industrial scale process. Consequently, numerical process modelling has been adopted towards improving the predictability of the outputs from the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially in components having overhanging structures. In this paper, a systematic approach towards establishing reliability of overhanging structure production by selective laser melting has been adopted. A calibrated, fast, multiscale thermal model is used to simulate the single track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using the same processing parameters...

  15. Cellular scanning strategy for selective laser melting: Generating reliable, optimized scanning paths and processing parameters

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2015-01-01

method based uncertainty and reliability analysis. The reliability of the scanning paths is established using cumulative probability distribution functions for process output criteria such as sample density, thermal homogeneity, etc. A customized genetic algorithm is used along with the simulation model...

  16. Study of selected problems of reliability of the supply chain in the trading company

    Directory of Open Access Journals (Sweden)

    2010-06-01

Full Text Available The paper presents the problems of the reliability of the supply chain as a whole in dependence on the reliability of its elements. Different variants of redundant channels (primary and reserve ones) and issues connected with their switching are discussed.

  17. Selection of Reliable Reference Genes for Gene Expression Studies on Rhododendron molle G. Don.

    Science.gov (United States)

    Xiao, Zheng; Sun, Xiaobo; Liu, Xiaoqing; Li, Chang; He, Lisi; Chen, Shangping; Su, Jiale

    2016-01-01

The quantitative real-time polymerase chain reaction (qRT-PCR) approach has become a widely used method to analyze expression patterns of target genes. The selection of an optimal reference gene is a prerequisite for the accurate normalization of gene expression in qRT-PCR. The present study constitutes the first systematic evaluation of potential reference genes in Rhododendron molle G. Don. Eleven candidate reference genes in different tissues and flowers at different developmental stages of R. molle were assessed using the following three software packages: GeNorm, NormFinder, and BestKeeper. The results showed that EF1-α (elongation factor 1-alpha), 18S (18S ribosomal RNA), and RPL3 (ribosomal protein L3) were the most stable reference genes in developing rhododendron flowers and, thus, in all of the tested samples, while tubulin (TUB) was the least stable. ACT5 (actin), RPL3, 18S, and EF1-α were found to be the top four choices for different tissues, whereas TUB was not found to favor qRT-PCR normalization in these tissues. Three stable reference genes are recommended for the normalization of qRT-PCR data in R. molle. Furthermore, the expression profiles of RmPSY (phytoene synthase) and RmPDS (phytoene dehydrogenase) were assessed using EF1-α, 18S, ACT5, RPL3, and their combination as internals. Similar trends were found, but these trends varied when the least stable reference gene TUB was used. The results further prove that it is necessary to validate the stability of reference genes prior to their use for normalization under different experimental conditions. This study provides useful information for reliable qRT-PCR data normalization in gene studies of R. molle.

  18. Selection of Reliable Reference Genes for Gene Expression Studies on Rhododendron molle G. Don

    Directory of Open Access Journals (Sweden)

    Zheng Xiao

    2016-10-01

Full Text Available The quantitative real-time polymerase chain reaction (qRT-PCR) approach has become a widely used method to analyze expression patterns of target genes. The selection of an optimal reference gene is a prerequisite for the accurate normalization of gene expression in qRT-PCR. The present study constitutes the first systematic evaluation of potential reference genes in Rhododendron molle G. Don. Eleven candidate reference genes in different tissues and flowers at different developmental stages of R. molle were assessed using the following three software packages: GeNorm, NormFinder and BestKeeper. The results showed that EF1-α (elongation factor 1-alpha), 18S (18S ribosomal RNA) and RPL3 (ribosomal protein L3) were the most stable reference genes in developing rhododendron flowers and, thus, in all of the tested samples, while tubulin (TUB) was the least stable. ACT5 (actin), RPL3, 18S and EF1-α were found to be the top four choices for different tissues, whereas TUB was not found to favor qRT-PCR normalization in these tissues. Three stable reference genes are recommended for the normalization of qRT-PCR data in R. molle. Furthermore, the expression profiles of RmPSY (phytoene synthase) and RmPDS (phytoene dehydrogenase) were assessed using EF1-α, 18S, ACT5, and RPL3 and their combination as internals. Similar trends were found, but these trends varied when the least stable reference gene TUB was used. The results further prove that it is necessary to validate the stability of reference genes prior to their use for normalization under different experimental conditions. This study provides useful information for reliable qRT-PCR data normalization in gene studies of R. molle.

  19. Semi-structured interview is a reliable and feasible tool for selection of doctors for general practice specialist training.

    Science.gov (United States)

    Isaksen, Jesper Hesselbjerg; Hertel, Niels Thomas; Kjær, Niels Kristian

    2013-09-01

In order to optimise the selection process for admission to specialist training in family medicine, we developed a new design for structured applications and selection interviews. The design contains semi-structured interviews, which combine individualised elements from the applications with standardised behaviour-based questions. This paper describes the design of the tool, and offers reflections concerning its acceptability, reliability and feasibility. We used a combined quantitative and qualitative evaluation method. Ratings obtained by the applicants in two selection rounds were analysed for reliability and generalisability using the GENOVA programme. Applicants and assessors were randomly selected for individual semi-structured in-depth interviews. The qualitative data were analysed in accordance with the grounded theory method. Quantitative analysis yielded a high Cronbach's alpha of 0.97 for the first round and 0.90 for the second round, and a G coefficient of 0.74 for the first round and 0.40 for the second round. Qualitative analysis demonstrated high acceptability and fairness and it improved the assessors' judgment. Applicants reported concerns about loss of personality and some anxiety. The applicants' ability to reflect on their competences was important. The developed selection tool demonstrated an acceptable level of reliability, but only moderate generalisability. The users found that the tool provided a high degree of acceptability; it is a feasible and useful tool for selection of doctors for specialist training if combined with work-based assessment. Studies on the benefits and drawbacks of this tool compared with other selection models are relevant.

  20. Selected Problems of Sensitivity and Reliability of a Jack-Up Platform

    Directory of Open Access Journals (Sweden)

    Rozmarynowski Bogdan

    2018-03-01

    Full Text Available The paper deals with sensitivity and reliability applications to numerical studies of an off-shore platform model. Structural parameters and sea conditions are referred to the Baltic jack-up drilling platform. The study examines the influence of particular basic variables on the static and dynamic response as well as the probability of failure due to water wave and wind loads. The paper presents the sensitivity approach to a generalized eigenvalue problem and evaluation of the performance functions. The first order time-invariant problems of structural reliability analysis are under concern.

  1. Profit maximization mitigates competition

    DEFF Research Database (Denmark)

    Dierker, Egbert; Grodal, Birgit

    1996-01-01

    We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution of profits among consumers fully into account and partial equilibrium analysis suffices.

  2. Reliable Refuge: Two Sky Island Scorpion Species Select Larger, Thermally Stable Retreat Sites.

    Science.gov (United States)

    Becker, Jamie E; Brown, Christopher A

    2016-01-01

    Sky island scorpions shelter under rocks and other surface debris, but, as with other scorpions, it is unclear whether these species select retreat sites randomly. Furthermore, little is known about the thermal preferences of scorpions, and no research has been done to identify whether reproductive condition might influence retreat site selection. The objectives were to (1) identify physical or thermal characteristics for retreat sites occupied by two sky island scorpions (Vaejovis cashi Graham 2007 and V. electrum Hughes 2011) and those not occupied; (2) determine whether retreat site selection differs between the two study species; and (3) identify whether thermal selection differs between species and between gravid and non-gravid females of the same species. Within each scorpion's habitat, maximum dimensions of rocks along a transect line were measured and compared to occupied rocks to determine whether retreat site selection occurred randomly. Temperature loggers were placed under a subset of occupied and unoccupied rocks for 48 hours to compare the thermal characteristics of these rocks. Thermal gradient trials were conducted before parturition and after dispersal of young in order to identify whether gravidity influences thermal preference. Vaejovis cashi and V. electrum both selected larger retreat sites that had more stable thermal profiles. Neither species appeared to have thermal preferences influenced by reproductive condition. However, while thermal selection did not differ among non-gravid individuals, gravid V. electrum selected warmer temperatures than its gravid congener. Sky island scorpions appear to select large retreat sites to maintain thermal stability, although biotic factors (e.g., competition) could also be involved in this choice. Future studies should focus on identifying the various biotic or abiotic factors that could influence retreat site selection in scorpions, as well as determining whether reproductive condition affects thermal

  3. Reliability and Validity of Selected PROMIS Measures in People with Rheumatoid Arthritis.

    Directory of Open Access Journals (Sweden)

    Susan J Bartlett

    Full Text Available To evaluate the reliability and validity of 11 PROMIS measures to assess symptoms and impacts identified as important by people with rheumatoid arthritis (RA). Consecutive patients (N = 177) in an observational study completed PROMIS computer adaptive tests (CATs) and a short form (SF) assessing pain, fatigue, physical function, mood, sleep, and participation. We assessed test-retest reliability and internal consistency using correlation and Cronbach's alpha. We assessed convergent validity by examining Pearson correlations between PROMIS measures and existing measures of similar domains, and known-groups validity by comparing scores across disease activity levels using ANOVA. Participants were mostly female (82%) and white (83%) with a mean (SD) age of 56 (13) years; 24% had ≤ high school education, 29% had RA ≤ 5 years with 13% ≤ 2 years, and 22% were disabled. PROMIS Physical Function, Pain Interference and Fatigue instruments correlated moderately to strongly (rho ≥ 0.68) with corresponding PROs. Test-retest reliability ranged from .725 to .883, and Cronbach's alpha from .906 to .991. A dose-response relationship with disease activity was evident in Physical Function, with similar trends in the other scales except Anger. These data provide preliminary evidence of the reliability and construct validity of PROMIS CATs to assess RA symptoms and impacts, and the feasibility of their use in clinical care. PROMIS instruments captured the experiences of RA patients across the broad continuum of RA symptoms and function, especially at low disease activity levels. Future research is needed to evaluate performance in relevant subgroups, assess responsiveness and identify clinically meaningful changes.

  4. Selected problems and results of the transient event and reliability analyses for the German safety study

    International Nuclear Information System (INIS)

    Hoertner, H.

    1977-01-01

    For the investigation of the risk of nuclear power plants, loss-of-coolant accidents and transients have to be analyzed. The different functions of the engineered safety features installed to cope with transients are explained. The event tree analysis is carried out for the important transient 'loss of normal onsite power'. Preliminary results of the reliability analyses performed for quantitative evaluation of this event tree are shown. (orig.) [de

  5. Statistical test data selection for reliability evaluation of process computer software

    International Nuclear Information System (INIS)

    Volkmann, K.P.; Hoermann, H.; Ehrenberger, W.

    1976-01-01

    The paper presents a concept for converting knowledge about the characteristics of process states into practicable procedures for the statistical selection of test cases in testing process computer software. Process states are defined as vectors whose components consist of values of input variables lying in discrete positions or within given limits. Two approaches for test data selection, based on knowledge about cases of demand, are outlined referring to a purely probabilistic method and to the mathematics of stratified sampling. (orig.) [de
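
    The stratified-sampling approach outlined above amounts to drawing a fixed quota of test cases from each demand class of process states. The stratum names, quotas, and state pool below are illustrative assumptions, not details from the paper:

    ```python
    import random

    random.seed(0)  # reproducible draw for the sketch

    # Hypothetical pool of process states, each tagged with a demand
    # class (stratum); the tagging rule here is arbitrary.
    states = [{"id": i, "stratum": "startup" if i % 3 == 0 else "steady"}
              for i in range(30)]
    strata_quota = {"startup": 2, "steady": 4}  # assumed allocation

    def stratified_sample(states, quota):
        """Draw a fixed number of test cases from each stratum."""
        sample = []
        for name, n in quota.items():
            pool = [s for s in states if s["stratum"] == name]
            sample.extend(random.sample(pool, n))
        return sample

    tests = stratified_sample(states, strata_quota)
    print(len(tests))  # 6
    ```

    Allocating quotas in proportion to the assumed frequency of demand cases is what links this sketch to the probabilistic selection the paper describes.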

  6. Diet selection in a molluscivore shorebird across Western Europe : does it show short- or long-term intake rate-maximization?

    NARCIS (Netherlands)

    Quaintenne, Gwenael; van Gils, Jan A.; Bocher, Pierrick; Dekinga, Anne; Piersma, Theunis; Webb, Tom

    1. Studies of diet choice usually assume maximization of energy intake. The well-known 'contingency model' (CM) additionally assumes that foraging animals only spend time searching or handling prey. Despite considerable empirical support, there are many foraging contexts in which the CM fails, but

  7. Correlating neutron yield and reliability for selecting experimental parameters for a plasma focus machine

    International Nuclear Information System (INIS)

    Pross, G.

    Possibilities of optimizing focus machines with a given energy content in the sense of high neutron yield and high reliability of the discharges are investigated experimentally. For this purpose, a focus machine of the Mather type with an energy content of 12 kJ was constructed. The following experimental parameters were varied: the material of the insulator in the ignition zone, the structure of the outside electrode, the length of the inside electrode, the filling pressure and the magnitude and polarity of the battery voltage. An important part of the diagnostic program consists of measurements of the azimuthal and axial current distribution in the accelerator, correlated with short-term photographs of the luminous front as a function of time. The results are given. A functional schematic has been drafted for the focus discharge as an aid in extensive optimization of focus machines, combining findings from theory and experiments. The schematic takes into account the multiparameter character of the discharge and clarifies relationships between the experimental parameters and the target variables neutron yield and reliability

  8. Modeling of the thermal physical process and study on the reliability of linear energy density for selective laser melting

    Science.gov (United States)

    Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu

    2018-06-01

    A finite element model considering volume shrinkage with powder-to-dense process of powder layer in selective laser melting (SLM) is established. Comparison between models that consider and do not consider volume shrinkage or powder-to-dense process is carried out. Further, parametric analysis of laser power and scan speed is conducted and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is an effective method with better accuracy in predicting the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than scan speed. The maximum heating rate and cooling rate increase with increasing scan speed at constant laser power and increase with increasing laser power at constant scan speed as well. The simulation results and experimental result reveal that linear energy density is not always reliable when used as a design parameter in SLM.
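
    The linear energy density at issue above is conventionally defined as E = P/v (laser power divided by scan speed). A minimal sketch of why it can be unreliable as a sole design parameter: distinct (P, v) pairs collapse onto the same E, even though, per the study, peak temperature responds more strongly to power than to speed. The parameter values are illustrative only:

    ```python
    def linear_energy_density(power_w, speed_mm_s):
        """Linear energy density E = P / v in J/mm, a common SLM
        process-design parameter."""
        return power_w / speed_mm_s

    # Two hypothetical parameter sets sharing the same E:
    a = linear_energy_density(200, 1000)  # 200 W at 1000 mm/s -> 0.2 J/mm
    b = linear_energy_density(100, 500)   # 100 W at 500 mm/s  -> 0.2 J/mm
    print(a == b)  # True: E alone cannot distinguish these regimes,
    # yet melt-pool size and peak temperature generally differ between them.
    ```

    This degeneracy is exactly what makes a single scalar like E an incomplete substitute for the full (P, v) pair.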

  9. A reliable computational workflow for the selection of optimal screening libraries.

    Science.gov (United States)

    Gilad, Yocheved; Nadassy, Katalin; Senderowitz, Hanoch

    2015-01-01

    The experimental screening of compound collections is a common starting point in many drug discovery projects. Successes of such screening campaigns critically depend on the quality of the screened library. Many libraries are currently available from different vendors, yet the selection of the optimal screening library for a specific project is challenging. We have devised a novel workflow for the rational selection of project-specific screening libraries. The workflow accepts as input a set of virtual candidate libraries and applies the following steps to each library: (1) data curation; (2) assessment of ADME/T profile; (3) assessment of the number of promiscuous binders/frequent HTS hitters; (4) assessment of internal diversity; (5) assessment of similarity to known active compound(s) (optional); (6) assessment of similarity to in-house or otherwise accessible compound collections (optional). For ADME/T profiling, Lipinski's and Veber's rule-based filters were implemented and a new blood brain barrier permeation model was developed and validated (85% and 74% success rates for the training and test sets, respectively). Diversity and similarity descriptors that performed best in terms of their ability to select either diverse or focused sets of compounds from three databases (Drug Bank, CMC and CHEMBL) were identified and used for diversity and similarity assessments. The workflow was used to analyze nine common screening libraries available from six vendors. The results of this analysis are reported for each library providing an assessment of its quality. Furthermore, a consensus approach was developed to combine the results of these analyses into a single score for selecting the optimal library under different scenarios. We have devised and tested a new workflow for the rational selection of screening libraries under different scenarios. The current workflow was implemented using the Pipeline Pilot software yet due to the usage of generic
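
    Step (2) of the workflow, ADME/T profiling with Lipinski's rule-of-five filter, can be sketched as a simple descriptor cutoff. The compound IDs and descriptor values below are invented; the actual workflow was implemented in Pipeline Pilot:

    ```python
    def passes_lipinski(mol):
        """Lipinski's rule of five: molecular weight <= 500 Da, logP <= 5,
        H-bond donors <= 5, H-bond acceptors <= 10."""
        return (mol["mw"] <= 500 and mol["logp"] <= 5
                and mol["hbd"] <= 5 and mol["hba"] <= 10)

    # Hypothetical library entries with precomputed descriptors.
    library = [
        {"id": "cmpd-1", "mw": 342.4, "logp": 2.1, "hbd": 2, "hba": 5},
        {"id": "cmpd-2", "mw": 612.7, "logp": 6.3, "hbd": 6, "hba": 12},
    ]
    curated = [m for m in library if passes_lipinski(m)]
    print([m["id"] for m in curated])  # ['cmpd-1']
    ```

    In the full workflow this filter is only one gate among several (promiscuity, diversity, similarity), and each library's pass rate feeds the consensus score.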

  10. Reliable selection of earthquake ground motions for performance-based design

    DEFF Research Database (Denmark)

    Katsanos, Evangelos; Sextos, A.G.

    2016-01-01

    A decision support process is presented to accommodate selecting and scaling of earthquake motions as required for the time domain analysis of structures. Prequalified code-compatible suites of seismic motions are provided through a multi-criterion approach to satisfy prescribed reduced variability...... of the method, by being subjected to numerous suites of motions that were highly ranked according to both the proposed approach (δsv-sc) and the conventional index (δconv), already used by most existing code-based earthquake records selection and scaling procedures. The findings reveal the superiority...

  11. Structure-specific selection of earthquake ground motions for the reliable design and assessment of structures

    DEFF Research Database (Denmark)

    Katsanos, E. I.; Sextos, A. G.

    2018-01-01

    A decision support process is presented to accommodate selecting and scaling of earthquake motions as required for the time domain analysis of structures. Code-compatible suites of seismic motions are provided being, at the same time, prequalified through a multi-criterion approach to induce...... was subjected to numerous suites of motions that were highly ranked according to both the proposed approach (δsv–sc) and the conventional one (δconv), that is commonly used for earthquake records selection and scaling. The findings from numerous linear response history analyses reveal the superiority...

  12. Impact of lifetime model selections on the reliability prediction of IGBT modules in modular multilevel converters

    DEFF Research Database (Denmark)

    Zhang, Yi; Wang, Huai; Wang, Zhongxu

    2017-01-01

    , this paper benchmarks the most commonly-employed lifetime models of power semiconductor devices for offshore Modular Multilevel Converters (MMC) based wind farms. The benchmarking reveals that the lifetime model selection has a significant impact on the lifetime estimation. The use of analytical lifetime...

  13. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability.

    Science.gov (United States)

    Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M

    2014-12-01

    Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered in both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICC of PBDI and SIM showed acceptable levels of reliability. Findings on content validity and reliability of these new instruments are promising for realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.

  14. High-resolution imaging of expertise reveals reliable object selectivity in the fusiform face area related to perceptual performance.

    Science.gov (United States)

    McGugin, Rankin Williams; Gatenby, J Christopher; Gore, John C; Gauthier, Isabel

    2012-10-16

    The fusiform face area (FFA) is a region of human cortex that responds selectively to faces, but whether it supports a more general function relevant for perceptual expertise is debated. Although both faces and objects of expertise engage many brain areas, the FFA remains the focus of the strongest modular claims and the clearest predictions about expertise. Functional MRI studies at standard-resolution (SR-fMRI) have found responses in the FFA for nonface objects of expertise, but high-resolution fMRI (HR-fMRI) in the FFA [Grill-Spector K, et al. (2006) Nat Neurosci 9:1177-1185] and neurophysiology in face patches in the monkey brain [Tsao DY, et al. (2006) Science 311:670-674] reveal no reliable selectivity for objects. It is thus possible that FFA responses to objects with SR-fMRI are a result of spatial blurring of responses from nonface-selective areas, potentially driven by attention to objects of expertise. Using HR-fMRI in two experiments, we provide evidence of reliable responses to cars in the FFA that correlate with behavioral car expertise. Effects of expertise in the FFA for nonface objects cannot be attributed to spatial blurring beyond the scale at which modular claims have been made, and within the lateral fusiform gyrus, they are restricted to a small area (200 mm² on the right and 50 mm² on the left) centered on the peak of face selectivity. Experience with a category may be sufficient to explain the spatially clustered face selectivity observed in this region.

  15. Maximally incompatible quantum observables

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)

    2014-05-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  16. Maximally incompatible quantum observables

    International Nuclear Information System (INIS)

    Heinosaari, Teiko; Schultz, Jussi; Toigo, Alessandro; Ziman, Mario

    2014-01-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  17. Chemically Functionalized Arrays Comprising Micro and Nano-Electro-Mechanical Systems for Reliable and Selective Characterization of Tank Waste

    International Nuclear Information System (INIS)

    Sepaniak, Michael J.

    2008-01-01

    Innovative technologies for sensitive and selective chemical monitoring of hazardous wastes present in storage tanks are of continued importance to the environment. This multifaceted research program exploits the unique characteristics of micro and nano-fabricated cantilever-based micro-electro-mechanical systems (MEMS) and nano-electro-mechanical systems (NEMS) in chemical sensing. Significant progress was made in tasks that were listed in the work plan for DOE EMSP project 'Hybrid Micro-Electro-Mechanical Systems for Highly Reliable and Selective Characterization of Tank Waste'. These tasks are listed below in modified form followed by the report on progress. (1) Deposit chemically selective phases on model MEMS devices with nanostructured surface layers to identify optimal technological approaches. (2) Monitor mechanical (deflection) and optical (SERS) responses of the created MEMS to organic and inorganic species in aqueous environments. (3) Explore and compare different approaches to immobilization of selective phases on the thermal detectors. (4) Demonstrate improvements in selectivity and sensitivity to model pollutants due to implemented technologies of nanostructuring and multi-mode read-out. (5) Demonstrate detection of different analytes on a single hybrid MEMS. (6) Implement the use of differential pairs of cantilever sensors (coated and reference) with the associated detector electronics, which is expected to have an enhanced sensitivity with a low-noise, low-drift response. (7) Develop methods to create differential arrays and test their effectiveness at creating distinctive differential responses.

  18. Modeling of the thermal physical process and study on the reliability of linear energy density for selective laser melting

    Directory of Open Access Journals (Sweden)

    Zhaowei Xiang

    2018-06-01

    Full Text Available A finite element model considering volume shrinkage with powder-to-dense process of powder layer in selective laser melting (SLM) is established. Comparison between models that consider and do not consider volume shrinkage or powder-to-dense process is carried out. Further, parametric analysis of laser power and scan speed is conducted and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is an effective method with better accuracy in predicting the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than scan speed. The maximum heating rate and cooling rate increase with increasing scan speed at constant laser power and increase with increasing laser power at constant scan speed as well. The simulation results and experimental result reveal that linear energy density is not always reliable when used as a design parameter in SLM. Keywords: Selective laser melting, Volume shrinkage, Powder-to-dense process, Numerical modeling, Thermal analysis, Linear energy density

  19. Inclusive Fitness Maximization: An Axiomatic Approach

    OpenAIRE

    Okasha, Samir; Weymark, John; Bossert, Walter

    2014-01-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of qu...

  20. Maximizers versus satisficers

    OpenAIRE

    Andrew M. Parker; Wandi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...

  1. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models

  2. Intraoperative use of indocyanine green angiography for selecting the more reliable perforator of the anterolateral thigh flap: A comparison study.

    Science.gov (United States)

    La Padula, Simone; Hersant, Barbara; Meningaud, Jean Paul

    2018-03-30

    Anatomical variability of anterolateral thigh (ALT) flap perforators has been reported. The aim of this study is to assess whether the use of intraoperative indocyanine green angiography (iICGA) can help surgeons choose the best ALT flap perforator to be preserved. A retrospective study was conducted in 28 patients with open tibial fracture, following a road traffic crash, who had undergone ALT flap reconstruction. Patients were classified into two groups: the ICGA group (iICGA was used to select the more reliable perforator) and the control group. The mean tissue loss size of the ICGA group (n = 13, 11 men and 2 women, mean age: 52 ± 6 years) was 16.6 cm × 12.2 cm. The mean defect size of the control group (n = 15, 14 men and 1 woman, mean age: 50 ± 5.52 years) was 15.3 cm × 11.1 cm. Statistical analysis was performed to analyze and compare the results. ICGA allowed preserving only the most functional perforator, which provided the best ALT flap perfusion, in 10 out of the 13 cases (77%). ICGA allowed a significant operative time reduction (160 ± 23 vs. 202 ± 48 minutes; P < .001). One case of distal necrosis was observed in the ICGA group (mean follow-up 12.3 months), while partial skin necrosis occurred in three cases of the control group (mean follow-up 13.1 months); P = .35. No additional coverage was required and successful bone healing was observed in both groups. These findings suggest that iICGA is an effective method for selecting the most reliable ALT flap perforators and reducing operative time. © 2018 Wiley Periodicals, Inc.

  3. Maximally multipartite entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio

    2008-06-01

    We introduce the notion of maximally multipartite entangled states of n qubits as a generalization of the bipartite case. These pure states have a bipartite entanglement that does not depend on the bipartition and is maximal for all possible bipartitions. They are solutions of a minimization problem. Examples for small n are investigated, both analytically and numerically.
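
    A toy check of the defining property in the simplest bipartite case, where "maximally entangled" reduces to the familiar Bell state: the reduced single-qubit state is maximally mixed (purity 1/2). Maximally multipartite entangled states extend this property to every bipartition of n qubits. Pure-Python sketch:

    ```python
    import math

    # Amplitudes of the 2-qubit Bell state (|00> + |11>) / sqrt(2),
    # in the ordered basis |00>, |01>, |10>, |11>.
    amp = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]

    # Reduced density matrix of qubit A, tracing out qubit B:
    # rho[i][j] = sum_k amp[2*i + k] * amp[2*j + k] (real amplitudes here).
    rho = [[sum(amp[2 * i + k] * amp[2 * j + k] for k in range(2))
            for j in range(2)] for i in range(2)]

    # Purity Tr(rho^2); 1/2 is the minimum for a single qubit,
    # i.e. the reduced state is maximally mixed.
    purity = sum(rho[i][j] * rho[j][i] for i in range(2) for j in range(2))
    print(round(purity, 6))  # 0.5
    ```

    The minimization problem mentioned in the abstract asks, in effect, for n-qubit states whose reduced states are as close to maximally mixed as possible across all bipartitions simultaneously.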

  4. Maximizers versus satisficers

    Directory of Open Access Journals (Sweden)

    Andrew M. Parker

    2007-12-01

    Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.

  5. Band selection and disentanglement using maximally localized Wannier functions: the cases of Co impurities in bulk copper and the Cu(111) surface

    Energy Technology Data Exchange (ETDEWEB)

    Korytar, Richard; Pruneda, Miguel; Ordejon, Pablo; Lorente, Nicolas [Centre d' Investigacio en Nanociencia i Nanotecnologia (CSIC-ICN), Campus de la UAB, E-08193 Bellaterra (Spain); Junquera, Javier, E-mail: rkorytar@cin2.e [Departamento de Ciencias de la Tierra y Fisica de la Materia Condensada, Universidad de Cantabria, E-39005 Santander (Spain)

    2010-09-29

    We have adapted the maximally localized Wannier function approach of Souza et al (2002 Phys. Rev. B 65 035109) to the density functional theory based SIESTA code (Soler et al 2002 J. Phys.: Condens. Matter 14 2745) and applied it to the study of Co substitutional impurities in bulk copper as well as to the Cu(111) surface. In the Co impurity case, we have reduced the problem to the Co d-electrons and the Cu sp-band, permitting us to obtain an Anderson-like Hamiltonian from well defined density functional parameters in a fully orthonormal basis set. In order to test the quality of the Wannier approach to surfaces, we have studied the electronic structure of the Cu(111) surface by again transforming the density functional problem into the Wannier representation. An excellent description of the Shockley surface state is attained, permitting us to be confident in the application of this method to future studies of magnetic adsorbates in the presence of an extended surface state.

  6. Optimal dose selection accounting for patient subpopulations in a randomized Phase II trial to maximize the success probability of a subsequent Phase III trial.

    Science.gov (United States)

    Takahashi, Fumihiro; Morita, Satoshi

    2018-02-08

    Phase II clinical trials are conducted to determine the optimal dose of the study drug for use in Phase III clinical trials while also balancing efficacy and safety. In conducting these trials, it may be important to consider subpopulations of patients grouped by background factors such as drug metabolism and kidney and liver function. Determining the optimal dose, as well as maximizing the effectiveness of the study drug by analyzing patient subpopulations, requires a complex decision-making process. In extreme cases, drug development has to be terminated due to inadequate efficacy or severe toxicity. Such a decision may be based on a particular subpopulation. We propose a Bayesian utility approach (BUART) to randomized Phase II clinical trials which uses a first-order bivariate normal dynamic linear model for efficacy and safety in order to determine the optimal dose and study population in a subsequent Phase III clinical trial. We carried out a simulation study under a wide range of clinical scenarios to evaluate the performance of the proposed method in comparison with a conventional method separately analyzing efficacy and safety in each patient population. The proposed method showed more favorable operating characteristics in determining the optimal population and dose.
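
    The efficacy-safety trade-off at the heart of the dose-selection problem can be caricatured with a simple additive utility. The actual BUART method uses a first-order bivariate normal dynamic linear model over subpopulations, so everything below (doses, probabilities, penalty weight) is an illustrative assumption:

    ```python
    # Hypothetical posterior mean efficacy and toxicity probabilities
    # at four candidate doses; numbers are invented for illustration.
    doses = [10, 20, 40, 80]
    efficacy = [0.20, 0.45, 0.60, 0.65]
    toxicity = [0.05, 0.08, 0.15, 0.35]
    weight = 1.5  # assumed penalty per unit toxicity probability

    def utility(e, t, w=weight):
        """Toy trade-off: reward efficacy, penalize toxicity."""
        return e - w * t

    # Pick the dose maximizing the utility of its (efficacy, toxicity) pair.
    best_dose = max(zip(doses, efficacy, toxicity),
                    key=lambda d: utility(d[1], d[2]))[0]
    print(best_dose)  # 40: the highest dose is ruled out by its toxicity
    ```

    The sketch shows why a joint efficacy-safety criterion can select an intermediate dose that neither an efficacy-only nor a safety-only analysis would pick.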

  7. Is CP violation maximal

    International Nuclear Information System (INIS)

    Gronau, M.

    1984-01-01

    Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase convention ambiguity is overcome by introducing a CP violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation when assuming that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references
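
A rephasing-invariant CP-violating phase of the kind this record describes is conventionally packaged in the Jarlskog-type invariant; the following is the standard construction for a mixing matrix U, not necessarily the paper's exact definition:

```latex
% Rephasing-invariant measure of CP violation in the mixing matrix U:
J = \operatorname{Im}\!\left( U_{us}\, U_{cb}\, U_{ub}^{*}\, U_{cs}^{*} \right),
\qquad
U_{ij} \;\to\; e^{i\varphi_i}\, U_{ij}\, e^{-i\theta_j}
\;\Longrightarrow\; J \to J .
```

Because the quark phases cancel pairwise in the product, J is unaffected by any choice of phase convention, which is what resolves the first ambiguity noted in the abstract.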

  8. Selecting university undergraduate student activities via compromised-analytical hierarchy process and 0-1 integer programming to maximize SETARA points

    Science.gov (United States)

    Nazri, Engku Muhammad; Yusof, Nur Ai'Syah; Ahmad, Norazura; Shariffuddin, Mohd Dino Khairri; Khan, Shazida Jan Mohd

    2017-11-01

    Prioritizing and deciding which student activities to select and conduct to fulfil the aspirations of a university, as translated in its strategic plan, must be executed with transparency and accountability. This has become even more crucial for universities in Malaysia after the recent budget cut imposed by the Malaysian government. In this paper, we illustrate how a 0-1 integer programming (0-1 IP) model was implemented to select which of the forty activities proposed by the student body of Universiti Utara Malaysia (UUM) should be implemented for the 2017/2018 academic year. Two different models were constructed. The first model determines the minimum total budget that the UUM management should give the student body to conduct all the activities needed to meet the minimum targeted number of activities stated in its strategic plan. The second model determines which activities to select given a total budget already allocated by the UUM management, again towards fulfilling the requirements set in its strategic plan. The selection of activities in the second model was also based on the preferences of the members of the student body, with the preference value for each activity determined using the Compromised-Analytical Hierarchy Process. The outputs from both models were compared and discussed. The technique used in this study will be useful for organizations with key performance indicator-oriented programs and limited budget allocations.
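
The second model's core structure is a 0-1 knapsack: maximize total preference weight subject to a budget. A toy sketch with hypothetical activity names, costs, and AHP-style weights (a real forty-activity instance would use an integer-programming solver rather than brute force):

```python
# Hypothetical instance of the budget-constrained selection idea.
from itertools import combinations

activities = {            # name: (cost, AHP-style preference weight)
    "debate":     (4, 0.30),
    "sports_day": (6, 0.25),
    "charity":    (3, 0.20),
    "tech_expo":  (5, 0.25),
}

def best_selection(activities, budget):
    """Brute-force search over all 0-1 selections within the budget."""
    names = list(activities)
    best, best_value = (), 0.0
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(activities[a][0] for a in subset)
            value = sum(activities[a][1] for a in subset)
            if cost <= budget and value > best_value:
                best, best_value = subset, value
    return set(best), best_value

chosen, value = best_selection(activities, budget=12)
print(chosen, value)   # the budget excludes one activity
```

Brute force is exponential in the number of activities; formulating the same objective and constraint as a 0-1 IP, as the paper does, scales to realistic problem sizes.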

  9. Reliability-based decision making for selection of ready-mix concrete supply using stochastic superiority and inferiority ranking method

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Ongkowijoyo, Citra Satria

    2015-01-01

    Corporate competitiveness is heavily influenced by the information acquired, processed, utilized and transferred by professional staff involved in the supply chain. This paper develops a decision aid for selecting the on-site ready-mix concrete (RMC) unloading type in decision-making situations involving multiple stakeholders and evaluation criteria. The uncertainty of criteria weights set by expert judgment can be transformed in random ways based on the probabilistic virtual-scale method within a prioritization matrix. The ranking is performed by grey relational grade systems considering stochastic criteria weights based on individual preference. Application of the decision-aiding model to an actual RMC case confirms that the method provides a robust and effective tool for facilitating decision making under uncertainty. - Highlights: • This study models a decision-aiding method to assess ready-mix concrete unloading type. • Applying Monte Carlo simulation to the virtual-scale method achieves a reliable process. • Individual preference ranking method enhances the quality of global decision making. • Robust stochastic superiority and inferiority ranking obtains reasonable results

  10. Selection of reliable reference genes for quantitative real-time PCR in human T cells and neutrophils

    Directory of Open Access Journals (Sweden)

    Ledderose Carola

    2011-10-01

    Background: The choice of reliable reference genes is a prerequisite for valid results when analyzing gene expression with real-time quantitative PCR (qPCR). This method is frequently applied to study gene expression patterns in immune cells, yet a thorough validation of potential reference genes is still lacking for most leukocyte subtypes and most models of their in vitro stimulation. In the current study, we evaluated the expression stability of common reference genes in two widely used cell culture models (anti-CD3/CD28 activated T cells and lipopolysaccharide stimulated neutrophils) as well as in unselected untreated leukocytes. Results: The mRNA expression of 17 (T cells), 7 (neutrophils) or 8 (unselected leukocytes) potential reference genes was quantified by reverse transcription qPCR, and a ranking of the preselected candidate genes according to their expression stability was calculated using the programs NormFinder, geNorm and BestKeeper. IPO8, RPL13A, TBP and SDHA were identified as suitable reference genes in T cells. TBP, ACTB and SDHA were stably expressed in neutrophils. TBP and SDHA were also the most stable genes in untreated total blood leukocytes. The critical impact of reference gene selection on the estimated target gene expression is demonstrated for IL-2 and FIH expression in T cells. Conclusions: The study provides a shortlist of suitable reference genes for normalization of gene expression data in unstimulated and stimulated T cells, unstimulated and stimulated neutrophils and in unselected leukocytes.
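
The geNorm ranking mentioned above scores each candidate gene by a stability value M: the mean standard deviation of its log2 expression ratios against every other candidate across samples (lower M = more stable). A minimal sketch with hypothetical relative expression values (the gene names reuse two candidates from the record; GENE_X is a made-up unstable gene):

```python
# geNorm-style stability measure M; expression values are hypothetical.
from math import log2
from statistics import stdev

expr = {                           # gene: relative expression per sample
    "TBP":    [1.00, 1.05, 0.98, 1.02],
    "SDHA":   [0.95, 1.00, 0.97, 1.01],
    "GENE_X": [1.00, 2.10, 0.40, 3.00],   # deliberately unstable
}

def genorm_m(expr):
    """M value per gene: mean SD of pairwise log2 ratios across samples."""
    genes = list(expr)
    m = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            ratios = [log2(a / b) for a, b in zip(expr[g], expr[h])]
            sds.append(stdev(ratios))
        m[g] = sum(sds) / len(sds)
    return m

m = genorm_m(expr)
print(min(m, key=m.get))   # most stable candidate
```

The full geNorm procedure iteratively drops the gene with the highest M and recomputes, which this sketch omits.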

  11. Maximal metabolic rates during voluntary exercise, forced exercise, and cold exposure in house mice selectively bred for high wheel-running.

    Science.gov (United States)

    Rezende, Enrico L; Chappell, Mark A; Gomes, Fernando R; Malisch, Jessica L; Garland, Theodore

    2005-06-01

    Selective breeding for high wheel-running activity has generated four lines of laboratory house mice (S lines) that run about 170% more than their control counterparts (C lines) on a daily basis, mostly because they run faster. We tested whether maximum aerobic metabolic rates (V(O2max)) have evolved in concert with wheel-running, using 48 females from generation 35. Voluntary activity and metabolic rates were measured on days 5+6 of wheel access (mimicking conditions during selection), using wheels enclosed in metabolic chambers. Following this, V(O2max) was measured twice on a motorized treadmill and twice during cold-exposure in a heliox atmosphere (HeO2). Almost all measurements, except heliox V(O2max), were significantly repeatable. After accounting for differences in body mass (S < C), the S and C lines did not differ significantly in V(O2max) or running speeds on the treadmill. However, running speeds and V(O2max) during voluntary exercise were significantly higher in S lines. Nevertheless, S mice never voluntarily achieved the V(O2max) elicited during their forced treadmill trials, suggesting that aerobic capacity per se is not limiting the evolution of even higher wheel-running speeds in these lines. Our results support the hypothesis that S mice have genetically higher motivation for wheel-running, and they demonstrate that behavior can sometimes evolve independently of performance capacities. We also discuss the possible importance of domestication as a confounding factor when extrapolating results from this animal model to natural populations.

  12. Inclusive fitness maximization: An axiomatic approach.

    Science.gov (United States)

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-07

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.
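
With additive phenotypic effects, the "as if" maximization idea reduces in the simplest case to Hamilton's rule: an act costing the actor c and giving a relative of relatedness r a benefit b is preferred when r*b - c > 0. A toy illustration (numbers are made up, and this is the textbook rule, not the paper's axiomatic derivation):

```python
# Toy Hamilton's-rule comparison under additive effects; values illustrative.

def prefers_altruism(r, b, c):
    """True if the inclusive-fitness payoff of acting exceeds not acting."""
    return r * b - c > 0

print(prefers_altruism(r=0.5, b=3.0, c=1.0))    # True: 0.5*3 - 1 = 0.5 > 0
print(prefers_altruism(r=0.125, b=3.0, c=1.0))  # False for a distant relative
```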

  13. Quantification of colour Doppler activity in the wrist in patients with rheumatoid arthritis - the reliability of different methods for image selection and evaluation

    DEFF Research Database (Denmark)

    Ellegaard, K.; Torp-Pedersen, S.; Lund, H.

    2008-01-01

    Purpose: The amount of colour Doppler activity in the inflamed synovium is used to quantify inflammatory activity. The measurements may vary due to image selection, quantification method, and point in the cardiac cycle. This study investigated the test-retest reliability of ultrasound colour Doppler measurements in the wrist of patients with rheumatoid arthritis (RA) using different selection and quantification methods. Materials and Methods: 14 patients with RA had their wrist scanned twice by the same investigator with an interval of 30 minutes. The images for analysis were selected either ... The best reliability was obtained when the images were selected guided by colour Doppler and the subsequent quantification was done in an area defined by anatomical structures. With this method, the intra-class correlation coefficient ICC (2,1) was 0.95 and the within-subject SD (SW) was 0.017, indicating good reliability. In contrast, poor ...
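
The ICC(2,1) figure reported in this record is the two-way random-effects, single-measure intraclass correlation, computable from a subjects × sessions table via the usual ANOVA mean squares. A minimal sketch with hypothetical scores:

```python
# ICC(2,1): two-way random effects, absolute agreement, single measure.
# Data are hypothetical test/retest scores, not from the study.

def icc_2_1(data):
    """data: list of per-subject lists, one score per rater/session."""
    n, k = len(data), len(data[0])
    grand = sum(map(sum, data)) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(row[j] for row in data) / n for j in range(k)]
    ssr = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ssc = n * sum((m - grand) ** 2 for m in col_means)   # between sessions
    sst = sum((x - grand) ** 2 for row in data for x in row)
    sse = sst - ssr - ssc                                # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

scores = [[20, 21], [35, 33], [28, 30], [15, 16]]   # subject x session
print(round(icc_2_1(scores), 3))
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # perfect agreement -> 1.0
```

The denominator's session term penalizes systematic shifts between scans, which is why this form measures agreement rather than mere consistency.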

  14. Guinea pig maximization test

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner

    1985-01-01

    Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...

  15. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book begins with what reliability is: the origin of reliability problems, the definition of reliability, and its uses. It then covers probability and the calculation of reliability; the reliability function and failure rate; probability distributions of reliability; estimation of MTBF and of the underlying probability distribution; down time, maintainability and availability; breakdown maintenance and preventive maintenance; design for reliability, including prediction and statistics; reliability testing; and reliability data and the design and management of reliability.
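
A worked instance of the quantities such a text covers, under the constant failure-rate (exponential) model: R(t) = exp(-λt), MTBF = 1/λ, and steady-state availability MTBF / (MTBF + MTTR). The numbers are illustrative:

```python
# Standard constant-failure-rate formulas with illustrative numbers.
from math import exp

failure_rate = 1e-4          # lambda, failures per hour
mtbf = 1 / failure_rate      # mean time between failures: 10,000 h
mttr = 50                    # mean time to repair, hours

def reliability(t, lam):
    """Probability of surviving to time t under constant failure rate lam."""
    return exp(-lam * t)

availability = mtbf / (mtbf + mttr)   # steady-state availability
print(round(reliability(1000, failure_rate), 3))   # 0.905
print(round(availability, 4))                      # 0.995
```

The exponential model is the CFR case; IFR behaviour (wear-out) needs a normal or Weibull model, where R(t) and MTBF take different forms.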

  16. Highly Reliable Organizations in the Onshore Natural Gas Sector: An Assessment of Current Practices, Regulatory Frameworks, and Select Case Studies

    Energy Technology Data Exchange (ETDEWEB)

    Logan, Jeffrey S. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Paranhos, Elizabeth [Energy Innovation Partners, Seattle, WA (United States); Kozak, Tracy G. [Energy Innovation Partners, Seattle, WA (United States); Boyd, William [Univ. of Colorado, Boulder, CO (United States)

    2017-07-31

    This study focuses on onshore natural gas operations and examines the extent to which oil and gas firms have embraced certain organizational characteristics that lead to 'high reliability' - understood here as strong safety and reliability records over extended periods of operation. The key questions that motivated this study include whether onshore oil and gas firms engaged in exploration and production (E&P) and midstream (i.e., natural gas transmission and storage) are implementing practices characteristic of high reliability organizations (HROs) and the extent to which any such practices are being driven by industry innovations and standards and/or regulatory requirements.

  17. Tri-maximal vs. bi-maximal neutrino mixing

    International Nuclear Information System (INIS)

    Scott, W.G

    2000-01-01

    It is argued that data from atmospheric and solar neutrino experiments point strongly to tri-maximal or bi-maximal lepton mixing. While ('optimised') bi-maximal mixing gives an excellent a posteriori fit to the data, tri-maximal mixing is an a priori hypothesis, which is not excluded, taking account of terrestrial matter effects
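
For reference, the two hypotheses compared in this record correspond to standard mixing textures; the conventions below are the common ones and may differ in detail from the paper's:

```latex
% Bi-maximal mixing: two maximal (45-degree) rotations.
U_{\text{bi}} =
\begin{pmatrix}
 \tfrac{1}{\sqrt{2}} & \tfrac{1}{\sqrt{2}} & 0 \\[2pt]
 -\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{\sqrt{2}} \\[2pt]
 \tfrac{1}{2} & -\tfrac{1}{2} & \tfrac{1}{\sqrt{2}}
\end{pmatrix},
\qquad
% Tri-maximal mixing: all |U_{ai}|^2 = 1/3, with omega = e^{2\pi i/3}.
U_{\text{tri}} = \tfrac{1}{\sqrt{3}}
\begin{pmatrix}
 1 & 1 & 1 \\
 1 & \omega & \omega^{2} \\
 1 & \omega^{2} & \omega
\end{pmatrix}.
```

In the tri-maximal case every element has modulus 1/√3, so all mixing probabilities are 1/3; the bi-maximal form instead maximizes only the 1-2 and 2-3 rotations while keeping the 1-3 element zero.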

  18. Selection and reporting of statistical methods to assess reliability of a diagnostic test: Conformity to recommended methods in a peer-reviewed journal

    International Nuclear Information System (INIS)

    Park, Ji Eun; Sung, Yu Sub; Han, Kyung Hwa

    2017-01-01

    To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology is necessary

  19. Selection and reporting of statistical methods to assess reliability of a diagnostic test: Conformity to recommended methods in a peer-reviewed journal

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ji Eun; Sung, Yu Sub [Dept. of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Han, Kyung Hwa [Dept. of Radiology, Research Institute of Radiological Science, Yonsei University College of Medicine, Seoul (Korea, Republic of); and others

    2017-11-15

    To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology is necessary.

  20. Reliability and validity of selected measures associated with increased fall risk in females over the age of 45 years with distal radius fracture - A pilot study.

    Science.gov (United States)

    Mehta, Saurabh P; MacDermid, Joy C; Richardson, Julie; MacIntyre, Norma J; Grewal, Ruby

    2015-01-01

    Clinical measurement. This study examined the test-retest reliability and convergent/divergent construct validity of selected tests and measures that assess balance impairment, fear of falling (FOF), impaired physical activity (PA), and lower extremity muscle strength (LEMS) in females >45 years of age after distal radius fracture (DRF). Twenty-one female participants with DRF were assessed on two occasions. The Timed Up and Go, Functional Reach, and One Leg Standing tests assessed balance impairment. The Shortened Falls Efficacy Scale, Activity-specific Balance Confidence scale, and Fall Risk Perception Questionnaire assessed FOF. The International Physical Activity Questionnaire and Rapid Assessment of Physical Activity were administered to assess PA level. The chair stand test and isometric muscle strength testing for the hip and knee assessed LEMS. Intraclass correlation coefficients (ICC) examined the test-retest reliability of the measures. Pearson correlation coefficients (r) examined concurrent relationships between the measures. The results demonstrated fair to excellent test-retest reliability (ICC between 0.50 and 0.96) and low to moderate concordance between the measures (low if r ≤ 0.4; moderate if r = 0.4-0.7). The results provide preliminary estimates of test-retest reliability and convergent/divergent construct validity of selected measures associated with increased risk for falling in females >45 years of age after DRF. Further research directions to advance knowledge of fall risk assessment in the DRF population have been identified. Copyright © 2015 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.

  1. MAXIM: The Blackhole Imager

    Science.gov (United States)

    Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris

    2004-01-01

    The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide better than one tenth of a microarcsecond imaging in the X-ray band in order to satisfy these requirements. We will overview the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort that includes a study of a baseline design as well as two alternative approaches.

  2. Social group utility maximization

    CERN Document Server

    Gong, Xiaowen; Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy. This brief also investigates the SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b

  3. Derivative pricing based on local utility maximization

    OpenAIRE

    Jan Kallsen

    2002-01-01

    This paper discusses a new approach to contingent claim valuation in general incomplete market models. We determine the neutral derivative price which occurs if investors maximize their local utility and if derivative demand and supply are balanced. We also introduce the sensitivity process of a contingent claim. This process quantifies the reliability of the neutral derivative price and it can be used to construct price bounds. Moreover, it allows one to calibrate market models in order to be co...

  4. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability

    NARCIS (Netherlands)

    Vermeulen, M.I.; Tromp, F.; Zuithoff, N.P.; Pieters, R.H.; Damoiseaux, R.A.; Kuyvenhoven, M.M.

    2014-01-01

    Abstract Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the

  5. Non-Weight-Bearing and Weight-Bearing Ultrasonography of Select Foot Muscles in Young, Asymptomatic Participants: A Descriptive and Reliability Study.

    Science.gov (United States)

    Battaglia, Patrick J; Mattox, Ross; Winchester, Brett; Kettner, Norman W

    The primary aim of this study was to determine the reliability of diagnostic ultrasound imaging for select intrinsic foot muscles using both non-weight-bearing and weight-bearing postures. Our secondary aim was to describe the change in muscle cross-sectional area (CSA) and dorsoplantar thickness when bearing weight. An ultrasound examination was performed with a linear ultrasound transducer operating between 9 and 12 MHz. Long-axis and short-axis ultrasound images of the abductor hallucis, flexor digitorum brevis, and quadratus plantae were obtained in both the non-weight-bearing and weight-bearing postures. Two examiners independently collected ultrasound images to allow for interexaminer and intraexaminer reliability calculation. The change in muscle CSA and dorsoplantar thickness when bearing weight was also studied. There were 26 participants (17 female) with a mean age of 25.5 ± 3.8 years and a mean body mass index of 28.0 ± 7.8 kg/m². Interexaminer reliability was excellent when measuring the muscles in short axis (intraclass correlation coefficient >0.75) and fair to good in long axis (intraclass correlation coefficient >0.4). Intraexaminer reliability was excellent for the abductor hallucis and flexor digitorum brevis and ranged from fair to good to excellent for the quadratus plantae. Bearing weight did not reduce interexaminer or intraexaminer reliability. All muscles exhibited a significant increase in CSA when bearing weight. This is the first report to describe weight-bearing diagnostic ultrasound of the intrinsic foot muscles. Ultrasound imaging is reliable when imaging these muscles bearing weight. Furthermore, muscle CSA increases in the weight-bearing posture. Copyright © 2016. Published by Elsevier Inc.

  6. Selection of reliable reference genes for gene expression studies in Trichoderma afroharzianum LTR-2 under oxalic acid stress.

    Science.gov (United States)

    Lyu, Yuping; Wu, Xiaoqing; Ren, He; Zhou, Fangyuan; Zhou, Hongzi; Zhang, Xinjian; Yang, Hetong

    2017-10-01

    An appropriate reference gene is required to get reliable results from gene expression analysis by quantitative real-time reverse transcription PCR (qRT-PCR). In order to identify stable and reliable reference genes in Trichoderma afroharzianum under oxalic acid (OA) stress, six commonly used housekeeping genes, i.e., elongation factor 1, ubiquitin, ubiquitin-conjugating enzyme, glyceraldehyde-3-phosphate dehydrogenase, α-tubulin, actin, from the effective biocontrol isolate T. afroharzianum strain LTR-2 were tested for their expression during growth in liquid culture amended with OA. Four in silico programs (comparative ΔCt, NormFinder, geNorm and BestKeeper) were used to evaluate the expression stabilities of six candidate reference genes. The elongation factor 1 gene EF-1 was identified as the most stably expressed reference gene, and was used as the normalizer to quantify the expression level of the oxalate decarboxylase coding gene OXDC in T. afroharzianum strain LTR-2 under OA stress. The result showed that the expression of OXDC was significantly up-regulated as expected. This study provides an effective method to quantify expression changes of target genes in T. afroharzianum under OA stress. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Test-retest reliability of selected items of Health Behaviour in School-aged Children (HBSC survey questionnaire in Beijing, China

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2010-08-01

    Background: Children's health and health behaviour are essential for their development, and it is important to obtain abundant and accurate information to understand young people's health and health behaviour. The Health Behaviour in School-aged Children (HBSC) study is among the first large-scale international surveys on adolescent health through self-report questionnaires. So far, more than 40 countries in Europe and North America have been involved in the HBSC study. The purpose of this study is to assess the test-retest reliability of selected items in the Chinese version of the HBSC survey questionnaire in a sample of adolescents in Beijing, China. Methods: A sample of 95 male and female students aged 11 or 15 years old participated in a test and retest with a three-week interval. Student identity numbers of respondents were utilized to permit matching of test-retest questionnaires. 23 items concerning physical activity, sedentary behaviour, sleep and substance use were evaluated using the percentage of response shifts and the single-measure intraclass correlation coefficients (ICC) with 95% confidence intervals (CI), for all respondents and stratified by gender and age. Items on substance use were only evaluated for school children aged 15 years old. Results: The percentage of no response shift between test and retest varied from 32% for the item on computer use at weekends to 92% for the three items on smoking. Of all the 23 items evaluated, 6 items (26%) showed moderate reliability, 12 items (52%) displayed substantial reliability and 4 items (17%) indicated almost perfect reliability. No gender or age group difference in test-retest reliability was found except for a few items on sedentary behaviour. Conclusions: The overall findings of this study suggest that most selected indicators in the HBSC survey questionnaire have satisfactory test-retest reliability for the students in Beijing. Further test-retest studies in a large

  8. The relative reliability of actively participating and passively observing raters in a simulation-based assessment for selection to specialty training in anaesthesia.

    Science.gov (United States)

    Roberts, M J; Gale, T C E; Sice, P J A; Anderson, I R

    2013-06-01

    Selection to specialty training is a high-stakes assessment demanding valuable consultant time. In one initial entry level and two higher level anaesthesia selection centres, we investigated the feasibility of using staff participating in simulation scenarios, rather than observing consultants, to rate candidate performance. We compared participant and observer scores using four different outcomes: inter-rater reliability; score distributions; correlation of candidate rankings; and percentage of candidates whose selection might be affected by substituting participants' for observers' ratings. Inter-rater reliability between observers was good (correlation coefficient 0.73-0.96) but lower between participants (correlation coefficient 0.39-0.92), particularly at higher level where participants also rated candidates more favourably than did observers. Station rank orderings were strongly correlated between the rater groups at entry level (rho 0.81, p training posts available. We conclude that using participating raters is feasible at initial entry level only. Anaesthesia © 2013 The Association of Anaesthetists of Great Britain and Ireland.

  9. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book is about reliability engineering. It covers the definition and importance of reliability; the development of reliability engineering; failure rate and the failure probability density function and their types; CFR and the exponential distribution; IFR and the normal and Weibull distributions; maintainability and availability; reliability testing and reliability estimation under the exponential, normal and Weibull distribution models; reliability sampling tests; system reliability; reliability design; and functional failure analysis by FTA.

  10. Determination of validity and reliability of performance assessments tasks developed for selected topics in high school chemistry

    Science.gov (United States)

    Zichittella, Gail Eberhardt

    The primary purpose of this study was to validate performance assessments that can be used as teaching and assessment instruments in high school science classrooms. The study evaluated the classroom usability of these performance instruments and established the interrater reliability of the scoring rubrics when used by classroom teachers. The assessment instruments were designed to represent two levels of scientific inquiry: the high-inquiry tasks were relatively unstructured in terms of student directions, while the low-inquiry tasks provided more structure for the student. The tasks covered two content topics studied in chemistry (scientific observation and density). Students from a variety of Western New York school districts who were enrolled in chemistry classes and other science courses completed the tasks at the two levels of inquiry. The chemistry students also completed the NYS Regents Examination in Chemistry. Their classroom teachers were interviewed and completed a questionnaire to help establish their epistemological views on the inclusion of inquiry-based learning in the science classroom. Data showed that the performance assessment tasks were reliable, valid and helpful for obtaining a more complete picture of the students' scientific understanding. The teacher participants reported no difficulty with the usability of the tasks in the high school chemistry setting. The collected data gave no evidence of gender bias with reference to the performance tasks or the NYS Regents Chemistry Examination. Additionally, it was shown that the instructors' classroom practices did have an effect on the students' achievement on the performance tasks and the NYS Regents examination. Data also showed that achievement on the performance tasks was influenced by the number of years of science instruction students had received.

  11. Selection and validation of a set of reliable reference genes for quantitative sod gene expression analysis in C. elegans

    Directory of Open Access Journals (Sweden)

    Vandesompele Jo

    2008-01-01

    Full Text Available Abstract Background In the nematode Caenorhabditis elegans the conserved Ins/IGF-1 signaling pathway regulates many biological processes including life span, stress response, dauer diapause and metabolism. Detection of differentially expressed genes may contribute to a better understanding of the mechanism by which the Ins/IGF-1 signaling pathway regulates these processes. Appropriate normalization is an essential prerequisite for obtaining accurate and reproducible quantification of gene expression levels. The aim of this study was to establish a reliable set of reference genes for gene expression analysis in C. elegans. Results Real-time quantitative PCR was used to evaluate the expression stability of 12 candidate reference genes (act-1, ama-1, cdc-42, csq-1, eif-3.C, mdh-1, gpd-2, pmp-3, tba-1, Y45F10D.4, rgs-6 and unc-16) in wild-type, three Ins/IGF-1 pathway mutants, dauers and L3 stage larvae. After geNorm analysis, cdc-42, pmp-3 and Y45F10D.4 showed the most stable expression pattern and were used to normalize the expression levels of five sod genes. Significant differences in mRNA levels were observed for sod-1 and sod-3 in daf-2 relative to wild-type animals, whereas in dauers sod-1, sod-3, sod-4 and sod-5 are differentially expressed relative to third stage larvae. Conclusion Our findings emphasize the importance of accurate normalization using stably expressed reference genes. The methodology used in this study is generally applicable to reliably quantify gene expression levels in the nematode C. elegans using quantitative PCR.
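
    The geNorm ranking used above assigns each candidate gene a stability value M: the mean, over all other candidates, of the standard deviation of the log2 expression ratios across samples; lower M means more stable. A toy sketch (the expression values are made up for illustration, not the study's data):

```python
import math
from statistics import pstdev

def genorm_m(expr):
    """expr: {gene: [expression in sample 1..n]}. Returns {gene: M value}."""
    genes = list(expr)
    m = {}
    for j in genes:
        pairwise = []
        for k in genes:
            if k == j:
                continue
            # SD across samples of the log2 ratio of gene j to gene k.
            ratios = [math.log2(a / b) for a, b in zip(expr[j], expr[k])]
            pairwise.append(pstdev(ratios))
        m[j] = sum(pairwise) / len(pairwise)
    return m

# Toy data: 'cdc-42' and 'pmp-3' co-vary (a stable pair); 'unstable' does not.
expr = {
    "cdc-42": [100, 110, 90, 105],
    "pmp-3": [200, 222, 178, 212],
    "unstable": [50, 300, 20, 400],
}
m = genorm_m(expr)
assert m["cdc-42"] < m["unstable"]  # stable genes get lower M
```

    geNorm then iteratively drops the highest-M gene until the most stable pair remains.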

  12. Maximal Bell's inequality violation for non-maximal entanglement

    International Nuclear Information System (INIS)

    Kobayashi, M.; Khanna, F.; Mann, A.; Revzen, M.; Santana, A.

    2004-01-01

    Bell's inequality violation (BIQV) for correlations of polarization is studied for a product state of two two-mode squeezed vacuum (TMSV) states. The violation allowed is shown to attain its maximal limit for all values of the squeezing parameter, ζ. We show via an explicit example that a state whose entanglement is not maximal allows maximal BIQV. The Wigner function of the state is non-negative and the average value of either polarization is nil.

  13. Maximizing your fleet

    Energy Technology Data Exchange (ETDEWEB)

    Lytle, J. [GE Capital Rail Services, Calgary, AB (Canada)

    2001-07-01

    A series of viewgraphs were presented to illustrate this discussion which focused on the economics of the railroad environment in 2001 and beyond. Fleet productivity and customer relations were described. The viewgraphs were entitled: new car builds; fleet demographics for North American LPG/AA fleet; and, issues and trends of North American LPG/AA fleet. It was noted that there is a continued demand for North American LPG/AA fleet as cars are phased out. GE Capital Rail Services is addressing the future by focusing on the following four initiatives: service, globalization, building a six sigma company, and transforming into an e-business. The company's six sigma approach is based on customer orientation, safety and compliance, quality and reliability, productivity and planning, cost, and people and culture. The actions taken thus far have been a planned maintenance program, the relocation of safety valves, and the elimination of early failures. figs.

  14. Maximally Symmetric Composite Higgs Models.

    Science.gov (United States)

    Csáki, Csaba; Ma, Teng; Shu, Jing

    2017-09-29

    Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.

  15. Principles of maximally classical and maximally realistic quantum ...

    Indian Academy of Sciences (India)

    Principles of maximally classical and maximally realistic quantum mechanics. S M ROY. Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Abstract. Recently Auberson, Mahoux, Roy and Singh have proved a long standing conjecture of Roy and Singh: In 2N-dimensional phase space, ...

  16. Assessing the effects of data selection and representation on the development of reliable E. coli sigma 70 promoter region predictors.

    Directory of Open Access Journals (Sweden)

    Mostafa M Abbas

    Full Text Available As the number of sequenced bacterial genomes increases, the need for rapid and reliable tools for the annotation of functional elements (e.g., transcriptional regulatory elements) becomes more pressing. Promoters are the key regulatory elements, which recruit the transcriptional machinery through binding to a variety of regulatory proteins (known as sigma factors). The identification of promoter regions is very challenging because these regions do not adhere to specific sequence patterns or motifs and are difficult to determine experimentally. Machine learning represents a promising and cost-effective approach for computational identification of prokaryotic promoter regions. However, the quality of the predictors depends on several factors, including: (i) the training data; (ii) the data representation; (iii) the classification algorithms; and (iv) the evaluation procedures. In this work, we create several variants of E. coli promoter data sets and utilize them to experimentally examine the effect of these factors on the predictive performance of E. coli σ70 promoter models. Our results suggest that under some combinations of the first three criteria, a prediction model might perform very well in cross-validation experiments while its performance on independent test data is drastically poor. This emphasizes the importance of evaluating promoter region predictors using independent test data, which corrects for the over-optimistic performance that might be estimated using the cross-validation procedure. Our analysis of the tested models shows that good prediction models often perform well regardless of how the non-promoter data was obtained. On the other hand, poor prediction models seem to be more sensitive to the choice of non-promoter sequences. Interestingly, the best performing sequence-based classifiers outperform the best performing structure-based classifiers on both cross-validation and independent test performance evaluation experiments. Finally, we propose a

  17. Translational database selection and multiplexed sequence capture for up front filtering of reliable breast cancer biomarker candidates.

    Directory of Open Access Journals (Sweden)

    Patrik L Ståhl

    Full Text Available Biomarker identification is of utmost importance for the development of novel diagnostics and therapeutics. Here we make use of a translational database selection strategy, utilizing data from the Human Protein Atlas (HPA) on differentially expressed protein patterns in healthy and breast cancer tissues as a means to filter out potential biomarkers for underlying genetic causatives of the disease. DNA was isolated from ten breast cancer biopsies, and the protein coding and flanking non-coding genomic regions corresponding to the selected proteins were extracted in a multiplexed format from the samples using a single DNA sequence capture array. Deep sequencing revealed an even enrichment of the multiplexed samples and a great variation of genetic alterations in the tumors of the sampled individuals. Benefiting from the upstream filtering method, the final set of biomarker candidates could be completely verified through bidirectional Sanger sequencing, revealing a 40 percent false positive rate despite high read coverage. Of the variants encountered in translated regions, nine novel non-synonymous variations were identified and verified, two of which were present in more than one of the ten tumor samples.

  18. Maximizing and customer loyalty: Are maximizers less loyal?

    Directory of Open Access Journals (Sweden)

    Linda Lai

    2011-06-01

    Full Text Available Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.

  19. Implications of maximal Jarlskog invariant and maximal CP violation

    International Nuclear Information System (INIS)

    Rodriguez-Jauregui, E.; Universidad Nacional Autonoma de Mexico

    2001-04-01

    We argue here why the CP violating phase Φ in the quark mixing matrix is maximal, that is, Φ = 90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non-commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP violating phase Φ. We find that all squared moduli of the quark mixing elements have a singular point when the CP violating phase Φ takes the value Φ = 90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that the CP violating phase Φ is maximal in order to let nature treat all of the quark mixing matrix moduli democratically. (orig.)
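
    For reference, the Jarlskog invariant discussed above is conventionally defined from the quark mixing matrix elements, and is related to the commutator of the Hermitian squared mass matrices (the sign and phase conventions below are one standard choice, an assumption here):

```latex
J \;=\; \operatorname{Im}\!\left(V_{ud}\,V_{cs}\,V_{us}^{*}\,V_{cd}^{*}\right),
\qquad
\det\!\left[\,M_u M_u^{\dagger},\; M_d M_d^{\dagger}\,\right]
\;=\; 2i\,J \prod_{i<j}\!\left(m_{u_i}^{2}-m_{u_j}^{2}\right)
\prod_{i<j}\!\left(m_{d_i}^{2}-m_{d_j}^{2}\right).
```

    Since J is proportional to sin Φ in standard parametrizations, Φ = 90° is precisely the phase that maximizes J for fixed mixing-angle moduli.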

  20. Crossover and maximal fat-oxidation points in sedentary healthy subjects: methodological issues.

    Science.gov (United States)

    Gmada, N; Marzouki, H; Haboubi, M; Tabka, Z; Shephard, R J; Bouhlel, E

    2012-02-01

    Our study aimed to assess the influence of protocol on the crossover point and maximal fat-oxidation (LIPOX(max)) values in sedentary, but otherwise healthy, young men. Maximal oxygen intake was assessed in 23 subjects, using a progressive maximal cycle ergometer test. Twelve sedentary males (aged 20.5±1.0 years) whose directly measured maximal aerobic power (MAP) values were lower than their theoretical maximal values (tMAP) were selected from this group. These individuals performed, in random sequence, three submaximal graded exercise tests, separated by three-day intervals; work rates were based on the tMAP in one test and on MAP in the remaining two. The third test was used to assess the reliability of data. Heart rate, respiratory parameters, blood lactate, the crossover point and LIPOX(max) values were measured during each of these tests. The crossover point and LIPOX(max) values were significantly lower when the testing protocol was based on tMAP rather than on MAP, as were the values measured at 30, 40, 50 and 60% of maximal aerobic power when work rates were based on tMAP rather than MAP (P<0.001). During the first 5 min of recovery, EPOC(5 min) and blood lactate were significantly correlated (r=0.89; P<0.001). Our data show that, to assess the crossover point and LIPOX(max) values for research purposes, the protocol must be based on the measured MAP rather than on a theoretical value. Such a determination should improve individualization of training for initially sedentary subjects. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
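
    Testing of this kind derives fat-oxidation rates from indirect calorimetry at each workload and takes LIPOX(max) as the intensity where the rate peaks. A common choice of stoichiometry is Frayn's equation (an assumption here, since the abstract does not state which equations were used); the gas-exchange values below are illustrative:

```python
def fat_oxidation_g_per_min(vo2_l_min, vco2_l_min):
    """Frayn-style stoichiometry, ignoring protein oxidation (assumed)."""
    return 1.67 * vo2_l_min - 1.67 * vco2_l_min

def cho_oxidation_g_per_min(vo2_l_min, vco2_l_min):
    """Companion estimate of carbohydrate oxidation."""
    return 4.55 * vco2_l_min - 3.21 * vo2_l_min

# Illustrative workloads (% of MAP) with measured VO2/VCO2 in L/min.
stages = {30: (1.2, 1.02), 40: (1.5, 1.31), 50: (1.8, 1.62), 60: (2.1, 1.97)}
fat = {pct: fat_oxidation_g_per_min(*gas) for pct, gas in stages.items()}
lipoxmax = max(fat, key=fat.get)  # intensity with the highest fat oxidation
print(lipoxmax)  # → 40
```

    The protocol effect reported above enters through the workloads themselves: basing the stages on tMAP instead of measured MAP shifts every (VO2, VCO2) pair and hence the fitted peak.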

  1. Phenomenology of maximal and near-maximal lepton mixing

    International Nuclear Information System (INIS)

    Gonzalez-Garcia, M. C.; Pena-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.

    2001-01-01

    The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 − 2 sin²θ_ex and quantify the present experimental status for |ε|. The strongest constraint on ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 10⁻⁷ eV², the full interval of |ε| is allowed. We also discuss the implications of near-maximal ν_e mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.

  2. Maximal quantum Fisher information matrix

    International Nuclear Information System (INIS)

    Chen, Yu; Yuan, Haidong

    2017-01-01

    We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)

  3. Maximize x(a - x)

    Science.gov (United States)

    Lange, L. H.

    1974-01-01

    Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
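
    A sixth, purely algebraic route (not necessarily among the five in the article) is completing the square:

```latex
x(a - x) \;=\; \frac{a^{2}}{4} - \left(x - \frac{a}{2}\right)^{2} \;\le\; \frac{a^{2}}{4},
```

    with equality exactly when x = a/2, so the product is maximized by splitting a into two equal parts.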

  4. Finding Maximal Quasiperiodicities in Strings

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Pedersen, Christian N. S.

    2000-01-01

    Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log² n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.
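
    For intuition about the objects being found: a substring w is a cover (quasiperiod) of s when occurrences of w tile every position of s, and any cover must be a prefix of s. A brute-force sketch, far less efficient than the suffix-tree algorithm above:

```python
def is_cover(w, s):
    """True if (possibly overlapping) occurrences of w cover every position of s."""
    covered = [False] * len(s)
    start = s.find(w)
    while start != -1:
        for i in range(start, start + len(w)):
            covered[i] = True
        start = s.find(w, start + 1)  # allow overlapping occurrences
    return len(s) > 0 and all(covered)

def quasiperiods(s):
    """All proper covers of s (naive; the paper achieves O(n log n) overall)."""
    return [s[:k] for k in range(1, len(s)) if is_cover(s[:k], s)]

print(quasiperiods("abaaba"))  # → ['aba']
```

    Here "abaaba" is quasiperiodic with quasiperiod "aba": its two occurrences (positions 0 and 3) cover the whole string.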

  5. On the maximal diphoton width

    CERN Document Server

    Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo

    2016-01-01

    Motivated by the 750 GeV diphoton excess found at the LHC, we compute the maximal width into $\gamma\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  6. Maximal frustration as an immunological principle.

    Science.gov (United States)

    de Abreu, F Vistulo; Mostardinha, P

    2009-03-06

    A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.

  7. Maximization

    Directory of Open Access Journals (Sweden)

    A. Garmroodi Asil

    2017-09-01

    To further reduce the sulfur dioxide emission of the entire refining process, two scenarios, acid gas preheat and air preheat, are investigated when either of them is used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and the highest combustion chamber temperature are slightly higher for acid gas preheat, but air preheat is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.

  8. Selection of Reliable Reference Genes for Gene Expression Studies of a Promising Oilseed Crop, Plukenetia volubilis, by Real-Time Quantitative PCR

    Directory of Open Access Journals (Sweden)

    Longjian Niu

    2015-06-01

    Full Text Available Real-time quantitative PCR (RT-qPCR) is a reliable and widely used method for gene expression analysis. The accuracy of the determination of a target gene expression level by RT-qPCR demands the use of appropriate reference genes to normalize the mRNA levels among different samples. However, suitable reference genes for RT-qPCR have not been identified in Sacha inchi (Plukenetia volubilis), a promising oilseed crop known for its polyunsaturated fatty acid (PUFA)-rich seeds. In this study, using RT-qPCR, twelve candidate reference genes were examined in seedlings and adult plants, during flower and seed development and for the entire growth cycle of Sacha inchi. Four statistical algorithms (delta cycle threshold (ΔCt), BestKeeper, geNorm, and NormFinder) were used to assess the expression stabilities of the candidate genes. The results showed that ubiquitin-conjugating enzyme (UCE), actin (ACT) and phospholipase A22 (PLA) were the most stable genes in Sacha inchi seedlings. For roots, stems, leaves, flowers, and seeds from adult plants, 30S ribosomal protein S13 (RPS13), cyclophilin (CYC) and elongation factor-1alpha (EF1α) were recommended as reference genes for RT-qPCR. During the development of reproductive organs, PLA, ACT and UCE were the optimal reference genes for flower development, whereas UCE, RPS13 and RNA polymerase II subunit (RPII) were optimal for seed development. Considering the entire growth cycle of Sacha inchi, UCE, ACT and EF1α were sufficient for the purpose of normalization. Our results provide useful guidelines for the selection of reliable reference genes for the normalization of RT-qPCR data for seedlings and adult plants, for reproductive organs, and for the entire growth cycle of Sacha inchi.
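
    Of the four algorithms named above, the comparative delta-Ct method is the simplest: a gene's stability score is the mean standard deviation of its Ct difference against every other candidate across samples, with lower scores meaning more stable expression. A sketch with made-up Ct values (not the study's data):

```python
from itertools import combinations
from statistics import pstdev

def delta_ct_stability(ct):
    """Comparative delta-Ct method (sketched): mean SD of pairwise Ct
    differences across samples, per gene. Lower = more stable."""
    sd = {g: [] for g in ct}
    for g1, g2 in combinations(ct, 2):
        s = pstdev([a - b for a, b in zip(ct[g1], ct[g2])])
        sd[g1].append(s)
        sd[g2].append(s)
    return {g: sum(v) / len(v) for g, v in sd.items()}

# Toy Ct values across four samples (illustrative only).
ct = {"UCE": [20.1, 20.3, 19.9, 20.2],
      "ACT": [18.0, 18.2, 17.8, 18.1],
      "unstable": [22.0, 25.5, 21.0, 26.3]}
rank = delta_ct_stability(ct)
assert rank["UCE"] < rank["unstable"]  # UCE/ACT track each other; the third does not
```

    Combining this ranking with geNorm, NormFinder and BestKeeper, as the study does, guards against the biases of any single criterion.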

  9. Selection of Reliable Reference Genes for Gene Expression Studies of a Promising Oilseed Crop, Plukenetia volubilis, by Real-Time Quantitative PCR

    Science.gov (United States)

    Niu, Longjian; Tao, Yan-Bin; Chen, Mao-Sheng; Fu, Qiantang; Li, Chaoqiong; Dong, Yuling; Wang, Xiulan; He, Huiying; Xu, Zeng-Fu

    2015-01-01

    Real-time quantitative PCR (RT-qPCR) is a reliable and widely used method for gene expression analysis. The accuracy of the determination of a target gene expression level by RT-qPCR demands the use of appropriate reference genes to normalize the mRNA levels among different samples. However, suitable reference genes for RT-qPCR have not been identified in Sacha inchi (Plukenetia volubilis), a promising oilseed crop known for its polyunsaturated fatty acid (PUFA)-rich seeds. In this study, using RT-qPCR, twelve candidate reference genes were examined in seedlings and adult plants, during flower and seed development and for the entire growth cycle of Sacha inchi. Four statistical algorithms (delta cycle threshold (ΔCt), BestKeeper, geNorm, and NormFinder) were used to assess the expression stabilities of the candidate genes. The results showed that ubiquitin-conjugating enzyme (UCE), actin (ACT) and phospholipase A22 (PLA) were the most stable genes in Sacha inchi seedlings. For roots, stems, leaves, flowers, and seeds from adult plants, 30S ribosomal protein S13 (RPS13), cyclophilin (CYC) and elongation factor-1alpha (EF1α) were recommended as reference genes for RT-qPCR. During the development of reproductive organs, PLA, ACT and UCE were the optimal reference genes for flower development, whereas UCE, RPS13 and RNA polymerase II subunit (RPII) were optimal for seed development. Considering the entire growth cycle of Sacha inchi, UCE, ACT and EF1α were sufficient for the purpose of normalization. Our results provide useful guidelines for the selection of reliable reference genes for the normalization of RT-qPCR data for seedlings and adult plants, for reproductive organs, and for the entire growth cycle of Sacha inchi. PMID:26047338

  10. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity computation reduces to finding a model of the specification with maximal entropy. We give a characterization of global entropy of a process as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code.
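
    For intuition about the quantity being maximized: the entropy of a Markov process accumulates like a reward, with each state contributing its local transition entropy weighted by how often it is visited. A small sketch for an ergodic chain using the stationary distribution (a simplification of the paper's reward-function treatment, which handles Interval Markov Chains and non-ergodic cases):

```python
import math

def stationary(P, iters=2000):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_rate(P):
    """Entropy rate H = sum_i pi_i * sum_j -P_ij log2 P_ij (bits per step)."""
    pi = stationary(P)
    return sum(pi[i] * sum(-p * math.log2(p) for p in P[i] if p > 0)
               for i in range(len(P)))

# A two-state chain: fair-coin transitions maximize the rate at 1 bit/step.
P = [[0.5, 0.5], [0.5, 0.5]]
print(entropy_rate(P))  # → 1.0
```

    Maximizing this quantity over all chains satisfying an interval specification yields the channel capacity bound described in the abstract.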

  11. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity computation reduces to finding a model of the specification with maximal entropy. We give a characterization of global entropy of a process as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...

  12. Software reliability

    CERN Document Server

    Bendell, A

    1986-01-01

    Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth models.
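
    One classic member of the growth-model family surveyed in such texts, the Goel-Okumoto NHPP model, illustrates the prediction idea: the expected cumulative number of failures by time t is m(t) = a(1 − e^(−bt)), so the failure intensity decays as faults are found and removed. A sketch (the parameter values are illustrative, not fitted to any data):

```python
import math

def mean_failures(t, a, b):
    """Goel-Okumoto: expected cumulative failures observed by time t."""
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(t, a, b):
    """Its time derivative: expected failures per unit time at t."""
    return a * b * math.exp(-b * t)

a, b = 100.0, 0.05  # a: total expected faults, b: per-fault detection rate
print(round(mean_failures(10, a, b), 1))  # → 39.3
```

    In practice a and b are estimated from observed failure times (e.g., by maximum likelihood), after which m(t) extrapolates the remaining faults.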

  13. Selection of reliable reference genes for RT-qPCR studies in Octopus vulgaris paralarvae during development and immune-stimulation.

    Science.gov (United States)

    García-Fernández, P; Castellanos-Martínez, S; Iglesias, J; Otero, J J; Gestal, C

    2016-07-01

    The common octopus, Octopus vulgaris, is a new candidate species for aquaculture. However, rearing of octopus paralarvae is hampered by high mortality and poor growth rates that impede its full-cycle culture. The study of genes involved in octopus development and immune response capability could help to understand the keys to paralarval survival and thus to complete the octopus life cycle. Quantitative real-time PCR (RT-qPCR) is the tool most frequently used to quantify gene expression because of its specificity and sensitivity. However, the reliability of RT-qPCR requires the selection of appropriate normalization genes whose expression must be stable across the different experimental conditions of the study. Hence, the aim of the present work is to evaluate the stability of six candidate genes: β-actin (ACT), elongation factor 1-α (EF), ubiquitin (UBI), β-tubulin (TUB), glyceraldehyde 3-phosphate dehydrogenase (GADPH) and ribosomal RNA 18 (18S), in order to select the best reference gene. The stability of gene expression was analyzed using geNorm, NormFinder and BestKeeper in octopus paralarvae of seven developmental stages (embryo and paralarvae of 0, 10, 15, 20, 30 and 34 days) and paralarvae of 20 days after challenge with Vibrio lentus and Vibrio splendidus. The results were validated by measuring the expression of PGRP, a stimulus-specific gene. Our results showed UBI, EF and 18S as the most suitable reference genes during development of octopus paralarvae, and UBI, ACT and 18S for bacterial infection. These results provide a basis for further studies exploring the molecular mechanisms of octopus development and innate immune defense. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Transcriptome-wide selection of a reliable set of reference genes for gene expression studies in potato cyst nematodes (Globodera spp.).

    Science.gov (United States)

    Sabeh, Michael; Duceppe, Marc-Olivier; St-Arnaud, Marc; Mimee, Benjamin

    2018-01-01

    Relative gene expression analyses by qRT-PCR (quantitative reverse transcription PCR) require an internal control to normalize the expression data of genes of interest and eliminate the unwanted variation introduced by sample preparation. A perfect reference gene should have a constant expression level under all the experimental conditions. However, the same few housekeeping genes selected from the literature or successfully used in previous unrelated experiments are often routinely used in new conditions without proper validation of their stability across treatments. The advent of RNA-Seq and the availability of public datasets for numerous organisms are opening the way to finding better reference genes for expression studies. Globodera rostochiensis is a plant-parasitic nematode that is particularly yield-limiting for potato. The aim of our study was to identify a reliable set of reference genes to study G. rostochiensis gene expression. Gene expression levels from an RNA-Seq database were used to identify putative reference genes and were validated with qRT-PCR analysis. Three genes, GR, PMP-3, and aaRS, were found to be very stable within the experimental conditions of this study and are proposed as reference genes for future work.
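
    Screening an RNA-Seq expression matrix for stable reference-gene candidates, as done above, often amounts to ranking genes by their coefficient of variation across conditions (the study's exact criterion is not stated here, so CV is an illustrative stand-in):

```python
from statistics import mean, pstdev

def cv_rank(expr):
    """Rank genes by coefficient of variation (SD/mean) across samples,
    most stable first."""
    cv = {g: pstdev(v) / mean(v) for g, v in expr.items()}
    return sorted(cv, key=cv.get)

# Toy expression values (e.g., TPM) across treatments; illustrative only.
expr = {"GR": [50, 52, 49, 51],
        "PMP-3": [120, 118, 123, 119],
        "variable": [10, 200, 15, 180]}
print(cv_rank(expr)[0])  # → PMP-3
```

    The top-ranked candidates from such a transcriptome-wide screen are then validated by qRT-PCR, as the abstract describes.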

  15. Chamaebatiaria millefolium (Torr.) Maxim.: fernbush

    Science.gov (United States)

    Nancy L. Shaw; Emerenciana G. Hurd

    2008-01-01

    Fernbush - Chamaebatiaria millefolium (Torr.) Maxim. - the only species in its genus, is endemic to the Great Basin, Colorado Plateau, and adjacent areas of the western United States. It is an upright, generally multistemmed, sweetly aromatic shrub 0.3 to 2 m tall. Bark of young branches is brown and becomes smooth and gray with age. Leaves are leathery, alternate,...

  16. Individual selection of X-ray tube settings in computed tomography coronary angiography: Reliability of an automated software algorithm to maintain constant image quality.

    Science.gov (United States)

    Durmus, Tahir; Luhur, Reny; Daqqaq, Tareef; Schwenke, Carsten; Knobloch, Gesine; Huppertz, Alexander; Hamm, Bernd; Lembcke, Alexander

    2016-05-01

    To evaluate a software tool that claims to maintain a constant contrast-to-noise ratio (CNR) in high-pitch dual-source computed tomography coronary angiography (CTCA) by automatically selecting both X-ray tube voltage and current. A total of 302 patients (171 males; age 61±12 years; body weight 82±17 kg, body mass index 27.3±4.6 kg/m(2)) underwent CTCA with a topogram-based, automatic selection of both tube voltage and current using dedicated software with quality reference values of 100 kV and 250 mAs/rotation (i.e., standard values for an average adult weighing 75 kg) and an injected iodine load of 222 mg/kg. The average radiation dose was estimated to be 1.02±0.64 mSv. All data sets had adequate contrast enhancement. Average CNR in the aortic root, left ventricle, and left and right coronary artery was 15.7±4.5, 8.3±2.9, 16.1±4.3 and 15.3±3.9, respectively. Individual CNR values were independent of patients' body size and radiation dose. However, individual CNR values may vary considerably between subjects as reflected by interquartile ranges of 12.6-18.6, 6.2-9.9, 12.8-18.9 and 12.5-17.9, respectively. Moreover, average CNR values were significantly lower in males than females (15.1±4.1 vs. 16.6±11.7 and 7.9±2.7 vs. 8.9±3.0, 15.5±3.9 vs. 16.9±4.6 and 14.7±3.6 vs. 16.0±4.1, respectively). A topogram-based automatic selection of X-ray tube settings in CTCA provides diagnostic image quality independent of patients' body size. Nevertheless, considerable variation of individual CNR values between patients and significant differences of CNR values between males and females occur, which questions the reliability of this approach. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
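
    The per-vessel CNR reported above is typically computed from region-of-interest measurements as CNR = (mean vessel attenuation − mean background attenuation) / background noise SD (the exact ROI convention used in the study is an assumption here; the HU samples below are illustrative):

```python
from statistics import mean, pstdev

def cnr(roi_hu, background_hu):
    """Contrast-to-noise ratio from Hounsfield-unit samples of two ROIs."""
    noise = pstdev(background_hu)  # image noise: SD within the background ROI
    return (mean(roi_hu) - mean(background_hu)) / noise

aorta = [450, 460, 455, 445]   # illustrative HU samples in the aortic root
muscle = [60, 70, 50, 60]      # illustrative background ROI
print(round(cnr(aorta, muscle), 1))  # → 55.5
```

    Because both contrast enhancement and noise scale with tube settings and patient size, holding this ratio constant is exactly what the evaluated software claims to do.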

  17. A decision theoretic framework for profit maximization in direct marketing

    NARCIS (Netherlands)

    Muus, L.; van der Scheer, H.; Wansbeek, T.J.; Montgomery, A.; Franses, P.H.B.F.

    2002-01-01

    One of the most important issues facing a firm involved in direct marketing is the selection of addresses from a mailing list. When the parameters of the model describing consumers' reaction to a mailing are known, addresses for a future mailing can be selected in a profit-maximizing way. Usually,
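
    With known model parameters, the profit-maximizing selection rule reduces to mailing an address exactly when its expected revenue exceeds the mailing cost, p_i x m > c. A minimal sketch (the probabilities, margin, and cost are illustrative, not from the paper):

```python
def select_addresses(response_probs, margin, cost):
    """Mail address i iff expected profit p_i * margin - cost > 0."""
    return [i for i, p in enumerate(response_probs) if p * margin > cost]

probs = [0.01, 0.05, 0.12, 0.30]  # model-estimated response probabilities
selected = select_addresses(probs, margin=50.0, cost=2.0)
print(selected)  # mail addresses whose p exceeds 2/50 = 0.04
```

    The decision-theoretic framework of the paper generalizes this rule to the realistic case where the parameters are estimated with uncertainty rather than known.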

  18. Selection, design, qualification, testing, and reliability of emergency diesel generator units used as Class 1E onsite electric power systems at nuclear power plants

    International Nuclear Information System (INIS)

    1992-04-01

    This guide has been prepared for the resolution of Generic Safety Issue B-56, ''Diesel Generator Reliability,'' and is related to Unresolved Safety Issue (USI) A-44, ''Station Blackout.'' The resolution of USI A-44 established a need for an emergency diesel generator (EDG) reliability program that has the capability to achieve and maintain emergency diesel generator reliability levels in the range of 0.95 per demand or better to cope with station blackout.

  19. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. For this, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement based approaches, holistic techniques and decision analytic approaches. (UK)

  20. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety...... and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic...... approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very...

  1. IMNN: Information Maximizing Neural Networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.
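The contrast between linear and nonlinear compression described above can be illustrated with a simulation. The sketch below is not the IMNN software: it merely estimates, via a Gaussian approximation and a finite difference, the Fisher information that a hand-picked scalar summary carries about the variance of zero-mean Gaussian data (the abstract's first example). All settings are illustrative assumptions.

```python
import numpy as np

# Estimate the Fisher information a scalar summary s(x) carries about theta,
# using simulations, a finite difference, and a Gaussian approximation:
#   F ~ (d mu_s / d theta)^2 / var_s
# Toy model: x_1..x_n ~ N(0, theta); compare a linear and a quadratic summary.
rng = np.random.default_rng(0)

def fisher_of_summary(summary, theta, n=20, sims=200_000, dt=1e-2):
    lo = summary(rng.normal(0.0, np.sqrt(theta - dt), (sims, n)))
    hi = summary(rng.normal(0.0, np.sqrt(theta + dt), (sims, n)))
    mid = summary(rng.normal(0.0, np.sqrt(theta), (sims, n)))
    dmu = (hi.mean() - lo.mean()) / (2 * dt)
    return dmu ** 2 / mid.var()

theta = 1.0
f_linear = fisher_of_summary(lambda x: x.mean(axis=1), theta)        # linear compression
f_square = fisher_of_summary(lambda x: (x ** 2).mean(axis=1), theta) # nonlinear compression
print(f"linear summary:    F ~ {f_linear:.3f}")
print(f"quadratic summary: F ~ {f_square:.3f}")  # near the full-data n/(2*theta^2) = 10
```

The quadratic summary approaches the full-data Fisher information n/(2θ²), while the sample mean carries essentially none about the variance, which is why linear compression fails in this example.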

  2. Is the β phase maximal?

    International Nuclear Information System (INIS)

    Ferrandis, Javier

    2005-01-01

The current experimental determination of the absolute values of the CKM elements indicates that 2|V_ub/(V_cb V_us)| = (1-z), with z given by z = 0.19 ± 0.14. This fact implies that irrespective of the form of the quark Yukawa matrices, the measured value of the SM CP phase β is approximately the maximum allowed by the measured absolute values of the CKM elements. This is β = (π/6 - z/3) for γ = (π/3 + z/3), which implies α = π/2. Alternatively, assuming that β is exactly maximal and using the experimental measurement sin(2β) = 0.726 ± 0.037, the phase γ is predicted to be γ = (π/2 - β) = 66.3° ± 1.7°. The maximality of β, if confirmed by near-future experiments, may give us some clues as to the origin of CP violation
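The quoted prediction can be checked numerically, assuming only the maximality relation γ = π/2 − β and the quoted measurement of sin(2β):

```python
import math

# Assuming beta is exactly maximal, gamma = pi/2 - beta.
# Quoted measurement: sin(2*beta) = 0.726 +/- 0.037.
sin2b = 0.726
beta = 0.5 * math.asin(sin2b)   # radians
gamma = math.pi / 2 - beta      # maximality assumption
print(f"beta  = {math.degrees(beta):.1f} deg")
print(f"gamma = {math.degrees(gamma):.1f} deg")
```

The result, γ ≈ 66.7°, falls inside the quoted 66.3° ± 1.7° interval.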

  3. Strategy to maximize maintenance operation

    OpenAIRE

    Espinoza, Michael

    2005-01-01

    This project presents a strategic analysis to maximize maintenance operations in Alcan Kitimat Works in British Columbia. The project studies the role of maintenance in improving its overall maintenance performance. It provides strategic alternatives and specific recommendations addressing Kitimat Works key strategic issues and problems. A comprehensive industry and competitive analysis identifies the industry structure and its competitive forces. In the mature aluminium industry, the bargain...

  4. Scalable Nonlinear AUC Maximization Methods

    OpenAIRE

    Khalid, Majdi; Ray, Indrakshi; Chitsaz, Hamidreza

    2017-01-01

The area under the ROC curve (AUC) is a measure of interest in various machine learning and data mining applications. It has been widely used to evaluate classification performance on heavily imbalanced data. The kernelized AUC maximization machines have established a superior generalization ability compared to linear AUC machines because of their capability in modeling the complex nonlinear structure underlying most real-world data. However, the high training complexity renders the kernelize...

  5. Cardiorespiratory Coordination in Repeated Maximal Exercise

    Directory of Open Access Journals (Sweden)

    Sergi Garcia-Retortillo

    2017-06-01

Full Text Available Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 was performed 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC, defined by the number of PCs, in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of the eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC
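A minimal sketch of this PCA-based coordination analysis, on synthetic data standing in for the six measured variables (all names and signals here are illustrative):

```python
import numpy as np

# Six standardized time series sharing one "coordinated" drive; coordination is
# summarized by how many PCs explain 90% of variance, the first eigenvalue's
# share, and the entropy of the normalized eigenvalue spectrum.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 600)
common = np.sin(t)                                   # shared physiological drive
X = np.column_stack([common + 0.1 * rng.normal(size=t.size) for _ in range(6)])

Z = (X - X.mean(axis=0)) / X.std(axis=0)             # standardize each variable
eigvals = np.linalg.eigvalsh(np.cov(Z.T))[::-1]      # PCA eigenvalues, descending
share = eigvals / eigvals.sum()
n_pcs = int(np.searchsorted(np.cumsum(share), 0.90) + 1)  # PCs for 90% variance
entropy = -np.sum(share * np.log(share))             # higher => less coordination
print(f"PC1 share: {share[0]:.2f}, PCs for 90%: {n_pcs}, entropy: {entropy:.2f}")
```

Stronger coordination shows up as fewer PCs, a larger first eigenvalue, and lower spectral entropy, matching the direction of the comparisons made in the abstract.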

  6. The value of reliability

    DEFF Research Database (Denmark)

    Fosgerau, Mogens; Karlström, Anders

    2010-01-01

    We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless...... of the form of the standardised distribution of trip durations. This insight provides a unification of the scheduling model and models that include the standard deviation of trip duration directly as an argument in the cost or utility function. The results generalise approximately to the case where the mean...

  7. FLOUTING MAXIMS IN INDONESIA LAWAK KLUB CONVERSATION

    Directory of Open Access Journals (Sweden)

    Rahmawati Sukmaningrum

    2017-04-01

Full Text Available This study aims to identify the types of maxims flouted in the conversation in the famous comedy show, Indonesia Lawak Club. Likewise, it also tries to reveal the speakers' intention of flouting the maxim in the conversation during the show. The writers use a descriptive qualitative method in conducting this research. The data is taken from the dialogue of Indonesia Lawak Club and then analyzed based on Grice's cooperative principles. The researchers read the dialogue's transcripts, identify the maxims, and interpret the data to find the speakers' intention for flouting the maxims in the communication. The results show that there are four types of maxims flouted in the dialogue. Those are maxim of quality (23%), maxim of quantity (11%), maxim of manner (31%), and maxim of relevance (35%). Flouting the maxims in the conversations is intended to make the speakers feel uncomfortable with the conversation, show arrogance, show disagreement or agreement, and ridicule other speakers.

  8. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation.

    Science.gov (United States)

    Chen, Qing; Zhang, Jinxiu; Hu, Ze

    2017-02-23

This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The properties of the periodicity and predictability of satellites' relative position are involved in the link cost metric, which gives a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through numeric simulations of topology stability, measured by average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime.

  9. Value maximizing maintenance policies under general repair

    International Nuclear Information System (INIS)

    Marais, Karen B.

    2013-01-01

    One class of maintenance optimization problems considers the notion of general repair maintenance policies where systems are repaired or replaced on failure. In each case the optimality is based on minimizing the total maintenance cost of the system. These cost-centric optimizations ignore the value dimension of maintenance and can lead to maintenance strategies that do not maximize system value. This paper applies these ideas to the general repair optimization problem using a semi-Markov decision process, discounted cash flow techniques, and dynamic programming to identify the value-optimal actions for any given time and system condition. The impact of several parameters on maintenance strategy, such as operating cost and revenue, system failure characteristics, repair and replacement costs, and the planning time horizon, is explored. This approach provides a quantitative basis on which to base maintenance strategy decisions that contribute to system value. These decisions are different from those suggested by traditional cost-based approaches. The results show (1) how the optimal action for a given time and condition changes as replacement and repair costs change, and identifies the point at which these costs become too high for profitable system operation; (2) that for shorter planning horizons it is better to repair, since there is no time to reap the benefits of increased operating profit and reliability; (3) how the value-optimal maintenance policy is affected by the system's failure characteristics, and hence whether it is worthwhile to invest in higher reliability; and (4) the impact of the repair level on the optimal maintenance policy. -- Highlights: •Provides a quantitative basis for maintenance strategy decisions that contribute to system value. •Shows how the optimal action for a given condition changes as replacement and repair costs change. •Shows how the optimal policy is affected by the system's failure characteristics. •Shows when it is
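The value-optimal idea can be sketched as a small dynamic program. The three-state model, costs and transition probabilities below are invented for illustration and are not the paper's semi-Markov model, but the sketch reproduces qualitative finding (3): short planning horizons favor repair, while long horizons can make replacement pay off.

```python
# Value-maximizing maintenance via backward induction over a toy 3-state system.
GOOD, WORN, FAILED = 0, 1, 2
revenue = {GOOD: 10.0, WORN: 6.0}        # per-period operating profit (illustrative)
repair_cost, replace_cost = 8.0, 20.0    # general repair vs. full replacement
p_degrade = {GOOD: 0.2, WORN: 0.4}       # chance of dropping one state per period
gamma = 0.95                             # one-period discount factor

def optimal_rule(horizon):
    """Returns the best action per state at the first decision epoch."""
    V = [0.0, 0.0, 0.0]                  # value-to-go at the planning horizon
    for _ in range(horizon):
        newV, rule = [], []
        for s in (GOOD, WORN, FAILED):
            options = {"replace": -replace_cost + gamma * V[GOOD]}
            if s != FAILED:              # operate one period, possibly degrading
                options["keep"] = revenue[s] + gamma * (
                    (1 - p_degrade[s]) * V[s] + p_degrade[s] * V[s + 1])
            if s != GOOD:                # general repair: one state better
                options["repair"] = -repair_cost + gamma * V[s - 1]
            best = max(options, key=options.get)
            rule.append(best)
            newV.append(options[best])
        V = newV
    return rule

print("1-period horizon:  ", optimal_rule(1))    # repair on failure
print("200-period horizon:", optimal_rule(200))  # replacement can pay off
```

With a one-period horizon there is no time to reap the benefits of a good-as-new system, so a cheap repair wins on failure; far from the horizon, the higher future operating profit justifies the dearer replacement.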

  10. Scyllac equipment reliability analysis

    International Nuclear Information System (INIS)

    Gutscher, W.D.; Johnson, K.J.

    1975-01-01

    Most of the failures in Scyllac can be related to crowbar trigger cable faults. A new cable has been designed, procured, and is currently undergoing evaluation. When the new cable has been proven, it will be worked into the system as quickly as possible without causing too much additional down time. The cable-tip problem may not be easy or even desirable to solve. A tightly fastened permanent connection that maximizes contact area would be more reliable than the plug-in type of connection in use now, but it would make system changes and repairs much more difficult. The balance of the failures have such a low occurrence rate that they do not cause much down time and no major effort is underway to eliminate them. Even though Scyllac was built as an experimental system and has many thousands of components, its reliability is very good. Because of this the experiment has been able to progress at a reasonable pace

  11. Reliability Engineering

    CERN Document Server

    Lazzaroni, Massimo

    2012-01-01

This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first Section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be

  12. Reliability training

    Science.gov (United States)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  13. Comparison of maximal voluntary isometric contraction and hand-held dynamometry in measuring muscle strength of patients with progressive lower motor neuron syndrome

    NARCIS (Netherlands)

    Visser, J.; Mans, E.; de Visser, M.; van den Berg-Vos, R. M.; Franssen, H.; de Jong, J. M. B. V.; van den Berg, L. H.; Wokke, J. H. J.; de Haan, R. J.

    2003-01-01

Context. Maximal voluntary isometric contraction, a method for quantitatively assessing muscle strength, has proven to be reliable, accurate and sensitive in amyotrophic lateral sclerosis. Hand-held dynamometry is less expensive and more quickly applicable than maximal voluntary isometric contraction.

  14. Towards higher reliability of CMS computing facilities

    International Nuclear Information System (INIS)

    Bagliesi, G; Bloom, K; Brew, C; Flix, J; Kreuzer, P; Sciabà, A

    2012-01-01

The CMS experiment has adopted a computing system where resources are distributed worldwide in more than 50 sites. The operation of the system requires a stable and reliable behaviour of the underlying infrastructure. CMS has established procedures to extensively test all relevant aspects of a site and their capability to sustain the various CMS computing workflows at the required scale. The Site Readiness monitoring infrastructure has been instrumental in understanding how the system as a whole was improving towards LHC operations, measuring the reliability of sites when running CMS activities, and providing sites with the information they need to troubleshoot any problem. This contribution reviews the complete automation of the Site Readiness program, with the description of monitoring tools and their inclusion into the Site Status Board (SSB), the performance checks, the use of tools like HammerCloud, and the impact in improving the overall reliability of the Grid from the point of view of the CMS computing system. These results are used by CMS to select good sites to conduct workflows, in order to maximize workflow efficiencies. The performance against these tests seen at the sites during the first years of LHC running is also reviewed.

  15. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose to improve the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability Monte Carlo simulation programs are used especially in analysis of very complex systems. In order to increase the applicability of the programs variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested. (author)
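The importance-sampling idea mentioned at the end of the abstract can be sketched on a toy rare-event problem. The three-component parallel system and its failure probabilities are invented for illustration:

```python
import random

# Estimate a rare system-failure probability (all 3 parallel components fail,
# exact answer 0.01**3 = 1e-6) by crude Monte Carlo and by importance sampling
# with a biased failure probability q and a likelihood-ratio correction.
random.seed(42)
p, q, n_comp, n_sim = 0.01, 0.5, 3, 100_000

def crude():
    hits = sum(all(random.random() < p for _ in range(n_comp)) for _ in range(n_sim))
    return hits / n_sim

def importance():
    total = 0.0
    for _ in range(n_sim):
        fails = [random.random() < q for _ in range(n_comp)]
        if all(fails):                    # system fails only if all components do
            total += (p / q) ** n_comp    # likelihood ratio for this outcome
    return total / n_sim

est_crude = crude()
est_is = importance()
print(f"crude MC:            {est_crude:.2e}")  # usually 0 -- the event is too rare
print(f"importance sampling: {est_is:.2e}")     # close to the exact 1e-06
```

Crude sampling wastes almost every draw on non-failures; the biased sampler hits the failure region constantly and the likelihood ratio (p/q)³ removes the bias, a simple instance of the variance-reduction techniques the abstract refers to.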

  16. Maximal Abelian sets of roots

    CERN Document Server

    Lawther, R

    2018-01-01

In this work the author lets Φ be an irreducible root system, with Coxeter group W. He considers subsets of Φ which are abelian, meaning that no two roots in the set have sum in Φ ∪ {0}. He classifies all maximal abelian sets (i.e., abelian sets properly contained in no other) up to the action of W: for each W-orbit of maximal abelian sets we provide an explicit representative X, identify the (setwise) stabilizer W_X of X in W, and decompose X into W_X-orbits. Abelian sets of roots are closely related to abelian unipotent subgroups of simple algebraic groups, and thus to abelian p-subgroups of finite groups of Lie type over fields of characteristic p. Parts of the work presented here have been used to confirm the p-rank of E_8(p^n), and (somewhat unexpectedly) to obtain for the first time the 2-ranks of the Monster and Baby Monster sporadic groups, together with the double cover of the latter. Root systems of classical type are dealt with quickly here; the vast majority of the present work con...

  17. Systems reliability/structural reliability

    International Nuclear Information System (INIS)

    Green, A.E.

    1980-01-01

    The question of reliability technology using quantified techniques is considered for systems and structures. Systems reliability analysis has progressed to a viable and proven methodology whereas this has yet to be fully achieved for large scale structures. Structural loading variants over the half-time of the plant are considered to be more difficult to analyse than for systems, even though a relatively crude model may be a necessary starting point. Various reliability characteristics and environmental conditions are considered which enter this problem. The rare event situation is briefly mentioned together with aspects of proof testing and normal and upset loading conditions. (orig.)

  18. MAXIMIZING THE BENEFITS OF ERP SYSTEMS

    Directory of Open Access Journals (Sweden)

    Paulo André da Conceição Menezes

    2010-04-01

Full Text Available ERP (Enterprise Resource Planning) systems have been consolidated in companies of different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied in different phases, such as the strategic priorities and strategic planning defined as ERP Strategy; business process review and ERP selection in the pre-implementation phase; project management and ERP adaptation in the implementation phase; as well as ERP revision and integration efforts in the post-implementation phase. Through rigorous use of case study methodology, this research led to developing and testing a framework for maximizing the benefits of ERP systems, and seeks to contribute to the generation of ERP initiatives to optimize their performance.

  19. Maximizing benefits from resource development

    International Nuclear Information System (INIS)

    Skjelbred, B.

    2002-01-01

The main objectives of Norwegian petroleum policy are to maximize the value creation for the country, develop a national oil and gas industry, and to be at the environmental forefront of long term resource management and coexistence with other industries. The paper presents a graph depicting production and net export of crude oil for countries around the world for 2002. Norway produced 3.41 mill b/d and exported 3.22 mill b/d. Norwegian petroleum policy measures include effective regulation and government ownership, research and technology development, and internationalisation. Research and development has been in five priority areas, including enhanced recovery, environmental protection, deep water recovery, small fields, and the gas value chain. The benefits of internationalisation include capitalizing on Norwegian competency, exploiting emerging markets and the assurance of long-term value creation and employment. 5 figs

  20. Maximizing synchronizability of duplex networks

    Science.gov (United States)

    Wei, Xiang; Emenheiser, Jeffrey; Wu, Xiaoqun; Lu, Jun-an; D'Souza, Raissa M.

    2018-01-01

We study the synchronizability of duplex networks formed by two randomly generated network layers with different patterns of interlayer node connections. According to the master stability function, we use the smallest nonzero eigenvalue and the eigenratio between the largest and the second smallest eigenvalues of supra-Laplacian matrices to characterize synchronizability on various duplexes. We find that the interlayer linking weight and linking fraction have a profound impact on the synchronizability of duplex networks. Increasingly large interlayer coupling weight is found to cause either decreasing or constant synchronizability for different classes of network dynamics. In addition, negative node degree correlation across interlayer links outperforms positive degree correlation when most interlayer links are present. The reverse is true when only a few interlayer links are present. The numerical results and understanding based on these representative duplex networks are illustrative and instructive for building insights into maximizing the synchronizability of more realistic multiplex networks.
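A sketch of the two spectral quantities named above, for the simplest duplex one can write down: two identical ring layers joined node-to-node with interlayer weight d (an illustrative choice, not the paper's random layers):

```python
import numpy as np

# Build the supra-Laplacian of a duplex of two N-node ring layers with
# one-to-one interlayer links of weight d, then report lambda_2 and the
# eigenratio lambda_max / lambda_2 used in the master stability function.
N, d = 10, 1.0

def ring_laplacian(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

L1 = L2 = ring_laplacian(N)
I = np.eye(N)
supra = np.block([[L1 + d * I, -d * I],     # intra-layer blocks plus
                  [-d * I,     L2 + d * I]])  # interlayer coupling per node pair
eig = np.sort(np.linalg.eigvalsh(supra))
lam2, lam_max = eig[1], eig[-1]
print(f"lambda_2 = {lam2:.3f}, eigenratio = {lam_max / lam2:.2f}")
```

For identical layers with uniform one-to-one coupling, the supra-Laplacian spectrum is the union of the layer spectrum {μ_i} and its shift {μ_i + 2d}, which the computed λ2 and eigenratio reflect.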

  1. Human reliability

    International Nuclear Information System (INIS)

    Bubb, H.

    1992-01-01

This book resulted from the activity of Task Force 4.2 - 'Human Reliability'. This group was established on February 27th, 1986, at the plenary meeting of the Technical Reliability Committee of VDI, within the framework of the joint committee of VDI on industrial systems technology - GIS. It is composed of representatives of industry, representatives of research institutes, of technical control boards and universities, whose job it is to study how man fits into the technical side of the world of work and to optimize this interaction. In a total of 17 sessions, information from the part of ergonomy dealing with human reliability in using technical systems at work was exchanged, and different methods for its evaluation were examined and analyzed. The outcome of this work was systematized and compiled in this book. (orig.) [de]

  2. Microelectronics Reliability

    Science.gov (United States)

    2017-01-17

[Extraction residue from the report's figure list: "inverters connected in a chain"; "Figure 3: Typical graph showing frequency versus square root of..."] The report describes developing an experimental reliability estimating methodology that could both illuminate the lifetime reliability of advanced devices, circuits and... or FIT of the device. In other words, an accurate estimate of the device lifetime was found and thus the reliability that can be conveniently

  3. Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events

    Science.gov (United States)

    DeChant, C. M.; Moradkhani, H.

    2014-12-01

    Hydrometeorological events (i.e. floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of the phenomenon. In these forecasts, the probability of the event, over some lead time, is estimated based on some model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform some risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Due to this requisite for reliable forecasts to perform effective risk management, this study takes a renewed look at reliability assessment in event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (Brier Score, Reliability Diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-Binomial distribution is the most exact model available for the verification framework, and therefore maximizes one's ability to distinguish between a reliable and unreliable forecast. Application of this verification system was also examined within a real forecasting case study, highlighting the additional statistical power provided with the use of the Poisson-Binomial distribution.
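The Poisson-Binomial verification idea can be sketched directly: given the issued event probabilities p_i, the number of events that actually occur is Poisson-Binomial distributed, and an observed count in its tails is evidence against reliability. The forecast probabilities and the observed count below are illustrative:

```python
# Exact Poisson-Binomial pmf via dynamic programming, then a two-sided test
# of whether an observed event count is consistent with the issued forecasts.
def poisson_binomial_pmf(ps):
    """pmf of the sum of independent Bernoulli(p_i) variables."""
    pmf = [1.0]
    for p in ps:
        pmf = [(pmf[k] if k < len(pmf) else 0.0) * (1 - p)
               + (pmf[k - 1] * p if k > 0 else 0.0)
               for k in range(len(pmf) + 1)]
    return pmf

forecast_probs = [0.1, 0.3, 0.5, 0.7, 0.2, 0.9, 0.4, 0.6]  # issued probabilities
observed_events = 7                                         # events that occurred
pmf = poisson_binomial_pmf(forecast_probs)
# two-sided p-value: total probability of outcomes no more likely than observed
p_value = sum(p for p in pmf if p <= pmf[observed_events] + 1e-12)
print(f"P(K = {observed_events}) = {pmf[observed_events]:.4f}, p-value = {p_value:.4f}")
```

Here 7 events against an expected count of 3.7 yields a small p-value, so this forecaster would be flagged as unreliable by the hypothesis test the abstract advocates.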

  4. VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS

    Directory of Open Access Journals (Sweden)

    Desak Putu Eka Pratiwi

    2015-07-01

Full Text Available Maxim is a principle that must be obeyed by all participants textually and interpersonally in order to have a smooth communication process. Conversational maxims are divided into four, namely maxim of quality, maxim of quantity, maxim of relevance, and maxim of manner. Violation of a maxim may occur in a conversation in which the information the speaker has is not delivered well to his speaking partner. Violation of a maxim in a conversation will result in an awkward impression. Examples of violation are given information that is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate the maxims to create unique and controversial advertisements. This study aims to examine the violation of maxims in conversations of TV ads. The source of data in this research is food advertisements aired on TV media. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented with an informal method. The results of this study show an interesting fact: the violation of maxims found in the advertisements actually makes them very attractive and gives them a high value.

  5. Quantitative approaches for profit maximization in direct marketing

    NARCIS (Netherlands)

    van der Scheer, H.R.

    1998-01-01

    An effective direct marketing campaign aims at selecting those targets, offer and communication elements - at the right time - that maximize the net profits. The list of individuals to be mailed, i.e. the targets, is considered to be the most important component. Therefore, a large amount of direct
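The selection rule implied above can be sketched in a few lines, with invented response probabilities (not the thesis's model): mail an address exactly when its expected response value exceeds the mailing cost.

```python
# Profit-maximizing list selection: with known response probability p_i per
# address, margin m per response and cost c per piece, expected profit is
# maximized by mailing exactly the addresses with p_i * m - c > 0.
m, c = 50.0, 2.0                     # margin per response, cost per piece
response_probs = {"A": 0.010, "B": 0.055, "C": 0.120, "D": 0.035}

selected = sorted(a for a, p in response_probs.items() if p * m > c)
expected_profit = sum(response_probs[a] * m - c for a in selected)
print(selected, round(expected_profit, 2))
```

When the model parameters are only estimated rather than known, as the record notes, the cutoff must account for estimation uncertainty, which is what motivates the thesis's quantitative approaches.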

  6. Cryogenic Selective Surfaces

    Data.gov (United States)

    National Aeronautics and Space Administration — Selective surfaces have wavelength dependent emissivity/absorption. These surfaces can be designed to reflect solar radiation, while maximizing infrared emittance,...

  7. Maximizing ROI (return on information)

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, B.

    2000-05-01

The role and importance of managing information are discussed, underscoring the importance by quoting from the report of the International Data Corporation, according to which Fortune 500 companies lost $12 billion in 1999 due to inefficiencies resulting from intellectual re-work, substandard performance, and inability to find knowledge resources. The report predicts that this figure will rise to $31.5 billion by 2003. Key impediments to implementing knowledge management systems are identified as: the cost and human resources required for deployment; the inflexibility of historical systems to adapt to change; and the difficulty of achieving corporate acceptance of inflexible software products that require changes in 'normal' ways of doing business. The author recommends the use of model-, document- and rule-independent systems with a document-centered interface (DCI), employing rapid application development (RAD), object technologies and visual model development, which eliminate these problems, making it possible for companies to maximize their return on information (ROI) and achieve substantial savings in implementation costs.

  8. Maximizing the optical network capacity.

    Science.gov (United States)

    Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I

    2016-03-06

    Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. © 2016 The Authors.
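The nonlinear capacity limit discussed above can be illustrated with a simplified Gaussian-noise model in which nonlinear interference grows with the cube of launch power, so spectral efficiency peaks at a finite optimum power. The coefficients below are arbitrary; only the shape is the point:

```python
import numpy as np

# Simplified nonlinear-Shannon sketch: SNR(P) = P / (N_ase + eta * P**3),
# with made-up linear-noise and nonlinearity coefficients (arbitrary units).
N_ase, eta = 1e-3, 1e-2
P = np.logspace(-3, 1, 2000)       # launch power sweep (a.u.)
snr = P / (N_ase + eta * P ** 3)
se = np.log2(1 + snr)              # spectral efficiency per polarization

p_opt_numeric = P[np.argmax(se)]
p_opt_theory = (N_ase / (2 * eta)) ** (1 / 3)   # maximizer of SNR(P)
print(f"peak SE {se.max():.2f} bit/s/Hz at P = {p_opt_numeric:.3f} "
      f"(theory {p_opt_theory:.3f})")
```

Setting the derivative of P/(N + ηP³) to zero gives P_opt = (N/2η)^(1/3), which the grid search reproduces; beyond this power, Kerr-type nonlinear interference erodes capacity faster than signal power adds to it.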

  9. Redefining reliability

    International Nuclear Information System (INIS)

    Paulson, S.L.

    1995-01-01

Want to buy some reliability? The question would have been unthinkable in some markets served by the natural gas business even a few years ago, but in the new gas marketplace, industrial, commercial and even some residential customers have the opportunity to choose from among an array of options about the kind of natural gas service they need--and are willing to pay for. The complexities of this brave new world of restructuring and competition have sent the industry scrambling to find ways to educate and inform its customers about the increased responsibility they will have in determining the level of gas reliability they choose. This article discusses the new options and the new responsibilities of customers, the need for continuous education, and MidAmerican Energy Company's experiment in direct marketing of natural gas

  10. Does mental exertion alter maximal muscle activation?

    Directory of Open Access Journals (Sweden)

    Vianney eRozand

    2014-09-01

    Full Text Available Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed in a randomized order three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), and (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.

  11. AUC-Maximizing Ensembles through Metalearning.

    Science.gov (United States)

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
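
The metalearning step described above can be illustrated with a toy sketch (this is not the authors' Super Learner implementation, and all data and base-learner scores are invented): a rank-based AUC computed via the Mann-Whitney U statistic, and a grid search over the convex-combination weight of two hypothetical base-learner scores.

```python
import numpy as np

def auc(y, s):
    # AUC via the Mann-Whitney U statistic: rank the scores, then count
    # how often a positive case outranks a negative one.
    ranks = np.empty(len(s))
    ranks[np.argsort(s)] = np.arange(1, len(s) + 1)
    n_pos = int(y.sum())
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)
# Two hypothetical base-learner scores, one noisier than the other
p1 = y + rng.normal(0.0, 1.0, 500)
p2 = y + rng.normal(0.0, 1.5, 500)

# Grid search over the convex-combination weight maximizing AUC
weights = np.linspace(0.0, 1.0, 101)
best_w = max(weights, key=lambda w: auc(y, w * p1 + (1 - w) * p2))
```

Because the grid includes the endpoints w = 0 and w = 1, the selected combination can never score below the better base learner on the data it is tuned on, mirroring the record's observation that AUC-maximizing metalearners can outperform individual base algorithms.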

  12. On maximal massive 3D supergravity

    OpenAIRE

    Bergshoeff , Eric A; Hohm , Olaf; Rosseel , Jan; Townsend , Paul K

    2010-01-01

    We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.

  13. Issues in cognitive reliability

    International Nuclear Information System (INIS)

    Woods, D.D.; Hitchler, M.J.; Rumancik, J.A.

    1984-01-01

    This chapter examines some problems in current methods to assess reactor operator reliability at cognitive tasks and discusses new approaches to solve these problems. The two types of human failures are errors in the execution of an intention and errors in the formation/selection of an intention. Topics considered include the types of description, error correction, cognitive performance and response time, the speed-accuracy tradeoff function, function based task analysis, and cognitive task analysis. One problem of human reliability analysis (HRA) techniques in general is the question of what are the units of behavior whose reliability is to be determined. A second problem for HRA is that people often detect and correct their errors. The use of function based analysis, which maps the problem space for plant control, is recommended.

  14. Activity versus outcome maximization in time management.

    Science.gov (United States)

    Malkoc, Selin A; Tonietto, Gabriela N

    2018-04-30

    Feeling time-pressed has become ubiquitous. Time management strategies have emerged to help individuals fit in more of their desired and necessary activities. We provide a review of these strategies. In doing so, we distinguish between two, often competing, motives people have in managing their time: activity maximization and outcome maximization. The emerging literature points to an important dilemma: a given strategy that maximizes the number of activities might be detrimental to outcome maximization. We discuss such factors that might hinder performance in work tasks and enjoyment in leisure tasks. Finally, we provide theoretically grounded recommendations that can help balance these two important goals in time management. Published by Elsevier Ltd.

  15. On the maximal superalgebras of supersymmetric backgrounds

    International Nuclear Information System (INIS)

    Figueroa-O'Farrill, Jose; Hackett-Jones, Emily; Moutsopoulos, George; Simon, Joan

    2009-01-01

    In this paper we give a precise definition of the notion of a maximal superalgebra of certain types of supersymmetric supergravity backgrounds, including the Freund-Rubin backgrounds, and propose a geometric construction extending the well-known construction of its Killing superalgebra. We determine the structure of maximal Lie superalgebras and show that there is a finite number of isomorphism classes, all related via contractions from an orthosymplectic Lie superalgebra. We use the structure theory to show that maximally supersymmetric waves do not possess such a maximal superalgebra, but that the maximally supersymmetric Freund-Rubin backgrounds do. We perform the explicit geometric construction of the maximal superalgebra of AdS4 × S7 and find that it is isomorphic to osp(1|32). We propose an algebraic construction of the maximal superalgebra of any background asymptotic to AdS4 × S7, and we test this proposal by computing the maximal superalgebra of the M2-brane in its two maximally supersymmetric limits, finding agreement.

  16. Task-oriented maximally entangled states

    International Nuclear Information System (INIS)

    Agrawal, Pankaj; Pradhan, B

    2010-01-01

    We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.

  17. MHTGR thermal performance envelopes: Reliability by design

    International Nuclear Information System (INIS)

    Etzel, K.T.; Howard, W.W.; Zgliczynski, J.B.

    1992-05-01

    This document discusses thermal performance envelopes, which are used to specify steady-state design requirements for the systems of the Modular High Temperature Gas-Cooled Reactor so as to maximize plant performance reliability with an optimized design. The thermal performance envelopes are constructed around the expected operating point, accounting for uncertainties in actual plant as-built parameters and plant operation. The components are then designed to perform successfully at all points within the envelope. As a result, plant reliability is maximized by accounting for component thermal performance variation in the design. The design is optimized by providing a means to determine required margins in a disciplined and visible fashion.

  18. Maximize Benefits, Minimize Risk: Selecting the Right HVAC Firm.

    Science.gov (United States)

    Golden, James T.

    1993-01-01

    An informal survey of 20 major urban school districts found that 40% were currently operating in a "break down" maintenance mode. A majority, 57.9%, also indicated they saw considerable benefits in contracting for heating, ventilating, and air conditioning (HVAC) maintenance services with outside firms. Offers guidelines in selecting…

  19. Maximally Entangled Multipartite States: A Brief Survey

    International Nuclear Information System (INIS)

    Enríquez, M; Wintrowicz, I; Życzkowski, K

    2016-01-01

    The problem of identifying maximally entangled quantum states of composite quantum systems is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of maximally entangled state depends on the measure used. (paper)

  20. Utility maximization and mode of payment

    NARCIS (Netherlands)

    Koning, R.H.; Ridder, G.; Heijmans, R.D.H.; Pollock, D.S.G.; Satorra, A.

    2000-01-01

    The implications of stochastic utility maximization in a model of choice of payment are examined. Three types of compatibility with utility maximization are distinguished: global compatibility, local compatibility on an interval, and local compatibility on a finite set of points.

  1. Corporate Social Responsibility and Profit Maximizing Behaviour

    OpenAIRE

    Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta

    2005-01-01

    We examine the behavior of a profit maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers' SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer driven) corporate social responsibility is an optimal choice compatible with profit maximizing behavior.

  2. Maximal Entanglement in High Energy Physics

    Directory of Open Access Journals (Sweden)

    Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli

    2017-11-01

    Full Text Available We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows reproducing QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.
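
As a numerical aside (not taken from the paper), the benchmark the authors use, maximal entanglement of a two-particle helicity state, can be checked directly: for a Bell-type state the reduced density matrix of either particle is maximally mixed, giving exactly one bit of entanglement entropy.

```python
import numpy as np

# Bell-type helicity state (|RL> + |LR>)/sqrt(2), in the basis
# |RR>, |RL>, |LR>, |LL> -- the standard maximally entangled benchmark.
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Reduced state of particle 1: trace out the second helicity index
rho_a = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Entanglement entropy of the reduced state (in bits)
evals = np.linalg.eigvalsh(rho_a)
entropy = -sum(p * np.log2(p) for p in evals if p > 1e-12)
```

For this state the reduced density matrix is I/2, so the entropy is 1 bit, the maximum for a single qubit.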

  3. An Introduction To Reliability

    International Nuclear Information System (INIS)

    Park, Kyoung Su

    1993-08-01

    This book introduces reliability: the definition of reliability, the need for reliability, the system life cycle and reliability, and reliability and failure rate (overview, reliability characteristics, chance failures, failure rates that change over time, failure modes, and replacement). It also covers reliability in engineering design, reliability testing under assumed failure rates, plotting of reliability data, prediction of system reliability, conservation of the system, and failure topics including failure relays and analysis of system safety.

  4. Reliability issues : a Canadian perspective

    International Nuclear Information System (INIS)

    Konow, H.

    2004-01-01

    A Canadian perspective of power reliability issues was presented. Reliability depends on adequacy of supply and a framework for standards. The challenges facing the electric power industry include new demand, plant replacement and exports. It is expected that demand will be 670 TWh by 2020, with 205 TWh coming from new plants. Canada will require an investment of $150 billion to meet this demand, and the need is comparable in the United States. As trade grows, the challenge becomes a continental issue, and investment in the bi-national transmission grid will be essential. The 5 point plan of the Canadian Electricity Association is to: (1) establish an investment climate to ensure future electricity supply, (2) move government and industry towards smart and effective regulation, (3) work to ensure a sustainable future for the next generation, (4) foster innovation and accelerate skills development, and (5) build on the strengths of an integrated North American system to maximize opportunity for Canadians. The CEA's 7 measures that enhance North American reliability were listed, with emphasis on its support for a self-governing international organization for developing and enforcing mandatory reliability standards. CEA also supports the creation of a binational Electric Reliability Organization (ERO) to identify and solve reliability issues in the context of a bi-national grid. tabs., figs.

  5. Energy, complexity and wealth maximization

    CERN Document Server

    Ayres, Robert

    2016-01-01

    This book is about the mechanisms of wealth creation, or what we like to think of as evolutionary “progress”. For the modern economy, natural wealth consists of complex physical structures of condensed (“frozen”) energy – mass - maintained in the earth’s crust far from thermodynamic equilibrium. However, we usually perceive wealth as created when mutation or “invention” – a change agent - introduces something different, and fitter, and usually after some part of the natural wealth of the planet has been exploited in an episode of “creative destruction”. Selection out of the resulting diversity is determined by survival in a competitive environment, whether a planet, a habitat, or a market. While human wealth is associated with money and what it can buy, it is ultimately based on natural wealth, both as materials transformed into useful artifacts, and how those artifacts, activated by energy, can create and transmit useful information. Humans have learned how to transform natural wealth i...

  6. INTRA- AND INTER-OBSERVER RELIABILITY IN SELECTION OF THE HEART RATE DEFLECTION POINT DURING INCREMENTAL EXERCISE: COMPARISON TO A COMPUTER-GENERATED DEFLECTION POINT

    Directory of Open Access Journals (Sweden)

    Bridget A. Duoos

    2002-12-01

    Full Text Available This study was designed to (1) determine the relative frequency of occurrence of a heart rate deflection point (HRDP), when compared to a linear relationship, during progressive exercise, (2) measure the reproducibility of a visual assessment of the HRDP, both within and between observers, and (3) compare visual and computer-assessed deflection points. Subjects consisted of 73 competitive male cyclists with mean age of 31.4 ± 6.3 years, mean height 178.3 ± 4.8 cm and weight 74.0 ± 4.4 kg. Tests were conducted on an electrically-braked cycle ergometer beginning at 25 watts and progressing 25 watts per minute to fatigue. Heart rates were recorded during the last 10 seconds of each stage and at fatigue. Scatter plots of heart rate versus watts were computer-generated and given to 3 observers on two different occasions. A computer program was developed to assess whether the data points were best represented by a single line or two lines; the HRDP represented the intersection of the two lines. Results of this study showed that (1) computer assessment found that 44 of 73 subjects (60.3%) had scatter plots best represented by a straight line with no HRDP; (2) in those subjects having an HRDP, all 3 observers showed significant differences (p = 0.048, p = 0.007, p = 0.001) in reproducibility of their HRDP selection, and differences in HRDP selection were significant for two of the three comparisons between observers (p = 0.002, p = 0.305, p = 0.0003); and (3) computer-generated HRDP was significantly different from visual HRDP for 2 of 3 observers (p = 0.0016, p = 0.513, p = 0.0001). It is concluded that (1) HRDP occurs in a minority of subjects, (2) significant differences exist, both within and between observers, in selection of HRDP, and (3) differences in agreement between visual and computer-generated HRDP indicate that, when an HRDP exists, it should be computer-assessed.
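
The single-line-versus-two-line test described in this abstract can be sketched as follows (a simplified reconstruction, not the authors' program; the protocol values, deflection location and noise level are invented): fit one least-squares line, then fit two lines over every candidate breakpoint and keep the split with the lowest residual sum of squares.

```python
import numpy as np

def sse_linear(x, y):
    # Residual sum of squares of a single least-squares line
    coef = np.polyfit(x, y, 1)
    return float(((y - np.polyval(coef, x)) ** 2).sum())

def best_two_segment(x, y):
    # Try every interior breakpoint, fitting a separate line on each side;
    # the breakpoint of the best split approximates the HRDP.
    best_sse, best_break = np.inf, None
    for i in range(3, len(x) - 2):          # keep at least 3 points per side
        s = sse_linear(x[:i], y[:i]) + sse_linear(x[i:], y[i:])
        if s < best_sse:
            best_sse, best_break = s, x[i]
    return best_sse, best_break

# Hypothetical 25 W incremental protocol with a deflection near 250 W
watts = np.arange(25, 401, 25).astype(float)
hr = np.where(watts < 250, 90 + 0.3 * watts, 165 + 0.06 * (watts - 250))
hr = hr + np.random.default_rng(1).normal(0.0, 1.0, len(watts))

sse_one = sse_linear(watts, hr)
sse_two, deflection = best_two_segment(watts, hr)
```

A formal decision between the one-line and two-line models would add a penalty for the extra parameters (e.g., an F-test), since the two-segment fit always has equal or lower residual error.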

  7. Eco-friendly ionic liquid based ultrasonic assisted selective extraction coupled with a simple liquid chromatography for the reliable determination of acrylamide in food samples.

    Science.gov (United States)

    Albishri, Hassan M; El-Hady, Deia Abd

    2014-01-01

    Acrylamide in food has drawn worldwide attention since 2002 due to its neurotoxic and carcinogenic effects. These influences brought out the dual polar and non-polar characters of acrylamide, which enable it to dissolve in the aqueous blood medium or penetrate the non-polar plasma membrane. In the current work, a simple HPLC/UV system was used to reveal that the penetration of acrylamide into the non-polar phase was stronger than its dissolution in the polar phase. The presence of phosphate salts in the polar phase reduced the acrylamide interaction with the non-polar phase. Furthermore, an eco-friendly and low-cost coupling of the HPLC/UV with ionic liquid based ultrasonic assisted extraction (ILUAE) was developed to determine the acrylamide content in food samples. ILUAE was proposed for the efficient extraction of acrylamide from bread and potato chips samples. The extracts were obtained by soaking of potato chips and bread samples in 1.5 mol L(-1) 1-butyl-3-methylimidazolium bromide (BMIMBr) for 30.0 and 60.0 min, respectively, with subsequent chromatographic separation within 12.0 min using a Luna C18 column and a 100% water mobile phase at 0.5 mL min(-1) under 25 °C column temperature at 250 nm. The extraction and analysis of acrylamide could be achieved within 2 h. The mean extraction efficiency of acrylamide showed adequate repeatability with a relative standard deviation (RSD) of 4.5%. The limit of detection and limit of quantitation were 25.0 and 80.0 ng mL(-1), respectively. The accuracy of the proposed method was tested by recovery in seven food samples, giving values ranging between 90.6% and 109.8%. Therefore, the methodology was successfully validated by official guidelines, indicating its reliability for the analysis of real samples and proving useful for its intended purpose. Moreover, it serves as a simple, eco-friendly and low-cost alternative to hitherto reported methods. © 2013 Elsevier B.V. All rights reserved.

  8. Selection and validation of a set of reliable reference genes for quantitative RT-PCR studies in the brain of the Cephalopod Mollusc Octopus vulgaris

    Directory of Open Access Journals (Sweden)

    Biffali Elio

    2009-07-01

    Full Text Available Abstract Background Quantitative real-time polymerase chain reaction (RT-qPCR) is valuable for studying the molecular events underlying physiological and behavioral phenomena. Normalization of real-time PCR data is critical for a reliable mRNA quantification. Here we identify reference genes to be utilized in RT-qPCR experiments to normalize and monitor the expression of target genes in the brain of the cephalopod mollusc Octopus vulgaris, an invertebrate. Such an approach is novel for this taxon and of advantage in future experiments given the complexity of the behavioral repertoire of this species when compared with its relatively simple neural organization. Results We chose 16S and 18S rRNA, actB, EEF1A, tubA and ubi as candidate reference genes (housekeeping genes, HKG). The expression of 16S and 18S was highly variable and did not meet the requirements of candidate HKG. The expression of the other genes was almost stable and uniform among samples. We analyzed the expression of the HKGs in two different sets of animals using tissues taken from the central nervous system (brain parts) and mantle (here considered as control tissue) by BestKeeper, geNorm and NormFinder. We found that HKG expression differed considerably with respect to brain area and octopus sample in an HKG-specific manner. However, when the mantle is treated as control tissue and the entire central nervous system is considered, NormFinder revealed tubA and ubi as the most suitable HKG pair. These two genes were utilized to evaluate the relative expression of the genes FoxP, creb, dat and TH in O. vulgaris. Conclusion We analyzed the expression profiles of some genes here identified for O. vulgaris by applying RT-qPCR analysis for the first time in cephalopods. We validated candidate reference genes and found the expression of ubi and tubA to be the most appropriate to evaluate the expression of target genes in the brain of different octopuses. Our results also underline the
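
A minimal sketch of the kind of stability screening described above (not the actual BestKeeper, geNorm or NormFinder algorithms; the expression values and variances are invented): rank candidate reference genes by the standard deviation of their log2 expression across samples, so that highly variable genes such as the rRNAs drop out of the candidate pool.

```python
import numpy as np

rng = np.random.default_rng(7)
samples = 12
# Hypothetical relative-expression values for the six candidate genes named
# in the abstract; the spreads are invented, with the rRNAs made unstable.
expr = {
    "16S":   rng.lognormal(5.0, 0.90, samples),
    "18S":   rng.lognormal(6.0, 0.80, samples),
    "actB":  rng.lognormal(4.0, 0.15, samples),
    "EEF1A": rng.lognormal(4.2, 0.20, samples),
    "tubA":  rng.lognormal(3.8, 0.05, samples),
    "ubi":   rng.lognormal(4.1, 0.08, samples),
}

# BestKeeper-style stability proxy: SD of log2 expression (lower = stabler)
stability = {g: float(np.log2(v).std(ddof=1)) for g, v in expr.items()}
best_pair = sorted(stability, key=stability.get)[:2]
```

The real tools add refinements (pairwise variation in geNorm, a model-based variance decomposition in NormFinder), but all start from the same idea: a good reference gene varies little across the samples being normalized.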

  9. Bipartite Bell Inequality and Maximal Violation

    International Nuclear Information System (INIS)

    Li Ming; Fei Shaoming; Li-Jost Xian-Qing

    2011-01-01

    We present new Bell inequalities for arbitrary dimensional bipartite quantum systems. The maximal violation of the inequalities is computed. The Bell inequality is capable of detecting quantum entanglement of both pure and mixed quantum states more effectively. (general)

  10. HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL

    CERN Document Server

    HR Division

    2000-01-01

    Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and maximum reimbursements, has changed significantly. An adjustment of the amounts of the maximum reimbursements and the fixed contributions is therefore necessary, as from 1 January 2000. Maximum reimbursements: the revised maximum reimbursements will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN. Fixed contributions: the fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999); voluntarily insured no longer dependent child: 326,- (was 321...

  11. Maximal Inequalities for Dependent Random Variables

    DEFF Research Database (Denmark)

    Hoffmann-Jorgensen, Jorgen

    2016-01-01

    Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M_n = max_{1 ≤ k ≤ n} S_k (...
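
A concrete instance of such a maximal inequality is Kolmogorov's: for independent mean-zero steps, P(max_{k ≤ n} |S_k| ≥ λ) ≤ Var(S_n)/λ². A small Monte Carlo sketch (illustrative only; the walk and parameters are invented) checks the bound numerically:

```python
import numpy as np

# Monte Carlo check of Kolmogorov's maximal inequality for independent,
# mean-zero steps: P(max_k |S_k| >= lam) <= Var(S_n) / lam^2.
rng = np.random.default_rng(42)
n, trials, lam = 100, 20000, 15.0
x = rng.choice([-1.0, 1.0], size=(trials, n))   # +/-1 steps, variance 1
s = np.cumsum(x, axis=1)                        # partial sums S_1..S_n
m_n = np.abs(s).max(axis=1)                     # maximal partial sum M_n
lhs = float((m_n >= lam).mean())                # empirical probability
rhs = n / lam**2                                # Var(S_n) / lam^2
```

The empirical left-hand side falls well below the bound here, as expected, since Kolmogorov's inequality is generally not tight for a simple random walk.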

  12. Maximizing Function through Intelligent Robot Actuator Control

    Data.gov (United States)

    National Aeronautics and Space Administration — Maximizing Function through Intelligent Robot Actuator Control Successful missions to Mars and beyond will only be possible with the support of high-performance...

  13. An ethical justification of profit maximization

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2010-01-01

    In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain forms of profit (and utility) maximizing actions are ruled out, e.g., by behavioural norms or formal institutions.

  14. A definition of maximal CP-violation

    International Nuclear Information System (INIS)

    Roos, M.

    1985-01-01

    The unitary matrix of quark flavour mixing is parametrized in a general way, permitting a mathematically natural definition of maximal CP violation. Present data turn out to violate this definition by 2-3 standard deviations. (orig.)

  15. A cosmological problem for maximally symmetric supergravity

    International Nuclear Information System (INIS)

    German, G.; Ross, G.G.

    1986-01-01

    Under very general considerations it is shown that inflationary models of the universe based on maximally symmetric supergravity with flat potentials are unable to resolve the cosmological energy density (Polonyi) problem. (orig.)

  16. Insulin resistance and maximal oxygen uptake

    DEFF Research Database (Denmark)

    Seibaek, Marie; Vestergaard, Henrik; Burchardt, Hans

    2003-01-01

    BACKGROUND: Type 2 diabetes, coronary atherosclerosis, and physical fitness all correlate with insulin resistance, but the relative importance of each component is unknown. HYPOTHESIS: This study was undertaken to determine the relationship between insulin resistance, maximal oxygen uptake, and the presence of either diabetes or ischemic heart disease. METHODS: The study population comprised 33 patients with and without diabetes and ischemic heart disease. Insulin resistance was measured by a hyperinsulinemic euglycemic clamp; maximal oxygen uptake was measured during a bicycle exercise test. RESULTS: There was a strong correlation between maximal oxygen uptake and insulin-stimulated glucose uptake (r = 0.7, p = 0.001), and maximal oxygen uptake was the only factor of importance for determining insulin sensitivity in a model which also included the presence of diabetes and ischemic heart disease. CONCLUSION...

  17. Maximal supergravities and the E10 model

    International Nuclear Information System (INIS)

    Kleinschmidt, Axel; Nicolai, Hermann

    2006-01-01

    The maximal rank hyperbolic Kac-Moody algebra e10 has been conjectured to play a prominent role in the unification of duality symmetries in string and M theory. We review some recent developments supporting this conjecture.

  18. Gaussian maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  19. Gaussian maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-01-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  20. Neutrino mass textures with maximal CP violation

    International Nuclear Information System (INIS)

    Aizawa, Ichiro; Kitabayashi, Teruyuki; Yasue, Masaki

    2005-01-01

    We show three types of neutrino mass textures, which give maximal CP violation as well as maximal atmospheric neutrino mixing. These textures are described by six real mass parameters: one type specified by two complex flavor neutrino masses and two constrained ones, and the others specified by three complex flavor neutrino masses. In each texture, we calculate mixing angles and masses, which are consistent with observed data, as well as Majorana CP phases.

  1. Why firms should not always maximize profits

    OpenAIRE

    Kolstad, Ivar

    2006-01-01

    Though corporate social responsibility (CSR) is on the agenda of most major corporations, corporate executives still largely support the view that corporations should maximize the returns to their owners. There are two lines of defence for this position. One is the Friedmanian view that maximizing owner returns is the corporate social responsibility of corporations. The other is a position voiced by many executives, that CSR and profits go together. This paper argues that the first position i...

  2. Maximally Informative Observables and Categorical Perception

    OpenAIRE

    Tsiang, Elaine

    2012-01-01

    We formulate the problem of perception in the framework of information theory, and prove that categorical perception is equivalent to the existence of an observable that has the maximum possible information on the target of perception. We call such an observable maximally informative. Regardless of whether categorical perception is real, maximally informative observables can form the basis of a theory of perception. We conclude with the implications of such a theory for the problem of speech per...

  3. Extraction and reliable determination of acrylamide from thermally processed foods using ionic liquid-based ultrasound-assisted selective microextraction combined with spectrophotometry.

    Science.gov (United States)

    Altunay, Nail; Elik, Adil; Gürkan, Ramazan

    2018-02-01

    Acrylamide (AAm) is a carcinogenic chemical that can form in thermally processed foods by the Maillard reaction of glucose with asparagine. AAm can easily be formed especially in frequently consumed chips and cereal-based foods depending on processing conditions. Considering these properties of AAm, a new, simple and green method is proposed for the extraction of AAm from thermally processed food samples. In this study, an ionic liquid (1-butyl-3-methylimidazolium tetrafluoroborate, [Bmim][BF 4 ]) as extractant was used in the presence of a cationic phenazine group dye, 3,7-diamino-5-phenylphenazinium chloride (PSH + , phenosafranine) at pH 7.5 for the extraction of AAm as an ion-pair complex from selected samples. Under optimum conditions, the analytical features obtained for the proposed method were as follows; linear working range, the limits of detection (LOD, 3S b /m) and quantification (LOQ, 10S b /m), preconcentration factor, sensitivity enhancement factor, sample volume and recovery% were 2.2-350 µg kg -1 , 0.7 µg kg -1 , 2.3 µg kg -1 , 120, 95, 60 mL and 94.1-102.7%, respectively. The validity of the method was tested by analysis of two certified reference materials (CRMs) and intra-day and inter-day precision studies. Finally, the method was successfully applied to the determination of AAm levels in thermally processed foods using the standard addition method.
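
The LOD and LOQ definitions quoted in this abstract (3S_b/m and 10S_b/m, with S_b the standard deviation of the blank signal and m the calibration slope) can be computed in a few lines; the blank replicates and slope below are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical blank-signal replicates (arbitrary signal units)
blank = np.array([0.0102, 0.0098, 0.0105, 0.0101, 0.0097, 0.0100, 0.0104])
s_b = blank.std(ddof=1)          # standard deviation of the blank, S_b

# Hypothetical calibration slope m, signal per ug kg^-1
m = 1.2e-3
lod = 3 * s_b / m                # limit of detection,      3*S_b/m
loq = 10 * s_b / m               # limit of quantification, 10*S_b/m
```

By construction LOQ/LOD = 10/3, so a method's reported pair (here 0.7 and 2.3 µg kg⁻¹, a ratio of roughly 3.3) is a quick internal consistency check on these definitions.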

  4. A metabolic fingerprinting approach based on selected ion flow tube mass spectrometry (SIFT-MS) and chemometrics: A reliable tool for Mediterranean origin-labeled olive oils authentication.

    Science.gov (United States)

    Bajoub, Aadil; Medina-Rodríguez, Santiago; Ajal, El Amine; Cuadros-Rodríguez, Luis; Monasterio, Romina Paula; Vercammen, Joeri; Fernández-Gutiérrez, Alberto; Carrasco-Pancorbo, Alegría

    2018-04-01

    Selected ion flow tube mass spectrometry (SIFT-MS) in combination with chemometrics was used to authenticate the geographical origin of Mediterranean virgin olive oils (VOOs) produced under geographical origin labels. In particular, 130 oil samples from six different Mediterranean regions (Kalamata (Greece); Toscana (Italy); Meknès and Tyout (Morocco); and Priego de Córdoba and Baena (Spain)) were considered. The headspace volatile fingerprints were measured by SIFT-MS in full scan with H₃O⁺, NO⁺ and O₂⁺ as precursor ions and the results were subjected to chemometric treatments. Principal Component Analysis (PCA) was used for preliminary multivariate data analysis and Partial Least Squares-Discriminant Analysis (PLS-DA) was applied to build different models (considering the three reagent ions) to classify samples according to the country of origin and regions (within the same country). The multi-class PLS-DA models showed very good performance in terms of fitting accuracy (98.90-100%) and prediction accuracy (96.70-100% accuracy for cross validation and 97.30-100% accuracy for external validation (test set)). Considering the two-class PLS-DA models, the one for the Spanish samples showed 100% sensitivity, specificity and accuracy in calibration, cross validation and external validation; the model for Moroccan oils also showed very satisfactory results (with perfect scores for almost every parameter in all the cases). Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Quantized hopfield networks for reliability optimization

    International Nuclear Information System (INIS)

    Nourelfath, Mustapha; Nahas, Nabil

    2003-01-01

    The use of neural networks in the reliability optimization field is rare. This paper presents an application of a recent kind of neural network to a reliability optimization problem for a series system with multiple-choice constraints incorporated at each subsystem, maximizing the system reliability subject to the system budget. The problem is formulated as a nonlinear binary integer programming problem and is characterized as NP-hard. Our neural network design for solving this problem efficiently is based on a quantized Hopfield network. This network allows us to obtain optimal design solutions very frequently and much more quickly than other Hopfield networks.
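
    The optimization problem described — each subsystem in a series system must take exactly one design alternative, and system reliability (the product of the chosen alternatives' reliabilities) is maximized subject to a budget — can be written down directly. The brute-force sketch below is a reference formulation with invented data, not the paper's quantized Hopfield heuristic, which is what makes large instances tractable:

```python
from itertools import product

def best_design(subsystems, budget):
    """Pick one (reliability, cost) alternative per series subsystem to
    maximize the product of reliabilities within the budget."""
    best_rel, best_choice = 0.0, None
    for choice in product(*subsystems):
        if sum(cost for _, cost in choice) > budget:
            continue
        rel = 1.0
        for r, _ in choice:
            rel *= r
        if rel > best_rel:
            best_rel, best_choice = rel, choice
    return best_rel, best_choice

# Two hypothetical subsystems, each with a cheap and a premium option.
subsystems = [[(0.90, 2), (0.99, 5)], [(0.85, 1), (0.95, 4)]]
rel, choice = best_design(subsystems, budget=7)
print(round(rel, 3))  # 0.855
```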

  6. The selection of soybean varieties based on the chemical composition of the grains as a means of maximizing soybean processing industry's profits

    Directory of Open Access Journals (Sweden)

    Adriana Sbardelotto

    2008-06-01

    Full Text Available This article presents a mathematical model based on linear programming to support decisions on the choice of soybean varieties for processing (oil extraction and bran production), so that this choice maximizes the processing industry's profits. The study used samples of nine soybean varieties produced in the municipality of Dois Vizinhos, in the southwest region of the state of Paraná, Brazil. Laboratory analyses provided the respective grain compositions, byproducts, residues and losses. The model made it possible to estimate the economic returns to the processing industry from crushing each variety individually and from the average of the nine varieties. The results show that the chemical composition of the processed grains directly influences the industry's economic results, and that crushing the varieties 'BRS133', 'CD215', 'EMBRAPA48', 'BRS184', 'SPRING8350' and 'M-SOY5826' can maximize the processing industry's profits, whereas crushing the varieties 'CD 205', 'CD 206' and 'BRS 214' can reduce profits, relative to the average obtained across the evaluated varieties.
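
    The kind of arithmetic the model rests on can be sketched with a toy crushing-margin calculation. With a single crushing-capacity constraint, a linear program of this form reduces to ranking varieties by margin per tonne; the compositions and prices below are invented for illustration, not the paper's data:

```python
def profit_per_ton(oil_frac, bran_frac, oil_price, bran_price, bean_cost):
    """Crushing margin for one tonne of beans given grain composition."""
    return oil_frac * oil_price + bran_frac * bran_price - bean_cost

# Hypothetical varieties: (name, oil fraction, bran fraction).
varieties = [("A", 0.20, 0.74), ("B", 0.18, 0.78), ("C", 0.22, 0.70)]
margins = {name: profit_per_ton(oil, bran, oil_price=900,
                                bran_price=300, bean_cost=350)
           for name, oil, bran in varieties}
ranked = sorted(margins, key=margins.get, reverse=True)
print(ranked)  # ['C', 'A', 'B']
```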

  7. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul

  8. Shareholder, stakeholder-owner or broad stakeholder maximization

    OpenAIRE

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating stakeholder-owner. Maximization of shareholder value is a special case of owner-maximization, and only under quite restrictive assumptions is shareholder maximization larger than or equal to stakeholder-owner...

  9. "The Theory was Beautiful Indeed": Rise, Fall and Circulation of Maximizing Methods in Population Genetics (1930-1980).

    Science.gov (United States)

    Grodwohl, Jean-Baptiste

    2017-08-01

    Describing the theoretical population geneticists of the 1960s, Joseph Felsenstein reminisced: "our central obsession was finding out what function evolution would try to maximize. Population geneticists used to think, following Sewall Wright, that mean relative fitness, W, would be maximized by natural selection" (Felsenstein 2000). The present paper describes the genesis, diffusion and fall of this "obsession", by giving a biography of the mean fitness function in population genetics. This modeling method devised by Sewall Wright in the 1930s found its heyday in the late 1950s and early 1960s, in the wake of Motoo Kimura's and Richard Lewontin's works. It seemed a reliable guide in the mathematical study of deterministic effects (the study of natural selection in populations of infinite size, with no drift), leading to powerful generalizations presenting law-like properties. Progress in population genetics theory, it then seemed, would come from the application of this method to the study of systems with several genes. This ambition came to a halt in the context of the influential objections made by the Australian mathematician Patrick Moran in 1963. These objections triggered a controversy between mathematically- and biologically-inclined geneticists, which affected both the formal standards and the aims of population genetics as a science. Over the course of the 1960s, the mean fitness method withered with the ambition of developing the deterministic theory. The mathematical theory became increasingly complex. Kimura re-focused his modeling work on the theory of random processes; as a result of his computer simulations, Lewontin became the staunchest critic of maximizing principles in evolutionary biology. The mean fitness method then migrated to other research areas, being refashioned and used in evolutionary quantitative genetics and behavioral ecology.

  10. Cost analysis of reliability investigations

    International Nuclear Information System (INIS)

    Schmidt, F.

    1981-01-01

    Taking Epstein's testing theory as a basis, premises are formulated for the selection of cost-optimized reliability inspection plans. Using an example, the expected testing costs and inspection time periods of various inspection plan types, standardized on the basis of the exponential distribution, are compared. It can be shown that sequential reliability tests usually involve lower costs than failure- or time-fixed tests. The most 'costly' test is to be expected with the inspection plan type NOt. (orig.) [de

  11. Aerospace reliability applied to biomedicine.

    Science.gov (United States)

    Lalli, V. R.; Vargo, D. J.

    1972-01-01

    An analysis is presented that indicates that the reliability and quality assurance methodology selected by NASA to minimize failures in aerospace equipment can be applied directly to biomedical devices to improve hospital equipment reliability. The Space Electric Rocket Test project is used as an example of NASA application of reliability and quality assurance (R&QA) methods. By analogy a comparison is made to show how these same methods can be used in the development of transducers, instrumentation, and complex systems for use in medicine.

  12. Application of reliability methods in Ontario Hydro

    International Nuclear Information System (INIS)

    Jeppesen, R.; Ravishankar, T.J.

    1985-01-01

    Ontario Hydro has established a reliability program in support of its substantial nuclear program. Application of the reliability program to achieve both production and safety goals is described. The value of such a reliability program is evident in the record of Ontario Hydro's operating nuclear stations. The factors which have contributed to the success of the reliability program are identified as line management's commitment to reliability; selective and judicious application of reliability methods; establishing performance goals and monitoring the in-service performance; and collection, distribution, review and utilization of performance information to facilitate cost-effective achievement of goals and improvements. (orig.)

  13. Vacua of maximal gauged D=3 supergravities

    International Nuclear Information System (INIS)

    Fischbacher, T; Nicolai, H; Samtleben, H

    2002-01-01

    We analyse the scalar potentials of maximal gauged three-dimensional supergravities which reveal a surprisingly rich structure. In contrast to maximal supergravities in dimensions D≥4, all these theories possess a maximally supersymmetric (N=16) ground state with negative cosmological constant Λ < 0, with the exception of the SO(4,4)² gauged theory, whose maximally supersymmetric ground state has Λ = 0. We compute the mass spectra of bosonic and fermionic fluctuations around these vacua and identify the unitary irreducible representations of the relevant background (super)isometry groups to which they belong. In addition, we find several stationary points which are not maximally supersymmetric, and determine their complete mass spectra as well. In particular, we show that there are analogues of all stationary points found in higher dimensions, among them are de Sitter (dS) vacua in the theories with noncompact gauge groups SO(5,3)² and SO(4,4)², as well as anti-de Sitter (AdS) vacua in the compact gauged theory preserving 1/4 and 1/8 of the supersymmetries. All the dS vacua have tachyonic instabilities, whereas there do exist nonsupersymmetric AdS vacua which are stable, again in contrast to the D≥4 theories

  14. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.

  15. Utility Maximization in Nonconvex Wireless Systems

    CERN Document Server

    Brehmer, Johannes

    2012-01-01

    This monograph formulates a framework for modeling and solving utility maximization problems in nonconvex wireless systems. First, a model for utility optimization in wireless systems is defined. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed. The development is based on a careful examination of the properties that are required for the application of each method. The focus is on problems whose initial formulation does not allow for a solution by standard convex methods. Solution approaches that take into account the nonconvexities inherent to wireless systems are discussed in detail. The monograph concludes with two case studies that demonstrate the application of the proposed framework to utility maximization in multi-antenna broadcast channels.

  16. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering: quality and reliability; reliability data; the importance of reliability engineering; reliability measures; the Poisson process, including goodness-of-fit tests and the Poisson arrival model; reliability estimation, e.g. for the exponential distribution; reliability of systems; availability; preventive maintenance, such as replacement policies, minimal repair policies, shock models, spares, group maintenance and periodic inspection; analysis of common cause failures; and analysis models of repair effects.

  17. Optimal design of water supply networks for enhancing seismic reliability

    International Nuclear Information System (INIS)

    Yoo, Do Guen; Kang, Doosun; Kim, Joong Hoon

    2016-01-01

    The goal of the present study is to construct a reliability evaluation model of a water supply system that takes seismic hazards into consideration, and to present techniques for enhancing the hydraulic reliability of the design. To maximize seismic reliability within a limited budget, an optimal design model is developed using an optimization technique called harmony search (HS). The model is applied to actual water supply systems to determine pipe diameters that maximize seismic reliability. The reliabilities of the optimal and existing designs were compared and analyzed. The optimal design would enhance reliability by approximately 8.9% at a construction cost approximately 1.3% lower than the current pipe construction cost. In addition, reinforcing the durability of individual pipes without considering the system as a whole produced ineffective results in terms of both cost and reliability. Therefore, to increase the supply capability of the entire system, optimized pipe diameter combinations should be derived. Systems configured through the optimal design can maximally secure hydraulic stability under normal conditions and available demand under abnormal conditions. - Highlights: • We construct a seismic reliability evaluation model of a water supply system. • We present techniques to enhance hydraulic reliability at the design stage. • A harmony search algorithm is applied in the optimal design process. • The proposed optimal design improves reliability by about 9%. • Optimized pipe diameter combinations should be derived.
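
    Harmony search, the metaheuristic named above, keeps a memory of candidate solutions and builds new ones by mixing remembered components with random ones, replacing the worst member whenever the new harmony improves on it. A minimal discrete version over a toy pipe-diameter objective (all data hypothetical, and far simpler than a hydraulic network model):

```python
import random

def harmony_search(objective, candidates, n_vars,
                   iters=2000, hmcr=0.9, hms=10, seed=0):
    """Minimize `objective` over vectors whose entries come from
    `candidates` (e.g. commercial pipe diameters). Illustrative only."""
    rng = random.Random(seed)
    memory = [[rng.choice(candidates) for _ in range(n_vars)]
              for _ in range(hms)]
    for _ in range(iters):
        # Memory consideration with probability hmcr, else a random pick.
        new = [rng.choice(memory)[i] if rng.random() < hmcr
               else rng.choice(candidates) for i in range(n_vars)]
        worst = max(range(hms), key=lambda k: objective(memory[k]))
        if objective(new) < objective(memory[worst]):
            memory[worst] = new
    return min(memory, key=objective)

# Toy objective: squared deviation from hypothetical target diameters.
target = [300, 200, 400]
best = harmony_search(lambda d: sum((a - b) ** 2 for a, b in zip(d, target)),
                      candidates=[100, 200, 300, 400], n_vars=3)
print(best)
```

A real application would replace the toy objective with a hydraulic simulation combining cost and reliability terms.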

  18. Maximizing band gaps in plate structures

    DEFF Research Database (Denmark)

    Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard

    2006-01-01

    Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated theoretically and experimentally and the issue of finite size effects is addressed.

  19. Singularity Structure of Maximally Supersymmetric Scattering Amplitudes

    DEFF Research Database (Denmark)

    Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy

    2014-01-01

    We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity—properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).

  20. Learning curves for mutual information maximization

    International Nuclear Information System (INIS)

    Urbanczik, R.

    2003-01-01

    An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed [S. Becker and G. Hinton, Nature (London) 355, 161 (1992)]. For a generic data model, I show that in the large sample limit the structure in the data is recognized by mutual information maximization. For a more restricted model, where the networks are similar to perceptrons, I calculate the learning curves for zero-temperature Gibbs learning. These show that convergence can be rather slow, and a way of regularizing the procedure is considered

  1. Finding Maximal Pairs with Bounded Gap

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Lyngsø, Rune B.; Pedersen, Christian N. S.

    1999-01-01

    In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper and lower bounded interval in time O(n log n + z), where z is the number of reported pairs. If the upper bound is removed, the time reduces to O(n + z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well known methods for finding tandem repeats.
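
    For orientation: a pair is two occurrences of the same substring, it is maximal when neither occurrence can be extended left or right without breaking the match, and the gap is the distance between the occurrences. A simple quadratic checker (the paper's contribution is doing this in O(n log n + z)):

```python
def maximal_pairs(s, min_gap, max_gap):
    """All maximal pairs (i, j, length) with min_gap <= gap <= max_gap,
    where gap = j - i - length. Quadratic illustration only."""
    n, out = len(s), []
    for i in range(n):
        for j in range(i + 1, n):
            if s[i] != s[j]:
                continue
            if i > 0 and s[i - 1] == s[j - 1]:
                continue  # not left-maximal
            k = 0
            while j + k < n and s[i + k] == s[j + k]:
                k += 1    # extend right as far as possible (right-maximal)
            gap = j - i - k
            if min_gap <= gap <= max_gap:
                out.append((i, j, k))
    return out

print(maximal_pairs("xabcyabcz", 1, 10))  # [(1, 5, 3)]
print(maximal_pairs("aa", 0, 0))          # [(0, 1, 1)] -- a tandem repeat
```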

  2. Maximizing the Range of a Projectile.

    Science.gov (United States)

    Brown, Ronald A.

    1992-01-01

    Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
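
    The textbook result under discussion: with launch speed v on level ground and no air resistance, the range is R = v² sin(2θ)/g, which is maximized when 2θ = 90°, i.e. θ = 45°. A quick numerical confirmation:

```python
import math

def projectile_range(v, theta_deg, g=9.81):
    """Level-ground range R = v**2 * sin(2*theta) / g (no air resistance)."""
    return v ** 2 * math.sin(2 * math.radians(theta_deg)) / g

# Scan launch angles in whole degrees; the maximum lands at 45.
best = max(range(91), key=lambda t: projectile_range(20.0, t))
print(best)  # 45
```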

  3. Robust Utility Maximization Under Convex Portfolio Constraints

    International Nuclear Information System (INIS)

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-01-01

    We study a robust maximization problem from terminal wealth and consumption under a convex constraints on the portfolio. We state the existence and the uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle

  4. Ehrenfest's Lottery--Time and Entropy Maximization

    Science.gov (United States)

    Ashbaugh, Henry S.

    2010-01-01

    Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…

  5. Reserve design to maximize species persistence

    Science.gov (United States)

    Robert G. Haight; Laurel E. Travis

    2008-01-01

    We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...

  6. Maximal indecomposable past sets and event horizons

    International Nuclear Information System (INIS)

    Krolak, A.

    1984-01-01

    The existence of maximal indecomposable past sets MIPs is demonstrated using the Kuratowski-Zorn lemma. A criterion for the existence of an absolute event horizon in space-time is given in terms of MIPs and a relation to black hole event horizon is shown. (author)

  7. Maximization of eigenvalues using topology optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2000-01-01

    ... to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency. One example...

  8. Maximizing Resource Utilization in Video Streaming Systems

    Science.gov (United States)

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…

  9. A THEORY OF MAXIMIZING SENSORY INFORMATION

    NARCIS (Netherlands)

    Hateren, J.H. van

    1992-01-01

    A theory is developed on the assumption that early sensory processing aims at maximizing the information rate in the channels connecting the sensory system to more central parts of the brain, where it is assumed that these channels are noisy and have a limited dynamic range. Given a stimulus power

  10. Maximizing scientific knowledge from randomized clinical trials

    DEFF Research Database (Denmark)

    Gustafsson, Finn; Atar, Dan; Pitt, Bertram

    2010-01-01

    Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly vari...

  11. A Model of College Tuition Maximization

    Science.gov (United States)

    Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.

    2009-01-01

    This paper develops a series of models for optimal tuition pricing at private colleges and universities. The university is assumed to be a profit-maximizing, price-discriminating monopolist. The enrollment decision of students is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…

  12. Logit Analysis for Profit Maximizing Loan Classification

    OpenAIRE

    Watt, David L.; Mortensen, Timothy L.; Leistritz, F. Larry

    1988-01-01

    Lending criteria and loan classification methods are developed. Rating system breaking points are analyzed to present a method for maximizing loan revenues. Financial characteristics of farmers are used as determinants of delinquency in a multivariate logistic model. Results indicate that the debt-to-asset and operating ratios are the most indicative of default.
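
    A multivariate logistic model of the kind described maps borrower ratios to a delinquency probability, and loan classification then applies a cutoff chosen to maximize expected revenue. The coefficients below are hypothetical placeholders, not the study's estimates:

```python
import math

def default_probability(debt_to_asset, operating_ratio,
                        b0=-4.0, b1=3.5, b2=2.0):
    """Logistic model: P(default) = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2))).
    Coefficients are illustrative, not the estimates from the study."""
    z = b0 + b1 * debt_to_asset + b2 * operating_ratio
    return 1.0 / (1.0 + math.exp(-z))

p = default_probability(0.8, 0.9)
print(round(p, 3))  # 0.646
```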

  13. Developing maximal neuromuscular power: Part 1--biological basis of maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-01-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances, the ability to generate maximal muscular power. Part 1 focuses on the factors that affect maximal power production, while part 2, which will follow in a forthcoming edition of Sports Medicine, explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability of the neuromuscular system to generate maximal power is affected by a range of interrelated factors. Maximal muscular power is defined and limited by the force-velocity relationship and affected by the length-tension relationship. The ability to generate maximal power is influenced by the type of muscle action involved and, in particular, the time available to develop force, storage and utilization of elastic energy, interactions of contractile and elastic elements, potentiation of contractile and elastic filaments as well as stretch reflexes. Furthermore, maximal power production is influenced by morphological factors including fibre type contribution to whole muscle area, muscle architectural features and tendon properties as well as neural factors including motor unit recruitment, firing frequency, synchronization and inter-muscular coordination. In addition, acute changes in the muscle environment (i.e. alterations resulting from fatigue, changes in hormone milieu and muscle temperature) impact the ability to generate maximal power. Resistance training has been shown to impact each of these neuromuscular factors in quite specific ways. Therefore, an understanding of the biological basis of maximal power production is essential for developing training programmes that effectively enhance maximal power production in the human.

  14. Maximizing biomarker discovery by minimizing gene signatures

    Directory of Open Access Journals (Sweden)

    Chang Chang

    2011-12-01

    Full Text Available Abstract Background The use of gene signatures can potentially be of considerable value in the field of clinical diagnosis. However, gene signatures defined with different methods can vary considerably even when applied to the same disease and the same endpoint. Previous studies have shown that the correct selection of subsets of genes from microarray data is key for the accurate classification of disease phenotypes, and a number of methods have been proposed for this purpose. However, these methods refine the subsets by considering each feature individually, and they do not confirm the association between the genes identified in each gene signature and the phenotype of the disease. We proposed an innovative new method termed Minimize Feature's Size (MFS), based on multiple-level similarity analyses and the association between the genes and the disease, for breast cancer endpoints, comparing classifier models generated in the second phase of the MicroArray Quality Control project (MAQC-II) and trying to develop effective meta-analysis strategies to transform the MAQC-II signatures into a robust and reliable set of biomarkers for clinical applications. Results We analyzed the similarity of the multiple gene signatures within an endpoint and between the two breast cancer endpoints at the probe and gene levels. The results indicate that disease-related genes are preferentially selected as components of gene signatures, and that the gene signatures for the two endpoints could be interchangeable. The minimized signatures were built at the probe level using MFS for each endpoint. By applying this approach, we generated a much smaller gene signature with predictive power similar to that of the gene signatures from MAQC-II. Conclusions Our results indicate that gene signatures of both large and small sizes could perform equally well in clinical applications. Besides, consistency and biological significance can be detected among different gene signatures, reflecting the

  15. Strategies for maximizing clinical effectiveness in the treatment of schizophrenia.

    Science.gov (United States)

    Tandon, Rajiv; Targum, Steven D; Nasrallah, Henry A; Ross, Ruth

    2006-11-01

    The ultimate clinical objective in the treatment of schizophrenia is to enable affected individuals to lead maximally productive and personally meaningful lives. As with other chronic diseases that lack a definitive cure, the individual's service/recovery plan must include treatment interventions directed towards decreasing manifestations of the illness, rehabilitative services directed towards enhancing adaptive skills, and social support mobilization aimed at optimizing function and quality of life. In this review, we provide a conceptual framework for considering approaches for maximizing the effectiveness of the array of treatments and other services towards promoting recovery of persons with schizophrenia. We discuss pharmacological, psychological, and social strategies that decrease the burden of the disease of schizophrenia on affected individuals and their families while adding the least possible burden of treatment. In view of the multitude of treatments necessary to optimize outcomes for individuals with schizophrenia, effective coordination of these services is essential. In addition to providing best possible clinical assessment and pharmacological treatment, the psychiatrist must function as an effective leader of the treatment team. To do so, however, the psychiatrist must be knowledgeable about the range of available services, must have skills in clinical-administrative leadership, and must accept the responsibility of coordinating the planning and delivery of this multidimensional array of treatments and services. Finally, the effectiveness of providing optimal individualized treatment/rehabilitation is best gauged by measuring progress on multiple effectiveness domains. Approaches for efficient and reliable assessment are discussed.

  16. Understanding Violations of Gricean Maxims in Preschoolers and Adults

    Directory of Open Access Journals (Sweden)

    Mako Okanda

    2015-07-01

    Full Text Available This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants’ understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds’ understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.

  17. Estimation of maximal oxygen uptake without exercise testing in Korean healthy adult workers.

    Science.gov (United States)

    Jang, Tae-Won; Park, Shin-Goo; Kim, Hyoung-Ryoul; Kim, Jung-Man; Hong, Young-Seoub; Kim, Byoung-Gwon

    2012-08-01

    Maximal oxygen uptake is generally accepted as the most valid and reliable index of cardiorespiratory fitness and functional aerobic capacity. The exercise test for measuring maximal oxygen uptake is unsuitable for screening tests in public health examinations because of the potential risks of exercise exertion and the time demands. We designed this study to determine whether work-related physical activity is a potential predictor of maximal oxygen uptake, and to develop a maximal oxygen uptake equation using a non-exercise regression model for the cardiorespiratory fitness test in Korean adult workers. The study subjects were adult workers of small-sized companies in Korea. Subjects with a history of disease such as hypertension, diabetes, asthma or angina were excluded. In total, 217 adult subjects (113 men aged 21-55 years and 104 women aged 20-64 years) were included. A self-report questionnaire survey was conducted, and the maximal oxygen uptake of each subject was measured with the exercise test. Statistical analysis was carried out to develop an equation for estimating maximal oxygen uptake. The predictors included age, gender, body mass index, smoking, leisure-time physical activity and factors representing work-related physical activity. Work-related physical activity was identified as a predictor of maximal oxygen uptake, and the equation showed high validity according to the statistical analysis. The equation for estimating maximal oxygen uptake developed in the present study could be used as a screening test for assessing cardiorespiratory fitness in Korean adult workers.
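    A non-exercise model of this kind is typically an ordinary least-squares regression on the questionnaire predictors. The sketch below fits such a model on synthetic data; the predictor set mirrors the abstract, but the coefficients, units, and activity score are illustrative inventions, not the equation developed in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Illustrative predictors (not the study's actual variables or units)
    age = rng.uniform(20, 60, n)               # years
    sex = rng.integers(0, 2, n).astype(float)  # 0 = female, 1 = male
    bmi = rng.uniform(18, 32, n)               # kg/m^2
    activity = rng.uniform(0, 10, n)           # work-related physical activity score

    # Synthetic "ground truth" VO2max plus noise (ml/kg/min), purely made up
    vo2max = (55.0 - 0.25 * age + 6.0 * sex - 0.4 * bmi
              + 0.8 * activity + rng.normal(0.0, 2.0, n))

    # Ordinary least squares fit of the non-exercise model
    X = np.column_stack([np.ones(n), age, sex, bmi, activity])
    beta, *_ = np.linalg.lstsq(X, vo2max, rcond=None)

    def predict_vo2max(age, sex, bmi, activity):
        """Estimate VO2max from questionnaire predictors (illustrative)."""
        return float(beta @ np.array([1.0, age, sex, bmi, activity]))
    ```

    In a real application the fitted equation would be validated against measured maximal oxygen uptake, as the study does with its exercise-test data.
    
    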

  18. Reliability analysis and operator modelling

    International Nuclear Information System (INIS)

    Hollnagel, Erik

    1996-01-01

    The paper considers the state of operator modelling in reliability analysis. Operator models are needed in reliability analysis because operators are needed in process control systems. HRA methods must therefore be able to account both for human performance variability and for the dynamics of the interaction. A selected set of first generation HRA approaches is briefly described in terms of the operator model they use, their classification principle, and the actual method they propose. In addition, two examples of second generation methods are also considered. It is concluded that first generation HRA methods generally have very simplistic operator models, referring either to the time-reliability relationship or to elementary information processing concepts. It is argued that second generation HRA methods must recognise that cognition is embedded in a context, and be able to account for that in the way human reliability is analysed and assessed.

  19. 78 FR 69018 - Improving the Resiliency of Mobile Wireless Communications Networks; Reliability and Continuity...

    Science.gov (United States)

    2013-11-18

    ... consumers value overall network reliability and quality in selecting mobile wireless service providers, they...-125] Improving the Resiliency of Mobile Wireless Communications Networks; Reliability and Continuity... (Reliability NOI) in 2011 to ``initiate a comprehensive examination of issues regarding the reliability...

  20. AMSAA Reliability Growth Guide

    National Research Council Canada - National Science Library

    Broemm, William

    2000-01-01

    ... has developed reliability growth methodology for all phases of the process, from planning to tracking to projection. The report presents this methodology and associated reliability growth concepts.

  1. Refined reservoir description to maximize oil recovery

    International Nuclear Information System (INIS)

    Flewitt, W.E.

    1975-01-01

    To assure maximized oil recovery from older pools, reservoir description has been advanced by fully integrating original open-hole logs and the recently introduced interpretive techniques made available through cased-hole wireline saturation logs. A refined reservoir description utilizing normalized original wireline porosity logs has been completed in the Judy Creek Beaverhill Lake ''A'' Pool, a reefal carbonate pool with current potential productivity of 100,000 BOPD and 188 active wells. Continuous porosity was documented within a reef rim and cap while discontinuous porous lenses characterized an interior lagoon. With the use of pulsed neutron logs and production data a separate water front and pressure response was recognized within discrete environmental units. The refined reservoir description aided in reservoir simulation model studies and quantifying pool performance. A pattern water flood has now replaced the original peripheral bottom water drive to maximize oil recovery.

  2. A reliability simulation language for reliability analysis

    International Nuclear Information System (INIS)

    Deans, N.D.; Miller, A.J.; Mann, D.P.

    1986-01-01

    The results of work being undertaken to develop a Reliability Description Language (RDL) which will enable reliability analysts to describe complex reliability problems in a simple, clear and unambiguous way are described. Component and system features can be stated in a formal manner and subsequently used, along with control statements to form a structured program. The program can be compiled and executed on a general-purpose computer or special-purpose simulator. (DG)

  3. A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliability of components

    Science.gov (United States)

    Gupta, R. K.; Bhunia, A. K.; Roy, D.

    2009-10-01

    In this paper, we have considered the problem of constrained redundancy allocation of series system with interval valued reliability of components. For maximizing the overall system reliability under limited resource constraints, the problem is formulated as an unconstrained integer programming problem with interval coefficients by penalty function technique and solved by an advanced GA for integer variables with interval fitness function, tournament selection, uniform crossover, uniform mutation and elitism. As a special case, considering the lower and upper bounds of the interval valued reliabilities of the components to be the same, the corresponding problem has been solved. The model has been illustrated with some numerical examples and the results of the series redundancy allocation problem with fixed value of reliability of the components have been compared with the existing results available in the literature. Finally, sensitivity analyses have been shown graphically to study the stability of our developed GA with respect to the different GA parameters.
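    The approach above can be sketched as a small genetic algorithm with tournament selection, uniform crossover, uniform mutation, elitism, and a penalty term for the resource constraint. For brevity this sketch uses point-valued rather than interval-valued component reliabilities, and all parameters (reliabilities, costs, budget, GA settings) are illustrative, not taken from the paper.

    ```python
    import random

    r = [0.80, 0.85, 0.90]   # component reliabilities (illustrative)
    c = [3.0, 4.0, 5.0]      # unit costs per redundant component
    budget = 40.0
    random.seed(1)

    def system_reliability(x):
        # Series system of parallel-redundant subsystems
        rel = 1.0
        for ri, xi in zip(r, x):
            rel *= 1.0 - (1.0 - ri) ** xi
        return rel

    def fitness(x):
        # Penalty function: infeasible allocations are penalized, not discarded
        cost = sum(ci * xi for ci, xi in zip(c, x))
        penalty = max(0.0, cost - budget)
        return system_reliability(x) - 0.1 * penalty

    def tournament(pop):
        a, b = random.sample(pop, 2)
        return a if fitness(a) > fitness(b) else b

    def crossover(p1, p2):
        # Uniform crossover on integer genes
        return [random.choice(g) for g in zip(p1, p2)]

    def mutate(x, pmut=0.2):
        return [random.randint(1, 5) if random.random() < pmut else g for g in x]

    pop = [[random.randint(1, 5) for _ in r] for _ in range(30)]
    best = max(pop, key=fitness)
    for _ in range(100):
        nxt = [best]  # elitism: carry the best allocation forward
        while len(nxt) < len(pop):
            nxt.append(mutate(crossover(tournament(pop), tournament(pop))))
        pop = nxt
        best = max(pop, key=fitness)
    ```

    The interval-valued version replaces `system_reliability` with interval arithmetic and compares candidate intervals by an order relation, but the GA skeleton is the same.
    
    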

  4. Control of Shareholders’ Wealth Maximization in Nigeria

    OpenAIRE

    A. O. Oladipupo; C. O. Okafor

    2014-01-01

    This research focuses on who controls shareholders' wealth maximization and how this affects firm performance in publicly quoted non-financial companies in Nigeria. The shareholder fund was the dependent variable, while the explanatory variables were firm size (proxied by log of turnover), retained earnings (representing management control) and dividend payment (representing a measure of shareholders' control). The data used for this study were obtained from the Nigerian Stock Exchange [NSE] fact book an...

  5. Definable maximal discrete sets in forcing extensions

    DEFF Research Database (Denmark)

    Törnquist, Asger Dag; Schrittesser, David

    2018-01-01

    Let  be a Σ11 binary relation, and recall that a set A is -discrete if no two elements of A are related by . We show that in the Sacks and Miller forcing extensions of L there is a Δ12 maximal -discrete set. We use this to answer in the negative the main question posed in [5] by showing...

  6. Dynamic Convex Duality in Constrained Utility Maximization

    OpenAIRE

    Li, Yusong; Zheng, Harry

    2016-01-01

    In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...

  7. Measuring reliability under epistemic uncertainty: Review on non-probabilistic reliability metrics

    Directory of Open Access Journals (Sweden)

    Kang Rui

    2016-06-01

    Full Text Available In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate for the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom; otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.
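    As a minimal illustration of one reviewed family, an interval-analysis-based metric propagates component reliability intervals through the system structure function; for a series system the bounds are simply the products of the interval endpoints. The component intervals below are illustrative.

    ```python
    def series_interval_reliability(intervals):
        """Interval-analysis bound for a series system.

        Each component reliability is known only as an interval [a, b];
        the system reliability then lies in [prod(a_i), prod(b_i)].
        """
        lo, hi = 1.0, 1.0
        for a, b in intervals:
            assert 0.0 <= a <= b <= 1.0
            lo *= a
            hi *= b
        return lo, hi

    # Two components with illustrative interval reliabilities
    bounds = series_interval_reliability([(0.90, 0.95), (0.85, 0.92)])
    ```

    The width of the resulting interval makes the epistemic uncertainty explicit, which is exactly the conservatism the review argues a qualified metric must compensate for.
    
    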

  8. Single maximal versus combination punch kinematics.

    Science.gov (United States)

    Piorkowski, Barry A; Lees, Adrian; Barton, Gabor J

    2011-03-01

    The aim of this study was to determine the influence of punch type (Jab, Cross, Lead Hook and Reverse Hook) and punch modality (Single maximal, 'In-synch' and 'Out of synch' combination) on punch speed and delivery time. Ten competition-standard volunteers performed punches with markers placed on their anatomical landmarks for 3D motion capture with an eight-camera optoelectronic system. Speed and duration between key moments were computed. There were significant differences in contact speed between punch types (F(2.18, 84.87) = 105.76, p = 0.001) with Lead and Reverse Hooks developing greater speed than Jab and Cross. There were significant differences in contact speed between punch modalities (F(2.64, 102.87) = 23.52, p = 0.001) with the Single maximal (M ± SD: 9.26 ± 2.09 m/s) higher than 'Out of synch' (7.49 ± 2.32 m/s), 'In-synch' left (8.01 ± 2.35 m/s) or right lead (7.97 ± 2.53 m/s). Delivery times were significantly lower for Jab and Cross than Hook. Times were significantly lower 'In-synch' than in a Single maximal or 'Out of synch' combination mode. It is concluded that a defender may have more evasion-time than previously reported. This research could be of use to performers and coaches when considering training preparations.

  9. Formation Control for the MAXIM Mission

    Science.gov (United States)

    Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.

    2004-01-01

    Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the ground work for linear formation control designs.

  10. Gradient Dynamics and Entropy Production Maximization

    Science.gov (United States)

    Janečka, Adam; Pavelka, Michal

    2018-01-01

    We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs: how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of dissipation potential and entropy; it automatically satisfies Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has a statistical interpretation. Entropy production maximization is based on knowledge of free energy (or another thermodynamic potential) and entropy production. It also leads to the linear Onsager reciprocal relations and it has proven successful in thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium, and we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. Besides, a commonly used but not often mentioned step in the entropy production maximization is pinpointed and the condition of incompressibility is incorporated into gradient dynamics.

  11. A Topology Control Strategy with Reliability Assurance for Satellite Cluster Networks in Earth Observation

    Directory of Open Access Journals (Sweden)

    Qing Chen

    2017-02-01

    Full Text Available This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The properties of the periodicity and predictability of satellites' relative position are incorporated into the link cost metric, which provides a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through numeric simulations of topology stability, average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime.

  12. Making literature reviews more reliable through application of lessons from systematic reviews.

    Science.gov (United States)

    Haddaway, N R; Woodcock, P; Macura, B; Collins, A

    2015-12-01

    Review articles can provide valuable summaries of the ever-increasing volume of primary research in conservation biology. Where findings may influence important resource-allocation decisions in policy or practice, there is a need for a high degree of reliability when reviewing evidence. However, traditional literature reviews are susceptible to a number of biases during the identification, selection, and synthesis of included studies (e.g., publication bias, selection bias, and vote counting). Systematic reviews, pioneered in medicine and translated into conservation in 2006, address these issues through a strict methodology that aims to maximize transparency, objectivity, and repeatability. Systematic reviews will always be the gold standard for reliable synthesis of evidence. However, traditional literature reviews remain popular and will continue to be valuable where systematic reviews are not feasible. Where traditional reviews are used, lessons can be taken from systematic reviews and applied to traditional reviews in order to increase their reliability. Certain key aspects of systematic review methods that can be used in a context-specific manner in traditional reviews include focusing on mitigating bias; increasing transparency, consistency, and objectivity; and critically appraising the evidence and avoiding vote counting. In situations where conducting a full systematic review is not feasible, the proposed approach to reviewing evidence in a more systematic way can substantially improve the reliability of review findings, providing a time- and resource-efficient means of maximizing the value of traditional reviews. These methods are aimed particularly at those conducting literature reviews where systematic review is not feasible, for example, for graduate students, single reviewers, or small organizations. © 2015 Society for Conservation Biology.

  13. Problematics of Reliability of Road Rollers

    Science.gov (United States)

    Stawowiak, Michał; Kuczaj, Mariusz

    2018-06-01

    This article refers to the reliability of road rollers used in a selected roadworks company. Information on the method of road rollers service and how the service affects the reliability of these rollers is presented. Attention was paid to the process of the implemented maintenance plan with regard to the machine's operational time. The reliability of road rollers was analyzed by determining and interpreting readiness coefficients.

  14. Dopaminergic balance between reward maximization and policy complexity

    Directory of Open Access Journals (Sweden)

    Naama Parush

    2011-05-01

    Full Text Available Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main-axis (actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement learning signal and as a pseudo-temperature signal controlling the general level of basal ganglia excitability and motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamic-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, the probabilities of actions are selected according to their estimated values and the pseudo-temperature, but in addition they also vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience and dopamine modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.
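    The experience-modulated policy described above builds on the standard softmax (Boltzmann) rule, in which a pseudo-temperature controls the trade-off between committing to high-value actions and keeping the policy flat. The sketch below implements only that baseline rule, without the experience-dependent term, as an illustration.

    ```python
    import math

    def softmax_policy(values, temperature):
        """Softmax (Boltzmann) action-selection probabilities.

        Lower temperature -> greedier, reward-maximizing policy;
        higher temperature -> flatter, lower-complexity policy.
        """
        scaled = [v / temperature for v in values]
        m = max(scaled)                        # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        z = sum(exps)
        return [e / z for e in exps]

    # Same action values, two pseudo-temperatures
    probs_cold = softmax_policy([1.0, 2.0, 3.0], temperature=0.1)   # near-greedy
    probs_hot = softmax_policy([1.0, 2.0, 3.0], temperature=10.0)   # near-uniform
    ```

    In the model's terms, a dopamine-driven drop in the pseudo-temperature shifts the policy from the flat, low-cost regime toward the committed, gain-maximizing regime.
    
    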

  15. Comparison of three protocols for measuring the maximal respiratory pressures

    Directory of Open Access Journals (Sweden)

    Isabela Maria B. Sclauser Pessoa

    Full Text Available Introduction To avoid the selection of submaximal efforts during the assessment of maximal inspiratory and expiratory pressures (MIP and MEP), some reproducibility criteria have been suggested. The criteria that stand out are those proposed by the American Thoracic Society (ATS) and European Respiratory Society (ERS) and by the Brazilian Thoracic Association (BTA). However, no studies were found that compared these criteria or assessed the combination of both protocols. Objectives To assess the pressure values selected and the number of maneuvers required to achieve maximum performance using the reproducibility criteria proposed by the ATS/ERS, the BTA and the present study. Materials and method 113 healthy subjects (43.04 ± 16.94 years) from both genders were assessed according to the criteria proposed by the ATS/ERS, the BTA and the present study. Descriptive statistics were used for analysis, followed by ANOVA for repeated measures and post hoc LSD or by the Friedman test and post hoc Wilcoxon, according to the data distribution. Results The criterion proposed by the present study resulted in a significantly higher number of maneuvers (MIP and MEP, median and 25%-75% interquartile range: 5[5-6], 4[3-5] and 3[3-4] for the present study criterion, BTA and ATS/ERS, respectively; p < 0.01) and higher pressure values (MIP, mean and 95% confidence interval: 103[91.43-103.72], 100[97.19-108.83] and 97.6[94.06-105.95]; MEP, median and 25%-75% interquartile range: 124.2[101.4-165.9], 123.3[95.4-153.8] and 118.4[95.5-152.7]; p < 0.05). Conclusion The proposed criterion resulted in the selection of pressure values closer to the individual's maximal capacity. This new criterion should be considered in future studies concerning MIP and MEP measurements.

  16. Postactivation potentiation biases maximal isometric strength assessment.

    Science.gov (United States)

    Lima, Leonardo Coelho Rabello; Oliveira, Felipe Bruno Dias; Oliveira, Thiago Pires; Assumpção, Claudio de Oliveira; Greco, Camila Coelho; Cardozo, Adalgiso Croscato; Denadai, Benedito Sérgio

    2014-01-01

    Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine if PAP would influence isometric strength assessment. Healthy male volunteers (n = 23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during PTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s⁻¹ versus 727 ± 158 N·m·s⁻¹), and RMS (59.1 ± 12.2% RMSmax versus 54.8 ± 9.4% RMSmax) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables.

  17. Gain maximization in a probabilistic entanglement protocol

    Science.gov (United States)

    di Lorenzo, Antonio; Esteves de Queiroz, Johnny Hebert

    Entanglement is a resource. We can therefore define gain as a monotonic function of entanglement, G(E). If a pair with entanglement E is produced with probability P, the net gain is N = P·G(E) - (1 - P)·C, where C is the cost of a failed attempt. We study a protocol where a pair of quantum systems is produced in a maximally entangled state ρ_m with probability P_m, while it is produced in a partially entangled state ρ_p with the complementary probability 1 - P_m. We mix a fraction w of the partially entangled pairs with the maximally entangled ones, i.e. we take the state to be ρ = (ρ_m + w U_loc ρ_p U_loc†) / (1 + w), where U_loc is an appropriate unitary local operation designed to maximize the entanglement of ρ. On one hand this procedure reduces the entanglement E, and hence the gain, but on the other hand it increases the probability of success to P = P_m + w(1 - P_m), so the net gain N may increase. There may hence be, a priori, an optimal value for w, the fraction of failed attempts that we mix in. We show that, under the hypothesis of a linear gain G(E) = E, and even assuming a vanishing cost C → 0, the net gain N is increasing with w; therefore the best strategy is to always mix in the partially entangled states. Work supported by CNPq, Conselho Nacional de Desenvolvimento Científico e Tecnológico, proc. 311288/2014-6, and by FAPEMIG, Fundação de Amparo à Pesquisa de Minas Gerais, proc. IC-FAPEMIG2016-0269 and PPM-00607-16.
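    The trade-off can be illustrated numerically. The sketch below evaluates N = P·G(E) - (1 - P)·C with linear gain and a made-up monotone model for the entanglement of the mixed state (the true E(w) depends on the states and on U_loc); under these toy assumptions N increases with the mixing fraction w, matching the stated conclusion.

    ```python
    def net_gain(w, Pm=0.3, Em=1.0, Ep=0.4, C=0.0):
        """Net gain N = P*G(E) - (1 - P)*C under toy assumptions.

        Pm, Em, Ep are illustrative numbers; E(w) below is a made-up
        monotone interpolation, not the paper's actual expression for
        the entanglement of the mixed state.
        """
        P = Pm + w * (1.0 - Pm)           # success probability grows with w
        E = (Em + w * Ep) / (1.0 + w)     # hypothetical entanglement of the mix
        G = E                             # linear gain G(E) = E
        return P * G - (1.0 - P) * C

    # Sweep the mixing fraction w from 0 (discard failures) to 1 (mix all)
    gains = [net_gain(w / 10.0) for w in range(11)]
    ```

    With C = 0 the only question is whether the probability boost outweighs the entanglement dilution; in this toy model it always does.
    
    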

  18. Reliability data banks

    International Nuclear Information System (INIS)

    Cannon, A.G.; Bendell, A.

    1991-01-01

    Following an introductory chapter on reliability (what it is, why it is needed, how it is achieved and measured), the principles of reliability data bases and analysis methodologies are the subject of the next two chapters. Achievements due to the development of data banks are mentioned for different industries in the next chapter. FACTS, a comprehensive information system for industrial safety and reliability data collection in process plants, is covered next. CREDO, the Central Reliability Data Organization, is described in the next chapter and is indexed separately, as is the chapter on DANTE, the fabrication reliability data analysis system. Reliability data banks at Electricite de France and the IAEA's experience in compiling a generic component reliability data base are also separately indexed. The European reliability data system, ERDS, and the development of a large data bank come next. The last three chapters look at 'Reliability data banks - friend, foe or a waste of time?' and future developments. (UK)

  19. Suncor maintenance and reliability

    Energy Technology Data Exchange (ETDEWEB)

    Little, S. [Suncor Energy, Calgary, AB (Canada)

    2006-07-01

    Fleet maintenance and reliability at Suncor Energy was discussed in this presentation, with reference to Suncor Energy's primary and support equipment fleets. This paper also discussed Suncor Energy's maintenance and reliability standard involving people, processes and technology. An organizational maturity chart that graphed organizational learning against organizational performance was illustrated. The presentation also reviewed the maintenance and reliability framework; maintenance reliability model; the process overview of the maintenance and reliability standard; a process flow chart of maintenance strategies and programs; and an asset reliability improvement process flow chart. An example of an improvement initiative was included, with reference to a shovel reliability review; a dipper trip reliability investigation; bucket related failures by type and frequency; root cause analysis of the reliability process; and additional actions taken. Last, the presentation provided a graph of the results of the improvement initiative and presented the key lessons learned. tabs., figs.

  20. Maximizing percentage depletion in solid minerals

    International Nuclear Information System (INIS)

    Tripp, J.; Grove, H.D.; McGrath, M.

    1982-01-01

    This article develops a strategy for maximizing percentage depletion deductions when extracting uranium or other solid minerals. The goal is to avoid losing percentage depletion deductions by staying below the 50% limitation on taxable income from the property. The article is divided into two major sections. The first section is comprised of depletion calculations that illustrate the problem and corresponding solutions. The last section deals with the feasibility of applying the strategy and complying with the Internal Revenue Code and appropriate regulations. Three separate strategies or appropriate situations are developed and illustrated. 13 references, 3 figures, 7 tables
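    The 50% limitation can be shown with a toy computation: the percentage-depletion deduction is the statutory rate times gross income from the property, capped at 50% of taxable income from the property. The rate and dollar figures below are illustrative only, not tax guidance.

    ```python
    def percentage_depletion(gross_income, taxable_income, rate=0.22):
        """Percentage depletion with the 50%-of-taxable-income cap.

        The 22% rate and the income figures used below are illustrative;
        actual rates and computations are set by the Internal Revenue Code.
        """
        tentative = rate * gross_income
        cap = 0.50 * taxable_income
        return min(tentative, cap)

    # When taxable income is low, the cap binds and part of the tentative
    # deduction is lost -- the situation the article's strategy avoids
    deduction = percentage_depletion(1_000_000, 300_000)
    lost = 0.22 * 1_000_000 - deduction
    ```

    Staying above the point where the cap binds (by managing taxable income from the property) preserves the full rate-based deduction, which is the essence of the strategy.
    
    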

  1. What currency do bumble bees maximize?

    Directory of Open Access Journals (Sweden)

    Nicholas L Charlton

    2010-08-01

    Full Text Available In modelling bumble bee foraging, net rate of energetic intake has been suggested as the appropriate currency. The foraging behaviour of honey bees is better predicted by using efficiency, the ratio of energetic gain to expenditure, as the currency. We re-analyse several studies of bumble bee foraging and show that efficiency is as good a currency as net rate in terms of predicting behaviour. We suggest that future studies of the foraging of bumble bees should be designed to distinguish between net rate and efficiency maximizing behaviour in an attempt to discover which is the more appropriate currency.
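    The two currencies can rank foraging options differently, which is exactly what a distinguishing experiment would exploit. In the toy numbers below (invented for illustration), net rate (gain - cost)/time prefers one patch while efficiency gain/cost prefers the other.

    ```python
    def net_rate(gain, cost, time):
        """Net rate of energetic intake, (gain - cost) / time."""
        return (gain - cost) / time

    def efficiency(gain, cost):
        """Energetic efficiency, the ratio of gain to expenditure."""
        return gain / cost

    # Two hypothetical patches (energy in J, time in s; numbers invented)
    # Patch A: large gain but energetically expensive
    rate_a, eff_a = net_rate(12.0, 6.0, 2.0), efficiency(12.0, 6.0)
    # Patch B: smaller gain but very cheap to exploit
    rate_b, eff_b = net_rate(5.0, 1.0, 2.0), efficiency(5.0, 1.0)

    # A net-rate maximizer prefers patch A; an efficiency maximizer prefers B
    ```

    Offering foragers choices constructed like A versus B is one way a study could reveal which currency the bees actually maximize.
    
    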

  2. New Maximal Two-distance Sets

    DEFF Research Database (Denmark)

    Lisonek, Petr

    1996-01-01

    A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...
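    The defining property is easy to check computationally, which is the verification core of such a classification algorithm. The sketch below counts distinct pairwise distances and confirms that the vertices of a regular pentagon form a two-distance set in the plane.

    ```python
    import math

    def distinct_distances(points, tol=1e-9):
        """Return the distinct pairwise distances in a point set."""
        found = []
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                d = math.dist(points[i], points[j])
                if not any(abs(d - e) < tol for e in found):
                    found.append(d)
        return found

    # Vertices of a regular pentagon: side length and diagonal length
    # are the only two distances that occur
    pentagon = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
                for k in range(5)]
    ```

    The classification problem itself is much harder than this membership check: it asks for the largest such sets in each dimension, which requires the distance-geometry machinery the abstract describes.
    
    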

  3. Maximizing policy learning in international committees

    DEFF Research Database (Denmark)

    Nedergaard, Peter

    2007-01-01

    ..., this article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...

  4. Pouliot type duality via a-maximization

    International Nuclear Information System (INIS)

    Kawano, Teruhiko; Ookouchi, Yutaka; Tachikawa, Yuji; Yagi, Futoshi

    2006-01-01

    We study four-dimensional N=1 Spin(10) gauge theory with a single spinor and N_Q vectors at the superconformal fixed point via the electric-magnetic duality and a-maximization. When gauge invariant chiral primary operators hit the unitarity bounds, we find that the theory with no superpotential is identical to the one with some superpotential at the infrared fixed point. The auxiliary field method in the electric theory offers a satisfying description of the infrared fixed point, which is consistent with the better picture in the magnetic theory. In particular, it gives a clear description of the emergence of new massless degrees of freedom in the electric theory.

  5. Developing maximal neuromuscular power: part 2 - training considerations for improving maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-02-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. 
The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and

  6. The Accelerator Reliability Forum

    CERN Document Server

    Lüdeke, Andreas; Giachino, R

    2014-01-01

    High reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with high reliability. In order to optimize the overall reliability of an accelerator, one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution will describe the forum and advertise its use in the community.

  7. Selective maintenance of multi-state systems with structural dependence

    International Nuclear Information System (INIS)

    Dao, Cuong D.; Zuo, Ming J.

    2017-01-01

    This paper studies the selective maintenance problem for multi-state systems with structural dependence. Each component can be in one of multiple working levels and several maintenance actions are possible to a component in a maintenance break. The components structurally form multiple hierarchical levels and dependence groups. A directed graph is used to represent the precedence relations of components in the system. A selective maintenance optimization model is developed to maximize the system reliability in the next mission under time and cost constraints. A backward search algorithm is used to determine the assembly sequence for a selective maintenance scenario. The maintenance model helps maintenance managers in determining the best combination of maintenance activities to maximize the probability of successfully completing the next mission. Examples showing the use of the proposed method are presented. - Highlights: • A selective maintenance model for multi-state systems is proposed considering both economic and structural dependence. • Structural dependence is modeled as precedence relationship when disassembling components for maintenance. • Resources for disassembly and maintenance are evaluated using a backward search algorithm. • Maintenance strategies with and without structural dependence are analyzed. • Ignoring structural dependence may lead to over-estimation of system reliability.
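    The abstract does not give the backward search algorithm itself; the following sketch illustrates one plausible reading, in which precedence edges record which components must be removed before another can be reached, and a backward traversal collects everything that must be disassembled to maintain the selected components. All component names are hypothetical.

```python
def components_to_disassemble(precedence, targets):
    """Backward-search sketch: precedence is a list of edges (a, b) meaning
    component a must be removed before component b can be reached. Returns
    every component that must be disassembled to maintain the targets."""
    # Map each component to the components that must be removed first.
    before = {}
    for a, b in precedence:
        before.setdefault(b, set()).add(a)
    needed, stack = set(), list(targets)
    while stack:
        c = stack.pop()
        for p in before.get(c, ()):
            if p not in needed:
                needed.add(p)
                stack.append(p)
    return needed

# Hypothetical structure: casing C1 covers pump P, which covers bearing B.
edges = [("C1", "P"), ("P", "B")]
```

Reaching the bearing B requires removing both the pump and the casing; maintaining the outermost casing requires no prior disassembly.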

  8. Component reliability for electronic systems

    CERN Document Server

    Bajenescu, Titu-Marius I

    2010-01-01

    The main reason for the premature breakdown of today's electronic products (computers, cars, tools, appliances, etc.) is the failure of the components used to build these products. Today professionals are looking for effective ways to minimize the degradation of electronic components to help ensure longer-lasting, more technically sound products and systems. This practical book offers engineers specific guidance on how to design more reliable components and build more reliable electronic systems. Professionals learn how to optimize a virtual component prototype, accurately monitor product reliability during the entire production process, and add the burn-in and selection procedures that are the most appropriate for the intended applications. Moreover, the book helps system designers ensure that all components are correctly applied, margins are adequate, wear-out failure modes are prevented during the expected duration of life, and system interfaces cannot lead to failure.

  9. Maximizing Lifetime of Wireless Sensor Networks with Mobile Sink Nodes

    Directory of Open Access Journals (Sweden)

    Yourong Chen

    2014-01-01

    In order to maximize network lifetime and balance energy consumption when sink nodes can move, maximizing the lifetime of wireless sensor networks with mobile sink nodes (MLMS) is researched. A movement-path selection method for sink nodes is proposed. A modified subtractive clustering method, the k-means method, and nearest-neighbor interpolation are used to obtain the movement paths. The lifetime optimization model is established under flow, energy consumption, link transmission, and other constraints. The model is solved from the perspective of static and mobile data gathering of sink nodes. The subgradient method is used to solve the lifetime optimization model when one sink node stays at one anchor location. A geometric method is used to evaluate the amount of gathered data when sink nodes are moving. Finally, all sensor nodes transmit data according to the optimal data transmission scheme. Sink nodes gather the data along the shortest movement paths. Simulation results show that MLMS can prolong network lifetime, balance node energy consumption, and reduce data-gathering latency under appropriate parameters. Under certain conditions, it outperforms Ratio_w, TPGF, RCC, and GRND.

  10. Maximization techniques for oilfield development profits

    International Nuclear Information System (INIS)

    Lerche, I.

    1999-01-01

    In 1981 Nind provided a quantitative procedure for estimating the optimum number of development wells to emplace on an oilfield to maximize profit. Nind's treatment assumed that there was a steady selling price, that all wells were placed in production simultaneously, and that each well's production profile was identical and a simple exponential decline with time. This paper lifts these restrictions to allow for price fluctuations, time-varying emplacement of wells, and production rates that are more in line with actual production records than is a simple exponential decline curve. As a consequence, it is possible to design production-rate strategies, correlated with price fluctuations, so as to maximize the present-day worth of a field. For price fluctuations that occur on a time-scale rapid compared to inflation rates, it is appropriate to have production rates correlate directly with such price fluctuations. The same strategy does not apply for price fluctuations occurring on a time-scale long compared to inflation rates where, for small amplitudes in the price fluctuations, it is best to sell as much product as early as possible to overcome inflation factors, while for large-amplitude fluctuations the best strategy is to sell product as early as possible but to do so mainly on price upswings. Examples are provided to show how these generalizations of Nind's (1981) formula change the complexion of oilfield development optimization. (author)
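    The quantity being maximized is the discounted worth of a production schedule under fluctuating prices. A minimal sketch (illustrative numbers, not from the paper) shows why shifting sales onto price upswings raises the present-day worth of the same total volume:

```python
def present_worth(production, prices, discount_rate):
    """Discounted revenue of a production schedule: sum of q_t * p_t / (1+r)^t."""
    return sum(q * p / (1.0 + discount_rate) ** t
               for t, (q, p) in enumerate(zip(production, prices)))

prices = [20.0, 30.0, 20.0, 30.0]      # illustrative price swings per period
flat   = [100.0, 100.0, 100.0, 100.0]  # constant offtake
timed  = [50.0, 150.0, 50.0, 150.0]    # same total volume, sold on upswings
```

With these numbers the upswing-timed schedule has a strictly higher present worth than the flat schedule at a 10% discount rate.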

  11. A standard for test reliability in group research.

    Science.gov (United States)

    Ellis, Jules L

    2013-03-01

    Many authors adhere to the rule that test reliabilities should be at least .70 or .80 in group research. This article introduces a new standard according to which reliabilities can be evaluated. This standard is based on the costs or time of the experiment and of administering the test. For example, if test administration costs are 7% of the total experimental costs, the efficient value of the reliability is .93. If the actual reliability of a test is equal to this efficient reliability, the test size maximizes the statistical power of the experiment, given the costs. As a standard in experimental research, it is proposed that the reliability of the dependent variable be close to the efficient reliability. Adhering to this standard will enhance the statistical power and reduce the costs of experiments.
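    A minimal sketch of the standard as described by the example above, assuming the efficient reliability is simply the complement of the test-cost fraction (0.07 -> 0.93); the article's full derivation may be more involved:

```python
def efficient_reliability(test_cost_fraction):
    """Efficient reliability implied by the abstract's example: the value
    that maximizes statistical power per unit cost when test administration
    accounts for the given fraction of total experimental costs.
    NOTE: the simple complement relation is an assumption read off the
    7% -> .93 example, not the article's derivation."""
    return 1.0 - test_cost_fraction
```

Under this reading, a test consuming 20% of the budget would have an efficient reliability of .80.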

  12. Human Reliability Program Overview

    Energy Technology Data Exchange (ETDEWEB)

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  13. Power electronics reliability analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
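    For the simplest fault-tree structures, deriving system reliability from component reliability reduces to series/parallel combinations. A minimal sketch (illustrative values, not from the report):

```python
from math import prod

def series_reliability(rs):
    """All components must work: R = product of component reliabilities."""
    return prod(rs)

def parallel_reliability(rs):
    """A redundant group fails only if every member fails."""
    return 1.0 - prod(1.0 - r for r in rs)

# Hypothetical device: two redundant 0.9 converters feeding a 0.95 controller.
system = series_reliability([parallel_reliability([0.9, 0.9]), 0.95])
```

Redundancy lifts the converter stage from 0.9 to 0.99, so the overall system reliability is 0.99 x 0.95 = 0.9405.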

  14. Reliability of software

    International Nuclear Information System (INIS)

    Kopetz, H.

    1980-01-01

    Common factors and differences in the reliability of hardware and software; reliability increase by means of methods of software redundancy. Maintenance of software for long term operating behavior. (HP) [de

  15. Reliable Design Versus Trust

    Science.gov (United States)

    Berg, Melanie; LaBel, Kenneth A.

    2016-01-01

    This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests the FPGA's internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?

  16. Pocket Handbook on Reliability

    Science.gov (United States)

    1975-09-01

    exponential distributions, Weibull distribution, estimating reliability, confidence intervals, reliability growth, OC curves, Bayesian analysis. ... An introduction for those not familiar with reliability and a good refresher for those who are currently working in the area. ... includes one or both of the following objectives: a) prediction of the current system reliability, b) projection on the system reliability for some future

  17. Applications of maximally concentrating optics for solar energy collection

    Science.gov (United States)

    O'Gallagher, J.; Winston, R.

    1985-11-01

    A new family of optical concentrators based on a general nonimaging design principle for maximizing the geometric concentration, C, for radiation within a given acceptance half angle ±θα has been developed. The maximum limit exceeds by factors of 2 to 10 that attainable by systems using focusing optics. The wide acceptance angles permitted using these techniques have several unique advantages for solar concentrators including the elimination of the diurnal tracking requirement at intermediate concentrations (up to ~10×), collection of circumsolar and some diffuse radiation, and relaxed tolerances. Because of these advantages, these types of concentrators have applications in solar energy wherever concentration is desired, e.g. for a wide variety of both thermal and photovoltaic uses. The basic principles of nonimaging optical design are reviewed. Selected configurations for thermal collector applications are discussed and the use of nonimaging elements as secondary concentrators is illustrated in the context of higher concentration applications.

  18. Principles of Bridge Reliability

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, Andrzej S.

    The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated..., and as an example the reliability profile and a sensitivity analysis for a corroded reinforced concrete bridge are shown...
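    The two fundamental concepts mentioned, failure probability P_f and reliability index β, are linked by β = -Φ⁻¹(P_f) under the usual Gaussian limit-state assumption. A minimal sketch:

```python
from statistics import NormalDist

def failure_probability(beta):
    """Failure probability for reliability index beta: P_f = Phi(-beta),
    assuming a Gaussian safety margin (the standard FORM convention)."""
    return NormalDist().cdf(-beta)

def reliability_index(p_f):
    """Inverse mapping: beta = -Phi^{-1}(P_f)."""
    return -NormalDist().inv_cdf(p_f)
```

A reliability index of about 3.1, for instance, corresponds to a failure probability near 10⁻³, which is why small probabilities are usually reported on the β scale.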

  19. Reliability in engineering '87

    International Nuclear Information System (INIS)

    Tuma, M.

    1987-01-01

    The participants heard 51 papers dealing with the reliability of engineering products. Two of the papers were incorporated in INIS, namely ''Reliability comparison of two designs of low pressure regeneration of the 1000 MW unit at the Temelin nuclear power plant'' and ''Use of probability analysis of reliability in designing nuclear power facilities.''(J.B.)

  20. Developing Reliable Life Support for Mars

    Science.gov (United States)

    Jones, Harry W.

    2017-01-01

    A human mission to Mars will require highly reliable life support systems. Mars life support systems may recycle water and oxygen using systems similar to those on the International Space Station (ISS). However, achieving sufficient reliability is less difficult for ISS than it will be for Mars. If an ISS system has a serious failure, it is possible to provide spare parts, or directly supply water or oxygen, or if necessary bring the crew back to Earth. Life support for Mars must be designed, tested, and improved as needed to achieve high demonstrated reliability. A quantitative reliability goal should be established and used to guide development. The designers should select reliable components and minimize interface and integration problems. In theory a system can achieve the component-limited reliability, but testing often reveals unexpected failures due to design mistakes or flawed components. Testing should extend long enough to detect any unexpected failure modes and to verify the expected reliability. Iterated redesign and retest may be required to achieve the reliability goal. If the reliability is less than required, it may be improved by providing spare components or redundant systems. The number of spares required to achieve a given reliability goal depends on the component failure rate. If the failure rate is underestimated, the number of spares will be insufficient and the system may fail. If the design is likely to have undiscovered design or component problems, it is advisable to use dissimilar redundancy, even though this multiplies the design and development cost. In the ideal case, a human-tended closed system operational test should be conducted to gain confidence in operations, maintenance, and repair. The difficulty in achieving high reliability in unproven complex systems may require the use of simpler, more mature, intrinsically higher reliability systems. The limitations of budget, schedule, and technology may suggest accepting lower and
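    The dependence of spare count on failure rate can be sketched with a standard Poisson spares model (an assumption here, not the paper's specific method): with independent exponential failures, the number of failures during a mission is Poisson with mean λt, and the required spare count is the smallest n with P(failures ≤ n) ≥ goal.

```python
import math

def spares_needed(failure_rate, mission_time, goal):
    """Smallest spare count n such that Poisson(rate * time) demand is met
    with probability >= goal, assuming independent exponential failures."""
    mean = failure_rate * mission_time
    term = math.exp(-mean)  # P(0 failures)
    cdf = term
    n = 0
    while cdf < goal:
        n += 1
        term *= mean / n    # next Poisson pmf term
        cdf += term
    return n
```

Doubling the assumed failure rate visibly raises the spare count, which illustrates the abstract's warning: an underestimated failure rate leaves the mission short of spares.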

  1. Reliability and Availability of Cloud Computing

    CERN Document Server

    Bauer, Eric

    2012-01-01

    A holistic approach to service reliability and availability of cloud computing Reliability and Availability of Cloud Computing provides IS/IT system and solution architects, developers, and engineers with the knowledge needed to assess the impact of virtualization and cloud computing on service reliability and availability. It reveals how to select the most appropriate design for reliability diligence to assure that user expectations are met. Organized in three parts (basics, risk analysis, and recommendations), this resource is accessible to readers of diverse backgrounds and experience le

  2. Shareholder, stakeholder-owner or broad stakeholder maximization

    DEFF Research Database (Denmark)

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating... including the shareholders of a company. Although it may be the ultimate goal for Corporate Social Responsibility to achieve this kind of maximization, broad stakeholder maximization is quite difficult to give a precise definition. There is no one-dimensional measure to add different stakeholder benefits... not traded on the market, and therefore there is no possibility for practical application. Broad stakeholder maximization instead in practical applications becomes satisfying certain stakeholder demands, so that the practical application will be stakeholder-owner maximization under constraints defined...

  3. Maximizing Lumen Gain With Directional Atherectomy.

    Science.gov (United States)

    Stanley, Gregory A; Winscott, John G

    2016-08-01

    To describe the use of a low-pressure balloon inflation (LPBI) technique to delineate intraluminal plaque and guide directional atherectomy in order to maximize lumen gain and achieve procedure success. The technique is illustrated in a 77-year-old man with claudication who underwent superficial femoral artery revascularization using a HawkOne directional atherectomy catheter. A standard angioplasty balloon was inflated to 1 to 2 atm during live fluoroscopy to create a 3-dimensional "lumenogram" of the target lesion. Directional atherectomy was performed only where plaque impinged on the balloon at a specific fluoroscopic orientation. The results of the LPBI technique were corroborated with multimodality diagnostic imaging, including digital subtraction angiography, intravascular ultrasound, and intra-arterial pressure measurements. With the LPBI technique, directional atherectomy can routinely achieve <10% residual stenosis, as illustrated in this case, thereby broadly supporting a no-stent approach to lower extremity endovascular revascularization. © The Author(s) 2016.

  4. Primordial two-component maximally symmetric inflation

    Science.gov (United States)

    Enqvist, K.; Nanopoulos, D. V.; Quirós, M.; Kounnas, C.

    1985-12-01

    We propose a two-component inflation model, based on maximally symmetric supergravity, where the scales of reheating and the inflation potential at the origin are decoupled. This is possible because of the second-order phase transition from SU(5) to SU(3)×SU(2)×U(1) that takes place when φ≅φc... inflation at the global minimum, and leads to a reheating temperature TR≅(10^15-10^16) GeV. This makes it possible to generate baryon asymmetry in the conventional way without any conflict with experimental data on proton lifetime. The mass of the gravitinos is m3/2≅10^12 GeV, thus avoiding the gravitino problem. Monopoles are diluted by residual inflation in the broken phase below the cosmological bounds if φc...

  5. Distributed-Memory Fast Maximal Independent Set

    Energy Technology Data Exchange (ETDEWEB)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    2017-09-13

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All those algorithms are designed focusing on shared-memory machines and are analyzed using the PRAM model. These algorithms do not have direct efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
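    A shared-memory sketch of the Luby(A)-style round structure (illustrative only, not the paper's distributed-memory implementation): each round, every still-active vertex draws a random priority, local maxima join the independent set, and winners plus their neighbors are deactivated.

```python
import random

def luby_mis(adj, seed=0):
    """Luby(A)-style randomized maximal independent set on an undirected
    graph given as {vertex: set_of_neighbors}. Sequential sketch of the
    parallel round structure."""
    rng = random.Random(seed)
    active = set(adj)
    mis = set()
    while active:
        # Each active vertex draws a random priority.
        draw = {v: rng.random() for v in active}
        # Local maxima among active neighbors join the MIS.
        winners = {v for v in active
                   if all(draw[v] > draw[u] for u in adj[v] if u in active)}
        mis |= winners
        # Winners and their neighbors leave the active set.
        active -= winners | {u for v in winners for u in adj[v]}
    return mis

# A 4-vertex path graph: 0 - 1 - 2 - 3.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
result = luby_mis(path)
```

Every round removes at least the globally highest-priority active vertex, so the loop terminates; the output is independent (no two members adjacent) and maximal (every non-member has a member neighbor).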

  6. Quench dynamics of topological maximally entangled states.

    Science.gov (United States)

    Chung, Ming-Chiang; Jhu, Yi-Hao; Chen, Pochung; Mou, Chung-Yu

    2013-07-17

    We investigate the quench dynamics of the one-particle entanglement spectra (OPES) for systems with topologically nontrivial phases. By using dimerized chains as an example, it is demonstrated that the evolution of OPES for the quenched bipartite systems is governed by an effective Hamiltonian which is characterized by a pseudospin in a time-dependent pseudomagnetic field S(k,t). The existence and evolution of the topological maximally entangled states (tMESs) are determined by the winding number of S(k,t) in the k-space. In particular, the tMESs survive only if nontrivial Berry phases are induced by the winding of S(k,t). In the infinite-time limit the equilibrium OPES can be determined by an effective time-independent pseudomagnetic field Seff(k). Furthermore, when tMESs are unstable, they are destroyed by quasiparticles within a characteristic timescale in proportion to the system size.
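    The winding number that governs the tMES criterion can be computed numerically for any sampled closed curve S(k) by accumulating the signed angle between successive vectors. A generic sketch, not specific to the dimerized-chain model:

```python
import math

def winding_number(curve):
    """Winding number of a sampled closed planar curve about the origin,
    from the accumulated signed angle between successive position vectors.
    curve: list of (x, y) samples, assumed not to pass through the origin."""
    total = 0.0
    for i in range(len(curve)):
        x1, y1 = curve[i]
        x2, y2 = curve[(i + 1) % len(curve)]
        # Signed angle between consecutive vectors via atan2(cross, dot).
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return round(total / (2.0 * math.pi))

# A pseudospin field winding once around the origin as k traverses the zone.
circle = [(math.cos(t), math.sin(t))
          for t in [2 * math.pi * i / 8 for i in range(8)]]
```

Shifting the same curve so it no longer encloses the origin drops the winding number to zero, the topologically trivial case.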

  7. Maximizing policy learning in international committees

    DEFF Research Database (Denmark)

    Nedergaard, Peter

    2007-01-01

    In the voluminous literature on the European Union's open method of coordination (OMC), no one has hitherto analysed on the basis of scholarly examination the question of what contributes to the learning processes in the OMC committees. On the basis of a questionnaire sent to all participants, this article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...

  8. Lovelock black holes with maximally symmetric horizons

    Energy Technology Data Exchange (ETDEWEB)

    Maeda, Hideki; Willison, Steven; Ray, Sourya, E-mail: hideki@cecs.cl, E-mail: willison@cecs.cl, E-mail: ray@cecs.cl [Centro de Estudios CientIficos (CECs), Casilla 1469, Valdivia (Chile)

    2011-08-21

    We investigate some properties of n (≥ 4)-dimensional spacetimes having symmetries corresponding to the isometries of an (n - 2)-dimensional maximally symmetric space in Lovelock gravity under the null or dominant energy condition. The well-posedness of the generalized Misner-Sharp quasi-local mass proposed in the past study is shown. Using this quasi-local mass, we clarify the basic properties of the dynamical black holes defined by a future outer trapping horizon under certain assumptions on the Lovelock coupling constants. The C^2 vacuum solutions are classified into four types: (i) Schwarzschild-Tangherlini-type solution; (ii) Nariai-type solution; (iii) special degenerate vacuum solution; and (iv) exceptional vacuum solution. The conditions for the realization of the last two solutions are clarified. The Schwarzschild-Tangherlini-type solution is studied in detail. We prove the first law of black-hole thermodynamics and present the expressions for the heat capacity and the free energy.

  9. Maximal energy extraction under discrete diffusive exchange

    Energy Technology Data Exchange (ETDEWEB)

    Hay, M. J., E-mail: hay@princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Schiff, J. [Department of Mathematics, Bar-Ilan University, Ramat Gan 52900 (Israel); Fisch, N. J. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)

    2015-10-15

    Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.

  10. Maximizing profitability in a hospital outpatient pharmacy.

    Science.gov (United States)

    Jorgenson, J A; Kilarski, J W; Malatestinic, W N; Rudy, T A

    1989-07-01

    This paper describes the strategies employed to increase the profitability of an existing ambulatory pharmacy operated by the hospital. Methods to generate new revenue including implementation of a home parenteral therapy program, a home enteral therapy program, a durable medical equipment service, and home care disposable sales are described. Programs to maximize existing revenue sources such as increasing the capture rate on discharge prescriptions, increasing "walk-in" prescription traffic and increasing HMO prescription volumes are discussed. A method utilized to reduce drug expenditures is also presented. By minimizing expenses and increasing the revenues for the ambulatory pharmacy operation, net profit increased from $26,000 to over $140,000 in one year.

  11. Maximizing the benefits of a dewatering system

    International Nuclear Information System (INIS)

    Matthews, P.; Iverson, T.S.

    1999-01-01

    The use of dewatering systems in the mining, industrial sludge and sewage waste treatment industries is discussed, along with some of the problems that have been encountered while using drilling fluid dewatering technology. The technology is an acceptable drilling waste handling alternative but it has had problems associated with recycled fluid incompatibility, high chemical costs and system inefficiencies. This paper discusses the following five action areas that can maximize the benefits and help reduce costs of a dewatering project: (1) co-ordinate all services, (2) choose equipment that fits the drilling program, (3) match the chemical treatment with the drilling fluid types, (4) determine recycled fluid compatibility requirements, and (5) determine the disposal requirements before project start-up. 2 refs., 5 figs

  12. Mixtures of maximally entangled pure states

    Energy Technology Data Exchange (ETDEWEB)

    Flores, M.M., E-mail: mflores@nip.up.edu.ph; Galapon, E.A., E-mail: eric.galapon@gmail.com

    2016-09-15

    We study the conditions when mixtures of maximally entangled pure states remain entangled. We found that the resulting mixed state remains entangled when the number of entangled pure states to be mixed is less than or equal to the dimension of the pure states. For the latter case of mixing a number of pure states equal to their dimension, we found that the mixed state is entangled provided that the entangled pure states to be mixed are not equally weighted. We also found that one can restrict the set of pure states that one can mix from in order to ensure that the resulting mixed state is genuinely entangled. Also, we demonstrate how these results could be applied as a way to detect entanglement in mixtures of the entangled pure states with noise.

  13. Reliability and Validity of a Submaximal Warm-up Test for Monitoring Training Status in Professional Soccer Players.

    Science.gov (United States)

    Rabbani, Alireza; Kargarfard, Mehdi; Twist, Craig

    2018-02-01

    Rabbani, A, Kargarfard, M, and Twist, C. Reliability and validity of a submaximal warm-up test for monitoring training status in professional soccer players. J Strength Cond Res 32(2): 326-333, 2018-Two studies were conducted to assess the reliability and validity of a submaximal warm-up test (SWT) in professional soccer players. For the reliability study, 12 male players performed an SWT over 3 trials, with 1 week between trials. For the validity study, 14 players of the same team performed an SWT and a 30-15 intermittent fitness test (30-15IFT) 7 days apart. Week-to-week reliability in selected heart rate (HR) responses (exercise heart rate [HRex], heart rate recovery [HRR] expressed as the number of beats recovered within 1 minute [HRR60s], and HRR expressed as the mean HR during 1 minute [HRpost1]) was determined using the intraclass correlation coefficient (ICC) and typical error of measurement expressed as coefficient of variation (CV). The relationships between HR measures derived from the SWT and the maximal speed reached at the 30-15IFT (VIFT) were used to assess validity. The range for ICC and CV values was 0.83-0.95 and 1.4-7.0% in all HR measures, respectively, with the HRex as the most reliable HR measure of the SWT. Inverse large (r = -0.50 and 90% confidence limits [CLs] [-0.78 to -0.06]) and very large (r = -0.76 and CL, -0.90 to -0.45) relationships were observed between HRex and HRpost1 with VIFT in relative (expressed as the % of maximal HR) measures, respectively. The SWT is a reliable and valid submaximal test to monitor high-intensity intermittent running fitness in professional soccer players. In addition, the test's short duration (5 minutes) and simplicity mean that it can be used regularly to assess training status in high-level soccer players.

  14. Reliable computer systems.

    Science.gov (United States)

    Wear, L L; Pinkert, J R

    1993-11-01

    In this article, we looked at some decisions that apply to the design of reliable computer systems. We began with a discussion of several terms such as testability, then described some systems that call for highly reliable hardware and software. The article concluded with a discussion of methods that can be used to achieve higher reliability in computer systems. Reliability and fault tolerance in computers probably will continue to grow in importance. As more and more systems are computerized, people will want assurances about the reliability of these systems, and their ability to work properly even when sub-systems fail.

  15. Human factor reliability program

    International Nuclear Information System (INIS)

    Knoblochova, L.

    2017-01-01

    The human factor reliability program was introduced at the Slovenske elektrarne, a.s. (SE) nuclear power plants in 2011 as one of the components of the Excellent Performance initiative. The initiative's goal was to increase the reliability of both people and facilities, in response to three major areas for improvement: the need to improve results, troubleshooting support, and supporting the achievement of the company's goals. In practice, the human factor reliability program includes: tools to prevent human error; managerial observation and coaching; human factor analysis; rapid information about events involving a human factor; human reliability timelines and performance indicators; and basic, periodic and extraordinary training in human factor reliability. (authors)

  16. A Criterion to Identify Maximally Entangled Four-Qubit State

    International Nuclear Information System (INIS)

    Zha Xinwei; Song Haiyang; Feng Feng

    2011-01-01

    Paolo Facchi, et al. [Phys. Rev. A 77 (2008) 060304(R)] presented a maximally multipartite entangled state (MMES). Here, we give a criterion for the identification of maximally entangled four-qubit states. Using this criterion, we not only identify some existing maximally entangled four-qubit states in the literature, but also find several new ones. (general)

  17. Maximal lattice free bodies, test sets and the Frobenius problem

    DEFF Research Database (Denmark)

    Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt

    Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral m...... method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune....

  18. The risk function approach to profit maximizing estimation in direct mailing

    NARCIS (Netherlands)

    Muus, Lars; Scheer, Hiek van der; Wansbeek, Tom

    1999-01-01

    When the parameters of the model describing consumers' reaction to a mailing are known, addresses for a future mailing can be selected in a profit-maximizing way. Usually, these parameters are unknown and have to be estimated. Standard estimation methods are based on a quadratic loss function. In the present

  19. THE DEVELOPMENT OF AN INSTRUMENT FOR MEASURING THE UNDERSTANDING OF PROFIT-MAXIMIZING PRINCIPLES.

    Science.gov (United States)

    MCCORMICK, FLOYD G.

    The purpose of the study was to develop an instrument for measuring profit-maximizing principles in farm management, with implications for vocational agriculture. Principles were identified from literature selected by agricultural economists. Forty-five multiple-choice questions were refined on the basis of the results of three pretests and…

  20. Efficient and reliable characterization of the corticospinal system using transcranial magnetic stimulation.

    Science.gov (United States)

    Kukke, Sahana N; Paine, Rainer W; Chao, Chi-Chao; de Campos, Ana C; Hallett, Mark

    2014-06-01

    The purpose of this study is to develop a method to reliably characterize multiple features of the corticospinal system in a more efficient manner than typically done in transcranial magnetic stimulation studies. Forty transcranial magnetic stimulation pulses of varying intensity were given over the first dorsal interosseous motor hot spot in 10 healthy adults. The first dorsal interosseous motor-evoked potential size was recorded during rest and activation to create recruitment curves. The Boltzmann sigmoidal function was fit to the data, and parameters relating to maximal motor-evoked potential size, curve slope, and stimulus intensity leading to half-maximal motor-evoked potential size were computed from the curve fit. Good to excellent test-retest reliability was found for all corticospinal parameters at rest and during activation with 40 transcranial magnetic stimulation pulses. Through the use of curve fitting, important features of the corticospinal system can be determined with fewer stimuli than typically used for the same information. Determining the recruitment curve provides a basis to understand the state of the corticospinal system and select subject-specific parameters for transcranial magnetic stimulation testing quickly and without unnecessary exposure to magnetic stimulation. This method can be useful in individuals who have difficulty in maintaining stillness, including children and patients with motor disorders.
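    The curve-fitting step described above can be sketched as follows. This is a hedged illustration assuming a standard three-parameter Boltzmann sigmoid and synthetic recruitment-curve data; it is not the authors' exact fitting procedure.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def boltzmann(intensity, mep_max, i50, slope):
        """Boltzmann sigmoid: MEP size as a function of stimulus intensity.
        mep_max = plateau MEP size, i50 = intensity giving half-maximal MEP, slope = steepness."""
        return mep_max / (1.0 + np.exp((i50 - intensity) / slope))

    # Hypothetical recruitment-curve data: 40 pulses of varying intensity (% stimulator output)
    rng = np.random.default_rng(1)
    intensity = rng.uniform(30, 90, 40)
    mep = boltzmann(intensity, 2.5, 55.0, 5.0) + rng.normal(0, 0.1, intensity.size)  # mV

    # Fit the sigmoid and read off the three corticospinal parameters
    params, _ = curve_fit(boltzmann, intensity, mep, p0=[mep.max(), 55.0, 5.0])
    mep_max, i50, slope = params
    print(f"MEPmax={mep_max:.2f} mV, I50={i50:.1f}%, slope={slope:.2f}")
    ```

    Because the three parameters summarize the whole curve, far fewer pulses are needed than when each intensity level is averaged separately, which is the efficiency gain the abstract describes.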

  1. Innovations in Agriculture in Oregon: Farmers Irrigation District Improves Water Quality, Maximizes Water Conservation, and Generates Clean, Renewable Energy

    Science.gov (United States)

    The Hood River Farmers Irrigation District used $36.2 million in CWSRF loans for a multiple-year endeavor to convert the open canal system to a piped, pressurized irrigation system to maximize water conservation and restore reliable water delivery to crops.

  2. Durability and Reliability of Large Diameter HDPE Pipe for Water Main Applications (Web Report 4485)

    Science.gov (United States)

    Research validates HDPE as a suitable material for use in municipal piping systems, and more research may help users maximize their understanding of its durability and reliability. Overall, corrosion resistance, hydraulic efficiency, flexibility, abrasion resistance, toughness, f...

  3. The behavioral economics of consumer brand choice: patterns of reinforcement and utility maximization.

    Science.gov (United States)

    Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C

    2004-06-30

    Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy- and premium-priced brands. Based on the analysis of the buying patterns of 80 consumers across 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize.

  4. Maritime energy and security: Synergistic maximization or necessary tradeoffs?

    International Nuclear Information System (INIS)

    Nyman, Elizabeth

    2017-01-01

    Offshore energy is big business. The traditional source of maritime energy, offshore petroleum and gas, has been on the rise since a reliable method of extraction was discovered in the mid-20th century. Lately, it has been joined by offshore wind and tidal power as alternative “green” sources of maritime energy. Yet all of this has implications for maritime environmental regimes as well, as maritime energy extraction/generation can have a negative effect on the ocean environment. This paper considers two major questions surrounding maritime energy and environmental concerns. First, how and why do these two concerns, maritime energy and environmental protection, play against each other? Second, how can states both secure their energy and environmental securities in the maritime domain? Maximizing maritime energy output necessitates some environmental costs and vice versa, but these costs vary with the type of offshore energy technology used and with the extent to which states are willing to expend effort to protect both environmental and energy security. - Highlights: • Security is a complicated concept with several facets including energy and environmental issues. • Offshore energy contributes to energy supply but can have environmental and monitoring costs. • Understanding the contribution of offshore energy to security depends on which security facet is deemed most important.

  5. Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters

    Directory of Open Access Journals (Sweden)

    Suleiman Zubair

    2014-05-01

    The need for implementing reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by low overhead and efficient, reliable data transfer in traditional wireless sensor networks, this potential has yet to be utilized for viable routing in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by creating virtual clusters based on spectrum correlation, from which the next hop is chosen based on link quality. The design maximizes the use of idle listening and receiver-contention prioritization for energy efficiency, avoidance of routing hot spots, and stability. The validation results, which closely follow the simulation results, show that the developed scheme advances packets farther toward the sink than the route-selection decisions of a comparable ad hoc on-demand distance vector protocol, while ensuring channel quality. Further simulation results show the enhanced reliability, lower latency, and energy efficiency of the presented scheme.

  6. Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters

    Science.gov (United States)

    Zubair, Suleiman; Fisal, Norsheila

    2014-01-01

    The need for implementing reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by low overhead and efficient, reliable data transfer in traditional wireless sensor networks, this potential has yet to be utilized for viable routing in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by creating virtual clusters based on spectrum correlation, from which the next hop is chosen based on link quality. The design maximizes the use of idle listening and receiver-contention prioritization for energy efficiency, avoidance of routing hot spots, and stability. The validation results, which closely follow the simulation results, show that the developed scheme advances packets farther toward the sink than the route-selection decisions of a comparable ad hoc on-demand distance vector protocol, while ensuring channel quality. Further simulation results show the enhanced reliability, lower latency, and energy efficiency of the presented scheme. PMID:24854362

  7. On maximal surfaces in asymptotically flat space-times

    International Nuclear Information System (INIS)

    Bartnik, R.; Chrusciel, P.T.; O Murchadha, N.

    1990-01-01

    Existence of maximal and 'almost maximal' hypersurfaces in asymptotically flat space-times is established under boundary conditions weaker than those considered previously. We show in particular that every vacuum evolution of asymptotically flat data for the Einstein equations can be foliated by slices that are maximal outside a spatially compact set, and that every (strictly) stationary asymptotically flat space-time can be foliated by maximal hypersurfaces. Amongst other uniqueness results, we show that maximal hypersurfaces can be used to 'partially fix' an asymptotic Poincare group. (orig.)

  8. Reliability and safety engineering

    CERN Document Server

    Verma, Ajit Kumar; Karanki, Durga Rao

    2016-01-01

    Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz., electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...

  9. Human reliability analysis

    International Nuclear Information System (INIS)

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis, incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory, drawing upon reliability analysis, psychology, human factors engineering, and statistics, and integrating elements of these fields within a systems framework. The book provides a history of human reliability analysis and includes examples of the application of the systems approach.

  10. Reliability of electronic systems

    International Nuclear Information System (INIS)

    Roca, Jose L.

    2001-01-01

    Reliability techniques were developed in response to the needs of the diverse engineering disciplines, although some practitioners maintain that much reliability work was done before the word was used in its current sense. The military, space and nuclear industries were the first to become involved in the topic, but this quiet revolution in improving the reliability figures of products has not remained confined to those environments; it has spread across industry as a whole. Mass production, characteristic of modern industry, led four decades ago to a fall in the reliability of products, partly because of the scale of production itself and partly because of newly introduced and not yet stabilized industrial techniques. Industry had to change to meet these two new requirements, creating products of medium complexity while assuring a reliability appropriate to production costs and controls. Reliability became an integral part of the manufactured product. From this standpoint, the book describes reliability techniques applied to electronic systems and provides a coherent and rigorous framework for these diverse activities, giving a unifying scientific basis for the entire subject. It consists of eight chapters plus numerous statistical tables and an extensive annotated bibliography. The chapters cover the following topics: 1- Introduction to Reliability; 2- Basic Mathematical Concepts; 3- Catastrophic Failure Models; 4- Parametric Failure Models; 5- Systems Reliability; 6- Reliability in Design and Project; 7- Reliability Tests; 8- Software Reliability. The book is in Spanish and has a potentially diverse audience, from academic to industrial courses. (author)

  11. A Note of Caution on Maximizing Entropy

    Directory of Open Access Journals (Sweden)

    Richard E. Neapolitan

    2014-07-01

    The Principle of Maximum Entropy is often used to update probabilities due to evidence instead of performing Bayesian updating using Bayes’ Theorem, and its use often has efficacious results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases, and discusses how to identify some of the situations in which this principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy sometimes can stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian approach that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that is demonstrated in the examples.
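    As a concrete illustration of the principle discussed above (the classic Brandeis dice problem, a standard textbook example rather than one from the paper): given only that a die's mean roll is 4.5, the maximum-entropy distribution over faces 1-6 is exponential-family, p_i ∝ exp(λi), and the multiplier λ can be found by bisection.

    ```python
    import numpy as np

    def maxent_die(target_mean, tol=1e-10):
        """Max-entropy distribution on faces 1..6 subject to a fixed mean.
        The solution has the form p_i ∝ exp(lam * i); solve for lam by bisection,
        using the fact that the constrained mean increases monotonically with lam."""
        faces = np.arange(1, 7)

        def mean_for(lam):
            w = np.exp(lam * faces)
            return (faces * w).sum() / w.sum()

        lo, hi = -10.0, 10.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if mean_for(mid) < target_mean:
                lo = mid
            else:
                hi = mid
        w = np.exp(lo * faces)
        return w / w.sum()

    p = maxent_die(4.5)
    print(np.round(p, 4), (np.arange(1, 7) * p).sum())
    ```

    The resulting probabilities increase geometrically toward the high faces; a Bayesian with a strong prior belief in, say, a loaded-toward-6 die would update quite differently, which is the kind of tension the paper examines.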

  12. Optimal topologies for maximizing network transmission capacity

    Science.gov (United States)

    Chen, Zhenhao; Wu, Jiajing; Rong, Zhihai; Tse, Chi K.

    2018-04-01

    It has been widely demonstrated that the structure of a network is a major factor that affects its traffic dynamics. In this work, we try to identify the optimal topologies for maximizing the network transmission capacity, as well as to build a clear relationship between structural features of a network and the transmission performance in terms of traffic delivery. We propose an approach for designing optimal network topologies against traffic congestion by link rewiring and apply them on the Barabási-Albert scale-free, static scale-free and Internet Autonomous System-level networks. Furthermore, we analyze the optimized networks using complex network parameters that characterize the structure of networks, and our simulation results suggest that an optimal network for traffic transmission is more likely to have a core-periphery structure. However, assortative mixing and the rich-club phenomenon may have negative impacts on network performance. Based on the observations of the optimized networks, we propose an efficient method to improve the transmission capacity of large-scale networks.

  13. New features of the maximal abelian projection

    International Nuclear Information System (INIS)

    Bornyakov, V.G.; Polikarpov, M.I.; Syritsyn, S.N.; Schierholz, G.; Suzuki, T.

    2005-12-01

    After fixing the Maximal Abelian gauge in SU(2) lattice gauge theory we decompose the nonabelian gauge field into the so called monopole field and the modified nonabelian field with monopoles removed. We then calculate the respective static potentials and find that the potential due to the modified nonabelian field is nonconfining while, as is well known, the monopole field potential is linear. Furthermore, we show that the sum of these potentials approximates the nonabelian static potential with 5% or higher precision at all distances considered. We conclude that at large distances the monopole field potential describes the classical energy of the hadronic string while the modified nonabelian field potential describes the string fluctuations. A similar decomposition was observed to work for the adjoint static potential. A check was also made of the center projection in the direct center gauge. Two static potentials, determined by the projected Z2 field and by the modified nonabelian field without the Z2 component, were calculated. It was found that their sum is a substantially worse approximation of the SU(2) static potential than that found in the monopole case. It is further demonstrated that a similar decomposition can be made for the flux tube action/energy density. (orig.)

  14. Effects of ethnicity on the relationship between vertical jump and maximal power on a cycle ergometer

    Directory of Open Access Journals (Sweden)

    Rouis Majdi

    2016-06-01

    The aim of this study was to verify the impact of ethnicity on the maximal power-vertical jump relationship. Thirty-one healthy males, sixteen Caucasian (age: 26.3 ± 3.5 years; body height: 179.1 ± 5.5 cm; body mass: 78.1 ± 9.8 kg) and fifteen Afro-Caribbean (age: 24.4 ± 2.6 years; body height: 178.9 ± 5.5 cm; body mass: 77.1 ± 10.3 kg), completed three sessions during which vertical jump height and maximal power of the lower limbs were measured. The results showed that the values of vertical jump height and maximal power were higher for the Afro-Caribbean participants (62.92 ± 6.7 cm and 14.70 ± 1.75 W∙kg-1) than for the Caucasian ones (52.92 ± 4.4 cm and 12.75 ± 1.36 W∙kg-1). Moreover, very high reliability indices were obtained for vertical jump (0.95 < ICC < 0.98) and maximal power performance (0.75 < ICC < 0.97). However, multiple linear regression analysis showed that, for a given value of maximal power, the Afro-Caribbean participants jumped 8 cm higher than the Caucasians. Together, these results confirm that ethnicity impacted the maximal power-vertical jump relationship over three sessions. In the current context of cultural diversity, the use of vertical jump performance as a predictor of muscular power should be considered with caution when dealing with populations of different ethnic origins.
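    The multiple-regression finding above (a group offset of roughly 8 cm at a given power) can be sketched on synthetic data with a dummy-coded group variable. All numbers below are illustrative assumptions, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 31
    power = rng.normal(13.7, 1.8, n)             # maximal power, W/kg (hypothetical)
    group = (np.arange(n) < 15).astype(float)    # dummy: 1 = Afro-Caribbean, 0 = Caucasian
    # Simulated jump heights: common power slope, 8 cm group offset, noise
    jump = 3.0 * power + 8.0 * group + 15.0 + rng.normal(0, 2.0, n)  # cm

    # Ordinary least squares via the design matrix [power, group, intercept]
    X = np.column_stack([power, group, np.ones(n)])
    coef, *_ = np.linalg.lstsq(X, jump, rcond=None)
    print(coef)  # slope per W/kg, group offset (built in as ~8 cm), intercept
    ```

    The group coefficient recovers the built-in offset: at equal power, one group jumps about 8 cm higher, which is exactly the pattern the regression in the study isolates.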

  15. Operational safety reliability research

    International Nuclear Information System (INIS)

    Hall, R.E.; Boccio, J.L.

    1986-01-01

    Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program applicable to maintaining an acceptable level of safety during the operating lifetime at the plant

  16. Circuit design for reliability

    CERN Document Server

    Cao, Yu; Wirth, Gilson

    2015-01-01

    This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units.  The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.

  17. Technology success: Integration of power plant reliability and effective maintenance

    International Nuclear Information System (INIS)

    Ferguson, K.

    2008-01-01

    The nuclear power generation sector has a tradition of utilizing technology as a key attribute for advancement. Companies that own, manage, and operate nuclear power plants can be expected to continue to rely on technology as a vital element of success. Inherent in the operations of the nuclear power industry in many parts of the world is the close connection between efficiency of power plant operations and business survival. The relationship among power plant availability, reliability of systems and components, and viability of the enterprise is more evident than ever. Technology decisions need to reflect business strategies and work processes, as well as the needs of stakeholders and authorities. Such rigor is needed to address overarching concerns such as power plant life extension and license renewal, new plant orders, outage management, plant safety, and inventory management. With respect to power plant reliability in particular, the prudent leveraging of technology is vital to future success. A dominant concern is effective asset management as physical plant assets age. Many plants are in, or are entering, a situation in which system and component design life and margins are converging such that failure threats can come into play with increasing frequency. Wisely selected technologies can be vital to identifying emerging threats to the reliable performance of key plant features and to initiating effective maintenance actions and investments that can sustain or enhance current reliability in a cost-effective manner. This attention to detail is vital to investment in new plants as well. This paper and presentation will address (1) specific technology successes in place at power plants, including nuclear, that integrate attention to attaining high plant reliability with effective maintenance actions, as well as (2) complementary actions that maximize technology success. In addition, the range of benefits that accrue as a result of

  18. Phenolic compounds from Glycyrrhiza pallidiflora Maxim. and their cytotoxic activity.

    Science.gov (United States)

    Shults, Elvira E; Shakirov, Makhmut M; Pokrovsky, Mikhail A; Petrova, Tatijana N; Pokrovsky, Andrey G; Gorovoy, Petr G

    2017-02-01

    Twenty-one phenolic compounds (1-21), including a dihydrocinnamic acid, isoflavonoids, flavonoids, coumestans, pterocarpans, chalcones, an isoflavan and an isoflaven, were isolated from the roots of Glycyrrhiza pallidiflora Maxim. Phloretinic acid (1), chrysin (6), 9-methoxycoumestan (8), isoglycyrol (9), 6″-O-acetylanonin (19) and 6″-O-acetylwistin (21) were isolated from G. pallidiflora for the first time. The isoflavonoid acetylglycosides 19 and 21 might be artefacts produced during the EtOAc fractionation of the whole extract. Compounds 2-4, 10, 11, 19 and 21 were evaluated for their cytotoxic activity against model cancer cell lines (CEM-13, MT-4, U-937) using conventional MTT assays. The isoflavonoid calycosin (4) showed the best potency against human T-cell leukaemia MT-4 cells (CTD50 = 2.9 μM). The pterocarpans medicarpin (10) and homopterocarpin (11) exhibited anticancer activity in the micromolar range, with selectivity toward the human monocyte cell line U-937. The isoflavan (3R)-vestitol (16) was highly selective toward the lymphoblastoid leukaemia cell line CEM-13 and was more active than the drug doxorubicin.

  19. Network reliability assessment using a cellular automata approach

    International Nuclear Information System (INIS)

    Rocco S, Claudio M.; Moreno, Jose Ali

    2002-01-01

    Two cellular automata (CA) models that evaluate the s-t connectedness and the shortest path in a network are presented. CA-based algorithms enhance the performance of classical algorithms, since they allow a more reliable and straightforward parallel implementation, resulting in a dynamic network evaluation in which changes in connectivity and/or link costs can readily be incorporated, avoiding recalculation from scratch. The paper also demonstrates how these algorithms can be applied to network reliability evaluation (based on a Monte Carlo approach) and to finding the s-t path with maximal reliability.
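    The Monte Carlo side of such an evaluation can be sketched without the CA machinery: sample link failures and check s-t connectedness on each sample (here with a plain BFS rather than a cellular automaton). The network and parameters below are illustrative.

    ```python
    import random
    from collections import defaultdict, deque

    def st_connected(edges, s, t):
        """BFS check that t is reachable from s over the surviving edges."""
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return False

    def mc_reliability(edges, p, s, t, trials=20000, seed=0):
        """Monte Carlo estimate of s-t reliability when each link
        survives independently with probability p."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            alive = [e for e in edges if rng.random() < p]
            hits += st_connected(alive, s, t)
        return hits / trials

    # A 4-node "bridge" network: two 2-link s-t paths plus a crossover link (1, 2)
    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
    print(mc_reliability(edges, p=0.9, s=0, t=3))
    ```

    For this bridge network the exact reliability is available by conditioning on the crossover link, so the estimator can be checked against a closed-form value; the CA formulation in the paper replaces the BFS with a parallel, locally updated connectivity computation.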

  20. Hawaii Electric System Reliability

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2012-08-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  1. Hawaii electric system reliability.

    Energy Technology Data Exchange (ETDEWEB)

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  2. Improving machinery reliability

    CERN Document Server

    Bloch, Heinz P

    1998-01-01

    This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.

  3. LED system reliability

    NARCIS (Netherlands)

    Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.

    2011-01-01

    This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific

  4. Integrated system reliability analysis

    DEFF Research Database (Denmark)

    Gintautas, Tomas; Sørensen, John Dalsgaard

    Specific targets: 1) The report shall describe the state of the art of reliability and risk-based assessment of wind turbine components. 2) Development of methodology for reliability and risk-based assessment of the wind turbine at system level. 3) Describe quantitative and qualitative measures...

  5. Reliability of neural encoding

    DEFF Research Database (Denmark)

    Alstrøm, Preben; Beierholm, Ulrik; Nielsen, Carsten Dahl

    2002-01-01

    The reliability with which a neuron is able to create the same firing pattern when presented with the same stimulus is of critical importance to the understanding of neuronal information processing. We show that reliability is closely related to the process of phaselocking. Experimental results f...

  6. Design reliability engineering

    International Nuclear Information System (INIS)

    Buden, D.; Hunt, R.N.M.

    1989-01-01

    Improved design techniques are needed to achieve high reliability at minimum cost. This is especially true of space systems where lifetimes of many years without maintenance are needed and severe mass limitations exist. Reliability must be designed into these systems from the start. Techniques are now being explored to structure a formal design process that will be more complete and less expensive. The intent is to integrate the best features of design, reliability analysis, and expert systems to design highly reliable systems to meet stressing needs. Taken into account are the large uncertainties that exist in materials, design models, and fabrication techniques. Expert systems are a convenient method to integrate into the design process a complete definition of all elements that should be considered and an opportunity to integrate the design process with reliability, safety, test engineering, maintenance and operator training. 1 fig

  7. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
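As one concrete flavor of the Bayesian methods surveyed (a minimal sketch of my own choosing, not an example taken from the proceedings), a conjugate Beta-Binomial update estimates a per-demand failure probability from test data:

```python
def beta_posterior(alpha, beta, failures, trials):
    """Conjugate update: Beta(alpha, beta) prior + Binomial test data
    gives a Beta(alpha + failures, beta + successes) posterior."""
    return alpha + failures, beta + (trials - failures)

# Vague Beta(1, 1) prior; observe 2 failures in 100 demands (hypothetical data).
a, b = beta_posterior(1.0, 1.0, failures=2, trials=100)
posterior_mean = a / (a + b)  # point estimate of the failure probability
```

The posterior also yields credible intervals directly, which is what makes the Bayesian route attractive under the "great prior uncertainty" mentioned above.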

  8. Reliability: How much is it worth? Beyond its estimation or prediction, the (net) present value of reliability

    International Nuclear Information System (INIS)

    Saleh, J.H.; Marais, K.

    2006-01-01

In this article, we link an engineering concept, reliability, to a financial and managerial concept, net present value, by exploring the impact of a system's reliability on its revenue-generation capability. The framework developed here for non-repairable systems quantitatively captures the value of reliability from a financial standpoint. We show that traditional present-value calculations of engineering systems do not account for system reliability and thus overestimate a system's worth, which can lead to flawed investment decisions. It is therefore important to involve reliability engineers upfront, before investment decisions are made in technical systems. In addition, the analyses developed here help designers identify the optimal level of reliability that maximizes a system's net present value: the financial value reliability provides to the system minus the cost to achieve this level of reliability. Although we recognize that there are numerous considerations driving the specification of an engineering system's reliability, we contend that the financial analysis of reliability developed here should be made available to decision-makers to support in part, or at least be factored into, the system reliability specification.
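The core idea, revenue weighted by the probability the system is still working, can be sketched as follows. This is a hedged illustration under my own assumptions (constant per-period revenue, exponential survival R(t) = exp(-λt), discrete discounting), not the authors' exact model; all numbers are hypothetical.

```python
import math

def expected_npv(revenue, discount_rate, failure_rate, years, build_cost):
    """E[NPV] of a non-repairable system: each year's revenue is weighted
    by the survival probability exp(-failure_rate * t), then discounted."""
    pv = sum(
        revenue * math.exp(-failure_rate * t) / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )
    return pv - build_cost

npv_reliable = expected_npv(100.0, 0.08, 0.02, 15, 500.0)  # low failure rate
npv_fragile  = expected_npv(100.0, 0.08, 0.20, 15, 500.0)  # high failure rate
```

Setting `failure_rate=0` recovers the traditional present-value calculation, which, as the abstract argues, overstates the worth of any system that can actually fail.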

  9. Demonstration of reliability centered maintenance

    International Nuclear Information System (INIS)

    Schwan, C.A.; Morgan, T.A.

    1991-04-01

Reliability centered maintenance (RCM) is an approach to preventive maintenance planning and evaluation that has been used successfully by other industries, most notably the airlines and military. Now EPRI is demonstrating RCM in the commercial nuclear power industry. Just completed are large-scale, two-year demonstrations at Rochester Gas & Electric (Ginna Nuclear Power Station) and Southern California Edison (San Onofre Nuclear Generating Station). Both demonstrations were begun in the spring of 1988. At each plant, RCM was performed on 12 to 21 major systems. Both demonstrations determined that RCM is an appropriate means to optimize a PM program and improve nuclear plant preventive maintenance on a large scale. Such favorable results had been suggested by three earlier EPRI pilot studies at Florida Power & Light, Duke Power, and Southern California Edison. EPRI selected the Ginna and San Onofre sites because, together, they represent a broad range of utility and plant size, plant organization, plant age, and histories of availability and reliability. Significant steps in each demonstration included: selecting and prioritizing plant systems for RCM evaluation; performing the RCM evaluation steps on selected systems; evaluating the RCM recommendations by a multi-disciplinary task force; implementing the RCM recommendations; establishing a system to track and verify the RCM benefits; and establishing procedures to update the RCM bases and recommendations with time (a living program). 7 refs., 1 tab

  10. POLITENESS MAXIM OF MAIN CHARACTER IN SECRET FORGIVEN

    Directory of Open Access Journals (Sweden)

    Sang Ayu Isnu Maharani

    2017-06-01

Full Text Available Maxims of politeness are an interesting subject to discuss, since politeness has been instilled in us since childhood. We are obliged to be polite to anyone, whether in speaking or in acting. Somehow we manage to show politeness in our spoken expressions even though our intention might not be so polite; for example, we must appreciate another person's opinion even when we object to it. In this article, the analysis of politeness is based on the maxims proposed by Leech, who distinguished six types of politeness maxim. The discussion shows that the main characters (Kristen and Kami) use all types of maxim in their conversation. The most commonly used are the approbation maxim and the agreement maxim

  11. Introduction to quality and reliability engineering

    CERN Document Server

    Jiang, Renyan

    2015-01-01

    This book presents the state-of-the-art in quality and reliability engineering from a product life cycle standpoint. Topics in reliability include reliability models, life data analysis and modeling, design for reliability and accelerated life testing, while topics in quality include design for quality, acceptance sampling and supplier selection, statistical process control, production tests such as screening and burn-in, warranty and maintenance. The book provides comprehensive insights into two closely related subjects, and includes a wealth of examples and problems to enhance reader comprehension and link theory and practice. All numerical examples can be easily solved using Microsoft Excel. The book is intended for senior undergraduate and post-graduate students in related engineering and management programs such as mechanical engineering, manufacturing engineering, industrial engineering and engineering management programs, as well as for researchers and engineers in the quality and reliability fields. D...

  12. Improving the Accuracy of Predicting Maximal Oxygen Consumption (VO2pk)

    Science.gov (United States)

    Downs, Meghan E.; Lee, Stuart M. C.; Ploutz-Snyder, Lori; Feiveson, Alan

    2016-01-01

Maximal oxygen consumption (VO2pk) is the maximum amount of oxygen that the body can use during intense exercise and is used for benchmarking endurance exercise capacity. The most accurate method to determine VO2pk requires continuous measurements of ventilation and gas exchange during an exercise test to maximal effort, which necessitates expensive equipment, a trained staff, and time to set up the equipment. For astronauts, accurate VO2pk measures are important to assess mission-critical task performance capabilities and to prescribe exercise intensities to optimize performance. Currently, astronauts perform submaximal exercise tests during flight to predict VO2pk; however, while submaximal VO2pk prediction equations provide reliable estimates of mean VO2pk for populations, they can be unacceptably inaccurate for a given individual. The error in current predictions and the logistical limitations of measuring VO2pk, particularly during spaceflight, highlight the need for improved estimation methods.

  13. Optimally Fortifying Logic Reliability through Criticality Ranking

    Directory of Open Access Journals (Sweden)

    Yu Bai

    2015-02-01

    Full Text Available With CMOS technology aggressively scaling towards the 22-nm node, modern FPGA devices face tremendous aging-induced reliability challenges due to bias temperature instability (BTI and hot carrier injection (HCI. This paper presents a novel anti-aging technique at the logic level that is both scalable and applicable for VLSI digital circuits implemented with FPGA devices. The key idea is to prolong the lifetime of FPGA-mapped designs by strategically elevating the VDD values of some LUTs based on their modular criticality values. Although the idea of scaling VDD in order to improve either energy efficiency or circuit reliability has been explored extensively, our study distinguishes itself by approaching this challenge through an analytical procedure, therefore being able to maximize the overall reliability of the target FPGA design by rigorously modeling the BTI-induced device reliability and optimally solving the VDD assignment problem. Specifically, we first develop a systematic framework to analytically model the reliability of an FPGA LUT (look-up table, which consists of both RAM memory bits and associated switching circuit. We also, for the first time, establish the relationship between signal transition density and a LUT’s reliability in an analytical way. This key observation further motivates us to define the modular criticality as the product of signal transition density and the logic observability of each LUT. Finally, we analytically prove, for the first time, that the optimal way to improve the overall reliability of a whole FPGA device is to fortify individual LUTs according to their modular criticality. To the best of our knowledge, this work is the first to draw such a conclusion.
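The ranking step can be sketched as follows; the criticality definition (signal transition density times logic observability) is the paper's, but the LUT names, values, and fortification budget below are hypothetical.

```python
# Modular criticality ranking: fortify (elevate VDD of) the most critical
# LUTs first, until a fortification budget is exhausted.
luts = {  # name: (signal_transition_density, logic_observability) - hypothetical
    "lut_a": (0.30, 0.90),
    "lut_b": (0.80, 0.70),
    "lut_c": (0.10, 0.99),
}

# Modular criticality = transition density x observability (per the paper).
criticality = {name: d * o for name, (d, o) in luts.items()}
ranked = sorted(criticality, key=criticality.get, reverse=True)

budget = 2                      # hypothetical: can raise VDD on 2 LUTs
fortified = ranked[:budget]     # most critical LUTs get the elevated VDD
```

Note that a high-activity LUT (`lut_b`) outranks a highly observable but rarely toggling one (`lut_c`), which is the intuition behind using the product of the two factors.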

  14. Reliability of nuclear power plants and equipment

    International Nuclear Information System (INIS)

    1985-01-01

The standard sets out the general principles, a list of reliability indexes, and demands on their selection. Reliability indexes of nuclear power plants include the simple indexes of fail-safe operation, life and maintainability, and storage capability. All terms and notions are explained, and the methods of evaluating the indexes are briefly listed: statistical, and calculation-experimental. The dates when the standard comes into force in the individual CMEA countries are given. (M.D.)

  15. Maximizers versus satisficers: Decision-making styles, competence, and outcomes

    OpenAIRE

    Andrew M. Parker; Wändi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decision...

  16. MAXIMIZING THE BENEFITS OF ERP SYSTEMS

    OpenAIRE

    Paulo André da Conceição Menezes; Fernando González-Ladrón-de-Guevara

    2010-01-01

    The ERP (Enterprise Resource Planning) systems have been consolidated in companies with different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied in different phases, such as the strategic priorities and strategic planning defined as ERP Strategy; business processes review and the ERP selection in the pre-implementation phase, the project management and ERP adaptation in the implementation phase, as well as th...

  17. Comparison of Critical Power and W' Derived From 2 or 3 Maximal Tests.

    Science.gov (United States)

    Simpson, Len Parker; Kordi, Mehdi

    2017-07-01

Typically, assessing the asymptote (critical power; CP) and curvature constant (W') parameters of the hyperbolic power-duration relationship requires multiple constant-power exhaustive-exercise trials spread over several visits. However, more recently single-visit protocols and personal power meters have been used. This study investigated the practicality of using a 2-trial, single-visit protocol in providing reliable CP and W' estimates. Eight trained cyclists underwent 3- and 12-min maximal-exercise trials in a single session to derive (2-trial) CP and W' estimates. On a separate occasion a 5-min trial was performed, providing a 3rd trial to calculate (3-trial) CP and W'. There were no differences in CP (283 ± 66 vs 282 ± 65 W) or W' (18.72 ± 6.21 vs 18.27 ± 6.29 kJ) obtained from either the 2-trial or 3-trial method, respectively. After 2 familiarization sessions (completing a 3- and a 12-min trial on both occasions), both CP and W' remained reliable over additional separate measurements. The current study demonstrates that after 2 familiarization sessions, reliable CP and W' parameters can be obtained from trained cyclists using only 2 maximal-exercise trials. These results offer practitioners a practical, time-efficient solution for incorporating power-duration testing into applied athlete support.
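The 2-trial estimate rests on the linear work-time form of the power-duration relationship, W = CP·t + W', so the total work done in the 3- and 12-min trials gives two equations in the two unknowns. A minimal sketch (the mean powers below are hypothetical, chosen only to land near the reported CP ≈ 283 W and W' ≈ 18.7 kJ):

```python
def cp_wprime(p_short, t_short, p_long, t_long):
    """CP (W) and W' (J) from two maximal trials: mean powers p (W)
    over durations t (s), using the linear model W = CP*t + W'."""
    w_short = p_short * t_short   # total work in the short trial (J)
    w_long = p_long * t_long      # total work in the long trial (J)
    cp = (w_long - w_short) / (t_long - t_short)   # slope = critical power
    w_prime = w_short - cp * t_short               # intercept = W'
    return cp, w_prime

# Hypothetical mean powers for 3-min (180 s) and 12-min (720 s) trials.
cp, w_prime = cp_wprime(p_short=385.0, t_short=180.0, p_long=308.0, t_long=720.0)
```

With these inputs the model returns CP ≈ 282 W and W' ≈ 18.5 kJ, the same order as the study's group means.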

  18. Natural maximal νμ-ντ mixing

    International Nuclear Information System (INIS)

    Wetterich, C.

    1999-01-01

The naturalness of maximal mixing between muon and tau neutrinos is investigated. A spontaneously broken nonabelian generation symmetry can explain a small parameter which governs the deviation from maximal mixing. In many cases all three neutrino masses are almost degenerate. Maximal νμ-ντ mixing suggests that the leading contribution to the light neutrino masses arises from the expectation value of a heavy weak triplet rather than from the seesaw mechanism. In this scenario the deviation from maximal mixing is predicted to be less than about 1%. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  19. On the way towards a generalized entropy maximization procedure

    International Nuclear Information System (INIS)

    Bagci, G. Baris; Tirnakli, Ugur

    2009-01-01

We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results in the exponential stationary distribution asymptotically for q ∈ (0,1], in contrast to the inverse-power-law stationary distribution obtained through the ordinary entropy maximization procedure. Another result of the generalized entropy maximization procedure is that one can naturally obtain all the possible stationary distributions associated with the Tsallis entropies by employing either ordinary or q-generalized Fourier transforms in the averaging procedure.

  20. Violating Bell inequalities maximally for two d-dimensional systems

    International Nuclear Information System (INIS)

    Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin

    2006-01-01

We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |Ψ⟩_app that violate the Bell inequality more strongly than the maximally entangled state but are somewhat close to these eigenvectors is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information
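For the familiar d = 2 (CHSH) case, the statement "maximal violation = largest eigenvalue of the Bell operator" can be checked numerically; this sketch uses the standard CHSH measurement settings, not the paper's general-d construction.

```python
import numpy as np

# Pauli observables for the two parties.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

# Standard CHSH settings: A1, A2 for Alice; B1, B2 rotated by 45 deg for Bob.
A1, A2 = sz, sx
B1 = (sz + sx) / np.sqrt(2)
B2 = (sz - sx) / np.sqrt(2)

# CHSH Bell operator: A1(B1+B2) + A2(B1-B2) on the two-qubit space.
bell = np.kron(A1, B1 + B2) + np.kron(A2, B1 - B2)
max_eig = np.linalg.eigvalsh(bell).max()  # Tsirelson bound 2*sqrt(2)
```

The largest eigenvalue comes out at 2√2 ≈ 2.828, exceeding the classical bound of 2, and its eigenvector is the maximally entangled state in this d = 2 case.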

  1. Reliability on the move: safety and reliability in transportation

    International Nuclear Information System (INIS)

    Guy, G.B.

    1989-01-01

The development of transportation has been a significant factor in the development of civilisation as a whole. Our technical ability to move people and goods now seems virtually limitless when one considers, for example, the achievements of the various space programmes. Yet our current achievements rely heavily on high standards of safety and reliability from equipment and the human component of transportation systems. Recent failures have highlighted our dependence on equipment and human reliability. This book represents the proceedings of the 1989 Safety and Reliability Society symposium held at Bath on 11-12 October 1989. The structure of the book follows the structure of the symposium itself, and the papers selected represent current thinking in the wide field of transportation; the areas of rail (six papers, three on railway signalling), air including space (two papers), road (one paper), road and rail (two papers) and sea (three papers) are covered. There are four papers concerned with general transport issues. Three papers concerned with the transport of radioactive materials are indexed separately. (author)

  2. Development in structural systems reliability theory

    International Nuclear Information System (INIS)

    Murotsu, Y.

    1986-01-01

This paper is concerned with two topics in structural systems reliability theory. One covers automatic generation of failure mode equations, identification of stochastically dominant failure modes, and reliability assessment of redundant structures. Reduced stiffness matrices and equivalent nodal forces representing the failed elements are introduced for expressing the safety of the elements, using a matrix method. Dominant failure modes are systematically selected by a branch-and-bound technique and heuristic operations. The other discusses various optimum design problems based on the reliability concept. These problems are interpreted through a solution to a multi-objective optimization problem. (orig.)

  3. Development in structural systems reliability theory

    Energy Technology Data Exchange (ETDEWEB)

    Murotsu, Y

    1986-07-01

This paper is concerned with two topics in structural systems reliability theory. One covers automatic generation of failure mode equations, identification of stochastically dominant failure modes, and reliability assessment of redundant structures. Reduced stiffness matrices and equivalent nodal forces representing the failed elements are introduced for expressing the safety of the elements, using a matrix method. Dominant failure modes are systematically selected by a branch-and-bound technique and heuristic operations. The other discusses various optimum design problems based on the reliability concept. These problems are interpreted through a solution to a multi-objective optimization problem.

  4. Determination of the exercise intensity that elicits maximal fat oxidation in individuals with obesity.

    Science.gov (United States)

    Dandanell, Sune; Præst, Charlotte Boslev; Søndergård, Stine Dam; Skovborg, Camilla; Dela, Flemming; Larsen, Steen; Helge, Jørn Wulff

    2017-04-01

Maximal fat oxidation (MFO) and the exercise intensity that elicits MFO (FatMax) are commonly determined by indirect calorimetry during graded exercise tests in both obese and normal-weight individuals. However, no protocol has been validated in individuals with obesity. Thus, the aims were to develop a graded exercise protocol for determination of FatMax in individuals with obesity, and to test validity and inter-method reliability. Fat oxidation was assessed over a range of exercise intensities in 16 individuals (age: 28 (26-29) years; body mass index: 36 (35-38) kg·m⁻²; 95% confidence interval) on a cycle ergometer. The graded exercise protocol was validated against a short continuous exercise (SCE) protocol, in which FatMax was determined from fat oxidation at rest and during 10 min of continuous exercise at 35%, 50%, and 65% of maximal oxygen uptake. Intraclass and Pearson correlation coefficients between the protocols were 0.75 and 0.72 and within-subject coefficient of variation (CV) was 5 (3-7)%. A Bland-Altman plot revealed a bias of -3% points of maximal oxygen uptake (limits of agreement: -12 to 7). A tendency towards a systematic difference (p = 0.06) was observed, where FatMax occurred at 42 (40-44)% and 45 (43-47)% of maximal oxygen uptake with the graded and the SCE protocol, respectively. In conclusion, there was a high to excellent correlation and a low CV between the 2 protocols, suggesting that the graded exercise protocol has a high inter-method reliability. However, considerable intra-individual variation and a trend towards systematic difference between the protocols reveal that further optimization of the graded exercise protocol is needed to improve validity.
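The Bland-Altman statistics quoted above (bias and limits of agreement between the two protocols) are straightforward to compute; the paired FatMax values below are hypothetical, not the study's data.

```python
import statistics

# Hypothetical paired FatMax values (% of maximal oxygen uptake) from the
# graded protocol and the short continuous exercise (SCE) protocol.
graded = [40, 44, 41, 39, 45, 42, 43, 40]
sce    = [45, 46, 44, 43, 47, 45, 44, 46]

# Bland-Altman: bias is the mean paired difference; the limits of agreement
# are bias +/- 1.96 standard deviations of the differences.
diffs = [g - s for g, s in zip(graded, sce)]
bias = statistics.mean(diffs)
sd = statistics.stdev(diffs)
limits = (bias - 1.96 * sd, bias + 1.96 * sd)
```

A negative bias, as in the study's -3 percentage points, means the graded protocol places FatMax at a lower fraction of maximal oxygen uptake than the SCE protocol.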

  5. Reliability of construction materials

    International Nuclear Information System (INIS)

    Merz, H.

    1976-01-01

One can also speak of reliability with respect to materials. While for the reliability of components the MTBF (mean time between failures) is regarded as the main criterion, for materials this is replaced by possible failure mechanisms such as physical/chemical reaction mechanisms, disturbances of physical or chemical equilibrium, or other interactions or changes of system. The main tasks of the reliability analysis of materials are therefore the prediction of the various failure causes, the identification of interactions, and the development of nondestructive testing methods. (RW)

  6. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature of the uncertainties and their interplay is then developed, step by step. The concepts presented are illustrated by numerous examples throughout the text.

  7. Reliability and mechanical design

    International Nuclear Information System (INIS)

    Lemaire, Maurice

    1997-01-01

    A lot of results in mechanical design are obtained from a modelisation of physical reality and from a numerical solution which would lead to the evaluation of needs and resources. The goal of the reliability analysis is to evaluate the confidence which it is possible to grant to the chosen design through the calculation of a probability of failure linked to the retained scenario. Two types of analysis are proposed: the sensitivity analysis and the reliability analysis. Approximate methods are applicable to problems related to reliability, availability, maintainability and safety (RAMS)

  8. RTE - 2013 Reliability Report

    International Nuclear Information System (INIS)

    Denis, Anne-Marie

    2014-01-01

RTE publishes a yearly reliability report based on a standard model to facilitate comparisons and highlight long-term trends. The 2013 report not only states the facts of the Significant System Events (ESS) but also underlines the main elements dealing with the reliability of the electrical power system. It highlights the various elements which contribute to present and future reliability and provides an overview of the interaction between the various stakeholders of the Electrical Power System on the scale of the European Interconnected Network. (author)

  9. Development of reliability-based safety enhancement technology

    International Nuclear Information System (INIS)

    Kim, Kil Yoo; Han, Sang Hoon; Jang, Seung Cherl

    2002-04-01

This project aims to develop critical technologies and the necessary reliability DB for maximizing the economics of NPP operation while maintaining safety, using risk (or reliability) information. Toward this goal, four critical technologies for risk-informed regulation and applications (risk-informed Tech. Spec. optimization, risk-informed in-service testing, on-line maintenance, and the Maintenance Rule) have first been developed. Secondly, KIND (Korea Information System for Nuclear Reliability Data) has been developed. Using KIND, component reliability DBs for YGN 3,4 and UCN 3,4 have been established. A reactor trip history DB for all NPPs in Korea has also been developed and analyzed. Finally, a detailed reliability analysis of the RPS/ESFAS for the KSNP has been performed. With the result of this analysis, a sensitivity analysis has also been performed to optimize the AOT/STI of the tech. spec. A statistical analysis procedure and computer code have been developed for set-point drift analysis

  10. Towards Reliable, Scalable, and Energy Efficient Cognitive Radio Systems

    KAUST Repository

    Sboui, Lokman

    2017-11-01

The cognitive radio (CR) concept is expected to be adopted along with many technologies to meet the requirements of the next generation of wireless and mobile systems, 5G. Consequently, it is important to determine the performance of CR systems with respect to these requirements. In this thesis, after briefly describing the 5G requirements, we present three main directions in which we aim to enhance CR performance. The first direction is reliability. We study the achievable rate of a multiple-input multiple-output (MIMO) relay-assisted CR under two scenarios: an unmanned aerial vehicle (UAV) one-way relaying (OWR) and a fixed two-way relaying (TWR). We propose special linear precoding schemes that enable the secondary user (SU) to take advantage of the primary-free channel eigenmodes. We study the SU rate sensitivity to the relay power, the relay gain, the UAV altitude, the number of antennas and the line-of-sight availability. The second direction is scalability. We first study a multiple access channel (MAC) with a multiple-SUs scenario. We propose a particular linear precoding and SU selection scheme maximizing their sum-rate. We show that the proposed scheme provides a significant sum-rate improvement as the number of SUs increases. Secondly, we expand our scalability study to cognitive cellular networks. We propose a low-complexity algorithm for base station activation/deactivation and dynamic spectrum management maximizing the profits of primary and secondary networks subject to green constraints. We show that our proposed algorithms achieve performance close to those obtained with the exhaustive search method. The third direction is energy efficiency (EE). We present a novel power allocation scheme based on maximizing the EE of both single-input single-output (SISO) and MIMO systems. We solve a non-convex problem and derive explicit expressions of the corresponding optimal power. When the instantaneous channel is not available, we ...

  11. Evaluation of anti-hyperglycemic effect of Actinidia kolomikta (Maxim. et Rur.) Maxim. root extract.

    Science.gov (United States)

    Hu, Xuansheng; Cheng, Delin; Wang, Linbo; Li, Shuhong; Wang, Yuepeng; Li, Kejuan; Yang, Yingnan; Zhang, Zhenya

    2015-05-01

This study aimed to evaluate the anti-hyperglycemic effect of an ethanol extract from Actinidia kolomikta (Maxim. et Rur.) Maxim. root (AKE). An in vitro evaluation was performed using rat intestinal α-glucosidases (maltase and sucrase), the key enzymes linked with type 2 diabetes, and an in vivo evaluation was performed by loading maltose, sucrose, or glucose in normal rats. As a result, AKE showed concentration-dependent inhibition of rat intestinal maltase and sucrase, with IC(50) values of 1.83 and 1.03 mg/mL, respectively. In normal rats loaded with maltose, sucrose, or glucose, administration of AKE significantly reduced postprandial hyperglycemia, similar to acarbose, which is used as an anti-diabetic drug. High contents of total phenolics (80.49 ± 0.05 mg GAE/g extract) and total flavonoids (430.69 ± 0.91 mg RE/g extract) were detected in AKE. In conclusion, AKE possesses anti-hyperglycemic effects, and the possible mechanisms are associated with its inhibition of α-glucosidase and improvement of insulin release and/or insulin sensitivity. The anti-hyperglycemic activity of AKE may be attributable to its high contents of phenolic and flavonoid compounds.

  12. Alternative approaches to maximally supersymmetric field theories

    International Nuclear Information System (INIS)

    Broedel, Johannes

    2010-01-01

The central objective of this work is the exploration and application of alternative possibilities to describe maximally supersymmetric field theories in four dimensions: N=4 super Yang-Mills theory and N=8 supergravity. While twistor string theory has proven very useful in the context of N=4 SYM, no analogous formulation for N=8 supergravity is available. In addition to the part describing N=4 SYM theory, twistor string theory contains vertex operators corresponding to the states of N=4 conformal supergravity. Those vertex operators have to be altered in order to describe (non-conformal) Einstein supergravity. A modified version of the known open twistor string theory, including a term which breaks the conformal symmetry for the gravitational vertex operators, has been proposed recently. In the first part of the thesis, structural aspects and consistency of the modified theory are discussed. Unfortunately, the majority of amplitudes cannot be constructed, which can be traced back to the fact that the dimension of the moduli space of algebraic curves in twistor space is reduced in an inconsistent manner. The issue of a possible finiteness of N=8 supergravity is closely related to the question of the existence of valid counterterms in the perturbation expansion of the theory. In particular, the coefficient in front of the so-called R^4 counterterm candidate has been shown to vanish by explicit calculation. This behavior points in the direction of a symmetry not taken into account, for which the hidden on-shell E7(7) symmetry is the prime candidate. The validity of the so-called double-soft scalar limit relation is a necessary condition for a theory exhibiting E7(7) symmetry. By calculating the double-soft scalar limit for amplitudes derived from an N=8 supergravity action modified by an additional R^4 counterterm, one can test for possible constraints originating in the E7(7) symmetry. In the second part of the thesis, the appropriate amplitudes are calculated.

  13. Approach to reliability assessment

    International Nuclear Information System (INIS)

    Green, A.E.; Bourne, A.J.

    1975-01-01

Experience has shown that reliability assessments can play an important role in the early design and subsequent operation of technological systems where reliability is at a premium. The approaches to and techniques for such assessments, which are outlined in the paper, have been successfully applied in a variety of applications ranging from individual equipment to large and complex systems. The general approach involves the logical and systematic establishment of the purpose, performance requirements and reliability criteria of systems. This is followed by an appraisal of likely system achievement based on an understanding of different types of variational behavior. A fundamental reliability model emerges from the correlation between the appropriate Q and H functions for performance requirement and achievement. This model may cover the complete spectrum of performance behavior in all the system dimensions.

  14. The rating reliability calculator

    Directory of Open Access Journals (Sweden)

    Solomon David J

    2004-04-01

Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to obtain complete rating data. I would welcome other researchers revising and enhancing the program.
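The Spearman-Brown prophecy formula mentioned in the Results is standard and easy to sketch. The function name and example numbers below are illustrative, not taken from the utility itself:

```python
def spearman_brown(r_single, k):
    """Predicted reliability of the average of k ratings, given the
    reliability r_single of a single rating (Spearman-Brown formula)."""
    return k * r_single / (1 + (k - 1) * r_single)

# e.g. a single-judge reliability of 0.40, averaged over 4 judges
print(round(spearman_brown(0.40, 4), 3))  # 0.727
```

Averaging over more judges always raises the predicted reliability, which is why the utility reports the harmonic mean number of judges per subject as the natural choice of k.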

  15. Structural systems reliability analysis

    International Nuclear Information System (INIS)

    Frangopol, D.

    1975-01-01

For an exact evaluation of the reliability of a structure it appears necessary to determine the distribution densities of the loads and resistances and to calculate the correlation coefficients between loads and between resistances. These statistical characteristics can be obtained only on the basis of a long activity period. If such studies are not available, the statistical properties formulated here give upper and lower bounds on the reliability. (orig./HP) [de

  16. Reliability and maintainability

    International Nuclear Information System (INIS)

    1994-01-01

Several communications in this conference are concerned with nuclear plant reliability and maintainability; their titles are: maintenance optimization of stand-by Diesels of 900 MW nuclear power plants; CLAIRE: an event-based simulation tool for software testing; reliability as one important issue within the periodic safety review of nuclear power plants; design of nuclear building ventilation by means of functional analysis; operation characteristic analysis for a power industry plant park as a function of influence parameters

  17. Reliability data book

    International Nuclear Information System (INIS)

    Bento, J.P.; Boerje, S.; Ericsson, G.; Hasler, A.; Lyden, C.O.; Wallin, L.; Poern, K.; Aakerlund, O.

    1985-01-01

The main objective of the report is to improve failure data for reliability calculations as part of safety analyses for Swedish nuclear power plants. The work is based primarily on evaluations of failure reports as well as information provided by the operation and maintenance staff of each plant. The report presents charts of reliability data for: pumps, valves, control rods/rod drives, electrical components, and instruments. (L.E.)

  18. Between-day reliability of a method for non-invasive estimation of muscle composition.

    Science.gov (United States)

    Simunič, Boštjan

    2012-08-01

Tensiomyography is a method for valid and non-invasive estimation of skeletal muscle fibre type composition. The validity of selected temporal tensiomyographic measures has been well established recently; there is, however, no evidence regarding the method's between-day reliability. The aim of this paper is therefore to establish the between-day repeatability of tensiomyographic measures in three skeletal muscles. For three consecutive days, 10 healthy male volunteers (mean±SD: age 24.6 ± 3.0 years; height 177.9 ± 3.9 cm; weight 72.4 ± 5.2 kg) were examined in a supine position. Four temporal measures (delay, contraction, sustain, and half-relaxation time) and maximal amplitude were extracted from the displacement-time tensiomyogram. A reliability analysis was performed with calculations of bias, random error, coefficient of variation (CV), standard error of measurement, and intra-class correlation coefficient (ICC) with a 95% confidence interval. The ICC analysis demonstrated excellent agreement (ICC over 0.94 in 14 out of 15 tested parameters). However, a higher CV was observed in half-relaxation time, presumably because of the specifics of the parameter definition itself. These data indicate that for the three muscles tested, tensiomyographic measurements were reproducible across consecutive test days. Furthermore, we identified the most likely origin of the lowest reliability, which was detected in half-relaxation time. Copyright © 2012 Elsevier Ltd. All rights reserved.
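The reliability statistics named above can be computed in a few lines. The sketch below assumes a simple two-day design and the common convention SEM = SD·√(1 − ICC); the function name and data layout are illustrative, not taken from the paper:

```python
import numpy as np

def between_day_stats(day1, day2, icc):
    """Bias, within-subject coefficient of variation (%) and standard
    error of measurement for a day-1 / day-2 test-retest design."""
    day1, day2 = np.asarray(day1, float), np.asarray(day2, float)
    bias = np.mean(day2 - day1)                    # systematic day-to-day shift
    pair_mean = (day1 + day2) / 2.0
    pair_sd = np.abs(day2 - day1) / np.sqrt(2.0)   # SD of a pair of values
    cv = 100.0 * np.mean(pair_sd / pair_mean)      # within-subject CV in %
    pooled_sd = np.std(np.concatenate([day1, day2]), ddof=1)
    sem = pooled_sd * np.sqrt(1.0 - icc)           # SEM = SD * sqrt(1 - ICC)
    return bias, cv, sem

# made-up contraction times (ms) for two subjects on two days, ICC assumed 0.9
bias, cv, sem = between_day_stats([10.0, 12.0], [11.0, 12.0], icc=0.9)
```

A small CV together with a high ICC is what the abstract describes as reproducibility across consecutive test days.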

  19. Parton Distributions based on a Maximally Consistent Dataset

    Science.gov (United States)

    Rojo, Juan

    2016-04-01

The choice of data that enters a global QCD analysis can have a substantial impact on the resulting parton distributions and their predictions for collider observables. One of the main reasons for this has to do with the possible presence of inconsistencies, either internal within an experiment or external between different experiments. In order to assess the robustness of the global fit, different definitions of a conservative PDF set, that is, a PDF set based on a maximally consistent dataset, have been introduced. However, these approaches are typically affected by theory biases in the selection of the dataset. In this contribution, after a brief overview of recent NNPDF developments, we propose a new, fully objective definition of a conservative PDF set, based on the Bayesian reweighting approach. Using the new NNPDF3.0 framework, we produce various conservative sets, which turn out to be in mutual agreement within the respective PDF uncertainties, as well as with the global fit. We explore some of their implications for LHC phenomenology, also finding good consistency with the global fit result. These results provide a non-trivial validation test of the new NNPDF3.0 fitting methodology, and indicate that possible inconsistencies in the fitted dataset do not substantially affect the global-fit PDFs.

  20. Anesthetic constituents of Zanthoxylum bungeanum Maxim.: A pharmacokinetic study.

    Science.gov (United States)

    Rong, Rong; Cui, Mei-Yu; Zhang, Qi-Li; Zhang, Mei-Yan; Yu, Yu-Ming; Zhou, Xian-Ying; Yu, Zhi-Guo; Zhao, Yun-Li

    2016-07-01

    A sensitive and selective ultra high performance liquid chromatography with tandem mass spectrometry method was established and validated for the simultaneous determination of hydroxy-α-sanshool, hydroxy-β-sanshool, and hydroxy-γ-sanshool in rat plasma after the subcutaneous and intravenous administration of an extract of the pericarp of Zanthoxylum bungeanum Maxim. Piperine was used as the internal standard. The analytes were extracted from rat plasma by liquid-liquid extraction with ethyl acetate and separated on a Thermo Hypersil GOLD C18 column (2.1 mm × 50 mm, 1.9 μm) with a gradient elution system at a flow rate of 0.4 mL/min. The mobile phase consisted of acetonitrile/0.05% formic acid in water and the total analysis time was 4 min. Positive electrospray ionization was performed using multiple reaction monitoring mode for the analytes. The calibration curves of the three analytes were linear over the tested concentration range. The intra- and interday precision was no more than 13.6%. Extraction recovery, matrix effect, and stability were satisfactory in rat plasma. The developed and validated method was suitable for the quantification of hydroxy-α-sanshool, hydroxy-β-sanshool, and hydroxy-γ-sanshool and successfully applied to a pharmacokinetic study of these analytes after subcutaneous and intravenous administration to rats. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Comparison of empirical strategies to maximize GENEHUNTER lod scores.

    Science.gov (United States)

    Chen, C H; Finch, S J; Mendell, N R; Gordon, D

    1999-01-01

    We compare four strategies for finding the settings of genetic parameters that maximize the lod scores reported in GENEHUNTER 1.2. The four strategies are iterated complete factorial designs, iterated orthogonal Latin hypercubes, evolutionary operation, and numerical optimization. The genetic parameters that are set are the phenocopy rate, penetrance, and disease allele frequency; both recessive and dominant models are considered. We selected the optimization of a recessive model on the Collaborative Study on the Genetics of Alcoholism (COGA) data of chromosome 1 for complete analysis. Convergence to a setting producing a local maximum required the evaluation of over 100 settings (for a time budget of 800 minutes on a Pentium II 300 MHz PC). Two notable local maxima were detected, suggesting the need for a more extensive search before claiming that a global maximum had been found. The orthogonal Latin hypercube design was the best strategy for finding areas that produced high lod scores with small numbers of evaluations. Numerical optimization starting from a region producing high lod scores was the strategy that found the highest maximum observed.
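A Latin hypercube design of the kind found most efficient here stratifies each parameter range into n bins and uses every bin exactly once per parameter. The sketch below is a plain (not orthogonal) Latin hypercube over the three genetic parameters, with a made-up smooth stand-in for the GENEHUNTER lod evaluation; all names and bounds are illustrative:

```python
import random

def latin_hypercube(n, bounds, rng=random.Random(0)):
    """n stratified samples: each of the n equal-width bins of every
    parameter range is used exactly once per parameter."""
    cols = []
    for lo, hi in bounds:
        bins = list(range(n))
        rng.shuffle(bins)
        width = (hi - lo) / n
        cols.append([lo + (b + rng.random()) * width for b in bins])
    return list(zip(*cols))  # n settings of (phenocopy, penetrance, freq)

# hypothetical smooth stand-in for a GENEHUNTER lod evaluation
def fake_lod(phenocopy, penetrance, freq):
    return -(phenocopy - 0.05) ** 2 - (penetrance - 0.8) ** 2 - (freq - 0.1) ** 2

# phenocopy rate, penetrance, disease allele frequency
bounds = [(0.0, 0.2), (0.5, 1.0), (0.01, 0.3)]
settings = latin_hypercube(16, bounds)
best = max(settings, key=lambda s: fake_lod(*s))
```

As in the study, such a design is a cheap way to locate a high-scoring region, which a numerical optimizer can then refine.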

  2. Mammogram segmentation using maximal cell strength updation in cellular automata.

    Science.gov (United States)

    Anitha, J; Peter, J Dinesh

    2015-08-01

    Breast cancer is the most frequently diagnosed type of cancer among women. Mammogram is one of the most effective tools for early detection of the breast cancer. Various computer-aided systems have been introduced to detect the breast cancer from mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissues is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of mammogram using a modified transition rule named maximal cell strength updation in cellular automata (CA). In coarse-level segmentation, the proposed method performs an adaptive global thresholding based on the histogram peak analysis to obtain the rough region of interest. An automatic seed point selection is proposed using gray-level co-occurrence matrix-based sum average feature in the coarse segmented image. Finally, the method utilizes CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated over the dataset of 70 mammograms with mass from mini-MIAS database. Experimental results show that the proposed approach yields promising results to segment the mass region in the mammograms with the sensitivity of 92.25% and accuracy of 93.48%.
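The "maximal cell strength updation" rule belongs to the family of GrowCut-style cellular automata: a cell adopts a neighbour's label whenever the neighbour's strength, attenuated by intensity similarity, exceeds its own. The sketch below is a generic CA of this kind on a toy image, not the authors' exact transition rule (it omits their thresholding and automatic seed-selection stages):

```python
import numpy as np

def ca_grow(image, fg_seed, bg_seed, steps=100):
    """GrowCut-style cellular automaton: a cell adopts a neighbour's
    label whenever the neighbour's strength, attenuated by the
    intensity-similarity factor g, exceeds the cell's own strength."""
    img = image.astype(float) / image.max()
    label = np.zeros(img.shape, dtype=int)       # 0 = background
    strength = np.zeros(img.shape)
    label[fg_seed], strength[fg_seed] = 1, 1.0   # foreground (mass) seed
    strength[bg_seed] = 1.0                      # background seed keeps label 0
    h, w = img.shape
    for _ in range(steps):
        changed = False
        for y in range(h):
            for x in range(w):
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        g = 1.0 - abs(img[y, x] - img[ny, nx])
                        if g * strength[ny, nx] > strength[y, x]:
                            label[y, x] = label[ny, nx]
                            strength[y, x] = g * strength[ny, nx]
                            changed = True
        if not changed:                          # stable configuration reached
            break
    return label
```

On a toy image with a bright block, the foreground label stops at the intensity edge because g, and hence the transferred strength, collapses there.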

  3. Determination of the exercise intensity that elicits maximal fat oxidation in individuals with obesity

    DEFF Research Database (Denmark)

    Jørgensen, Sune Dandanell; Præst, Charlotte Boslev; Søndergård, Stine Dam

    2017-01-01

. The graded exercise protocol was validated against a short continuous exercise (SCE) protocol, in which FatMax was determined from fat oxidation at rest and during 10-min continuous exercise at 35, 50 and 65% of maximal oxygen uptake (VO2max). Intraclass and Pearson correlation coefficients between … VO2max with the graded and the SCE protocol, respectively. In conclusion, there was a high-excellent correlation and a low CV between the two protocols, suggesting that the graded exercise protocol has a high inter-method reliability. However, considerable intra-individual variation and a trend…

  4. Multidisciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. To achieve this objective, the mechanical equivalence between system behavior models in different disciplines is investigated. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.

  5. Reliability engineering for nuclear and other high technology systems

    International Nuclear Information System (INIS)

    Lakner, A.A.; Anderson, R.T.

    1985-01-01

    This book is written for the reliability instructor, program manager, system engineer, design engineer, reliability engineer, nuclear regulator, probability risk assessment (PRA) analyst, general manager and others who are involved in system hardware acquisition, design and operation and are concerned with plant safety and operational cost-effectiveness. It provides criteria, guidelines and comprehensive engineering data affecting reliability; it covers the key aspects of system reliability as it relates to conceptual planning, cost tradeoff decisions, specification, contractor selection, design, test and plant acceptance and operation. It treats reliability as an integrated methodology, explicitly describing life cycle management techniques as well as the basic elements of a total hardware development program, including: reliability parameters and design improvement attributes, reliability testing, reliability engineering and control. It describes how these elements can be defined during procurement, and implemented during design and development to yield reliable equipment. (author)

  6. Analysis and Application of Reliability

    International Nuclear Information System (INIS)

    Jeong, Hae Seong; Park, Dong Ho; Kim, Jae Ju

    1999-05-01

This book covers the analysis and application of reliability, including: the definition, importance and historical background of reliability; reliability functions and failure rates; life distributions and reliability assumptions; the reliability of unrepaired and repairable systems; reliability sampling tests; failure analysis by FMEA and FTA, with cases; accelerated life testing, including basic concepts, acceleration and acceleration factors, and the analysis of accelerated life test data; and maintenance policies such as replacement and inspection.
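The reliability of unrepaired series and parallel systems, one of the topics listed, reduces to two one-line formulas: a series system survives only if every component survives, while a parallel (redundant) system fails only if all components fail. A minimal sketch:

```python
def series(reliabilities):
    """Unrepaired series system: fails if any component fails."""
    r = 1.0
    for ri in reliabilities:
        r *= ri
    return r

def parallel(reliabilities):
    """Unrepaired parallel system: fails only if every component fails."""
    q = 1.0
    for ri in reliabilities:
        q *= (1.0 - ri)
    return 1.0 - q

print(round(series([0.9, 0.9]), 4))    # 0.81
print(round(parallel([0.9, 0.9]), 4))  # 0.99
```

The example shows why redundancy pays: two 0.9-reliable components in series are less reliable than either alone, while in parallel they are more reliable than either.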

  7. Strategic defense and attack for reliability systems

    International Nuclear Information System (INIS)

    Hausken, Kjell

    2008-01-01

This article illustrates a method by which arbitrarily complex series/parallel reliability systems can be analyzed. The method is illustrated with series-parallel and parallel-series systems. Analytical expressions are determined for the investments and utilities of the defender and the attacker, which depend on their unit costs of investment for each component, the contest intensity for each component, and their evaluations of the value of system functionality. For a series-parallel system, infinitely many components in parallel benefit the defender maximally regardless of the finite number of parallel subsystems in series. Conversely, infinitely many components in series benefit the attacker maximally regardless of the finite number of components in parallel in each subsystem. For a parallel-series system, the results are opposite. With equivalent components, equal unit costs for defender and attacker, equal contest intensity for all components, and equally many components in series and parallel, the defender always prefers the series-parallel system to the parallel-series system, and the converse holds for the attacker. Hence from the defender's perspective, ceteris paribus, the series-parallel system is more reliable and has fewer 'cut sets' or failure modes.

  8. Kinetic theory in maximal-acceleration invariant phase space

    International Nuclear Information System (INIS)

    Brandt, H.E.

    1989-01-01

    A vanishing directional derivative of a scalar field along particle trajectories in maximal acceleration invariant phase space is identical in form to the ordinary covariant Vlasov equation in curved spacetime in the presence of both gravitational and nongravitational forces. A natural foundation is thereby provided for a covariant kinetic theory of particles in maximal-acceleration invariant phase space. (orig.)

  9. IIB solutions with N>28 Killing spinors are maximally supersymmetric

    International Nuclear Information System (INIS)

    Gran, U.; Gutowski, J.; Papadopoulos, G.; Roest, D.

    2007-01-01

    We show that all IIB supergravity backgrounds which admit more than 28 Killing spinors are maximally supersymmetric. In particular, we find that for all N>28 backgrounds the supercovariant curvature vanishes, and that the quotients of maximally supersymmetric backgrounds either preserve all 32 or N<29 supersymmetries

  10. Muscle mitochondrial capacity exceeds maximal oxygen delivery in humans

    DEFF Research Database (Denmark)

    Boushel, Robert Christopher; Gnaiger, Erich; Calbet, Jose A L

    2011-01-01

    Across a wide range of species and body mass a close matching exists between maximal conductive oxygen delivery and mitochondrial respiratory rate. In this study we investigated in humans how closely in-vivo maximal oxygen consumption (VO(2) max) is matched to state 3 muscle mitochondrial respira...

  11. Pace's Maxims for Homegrown Library Projects. Coming Full Circle

    Science.gov (United States)

    Pace, Andrew K.

    2005-01-01

    This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…

  12. Exploratory factor analysis and reliability analysis with missing data: A simple method for SPSS users

    Directory of Open Access Journals (Sweden)

    Bruce Weaver

    2014-09-01

Full Text Available Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. But unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
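Conceptually, the EM covariances that MVA produces come from the standard EM algorithm for a multivariate-normal sample with missing values. The NumPy sketch below shows that algorithm in its textbook form (regression-imputing each incomplete row in the E-step and adding the conditional covariance in the M-step); it illustrates the idea and is not SPSS code:

```python
import numpy as np

def em_covariance(X, n_iter=100, tol=1e-8):
    """EM estimate of the mean vector and covariance matrix of a
    multivariate-normal sample whose missing entries are coded as NaN."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)
    Xf = np.where(miss, mu, X)                   # start from mean imputation
    sigma = np.cov(Xf, rowvar=False, bias=True)
    for _ in range(n_iter):
        extra = np.zeros((p, p))                 # sum of conditional covariances
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            soo_inv = np.linalg.pinv(sigma[np.ix_(o, o)])
            smo = sigma[np.ix_(m, o)]
            # E-step: conditional mean of the missing block given the observed
            Xf[i, m] = mu[m] + smo @ soo_inv @ (Xf[i, o] - mu[o])
            extra[np.ix_(m, m)] += sigma[np.ix_(m, m)] - smo @ soo_inv @ smo.T
        # M-step: re-estimate the mean and covariance
        mu = Xf.mean(axis=0)
        diff = Xf - mu
        sigma_new = (diff.T @ diff + extra) / n
        if np.max(np.abs(sigma_new - sigma)) < tol:
            return mu, sigma_new
        sigma = sigma_new
    return mu, sigma

# tiny demo: one missing value in a 4 x 2 sample (made-up numbers)
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, np.nan], [4.0, 3.0]])
mu, sigma = em_covariance(X)
```

The resulting covariance (or its correlation form) is exactly the kind of matrix the macros feed to FACTOR and RELIABILITY.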

  13. Maximizing value for planned maintenance and turnarounds

    Energy Technology Data Exchange (ETDEWEB)

    Crager, John [Asset Performance Canada, ULC (Canada)

    2011-07-01

    In this presentation, Asset Performance Canada elaborates on turnaround management, and provides insight into managing strategies and risks associated with turnaround processes. Value is created fast as turnarounds progress from the strategy and scope definition phases towards the planning and execution phases. Turnarounds employ best practice solutions through common processes proven effective by data, research, and experience, ensuring consistent value creation, performance tracking, and shared lesson-learning. Adherence to the turnaround work process by carefully selected planning team members with clearly defined responsibilities and expectations, as well as an effective process for scope collection and management, are crucial to turnaround success. Further to this, a formal risk management process, employing automated risk tracking software and a safety management plan for assessing risks and their severity is also invaluable in avoiding any additional costs and schedule delays. Finally, assessment of team readiness and alignment through use of the turnaround readiness index, and rapid management of discovery work, can aid in contingency management.

  14. A reliability analysis tool for SpaceWire network

    Science.gov (United States)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft, so it is very important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the function division of a distributed network, a task-based reliability analysis method is proposed: the reliability analysis of each task leads to a system reliability matrix, and the reliability of the network system is deduced by integrating all the reliability indexes in the matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path-task reliability are also implemented. Using this tool, we analyze several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, the dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. Finally, this reliability analysis tool has a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.

  15. Evaluation and comparison of alternative fleet-level selective maintenance models

    International Nuclear Information System (INIS)

    Schneider, Kellie; Richard Cassady, C.

    2015-01-01

    Fleet-level selective maintenance refers to the process of identifying the subset of maintenance actions to perform on a fleet of repairable systems when the maintenance resources allocated to the fleet are insufficient for performing all desirable maintenance actions. The original fleet-level selective maintenance model is designed to maximize the probability that all missions in a future set are completed successfully. We extend this model in several ways. First, we consider a cost-based optimization model and show that a special case of this model maximizes the expected value of the number of successful missions in the future set. We also consider the situation in which one or more of the future missions may be canceled. These models and the original fleet-level selective maintenance optimization models are nonlinear. Therefore, we also consider an alternative model in which the objective function can be linearized. We show that the alternative model is a good approximation to the other models. - Highlights: • Investigate nonlinear fleet-level selective maintenance optimization models. • A cost based model is used to maximize the expected number of successful missions. • Another model is allowed to cancel missions if reliability is sufficiently low. • An alternative model has an objective function that can be linearized. • We show that the alternative model is a good approximation to the other models
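In its simplest form, the cost-based variant described above is a budgeted subset-selection problem. The brute-force sketch below maximizes the expected number of successful missions for a toy fleet; all names and numbers are illustrative, and the paper's actual models are richer (mission sets, cancellations, linearization):

```python
from itertools import combinations

def selective_maintenance(p_now, p_repaired, cost, budget):
    """Choose the subset of repairs, within the budget, that maximizes
    the expected number of systems completing the next mission."""
    n = len(p_now)
    best_value, best_set = sum(p_now), ()   # doing nothing is the baseline
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            if sum(cost[i] for i in subset) > budget:
                continue                     # infeasible: exceeds resources
            value = sum(p_repaired[i] if i in subset else p_now[i]
                        for i in range(n))
            if value > best_value:
                best_value, best_set = value, subset
    return best_set, best_value

# three systems, hypothetical reliabilities, repair costs and a budget of 8
sel, ev = selective_maintenance([0.6, 0.7, 0.5], [0.95, 0.9, 0.85],
                                [4, 3, 5], budget=8)
```

Brute force is exponential in fleet size, which is one motivation for the linearizable alternative model the authors propose.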

  16. Application of Expectation Maximization Method for Purchase Decision-Making Support in Welding Branch

    Directory of Open Access Journals (Sweden)

    Kujawińska Agnieszka

    2016-06-01

Full Text Available The article presents a study applying the proposed method of cluster analysis to support purchasing decisions in the welding industry. The authors analyze the usefulness of the non-hierarchical Expectation Maximization (EM) method in the selection of material (212 combinations of flux and wire melt) for the SAW (Submerged Arc Welding) process. The proposed approach to cluster analysis proves useful in supporting purchase decisions.
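As a generic illustration of the EM clustering used here (not the authors' 212-combination dataset), a one-dimensional two-component Gaussian mixture can be fitted by EM in pure Python; the data values below are made up:

```python
import math

def em_two_clusters(xs, n_iter=50):
    """Two-component 1-D Gaussian mixture fitted by Expectation
    Maximization; returns component means, SDs and weights."""
    xs = list(xs)
    mu = [min(xs), max(xs)]                # crude initialisation
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            dens = [w[k] / (sd[k] * math.sqrt(2 * math.pi))
                    * math.exp(-(x - mu[k]) ** 2 / (2 * sd[k] ** 2))
                    for k in (0, 1)]
            total = sum(dens)
            resp.append([d / total for d in dens])
        # M-step: re-estimate parameters from the responsibilities
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
            sd[k] = math.sqrt(max(var, 1e-6))
            w[k] = nk / len(xs)
    return mu, sd, w

# two well-separated groups of measurements (made-up numbers)
data = [1.0, 1.1, 0.9, 1.05, 5.0, 5.2, 4.9, 5.1]
mu, sd, w = em_two_clusters(data)
```

In the purchasing setting, each recovered cluster groups materials with similar behaviour, so a buyer can sample one candidate per cluster instead of testing every combination.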

  17. Reliability and Validity of Dual-Task Mobility Assessments in People with Chronic Stroke

    Science.gov (United States)

    Yang, Lei; He, Chengqi; Pang, Marco Yiu Chung

    2016-01-01

Background The ability to perform a cognitive task while walking simultaneously (dual-tasking) is important in real life. However, the psychometric properties of dual-task walking tests have not been well established in stroke. Objective To assess the test-retest reliability, concurrent and known-groups validity of various dual-task walking tests in people with chronic stroke. Design Observational measurement study with a test-retest design. Methods Eighty-eight individuals with chronic stroke participated. The testing protocol involved four walking tasks (walking forward at self-selected and maximal speed, walking backward at self-selected speed, and crossing over obstacles) performed simultaneously with each of three attention-demanding tasks (verbal fluency, serial 3 subtractions, or carrying a cup of water). For each dual-task condition, the time taken to complete the walking task, the correct response rate (CRR) of the cognitive task, and the dual-task effect (DTE) for the walking time and CRR were calculated. Forty-six of the participants were tested twice within 3–4 days to establish test-retest reliability. Results The walking time in various dual-task assessments demonstrated good to excellent reliability [intraclass correlation coefficient ICC(2,1) = 0.70–0.93; relative minimal detectable change at 95% confidence level (MDC95%) = 29%–45%]. The reliability of the CRR (ICC(2,1) = 0.58–0.81) and the DTE in walking time (ICC(2,1) = 0.11–0.80) was more varied. The reliability of the DTE in CRR (ICC(2,1) = −0.31 to 0.40) was poor to fair. The walking time and CRR obtained in various dual-task walking tests were moderately to strongly correlated with those of the dual-task Timed-up-and-Go test, thus demonstrating good concurrent validity. None of the tests could discriminate fallers (those who had sustained at least one fall in the past year) from non-fallers. Limitation The results are generalizable to community-dwelling individuals with chronic stroke only.
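The dual-task effect (DTE) reported above is conventionally the percentage change of the dual-task score relative to the single-task score; the exact sign convention varies between studies, so treat the sketch below as the common definition rather than this paper's code:

```python
def dual_task_effect(single, dual):
    """Dual-task effect as a percentage change relative to the
    single-task score (positive = longer time, i.e. slower walking)."""
    return 100.0 * (dual - single) / single

# a walk that takes 10.0 s alone and 12.5 s while subtracting serial 3s
print(dual_task_effect(10.0, 12.5))  # 25.0
```

Being a ratio of two noisy measurements, the DTE compounds their errors, which is consistent with its lower ICCs compared with the raw walking times.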

  18. Safety and reliability criteria

    International Nuclear Information System (INIS)

    O'Neil, R.

    1978-01-01

Nuclear power plants and, in particular, reactor pressure boundary components have unique reliability requirements, in that usually no significant redundancy is possible, and a single failure can give rise to possible widespread core damage and fission product release. Reliability may be required for availability or safety reasons, but in the case of the pressure boundary and certain other systems safety may dominate. Possible Safety and Reliability (S and R) criteria are proposed which would produce an acceptable reactor design. Without some S and R requirement the designer has no way of knowing how far he must go in analysing his system or component, or whether his proposed solution is likely to gain acceptance. The paper shows how reliability targets for given components and systems can be individually considered against the derived S and R criteria at the design and construction stage. Since in the case of nuclear pressure boundary components there is often very little direct experience on which to base reliability studies, relevant non-nuclear experience is examined. (author)

  19. Proposed reliability cost model

    Science.gov (United States)

    Delionback, L. M.

    1973-01-01

The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between the technical disciplines on one hand and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided, as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.

  20. Reliability of thermal interface materials: A review

    International Nuclear Information System (INIS)

    Due, Jens; Robinson, Anthony J.

    2013-01-01

    Thermal interface materials (TIMs) are used extensively to improve thermal conduction across two mating parts. They are particularly crucial in electronics thermal management since excessive junction-to-ambient thermal resistances can cause elevated temperatures which can negatively influence device performance and reliability. Of particular interest to electronic package designers is the thermal resistance of the TIM layer at the end of its design life. Estimations of this allow the package to be designed to perform adequately over its entire useful life. To this end, TIM reliability studies have been performed using accelerated stress tests. This paper reviews the body of work which has been performed on TIM reliability. It focuses on the various test methodologies with commentary on the results which have been obtained for the different TIM materials. Based on the information available in the open literature, a test procedure is proposed for TIM selection based on beginning and end of life performance. - Highlights: ► This paper reviews the body of work which has been performed on TIM reliability. ► Test methodologies for reliability testing are outlined. ► Reliability results for the different TIM materials are discussed. ► A test procedure is proposed for TIM selection based on beginning-of-life and end-of-life performance.

  1. Reliability enhancement of portal frame structure by finite element synthesis

    International Nuclear Information System (INIS)

    Nakagiri, S.

    1989-01-01

    The stochastic finite element methods have been applied to the evaluation of structural response and reliability of uncertain structural systems. The structural reliability index of the advanced first-order second-moment (AFOSM) method is a candidate measure for assessing structural safety and reliability. The reliability index can be evaluated once a baseline design of the structure of interest is proposed and the covariance matrix of the probabilistic variables is acquired to represent the uncertainties involved in the structural system. The reliability index thus evaluated, however, is not assured to be the largest attainable for the structure: there remains the possibility of enhancing the structural reliability for the given covariance matrix by changing the baseline design. From such a viewpoint of structural optimization, some ideas have been proposed to maximize the reliability or to minimize the failure probability of uncertain structural systems. A method of changing the design is proposed to increase the reliability index from its baseline value to another desired value. The reliability index in this paper is calculated mainly by the method of Lagrange multipliers.
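
    For the special case of a linear limit state with independent normal variables, the AFOSM (Hasofer-Lind) reliability index reduces to a closed form; the resistance/load numbers below are hypothetical, not the paper's portal frame:

```python
import math

# Reliability index for a linear limit state g(x) = sum(a_i x_i) + b with
# independent normal variables x_i ~ N(mu_i, sigma_i). Numbers are invented.

def afosm_beta_linear(a, mu, sigma, b=0.0):
    """Hasofer-Lind index: mean of g divided by its standard deviation."""
    mean_g = sum(ai * mi for ai, mi in zip(a, mu)) + b
    std_g = math.sqrt(sum((ai * si) ** 2 for ai, si in zip(a, sigma)))
    return mean_g / std_g

# Resistance R minus load S: g = R - S fails when load exceeds resistance.
beta = afosm_beta_linear(a=[1.0, -1.0], mu=[420.0, 300.0], sigma=[30.0, 40.0])
print(round(beta, 3))  # distance to the failure surface in standard deviations
```

    Raising this index by redesign (shifting the means or tightening the variances) is exactly the kind of baseline change the abstract describes.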

  2. Mission Reliability Estimation for Repairable Robot Teams

    Science.gov (United States)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the
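
    The redundancy-versus-repairability comparison can be sketched with elementary series/parallel reliability algebra; the module reliabilities and team sizes below are invented for illustration, and the repairability term is a rough proxy, not the article's model:

```python
from math import comb

# Toy redundancy-vs-repairability comparison. A robot is a series system of
# modules; the team needs `needed` robots working. All numbers are invented.

def robot_reliability(modules):
    r = 1.0
    for m in modules:
        r *= m
    return r

def team_reliability(n_robots, needed, r):
    """P(at least `needed` of n_robots independent robots survive)."""
    return sum(comb(n_robots, k) * r**k * (1 - r)**(n_robots - k)
               for k in range(needed, n_robots + 1))

modules = [0.95, 0.97, 0.99]      # e.g. drive, sensing, computing
r = robot_reliability(modules)

# Option A: carry one whole spare robot (4 robots, 3 needed).
spare_robot = team_reliability(4, 3, r)

# Option B: carry spare modules instead; as a rough proxy, each robot then
# tolerates one failed module, so per-robot reliability is P(<= 1 module down).
r_repair = r + sum((1 - modules[i]) * robot_reliability(modules[:i] + modules[i+1:])
                   for i in range(len(modules)))
spare_parts = team_reliability(3, 3, r_repair)
print(round(spare_robot, 4), round(spare_parts, 4))
```

    Under these assumed numbers the spare-module option comes out ahead, echoing the abstract's point that cheaper components plus spares can beat whole-robot redundancy.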

  3. Self-Tuning Method for Increased Obstacle Detection Reliability Based on Internet of Things LiDAR Sensor Models.

    Science.gov (United States)

    Castaño, Fernando; Beruvides, Gerardo; Villalonga, Alberto; Haber, Rodolfo E

    2018-05-10

    On-chip LiDAR sensors for vehicle collision avoidance are a rapidly expanding area of research and development. The assessment of reliable obstacle detection using data collected by LiDAR sensors has become a key issue that the scientific community is actively exploring. The design of a self-tuning methodology and its implementation are presented in this paper, to maximize the reliability of a LiDAR sensor network for obstacle detection in 'Internet of Things' (IoT) mobility scenarios. The Webots Automobile 3D simulation tool for emulating sensor interaction in complex driving environments is selected in order to achieve that objective. Furthermore, a model-based framework is defined that employs a point-cloud clustering technique and an error-based prediction model library composed of a multilayer perceptron neural network, k-nearest neighbors, and linear regression models. Finally, a reinforcement learning technique, specifically a Q-learning method, is implemented to determine the number of LiDAR sensors required to increase sensor reliability for obstacle localization tasks. In addition, an IoT driving-assistance user scenario connecting a five-sensor LiDAR network is designed and implemented to validate the accuracy of the computational intelligence-based framework. The results demonstrated that the self-tuning method is an appropriate strategy to increase the reliability of the sensor network while minimizing detection thresholds.
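
    A minimal bandit-style Q-learning sketch of the "how many sensors" decision; the environment (per-sensor detection probability, cost weight, noise) is invented, not the paper's Webots simulation:

```python
import random

# Q-learning over a single repeated decision: how many LiDAR sensors to deploy.
# Reward trades detection reliability against per-sensor cost (all assumed).

random.seed(0)
ACTIONS = [1, 2, 3, 4, 5]              # candidate sensor counts
P_DETECT = 0.8                         # assumed per-sensor detection probability
COST = 0.05                            # assumed normalized cost per extra sensor

def reward(n):
    detected = 1.0 - (1.0 - P_DETECT) ** n    # at least one sensor detects
    return detected - COST * (n - 1) + random.gauss(0.0, 0.01)

q = {a: 0.0 for a in ACTIONS}
alpha, epsilon = 0.1, 0.2
for _ in range(5000):
    a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
    q[a] += alpha * (reward(a) - q[a])        # incremental value update

best = max(q, key=q.get)
print(best, round(q[best], 3))
```

    The learned action balances coverage against cost, which is the shape of the trade the paper's Q-learning stage resolves for the sensor network.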

  4. Reliability Centered Maintenance - Methodologies

    Science.gov (United States)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  5. An inquiry into the bibliography of some of the Bustan's maxims

    Directory of Open Access Journals (Sweden)

    sajjad rahmatian

    2016-12-01

    Full Text Available Sa'di is one of those poets who gave a special place to preaching and guiding the people; among his works, the whole text of the Bustan is devoted to advice and maxims on various legal and ethical subjects. In composing this work and expressing its moral points, Sa'di was surely influenced, directly or indirectly, by earlier sources, possibly drawing on their content. The main purpose of this article is to review the basis and sources of the Bustan's maxims and to show by which texts and works Sa'di was influenced when expressing them. To this end, sources devoted wholly or partly to aphorisms were searched in order to discover and extract traces of Sa'di's borrowing from their moral and didactic content. Among the most important findings of this study are the indirect influence of some Pahlavi books of maxims (such as the maxims of Azarbad Marespandan and the book of maxims of Bozorgmehr) and Sa'di's direct debt to the moral and ethical works of poets and writers before him; of these, his borrowing from the maxims of Abu Shakur Balkhi, Ferdowsi and Keikavus is remarkable and noteworthy.

  6. Can monkeys make investments based on maximized pay-off?

    Directory of Open Access Journals (Sweden)

    Sophie Steelandt

    2011-03-01

    Full Text Available Animals can maximize benefits, but it is not known whether they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by using different decision rules for each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of the 21 subjects learned to maximize benefits by adapting investment according to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible.

  7. Reliability issues in PACS

    Science.gov (United States)

    Taira, Ricky K.; Chan, Kelby K.; Stewart, Brent K.; Weinberg, Wolfram S.

    1991-07-01

    Reliability is an increasing concern when moving PACS from the experimental laboratory to the clinical environment. Any system downtime may seriously affect patient care. The authors report on the several classes of errors encountered during the pre-clinical release of the PACS during the past several months and present the solutions implemented to handle them. The reliability issues discussed include: (1) environmental precautions, (2) database backups, (3) monitor routines of critical resources and processes, (4) hardware redundancy (networks, archives), and (5) development of a PACS quality control program.

  8. Reliability Parts Derating Guidelines

    Science.gov (United States)

    1982-06-01

    "Reliability of GaAs Injection Lasers", De Loach, B. C., Jr., 1973 IEEE/OSA Conference on Laser Engineering and Applications; IEEE Transactions on Reliability, Vol. R-23, No. 4, pp. 226-30, October 1974. [Remainder of excerpt illegible: a note on operation at elevated temperature for a part mounted on a 4-inch-square, 0.250-inch-thick aluminum alloy panel, a mounting technique to be taken into consideration.]

  9. Gravitational collapse of charged dust shell and maximal slicing condition

    International Nuclear Information System (INIS)

    Maeda, Keiichi

    1980-01-01

    The maximal slicing condition is a good time coordinate condition qualitatively when pursuing the gravitational collapse by the numerical calculation. The analytic solution of the gravitational collapse under the maximal slicing condition is given in the case of a spherical charged dust shell and the behavior of time slices with this coordinate condition is investigated. It is concluded that under the maximal slicing condition we can pursue the gravitational collapse until the radius of the shell decreases to about 0.7 x (the radius of the event horizon). (author)
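
    For reference, maximal slicing is the gauge choice that sets the trace of the extrinsic curvature of each spatial slice to zero (a standard definition, not specific to this paper):

```latex
K \equiv \gamma^{ij} K_{ij} = 0
```

    Slices of vanishing mean curvature are locally maximal-volume hypersurfaces, which is the property behind the singularity-avoiding behavior exploited in the abstract.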

  10. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension …

  11. Breakdown of maximality conjecture in continuous phase transitions

    International Nuclear Information System (INIS)

    Mukamel, D.; Jaric, M.V.

    1983-04-01

    A Landau-Ginzburg-Wilson model associated with a single irreducible representation which exhibits an ordered phase whose symmetry group is not a maximal isotropy subgroup of the symmetry group of the disordered phase is constructed. This example disproves the maximality conjecture suggested in numerous previous studies. Below the (continuous) transition, the order parameter points along a direction which varies with the temperature and with the other parameters which define the model. An extension of the maximality conjecture to reducible representations was postulated in the context of Higgs symmetry breaking mechanism. Our model can also be extended to provide a counter example in these cases. (author)

  12. Reliability data bases: the current picture

    International Nuclear Information System (INIS)

    Fragola, J.R.

    1985-01-01

    The paper addresses specific advances in nuclear power plant reliability data base development, a critical review of a select set of relevant data bases and suggested future data bases and suggested future data development needs required for risk assessment techniques to reach full potential

  13. Reliability Testing Using the Vehicle Durability Simulator

    Science.gov (United States)

    2017-11-20

    techniques are employed to reduce test and simulation time. Through application of these processes and techniques the reliability characteristics...remote parameter control (RPC) software. The software is specifically designed for the data collection, analysis, and simulation processes outlined in...the selection process for determining the desired runs for simulation . 4.3 Drive File Development. After the data have been reviewed and

  14. Columbus safety and reliability

    Science.gov (United States)

    Longhurst, F.; Wessels, H.

    1988-10-01

    Analyses carried out to ensure Columbus reliability, availability, and maintainability, and operational and design safety are summarized. Failure modes/effects/criticality is the main qualitative tool used. The main aspects studied are fault tolerance, hazard consequence control, risk minimization, human error effects, restorability, and safe-life design.

  15. Reliability versus reproducibility

    International Nuclear Information System (INIS)

    Lautzenheiser, C.E.

    1976-01-01

    Defect detection and reproducibility of results are two separate but closely related subjects. It is axiomatic that a defect must be detected from examination to examination, or reproducibility of results is very poor. On the other hand, repeated examinations can yield higher reliability of detecting a defect even while the reproducibility of the individual results remains poor.
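
    The distinction can be made concrete with elementary probability (numbers assumed): repeated examinations raise the chance of detecting a defect at least once even when the chance of detecting it every time, i.e. perfect reproducibility, stays low:

```python
# Assumed per-examination detection probability for a given defect.
p, n = 0.7, 4

at_least_once = 1 - (1 - p) ** n   # detection reliability over n examinations
every_time = p ** n                # probability the finding reproduces every time
print(round(at_least_once, 4), round(every_time, 4))
```

    With these assumed values, detection over four examinations is near-certain while the same defect is found on all four only about a quarter of the time: high reliability, poor reproducibility.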

  16. Power transformer reliability modelling

    NARCIS (Netherlands)

    Schijndel, van A.

    2010-01-01

    Problem description Electrical power grids serve to transport and distribute electrical power with high reliability and availability at acceptable costs and risks. These grids play a crucial though preferably invisible role in supplying sufficient power in a convenient form. Today’s society has

  17. Designing reliability into accelerators

    International Nuclear Information System (INIS)

    Hutton, A.

    1992-08-01

    For the next generation of high performance, high average luminosity colliders, the ''factories,'' reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: Concept; design; motivation; management techniques; and fault diagnosis

  18. Proof tests on reliability

    International Nuclear Information System (INIS)

    Mishima, Yoshitsugu

    1983-01-01

    In order to obtain public understanding of nuclear power plants, tests should be carried out to prove the reliability and safety of present LWR plants. For example, the aseismicity of nuclear power plants must be verified by using a large-scale earthquake simulator. Reliability tests began in fiscal 1975, and the proof tests on steam generators and on PWR support and flexure pins against stress corrosion cracking have already been completed; the results have been highly appreciated internationally. The capacity factor of nuclear power plant operation in Japan rose to 80% in the summer of 1983, which, considering the periods of regular inspection, means operation at almost full capacity. Japanese LWR technology has now risen to the top place in the world after overcoming earlier defects. The significance of the reliability tests is to secure functioning until the age limit is reached, to confirm the correct forecast of deterioration processes, to confirm the effectiveness of remedies for defects, and to confirm the accuracy of predicting the behavior of facilities. The reliability of nuclear valves, fuel assemblies, heat-affected zones in welding, reactor cooling pumps and electric instruments has been tested or is being tested. (Kako, I.)

  19. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  20. Reliability of Plastic Slabs

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    1989-01-01

    In the paper it is shown how upper and lower bounds for the reliability of plastic slabs can be determined. For the fundamental case it is shown that optimal bounds of a deterministic and a stochastic analysis are obtained on the basis of the same failure mechanisms and the same stress fields....

  1. Reliability based structural design

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2014-01-01

    According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A

  2. Travel time reliability modeling.

    Science.gov (United States)

    2011-07-01

    This report includes three papers as follows: : 1. Guo F., Rakha H., and Park S. (2010), "A Multi-state Travel Time Reliability Model," : Transportation Research Record: Journal of the Transportation Research Board, n 2188, : pp. 46-54. : 2. Park S.,...

  3. Reliability and Model Fit

    Science.gov (United States)

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  4. Parametric Mass Reliability Study

    Science.gov (United States)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, are typically the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  5. [Selective biopsy of the sentinel lymph node in patients with breast cancer and previous excisional biopsy: is there a change in the reliability of the technique according to time from surgery?].

    Science.gov (United States)

    Sabaté-Llobera, A; Notta, P C; Benítez-Segura, A; López-Ojeda, A; Pernas-Simon, S; Boya-Román, M P; Bajén, M T

    2015-01-01

    To assess the influence of time on the reliability of sentinel lymph node biopsy (SLNB) in breast cancer patients with previous excisional biopsy (EB), analyzing both the sentinel lymph node detection and the lymph node recurrence rate. Thirty-six patients with cT1/T2 N0 breast cancer and previous EB of the lesion underwent a lymphoscintigraphy after subdermal periareolar administration of radiocolloid, the day before SLNB. Patients were classified into two groups, one including 12 patients with up to 29 days elapsed between EB and SLNB (group A), and another with the remaining 24 in which time between both procedures was of 30 days or more (group B). Scintigraphic and surgical detection of the sentinel lymph node, histological status of the sentinel lymph node and of the axillary lymph node dissection, if performed, and lymphatic recurrences during follow-up, were analyzed. Sentinel lymph node visualization at the lymphoscintigraphy and surgical detection were 100% in both groups. Histologically, three patients showed macrometastasis in the sentinel lymph node, one from group A and two from group B. None of the patients, not even those with malignancy of the sentinel lymph node, relapsed after a medium follow-up of 49.5 months (24-75). Time elapsed between EB and SLNB does not influence the reliability of this latter technique as long as a superficial injection of the radiopharmaceutical is performed, proving a very high detection rate of the sentinel lymph node without evidence of lymphatic relapse during follow-up. Copyright © 2014 Elsevier España, S.L.U. and SEMNIM. All rights reserved.

  6. Selection and reliability of financial ratios in an attempt to analyse financial statements. An empirical research of the listed companies at Greek stock exchange in Construction Sector. Dimitrios Tsiolis MA Finance

    OpenAIRE

    Tsiolis, Dimitrios

    2008-01-01

    Financial ratio analysis is a widely known financial statement analysis tool used to evaluate a company's financial position. A careful selection process, in collaboration with other financial statement analysis techniques, and taking into consideration the problems of financial ratio analysis, can lead analysts to a clear determination of their company's financial position.

  7. Reliability Approach of a Compressor System using Reliability Block ...

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 ... This paper presents a reliability analysis of such a system using reliability ... Keywords-compressor system, reliability, reliability block diagram, RBD .... the same structure has been kept with the three subsystems: air flow, oil flow and .... and Safety in Engineering Design", Springer, 2009. [3] P. O'Connor ...
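
    The series/parallel algebra behind a reliability block diagram can be sketched as follows; the block values and structure are illustrative, not the paper's compressor data:

```python
# Series/parallel reliability algebra for a block diagram; values invented.

def series(*blocks):
    """All blocks must work."""
    r = 1.0
    for b in blocks:
        r *= b
    return r

def parallel(*blocks):
    """At least one redundant block must work."""
    q = 1.0
    for b in blocks:
        q *= (1.0 - b)
    return 1.0 - q

# Three subsystems in series, echoing the paper's air-flow/oil-flow split.
air = parallel(0.90, 0.90)     # redundant air-flow path
oil = series(0.95, 0.99)       # pump and filter in series
control = 0.98
system = series(air, oil, control)
print(round(system, 4))
```

    Composing the subsystem reliabilities this way is the core computation an RBD tool automates for larger diagrams.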

  8. Reference Values for Maximal Inspiratory Pressure: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Isabela MB Sclauser Pessoa

    2014-01-01

    Full Text Available BACKGROUND: Maximal inspiratory pressure (MIP is the most commonly used measure to evaluate inspiratory muscle strength. Normative values for MIP vary significantly among studies, which may reflect differences in participant demographics and technique of MIP measurement.

  9. Classification of conformal representations induced from the maximal cuspidal parabolic

    Energy Technology Data Exchange (ETDEWEB)

    Dobrev, V. K., E-mail: dobrev@inrne.bas.bg [Scuola Internazionale Superiore di Studi Avanzati (Italy)

    2017-03-15

    In the present paper we continue the project of systematic construction of invariant differential operators on the example of representations of the conformal algebra induced from the maximal cuspidal parabolic.

  10. Maximizing Your Investment in Building Automation System Technology.

    Science.gov (United States)

    Darnell, Charles

    2001-01-01

    Discusses how organizational issues and system standardization can be important factors that determine an institution's ability to fully exploit contemporary building automation systems (BAS). Further presented is management strategy for maximizing BAS investments. (GR)

  11. Eccentric exercise decreases maximal insulin action in humans

    DEFF Research Database (Denmark)

    Asp, Svend; Daugaard, J R; Kristiansen, S

    1996-01-01

    subjects participated in two euglycaemic clamps, performed in random order. One clamp was preceded 2 days earlier by one-legged eccentric exercise (post-eccentric exercise clamp (PEC)) and one was without the prior exercise (control clamp (CC)). 2. During PEC the maximal insulin-stimulated glucose uptake...... for all three clamp steps used (P maximal activity of glycogen synthase was identical in the two thighs for all clamp steps. 3. The glucose infusion rate (GIR......) necessary to maintain euglycaemia during maximal insulin stimulation was lower during PEC compared with CC (15.7%, 81.3 +/- 3.2 vs. 96.4 +/- 8.8 mumol kg-1 min-1, P maximal...

  12. Maximal slicing of D-dimensional spherically symmetric vacuum spacetime

    International Nuclear Information System (INIS)

    Nakao, Ken-ichi; Abe, Hiroyuki; Yoshino, Hirotaka; Shibata, Masaru

    2009-01-01

    We study the foliation of a D-dimensional spherically symmetric black-hole spacetime with D≥5 by two kinds of one-parameter families of maximal hypersurfaces: a reflection-symmetric foliation with respect to the wormhole slot and a stationary foliation that has an infinitely long trumpetlike shape. As in the four-dimensional case, the foliations by the maximal hypersurfaces avoid the singularity irrespective of the dimensionality. This indicates that the maximal slicing condition will be useful for simulating higher-dimensional black-hole spacetimes in numerical relativity. For the case of D=5, we present analytic solutions of the intrinsic metric, the extrinsic curvature, the lapse function, and the shift vector for the foliation by the stationary maximal hypersurfaces. These data will be useful for checking five-dimensional numerical-relativity codes based on the moving puncture approach.

  13. ICTs and Urban Micro Enterprises : Maximizing Opportunities for ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ICTs and Urban Micro Enterprises : Maximizing Opportunities for Economic Development ... the use of ICTs in micro enterprises and their role in reducing poverty. ... in its approach to technological connectivity but bottom-up in relation to.

  14. Real-time topic-aware influence maximization using preprocessing.

    Science.gov (United States)

    Chen, Wei; Lin, Tian; Yang, Cheng

    2016-01-01

    Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes based on a certain influence diffusion model is maximized. Topic-aware influence diffusion models have been recently proposed to address the issue that influence between a pair of users is often topic-dependent, and that the information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.
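
    A minimal greedy influence-maximization sketch under the independent cascade model; the graph, edge probability, and Monte Carlo settings are invented, and the paper's topic-aware preprocessing is not reproduced here:

```python
import random

# Greedy seed selection under the independent cascade (IC) model on a toy
# directed graph; edge structure, probability, and trial count are assumed.

random.seed(1)
EDGES = {1: [2, 3], 2: [4], 3: [4, 5], 4: [6], 5: [6], 6: []}
P = 0.3                                  # assumed activation probability per edge

def simulate(seeds):
    """Run one IC cascade; return the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in EDGES[u]:
                if v not in active and random.random() < P:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def spread(seeds, trials=2000):
    """Monte Carlo estimate of the expected influence spread."""
    return sum(simulate(seeds) for _ in range(trials)) / trials

def greedy(k):
    """Repeatedly add the node with the largest estimated marginal spread."""
    seeds = []
    for _ in range(k):
        seeds.append(max((n for n in EDGES if n not in seeds),
                         key=lambda n: spread(seeds + [n])))
    return seeds

print(greedy(2))
```

    The repeated Monte Carlo estimation inside the greedy loop is exactly the online cost that the paper's preprocessing is designed to avoid redoing per topic mixture.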

  15. Nonadditive entropy maximization is inconsistent with Bayesian updating

    Science.gov (United States)

    Pressé, Steve

    2014-11-01

    The maximum entropy method—used to infer probabilistic models from data—is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  16. Sex differences in autonomic function following maximal exercise.

    Science.gov (United States)

    Kappus, Rebecca M; Ranadive, Sushant M; Yan, Huimin; Lane-Cordova, Abbi D; Cook, Marc D; Sun, Peng; Harvey, I Shevon; Wilund, Kenneth R; Woods, Jeffrey A; Fernhall, Bo

    2015-01-01

    Heart rate variability (HRV), blood pressure variability, (BPV) and heart rate recovery (HRR) are measures that provide insight regarding autonomic function. Maximal exercise can affect autonomic function, and it is unknown if there are sex differences in autonomic recovery following exercise. Therefore, the purpose of this study was to determine sex differences in several measures of autonomic function and the response following maximal exercise. Seventy-one (31 males and 40 females) healthy, nonsmoking, sedentary normotensive subjects between the ages of 18 and 35 underwent measurements of HRV and BPV at rest and following a maximal exercise bout. HRR was measured at minute one and two following maximal exercise. Males have significantly greater HRR following maximal exercise at both minute one and two; however, the significance between sexes was eliminated when controlling for VO2 peak. Males had significantly higher resting BPV-low-frequency (LF) values compared to females and did not significantly change following exercise, whereas females had significantly increased BPV-LF values following acute maximal exercise. Although males and females exhibited a significant decrease in both HRV-LF and HRV-high frequency (HF) with exercise, females had significantly higher HRV-HF values following exercise. Males had a significantly higher HRV-LF/HF ratio at rest; however, both males and females significantly increased their HRV-LF/HF ratio following exercise. Pre-menopausal females exhibit a cardioprotective autonomic profile compared to age-matched males due to lower resting sympathetic activity and faster vagal reactivation following maximal exercise. Acute maximal exercise is a sufficient autonomic stressor to demonstrate sex differences in the critical post-exercise recovery period.

  17. Power Converters Maximize Outputs Of Solar Cell Strings

    Science.gov (United States)

    Frederick, Martin E.; Jermakian, Joel B.

    1993-01-01

    Microprocessor-controlled dc-to-dc power converters devised to maximize power transferred from solar photovoltaic strings to storage batteries and other electrical loads. Converters help in utilizing large solar photovoltaic arrays most effectively with respect to cost, size, and weight. Main points of invention are: single controller used to control and optimize any number of "dumb" tracker units and strings independently; power maximized out of converters; and controller in system is microprocessor.
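
    The power-transfer idea in this record is essentially maximum-power-point tracking (MPPT). A minimal perturb-and-observe sketch, a generic textbook technique rather than the patented controller described above; the PV curve and all parameter values are made up for illustration:

    ```python
    def track_mpp(pv_power, v_start=20.0, dv=0.1, steps=200):
        """Perturb-and-observe MPPT: nudge the operating voltage and keep
        the perturbation direction whenever output power increases
        (a generic sketch, not the patented controller above)."""
        v = v_start
        p_prev = pv_power(v)
        direction = 1.0
        for _ in range(steps):
            v += direction * dv
            p = pv_power(v)
            if p < p_prev:           # power dropped: reverse the perturbation
                direction = -direction
            p_prev = p
        return v, p_prev

    # toy PV string power curve with a single maximum near 16.7 V
    # (illustrative only; real strings are measured, not assumed)
    pv = lambda v: max(0.0, v * (3.0 - 0.09 * v))   # P = V * I(V)
    v_opt, p_opt = track_mpp(pv)
    ```

    The tracker ends up oscillating in a small band around the true maximum, which is the characteristic behaviour (and limitation) of perturb-and-observe.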

  18. Maximally flat radiation patterns of a circular aperture

    Science.gov (United States)

    Minkovich, B. M.; Mints, M. Ia.

    1989-08-01

    The paper presents an explicit solution to the problems of maximizing the area utilization coefficient and of obtaining the best approximation (on the average) of a sectorial Pi-shaped radiation pattern of an antenna with a circular aperture when Butterworth conditions are imposed on the approximating pattern with the aim of flattening it. Constraints on the choice of admissible minimum and maximum antenna dimensions are determined which make possible the synthesis of maximally flat patterns with small sidelobes.
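
    The "Butterworth conditions" invoked here are the classic maximally flat constraints: the low-order derivatives of the response vanish at the design point. A small numeric illustration on the standard Butterworth low-pass prototype, offered as an analogy only, not as the paper's aperture synthesis:

    ```python
    def butterworth_mag2(f, fc=1.0, n=4):
        """Squared magnitude of an n-th order Butterworth response: the
        'maximally flat' prototype, whose first 2n-1 derivatives vanish
        at f = 0."""
        return 1.0 / (1.0 + (f / fc) ** (2 * n))

    # higher order buys a wider flat region before the same cutoff:
    # at 17% of cutoff a 4th-order response deviates from unity by
    # under 1e-6, while a 1st-order response already deviates by ~3%
    dev4 = 1.0 - butterworth_mag2(0.17)
    dev1 = 1.0 - butterworth_mag2(0.17, n=1)
    ```

    The same trade-off drives the antenna result: flattening constraints of higher order widen the flat region of the pattern but constrain the admissible aperture dimensions.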

  19. Design of optimal linear antennas with maximally flat radiation patterns

    Science.gov (United States)

    Minkovich, B. M.; Mints, M. Ia.

    1990-02-01

    The paper presents an explicit solution to the problem of maximizing the aperture area utilization coefficient and obtaining the best approximation in the mean of the sectorial U-shaped radiation pattern of a linear antenna, when Butterworth flattening constraints are imposed on the approximating pattern. Constraints are established on the choice of the smallest and largest antenna dimensions that make it possible to obtain maximally flat patterns having a low sidelobe level and free of ripple within the main lobe.

  20. No Mikheyev-Smirnov-Wolfenstein Effect in Maximal Mixing

    OpenAIRE

    Harrison, P. F.; Perkins, D. H.; Scott, W. G.

    1996-01-01

    We investigate the possible influence of the MSW effect on the expectations for the solar neutrino experiments in the maximal mixing scenario suggested by the atmospheric neutrino data. A direct numerical calculation of matter induced effects in the Sun shows that the naive vacuum predictions are left completely undisturbed in the particular case of maximal mixing, so that the MSW effect turns out to be unobservable. We give a qualitative explanation of this result.

  1. A fractional optimal control problem for maximizing advertising efficiency

    OpenAIRE

    Igor Bykadorov; Andrea Ellero; Stefania Funari; Elena Moretti

    2007-01-01

    We propose an optimal control problem to model the dynamics of the communication activity of a firm with the aim of maximizing its efficiency. We assume that the advertising effort undertaken by the firm contributes to increase the firm's goodwill and that the goodwill affects the firm's sales. The aim is to find the advertising policies in order to maximize the firm's efficiency index which is computed as the ratio between "outputs" and "inputs" properly weighted; the outputs are represented...
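
    The goodwill mechanism described (advertising effort builds goodwill, goodwill drives sales) is in the spirit of Nerlove-Arrow dynamics. A minimal Euler-integration sketch with made-up parameters, computing the output/input efficiency ratio for a constant effort level; the paper's actual fractional optimal control model is richer than this:

    ```python
    def simulate(a, delta=0.3, k=1.0, g0=0.0, T=10.0, dt=0.01):
        """Euler-integrate goodwill dynamics G' = a - delta*G and return
        (total_sales, total_spend) for a constant advertising effort a.
        All parameter values are hypothetical."""
        g, sales, spend, t = g0, 0.0, 0.0, 0.0
        while t < T:
            sales += k * g * dt          # sales accrue with goodwill
            spend += a * dt              # cost accrues with effort
            g += (a - delta * g) * dt    # goodwill gains effort, decays
            t += dt
        return sales, spend

    # efficiency index: "output" (sales) over "input" (spend); with this
    # linear toy model the ratio is independent of the constant effort
    s1, c1 = simulate(1.0)
    s2, c2 = simulate(2.0)
    ```

    That the ratio is effort-independent here is an artifact of linearity; the interesting optimal control structure appears once outputs and inputs are weighted and effort varies over time, as in the paper.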

  2. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    Science.gov (United States)

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
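
    The compute-then-augment idea can be sketched sequentially (ignoring the paper's parallelization) with the Tarjan-Yannakakis chordality test: start from a spanning tree, which is trivially chordal, and greedily keep each extra edge that preserves chordality. This is an illustrative sketch, not the authors' algorithm:

    ```python
    def is_chordal(adj):
        """Tarjan-Yannakakis test: run maximum cardinality search, then
        check that the reverse visit order is a perfect elimination
        ordering."""
        visited, weight, order = set(), {v: 0 for v in adj}, []
        for _ in range(len(adj)):
            v = max((u for u in adj if u not in visited),
                    key=lambda u: weight[u])
            visited.add(v)
            order.append(v)
            for w in adj[v]:
                if w not in visited:
                    weight[w] += 1
        order.reverse()                      # candidate elimination ordering
        pos = {v: i for i, v in enumerate(order)}
        for v in order:
            later = [u for u in adj[v] if pos[u] > pos[v]]
            if later:
                u = min(later, key=pos.get)  # first later-eliminated neighbour
                if any(w != u and w not in adj[u] for w in later):
                    return False
        return True

    def augment_chordal(n, tree_edges, extra_edges):
        """Start from a spanning tree (always chordal) and greedily add
        edges that keep the subgraph chordal -- a sequential sketch of
        the augmentation idea, not the paper's parallel algorithm."""
        adj = {v: set() for v in range(n)}
        for a, b in tree_edges:
            adj[a].add(b); adj[b].add(a)
        kept = list(tree_edges)
        for a, b in extra_edges:
            adj[a].add(b); adj[b].add(a)
            if is_chordal(adj):
                kept.append((a, b))
            else:
                adj[a].discard(b); adj[b].discard(a)
        return kept
    ```

    On a 4-cycle, the closing edge is rejected (it would create a chordless cycle), so the spanning tree itself is already a maximal chordal subgraph; on a complete graph every edge is kept.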

  3. On Maximally Dissipative Shock Waves in Nonlinear Elasticity

    OpenAIRE

    Knowles, James K.

    2010-01-01

    Shock waves in nonlinearly elastic solids are, in general, dissipative. We study the following question: among all plane shock waves that can propagate with a given speed in a given one-dimensional nonlinearly elastic bar, which one—if any—maximizes the rate of dissipation? We find that the answer to this question depends strongly on the qualitative nature of the stress-strain relation characteristic of the given material. When maximally dissipative shocks do occur, they propagate according t...

  4. Maximal near-field radiative heat transfer between two plates

    OpenAIRE

    Nefzaoui, Elyes; Ezzahri, Younès; Drevillon, Jérémie; Joulain, Karl

    2013-01-01

    Near-field radiative transfer is a promising way to significantly and simultaneously enhance both thermo-photovoltaic (TPV) devices power densities and efficiencies. A parametric study of the performance of Drude and Lorentz models in maximizing near-field radiative heat transfer between two semi-infinite planes separated by nanometric distances at room temperature is presented in this paper. Optimal parameters of these models that provide optical properties maximizing the r...

  5. Reliability in the utility computing era: Towards reliable Fog computing

    DEFF Research Database (Denmark)

    Madsen, Henrik; Burtschy, Bernard; Albeanu, G.

    2013-01-01

    This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm as a non-trivial extension of the Cloud is considered and the reliability of the networks of smart devices is discussed. Combining the reliability requirements of grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible.

  6. Enumerating all maximal frequent subtrees in collections of phylogenetic trees.

    Science.gov (United States)

    Deepak, Akshay; Fernández-Baca, David

    2014-01-01

    A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees.

  7. Softly Broken Lepton Numbers: an Approach to Maximal Neutrino Mixing

    International Nuclear Information System (INIS)

    Grimus, W.; Lavoura, L.

    2001-01-01

    We discuss models where the U(1) symmetries of lepton numbers are responsible for maximal neutrino mixing. We pay particular attention to an extension of the Standard Model (SM) with three right-handed neutrino singlets in which we require that the three lepton numbers L_e, L_μ, and L_τ be separately conserved in the Yukawa couplings, but assume that they are softly broken by the Majorana mass matrix M_R of the neutrino singlets. In this framework, where lepton-number breaking occurs at a scale much higher than the electroweak scale, deviations from family lepton number conservation are calculable, i.e., finite, and lepton mixing stems exclusively from M_R. We show that in this framework either maximal atmospheric neutrino mixing or maximal solar neutrino mixing or both can be imposed by invoking symmetries. In this way those maximal mixings are stable against radiative corrections. The model which achieves maximal (or nearly maximal) solar neutrino mixing assumes that there are two different scales in M_R and that the lepton number L̄ = L_e − L_μ − L_τ is conserved in between them. We work out the difference between this model and the conventional scenario where (approximate) L̄ invariance is imposed directly on the mass matrix of the light neutrinos. (author)

  8. Enumerating all maximal frequent subtrees in collections of phylogenetic trees

    Science.gov (United States)

    2014-01-01

    Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474

  9. RTE - Reliability report 2016

    International Nuclear Information System (INIS)

    2017-06-01

    Every year, RTE produces a reliability report for the past year. This document lays out the main factors that affected the electrical power system's operational reliability in 2016 and the initiatives currently under way intended to ensure its reliability in the future. Within a context of the energy transition, changes to the European interconnected network mean that RTE has to adapt on an on-going basis. These changes include the increase in the share of renewables injecting an intermittent power supply into networks, resulting in a need for flexibility, and a diversification in the numbers of stakeholders operating in the energy sector and changes in the ways in which they behave. These changes are dramatically changing the structure of the power system of tomorrow and the way in which it will operate - particularly the way in which voltage and frequency are controlled, as well as the distribution of flows, the power system's stability, the level of reserves needed to ensure supply-demand balance, network studies, assets' operating and control rules, the tools used and the expertise of operators. The results obtained in 2016 are evidence of a globally satisfactory level of reliability for RTE's operations in somewhat demanding circumstances: more complex supply-demand balance management, cross-border schedules at interconnections indicating operation that is closer to its limits and - most noteworthy - having to manage a cold spell just as several nuclear power plants had been shut down. 
In a drive to keep pace with the changes expected to occur in these circumstances, RTE implemented numerous initiatives to ensure high levels of reliability: - maintaining investment levels of euro 1.5 billion per year; - increasing cross-zonal capacity at borders with our neighbouring countries, thus bolstering the security of our electricity supply; - implementing new mechanisms (demand response, capacity mechanism, interruptibility, etc.); - involvement in tests or projects

  10. Reliability on ISS Talk Outline

    Science.gov (United States)

    Misiora, Mike

    2015-01-01

    1. Overview of ISS 2. Space Environment and its effects a. Radiation b. Microgravity 3. How we ensure reliability a. Requirements b. Component Selection i. Note: I plan to stay away from talk about Rad Hardened components and talk about why we use older processors because they are less susceptible to SEUs. c. Testing d. Redundancy / Failure Tolerance e. Sparing strategies 4. Operational Examples a. Multiple MDM Failures on 6A due to hard drive failure. In general, my plan is to only talk about data that is currently available via normal internet sources to ensure that I stay away from any topics that would be Export Controlled, ITAR, or NDA-controlled. The operational example has been well-reported on in the media and those are the details that I plan to cover. Additionally, I am not planning on using any slides or showing any photos during the talk.

  11. Waste package reliability analysis

    International Nuclear Information System (INIS)

    Pescatore, C.; Sastre, C.

    1983-01-01

    Proof of future performance of a complex system such as a high-level nuclear waste package over a period of hundreds to thousands of years cannot be had in the ordinary sense of the word. The general method of probabilistic reliability analysis could provide an acceptable framework to identify, organize, and convey the information necessary to satisfy the criterion of reasonable assurance of waste package performance according to the regulatory requirements set forth in 10 CFR 60. General principles which may be used to evaluate the qualitative and quantitative reliability of a waste package design are indicated and illustrated with a sample calculation of a repository concept in basalt. 8 references, 1 table

  12. Accelerator reliability workshop

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, L; Duru, Ph; Koch, J M; Revol, J L; Van Vaerenbergh, P; Volpe, A M; Clugnet, K; Dely, A; Goodhew, D

    2002-07-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation sources projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop.

  13. Human Reliability Program Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Landers, John; Rogers, Erin; Gerke, Gretchen

    2014-05-18

    A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (4) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat to include HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  14. Reliability and construction control

    Directory of Open Access Journals (Sweden)

    Sherif S. AbdelSalam

    2016-06-01

    Full Text Available The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using the EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that determines the variation between the pile nominal capacities calculated using static versus dynamic formulae. Among the major outcomes, the lowest coefficient of variation is associated with Davisson’s criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT-Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor were also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.

  15. Improving Power Converter Reliability

    DEFF Research Database (Denmark)

    Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon

    2014-01-01

    The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high- and low-voltage sides of a half-bridge IGBT separately in every fundamental period. The voltage is measured in a wind power converter at a low fundamental frequency. The test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation.

  16. Accelerator reliability workshop

    International Nuclear Information System (INIS)

    Hardy, L.; Duru, Ph.; Koch, J.M.; Revol, J.L.; Van Vaerenbergh, P.; Volpe, A.M.; Clugnet, K.; Dely, A.; Goodhew, D.

    2002-01-01

    About 80 experts attended this workshop, which brought together all accelerator communities: accelerator driven systems, X-ray sources, medical and industrial accelerators, spallation sources projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides but not the proceedings of the workshop

  17. Safety and reliability assessment

    International Nuclear Information System (INIS)

    1979-01-01

    This report contains the papers delivered at the course on safety and reliability assessment held at the CSIR Conference Centre, Scientia, Pretoria. The following topics were discussed: safety standards; licensing; biological effects of radiation; what is a PWR; safety principles in the design of a nuclear reactor; radio-release analysis; quality assurance; the staffing, organisation and training for a nuclear power plant project; event trees, fault trees and probability; Automatic Protective Systems; sources of failure-rate data; interpretation of failure data; synthesis and reliability; quantification of human error in man-machine systems; dispersion of noxious substances through the atmosphere; criticality aspects of enrichment and recovery plants; and risk and hazard analysis. Extensive examples are given as well as case studies

  18. Cross Layer Design for Optimizing Transmission Reliability, Energy Efficiency, and Lifetime in Body Sensor Networks.

    Science.gov (United States)

    Chen, Xi; Xu, Yixuan; Liu, Anfeng

    2017-04-19

    High transmission reliability, energy efficiency, and long lifetime are pivotal issues for wireless body area networks (WBANs). However, these performance metrics are not independent of each other, making it hard to obtain overall improvements through optimizing one single aspect. Therefore, a Cross Layer Design Optimal (CLDO) scheme is proposed to simultaneously optimize transmission reliability, energy efficiency, and lifetime of WBANs from several layers. Firstly, due to the fact that the transmission power of nodes directly influences the reliability of links, the optimized transmission power of different nodes is deduced, which is able to maximize energy efficiency in theory under the premise that requirements on delay and jitter are fulfilled. Secondly, a relay decision algorithm is proposed to choose optimized relay nodes. Using this algorithm, nodes will choose relay nodes that ensure a balance of network energy consumption, provided that all nodes transmit with optimized transmission power and the same packet size. Thirdly, the energy consumption of nodes is still unbalanced even with optimized transmission power because of their different locations in the topology of the network. In addition, packet size also has an impact on final performance metrics. Therefore, a synthesized cross layer method for optimization is proposed. With this method, the transmission power of nodes with more residual energy will be enhanced while suitable packet size is determined for different links in the network, leading to further improvements in the WBAN system. Both our comprehensive theoretical analysis and experimental results indicate that the performance of our proposed scheme is better than reported in previous studies. Relative to the relay selection and power control game (RSPCG) scheme, the CLDO scheme can enhance transmission reliability by more than 44.6% and prolong the lifetime by as much as 33.2%.

  19. Reliability of Circumplex Axes

    Directory of Open Access Journals (Sweden)

    Micha Strack

    2013-06-01

    Full Text Available We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples, grouped by self-report, other report, and metaperception assessments. The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale-specificity for 1% to 28%. Reliability estimates varied considerably from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificities but otherwise works effectively. Contemporary circumplex evaluations such as Tracey’s RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.

  20. The cost of reliability

    International Nuclear Information System (INIS)

    Ilic, M.

    1998-01-01

    In this article the restructuring process under way in the US power industry is revisited from the point of view of transmission system provision and reliability. While the cost of reliability was once rolled into the average cost of electricity to all, it is not so obvious how this cost is managed in the new industry. A new MIT approach to transmission pricing is suggested here as a possible solution.

  1. Software reliability studies

    Science.gov (United States)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  2. Energy efficiency and SINR maximization beamformers for cognitive radio utilizing sensing information

    KAUST Repository

    Alabbasi, Abdulrahman

    2014-06-01

    In this paper we consider a cognitive radio multi-input multi-output environment in which we adapt our beamformer to maximize both energy efficiency and signal to interference plus noise ratio (SINR) metrics. Our design considers an underlaying communication using adaptive beamforming schemes combined with the sensing information to achieve an optimal energy efficient system. The proposed schemes maximize the energy efficiency and SINR metrics subject to cognitive radio and quality of service constraints. Since the energy efficiency optimization problem is not convex, we transform it into a standard semi-definite programming (SDP) form to guarantee a global optimal solution. An analytical solution is provided for one scheme, while the other is left in standard SDP form. Selected numerical results are used to quantify the impact of the sensing information on the proposed schemes compared to the benchmark ones.
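
    Energy efficiency is a ratio objective (rate per unit power), which is why the plain problem is non-convex. Besides the SDP transformation used in the paper, a standard tool for such fractional programs is Dinkelbach's iteration; a scalar toy example with made-up channel gain and circuit-power values, not the paper's beamforming problem:

    ```python
    import math

    def dinkelbach(rate, cost, grid, tol=1e-9, max_iter=100):
        """Maximize rate(p)/cost(p) over candidate points `grid` by
        Dinkelbach's method: repeatedly solve the parametric problem
        max rate(p) - lam*cost(p), updating lam = rate(p)/cost(p),
        until the parametric optimum reaches zero."""
        lam = 0.0
        for _ in range(max_iter):
            p = max(grid, key=lambda x: rate(x) - lam * cost(x))
            f = rate(p) - lam * cost(p)
            lam = rate(p) / cost(p)
            if abs(f) < tol:
                break
        return p, lam

    # toy energy-efficiency curve: throughput log2(1 + g*p) per unit of
    # transmit-plus-circuit power (g and p_c are illustrative values)
    g, p_c = 4.0, 0.5
    rate = lambda p: math.log2(1.0 + g * p)
    cost = lambda p: p + p_c
    grid = [0.001 * k for k in range(1, 5001)]   # p in (0, 5]
    p_star, ee_star = dinkelbach(rate, cost, grid)
    ```

    Each parametric subproblem is concave even though the ratio itself is not, which is the same structural trick the SDP reformulation exploits in matrix form.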

  3. Throughput maximization for buffer-aided hybrid half-/full-duplex relaying with self-interference

    KAUST Repository

    Khafagy, Mohammad Galal

    2015-06-01

    In this work, we consider a two-hop cooperative setting where a source communicates with a destination through an intermediate relay node with a buffer. Unlike the existing body of work on buffer-aided half-duplex relaying, we consider a hybrid half-/full-duplex relaying scenario with loopback interference in the full-duplex mode. Depending on the channel outage and buffer states that are assumed available at the transmitters, the source and relay may either transmit simultaneously or revert to orthogonal transmission. Specifically, a joint source/relay scheduling and relaying mode selection mechanism is proposed to maximize the end-to-end throughput. The throughput maximization problem is converted to a linear program where the exact global optimal solution is efficiently obtained via standard convex/linear numerical optimization tools. Finally, the theoretical findings are corroborated with event-based simulations to provide the necessary performance validation.
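
    The mode-selection trade-off can be illustrated with a toy two-hop example: under half-duplex, throughput maximization over the time-sharing fraction is a tiny linear program; under full-duplex, loopback self-interference degrades the source-relay rate. A brute-force sketch with hypothetical link capacities, standing in for the LP solver used in the paper:

    ```python
    def best_hybrid_throughput(c_sr, c_rd, c_sr_fd, steps=10000):
        """Compare half-duplex time sharing (an LP in the time fraction
        tau) against full-duplex operation whose S->R rate c_sr_fd is
        degraded by loopback self-interference. Brute force over tau
        stands in for a proper LP solver."""
        hd = max(min(t / steps * c_sr, (1 - t / steps) * c_rd)
                 for t in range(steps + 1))
        fd = min(c_sr_fd, c_rd)          # both hops active simultaneously
        return max(hd, fd), ("FD" if fd > hd else "HD")

    # hypothetical link capacities (bits/s/Hz): mild self-interference
    # leaves full-duplex the better mode here
    thr, mode = best_hybrid_throughput(c_sr=2.0, c_rd=1.0, c_sr_fd=1.2)
    ```

    With heavy self-interference (a much smaller c_sr_fd) the optimizer reverts to half-duplex time sharing, mirroring the hybrid mode selection described above.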

  4. Investment in new product reliability

    International Nuclear Information System (INIS)

    Murthy, D.N.P.; Rausand, M.; Virtanen, S.

    2009-01-01

    Product reliability is of great importance to both manufacturers and customers. Building reliability into a new product is costly, but the consequences of inadequate product reliability can be costlier. This implies that manufacturers need to decide on the optimal investment in new product reliability by achieving a suitable trade-off between the two costs. This paper develops a framework and proposes an approach to help manufacturers decide on the investment in new product reliability.

  5. AN APPRAISAL OF INSTRUCTIONAL UNITS TO ENHANCE STUDENT UNDERSTANDING OF PROFIT-MAXIMIZING PRINCIPLES. RESEARCH SERIES IN AGRICULTURAL EDUCATION.

    Science.gov (United States)

    BARKER, RICHARD L.; BENDER, RALPH E.

    TWENTY-TWO SELECTED OHIO VOCATIONAL AGRICULTURE TEACHERS AND 262 JUNIOR AND SENIOR VOCATIONAL AGRICULTURE STUDENTS PARTICIPATED IN A STUDY TO MEASURE THE RELATIVE EFFECTIVENESS OF NEWLY DEVELOPED INSTRUCTIONAL UNITS DESIGNED TO ENHANCE STUDENT UNDERSTANDING OF PROFIT-MAXIMIZING PRINCIPLES IN FARM MANAGEMENT. FARM MANAGEMENT WAS TAUGHT IN THE…

  6. Reliability of application of inspection procedures

    Energy Technology Data Exchange (ETDEWEB)

    Murgatroyd, R A

    1988-12-31

    This document deals with the reliability of application of inspection procedures. A method is described to ensure that the inspection of defects identified through fracture mechanics is reliable. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the likelihood of error. It appears essential that inspection procedures be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors be optimised. (TEC). 3 refs.

  7. Human reliability assessment and probabilistic risk assessment

    International Nuclear Information System (INIS)

    Embrey, D.E.; Lucas, D.A.

    1989-01-01

    Human reliability assessment (HRA) is used within Probabilistic Risk Assessment (PRA) to identify the human errors (both omission and commission) which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. A variety of HRA techniques exist, and selecting an appropriate one is often difficult. This paper reviews a number of available HRA techniques and discusses their strengths and weaknesses. The techniques reviewed include: decompositional methods, time-reliability curves and systematic expert judgement techniques. (orig.)

  8. Reliability of application of inspection procedures

    International Nuclear Information System (INIS)

    Murgatroyd, R.A.

    1988-01-01

    This document deals with the reliability of application of inspection procedures. A method is described to ensure that the inspection of defects identified through fracture mechanics is reliable. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector to estimate the likelihood of error. It appears essential that inspection procedures be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors be optimised. (TEC)

  9. Support to NPP operation and maintenance technology risk management. A concept for establishing criteria and procedure for the selection of components with respect to their importance. Stage 3.1. NPP equipment reliability management

    International Nuclear Information System (INIS)

    Stvan, F.

    2003-12-01

    A proposal was developed for a procedure using the deterministic approach to the assessment of components from the operational point of view and other aspects that cannot be directly and readily quantified and of the probabilistic approach for the assessment of component importance with respect to nuclear safety. A specific PSA study performed for the Dukovany NPP was employed. The structure of the report is as follows: (1) Aspects of component selection; (2) Introductory procedure; (3) Criteria for the selection of components with respect to their importance; (4) Assessing the priority of use of the assets - effect on production, safety, and profit; (5) Assessment of the risk aspect of the assets - effect on major processes; (6) Assessment of the level of use of the assets; (7) Assessment of the structure of the assets - optimal structure for maintenance in relation to the major processes; (8) Assessment of the criteria for estimating the importance of the components; (9) Probabilistic assessment of importance from the safety aspect by means of PSA; and (10) Deterministic assessment of importance from the safety aspect. (P.A.)

  10. Laboratory and Field-Based Evaluation of Short-Term Effort with Maximal Intensity in Individuals with Intellectual Disabilities

    Directory of Open Access Journals (Sweden)

    Lencse-Mucha Judit

    2015-12-01

    Results of previous studies have not clearly indicated which tests should be used to assess short-term efforts of people with intellectual disabilities. Thus, the aim of the present study was to evaluate laboratory and field-based tests of short-term effort with maximal intensity in subjects with intellectual disabilities. Twenty-four people with intellectual disability who trained in soccer participated in this study. The 30 s Wingate test and, additionally, an 8 s test with maximum intensity were performed on a bicycle ergometer. The fatigue index, maximal and mean power, and relative maximal and relative mean power were measured. Overall, nine field-based tests were conducted: 5, 10 and 20 m sprints, a 20 m shuttle run, a seated medicine ball throw, a bent arm hang test, a standing broad jump, sit-ups and a hand grip test. The reliability of the 30 s and 8 s Wingate tests for subjects with intellectual disability was confirmed. A significant correlation of moderate strength (r > 0.4) was observed for mean power between the 30 s and 8 s tests on the bicycle ergometer. Moreover, significant correlations were found between the results of laboratory tests and field tests such as the 20 m sprint, the 20 m shuttle run, the standing long jump and the medicine ball throw; the correlation was strongest for the medicine ball throw. The 30 s Wingate test is a reliable test for assessing maximal effort in subjects with intellectual disability. The results also confirmed that the 8 s test on a bicycle ergometer correlated moderately with the 30 s Wingate test in this population; further investigation is needed to determine whether the 8 s test can serve as an alternative to the 30 s test. The non-laboratory tests could be used to indirectly assess performance in short-term efforts with maximal intensity.
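    The lab-field comparisons above rest on Pearson correlations between paired test scores. A minimal sketch with made-up scores (not the study's data) shows the computation:

```python
# Pearson correlation between a laboratory measure (e.g., Wingate mean power)
# and a field test (e.g., medicine ball throw). Paired scores are invented
# illustrations, not data from the study.
import numpy as np

wingate_mean_power_w = np.array([420.0, 515.0, 480.0, 390.0, 560.0, 445.0, 505.0, 470.0])
ball_throw_m         = np.array([4.1, 5.0, 4.6, 3.9, 5.4, 4.3, 4.9, 4.5])

r = np.corrcoef(wingate_mean_power_w, ball_throw_m)[0, 1]
print(r > 0.4)  # True — meets the study's "moderate" correlation criterion
```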

  11. Laboratory and Field-Based Evaluation of Short-Term Effort with Maximal Intensity in Individuals with Intellectual Disabilities

    Science.gov (United States)

    Lencse-Mucha, Judit; Molik, Bartosz; Marszałek, Jolanta; Kaźmierska-Kowalewska, Kalina; Ogonowska-Słodownik, Anna

    2015-01-01

    Results of previous studies have not clearly indicated which tests should be used to assess short-term efforts of people with intellectual disabilities. Thus, the aim of the present study was to evaluate laboratory and field-based tests of short-term effort with maximal intensity in subjects with intellectual disabilities. Twenty-four people with intellectual disability who trained in soccer participated in this study. The 30 s Wingate test and, additionally, an 8 s test with maximum intensity were performed on a bicycle ergometer. The fatigue index, maximal and mean power, and relative maximal and relative mean power were measured. Overall, nine field-based tests were conducted: 5, 10 and 20 m sprints, a 20 m shuttle run, a seated medicine ball throw, a bent arm hang test, a standing broad jump, sit-ups and a hand grip test. The reliability of the 30 s and 8 s Wingate tests for subjects with intellectual disability was confirmed. A significant correlation of moderate strength (r > 0.4) was observed for mean power between the 30 s and 8 s tests on the bicycle ergometer. Moreover, significant correlations were found between the results of laboratory tests and field tests such as the 20 m sprint, the 20 m shuttle run, the standing long jump and the medicine ball throw; the correlation was strongest for the medicine ball throw. The 30 s Wingate test is a reliable test for assessing maximal effort in subjects with intellectual disability. The results also confirmed that the 8 s test on a bicycle ergometer correlated moderately with the 30 s Wingate test in this population; further investigation is needed to determine whether the 8 s test can serve as an alternative to the 30 s test. The non-laboratory tests could be used to indirectly assess performance in short-term efforts with maximal intensity. PMID:26834874

  12. Disk Density Tuning of a Maximal Random Packing.

    Science.gov (United States)

    Ebeida, Mohamed S; Rushdi, Ahmad A; Awad, Muhammad A; Mahmoud, Ahmed H; Yan, Dong-Ming; English, Shawn A; Owens, John D; Bajaj, Chandrajit L; Mitchell, Scott A

    2016-08-01

    We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.
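    As context for the framework above, a Maximal Poisson-disk Sampling can be approximated by naive dart throwing. This sketch produces only the kind of conflict-free input such a tuning framework would start from; it does not implement the relocate/inject/eject operations, and the radius and trial count are arbitrary.

```python
# Naive dart-throwing approximation of Maximal Poisson-disk Sampling (MPS)
# on the unit square with fixed radius r: a candidate point is accepted only
# if no previously accepted point lies within distance r.
import random

def dart_throw_mps(r, trials=20000, seed=1):
    rng = random.Random(seed)
    pts = []
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        # Conflict check against all accepted points (squared distances).
        if all((x - px) ** 2 + (y - py) ** 2 >= r * r for px, py in pts):
            pts.append((x, y))
    return pts

pts = dart_throw_mps(r=0.1)
# Conflict-free by construction: every pair is at least r apart.
print(all((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 >= 0.01
          for i, a in enumerate(pts) for b in pts[:i]))
```

    With enough trials the packing approaches maximality (no room left for another disk), which is the precondition the tuning operations rely on.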

  13. Anatomia microcirúrgica do hipocampo na Amígdalo-hipocampectomia seletiva sob a perspectiva da técnica de Niemeyer e método pré-operatório para maximizar a corticotomia Hippocampal microsurgical anatomy regarding the selective amygdalohippocampectomy in the Niemeyer’s technique perspective and preoperative method to maximize the corticotomy

    Directory of Open Access Journals (Sweden)

    Gustavo Rassier Isolan

    2007-12-01

    Knowledge of the microsurgical anatomy of the hippocampus is of fundamental importance in temporal lobe epilepsy surgery. One of the most widely used techniques in epilepsy surgery is Niemeyer's technique. PURPOSE: To describe the anatomy of the hippocampus in detail and to present a technique in which preoperative anatomical landmarks visualized on MRI are used to guide the corticotomy. METHOD: Twenty brain hemispheres and eight cadaveric heads were used for microsurgical anatomical dissection of the temporal lobe and hippocampus, to identify and describe the main hippocampal structures. Thirty-two patients with drug-resistant temporal lobe epilepsy who underwent selective amygdalohippocampectomy by Niemeyer's technique were studied prospectively; three anatomical parameters were measured on preoperative MRI and transferred to the surgical procedure. RESULTS: The hippocampus was divided into head, body and tail, and its microsurgical anatomy is described in detail. The acquired measurements are presented and discussed. CONCLUSION: The complex anatomy of the hippocampus can be understood three-dimensionally during microsurgical dissection. The preoperative measurements proved to be useful anatomical guides for the corticotomy in Niemeyer's technique.

  14. Reliability analysis using network simulation

    International Nuclear Information System (INIS)

    Engi, D.

    1985-01-01

    The models that can be used to provide estimates of the reliability of nuclear power systems operate at many different levels of sophistication. The least sophisticated models treat failure processes that entail only time-independent phenomena (such as demand failure). More advanced models treat processes that also include time-dependent phenomena such as run failure and possibly repair. However, many of these dynamic models are deficient in some respects, either because they disregard the time-dependent phenomena that cannot be expressed in closed-form analytic terms or because they treat these phenomena in quasi-static terms. The next level of modeling requires a dynamic approach that incorporates procedures for treating all significant time-dependent phenomena, including cases where these phenomena are conditionally linked or characterized by arbitrarily selected probability distributions. The required level of sophistication is provided by a dynamic Monte Carlo modeling approach. One computer code that uses such an approach is Q-GERT (Graphical Evaluation and Review Technique - with Queueing), and the present study has demonstrated the feasibility of using Q-GERT for modeling time-dependent, unconditionally and conditionally linked phenomena that are characterized by arbitrarily selected probability distributions.
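    The dynamic Monte Carlo idea described above, combining a time-independent demand failure with a time-dependent run failure, can be illustrated without Q-GERT. A minimal sketch, with invented Weibull and demand-failure parameters (not from the study):

```python
# Monte Carlo estimate of mission reliability for a single train subject to
# (a) a time-independent demand failure and (b) a time-dependent run failure
# with a Weibull time-to-failure law. All parameters are illustrative only.
import math
import random

def mission_reliability(p_demand=0.02, shape=1.5, scale=5000.0,
                        mission_h=1000.0, n=200_000, seed=7):
    rng = random.Random(seed)
    ok = 0
    for _ in range(n):
        if rng.random() < p_demand:          # failure on demand (time-independent)
            continue
        # Weibull time-to-failure via inverse-CDF sampling; 1 - u lies in (0, 1].
        t_fail = scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
        if t_fail > mission_h:               # survived the run phase
            ok += 1
    return ok / n

est = mission_reliability()
exact = (1 - 0.02) * math.exp(-(1000.0 / 5000.0) ** 1.5)  # closed form for this toy case
print(abs(est - exact) < 0.01)  # True — simulation matches the analytic result
```

    The value of the simulation approach is precisely that it still works when the closed form used here for checking is unavailable, e.g. under conditionally linked failures or repair.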

  15. Towards the production of reliable quantitative microbiological data for risk assessment: Direct quantification of Campylobacter in naturally infected chicken fecal samples using selective culture and real-time PCR

    DEFF Research Database (Denmark)

    Garcia Clavero, Ana Belén; Vigre, Håkan; Josefsen, Mathilde Hasseldam

    2015-01-01

    … and for the evaluation of control strategies implemented in poultry production. The aim of this study was to compare estimates of the numbers of Campylobacter spp. in naturally infected chicken fecal samples obtained using direct quantification by selective culture and by real-time PCR. Absolute quantification of Campylobacter by real-time PCR was performed using standard curves designed for two different DNA extraction methods: Easy-DNA™ Kit from Invitrogen (Easy-DNA) and NucliSENS® MiniMAG® from bioMérieux (MiniMAG). Results indicated that the estimation of the numbers of Campylobacter present in chicken fecal samples … Although there were differences in terms of estimates of Campylobacter numbers between the methods and samples, the differences between culture and real-time PCR were not statistically significant for most of the samples used in this study.
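    Absolute quantification against a real-time PCR standard curve, as used above, inverts the linear relation between quantification cycle (Cq) and log10 copy number. A hedged sketch with textbook-style curve parameters; the study's own standard curves are not reproduced here:

```python
# Recover copy number from a Cq value using a fitted standard curve:
# Cq = intercept + slope * log10(copies), so copies = 10**((Cq - intercept)/slope).
# slope ≈ -3.32 corresponds to roughly 100% amplification efficiency; both
# parameters here are generic illustrations, not the study's fitted values.
def copies_from_cq(cq, slope=-3.32, intercept=38.0):
    return 10 ** ((cq - intercept) / slope)

print(round(copies_from_cq(28.04)))  # 1000 — about 10 cycles below the intercept
```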

  16. Quantitative metal magnetic memory reliability modeling for welded joints

    Science.gov (United States)

    Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng

    2016-03-01

    Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and complicate quantitative evaluation. To advance quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Q235 steel welded specimens were tested along longitudinal and horizontal lines with a TSC-2M-8 instrument during tensile fatigue experiments, and X-ray testing was carried out synchronously to verify the MMM results. MMM testing was found to detect hidden cracks earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of MMM data, the statistical behavior of K_vs was investigated and found to obey a Gaussian distribution, making K_vs a suitable MMM parameter for establishing a reliability model of welded joints. Finally, a quantitative MMM reliability model is presented, based on improved stress-strength interference theory. The reliability degree R gradually decreases as the residual life ratio T decreases, and the maximal error between the predicted reliability degree R_1 and the verification reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
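    For Gaussian-distributed stress and strength, classical stress-strength interference gives the reliability degree in closed form: R = Φ((μ_strength − μ_stress)/√(σ_strength² + σ_stress²)). A sketch with invented parameters, not the paper's improved model or its fitted K_vs statistics:

```python
# Classical stress-strength interference for Gaussian stress and strength:
# reliability is the probability that strength exceeds stress.
import math

def reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative numbers only: mean strength 600 (sd 40) vs mean stress 450 (sd 30).
print(round(reliability(600.0, 40.0, 450.0, 30.0), 4))  # 0.9987, i.e. Phi(3.0)
```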

  17. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization.

    Science.gov (United States)

    Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for…
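    The choice strategy metric described above contrasts two information types. A toy sketch of how each would be computed for a single certain-amount-versus-gamble trial; the payoff numbers are invented, not stimuli from the study:

```python
# For a choice between a certain amount and a gamble, "maximizing" information
# is the ratio of the two options' expected values, while "satisficing"
# information is simply the gamble's win probability, ignoring amounts.
def choice_info(certain, win_amount, p_win):
    ev_gamble = p_win * win_amount
    maximizing = ev_gamble / certain   # > 1 favors the gamble on expected value
    satisficing = p_win                # probability of winning
    return maximizing, satisficing

m, s = choice_info(certain=20.0, win_amount=50.0, p_win=0.5)
print(m, s)  # 1.25 0.5
```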

  18. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    Directory of Open Access Journals (Sweden)

    Yoanna Arlina Kurnianingsih

    2015-05-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61 to 80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic…

  19. Nuclear performance and reliability

    International Nuclear Information System (INIS)

    Rothwell, G.

    1993-01-01

    There has been a significant improvement in nuclear power plant performance, due largely to a decline in the forced outage rate and a dramatic drop in the average number of forced outages per fuel cycle. If fewer forced outages are a sign of improved safety, nuclear power plants have become safer and more productive over time. To encourage further increases in performance, regulatory incentive schemes should reward reactor operators for improved reliability and safety, as well as for improved performance.

  20. [How Reliable is Neuronavigation?].

    Science.gov (United States)

    Stieglitz, Lennart Henning

    2016-02-17

    Neuronavigation plays a central role in modern neurosurgery. It allows instruments and three-dimensional image data to be visualized intraoperatively and supports spatial orientation, thereby reducing surgical risks and speeding up complex surgical procedures. The growing availability and importance of neuronavigation make it essential to understand its reliability and accuracy. Various factors may influence accuracy unnoticed during surgery and mislead the surgeon. Besides the best possible optimization of the systems themselves, a good knowledge of their weaknesses is mandatory for every neurosurgeon.