Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generation is a necessary feature appear to have reduced their signal variability by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to increase the reliability of the system arbitrarily. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
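The reliability result above has a simple quantitative face: the completion time of an irreversible n-state chain with equal rates is Erlang-distributed, so its coefficient of variation is 1/sqrt(n) and timing noise shrinks as states are added. A minimal simulation sketch (not the authors' code; the rate and trial count are arbitrary choices):

```python
import random
import statistics

def chain_completion_times(n_states, rate=1.0, trials=20000, seed=0):
    """Total transit time through an irreversible linear chain of n_states
    identical exponential stages (i.e. samples from an Erlang distribution)."""
    rng = random.Random(seed)
    return [sum(rng.expovariate(rate) for _ in range(n_states))
            for _ in range(trials)]

def coefficient_of_variation(samples):
    return statistics.stdev(samples) / statistics.mean(samples)

# For n equal-rate irreversible stages the CV is 1/sqrt(n): timing noise,
# and hence signal variability, shrinks as states are added.
for n in (1, 4, 16):
    print(n, round(coefficient_of_variation(chain_completion_times(n)), 3))
```

The simulated CVs track 1/sqrt(n) closely, which is the mechanism behind the paper's claim that longer irreversible chains are more reliable.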
Optimal bounded control for maximizing reliability of Duhem hysteretic systems
Ming XU; Xiaoling JIN; Yong WANG; Zhilong HUANG
2015-01-01
The optimal bounded control of stochastically excited systems with Duhem hysteretic components for maximizing system reliability is investigated. The Duhem hysteretic force is transformed to energy-dependent damping and stiffness by the energy dissipation balance technique, and the controlled system is thereby transformed to an equivalent non-hysteretic system. Stochastic averaging is then implemented to obtain the Itô stochastic equation associated with the total energy of the vibrating system, appropriate for evaluating system responses. Dynamical programming equations for maximizing system reliability are formulated by the dynamical programming principle, and the optimal bounded control is derived from the maximization condition in the dynamical programming equation. Finally, the conditional reliability function and mean time of first-passage failure of the optimally controlled Duhem systems are numerically solved from the Kolmogorov equations. The proposed procedure is illustrated with a representative example.
Natural selection and the maximization of fitness.
Birch, Jonathan
2016-08-01
The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits.
Reliability and validity of the maximal anaerobic running test.
Nummela, A; Alberts, M; Rijntjes, R P; Luhtanen, P; Rusko, H
1996-07-01
Physically active men (n = 13) twice performed the Maximal Anaerobic Running Test (MART) on a treadmill and once the Wingate Anaerobic Test (WAnT) on a cycle ergometer. The MART consisted of n × 20-s runs with 100-s recovery between the runs. The speed of the first run was 14.6 km·h-1 at an inclination of 4 degrees; thereafter, the speed was increased by 1.37 km·h-1 every run until exhaustion. During all tests oxygen uptake was measured breath-by-breath, and blood samples were taken from the fingertip 40 s after each run to determine the lactate concentration (BLa). Power at submaximal BLa levels and maximal power (P5mM, P10mM, and Pmax, respectively) were calculated, with P expressed as the oxygen demand of running according to the American College of Sports Medicine equation. In the MART the Pmax was 108 ml·kg-1·min-1 and peak BLa was 15.6 mM. The test-retest reliability for the power indices in the MART was r = 0.92; it was concluded that the MART and the cycle ergometer test measure slightly different qualities.
Maximizing crossbred performance through purebred genomic selection
Esfandyari, Hadi; Sørensen, Anders Christian; Bijma, Pieter
2015-01-01
Background: In livestock production, many animals are crossbred, with two distinct advantages: heterosis and breed complementarity. Genomic selection (GS) can be used to select purebred parental lines for crossbred performance (CP). Dominance being the likely genetic basis of heterosis, explicitly including dominance in the GS model may be beneficial when selecting purebred animals for CP based on purebred phenotypic and genotypic information. A second objective was to compare the use of two separate pure-line reference populations to that of a single reference population combining both pure lines. These objectives were investigated under two conditions, i.e. either a low or a high correlation of linkage-disequilibrium (LD) phase between the pure lines. Results: The gain in CP was higher when parental lines were selected for CP rather than for purebred performance, both with a low and a high correlation of LD phase.
The Negative Consequences of Maximizing in Friendship Selection.
Newman, David B; Schug, Joanna; Yuki, Masaki; Yamada, Junko; Nezlek, John B
2017-02-27
Previous studies have shown that the maximizing orientation, reflecting a motivation to select the best option among a given set of choices, is associated with various negative psychological outcomes. In the present studies, we examined whether these relationships extend to friendship selection and how the number of options for friends moderated these effects. Across 5 studies, maximizing in selecting friends was negatively related to life satisfaction, positive affect, and self-esteem, and was positively related to negative affect and regret. In Study 1, a maximizing in selecting friends scale was created, and regret mediated the relationships between maximizing and well-being. In a naturalistic setting in Studies 2a and 2b, the tendency to maximize among those who participated in the fraternity and sorority recruitment process was negatively related to satisfaction with their selection, and positively related to regret and negative affect. In Study 3, daily levels of maximizing were negatively related to daily well-being, and these relationships were mediated by daily regret. In Study 4, we extended the findings to samples from the U.S. and Japan. When participants who tended to maximize were faced with many choices, operationalized as the daily number of friends met (Study 3) and relational mobility (Study 4), the opportunities to regret a decision increased and further diminished well-being. These findings imply that, paradoxically, attempting to maximize when selecting potential friends is detrimental to one's well-being.
Maximized Reliability Estimates for Some Research Scales of the MMPI.
Wagner, Edwin E.; And Others
1990-01-01
This study, using data for 200 psychiatric/chemical dependency patients, attempted to justify subscales of the Minnesota Multiphasic Personality Inventory (MMPI). Distributions of all possible split-half correlations for certain research scales of the MMPI revealed negative skewness resulting in spuriously lowered reliability estimates. The scales…
Reliability of Maximal Strength Testing in Novice Weightlifters
Loehr, James A.; Lee, Stuart M. C.; Feiveson, Alan H.; Ploutz-Snyder, Lori L.
2009-01-01
The one repetition maximum (1RM) is a criterion measure of muscle strength. However, the reliability of 1RM testing in novice subjects has received little attention, and understanding it is crucial to accurately interpret changes in muscle strength. The purpose was to evaluate the test-retest reliability of squat (SQ), heel raise (HR), and deadlift (DL) 1RM in novice subjects. Twenty healthy males (31 ± 5 y, 179.1 ± 6.1 cm, 81.4 ± 10.6 kg) with no weight training experience in the previous six months participated in four 1RM testing sessions, with each session separated by 5-7 days. SQ and HR 1RM were conducted using a Smith machine; DL 1RM was assessed using free weights. Session 1 was considered a familiarization and was not included in the statistical analyses. Repeated measures analysis of variance with Tukey's post-hoc tests was used to detect between-session differences in 1RM (p < 0.05), and test-retest reliability was evaluated by intraclass correlation coefficients (ICC). During Session 2, the SQ and DL 1RM (SQ: 90.2 ± 4.3, DL: 75.9 ± 3.3 kg) were less than in Session 3 (SQ: 95.3 ± 4.1, DL: 81.5 ± 3.5 kg) and Session 4 (SQ: 96.6 ± 4.0, DL: 82.4 ± 3.9 kg), but there were no differences between Sessions 3 and 4. HR 1RM measured during Session 2 (150.1 ± 3.7 kg) and Session 3 (152.5 ± 3.9 kg) did not differ from one another, but both were less than in Session 4 (157.5 ± 3.8 kg). The reliability (ICC) of 1RM measures for Sessions 2-4 was 0.88, 0.83, and 0.87 for SQ, HR, and DL, respectively. When considering only Sessions 3 and 4, the reliability was 0.93, 0.91, and 0.86 for SQ, HR, and DL, respectively. One familiarization session and 2 test sessions (for SQ and DL) were required to obtain excellent reliability (ICC greater than or equal to 0.90) in 1RM values with novice subjects. This level of reliability was not attained after 3 HR testing sessions; therefore, additional sessions may be required.
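The test-retest ICCs reported above can be reproduced from raw subject-by-session scores. A hedged sketch of ICC(2,1) (two-way random effects, absolute agreement, single measure, a common choice for test-retest designs; the abstract does not state which ICC form was used, and the scores below are invented):

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    data: (n_subjects, k_sessions) array of scores (e.g. 1RM loads in kg)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # between-subjects mean square
    msc = ss_cols / (k - 1)                 # between-sessions mean square
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical 1RM loads (kg) for four subjects over two sessions.
scores = [[100, 102], [120, 118], [80, 83], [95, 94]]
print(round(icc_2_1(scores), 3))
```

Because between-subject spread dominates between-session noise in this toy sample, the ICC comes out near 1, mirroring the high reliabilities in the abstract.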
Reliable Resource Selection in Grid Environment
Rajesh Kumar Bawa
2012-04-01
The primary concerns in computational grids are security and resources. Most existing grids address this by authenticating the users, hosts, and their interactions in an appropriate manner. A secure system is essential for the efficient utilization of grid services, yet the high degree of unfamiliarity between parties complicates secure grid selection: without the assurance of a sufficient trust relationship, competent resource selection and utilization cannot be achieved. In this paper we propose an approach that provides reliability- and reputation-aware security for resource selection in a grid environment. In this approach, the self-protection capability and reputation weightage are used to obtain a Reliability Factor (RF) value, and jobs are allocated to the resources that possess higher RF values. Extensive experimental evaluation shows that as nodes of higher trust and reliability are selected, the chance of failure decreases drastically.
Reliability of the calculated maximal lactate steady state in amateur cyclists
Jennifer Adam
2015-01-01
Complex performance diagnostics in sports medicine should cover both maximal aerobic and maximal anaerobic performance, which places high requirements on the stress protocols; validating a test protocol requires quality criteria such as objectivity and reliability. The present study therefore analyzed, especially for amateur cyclists, the reliability of the maximal lactate production rate (VLamax) obtained from a sprint test, of maximum oxygen consumption (VO2max) obtained from a ramp test, and, based on these data, of the resulting power at the calculated maximum lactate steady state (PMLSS). All subjects (n = 23, age 26 ± 4 years) were leisure cyclists. On three different days they first completed a sprint test to approximate VLamax and, after 60 min of recovery, a ramp test to assess VO2max. The results of the VLamax and VO2max tests and the body weight were used to calculate PMLSS for all subjects. The intraclass correlation (ICC) for VLamax and VO2max was 0.904 and 0.987, respectively, with coefficients of variation (CV) of 6.3% and 2.1%. Between the measurements, the reliable change index (RCI) of 0.11 mmol·l-1·s-1 for VLamax and 3.3 ml·kg-1·min-1 for VO2max achieved significance. The mean calculated PMLSS was 237 ± 72 W with an RCI of 9 W, and with ICC = 0.985 it reached a very high reliability. Both metabolic performance tests and the calculated PMLSS are reliable for leisure cyclists.
Familiarization, reliability, and comparability of a 40-m maximal shuttle run test.
Glaister, Mark; Hauck, Hanna; Abraham, Corinne S; Merry, Kevin L; Beaver, Dean; Woods, Bernadette; McInnes, Gillian
2009-01-01
The aims of this study were to examine familiarization and reliability associated with a 40-m maximal shuttle run test (40-m MST), and to compare performance measures from the test with those of a typical unidirectional multiple sprint running test (UMSRT). Twelve men and 4 women completed four trials of the 40-m MST (8 × 40-m; 20 s rest periods) followed by one trial of a UMSRT (12 × 30-m; repeated every 35 s), with seven days between trials. All trials were conducted indoors and performance times were recorded via twin-beam photocells. Significant between-trial differences in mean 40-m MST times were indicative of learning effects between trials 1 and 2. Test-retest reliability across the remaining trials as determined by coefficient of variation (CV) and intraclass correlation coefficient (ICC) revealed: a) very good reliability for measures of fastest and mean shuttle time (CV = 1.1 - 1.3%; ICC = 0.91 - 0.92); b) good reliability for measures of blood lactate (CV = 10.1 - 23.9%; ICC = 0.74 - 0.82) and ratings of perceived exertion (CV = 5.3 - 7.6%; ICC = 0.79 - 0.84); and c) poor reliability for measures of fatigue (CV = 38.7%; ICC = 0.59). Comparisons between performance indices of the 40-m MST and the UMSRT revealed significant correlations between all measures, except pre-test blood lactate concentration (r = 0.47). Whilst the 40-m MST does not appear to provide more information than can be gleaned from a typical UMSRT, following the completion of a familiarization trial, the 40-m MST provides an alternative and, except for fatigue measures, reliable means of evaluating repeated sprint ability. Key points: Tests of multiple sprint performance are a popular means of evaluating repeated sprint ability. Multiple sprint tests incorporating changes of direction may be more ecologically valid than unidirectional protocols. The 40-m maximal shuttle run test is a reliable way of evaluating repeated sprint ability following the completion of one familiarization trial.
Stochastic Portfolio Selection Problem with Reliability Criteria
Xiangsong Meng
2016-01-01
Portfolio selection focuses on allocating capital to a set of securities such that profit or risk can be optimized. Due to the uncertainty of real-world life, the return parameters carry uncertain information in realistic environments, because of scarce a priori knowledge or uncertain disturbances. This paper considers a portfolio selection process in a stochastic environment, where the return parameters are characterized by sample-based correlated random variables. To decrease the decision risks, three evaluation criteria are proposed to generate reliable portfolio selection plans: the max-min reliability criterion, the percentile reliability criterion, and the expected disutility criterion. Equivalent linear (mixed-integer) programming models are deduced for the different evaluation strategies, and a genetic algorithm with a polishing strategy is designed to search for approximate optimal solutions of the proposed models. Finally, a series of numerical experiments demonstrates the effectiveness and performance of the proposed approaches.
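Of the three criteria, the max-min rule is the simplest to illustrate: evaluate each candidate portfolio against every sampled return scenario and keep the one with the best worst case. A toy sketch over an explicit candidate list (the paper instead solves equivalent mixed-integer programs; the asset names, weights, and scenario returns here are invented):

```python
def max_min_portfolio(candidates, return_samples):
    """Max-min reliability criterion: among candidate weight vectors,
    choose the one whose worst sampled portfolio return is largest,
    i.e. protect against the most adverse scenario in the sample."""
    def worst_case(weights):
        return min(sum(w * r for w, r in zip(weights, scenario))
                   for scenario in return_samples)
    return max(candidates, key=worst_case)

# Two assets, three sampled return scenarios: asset 2 earns less on
# average but never loses money, so the max-min rule selects it.
candidates = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
scenarios = [(0.10, 0.02), (-0.05, 0.03), (0.20, 0.01)]
print(max_min_portfolio(candidates, scenarios))  # (0.0, 1.0)
```

With a continuous weight space the same criterion becomes a linear program, which is the form the paper optimizes.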
Reliability of Maximal Back Squat and Power Clean Performances in Inexperienced Athletes.
Comfort, Paul; McMahon, John J
2015-11-01
The aim of the study was to determine the between-session reliability of maximal weight lifted during the back squat and power clean in inexperienced athletes, and to identify the smallest detectable difference between sessions. Forty-four collegiate athletes (men: n = 32; age: 21.5 ± 2.0 years; height: 180.0 ± 6.1 cm; body mass: 81.01 ± 7.42 kg; women: n = 12; age: 21.0 ± 1.9 years; height: 169.0 ± 5.2 cm; body mass: 62.90 ± 7.46 kg) participated in this study. One repetition maximum (1RM) back squats and power cleans were each performed twice on separate days, 3-5 days apart. Paired-samples t tests revealed no significant differences between trial 1 and trial 2 of the power clean (70.55 ± 24.24 kg, 71.22 ± 23.87 kg, p > 0.05, power = 0.99) or the back squat (130.32 ± 34.05 kg, 129.82 ± 34.07 kg, p > 0.05, power = 1.0). No differences in reliability or measurement error were observed between men and women. Intraclass correlation coefficients (ICCs) demonstrated high reliability for between-session power clean performances (ICC = 0.997); similarly, high reliability was observed for between-session back squat performances (ICC = 0.994). A change of more than 5% is suggested to identify a meaningful change in both maximal back squat and power clean performance.
Wei, Wei; Wang, Jianhui; Mei, Shengwei
2016-09-23
In this paper, we consider dispatchability as the set of all admissible nodal wind power injections that will not cause infeasibility in real-time dispatch (RTD). Our work reveals that the dispatchability of the affine policy based RTD (AF-RTD) is a polytope whose coefficients are linear functions of the generation schedule and the gain matrix of affine policy. Two mathematical formulations of the dispatchability maximized energy and reserve dispatch (DM-ERD) are proposed. The first one maximizes the distance from the forecast to the boundaries of the dispatchability polytope subject to the available production cost or reserve cost. Provided the forecast value and variance of wind power, the generalized Gauss inequality (GGI) is adopted to evaluate the probability of infeasible RTD without the exact probability distribution of wind power. Combining the first formulation and the GGI approach, the second one minimizes the total cost subject to a desired reliability level through dispatchability maximization. Efficient convex optimization based algorithms are developed to solve these two models. Different from the conventional robust optimization method, our model does not rely on the specific uncertainty set of wind generation and directly optimizes the uncertainty accommodation capability of the power system. The proposed method is also compared with the affine policy based robust energy and reserve dispatch (AR-ERD). Case studies on the PJM 5-bus system illustrate the proposed concept and method. Experiments on the IEEE 118-bus system demonstrate the applicability of our method on moderate sized systems and its scalability to large dimensional uncertainty.
Measures of reliability in the kinematics of maximal undulatory underwater swimming.
Connaboy, Chris; Coleman, Simon; Moir, Gavin; Sanders, Ross
2010-04-01
The purposes of this article were to establish the reliability of the kinematics of maximal undulatory underwater swimming (UUS) in skilled swimmers, to determine any requirement for familiarization trials, to establish the within-subject (WS) variability of the kinematics, and to calculate the number of cycles required to accurately represent UUS performance. Fifteen male swimmers performed 20 maximal UUS trials (two cycles per trial) during four sessions. The magnitude of any systematic bias present within the kinematic variables was calculated between session, trial, and cycle. Random error calculations were calculated to determine the WS variation. An iterative intraclass correlation coefficient (ICC) process was used to determine the number of cycles required to achieve a stable representation of each kinematic variable. Significant differences were found between session 1 and all other sessions for several variables, indicating the requirement for a familiarization session. Results indicated a wide range of WS variation (coefficient of variation [CV] = 1.21%-12.42%). Reductions in WS variation were observed for all variables when the number of cycles of data used to calculate WS variation was increased. Beyond six cycles of data, including additional cycles provided diminishing returns regarding the reduction of WS variation. The ICC analysis indicated that an average of nine cycles (mean ± SD = 9.47 ± 5.63) was required to achieve the maximum ICC values attained, and an average of four cycles (mean ± SD = 3.57 ± 2.09) was required to achieve an ICC of 0.95. After determining the systematic bias and establishing the requirement for a familiarization session, six cycles of data were found to be sufficient to provide high levels of reliability (CV(TE) = 0.86-8.92; ICC = 0.811-0.996) for each of the UUS kinematic variables.
How reliable are the equations for predicting maximal heart rate values in military personnel?
Sporis, Goran; Vucetic, Vlatko; Jukic, Igor; Omrcen, Darija; Bok, Daniel; Custonja, Zrinko
2011-03-01
The purpose of this study was to evaluate the validity and reliability of equations for predicting maximal values of heart rate (HR) in military personnel. Five hundred and nine members of the Croatian Armed Forces (age 29.1 +/- 5.5 years; height 180.1 +/- 6.6 cm; body mass 83.4 +/- 11.3 kg; maximal oxygen uptake [VO2(max)] 49.7 +/- 6.9 mL O2/kg/min) were tested. The graded exercise test with gas exchange measurements was used to determine VO2(max) and maximum HR (HR(max)). Analysis of variance was used to determine the differences between the equations for calculating HR(max), and it yielded statistically significant differences between seven HR(max) equations. The Stevens Creek (HR(max) = 205 - [age/2]) and Fox and Haskell's (HR(max) = 220 - age) equations had the highest correlation with the HR(max) obtained by the graded exercise test. The authors recommend using the HR(max) values from the Stevens Creek and the Fox and Haskell equations for the purpose of training, testing, and daily exercise routines in military personnel.
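The two recommended equations are straightforward to encode; a small sketch comparing them (the ages used are only examples):

```python
def hrmax_fox_haskell(age):
    """Fox and Haskell prediction: HRmax = 220 - age."""
    return 220 - age

def hrmax_stevens_creek(age):
    """Stevens Creek prediction: HRmax = 205 - age/2."""
    return 205 - age / 2

# Near the sample's mean age (~29 y) the two predictions are nearly
# identical (they coincide exactly at age 30) but diverge with age.
for age in (29, 30, 50):
    print(age, hrmax_fox_haskell(age), hrmax_stevens_creek(age))
```

At age 50 the two equations already differ by 10 beats/min, which is why the choice of equation matters more for older personnel.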
Park, Jihong; Hopkins, J Ty
2013-01-01
A ratio between the torque generated by maximal voluntary isometric contraction (MVIC) and an exogenous electrical stimulus, the central activation ratio (CAR), has been widely used to assess quadriceps function. To date, no data exist regarding the between-session reliability of this measurement. Thirteen neurologically sound volunteers underwent three testing sessions (three trials per session) with 48 hours between sessions. Subjects performed MVICs of the quadriceps with the knee locked at 90° flexion and the hip at 85°. Once the MVIC reached a plateau, an electrical stimulation from the superimposed burst technique (SIB: 125 V with peak output current 450 mA) was manually delivered and transmitted directly to the quadriceps via stimulating electrodes. CAR was calculated using the following equation: CAR = MVIC torque / (MVIC + SIB torque). Intraclass correlation coefficients (ICC) were calculated within- (ICC((2,1))) and between-session (ICC((2,k))) for MVIC torques and CAR values. Our data show that quadriceps MVIC and CAR are very reliable both within- (ICC((2,1)) = 0.99 for MVIC; 0.94 for CAR) and between-measurement sessions (ICC((2,k)) = 0.92 for MVIC; 0.86 for CAR) in healthy young adults. For clinical research, more data from patients with pathological conditions are required to ensure reproducibility of the calculation of CAR.
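The CAR formula, with its parenthesization restored, is a one-liner; the torque values below are illustrative only:

```python
def central_activation_ratio(mvic_torque, sib_torque_increment):
    """CAR = MVIC torque / (MVIC torque + SIB-evoked torque increment).
    CAR = 1.0 means the superimposed burst added no torque, i.e. complete
    voluntary activation of the quadriceps."""
    return mvic_torque / (mvic_torque + sib_torque_increment)

print(central_activation_ratio(200.0, 10.0))  # 10 N·m evoked on top of 200 N·m
print(central_activation_ratio(200.0, 0.0))   # fully activated -> 1.0
```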
Villadsen, Allan; Roos, Ewa M.; Overgaard, Søren
Purpose: To evaluate the reliability of single-joint and multi-joint maximal leg muscle power and functional performance measures in patients with severe OA. Background: Muscle power, taking both strength and velocity into account, is a more functional measure of lower extremity muscle a...
Maximize Benefits, Minimize Risk: Selecting the Right HVAC Firm.
Golden, James T.
1993-01-01
An informal survey of 20 major urban school districts found that 40% were currently operating in a "break down" maintenance mode. A majority, 57.9%, also indicated they saw considerable benefits in contracting for heating, ventilating, and air conditioning (HVAC) maintenance services with outside firms. Offers guidelines in selecting HVAC…
Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas
2017-04-15
The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used.
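The core idea, scanning every admissible cutpoint of a covariate, standardizing a rank statistic at each, and taking the maximum, can be sketched for a simple two-sample (Wilcoxon-type) statistic. The paper's survival setting uses log-rank scores instead, and this toy version omits both the tie correction and the p-value approximations that adjust for the maximal selection:

```python
import math

def midranks(values):
    """Ranks of a sequence, with ties sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def maximally_selected_rank_statistic(x, y, min_prop=0.1):
    """For each admissible cutpoint c, standardize the Wilcoxon rank-sum
    statistic comparing y where x <= c against y where x > c, and return
    (best_cutpoint, max_statistic). Cutpoints leaving fewer than
    min_prop * n observations on either side are skipped."""
    n = len(x)
    ry = midranks(y)
    best_stat, best_cut = -1.0, None
    for c in sorted(set(x))[:-1]:      # splitting above the max is empty
        m = sum(1 for xi in x if xi <= c)
        if not (min_prop * n <= m <= (1 - min_prop) * n):
            continue
        s = sum(r for xi, r in zip(x, ry) if xi <= c)
        mean = m * (n + 1) / 2
        var = m * (n - m) * (n + 1) / 12
        stat = abs(s - mean) / math.sqrt(var)
        if stat > best_stat:
            best_stat, best_cut = stat, c
    return best_cut, best_stat

# Outcome jumps once the covariate passes 9: the scan recovers cutpoint 9.
x = list(range(20))
y = [0.0] * 10 + [10.0 + i for i in range(10)]
print(maximally_selected_rank_statistic(x, y))
```

Because the maximum is taken over many candidate cutpoints, its null distribution is inflated relative to a single Wilcoxon test, which is exactly why the paper's p-value approximations are needed.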
Gosman, Nathaniel
of alternative performance incentive program models to manage DSM risk in BC. Three performance incentive program models were assessed and compared to BC Hydro's current large industrial DSM incentive program, Power Smart Partners -- Transmission Project Incentives, itself a performance incentive-based program. Together, the selected program models represent a continuum of program design and implementation in terms of the schedule and level of incentives provided, the duration and rigour of measurement and verification (M&V), energy efficiency measures targeted and involvement of the private sector. A multi-criteria assessment framework was developed to rank the capacity of each program model to manage BC large industrial DSM risk factors. DSM risk management rankings were then compared to program cost-effectiveness, targeted energy savings potential in BC and survey results from BC industrial firms on the program models. The findings indicate that the reliability of DSM energy savings in the BC large industrial sector can be maximized through performance incentive program models that: (1) offer incentives jointly for capital and low-cost operations and maintenance (O&M) measures, (2) allow flexible lead times for project development, (3) utilize rigorous M&V methods capable of measuring variable load, process-based energy savings, (4) use moderate contract lengths that align with effective measure life, and (5) integrate energy management software tools capable of providing energy performance feedback to customers to maximize the persistence of energy savings. While this study focuses exclusively on the BC large industrial sector, the findings of this research have applicability to all energy utilities serving large, energy intensive industrial sectors.
A Reliability Based Model for Wind Turbine Selection
A.K. Rajeevan
2013-06-01
A wind turbine generator output at a specific site depends on many factors, particularly the cut-in, rated and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. Reliability calculation is based on failure probability analysis. There are many different types of wind turbines commercially available in the market. From a reliability point of view, to get optimum reliability in power generation, it is desirable to select a wind turbine generator which is best suited for a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
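The cubic mean cube root mentioned in this abstract has a closed form under a Weibull wind-speed model: E[v^3] = c^3 * Gamma(1 + 3/k) for scale c and shape k. A minimal sketch follows; the air density, swept area and power coefficient values are illustrative assumptions, not taken from the paper.

```python
import math

def cubic_mean_cube_root(c, k):
    """(E[v^3])^(1/3) for wind speed v ~ Weibull(scale c, shape k)."""
    return (c ** 3 * math.gamma(1 + 3.0 / k)) ** (1.0 / 3.0)

def mean_wind_power(c, k, rho=1.225, area=1.0, cp=0.4):
    """Mean captured power (W) for a rotor of swept area `area` (m^2),
    air density rho (kg/m^3) and power coefficient cp (all assumed)."""
    return 0.5 * rho * area * cp * cubic_mean_cube_root(c, k) ** 3

# e.g. a Rayleigh-like site: c = 7 m/s, k = 2
```

Because power scales with v^3, the cubic mean cube root rather than the arithmetic mean speed is the quantity that preserves the monthly energy estimate.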
Numerical Model based Reliability Estimation of Selective Laser Melting Process
Mohanty, Sankhya; Hattel, Jesper Henri
2014-01-01
Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being on par with conventional processes such as welding and casting, primarily because of the unreliability of the process. While...... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input...... parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established....
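The Monte Carlo uncertainty step described in this abstract can be sketched as below. The surrogate melt-depth formula, the input jitter levels, and the 7.5-8.5 mm specification band are purely hypothetical stand-ins for the paper's calibrated finite-volume model.

```python
import random

def melt_depth(power_w, speed_m_s):
    """Hypothetical surrogate for the calibrated thermal model (mm)."""
    return 0.04 * power_w / speed_m_s ** 0.5

random.seed(0)
depths = []
for _ in range(10_000):
    p = random.gauss(200.0, 5.0)    # laser power with assumed jitter (W)
    v = random.gauss(1.0, 0.05)     # scan speed with assumed jitter (m/s)
    depths.append(melt_depth(p, v))

depths.sort()
lo, hi = depths[250], depths[-250]  # approximate 95% output range
# "reliability" = fraction of simulated builds inside the assumed spec band
reliability = sum(7.5 <= d <= 8.5 for d in depths) / len(depths)
```

Propagating input uncertainty through the model in this way turns a single deterministic prediction into an output range plus a process-reliability estimate.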
Yanjie Dong
2013-01-01
The capacity of a Multiple Input Multiple Output (MIMO) system is highly related to the number of active antennas. But as the number of active antennas increases, the MIMO system consumes more energy. To maximize the energy efficiency of the MIMO system, we propose an antenna selection scheme which can maximize the energy efficiency of a BS cluster. In the scheme, the ergodic energy efficiency is derived according to large-scale channel state information (CSI). Based on this ergodic energy efficiency, we introduce a cost function that varies with the number of antennas, in which the effect on the energy efficiency of both the serving BS and the neighbor BS is considered. With this function, we can transform the whole-system optimization problem into a sectional optimization problem and obtain a suboptimal antenna set using a heuristic algorithm. Simulation results verify that the proposed approach performs better than the comparison schemes in terms of network energy efficiency and achieves 98% of the network energy efficiency of the centralized antenna selection scheme. Besides, since the proposed scheme does not need the complete CSI of the neighbor BS, it can effectively reduce the signaling overhead.
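The core trade-off behind this abstract, that rate grows only logarithmically with the number of active antennas while power consumption grows linearly, can be sketched with a toy model. The rate and power expressions below are illustrative assumptions, not the paper's derived ergodic energy efficiency.

```python
import math

def energy_efficiency(n, snr=10.0, p_static=10.0, p_per_antenna=2.0):
    """Toy bits/s/Hz per Watt for n active antennas (assumed models)."""
    rate = math.log2(1 + n * snr)         # crude array-gain capacity model
    power = p_static + n * p_per_antenna  # static plus per-antenna cost
    return rate / power

# exhaustive search over the antenna count; a heuristic would scan locally
best_n = max(range(1, 65), key=energy_efficiency)
```

With these assumed parameters the optimum is a small antenna set: beyond it, each extra antenna adds more power cost than log-rate gain.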
Simple neural-like p systems for maximal independent set selection.
Xu, Lei; Jeavons, Peter
2013-06-01
Membrane systems (P systems) are distributed computing models inspired by living cells where a collection of processors jointly achieves a computing task. The problem of maximal independent set (MIS) selection in a graph is to choose a set of nonadjacent nodes to which no further nodes can be added. In this letter, we design a class of simple neural-like P systems to solve the MIS selection problem efficiently in a distributed way. This new class of systems possesses two features that are attractive for both distributed computing and membrane computing: first, the individual processors do not need any information about the overall size of the graph; second, they communicate using only one-bit messages.
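A conventional (non-membrane) sketch of distributed MIS selection in the same spirit as this abstract: each node needs only local comparisons with its live neighbours and no knowledge of the overall graph size. This is a Luby-style randomized scheme, an assumed stand-in for the P-system construction itself.

```python
import random

def luby_mis(adj, seed=1):
    """Distributed-style maximal independent set: each round, every live
    node draws a random mark; a node joins the MIS if its mark beats all
    live neighbours, then it and its neighbours drop out."""
    rng = random.Random(seed)
    live = set(adj)
    mis = set()
    while live:
        mark = {v: rng.random() for v in live}
        winners = {v for v in live
                   if all(mark[v] > mark[u] for u in adj[v] if u in live)}
        mis |= winners
        live -= winners | {u for v in winners for u in adj[v]}
    return mis
```

Each round removes at least the globally highest-marked live node and its neighbours, so the loop always terminates with an independent set to which no further node can be added.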
EFFECTS OF SELF-SELECTED MUSIC ON MAXIMAL BENCH PRESS STRENGTH AND STRENGTH ENDURANCE.
Bartolomei, Sandro; Di Michele, Rocco; Merni, Franco
2015-06-01
Listening to music during strength workouts has become a very common practice. The goal of this study was to assess the effect of listening to self-selected music on strength performances. Thirty-one resistance-trained men (M age = 24.7 yr., SD = 5.9; M height = 178.7 cm, SD = 4.7; M body mass = 83.54 kg, SD = 12.0) were randomly assigned to either a Music group (n = 19) or to a Control group (n = 12). Both groups took part in two separate sessions; each session consisted of a maximal strength test (1-RM) and a strength-endurance test (repetitions to failure at 60% 1-RM) using the bench press exercise. The Music group listened to music in the second assessment session, while the Control group performed both tests without music. Listening to music induced a significant increase in strength endurance performance and no effects on maximal strength. These findings have implications for the use of music during strength workouts.
Repeatability and reliability of human eye in visual shade selection.
Özat, P B; Tuncel, İ; Eroğlu, E
2013-12-01
Deficiencies in the human visual perception system have challenged the efficiency of the visual shade-matching protocol. The aim of this study was to evaluate the repeatability and reliability of the human eye in visual shade selection. Fifty-four volunteering dentists were asked to match the shade of an upper right central incisor tooth of a single subject. The Vita 3D-Master shade guide was used for the protocol. Before each shade-matching procedure, the definitive codes of the shade tabs were hidden by an opaque strip and the shade tabs were placed into the guide randomly. The procedure was repeated 1 month later to ensure that visual memory did not affect the results. The L*, a* and b* values of the shade tabs were measured with a dental spectrophotometer (Vita Easyshade) to produce quantitative values to evaluate the protocol. The paired samples t-test and Pearson correlation test were used to compare the 1st and 2nd selections. The Yates-corrected chi-square test was used to compare qualitative values. Statistical significance was accepted at P shade matching, but they are able to select clinically acceptable shades.
Anton Antonov
BACKGROUND: Avian brood parasites and their hosts are involved in complex offence-defense coevolutionary arms races. The most common pair of reciprocal adaptations in these systems is egg discrimination by hosts and egg mimicry by parasites. As mimicry improves, more advanced host adaptations evolve such as decreased intra- and increased interclutch variation in egg appearance to facilitate detection of parasitic eggs. As interclutch variation increases, parasites able to choose hosts best matching their own egg phenotype should be selected, but this requires that parasites know their own egg phenotype and select host nests correspondingly. METHODOLOGY/PRINCIPAL FINDINGS: We compared egg mimicry of common cuckoo Cuculus canorus eggs in naturally parasitized marsh warbler Acrocephalus palustris nests and their nearest unparasitized conspecific neighbors having similar laying dates and nest-site characteristics. Modeling of avian vision and image analyses revealed no evidence that cuckoos parasitize nests where their eggs better match the host eggs. Cuckoo eggs were as good mimics, in terms of background and spot color, background luminance, spotting pattern and egg size, of host eggs in the nests actually exploited as those in the neighboring unparasitized nests. CONCLUSIONS/SIGNIFICANCE: We reviewed the evidence for brood parasites selecting better-matching host egg phenotypes from several relevant studies and argue that such selection probably cannot exist in host-parasite systems where host interclutch variation is continuous and overall low or moderate. To date there is also no evidence that parasites prefer certain egg phenotypes in systems where it should be most advantageous, i.e., when both hosts and parasites lay polymorphic eggs. Hence, the existence of an ability to select host nests to maximize mimicry by brood parasites appears unlikely, but this possibility should be further explored in cuckoo-host systems where the host has evolved
Rate Adaptive Selective Segment Assignment for Reliable Wireless Video Transmission
Sajid Nazir
2012-01-01
A reliable video communication system is proposed based on the data partitioning feature of H.264/AVC, used to create a layered stream, and LT codes for erasure protection. The proposed scheme, termed rate adaptive selective segment assignment (RASSA), is an adaptive low-complexity solution to varying channel conditions. A comparison of the results of the proposed scheme is also provided for slice-partitioned H.264/AVC data. Simulation results show the competitiveness of the proposed scheme compared to optimized unequal and equal error protection solutions. The simulation results also demonstrate that high visual quality video transmission can be maintained despite the adverse effects of varying channel conditions, and that the number of decoding failures can be reduced.
Wone, B W M; Madsen, Per; Donovan, E R;
2015-01-01
Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selecti...
Reliability Impacts in Life Support Architecture and Technology Selection
Lange Kevin E.; Anderson, Molly S.
2012-01-01
Quantitative assessments of system reliability and equivalent system mass (ESM) were made for different life support architectures based primarily on International Space Station technologies. The analysis was applied to a one-year deep-space mission. System reliability was increased by adding redundancy and spares, which added to the ESM. Results were thus obtained allowing a comparison of the ESM for each architecture at equivalent levels of reliability. Although the analysis contains numerous simplifications and uncertainties, the results suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support ESM and could influence the optimal degree of life support closure. Approaches for reducing reliability impacts were investigated and are discussed.
INDICATORS OF MAXIMAL FLEXOR FORCE OF LEFT AND RIGHT HAND FOR THE POLICE SELECTION CRITERIA PURPOSES
Milivoj Dopsaj
2006-06-01
factor the right hand participated with 95.8% force, while the left hand participated with 95.0% force. We have therefore demonstrated that, among the tested population, measurement of right hand force is the more representative estimation of the given variable. Based on the distribution of results for right hand force, as a function of the isolated cluster criterion, the distribution of the tested population with respect to Clusters 1-7 is as follows: 18.53%, 27.94%, 24.62%, 17.98%, 8.02%, 2.63%, 0.28%, respectively. The value of the bordering minimum for right hand force of Cluster 2 is 56.87 DaN, which represents the 18.5‰ (percentile) of the tested population. With regard to the tested policemen population between 19 and 24 years of age, the right hand grip force result is the test of choice for estimation of maximal hand flexor force. The value of the inflexion point (point of separation with regard to the selection criterion, acceptable/unacceptable) is at the level of 56.87 DaN for the right hand grip force, and it is placed at the 18.5‰ (percentile) of the tested population.
Young Children's Selective Learning of Rule Games from Reliable and Unreliable Models
Rakoczy, Hannes; Warneken, Felix; Tomasello, Michael
2009-01-01
We investigated preschoolers' selective learning from models that had previously appeared to be reliable or unreliable. Replicating previous research, children from 4 years selectively learned novel words from reliable over unreliable speakers. Extending previous research, children also selectively learned other kinds of acts--novel games--from…
Brinklov, Cecilie Fau; Thorsen, Ida Kær; Karstoft, Kristian
2016-01-01
Background: Prevention of multi-morbidities following non-communicable diseases requires a systematic registration of adverse modifiable risk factors, including low physical fitness. The aim of the study was to establish criterion validity and reliability of a smartphone app (InterWalk) delivered...... calorimetry and the acceleration (vector magnitude) from the smartphone was obtained. The vector magnitude was used to predict VO2peak along with the co-variates weight, height and sex. The validity of the algorithm was tested when the smartphone was placed in the right pocket of the pants or jacket....... The algorithm was validated using leave-one-out cross validation. Test-retest reliability was tested in a subset of participants (N = 10). Results: The overall VO2peak prediction of the algorithm (R2) was 0.60 and 0.45 when the smartphone was placed in the pockets of the pants and jacket, respectively (p
Selected Methods For Increases Reliability The Of Electronic Systems Security
Paś Jacek
2015-11-01
The article presents issues related to different methods of increasing the reliability of electronic security systems (ESS), for example, a fire alarm system (SSP). Reliability of the SSP, in the descriptive sense, is its capacity to perform a preset function (e.g. fire protection of an airport, a port, a logistics base, etc.) over a certain time and under certain conditions (e.g. environmental), despite possible failures of a specific subset of elements of this system. An analysis of the available literature on ESS-SSP shows that no studies on methods of increasing reliability are available (several works cover similar topics, but with respect to burglary and robbery, i.e. intrusion). The analysis is based on the set of all paths in the system that determine the suitability of the SSP for the mentioned fire-event scenario (devices critical for security).
Reliability of pedigree-based and genomic evaluations in selected populations.
Gorjanc, Gregor; Bijma, Piter; Hickey, John M
2015-08-14
Reliability is an important parameter in breeding. It measures the precision of estimated breeding values (EBV) and, thus, potential response to selection on those EBV. The precision of EBV is commonly measured by relating the prediction error variance (PEV) of EBV to the base population additive genetic variance (base PEV reliability), while the potential for response to selection is commonly measured by the squared correlation between the EBV and breeding values (BV) on selection candidates (reliability of selection). While these two measures are equivalent for unselected populations, they are not equivalent for selected populations. The aim of this study was to quantify the effect of selection on these two measures of reliability and to show how this affects comparison of breeding programs using pedigree-based or genomic evaluations. Two scenarios with random and best linear unbiased prediction (BLUP) selection were simulated, where the EBV of selection candidates were estimated using only pedigree, pedigree and phenotype, genome-wide marker genotypes and phenotype, or only genome-wide marker genotypes. The base PEV reliabilities of these EBV were compared to the corresponding reliabilities of selection. Realized genetic selection intensity was evaluated to quantify the potential of selection on the different types of EBV and, thus, to validate differences in reliabilities. Finally, the contribution of different underlying processes to changes in additive genetic variance and reliabilities was quantified. The simulations showed that, for selected populations, the base PEV reliability substantially overestimates the reliability of selection of EBV that are mainly based on old information from the parental generation, as is the case with pedigree-based prediction. Selection on such EBV gave very low realized genetic selection intensities, confirming the overestimation and importance of genotyping both male and female selection candidates. The two measures of
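The gap between the two reliability measures discussed in this abstract can be illustrated with a toy simulation: in the full (unselected) population the squared correlation between EBV and BV matches the base accuracy, but after truncation selection on the EBV the squared correlation among the selected candidates drops. All numbers are illustrative; "EBV" here is simply true BV plus independent noise, not an actual pedigree or genomic prediction.

```python
import random
import statistics

def corr2(xs, ys):
    """Squared Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

random.seed(2)
bv = [random.gauss(0, 1) for _ in range(20_000)]          # true breeding values
ebv = [b + random.gauss(0, 1) for b in bv]                # accuracy^2 = 0.5 in base population

r2_base = corr2(ebv, bv)                                  # base-population reliability
top = sorted(range(len(ebv)), key=lambda i: -ebv[i])[:4_000]  # truncation selection on EBV
r2_sel = corr2([ebv[i] for i in top], [bv[i] for i in top])   # reliability of selection
```

Selection shrinks the variance of EBV among the selected, so the squared correlation (and hence the realized response to further selection) is smaller than the base-population figure suggests.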
Pan, Indranil; Ghosh, Soumyajit; Gupta, Amitava; 10.1109/PACC.2011.5978958
2012-01-01
Networked Control Systems (NCSs) are often associated with problems like random data losses which might lead to system instability. This paper proposes a method based on the use of variable controller gains to achieve maximum parametric robustness of the plant controlled over a network. Stability using variable controller gains under data loss conditions is analyzed using a suitable Linear Matrix Inequality (LMI) formulation. Also, a Particle Swarm Optimization (PSO) based technique is used to maximize parametric robustness of the plant.
Tsai, Tein-Shun; Lee, How-Jing; Tu, Ming-Chung
2009-11-01
With bioenergetic modeling, we tested the hypothesis that reptiles maximize net energy gain by postprandial thermal selection. Previous studies have shown that Chinese green tree vipers (Trimeresurus s. stejnegeri) have postprandial thermophily (mean preferred temperature T(p) for males = 27.8 degrees C) in a linear thigmothermal gradient when seclusion sites and water existed. Using published empirical models of digestion-associated factors for this snake, we calculated the average rate (E(net)) and efficiency (K(net)) of net energy gain from possible combinations of meal size, activity level, and feeding frequency at each temperature. The simulations consistently revealed that E(net) is maximized at the T(p) of these snakes. Although K(net) peaks at a lower temperature than E(net), the value of K(net) remains high (>=0.85 relative to the maximum) at the peak temperature of E(net). This suggested that the demands of both E(net) and K(net) can be attained by postprandial thermal selection in this snake. In conclusion, the data support our prediction that postprandial thermal selection may maximize net energy gain.
Development of an Environment for Software Reliability Model Selection
1992-09-01
Yee, Jennifer C; Beichman, Charles; Novati, Sebastiano Calchi; Carey, Sean; Gaudi, B Scott; Henderson, Calen; Nataf, David; Penny, Matthew; Shvartzvald, Yossi; Zhu, Wei
2015-01-01
Space-based microlens parallax measurements are a powerful tool for understanding planet populations, especially their distribution throughout the Galaxy. However, if space-based observations of the microlensing events must be specifically targeted, it is crucial that microlensing events enter the parallax sample without reference to the known presence or absence of planets. Hence, it is vital to define objective criteria for selecting events where possible and to carefully consider and minimize the selection biases where not possible so that the final sample represents a controlled experiment. We present objective criteria for initiating observations and determining their cadence for a subset of events, and we define procedures for isolating subjective decision making from information about detected planets for the remainder of events. We also define procedures to resolve conflicts between subjective and objective selections. These procedures maximize planet sensitivity of the sample as a whole by allowing f...
Reliable Path Selection Problem in Uncertain Traffic Network after Natural Disaster
Jing Wang
2013-01-01
After a natural disaster, especially for large-scale disasters and affected areas, vast relief materials are often needed. In the meantime, the traffic networks are often uncertain because of the disaster. In this paper, we assume that the edges in the network are either connected or blocked, and that the connection probability of each edge is known. In order to ensure the arrival of these supplies at the affected areas, it is important to select a reliable path. A reliable path selection model is formulated, and two algorithms for solving this model are presented. Then, an adjustable reliable path selection model is proposed for when an edge of the selected reliable path is broken. The corresponding algorithms are shown to be efficient both theoretically and numerically.
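When edge failures are independent, the most reliable path maximizes the product of edge connection probabilities, which is equivalent to a shortest path under -log(p) weights. A minimal sketch of that reduction follows; it is a generic construction, not the paper's specific algorithms.

```python
import heapq
import math

def most_reliable_path(edges, src, dst):
    """edges: {(u, v): connection probability} for an undirected network.
    Maximizes the product of edge probabilities via Dijkstra on -log(p)."""
    graph = {}
    for (u, v), p in edges.items():
        graph.setdefault(u, []).append((v, -math.log(p)))
        graph.setdefault(v, []).append((u, -math.log(p)))
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    path = [dst]
    while path[-1] != src:                # reconstruct the path
        path.append(prev[path[-1]])
    path.reverse()
    return path, math.exp(-dist[dst])     # path and its reliability
```

Because -log is monotone decreasing, minimizing the summed weights is exactly maximizing the product of probabilities.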
Selecting reliable and robust freshwater macroalgae for biomass applications.
Lawton, Rebecca J; de Nys, Rocky; Paul, Nicholas A
2013-01-01
Intensive cultivation of freshwater macroalgae is likely to increase with the development of an algal biofuels industry and algal bioremediation. However, target freshwater macroalgae species suitable for large-scale intensive cultivation have not yet been identified. Therefore, as a first step to identifying target species, we compared the productivity, growth and biochemical composition of three species representative of key freshwater macroalgae genera across a range of cultivation conditions. We then selected a primary target species and assessed its competitive ability against other species over a range of stocking densities. Oedogonium had the highest productivity (8.0 g ash free dry weight m⁻² day⁻¹), lowest ash content (3-8%), lowest water content (fresh weigh: dry weight ratio of 3.4), highest carbon content (45%) and highest bioenergy potential (higher heating value 20 MJ/kg) compared to Cladophora and Spirogyra. The higher productivity of Oedogonium relative to Cladophora and Spirogyra was consistent when algae were cultured with and without the addition of CO₂ across three aeration treatments. Therefore, Oedogonium was selected as our primary target species. The competitive ability of Oedogonium was assessed by growing it in bi-cultures and polycultures with Cladophora and Spirogyra over a range of stocking densities. Cultures were initially stocked with equal proportions of each species, but after three weeks of growth the proportion of Oedogonium had increased to at least 96% (±7 S.E.) in Oedogonium-Spirogyra bi-cultures, 86% (±16 S.E.) in Oedogonium-Cladophora bi-cultures and 82% (±18 S.E.) in polycultures. The high productivity, bioenergy potential and competitive dominance of Oedogonium make this species an ideal freshwater macroalgal target for large-scale production and a valuable biomass source for bioenergy applications. These results demonstrate that freshwater macroalgae are thus far an under-utilised feedstock with much potential
Balakrishnan, N; Nagaraja, HN
2007-01-01
S. Panchapakesan has made significant contributions to ranking and selection and has published in many other areas of statistics, including order statistics, reliability theory, stochastic inequalities, and inference. Written in his honor, the twenty invited articles in this volume reflect recent advances in these areas and form a tribute to Panchapakesan's influence and impact on these areas. Thematically organized, the chapters cover a broad range of topics from: Inference; Ranking and Selection; Multiple Comparisons and Tests; Agreement Assessment; Reliability; and Biostatistics. Featuring
Hadjtaieb, Amir
2013-09-12
In this paper, we propose an incremental multinode relaying protocol with arbitrary N relay nodes that allows an efficient use of the channel spectrum. The destination combines the received signals from the source and the relays using maximal ratio combining (MRC). The transmission ends successfully once the accumulated signal-to-noise ratio (SNR) exceeds a predefined threshold. The number of relays participating in the transmission is adapted to the channel conditions based on the feedback from the destination. The use of incremental relaying allows obtaining a higher spectral efficiency. Moreover, the symbol error probability (SEP) performance is enhanced by using MRC at the relays. The use of MRC at the relays implies that each relay overhears the signals from the source and all previous relays and combines them using MRC. The proposed protocol differs from most existing relaying protocols by the fact that it combines both incremental relaying and MRC at the relays for a multinode topology. Our analyses for a decode-and-forward mode show that: (i) compared to existing multinode relaying schemes, the proposed scheme can essentially achieve the same SEP performance but with a smaller average number of time slots, (ii) compared to schemes without MRC at the relays, the proposed scheme can approximately achieve a 3 dB gain.
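The stopping rule in this abstract, that MRC adds per-link SNRs until the accumulated SNR crosses the threshold, can be sketched as follows. The Rayleigh-faded toy links, the mean SNR, the threshold, and the four-relay setup are illustrative assumptions, not the paper's analytical model.

```python
import random

def incremental_relaying(snr_links, threshold):
    """Links transmit one by one; the destination MRC-accumulates SNR and
    stops as soon as the running sum crosses the threshold.
    snr_links[0] is the direct source->destination SNR; the rest are the
    relay contributions. Returns (time slots used, success flag)."""
    total = 0.0
    for slots, snr in enumerate(snr_links, start=1):
        total += snr                      # maximal-ratio combining adds SNRs
        if total >= threshold:
            return slots, True
    return len(snr_links), False

random.seed(3)
# Rayleigh fading: exponentially distributed instantaneous SNR, mean 5
trials = [incremental_relaying([random.expovariate(1 / 5) for _ in range(4)], 8.0)
          for _ in range(10_000)]
avg_slots = sum(s for s, _ in trials) / len(trials)
success = sum(ok for _, ok in trials) / len(trials)
```

The simulation exhibits the qualitative claim: most transmissions finish in fewer than the maximum number of slots, which is where the spectral-efficiency gain of incremental relaying comes from.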
An item selection procedure to maximise scale reliability and validity
J. Raubenheimer
2004-10-01
Wille (1996) proposed an item selection strategy which may be used to maximise, first, the internal consistency and, next, the convergent and discriminant validity of items in multi-dimensional Likert-type questionnaires or scales. In terms of his strategy, the latter aspects of validity are maximised by means of exploratory factor analyses. In this article, it is done by means of Tateneni, Mels, Cudeck and Browne's (2001) Comprehensive Exploratory Factor Analysis (CEFA) program, which implements exploratory factor analysis but provides the advantages of standard confirmatory factor analysis (e.g., the computation of the standard errors of the rotated factor loadings and measures of model fit). The benefits that accrue by using this incremental approach are demonstrated in terms of Allport and Ross' (1967) Religious Orientation Scale, a widely-used psychological instrument.
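The first stage of such a strategy, iteratively dropping the item whose removal most improves internal consistency (Cronbach's alpha), can be sketched as below. This is a simplified stand-in for Wille's full procedure, and the toy data are illustrative: three perfectly correlated items plus one scrambled "noise" item.

```python
import statistics

def cronbach_alpha(items):
    """items: list of per-item score lists, all in the same respondent order."""
    k = len(items)
    item_vars = sum(statistics.pvariance(it) for it in items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - item_vars / statistics.pvariance(totals))

def drop_worst_item(items):
    """One selection step: find the item whose removal raises alpha most.
    Returns (index to drop or None, resulting alpha)."""
    best_alpha, worst = cronbach_alpha(items), None
    for i in range(len(items)):
        reduced = items[:i] + items[i + 1:]
        if len(reduced) > 1:
            a = cronbach_alpha(reduced)
            if a > best_alpha:
                best_alpha, worst = a, i
    return worst, best_alpha
```

Repeating this step until no removal improves alpha gives the internal-consistency stage; the validity stage would then proceed via the factor-analytic comparisons the article describes.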
Selection of suitable hand gestures for reliable myoelectric human computer interface
2015-01-01
Background Myoelectric controlled prosthetic hand requires machine based identification of hand gestures using surface electromyogram (sEMG) recorded from the forearm muscles. This study has observed that a sub-set of the hand gestures have to be selected for an accurate automated hand gesture recognition, and reports a method to select these gestures to maximize the sensitivity and specificity. Methods Experiments were conducted where sEMG was recorded from the muscles of the forearm while s...
Mainak Dey
2013-02-01
This paper introduces a new stock portfolio selection model in a non-stochastic environment. Following the principle of maximum entropy, a new entropy-cost ratio function is introduced as the objective function. The uncertain returns, risks and dividends of the securities are considered as interval numbers. Along with the objective function, eight different types of constraints are used in the model to convert it into a pragmatic one. Three different models have been proposed by defining the future financial market optimistically, pessimistically and in the combined form to model the portfolio selection problem. To illustrate the effectiveness and tractability of the proposed models, they are tested on a set of data from the Bombay Stock Exchange (BSE). The solution has been obtained by a genetic algorithm.
Band selection for nonlinear unmixing of hyperspectral images as a maximal clique problem.
Imbiriba, Tales; Bermudez, Jose Carlos; Richard, Cedric
2017-03-01
Kernel-based nonlinear mixing models have been applied to unmix spectral information of hyperspectral images when the type of mixing occurring in the scene is too complex or unknown. Such methods, however, usually require the inversion of matrices of sizes equal to the number of spectral bands. Reducing the computational load of these methods remains a challenge in large scale applications. This paper proposes a centralized band selection (BS) method for supervised unmixing in the reproducing kernel Hilbert space (RKHS). It is based upon the coherence criterion, which sets the largest value allowed for correlations between the basis kernel functions characterizing the selected bands in the unmixing model. We show that the proposed BS approach is equivalent to solving a maximum clique problem (MCP), i.e., searching for the biggest complete subgraph in a graph. Furthermore, we devise a strategy for selecting the coherence threshold and the Gaussian kernel bandwidth using coherence bounds for linearly independent bases. Simulation results illustrate the efficiency of the proposed method.
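The coherence criterion in this abstract can be sketched as follows: two bands are "compatible" when the Gaussian-kernel coherence of their signatures stays below a threshold mu0, and a greedy pass keeps a band only if it is compatible with every band already kept. This greedy pass is an illustrative approximation of the paper's maximum-clique formulation, not an exact MCP solver.

```python
import math

def coherence(sig1, sig2, bandwidth):
    """Coherence of two band signatures under a Gaussian kernel
    (the Gaussian kernel is already normalized, so k(x, x) = 1)."""
    d2 = sum((a - b) ** 2 for a, b in zip(sig1, sig2))
    return math.exp(-d2 / (2 * bandwidth ** 2))

def select_bands(signatures, mu0, bandwidth):
    """Greedy clique heuristic: keep band i only if its coherence with
    every previously selected band is below the threshold mu0."""
    selected = []
    for i, sig in enumerate(signatures):
        if all(coherence(sig, signatures[j], bandwidth) < mu0 for j in selected):
            selected.append(i)
    return selected
```

An exact solver would search for the largest clique in the graph whose edges join band pairs with coherence below mu0; the greedy pass returns one maximal (not necessarily maximum) clique.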
Ibanez, O M; Stiffel, C; Ribeiro, O G; Cabrera, W K; Massa, S; de Franco, M; Sant'Anna, O A; Decreusefond, C; Mouton, D; Siqueira, M
1992-10-01
The genetic regulation of acute inflammatory reaction (AIR) was studied by the method of bidirectional selective breeding, used to produce a line of mice giving the maximal and a line of mice giving the minimal inflammatory reaction (AIR max and AIR min, respectively). The AIR was triggered by subcutaneous injection of a neutral substrate (suspension of polyacrylamide microbeads), and measured by the leukocyte and serum protein accumulation in the exudate. The two parameters are positively correlated and present a normal frequency distribution. The highly genetically heterogeneous foundation population was produced by the equipoised intercrossing of eight inbred strains of mice, and selective breeding carried out by assortative matings of extreme phenotypes. The response to selection in 11 consecutive generations was highly asymmetrical: a marked AIR increase in the AIR max and no change in the AIR min line occurred. The mean value of realized heritability in the AIR max line was 0.26 and 0.18 for cell and protein concentrations, respectively. The response to selection must have resulted from the interaction of seven to nine independent gene loci endowed with additive effects. The lack of response to selection of the AIR min line is discussed. The large inter-line difference opens new possibilities for studying the biochemistry and molecular genetics of inflammation, and also for investigating the beneficial or detrimental effect of inflammatory responses.
Rahnama, Nader; Gaeini, Abbas Ali; Kazemi, Fahimeh
2010-05-01
Consumption of energy drinks has become widespread among athletes. The effectiveness of Red Bull and Hype energy drinks on selected indices of maximal cardiorespiratory fitness and blood lactate levels in male athletes was examined in this study. Ten male student athletes (age: 22.4 ± 2.1 years, height: 180.8 ± 7.7 cm, weight: 74.2 ± 8.5 kg) performed three randomized maximal oxygen consumption tests on a treadmill. Each test was separated by four days, and participants were asked to ingest Red Bull, Hype or placebo drinks 40 minutes before the exercise bout. VO2max, time to exhaustion, heart rate and lactate were measured to determine whether the caffeine-based beverages influence performance. ANOVA was used for analyzing the data. A greater value was observed in VO2max and time to exhaustion for the Red Bull and Hype trials compared to the placebo trial (p < 0.05), with no significant difference between the two energy drinks (p > 0.05). For blood lactate levels, no significant changes were observed before and two minutes after the test (p > 0.05). Ingestion of Red Bull and Hype prior to exercise testing is effective on some indices of cardiorespiratory fitness but not on blood lactate levels.
Nader Rahnama
2010-01-01
Background: Consumption of energy drinks has become widespread among athletes. The effectiveness of Red Bull and Hype energy drinks on selected indices of maximal cardiorespiratory fitness and blood lactate levels in male athletes was examined in this study. Methods: Ten male student athletes (age: 22.4 ± 2.1 years, height: 180.8 ± 7.7 cm, weight: 74.2 ± 8.5 kg) performed three randomized maximal oxygen consumption tests on a treadmill. Each test was separated by four days, and participants were asked to ingest Red Bull, Hype or placebo drinks 40 minutes before the exercise bout. VO2max, time to exhaustion, heart rate and lactate were measured to determine whether the caffeine-based beverages influence performance. ANOVA was used for analyzing the data. Results: A greater value was observed in VO2max and time to exhaustion for the Red Bull and Hype trials compared to the placebo trial (p < 0.05). For blood lactate levels, no significant changes were observed before and two minutes after the test (p > 0.05). Conclusions: Ingestion of Red Bull and Hype prior to exercise testing is effective on some indices of cardiorespiratory fitness but not on blood lactate levels.
Mohanty, Sankhya; Hattel, Jesper Henri
2015-01-01
… to generate optimized cellular scanning strategies and processing parameters, with an objective of reducing thermal asymmetries and mechanical deformations. The optimized scanning strategies are used for selective laser melting of the standard samples, and experimental and numerical results are compared. … gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths …
EVALUATION OF HUMAN RELIABILITY IN SELECTED ACTIVITIES IN THE RAILWAY INDUSTRY
Erika SUJOVÁ
2016-07-01
The article focuses on evaluation of human reliability in the human-machine system in the railway industry. The research took place at the authors' workplace between 2012 and 2013, using a survey of a train dispatcher and of selected work activities. With its help, the authors identified risk factors affecting the dispatcher's work and evaluated the seriousness of their influence on the reliability and safety of the performed activities. Among the most important findings are complaints of unclear and complicated internal regulations and work processes, a feeling of being overworked, and fear for one's safety at small, insufficiently protected stations.
Evaluating the Reliability of Selected School-Based Indices of Adequate Reading Progress
Wheeler, Courtney E.
2010-01-01
The present study examined the stability (i.e., 4-month and 12-month test-retest reliability) of six selected school-based indices of adequate reading progress. The total sampling frame included between 3970 and 5655 schools depending on the index and research question. Each school had at least 40 second-grade students that had complete Oral…
Carrier, David R; Schilling, Nadja; Anders, Christoph
2015-11-04
The selective forces that played a role in the evolution of the musculoskeletal system of the genus Homo have long been debated and remain poorly understood. In this investigation, we introduce a new approach for testing alternative hypotheses. Our analysis is based on the premise that natural selection can be expected to have resulted in muscles that are large enough to achieve necessary levels of maximum performance in essential behaviors, but not larger. We used surface electromyography in male subjects to identify maximum activation levels in 13 muscles of the back and leg during eight behaviors that have been suggested to have been important to foraging, hunting and fighting performance in early humans. We asked two questions: (1) what behaviors produce maximum activation in each of the investigated muscles and (2) are there specific behaviors that elicit maximum recruitment from all or most of the muscles? We found that in eight of the 13 muscles, the highest activity occurred during maximal effort vertical jumping (i.e. whole-body acceleration). Punching produced the highest median activity in the other five muscles. Together, jumping and punching accounted for 73% of the incidences of maximum activity among all of the muscles and from all of the subjects. Thus, the size of the muscles of the back and leg appear to be more related to the demands of explosive behaviors rather than those of high speed sprinting or sustained endurance running. These results are consistent with the hypothesis that selection on aggressive behavior played an important role in the evolution of the genus Homo.
David R. Carrier
2015-12-01
The selective forces that played a role in the evolution of the musculoskeletal system of the genus Homo have long been debated and remain poorly understood. In this investigation, we introduce a new approach for testing alternative hypotheses. Our analysis is based on the premise that natural selection can be expected to have resulted in muscles that are large enough to achieve necessary levels of maximum performance in essential behaviors, but not larger. We used surface electromyography in male subjects to identify maximum activation levels in 13 muscles of the back and leg during eight behaviors that have been suggested to have been important to foraging, hunting and fighting performance in early humans. We asked two questions: (1) what behaviors produce maximum activation in each of the investigated muscles and (2) are there specific behaviors that elicit maximum recruitment from all or most of the muscles? We found that in eight of the 13 muscles, the highest activity occurred during maximal effort vertical jumping (i.e. whole-body acceleration). Punching produced the highest median activity in the other five muscles. Together, jumping and punching accounted for 73% of the incidences of maximum activity among all of the muscles and from all of the subjects. Thus, the size of the muscles of the back and leg appear to be more related to the demands of explosive behaviors rather than those of high speed sprinting or sustained endurance running. These results are consistent with the hypothesis that selection on aggressive behavior played an important role in the evolution of the genus Homo.
Mohanty, Sankhya; Hattel, Jesper Henri
2015-01-01
Selective laser melting is yet to become a standardized industrial manufacturing technique. The process continues to suffer from defects such as distortions, residual stresses, localized deformations and warpage caused primarily due to the localized heating, rapid cooling and high temperature gradients that occur during the process. While process monitoring and control of selective laser melting is an active area of research, establishing the reliability and robustness of the process still remains a challenge. In this paper, a methodology for generating reliable, optimized scanning paths and process parameters for selective laser melting of a standard sample is introduced. The processing of the sample is simulated by sequentially coupling a calibrated 3D pseudo-analytical thermal model with a 3D finite element mechanical model. The optimized processing parameters are subjected to a Monte Carlo …
González, M; Gutiérrez, C; Martínez, R
2012-09-01
A two-dimensional bisexual branching process has recently been presented for the analysis of the generation-to-generation evolution of the number of carriers of a Y-linked gene. In this model, preference of females for males with a specific genetic characteristic is assumed to be determined by an allele of the gene. It has been shown that the behavior of this kind of Y-linked gene is strongly related to the reproduction law of each genotype. In practice, the corresponding offspring distributions are usually unknown, and it is necessary to develop their estimation theory in order to determine the natural selection of the gene. Here we deal with the estimation problem for the offspring distribution of each genotype of a Y-linked gene when the only observable data are each generation's total numbers of males of each genotype and of females. We set out the problem in a nonparametric framework and obtain the maximum likelihood estimators of the offspring distributions using an expectation-maximization algorithm. From these estimators, we also derive the estimators for the reproduction mean of each genotype and forecast the distribution of the future population sizes. Finally, we check the accuracy of the algorithm by means of a simulation study.
Kozine, Igor; Christensen, P.; Winther-Jensen, M.
2000-01-01
The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine components and subsystems and wind turbines as a whole, as well as for assessing wind turbine availability while ranking the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting WTB availability and the reliability of all the components and systems, especially the safety system. The report consists of a description of the theoretical foundation of the reliability and availability analyses and of sections devoted to the development of the WTB reliability models as well as a description of the features of the database and software developed. The project comprises analysis of WTBs NM 600/44, 600/48, 750/44 and 750/48, all of which have similar safety systems.
Reliability and validity of admissions tools used to select students for the health professions.
Salvatori, P
2001-01-01
The selection of students for the health professions is typically a very competitive multi-staged process that includes assessment of both cognitive abilities and personal qualities. The need for reliable and valid assessment measures is obvious. This review of the health professions literature examines the evidence to support the use of various selection tools. It is clear that pre-admission overall grade point average (GPA) is the best predictor of academic performance in all of the health professions; however, the relationship between pre-admission GPA and clinical performance is less clear. The Medical College Admission Test is a good predictor of performance of medical students in terms of in-course grades and licensing examination scores, but a similar test does not exist in the other health professions. Controversy remains as to the value of personal interviews and written submissions as selection tools, although it is clear that training of assessors and explicit rating guidelines enhance their reliability and validity. Ongoing research is needed to find more reliable and valid ways of assessing non-cognitive characteristics of applicants.
Ellinwood, Nicholas; Dobrev, Dobromir; Morotti, Stefano; Grandi, Eleonora
2017-09-01
The KV1.5 potassium channel, which underlies the ultra-rapid delayed-rectifier current (IKur) and is predominantly expressed in atria vs. ventricles, has emerged as a promising target to treat atrial fibrillation (AF). However, while numerous KV1.5-selective compounds have been screened, characterized, and tested in various animal models of AF, evidence of antiarrhythmic efficacy in humans is still lacking. Moreover, current guidelines for pre-clinical assessment of candidate drugs heavily rely on steady-state concentration-response curves or IC50 values, which can overlook adverse cardiotoxic effects. We sought to investigate the effects of kinetics and state-dependent binding of IKur-targeting drugs on atrial electrophysiology in silico and reveal the ideal properties of IKur blockers that maximize anti-AF efficacy and minimize pro-arrhythmic risk. To this aim, we developed a new Markov model of IKur that describes KV1.5 gating based on experimental voltage-clamp data in atrial myocytes from patient right-atrial samples in normal sinus rhythm. We extended the IKur formulation to account for state-specificity and kinetics of KV1.5-drug interactions and incorporated it into our human atrial cell model. We simulated 1- and 3-Hz pacing protocols in drug-free conditions and with a [drug] equal to the IC50 value. The effects of binding and unbinding kinetics were determined by examining permutations of the forward (kon) and reverse (koff) binding rates to the closed, open, and inactivated states of the KV1.5 channel. We identified a subset of ideal drugs exhibiting anti-AF electrophysiological parameter changes at fast pacing rates (effective refractory period prolongation), while having little effect on normal sinus rhythm (limited action potential prolongation). Our results highlight that accurately accounting for channel interactions with drugs, including kinetics and state-dependent binding, is critical for developing safer and more effective pharmacological anti…
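The role of binding and unbinding kinetics can be illustrated with a deliberately minimal open-state block scheme. The three-state model below, its gating rates, and the chosen kon/koff values are assumptions for illustration only and are far simpler than the authors' fitted KV1.5 Markov model:

```python
def simulate_open_state_block(kon, koff, drug, t_end=2.0, dt=1e-4):
    """Forward-Euler integration of a minimal scheme C <-> O <-> OB,
    where the drug binds only to the open state O. Gating rates are
    illustrative placeholders, not fitted KV1.5 parameters."""
    alpha, beta = 50.0, 10.0      # C->O and O->C rates (1/s), assumed
    c, o, ob = 1.0, 0.0, 0.0      # start fully closed and drug-free
    for _ in range(int(t_end / dt)):
        dc = beta * o - alpha * c
        do = alpha * c - beta * o - kon * drug * o + koff * ob
        dob = kon * drug * o - koff * ob
        c += dc * dt
        o += do * dt
        ob += dob * dt
    return c, o, ob

# Two drugs with the same koff/kon ratio (same equilibrium block) but
# very different kinetics -- the distinction the study emphasizes and
# that IC50-only screening misses.
slow = simulate_open_state_block(kon=1e3, koff=1.0, drug=1e-3)
fast = simulate_open_state_block(kon=1e5, koff=100.0, drug=1e-3)
```

Because both parameter sets share the same dissociation constant, their steady-state block is identical; only the time course of block accumulation during pacing separates them, which is why state- and rate-dependent models matter.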
Lin, Yi-Kuei; Yeh, Cheng-Ta
2013-05-01
From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.
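A toy version of this carrier-selection problem can be written down directly. The data, the assumption of independent routes in series, and the exhaustive search (standing in for the paper's genetic algorithm, minimal paths and Recursive Sum of Disjoint Products) are all illustrative simplifications:

```python
import itertools

# Hypothetical data: two routes, each with two candidate carriers.
# A carrier's capacity is multi-state: {capacity: probability}.
carriers = {
    "A": {2: 0.2, 4: 0.5, 6: 0.3},
    "B": {2: 0.1, 4: 0.2, 6: 0.7},
    "C": {3: 0.4, 5: 0.6},
    "D": {3: 0.1, 5: 0.9},
}
route_options = {"r1": ["A", "B"], "r2": ["C", "D"]}
# Total units demanded on each route, summed over all commodities.
route_demand = {"r1": 4, "r2": 5}

def route_reliability(carrier, demand):
    """P(selected carrier's capacity >= demand on this route)."""
    return sum(p for cap, p in carriers[carrier].items() if cap >= demand)

def best_selection(route_options, route_demand):
    """Exhaustive search over carrier assignments (a GA replaces this at
    realistic problem sizes); routes are treated as independent."""
    routes = list(route_options)
    best, best_rel = None, -1.0
    for combo in itertools.product(*(route_options[r] for r in routes)):
        rel = 1.0
        for r, c in zip(routes, combo):
            rel *= route_reliability(c, route_demand[r])
        if rel > best_rel:
            best, best_rel = dict(zip(routes, combo)), rel
    return best, best_rel

selection, reliability = best_selection(route_options, route_demand)
```

With the numbers above, carrier B is preferred on r1 (0.9 vs. 0.8) and D on r2 (0.9 vs. 0.6), giving a network reliability of 0.81.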
Kozine, I.; Christensen, P.; Winther-Jensen, M.
2000-01-01
The objective of this project was to develop and establish a database for collecting reliability and reliability-related data, for assessing the reliability of wind turbine components and subsystems and wind turbines as a whole, as well as for assessing wind turbine availability while ranking the contributions at both the component and system levels. The project resulted in a software package combining a failure database with programs for predicting WTB availability and the reliability of all the components and systems, especially the safety system. The report consists of a description of the theoretical foundation of the reliability and availability analyses and of sections devoted to the development of the WTB reliability models as well as a description of the features of the database and software developed. The project comprises analysis of WTBs NM 600/44, 600/48, 750/44 and 750/48, all of which have similar safety systems. The database was established with Microsoft Access Database Management System, the software for reliability and availability assessments was created with Visual Basic. (au)
Panos G. Datskos; Michael J. Sepaniak; Nickolay Lavrik; Pampa Dutta; Mustafa Culha
2005-12-28
The main objective of this research program is to develop robust and reliable micro-electro-mechanical sensing systems, based on microcantilevers (MCs), that can operate in liquid environments with high levels of sensitivity and selectivity. The chemical responses of MCs result from analyte-induced differential stress at the cantilever surfaces. We aim to employ various surface nanostructuring strategies that enhance these stresses and hence the degree of static bending of the cantilevers. Receptor phases as self assembled monolayers (SAMs) and thin films are being synthesized and tested to provide selectivity. Selectivity is chemically enhanced by using different phases on individual MCs in arrays and by adding a spectroscopic component, surface enhanced Raman spectrometry (SERS), in hybrid approaches to sensing. Significant progress was made in tasks that were listed in the work plan for DOE EMSP project ''Hybrid Micro-Electro-Mechanical Systems for Highly Reliable and Selective Characterization of Tank Waste''. Several project areas are listed below and discussed and referenced to our literature on the topics.
Brendle, Joerg
2016-01-01
We show that, consistently, there can be maximal subtrees of P (omega) and P (omega) / fin of arbitrary regular uncountable size below the size of the continuum. We also show that there are no maximal subtrees of P (omega) / fin with countable levels. Our results answer several questions of Campero, Cancino, Hrusak, and Miranda.
Reliable selection of earthquake ground motions for performance-based design
Katsanos, Evangelos; Sextos, A.G.
2016-01-01
A decision support process is presented to accommodate selecting and scaling of earthquake motions as required for the time domain analysis of structures. Prequalified code-compatible suites of seismic motions are provided through a multi-criterion approach to satisfy prescribed reduced variability of selected Engineering Demand Parameters. Such a procedure, even though typically overlooked, is imperative to increase the reliability of the average response values, as required for the code-prescribed design verification of structures. Structure-related attributes such as the dynamic characteristics … of the method, by being subjected to numerous suites of motions that were highly ranked according to both the proposed approach (δsv-sc) and the conventional index (δconv), already used by most existing code-based earthquake records selection and scaling procedures. The findings reveal the superiority …
Meguro,Tadamichi
1986-08-01
Maximal expiratory volume-time and flow-volume (MEVT and MEFV) curves were drawn for young male nonsmoking healthy adults and for young male nonsmoking asthmatic patients. Eleven parameters were used for the multivariate analysis: two MEVT parameters (%FVC and FEV1.0%), six MEFV parameters (PFR, V75, V50, V25, V10 and V50/V25), and three MTC parameters (MTC75-50, MTC50-25 and MTC25-RV). The multivariate analysis in this study consisted of correlation coefficient matrix computation, the test for mean values in the multivariates, and the linear discriminant analysis using the all possible selection procedure (APSP). Correlation coefficients among flow rate parameters and flow rate related parameters in high lung volumes were different between the two groups. In the eleven-parameter discriminant analysis by APSP using single parameters, PFR, V75 (flow rate at 75% of forced vital capacity), and FEV1.0% were considered to be the effective parameters. In the seven-parameter discriminant analysis using the parameter groups, the group of all parameters and the %FVC and flow rate-related parameter group were considered to be the effective numerical alternatives to MEFV curves discriminating between healthy adults and asthmatic patients.
Hoagland, R J; Mendoza, M; Armon, C; Barohn, R J; Bryan, W W; Goodpasture, J C; Miller, R G; Parry, G J; Petajan, J H; Ross, M A
1997-06-01
Maximal voluntary isometric contraction (MVIC) is becoming widely used for monitoring disease progression in amyotrophic lateral sclerosis (ALS). We evaluated the variability of MVIC in a large multicenter (29 sites) drug trial in ALS. Intra- and interrater variability were assessed twice during the 19-month study. Intrarater reliability increased from the first to the second test, approaching the reliability reported for a single experienced clinical evaluator, but interrater reliability did not. Multiple clinical evaluators in a single site increased the variability of MVIC measurements. Rigorous quality assurance standards and monitoring of clinical evaluators should be incorporated into the design of multicenter studies using MVIC, since low variability is necessary to detect a modest treatment effect.
K B Athreya
2009-09-01
It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class, and (ii) in the class of all pdfs $f$ that satisfy $\int f h_i \, d\mu = \lambda_i$ for $i = 1, 2, \ldots, k$, the maximizer of entropy is an $f_0$ that is proportional to $\exp(\sum_i c_i h_i)$ for some choice of $c_i$. An extension of this to a continuum of constraints and many examples are presented.
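Result (ii) can be checked numerically in a small case: with a single constraint $h(x) = x$ (a fixed mean) on a finite support, the entropy maximizer takes the exponential-tilt form $f_0 \propto \exp(cx)$, and the tilt $c$ can be found by bisection since the tilted mean is monotone in $c$. The support and target mean below are arbitrary choices for illustration:

```python
import math

def maxent_mean(support, target_mean):
    """Find p_i proportional to exp(c * x_i) whose mean equals
    target_mean, by bisection on the tilt c (the tilted mean is
    monotonically increasing in c)."""
    def mean_for(c):
        w = [math.exp(c * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    c = (lo + hi) / 2
    w = [math.exp(c * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

support = [0, 1, 2, 3, 4]
p0 = maxent_mean(support, target_mean=1.5)
# Any other feasible pmf with the same mean has strictly lower entropy;
# e.g. the two-point mixture below also has mean 1.5.
q = [0.5, 0.0, 0.0, 0.5, 0.0]
```

Comparing `entropy(p0)` against `entropy(q)` illustrates the maximality claim for one competing feasible density; the theorem guarantees it for all of them.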
Sensor Selection and Data Validation for Reliable Integrated System Health Management
Garg, Sanjay; Melcher, Kevin J.
2008-01-01
For new access to space systems with challenging mission requirements, effective implementation of integrated system health management (ISHM) must be available early in the program to support the design of systems that are safe, reliable, and highly autonomous. Early ISHM availability is also needed to promote design for affordable operations; increased knowledge of functional health provided by ISHM supports construction of more efficient operations infrastructure. Lack of early ISHM inclusion in the system design process could result in retrofitting health management systems to augment and expand operational and safety requirements, thereby increasing program cost and risk due to increased instrumentation and computational complexity. Having the right sensors generating the required data to perform condition assessment, such as fault detection and isolation, with a high degree of confidence is critical to reliable operation of ISHM. Also, the data being generated by the sensors need to be qualified to ensure that the assessments made by the ISHM are not based on faulty data. NASA Glenn Research Center has been developing technologies for sensor selection and data validation as part of the FDDR (Fault Detection, Diagnosis, and Response) element of the Upper Stage project of the Ares 1 launch vehicle development. This presentation will provide an overview of the GRC approach to sensor selection and data quality validation and will present recent results from applications that are representative of the complexity of propulsion systems for access to space vehicles. A brief overview of the sensor selection and data quality validation approaches is provided below. The NASA GRC developed Systematic Sensor Selection Strategy (S4) is a model-based procedure for systematically and quantitatively selecting an optimal sensor suite to provide overall health assessment of a host system.
S4 can be logically partitioned into three major subdivisions: the knowledge base, the down-select …
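S4 itself is only named above, so the following is merely a generic illustration of a sensor down-select: a set-cover-style greedy pass over a fault-signature matrix. The sensor names, signatures, and the coverage criterion are all hypothetical, not drawn from the S4 documentation:

```python
# Hypothetical fault-signature matrix: rows = candidate sensors,
# columns = faults; True means the sensor responds to that fault.
signatures = {
    "p_sensor": [True,  True,  False, False],
    "t_sensor": [False, True,  True,  False],
    "v_sensor": [False, False, True,  True],
    "f_sensor": [True,  False, False, True],
}

def greedy_down_select(signatures, n_faults):
    """Pick sensors until every fault is detectable by at least one
    selected sensor (a stand-in for the model-based, quantitative
    scoring a strategy like S4 would actually use)."""
    covered = set()
    chosen = []
    remaining = dict(signatures)
    while len(covered) < n_faults and remaining:
        # Choose the sensor covering the most not-yet-covered faults.
        name = max(remaining,
                   key=lambda s: len({i for i, hit in enumerate(remaining[s])
                                      if hit} - covered))
        chosen.append(name)
        covered |= {i for i, hit in enumerate(remaining.pop(name)) if hit}
    return chosen, covered

chosen, covered = greedy_down_select(signatures, n_faults=4)
```

With the signatures above, two sensors suffice to make all four faults detectable; a real selection would additionally weigh detection confidence, sensor reliability, and cost.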
Zhang, Zhenhua; Tian, Jin; Du, Juan
2017-02-01
We demonstrate a simple way to realize control of population transfer and creation of two orthogonal maximally superposition states in a Λ-type four-level system with closely spaced doublet target states via a pair of pump and chirped Stokes pulses. It is illustrated that the population in the initial state can be selectively, completely and robustly transferred to either of the doublet target states via chirped adiabatic passage with the suitable chirp rate and frequency detuning of the Stokes pulse. Besides, creation of two orthogonal maximally superposition states between the initial state and intermediate state with equal amplitude but inverse relative phases is also shown, which may have potential applications in the preparations of quantum bits.
Zolkowska, Dorota; Kondrat-Wrobel, Maria W; Florek-Luszczki, Magdalena; Luszczki, Jarogniew J
2016-02-04
The aim of this study was to determine the effects of 2-methyl-6-(phenylethynyl)pyridine (MPEP, a selective antagonist for the glutamate metabotropic receptor subtype mGluR5) on the protective action of some novel antiepileptic drugs (lamotrigine, oxcarbazepine, pregabalin and topiramate) against maximal electroshock-induced seizures in mice. Brain concentrations of antiepileptic drugs were measured to determine whether MPEP altered the pharmacokinetics of the antiepileptic drugs. Intraperitoneal injection of 1.5 and 2 mg/kg of MPEP significantly elevated the threshold for electroconvulsions in mice, whereas MPEP at a dose of 1 mg/kg considerably enhanced the anticonvulsant activity of pregabalin and topiramate, but not that of lamotrigine or oxcarbazepine, in the maximal electroshock-induced seizures in mice. Pharmacokinetic results revealed that MPEP (1 mg/kg) did not alter total brain concentrations of pregabalin and topiramate, and the observed effect in the mouse maximal electroshock seizure model was pharmacodynamic in nature. Collectively, our preclinical data suggest that MPEP may be a safe and beneficial adjunct to the therapeutic effects of antiepileptic drugs in human patients.
Selection of reliable reference genes in Caenorhabditis elegans for analysis of nanotoxicity.
Yanqiong Zhang
Despite the rapid development and application of a wide range of manufactured metal oxide nanoparticles (NPs), the understanding of the potential risks of using NPs is less complete, especially at the molecular level. The nematode Caenorhabditis elegans (C. elegans) has been emerging as an environmental model for studying the molecular mechanisms of environmental contamination, using standard genetic tools such as real-time quantitative PCR (RT-qPCR). The most important factor that may affect the accuracy of RT-qPCR is the choice of appropriate genes for normalization. In this study, we selected 13 reference gene candidates (act-1, cdc-42, pmp-3, eif-3.C, actin, act-2, csq-1, Y45F10D.4, tba-1, mdh-1, ama-1, F35G12.2, and rbd-1) and tested their expression stability under different doses of nano-copper oxide (CuO; 0, 1, 10, and 50 µg/mL) using RT-qPCR. Four algorithms, geNorm, NormFinder, BestKeeper, and the comparative ΔCt method, were employed to evaluate the expression of these 13 candidates. As a result, tba-1, Y45F10D.4 and pmp-3 were the most reliable and may be used as reference genes in future studies of nanoparticle-induced genetic responses using C. elegans.
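Of the four algorithms named, the comparative ΔCt method is the simplest to sketch: for each candidate gene, average the standard deviation of its pairwise Ct differences with every other candidate across samples, and rank lowest-first (lower means more stable). The Ct values below are hypothetical, not the study's data:

```python
import statistics

def delta_ct_stability(ct_values):
    """Comparative delta-Ct stability score per gene: the mean standard
    deviation of its pairwise Ct differences with every other gene
    across samples; lower scores indicate more stable expression."""
    genes = list(ct_values)
    scores = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            diffs = [a - b for a, b in zip(ct_values[g], ct_values[h])]
            sds.append(statistics.stdev(diffs))
        scores[g] = statistics.mean(sds)
    return scores

# Hypothetical Ct values for 3 candidate genes over 4 CuO doses.
ct = {
    "tba-1":     [20.1, 20.2, 20.0, 20.1],   # stable
    "pmp-3":     [22.3, 22.4, 22.2, 22.4],   # stable
    "candidate": [18.0, 19.5, 21.0, 17.2],   # dose-responsive, unstable
}
scores = delta_ct_stability(ct)
ranking = sorted(scores, key=scores.get)     # most stable first
```

A dose-responsive gene inflates the SD of every ΔCt pair it participates in and drops to the bottom of the ranking, which is exactly why such genes are rejected as normalizers.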
Mohanty, Sankhya; Hattel, Jesper Henri
2016-01-01
… modelling has been adopted towards improving the predictability of the outputs from the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially in components having overhanging structures. In this paper, a systematic approach towards establishing reliability of overhanging structure production by selective laser melting has been adopted. A calibrated, fast, multiscale thermal model is used to simulate the single track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using same processing parameters … powder bed without support structures, by determining cumulative probability distribution functions for average layer thickness, sample density and thermal homogeneity.
Rozario, Philip A; Kidahashi, Miwako; DeRienzis, Daniel R
2011-02-01
This qualitative study of 45 older adults examines how they allocate their resources in the face of chronic health conditions. Participants were recruited from 2 senior centers and interviewed about their repertoire of activities, any changes in those activities in later life, and meanings they ascribed to those changes. The Selection, Optimization, and Compensation model guided our analysis and interpretation of participants' responses. The findings demonstrate the complexity of participants' responses to age-related changes, particularly in how they adapted and negotiated both their perception and life goals when faced with changing social landscapes. We discuss some implications and nuances of our findings.
JOANEL U. BUENAVENTURA
2014-04-01
This study aimed to help Batelec-1 in Calatagan expand and improve its service in order to provide more reliable service to end-users. A descriptive research method was utilized. The results reveal that the respondents' overall assessment of the level of service reliability in terms of construction services was "reliable". However, the clearing-of-line schedule and the availability of maintenance equipment and machines were assessed as less reliable. Regarding consumers' satisfaction with construction and maintenance services, the overall assessment was "satisfied", but interruption duration and action on complaints and requests received the lowest mean scores, interpreted as "less satisfied". The results also show that the common problems and complaints encountered by consumers were a lack of information drive, the unavailability of a consumer hotline, inadequate facilities and equipment, and delayed action on service requests and complaints. Based on these results, the construction and maintenance services of Batelec-1 in the selected barangays of Calatagan are considered reliable, yet can still be improved; electric consumers are generally satisfied with personnel and with construction and maintenance services, but less satisfied with information dissemination. An action plan is proposed to ensure the continued reliability of Batelec-1's services.
Fisher, Carla L; Nussbaum, Jon F
Interpersonal communication is a fundamental part of being and key to health. Interactions within the family are especially critical to wellness across time. Family communication is a central means of adaptation to stress, coping, and successful aging. Still, no theoretical argument in the discipline exists that prioritizes kin communication in health. Theoretical advances can enhance interventions and policies that improve family life. This article explores socioemotional selectivity theory (SST), which highlights the role of communication in our survival. Communication partner choice is based on one's time perspective, which affects the prioritization of goals to survive, goals that are sought socially. This is the first test of SST in a family communication study on women's health and aging. More than 300 women of varying ages and health status participated. Two time factors, later adulthood and late-stage breast cancer, lead women to prioritize family communication. The findings provide a theoretical basis for prioritizing family communication issues in health reform.
Study of selected problems of reliability of the supply chain in the trading company
2010-06-01
The paper presents the problem of the reliability of the supply chain as a whole in dependence on the reliability of its elements. Different variants of reserving channels (primary and reserve ones) and issues connected with their switching are discussed.
Isaksen, Jesper; Hertel, Niels Thomas; Kjær, Niels Kristian
2013-01-01
In order to optimise the selection process for admission to specialist training in family medicine, we developed a new design for structured applications and selection interviews. The design contains semi-structured interviews, which combine individualised elements from the applications with standardised behaviour-based questions. This paper describes the design of the tool and offers reflections concerning its acceptability, reliability and feasibility.
Selection of Reliable Reference Genes for Gene Expression Studies on Rhododendron molle G. Don
Zheng Xiao
2016-10-01
The quantitative real-time polymerase chain reaction (qRT-PCR) approach has become a widely used method for analyzing the expression patterns of target genes. The selection of an optimal reference gene is a prerequisite for the accurate normalization of gene expression in qRT-PCR. The present study constitutes the first systematic evaluation of potential reference genes in Rhododendron molle G. Don. Eleven candidate reference genes in different tissues and in flowers at different developmental stages of R. molle were assessed using three software packages: GeNorm, NormFinder and BestKeeper. The results showed that EF1-α (elongation factor 1-alpha), 18S (18S ribosomal RNA) and RPL3 (ribosomal protein L3) were the most stable reference genes in developing rhododendron flowers, and thus across all tested samples, while tubulin (TUB) was the least stable. ACT5 (actin), RPL3, 18S and EF1-α were found to be the top four choices for different tissues, whereas TUB was again unsuitable for qRT-PCR normalization in these tissues. Three stable reference genes are therefore recommended for the normalization of qRT-PCR data in R. molle. Furthermore, the expression profiles of RmPSY (phytoene synthase) and RmPDS (phytoene dehydrogenase) were assessed using EF1-α, 18S, ACT5 and RPL3, individually and in combination, as internal controls. Similar trends were found, but these trends varied when the least stable reference gene, TUB, was used. The results further prove that it is necessary to validate the stability of reference genes prior to their use for normalization under different experimental conditions. This study provides useful information for reliable qRT-PCR data normalization in gene expression studies of R. molle.
Mohanty, Sankhya; Hattel, Jesper H.
2016-04-01
Repeatability and reproducibility of parts produced by selective laser melting is a standing issue, and coupled with a lack of standardized quality control it presents a major hindrance to the maturing of selective laser melting as an industrial-scale process. Consequently, numerical process modelling has been adopted to improve the predictability of the outputs of the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially in components having overhanging structures. In this paper, a systematic approach towards establishing the reliability of overhanging structure production by selective laser melting has been adopted. A calibrated, fast, multiscale thermal model is used to simulate single track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using the same processing parameters, but at different locations in the powder bed and in different laser scanning directions. The difference in melt track widths and depths captures the effect of changes in incident beam power distribution due to location and processing direction. The experimental results are used in combination with the numerical model and subjected to uncertainty and reliability analysis. The cumulative probability distribution functions obtained for melt track widths and depths are found to be coherent with the observed experimental values. The technique is subsequently extended to the reliability characterization of single layers produced on a thick powder bed without support structures, by determining cumulative probability distribution functions for average layer thickness, sample density and thermal homogeneity.
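The reliability characterization above rests on cumulative probability distribution functions estimated from observations. A minimal sketch of such an empirical CDF, applied here to hypothetical melt-track width measurements (the values are illustrative, not the paper's data):

```python
def empirical_cdf(samples):
    """Return a step-function empirical CDF built from observed samples."""
    xs = sorted(samples)
    n = len(xs)

    def cdf(x):
        # fraction of observations less than or equal to x
        return sum(1 for v in xs if v <= x) / n

    return cdf

# Hypothetical melt-track widths in micrometres.
width_cdf = empirical_cdf([110.0, 118.0, 121.0, 125.0, 132.0])
```

Evaluating `width_cdf` at a candidate tolerance limit then gives the estimated probability that a track falls within that limit.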
Skvortsova, Vasilisa; Degos, Bertrand; Welter, Marie-Laure; Vidailhet, Marie; Pessiglione, Mathias
2017-06-21
Instrumental learning is a fundamental process through which agents optimize their choices, taking into account various dimensions of available options such as the possible reward or punishment outcomes and the costs associated with potential actions. Although the implication of dopamine in learning from choice outcomes is well established, less is known about its role in learning action costs such as effort. Here, we tested the ability of patients with Parkinson's disease (PD) to maximize monetary rewards and minimize physical efforts in a probabilistic instrumental learning task. The implication of dopamine was assessed by comparing performance ON and OFF prodopaminergic medication. In a first sample of PD patients (n = 15), we observed that reward learning, but not effort learning, was selectively impaired in the absence of treatment, with a significant interaction between learning condition (reward vs effort) and medication status (OFF vs ON). These results were replicated in a second, independent sample of PD patients (n = 20) using a simplified version of the task. According to Bayesian model selection, the best account for medication effects in both studies was a specific amplification of reward magnitude in a Q-learning algorithm. These results suggest that learning to avoid physical effort is independent of dopaminergic circuits and strengthen the general idea that dopaminergic signaling amplifies the effects of reward expectation or obtainment on instrumental behavior. SIGNIFICANCE STATEMENT: Theoretically, maximizing reward and minimizing effort could involve the same computations and therefore rely on the same brain circuits. Here, we tested whether dopamine, a key component of reward-related circuitry, is also implicated in effort learning. We found that patients suffering from dopamine depletion due to Parkinson's disease were selectively impaired in reward learning, but not effort learning. Moreover, anti-parkinsonian medication restored reward learning.
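The winning model in the study's Bayesian comparison amplified reward magnitude inside a Q-learning update. A toy sketch of that mechanism follows; the parameter names (`alpha`, `rho`) and values are assumptions for illustration, not the authors' code.

```python
def q_learning_reward_sensitivity(outcomes, alpha=0.3, rho=1.0):
    """Toy Q-learning in which a gain parameter `rho` amplifies reward
    magnitude, mimicking the medication effect favored by the study's
    model comparison (rho > 1 ~ ON medication).

    outcomes: list of (action, reward) pairs observed over trials.
    Returns the final Q-value table.
    """
    q = {}
    for action, reward in outcomes:
        old = q.get(action, 0.0)
        # delta-rule update on the amplified reward
        q[action] = old + alpha * (rho * reward - old)
    return q

# Same reward history, but a larger reward gain ("ON medication"):
q_off = q_learning_reward_sensitivity([("press", 1.0)] * 10, rho=1.0)
q_on = q_learning_reward_sensitivity([("press", 1.0)] * 10, rho=2.0)
```

With `rho > 1`, the same objective rewards produce larger learned action values, which is the sense in which dopaminergic medication "amplifies" reward in this account.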
Service Priority based Reliable Routing Path Select Method in Smart Grid Communication Network
Kaixuan Wang
2012-11-01
The new challenges and schemes of the Smart Grid require highly reliable transmission technologies to support various types of electrical services and applications. This paper considers the degree of importance of services and tries to allocate more important services to more reliable network routing paths in order to deliver key instructions in Smart Grid communication networks. A Pareto probability distribution is used to weight the reliability of IP-based router paths. In order to define the relationship between services and the reliability of router paths, we devise a mapping, and an objective function is presented. An optimization method is used to find the appropriate value matching the objective function. Finally, we validate the proposed algorithm by experiments. The simulation results show that the proposed algorithm outperforms random routing algorithms.
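The core matching idea, highest-priority service onto most reliable path, can be sketched greedily as below. A path's reliability is taken as the product of its link reliabilities (links assumed independent); the paper's Pareto weighting and optimal mapping function are not reproduced, and all names and numbers are illustrative.

```python
def assign_services_to_paths(services, paths):
    """Greedy priority-based reliable path selection sketch.

    services: list of (name, priority) pairs; higher priority = more important.
    paths: dict mapping path id -> list of link reliabilities in [0, 1].
    Returns a dict assigning each service to a path, most important
    service first onto the most reliable path.
    """
    def path_reliability(links):
        r = 1.0
        for p in links:
            r *= p  # independent links: reliabilities multiply
        return r

    ranked_services = sorted(services, key=lambda s: s[1], reverse=True)
    ranked_paths = sorted(paths, key=lambda k: path_reliability(paths[k]),
                          reverse=True)
    return {name: path for (name, _), path in zip(ranked_services, ranked_paths)}

# Hypothetical example: a control instruction outranks routine metering.
plan = assign_services_to_paths([("metering", 1), ("control", 5)],
                                {"A": [0.9, 0.9], "B": [0.99, 0.99]})
```

Here the control service lands on path B (reliability 0.9801) and metering on path A (0.81).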
Middleton, Addie; Fulk, George D; Herter, Troy M; Beets, Michael W; Donley, Jonathan; Fritz, Stacy L
2016-07-01
To determine the degree to which self-selected walking speed (SSWS), maximal walking speed (MWS), and walking speed reserve (WSR) are associated with fall status among community-dwelling older adults, walking speed and 1-year falls history data were collected on 217 community-dwelling older adults (median age = 82, range 65-93 years) at a local outpatient physical therapy clinic and at local retirement communities and senior centers. WSR was calculated as a difference (WSRdiff = MWS - SSWS) and as a ratio (WSRratio = MWS/SSWS). SSWS was associated with fall status and may be useful in fall risk assessment. Combining SSWS and MWS to calculate an individual's WSR does not provide additional insight into fall status in this population. Complete the self-assessment activity and evaluation online at http://www.physiatry.org/JournalCME CME OBJECTIVES: Upon completion of this article, the reader should be able to: (1) describe the different methods for calculating walking speed reserve and discuss the potential of the metric as an outcome measure; (2) explain the degree to which self-selected walking speed, maximal walking speed, and walking speed reserve are associated with fall status among community-dwelling older adults; and (3) discuss potential limitations to using walking speed reserve to identify fall status in populations without mobility restrictions. Level: Advanced. The Association of Academic Physiatrists is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. The Association of Academic Physiatrists designates this activity for a maximum of 1.5 AMA PRA Category 1 Credit(s). Physicians should only claim credit commensurate with the extent of their participation in the activity.
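The two WSR definitions stated in the abstract are simple enough to compute directly; the sketch below implements both (the input bounds check is an added assumption, not part of the study's protocol):

```python
def walking_speed_reserve(ssws, mws):
    """Walking speed reserve, both ways the study defines it:
    as a difference (MWS - SSWS) and as a ratio (MWS / SSWS).
    Speeds must be in the same units (e.g., m/s)."""
    if ssws <= 0 or mws < ssws:
        raise ValueError("expect 0 < SSWS <= MWS")
    return {"diff": mws - ssws, "ratio": mws / ssws}

# e.g. SSWS 1.0 m/s, MWS 1.5 m/s -> WSRdiff 0.5 m/s, WSRratio 1.5
wsr = walking_speed_reserve(1.0, 1.5)
```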
A Feature Selection Method Based on Maximal Marginal Relevance
刘赫; 张相洪; 刘大有; 李燕军; 尹立军
2012-01-01
With the rapid growth of textual information on the Internet, text categorization has become one of the key research directions in data mining. Text categorization is a supervised learning process, defined as automatically assigning free text to one or more predefined categories. At present, text categorization is necessary for managing textual information and has been applied in many fields. However, text categorization has two characteristics: high dimensionality of the feature space and a high level of feature redundancy. To address these, the χ² statistic is used to deal with the high dimensionality of the feature space, and information novelty is used to deal with the high level of feature redundancy. Following the definition of maximal marginal relevance, a feature selection method based on maximal marginal relevance is proposed, which can reduce redundancy between features during feature selection. Experiments were carried out on two text data sets, Reuters-21578 Top10 and OHSCAL. The results indicate that the feature selection method based on maximal marginal relevance is more efficient than the χ² statistic and information gain, and that it improves the performance of three different classifiers: naive Bayes, Rocchio, and kNN.
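The maximal-marginal-relevance idea combines a relevance score (such as χ²) with a redundancy penalty against features already chosen. A generic greedy sketch, with the trade-off weight `lam` and all scores being illustrative assumptions rather than the paper's settings:

```python
def mmr_select(relevance, similarity, k, lam=0.7):
    """Greedy MMR feature selection sketch.

    relevance: dict mapping feature -> relevance score (e.g., chi-square).
    similarity: function (f, g) -> redundancy in [0, 1].
    Picks k features maximizing lam * relevance - (1 - lam) * redundancy,
    where redundancy is the max similarity to any already-selected feature.
    """
    selected = []
    candidates = set(relevance)
    while candidates and len(selected) < k:
        def mmr_score(f):
            redundancy = max((similarity(f, g) for g in selected), default=0.0)
            return lam * relevance[f] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Hypothetical scores: "a" and "b" are near-duplicates, "c" is independent.
rel = {"a": 1.0, "b": 0.95, "c": 0.5}
sim = lambda f, g: 1.0 if {f, g} == {"a", "b"} else 0.0
chosen = mmr_select(rel, sim, 2, lam=0.5)
```

In this toy case MMR picks "a" and then "c": the redundancy penalty pushes the near-duplicate "b" below the less relevant but independent "c".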
MAXIMS VIOLATIONS IN LITERARY WORK
Widya Hanum Sari Pertiwi
2015-12-01
This study was qualitative research that focused on identifying the flouting of the Gricean maxims, and the functions of that flouting, in the tales included in the collection of children's literature entitled My Giant Treasury of Stories and Rhymes. The objective of the study was to identify violations of the maxims of quantity, quality, relevance, and manner in the data sources, and to analyze the use of flouting in the tales included in the book. A qualitative design using categorizing strategies, specifically a coding strategy, was applied. The researcher, as the instrument in this investigation, selected the tales, read them, and gathered every item reflecting a violation of the Gricean maxims based on the conditions for flouting maxims. On the basis of the data analysis, it was found that some utterances in the tales, in both narration and conversation, flout the four maxims of conversation: the maxims of quality, quantity, relevance, and manner. The researcher also found that the flouting of maxims has one basic function, namely to engage the readers' imagination in the tales. This basic function is developed by six further functions: (1) generating a specific situation, (2) developing the plot, (3) enlivening the characters' utterances, (4) implicating a message, (5) indirectly characterizing characters, and (6) creating an ambiguous setting. Keywords: children's literature, tales, flouting maxims
Juciane Maria de Andrade Castro
2013-01-01
Airway smooth muscle constriction induced by cholinergic agonists such as methacholine (MCh), which is typically increased in asthmatic patients, is regulated mainly by muscle muscarinic M3 receptors and negatively by vagal muscarinic M2 receptors. Here we evaluated basal (intrinsic) and allergen-induced (extrinsic) airway responses to MCh. We used two mouse lines selected to respond maximally (AIRmax) or minimally (AIRmin) to innate inflammatory stimuli. We found that in the basal condition AIRmin mice responded more vigorously to MCh than AIRmax mice. Treatment with a specific M2 antagonist increased the airway response of AIRmax but not of AIRmin mice. The expression of M2 receptors in the lung was significantly lower in AIRmin than in AIRmax animals. AIRmax mice developed a more intense allergic inflammation than AIRmin mice, and both allergic mouse lines showed increased airway responses to MCh. However, gallamine treatment of the allergic groups did not affect the responses to MCh. Our results confirm that low or dysfunctional M2 receptor activity is associated with increased airway responsiveness to MCh, and show that this trait was inherited during the selective breeding of AIRmin mice and was acquired by AIRmax mice during allergic lung inflammation.
Events data as Bismarck’s sausages? Intercoder reliability, coders' selection and data quality
Ruggeri, A.; Gizelis, T.I.; Dorussen, H.
2011-01-01
Precise measurement is difficult but essential in the generation of high-quality data, and it is therefore remarkable that often so little attention is paid to intercoder reliability. It is commonly recognized that poor validity leads to systematic errors and biased inference. In contrast, low reliability leads to random errors.
De Rosa, G; Grasso, F; Winckler, C; Bilancione, A; Pacelli, C; Masucci, F; Napolitano, F
2015-10-01
Within the general aim of developing a Welfare Quality system for monitoring dairy buffalo welfare, this study focused on the prevalence and interobserver reliability of the animal-related variables to be included in the scheme. As most of the measures were developed for cattle, the study also aimed to verify their prevalence in buffaloes. Thirty animal-based measures (22 clinical and 8 behavioral measurements) and 20 terms used for qualitative behavior assessment were assessed on 42 loose-housed buffalo farms. All farms were located in central-southern Italy. Two assessors were used (1 male and 1 female). The time needed to record all measures (animal-, resource-, and management-based) was 5.47 ± 0.48 h (mean ± SD). Interobserver reliability of animal-based measures was evaluated using the Spearman rank correlation coefficient (rs). If 0.7 is considered as the threshold for high interobserver reliability, all animal-based measures were above this level. In particular, most of the coefficients were above 0.85, with higher values observed for the prevalence of animals that can be touched (rs = 0.99) and the prevalence of animals with iatrogenic abscess (rs = 0.97), whereas lower coefficients were found for the prevalence of vulvar discharge (rs = 0.74) and dewlap edema (rs = 0.73). Twelve out of the 20 terms used for the qualitative behavior assessment reached satisfactory interobserver reliability (rs ≥ 0.65). Principal component analysis of qualitative behavior assessment scores was conducted for each assessor. Both principal component 1 and principal component 2 showed high interobserver reliability (rs = 0.80 and 0.79, respectively). In addition, relevant proportions of animals were affected by welfare issues specific to buffaloes, such as overgrown claws (median = 34.1%), withers hygroma (median = 13.3%), and vulvar or uterine prolapse (median = 9.3%). We concluded that most of the investigated measures could be reliably included in the final scheme, which can be used for on-farm monitoring of dairy buffalo welfare.
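The interobserver-reliability statistic used throughout the study, Spearman's rank correlation between the two assessors' scores, can be sketched from scratch as below (ties receive average ranks; the example scores are hypothetical):

```python
def spearman_rho(x, y):
    """Spearman rank correlation between two assessors' score lists,
    computed as the Pearson correlation of average ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # extend the block while values are tied
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for the tied block
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical prevalence scores from two assessors over four farms:
rho = spearman_rho([12, 30, 7, 22], [14, 28, 9, 25])
```

Values above the 0.7 threshold would, following the study's criterion, count as high interobserver reliability.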
Resource-efficient path-protection schemes and online selection of routes in reliable WDM networks
Monti, Paolo; Tacca, Marco; Fumagalli, Andrea
2004-04-01
The optimal choice of routing and wavelength assignment (RWA) for the working and protection path-pair of the newly generated demand request is often a complex problem in reliable wavelength-division-multiplexed (WDM) networks subject to dynamic traffic. The challenge is twofold: how to provide the required reliability level without over-reserving network resources and how to find a good solution of the RWA problem under constrained computational time. Two important contributions are made. First, the shared path protection (SPP) switching scheme is generalized to guarantee the required (differentiated) level of reliability to all arriving demands, while, at the same time, ensuring that they contain the required amount of reserved network resources. This generalization is referred to as SPP-DiR. Second, an approach for choosing the working and protection path-pair routing for the arriving demand is proposed. The approach is based on a matrix of preselected path-pairs: the disjoint path-pair matrix (DPM). Results show that, when the SPP-DiR scheme is applied, a small reduction in demand reliability corresponds to a significant reduction of the required network resources, when compared with the conventional SPP. In turn, the demand blocking probability may be reduced more than one order of magnitude. It is also shown that the DPM approach is suitable for obtaining satisfactory RWA solutions in both SPP-DiR and conventional SPP networks. The use of the DPM is most suited when the time for solving the RWA problem is constrained, e.g., when demand requests must be served swiftly.
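The disjoint path-pair matrix (DPM) described above preselects working/protection pairs that cannot be taken down by a single link failure. A small analogue, which only enumerates link-disjoint pairs and does not reproduce the paper's RWA or SPP-DiR logic, with hypothetical path/link labels:

```python
from itertools import combinations

def disjoint_path_pairs(paths):
    """Enumerate candidate working/protection path pairs that share no
    link, so one link failure cannot hit both paths of a pair.

    paths: dict mapping path id -> set of link ids.
    Returns a list of (path_a, path_b) id pairs.
    """
    pairs = []
    for a, b in combinations(sorted(paths), 2):
        if not (paths[a] & paths[b]):  # empty intersection => link-disjoint
            pairs.append((a, b))
    return pairs

# Hypothetical topology: p1 and p2 share link 2, p3 is independent.
pairs = disjoint_path_pairs({"p1": {1, 2}, "p2": {2, 3}, "p3": {4, 5}})
```

A routing engine would then pick among the precomputed pairs, which is what makes the approach attractive when the time for solving the RWA problem is constrained.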
Reliability and Validity of Selected PROMIS Measures in People with Rheumatoid Arthritis.
Susan J Bartlett
To evaluate the reliability and validity of 11 PROMIS measures for assessing symptoms and impacts identified as important by people with rheumatoid arthritis (RA), consecutive patients (N = 177) in an observational study completed PROMIS computer adaptive tests (CATs) and a short form (SF) assessing pain, fatigue, physical function, mood, sleep, and participation. We assessed test-retest reliability and internal consistency using correlation and Cronbach's alpha. We assessed convergent validity by examining Pearson correlations between PROMIS measures and existing measures of similar domains, and known-groups validity by comparing scores across disease activity levels using ANOVA. Participants were mostly female (82%) and white (83%), with a mean (SD) age of 56 (13) years; 24% had at most a high school education, 29% had RA of 5 years or less (13% of 2 years or less), and 22% were disabled. The PROMIS Physical Function, Pain Interference, and Fatigue instruments correlated moderately to strongly (rho ≥ 0.68) with corresponding PROs. Test-retest reliability ranged from .725 to .883, and Cronbach's alpha from .906 to .991. A dose-response relationship with disease activity was evident for Physical Function, with similar trends in the other scales except Anger. These data provide preliminary evidence of the reliability and construct validity of PROMIS CATs for assessing RA symptoms and impacts, and of their feasibility for use in clinical care. PROMIS instruments captured the experiences of RA patients across the broad continuum of RA symptoms and function, especially at low disease activity levels. Future research is needed to evaluate performance in relevant subgroups, assess responsiveness, and identify clinically meaningful changes.
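The internal-consistency statistic used above, Cronbach's alpha, is straightforward to compute from item-level scores. A minimal sketch (the item scores in the example are hypothetical):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: list of item score lists, one list per item, aligned by
    respondent. Uses the sample variance (n - 1 denominator), as is
    conventional.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Two perfectly correlated hypothetical items over three respondents:
alpha = cronbach_alpha([[1, 2, 3], [2, 4, 6]])
```

Alphas in the .906 to .991 range reported above indicate very high internal consistency, approaching item redundancy at the top end.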
Are maximizers really unhappy? The measurement of maximizing tendency
Dalia L. Diab
2008-06-01
Recent research suggesting that people who maximize are less happy than those who satisfice has received considerable fanfare. The current study investigates whether this conclusion reflects the construct itself or rather how it is measured. We developed an alternative measure of maximizing tendency that is theory-based, has good psychometric properties, and predicts behavioral outcomes. In contrast to the existing maximization measure, our new measure did not correlate with life (dis)satisfaction, nor with most maladaptive personality and decision-making traits. We conclude that the interpretation of maximizers as unhappy may be due to poor measurement of the construct. We present a more reliable and valid measure for future researchers to use.
Reliable Refuge: Two Sky Island Scorpion Species Select Larger, Thermally Stable Retreat Sites.
Becker, Jamie E; Brown, Christopher A
2016-01-01
Sky island scorpions shelter under rocks and other surface debris, but, as with other scorpions, it is unclear whether these species select retreat sites randomly. Furthermore, little is known about the thermal preferences of scorpions, and no research has been done to identify whether reproductive condition might influence retreat site selection. The objectives were to (1) identify physical or thermal characteristics of retreat sites occupied by two sky island scorpions (Vaejovis cashi Graham 2007 and V. electrum Hughes 2011) versus those not occupied; (2) determine whether retreat site selection differs between the two study species; and (3) identify whether thermal selection differs between species and between gravid and non-gravid females of the same species. Within each scorpion's habitat, the maximum dimensions of rocks along a transect line were measured and compared to those of occupied rocks to determine whether retreat site selection occurred randomly. Temperature loggers were placed under a subset of occupied and unoccupied rocks for 48 hours to compare the thermal characteristics of these rocks. Thermal gradient trials were conducted before parturition and after dispersal of young in order to identify whether gravidity influences thermal preference. Vaejovis cashi and V. electrum both selected larger retreat sites that had more stable thermal profiles. Neither species appeared to have thermal preferences influenced by reproductive condition. However, while thermal selection did not differ among non-gravid individuals, gravid V. electrum selected warmer temperatures than its gravid congener. Sky island scorpions appear to select large retreat sites to maintain thermal stability, although biotic factors (e.g., competition) could also be involved in this choice. Future studies should focus on identifying the various biotic or abiotic factors that could influence retreat site selection in scorpions, as well as determining whether reproductive condition affects thermal preference.
Finlayson, Heather C; Townson, Andrea F
2011-04-01
The development of a process to select the best residents for training programs is challenging. There is a paucity of literature to support the implementation of an evidence-based approach or even best practice for program directors and selection committees. Although assessment of traditional academic markers such as clerkship grades and licensing examination scores can be helpful, these measures typically fail to capture performance in the noncognitive domains of medicine. In the specialty of physical medicine and rehabilitation, physician competencies such as communication, health advocacy, and managerial and collaborative skills are of particular importance, but these are often difficult to evaluate in admission interviews. Recent research on admission processes for medical schools has demonstrated reliability and validity of the "multiple mini-interview." The objective of our project was to develop and evaluate the multiple mini-interview for a physical medicine and rehabilitation residency training program, with a focus on assessment of the noncognitive physician competencies. We found that the process was feasible, time efficient, and cost-efficient and that there was good interrater reliability. The multiple mini-interview may be applied to other physical medicine and rehabilitation residency programs. Further research is needed to confirm reliability and determine validity.
Sensitivity Analysis on the Reliability of an Offshore Winch Regarding Selected Gearbox Parameters
Lothar Wöll
2017-04-01
To match the high expectations and demands of customers for long-lasting machines, the development of reliable products is crucial. Furthermore, for reasons of competitiveness, it is necessary to know the future product lifetime as accurately as possible in order to avoid over-dimensioning. Additionally, a more detailed system understanding enables the designer to influence the life expectancy of the product without performing an extensive number of expensive and time-consuming tests. In the early development stages of new equipment, only very basic information about the future system design, like the ratio or the system structure, is available. Nevertheless, a reliable lifetime prediction of the system components, and subsequently of the system itself, is necessary to evaluate possible design alternatives and to identify critical components beforehand. Lifetime predictions, however, require many parameters, which are often not known in these early stages. Therefore, this paper performs a sensitivity analysis on the drivetrain of an offshore winch with active heave compensation for two typical load cases. The influences of the parameters gear center distance and ambient temperature are investigated by varying the parameters within typical ranges and evaluating the quantitative effect on the lifetime.
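The parameter-variation procedure described above is a one-at-a-time sensitivity analysis: vary each parameter over its typical range with the others held at baseline, and record the spread in predicted lifetime. A generic sketch; the lifetime model, parameter names, and ranges below are hypothetical stand-ins, not the paper's drivetrain model.

```python
def one_at_a_time_sensitivity(model, baseline, ranges):
    """One-at-a-time sensitivity sketch.

    model: function taking a dict of parameters and returning an output
    (e.g., predicted lifetime in hours).
    baseline: dict of nominal parameter values.
    ranges: dict mapping parameter name -> (low, high) typical range.
    Returns, per parameter, the spread (max - min) of the model output
    as that parameter is varied with the others held at baseline.
    """
    effects = {}
    for name, (lo, hi) in ranges.items():
        outs = []
        for value in (lo, baseline[name], hi):
            params = dict(baseline, **{name: value})
            outs.append(model(params))
        effects[name] = max(outs) - min(outs)
    return effects

# Hypothetical linear lifetime model in temperature and center distance.
lifetime = lambda p: 1000.0 - 5.0 * p["temp"] + 0.1 * p["distance"]
effects = one_at_a_time_sensitivity(
    lifetime,
    {"temp": 20.0, "distance": 100.0},
    {"temp": (0.0, 40.0), "distance": (90.0, 110.0)},
)
```

Comparing the spreads identifies which parameter the predicted lifetime is most sensitive to within its typical range.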
Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M
2014-12-01
Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered for both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between the LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of the LHK and SJT were poor, while the ICCs of the PBDI and SIM showed acceptable levels of reliability. Findings on the content validity and reliability of these new instruments are promising for realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.
McGugin, Rankin Williams; Gatenby, J Christopher; Gore, John C; Gauthier, Isabel
2012-10-16
The fusiform face area (FFA) is a region of human cortex that responds selectively to faces, but whether it supports a more general function relevant for perceptual expertise is debated. Although both faces and objects of expertise engage many brain areas, the FFA remains the focus of the strongest modular claims and the clearest predictions about expertise. Functional MRI studies at standard resolution (SR-fMRI) have found responses in the FFA for nonface objects of expertise, but high-resolution fMRI (HR-fMRI) in the FFA [Grill-Spector K, et al. (2006) Nat Neurosci 9:1177-1185] and neurophysiology in face patches in the monkey brain [Tsao DY, et al. (2006) Science 311:670-674] reveal no reliable selectivity for objects. It is thus possible that FFA responses to objects with SR-fMRI are a result of spatial blurring of responses from nonface-selective areas, potentially driven by attention to objects of expertise. Using HR-fMRI in two experiments, we provide evidence of reliable responses to cars in the FFA that correlate with behavioral car expertise. Effects of expertise in the FFA for nonface objects cannot be attributed to spatial blurring beyond the scale at which modular claims have been made, and within the lateral fusiform gyrus, they are restricted to a small area (200 mm² on the right and 50 mm² on the left) centered on the peak of face selectivity. Experience with a category may be sufficient to explain the spatially clustered face selectivity observed in this region.
Selection of reliable reference genes for gene expression studies in peach using real-time PCR
Zhou Jun
2009-07-01
Full Text Available Abstract Background RT-qPCR is a preferred method for rapid and reliable quantification of gene expression. Appropriate application of RT-qPCR in such studies requires the use of reference gene(s) as an internal control to normalize mRNA levels between different samples for an exact comparison of gene expression levels. However, recent studies have shown that no single reference gene is universal for all experiments. Thus, the identification of high-quality reference gene(s) is of paramount importance for the interpretation of data generated by RT-qPCR. Only a few studies on reference genes have been done in plants and none in peach (Prunus persica L. Batsch). Therefore, the present study was conducted to identify suitable reference gene(s) for normalization of gene expression in peach. Results In this work, eleven reference genes were investigated in different peach samples using RT-qPCR with SYBR green. These genes are: actin 2/7 (ACT), cyclophilin (CYP2), RNA polymerase II (RP II), phospholipase A2 (PLA2), ribosomal protein L13 (RPL13), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), 18S ribosomal RNA (18S rRNA), tubulin beta (TUB), tubulin alpha (TUA), translation elongation factor 2 (TEF2) and ubiquitin 10 (UBQ10). All eleven reference genes displayed a wide range of Cq values across samples, indicating that they were expressed variably. The stability of these genes, except for RPL13, was determined by three different descriptive statistics, geNorm, NormFinder and BestKeeper, which produced highly comparable results. Conclusion Our study demonstrates that expression stability varied greatly between the genes studied in peach. Based on the results from the geNorm, NormFinder and BestKeeper analyses, for all the sample pools analyzed, TEF2, UBQ10 and RP II were found to be the most suitable reference genes with a very high statistical reliability, and TEF2 and RP II for the other sample series, while 18S rRNA, RPL13 and PLA2 were unsuitable as internal controls.
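The geNorm analysis used in studies like this one ranks reference genes by a pairwise stability measure M: for each gene, the average standard deviation of the log-ratios of its expression against every other candidate across samples (lower M = more stable). A minimal sketch of that measure, with illustrative data and a simplified reading of the published algorithm, not the authors' code:

```python
import math

def genorm_stability(expr):
    """geNorm-style stability: expr maps gene -> list of relative
    expression values (one per sample). Returns gene -> M value,
    where M is the mean, over all other genes, of the standard
    deviation of the pairwise log2 expression ratios across samples.
    Lower M means more stable expression."""
    genes = list(expr)
    n = len(expr[genes[0]])

    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

    M = {}
    for j in genes:
        vs = []
        for k in genes:
            if k == j:
                continue
            ratios = [math.log2(expr[j][i] / expr[k][i]) for i in range(n)]
            vs.append(sd(ratios))          # pairwise variation V_jk
        M[j] = sum(vs) / len(vs)
    return M
```

Two genes whose expression rises and falls in proportion have constant log-ratios and hence contribute zero pairwise variation, which is exactly why co-regulated pairs must be screened before trusting geNorm rankings.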
Holladay, A. M.
1978-01-01
Guidelines are given for the selection and application of three types of tantalum electrolytic capacitors in current use in the design of electrical and electronic circuits for space flight missions. In addition, the guidelines supplement requirements of existing military specifications used in the procurement of capacitors. A need exists for these guidelines to assist designers in preventing some of the recurring, serious problems experienced with tantalum electrolytic capacitors in the recent past. The three types of capacitors covered by these guidelines are: solid, wet foil, and tantalum-cased wet slug.
Wang, Zhaohai; Wang, Ya; Yang, Jing; Hu, Keke; An, Baoguang; Deng, Xiaolong; Li, Yangsheng
2016-07-01
Stable and uniform expression of reference genes across samples plays a key role in accurate normalization of gene expression by reverse-transcription quantitative polymerase chain reaction (RT-qPCR). For rice studies, there is still a lack of validation and recommendation of appropriate reference genes with high stability for specific experimental conditions. Eleven candidate reference genes with potentially high stability were evaluated by geNorm and NormFinder for their expression stability in 22 different experimental conditions. The best combinations of multiple reference genes were recommended for each experimental condition, and the overall stability of the reference genes was also evaluated. Reference genes were more variable, and thus needed to be critically selected, in experimental groups involving tissues, heat, 6-benzylamino purine, and drought, but they were comparatively stable under cold, wound, and ultraviolet-B stresses. Triosephosphate isomerase (TI), profilin-2 (Profilin-2), ubiquitin-conjugating enzyme E2 (UBC), endothelial differentiation factor (Edf), and ADP-ribosylation factor (ARF) were stable in most of our experimental conditions. No universal reference gene showed good stability in all experimental conditions. To obtain accurate expression results, a suitable combination of multiple reference genes for a specific experimental condition is the better choice. This study provides an application guideline for selecting stable reference genes for rice gene expression studies.
Profit maximization mitigates competition
Dierker, Egbert; Grodal, Birgit
1996-01-01
We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution of profits among consumers fully into account, and partial equilibrium analysis suffices.
王居平
2003-01-01
It is very important to determine the weights of multiple indexes in comprehensive evaluation. Selecting subscriptions to scientific and technical periodicals is a comprehensive evaluation problem. Based on maximizing the deviations, this paper proposes a model to determine the weights of multiple indexes in subscribing to periodicals. This method makes full use of the information supplied by subjective and objective weighting methods, and the evaluation result is satisfactory.
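The maximizing-deviations idea can be sketched as follows: an index on which the alternatives differ widely carries more discriminating information and so receives a larger weight, while an index on which all alternatives score the same carries none. A minimal illustration with made-up scores (this is the core weighting step only, not the paper's full model, which also blends in subjective weights):

```python
def max_deviation_weights(matrix):
    """matrix: list of alternatives, each a list of normalized index
    scores. Each index's weight is proportional to the total pairwise
    deviation of the alternatives on that index, so indexes that
    discriminate more between alternatives get larger weights."""
    m = len(matrix[0])
    dev = []
    for j in range(m):
        col = [row[j] for row in matrix]
        # each unordered pair is counted twice; proportions are unaffected
        dev.append(sum(abs(a - b) for a in col for b in col))
    total = sum(dev)
    return [d / total for d in dev]
```

An index with identical scores for every alternative receives weight zero, matching the intuition that it cannot help rank the periodicals.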
Maximally incompatible quantum observables
Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)
2014-05-01
The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
Ellegaard, K; Torp-Pedersen, S; Lund, Hans;
2008-01-01
PURPOSE: The amount of colour Doppler activity in the inflamed synovium is used to quantify inflammatory activity. The measurements may vary due to image selection, quantification method, and point in the cardiac cycle. This study investigated the test-retest reliability of ultrasound colour Doppler measurements when images were selected by the anatomical position only or by the anatomical position with maximum colour Doppler activity. Subsequently, the amount of colour Doppler was measured in an area defined by either the synovial tissue or by specific anatomical structures surrounding the synovial tissue. RESULTS: The best test-retest reliability was found if the images were selected by anatomical position only and the quantification was done in an area defined by the synovial tissue (ICC [2.1] = 0.48 and SW = 0.049). CONCLUSION: The study showed that colour Doppler measurements are reliable if the images for analysis...
Kempowsky-Hamon, Tatiana; Valle, Carine; Lacroix-Triki, Magali; Hedjazi, Lyamine; Trouilh, Lidwine; Lamarre, Sophie; Labourdette, Delphine; Roger, Laurence; Mhamdi, Loubna; Dalenc, Florence; Filleron, Thomas; Favre, Gilles; François, Jean-Marie; Le Lann, Marie-Véronique; Anton-Leberre, Véronique
2015-02-07
Personalized medicine has become a priority in breast cancer patient management. In addition to the routinely used clinicopathological characteristics, clinicians will have to face an increasing amount of data derived from tumor molecular profiling. The aims of this study were to develop a new gene selection method based on a fuzzy logic selection and classification algorithm, and to validate the gene signatures obtained on breast cancer patient cohorts. We analyzed data from four published gene expression datasets for breast carcinomas. We identified the best discriminating genes by comparing molecular expression profiles between histologic grade 1 and 3 tumors for each of the training datasets. The most pertinent probes were selected and used to define fuzzy molecular grade 1-like (good prognosis) and fuzzy molecular grade 3-like (poor prognosis) profiles. To evaluate the prognostic performance of the fuzzy grade signatures in breast cancer tumors, a Kaplan-Meier analysis was conducted to compare the relapse-free survival deduced from histologic grade and fuzzy molecular grade classification. We applied the fuzzy logic selection on breast cancer databases and obtained four new gene signatures. Analysis in the training public sets showed good performance of these gene signatures for grade (sensitivity from 90% to 95%, specificity 67% to 93%). To validate these gene signatures, we designed probes on custom microarrays and tested them on 150 invasive breast carcinomas. Good performance was obtained with an error rate of less than 10%. For one gene signature, among 74 histologic grade 3 and 18 grade 1 tumors, 88 cases (96%) were correctly assigned. Interestingly histologic grade 2 tumors (n = 58) were split in these two molecular grade categories. We confirmed the use of fuzzy logic selection as a new tool to identify gene signatures with good reliability and increased classification power. This method based on artificial intelligence algorithms was successfully
黄嘉健; 王昌照; 郑文杰; 汪隆君
2015-01-01
In response to the subjective selection of maintenance objects and to reliability assessment based on historical averages, a reliability-centered maintenance selection model for distribution networks based on condition monitoring is put forward, and a solving algorithm is given. According to the quantitative relationship between the condition-monitoring variables of equipment and its failure rate, a multi-objective reliability-centered maintenance selection model is established, which maximizes the changes in system average interruption frequency, system average interruption duration and expected energy not supplied, subject to budget and human-resource constraints; the normalized normal constraint method is employed to solve the model, quickly and precisely obtaining a Pareto frontier on which the optimal solutions are uniformly distributed; a fuzzy selection strategy is then used to determine the best compromise solution. Numerical results for the RBTS-BUS6 system validate the effectiveness of the proposed reliability-centered maintenance selection model and its solving algorithm.
Chunsheng Liu
Full Text Available It is increasingly evident that it is difficult to monitor chemical exposure through biomarkers, as almost all the biomarkers so far proposed are not specific for any individual chemical. In this proof-of-concept study, adult male zebrafish (Danio rerio) were exposed to 5 or 25 µg/L 17β-estradiol (E2), 100 µg/L lindane, 5 nM 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) or 15 mg/L arsenic for 96 h, and the expression profiles of 59 genes involved in 7 pathways plus 2 well characterized biomarker genes, vtg1 (vitellogenin1) and cyp1a1 (cytochrome P450 1A1), were examined. A relative distance (RD) computational model was developed to screen favorable genes and generate appropriate gene sets for the differentiation of the chemicals/concentrations selected. Our results demonstrated that the known biomarker genes were not always good candidates for differentiating a pair of chemicals/concentrations, and other genes had higher potential in some cases. Furthermore, differentiation of the 5 chemicals/concentrations examined was attainable using expression data of various gene sets, and the best combination was the set consisting of 50 genes; however, as few as two genes (e.g. vtg1 and hspa5 [heat shock protein 5]) were sufficient to differentiate the five chemical/concentration groups in the present test. These observations suggest that multi-parameter arrays should be more reliable for biomonitoring of chemical exposure than traditional biomarkers, and the RD computational model provides an effective tool for the selection of parameters and generation of parameter sets.
Liu, Chunsheng; Xu, Hongyan; Lam, Siew Hong; Gong, Zhiyuan
2013-01-01
It is increasingly evident that it is difficult to monitor chemical exposure through biomarkers, as almost all the biomarkers so far proposed are not specific for any individual chemical. In this proof-of-concept study, adult male zebrafish (Danio rerio) were exposed to 5 or 25 µg/L 17β-estradiol (E2), 100 µg/L lindane, 5 nM 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) or 15 mg/L arsenic for 96 h, and the expression profiles of 59 genes involved in 7 pathways plus 2 well characterized biomarker genes, vtg1 (vitellogenin1) and cyp1a1 (cytochrome P450 1A1), were examined. A relative distance (RD) computational model was developed to screen favorable genes and generate appropriate gene sets for the differentiation of the chemicals/concentrations selected. Our results demonstrated that the known biomarker genes were not always good candidates for differentiating a pair of chemicals/concentrations, and other genes had higher potential in some cases. Furthermore, differentiation of the 5 chemicals/concentrations examined was attainable using expression data of various gene sets, and the best combination was the set consisting of 50 genes; however, as few as two genes (e.g. vtg1 and hspa5 [heat shock protein 5]) were sufficient to differentiate the five chemical/concentration groups in the present test. These observations suggest that multi-parameter arrays should be more reliable for biomonitoring of chemical exposure than traditional biomarkers, and the RD computational model provides an effective tool for the selection of parameters and generation of parameter sets.
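The abstract does not spell out the relative-distance computation, but the underlying classification step, assigning a sample's multi-gene expression profile to the nearest known exposure profile, can be sketched with a simple nearest-centroid rule (Euclidean distance; condition names and expression vectors below are illustrative, and this is a stand-in for the paper's RD model, not its exact definition):

```python
import math

def classify_by_distance(profile, references):
    """Assign an expression profile (list of per-gene values) to the
    closest reference profile by Euclidean distance. `references`
    maps condition name -> mean expression vector for that condition."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(references, key=lambda c: dist(profile, references[c]))
```

With more genes in the profile, conditions that overlap on any single biomarker can still separate in the joint space, which is the abstract's argument for multi-parameter arrays over single biomarkers.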
Parker, Andrew M.; Wandi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...
Ellegaard, K.; Torp-Pedersen, S.; Lund, H.;
2008-01-01
Purpose: The amount of colour Doppler activity in the inflamed synovium is used to quantify inflammatory activity. The measurements may vary due to image selection, quantification method, and point in the cardiac cycle. This study investigated the test-retest reliability of ultrasound colour Doppler ...
Florek-Luszczki, Magdalena; Zagaja, Miroslaw; Luszczki, Jarogniew J
2015-08-01
The influence of arachidonyl-2'-chloroethylamide (ACEA, a selective cannabinoid CB1 receptor agonist) on the anticonvulsant potency and acute adverse-effect potential of clobazam, lacosamide, and pregabalin was determined in the maximal electroshock-induced seizure model and chimney test in mice. ACEA (2.5 mg/kg, i.p.) significantly enhanced the anticonvulsant potency of pregabalin in the mouse maximal electroshock-induced seizure model by decreasing the median effective dose (ED50) of pregabalin from 125.39 to 78.06 mg/kg (P < 0.05), but did not affect the anticonvulsant potency of clobazam and lacosamide in the mouse maximal electroshock-induced seizure model. On the other hand, ACEA (2.5 mg/kg) did not affect the acute adverse effects of clobazam, lacosamide or pregabalin, and the median toxic doses (TD50) for the studied anti-epileptic drugs in combination with ACEA did not differ from the TD50 values determined for the drugs administered alone in the chimney test. In conclusion, ACEA improves the pharmacological profile of pregabalin, considering both the anticonvulsant and the acute adverse effects of the drug in this preclinical animal study. The combination of pregabalin with ACEA could be of pivotal importance for patients with epilepsy as a potentially advantageous combination, if the results from this study translate into clinical settings.
Ming Yi WANG; Guo ZHAO
2005-01-01
A right R-module E over a ring R is said to be maximally injective in case for any maximal right ideal m of R, every R-homomorphism f : m → E can be extended to an R-homomorphism f' : R → E. In this paper, we first construct an example to show that maximal injectivity is a proper generalization of injectivity. Then we prove that any right R-module over a left perfect ring R is maximally injective if and only if it is injective. We also give a partial affirmative answer to Faith's conjecture by further investigating the property of maximally injective rings. Finally, we get an approximation to Faith's conjecture, which asserts that every injective right R-module over any left perfect right self-injective ring R is the injective hull of a projective submodule.
Andrew M. Parker
2007-12-01
Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.
Brüstle, Thomas; Pérotin, Matthieu
2012-01-01
Maximal green sequences are particular sequences of quiver mutations which were introduced by Keller in the context of quantum dilogarithm identities and independently by Cecotti-Cordova-Vafa in the context of supersymmetric gauge theory. Our aim is to initiate a systematic study of these sequences from a combinatorial point of view. Interpreting maximal green sequences as paths in various natural posets arising in representation theory, we prove the finiteness of the number of maximal green sequences for cluster finite quivers, affine quivers and acyclic quivers with at most three vertices. We also give results concerning the possible numbers and lengths of these maximal green sequences. Finally we describe an algorithm for computing maximal green sequences for arbitrary valued quivers which we used to obtain numerous explicit examples that we present.
Korytar, Richard; Pruneda, Miguel; Ordejon, Pablo; Lorente, Nicolas [Centre d' Investigacio en Nanociencia i Nanotecnologia (CSIC-ICN), Campus de la UAB, E-08193 Bellaterra (Spain); Junquera, Javier, E-mail: rkorytar@cin2.e [Departamento de Ciencias de la Tierra y Fisica de la Materia Condensada, Universidad de Cantabria, E-39005 Santander (Spain)
2010-09-29
We have adapted the maximally localized Wannier function approach of Souza et al (2002 Phys. Rev. B 65 035109) to the density functional theory based SIESTA code (Soler et al 2002 J. Phys.: Condens. Matter 14 2745) and applied it to the study of Co substitutional impurities in bulk copper as well as to the Cu(111) surface. In the Co impurity case, we have reduced the problem to the Co d-electrons and the Cu sp-band, permitting us to obtain an Anderson-like Hamiltonian from well defined density functional parameters in a fully orthonormal basis set. In order to test the quality of the Wannier approach to surfaces, we have studied the electronic structure of the Cu(111) surface by again transforming the density functional problem into the Wannier representation. An excellent description of the Shockley surface state is attained, permitting us to be confident in the application of this method to future studies of magnetic adsorbates in the presence of an extended surface state.
Logan, Jeffrey S. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Paranhos, Elizabeth [Energy Innovation Partners, Seattle, WA (United States); Kozak, Tracy G. [Energy Innovation Partners, Seattle, WA (United States); Boyd, William [Univ. of Colorado, Boulder, CO (United States)
2017-07-31
This study focuses on onshore natural gas operations and examines the extent to which oil and gas firms have embraced certain organizational characteristics that lead to 'high reliability' - understood here as strong safety and reliability records over extended periods of operation. The key questions that motivated this study include whether onshore oil and gas firms engaged in exploration and production (E&P) and midstream (i.e., natural gas transmission and storage) are implementing practices characteristic of high reliability organizations (HROs) and the extent to which any such practices are being driven by industry innovations and standards and/or regulatory requirements.
Inapproximability of maximal strip recovery
Jiang, Minghui
2009-01-01
In comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks, that is, segments of homologous chromosomes. For the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. Maximal Strip Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities. Given d genomic maps as sequences of gene markers, the objective of MSR-d is to find d subsequences, one subsequence of each genomic map, such that the total length of syntenic blocks in these subsequences is maximized. For any constant d ≥ 2, a polynomial-time 2d-approximation for MSR-d was previously known. In this paper, we show that for any d ≥ 2, MSR-d is APX-hard, even for the most basic version of the problem in which all gene markers are distinct and appear in positive orientation in each genomic map. Moreover, we provi...
Selection of Equipment Software Reliability Metrics Based on GQM%基于GQM的装备软件可靠性参数选取方法
韩坤; 吴纬; 陈守华; 帅勇
2015-01-01
Aiming at the problems that equipment software has no quantitative reliability requirements and that its development process lacks oversight, a method for selecting equipment software reliability metrics based on GQM (Goal-Question-Metric) is proposed. First, a universal set of software reliability metrics and a set of metrics specific to equipment software reliability are established. Then, following the GQM framework, goals of equipment software reliability measurement are set from different points of view, and a series of questions that must be answered to achieve each goal is listed. Software reliability metrics suitable for a specific situation are then selected by answering these questions, and a system of equipment software reliability metrics is finally established.
Varanasi, Jhansi L; Sinha, Pallavi; Das, Debabrata
2017-05-01
To selectively enrich an electrogenic mixed consortium capable of utilizing dark fermentative effluents as substrates in microbial fuel cells, and to further enhance the power outputs by optimization of influential anodic operational parameters. A maximum power density of 1.4 W/m³ was obtained by an enriched mixed electrogenic consortium in microbial fuel cells using acetate as substrate. This was further increased to 5.43 W/m³ by optimization of influential anodic parameters. By utilizing dark fermentative effluents as substrates, the maximum power densities ranged from 5.2 to 6.2 W/m³, with an average COD removal efficiency of 75% and a coulombic efficiency of 10.6%. A simple strategy is provided for the selective enrichment of electrogenic bacteria that can be used in microbial fuel cells for generating power from various dark fermentative effluents.
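The coulombic efficiency figure above relates the charge actually harvested as current to the charge theoretically available in the COD removed (4 mol of electrons per mol of O2-equivalent, M_O2 = 32 g/mol). A minimal sketch under the usual constant-current simplification; the function name and the numbers in the usage example are illustrative, not the paper's data:

```python
F = 96485.0  # Faraday constant, C per mol of electrons

def coulombic_efficiency(current_a, seconds, delta_cod_g_per_l, anode_vol_l):
    """Coulombic efficiency of an MFC run at (approximately) constant
    current: charge harvested divided by the charge theoretically
    available in the removed COD (4 mol e- per 32 g of O2-equivalent)."""
    harvested = current_a * seconds                          # coulombs out
    available = F * 4.0 * delta_cod_g_per_l * anode_vol_l / 32.0
    return harvested / available
```

In practice the harvested charge is the integral of the measured current over time; replacing `current_a * seconds` with that integral gives the standard formula.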
Rudiger Bubner
1998-12-01
Full Text Available Even though the theory of maxims is not at the center of Kant's ethics, it is the unavoidable basis of the formulation of the categorical imperative. Kant leans on the transmitted representations of modern moral theory. During the last decades, the notion of maxims has received more attention, due to the philosophy of language's debates on rules and to action theory's interest in this notion. I hereby briefly expound my views in these discussions.
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-07
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.
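The additive case the abstract refers to has a familiar concrete form: an action's inclusive fitness payoff is its effect on the actor's own fitness plus the relatedness-weighted effects it causes in others, and an "as if" maximizer picks the action with the largest such payoff. A toy illustration in that spirit (Hamilton's framework; this is not the paper's axiomatic derivation, and the action names and numbers are invented):

```python
def inclusive_fitness(effect_on_self, effects_on_others, relatednesses):
    """Inclusive fitness payoff of an action under additive phenotypic
    effects: the actor's own fitness change plus the relatedness-weighted
    fitness changes the action causes in others."""
    return effect_on_self + sum(
        r * b for r, b in zip(relatednesses, effects_on_others))

def best_action(actions):
    """The action an 'as if' inclusive-fitness maximizer would choose.
    actions: name -> (effect_on_self, effects_on_others, relatednesses)."""
    return max(actions, key=lambda a: inclusive_fitness(*actions[a]))
```

For example, paying a cost of 1 to give a full sibling (relatedness 0.5) a benefit of 3 yields an inclusive fitness payoff of -1 + 0.5 x 3 = 0.5, so helping beats doing nothing, which is Hamilton's rule rb > c in miniature.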
Fosgerau, Mogens; Karlström, Anders
2010-01-01
We derive the value of reliability in the scheduling of an activity of random duration, such as travel under congested conditions. Using a simple formulation of scheduling utility, we show that the maximal expected utility is linear in the mean and standard deviation of trip duration, regardless...
杨艳
2014-01-01
In this paper, taking the tourism supply chain composed of a theme park, travel agencies and hotels as the subject of study, we incorporate competition between hotels into the selection of development strategies for the tourism supply chain and, with maximization of the total profit of the whole supply chain as the objective, construct a mathematical model for quantitative analysis.
Vermeulen, M.I.; Tromp, F.; Zuithoff, N.P.; Pieters, R.H.; Damoiseaux, R.A.; Kuyvenhoven, M.M.
2014-01-01
Abstract Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training.
Liu Yang
2010-08-01
Full Text Available Abstract Background Children's health and health behaviour are essential for their development, and it is important to obtain abundant and accurate information to understand young people's health and health behaviour. The Health Behaviour in School-aged Children (HBSC) study is among the first large-scale international surveys on adolescent health through self-report questionnaires. So far, more than 40 countries in Europe and North America have been involved in the HBSC study. The purpose of this study is to assess the test-retest reliability of selected items in the Chinese version of the HBSC survey questionnaire in a sample of adolescents in Beijing, China. Methods A sample of 95 male and female students aged 11 or 15 years old participated in a test and retest with a three-week interval. Student identity numbers of respondents were utilized to permit matching of test-retest questionnaires. 23 items concerning physical activity, sedentary behaviour, sleep and substance use were evaluated using the percentage of response shifts and the single-measure Intraclass Correlation Coefficient (ICC) with 95% confidence interval (CI) for all respondents and stratified by gender and age. Items on substance use were only evaluated for school children aged 15 years old. Results The percentage of no response shift between test and retest varied from 32% for the item on computer use at weekends to 92% for the three items on smoking. Of all the 23 items evaluated, 6 items (26%) showed moderate reliability, 12 items (52%) displayed substantial reliability and 4 items (17%) indicated almost perfect reliability. No gender or age-group difference in test-retest reliability was found except for a few items on sedentary behaviour. Conclusions The overall findings of this study suggest that most selected indicators in the HBSC survey questionnaire have satisfactory test-retest reliability for the students in Beijing. Further test-retest studies in a large...
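The single-measure intraclass correlation used in test-retest studies like this one (and reported as ICC [2.1] in the Doppler study above) comes from a two-way ANOVA decomposition of the subjects-by-occasions score table. A minimal sketch of the Shrout and Fleiss ICC(2,1) formula, without the 95% CI computation; the toy data in the test are illustrative:

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measure (Shrout & Fleiss). data: n subjects x k measurements."""
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    row_m = [sum(row) / k for row in data]                       # per subject
    col_m = [sum(data[i][j] for i in range(n)) / n for j in range(k)]  # per occasion
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_m)
    ss_cols = n * sum((m - grand) ** 2 for m in col_m)
    msr = ss_rows / (n - 1)                      # between-subjects mean square
    msc = ss_cols / (k - 1)                      # between-occasions mean square
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))   # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfect agreement across the test and retest columns yields an ICC of 1; systematic shifts between occasions enter through the MSC term and pull the coefficient down, which is why ICC(2,1) measures absolute agreement rather than mere consistency.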
Janusz Brzozowski
2014-05-01
Full Text Available The atoms of a regular language are non-empty intersections of complemented and uncomplemented quotients of the language. Tight upper bounds on the number of atoms of a language and on the quotient complexities of atoms are known. We introduce a new class of regular languages, called the maximally atomic languages, consisting of all languages meeting these bounds. We prove the following result: If L is a regular language of quotient complexity n and G is the subgroup of permutations in the transition semigroup T of the minimal DFA of L, then L is maximally atomic if and only if G is transitive on k-subsets of 1,...,n for 0 <= k <= n and T contains a transformation of rank n-1.
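The group-theoretic condition in the theorem above, that G be transitive on k-subsets of {1,...,n} for every k, is directly checkable for small cases by computing the orbit of one k-subset under the generating permutations and comparing with the count of all k-subsets. A brute-force sketch for illustration (not tied to any particular DFA; points are numbered 0..n-1 here):

```python
from math import comb

def transitive_on_k_subsets(generators, n, k):
    """True iff the permutation group generated by `generators` (each a
    tuple g with g[i] = image of point i, points 0..n-1) is transitive
    on k-subsets: the orbit of one k-set must reach all C(n, k) of them.
    Closure under the generators alone suffices, since in a finite group
    every generator's inverse is one of its own powers."""
    start = frozenset(range(k))
    orbit, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for g in generators:
            t = frozenset(g[i] for i in s)
            if t not in orbit:
                orbit.add(t)
                frontier.append(t)
    return len(orbit) == comb(n, k)
```

For example, the full symmetric group on three points is transitive on 2-subsets, while the cyclic group generated by a single 4-cycle reaches only 4 of the 6 possible 2-subsets of a 4-point set.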
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline with...... to the saline/oil emulsion. Placing of the challenge patches affected the response, as simultaneous chlorocresol challenge on the flank located 2 cm closer to the abdomen than the usual challenge site gave decreased reactions....
Maximization Paradox: Result of Believing in an Objective Best.
Luan, Mo; Li, Hong
2017-05-01
The results from four studies provide reliable evidence of how beliefs in an objective best influence the decision process and subjective feelings. A belief in an objective best serves as the fundamental mechanism connecting the concept of maximizing and the maximization paradox (i.e., expending great effort but feeling bad when making decisions, Study 1), and randomly chosen decision makers operate similar to maximizers once they are manipulated to believe that the best is objective (Studies 2A, 2B, and 3). In addition, the effect of a belief in an objective best on the maximization paradox is moderated by the presence of a dominant option (Study 3). The findings of this research contribute to the maximization literature by demonstrating that believing in an objective best leads to the maximization paradox. The maximization paradox is indeed the result of believing in an objective best.
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid: a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Then the larger values of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the TSP (Traveling Salesman Problem).
Vandesompele Jo
2008-01-01
Background In the nematode Caenorhabditis elegans the conserved Ins/IGF-1 signaling pathway regulates many biological processes including life span, stress response, dauer diapause and metabolism. Detection of differentially expressed genes may contribute to a better understanding of the mechanism by which the Ins/IGF-1 signaling pathway regulates these processes. Appropriate normalization is an essential prerequisite for obtaining accurate and reproducible quantification of gene expression levels. The aim of this study was to establish a reliable set of reference genes for gene expression analysis in C. elegans. Results Real-time quantitative PCR was used to evaluate the expression stability of 12 candidate reference genes (act-1, ama-1, cdc-42, csq-1, eif-3.C, mdh-1, gpd-2, pmp-3, tba-1, Y45F10D.4, rgs-6 and unc-16) in wild-type, three Ins/IGF-1 pathway mutants, dauers and L3 stage larvae. After geNorm analysis, cdc-42, pmp-3 and Y45F10D.4 showed the most stable expression pattern and were used to normalize 5 sod expression levels. Significant differences in mRNA levels were observed for sod-1 and sod-3 in daf-2 relative to wild-type animals, whereas in dauers sod-1, sod-3, sod-4 and sod-5 are differentially expressed relative to third stage larvae. Conclusion Our findings emphasize the importance of accurate normalization using stably expressed reference genes. The methodology used in this study is generally applicable to reliably quantify gene expression levels in the nematode C. elegans using quantitative PCR.
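The geNorm analysis cited above ranks reference genes by a pairwise-variation stability measure M: for each gene, the average standard deviation of the log-ratios of its expression against every other candidate across samples. A minimal sketch of that idea (a simplified reading of geNorm's M value, not the published implementation; gene names and expression values here are illustrative):

```python
import math

def genorm_m_values(expr):
    """expr: dict gene -> list of relative expression values (one per sample).
    Returns dict gene -> M value (lower = more stable), following the
    pairwise-variation idea behind geNorm."""
    genes = list(expr)
    n_samples = len(next(iter(expr.values())))

    def stdev(xs):
        mu = sum(xs) / len(xs)
        return math.sqrt(sum((x - mu) ** 2 for x in xs) / (len(xs) - 1))

    m = {}
    for j in genes:
        pairwise = []
        for k in genes:
            if k == j:
                continue
            # V_jk: spread of the log2 expression ratio across samples
            ratios = [math.log2(expr[j][s] / expr[k][s]) for s in range(n_samples)]
            pairwise.append(stdev(ratios))
        m[j] = sum(pairwise) / len(pairwise)
    return m

# A perfectly co-regulated pair keeps a constant ratio, so its V is zero.
expr = {
    "cdc-42":   [1.0, 2.0, 4.0],
    "pmp-3":    [2.0, 4.0, 8.0],   # constant ratio with cdc-42
    "unstable": [1.0, 8.0, 0.5],
}
m = genorm_m_values(expr)
assert m["cdc-42"] < m["unstable"]
```

Genes with the lowest M would be retained for normalization, as cdc-42, pmp-3 and Y45F10D.4 were in the study.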
Michael J. Sepaniak
2008-10-08
Innovative technologies for sensitive and selective chemical monitoring of hazardous wastes present in storage tanks are of continuing importance to the environment. This multifaceted research program exploits the unique characteristics of micro- and nano-fabricated cantilever-based micro-electro-mechanical systems (MEMS) and nano-electro-mechanical systems (NEMS) in chemical sensing.
Fernández-Guerra, Paula; Birkler, Rune I D; Merinero, Begoña
2014-01-01
Selected reaction monitoring (SRM) mass spectrometry can quantitatively measure proteins by specific targeting of peptide sequences, and allows the determination of multiple proteins in one single analysis. Here, we show the feasibility of simultaneous measurements of multiple proteins in mitocho..., whereas mRNA levels were almost unaltered, indicating instability of E1α and E1β monomers. Using SRM we elucidated the protein effects of mutations generating premature termination codons or misfolded proteins. SRM is a complement to transcript level measurements and a valuable tool to shed light...
Sun, Zhan-Bin; Li, Shi-Dong; Sun, Man-Hong
2015-07-01
Reference genes are important to precisely quantify gene expression by real-time PCR. In order to identify stable and reliable expressed genes in mycoparasite Clonostachys rosea in different modes of nutrition, seven commonly used housekeeping genes, 18S rRNA, actin, β-tubulin, elongation factor 1, ubiquitin, ubiquitin-conjugating enzyme and glyceraldehyde-3-phosphate dehydrogenase, from the effective biocontrol isolate C. rosea 67-1 were tested for their expression under sclerotial induction and during vegetative growth on PDA medium. Analysis by three software programs showed that differences existed among the candidates. Elongation factor 1 was most stable; the M value in geNorm, SD value in Bestkeeper and stability value in Normfinder analysis were 0.405, 0.450 and 0.442, respectively, indicating that the gene elongation factor 1 could be used to normalize gene expression in C. rosea in both vegetative growth and parasitic process. By using elongation factor 1, the expression of a serine protease gene, sep, in different conditions was assessed, which was consistent with the transcriptomic data. This research provides an effective method to quantitate expression changes of target genes in C. rosea, and will assist in further investigation of parasitism-related genes of this fungus.
Social group utility maximization
Gong, Xiaowen; Yang, Lei; Zhang, Junshan
2014-01-01
This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b
Brandes, U; Gaertler, M; Goerke, R; Hoefer, M; Nikoloski, Z; Wagner, D
2006-01-01
Several algorithms have been proposed to compute partitions of networks into communities that score high on a graph clustering index called modularity. While publications on these algorithms typically contain experimental evaluations to emphasize the plausibility of results, none of these algorithms has been shown to actually compute optimal partitions. We here settle the unknown complexity status of modularity maximization by showing that the corresponding decision version is NP-complete in the strong sense. As a consequence, any efficient, i.e. polynomial-time, algorithm is only heuristic and yields suboptimal partitions on many instances.
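The clustering index whose maximization is shown NP-complete above can itself be computed cheaply for any given partition: Newman-Girvan modularity Q is the fraction of edges inside communities minus the fraction expected under a random degree-preserving rewiring. A small sketch (the graph and partition are illustrative, not the instances from the paper):

```python
def modularity(edges, community):
    """Newman-Girvan modularity Q of a partition.
    edges: list of undirected edges (u, v); community: dict node -> label."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    # Fraction of edges falling inside a community...
    for u, v in edges:
        if community[u] == community[v]:
            q += 1.0 / m
    # ...minus the fraction expected from the degree sequence alone.
    for label in set(community.values()):
        d = sum(deg[node] for node in community if community[node] == label)
        q -= (d / (2.0 * m)) ** 2
    return q

# Two triangles joined by a bridge: the natural two-community split
# scores higher than the trivial one-community partition.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
split = {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b'}
whole = {node: 'a' for node in range(6)}
assert modularity(edges, split) > modularity(edges, whole)
```

Evaluating Q is easy; the NP-completeness result concerns finding the partition that maximizes it.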
Antczak, K; Wilczyńska, U
1980-01-01
Part II presents a statistical model devised by the authors for evaluating the results of toxicological analyses. The model includes: 1. Establishment of a reference value, based on our own measurements taken by two independent analytical methods. 2. Selection of laboratories, based on the deviation of the obtained values from the reference ones. 3. Evaluation of subsequent quality controls and of particular laboratories, using analysis of variance, Student's t-test and the differences test.
Confidence intervals for maximal reliability of probability judgments
K.Y. Lam (Kar Yin); A.J. Koning (Alex); Ph.H.B.F. Franses (Philip Hans)
2007-01-01
Subjective probabilities play an important role in marketing research, for example where individuals rate the likelihood that they will purchase a product yet to be developed. The tau-equivalent model can describe the joint behaviour of multiple test items measuring the same subjective probab...
Patrik L Ståhl
Biomarker identification is of utmost importance for the development of novel diagnostics and therapeutics. Here we make use of a translational database selection strategy, utilizing data from the Human Protein Atlas (HPA) on differentially expressed protein patterns in healthy and breast cancer tissues as a means to filter out potential biomarkers for underlying genetic causatives of the disease. DNA was isolated from ten breast cancer biopsies, and the protein coding and flanking non-coding genomic regions corresponding to the selected proteins were extracted in a multiplexed format from the samples using a single DNA sequence capture array. Deep sequencing revealed an even enrichment of the multiplexed samples and a great variation of genetic alterations in the tumors of the sampled individuals. Benefiting from the upstream filtering method, the final set of biomarker candidates could be completely verified through bidirectional Sanger sequencing, revealing a 40 percent false positive rate despite high read coverage. Of the variants encountered in translated regions, nine novel non-synonymous variations were identified and verified, two of which were present in more than one of the ten tumor samples.
Maximizing without difficulty: A modified maximizing scale and its correlates
Linda Lai
2010-01-01
This article presents several studies that replicate and extend previous research on maximizing. A modified scale for measuring individual maximizing tendency is introduced. The scale has adequate psychometric properties and reflects maximizers' aspirations for high standards and their preference for extensive alternative search, but not the decision difficulty aspect included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cogniti...
DNA solution of the maximal clique problem.
Ouyang, Q; Kaplan, P D; Liu, S; Libchaber, A
1997-10-17
The maximal clique problem has been solved by means of molecular biology techniques. A pool of DNA molecules corresponding to the total ensemble of six-vertex cliques was built, followed by a series of selection processes. The algorithm is highly parallel and has satisfactory fidelity. This work represents further evidence for the ability of DNA computing to solve NP-complete search problems.
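The DNA algorithm above encodes all candidate vertex subsets in parallel and filters out non-cliques; in software the same six-vertex search space can be enumerated exhaustively, and the exponential cost of doing so at scale is exactly what motivates both DNA computing and heuristics. A sketch (the graph is illustrative, not the instance used in the paper):

```python
from itertools import combinations

def max_clique(n, edges):
    """Exhaustive search over all vertex subsets, largest first, returning
    the first subset whose vertices are pairwise adjacent (mirroring the
    DNA pool that enumerates every candidate clique at once)."""
    adj = set(map(frozenset, edges))
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if all(frozenset(pair) in adj for pair in combinations(subset, 2)):
                return subset
    return ()

# An illustrative six-vertex graph, matching the experiment's problem size:
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5), (2, 4)]
clique = max_clique(6, edges)
assert len(clique) == 3  # the largest cliques in this graph are triangles
```

For n vertices the loop inspects up to 2^n subsets, which is why this brute force is only feasible for toy instances.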
HEMI: Hyperedge Majority Influence Maximization
Gangal, Varun; Narayanam, Ramasuri
2016-01-01
In this work, we consider the problem of influence maximization on a hypergraph. We first extend the Independent Cascade (IC) model to hypergraphs, and prove that the traditional influence maximization problem remains submodular. We then present a variant of the influence maximization problem (HEMI) where one seeks to maximize the number of hyperedges, a majority of whose nodes are influenced. We prove that HEMI is non-submodular under the diffusion model proposed.
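The HEMI objective described above, counting hyperedges in which a strict majority of nodes is influenced, can be stated compactly. A minimal sketch evaluating that objective for a fixed influenced set (the function name, hyperedges and sets are illustrative assumptions, and this evaluates the objective rather than maximizing it):

```python
def hemi_objective(hyperedges, influenced):
    """Number of hyperedges in which a strict majority of nodes is influenced."""
    influenced = set(influenced)
    return sum(1 for e in hyperedges
               if 2 * len(influenced & set(e)) > len(e))

# Three illustrative hyperedges over nodes 1..5:
hyperedges = [(1, 2, 3), (2, 3, 4, 5), (1, 5)]
assert hemi_objective(hyperedges, {2, 3, 4}) == 2  # majorities in the first two
```

Maximizing this count over seed sets is the hard part; the paper's non-submodularity result rules out the usual greedy approximation guarantee.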
Li, Jianbo; Jia, Huixia; Han, Xiaojiao; Zhang, Jin; Sun, Pei; Lu, Mengzhu; Hu, Jianjun
2016-01-01
Salix psammophila is a desert shrub willow that has extraordinary adaptation to abiotic stresses and plays an important role in maintaining local ecosystems. Moreover, S. psammophila is regarded as a promising biomass feedstock because of its high biomass yields and short rotation coppice cycle. However, until now the lack of suitable reference genes (RGs) for quantitative real-time polymerase chain reaction (qRT-PCR) has constrained the normalization of gene expression studies in S. psammophila. Here, we investigated the expression stabilities of 14 candidate RGs across tissue types and under four abiotic stress treatments: heat, cold, salt, and drought. After calculation of PCR efficiencies, three software packages, NormFinder, geNorm, and BestKeeper, were employed to systematically analyze the qRT-PCR data, and the outputs were merged by the RankAggreg software. The optimal RGs selected for gene expression analysis were EF1α (Elongation factor-1 alpha) and OTU (OTU-like cysteine protease family protein) for different tissue types, UBC (Ubiquitin-conjugating enzyme E2) and LTA4H (Leukotriene A-4 hydrolase homolog) for heat treatment, HIS (Histone superfamily protein H3) and ARF2 (ADP-ribosylation factor 2) for cold treatment, OTU and ACT7 (Actin 7) for salt treatment, and UBC and LTA4H for drought treatment. The expression of UBC, ARF2, and VHAC (V-type proton ATPase subunit C) varied the least across tissue types and under abiotic stresses. Furthermore, the relative expression profiles of one tissue-specific gene, WOX1a (WUSCHEL-related homeobox 1a), and four stress-inducible genes, Hsf-A2 (Heat shock transcription factors A2), CBF3 (C-repeat binding factor 3), HKT1 (High-Affinity K(+) Transporter 1), and GST (Glutathione S-transferase), were examined to confirm the validity of the RGs in this study. These results provide an important guideline for RG selection in gene expression characterization in S. psammophila.
Niu, Xiaoping; Qi, Jianmin; Zhang, Gaoyang; Xu, Jiantang; Tao, Aifen; Fang, Pingping; Su, Jianguang
2015-01-01
To accurately measure gene expression using quantitative reverse transcription PCR (qRT-PCR), reliable reference gene(s) are required for data normalization. Corchorus capsularis, an annual herbaceous fiber crop with predominant biodegradability and renewability, has not been investigated for the stability of reference genes with qRT-PCR. In this study, 11 candidate reference genes were selected and their expression levels were assessed using qRT-PCR. To account for the influence of experimental approach and tissue type, 22 different jute samples were selected from abiotic and biotic stress conditions as well as three different tissue types. The stability of the candidate reference genes was evaluated using geNorm, NormFinder, and BestKeeper programs, and the comprehensive rankings of gene stability were generated by aggregate analysis. For the biotic stress and NaCl stress subsets, ACT7 and RAN were suitable as stable reference genes for gene expression normalization. For the PEG stress subset, UBC, and DnaJ were sufficient for accurate normalization. For the tissues subset, four reference genes TUBβ, UBI, EF1α, and RAN were sufficient for accurate normalization. The selected genes were further validated by comparing expression profiles of WRKY15 in various samples, and two stable reference genes were recommended for accurate normalization of qRT-PCR data. Our results provide researchers with appropriate reference genes for qRT-PCR in C. capsularis, and will facilitate gene expression study under these conditions.
Huang, Huan; Baddour, Natalie; Liang, Ming
2017-03-01
Bearing signals are often contaminated by in-band interferences and random noise. Oscillatory Behavior-based Signal Decomposition (OBSD) is a new technique which decomposes a signal according to its oscillatory behavior, rather than frequency or scale. Due to the low oscillatory transients of bearing fault-induced signals, the OBSD can be used to effectively extract bearing fault signatures from a blurred signal. However, the quality of the result depends heavily on the selection of method-related parameters. Such parameters are often subjectively selected, and a systematic approach has not been reported in the literature. As such, this paper proposes a systematic approach to the automatic selection of OBSD parameters for reliable extraction of bearing fault signatures. The OBSD utilizes the idea of Morphological Component Analysis (MCA), which optimally projects the original signal onto low-oscillatory wavelets and high-oscillatory wavelets established via the Tunable Q-factor Wavelet Transform (TQWT). In this paper, the effects of the selection of each parameter on the performance of the OBSD for bearing fault signature extraction are investigated. It is found that some method-related parameters can be fixed at certain values due to the nature of bearing fault-induced impulses. To adaptively tune the remaining parameters, index-guided parameter selection algorithms are proposed. A Convergence Index (CI) is proposed, and a CI-guided self-tuning algorithm is developed to tune the convergence-related parameters, namely, the penalty factor and the number of iterations. Furthermore, a Smoothness Index (SI) is employed to measure the effectiveness of the extracted low-oscillatory component (i.e., the bearing fault signature). It is shown that a minimum SI implies an optimal result with respect to the adjustment of relevant parameters. Thus, two SI-guided automatic parameter selection algorithms are also developed to specify two other parameters, i.e., the Q-factor of the high-oscillatory wavelets and...
Hartzell, Allyson L; Shea, Herbert R
2010-01-01
This book focuses on the reliability and manufacturability of MEMS at a fundamental level. It demonstrates how to design MEMs for reliability and provides detailed information on the different types of failure modes and how to avoid them.
Swanepoel, Konrad J
2011-01-01
A subset of a normed space X is called equilateral if the distance between any two points is the same. Let m(X) be the smallest possible size of an equilateral subset of X maximal with respect to inclusion. We first observe that Petty's construction of a d-dimensional X of any finite dimension d >= 4 with m(X)=4 can be generalised to show that m(X\oplus_1\R)=4 for any X of dimension at least 2 which has a smooth point on its unit sphere. By a construction involving Hadamard matrices we then show that both m(\ell_p) and m(\ell_p^d) are finite and bounded above by a function of p, for all 1 <= p < 2. Also, for each such p and each d there exists c > 1 such that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than c from \ell_p^d. Using Brouwer's fixed-point theorem we show that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than 3/2 from \ell_\infty^d. A graph-theoretical argument furthermore shows that m(\ell_\infty^d)=d+1. The above results lead us to conjecture that m(X) <= 1+\dim X.
Unified Maximally Natural Supersymmetry
Huang, Junwu
2016-01-01
Maximally Natural Supersymmetry, an unusual weak-scale supersymmetric extension of the Standard Model based upon the inherently higher-dimensional mechanism of Scherk-Schwarz supersymmetry breaking (SSSB), possesses remarkably good fine tuning given present LHC limits. Here we construct a version with precision $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ unification: $\\sin^2 \\theta_W(M_Z) \\simeq 0.231$ is predicted to $\\pm 2\\%$ by unifying $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ into a 5D $SU(3)_{\\rm EW}$ theory at a Kaluza-Klein scale of $1/R_5 \\sim 4.4\\,{\\rm TeV}$, where SSSB is simultaneously realised. Full unification with $SU(3)_{\\rm C}$ is accommodated by extending the 5D theory to a $N=4$ supersymmetric $SU(6)$ gauge theory on a 6D rectangular orbifold at $1/R_6 \\sim 40 \\,{\\rm TeV}$. TeV-scale states beyond the SM include exotic charged fermions implied by $SU(3)_{\\rm EW}$ with masses lighter than $\\sim 1.2\\,{\\rm TeV}$, and squarks in the mass range $1.4\\,{\\rm TeV} - 2.3\\,{\\rm TeV}$, providing distinct signature...
Mouraux, André; Marot, Emilie; Legrain, Valéry
2014-02-21
Currently, the study of nociception in humans relies mainly on thermal stimulation of heat-sensitive nociceptive afferents. To circumvent some limitations of thermal stimulation, it was proposed that intra-epidermal electrical stimulation (IES) could be used as an alternative method to activate nociceptors selectively. The selectivity of IES relies on the fact that it can generate a very focal electrical current and, thereby, activate nociceptive free nerve endings located in the epidermis without concomitantly activating non-nociceptive mechanoreceptors located more deeply in the dermis. However, an important limitation of IES is that it is selective for nociceptors only when very low current intensities are used. At these intensities, the stimulus generates a very weak percept, and the signal-to-noise ratio of the elicited evoked potentials (EPs) is very low. To circumvent this limitation, it was proposed that the strength of the nociceptive afferent volley could be increased through temporal summation, using short trains of repeated IES. Here, we characterized the intensity of perception and EPs elicited by trains of 2, 3 and 4 IES delivered using a 5-ms inter-stimulus interval. We found that both the intensity of perception and the magnitude of EPs significantly increased with the number of pulses. In contrast, the latency of the elicited EPs was not affected by the number of pulses, indicating that temporal summation did not affect the type of activated fibers and, therefore, that trains of IES can be used to increase the reliability of stimulus-evoked responses while still preserving its selectivity for nociceptors.
Kopáček Jaroslav
2016-01-01
This paper focuses on the importance of detection reliability, especially in complex fluid systems for demanding production technology. The initial criterion for assessing reliability is the failure of an object (element), which is treated as a random variable whose data (values) can be processed using the mathematical methods of probability theory and statistics. The basic indicators of reliability are defined, together with their application in calculations for serial, parallel and backed-up systems. For illustration, calculation examples of reliability indicators are given for various elements of the system and for a selected pneumatic circuit.
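The serial and parallel calculations referred to above follow the standard product rules: a series system works only if every element works, while a parallel system fails only if every element fails. A sketch with illustrative element reliabilities (not values from the paper):

```python
def series(rels):
    """Reliability of a series (serial) system: the product of element
    reliabilities, since every element must work."""
    r = 1.0
    for x in rels:
        r *= x
    return r

def parallel(rels):
    """Reliability of a parallel system: one minus the probability that
    every element fails simultaneously."""
    q = 1.0
    for x in rels:
        q *= (1.0 - x)
    return 1.0 - q

# Three elements, each 0.9 reliable: series degrades, parallel improves.
assert abs(series([0.9, 0.9, 0.9]) - 0.729) < 1e-9
assert abs(parallel([0.9, 0.9, 0.9]) - 0.999) < 1e-9
```

Backed-up (standby) systems fall between these two extremes, since the redundant element is only stressed after the primary fails.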
Bendell, A
1986-01-01
Software Reliability reviews some fundamental issues of software reliability as well as the techniques, models, and metrics used to predict the reliability of software. Topics covered include fault avoidance, fault removal, and fault tolerance, along with statistical methods for the objective assessment of predictive accuracy. Development cost models and life-cycle cost models are also discussed. This book is divided into eight sections and begins with a chapter on adaptive modeling used to predict software reliability, followed by a discussion on failure rate in software reliability growth models.
Knol Dirk L
2010-09-01
Background The COSMIN checklist is a tool for evaluating the methodological quality of studies on measurement properties of health-related patient-reported outcomes. The aim of this study is to determine the inter-rater agreement and reliability of each item score of the COSMIN checklist (n = 114). Methods 75 articles evaluating measurement properties were randomly selected from the bibliographic database compiled by the Patient-Reported Outcome Measurement Group, Oxford, UK. Raters were asked to assess the methodological quality of three articles, using the COSMIN checklist. In a one-way design, percentage agreement and intraclass kappa coefficients or quadratic-weighted kappa coefficients were calculated for each item. Results 88 raters participated. Of the 75 selected articles, 26 articles were rated by four to six participants, and 49 by two or three participants. Overall, percentage agreement was appropriate (68% of items had above 80% agreement), whereas the kappa coefficients for the COSMIN items were low (61% were below 0.40, 6% were above 0.75). Reasons for low inter-rater agreement were the need for subjective judgement, and being accustomed to different standards, terminology and definitions. Conclusions The results indicated that raters often chose the same response option, but that at the item level it is difficult to distinguish between articles. When using the COSMIN checklist in a systematic review, we recommend getting some training and experience, having it completed by two independent raters, and reaching consensus on one final rating. Instructions for using the checklist have been improved.
Kwon, Oh Sang; Oh, Jun Kyu; Kim, Mi Hyang; Park, So Hyun; Pyo, Hyun Keol; Kim, Kyu Han; Cho, Kwang Hyun; Eun, Hee Chul
2006-02-01
Of the numerous assays used to assess hair growth, the hair follicle organ culture model is one of the most popular and powerful in vitro systems. Changes in hair growth are commonly employed as a measurement of follicular activity. The hair cycle stage of mouse vibrissa follicles in vivo is known to determine subsequent hair growth and follicle behavior in vitro, and it is recommended that follicles be taken at precisely the same cyclic stage. This study was performed to evaluate whether categorization of human hair follicles by their growth in vivo could be used to select follicles of a defined anagen stage for more consistent culture. Occipital scalp samples were obtained from three subjects, 2 weeks after hair bleaching. Hair growth and follicle length of isolated anagen VI follicles were measured under a videomicroscope. Follicles were categorized into four groups according to hair growth and some were cultured ex vivo for 6 days. Follicles showed considerable variations with respect to hair growth and follicle length; however, these two variables were relatively well correlated. Hair growth in culture was closely related to the hair growth rate in vivo. Moreover, minoxidil uniquely demonstrated a significant increase of hair growth in categorized hair follicles assumed to be at a similar early anagen VI stage of the hair cycle. Selection of follicles at a defined stage based on hair-growth rate would permit a more reliable outcome in human hair follicle organ culture.
Maximal subgroups of finite groups
S. Srinivasan
1990-01-01
In finite groups maximal subgroups play a very important role. Results in the literature show that if a maximal subgroup has a very small index in the whole group then it influences the structure of the group itself. In this paper we study the case when the indices of the maximal subgroups of the group have a special type of relation with the Fitting subgroup of the group.
Finding Maximal Quasiperiodicities in Strings
Brodal, Gerth Stølting; Pedersen, Christian N. S.
2000-01-01
Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log² n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes...
Maximizing Entropy over Markov Processes
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2013-01-01
...computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of the global entropy of a process as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains, and its application to synthesize an implementation maximizing entropy. We show how...
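For a fixed Markov chain, the global entropy discussed above reduces to the classical entropy rate: the stationary-weighted average of the per-state transition entropies. A sketch computing it for concrete chains (power iteration for the stationary distribution; the chains are illustrative, and this evaluates entropy of one chain rather than maximizing over an interval specification):

```python
import math

def entropy_rate(P, iters=1000):
    """Entropy rate (bits/step) of an irreducible Markov chain with
    row-stochastic transition matrix P: H = sum_i pi_i * H(P[i]),
    where pi is the stationary distribution (found by power iteration)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

    def row_entropy(row):
        return -sum(p * math.log2(p) for p in row if p > 0)

    return sum(pi[i] * row_entropy(P[i]) for i in range(n))

# A deterministic cycle has zero entropy; a uniform chain maximizes it.
cycle = [[0, 1], [1, 0]]
uniform = [[0.5, 0.5], [0.5, 0.5]]
assert entropy_rate(cycle) == 0.0
assert abs(entropy_rate(uniform) - 1.0) < 1e-9
```

Maximizing this quantity over all chains consistent with an interval specification is the synthesis problem the paper addresses.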
Maximizing entropy over Markov processes
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2014-01-01
...computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of the global entropy of a process as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains, and its application to synthesize an implementation maximizing entropy. We show how...
Maximizing results in reconstruction of cheek defects.
Mureau, Marc A M; Hofer, Stefan O P
2009-07-01
The face is exceedingly important, as it is the medium through which individuals interact with the rest of society. Reconstruction of cheek defects after trauma or surgery is a continuing challenge for surgeons who wish to reliably restore facial function and appearance. Important in aesthetic facial reconstruction are the aesthetic unit principles, by which the face can be divided into central facial units (nose, lips, eyelids) and peripheral facial units (cheeks, forehead, chin). This article summarizes established options for reconstruction of cheek defects and provides an overview of several modifications as well as tips and tricks to avoid complications and maximize aesthetic results.
Longjian Niu
2015-06-01
Real-time quantitative PCR (RT-qPCR) is a reliable and widely used method for gene expression analysis. The accuracy of the determination of a target gene expression level by RT-qPCR demands the use of appropriate reference genes to normalize the mRNA levels among different samples. However, suitable reference genes for RT-qPCR have not been identified in Sacha inchi (Plukenetia volubilis), a promising oilseed crop known for its polyunsaturated fatty acid (PUFA)-rich seeds. In this study, using RT-qPCR, twelve candidate reference genes were examined in seedlings and adult plants, during flower and seed development and for the entire growth cycle of Sacha inchi. Four statistical algorithms (delta cycle threshold (ΔCt), BestKeeper, geNorm, and NormFinder) were used to assess the expression stabilities of the candidate genes. The results showed that ubiquitin-conjugating enzyme (UCE), actin (ACT) and phospholipase A22 (PLA) were the most stable genes in Sacha inchi seedlings. For roots, stems, leaves, flowers, and seeds from adult plants, 30S ribosomal protein S13 (RPS13), cyclophilin (CYC) and elongation factor-1alpha (EF1α) were recommended as reference genes for RT-qPCR. During the development of reproductive organs, PLA, ACT and UCE were the optimal reference genes for flower development, whereas UCE, RPS13 and RNA polymerase II subunit (RPII) were optimal for seed development. Considering the entire growth cycle of Sacha inchi, UCE, ACT and EF1α were sufficient for the purpose of normalization. Our results provide useful guidelines for the selection of reliable reference genes for the normalization of RT-qPCR data for seedlings and adult plants, for reproductive organs, and for the entire growth cycle of Sacha inchi.
张鹏
2013-01-01
For the case where short selling is not allowed, this paper proposes two utility-maximizing portfolio selection models: one containing risky assets only (no risk-free asset) and one that also includes a risk-free asset. A parametric method based on the pivoting algorithm for systems of inequalities is used to study the structure of the efficient frontiers of the two models. The results show that when only risky assets are held, the efficient frontier is a continuous, piecewise parabolic curve that is not necessarily smooth; when a risk-free asset is included and may be sold short while the risky assets may not, the efficient frontier is a continuous ray. In both cases the risk-preference coefficient reflects the investor's trade-off between expected return and risk well only within certain intervals. In addition, the pivoting algorithm for systems of inequalities is verified to be simple to operate and computationally efficient.
常浩
2013-01-01
This paper applies the martingale approach to a dynamic portfolio selection problem in an incomplete market. By reducing the dimension of the Brownian motion, we transform the incomplete market into a complete one and use the martingale approach to derive the optimal investment strategy for logarithmic utility maximization in the completed market, obtaining an explicit expression for the optimal strategy. We then recover the optimal investment strategy in the original incomplete market by applying the parameter relationships between the original incomplete market and the completed one. Numerical examples compare the optimal portfolios for logarithmic utility with those for power utility and exponential utility, in both a complete market and an incomplete one.
Gonzalez-Sanchez, Jon
2010-01-01
Let $w = w(x_1,..., x_n)$ be a word, i.e. an element of the free group $F$ on $n$ generators $x_1,..., x_n$. The verbal subgroup $w(G)$ of a group $G$ is the subgroup generated by the set $\{w(g_1,...,g_n)^{\pm 1} \mid g_i \in G, 1\leq i\leq n \}$ of all $w$-values in $G$. We say that a (finite) group $G$ is $w$-maximal if $|G:w(G)| > |H:w(H)|$ for all proper subgroups $H$ of $G$, and that $G$ is hereditarily $w$-maximal if every subgroup of $G$ is $w$-maximal. In this text we study $w$-maximal and hereditarily $w$-maximal (finite) groups.
Reliable MPR Selection Algorithm Based on Link Prediction
郭玉婷; 李强
2015-01-01
To reduce the impact of node mobility on routing performance in mobile ad-hoc networks (MANETs), this paper proposes a reliable MPR selection algorithm based on link prediction. The remaining link lifetime between a node and each of its neighbors is predicted from the inter-node distance and the wireless transmission range, and the remaining link effective time (RTTQ) is proposed as a new metric for multi-point relay (MPR) selection in the OLSR routing protocol. Extensive NS2 network simulations show that selecting MPR nodes whose RTTQ exceeds a critical value improves several performance measures, including MPR survival time, packet delivery ratio (PDR) and average throughput (ATT).
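The link-lifetime prediction described in this abstract can be sketched under the common constant-velocity assumption; the function name and the 2-D setup are illustrative, not the paper's actual formulation:

```python
import math

def link_lifetime(p1, v1, p2, v2, radio_range):
    """Predict the remaining validity time of the link between two nodes:
    the time until their distance first exceeds the radio range, assuming
    straight-line motion at constant velocity (a simplifying assumption)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]      # relative position
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]    # relative velocity
    c = dx * dx + dy * dy - radio_range ** 2
    if c > 0:
        return 0.0                              # already out of range
    vv = dvx * dvx + dvy * dvy
    if vv == 0:
        return math.inf                         # no relative motion
    # smallest t >= 0 with |(dx, dy) + t (dvx, dvy)| = radio_range
    b = dx * dvx + dy * dvy
    return (-b + math.sqrt(b * b - vv * c)) / vv

# Nodes 100 m apart moving directly apart at 10 m/s, 250 m radio range:
t = link_lifetime((0, 0), (0, 0), (100, 0), (10, 0), 250)
# expected lifetime: (250 - 100) / 10 = 15 s
```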
Maximizing without difficulty: A modified maximizing scale and its correlates
Lai, Linda
2010-01-01
... included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cognition, desire for consistency, risk aversion, intrinsic motivation, self-efficacy and perceived workload, whereas...
Maximizing and customer loyalty: Are maximizers less loyal?
Linda Lai
2011-06-01
Full Text Available Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.
Principles of maximally classical and maximally realistic quantum mechanics
S M Roy
2002-08-01
Recently Auberson, Mahoux, Roy and Singh have proved a long-standing conjecture of Roy and Singh: in 2N-dimensional phase space, a maximally realistic quantum mechanics can have quantum probabilities of no more than N + 1 complete commuting sets (CCS) of observables coexisting as marginals of one positive phase space density. Here I formulate a stationary principle which gives a nonperturbative definition of a maximally classical as well as maximally realistic phase space density. I show that the maximally classical trajectories are in fact exactly classical in the simple examples of coherent states and bound states of an oscillator and Gaussian free particle states. In contrast, it is known that the de Broglie–Bohm realistic theory gives highly nonclassical trajectories.
Lazzaroni, Massimo
2012-01-01
This book gives a practical guide for designers and users in the Information and Communication Technology context. In particular, in the first Section, the definitions of the fundamental terms according to the international standards are given. Then, some theoretical concepts and reliability models are presented in Chapters 2 and 3: the aim is to evaluate performance for components and systems and reliability growth. Chapter 4, by introducing laboratory tests, highlights the reliability concept from the experimental point of view. In the ICT context, the failure rate for a given system can be
Maximizing ROI with yield management
Neil Snyder
2001-01-01
.... the technology is based on the concept of yield management, which aims to sell the right product to the right customer at the right price and the right time therefore maximizing revenue, or yield...
Chen, Qing; Zhang, Jinxiu; Hu, Ze
2017-02-23
This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel stability metric for inter-satellite links (ISLs). The periodicity and predictability of the satellites' relative positions are built into the link cost metric, which provides a selection criterion for choosing the most reliable data routing paths. A cooperative work model with reliability is also proposed for emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCN scenario is tested through numerical simulations of topology stability, measured by average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy significantly improves data transmission performance and prolongs the average topology lifetime.
Are CEOs Expected Utility Maximizers?
John List; Charles Mason
2009-01-01
Are individuals expected utility maximizers? This question represents much more than academic curiosity. In a normative sense, at stake are the fundamental underpinnings of the bulk of the last half-century's models of choice under uncertainty. From a positive perspective, the ubiquitous use of benefit-cost analysis across government agencies renders the expected utility maximization paradigm literally the only game in town. In this study, we advance the literature by exploring CEO's preferen...
Gaussian maximally multipartite entangled states
Facchi, Paolo; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio
2009-01-01
We introduce the notion of maximally multipartite entangled states (MMES) in the context of Gaussian continuous variable quantum systems. These are bosonic multipartite states that are maximally entangled over all possible bipartitions of the system. By considering multimode Gaussian states with constrained energy, we show that perfect MMESs, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of MMESs and their frustration for n <= 7.
All maximally entangling unitary operators
Cohen, Scott M. [Department of Physics, Duquesne University, Pittsburgh, Pennsylvania 15282 (United States); Department of Physics, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213 (United States)
2011-11-15
We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d{sub A}{<=}d{sub B}, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ and a proof that these capacities must be equal when d{sub A}=d{sub B}.
Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo
2016-01-01
Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\\gamma\\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
Caputo, Pierpaolo; Rovagnati, Marco; Pakrawanan, Hamid; Carzaniga, Pier Luigi
2014-01-01
An article in the BMJ issue of May 2012 (11) tackled the issue of safeguarding health by preventing diagnostic overtreatment. Observation of the diagnostic options in clinical routine enabled us to critically assess the appropriateness of the use of ionising radiation in monitoring acute diverticulitis by means of CT imaging. This disease, which has always been frequent in elderly patients, has recently assumed a new role as an endemic disease in the Caucasian population aged 40 to 50 in the Western world (6). We considered 79 cases coming under observation in the Emergency Room over a period of 115 months, selected from a pool of 136 according to Hinchey Score (Hs) 0-1a-1b assigned on admission after an ultrasound (US) examination. The choice of the first diagnostic approach depended on the severity of the patient's clinical condition, the degree of collaboration of the patient, and the discretion of the radiologist, although the concerted opinion was to prefer the US test given its clearly established advantages of being convenient and harmless. During the period of recovery we noted the tendency to subordinate the choice of instrument to the habit and discretion of the attending practitioner. Our proposal was to introduce a standardised personal criterion which took into account the problem of stochastic harm from ionising radiation. Whether exposure to a CT, as opposed to a US, was needed to verify the clinical condition was thus deduced by means of a Reliability Ultrasound Score (RUS). RESULTS: Using this score we were able to schedule, in 14 of the 37 cases in one branch of the study, an effective diagnostic check-up programme in safety and with an overall saving of 32% of the ionising radiation. During this study we quantified the total amount of milliSievert not dispensed in the 79 cases with Hs 2.
A. Garmroodi Asil
2017-09-01
To further reduce the sulfur dioxide emission of the entire refining process, two scenarios, acid gas preheat and air preheat, are investigated when either is used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and the highest combustion chamber temperature are slightly higher for acid gas preheat, but air preheat is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.
Algebraic curves of maximal cyclicity
Caubergh, Magdalena; Dumortier, Freddy
2006-01-01
The paper deals with analytic families of planar vector fields, studying methods to detect the cyclicity of a non-isolated closed orbit, i.e. the maximum number of limit cycles that can locally bifurcate from it. It is known that this multi-parameter problem can be reduced to a single-parameter one, in the sense that there exist analytic curves in parameter space along which the maximal cyclicity can be attained. In that case one speaks about a maximal cyclicity curve (mcc) in case only the number is considered and of a maximal multiplicity curve (mmc) in case the multiplicity is also taken into account. In view of obtaining efficient algorithms for detecting the cyclicity, we investigate whether such mcc or mmc can be algebraic or even linear depending on certain general properties of the families or of their associated Bautin ideal. In any case by well chosen examples we show that prudence is appropriate.
BOUNDEDNESS OF MAXIMAL SINGULAR INTEGRALS
CHEN JIECHENG; ZHU XIANGRONG
2005-01-01
The authors study singular integrals under the Hörmander condition with a measure not satisfying the doubling condition. First, if the corresponding singular integral is bounded from L2 to itself, it is proved that the maximal singular integral is bounded from L∞ to RBMO unless it is infinite μ-a.e. on Rd. A sufficient condition and a necessary condition for the maximal singular integral to be bounded from L2 to itself are also obtained. There is a small gap between the two conditions.
Understanding of English Contracts through Relation Maxims
XU Chi-ying; JIANG Li-hui
2013-01-01
A contract is the legal evidence of the business relationship between the parties concerned, which gives rise to its unique characteristics: technical terms, archaisms, borrowed words, juxtaposition, and abbreviation. Understanding contracts is of vital importance for each party, because it concerns the share of interests. In order to avoid the ambiguity that some words or sentences in English contracts may lead to, and to achieve the "best relevance and least effort" of communication, this paper applies the relation maxim to analyze in depth how to understand English contracts through the choice of words, modification, and the complexity or simplicity of sentences.
Network architecture underlying maximal separation of neuronal representations
Ron A Jortner
2013-01-01
Full Text Available One of the most basic and general tasks faced by all nervous systems is extracting relevant information from the organism's surrounding world. While the physical signals available to sensory systems are often continuous, variable, overlapping and noisy, the high-level neuronal representations used for decision-making tend to be discrete, specific, invariant, and highly separable. This study addresses the question of how neuronal specificity is generated. Inspired by experimental findings on network architecture in the olfactory system of the locust, I construct a highly simplified theoretical framework which allows for analytic solution of its key properties. For generalized feed-forward systems, I show that an intermediate range of connectivity values between source and target populations leads to a combinatorial explosion of wiring possibilities, resulting in input spaces which are, by their very nature, exquisitely sparsely populated. In particular, connection probability ½, as found in the locust antennal-lobe–mushroom-body circuit, serves to maximize separation of neuronal representations across the target Kenyon cells, and explains their specific and reliable responses. This analysis yields a function expressing response specificity in terms of lower-level network parameters; together with appropriate gain control, this leads to a simple neuronal algorithm for generating arbitrarily sparse and selective codes, linking network architecture and neural coding. I suggest a way to easily construct ecologically meaningful representations from this code.
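The claim that connection probability ½ maximizes the diversity of wiring patterns can be illustrated with a small sketch: the per-connection entropy H(p) is maximal at p = 1/2, so the effective number of likely connectivity vectors, roughly 2^(n·H(p)), peaks there. This is an illustrative aid, not code from the study:

```python
from math import log2

def wiring_possibilities(n_sources, p):
    """Log2 of the effective number of distinct connectivity patterns
    when each of n_sources inputs connects independently with
    probability p. Per-connection entropy H(p) peaks at p = 1/2."""
    if p in (0.0, 1.0):
        return 0.0                       # deterministic wiring: one pattern
    h = -(p * log2(p) + (1 - p) * log2(1 - p))
    return n_sources * h

# Entropy (bits) per target neuron for 800 projection neurons:
bits = {p: round(wiring_possibilities(800, p), 1) for p in (0.05, 0.5, 0.95)}
# p = 1/2 gives the maximum: 800 bits, i.e. ~2^800 wiring vectors
```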
Understanding maximal repetitions in strings
Crochemore, Maxime
2008-01-01
The cornerstone of any algorithm computing all repetitions in a string of length n in O(n) time is the fact that the number of runs (or maximal repetitions) is O(n). We give a simple proof of this result. As a consequence of our approach, the stronger result concerning the linearity of the sum of exponents of all runs follows easily.
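A brute-force enumeration of runs makes the objects being counted concrete; this cubic-time sketch is purely illustrative, since the point of the result is that linear-time algorithms can rely on the O(n) bound on the number of runs:

```python
def maximal_repetitions(s):
    """Enumerate runs (maximal repetitions): maximal substrings s[i:j]
    whose smallest period p satisfies j - i >= 2p. Brute force, for
    illustration only."""
    n = len(s)
    runs = set()
    for i in range(n):
        for j in range(i + 2, n + 1):
            # smallest period of s[i:j]
            p = next(p for p in range(1, j - i + 1)
                     if all(s[k] == s[k - p] for k in range(i + p, j)))
            if j - i >= 2 * p:
                # extend with the same period until maximal
                ii, jj = i, j
                while ii > 0 and s[ii - 1] == s[ii - 1 + p]:
                    ii -= 1
                while jj < n and s[jj] == s[jj - p]:
                    jj += 1
                runs.add((ii, jj, p))
    return sorted(runs)

# "mississippi" has 4 runs: "ss" twice, "pp", and "ississi" (period 3)
runs = maximal_repetitions("mississippi")
```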
IMRank: Influence Maximization via Finding Self-Consistent Ranking
Cheng, Suqi; Shen, Hua-Wei; Huang, Junming; Chen, Wei; Cheng, Xue-Qi
2014-01-01
Influence maximization, fundamental for word-of-mouth marketing and viral marketing, aims to find a set of seed nodes maximizing influence spread on social network. Early methods mainly fall into two paradigms with certain benefits and drawbacks: (1)Greedy algorithms, selecting seed nodes one by one, give a guaranteed accuracy relying on the accurate approximation of influence spread with high computational cost; (2)Heuristic algorithms, estimating influence spread using efficient heuristics,...
2017-01-17
This report was cleared for public release. It presents testing for reliability prediction of devices exhibiting multiple failure mechanisms, together with an integrated accelerating and measuring approach (Table 2: T, V, F matrix versus measured FIT).
Cardiorespiratory Coordination in Repeated Maximal Exercise
Sergi Garcia-Retortillo
2017-06-01
Full Text Available Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC
Cardiorespiratory Coordination in Repeated Maximal Exercise.
Garcia-Retortillo, Sergi; Javierre, Casimiro; Hristovski, Robert; Ventura, Josep L; Balagué, Natàlia
2017-01-01
Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC evaluation in
Note on maximal distance separable codes
YANG Jian-sheng; WANG De-xiu; JIN Qing-fang
2009-01-01
In this paper, the maximal length of maximal distance separable (MDS) codes is studied, and a new upper bound formula for the maximal length of MDS codes is obtained. In particular, the exact values of the maximal length of MDS codes for some parameters are given.
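The MDS property at issue here (meeting the Singleton bound d = n - k + 1 with equality) can be checked exhaustively for small codes; the generator matrix below is an illustrative Reed-Solomon-style example, not one from the paper:

```python
from itertools import product

def min_distance(G, q):
    """Exhaustively compute the minimum Hamming weight of the nonzero
    codewords generated by the k x n matrix G over Z_q (q prime)."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product(range(q), repeat=k):
        if any(msg):
            cw = [sum(m * g for m, g in zip(msg, col)) % q
                  for col in zip(*G)]
            best = min(best, sum(c != 0 for c in cw))
    return best

def is_mds(G, q):
    """A linear [n, k, d] code is MDS iff d = n - k + 1 (Singleton bound
    met with equality)."""
    k, n = len(G), len(G[0])
    return min_distance(G, q) == n - k + 1

# [4, 2] code over GF(5): rows evaluate the polynomials 1 and x at 0..3,
# so every nonzero codeword has at most one zero, giving d = 3 = 4 - 2 + 1
G = [[1, 1, 1, 1], [0, 1, 2, 3]]
```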
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.
Asymptotics of robust utility maximization
Knispel, Thomas
2012-01-01
For a stochastic factor model we maximize the long-term growth rate of robust expected power utility with parameter $\\lambda\\in(0,1)$. Using duality methods the problem is reformulated as an infinite time horizon, risk-sensitive control problem. Our results characterize the optimal growth rate, an optimal long-term trading strategy and an asymptotic worst-case model in terms of an ergodic Bellman equation. With these results we propose a duality approach to a "robust large deviations" criterion for optimal long-term investment.
Multivariate residues and maximal unitarity
Søgaard, Mads; Zhang, Yang
2013-12-01
We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.
Beeping a Maximal Independent Set
Afek, Yehuda; Alon, Noga; Bar-Joseph, Ziv; Cornejo, Alejandro; Haeupler, Bernhard; Kuhn, Fabian
2012-01-01
We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot...
Maximal Congruences on Some Semigroups
Jintana Sanwong; R.P. Sullivan
2007-01-01
In 1976 Howie proved that a finite congruence-free semigroup with at least three elements but no zero element is a simple group. Infinite congruence-free semigroups are far more complicated to describe, but some have been constructed using semigroups of transformations (for example, by Howie in 1981 and by Marques in 1983). Here, for certain semigroups S of numbers and of transformations, we determine all congruences p on S such that S/p is congruence-free; that is, we describe all maximal congruences on such semigroups S.
Maximizing Statistical Power When Verifying Probabilistic Forecasts of Hydrometeorological Events
DeChant, C. M.; Moradkhani, H.
2014-12-01
Hydrometeorological events (i.e. floods, droughts, precipitation) are increasingly being forecasted probabilistically, owing to the uncertainties in the underlying causes of the phenomenon. In these forecasts, the probability of the event, over some lead time, is estimated based on model simulations or predictive indicators. By issuing probabilistic forecasts, agencies may communicate the uncertainty in the event occurring. Assuming that the assigned probability of the event is correct, which is referred to as a reliable forecast, the end user may perform risk management based on the potential damages resulting from the event. Alternatively, an unreliable forecast may give false impressions of the actual risk, leading to improper decision making when protecting resources from extreme events. Given this need for reliable forecasts in effective risk management, this study takes a renewed look at reliability assessment for event forecasts. Illustrative experiments will be presented, showing deficiencies in the commonly available approaches (Brier Score, Reliability Diagram). Overall, it is shown that the conventional reliability assessment techniques do not maximize the ability to distinguish between a reliable and an unreliable forecast. In this regard, a theoretical formulation of the probabilistic event forecast verification framework will be presented. From this analysis, hypothesis testing with the Poisson-Binomial distribution is the most exact model available for the verification framework, and therefore maximizes one's ability to distinguish between a reliable and an unreliable forecast. The verification system was also applied in a real forecasting case study, highlighting the additional statistical power provided by the Poisson-Binomial distribution.
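The Poisson-Binomial verification advocated in this abstract can be sketched directly: under reliable forecasts, the event count is a sum of independent Bernoulli trials with heterogeneous probabilities, whose exact distribution is a convolution. The forecast probabilities below are hypothetical:

```python
def poisson_binomial_pmf(probs):
    """Exact PMF of the number of events among independent trials with
    heterogeneous probabilities, built by repeated convolution."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, m in enumerate(pmf):
            nxt[k] += m * (1 - p)     # event does not occur
            nxt[k + 1] += m * p       # event occurs
        pmf = nxt
    return pmf

def reliability_p_value(forecast_probs, observed_count):
    """Exact two-sided test: total probability of all counts at least as
    unlikely as the observed one, if the forecasts were reliable."""
    pmf = poisson_binomial_pmf(forecast_probs)
    p_obs = pmf[observed_count]
    return sum(m for m in pmf if m <= p_obs + 1e-12)

# Ten forecasts with assigned event probabilities; 9 events occurred:
pv = reliability_p_value([0.1, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9], 9)
# a small p-value is evidence that the forecasts were not reliable
```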
Knowledge discovery by accuracy maximization.
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-04-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning.
Maximal right smooth extension chains
Huang, Yun Bao
2010-01-01
If $w=u\alpha$ for $\alpha\in \Sigma=\{1,2\}$ and $u\in \Sigma^*$, then $w$ is said to be a \textit{simple right extension} of $u$, denoted by $u\prec w$. Let $k$ be a positive integer and let $P^k(\epsilon)$ denote the set of all $C^\infty$-words of height $k$. For $u_{1},\,u_{2},..., u_{m}\in P^{k}(\epsilon)$, if $u_{1}\prec u_{2}\prec ...\prec u_{m}$ and there is no element $v$ of $P^{k}(\epsilon)$ such that $v\prec u_{1}$ or $u_{m}\prec v$, then $u_{1}\prec u_{2}\prec...\prec u_{m}$ is said to be a \textit{maximal right smooth extension (MRSE) chain} of height $k$. In this paper, we show that \textit{MRSE} chains of height $k$ constitute a partition of the smooth words of height $k$ and give a formula for the number of \textit{MRSE} chains of height $k$ for each positive integer $k$. Moreover, since there exist a minimal height $h_1$ and a maximal height $h_2$ of smooth words of length $n$ for each positive integer $n$, we find that \textit{MRSE} chains of heights $h_1-1$ and $h_2+1$ are good candidates t...
Saiz, P; Rocha, R; Andreeva, J
2007-01-01
We are offering a system to track the efficiency of different components of the GRID. We can study the performance of both the WMS and the data transfers. At the moment, we have set up different parts of the system for ALICE, ATLAS, CMS and LHCb. None of the components that we have developed are VO specific, so it would be very easy to deploy them for any other VO. Our main goal is to improve the reliability of the GRID. The main idea is to discover the different problems that have occurred as soon as possible, and to inform the responsible parties. Since we study the jobs and transfers issued by real users, we see the same problems that users see. In fact, we see even more problems than the end user does, since we are also interested in following up the errors that GRID components can overcome by themselves (for instance, by resubmitting a failed job to a different site). This kind of information is very useful to site and VO administrators. They can find out the efficien...
Online Learning of Assignments that Maximize Submodular Functions
Golovin, Daniel; Streeter, Matthew
2009-01-01
Which ads should we display in sponsored search in order to maximize our revenue? How should we dynamically rank information sources to maximize value of information? These applications exhibit strong diminishing returns: Selection of redundant ads and information sources decreases their marginal utility. We show that these and other problems can be formalized as repeatedly selecting an assignment of items to positions to maximize a sequence of monotone submodular functions that arrive one by one. We present an efficient algorithm for this general problem and analyze it in the no-regret model. Our algorithm possesses strong theoretical guarantees, such as a performance ratio that converges to the optimal constant of 1-1/e. We empirically evaluate our algorithm on two real-world online optimization problems on the web: ad allocation with submodular utilities, and dynamically ranking blogs to detect information cascades.
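The core guarantee above comes from greedy selection for monotone submodular objectives. A minimal offline sketch (not the paper's online no-regret algorithm) using max coverage as the submodular function; the set instance is made up for illustration:

```python
def greedy_max_cover(sets, k):
    """Greedy selection for max coverage, a monotone submodular
    objective: repeatedly pick the set with the largest marginal gain.
    The classic guarantee is a (1 - 1/e)-approximation."""
    chosen, covered = [], set()
    for _ in range(k):
        gain, idx = max((len(s - covered), i) for i, s in enumerate(sets))
        if gain == 0:  # no set adds new elements; stop early
            break
        chosen.append(idx)
        covered |= sets[idx]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
chosen, covered = greedy_max_cover(sets, 2)
```

In the online setting studied in the paper, a no-regret learner replaces each greedy step, but the marginal-gain structure is the same.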
MAXIMIZING THE BENEFITS OF ERP SYSTEMS
Paulo André da Conceição Menezes
2010-04-01
ERP (Enterprise Resource Planning) systems have been consolidated in companies of different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied across different phases: the strategic priorities and strategic planning defined as ERP strategy; business process review and ERP selection in the pre-implementation phase; project management and ERP adaptation in the implementation phase; and ERP revision and integration efforts in the post-implementation phase. Through rigorous use of case study methodology, this research led to developing and testing a framework for maximizing the benefits of ERP systems, and seeks to contribute to the generation of ERP initiatives that optimize their performance.
Dispatch Scheduling to Maximize Exoplanet Detection
Johnson, Samson; McCrady, Nate; MINERVA
2016-01-01
MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
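A dispatch scheduler of this kind reduces to evaluating a weighting function over all currently observable targets and picking the argmax. The sketch below is hypothetical: the weight terms, field names, and numbers are illustrative, not MINERVA's actual function.

```python
import math

def dispatch_weight(target, now):
    """Hypothetical weighting function: favor targets that are overdue
    relative to their desired cadence and currently high in the sky."""
    overdue = (now - target["last_obs"]) / target["cadence"]
    elevation = math.sin(math.radians(target["altitude"]))
    return overdue * elevation

def pick_target(targets, now):
    """Dispatch scheduling: re-evaluate all weights at each decision
    point and observe the currently highest-weight target."""
    return max(targets, key=lambda t: dispatch_weight(t, now))

targets = [
    {"name": "HD1", "last_obs": 0.0, "cadence": 1.0, "altitude": 60.0},
    {"name": "HD2", "last_obs": 2.0, "cadence": 1.0, "altitude": 30.0},
]
best = pick_target(targets, now=3.0)
```

Because weights are recomputed at every decision point, the schedule adapts dynamically to weather and observing conditions rather than following a fixed queue.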
Zakir, Ali; Bengtsson, Marie; Sadek, Medhat M; Hansson, Bill S; Witzgall, Peter; Anderson, Peter
2013-09-01
Animals depend on reliable sensory information for accurate behavioural decisions. For herbivorous insects it is crucial to find host plants for feeding and reproduction, and these insects must be able to differentiate suitable from unsuitable plants. Volatiles are important cues for insect herbivores to assess host plant quality. It has previously been shown that female moths of the Egyptian cotton leafworm, Spodoptera littoralis (Lepidoptera: Noctuidae), avoid oviposition on damaged cotton, Gossypium hirsutum, which may be mediated by herbivore-induced plant volatiles (HIPVs). Among the HIPVs, some volatiles are released following any type of damage, while others are synthesized de novo and released by the plants only in response to herbivore damage. In behavioural experiments we here show that oviposition by S. littoralis on undamaged cotton plants was reduced by adding volatiles collected from plants with ongoing herbivory. Gas chromatography-electroantennographic detection (GC-EAD) recordings revealed that antennae of mated S. littoralis females responded to 18 compounds in the headspace volatiles of damaged cotton plants. Among these compounds, a blend of the seven de novo synthesized volatiles was found to reduce oviposition by S. littoralis on undamaged plants under both laboratory and ambient (field) conditions in Egypt. Volatile compounds that are not produced de novo by the plants did not affect oviposition. Our results show that ovipositing females respond specifically to the de novo synthesized volatiles released from plants under herbivore attack. We suggest that these volatiles provide reliable cues for ovipositing females to detect plants that would provide reduced-quality food for their offspring and pose an increased risk of competition and predation.
Gasthuys Frank
2006-04-01
Abstract Background: Real-time quantitative PCR can be a very powerful and accurate technique for examining gene transcription patterns in different biological conditions. One of the critical steps in comparing transcription profiles is accurate normalisation. In most studies published on real-time PCR in horses, normalisation occurred against only one reference gene, usually GAPDH or ACTB, without validation of its expression stability. This might result in unreliable conclusions, because it has been demonstrated that the expression levels of so-called "housekeeping genes" may vary considerably in different tissues, cell types or disease stages, particularly in clinical samples associated with malignant disease. The goal of this study was to establish a reliable set of reference genes for studies concerning normal equine skin and equine sarcoids, the most common skin tumour in horses. Results: In the present study the gene transcription levels of 6 commonly used reference genes (ACTB, B2M, HPRT1, UBB, TUBA1 and RPL32) were determined in normal equine skin and in equine sarcoids. After applying the geNorm applet to this set of genes, TUBA1, ACTB and UBB were found to be most stable in normal skin, and B2M, ACTB and UBB in equine sarcoids. Conclusion: Based on these results, TUBA1, ACTB and UBB can be proposed as a reference gene panel for accurate normalisation of quantitative data in normal equine skin, and B2M, ACTB and UBB in equine sarcoids. When normal skin and equine sarcoids are compared, the use of the geometric mean of UBB, ACTB and B2M can be recommended as a reliable and accurate normalisation factor.
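The geNorm procedure referenced above ranks a candidate reference gene by the average standard deviation of its pairwise log-ratios with the other candidates; a lower M-value means more stable expression. A simplified sketch with made-up expression values (the gene `GENE_X` and all numbers are hypothetical):

```python
import math
import statistics

def m_value(expr, gene, others):
    """geNorm-style gene stability M: mean standard deviation of the
    pairwise log2 expression ratios with every other candidate gene.
    Lower M indicates more stable expression."""
    sds = []
    for other in others:
        ratios = [math.log2(a / b) for a, b in zip(expr[gene], expr[other])]
        sds.append(statistics.stdev(ratios))
    return sum(sds) / len(sds)

# hypothetical expression values across four samples
expr = {
    "ACTB":   [10.0, 11.0, 10.5, 10.2],
    "UBB":    [20.0, 22.0, 21.0, 20.4],   # tracks ACTB closely
    "GENE_X": [5.0, 50.0, 8.0, 30.0],     # erratic -> unstable
}
m_actb = m_value(expr, "ACTB", ["UBB", "GENE_X"])
m_x = m_value(expr, "GENE_X", ["ACTB", "UBB"])
```

The full geNorm algorithm then iteratively drops the least stable gene and recomputes M, which is what yields ranked panels like those reported here.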
李瑞杰
2014-01-01
Carrier aggregation is one of the key technologies of LTE-Advanced; it effectively solves the bandwidth extension problem of the LTE-A system. A sensible choice of component carriers gives data transmission high reliability. This paper presents a component carrier selection algorithm that improves the reliability of data transmission by considering both component carrier channel quality and load balancing when selecting component carriers. Simulations show that the algorithm can effectively select component carriers for aggregation, lowering the bit error rate and block error rate of data transmission and reducing the average number of transmissions, thereby increasing the reliability of data transmission and reducing system overhead.
Quantitative approaches for profit maximization in direct marketing
van der Scheer, H.R.
1998-01-01
An effective direct marketing campaign aims at selecting those targets, offer and communication elements - at the right time - that maximize the net profits. The list of individuals to be mailed, i.e. the targets, is considered to be the most important component. Therefore, a large amount of direct
Teacher Praise: Maximizing the Motivational Impact. Teaching Strategies.
McVey, Mary D.
2001-01-01
Recognizes the influence of praise on human behavior, and provides specific suggestions on how to maximize the positive effects of praise when intended as positive reinforcement. Examines contingency, specificity, and selectivity aspects of praise. Cautions teachers to avoid the controlling effects of praise and the possibility that praise may…
The maximal D = 4 supergravities
Wit, Bernard de [Institute for Theoretical Physics and Spinoza Institute, Utrecht University, Postbus 80.195, NL-3508 TD Utrecht (Netherlands); Samtleben, Henning [Laboratoire de Physique, ENS Lyon, 46 allee d' Italie, F-69364 Lyon CEDEX 07 (France); Trigiante, Mario [Dept. of Physics, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Turin (Italy)
2007-06-15
All maximal supergravities in four space-time dimensions are presented. The ungauged Lagrangians can be encoded in an E_{7(7)}-Sp(56;R)/GL(28) matrix associated with the freedom of performing electric/magnetic duality transformations. The gauging is defined in terms of an embedding tensor Θ which encodes the subgroup of E_{7(7)} that is realized as a local invariance. This embedding tensor may imply the presence of magnetic charges which require corresponding dual gauge fields. The latter can be incorporated by using a recently proposed formulation that involves tensor gauge fields in the adjoint representation of E_{7(7)}. In this formulation the results take a universal form irrespective of the electric/magnetic duality basis. We present the general class of supersymmetric and gauge invariant Lagrangians and discuss a number of applications.
Maximizing profit using recommender systems
Das, Aparna; Ricketts, Daniel
2009-01-01
Traditional recommendation systems make recommendations based solely on the customer's past purchases, product ratings and demographic data, without considering the profitability of the items being recommended. In this work we study the question of how a vendor can directly incorporate the profitability of items into its recommender so as to maximize its expected profit while still providing accurate recommendations. Our approach uses the output of any traditional recommender system and adjusts it according to item profitabilities. The approach is parameterized so the vendor can control how much the profit-incorporating recommendation may deviate from the traditional recommendation. We study our approach under two settings and show that it achieves approximately 22% more profit than traditional recommendations.
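The parameterized adjustment described above can be sketched as a convex blend of the recommender's score and a normalized item profit. `alpha`, the item names, and the blending rule itself are illustrative assumptions, not the paper's exact formulation:

```python
def profit_adjusted_scores(rec_scores, profits, alpha=0.3):
    """Blend each item's traditional recommendation score with its
    normalized profit; alpha bounds how far the result may deviate
    from the pure recommendation (alpha=0 reproduces it exactly)."""
    max_profit = max(profits.values())
    return {
        item: (1.0 - alpha) * score + alpha * profits[item] / max_profit
        for item, score in rec_scores.items()
    }

scores = {"A": 0.9, "B": 0.8, "C": 0.4}      # traditional recommender output
profits = {"A": 1.0, "B": 10.0, "C": 2.0}    # vendor margin per item
adjusted = profit_adjusted_scores(scores, profits, alpha=0.5)
```

With a large `alpha`, a high-margin item like `B` can overtake the nominally best recommendation `A`; with `alpha=0` the traditional ranking is returned unchanged.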
The maximal D=5 supergravities
de Wit, Bernard; Trigiante, M; Wit, Bernard de; Samtleben, Henning; Trigiante, Mario
2007-01-01
The general Lagrangian for maximal supergravity in five spacetime dimensions is presented with vector potentials in the \bar{27} and tensor fields in the 27 representation of E_6. This novel tensor-vector system is subject to an intricate set of gauge transformations, describing 3(27-t) massless helicity degrees of freedom for the vector fields and 3t massive spin degrees of freedom for the tensor fields, where the (even) value of t depends on the gauging. The kinetic term of the tensor fields is accompanied by a unique Chern-Simons coupling which involves both vector and tensor fields. The Lagrangians are completely encoded in terms of the embedding tensor which defines the E_6 subgroup that is gauged by the vectors. The embedding tensor is subject to two constraints which ensure the consistency of the combined vector-tensor gauge transformations and the supersymmetry of the full Lagrangian. This new formulation encompasses all possible gaugings.
Constraint Propagation as Information Maximization
Abdallah, A Nait
2012-01-01
Dana Scott used the partial order among partial functions for his mathematical model of recursively defined functions. He interpreted the partial order as one of information content. In this paper we elaborate on Scott's suggestion of regarding computation as a process of information maximization by applying it to the solution of constraint satisfaction problems. Here the method of constraint propagation can be interpreted as decreasing uncertainty about the solution -- that is, as gain in information about the solution. As illustrative example we choose numerical constraint satisfaction problems to be solved by interval constraints. To facilitate this approach to constraint solving we formulate constraint satisfaction problems as formulas in predicate logic. This necessitates extending the usual semantics for predicate logic so that meaning is assigned not only to sentences but also to formulas with free variables.
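Interval constraint propagation of the kind described can be illustrated on a single constraint x + y = z: each variable's interval is narrowed using interval arithmetic on the other two, monotonically gaining information about the solution set. A minimal sketch (a single propagation step, not a full solver):

```python
def propagate_sum(x, y, z):
    """One propagation step for the constraint x + y = z over closed
    intervals (lo, hi): each interval is narrowed using the other two,
    discarding only values that belong to no solution."""
    z = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
    x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))
    y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))
    return x, y, z

# knowing only that z is in [3, 5] narrows x and y from [0, 10] to [0, 5]
x, y, z = propagate_sum((0.0, 10.0), (0.0, 10.0), (3.0, 5.0))
```

In the information-ordering reading of the paper, each narrowing step moves strictly up the partial order: the intervals shrink, so information about the solution increases.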
Basu, Asit P; Basu, Sujit K
1998-01-01
This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibul
A Revenue Maximization Approach for Provisioning Services in Clouds
Li Pan
2015-01-01
With the increased reliability, security, and reduced cost of cloud services, more and more users are attracted to having their jobs and applications outsourced to IAAS data centers. For a cloud provider, deciding how to provision services to clients is far from trivial. The objective of this decision is to maximize the provider's revenue while fulfilling its IAAS resource constraints. This problem is defined here as the IAAS cloud provider revenue maximization (ICPRM) problem. We formulate a service provision approach to help a cloud provider determine which combination of clients to admit, and at what Quality-of-Service (QoS) levels, so as to maximize the provider's revenue given its available resources. We show that the overall problem is nondeterministic polynomial-hard (NP-hard) and develop metaheuristic solutions based on the genetic algorithm to achieve revenue maximization. The experimental simulations and numerical results show that the proposed approach is both effective and efficient in solving ICPRM problems.
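A genetic algorithm for this kind of admission problem can be sketched as bit-string evolution over admit/reject decisions under a resource cap. The encoding, operators, and parameters below are illustrative, not the paper's implementation:

```python
import random

def ga_admission(revenues, demands, capacity, pop=30, gens=60, seed=1):
    """Toy GA: each bit admits (1) or rejects (0) a client; plans whose
    total demand exceeds capacity get zero fitness."""
    rng = random.Random(seed)
    n = len(revenues)

    def fitness(bits):
        used = sum(d for d, b in zip(demands, bits) if b)
        if used > capacity:
            return 0
        return sum(r for r, b in zip(revenues, bits) if b)

    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)         # one-point crossover
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                  # point mutation
                child[rng.randrange(n)] ^= 1
            children.append(child)
        population = survivors + children
    best = max(population, key=fitness)
    return best, fitness(best)

best, revenue = ga_admission(
    revenues=[10, 40, 30, 50], demands=[5, 4, 6, 3], capacity=10)
```

The real ICPRM formulation adds QoS levels per client rather than a binary admit bit, but the evolutionary loop has the same shape.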
Beeping a Maximal Independent Set
Afek, Yehuda; Bar-Joseph, Ziv; Cornejo, Alejandro; Haeupler, Bernhard; Kuhn, Fabian
2012-01-01
We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot, beeping nodes receive no feedback, while silent nodes can only differentiate between none of their neighbors beeping and at least one of their neighbors beeping. We start by proving a lower bound that shows that in this model, it is not possible to locally converge to an MIS in sub-polynomial time. We then study four different relaxations of the model which allow us to circumvent the lower bound and find an MIS in polylogarithmic time. First, we show that if a polynomial upper bound on the network size is known, it is possi...
Maximal switchability of centralized networks
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes (hubs) preferably connected to a large number N_s of weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
A Maximally Supersymmetric Kondo Model
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
Biffali Elio
2009-07-01
Abstract Background: Quantitative real-time polymerase chain reaction (RT-qPCR) is valuable for studying the molecular events underlying physiological and behavioral phenomena. Normalization of real-time PCR data is critical for reliable mRNA quantification. Here we identify reference genes to be utilized in RT-qPCR experiments to normalize and monitor the expression of target genes in the brain of the cephalopod mollusc Octopus vulgaris, an invertebrate. Such an approach is novel for this taxon and of advantage in future experiments given the complexity of the behavioral repertoire of this species when compared with its relatively simple neural organization. Results: We chose 16S and 18S rRNA, actB, EEF1A, tubA and ubi as candidate reference genes (housekeeping genes, HKG). The expression of 16S and 18S was highly variable and did not meet the requirements for candidate HKG. The expression of the other genes was almost stable and uniform among samples. We analyzed the expression of the HKG in two different sets of animals, using tissues taken from the central nervous system (brain parts) and mantle (here considered as control tissue), with BestKeeper, geNorm and NormFinder. We found that HKG expression differed considerably with respect to brain area and octopus sample in an HKG-specific manner. However, when the mantle is treated as control tissue and the entire central nervous system is considered, NormFinder revealed tubA and ubi as the most suitable HKG pair. These two genes were utilized to evaluate the relative expression of the genes FoxP, creb, dat and TH in O. vulgaris. Conclusion: We analyzed the expression profiles of some genes here identified for O. vulgaris by applying RT-qPCR analysis for the first time in cephalopods. We validated candidate reference genes and found the expression of ubi and tubA to be the most appropriate to evaluate the expression of target genes in the brain of different octopuses. Our results also underline the
Jensen, Pamela K; Wujcik, Chad E; McGuire, Michelle K; McGuire, Mark A
2016-01-01
Simple high-throughput procedures were developed for the direct analysis of glyphosate [N-(phosphonomethyl)glycine] and aminomethylphosphonic acid (AMPA) in human and bovine milk and human urine matrices. Samples were extracted with an acidified aqueous solution on a high-speed shaker. Stable isotope labeled internal standards were added with the extraction solvent to ensure accurate tracking and quantitation. An additional cleanup procedure using partitioning with methylene chloride was required for milk matrices to minimize the presence of matrix components that can impact the longevity of the analytical column. Both analytes were analyzed directly, without derivatization, by liquid chromatography tandem mass spectrometry using two separate precursor-to-product transitions that ensure and confirm the accuracy of the measured results. Method performance was evaluated during validation through a series of assessments that included linearity, accuracy, precision, selectivity, ionization effects and carryover. Limits of quantitation (LOQ) were determined to be 0.1 and 10 µg/L (ppb) for urine and milk, respectively, for both glyphosate and AMPA. Mean recoveries for all matrices were within 89-107% at three separate fortification levels including the LOQ. Precision for replicates was ≤ 7.4% relative standard deviation (RSD) for milk and ≤ 11.4% RSD for urine across all fortification levels. All human and bovine milk samples used for selectivity and ionization effects assessments were free of any detectable levels of glyphosate and AMPA. Some of the human urine samples contained trace levels of glyphosate and AMPA, which were background subtracted for accuracy assessments. Ionization effects testing showed no significant biases from the matrix. A successful independent external validation was conducted using the more complicated milk matrices to demonstrate method transferability.
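The recovery and precision figures quoted above follow from standard formulas: percent recovery of a fortified (spiked) sample and relative standard deviation across replicates. A small sketch with hypothetical replicate values (the numbers are illustrative, not the study's data):

```python
import statistics

def recovery_and_rsd(measured, fortified_level):
    """Mean percent recovery and relative standard deviation (RSD)
    for replicate measurements of a fortified sample."""
    recoveries = [100.0 * m / fortified_level for m in measured]
    mean_recovery = statistics.mean(recoveries)
    rsd = 100.0 * statistics.stdev(recoveries) / mean_recovery
    return mean_recovery, rsd

# hypothetical replicates of a 10 ug/L spike
mean_recovery, rsd = recovery_and_rsd([9.8, 10.1, 9.5, 10.3, 9.9], 10.0)
```

Acceptance criteria like those reported (recoveries within 89-107%, RSD at or below about 11%) are then checked per fortification level.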
Is quantitative electromyography reliable?
Cecere, F; Ruf, S; Pancherz, H
1996-01-01
The reliability of quantitative electromyography (EMG) of the masticatory muscles was investigated in 14 subjects without any signs or symptoms of temporomandibular disorders. Integrated EMG activity from the anterior temporalis and masseter muscles was recorded bilaterally by means of bipolar surface electrodes during chewing and biting activities. In the first experiment, the influence of electrode relocation was investigated. No influence of electrode relocation on the recorded EMG signal could be detected. In a second experiment, three sessions of EMG recordings during five different chewing and biting activities were performed: in the morning (I); 1 hour later without intermediate removal of the electrodes (II); and in the afternoon, using new electrodes (III). The method errors for the different time intervals (I-II and I-III) were calculated for each muscle and each function. Depending on the time interval between the EMG recordings, the muscles considered, and the function performed, the individual errors ranged from 5% to 63%. The method error increased significantly with the length of the time interval. The error for the masseter (mean 27.2%) was higher than for the temporalis (mean 20.0%). The largest function error was found during maximal biting in intercuspal position (mean 23.1%). Based on these findings, quantitative electromyography of the masticatory muscles seems to have limited value in diagnostics and in the evaluation of individual treatment results.
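Method error in duplicate-measurement studies of this kind is commonly computed with Dahlberg's formula, ME = sqrt(Σ d_i² / 2n), where d_i are the paired differences between the two sessions. The abstract does not state the exact formula used, so this sketch is an illustrative assumption, with made-up readings:

```python
import math

def method_error(first, second):
    """Dahlberg's method error for duplicate measurements:
    ME = sqrt(sum(d_i^2) / (2 * n)) with d_i = first_i - second_i."""
    diffs = [a - b for a, b in zip(first, second)]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

# hypothetical duplicate integrated-EMG readings (arbitrary units)
me = method_error([100.0, 120.0, 90.0], [104.0, 114.0, 92.0])
```

Dividing ME by the mean of all readings and multiplying by 100 gives a percentage error comparable to the 5-63% range reported.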
Eugster, P.; Guerraoui, R.; Kouznetsov, P.
2001-01-01
This paper presents a new, non-binary measure of the reliability of broadcast algorithms, called Delta-Reliability. This measure quantifies the reliability of practical broadcast algorithms that, on the one hand, were devised with some form of reliability in mind, but, on the other hand, are not considered reliable according to the "traditional" notion of broadcast reliability [HT94]. Our specification of Delta-Reliability suggests a further step towards bridging the gap between theory and...
Reliability computation from reliability block diagrams
Chelson, P. O.; Eckstein, E. Y.
1975-01-01
Computer program computes system reliability for very general class of reliability block diagrams. Four factors are considered in calculating probability of system success: active block redundancy, standby block redundancy, partial redundancy, and presence of equivalent blocks in the diagram.
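The first two of those factors reduce to two composition rules: series blocks multiply reliabilities, and actively redundant (parallel) blocks multiply failure probabilities. A minimal sketch (standby and partial redundancy need more state and are omitted):

```python
from functools import reduce

def series(blocks):
    """Series combination: the system works only if every block works,
    so block reliabilities multiply."""
    return reduce(lambda acc, r: acc * r, blocks, 1.0)

def parallel(blocks):
    """Active redundancy: the system fails only if every block fails,
    so failure probabilities multiply."""
    return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), blocks, 1.0)

# two redundant 0.9 blocks, in series with a single 0.95 block
r_system = series([parallel([0.9, 0.9]), 0.95])
```

Because the two rules compose, any series-parallel block diagram can be evaluated by nesting these calls to mirror the diagram's structure.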
Response and Reliability Problems of Dynamic Systems
Nielsen, Søren R. K.
The present thesis consists of selected parts of the work performed by the author on stochastic dynamics and reliability theory of dynamically excited structures, primarily during the period 1986-1996.
Reliability-Centric High-Level Synthesis
Tosun, S; Arvas, E; Kandemir, M; Xie, Yuan
2011-01-01
Importance of addressing soft errors in both safety critical applications and commercial consumer products is increasing, mainly due to ever shrinking geometries, higher-density circuits, and employment of power-saving techniques such as voltage scaling and component shut-down. As a result, it is becoming necessary to treat reliability as a first-class citizen in system design. In particular, reliability decisions taken early in system design can have significant benefits in terms of design quality. Motivated by this observation, this paper presents a reliability-centric high-level synthesis approach that addresses the soft error problem. The proposed approach tries to maximize reliability of the design while observing the bounds on area and performance, and makes use of our reliability characterization of hardware components such as adders and multipliers. We implemented the proposed approach, performed experiments with several designs, and compared the results with those obtained by a prior proposal.
Software reliability experiments data analysis and investigation
Walker, J. Leslie; Caglayan, Alper K.
1991-01-01
The objectives are to investigate the fundamental reasons which cause independently developed software programs to fail dependently, and to examine fault tolerant software structures which maximize reliability gain in the presence of such dependent failure behavior. The authors used 20 redundant programs from a software reliability experiment to analyze the software errors causing coincident failures, to compare the reliability of N-version and recovery block structures composed of these programs, and to examine the impact of diversity on software reliability using subpopulations of these programs. The results indicate that both conceptually related and unrelated errors can cause coincident failures and that recovery block structures offer more reliability gain than N-version structures if acceptance checks that fail independently from the software components are available. The authors present a theory of general program checkers that have potential application for acceptance tests.
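An N-version structure in this context runs independently developed versions and majority-votes their outputs; coincident failures are exactly the cases where the vote fails. A toy sketch (not the experiment's actual checker):

```python
from collections import Counter

def n_version_vote(outputs):
    """Majority vote over the outputs of N independently developed
    versions; returns None when no strict majority exists, i.e. when
    coincident failures or disagreement defeat the vote."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

ok = n_version_vote([42, 42, 7])    # one faulty version is outvoted
tie = n_version_vote([1, 2, 3])     # no majority: voting yields nothing
```

A recovery-block structure instead runs one version and falls back to alternates when an acceptance test rejects the result, which is why an independently failing acceptance check is the key assumption in the comparison above.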
Maximal inequalities for demimartingales and their applications
WANG XueJun; HU ShuHe
2009-01-01
In this paper, we establish some maximal inequalities for demimartingales which generalize and improve the results of Christofides. The maximal inequalities for demimartingales are used as key inequalities to establish other results, including Doob's type maximal inequality for demimartingales, strong laws of large numbers and growth rates for demimartingales and associated random variables. Finally, we give an equivalent condition of uniform integrability for demisubmartingales.
Task-oriented maximally entangled states
Agrawal, Pankaj; Pradhan, B, E-mail: agrawal@iopb.res.i, E-mail: bpradhan@iopb.res.i [Institute of Physics, Sachivalaya Marg, Bhubaneswar, Orissa 751 005 (India)
2010-06-11
We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.
Inflation in maximal gauged supergravities
Kodama, Hideo [Theory Center, KEK, Tsukuba 305-0801 (Japan); Department of Particles and Nuclear Physics, The Graduate University for Advanced Studies, Tsukuba 305-0801 (Japan)]; Nozawa, Masato [Dipartimento di Fisica, Università di Milano, and INFN, Sezione di Milano, Via Celoria 16, 20133 Milano (Italy)]
2015-05-18
We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36’ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of 6-dimensional scalar fields in this sector near the Dall’Agata-Inverso de Sitter critical point at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall’Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n_s = 0.9639 ± 0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-scalar ratio predicted by this model is around 10^{-3} and is close to the value in the Starobinsky model.
Regularized F-Measure Maximization for Feature Selection and Classification
Zhenqiu Liu
2009-01-01
benchmark, methylation, and high dimensional microarray data show that the performance of proposed algorithm is better or equivalent compared with the other popular classifiers in limited experiments.
Maximize Benefits, Minimize Risk: Selecting the Right HVAC Firm.
Golden, James T.
1993-01-01
An informal survey of 20 major urban school districts found that 40% were currently operating in a "break down" maintenance mode. A majority, 57.9%, also indicated they saw considerable benefits in contracting for heating, ventilating, and air conditioning (HVAC) maintenance services with outside firms. Offers guidelines in selecting…
Maximally selected chi-square statistics and umbrella orderings
Boulesteix, Anne-Laure; Strobl, Carolin
2006-01-01
Binary outcomes that depend on an ordinal predictor in a non-monotonic way are common in medical data analysis. Such patterns can be addressed in terms of cutpoints: for example, one looks for two cutpoints that define an interval in the range of the ordinal predictor for which the probability of a positive outcome is particularly high (or low). A chi-square test may then be performed to compare the proportions of positive outcomes in and outside this interval. However, if the two cutpoints a...
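The two-cutpoint search described above can be sketched as a brute-force scan over all intervals of the ordinal predictor. This is a minimal illustration only: the function names are mine, and it deliberately ignores the paper's central point, namely that a maximally selected statistic requires adjusted critical values rather than the usual chi-square quantiles.

```python
from itertools import combinations

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]] (no continuity correction)."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def max_selected_chi2(x, y):
    """Scan all cutpoint pairs (lo, hi) on ordinal predictor x and return the
    interval whose inside/outside split of the binary outcome y maximizes
    the chi-square statistic, together with that maximal statistic."""
    levels = sorted(set(x))
    best = (0.0, None)
    for lo, hi in combinations(levels, 2):
        inside = [yi for xi, yi in zip(x, y) if lo <= xi <= hi]
        outside = [yi for xi, yi in zip(x, y) if not (lo <= xi <= hi)]
        if not inside or not outside:
            continue
        a, b = sum(inside), len(inside) - sum(inside)
        c, d = sum(outside), len(outside) - sum(outside)
        stat = chi_square_2x2(a, b, c, d)
        if stat > best[0]:
            best = (stat, (lo, hi))
    return best
```

On 30 observations with positive outcomes concentrated exactly in levels 4 through 6, the scan recovers the interval (4, 6).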
Network channel allocation and revenue maximization
Hamalainen, Timo; Joutsensalo, Jyrki
2002-09-01
This paper introduces a model that can be used to share link capacity among customers under different kinds of traffic conditions. This model is suitable for different kinds of networks, such as 4G networks (fast wireless access to a wired network), to support connections of given duration that require a certain quality of service. We study different types of network traffic mixed on the same communication link. A single link is considered as a bottleneck, and the goal is to find customer traffic profiles that maximize the revenue of the link. The presented allocation system accepts every call and there is no absolute blocking, but the offered data rate per user depends on the network load. The data arrival rate depends on the current link utilization, the user's payment (selected CoS class) and the delay. The arrival rate is (i) increasing with respect to the offered data rate, (ii) decreasing with respect to the price, (iii) decreasing with respect to the network load, and (iv) decreasing with respect to the delay. As an example, an explicit formula obeying these conditions is given and analyzed.
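Conditions (i)-(iv) constrain only the monotonicity of the arrival rate. One toy formula satisfying all four (my own illustrative choice, not the explicit formula analyzed in the paper) is:

```python
def arrival_rate(rate, price, load, delay, c=1.0):
    """A simple arrival-rate model: increasing in the offered data rate,
    decreasing in price, network load and delay.
    Illustrative only; the paper derives and analyzes its own formula."""
    return c * rate / ((1.0 + price) * (1.0 + load) * (1.0 + delay))
```

Each factor in the denominator implements one of the "decreasing" conditions, while the numerator implements the "increasing" one.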
Computing Maximally Supersymmetric Scattering Amplitudes
Stankowicz, James Michael, Jr.
This dissertation reviews work in computing N = 4 super-Yang--Mills (sYM) and N = 8 maximally supersymmetric gravity (mSUGRA) scattering amplitudes in D = 4 spacetime dimensions in novel ways. After a brief introduction and overview in Ch. 1, the various techniques used to construct amplitudes in the remainder of the dissertation are discussed in Ch. 2. This includes several new concepts such as d log and pure integrand bases, as well as how to construct the amplitude using exactly one kinematic point where it vanishes. Also included in this chapter is an outline of the Mathematica package on shell diagrams and numerics.m (osdn) that was developed for the computations herein. The rest of the dissertation is devoted to explicit examples. In Ch. 3, the starting point is tree-level sYM amplitudes that have integral representations with residues that obey amplitude relations. These residues are shown to have corresponding residue numerators that allow a double copy prescription that results in mSUGRA residues. In Ch. 4, the two-loop four-point sYM amplitude is constructed in several ways, showcasing many of the techniques of Ch. 2; this includes an example of how to use osdn. The two-loop five-point amplitude is also presented in a pure integrand representation with comments on how it was constructed from one homogeneous cut of the amplitude. On-going work on the two-loop n-point amplitude is presented at the end of Ch. 4. In Ch. 5, the three-loop four-point amplitude is presented in the d log representation and in the pure integrand representation. In Ch. 6, there are several examples of four- through seven-loop planar diagrams that illustrate how considerations of the singularity structure of the amplitude underpin dual-conformal invariance. Taken with the previous examples, this is additional evidence that the structure known to exist in the planar sector extends to the full theory. At the end of this chapter is a proof that all mSUGRA amplitudes have a pole at
Verweij, Jan F.
1993-01-01
Several issues regarding VLSI reliability research in Europe are discussed. Organizations involved in stimulating the activities on reliability by exchanging information or supporting research programs are described. Within one such program, ESPRIT, a technical interest group on IC reliability was
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five maximum phonation time trials per subject. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
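The reported gain from averaging repeated trials (0.939 for one trial, 0.987 for five) is consistent with the classical Spearman-Brown prophecy formula. A quick check — my own illustration of that consistency, not a method stated in the paper:

```python
def spearman_brown(r_single, k):
    """Predicted reliability of the mean of k parallel measurements,
    given the reliability r_single of a single measurement."""
    return k * r_single / (1 + (k - 1) * r_single)

# Single-trial reliability 0.939 predicts about 0.987 for five trials,
# matching the value reported in the study.
prediction = spearman_brown(0.939, 5)
```
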
Are all maximally entangled states pure?
Cavalcanti, D; Terra-Cunha, M O
2005-01-01
In this Letter we study whether all maximally entangled states are pure through several entanglement monotones. Our conclusions allow us to generalize the idea of monogamy of entanglement. Then we propose a polygamy of entanglement, which expresses that if a general multipartite state is maximally entangled it is necessarily factorized by any other system.
Sampling and Representation Complexity of Revenue Maximization
Dughmi, Shaddin; Han, Li; Nisan, Noam
2014-01-01
We consider (approximate) revenue maximization in auctions where the distribution on input valuations is given via "black box" access to samples from the distribution. We observe that the number of samples required -- the sample complexity -- is tightly related to the representation complexity of an approximately revenue-maximizing auction. Our main results are upper bounds and an exponential lower bound on these complexities.
Lisonek, Petr
1996-01-01
our classifications confirm the maximality of previously known sets, the results in E^7 and E^8 are new. Their counterpart in dimension larger than 10 is a set of unit vectors with only two values of inner products in the Lorentz space R^{d,1}. The maximality of this set again follows from a bound due...
An ethical justification of profit maximization
Koch, Carsten Allan
2010-01-01
In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain form of profit (and…
Alternative trailer configurations for maximizing payloads
Jason D. Thompson; Dana Mitchell; John Klepac
2017-01-01
In order for harvesting contractors to stay ahead of increasing costs, it is imperative that they employ all options to maximize productivity and efficiency. Transportation can account for half the cost to deliver wood to a mill. Contractors seek to maximize truck payload to increase productivity. The Forest Operations Research Unit, Southern Research Station, USDA...
Cohomology of Weakly Reducible Maximal Triangular Algebras
董浙; 鲁世杰
2000-01-01
In this paper, we introduce the concept of weakly reducible maximal triangular algebras φ, which form a large class of maximal triangular algebras. Let B be a weakly closed algebra containing φ; we prove that the cohomology spaces H^n(φ, B) (n ≥ 1) are trivial.
Maximal Hypersurfaces in Spacetimes with Translational Symmetry
Bulawa, Andrew
2016-01-01
We consider four-dimensional vacuum spacetimes which admit a free isometric spacelike R-action. Taking a quotient with respect to the R-action produces a three-dimensional quotient spacetime. We establish several results regarding maximal hypersurfaces (spacelike hypersurfaces of zero mean curvature) in quotient spacetimes. First, we show that complete noncompact maximal hypersurfaces must either be flat cylinders S^1 x R or conformal to the Euclidean plane. Second, we establish a positive mass theorem for certain maximal hypersurfaces. Finally, while it is meaningful to use a bounded lapse when adopting the maximal hypersurface gauge condition in the four-dimensional (asymptotically flat) setting, it is shown here that nontrivial quotient spacetimes admit the maximal hypersurface gauge only with an unbounded lapse.
Wind turbine reliability database update.
Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.
2009-03-01
This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.
Energy, complexity and wealth maximization
Ayres, Robert
2016-01-01
This book is about the mechanisms of wealth creation, or what we like to think of as evolutionary “progress”. For the modern economy, natural wealth consists of complex physical structures of condensed (“frozen”) energy – mass - maintained in the earth’s crust far from thermodynamic equilibrium. However, we usually perceive wealth as created when mutation or “invention” – a change agent - introduces something different, and fitter, and usually after some part of the natural wealth of the planet has been exploited in an episode of “creative destruction”. Selection out of the resulting diversity is determined by survival in a competitive environment, whether a planet, a habitat, or a market. While human wealth is associated with money and what it can buy, it is ultimately based on natural wealth, both as materials transformed into useful artifacts, and how those artifacts, activated by energy, can create and transmit useful information. Humans have learned how to transform natural wealth i...
Measuring reliability under epistemic uncertainty:Review on non-probabilistic reliability metrics
Kang Rui; Zhang Qingyuan; Zeng Zhiguo; Enrico Zio; Li Xiaoyang
2016-01-01
In this paper, a systematic review of non-probabilistic reliability metrics is conducted to assist the selection of appropriate reliability metrics to model the influence of epistemic uncertainty. Five frequently used non-probabilistic reliability metrics are critically reviewed, i.e., evidence-theory-based reliability metrics, interval-analysis-based reliability metrics, fuzzy-interval-analysis-based reliability metrics, possibility-theory-based reliability metrics (posbist reliability) and uncertainty-theory-based reliability metrics (belief reliability). It is pointed out that a qualified reliability metric that is able to consider the effect of epistemic uncertainty needs to (1) compensate the conservatism in the estimations of the component-level reliability metrics caused by epistemic uncertainty, and (2) satisfy the duality axiom, otherwise it might lead to paradoxical and confusing results in engineering applications. The five commonly used non-probabilistic reliability metrics are compared in terms of these two properties, and the comparison can serve as a basis for the selection of the appropriate reliability metrics.
Reliability Generalization: "Lapsus Linguae"
Smith, Julie M.
2011-01-01
This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…
Are all maximally entangled states pure?
Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.
2005-10-01
We study if all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized of any other system.
Robust utility maximization in a discontinuous filtration
Jeanblanc, Monique; Ngoupeyou, Armand
2012-01-01
We study a problem of utility maximization under model uncertainty with information including jumps. We prove first that the value process of the robust stochastic control problem is described by the solution of a quadratic-exponential backward stochastic differential equation with jumps. Then, we establish a dynamic maximum principle for the optimal control of the maximization problem. The characterization of the optimal model and the optimal control (consumption-investment) is given via a forward-backward system which generalizes the result of Duffie and Skiadas (1994) and El Karoui, Peng and Quenez (2001) in the case of maximization of recursive utilities including model with jumps.
When and why are reliable organizations favored?
Ethiraj, Sendil; Yi, Sangyoon
In the 1980s, organization theory witnessed a decade long debate about the incentives and consequences of organizational change. Though the fountainhead of this debate was the observation that reliable organizations are the “consequence” rather than the “cause” of selection forces, much… in this assertion. Principally, we show that whether reliable organizations are favored depends on the nature of the environment. When environments are complex, reliability is selected out. In more complex environments, variability is more valued by selection forces. Further, we also examine the consequences…
Worldwide Express: Exploiting Existing Contract Provisions to Maximize Savings
2012-06-01
reliable for freight shipping” (Colbert, 2005). In response, the DoD and GSA launched Worldwide Express in 1998. “Worldwide Express offers… Roger K. and F. Ronald Frola. (1996). The Civil Air Reserve Fleet: Trends and Selected Issues. McClean: Logistics Management Institute. Colbert
HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL
HR Division
2000-01-01
Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000. Reimbursement maximal: the revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN. Fixed contributions: the fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999); voluntarily insured no longer dependent child: 326,- (was 321…
Maximizing throughput by evaluating critical utilization paths
Weeda, P.J.
1991-01-01
Recently the relationship between batch structure, bottleneck machine and maximum throughput has been explored for serial, convergent and divergent process configurations consisting of two machines and three processes. In three of the seven possible configurations, a multiple batch structure maximizes throughput.
Relationship between maximal exercise parameters and individual ...
Relationship between maximal exercise parameters and individual time trial ... It is widely accepted that the ventilatory threshold (VT) is an important ... This study investigated whether the physiological responses during a 20km time trial (TT) ...
Jay R. Hoffman
2007-03-01
Maximal strength and power testing are common assessments that are used to evaluate strength/power athletes. The validity and reliability of these tests have been well established (Hoffman, 2006), however the order of testing may have a profound effect on test performance outcome. It is generally recommended that the least fatiguing and highly-skilled tests are performed first, while highly fatiguing tests are performed last (Hoffman, 2006). Recent research has demonstrated that maximal isometric contractions and maximal or near-maximal dynamic exercise can augment the rate of force development, increase jump height and enhance sprint cycle performance (Chiu et al., 2003; French et al., 2003). The use of a maximal or near-maximal activity to enhance strength and power performance has been termed "muscle postactivation potentiation", and appears to be more common in experienced resistance-trained athletes than in the recreationally-trained population (Chiu et al., 2003). It is believed that postactivation potentiation can enhance muscle performance by increasing the neural signal that activates the muscle (Hamada et al., 2000). Since heavy loading in a similar movement pattern of exercise appears to enhance maximal strength and power performance in the experienced resistance-trained athlete, it may be hypothesized that the postactivation potentiation associated with heavy loading has the potential to augment subsequent performance of tests utilizing similar motion. Therefore, consideration of an appropriate sequence of athletic performance testing in strength and power athletes is warranted. We would like to share our experience on the effect of performing a maximal lower body strength test on vertical jump performance in experienced resistance-trained strength/power athletes. We examined 64 NCAA Division III American collegiate football players (age = 20.1 ± 1.9 yr; body mass = 97.5 ± 17.8 kg; height = 1.80 ± 0.12 m). All testing was performed
Simple technique for maximal thoracic muscle harvest.
Marshall, M Blair; Kaiser, Larry R; Kucharczuk, John C
2004-04-01
We present a modification of technique for standard muscle flap harvest, the placement of cutaneous traction sutures. This technique allows for maximal dissection of the thoracic muscles even through minimal incisions. Through improved exposure and traction, complete dissection of the muscle bed can be performed and the tissue obtained maximized. Because more muscle bulk is obtained with this technique, the need for a second muscle may be prevented.
MAXIMAL POINTS OF A REGULAR TRUTH FUNCTION
Every canonical linearly separable truth function is a regular function, but not every regular truth function is linearly separable. The most promising method of determining which of the regular truth functions are linearly separable requires finding their maximal and minimal points. This report develops a quick, systematic method of finding the maximal points of any regular truth function in terms of its arithmetic invariants. (Author)
Maximal Subgroups of Skew Linear Groups
M. Mahdavi-Hezavehi
2002-01-01
Let D be an infinite division algebra of finite dimension over its centre Z(D) = F, and n a positive integer. The structure of maximal subgroups of skew linear groups is investigated. In particular, assume N is a normal subgroup of GLn(D) and M is a maximal subgroup of N containing Z(N). It is shown that if M/Z(N) is finite, then N is central.
Additive Approximation Algorithms for Modularity Maximization
Kawase, Yasushi; Matsui, Tomomi; Miyauchi, Atsushi
2016-01-01
The modularity is a quality function in community detection, which was introduced by Newman and Girvan (2004). Community detection in graphs is now often conducted through modularity maximization: given an undirected graph $G=(V,E)$, we are asked to find a partition $\\mathcal{C}$ of $V$ that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity max...
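The Newman-Girvan modularity being maximized can be computed directly from its definition, Q = Σ_c (e_c/m − (d_c/(2m))²), where m is the number of edges, e_c the intra-community edge count and d_c the total degree of community c. A self-contained sketch (function and variable names are mine):

```python
def modularity(edges, communities):
    """Newman-Girvan modularity Q = sum_c (e_c/m - (d_c/(2m))^2) of a
    partition of an undirected graph given as an edge list."""
    m = len(edges)
    label = {v: i for i, comm in enumerate(communities) for v in comm}
    intra = [0] * len(communities)   # e_c: edges inside community c
    degree = [0] * len(communities)  # d_c: total degree of community c
    for u, v in edges:
        degree[label[u]] += 1
        degree[label[v]] += 1
        if label[u] == label[v]:
            intra[label[u]] += 1
    return sum(e / m - (d / (2 * m)) ** 2 for e, d in zip(intra, degree))

# Two triangles joined by a single bridge edge: the natural 2-community split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, [{0, 1, 2}, {3, 4, 5}])  # ≈ 0.357
```

Modularity maximization searches over all partitions for the one with the largest Q; the trivial one-community partition always scores Q = 0, which is the baseline the algorithms in the paper must beat with a guarantee.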
Maximal Frequent Itemset Generation Using Segmentation Apporach
M.Rajalakshmi
2011-09-01
Finding frequent itemsets in a data source is a fundamental operation behind Association Rule Mining. Generally, many algorithms use either the bottom-up or top-down approaches for finding these frequent itemsets. When the length of frequent itemsets to be found is large, the traditional algorithms find all the frequent itemsets from 1-length to n-length, which is a difficult process. This problem can be solved by mining only the Maximal Frequent Itemsets (MFS). Maximal Frequent Itemsets are frequent itemsets which have no proper frequent superset. Thus, the generation of only maximal frequent itemsets reduces the number of itemsets and also the time needed for the generation of all frequent itemsets, as each maximal itemset of length m implies the presence of 2^m − 2 frequent itemsets. Furthermore, mining only maximal frequent itemsets is sufficient in many data mining applications like minimal key discovery and theory extraction. In this paper, we suggest a novel method for finding the maximal frequent itemset from huge data sources using the concept of segmentation of data source and prioritization of segments. Empirical evaluation shows that this method outperforms various other known methods.
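The MFS definition above can be made concrete with a brute-force sketch: enumerate frequent itemsets, then keep those with no frequent proper superset. This only illustrates the definition and the 2^m − 2 subset count; it is not the paper's segmentation-and-prioritization algorithm, and the names are mine.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """All itemsets appearing in at least min_support transactions (brute force)."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for size in range(1, len(items) + 1):
        for cand in combinations(items, size):
            s = set(cand)
            if sum(s <= t for t in transactions) >= min_support:
                frequent.append(frozenset(s))
    return frequent

def maximal_frequent_itemsets(transactions, min_support):
    """Frequent itemsets with no frequent proper superset (the MFS)."""
    freq = frequent_itemsets(transactions, min_support)
    return [f for f in freq if not any(f < g for g in freq)]

db = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}]
mfs = maximal_frequent_itemsets(db, 2)  # the three pairs {a,b}, {a,c}, {b,c}
```

Here each maximal pair of length m = 2 implies its 2^2 − 2 = 2 proper nonempty subsets are frequent too, so reporting only the MFS loses no frequency information about which itemsets are frequent.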
Chen, Qing; Zhang, Jinxiu; Hu, Ze
2017-01-01
This article investigates the dynamic topology control problem of satellite cluster networks (SCNs) in Earth observation (EO) missions by applying a novel metric of stability for inter-satellite links (ISLs). The properties of the periodicity and predictability of satellites’ relative position are involved in the link cost metric which is to give a selection criterion for choosing the most reliable data routing paths. Also, a cooperative work model with reliability is proposed for the situation of emergency EO missions. Based on the link cost metric and the proposed reliability model, a reliability assurance topology control algorithm and its corresponding dynamic topology control (RAT) strategy are established to maximize the stability of data transmission in the SCNs. The SCNs scenario is tested through some numeric simulations of the topology stability of average topology lifetime and average packet loss rate. Simulation results show that the proposed reliable strategy applied in SCNs significantly improves the data transmission performance and prolongs the average topology lifetime. PMID:28241474
Assuring reliability program effectiveness.
Ball, L. W.
1973-01-01
An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.
The Accelerator Reliability Forum
Lüdeke, Andreas; Giachino, R
2014-01-01
A high reliability is a very important goal for most particle accelerators. The biennial Accelerator Reliability Workshop covers topics related to the design and operation of particle accelerators with a high reliability. In order to optimize the overall reliability of an accelerator one needs to gather information on the reliability of many different subsystems. While a biennial workshop can serve as a platform for the exchange of such information, the authors aimed to provide a further channel to allow for a more timely communication: the Particle Accelerator Reliability Forum [1]. This contribution will describe the forum and advertise its usage in the community.
Making literature reviews more reliable through application of lessons from systematic reviews.
Haddaway, N R; Woodcock, P; Macura, B; Collins, A
2015-12-01
Review articles can provide valuable summaries of the ever-increasing volume of primary research in conservation biology. Where findings may influence important resource-allocation decisions in policy or practice, there is a need for a high degree of reliability when reviewing evidence. However, traditional literature reviews are susceptible to a number of biases during the identification, selection, and synthesis of included studies (e.g., publication bias, selection bias, and vote counting). Systematic reviews, pioneered in medicine and translated into conservation in 2006, address these issues through a strict methodology that aims to maximize transparency, objectivity, and repeatability. Systematic reviews will always be the gold standard for reliable synthesis of evidence. However, traditional literature reviews remain popular and will continue to be valuable where systematic reviews are not feasible. Where traditional reviews are used, lessons can be taken from systematic reviews and applied to traditional reviews in order to increase their reliability. Certain key aspects of systematic review methods that can be used in a context-specific manner in traditional reviews include focusing on mitigating bias; increasing transparency, consistency, and objectivity, and critically appraising the evidence and avoiding vote counting. In situations where conducting a full systematic review is not feasible, the proposed approach to reviewing evidence in a more systematic way can substantially improve the reliability of review findings, providing a time- and resource-efficient means of maximizing the value of traditional reviews. These methods are aimed particularly at those conducting literature reviews where systematic review is not feasible, for example, for graduate students, single reviewers, or small organizations. © 2015 Society for Conservation Biology.
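The record above selects forwarding points through a connected dominating set. The DMPID algorithm itself is not specified here, but the plain (unweighted) greedy CDS heuristic it builds on can be sketched as follows; the function name and graphs are illustrative only, and a connected input graph is assumed:

```python
from collections import defaultdict

def greedy_cds(edges):
    """Greedy connected dominating set heuristic: grow the set from the
    highest-degree node, always adding the neighbor of the current set
    that dominates the most not-yet-dominated vertices.
    Assumes the graph given by `edges` is connected."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    start = max(nodes, key=lambda n: len(adj[n]))
    cds = {start}
    dominated = {start} | adj[start]
    while dominated != nodes:
        # candidates are neighbors of the current CDS, keeping it connected
        frontier = {n for c in cds for n in adj[c]} - cds
        best = max(frontier, key=lambda n: len((adj[n] | {n}) - dominated))
        cds.add(best)
        dominated |= adj[best] | {best}
    return cds
```

A weighted variant in the spirit of DMPID would score frontier candidates by link weight as well as by the number of newly dominated vertices.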
Enlightenment on Computer Network Reliability From Transportation Network Reliability
Hu Wenjun; Zhou Xizhao
2011-01-01
Referring to the transportation network reliability problem, five new computer network reliability definitions are proposed and discussed: computer network connectivity reliability, computer network time reliability, computer network capacity reliability, computer network behavior reliability, and computer network potential reliability. Finally, strategies are suggested to enhance network reliability.
Component reliability for electronic systems
Bajenescu, Titu-Marius I
2010-01-01
The main reason for the premature breakdown of today's electronic products (computers, cars, tools, appliances, etc.) is the failure of the components used to build these products. Today professionals are looking for effective ways to minimize the degradation of electronic components to help ensure longer-lasting, more technically sound products and systems. This practical book offers engineers specific guidance on how to design more reliable components and build more reliable electronic systems. Professionals learn how to optimize a virtual component prototype, accurately monitor product reliability during the entire production process, and add the burn-in and selection procedures that are the most appropriate for the intended applications. Moreover, the book helps system designers ensure that all components are correctly applied, margins are adequate, wear-out failure modes are prevented during the expected duration of life, and system interfaces cannot lead to failure.
Human Reliability Program Overview
Bodin, Michael
2012-09-25
This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.
Power electronics reliability analysis.
Smith, Mark A.; Atcitty, Stanley
2009-12-01
This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
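As a toy illustration of the fault-tree approach the report describes, deriving system reliability from component reliability via series and parallel (redundant) structures; the component values below are hypothetical, not from the report:

```python
from math import prod

def series(reliabilities):
    """Series structure: the system works only if every component works."""
    return prod(reliabilities)

def parallel(reliabilities):
    """Parallel (redundant) structure: the system fails only if all components fail."""
    return 1.0 - prod(1.0 - r for r in reliabilities)

# hypothetical converter: two redundant power stages in parallel,
# in series with a single controller
system_r = series([parallel([0.95, 0.95]), 0.99])
```

Composing these two primitives bottom-up over the fault tree yields the system-level reliability from the component-level figures.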
Welfare-maximizing and revenue-maximizing tariffs with a few domestic firms
Bruno Larue; Jean-Philippe Gervais
2002-01-01
In this paper we compare the orthodox optimal tariff formula with the appropriate welfare-maximizing tariff when there are a few producing or importing firms. The welfare-maximizing tariff can be very low, or even negative in some cases, while in others it can even exceed the maximum-revenue tariff. The relationship between the welfare-maximizing tariff and the number of firms need not be monotonically increasing, because the tariff is not strictly used to internalize the terms-of-trade externality...
Berg, Melanie; LaBel, Kenneth A.
2016-01-01
This presentation focuses on reliability and trust for the user's portion of the FPGA design flow. It is assumed that the manufacturer tests FPGA internal components prior to hand-off to the user. The objective is to present the challenges of creating reliable and trusted designs. The following will be addressed: What makes a design vulnerable to functional flaws (reliability) or attackers (trust)? What are the challenges for verifying a reliable design versus a trusted design?
Maximizing Complementary Quantities by Projective Measurements
M. Souza, Leonardo A.; Bernardes, Nadja K.; Rossi, Romeu
2017-04-01
In this work, we study the so-called quantitative complementarity quantities. We focus on the following physical situation: two qubits (q_A and q_B) are initially in a maximally entangled state. One of them (q_B) interacts with an N-qubit system (R). After the interaction, projective measurements are performed on each of the qubits of R, in a basis that is chosen after independent optimization procedures: maximization of the visibility, the concurrence, and the predictability. For a specific maximization procedure, we study in detail how each of the complementary quantities behaves, conditioned on the intensity of the coupling between q_B and the N qubits. We show that, if the coupling is sufficiently "strong," independent of the maximization procedure, the concurrence tends to decay quickly. Interestingly enough, the behavior of the concurrence in this model is similar to the entanglement dynamics of a two-qubit system subjected to a thermal reservoir, even though we consider finite N. However, the visibility shows a different behavior: its maximization is more efficient for stronger coupling constants. Moreover, we investigate how the distinguishability, or the information stored in different parts of the system, is distributed for different couplings.
Reliability of Search and Rescue Action
Burciu, Zbigniew
2012-06-01
Determination of the reliability of the search and rescue (SAR) action system allows the SAR Mission Coordinator to increase the effectiveness of the action through the proper selection of operational characteristics of the system elements, in particular the selection of the rescue units and auxiliary units. The paper presents an example of the influence of water temperature and action time on the reliability of a search and rescue action in the case of rescuing a survivor in the water.
Viking Lander reliability program
Pilny, M. J.
1978-01-01
The Viking Lander reliability program is reviewed with attention given to the development of the reliability program requirements, reliability program management, documents evaluation, failure modes evaluation, production variation control, failure reporting and correction, and the parts program. Lander hardware failures which have occurred during the mission are listed.
Reliability of electrical power systems for coal mines
Razgil' deev, G.I.; Kovalev, A.P.; Serkyuk, L.I.
1982-01-01
This is a method for evaluating the reliability of comprehensive mining power systems. The system's reliability is influenced by the selectivity of maximum protection, ground-short protection, the subdivision of the circuitry, the character of system failures, etc.
The effects of strenuous exercises on resting heart rate, blood pressure, and maximal oxygen uptake.
Oh, Deuk-Ja; Hong, Hyeon-Ok; Lee, Bo-Ae
2016-02-01
The purpose of this study is to investigate the effects of strenuous exercise on resting heart rate, blood pressure, and maximal oxygen uptake. To achieve the purpose of the study, a total of 30 subjects were selected: 15 people who performed continued regular exercise and 15 people as a control group. With regard to data processing, IBM SPSS Statistics ver. 21.0 was used to calculate means and standard deviations. The difference in mean change between groups was verified through an independent t-test. As a result, there were significant differences in resting heart rate, maximal heart rate, maximal systolic blood pressure, and maximal oxygen uptake. However, the maximal systolic blood pressure response was found to reflect exercise-induced high blood pressure. Thus, risk screening through a regular exercise stress test is thought to be necessary.
Polyploidy Induction of Pteroceltis tatarinowii Maxim
Lin ZHANG; Feng WANG; Zhongkui SUN; Cuicui ZHU; Rongwei CHEN
2015-01-01
[Objective] This study was conducted to obtain tetraploid Pteroceltis tatarinowii Maxim. with excellent ornamental traits. [Method] The stem apex growing points of Pteroceltis tatarinowii Maxim. were treated with different concentrations of colchicine solution for different durations to establish a proper method and obtain polyploids. [Result] The most effective induction was obtained by treatment with 0.6%-0.8% colchicine for 72 h, with a 34.2% mutation rate. Flow cytometry and chromosome observation of the stem apex growing point of P. tatarinowii Maxim. proved that the tetraploid plants were successfully obtained, with chromosome number 2n=4x=36. [Conclusion] The result not only fills the blank of polyploid breeding of P. tatarinowii, but also provides an effective way to broaden the methods for cultivating fast-growing, high-quality, disease-resilient new varieties of Pteroceltis.
Quantum theory allows for absolute maximal contextuality
Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán
2015-12-01
Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.
The maximal process of nonlinear shot noise
Eliazar, Iddo; Klafter, Joseph
2009-05-01
In the nonlinear shot noise system model, shots' statistics are governed by general Poisson processes, and shots' decay-dynamics are governed by general nonlinear differential equations. In this research we consider a nonlinear shot noise system and explore the process tracking, along time, the system's maximal shot magnitude. This 'maximal process' is a stationary Markov process following a decay-surge evolution; it is highly robust, and it is capable of displaying both a wide spectrum of statistical behaviors and a rich variety of random decay-surge sample-path trajectories. A comprehensive analysis of the maximal process is conducted, including its Markovian structure, its decay-surge structure, and its correlation structure. All results are obtained analytically and in closed-form.
Energy Band Calculations for Maximally Even Superlattices
Krantz, Richard; Byrd, Jason
2007-03-01
Superlattices are multiple-well, semiconductor heterostructures that can be described by one-dimensional potential wells separated by potential barriers. We refer to a distribution of wells and barriers based on the theory of maximally even sets as a maximally even superlattice. The prototypical example of a maximally even set is the distribution of white and black keys on a piano keyboard. Black keys may represent wells and the white keys represent barriers. As the number of wells and barriers increase, efficient and stable methods of calculation are necessary to study these structures. We have implemented a finite-element method using the discrete variable representation (FE-DVR) to calculate E versus k for these superlattices. Use of the FE-DVR method greatly reduces the amount of calculation necessary for the eigenvalue problem.
Reliability and Availability of Cloud Computing
Bauer, Eric
2012-01-01
A holistic approach to service reliability and availability of cloud computing. Reliability and Availability of Cloud Computing provides IS/IT system and solution architects, developers, and engineers with the knowledge needed to assess the impact of virtualization and cloud computing on service reliability and availability. It reveals how to select the most appropriate design for reliability diligence to assure that user expectations are met. Organized in three parts (basics, risk analysis, and recommendations), this resource is accessible to readers of diverse backgrounds and experience le
Absence of parasympathetic reactivation after maximal exercise.
de Oliveira, Tiago Peçanha; de Alvarenga Mattos, Raphael; da Silva, Rhenan Bartels Ferreira; Rezende, Rafael Andrade; de Lima, Jorge Roberto Perrout
2013-03-01
The ability of the human organism to recover its autonomic balance soon after physical exercise cessation has an important impact on the individual's health status. Although the dynamics of heart rate recovery after maximal exercise has been studied, little is known about heart rate variability after this type of exercise. The aim of this study is to analyse the dynamics of heart rate and heart rate variability recovery after maximal exercise in healthy young men. Fifteen healthy male subjects (21·7 ± 3·4 years; 24·0 ± 2·1 kg m(-2) ) participated in the study. The experimental protocol consisted of an incremental maximal exercise test on a cycle ergometer, until maximal voluntary exhaustion. After the test, recovery R-R intervals were recorded for 5 min. From the absolute differences between peak heart rate values and the heart rate values at 1 and 5 min of the recovery, the heart rate recovery was calculated. Postexercise heart rate variability was analysed from calculations of the SDNN and RMSSD indexes, in 30-s windows (SDNN(30s) and RMSSD(30s) ) throughout recovery. One and 5 min after maximal exercise cessation, the heart rate recovered 34·7 (±6·6) and 75·5 (±6·1) bpm, respectively. With regard to HRV recovery, while the SDNN(30s) index had a slight increase, RMSSD(30s) index remained totally suppressed throughout the recovery, suggesting an absence of vagal modulation reactivation and, possibly, a discrete sympathetic withdrawal. Therefore, it is possible that the main mechanism associated with the fall of HR after maximal exercise is sympathetic withdrawal or a vagal tone restoration without vagal modulation recovery. © 2012 The Authors Clinical Physiology and Functional Imaging © 2012 Scandinavian Society of Clinical Physiology and Nuclear Medicine.
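The SDNN and RMSSD indexes used above are simple functions of the recorded R-R intervals; a minimal sketch (function names and the 30-s windowing convention are ours, not the paper's code) is:

```python
import math

def sdnn(rr_ms):
    """SDNN: standard deviation of all R-R (NN) intervals, in ms."""
    m = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - m) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """RMSSD: root mean square of successive R-R differences, in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def windowed(rr_ms, window_s=30):
    """Split a beat-to-beat R-R series (ms) into consecutive windows of
    roughly `window_s` seconds of elapsed time, as for the 30-s
    SDNN/RMSSD indexes described in the abstract."""
    out, cur, t = [], [], 0.0
    for rr in rr_ms:
        cur.append(rr)
        t += rr / 1000.0
        if t >= window_s:
            out.append(cur)
            cur, t = [], 0.0
    return out
```

Applying `sdnn` and `rmssd` to each window from `windowed` gives the SDNN(30s) and RMSSD(30s) trajectories over the recovery period.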
Maximizing band gaps in plate structures
Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard
2006-01-01
Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated...
Maximal and Minimal Congruences on Some Semigroups
Jintana SANWONG; Boorapa SINGHA; R.P.SULLIVAN
2009-01-01
In 2006, Sanwong and Sullivan described the maximal congruences on the semigroup N consisting of all non-negative integers under standard multiplication, and on the semigroup T(X) consisting of all total transformations of an infinite set X under composition. Here, we determine all maximal congruences on the semigroup Z_n under multiplication modulo n. And, when Y ⊆ X, we do the same for the semigroup T(X,Y) consisting of all elements of T(X) whose range is contained in Y. We also characterise the minimal congruences on T(X,Y).
Maximizing oil yields may not optimize economics
1987-03-01
The Los Alamos National Laboratory has used the ASPEN computer code to calculate the economics of different hydroretorting conditions. When the oil yield was maximized and an oil shale plant was designed around this process, the costs turned out to be much higher than expected. However, calculations based on runs at less than maximum yield produced lower cost estimates. It is recommended that future efforts concentrate on minimizing production costs rather than maximizing yields. An oil shale plant has been designed around minimum production cost, but it has not yet been tested experimentally.
Maximal Inequalities for Dependent Random Variables
Hoffmann-Jorgensen, Jorgen
2016-01-01
Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a...
Singularity Structure of Maximally Supersymmetric Scattering Amplitudes
Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy
2014-01-01
We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity—properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).
Expectation Maximization for Hard X-ray Count Modulation Profiles
Benvenuto, Federico; Piana, Michele; Massone, Anna Maria
2013-01-01
This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized for the analysis of count modulation profiles in solar hard X-ray imaging based on Rotating Modulation Collimators. The algorithm described in this paper solves the maximum likelihood problem iteratively, encoding a positivity constraint into the iterative optimization scheme. The result is therefore a classical Expectation Maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, ...
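The iterative maximum-likelihood scheme with a built-in positivity constraint that the abstract describes is the classical MLEM (Richardson-Lucy) update for Poisson count data. A toy sketch on small dense matrices (not the RHESSI pipeline; variable names are ours) might be:

```python
def mlem(A, y, n_iter=200):
    """MLEM / Richardson-Lucy iteration for a Poisson linear model
    y ~ Poisson(A x). Positivity is built in: starting from x > 0,
    every multiplicative update stays > 0. Assumes each row of A has
    at least one positive entry, so the forward projection is nonzero.
    A is a list of rows; y the measured counts."""
    n, m = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(n)) for j in range(m)]  # A^T 1
    x = [1.0] * m
    for _ in range(n_iter):
        # forward-project the current estimate, compare with the data,
        # then back-project the ratio and normalize by the sensitivity
        yhat = [sum(A[i][j] * x[j] for j in range(m)) for i in range(n)]
        ratio = [y[i] / yhat[i] for i in range(n)]
        x = [x[j] * sum(A[i][j] * ratio[i] for i in range(n)) / sens[j]
             for j in range(m)]
    return x
```

In practice the iteration count acts as the regularizer, which is why the stopping rule the authors mention matters.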
Designing for Reliability and Robustness
Svetlik, Randall G.; Moore, Cherice; Williams, Antony
2017-01-01
Long duration spaceflight has a negative effect on the human body, and exercise countermeasures are used on-board the International Space Station (ISS) to minimize bone and muscle loss, combatting these effects. Given the importance of these hardware systems to the health of the crew, this equipment must continue to be readily available. Designing spaceflight exercise hardware to meet high reliability and availability standards has proven to be challenging throughout the time the crewmembers have been living on ISS beginning in 2000. Furthermore, restoring operational capability after a failure is clearly time-critical, but can be problematic given the challenges of troubleshooting the problem from 220 miles away. Several best-practices have been leveraged in seeking to maximize availability of these exercise systems, including designing for robustness, implementing diagnostic instrumentation, relying on user feedback, and providing ample maintenance and sparing. These factors have enhanced the reliability of hardware systems, and therefore have contributed to keeping the crewmembers healthy upon return to Earth. This paper will review the failure history for three spaceflight exercise countermeasure systems identifying lessons learned that can help improve future systems. Specifically, the Treadmill with Vibration Isolation and Stabilization System (TVIS), Cycle Ergometer with Vibration Isolation and Stabilization System (CEVIS), and the Advanced Resistive Exercise Device (ARED) will be reviewed, analyzed, and conclusions identified so as to provide guidance for improving future exercise hardware designs. These lessons learned, paired with thorough testing, offer a path towards reduced system down-time.
Image fusion based on expectation maximization algorithm and steerable pyramid
Gang Liu(刘刚); Zhongliang Jing(敬忠良); Shaoyuan Sun(孙韶媛); Jianxun Li(李建勋); Zhenhua Li(李振华); Henry Leung
2004-01-01
In this paper, a novel image fusion method based on the expectation maximization (EM) algorithm and steerable pyramid is proposed. The registered images are first decomposed by using the steerable pyramid. The EM algorithm is used to fuse the image components in the low frequency band. The selection method involving the informative importance measure is applied to those in the high frequency band. The final fused image is then computed by taking the inverse transform on the composite coefficient representations. Experimental results show that the proposed method outperforms conventional image fusion methods.
Reliability of the maximal oxygen uptake following two consecutive trials by indirect calorimetry.
Hall-López, Javier Arturo; Ochoa-Martínez, Paulina Yesica; Moncada-Jiménez, José; Ocampo Méndez, Mara Alessandra; Martínez García, Issael; Martínez García, Marco Antonio
2015-04-01
Introduction: Assessment of maximal oxygen uptake (VO2max) by indirect calorimetry is the most reliable method; however, results when determining VO2max in repeated tests have been controversial. Objective: To determine the reliability of the maximal oxygen uptake (VO2max) obtained in two consecutive exercise tests using the Bruce protocol in healthy subjects who rested 10 min between tests. Method: Six physically active young adult males (mean age 23.4 ± 1.3 years) participated in the study. The subjects performed two exercise tests with the Bruce protocol; upon reaching VO2max, between the first and second tests, they stepped off the ergometer and rested seated in a chair for 10 minutes. Results: The data showed high reproducibility of values between tests, as indicated by the Pearson product-moment correlation coefficient and R-squared with 95% confidence intervals (95% CI): for maximal oxygen uptake, VO2max r = 0.907 with R² = 0.823; for maximal heart rate, HRmax r = 0.786 with R² = 0.618; and for the ventilation to carbon-dioxide output ratio, VE/VCO2 r = 0.868 with R² = 0.754. Conclusion: No adverse effects were observed during the 10-minute rest period between tests. In conclusion, resting 10 minutes between consecutive maximal exercise tests using the Bruce protocol does not affect VO2max in young, apparently healthy subjects. Repeating a maximal test within the same session is feasible and reliable, and no adverse effects occur.
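The test-retest statistics reported above are Pearson product-moment correlations and their squares; a minimal sketch (the paired trial data below are hypothetical, for illustration only) is:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired trials."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical test-retest VO2max values (ml/kg/min), trial 1 vs trial 2
trial1 = [48.2, 52.1, 45.9, 50.3, 47.7, 49.8]
trial2 = [47.8, 52.6, 46.4, 49.9, 48.1, 50.2]
r = pearson_r(trial1, trial2)
r2 = r ** 2  # proportion of shared variance, as the R² values in the record
```

An r near 1 with high R² between the two trials is what the abstract reads as high reproducibility.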
Maximizing biomarker discovery by minimizing gene signatures
Chang Chang
2011-12-01
Background: The use of gene signatures can potentially be of considerable value in the field of clinical diagnosis. However, gene signatures defined with different methods can vary considerably even when applied to the same disease and the same endpoint. Previous studies have shown that the correct selection of subsets of genes from microarray data is key for the accurate classification of disease phenotypes, and a number of methods have been proposed for this purpose. However, these methods refine the subsets by considering only each single feature, and they do not confirm the association between the genes identified in each gene signature and the phenotype of the disease. We propose an innovative method termed Minimize Feature's Size (MFS), based on multiple-level similarity analyses and associations between the genes and disease for breast cancer endpoints, by comparing classifier models generated from the second phase of the MicroArray Quality Control project (MAQC-II), aiming to develop effective meta-analysis strategies to transform the MAQC-II signatures into a robust and reliable set of biomarkers for clinical applications. Results: We analyzed the similarity of the multiple gene signatures within an endpoint and between the two breast cancer endpoints at the probe and gene levels. The results indicate that disease-related genes can be preferentially selected as components of a gene signature, and that the gene signatures for the two endpoints could be interchangeable. The minimized signatures were built at the probe level using MFS for each endpoint. By applying this approach, we generated a much smaller gene signature with similar predictive power compared with the gene signatures from MAQC-II. Conclusions: Our results indicate that gene signatures of both large and small sizes can perform equally well in clinical applications. Besides, consistency and biological significance can be detected among different gene signatures, reflecting the
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching;
2015-01-01
We conjecture that the balanced complete bipartite graph K_{⌊n/2⌋,⌈n/2⌉} contains more cycles than any other n-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds...
Gradient dynamics and entropy production maximization
Janečka, Adam
2016-01-01
Gradient dynamics describes irreversible evolution by means of a dissipation potential, which leads to several advantageous features like Maxwell--Onsager relations, distinguishing between thermodynamic forces and fluxes or geometrical interpretation of the dynamics. Entropy production maximization is a powerful tool for predicting constitutive relations in engineering. In this paper, both approaches are compared and their shortcomings and advantages are discussed.
Robust Utility Maximization Under Convex Portfolio Constraints
Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed
2015-04-15
We study a robust maximization problem of utility from terminal wealth and consumption under convex constraints on the portfolio. We state the existence and uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Maximizing the Motivated Mind for Emergent Giftedness.
Rea, Dan
2001-01-01
This article explains how the theory of the motivated mind conceptualizes the productive interaction of intelligence, creativity, and achievement motivation and how this theory can help educators to maximize students' emergent potential for giftedness. It discusses the integration of cold-order thinking and hot-chaotic thinking into fluid-adaptive…
The Winning Edge: Maximizing Success in College.
Schmitt, David E.
This book offers college students ideas on how to maximize their success in college by examining the personal management techniques a student needs to succeed. Chapters are as follows: "Getting and Staying Motivated"; "Setting Goals and Tapping Your Resources"; "Conquering Time"; "Think Yourself to College Success"; "Understanding and Remembering…
MAXIMAL ELEMENTS AND EQUILIBRIUM OF ABSTRACT ECONOMY
刘心歌; 蔡海涛
2001-01-01
An existence theorem of maximal elements for a new type of preference correspondences which are Qθ-majorized is given. Then some existence theorems of equilibrium for abstract economy and qualitative game in which the constraint or preference correspondences are Qθ-majorized are obtained in locally convex topological vector spaces.
Maximal workload capacity on moving platforms
Heus, R.; Wertheim, A.H.
1996-01-01
Physical tasks on a moving platform required more energy than the same tasks on a non-moving platform. In this study the maximum aerobic performance (defined as V_O2max) of people working on a moving floor was established compared to the maximal aerobic performance on a non-moving floor. The main
Maximizing Resource Utilization in Video Streaming Systems
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to…
Maximizing throughput in an automated test system
朱君
2007-01-01
Overview: This guide is a collection of whitepapers designed to help you develop test systems that lower your cost, increase your test throughput, and can scale with future requirements. This whitepaper provides strategies for maximizing system throughput. To download the complete developers guide (120 pages), visit ni.com/automatedtest.
The gaugings of maximal D=6 supergravity
Bergshoeff, E.; Samtleben, H.; Sezgin, E.
2008-01-01
We construct the most general gaugings of the maximal D = 6 supergravity. The theory is ( 2, 2) supersymmetric, and possesses an on-shell SO( 5, 5) duality symmetry which plays a key role in determining its couplings. The field content includes 16 vector fields that carry a chiral spinor representat
WEIGHTED BOUNDEDNESS OF A ROUGH MAXIMAL OPERATOR
Anonymous
2000-01-01
In this note the authors give the weighted Lp-boundedness for a class of maximal singular integral operators with rough kernel. The result in this note is an improvement and extension of the result obtained by Chen and Lin in 1990.
Maximizing the Range of a Projectile.
Brown, Ronald A.
1992-01-01
Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
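In the spirit of this abstract's calculus-free solutions, a brute-force angle sweep (my own illustration, not taken from the article) recovers the well-known 45° optimum of the drag-free range formula R = v0² sin(2θ)/g:

```python
import math

def projectile_range(v0, theta_deg, g=9.81):
    """Range of a projectile launched from level ground, no air resistance:
    R = v0^2 * sin(2*theta) / g."""
    theta = math.radians(theta_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# A plain sweep over integer launch angles finds the optimum without calculus.
best = max(range(1, 90), key=lambda a: projectile_range(20.0, a))
print(best)  # 45
```

Suitable for an introductory course: students see that sin(2θ) peaks when 2θ = 90°.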
Ehrenfest's Lottery--Time and Entropy Maximization
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
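The urn lottery described above is easy to simulate; the sketch below is a toy version of my own (the urn count, marble count, and move rule are illustrative assumptions, not details from the article). Starting with every marble in one urn, the counts relax toward the uniform maximum-entropy distribution:

```python
import random

def ehrenfest_lottery(n_urns=4, n_marbles=400, steps=20000, seed=1):
    """Toy urn lottery: repeatedly pick a marble uniformly at random and
    move it to an urn chosen at random. The counts relax toward the
    uniform (entropy-maximizing) distribution, mimicking diffusion."""
    rng = random.Random(seed)
    counts = [n_marbles] + [0] * (n_urns - 1)
    for _ in range(steps):
        # Picking a marble uniformly = picking an urn weighted by its count.
        src = rng.choices(range(n_urns), weights=counts)[0]
        dst = rng.randrange(n_urns)
        counts[src] -= 1
        counts[dst] += 1
    return counts

counts = ehrenfest_lottery()
print(counts)  # each urn ends near 400/4 = 100, up to equilibrium fluctuations
```

Equilibrium here means each urn holds roughly n_marbles/n_urns marbles, with fluctuations of order sqrt(n_marbles).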
Testing maximality in muon neutrino flavor mixing
Choubey, S; Choubey, Sandhya; Roy, Probir
2003-01-01
The small difference between the survival probabilities of muon neutrino and antineutrino beams, traveling through earth matter in a long baseline experiment such as MINOS, is shown to be an important measure of any possible deviation from maximality in the flavor mixing of those states.
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequen
On the Hardy-Littlewood maximal theorem
Shinji Yamashita
1982-01-01
The Hardy-Littlewood maximal theorem is extended to functions of class PL in the sense of E. F. Beckenbach and T. Radó, with a more precise expression of the absolute constant in the inequality. As applications we deduce some results on hyperbolic Hardy classes in terms of the non-Euclidean hyperbolic distance in the unit disk.
Maximal Cartel Pricing and Leniency Programs
Houba, H.E.D.; Motchenkova, E.; Wen, Q.
2008-01-01
For a general class of oligopoly models with price competition, we analyze the impact of ex-ante leniency programs in antitrust regulation on the endogenous maximal-sustainable cartel price. This impact depends upon industry characteristics including its cartel culture. Our analysis disentangles the
How to Generate Good Profit Maximization Problems
Davis, Lewis
2014-01-01
In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
Maximally entangled mixed states made easy
Aiello, A; Voigt, D; Woerdman, J P
2006-01-01
We show that, contrary to a recent claim [M. Ziman and V. Bužek, Phys. Rev. A 72, 052325 (2005)], it is possible to achieve maximally entangled mixed states of two qubits from the singlet state via the action of local nonunital quantum channels. Moreover, we present a simple, feasible linear optical implementation of one such channel.
Maximizing scientific knowledge from randomized clinical trials
Gustafsson, Finn; Atar, Dan; Pitt, Bertram
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly...
Maximal Heat Generation in Nanoscale Systems
ZHOU Li-Ling; LI Shu-Shen; ZENG Zhao-Yang
2009-01-01
We investigate the heat generation in a nanoscale system coupled to normal leads and find that it is maximal when the average occupation of the electrons in the nanoscale system is 0.5, no matter what mechanism induces the heat generation.
Understanding violations of Gricean maxims in preschoolers and adults.
Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji
2015-01-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.
Reliability and safety engineering
Verma, Ajit Kumar; Karanki, Durga Rao
2016-01-01
Reliability and safety are core issues that must be addressed throughout the life cycle of engineering systems. Reliability and Safety Engineering presents an overview of the basic concepts, together with simple and practical illustrations. The authors present reliability terminology in various engineering fields, viz.,electronics engineering, software engineering, mechanical engineering, structural engineering and power systems engineering. The book describes the latest applications in the area of probabilistic safety assessment, such as technical specification optimization, risk monitoring and risk informed in-service inspection. Reliability and safety studies must, inevitably, deal with uncertainty, so the book includes uncertainty propagation methods: Monte Carlo simulation, fuzzy arithmetic, Dempster-Shafer theory and probability bounds. Reliability and Safety Engineering also highlights advances in system reliability and safety assessment including dynamic system modeling and uncertainty management. Cas...
Measurement System Reliability Assessment
Kłos Ryszard
2015-06-01
Decision-making in problem situations is based on up-to-date and reliable information. A great deal of information is subject to rapid changes, hence it may be outdated or manipulated and enforce erroneous decisions. It is crucial to have the possibility to assess the obtained information. In order to ensure its reliability it is best to obtain it with an own measurement process. In such a case, conducting assessment of measurement system reliability seems to be crucial. The article describes general approach to assessing reliability of measurement systems.
Reliable Knowledge Discovery
Dai, Honghua; Smirnov, Evgueni
2012-01-01
Reliable Knowledge Discovery focuses on theory, methods, and techniques for RKDD, a new sub-field of KDD. It studies the theory and methods to assure the reliability and trustworthiness of discovered knowledge and to maintain the stability and consistency of knowledge discovery processes. RKDD has a broad spectrum of applications, especially in critical domains like medicine, finance, and military. Reliable Knowledge Discovery also presents methods and techniques for designing robust knowledge-discovery processes. Approaches to assessing the reliability of the discovered knowledge are introduc
Circuit design for reliability
Cao, Yu; Wirth, Gilson
2015-01-01
This book presents physical understanding, modeling and simulation, on-chip characterization, layout solutions, and design techniques that are effective to enhance the reliability of various circuit units. The authors provide readers with techniques for state of the art and future technologies, ranging from technology modeling, fault detection and analysis, circuit hardening, and reliability management. Provides comprehensive review on various reliability mechanisms at sub-45nm nodes; Describes practical modeling and characterization techniques for reliability; Includes thorough presentation of robust design techniques for major VLSI design units; Promotes physical understanding with first-principle simulations.
A New Algorithm to Optimize Maximal Information Coefficient.
Yuan Chen
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization, removing the maximal grid size limitation of the original ApproxMaxMI algorithm. Computational experiments show that the ChiMIC algorithm maintains the same MIC values for noiseless functional relationships, but gives much smaller MIC values for independent variables. For noisy functional relationships, the ChiMIC algorithm reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships in a better way, and the statistical powers of MIC calculated by ChiMIC are higher than those calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much less than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC.
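The quantity MIC maximizes over grid partitions is the mutual information of the binned data, normalized by log of the smaller grid dimension. The sketch below shows that building block only (a minimal illustration with equal-width binning; it is not the ChiMIC or ApproxMaxMI algorithm):

```python
import math
from collections import Counter

def grid_mutual_information(xs, ys, kx, ky):
    """Mutual information (in bits) of paired data binned on a kx-by-ky
    equal-width grid -- the quantity MIC maximizes over grids before
    normalizing by log2(min(kx, ky))."""
    n = len(xs)
    def bin_index(v, lo, hi, k):
        if hi == lo:
            return 0
        return min(int((v - lo) / (hi - lo) * k), k - 1)
    bx = [bin_index(x, min(xs), max(xs), kx) for x in xs]
    by = [bin_index(y, min(ys), max(ys), ky) for y in ys]
    pxy = Counter(zip(bx, by))
    px, py = Counter(bx), Counter(by)
    mi = 0.0
    for (i, j), c in pxy.items():
        p = c / n
        mi += p * math.log2(p / ((px[i] / n) * (py[j] / n)))
    return mi

xs = [i / 100 for i in range(100)]
ys = [x * x for x in xs]                      # noiseless functional relationship
mi_func = grid_mutual_information(xs, ys, 4, 4)
print(round(mi_func, 3))  # well above 0; independent data would score near 0
```

For a noiseless relationship the MI approaches log2(k) on a k-by-k grid, which is why MIC assigns such relationships values near 1 after normalization.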
Reliable Geographical Forwarding in Cognitive Radio Sensor Networks Using Virtual Clusters
Suleiman Zubair
2014-05-01
The need for implementing reliable data transfer in resource-constrained cognitive radio ad hoc networks is still an open issue in the research community. Although geographical forwarding schemes are characterized by their low overhead and efficiency in reliable data transfer in traditional wireless sensor networks, this potential has yet to be utilized for viable routing options in resource-constrained cognitive radio ad hoc networks in the presence of lossy links. In this paper, a novel geographical forwarding technique that does not restrict the choice of the next hop to the nodes in the selected route is presented. This is achieved by the creation of virtual clusters based on spectrum correlation, from which the next-hop choice is made based on link quality. The design maximizes the use of idle listening and receiver contention prioritization for energy efficiency, the avoidance of routing hot spots, and stability. The validation result, which closely follows the simulation result, shows that the developed scheme can make more advancement to the sink than the usual decisions of relevant ad hoc on-demand distance vector route select operations, while ensuring channel quality. Further simulation results have shown the enhanced reliability, lower latency and energy efficiency of the presented scheme.
Measurable Maximal Energy and Minimal Time Interval
Dahab, Eiman Abou El
2014-01-01
The possibility of finding the measurable maximal energy and the minimal time interval is discussed in different quantum aspects. It is found that the linear generalized uncertainty principle (GUP) approach gives a non-physical result. Based on the large-scale Schwarzschild solution, the quadratic GUP approach is utilized. The calculations are performed at the shortest distance, at which general relativity is assumed to be a good approximation for quantum gravity, and at larger distances as well. It is found that both maximal energy and minimal time have the order of the Planck time. Then, the uncertainties in both quantities are accordingly bounded. Some physical insights are addressed. Also, the implications for the physics of the early Universe and for quantized mass are outlined. The results are related to the existence of a finite cosmological constant and a minimum mass (mass quanta).
Maximal temperature in a simple thermodynamical system
Dai, De-Chang
2016-01-01
Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense to talk about temperatures higher than the Planck temperature in the absence of a full theory of quantum gravity. Here, we demonstrate that there exists a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity, before we reach the realm of quantum gravity. Namely, if two particles with a given center-of-mass energy come closer than the Schwarzschild diameter apart, according to classical gravity they will form a black hole. It is possible to calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. That represents the maximal achievable temperature in a simple thermodynamical system.
Hamiltonian formalism and path entropy maximization
Davis, Sergio; González, Diego
2015-10-01
Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.
Predicting Contextual Sequences via Submodular Function Maximization
Dey, Debadeepta; Hebert, Martial; Bagnell, J Andrew
2012-01-01
Sequence optimization, where the items in a list are ordered to maximize some reward, has many applications such as web advertisement placement, search, and control libraries in robotics. Previous work in sequence optimization produces a static ordering that does not take any features of the item or context of the problem into account. In this work, we propose a general approach to order the items within the sequence based on the context (e.g., perceptual information, environment description, and goals). We take a simple, efficient, reduction-based approach where the choice and order of the items is established by repeatedly learning simple classifiers or regressors for each "slot" in the sequence. Our approach leverages recent work on submodular function maximization to provide a formal regret reduction from submodular sequence optimization to simple cost-sensitive prediction. We apply our contextual sequence prediction algorithm to optimize control libraries and demonstrate results on two robotics problems: ...
Nonlinear trading models through Sharpe Ratio maximization.
Choey, M; Weigend, A S
1997-08-01
While many trading strategies are based on price prediction, traders in financial markets are typically interested in optimizing risk-adjusted performance such as the Sharpe Ratio, rather than the price predictions themselves. This paper introduces an approach which generates a nonlinear strategy that explicitly maximizes the Sharpe Ratio. It is expressed as a neural network model whose output is the position size between a risky and a risk-free asset. The iterative parameter update rules are derived and compared to alternative approaches. The resulting trading strategy is evaluated and analyzed on both computer-generated data and real world data (DAX, the daily German equity index). Trading based on Sharpe Ratio maximization compares favorably to both profit optimization and probability matching (through cross-entropy optimization). The results show that the goal of optimizing out-of-sample risk-adjusted profit can indeed be achieved with this nonlinear approach.
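The objective in this abstract can be illustrated with a stripped-down stand-in for the paper's neural network: a one-parameter tanh position rule tuned by grid search to maximize the Sharpe Ratio on synthetic data (the data, the tanh rule, and all parameter values are illustrative assumptions, not the paper's model):

```python
import math
import random

def sharpe(returns):
    """Sharpe Ratio of a return series: mean / standard deviation."""
    m = sum(returns) / len(returns)
    var = sum((r - m) ** 2 for r in returns) / len(returns)
    return m / math.sqrt(var)

rng = random.Random(0)
# Synthetic market: a noisy signal carries partial information about returns.
signals = [rng.gauss(0, 1) for _ in range(2000)]
rets = [0.1 * s + rng.gauss(0, 1) for s in signals]

def strategy_sharpe(a):
    # Nonlinear position sizing: position = tanh(a * signal), a toy stand-in
    # for the network output that sizes the risky-asset position.
    strat = [math.tanh(a * s) * r for s, r in zip(signals, rets)]
    return sharpe(strat)

# Maximize the Sharpe Ratio directly (here by grid search over a).
best_a = max((i / 10 for i in range(1, 31)), key=strategy_sharpe)
print(best_a, round(strategy_sharpe(best_a), 3))
```

The paper instead derives iterative gradient update rules for the network parameters; the point shared with this sketch is that the risk-adjusted objective, not price prediction error, is what gets optimized.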
Maximally Symmetric Spacetimes emerging from thermodynamic fluctuations
Bravetti, A; Quevedo, H
2015-01-01
In this work we prove that the maximally symmetric vacuum solutions of General Relativity emerge from the geometric structure of statistical mechanics and thermodynamic fluctuation theory. To present our argument, we begin by showing that the pseudo-Riemannian structure of the Thermodynamic Phase Space is a solution to the vacuum Einstein-Gauss-Bonnet theory of gravity with a cosmological constant. Then, we use the geometry of equilibrium thermodynamics to demonstrate that the maximally symmetric vacuum solutions of Einstein's Field Equations -- Minkowski, de-Sitter and Anti-de-Sitter spacetimes -- correspond to thermodynamic fluctuations. Moreover, we argue that these might be the only possible solutions that can be derived in this manner. Thus, the results presented here are the first concrete examples of spacetimes effectively emerging from the thermodynamic limit over an unspecified microscopic theory without any further assumptions.
Consistent 4-form fluxes for maximal supergravity
Godazgar, Hadi; Krueger, Olaf; Nicolai, Hermann
2015-01-01
We derive new ansaetze for the 4-form field strength of D=11 supergravity corresponding to uplifts of four-dimensional maximal gauged supergravity. In particular, the ansaetze directly yield the components of the 4-form field strength in terms of the scalars and vectors of the four-dimensional maximal gauged supergravity---in this way they provide an explicit uplift of all four-dimensional consistent truncations of D=11 supergravity. The new ansaetze provide a substantially simpler method for uplifting d=4 flows compared to the previously available method using the 3-form and 6-form potential ansaetze. The ansatz for the Freund-Rubin term allows us to conjecture a `master formula' for the latter in terms of the scalar potential of d=4 gauged supergravity and its first derivative. We also resolve a long-standing puzzle concerning the antisymmetry of the flux obtained from uplift ansaetze.
Modularity maximization using completely positive programming
Yazdanparast, Sakineh; Havens, Timothy C.
2017-04-01
Community detection is one of the most prominent problems of social network analysis. In this paper, a novel method of Modularity Maximization (MM) for community detection is presented which exploits the Alternating Direction Augmented Lagrangian (ADAL) method for maximizing a generalized form of Newman's modularity function. We first transform Newman's modularity function into a quadratic program and then use Completely Positive Programming (CPP) to map the quadratic program to a linear program, which provides the globally optimal maximum-modularity partition. In order to solve the proposed CPP problem, a closed-form solution using the ADAL merged with a rank-minimization approach is proposed. The performance of the proposed method is evaluated on several real-world data sets used as community detection benchmarks. Simulation results show that the proposed technique provides outstanding results in terms of modularity value for crisp partitions.
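Newman's modularity function, the quantity being maximized here, is straightforward to evaluate directly for a given partition (a minimal reference implementation, not the paper's ADAL/CPP method):

```python
def modularity(adj, communities):
    """Newman's modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
    for an undirected graph given as an adjacency matrix, with one
    community label per node."""
    n = len(adj)
    two_m = sum(sum(row) for row in adj)      # 2m for an undirected graph
    deg = [sum(row) for row in adj]           # node degrees k_i
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                q += adj[i][j] - deg[i] * deg[j] / two_m
    return q / two_m

# Two triangles joined by a single edge: the natural split scores high.
adj = [[0, 1, 1, 0, 0, 0],
       [1, 0, 1, 0, 0, 0],
       [1, 1, 0, 1, 0, 0],
       [0, 0, 1, 0, 1, 1],
       [0, 0, 0, 1, 0, 1],
       [0, 0, 0, 1, 1, 0]]
print(modularity(adj, [0, 0, 0, 1, 1, 1]))   # 5/14 ≈ 0.357
print(modularity(adj, [0, 1, 0, 1, 0, 1]))   # a poor partition scores lower
```

Modularity maximization methods such as the one in this paper search over partitions for the one maximizing this Q.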
Utility maximization in incomplete markets with default
Lim, Thomas
2008-01-01
We address the maximization problem of expected utility from terminal wealth. The special feature of this paper is that we consider a financial market where the price process of risky assets can have a default time. Using dynamic programming, we characterize the value function with a backward stochastic differential equation and the optimal portfolio policies. We treat separately the cases of exponential, power and logarithmic utility.
Operational Modal Analysis using Expectation Maximization Algorithm
Cara Cañas, Francisco Javier; Carpio Huertas, Jaime; Juan Ruiz, Jesús; Alarcón Álvarez, Enrique
2011-01-01
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated applying the proposed identification method...
Revenue Maximizing Head Starts in Contests
Franke, Jörg; Leininger, Wolfgang; Wasser, Cédric
2014-01-01
We characterize revenue maximizing head starts for all-pay auctions and lottery contests with many heterogeneous players. We show that under optimal head starts all-pay auctions revenue-dominate lottery contests for any degree of heterogeneity among players. Moreover, all-pay auctions with optimal head starts induce higher revenue than any multiplicatively biased all-pay auction or lottery contest. While head starts are more effective than multiplicative biases in all-pay auctions, they are l...
Approximate Revenue Maximization in Interdependent Value Settings
Chawla, Shuchi; Fu, Hu; Karlin, Anna
2014-01-01
We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions, however, ...
Driel, W.D. van; Yuan, C.A.; Koh, S.; Zhang, G.Q.
2011-01-01
This paper presents our effort to predict the system reliability of Solid State Lighting (SSL) applications. A SSL system is composed of a LED engine with micro-electronic driver(s) that supplies power to the optic design. Knowledge of system level reliability is not only a challenging scientific ex
Principles of Bridge Reliability
Thoft-Christensen, Palle; Nowak, Andrzej S.
The paper gives a brief introduction to the basic principles of structural reliability theory and its application to bridge engineering. Fundamental concepts like failure probability and reliability index are introduced. Ultimate as well as serviceability limit states for bridges are formulated...
Improving machinery reliability
Bloch, Heinz P
1998-01-01
This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.
Hawaii Electric System Reliability
Loose, Verne William [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva Monroy, Cesar Augusto [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2012-08-01
This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers’ views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers’ views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.
Hawaii electric system reliability.
Silva Monroy, Cesar Augusto; Loose, Verne William
2012-09-01
This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.
Maximal supersymmetry and B-mode targets
Kallosh, Renata; Linde, Andrei; Wrase, Timm; Yamada, Yusuke
2017-04-01
Extending the work of Ferrara and one of the authors [1], we present dynamical cosmological models of α-attractors with plateau potentials for 3α = 1, 2, 3, 4, 5, 6, 7. These models are motivated by geometric properties of maximally supersymmetric theories: M-theory, superstring theory, and maximal N = 8 supergravity. After a consistent truncation of maximal to minimal supersymmetry in a seven-disk geometry, we perform a two-step procedure: 1) we introduce a superpotential, which stabilizes the moduli of the seven-disk geometry in a supersymmetric minimum, 2) we add a cosmological sector with a nilpotent stabilizer, which breaks supersymmetry spontaneously and leads to a desirable class of cosmological attractor models. These models, with n_s consistent with observational data and with tensor-to-scalar ratio r ≈ 10^-2 to 10^-3, provide natural targets for future B-mode searches. We relate the issue of stability of inflationary trajectories in these models to tessellations of a hyperbolic geometry.
Maximal respiratory pressures among adolescent swimmers.
Rocha Crispino Santos, M A; Pinto, M L; Couto Sant'Anna, C; Bernhoeft, M
2011-01-01
Maximal inspiratory pressures (MIP) and maximal expiratory pressures (MEP) are useful indices of respiratory muscle strength in athletes. The aims of this study were: to describe the strength of the respiratory muscles of an Olympic junior swim team, at baseline and after a standard physical training; and to determine whether there is a differential inspiratory and expiratory pressure response to the physical training. A cross-sectional study evaluated 28 international-level swimmers with ages ranging from 15 to 17 years, 19 (61%) being male. At baseline, MIP was found to be lower in females (P = .001). The mean values reached by males and females were: MIP (cmH2O) = M: 100.4 (± 26.5) / F: 67.8 (± 23.2); MEP (cmH2O) = M: 87.4 (± 20.7) / F: 73.9 (± 17.3). After the physical training they reached: MIP (cmH2O) = M: 95.3 (± 30.3) / F: 71.8 (± 35.6); MEP (cmH2O) = M: 82.8 (± 26.2) / F: 70.4 (± 8.3). No differential pressure responses were observed in either males or females. These results suggest that swimmers can sustain the magnitude of the initial maximal pressures. Other studies should be developed to clarify whether MIP and MEP could be used as markers of an athlete's performance.
solveME: fast and reliable solution of nonlinear ME models
Yang, Laurence; Ma, Ding; Ebrahim, Ali
2016-01-01
reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models...
Algora, Carlos; Espinet-Gonzalez, Pilar; Vazquez, Manuel; Bosco, Nick; Miller, David; Kurtz, Sarah; Rubio, Francisca; McConnell, Robert
2016-04-15
This chapter describes the accumulated knowledge on CPV reliability, covering its fundamentals and qualification. It explains the reliability of solar cells, modules (including optics) and plants. The chapter discusses the relevant statistical distributions, namely the exponential, normal and Weibull distributions. The treatment of solar cell reliability covers the issues in accelerated aging tests of CPV solar cells, types of failure, and failures in real-time operation. The chapter explores accelerated life tests, namely qualitative life tests (mainly HALT) and quantitative accelerated life tests (QALT). It examines other well-proven and widely deployed PV cells and/or semiconductor devices that share similar semiconductor materials, manufacturing techniques or operating conditions, namely III-V space solar cells and light-emitting diodes (LEDs). It addresses each of the identified reliability issues and presents the current state-of-the-art knowledge for their testing and evaluation. Finally, the chapter summarizes the CPV qualification and reliability standards.
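As a sketch of the distributions this chapter discusses, the Weibull family covers the three classic reliability regimes through its shape parameter. The functions below are the standard textbook forms, not code from the chapter itself:

```python
import math

def weibull_reliability(t, beta, eta):
    """Weibull reliability (survival) function R(t) = exp(-(t/eta)**beta).

    beta < 1: decreasing hazard (infant mortality),
    beta = 1: constant hazard (exponential distribution),
    beta > 1: increasing hazard (wear-out).
    eta is the characteristic life (scale parameter).
    """
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """Instantaneous failure rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# With beta = 1 the model reduces to the exponential: R(eta) = exp(-1) ≈ 0.3679.
print(weibull_reliability(2.0, 1.0, 2.0))
```

The same two functions underlie accelerated life test analysis: a HALT/QALT fit typically estimates beta and eta from failure times and then extrapolates R(t) to use conditions.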
Location Based Throughput Maximization Routing in Energy Constrained Mobile Ad-hoc Network
V. Sumathy
2006-01-01
In wireless ad-hoc networks, power consumption becomes an important issue due to limited battery power. One of the reasons for energy expenditure in these networks is an irregularly distributed node pattern, which imposes a large interference range in certain areas. To maximize the lifetime of an ad-hoc mobile network, the power consumption rate of each node must be evenly distributed and the overall transmission range of each node must be minimized. Our protocol, location-based throughput maximization routing in energy constrained ad-hoc networks, finds routing paths that maximize the lifetime of individual nodes and minimize the total transmission energy consumption. The lifetime of the entire network, the network throughput, and the reliability of the paths are all increased. Location-based energy constrained routing finds the distance between the nodes. Based on this distance, the required transmission power is calculated, dynamically reducing the total transmission energy.
Makram KRIT
2016-01-01
This paper presents several iterative methods based on the Stochastic Expectation-Maximization (EM) methodology for estimating parametric reliability models from random lifetime data. The methodology is related to Maximum Likelihood Estimation (MLE) in the case of missing data. A bathtub-form failure intensity formulation of repairable system reliability is presented, and the estimation of its parameters is carried out with the EM algorithm. Field failure data from an industrial site are used to fit the model. Finally, the large-sample interval estimation found in the literature is discussed, and the actual coverage probabilities of these confidence intervals are examined using the Monte Carlo simulation method.
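To illustrate the EM idea for lifetime data in a deliberately simplified setting (not the paper's bathtub-intensity model): for exponentially distributed lifetimes with right-censoring, the E-step replaces each censored observation c by its expected lifetime c + 1/rate (memorylessness), and the M-step re-estimates the rate from the completed data:

```python
def em_exponential_censored(times, censored, iters=200):
    """EM estimate of an exponential failure rate from right-censored data.

    times: observed values; censored[i] is True when times[i] is a
    censoring time rather than an exact failure time.
    E-step: a unit censored at c has expected lifetime c + 1/rate
    (memoryless property of the exponential).
    M-step: rate = n / (total expected lifetime).
    """
    rate = 1.0  # initial guess
    n = len(times)
    for _ in range(iters):
        expected = [t + 1.0 / rate if c else t for t, c in zip(times, censored)]
        rate = n / sum(expected)
    return rate

# The iteration converges to the closed-form censored MLE:
# (number of failures) / (total observed time) = 3 / 20 ≈ 0.15.
times = [2.0, 3.0, 5.0, 4.0, 6.0]
censored = [False, False, True, False, True]
print(round(em_exponential_censored(times, censored), 4))
```

The fixed point matches the direct MLE here, which is exactly the point of EM: it recovers the missing-data MLE by iterating complete-data estimates.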
Reliability of genetic networks is evolvable
Braunewell, Stefan; Bornholdt, Stefan
2008-06-01
Control of the living cell functions with remarkable reliability despite the stochastic nature of the underlying molecular networks—a property presumably optimized by biological evolution. We ask here to what extent the ability of a stochastic dynamical network to produce reliable dynamics is an evolvable trait. Using an evolutionary algorithm based on a deterministic selection criterion for the reliability of dynamical attractors, we evolve networks of noisy discrete threshold nodes. We find that, starting from any random network, reliability of the attractor landscape can often be achieved with only a few small changes to the network structure. Further, the evolvability of networks toward reliable dynamics while retaining their function is investigated and a high success rate is found.
Introduction to quality and reliability engineering
Jiang, Renyan
2015-01-01
This book presents the state-of-the-art in quality and reliability engineering from a product life cycle standpoint. Topics in reliability include reliability models, life data analysis and modeling, design for reliability and accelerated life testing, while topics in quality include design for quality, acceptance sampling and supplier selection, statistical process control, production tests such as screening and burn-in, warranty and maintenance. The book provides comprehensive insights into two closely related subjects, and includes a wealth of examples and problems to enhance reader comprehension and link theory and practice. All numerical examples can be easily solved using Microsoft Excel. The book is intended for senior undergraduate and post-graduate students in related engineering and management programs such as mechanical engineering, manufacturing engineering, industrial engineering and engineering management programs, as well as for researchers and engineers in the quality and reliability fields.
Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo
2017-09-27
We aimed to clarify the mechanical determinants of sprinting performance during acceleration and maximal speed phases of a single sprint, using ground reaction forces (GRFs). While 18 male athletes performed a 60-m sprint, GRF was measured at every step over a 50-m distance from the start. Variables during the entire acceleration phase were approximated with a fourth-order polynomial. Subsequently, accelerations at 55%, 65%, 75%, 85%, and 95% of maximal speed, and running speed during the maximal speed phase were determined as sprinting performance variables. Ground reaction impulses and mean GRFs during the acceleration and maximal speed phases were selected as independent variables. Stepwise multiple regression analysis selected propulsive and braking impulses as contributors to acceleration at 55%-95% (β > 0.724) and 75%-95% (β > 0.176), respectively, of maximal speed. Moreover, mean vertical force was a contributor to maximal running speed (β = 0.481). The current results demonstrate that exerting a large propulsive force during the entire acceleration phase, suppressing braking force when approaching maximal speed, and producing a large vertical force during the maximal speed phase are essential for achieving greater acceleration and maintaining higher maximal speed, respectively.
Fuzzy neural network output maximization control for sensorless wind energy conversion system
Lin, Whei-Min; Hong, Chih-Ming [Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung (China); Cheng, Fu-Sheng [Department of Electrical Engineering, Cheng-Shiu University, Kaohsiung (China)
2010-02-15
This paper presents the design of an online training fuzzy neural network (FNN) controller with a high-performance speed observer for the induction generator (IG). The proposed output maximization control is achieved without mechanical sensors such as the wind speed or position sensor, and the new control system will deliver maximum electric power with light weight, high efficiency, and high reliability. The estimation of the rotor speed is designed on the basis of the sliding mode control theory. (author)
Photovoltaic system reliability
Maish, A.B.; Atcitty, C. [Sandia National Labs., NM (United States)]; Greenberg, D. [Ascension Technology, Inc., Lincoln Center, MA (United States)]; and others
1997-10-01
This paper discusses the reliability of several photovoltaic projects including SMUD's PV Pioneer project, various projects monitored by Ascension Technology, and the Colorado Parks project. System times-to-failure range from 1 to 16 years, and maintenance costs range from 1 to 16 cents per kilowatt-hour. Factors contributing to the reliability of these systems are discussed, and practices are recommended that can be applied to future projects. This paper also discusses the methodology used to collect and analyze PV system reliability data.
Structural Reliability Methods
Ditlevsen, Ove Dalager; Madsen, H. O.
The structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature...
Low-frequency fatigue at maximal and submaximal muscle contractions
R.R. Baptista
2009-04-01
Skeletal muscle force production following repetitive contractions is preferentially reduced when muscle is evaluated with low-frequency stimulation. This selective impairment in force generation is called low-frequency fatigue (LFF) and could be dependent on the contraction type. The purpose of this study was to compare LFF after concentric and eccentric maximal and submaximal contractions of knee extensor muscles. Ten healthy male subjects (age: 23.6 ± 4.2 years; weight: 73.8 ± 7.7 kg; height: 1.79 ± 0.05 m) executed maximal voluntary contractions that were measured before a fatigue test (pre-exercise), immediately after (after-exercise) and after 1 h of recovery (after-recovery). The fatigue test consisted of 60 maximal (100%) or submaximal (40%) dynamic concentric or eccentric knee extensions at an angular velocity of 60°/s. The isometric torque produced by low- (20 Hz) and high- (100 Hz) frequency stimulation was also measured at these times, and the 20:100 Hz ratio was calculated to assess LFF. One-way ANOVA for repeated measures followed by the Newman-Keuls post hoc test was used to determine significant (P < 0.05) differences. LFF was evident after-recovery in all trials except following submaximal eccentric contractions. LFF was not evident after-exercise, regardless of exercise intensity or contraction type. Our results suggest that low-frequency fatigue was evident after submaximal concentric but not submaximal eccentric contractions and was more pronounced after 1 h of recovery.
Dopaminergic balance between reward maximization and policy complexity
Naama eParush
2011-05-01
Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement learning signal and a pseudo-temperature signal controlling the general level of basal ganglia excitability and the motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamo-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, the probabilities of actions are selected according to their estimated values and the pseudo-temperature, but in addition they also vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.
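A minimal sketch of the classical softmax policy with a pseudo-temperature parameter (the experience modulation proposed in the paper is omitted; this is the textbook form only):

```python
import math

def softmax_policy(values, temperature):
    """Softmax action selection: P(a) proportional to exp(Q(a) / T).

    A high temperature flattens the distribution toward uniform
    exploration; a low temperature concentrates probability on the
    highest-valued action (near-greedy exploitation).
    """
    scaled = [v / temperature for v in values]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

q = [1.0, 2.0, 3.0]
print(softmax_policy(q, 1.0))    # favors the highest-valued action
print(softmax_policy(q, 100.0))  # nearly uniform across actions
```

In the paper's framing, dopamine would modulate the temperature T, shifting the policy along the exploration-exploitation (cost-gain) trade-off.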
Postactivation Potentiation Biases Maximal Isometric Strength Assessment
Leonardo Coelho Rabello Lima
2014-01-01
Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine if PAP would influence isometric strength assessment. Healthy male volunteers (n=23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during PTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s−1 versus 727 ± 158 N·m·s−1), and RMS (59.1 ± 12.2% RMSMAX versus 54.8 ± 9.4% RMSMAX) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables.
Maximizing versus satisficing: happiness is a matter of choice.
Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R
2002-11-01
Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.
N. A. Nayak
1960-05-01
The reliability aspect of electronic equipment is discussed. To obtain optimum results, close cooperation between the components engineer, the design engineer and the production engineer is suggested.
Reliability prediction techniques
Whittaker, B.; Worthington, B.; Lord, J.F.; Pinkard, D.
1986-01-01
The paper demonstrates the feasibility of applying reliability assessment techniques to mining equipment. A number of techniques are identified and described, and examples of their use in assessing mining equipment are given. These techniques include: reliability prediction; failure analysis; design audit; maintainability; availability; and life cycle costing. Specific conclusions regarding the usefulness of each technique are outlined. The choice of techniques depends upon both the type of equipment being assessed and its stage of development, with numerical prediction best suited to electronic equipment, and fault analysis and design audit suited to mechanical equipment. Reliability assessments involve much detailed and time-consuming work, but it has been demonstrated that the resulting reliability improvements lead to savings in service costs which more than offset the cost of the evaluation.
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching
2015-01-01
We conjecture that the balanced complete bipartite graph K_{⌊n/2⌋,⌈n/2⌉} contains more cycles than any other n-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds on the numbers of cycles in graphs depending on numbers of vertices and edges, girth, and homomorphisms to small fixed graphs; and use the bounds to show that among regular graphs, the conjecture holds. We also consider graphs that are close to being regular, with the minimum and maximum degrees differing...
ON THE SPACES OF THE MAXIMAL POINTS
梁基华; 刘应明
2003-01-01
For a continuous domain D, some characterization that the convex powerdomain CD is a domain hull of Max(CD) is given in terms of compact subsets of D. And in this case, it is proved that the set of the maximal points Max(CD) of CD with the relative Scott topology is homeomorphic to the set of all Scott compact subsets of Max(D) with the topology induced by the Hausdorff metric derived from a metric on Max(D) when Max(D) is metrizable.
Maximizing policy learning in international committees
Nedergaard, Peter
2007-01-01
This article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...
Optimal Implementations for Reliable Circadian Clocks
Hasegawa, Yoshihiko; Arita, Masanori
2014-09-01
Circadian rhythms are acquired through evolution to increase the chances for survival through synchronizing with the daylight cycle. Reliable synchronization is realized through two trade-off properties: regularity to keep time precisely, and entrainability to synchronize the internal time with daylight. We find by using a phase model with multiple inputs that achieving the maximal limit of regularity and entrainability entails many inherent features of the circadian mechanism. At the molecular level, we demonstrate the role sharing of two light inputs, phase advance and delay, as is well observed in mammals. At the behavioral level, the optimal phase-response curve inevitably contains a dead zone, a time during which light pulses neither advance nor delay the clock. We reproduce the results of phase-controlling experiments entrained by two types of periodic light pulses. Our results indicate that circadian clocks are designed optimally for reliable clockwork through evolution.
The rating reliability calculator
Solomon David J
2004-04-01
Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to obtain complete rating data. I would welcome other researchers revising and enhancing the program.
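The Spearman-Brown prophecy formula mentioned in the Results is simple enough to state directly. This is the standard formula, not the utility's PHP code:

```python
def spearman_brown(reliability, k):
    """Predicted reliability when the number of averaged ratings is
    multiplied by k:  r_k = k * r / (1 + (k - 1) * r)."""
    return k * reliability / (1 + (k - 1) * reliability)

# Averaging two ratings with single-rating reliability 0.6
# is predicted to yield reliability 0.75.
print(spearman_brown(0.6, 2))
```

The formula answers the practical question behind the utility's output: how many judges per subject are needed to reach a target reliability.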
Reliability of power connections
BRAUNOVIC Milenko
2007-01-01
Despite the use of various preventive maintenance measures, there are still a number of problem areas that can adversely affect system reliability. Also, economic constraints have pushed the designs of power connections closer to the limits allowed by the existing standards. The major parameters influencing the reliability and life of Al-Al and Al-Cu connections are identified. The effectiveness of various palliative measures is determined, and the misconceptions about their effectiveness are dealt with in detail.
Multidisciplinary System Reliability Analysis
Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)
2001-01-01
The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer and fluid flow disciplines.
Maximizing Lifetime of Wireless Sensor Networks with Mobile Sink Nodes
Yourong Chen
2014-01-01
In order to maximize network lifetime and balance energy consumption when sink nodes can move, maximizing the lifetime of wireless sensor networks with mobile sink nodes (MLMS) is researched. A movement path selection method for sink nodes is proposed. A modified subtractive clustering method, the k-means method, and the nearest neighbor interpolation method are used to obtain the movement paths. The lifetime optimization model is established under flow constraints, energy consumption constraints, link transmission constraints, and other constraints. The model is solved from the perspective of static and mobile data gathering by sink nodes. The subgradient method is used to solve the lifetime optimization model when one sink node stays at one anchor location. A geometric method is used to evaluate the amount of gathered data when sink nodes are moving. Finally, all sensor nodes transmit data according to the optimal data transmission scheme. Sink nodes gather the data along the shortest movement paths. Simulation results show that MLMS can prolong network lifetime, balance node energy consumption, and reduce data gathering latency under appropriate parameters. Under certain conditions, it outperforms Ratio_w, TPGF, RCC, and GRND.
Partial AUC maximization for essential gene prediction using genetic algorithms.
Hwang, Kyu-Baek; Ha, Beom-Yong; Ju, Sanghun; Kim, Sangsoo
2013-01-01
Identifying genes indispensable for an organism's life and their characteristics is one of the central questions in current biological research, and hence it would be helpful to develop computational approaches towards the prediction of essential genes. The performance of a predictor is usually measured by the area under the receiver operating characteristic curve (AUC). We propose a novel method by implementing genetic algorithms to maximize the partial AUC that is restricted to a specific interval of lower false positive rate (FPR), the region relevant to follow-up experimental validation. Our predictor uses various features based on sequence information, protein-protein interaction network topology, and gene expression profiles. A feature selection wrapper was developed to alleviate the over-fitting problem and to weigh each feature's relevance to prediction. We evaluated our method using the proteome of budding yeast. Our implementation of genetic algorithms maximizing the partial AUC below 0.05 or 0.10 of FPR outperformed other popular classification methods.
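A sketch of the evaluation criterion the abstract describes: the area under the ROC curve restricted to a low-FPR interval. The straightforward threshold sweep below assumes distinct scores; the paper's own pipeline wraps this inside a genetic algorithm for feature weighting, which is not shown:

```python
def partial_auc(labels, scores, fpr_max):
    """Area under the ROC curve restricted to FPR in [0, fpr_max].

    Assumes distinct scores. Walks thresholds from the highest score
    down; each negative example advances FPR, adding a rectangle of
    width (FPR step) and height (current TPR) to the area.
    """
    pairs = sorted(zip(scores, labels), reverse=True)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    tp = fp = 0
    tpr = 0.0
    area, prev_fpr = 0.0, 0.0
    for _, y in pairs:
        if y == 1:
            tp += 1
            tpr = tp / n_pos
        else:
            fp += 1
            fpr = fp / n_neg
            step = min(fpr, fpr_max) - prev_fpr
            if step > 0:
                area += step * tpr
            prev_fpr = min(fpr, fpr_max)
            if fpr >= fpr_max:
                break
    return area

labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(partial_auc(labels, scores, 1.0))  # full AUC = 8/9 for this toy data
```

Restricting `fpr_max` to 0.05 or 0.10, as in the paper, scores a predictor only on the region relevant to follow-up experimental validation.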
Sensitivity Analysis of Component Reliability
ZhenhuaGe
2004-01-01
In a system, every component has its unique position within the system and its unique failure characteristics. When a component's reliability is changed, its effect on system reliability is not equal to that of other components. Component reliability sensitivity is a measure of the effect on system reliability when a component's reliability is changed. In this paper, the definition and relative matrix of component reliability sensitivity are proposed, and some of their characteristics are analyzed. All this will help us to analyse or improve system reliability.
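For a concrete instance of this sensitivity notion, the classical Birnbaum importance measure dR_sys/dp_i can be computed by comparing system reliability with component i forced working versus forced failed. The series-system example below is illustrative only, not taken from the paper:

```python
def series_reliability(ps):
    """Reliability of a series system: the product of component reliabilities."""
    r = 1.0
    for p in ps:
        r *= p
    return r

def birnbaum_importance(ps, i):
    """Birnbaum sensitivity dR_sys/dp_i = R(p_i = 1) - R(p_i = 0).

    For a series system this equals the product of the other
    components' reliabilities.
    """
    working = ps[:i] + [1.0] + ps[i + 1:]
    failed = ps[:i] + [0.0] + ps[i + 1:]
    return series_reliability(working) - series_reliability(failed)

ps = [0.99, 0.95, 0.90]
# In a series system, the least reliable component has the largest
# sensitivity: improving it helps system reliability the most.
print([birnbaum_importance(ps, i) for i in range(len(ps))])
```

The same two-evaluation trick works for any coherent system structure function, not just series systems.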
Reliability estimation in a multilevel confirmatory factor analysis framework.
Geldhof, G John; Preacher, Kristopher J; Zyphur, Michael J
2014-03-01
Scales with varying degrees of measurement reliability are often used in the context of multistage sampling, where variance exists at multiple levels of analysis (e.g., individual and group). Because methodological guidance on assessing and reporting reliability at multiple levels of analysis is currently lacking, we discuss the importance of examining level-specific reliability. We present a simulation study and an applied example showing different methods for estimating multilevel reliability using multilevel confirmatory factor analysis and provide supporting Mplus program code. We conclude that (a) single-level estimates will not reflect a scale's actual reliability unless reliability is identical at each level of analysis, (b) 2-level alpha and composite reliability (omega) perform relatively well in most settings, (c) estimates of maximal reliability (H) were more biased when estimated using multilevel data than either alpha or omega, and (d) small cluster size can lead to overestimates of reliability at the between level of analysis. We also show that Monte Carlo confidence intervals and Bayesian credible intervals closely reflect the sampling distribution of reliability estimates under most conditions. We discuss the estimation of credible intervals using Mplus and provide R code for computing Monte Carlo confidence intervals.
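As a reference point for the single-level estimates discussed above, Cronbach's alpha can be computed directly from item scores. This sketch uses the standard formula and hypothetical data, not the article's multilevel Mplus models:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score vectors of equal length.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score),
    with k items and population variances (divide by n).
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[j] for col in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Three identical items give alpha = 1 (perfect internal consistency).
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))
```

The article's point is that this single-level alpha conflates within- and between-cluster variance; a two-level alpha separates the two before applying the same logic at each level.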
Reliable conjunctive use rules for sustainable irrigated agriculture and reservoir spill control
Schoups, Gerrit; Addams, C. Lee; Minjares, Jose Luis; Gorelick, Steven M.
2006-12-01
We develop optimal conjunctive use water management strategies that balance two potentially conflicting objectives: sustaining irrigated agriculture during droughts and minimizing unnecessary spills and resulting water losses from the reservoir during wet periods. Conjunctive use is specified by a linear operating rule, which determines the maximum surface water release as a function of initial reservoir storage. Optimal strategies are identified using multiobjective interannual optimization for sustainability and spill control, combined with gradient-based annual profit maximization. Application to historical conditions in the irrigated system of the Yaqui Valley, Mexico, yields a Pareto curve of solutions illustrating the trade-off between sustaining agriculture and minimizing spills and water losses. Minimal water losses are obtained by maximizing surface water use and limiting groundwater pumping, such that reservoir levels are kept sufficiently low. Maximum agricultural sustainability, on the other hand, results from increased groundwater use and keeping surface water reservoir levels high during wet periods. Selected optimal operating rules from the multiobjective optimization are tested over a large number of equally probable streamflow time series, generated with a stochastic time series model. In this manner, statistical properties, such as the mean sustainability and sustainability percentiles, are determined for each optimal rule. These statistical properties can be used to select rules for water management that are reliable over a wide range of streamflow conditions.
Maximal subbundles, quot schemes, and curve counting
Gillam, W D
2011-01-01
Let $E$ be a rank 2, degree $d$ vector bundle over a genus $g$ curve $C$. The loci of stable pairs on $E$ in class $2[C]$ fixed by the scaling action are expressed as products of $\\Quot$ schemes. Using virtual localization, the stable pairs invariants of $E$ are related to the virtual intersection theory of $\\Quot E$. The latter theory is extensively discussed for an $E$ of arbitrary rank; the tautological ring of $\\Quot E$ is defined and is computed on the locus parameterizing rank one subsheaves. In case $E$ has rank 2, $d$ and $g$ have opposite parity, and $E$ is sufficiently generic, it is known that $E$ has exactly $2^g$ line subbundles of maximal degree. Doubling the zero section along such a subbundle gives a curve in the total space of $E$ in class $2[C]$. We relate this count of maximal subbundles with stable pairs/Donaldson-Thomas theory on the total space of $E$. This endows the residue invariants of $E$ with enumerative significance: they actually \\emph{count} curves in $E$.
Maximal coherence in a generic basis
Yao, Yao; Dong, G. H.; Ge, Li; Li, Mo; Sun, C. P.
2016-12-01
Since quantum coherence is an undoubted characteristic trait of quantum physics, the quantification and application of quantum coherence has been one of the long-standing central topics in quantum information science. Within the framework of a resource theory of quantum coherence proposed recently, a fiducial basis should be preselected for characterizing the quantum coherence in specific circumstances, namely, the quantum coherence is a basis-dependent quantity. Therefore, a natural question is raised: what are the maximum and minimum coherences contained in a certain quantum state with respect to a generic basis? While the minimum case is trivial, it is not so intuitive to verify in which basis the quantum coherence is maximal. Based on the coherence measure of relative entropy, we indicate the particular basis in which the quantum coherence is maximal for a given state, where the Fourier matrix (or more generally, complex Hadamard matrices) plays a critical role in determining the basis. Intriguingly, though we can prove that the basis associated with the Fourier matrix is a stationary point for optimizing the l1 norm of coherence, numerical simulation shows that it is not a global optimal choice.
Symmetry and approximability of submodular maximization problems
Vondrak, Jan
2011-01-01
A number of recent results on optimization problems involving submodular functions have made use of the multilinear relaxation of the problem. These results hold typically in the value oracle model, where the objective function is accessible via a black box returning f(S) for a given S. We present a general approach to deriving inapproximability results in the value oracle model, based on the notion of symmetry gap. Our main result is that for any fixed instance that exhibits a certain symmetry gap in its multilinear relaxation, there is a naturally related class of instances for which a better approximation factor than the symmetry gap would require exponentially many oracle queries. This unifies several known hardness results for submodular maximization, and implies several new ones. In particular, we prove that there is no constant-factor approximation for the problem of maximizing a non-negative submodular function over the bases of a matroid. We also provide a closely matching approximation algorithm for...
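For contrast with the hardness result above: the canonical positive result such bounds are measured against is the greedy baseline for monotone submodular maximization under a cardinality constraint, which attains a (1 − 1/e) factor on max coverage. A minimal sketch on a hypothetical toy instance, not code from the paper:

```python
def greedy_max_cover(sets, k):
    # Classical greedy for max coverage (a monotone submodular objective under
    # a cardinality constraint): repeatedly pick the set with the largest
    # marginal gain. Guarantees a (1 - 1/e) fraction of the optimum.
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no remaining set adds anything
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Hypothetical instance: pick 2 of 4 sets
print(greedy_max_cover([{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 5}], 2))
# ([2, 0], {1, 2, 3, 4, 5, 6, 7})
```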
Maritime shipping as a high reliability industry: A qualitative analysis
Mannarelli, T.; Roberts, K.; Bea, R.
1994-10-01
The maritime oil shipping industry has great public demands for safe and reliable organizational performance. Researchers have identified a set of organizations and industries that operate at extremely high levels of reliability, and have labelled them High Reliability Organizations (HRO). Following the Exxon Valdez oil spill disaster of 1989, public demands for HRO-level operations were placed on the oil industry. It will be demonstrated that, despite enormous improvements in safety and reliability, maritime shipping is not operating as an HRO industry. An analysis of the organizational, environmental, and cultural history of the oil industry will help to provide justification and explanation. The oil industry will be contrasted with other HRO industries and the differences will inform the shortfalls maritime shipping experiences with regard to maximizing reliability. Finally, possible solutions for the achievement of HRO status will be offered.
Maximally informative dimensions: analyzing neural responses to natural signals
Sharpee, T; Bialek, W; Sharpee, Tatyana; Rust, Nicole C.; Bialek, William
2002-01-01
We propose a method that would allow for a rigorous statistical analysis of neural responses to natural stimuli, which are non-Gaussian and exhibit strong correlations. We have in mind a model in which neurons are selective for a small number of stimulus dimensions out of the high dimensional stimulus space, but within this subspace the responses can be arbitrarily nonlinear. Existing analysis methods are based on correlation functions between stimuli and responses, but these methods are guaranteed to work only in the case of Gaussian stimulus ensembles. As an alternative to correlation functions, we maximize the mutual information between the neural responses and projections of the stimulus onto low dimensional subspaces. The procedure can be done iteratively by increasing the dimensionality of this subspace. Those dimensions that allow the recovery of all of the information between spikes and the full unprojected stimuli describe the relevant subspace. If the dimensionality of the relevant subspace indeed i...
Analyzing neural responses to natural signals: maximally informative dimensions
Sharpee, T; Bialek, W; Sharpee, Tatyana; Rust, Nicole C.; Bialek, William
2002-01-01
We propose a method that allows for a rigorous statistical analysis of neural responses to natural stimuli which are non-Gaussian and exhibit strong correlations. We have in mind a model in which neurons are selective for a small number of stimulus dimensions out of a high dimensional stimulus space, but within this subspace the responses can be arbitrarily nonlinear. Existing analysis methods are based on correlation functions between stimuli and responses, but these methods are guaranteed to work only in the case of Gaussian stimulus ensembles. As an alternative to correlation functions, we maximize the mutual information between the neural responses and projections of the stimulus onto low dimensional subspaces. The procedure can be done iteratively by increasing the dimensionality of this subspace. Those dimensions that allow the recovery of all of the information between spikes and the full unprojected stimuli describe the relevant subspace. If the dimensionality of the relevant subspace indeed is small,...
MaxAlign: maximizing usable data in an alignment
Oliveira, Rodrigo Gouveia; Sackett, Peter Wad; Pedersen, Anders Gorm
2007-01-01
BACKGROUND: The presence of gaps in an alignment of nucleotide or protein sequences is often an inconvenience for bioinformatical studies. In phylogenetic and other analyses, for instance, gapped columns are often discarded entirely from the alignment. RESULTS: MaxAlign is a program that optimizes the alignment prior to such analyses. Specifically, it maximizes the number of nucleotide (or amino acid) symbols that are present in gap-free columns - the alignment area - by selecting the optimal subset of sequences to exclude from the alignment. MaxAlign can be used prior to phylogenetic and bioinformatical analyses, as well as in other situations where this form of alignment improvement is useful. In this work we test MaxAlign's performance in these tasks and compare the accuracy of phylogenetic estimates including and excluding gapped columns from the analysis, with and without processing with MaxAlign…
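The "alignment area" objective can be stated in a few lines. Below is a brute-force sketch that scores every subset of sequences, workable only for tiny alignments (MaxAlign itself uses a heuristic search); the example alignment is hypothetical:

```python
from itertools import combinations

def alignment_area(rows):
    # Number of symbols sitting in gap-free columns:
    # (#rows kept) * (#columns containing no '-'). Rows assumed equal length.
    if not rows:
        return 0
    ncols = len(rows[0])
    gap_free = sum(all(r[c] != '-' for r in rows) for c in range(ncols))
    return len(rows) * gap_free

def best_subset(alignment):
    # Exhaustive search over which sequences to keep (exponential; fine only
    # for toy inputs)
    best = (0, alignment)
    for k in range(1, len(alignment) + 1):
        for subset in combinations(alignment, k):
            area = alignment_area(list(subset))
            if area > best[0]:
                best = (area, list(subset))
    return best

print(best_subset(["ACGT-", "ACGTA", "AC-TA", "ACGTA"]))
```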
Maximizing the efficiency of multienzyme process by stoichiometry optimization.
Dvorak, Pavel; Kurumbang, Nagendra P; Bendl, Jaroslav; Brezovsky, Jan; Prokop, Zbynek; Damborsky, Jiri
2014-09-05
Multienzyme processes represent an important area of biocatalysis. Their efficiency can be enhanced by optimization of the stoichiometry of the biocatalysts. Here we present a workflow for maximizing the efficiency of a three-enzyme system catalyzing a five-step chemical conversion. Kinetic models of pathways with wild-type or engineered enzymes were built, and the enzyme stoichiometry of each pathway was optimized. Mathematical modeling and one-pot multienzyme experiments provided detailed insights into pathway dynamics, enabled the selection of a suitable engineered enzyme, and afforded high efficiency while minimizing biocatalyst loadings. Optimizing the stoichiometry in a pathway with an engineered enzyme reduced the total biocatalyst load by an impressive 56 %. Our new workflow represents a broadly applicable strategy for optimizing multienzyme processes.
Increasing Efficiency by Maximizing Electrical Output
2016-08-01
…installation without the need for further demonstration. This project both (1) allows us to roll out this solution to other installations, and (2) … While the requirements of our system are not that demanding (i.e., we have 4 hoses and a wire that connect to the outside world), the ease with which … was limited run time. This precluded any steady-state running that allows for more reliable efficiency results. The ORCA system draws more power…
Silvia, Paul J.
2011-01-01
The present research examined the reliability of three types of divergent thinking tasks (unusual uses, instances, consequences/implications) and two types of subjective scoring (an average across all responses vs. the responses people chose as their top-two responses) within a latent variable framework, using the maximal-reliability "H"…
THE AIRLINE'S RELIABILITY PROGRAM
Тамаргазін, О. А.; Національний авіаційний університет; Власенко, П. О.; Національний авіаційний університет
2013-01-01
The airline's operational structure for Reliability program implementation — engineering division, reliability division, reliability control division, aircraft maintenance division, quality assurance division — was considered. The airline's Reliability program structure is shown. Use of the Reliability program to reduce aircraft maintenance costs is proposed.
Shapiro, Andrew A.
2006-01-01
Ultra reliable systems are critical to NASA, particularly as consideration is being given to extended lunar missions and manned missions to Mars. NASA has formulated a program designed to improve the reliability of NASA systems. The long-term goal of the NASA ultra-reliability program is to improve NASA systems by an order of magnitude. The approach outlined in this presentation involves the steps used in developing a strategic plan to achieve the long-term objective of ultra reliability. Consideration is given to: complex systems, hardware (including aircraft, aerospace craft and launch vehicles), software, human interactions, long life missions, infrastructure development, and cross cutting technologies. Several NASA-wide workshops have been held, identifying issues for reliability improvement and providing mitigation strategies for these issues. In addition to representation from all of the NASA centers, experts from government (NASA and non-NASA), universities and industry participated. Highlights of a strategic plan, which is being developed using the results from these workshops, will be presented.
Photovoltaic module reliability workshop
Mrig, L. (ed.)
1990-01-01
The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of Workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986-1990. The reliability of photovoltaic (PV) modules/systems, along with the initial cost and efficiency of modules, is exceedingly important if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, there is still need for substantial research and testing to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.
Holistic Admissions after Affirmative Action: Does "Maximizing" the High School Curriculum Matter?
Bastedo, Michael N.; Howard, Joseph E.; Flaster, Allyson
2016-01-01
Selective colleges and universities purport to consider students' achievement in the context of the academic opportunities available in their high schools. Thus, students who "maximize" their curricular opportunities should be more likely to gain admission. Using nationally representative data, we examine the effect of "maximizing…
Maximal lattice free bodies, test sets and the Frobenius problem
Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt
Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral matrix… The method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune…
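As a side note on the Frobenius problem mentioned in the title: for two coprime generators Sylvester's closed form g(a, b) = ab − a − b applies, and small instances can be checked by brute force. An illustrative sketch, unrelated to the paper's Groebner-basis algorithm:

```python
from math import gcd

def frobenius_pair(a, b):
    # Sylvester's closed form for two coprime generators: g(a, b) = ab - a - b
    assert gcd(a, b) == 1
    return a * b - a - b

def not_representable(gens, bound):
    # Brute-force check: integers in [0, bound] that are not nonnegative
    # integer combinations of the generators
    reachable = [True] + [False] * bound
    for n in range(1, bound + 1):
        reachable[n] = any(n >= g and reachable[n - g] for g in gens)
    return [n for n in range(bound + 1) if not reachable[n]]

print(frobenius_pair(3, 5))           # 7
print(not_representable([3, 5], 15))  # [1, 2, 4, 7]
```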
Reliability Centered Maintenance - Methodologies
Kammerer, Catherine C.
2009-01-01
Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.
Maximizing scientific knowledge from randomized clinical trials
Gustafsson, Finn; Atar, Dan; Pitt, Bertram;
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly variable. Generation of trial databases and/or biobanks originating in large randomized clinical trials has successfully increased the knowledge obtained from those trials. At the 10th Cardiovascular Trialist Workshop, possibilities and pitfalls in designing and accessing clinical trial databases were discussed, in particular with respect to collaboration with the trial sponsor and to analytic pitfalls. The advantages of creating screening databases in conjunction with a given clinical trial are described; and finally, the potential for posttrial database studies to become a platform for training young scientists…
Characterizing maximally singular phase-space distributions
Sperling, J.
2016-07-01
Phase-space distributions are widely applied in quantum optics to access the nonclassical features of radiation fields. In particular, the inability to interpret the Glauber-Sudarshan distribution in terms of a classical probability density is the fundamental benchmark for quantum light. However, this phase-space distribution cannot be directly reconstructed for arbitrary states, because of its singular behavior. In this work, we perform a characterization of the Glauber-Sudarshan representation in terms of distribution theory. We address important features of such distributions: (i) the maximal degree of their singularities is studied, (ii) the ambiguity of representation is shown, and (iii) their dual space for nonclassicality tests is specified. In this view, we reconsider the methods for regularizing the Glauber-Sudarshan distribution for verifying its nonclassicality. This treatment is supported with comprehensive examples and counterexamples.
Maximization of eigenvalues using topology optimization
Pedersen, Niels Leergaard
2000-01-01
Topology optimization is used to optimize the eigenvalues of plates. The results are intended especially for MicroElectroMechanical Systems (MEMS) but can be seen as more general. The problem is not formulated as a case of reinforcement of an existing structure, so there is a problem related to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency. One example is a practical MEMS application: a probe used in an Atomic Force Microscope (AFM). For the AFM probe the optimization is complicated by a constraint on the stiffness and constraints on higher order eigenvalues…
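The SIMP method referenced above interpolates element stiffness as a power law of the design density, penalizing intermediate densities so the optimizer is driven toward 0/1 designs. A minimal sketch of the standard interpolation, in textbook form with assumed default parameters, not the paper's code:

```python
def simp_stiffness(x, E0=1.0, Emin=1e-9, p=3.0):
    # SIMP interpolation E(x) = Emin + x^p (E0 - Emin): the power p > 1
    # penalizes intermediate densities x in (0, 1), pushing the optimized
    # design toward void (x = 0) or solid (x = 1) elements, while Emin keeps
    # the stiffness matrix nonsingular in void regions.
    return Emin + x ** p * (E0 - Emin)

print(simp_stiffness(0.5))  # ≈ 0.125: half density yields only 1/8 stiffness
```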
Reflection Quasilattices and the Maximal Quasilattice
Boyle, Latham
2016-01-01
We introduce the concept of a reflection quasilattice, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e. Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those in Penrose tilings, but with different symmetries and in various dimensions. Here we prove that reflection quasilattices only exist in dimensions two, three and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. We further show that, unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors…
Distributed Maximality based CTL Model Checking
Djamel Eddine Saidouni
2010-05-01
In this paper we investigate an approach for performing distributed CTL model checking on a network of workstations using Kleene's three-valued logic. The state space is partitioned among the network nodes, and the incomplete state spaces are represented as Maximality-based Labeled Transition Systems (MLTS), which can express true concurrency. Each node executes the same algorithm in parallel for a given property on an incomplete MLTS, computing the set of states that satisfy the property, the set that fail it, and the set assigned the third value, unknown, because the partial state space lacks sufficient information for a precise answer with respect to the complete state space. To resolve the unknowns, the nodes exchange the information needed to conclude the result for the complete state space. An experimental version of the algorithm is currently being implemented in the functional programming language Erlang.
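The three-valued logic referred to is Kleene's strong logic; below is a minimal sketch of its conjunction and disjunction, with None encoding the "unknown" verdict that arises on a partial state space (the Python encoding is our assumption):

```python
# Kleene's strong three-valued logic over {True, False, None},
# with None standing for "unknown"
def k_and(a, b):
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

# False absorbs conjunction even against "unknown"; True absorbs disjunction
print(k_and(False, None), k_or(True, None), k_and(True, None))  # False True None
```

This is what lets a node return a definite verdict early: one definite False settles a conjunction regardless of the unknowns held by other nodes.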
Evolution of correlated multiplexity through stability maximization
Dwivedi, Sanjiv K
2016-01-01
Investigating the relation between various structural patterns found in real-world networks and the stability of the underlying systems is crucial to understanding the importance and evolutionary origin of such patterns. We evolve multiplex networks, comprising anti-symmetric couplings in one layer, depicting predator-prey relations, and symmetric couplings in the other, depicting mutualistic (or competitive) relations, based on stability maximization through the largest eigenvalue. We find that correlated multiplexity emerges as evolution progresses. The evolved values of the correlated multiplexity exhibit a dependence on the inter-layer coupling strength. Furthermore, the inter-layer coupling strength governs the evolution of the disassortativity property in the individual layers. We provide an analytical understanding of these findings by considering star-like networks in both layers. The model and tools used here are useful for understanding the principles governing stability as well as the importance of such patterns in …
Witten spinors on maximal, conformally flat hypersurfaces
Frauendiener, Jörg; Szabados, László B
2011-01-01
The boundary conditions that exclude zeros of the solutions of the Witten equation (and hence guarantee the existence of a 3-frame satisfying the so-called special orthonormal frame gauge conditions) are investigated. We determine the general form of the conformally invariant boundary conditions for the Witten equation, and find the boundary conditions that characterize the constant and the conformally constant spinor fields among the solutions of the Witten equations on compact domains in extrinsically and intrinsically flat, and on maximal, intrinsically globally conformally flat spacelike hypersurfaces, respectively. We also provide a number of exact solutions of the Witten equation with various boundary conditions (both at infinity and on inner or outer boundaries) that single out nowhere vanishing spinor fields on the flat, non-extreme Reissner–Nordström and Brill–Lindquist data sets. Our examples show that there is an interplay between the boundary conditions, the global topology of the hypersurface…
Greedy Maximal Scheduling in Wireless Networks
Li, Qiao
2010-01-01
In this paper we consider greedy scheduling algorithms in wireless networks, i.e., the schedules are computed by adding links greedily based on some priority vector. Two special cases are considered: 1) Longest Queue First (LQF) scheduling, where the priorities are computed using queue lengths, and 2) Static Priority (SP) scheduling, where the priorities are pre-assigned. We first propose a closed-form lower bound stability region for LQF scheduling, and discuss the tightness result in some scenarios. We then propose a lower bound stability region for SP scheduling with multiple priority vectors, as well as a heuristic priority assignment algorithm, which is related to the well-known Expectation-Maximization (EM) algorithm. The performance gain of the proposed heuristic algorithm is finally confirmed by simulations.
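The greedy scheduling idea can be sketched in a few lines: scan links in decreasing priority (queue lengths for LQF, fixed values for SP) and add each link that does not conflict with one already chosen. An illustrative sketch with a hypothetical symmetric conflict relation:

```python
def greedy_schedule(conflicts, priority):
    # Greedy maximal scheduling: visit links in decreasing priority and add
    # each link that does not conflict with an already-scheduled link.
    # With queue lengths as priorities this is LQF; with fixed priorities, SP.
    # The conflict relation is assumed symmetric.
    scheduled = []
    for link in sorted(priority, key=priority.get, reverse=True):
        if all(link not in conflicts.get(s, set()) for s in scheduled):
            scheduled.append(link)
    return scheduled

# Hypothetical 4-link network: a-b and b-c interfere; d is independent
conflicts = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}, 'd': set()}
print(greedy_schedule(conflicts, {'a': 3, 'b': 5, 'c': 4, 'd': 1}))  # ['b', 'd']
```

The result is a maximal independent set of links: no unscheduled link could be added without a conflict.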
A New Biflavone from Selaginella pulvinata Maxim
XU Kang-Ping; XU Zhi; DENG Yin-Hua; LI Fu-Shuang; ZHOU Ying-Jun; HU Gao-Yun; TAN Gui-Shan
2003-01-01
Selaginella pulvinata Maxim. is distributed throughout China and is used for the treatment of haemorrhage. [1] We studied the chemical constituents of S. pulvinata in order to find the active compounds. Dried stems and leaves of S. pulvinata (6.5 kg) were extracted with 70% ethanol twice. The extract was evaporated under vacuum and then suspended in water and extracted with petroleum ether and EtOAc sequentially. The EtOAc extract was chromatographed on silica gel, eluted with CHCl3-MeOH. As a result, a novel biflavone, named pulvinatabiflavone, was obtained from fractions 75-78. Its structure was determined on the basis of spectroscopic analysis as 5,5″,4‴-trihydroxy-7,7″-dimethoxy-[4′-O-6″]-biflavone (compound 1).
Maximal energy extraction under discrete diffusive exchange
Hay, M. J., E-mail: hay@princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Schiff, J. [Department of Mathematics, Bar-Ilan University, Ramat Gan 52900 (Israel); Fisch, N. J. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2015-10-15
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
Maximal energy extraction under discrete diffusive exchange
Hay, Michael J; Fisch, Nathaniel J
2015-01-01
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
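A toy version of the optimization described in the two records above: if one assumes the diffusively reachable set were the full set of occupation vectors majorized by the initial one (the paper shows the truly reachable set is smaller, hence the linear programming formulation), then a linear energy functional is extremized at a vertex, i.e. at a permutation of the initial densities. This is our illustrative sketch under that stated assumption, not the paper's method:

```python
import numpy as np

def max_extractable_energy(energies, densities):
    # Assumption: the reachable set is the permutohedron of the initial
    # occupation vector (all vectors it majorizes). A linear energy functional
    # is then minimized at a vertex; by the rearrangement inequality, that
    # vertex pairs the largest occupations with the lowest energy levels.
    e = np.asarray(energies, float)
    p = np.asarray(densities, float)
    initial = float(e @ p)
    final = float(np.sort(e) @ np.sort(p)[::-1])  # sorted rearrangement
    return initial - final

print(max_extractable_energy([0.0, 1.0, 2.0], [0.2, 0.3, 0.5]))  # ≈ 0.6
```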
Bruce Weaver
2014-09-01
Missing data is a frequent problem for researchers conducting exploratory factor analysis (EFA) or reliability analysis. The SPSS FACTOR procedure allows users to select listwise deletion, pairwise deletion, or mean substitution as a method for dealing with missing data. The shortcomings of these methods are well known. Graham (2009) argues that a much better way to deal with missing data in this context is to use a matrix of expectation-maximization (EM) covariances (or correlations) as input for the analysis. SPSS users who have the Missing Values Analysis add-on module can obtain vectors of EM means and standard deviations plus EM correlation and covariance matrices via the MVA procedure. Unfortunately, MVA has no /MATRIX subcommand, and therefore cannot write the EM correlations directly to a matrix dataset of the type needed as input to the FACTOR and RELIABILITY procedures. We describe two macros that (in conjunction with an intervening MVA command) carry out the data management steps needed to create two matrix datasets, one containing EM correlations and the other EM covariances. Either of those matrix datasets can then be used as input to the FACTOR procedure, and the EM correlations can also be used as input to RELIABILITY. We provide an example that illustrates the use of the two macros to generate the matrix datasets and how to use those datasets as input to the FACTOR and RELIABILITY procedures. We hope that this simple method for handling missing data will prove useful to both students and researchers who are conducting EFA or reliability analysis.
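Outside SPSS, the same EM covariance estimate can be computed directly. Below is a minimal hand-rolled EM for a multivariate normal with np.nan marking missing entries; this is our illustrative implementation, not the authors' macros:

```python
import numpy as np

def em_mvnorm(X, n_iter=100):
    # EM estimate of the mean and ML covariance of a multivariate normal from
    # data with missing entries (np.nan) -- the kind of covariance matrix
    # Graham (2009) recommends as input for factor/reliability analysis,
    # instead of listwise/pairwise deletion or mean substitution.
    X = np.asarray(X, float)
    n, d = X.shape
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)                 # start: column means,
    sigma = np.diag(np.nanvar(X, axis=0))      # diagonal covariance
    for _ in range(n_iter):
        sum_x, sum_xx = np.zeros(d), np.zeros((d, d))
        for i in range(n):
            m, x = miss[i], X[i].copy()
            cov_mm = np.zeros((d, d))
            if m.any():
                o = ~m
                reg = sigma[np.ix_(m, o)] @ np.linalg.inv(sigma[np.ix_(o, o)])
                x[m] = mu[m] + reg @ (x[o] - mu[o])   # conditional mean
                cov_mm[np.ix_(m, m)] = (sigma[np.ix_(m, m)]
                                        - reg @ sigma[np.ix_(o, m)])
            sum_x += x
            sum_xx += np.outer(x, x) + cov_mm   # E-step sufficient statistics
        mu = sum_x / n                           # M-step
        sigma = sum_xx / n - np.outer(mu, mu)
    return mu, sigma
```

The EM correlation matrix then follows as `sigma / np.outer(sd, sd)` with `sd = np.sqrt(np.diag(sigma))`, and can serve the same role as the MVA output described above.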
Gearbox Reliability Collaborative Update (Presentation)
Sheng, S.; Keller, J.; Glinsky, C.
2013-10-01
This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…
From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin
Eliazar, Iddo
2014-12-01
The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that, on a logarithmic scale, the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
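The Gauss/Laplace analogy above can be checked numerically: under a fixed mean absolute deviation E|X|, the Laplace density maximizes differential entropy, beating a Gaussian tuned to the same E|X|. A small sketch, with entropies in nats:

```python
import numpy as np

# Differential entropies (nats) of a Laplace and a Gaussian density tuned to
# the same mean absolute deviation E|X| = 1. The Laplace law maximizes entropy
# under this constraint, just as the Gauss law does under fixed variance.
b = 1.0                            # Laplace scale: E|X| = b
h_laplace = 1.0 + np.log(2.0 * b)  # = 1 + ln 2 ≈ 1.693
sigma = np.sqrt(np.pi / 2.0)       # Gaussian with E|X| = sigma * sqrt(2/pi) = 1
h_gauss = 0.5 * np.log(2.0 * np.pi * np.e * sigma ** 2)  # ≈ 1.645
print(h_laplace > h_gauss)  # True
```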
THE EFFECTS OF MAXIMAL AND SUBMAXIMAL AEROBIC EXERCISE ON BRONCHOSPASM INDICES IN NON-ATHLETES
Amir GANJİ
2012-08-01
Background: Exercise-induced bronchospasm (EIB) is a transient airway obstruction that occurs during and after exercise. It is observed in healthy individuals as well as in asthmatic and allergic rhinitis patients. Research question: The study compared the effects of a single session of submaximal aerobic exercise and of a maximal one on the prevalence of exercise-induced bronchospasm in non-athletic students. Type of study: An experimental study using human subjects was designed. Methods: 20 non-athletic male students participated in two sessions of aerobic exercise, and the prevalence of EIB was investigated among them. The criteria for assessing exercise-induced bronchospasm were a ≥10% fall in FEV1, a ≥15% fall in FEF25-75%, or a ≥25% fall in PEFR. Results: The maximal exercise did not affect FEF25-75% or PEF, but it led to a meaningful reduction in FEV1; the submaximal exercise affected none of these indices. That is, in both protocols the same result was obtained for PEF and FEF25-75%. Moreover, the prevalence of EIB was 15% after the submaximal exercise and 20% after the maximal one, a significant difference. Conclusion: This study demonstrated that, in contrast to the subjects who performed submaximal exercise, those who participated in the maximal protocol showed greater changes in the pulmonary function indices, and the prevalence of EIB was higher among them.
Foxall, Gordon R; Oliveira-Castro, Jorge M; Schrezenmaier, Teresa C
2004-06-30
Purchasers of fast-moving consumer goods generally exhibit multi-brand choice, selecting apparently randomly among a small subset or "repertoire" of tried and trusted brands. Their behavior shows both matching and maximization, though it is not clear just what the majority of buyers are maximizing. Each brand attracts, however, a small percentage of consumers who are 100%-loyal to it during the period of observation. Some of these are exclusively buyers of premium-priced brands who are presumably maximizing informational reinforcement because their demand for the brand is relatively price-insensitive or inelastic. Others buy exclusively the cheapest brands available and can be assumed to maximize utilitarian reinforcement since their behavior is particularly price-sensitive or elastic. Between them are the majority of consumers whose multi-brand buying takes the form of selecting a mixture of economy- and premium-priced brands. Based on the analysis of buying patterns of 80 consumers for 9 product categories, the paper examines the continuum of consumers so defined and seeks to relate their buying behavior to the question of how and what consumers maximize.
System Reliability Analysis: Foundations.
1982-07-01
Performance formulas for systems subject to preventive maintenance are given. … SYSTEM RELIABILITY ANALYSIS: FOUNDATIONS, Richard E… Reliability in this case is P{s can communicate with the terminal t} = h(p) … For undirected networks, the basic reference is A. Satyanarayana and Kevin Wood (1982). For directed networks, the basic reference is Avinash…
Maximal elements of non necessarily acyclic binary relations
Josep Enric Peris Ferrando; Begoña Subiza Martínez
1992-01-01
The existence of maximal elements for binary preference relations is analyzed without imposing transitivity or convexity conditions. From each preference relation a new acyclic relation is defined in such a way that some maximal elements of this new relation characterize maximal elements of the original one. The result covers the case whereby the relation is acyclic.
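The notion of maximal element used above is directly executable: x is maximal if no alternative is strictly preferred to it. The toy relation below (our hypothetical example) is cyclic, yet a maximal element still exists:

```python
def maximal_elements(X, strictly_better):
    # x is maximal iff there is no y in X with strictly_better(y, x)
    return [x for x in X if not any(strictly_better(y, x) for y in X)]

# A cyclic (hence non-acyclic, non-transitive) strict relation on {a, b, c, d}:
# a beats b, b beats c, c beats a (a 3-cycle), and nothing beats d.
beats = {('a', 'b'), ('b', 'c'), ('c', 'a')}
print(maximal_elements(['a', 'b', 'c', 'd'], lambda y, x: (y, x) in beats))  # ['d']
```

Every element of the 3-cycle is beaten by another, so only d survives, illustrating how maximal elements can exist without acyclicity of the full relation.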
Reliabilities of genomic estimated breeding values in Danish Jersey
Thomasen, Jørn Rind; Guldbrandtsen, Bernt; Su, Guosheng;
2012-01-01
In order to optimize the use of genomic selection in breeding plans, it is essential to have reliable estimates of the genomic breeding values. This study investigated reliabilities of direct genomic values (DGVs) in the Jersey population estimated by three different methods. The validation methods … of DGV. The data set consisted of 1003 Danish Jersey bulls with conventional estimated breeding values (EBVs) for 14 different traits included in the Nordic selection index. The bulls were genotyped for single-nucleotide polymorphism (SNP) markers using the Illumina 54K chip. A Bayesian method was used … index pre-selection only. Averaged across traits, the estimates of reliability of DGVs ranged from 0.20 for validation on the most recent 3 years of bulls up to 0.42 for expected reliabilities. Reliabilities from the cross-validation were on average 0.24. For the individual traits, the reliability…
Reliability of chemical analyses of water samples
Beardon, R.
1989-11-01
Ground-water quality investigations require reliable chemical analyses of water samples. Unfortunately, laboratory analytical results are often unreliable. The Uranium Mill Tailings Remedial Action (UMTRA) Project's solution to this problem was to establish a two-phase quality assurance program for the analysis of water samples. In the first phase, eight laboratories analyzed three solutions of known composition. The analytical accuracy of each laboratory was ranked and three laboratories were awarded contracts. The second phase consists of ongoing monitoring of the reliability of the selected laboratories. The following conclusions are based on two years' experience with the UMTRA Project's Quality Assurance Program. The reliability of laboratory analyses should not be taken for granted. Analytical reliability may be independent of the prices charged by laboratories. Quality assurance programs benefit both the customer and the laboratory.
Reliability and Validity of Dual-Task Mobility Assessments in People with Chronic Stroke.
Lei Yang
The ability to perform a cognitive task while walking simultaneously (dual-tasking) is important in real life. However, the psychometric properties of dual-task walking tests have not been well established in stroke. To assess the test-retest reliability and the concurrent and known-groups validity of various dual-task walking tests in people with chronic stroke. Observational measurement study with a test-retest design. Eighty-eight individuals with chronic stroke participated. The testing protocol involved four walking tasks (walking forward at self-selected and maximal speed, walking backward at self-selected speed, and crossing over obstacles) performed simultaneously with each of three attention-demanding tasks (verbal fluency, serial 3 subtractions, or carrying a cup of water). For each dual-task condition, the time taken to complete the walking task, the correct response rate (CRR) of the cognitive task, and the dual-task effect (DTE) for the walking time and CRR were calculated. Forty-six of the participants were tested twice within 3-4 days to establish test-retest reliability. The walking time in various dual-task assessments demonstrated good to excellent reliability [intraclass correlation coefficient (ICC2,1) = 0.70-0.93; relative minimal detectable change at the 95% confidence level (MDC95%) = 29%-45%]. The reliability of the CRR (ICC2,1 = 0.58-0.81) and the DTE in walking time (ICC2,1 = 0.11-0.80) was more varied. The reliability of the DTE in CRR (ICC2,1 = -0.31-0.40) was poor to fair. The walking time and CRR obtained in various dual-task walking tests were moderately to strongly correlated with those of the dual-task Timed-Up-and-Go test, thus demonstrating good concurrent validity. None of the tests could discriminate fallers (those who had sustained at least one fall in the past year) from non-fallers. The results are generalizable to community-dwelling individuals with chronic stroke only. The walking time derived from the various dual
Expert system aids reliability
Johnson, A.T. [Tennessee Gas Pipeline, Houston, TX (United States)
1997-09-01
Quality and reliability are key requirements in the energy transmission industry. Tennessee Gas Co., a division of El Paso Energy, has applied Gensym's G2, an object-oriented expert-system programming language, as a standard tool for maintaining and improving quality and reliability in pipeline operation. Tennessee created a small team of gas controllers and engineers to develop a Proactive Controller's Assistant (ProCA) that provides recommendations for operating the pipeline more efficiently, reliably and safely. The controllers' pipeline operating knowledge is recreated in G2 in the form of rules and procedures in ProCA. Two G2 programmers supporting the Gas Control Room add information to the ProCA knowledge base daily. The result is a dynamic, constantly improving system that supports not only the pipeline controllers in their operations, but also the measurement and communications departments' requests for special studies. The Proactive Controller's Assistant development focuses on the following areas: alarm management; pipeline efficiency; reliability; fuel efficiency; and controller development.
Reliability based structural design
Vrouwenvelder, A.C.W.M.
2014-01-01
According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A ke
Reliability based structural design
Vrouwenvelder, A.C.W.M.
2013-01-01
According to ISO 2394, structures shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life in an economic way. To fulfil this requirement one needs insight into the risk and reliability under expected and non-expected actions. A ke
Parametric Mass Reliability Study
Holt, James P.
2014-01-01
The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass-such as computer housings, pump casings, and the silicon board of PCBs-typically are the most reliable. Meanwhile components that tend to fail the earliest-such as seals or gaskets-typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.
Avionics Design for Reliability
1976-03-01
Consultant, P.O. Box 181, Hazelwood, Missouri 63042, U.S.A. CONTENTS: LIST OF SPEAKERS; INTRODUCTION AND OVERVIEW - RELIABILITY UNDER... essential, all the more so since in this case the reliability selection procedure is rather inefficient. The distribution of failures follows
1980-01-01
The reliability of a wind energy system depends on the size of the propeller and the size of the back-up energy storage. Design of the optimum system... speed incidents which generate a significant part of the wind energy. A nomogram is presented, based on some continuous wind speed measurements
Visser, M
1997-01-01
The "reliability horizon" for semi-classical quantum gravity quantifies the extent to which we should trust semi-classical quantum gravity, and gives a handle on just where the "Planck regime" resides. The key obstruction to pushing semi-classical quantum gravity into the Planck regime is often the existence of large metric fluctuations, rather than a large back-reaction.
Reliability of semiology description.
Heo, Jae-Hyeok; Kim, Dong Wook; Lee, Seo-Young; Cho, Jinwhan; Lee, Sang-Kun; Nam, Hyunwoo
2008-01-01
Seizure semiology is important for classifying patients' epilepsy. Physicians usually get most of the seizure information from observers, though there have been few reports on the reliability of observers' descriptions. This study aims at determining the reliability of observers' descriptions of the semiology. We included 92 patients who had their habitual seizures recorded during video-EEG monitoring. We compared the semiology described by the observers with that recorded on the videotape, and reviewed which characteristics of the observers affected the reliability of their reported data. The classification of seizures and the individual components of the semiology based only on the observer description were somewhat discordant compared with the findings from the videotape (correct classification, 85%). The descriptions of some ictal behaviors, such as oroalimentary automatism, tonic/dystonic limb posturing, and head version, were relatively accurate, but those of motionless staring and hand automatism were less accurate. The directions specified by the observers were relatively correct. The accuracy of the description was related to the educational level of the observers. Much of the information described by well-educated observers is reliable. However, every physician should keep in mind the limitations of this information and use it cautiously.
High reliability organizations
Gallis, R.; Zwetsloot, G.I.J.M.
2014-01-01
High Reliability Organizations (HROs) are organizations that constantly face serious and complex (safety) risks yet succeed in realising an excellent safety performance. In such situations, acceptable levels of safety cannot be achieved by traditional safety management only. HROs manage safety
Rouis Majdi
2016-06-01
The aim of this study was to verify the impact of ethnicity on the maximal power-vertical jump relationship. Thirty-one healthy males, sixteen Caucasian (age: 26.3 ± 3.5 years; body height: 179.1 ± 5.5 cm; body mass: 78.1 ± 9.8 kg) and fifteen Afro-Caribbean (age: 24.4 ± 2.6 years; body height: 178.9 ± 5.5 cm; body mass: 77.1 ± 10.3 kg), completed three sessions during which vertical jump height and maximal power of the lower limbs were measured. The results showed that the values of vertical jump height and maximal power were higher for the Afro-Caribbean participants (62.92 ± 6.7 cm and 14.70 ± 1.75 W·kg-1) than for the Caucasian ones (52.92 ± 4.4 cm and 12.75 ± 1.36 W·kg-1). Moreover, very high reliability indices were obtained for vertical jump (0.95 < ICC < 0.98) and maximal power performance (0.75 < ICC < 0.97). However, multiple linear regression analysis showed that, for a given value of maximal power, the Afro-Caribbean participants jumped 8 cm higher than the Caucasians. Together, these results confirmed that ethnicity impacted the maximal power-vertical jump relationship over three sessions. In the current context of cultural diversity, the use of vertical jump performance as a predictor of muscular power should be considered with caution when dealing with populations of different ethnic origins.
Generation of Maximally Generalized Rules
徐如燕; 鲁汉榕; 郭齐胜
2001-01-01
In this paper, the generation of maximally generalized rules in the course of classification knowledge discovery based on rough set theory is discussed. First, an algorithm is introduced. Second, we propose that the information-based J-measure be used as another measure of attribute significance. This measure is used for heuristically selecting the conditions to be removed in the process of extracting a set of maximally generalized rules. Finally, we present an example to illustrate the process of the algorithm.
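The abstract does not spell out its exact J-measure formulation; as a hedged illustration, the following sketch implements the standard Smyth-Goodman rule J-measure (the function name and argument names are ours):

```python
from math import log

def j_measure(p_y, p_x, p_x_given_y):
    """J-measure of the rule 'if Y=y then X=x', in nats.

    p_y: probability of the rule's condition,
    p_x: prior probability of the conclusion,
    p_x_given_y: probability of the conclusion given the condition.
    """
    def term(p, q):
        # Convention: 0 * log(0/q) = 0.
        return 0.0 if p == 0.0 else p * log(p / q)
    return p_y * (term(p_x_given_y, p_x) + term(1.0 - p_x_given_y, 1.0 - p_x))

# A condition that raises the conclusion's probability from 0.5 to 0.9
# carries positive information; an uninformative condition scores 0.
informative = j_measure(0.5, 0.5, 0.9)
uninformative = j_measure(0.5, 0.5, 0.5)
```

A condition whose removal barely lowers such a score is a natural candidate to drop when generalizing a rule, which matches the heuristic role described in the abstract.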
A data-driven model for maximization of methane production in a wastewater treatment plant.
Kusiak, Andrew; Wei, Xiupeng
2012-01-01
A data-driven approach for maximization of methane production in a wastewater treatment plant is presented. Industrial data collected on a daily basis were used to build the model. Temperature, total solids, volatile solids, detention time, and pH value were selected as parameters for the model construction. First, a prediction model of methane production was built with a multi-layer perceptron neural network. Then a particle swarm optimization algorithm was used to maximize methane production based on the model developed in this research. The model resulted in a 5.5% increase in methane production.
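As an illustrative sketch of this two-stage scheme (a trained surrogate model searched by particle swarm optimization), the toy below replaces the trained neural network with an invented quadratic surrogate and searches only temperature and pH; all numeric values and names are assumptions, not the paper's model:

```python
import random

random.seed(0)  # reproducible sketch

# Hypothetical surrogate standing in for the trained MLP: predicted methane
# production as a function of temperature (C) and pH. The paper's model also
# uses total solids, volatile solids, and detention time.
def predict_methane(temp, ph):
    # Invented peak around 35 C and pH 7.2, purely for illustration.
    return 1000.0 - 4.0 * (temp - 35.0) ** 2 - 120.0 * (ph - 7.2) ** 2

def pso_maximize(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimization over box-bounded inputs."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(*p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                # Clamp each coordinate back into its operating range.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(*pos[i])
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val > gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Search temperature in [20, 45] C and pH in [6.0, 8.5].
best, val = pso_maximize(predict_methane, bounds=[(20.0, 45.0), (6.0, 8.5)])
```

The swarm converges toward the surrogate's optimum operating point; in the paper, the same search is run against the trained neural network instead of this stand-in.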
Maximal aerobic capacity in ageing subjects: actual measurements versus predicted values
Cristina Pistea; Evelyne Lonsdorfer; Stéphane Doutreleau; Monique Oswald; Irina Enache; Anne Charloux
2016-01-01
We evaluated the impact of selection of reference values on the categorisation of measured maximal oxygen consumption (V′O2 peak) as “normal” or “abnormal” in an ageing population. We compared measured V′O2 peak with predicted values and the lower limit of normal (LLN) calculated with five equations. 99 (58 males and 41 females) disease-free subjects aged ≥70 years completed an incremental maximal exercise test on a cycle ergometer. Mean V′O2 peak was 1.88 L·min−1 in men and 1.26 L·min−1 in w...
Component Reliability Assessment of Offshore Jacket Platforms
V.J. Kurian
2015-01-01
Oil and gas is one of the most important industries contributing to the Malaysian economy. To extract hydrocarbons, various types of production platforms have been developed. The fixed jacket platform is the earliest type of production structure, widely installed in Malaysia's shallow and intermediate waters. To date, more than 60% of these jacket platforms have operated beyond their initial design life, making re-evaluation and reassessment necessary for these platforms to continue in service. In normal engineering practice, the system reliability of a structure is evaluated as its safety parameter. This method is, however, complicated and time consuming. Assessing a component's reliability can be an alternative approach that provides assurance about a structure's condition at an early stage. Design codes such as the Working Stress Design (WSD) and the Load and Resistance Factor Design (LRFD) are well established for component-level assessment. In reliability analysis, a failure function, which consists of strength and load, is used to define the failure event: if the load acting exceeds the capacity of a structure, the structure will fail. Calculation of the stress utilization ratio as given in the design codes is able to predict the reliability of a member and to estimate the extent to which a member is being utilised. The basic idea of this ratio is that if it is more than one, the member has failed, and vice versa. The stress utilization ratio is the ratio of the applied stress, which is the output reaction of environmental loadings acting on the structural member, to the design strength, which comes from the member's geometric and material properties. Adopting this ratio as the failure event, the reliability of each component is found. This study reviews and discusses the reliability of selected members of three Malaysian offshore jacket platforms. The First Order Reliability Method (FORM) was used to generate reliability index and
A novel 2-phase reliability improvement of digital circuits
Shojaei, Maryam; Mahani, Ali
2016-12-01
Nowadays, several methods based on modular redundancy have been proposed to increase the reliability of digital circuits. Redundant fault-tolerant techniques increase power consumption and area overhead, so in this paper a two-phase fault-tolerant design is proposed to strike a balance between reliability and area overhead. In the first phase, reliability optimization of digital circuits is considered, using the architecture with the higher reliability as an objective function. In the second phase, automatic insertion of selective non-uniform redundancy is applied to improve the reliability of the obtained circuit. To show the effectiveness of the proposed method, simulation results are compared with triple modular redundancy.
Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed
Kottner, Jan; Audigé, Laurent; Brorson, Stig;
2011-01-01
Results of reliability and agreement studies are intended to provide information about the amount of error inherent in any diagnosis, score, or measurement. The level of reliability and agreement among users of scales, instruments, or classifications is widely unknown. Therefore, there is a need … for rigorously conducted interrater and intrarater reliability and agreement studies. Information about sample selection, study design, and statistical analysis is often incomplete. Because of inadequate reporting, interpretation and synthesis of study results are often difficult. Widely accepted criteria …, standards, or guidelines for reporting reliability and agreement in the health care and medical field are lacking. The objective was to develop guidelines for reporting reliability and agreement studies.
Reliability in the utility computing era: Towards reliable Fog computing
Madsen, Henrik; Burtschy, Bernard; Albeanu, G.
2013-01-01
This paper considers current paradigms in computing and outlines the most important aspects concerning their reliability. The Fog computing paradigm, a non-trivial extension of the Cloud, is considered, and the reliability of networks of smart devices is discussed. Combining the reliability … requirements of grid and cloud paradigms with the reliability requirements of networks of sensors and actuators, it follows that designing a reliable Fog computing platform is feasible …
1978-02-01
these factors places great importance on selecting missile materiels which are capable of performing reliably in each of the environments. The... great variety of special features and constructions are available. Latching and time delay are common features, and the reed construction has advantages... represents a reliability study performed under contract to RADC in 1974. This source identified the type and quality grades for the devices, however
EXPLANATORY VARIANCE IN MAXIMAL OXYGEN UPTAKE
Jacalyn J. Robert McComb
2006-06-01
The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females; ages 18-24 years) underwent the following testing procedures: (a) a 7-site skinfold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants' head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry with the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (%BF), height, weight, gender, and heart rate following the 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg-1·min-1) = 56.14 - 0.92 (%BF).
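The reported regression can be applied directly; a one-line Python helper (the function name and the example body-fat value are ours):

```python
def predicted_vo2max(percent_body_fat):
    """VO2max estimate (ml/kg/min) from the study's regression equation."""
    return 56.14 - 0.92 * percent_body_fat

# Example: a hypothetical participant at 15% body fat.
estimate = predicted_vo2max(15.0)  # 42.34 ml/kg/min
```

Note that the equation was fitted on 18-24-year-olds, so estimates outside that population should be treated cautiously.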
Reflection quasilattices and the maximal quasilattice
Boyle, Latham; Steinhardt, Paul J.
2016-08-01
We introduce the concept of a reflection quasilattice, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e., Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those in Penrose tilings, but with different symmetries and in various dimensions. Here we explain that reflection quasilattices only exist in dimensions two, three, and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. Unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors. We tabulate the complete set of scale factors for all reflection quasilattices in dimension d > 2, and for all those with quadratic irrational scale factors in d = 2.
Viral quasispecies assembly via maximal clique enumeration.
Töpfer, Armin; Marschall, Tobias; Bull, Rowena A; Luciani, Fabio; Schönhuth, Alexander; Beerenwinkel, Niko
2014-03-01
Virus populations can display high genetic diversity within individual hosts. The intra-host collection of viral haplotypes, called viral quasispecies, is an important determinant of virulence, pathogenesis, and treatment outcome. We present HaploClique, a computational approach to reconstruct the structure of a viral quasispecies from next-generation sequencing data as obtained from bulk sequencing of mixed virus samples. We develop a statistical model for paired-end reads accounting for mutations, insertions, and deletions. Using an iterative maximal clique enumeration approach, read pairs are assembled into haplotypes of increasing length, eventually enabling global haplotype assembly. The performance of our quasispecies assembly method is assessed on simulated data for varying population characteristics and sequencing technology parameters. Owing to its paired-end handling, HaploClique compares favorably to state-of-the-art haplotype inference methods. It can reconstruct error-free full-length haplotypes from low coverage samples and detect large insertions and deletions at low frequencies. We applied HaploClique to sequencing data derived from a clinical hepatitis C virus population of an infected patient and discovered a novel deletion of length 357±167 bp that was validated by two independent long-read sequencing experiments. HaploClique is available at https://github.com/armintoepfer/haploclique. A summary of this paper appears in the proceedings of the RECOMB 2014 conference, April 2-5.
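HaploClique's core combinatorial step is maximal clique enumeration; a minimal Bron-Kerbosch sketch on an invented four-read compatibility graph (not HaploClique's actual data structures or statistical model) looks like:

```python
def bron_kerbosch(r, p, x, adj, cliques):
    """Enumerate maximal cliques (Bron-Kerbosch, no pivoting).

    r: current clique, p: candidate vertices, x: already-processed vertices.
    """
    if not p and not x:
        cliques.append(sorted(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, cliques)
        p.remove(v)
        x.add(v)

# Toy "read compatibility" graph: nodes are reads, edges join reads whose
# overlapping positions agree, so maximal cliques are candidate haplotypes.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
adj = {v: set() for v in range(4)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

cliques = []
bron_kerbosch(set(), set(range(4)), set(), adj, cliques)
# cliques: [[0, 1, 2], [2, 3]] -> two candidate haplotype groups
```

In the actual tool, cliques of compatible read pairs are merged into longer super-reads and the enumeration is iterated until full-length haplotypes emerge.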
Evolution of correlated multiplexity through stability maximization
Dwivedi, Sanjiv K.; Jalan, Sarika
2017-02-01
Investigating the relation between various structural patterns found in real-world networks and the stability of underlying systems is crucial to understand the importance and evolutionary origin of such patterns. We evolve multiplex networks, comprising antisymmetric couplings in one layer depicting predator-prey relationship and symmetric couplings in the other depicting mutualistic (or competitive) relationship, based on stability maximization through the largest eigenvalue of the corresponding adjacency matrices. We find that there is an emergence of the correlated multiplexity between the mirror nodes as the evolution progresses. Importantly, evolved values of the correlated multiplexity exhibit a dependence on the interlayer coupling strength. Additionally, the interlayer coupling strength governs the evolution of the disassortativity property in the individual layers. We provide analytical understanding to these findings by considering starlike networks representing both the layers. The framework discussed here is useful for understanding principles governing the stability as well as the importance of various patterns in the underlying networks of real-world systems ranging from the brain to ecology which consist of multiple types of interaction behavior.
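The stability score driving the evolution above is the largest eigenvalue of an adjacency matrix; a minimal power-iteration sketch on a toy triangle graph (not the paper's multiplex construction) illustrates the computation:

```python
def largest_eigenvalue(matrix, n_iter=200):
    """Estimate the dominant eigenvalue magnitude by power iteration."""
    n = len(matrix)
    v = [1.0] * n  # arbitrary non-zero starting vector
    lam = 0.0
    for _ in range(n_iter):
        # Multiply the matrix into the current vector.
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        if lam == 0.0:
            return 0.0
        # Renormalize so the iteration stays bounded.
        v = [x / lam for x in w]
    return lam

# Triangle graph: eigenvalues are 2, -1, -1, so the dominant eigenvalue is 2.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
score = largest_eigenvalue(triangle)  # 2.0
```

An evolutionary step in this spirit would mutate an edge, recompute the score, and keep the mutation only if the stability criterion improves.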
Maximal respiratory pressure in healthy Japanese children
Tagami, Miki; Okuno, Yukako; Matsuda, Tadamitsu; Kawamura, Kenta; Shoji, Ryosuke; Tomita, Kazuhide
2017-01-01
[Purpose] Normal values for respiratory muscle pressures during development in Japanese children have not been reported. The purpose of this study was to investigate respiratory muscle pressures in Japanese children aged 3-12 years. [Subjects and Methods] We measured respiratory muscle pressure values using a manovacuometer without a nose clip, with subjects in a sitting position. Data were collected for ages 3-6 (Group I: 68 subjects), 7-9 (Group II: 86 subjects), and 10-12 (Group III: 64 subjects) years. [Results] The values for respiratory muscle pressures in children increased significantly with age in both sexes, and were higher in boys than in girls. Correlation coefficients were significant, at values of 0.279 to 0.471, for the relationships between maximal respiratory pressure and age, height, and weight for each gender. [Conclusion] In this study, we provide pediatric respiratory muscle pressure reference values for each age. The values for respiratory muscle pressures were lower than those reported in Brazilian studies, which suggests that respiratory muscle pressures vary with ethnicity. PMID:28356644
Maximizing exosome colloidal stability following electroporation.
Hood, Joshua L; Scott, Michael J; Wickline, Samuel A
2014-03-01
Development of exosome-based semisynthetic nanovesicles for diagnostic and therapeutic purposes requires novel approaches to load exosomes with cargo. Electroporation has previously been used to load exosomes with RNA. However, investigations into exosome colloidal stability following electroporation have not been considered. Herein, we report the development of a unique trehalose pulse media (TPM) that minimizes exosome aggregation following electroporation. Dynamic light scattering (DLS) and RNA absorbance were employed to determine the extent of exosome aggregation and electroextraction post electroporation in TPM compared to common PBS pulse media or sucrose pulse media (SPM). Use of TPM to disaggregate melanoma exosomes post electroporation was dependent on both exosome concentration and electric field strength. TPM maximized exosome dispersal post electroporation for both homogenous B16 melanoma and heterogeneous human serum-derived populations of exosomes. Moreover, TPM enabled heavy cargo loading of melanoma exosomes with 5nm superparamagnetic iron oxide nanoparticles (SPION5) while maintaining original exosome size and minimizing exosome aggregation as evidenced by transmission electron microscopy. Loading exosomes with SPION5 increased exosome density on sucrose gradients. This provides a simple, label-free means of enriching exogenously modified exosomes and introduces the potential for MRI-driven theranostic exosome investigations in vivo.
Reliability on ISS Talk Outline
Misiora, Mike
2015-01-01
1. Overview of ISS
2. Space environment and its effects
   a. Radiation
   b. Microgravity
3. How we ensure reliability
   a. Requirements
   b. Component selection (Note: I plan to stay away from talk about rad-hardened components and instead talk about why we use older processors, because they are less susceptible to SEUs.)
   c. Testing
   d. Redundancy / failure tolerance
   e. Sparing strategies
4. Operational examples
   a. Multiple MDM failures on 6A due to hard drive failure
In general, my plan is to talk only about data that is currently available via normal internet sources, to ensure that I stay away from any topics that would be Export Controlled, ITAR, or NDA-controlled. The operational example has been well reported on in the media, and those are the details that I plan to cover. Additionally, I am not planning on using any slides or showing any photos during the talk.
Human Reliability Program Workshop
Landers, John; Rogers, Erin; Gerke, Gretchen
2014-05-18
A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.
Accelerator reliability workshop
Hardy, L.; Duru, Ph.; Koch, J.M.; Revol, J.L.; Van Vaerenbergh, P.; Volpe, A.M.; Clugnet, K.; Dely, A.; Goodhew, D
2002-07-01
About 80 experts attended this workshop, which brought together all accelerator communities: accelerator-driven systems, X-ray sources, medical and industrial accelerators, spallation source projects (American and European), nuclear physics, etc. With newly proposed accelerator applications such as nuclear waste transmutation and the replacement of nuclear power plants, reliability has now become a number-one priority for accelerator designers. Every part of an accelerator facility, from cryogenic systems to data storage via RF systems, is concerned by reliability. This aspect is now taken into account in the design/budget phase, especially for projects whose goal is to reach no more than 10 interruptions per year. This document gathers the slides, but not the proceedings, of the workshop.
Reliability and construction control
Sherif S. AbdelSalam
2016-06-01
The goal of this study was to determine the most reliable and efficient combination of design and construction methods required for vibro piles. For a wide range of static and dynamic formulas, the reliability-based resistance factors were calculated using the EGYPT database, which houses load test results for 318 piles. The analysis was extended to introduce a construction control factor that determines the variation between the pile nominal capacities calculated using static versus dynamic formulae. Among the major outcomes, the lowest coefficient of variation is associated with Davisson's criterion, and the resistance factors calculated for the AASHTO method are relatively high compared with other methods. Additionally, the CPT Nottingham and Schmertmann method provided the most economic design. Recommendations related to a pile construction control factor are also presented, and it was found that utilizing the factor can significantly reduce variations between calculated and actual capacities.
Improving Power Converter Reliability
Ghimire, Pramod; de Vega, Angel Ruiz; Beczkowski, Szymon
2014-01-01
The real-time junction temperature monitoring of a high-power insulated-gate bipolar transistor (IGBT) module is important to increase the overall reliability of power converters for industrial applications. This article proposes a new method to measure the on-state collector-emitter voltage … of a high-power IGBT module during converter operation, which may play a vital role in improving the reliability of the power converters. The measured voltage is used to estimate the module average junction temperature of the high- and low-voltage side of a half-bridge IGBT separately in every fundamental … is measured in a wind power converter at a low fundamental frequency. To illustrate more, the test method as well as the performance of the measurement circuit are also presented. This measurement is also useful to indicate failure mechanisms such as bond wire lift-off and solder layer degradation …
Power electronics reliability.
Kaplar, Robert James; Brock, Reinhard C.; Marinella, Matthew; King, Michael Patrick; Stanley, James K.; Smith, Mark A.; Atcitty, Stanley
2010-10-01
The project's goals are: (1) use experiments and modeling to investigate and characterize stress-related failure modes of post-silicon power electronic (PE) devices such as silicon carbide (SiC) and gallium nitride (GaN) switches; and (2) seek opportunities for condition monitoring (CM) and prognostics and health management (PHM) to further enhance the reliability of power electronics devices and equipment. CM - detect anomalies and diagnose problems that require maintenance. PHM - track damage growth, predict time to failure, and manage subsequent maintenance and operations in such a way to optimize overall system utility against cost. The benefits of CM/PHM are: (1) operate power conversion systems in ways that will preclude predicted failures; (2) reduce unscheduled downtime and thereby reduce costs; and (3) pioneering reliability in SiC and GaN.
Bartsch, R.R.
1995-09-01
Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
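The shot-based failure model described above can be sketched as a series-system Weibull calculation. This is a toy sketch assuming independent components with two-parameter Weibull lives measured in shots; the eta and beta values are illustrative placeholders, not the actual ATLAS bank parameters.

```python
import math

def weibull_survival(shots, eta, beta):
    """Two-parameter Weibull reliability: probability a component survives
    the given number of shots (eta = characteristic life in shots,
    beta = shape; beta > 1 means failure probability grows with shot count)."""
    return math.exp(-((shots / eta) ** beta))

def system_reliability(components, shots):
    """Series system of independent components: every one must survive."""
    r = 1.0
    for eta, beta in components:
        r *= weibull_survival(shots, eta, beta)
    return r

# Hypothetical (eta, beta) pairs -- placeholders, NOT measured ATLAS values.
bank = [(5000.0, 1.5), (8000.0, 2.0), (12000.0, 1.2)]
print(system_reliability(bank, 100))  # early in life, well above the 0.95 target
```

A maintenance interval then falls out naturally: solve for the largest shot count at which the system reliability still exceeds the 95% requirement.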
Reliability of Circumplex Axes
Micha Strack
2013-06-01
We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—the Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples, grouped by self-report, other-report, and metaperception assessments). The general factor accounted for a proportion ranging from 1% to 48% of the item variance; the axes component for 2% to 30%; and scale-specificity for 1% to 28%, respectively. Reliability estimates varied considerably, from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificities but otherwise works effectively. Contemporary circumplex evaluations such as Tracey's RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.
Optimal redundancy allocation for reliability systems with imperfect switching
Lun Ran; Jinlin Li; Xujie Jia; Hongrui Chu
2014-01-01
The problem of stochastically allocating redundant components to increase the system lifetime is an important topic in reliability. An optimal redundancy allocation is proposed that maximizes the expected lifetime of a reliability system with subsystems consisting of components in parallel. The constraints are minimizing the total resources and the sizes of the subsystems. In this system, each switching is independent of the others and works with probability p. Two optimization problems are studied, by an incremental algorithm and a dynamic programming technique respectively. The incremental algorithm obtains an approximate optimal solution, while the dynamic programming method generates the optimal solution.
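The incremental algorithm can be illustrated with a greedy sketch. Note that this uses a static-reliability surrogate objective (a series system of parallel subsystems, where each redundant copy counts only if its independent switch, working with probability p, engages) rather than the paper's expected-lifetime objective, and all numbers are hypothetical.

```python
def system_reliability(alloc, r, p):
    """Series system of parallel subsystems; a redundant copy helps only
    if its independent switch works, so each copy succeeds with p * r_i."""
    out = 1.0
    for k, ri in zip(alloc, r):
        out *= 1.0 - (1.0 - p * ri) ** k
    return out

def greedy_allocate(r, cost, budget, p=0.9):
    """Incremental step: repeatedly add the single redundant component with
    the best reliability gain per unit cost that still fits the budget."""
    alloc = [1] * len(r)              # at least one component per subsystem
    spent = sum(cost)
    while True:
        base = system_reliability(alloc, r, p)
        best_i, best_gain = None, 0.0
        for i, c in enumerate(cost):
            if spent + c > budget:
                continue
            alloc[i] += 1
            gain = (system_reliability(alloc, r, p) - base) / c
            alloc[i] -= 1
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:
            return alloc, base
        alloc[best_i] += 1
        spent += cost[best_i]

# Hypothetical three-subsystem example.
alloc, rel = greedy_allocate(r=[0.8, 0.9, 0.7], cost=[2, 3, 1], budget=14, p=0.9)
print(alloc, rel)
```

As the abstract notes, such a greedy pass is only approximately optimal; a dynamic program over the remaining budget would recover the exact optimum.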
Reliability analysis of retaining walls with multiple failure modes
张道兵; 孙志彬; 朱川曲
2013-01-01
To reduce errors in the reliability analysis of retaining wall structures arising from the choice of performance function, parameter estimation, and algorithm, two new reliability and stability models of anti-slipping and anti-overturning were first established based on the upper-bound theory of limit analysis, and the two kinds of failure modes were regarded as a series system with multiple correlated failure modes. Then, statistical characteristics of the parameters of the retaining wall structure were inferred by the maximum entropy principle. Finally, the structural reliabilities of the single failure modes and of the multiple failure modes were calculated by the Monte Carlo method in MATLAB, and the results were compared and analyzed for sensitivity. The method has high precision, is easy to program and quick to calculate, and is not limited by nonlinear functions or non-normal random variables. The results obtained by combining limit analysis theory, the maximum entropy principle, and the Monte Carlo method are more scientific, accurate, and reliable than those calculated by the traditional method.
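The series-system Monte Carlo estimate is straightforward to sketch. The limit-state functions below are hypothetical resistance-minus-load margins standing in for the anti-slipping and anti-overturning modes, not the paper's upper-bound expressions; sampling the same random variables for both modes is what correlates the failure modes.

```python
import random

def monte_carlo_reliability(g_list, sample, n=100_000, seed=1):
    """Series-system reliability estimate: a trial survives only if every
    limit-state function is positive (g <= 0 means that mode fails)."""
    rng = random.Random(seed)
    survive = 0
    for _ in range(n):
        x = sample(rng)                       # one realization shared by all modes
        if all(g(x) > 0 for g in g_list):
            survive += 1
    return survive / n

# Hypothetical Gaussian resistance R and load S shared by both modes.
def sample(rng):
    return {"R": rng.gauss(10.0, 1.0), "S": rng.gauss(6.0, 1.0)}

def g_slip(x):
    return x["R"] - x["S"]          # anti-slipping margin

def g_turn(x):
    return 0.9 * x["R"] - x["S"]    # anti-overturning margin

print(monte_carlo_reliability([g_slip, g_turn], sample))
```

Because both margins see the same realization of R and S, the system reliability is never lower than a naive product of independent mode reliabilities would suggest.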
Methodology to Customize Maximal Isometric Forces for Hill-Type Muscle Models.
Dal Maso, Fabien; Begon, Mickaël; Raison, Maxime
2017-02-01
One approach to increasing the confidence of muscle force estimation via musculoskeletal models is to minimize the root mean square error (RMSE) between joint torques estimated from electromyographic-driven musculoskeletal models and those computed using inverse dynamics. We propose a method that reduces RMSE by selecting the subsets of combinations of maximal voluntary isometric contraction (MVIC) trials that minimize RMSE. Twelve participants performed 3 elbow MVICs in flexion and in extension. An upper-limb electromyographic-driven musculoskeletal model was created to optimize maximum muscle stress and estimate the maximal isometric force of the biceps brachii, brachialis, brachioradialis, and triceps brachii. Maximal isometric forces were computed from all possible combinations of flexion-extension trials. The combinations producing the smallest RMSE significantly reduced the normalized RMSE to 7.4%, compared with 9.0% for the combination containing all trials. Maximal isometric forces ranged between 114-806 N, 64-409 N, 236-1511 N, and 556-3434 N for the biceps brachii, brachialis, brachioradialis, and triceps brachii, respectively. These large variations suggest that customization is required to reduce the difference between modeled and actual maximal isometric force. While the smallest previously reported RMSE was 10.3%, the proposed method reduced the RMSE to 7.4%, which may increase the confidence of muscle force estimation.
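The trial-subset selection step can be sketched as an exhaustive search. Here `evaluate` is a hypothetical callback that would re-run the electromyographic-driven model on a subset of MVIC trials and return the torque RMSE against inverse dynamics; the toy objective below is purely illustrative.

```python
from itertools import combinations

def best_trial_subset(trials, evaluate, min_size=1):
    """Score every subset of MVIC trials with the supplied RMSE-style
    objective and keep the subset with the smallest error."""
    best, best_err = None, float("inf")
    for k in range(min_size, len(trials) + 1):
        for subset in combinations(trials, k):
            err = evaluate(subset)
            if err < best_err:
                best, best_err = subset, err
    return best, best_err

# Toy stand-in objective: pretend the best subsets are those whose trial
# labels sum to 5 (illustrative only, not an RMSE computation).
subset, err = best_trial_subset([1, 2, 3, 4], lambda s: abs(sum(s) - 5))
print(subset, err)
```

With 6 trials (3 flexion + 3 extension) the search space is only 2^6 - 1 = 63 subsets, so exhaustive enumeration is cheap; for many more trials a heuristic search would be needed.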
Dandanell, Sune; Præst, Charlotte Boslev; Søndergård, Stine Dam; Skovborg, Camilla; Dela, Flemming; Larsen, Steen; Helge, Jørn Wulff
2017-04-01
Maximal fat oxidation (MFO) and the exercise intensity that elicits MFO (FatMax) are commonly determined by indirect calorimetry during graded exercise tests in both obese and normal-weight individuals. However, no protocol has been validated in individuals with obesity. Thus, the aims were to develop a graded exercise protocol for determination of FatMax in individuals with obesity, and to test validity and inter-method reliability. Fat oxidation was assessed over a range of exercise intensities in 16 individuals (age: 28 (26-29) years; body mass index: 36 (35-38) kg·m(-2); 95% confidence interval) on a cycle ergometer. The graded exercise protocol was validated against a short continuous exercise (SCE) protocol, in which FatMax was determined from fat oxidation at rest and during 10 min of continuous exercise at 35%, 50%, and 65% of maximal oxygen uptake. Intraclass and Pearson correlation coefficients between the protocols were 0.75 and 0.72 and within-subject coefficient of variation (CV) was 5 (3-7)%. A Bland-Altman plot revealed a bias of -3% points of maximal oxygen uptake (limits of agreement: -12 to 7). A tendency towards a systematic difference (p = 0.06) was observed, where FatMax occurred at 42 (40-44)% and 45 (43-47)% of maximal oxygen uptake with the graded and the SCE protocol, respectively. In conclusion, there was a high-excellent correlation and a low CV between the 2 protocols, suggesting that the graded exercise protocol has a high inter-method reliability. However, considerable intra-individual variation and a trend towards systematic difference between the protocols reveal that further optimization of the graded exercise protocol is needed to improve validity.
Sabaté-Llobera, A; Notta, P C; Benítez-Segura, A; López-Ojeda, A; Pernas-Simon, S; Boya-Román, M P; Bajén, M T
2015-01-01
To assess the influence of time on the reliability of sentinel lymph node biopsy (SLNB) in breast cancer patients with previous excisional biopsy (EB), analyzing both sentinel lymph node detection and the lymph node recurrence rate. Thirty-six patients with cT1/T2 N0 breast cancer and previous EB of the lesion underwent lymphoscintigraphy after subdermal periareolar administration of radiocolloid, the day before SLNB. Patients were classified into two groups: one including 12 patients with up to 29 days elapsed between EB and SLNB (group A), and another with the remaining 24, in which the time between both procedures was 30 days or more (group B). Scintigraphic and surgical detection of the sentinel lymph node, histological status of the sentinel lymph node and of the axillary lymph node dissection, if performed, and lymphatic recurrences during follow-up were analyzed. Sentinel lymph node visualization at lymphoscintigraphy and surgical detection were 100% in both groups. Histologically, three patients showed macrometastasis in the sentinel lymph node, one from group A and two from group B. None of the patients, not even those with malignancy of the sentinel lymph node, relapsed after a median follow-up of 49.5 months (range 24-75). The time elapsed between EB and SLNB does not influence the reliability of this technique as long as a superficial injection of the radiopharmaceutical is performed, yielding a very high detection rate of the sentinel lymph node without evidence of lymphatic relapse during follow-up.
[Comparison of maximal oxygen consumption equations in young people].
Barbosa, Fernando Policarpo; Oliveira, Hildeamo Bonifacio; Fernandes, Paula Roqueti; Fernandes Filho, José
2005-01-01
To compare the results of equations for indirect estimation of maximal oxygen consumption with those obtained in an ergospirometric test in young individuals. Fifty-two male individuals were submitted to an effort test with direct gas analysis on a treadmill to determine maximal oxygen consumption (VO2max). A progressive protocol with a load increment every minute was used. The results were compared with the equations of Jackson et al. and of Mathews et al. For the statistical analysis, multiple comparisons corrected by the Bonferroni test were applied, with a significance level of p < 0.05. The maximal heart rate was 191.73 +/- 7.84 bpm. The equations of Jackson et al. (VO2max = 49.29 +/- 2.95, standard error of estimate EPE = 0.41) and of Mathews et al. (VO2max = 37.43 +/- 2.14, EPE = 0.31) tended to underestimate the oxygen consumption of the sample (VO2max = 55.34 +/- 8.34, EPE = 1.16), with a significant difference relative to the VO2max measured in the effort test (p < 0.05; Wilks' lambda = 0.044; F(2.500) = 539.27; p = 0.001). The equations did not provide a reliable estimate for the studied population.
Tsiolis, Dimitrios
2008-01-01
Financial ratio analysis is a widely known financial statement analysis tool used to evaluate companies' financial position. A careful selection process, in collaboration with other financial statement analysis techniques and taking into consideration the known problems of financial ratio analysis, can lead a company's analysts to a clear determination of its financial position.
Adaptive Influence Maximization in Social Networks: Why Commit when You can Adapt?
Vaswani, Sharan; Lakshmanan, Laks V. S.
2016-01-01
Most previous work on influence maximization in social networks is limited to the non-adaptive setting in which the marketer is supposed to select all of the seed users, to give free samples or discounts to, up front. A disadvantage of this setting is that the marketer is forced to select all the seeds based solely on a diffusion model. If some of the selected seeds do not perform well, there is no opportunity to course-correct. A more practical setting is the adaptive setting in which the ma...
Catalan Number and Enumeration of Maximal Outerplanar Graphs
Anonymous
2000-01-01
Catalan number is an important class of combinatorial numbers. The maximal outerplanar graphs are important in graph theory. In this paper some formulas to enumerate the numbers of maximal outerplanar graphs by means of the compressing graph and group theory method are given first. Then the relationships between Catalan numbers and the numbers of labeled and unlabeled maximal outerplanar graphs are presented. The computed results verified these formulas.
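The Catalan connection can be made concrete: triangulations of a convex polygon with n+2 vertices, i.e. maximal outerplanar graphs with a fixed labeled outer cycle, are counted by the Catalan number C_n. A minimal sketch:

```python
from math import comb

def catalan(n):
    """n-th Catalan number via the closed form C_n = C(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def catalan_rec(n):
    """Same numbers from the triangulation recurrence:
    C_0 = 1,  C_m = sum_{i=0}^{m-1} C_i * C_{m-1-i}."""
    c = [1]
    for m in range(1, n + 1):
        c.append(sum(c[i] * c[m - 1 - i] for i in range(m)))
    return c[n]

# Triangulations of a convex (n+2)-gon for n = 0..7.
print([catalan(n) for n in range(8)])  # [1, 1, 2, 5, 14, 42, 132, 429]
assert all(catalan(n) == catalan_rec(n) for n in range(12))
```

Counting unlabeled maximal outerplanar graphs then amounts to quotienting these triangulation counts by the polygon's dihedral symmetries, which is the group-theoretic step the paper's formulas handle.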
Maximality-Based Structural Operational Semantics for Petri Nets
Saīdouni, Djamel Eddine; Belala, Nabil; Bouneb, Messaouda
2009-03-01
The goal of this work is to exploit an implementable model, namely the maximality-based labeled transition system, which makes it possible to express true concurrency in a natural way without splitting actions into their start and end events. One can do this by giving a maximality-based structural operational semantics for the Place/Transition Petri net model in terms of maximality-based labeled transition system structures.
Relative advantage, queue jumping, and welfare maximizing wealth distribution
2006-01-01
Suppose individuals get utilities from the total amount of wealth they hold and from their wealth relative to those immediately below them. This paper studies the distribution of wealth that maximizes an additive welfare function made up of these utilities. It interprets wealth distribution in a control theory framework to show that the welfare maximizing distribution may have unexpected properties. In some circumstances it requires that inequality be maximized at the poorest and richest ends...
Maximizers versus satisficers: Decision-making styles, competence, and outcomes
Parker, Andrew M.; Wändi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decision...
Maximally entangled states in pseudo-telepathy games
Mančinska, Laura
2015-01-01
A pseudo-telepathy game is a nonlocal game which can be won with probability one using some finite-dimensional quantum strategy but not using a classical one. Our central question is whether there exist two-party pseudo-telepathy games which cannot be won with probability one using a maximally entangled state. Towards answering this question, we develop conditions under which maximally entangled states suffice. In particular, we show that maximally entangled states suffice for weak projection...
An Investigation of Software metrics Affect on Cobol Program reliability
Day II, Henry Jesse
1996-01-01
The purpose of this research was to predict a COBOL program's reliability from software characteristics that are found in the program's source code. The first step was to select factors based on the human information processing model that are associated with changes in computer program reliability. Then these factors (software metrics) were quantitatively studied to determine which factors affect COBOL program reliability. Then a statistical model was developed that predicts COBOL program rel...
Equivalent reliability polynomials modeling EAS and their geometries
Hassan Zahir Abdul Haddi
2015-07-01
In this paper we introduce two equivalent techniques for the reliability analysis of electrical aircraft systems (EAS): (i) a graph theory technique, and (ii) a simplifying diffeomorphism technique. Geometric modeling of reliability models is based on algebraic hypersurfaces, whose intrinsic properties select those models which are relevant for applications. The basic idea is to cover the reliability hypersurfaces by exponentially decaying curves. Most of the calculations in this paper were done with Maple and Matlab software.
Honeyman-Buck, Janice C.; Rill, Lynn; Frost, Meryll M.; Staab, Edward V.
1998-07-01
The purpose of this work was to develop a method for systematically testing the reliability of a CR system under realistic daily loads in a non-clinical environment prior to its clinical adoption. Once digital imaging replaces film, it will be very difficult to revert back should the digital system become unreliable. Prior to the beginning of the test, a formal evaluation was performed to set the benchmarks for performance and functionality. A formal protocol was established that included all the 62 imaging plates in the inventory for each 24-hour period in the study. Imaging plates were exposed using different combinations of collimation, orientation, and SID. Anthropomorphic phantoms were used to acquire images of different sizes. Each combination was chosen randomly to simulate the differences that could occur in clinical practice. The tests were performed over a wide range of times with batches of plates processed to simulate the temporal constraints required by the nature of portable radiographs taken in the Intensive Care Unit (ICU). Current patient demographics were used for the test studies so automatic routing algorithms could be tested. During the test, only three minor reliability problems occurred, two of which were not directly related to the CR unit. One plate was discovered to cause a segmentation error that essentially reduced the image to only black and white with no gray levels. This plate was removed from the inventory to be replaced. Another problem was a PACS routing problem that occurred when the DICOM server with which the CR was communicating had a problem with disk space. The final problem was a network printing failure to the laser cameras. Although the units passed the reliability test, problems with interfacing to workstations were discovered. The two issues that were identified were the interpretation of what constitutes a study for CR and the construction of the look-up table for a proper gray scale display.
Ultimately Reliable Pyrotechnic Systems
Scott, John H.; Hinkel, Todd
2015-01-01
This paper presents the methods by which NASA has designed, built, tested, and certified pyrotechnic devices for high reliability operation in extreme environments and illustrates the potential applications in the oil and gas industry. NASA's extremely successful application of pyrotechnics is built upon documented procedures and test methods that have been maintained and developed since the Apollo Program. Standards are managed and rigorously enforced for performance margins, redundancy, lot sampling, and personnel safety. The pyrotechnics utilized in spacecraft include such devices as small initiators and detonators with the power of a shotgun shell, detonating cord systems for explosive energy transfer across many feet, precision linear shaped charges for breaking structural membranes, and booster charges to actuate valves and pistons. NASA's pyrotechnics program is one of the more successful in the history of Human Spaceflight. No pyrotechnic device developed in accordance with NASA's Human Spaceflight standards has ever failed in flight use. NASA's pyrotechnic initiators work reliably in temperatures as low as -420 F. Each of the 135 Space Shuttle flights fired 102 of these initiators, some setting off multiple pyrotechnic devices, with never a failure. The recent landing on Mars of the Opportunity rover fired 174 of NASA's pyrotechnic initiators to complete the famous '7 minutes of terror.' Even after traveling through extreme radiation and thermal environments on the way to Mars, every one of them worked. These initiators have fired on the surface of Titan. NASA's design controls, procedures, and processes produce the most reliable pyrotechnics in the world. Application of pyrotechnics designed and procured in this manner could enable the energy industry's emergency equipment, such as shutoff valves and deep-sea blowout preventers, to be left in place for years in extreme environments and still be relied upon to function when needed, thus greatly enhancing
Ferrite logic reliability study
Baer, J. A.; Clark, C. B.
1973-01-01
Development and use of digital circuits called all-magnetic logic are reported. In these circuits the magnetic elements and their windings comprise the active circuit devices in the logic portion of a system. The ferrite logic device belongs to the all-magnetic class of logic circuits. The FLO device is novel in that it makes use of a dual or bimaterial ferrite composition in one physical ceramic body. This bimaterial feature, coupled with its potential for relatively high speed operation, makes it attractive for high reliability applications. (Maximum speed of operation approximately 50 kHz.)
Blade reliability collaborative :
Ashwill, Thomas D.; Ogilvie, Alistair B.; Paquette, Joshua A.
2013-04-01
The Blade Reliability Collaborative (BRC) was started by the Wind Energy Technologies Department of Sandia National Laboratories and DOE in 2010 with the goal of gaining insight into planned and unplanned O&M issues associated with wind turbine blades. A significant part of BRC is the Blade Defect, Damage and Repair Survey task, which will gather data from blade manufacturers, service companies, operators and prior studies to determine details about the largest sources of blade unreliability. This report summarizes the initial findings from this work.
Sums of magnetic eigenvalues are maximal on rotationally symmetric domains
Laugesen, Richard S; Roy, Arindam
2011-01-01
The sum of the first n energy levels of the planar Laplacian with constant magnetic field of given total flux is shown to be maximal among triangles for the equilateral triangle, under normalization of the ratio (moment of inertia)/(area)^3 on the domain. The result holds for both Dirichlet and Neumann boundary conditions, with an analogue for Robin (or de Gennes) boundary conditions too. The square similarly maximizes the eigenvalue sum among parallelograms, and the disk maximizes among ellipses. More generally, a domain with rotational symmetry will maximize the magnetic eigenvalue sum among all linear images of that domain. These results are new even for the ground state energy (n=1).
Sums of Laplace eigenvalues - rotationally symmetric maximizers in the plane
Laugesen, R S
2010-01-01
The sum of the first $n \geq 1$ eigenvalues of the Laplacian is shown to be maximal among triangles for the equilateral triangle, maximal among parallelograms for the square, and maximal among ellipses for the disk, provided the ratio $\text{(area)}^3/\text{(moment of inertia)}$ for the domain is fixed. This result holds for both Dirichlet and Neumann eigenvalues, and similar conclusions are derived for Robin boundary conditions and Schrödinger eigenvalues of potentials that grow at infinity. A key ingredient in the method is the tight frame property of the roots of unity. For general convex plane domains, the disk is conjectured to maximize sums of Neumann eigenvalues.
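Under the stated normalization, the scale-invariant functional being maximized can be written explicitly. This is a sketch of the statement only, using A for area and I for the moment of inertia (taken here about the centroid, as an assumption consistent with the scaling):

```latex
% Lengths scale as L, eigenvalues as L^{-2}, A as L^2, I as L^4,
% so the product below is scale-invariant.
F_n(\Omega) \;=\; \bigl(\lambda_1(\Omega) + \cdots + \lambda_n(\Omega)\bigr)\,
\frac{A(\Omega)^3}{I(\Omega)},
\qquad
F_n(T) \;\le\; F_n(T_{\mathrm{equilateral}})
\quad \text{for every triangle } T.
```

Fixing $A^3/I$ and maximizing the eigenvalue sum, as the abstract phrases it, is equivalent to maximizing $F_n$ without constraint.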
Jørgensen, Sune Dandanell; Præst, Charlotte Boslev; Søndergård, Stine Dam
2017-01-01
. The graded exercise protocol was validated against a short continuous exercise (SCE) protocol, in which FatMax was determined from fat oxidation at rest and during 10-min continuous exercise at 35, 50 and 65% of maximal oxygen uptake (VO2max). Intraclass and Pearson correlation coefficients between … VO2max with the graded and the SCE protocol, respectively. In conclusion, there was a high-excellent correlation and a low CV between the two protocols, suggesting that the graded exercise protocol has a high inter-method reliability. However, considerable intra-individual variation and a trend …
Reliability sensitivity-based correlation coefficient calculation in structural reliability analysis
Yang, Zhou; Zhang, Yimin; Zhang, Xufang; Huang, Xianzhen
2012-05-01
The correlation coefficients of the random variables of mechanical structures are generally chosen from experience or even ignored, which cannot actually reflect the effects of parameter uncertainties on reliability. To discuss the selection of the correlation coefficients from the reliability-based sensitivity point of view, the theoretical principle of the problem is established based on the results of the reliability sensitivity, and a criterion of correlation among random variables is given. The values of the correlation coefficients are obtained according to the proposed principle and the reliability sensitivity problem is discussed. Numerical studies have shown the following results: (1) If the sensitivity value of a correlation coefficient ρ is small (on the order of 0.00001), the correlation can be ignored, which simplifies the procedure without introducing additional error. (2) If the difference between ρ_s, the coefficient most sensitive to the reliability, and ρ_R, the coefficient giving the smallest reliability, is less than 0.001, ρ_s is suggested to model the dependency of the random variables. This ensures the robustness of the system without loss of the safety requirement. (3) In the case of |E_abs| > 0.001 and also |E_rel| > 0.001, ρ_R should be employed to quantify the correlation among random variables in order to ensure the accuracy of the reliability analysis. The proposed approach provides a practical routine for mechanical design and manufacturing to study the reliability and reliability-based sensitivity of basic design variables in mechanical reliability analysis and design.
Load Control System Reliability
Trudnowski, Daniel [Montana Tech of the Univ. of Montana, Butte, MT (United States)
2015-04-03
This report summarizes the results of the Load Control System Reliability project (DOE Award DE-FC26-06NT42750). The original grant was awarded to Montana Tech April 2006. Follow-on DOE awards and expansions to the project scope occurred August 2007, January 2009, April 2011, and April 2013. In addition to the DOE monies, the project also consisted of matching funds from the states of Montana and Wyoming. Project participants included Montana Tech; the University of Wyoming; Montana State University; NorthWestern Energy, Inc., and MSE. Research focused on two areas: real-time power-system load control methodologies; and, power-system measurement-based stability-assessment operation and control tools. The majority of effort was focused on area 2. Results from the research includes: development of fundamental power-system dynamic concepts, control schemes, and signal-processing algorithms; many papers (including two prize papers) in leading journals and conferences and leadership of IEEE activities; one patent; participation in major actual-system testing in the western North American power system; prototype power-system operation and control software installed and tested at three major North American control centers; and, the incubation of a new commercial-grade operation and control software tool. Work under this grant certainly supported the DOE-OE goals in the area of “Real Time Grid Reliability Management.”
Supply chain reliability modelling
Eugen Zaitsev
2012-03-01
Background: Today it is virtually impossible to operate alone at the international level in the logistics business. This promotes the establishment and development of new integrated business entities - logistics operators. However, such cooperation within a supply chain also creates many problems related to supply chain reliability as well as to the optimization of supply planning. The aim of this paper was to develop and formulate a mathematical model and algorithms to find the optimum supply plan, using an economic criterion and a model for evaluating the probability of non-failure operation of the supply chain. Methods: The mathematical model and algorithms were developed and formulated accordingly. Results and conclusions: The problem of ensuring failure-free performance of a goods supply channel analyzed in the paper is characteristic of distributed network systems that make active use of business-process outsourcing technologies. The complex planning problem occurring in such systems, which requires taking into account the consumer's requirements for failure-free performance in terms of supply volumes and correctness, can be reduced to a relatively simple linear programming problem through logical analysis of the structures. The sequence of operations that should be taken into account during supply planning with the supplier's functional reliability is presented.
Quantitative metal magnetic memory reliability modeling for welded joints
Xing, Haiyan; Dang, Yongbin; Wang, Ben; Leng, Jiancheng
2016-03-01
Metal magnetic memory (MMM) testing has been widely used to inspect welded joints. However, load levels, the environmental magnetic field, and measurement noise make MMM data dispersive and complicate quantitative evaluation. To promote the development of quantitative MMM reliability assessment, a new MMM model is presented for welded joints. Steel Q235 welded specimens are tested along longitudinal and horizontal lines with a TSC-2M-8 instrument in tensile fatigue experiments. X-ray testing is carried out synchronously to verify the MMM results. It is found that MMM testing can detect a hidden crack earlier than X-ray testing. Moreover, the MMM gradient vector sum K_vs is sensitive to the damage degree, especially at the early and hidden damage stages. Considering the dispersion of the MMM data, the statistical law of K_vs is investigated, which shows that K_vs obeys a Gaussian distribution. K_vs is therefore a suitable MMM parameter on which to establish a reliability model of welded joints. Finally, an original quantitative MMM reliability model is presented based on improved stress-strength interference theory. It is shown that the reliability degree R gradually decreases with decreasing residual life ratio T, and the maximal error between the predicted reliability degree R_1 and the verified reliability degree R_2 is 9.15%. The presented method provides a novel tool for reliability testing and evaluation of welded joints in practical engineering.
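The classical stress-strength interference computation underlying such a model can be sketched for the Gaussian case (the abstract reports that K_vs is Gaussian). The numbers are illustrative only; the paper's improved model, which couples the stress side to the MMM statistics, is not reproduced here.

```python
import math

def interference_reliability(mu_s, sigma_s, mu_l, sigma_l):
    """Stress-strength interference for independent Gaussians:
    R = P(strength > stress) = Phi((mu_s - mu_l) / sqrt(sigma_s^2 + sigma_l^2))."""
    z = (mu_s - mu_l) / math.hypot(sigma_s, sigma_l)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Illustrative strength (600 +/- 40) vs. stress (450 +/- 30), e.g. in MPa.
print(interference_reliability(600.0, 40.0, 450.0, 30.0))  # about 0.9987
```

The reported trend follows the same shape: as the residual life ratio T shrinks, the effective stress distribution drifts toward the strength distribution, the interference area grows, and R decreases.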
Solving Maximal Clique Problem through Genetic Algorithm
Rajawat, Shalini; Hemrajani, Naveen; Menghani, Ekta
2010-11-01
Genetic algorithms are among the most interesting heuristic search techniques. A genetic algorithm depends basically on three operations: selection, crossover, and mutation. The outcome of the three operations is a new population for the next generation; these operations are repeated until the termination condition is reached. All the operations in the algorithm are accessible with today's molecular biotechnology. The simulations show that with this new computing algorithm it is possible to obtain a solution from a very small initial data pool, avoiding the enumeration of all candidate solutions. For randomly generated problems, the genetic algorithm can give the correct solution within a few cycles with high probability.
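The three operations named above can be sketched on a tiny maximal clique instance (the graph, parameters, and the greedy repair step are illustrative assumptions, not the paper's method):

```python
import random

random.seed(0)  # deterministic for illustration

# A 6-vertex graph whose unique maximum clique is {0, 1, 2, 3}.
edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)}
n = 6
adj = [set() for _ in range(n)]
for u, v in edges:
    adj[u].add(v); adj[v].add(u)

def repair(bits):
    """Greedily keep only selected vertices that stay mutually adjacent."""
    clique = []
    for v in range(n):
        if bits[v] and all(u in adj[v] for u in clique):
            clique.append(v)
    return clique

def fitness(bits):
    return len(repair(bits))

def crossover(a, b):
    cut = random.randrange(1, n)       # single-point crossover
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.1):
    return [b ^ (random.random() < rate) for b in bits]

pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                 # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = repair(max(pop, key=fitness))
print(sorted(best))
```

The repair step guarantees every individual decodes to a valid clique, so fitness is simply clique size; selection then drives the population toward the maximum clique.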
Trend of maximal inspiratory pressure in mechanically ventilated patients: predictors
Pedro Caruso
2008-01-01
INTRODUCTION: It is known that mechanical ventilation and many of its features may affect the evolution of inspiratory muscle strength during ventilation. However, this evolution has not been described, nor have its predictors been studied. In addition, a possible parallel between the evolution of inspiratory and limb muscle strength has not been investigated. OBJECTIVE: To describe the variation over time of maximal inspiratory pressure during mechanical ventilation and its predictors, and to study the possible relationship between the evolution of maximal inspiratory pressure and limb muscle strength. METHODS: A prospective observational study was performed on consecutive patients submitted to mechanical ventilation for more than 72 hours. The maximal inspiratory pressure trend was evaluated by linear regression of the daily maximal inspiratory pressure, and logistic regression analysis was used to look for independent predictors of the maximal inspiratory pressure trend. Limb muscle strength was evaluated using the Medical Research Council score. RESULTS: One hundred and sixteen patients were studied, forty-four of whom (37.9%) presented a decrease in maximal inspiratory pressure over time. The group in which maximal inspiratory pressure decreased underwent deeper sedation, spent less time in pressure support ventilation, and was extubated less frequently. The only independent predictor of the maximal inspiratory pressure trend was the level of sedation (OR = 1.55, 95% CI 1.003-2.408; p = 0.049). There was no relationship between the maximal inspiratory pressure trend and limb muscle strength. CONCLUSIONS: Around forty percent of the mechanically ventilated patients had a decreasing maximal inspiratory pressure during mechanical ventilation, which was independently associated with deeper levels of sedation. There was no relationship between the evolution of maximal inspiratory pressure and limb muscle strength.
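The trend analysis the abstract describes reduces to fitting a least-squares line to the daily maximal inspiratory pressure (MIP) values and calling the trend decreasing when the slope is negative. A minimal sketch (the MIP values below are invented for illustration):

```python
def slope(ys):
    """Least-squares slope of ys against day index 0..n-1."""
    n = len(ys)
    mx, my = (n - 1) / 2, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in enumerate(ys))
            / sum((x - mx) ** 2 for x in range(n)))

daily_mip = [52.0, 50.0, 47.0, 46.0, 43.0]   # cmH2O, one value per day (assumed)
print(slope(daily_mip) < 0)                   # decreasing trend?
```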
D2-brane Chern-Simons theories: F-maximization = a-maximization
Fluder, Martin
2015-01-01
We study a system of N D2-branes probing a generic Calabi-Yau three-fold singularity in the presence of a non-zero quantized Romans mass n. We argue that the low-energy effective N = 2 Chern-Simons quiver gauge theory flows to a superconformal fixed point in the IR, and construct the dual AdS_4 solution in massive IIA supergravity. We compute the free energy F of the gauge theory on S^3 using localization. In the large N limit we find F = c(nN)^{1/3}a^{2/3}, where c is a universal constant and a is the a-function of the "parent" four-dimensional N = 1 theory on N D3-branes probing the same Calabi-Yau singularity. It follows that maximizing F over the space of admissible R-symmetries is equivalent to maximizing a for this class of theories. Moreover, we show that the gauge theory result precisely matches the holographic free energy of the supergravity solution, and provide a similar matching of the VEV of a BPS Wilson loop operator.
OSS reliability measurement and assessment
Yamada, Shigeru
2016-01-01
This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
Reliability and validity in research.
Roberts, Paula; Priest, Helena
This article examines reliability and validity as ways to demonstrate the rigour and trustworthiness of quantitative and qualitative research. The authors discuss the basic principles of reliability and validity for readers who are new to research.
Parton distributions based on a maximally consistent dataset
Rojo, Juan
2014-01-01
The choice of data that enters a global QCD analysis can have a substantial impact on the resulting parton distributions and their predictions for collider observables. One of the main reasons for this has to do with the possible presence of inconsistencies, either internal within an experiment or external between different experiments. In order to assess the robustness of the global fit, different definitions of a conservative PDF set, that is, a PDF set based on a maximally consistent dataset, have been introduced. However, these approaches are typically affected by theory biases in the selection of the dataset. In this contribution, after a brief overview of recent NNPDF developments, we propose a new, fully objective, definition of a conservative PDF set, based on the Bayesian reweighting approach. Using the new NNPDF3.0 framework, we produce various conservative sets, which turn out to be mutually in agreement within the respective PDF uncertainties, as well as with the global fit. We explore some of the...
Reliability and Its Quantitative Measures
Alexandru ISAIC-MANIU
2010-01-01
This article opens up the topic of software reliability through wide-ranging statistical indicators designed on the basis of information collected from operation or testing (samples). The reliability framework is also developed for the main reliability laws (exponential, normal, Weibull), which, once validated for a particular system, allow the calculation of reliability indicators with a higher degree of accuracy and trustworthiness.
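The survival functions of two of the reliability laws named above can be sketched directly (the parameter values are illustrative, not from the article): R(t) = exp(-lambda*t) for the exponential law and R(t) = exp(-(t/eta)^beta) for the Weibull law.

```python
from math import exp

def r_exponential(t, lam):
    """Exponential survival function R(t) = exp(-lam * t)."""
    return exp(-lam * t)

def r_weibull(t, beta, eta):
    """Weibull survival function R(t) = exp(-(t/eta)**beta)."""
    return exp(-((t / eta) ** beta))

# Illustrative: reliability at t = 100 hours under assumed parameters.
print(round(r_exponential(100, 0.001), 4), round(r_weibull(100, 1.5, 500), 4))
```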
OPTUM: Optimum Portfolio Tool for Utility Maximization documentation and user's guide.
VanKuiken, J. C.; Jusko, M. J.; Samsa, M. E.; Decision and Information Sciences
2008-09-30
The Optimum Portfolio Tool for Utility Maximization (OPTUM) is a versatile and powerful tool for selecting, optimizing, and analyzing portfolios. The software introduces a compact interface that facilitates problem definition, complex constraint specification, and portfolio analysis. The tool allows simple comparisons between user-preferred choices and optimized selections. OPTUM uses a portable, efficient, mixed-integer optimization engine (lp-solve) to derive the optimal mix of projects that satisfies the constraints and maximizes the total portfolio utility. OPTUM provides advanced features, such as convenient menus for specifying conditional constraints and specialized graphical displays of the optimal frontier and alternative solutions to assist in sensitivity visualization. OPTUM can be readily applied to other nonportfolio, resource-constrained optimization problems.
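The core selection problem OPTUM solves, choosing the mix of projects that maximizes total utility under a budget, can be sketched as a plain 0/1 knapsack by dynamic programming (the projects, costs, and utilities are made up, and this stands in for the lp-solve mixed-integer engine, which also handles the conditional constraints the abstract mentions):

```python
# (name, cost, utility) triples -- illustrative, not real project data.
projects = [("A", 4, 7.0), ("B", 3, 5.5), ("C", 5, 8.0), ("D", 2, 3.0)]
budget = 9

# best[c] = (max utility, chosen project names) achievable with total cost <= c.
best = [(0.0, [])] * (budget + 1)
for name, cost, utility in projects:
    for c in range(budget, cost - 1, -1):   # iterate downward: each project used at most once
        cand_u = best[c - cost][0] + utility
        if cand_u > best[c][0]:
            best[c] = (cand_u, best[c - cost][1] + [name])

print(best[budget])
```

The downward iteration over remaining budget is what enforces the 0/1 (select-or-skip) character of portfolio membership.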
When and why are reliable organizations favored?
Ethiraj, Sendil; Yi, Sangyoon
In the 1980s, organization theory witnessed a decade-long debate about the incentives and consequences of organizational change. Though the fountainhead of this debate was the observation that reliable organizations are the "consequence" rather than the "cause" of selection forces, much of the … shocks, reliable organizations can in fact outperform their less reliable counterparts if they can take advantage of the knowledge resident in their historical choices. While these results are counter-intuitive, the caveat is that our results are only an existence proof for our theory rather than a representation of reality. Thus, our attempt is best characterized as shining a spotlight on a small part of the larger canvas that constitutes the literature on organizational change.
2017 NREL Photovoltaic Reliability Workshop
Kurtz, Sarah [National Renewable Energy Laboratory (NREL), Golden, CO (United States)
2017-08-15
NREL's Photovoltaic (PV) Reliability Workshop (PVRW) brings together PV reliability experts to share information, leading to the improvement of PV module reliability. Such improvement reduces the cost of solar electricity and promotes investor confidence in the technology -- both critical goals for moving PV technologies deeper into the electricity marketplace.
Testing for PV Reliability (Presentation)
Kurtz, S.; Bansal, S.
2014-09-01
The DOE SUNSHOT workshop is seeking input from the community about PV reliability and how the DOE might address gaps in understanding. This presentation describes the types of testing that are needed for PV reliability and introduces a discussion to identify gaps in our understanding of PV reliability testing.
Detrimental Relations of Maximization with Academic and Career Attitudes
Dahling, Jason J.; Thompson, Mindi N.
2013-01-01
Maximization refers to a decision-making style that involves seeking the single best option when making a choice, which is generally dysfunctional because people are limited in their ability to rationally evaluate all options and identify the single best outcome. The vocational consequences of maximization are examined in two samples, college…
An Overview of Maximal Unitarity at Two Loops
Johansson, Henrik; Larsen, Kasper J.
2012-01-01
We discuss the extension of the maximal-unitarity method to two loops, focusing on the example of the planar double box. Maximal cuts are reinterpreted as contour integrals, with the choice of contour fixed by the requirement that integrals of total derivatives vanish on it. The resulting formulae, like their one-loop counterparts, can be applied either analytically or numerically.
Haemodynamics during maximal exercise after coronary bypass surgery
P.W.J.C. Serruys (Patrick); M.F. Rousseau (Francois); J. Cosyns; R. Ponlot; L.A. Brasseur; J-M.R. Detry (Jean-Marie)
1978-01-01
Fifty patients underwent an objective measurement of physical working capacity by means of a multistage test of maximally tolerated exertion before and after coronary bypass surgery; 29 patients also had haemodynamic measurements during maximal exercise before and after coronary bypass surgery.
Utility maximization under solvency constraints and unhedgeable risks
T. Kleinow; A. Pelsser
2008-01-01
We consider the utility maximization problem for an investor who faces a solvency or risk constraint in addition to a budget constraint. The investor wishes to maximize her expected utility from terminal wealth subject to a bound on her expected solvency at maturity. We measure solvency using a solv
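The optimization the abstract describes can be written schematically as follows (the notation is assumed for illustration, not taken from the paper):

```latex
\max_{\pi} \; \mathbb{E}\!\left[ U\!\left( W_T^{\pi} \right) \right]
\quad \text{subject to} \quad
\mathbb{E}\!\left[ S\!\left( W_T^{\pi} \right) \right] \ge s_0 ,
\qquad W_0^{\pi} = w_0 ,
```

where $\pi$ is the investment strategy, $W_T^{\pi}$ the terminal wealth it generates, $U$ the utility function, $S$ a solvency measure, $s_0$ the solvency bound, and $w_0$ the initial budget.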
On a discrete version of Tanaka's theorem for maximal functions
Bober, Jonathan; Hughes, Kevin; Pierce, Lillian B
2010-01-01
In this paper we prove a discrete version of Tanaka's theorem [Ta] for the Hardy-Littlewood maximal operator in dimension $n=1$, both in the non-centered and centered cases. For the discrete non-centered maximal operator $\widetilde{M}$ we prove that, given a function $f: \mathbb{Z} \to \mathbb{R}$ of bounded variation,
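The discrete non-centered maximal operator the abstract studies has a direct brute-force realization: for a sequence supported on a finite window, $(Mf)(n)$ is the supremum over intervals $[a,b]$ containing $n$ of the average of $|f|$ over the integers in $[a,b]$. A sketch (the example sequence is made up):

```python
def discrete_maximal(f, lo, hi):
    """Discrete non-centered Hardy-Littlewood maximal function of f on [lo, hi].
    f is a dict mapping integers to values; f is taken to be zero elsewhere."""
    def avg(a, b):
        return sum(abs(f.get(k, 0)) for k in range(a, b + 1)) / (b - a + 1)
    return {n: max(avg(a, b)
                   for a in range(lo, n + 1)      # interval start at or before n
                   for b in range(n, hi + 1))     # interval end at or after n
            for n in range(lo, hi + 1)}

Mf = discrete_maximal({0: 4, 1: 0, 2: 2}, lo=-2, hi=4)
print(Mf[0], Mf[3])
```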
A Class of Maximal Functions with Oscillating Kernels
Ahmad AL-SALMAN
2007-01-01
The author studies the $L^p$ mapping properties of a class of maximal functions that are related to oscillatory singular integral operators. $L^p$ estimates, as well as the corresponding weighted estimates of such maximal functions, are obtained. Moreover, several applications of our results are highlighted.
ESTIMATES FOR THE MAXIMAL MULTILINEAR SINGULAR INTEGRAL OPERATORS
Yulan Jiao
2010-01-01
In this paper, some mapping properties are considered for the maximal multilinear singular integral operator whose kernel satisfies a certain minimum regularity condition. It is proved that a certain uniform local estimate for doubly truncated operators implies the $L^p(\mathbb{R}^n)$ ($1 < p < \infty$) boundedness of the maximal operator.