Maximum entropy principle for transportation
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
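The entropy-maximizing derivation underlying models of this kind can be illustrated with the standard doubly-constrained gravity model, solved by iterative proportional fitting (Furness balancing). This is a generic sketch, not the authors' dependence formulation; the zone totals, costs, and `beta` below are made-up illustrative values.

```python
import numpy as np

def entropy_trip_distribution(origins, destinations, cost, beta, iters=200):
    """Doubly-constrained entropy-maximizing gravity model.

    T[i, j] = A[i] * B[j] * O[i] * D[j] * exp(-beta * c[i, j]),
    with the balancing factors found by Furness iteration (iterative
    proportional fitting) so that row sums match `origins` and column
    sums match `destinations`.
    """
    T = np.exp(-beta * cost)                          # deterrence matrix
    for _ in range(iters):
        T *= (origins / T.sum(axis=1))[:, None]       # match origin totals
        T *= (destinations / T.sum(axis=0))[None, :]  # match destination totals
    return T

# Hypothetical 3-zone example: travel costs in arbitrary units.
O = np.array([100.0, 200.0, 150.0])
D = np.array([180.0, 120.0, 150.0])
c = np.array([[1.0, 3.0, 2.0],
              [3.0, 1.0, 2.5],
              [2.0, 2.5, 1.0]])
T = entropy_trip_distribution(O, D, c, beta=0.8)
print(np.round(T, 1))
print(T.sum(axis=1))  # matches O
print(T.sum(axis=0))  # matches D
```

Policy scenarios are then explored by changing the cost matrix or `beta` and re-solving.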
Principals' Perceptions of School Public Relations
Morris, Robert C.; Chan, Tak Cheung; Patterson, Judith
2009-01-01
This study was designed to investigate school principals' perceptions on school public relations in five areas: community demographics, parental involvement, internal and external communications, school council issues, and community resources. Findings indicated that principals' concerns were as follows: rapid population growth, change of…
Constrained principal component analysis and related techniques
Takane, Yoshio
2013-01-01
In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches. The book begins with four concre
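A minimal sketch of the CPCA idea of "external" then "internal" analysis: project the data onto the subspace spanned by external variables, then run PCA separately on the structured part and the residual. The toy data and dimension choices below are assumptions for illustration, not from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, p response variables Y, q external predictors G.
n, p, q = 100, 5, 2
G = rng.normal(size=(n, q))
B = rng.normal(size=(q, p))
Y = G @ B + 0.3 * rng.normal(size=(n, p))
Y -= Y.mean(axis=0)

# External analysis: split Y into the part explained by G and a residual.
P = G @ np.linalg.pinv(G)          # projector onto the column space of G
Y_hat = P @ Y                      # structured part
E = Y - Y_hat                      # residual part

# Internal analysis: PCA (via SVD) of each part separately.
def pca(X, k):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k]          # scores, loadings

scores_G, load_G = pca(Y_hat, 2)
scores_E, load_E = pca(E, 2)

explained = np.sum(Y_hat**2) / np.sum(Y**2)  # orthogonal decomposition
print(f"variance of Y explained by G: {explained:.2f}")
```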
Tanabe, Yuki; Kido, Teruhito; Kurata, Akira; Sawada, Shun; Suekuni, Hiroshi; Kido, Tomoyuki; Yokoi, Takahiro; Miyagawa, Masao; Mochizuki, Teruhito [Ehime University Graduate School of Medicine, Department of Radiology, Toon City, Ehime (Japan); Uetani, Teruyoshi; Inoue, Katsuji [Ehime University Graduate School of Medicine, Department of Cardiology, Pulmonology, Hypertension and Nephrology, Toon City, Ehime (Japan)
2017-04-15
To evaluate the feasibility of three-dimensional (3D) maximum principal strain (MP-strain) derived from cardiac computed tomography (CT) for detecting myocardial infarction (MI). Forty-three patients who underwent cardiac CT and magnetic resonance imaging (MRI) were retrospectively selected. Using the voxel tracking of motion coherence algorithm, the peak CT MP-strain was measured using the 16-segment model. Based on the trans-mural extent of late gadolinium enhancement (LGE) and the distance from the MI, all segments were classified into four groups (infarcted, border, adjacent, and remote segments); infarcted and border segments were defined as MI with LGE positive. The diagnostic performance of MP-strain for detecting MI was compared with per cent systolic wall thickening (%SWT) assessed by MRI using receiver-operating characteristic curve analysis at the segment level. Of the 672 segments remaining after excluding 16 segments influenced by artefacts, 193 were diagnosed as MI. Sensitivity and specificity of peak MP-strain to identify MI were 81 % [95 % confidence interval (95 % CI): 74-88 %] and 86 % (81-92 %), compared with %SWT: 76 % (60-95 %) and 68 % (48-84 %), respectively. The area under the curve of peak MP-strain was superior to that of %SWT [0.90 (0.87-0.93) vs. 0.80 (0.76-0.83), p < 0.05]. CT MP-strain has the potential to provide incremental value to coronary CT angiography for detecting MI. (orig.)
Subjective performance evaluations and reciprocity in principal-agent relations
Sebald, Alexander Christopher; Walzl, Markus
2014-01-01
We conduct a laboratory experiment with agents working on, and principals benefiting from, a real effort task in which the agents' performance can only be evaluated subjectively. Principals give subjective performance feedback to agents, and agents have an opportunity to sanction principals. In contrast to existing models of reciprocity, we find that agents tend to sanction whenever the feedback of principals is below their subjective self-evaluations, even if agents' pay-offs are independent of it. In turn, principals provide more positive feedback (relative to their actual performance assessment of the agent) if this does not affect their pay-off.
Public Relations for Principals. "A Guidebook for the Pennsylvania Administrator."
Pennsylvania School Boards Association, Inc., Harrisburg.
This report discusses what makes news, what people want to read, and how to write news releases or other informative bulletins and brochures. Also included are a description of principal-reporter-editor relations, some layout and typography data, and photography instructions. (JF)
Leadership Behaviors and Its Relation with Principals' Management Experience
Mehdinezhad, Vali; Sardarzahi, Zaid
2016-01-01
This paper aims at studying the leadership behaviors reported by principals and observed by teachers and its relationship with management experience of principals. A quantitative method was used in this study. The target population included all principals and teachers of guidance schools and high schools in the Dashtiari District, Iran. A sample…
Steele, Gayle
2012-01-01
Because of public concern over the effectiveness of our schools, a new evaluation system was put in place to hold principals and teachers directly accountable for student academic achievement. Part of this evaluation included student performance on state assessments. This qualitative study sought to examine how the transformation…
Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation
Xi Liu
2016-09-01
A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for the non-linear state estimation problem. However, the UKF usually performs well only in Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
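The maximum correntropy criterion at the heart of the MCUKF can be illustrated in isolation with a scalar location estimate: a Gaussian kernel downweights samples far from the current estimate, so impulsive outliers barely affect the fixed-point solution. This is a simplified sketch of the MCC idea, not the filter itself; the data and kernel bandwidth are made up.

```python
import numpy as np

def mcc_location(x, sigma=1.0, iters=50):
    """Fixed-point location estimate under the maximum correntropy
    criterion: maximize the mean of exp(-(x - m)^2 / (2 sigma^2)).
    Samples far from m receive exponentially small weight, so
    impulsive outliers are effectively ignored."""
    m = np.median(x)                       # robust starting point
    for _ in range(iters):
        w = np.exp(-(x - m) ** 2 / (2 * sigma ** 2))
        m = np.sum(w * x) / np.sum(w)
    return m

rng = np.random.default_rng(1)
clean = rng.normal(5.0, 0.5, size=500)     # Gaussian bulk around 5
outliers = rng.normal(50.0, 1.0, size=25)  # heavy-tailed impulses
x = np.concatenate([clean, outliers])

print(f"sample mean : {x.mean():.2f}")         # dragged toward the impulses
print(f"MCC estimate: {mcc_location(x):.2f}")  # stays near the bulk
```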
Relative azimuth inversion by way of damped maximum correlation estimates
Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.
2012-01-01
Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
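The inversion idea — rotate the horizontal pair and maximize correlation with a reference trace — can be sketched as below. A coarse-to-fine search stands in for the paper's non-linear parameter estimation routine (which is explicitly not a plain grid search); the synthetic traces and the 25° misorientation are assumptions for illustration.

```python
import numpy as np

def rotate_north(n, e, theta):
    """North component of the (n, e) pair rotated by theta radians."""
    return n * np.cos(theta) + e * np.sin(theta)

def estimate_azimuth(ref_n, n, e, refinements=4, pts=90):
    """Coarse-to-fine search for the rotation maximizing correlation
    with a reference north trace."""
    lo, hi = 0.0, 2 * np.pi
    best = 0.0
    for _ in range(refinements):
        thetas = np.linspace(lo, hi, pts)
        corrs = [np.corrcoef(ref_n, rotate_north(n, e, t))[0, 1] for t in thetas]
        best = thetas[int(np.argmax(corrs))]
        span = (hi - lo) / pts
        lo, hi = best - span, best + span      # zoom in around the maximum
    return np.degrees(best) % 360

# Synthetic test: the misoriented sensor is the reference pair rotated by -25 deg.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)
ref_n = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.normal(size=t.size)
ref_e = np.cos(2 * np.pi * 0.3 * t) + 0.1 * rng.normal(size=t.size)
phi = np.radians(-25.0)
n = ref_n * np.cos(phi) + ref_e * np.sin(phi)   # sensor north
e = -ref_n * np.sin(phi) + ref_e * np.cos(phi)  # sensor east

print(f"estimated misorientation: {estimate_azimuth(ref_n, n, e):.1f} deg")
```

Running the search over overlapping windows, as the paper describes, would give a distribution of such estimates and hence a confidence measure.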
The Kalman Filter Revisited Using Maximum Relative Entropy
Adom Giffin
2014-02-01
In 1960, Rudolf E. Kalman created what is known as the Kalman filter, which is a way to estimate unknown variables from noisy measurements. The algorithm follows the logic that if the previous state of the system is known, it can be used as the best guess for the current state. This information is first applied a priori to any measurement by using it in the underlying dynamics of the system. Second, measurements of the unknown variables are taken. These two pieces of information are combined to determine the current state of the system. Bayesian inference is specifically designed to accommodate the problem of updating what we think of the world based on partial or uncertain information. In this paper, we present a derivation of the general Bayesian filter, then adapt it for Markov systems. A simple example is shown for pedagogical purposes. We also show that by using the Kalman assumptions or "constraints", we can arrive at the Kalman filter using the method of maximum (relative) entropy (MrE), which goes beyond Bayesian methods. Finally, we derive a generalized, nonlinear filter using MrE, of which the original Kalman filter is a special case. We further show that the variable relationship can be any function, and thus approximations such as the extended Kalman filter, the unscented Kalman filter and other Kalman variants are special cases as well.
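For reference, the classic filter the paper generalizes can be written in a few lines for a scalar random-walk state. This is a minimal sketch with made-up noise variances, not the MrE derivation:

```python
import numpy as np

def kalman_1d(zs, q=1e-3, r=0.5**2, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state x_k = x_{k-1} + w_k
    observed as z_k = x_k + v_k, with process variance q and
    measurement variance r."""
    x, p = x0, p0
    out = []
    for z in zs:
        p = p + q                 # predict: prior from the system dynamics
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(3)
truth = 1.0
zs = truth + 0.5 * rng.normal(size=200)   # noisy measurements of a constant
est = kalman_1d(zs)
print(f"last estimate: {est[-1]:.2f}")
```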
Relative level of occurrence of the principal heuristics in Nigeria property valuation
Iroham, C.O.
2013-06-01
The neglect of the other principal heuristics, namely availability, representativeness and positivity, in behavioural property valuation research, as against the exclusive focus on the anchoring and adjustment heuristic, invariably results in lopsided research. This work studied the four principal heuristics in behavioural property valuation in a bid to discover their relative levels of occurrence. The study adopted a cross-sectional questionnaire survey of 159 of the 270 Head Offices of Estate Surveying and Valuation firms in Lagos Metropolis, while 29 and 30 questionnaires were distributed to the Head Offices of all Estate Surveying and Valuation firms in Abuja and Port Harcourt respectively. The data obtained were analyzed with the aid of the Statistical Package for the Social Sciences, first using frequency distributions/means; the results were further analyzed using maximum and minimum values, means/standard deviations and ultimately ranking of the means. The results revealed that respondents use the various principal heuristics in the following decreasing order of magnitude: availability heuristics (26.77%), anchoring and adjustment heuristics (18.62%), representativeness heuristics (15.63%) and, least of all, positivity heuristics (10.41%). The authors therefore opine that more emphasis be placed on availability heuristics research, particularly as the usage of heuristics (anchoring and adjustment) has been seen to influence valuation inconsistency/accuracy.
Principals' Self-Efficacy: Relations with Job Autonomy, Job Satisfaction, and Contextual Constraints
Federici, Roger A.
2013-01-01
The purpose of the present study was to explore relations between principals' self-efficacy, perceived job autonomy, job satisfaction, and perceived contextual constraints to autonomy. Principal self-efficacy was measured by a multidimensional scale called the Norwegian Principal Self-Efficacy Scale. Job autonomy, job satisfaction, and contextual…
THE EFFECT OF THE STATIC RELATIVE STRENGTH ON THE MAXIMUM RELATIVE RECEIVING OF OXYGEN
Abdulla Elezi
2011-09-01
Based on research on a sample of 263 students aged 18 years, using a battery of 9 tests for the evaluation of static relative strength and the criterion variable of maximum relative oxygen uptake (VO2 ml/kg/min) based on the Astrand test, and on regression analysis to determine the influence of static relative strength on the criterion variable, it can be concluded that of the 9 predictor variables, 2 have a statistically significant partial effect. In hierarchical order, they are: the variable of static relative leg strength - endurance of the fingers (angle of the lower leg and thigh 90°) (SRL2), with an arithmetic mean of 25.04 seconds, and the variable of static relative strength of the arms and shoulders - push-up endurance on the balance beam (angle of the forearm and upper arm 90°) (SRA2), with an arithmetic mean of 17.75 seconds. Of the predictor variables with a statistically significant influence on the criterion variable, one is from the static relative leg strength (SRL2) and the other from the static relative strength of the arm and shoulder area (SRA2). From the analysis of these relations we can conclude that the isometric contractions of the four-headed thigh muscle and the isometric contractions of the three-headed upper arm muscle are predominantly responsible for the successful execution of the actions on a bicycle ergometer, and not for the maximum relative oxygen uptake.
Relative level of occurrence of the principal heuristics in Nigeria property valuation
Iroham, C.O.; Ogunba, O.A.; Oloyede, S.A.
2013-01-01
The neglect of the other principal heuristics, namely availability, representativeness and positivity, in behavioural property valuation research, as against the exclusive focus on the anchoring and adjustment heuristic, invariably results in lopsided research. This work studied the four principal heuristics in behavioural property valuation in a bid to discover their relative levels of occurrence. The study adopted a cross-sectional questionnaire survey of 159 of the 270 Head O...
School Principals' Opinions about Public Relations Practices on Schools
Çoruk, Adil
2018-01-01
Schools are at the forefront of the institutions that need to be in close relations with their social environment. In this regard, public relations practices are prominent. This obligation is also the responsibility of school principals, as there are no public relations units in public schools. The purpose of this research is to reveal the…
Hongjuan Yu; Jinyun Guo; Jiulong Li; Dapeng Mu; Qiaoli Kong
2015-01-01
Zero drift and solid Earth tide corrections to static relative gravimetric data cannot be ignored. In this paper, a new principal component analysis (PCA) algorithm is presented to extract the zero drift and the solid Earth tide, as signals, from static relative gravimetric data assuming that the components contained in the relative gravimetric data are uncorrelated. Static relative gravity observations from Aug. 15 to Aug. 23, 2014 are used as statistical variables to separate the signal and...
Fault Diagnosis Method Based on Information Entropy and Relative Principal Component Analysis
Xiaoming Xu
2017-01-01
In traditional principal component analysis (PCA), because the influence of the dimensions of the different variables in the system is neglected, the selected principal components (PCs) often fail to be representative. While relative-transformation PCA is able to solve this problem, it is not easy to calculate the weight for each characteristic variable. To address this, this paper proposes a fault diagnosis method based on information entropy and relative principal component analysis. Firstly, the algorithm calculates the information entropy for each characteristic variable in the original dataset based on the information gain algorithm. Secondly, it standardizes every variable's dimension in the dataset. Then, according to the information entropy, it allocates the weight for each standardized characteristic variable. Finally, it utilizes the established relative-principal-components model for fault diagnosis. Simulation experiments based on the Tennessee Eastman process and Wine datasets demonstrate the feasibility and effectiveness of the new method.
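The weighting-then-PCA pipeline can be sketched as follows. Note the entropy weighting below uses one common histogram-entropy convention and is only an assumed stand-in for the paper's information-gain computation; the two-column toy data set is also made up.

```python
import numpy as np

def entropy_weights(X, bins=10):
    """Information-entropy weight per column: variables whose histogram
    is far from uniform (low entropy) are treated as more informative
    and receive larger weight (one common entropy-weighting convention)."""
    n, p = X.shape
    H = np.empty(p)
    for j in range(p):
        counts, _ = np.histogram(X[:, j], bins=bins)
        f = counts / counts.sum()
        f = f[f > 0]
        H[j] = -np.sum(f * np.log(f)) / np.log(bins)  # normalized to [0, 1]
    d = 1.0 - H                        # divergence: higher = more informative
    return d / d.sum()

rng = np.random.default_rng(4)
n = 300
informative = np.concatenate([rng.normal(-2, 0.3, n // 2),
                              rng.normal(2, 0.3, n - n // 2)])  # bimodal
noise = rng.uniform(-3, 3, n)                                   # near-uniform
X = np.column_stack([informative, noise])

Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # remove dimension effects
w = entropy_weights(Xs)
Xw = Xs * w                                  # relative (weighted) variables

# PCA of the weighted data via the eigendecomposition of its covariance.
cov = np.cov(Xw, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
print("weights:", np.round(w, 2))            # informative column dominates
print("leading PC loadings:", np.round(eigvecs[:, -1], 2))
```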
Sardarzahi, Zaid
2015-06-01
The present paper aims to study the leadership behaviors reported by principals and observed by teachers and their relationship with the management experience of principals. A quantitative method was used in this study. The target population included all principals and teachers of guidance schools and high schools in the Dashtiari District, Iran. A sample of 46 principals and 129 teachers was selected by stratified sampling and simple random sampling methods. The Leadership Behavior Description Questionnaire (LBDQ) developed by Kozes and Posner (2001) was used for data collection. The obtained data were analyzed using one-sample and independent t-tests, correlation coefficients and Pearson chi-square tests. The results showed that teachers describe the leadership behaviors of their principals as relatively good, whereas the principals themselves evaluated their leadership behaviors as very good. In comparing the leadership behaviors self-reported by principals with those observed by teachers, a significant difference was found between the views and evaluations of teachers and principals on all components of leadership behaviors, except empowerment. In fact, principals described their leadership behaviors at a better and more appropriate level than teachers did. From the perspective of both teachers and principals, there is no significant relationship between any of the components of leadership behaviors and the management experience of principals.
Chen, Shuming; Wang, Dengfeng; Liu, Bo
This paper investigates optimization of the thickness of the sound package of a passenger automobile. The major performance indexes selected to evaluate the process are the SPL of the exterior noise and the weight of the sound package; the corresponding parameters are the thickness of the glass wool with aluminum foil for the first layer, the thickness of the glass fiber for the second layer, and the thickness of the PE foam for the third layer. Because the process involves multiple performance characteristics, grey relational analysis, which uses the grey relational grade as a performance index, is employed to determine the optimal combination of thicknesses of the different layers for the designed sound package. Additionally, to evaluate the weighting values corresponding to the various performance characteristics, principal component analysis is used to express their relative importance properly and objectively. The results of the confirmation experiments show that grey relational analysis coupled with principal component analysis can successfully be applied to find the optimal combination of thicknesses for each layer of the sound package material. The presented method can therefore be an effective tool to reduce vehicle exterior noise and lower the weight of the sound package. It should also be helpful for other applications in the automotive industry, such as the First Automobile Works in China and Changan Automobile in China.
Circulation types related to lightning activity over Catalonia and the Principality of Andorra
Pineda, N.; Esteban, P.; Trapero, L.; Soler, X.; Beck, C.
In the present study, we use a Principal Component Analysis (PCA) to characterize the surface 6-h circulation types related to substantial lightning activity over the Catalonia area (north-eastern Iberia) and the Principality of Andorra (eastern Pyrenees) from January 2003 to December 2007. The gridded data used for classification of the circulation types is the NCEP Final Analyses of the Global Tropospheric Analyses at 1° resolution over the region 35°N-48°N by 5°W-8°E. Lightning information was collected by the SAFIR lightning detection system operated by the Meteorological Service of Catalonia (SMC), which covers the region studied. We determined nine circulation types on the basis of the S-mode orthogonal rotated Principal Component Analysis. The “extreme scores” principle was used prior to the assignment of all cases, to obtain the number of final types and their centroids. The distinct differences identified in the resulting mean Sea Level Pressure (SLP) fields enabled us to group the types into three main patterns, taking into account their scale/dynamical origin. The first group of types shows the different distribution of the centres of action at synoptic scale associated with the occurrence of lightning. The second group is connected to mesoscale dynamics, mainly induced by the relief of the Pyrenees. The third group shows types with low-gradient SLP patterns in which the lightning activity is a consequence of thermal dynamics (coastal and mountain breezes). Apart from reinforcing the consistency of the groups obtained, analysis of the resulting classification improves our understanding of the geographical distribution and genesis factors of thunderstorm activity in the study area, and provides complementary information for supporting weather forecasting. Thus, the catalogue obtained will provide advances in different climatological and meteorological applications, such as nowcasting products or detection of climate change trends.
Subjective Performance Evaluations, Self-esteem, and Ego-threats in Principal-agent Relations
Sebald, Alexander Christopher; Walzl, Markus
find that agents sanction whenever the feedback of principals is below their subjective self-evaluations even if the agents' payoff is independent of the principals' feedback. Based on our experimental analysis we propose a principal-agent model with subjective performance evaluations that accommodates...
A feasibility study on age-related factors of wrist pulse using principal component analysis.
Jang-Han Bae; Young Ju Jeon; Sanghun Lee; Jaeuk U Kim
2016-08-01
Various analysis methods for examining wrist pulse characteristics are needed for accurate pulse diagnosis. In this feasibility study, principal component analysis (PCA) was performed to observe age-related factors of the wrist pulse from various analysis parameters. Forty subjects in the age groups of 20s and 40s participated, and their wrist pulse signal and respiration signal were acquired with a pulse tonometric device. After pre-processing of the signals, twenty analysis parameters that have been regarded as values reflecting pulse characteristics were calculated and PCA was performed. As a result, we could reduce the complex parameters to a lower dimension, and age-related factors of the wrist pulse were observed by combining new analysis parameters derived from PCA. These results demonstrate that PCA can be a useful tool for analyzing wrist pulse signals.
Maximum relative speeds of living organisms: Why do bacteria perform as fast as ostriches?
Meyer-Vernet, Nicole; Rospars, Jean-Pierre
2016-12-01
Self-locomotion is central to animal behaviour and survival. It is generally analysed by focusing on preferred speeds and gaits under particular biological and physical constraints. In the present paper we focus instead on the maximum speed and we study its order-of-magnitude scaling with body size, from bacteria to the largest terrestrial and aquatic organisms. Using data for about 460 species of various taxonomic groups, we find a maximum relative speed of the order of magnitude of ten body lengths per second over a 10^20-fold mass range of running and swimming animals. This result implies a locomotor time scale of the order of one tenth of a second, virtually independent of body size, anatomy and locomotion style, whose ubiquity requires an explanation building on basic properties of motile organisms. From first-principle estimates, we relate this generic time scale to other basic biological properties, using in particular the recent generalisation of the muscle specific tension to molecular motors. Finally, we go a step further by relating this time scale to still more basic quantities, such as environmental conditions on Earth as well as fundamental physical and chemical constants.
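The rule of thumb reported here is easy to turn into numbers: at ten body lengths per second, the time to travel one body length is 0.1 s regardless of size. The body lengths below are rough illustrative figures, not data from the paper.

```python
# Order-of-magnitude check of the ~10 body lengths per second rule.
organisms = {
    "E. coli": 2e-6,      # body length in metres (rough figure)
    "ant": 5e-3,
    "ostrich": 2.0,
}
RELATIVE_SPEED = 10.0     # body lengths per second (order of magnitude)

for name, L in organisms.items():
    v = RELATIVE_SPEED * L                   # implied absolute max speed
    print(f"{name:8s} ~{v:g} m/s  (time per body length: {L / v:.2f} s)")
```

The absolute speeds differ by seven orders of magnitude, yet the locomotor time scale is the same 0.1 s for every entry.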
Benfenati, Francesco; Beretta, Gian Paolo
2018-04-01
We show that to prove the Onsager relations using the microscopic time reversibility one necessarily has to make an ergodic hypothesis, or a hypothesis closely linked to that. This is true in all the proofs of the Onsager relations in the literature: from the original proof by Onsager, to more advanced proofs in the context of linear response theory and the theory of Markov processes, to the proof in the context of the kinetic theory of gases. The only three proofs that do not require any kind of ergodic hypothesis are based on additional hypotheses on the macroscopic evolution: Ziegler's maximum entropy production principle (MEPP), the principle of time reversal invariance of the entropy production, or the steepest entropy ascent principle (SEAP).
Dansereau Richard M
2007-01-01
We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.
Mohammad H. Radfar
2006-11-01
We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.
Selin Aviyente
2010-01-01
Joint time-frequency representations offer a rich representation of event-related potentials (ERPs) that cannot be obtained through individual time- or frequency-domain analysis. This representation, however, comes at the expense of increased data volume and the difficulty of interpreting the resulting representations. Therefore, methods that can reduce the large amount of time-frequency data to experimentally relevant components are essential. In this paper, we present a method that reduces the large volume of ERP time-frequency data into a few significant time-frequency parameters. The proposed method is based on applying the widely used matching pursuit (MP) approach, with a Gabor dictionary, to principal components extracted from the time-frequency domain. The proposed PCA-Gabor decomposition is compared with other time-frequency data reduction methods, such as the time-frequency PCA approach alone and standard matching pursuit methods using a Gabor dictionary, for both simulated and biological data. The results show that the proposed PCA-Gabor approach performs better than either the PCA alone or the standard MP data reduction methods, by using the smallest amount of ERP data variance to produce the strongest statistical separation between experimental conditions.
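Plain matching pursuit with a Gabor dictionary — the building block the PCA-Gabor method applies to time-frequency principal components — can be sketched on a 1-D signal. The dictionary parameters and the two-atom test signal are made-up illustrative choices.

```python
import numpy as np

def gabor_atom(n, center, freq, width):
    """Unit-norm Gabor atom: a Gaussian-windowed cosine."""
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

def matching_pursuit(signal, dictionary, n_atoms=3):
    """Greedy MP: repeatedly pick the atom most correlated with the
    residual and subtract its projection."""
    residual = signal.copy()
    picks = []
    for _ in range(n_atoms):
        coeffs = dictionary @ residual
        k = int(np.argmax(np.abs(coeffs)))
        picks.append((k, coeffs[k]))
        residual = residual - coeffs[k] * dictionary[k]
    return picks, residual

n = 256
# Small hypothetical dictionary: a few centers, frequencies, widths.
params = [(c, f, w) for c in (64, 128, 192)
                    for f in (0.05, 0.1, 0.2)
                    for w in (8, 16)]
D = np.array([gabor_atom(n, *p) for p in params])

# Signal made of two dictionary atoms plus a little noise.
rng = np.random.default_rng(5)
signal = 2.0 * D[4] - 1.5 * D[13] + 0.05 * rng.normal(size=n)
picks, residual = matching_pursuit(signal, D, n_atoms=2)
print("selected atoms:", [k for k, _ in picks])
print(f"residual energy: {np.linalg.norm(residual)**2:.3f}")
```

MP recovers the two planted atoms and leaves only the noise energy in the residual.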
S. Prabhu
2014-06-01
A carbon nanotube (CNT) mixed grinding wheel has been used in the electrolytic in-process dressing (ELID) grinding process to analyze the surface characteristics of AISI D2 tool steel. The CNT grinding wheel has excellent thermal conductivity and good mechanical properties, which are used to improve the surface finish of the workpiece. Multi-objective optimization using grey relational analysis coupled with principal component analysis has been used to optimize the process parameters of the ELID grinding process. Based on the Taguchi design of experiments, an L9 orthogonal array was chosen for the experiments. The confirmation experiment verifies that the proposed grey-based Taguchi method is able to find the optimal process parameters for the multiple quality characteristics of surface roughness and metal removal rate. Analysis of variance (ANOVA) has been used to verify and validate the model. An empirical model for the prediction of the output parameters has been developed using regression analysis, and the results were compared with and without the CNT grinding wheel in the ELID grinding process.
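The grey relational grade computation used in studies like this one can be sketched for a small made-up set of experimental results (oriented so larger is better); equal weights stand in for the PCA-derived weights:

```python
import numpy as np

def grey_relational_grade(X, zeta=0.5, weights=None):
    """Grey relational coefficients and grades.

    X: (experiments x responses), already oriented so larger is better.
    Each column is normalized to [0, 1], compared to the ideal sequence
    (all ones), and averaged (optionally with weights, e.g. from PCA).
    """
    Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
    delta = np.abs(1.0 - Xn)                     # deviation from the ideal
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + zeta * dmax) / (delta + zeta * dmax)
    if weights is None:
        weights = np.full(X.shape[1], 1.0 / X.shape[1])
    return coeff, coeff @ weights

# Hypothetical runs: column 0 = metal removal rate (larger better),
# column 1 = 1/Ra so that larger is better for surface finish too.
results = np.array([[12.0, 1 / 0.80],
                    [15.0, 1 / 0.65],
                    [11.0, 1 / 0.90],
                    [18.0, 1 / 0.70]])
coeff, grade = grey_relational_grade(results)
best = int(np.argmax(grade))
print("grey relational grades:", np.round(grade, 3))
print("best experimental run :", best)
```

The run with the highest grade is the best compromise across both quality characteristics.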
Age-Related Differences of Maximum Phonation Time in Patients after Cardiac Surgery.
Izawa, Kazuhiro P; Kasahara, Yusuke; Hiraki, Koji; Hirano, Yasuyuki; Watanabe, Satoshi
2017-12-21
Background and aims: Maximum phonation time (MPT), which is related to respiratory function, is widely used to evaluate maximum vocal capabilities, because its use is non-invasive, quick, and inexpensive. We aimed to examine differences in MPT by age, following recovery phase II cardiac rehabilitation (CR). Methods: This longitudinal observational study assessed 50 consecutive cardiac patients who were divided into the middle-aged group (<65 years, n = 29) and older-aged group (≥65 years, n = 21). MPTs were measured at 1 and 3 months after cardiac surgery, and were compared. Results: The duration of MPT increased more significantly from month 1 to month 3 in the middle-aged group (19.2 ± 7.8 to 27.1 ± 11.6 s, p < 0.001) than in the older-aged group (12.6 ± 3.5 to 17.9 ± 6.0 s, p < 0.001). However, no statistically significant difference occurred in the % change of MPT from 1 month to 3 months after cardiac surgery between the middle-aged group and older-aged group, respectively (41.1% vs. 42.1%). In addition, there were no significant interactions of MPT in the two groups for 1 versus 3 months (F = 1.65, p = 0.20). Conclusion: Following phase II, CR improved MPT for all cardiac surgery patients.
Urniezius, Renaldas
2011-01-01
The principle of maximum relative entropy optimization was analyzed for dead-reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experiment's results confirmed that the noise on each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency between time-series data. Dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing dependency between time-series data. Data from the autocalibration experiment were revisited by removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach can be used for online dead-reckoning localization.
Lehmann, A; Scheffler, Ch; Hermanussen, M
2010-02-01
Recent progress in modelling individual growth has been achieved by combining principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late-18th-century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm (SD 7 mm). Seasonal height variation was found: low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle also enables growth modelling in historic height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.
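One common way to combine PCA with likelihood-style estimation on incomplete records is iterative low-rank imputation; the synthetic "growth" matrix, rank choice and missingness pattern below are assumptions for illustration, and the authors' actual estimator is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy growth matrix: 30 boys x 12 measurement ages; per-subject growth-rate
# variation plus measurement noise, with 25% of cells missing at random.
ages = np.arange(12)
curves = 140 + 3.0 * ages[None, :] * (1 + 0.2 * rng.standard_normal((30, 1)))
X = curves + rng.standard_normal((30, 12))
mask = rng.random(X.shape) < 0.25
Xobs = np.where(mask, np.nan, X)

def pca_impute(Y, rank=2, n_iter=50):
    """EM-style imputation: fill missing cells with column means, then
    alternate a low-rank SVD reconstruction with re-filling those cells."""
    miss = np.isnan(Y)
    filled = np.where(miss, np.nanmean(Y, axis=0), Y)
    for _ in range(n_iter):
        mu = filled.mean(axis=0)
        U, S, Vt = np.linalg.svd(filled - mu, full_matrices=False)
        approx = mu + (U[:, :rank] * S[:rank]) @ Vt[:rank]
        filled[miss] = approx[miss]
    return filled

Xhat = pca_impute(Xobs)
rmse = np.sqrt(np.mean((Xhat[mask] - X[mask]) ** 2))
print("imputation RMSE at missing cells:", round(float(rmse), 2))
```

Because subjects share a low-dimensional growth pattern, the low-rank fit recovers missing heights far better than column-mean filling.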
School Principals' Leadership Behaviours and Its Relation with Teachers' Sense of Self-Efficacy
Mehdinezhad, Vali; Mansouri, Masoumeh
2016-01-01
The aim of this study was to investigate the relationship between school principals' leadership behaviours and teachers' sense of self-efficacy. The research method was descriptive and correlational. A sample of 254 teachers was selected randomly by proportional sampling. For data collection, the Teachers' Sense of Efficacy Scale of…
Baltaci, Ali
2017-01-01
The aim of this study is to determine the mediating role of prejudice in the relationship between the cultural intelligence of school principals and their level of entrepreneurship. The study was designed as correlational survey research, using a quantitative research method. The universe of this study constitutes…
Predictive Ability of Variables Related to the Aspects of School Principals' Management
Lukaš, Mirko; Jankovic, Boris
2014-01-01
The authors of this research paper believe that school principals play an irreplaceable role in raising school efficiency. Their role is rather neglected in Croatian academic debates on improving the quality of the school system. This research intends to enhance the scientific level of their position as irreplaceable factors in a school…
Iyyappan, I.; Ponmurugan, M.
2018-03-01
The trade-off figure of merit ($\dot{\Omega}$) criterion accounts for the best compromise between the useful input energy and the lost input energy of heat devices. When a heat engine is working at the maximum $\dot{\Omega}$ criterion, its efficiency increases significantly over the efficiency at maximum power. We derive the general relations between the power, the efficiency at the maximum $\dot{\Omega}$ criterion and the minimum dissipation for the linear irreversible heat engine. The efficiency at the maximum $\dot{\Omega}$ criterion has the lower bound …
Kordi, Mehdi; Goodall, Stuart; Barratt, Paul; Rowley, Nicola; Leeder, Jonathan; Howatson, Glyn
2017-08-01
Within a cycling paradigm, little has been done to understand the relationships between maximal isometric strength of different single-joint lower-body muscle groups, their relation with, and ability to predict, peak power output (PPO), and how they compare to an isometric cycling-specific task. The aim of this study was to establish relationships between maximal voluntary torque production from isometric single-joint and cycling-specific tasks and to assess their ability to predict PPO. Twenty male trained cyclists participated in this study. Peak torque was measured by performing maximum voluntary contractions (MVCs) of the knee extensors, knee flexors, dorsiflexors and hip extensors, whilst instrumented cranks measured isometric peak torque from MVCs performed with participants in their cycling-specific position (ISOCYC). A stepwise regression showed that peak torque of the knee extensors was the only significant predictor of PPO when using SJD, accounting for 47% of the variance. However, when compared to ISOCYC, the only significant predictor of PPO was ISOCYC, which accounted for 77% of the variance. This suggests that peak torque of the knee extensors was the best single-joint predictor of PPO in sprint cycling. Furthermore, a stronger prediction can be made from a task-specific isometric task. Copyright © 2017 Elsevier Ltd. All rights reserved.
Larsen, Donald E.; Hunter, Joseph E.
2014-01-01
Research conducted by Larsen and Hunter (2013, February) identified a clear pattern in secondary school principals' decision-making related to mandated change: more than half of participants' decisions were based on core values and beliefs, requiring value judgments. Analysis of themes revealed that more than half of administrative decisions…
Adom Giffin
2014-09-01
In this paper, we continue our efforts to show how maximum relative entropy (MrE) can be used as a universal updating algorithm. Here, our purpose is to tackle a joint state and parameter estimation problem where our system is nonlinear and in a non-equilibrium state, i.e., perturbed by varying external forces. Traditional parameter estimation can be performed by using filters, such as the extended Kalman filter (EKF). However, as shown with a toy example of a system with first-order non-homogeneous ordinary differential equations, assumptions made by the EKF algorithm (such as the Markov assumption) may not be valid. The problem can be solved with exponential smoothing, e.g., the exponentially weighted moving average (EWMA). Although this has been shown to produce acceptable filtering results in real exponential systems, it still cannot simultaneously estimate both the state and its parameters, and it has its own assumptions that are not always valid, for example when jump discontinuities exist. We show that by applying MrE as a filter, we can not only develop the closed-form solutions, but we can also infer the parameters of the differential equation simultaneously with the means. This is useful in real, physical systems, where we want not only to filter the noise from our measurements, but also to simultaneously infer the parameters of the dynamics of a nonlinear and non-equilibrium system. Although many assumptions were made throughout the paper to illustrate that the EKF and exponential smoothing are special cases of MrE, we are not “constrained” by these assumptions. In other words, MrE is completely general and can be used in broader ways.
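The EWMA smoother that the abstract treats as a special case of MrE is, on its own, a one-line recursion. This sketch (with an invented noisy signal and smoothing constant) shows the baseline filter that MrE generalizes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy observations of a slowly varying signal: a toy stand-in for the
# nonlinear, externally forced system discussed in the abstract.
t = np.linspace(0, 10, 500)
truth = np.sin(0.5 * t)
y = truth + 0.4 * rng.standard_normal(t.size)

def ewma(y, alpha=0.1):
    """Exponentially weighted moving average:
    s_k = alpha * y_k + (1 - alpha) * s_{k-1}."""
    s = np.empty_like(y)
    s[0] = y[0]
    for k in range(1, y.size):
        s[k] = alpha * y[k] + (1 - alpha) * s[k - 1]
    return s

smoothed = ewma(y)
raw_err = np.sqrt(np.mean((y - truth) ** 2))
flt_err = np.sqrt(np.mean((smoothed - truth) ** 2))
print(f"RMSE raw {raw_err:.3f} -> smoothed {flt_err:.3f}")
```

Unlike this fixed-weight recursion, the MrE filter of the paper also updates the parameters of the underlying differential equation at each step.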
Journy, N; Sinno-Tellier, S; Maccia, C; Le Tertre, A; Pirard, P; Pagès, P; Eilstein, D; Donadieu, J; Bar, O
2012-01-01
Objective: The study aimed to characterise the factors related to the X-ray dose delivered to the patient's skin during interventional cardiology procedures. Methods: We studied 177 coronary angiographies (CAs) and/or percutaneous transluminal coronary angioplasties (PTCAs) carried out in a French clinic on the same radiography table. The clinical and therapeutic characteristics, and the technical parameters of the procedures, were collected. The dose area product (DAP) and the maximum skin dose (MSD) were measured by an ionisation chamber (Diamentor; Philips, Amsterdam, The Netherlands) and radiosensitive film (Gafchromic; International Specialty Products Advanced Materials Group, Wayne, NJ). Multivariate analyses were used to assess the effects of the factors of interest on dose. Results: The mean MSD and DAP were respectively 389 mGy and 65 Gy·cm⁻² for CAs, and 916 mGy and 69 Gy·cm⁻² for PTCAs. For 8% of the procedures, the MSD exceeded 2 Gy. Although a linear relationship between the MSD and the DAP was observed for CAs (r=0.93), a simple extrapolation of such a model to PTCAs would lead to an inadequate assessment of the risk, especially for the highest dose values. For PTCAs, the body mass index, the therapeutic complexity, the fluoroscopy time and the number of cine frames were independent explanatory factors of the MSD, whoever the practitioner was. Moreover, the effect of technical factors such as collimation, cinematography settings and X-ray tube orientations on the DAP was shown. Conclusion: Optimising the technical options for interventional procedures and training staff on radiation protection might notably reduce the dose and ultimately avoid patient skin lesions. PMID:22457404
Ion Gr. IONESCU
2015-11-01
Although the existence of some regulating documents, called capitulations, concerning relations on various planes between the Romanian Country (Wallachia), Moldavia and the Ottoman Empire was known, the first of these diplomatic documents that were operational over time was discovered only in 1974. It was an act that had been granted to Mihnea Turcitul (Mihnea the Turkified) in the year 1585. This important discovery has since been complemented by others with the same purpose. In fact, these were diplomatic documents with the role of a treaty, which regulated quite explicitly the status of the two Romanian principalities in their relations with the suzerain power. The most important part of their contents was the recognition of the internal autonomy of the principalities and of a certain degree of freedom in relations beyond their borders. The price the Romanian countries paid, however, was to never become hostile to Ottoman interests, integrating into Ottoman foreign policy and paying an annual tribute.
Hamata, Marcelo Matida; Zuim, Paulo Renato Junqueira; Garcia, Alicio Rosalino
2009-01-01
Fabrication of occlusal splints in centric relation for temporomandibular disorder (TMD) patients is arguable, since this position has been defined for the asymptomatic stomatognathic system. Thus, maximum intercuspation might be employed in patients with occlusal stability, eliminating the need for interocclusal records. This study compared occlusal splints fabricated in centric relation and maximum intercuspation in muscle pain reduction of TMD patients. Twenty patients with TMD of myogenous origin and bruxism were divided into 2 groups treated with splints in maximum intercuspation (I) or centric relation (II). Clinical, electrognathographic and electromyographic examinations were performed before and 3 months after therapy. Data were analyzed by Student's t test. Differences at the 5% level of probability were considered statistically significant. There was a remarkable reduction in pain symptomatology, without statistically significant differences (p>0.05) between the groups. There was mandibular repositioning during therapy, as demonstrated by the change in occlusal contacts on the splints. Electrognathographic examination demonstrated a significant increase in maximum left lateral movement for group I and right lateral movement for group II (p<0.05). No significant differences (p>0.05) were found in the electromyographic activities at rest after utilization of both splints. In conclusion, both occlusal splints were effective for pain control and presented similar action. The results suggest that maximum intercuspation may be used for fabrication of occlusal splints in patients with occlusal stability without large discrepancies between centric relation and maximum intercuspation. Moreover, this technique is simpler and less expensive.
Perspectives on Inmate Communication and Interpersonal Relations in the Maximum Security Prison.
Van Voorhis, Patricia; Meussling, Vonne
In recent years, scholarly and applied inquiry has addressed the importance of interpersonal communication patterns and problems in maximum security institutions for males. As a result of this research, the number of programs designed to improve the interpersonal effectiveness of prison inmates has increased dramatically. Research suggests that…
Clegg, Samuel M [Los Alamos National Laboratory; Barefield, James E [Los Alamos National Laboratory; Wiens, Roger C [Los Alamos National Laboratory; Sklute, Elizabeth [MT HOLYOKE COLLEGE; Dyare, Melinda D [MT HOLYOKE COLLEGE
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission-line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
25(OH)D3 Levels Relative to Muscle Strength and Maximum Oxygen Uptake in Athletes
Książek Anna
2016-04-01
Vitamin D is mainly known for its effects on bone and calcium metabolism. The discovery of vitamin D receptors in many extraskeletal cells suggests that it may also play a significant role in other organs and systems. The aim of our study was to assess the relationship between 25(OH)D3 levels, lower-limb isokinetic strength and maximum oxygen uptake in well-trained professional football players. We enrolled 43 Polish premier league soccer players. The mean age was 22.7±5.3 years. Our study showed decreased serum 25(OH)D3 levels in 74.4% of the professional players. The results also demonstrated a lack of statistically significant correlation between 25(OH)D3 levels and lower-limb muscle strength, with the exception of peak torque of the left knee extensors at an angular velocity of 150°/s (r=0.41). No significant correlations were found between hand grip strength and maximum oxygen uptake. Based on our study, we concluded that in well-trained professional soccer players there was no correlation between serum levels of 25(OH)D3 and muscle strength or maximum oxygen uptake.
Relative timing of last glacial maximum and late-glacial events in the central tropical Andes
Bromley, Gordon R. M.; Schaefer, Joerg M.; Winckler, Gisela; Hall, Brenda L.; Todd, Claire E.; Rademaker, Kurt M.
2009-11-01
Whether or not tropical climate fluctuated in synchrony with global events during the Late Pleistocene is a key problem in climate research. However, the timing of past climate changes in the tropics remains controversial, with a number of recent studies reporting that tropical ice age climate is out of phase with global events. Here, we present geomorphic evidence and an in-situ cosmogenic 3He surface-exposure chronology from Nevado Coropuna, southern Peru, showing that glaciers underwent at least two significant advances during the Late Pleistocene prior to Holocene warming. Comparison of our glacial-geomorphic map at Nevado Coropuna to mid-latitude reconstructions yields a striking similarity between Last Glacial Maximum (LGM) and Late-Glacial sequences in tropical and temperate regions. Exposure ages constraining the maximum and end of the older advance at Nevado Coropuna range between 24.5 and 25.3 ka, and between 16.7 and 21.1 ka, respectively, depending on the cosmogenic production rate scaling model used. Similarly, the mean age of the younger event ranges from 10 to 13 ka. This implies that (1) the LGM and the onset of deglaciation in southern Peru occurred no earlier than at higher latitudes and (2) that a significant Late-Glacial event occurred, most likely prior to the Holocene, coherent with the glacial record from mid and high latitudes. The time elapsed between the end of the LGM and the Late-Glacial event at Nevado Coropuna is independent of scaling model and matches the period between the LGM termination and Late-Glacial reversal in classic mid-latitude records, suggesting that these events in both tropical and temperate regions were in phase.
Enslin, J.H.R.
1990-01-01
A well-engineered renewable remote energy system utilizing the principle of maximum power point tracking can be more cost effective, has higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Practical field measurements show that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small-rating Remote Area Power Supply systems. The advantages at larger temperature variations and larger power-rated systems are much higher. Other advantages include optimal sizing and system monitoring and control.
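The hill-climbing (perturb-and-observe) tracking loop mentioned above can be sketched as follows; the I-V curve, starting point and step size are illustrative assumptions, not the converter's actual characteristics:

```python
import numpy as np

def panel_current(v):
    """Toy PV I-V curve (illustrative): roughly constant current that
    collapses near the open-circuit voltage."""
    i_sc, v_oc = 5.0, 21.0  # assumed short-circuit current and open-circuit voltage
    return np.clip(i_sc * (1 - np.exp((v - v_oc) / 1.5)), 0.0, None)

def hill_climb_mppt(v0=12.0, step=0.05, n_steps=2000):
    """Perturb-and-observe: nudge the operating voltage, keep the direction
    that increased output power, reverse it otherwise."""
    v, direction = v0, +1.0
    p_prev = v * panel_current(v)
    for _ in range(n_steps):
        v = v + direction * step
        p = v * panel_current(v)
        if p < p_prev:
            direction = -direction  # power dropped: reverse the perturbation
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = hill_climb_mppt()
print(f"tracked operating point: {v_mpp:.2f} V, {p_mpp:.1f} W")
```

In a real charging regulator the same logic perturbs the converter duty cycle and maximizes measured battery current; the loop settles into a small oscillation around the maximum power point.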
Iannone, Ron
1973-01-01
Achievement and recognition were mentioned as factors appearing with greater frequency among principals' job satisfactions; school district policy and interpersonal relations were mentioned as job dissatisfactions. (Editor)
Shen, Jianping; Leslie, Jeffrey M.; Spybrook, Jessaca K.; Ma, Xin
2012-01-01
Using nationally representative samples for public school teachers and principals, the authors inquired into whether principal background and school processes are related to teacher job satisfaction. Employing hierarchical linear modeling (HLM), the authors were able to control for background characteristics at both the teacher and school levels.…
Gabriel Pinkas
2017-09-01
The experience of the environment in which an activity is performed is a significant factor in the outcome of that activity, that is, in the efficiency of the work and the degree to which its goal is achieved. Within the work environment, physical and social conditions can be observed. The first, which include material and technical means, are mostly static, easily perceivable and measurable. The second, which include social relations, are much more susceptible to change and more difficult to perceive and measure, and their experience can differ markedly between individuals within the same group. Although all members of a group participate in group dynamics and relationships, not all are equally relevant to these processes. Given the position that carries the right and responsibility to set the vision and mission, define goals, create working conditions, make decisions and provide feedback, the leader is in most cases crucial. This paper analyzes the role of elementary school principals in creating the school climate, as the non-material environment in which educational activity is carried out and, in this sense, a specific group/work organization. Both variables were measured through teachers' assessments. The instruments used were the Multifactor Leadership Questionnaire (MLQ; Avolio and Bass) and the School Level Environment Questionnaire (SLEQ; Johnson, Stevens and Zvoch). The survey was conducted in elementary schools in the wider city area of Tuzla, on a sample of 467 teachers and 25 principals. For statistical processing, multiple regression (ordinary least squares) and direct discriminant analysis were applied. The obtained results point to a connection between the perceived leadership style of elementary school principals and the school climate experienced by teachers, especially in the fields of innovation in teaching and mutual cooperation.
DeWeber, Jefferson T; Wagner, Tyler
2018-06-01
Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our
Lemos, Jose P.S.; Lopes, Francisco J.; Quinta, Goncalo [Universidade de Lisboa, UL, Departamento de Fisica, Centro Multidisciplinar de Astrofisica, CENTRA, Instituto Superior Tecnico, IST, Lisbon (Portugal); Zanchin, Vilson T. [Universidade Federal do ABC, Centro de Ciencias Naturais e Humanas, Santo Andre, SP (Brazil)
2015-02-01
One of the stiffest equations of state for matter in a compact star is that of constant energy density, which generates the interior Schwarzschild radius-to-mass relation and the Misner maximum mass for relativistic compact stars. If dark matter populates the interior of stars, and this matter is supersymmetric or of some other type, some of it possessing a tiny electric charge, there is the possibility that highly compact stars can trap a small but non-negligible electric charge. In this case the radius-to-mass relation for such compact stars should get modifications. We use an analytical scheme to investigate the limiting radius-to-mass relation and the maximum mass of relativistic stars made of an incompressible fluid with a small electric charge. The investigation is carried out by using the hydrostatic equilibrium equation, i.e., the Tolman-Oppenheimer-Volkoff (TOV) equation, together with the other equations of structure, with the further hypothesis that the charge distribution is proportional to the energy density. The approach relies on Volkoff and Misner's method to solve the TOV equation. For zero charge one gets the interior Schwarzschild limit, and supposing incompressible boson or fermion matter with constituents with masses of the order of the neutron mass one finds that the maximum mass is the Misner mass. For a small electric charge, our analytical approximating scheme, valid to first order in the star's electric charge, shows that the maximum mass increases relative to the uncharged case, whereas the minimum possible radius decreases, an expected effect since the new field is repulsive, aiding the pressure to sustain the star against gravitational collapse. (orig.)
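For reference, the structure equations the abstract relies on can be written out; this is the standard uncharged TOV system in geometric units (G = c = 1), with the charged case adding q(r)-dependent terms that are omitted here:

```latex
% Hydrostatic equilibrium (TOV) and mass-continuity equations, uncharged case:
\frac{dP}{dr} = -\,\frac{(\rho + P)\,\bigl(m(r) + 4\pi r^{3} P\bigr)}{r\,\bigl(r - 2m(r)\bigr)},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\rho .
% For constant energy density \rho, integrating these yields the interior
% Schwarzschild bound on compactness:
\frac{M}{R} \;\le\; \frac{4}{9}.
```

Saturating this bound at a fixed (nuclear-scale) density gives the Misner maximum mass mentioned above; the paper's contribution is the first-order correction to both limits when a small charge distribution proportional to the energy density is present.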
Using Tranformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions
Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.
2014-12-01
One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.
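The correspondence between regularization and prior probabilities described above can be made concrete in the linear-Gaussian case. The sketch below (synthetic data, an arbitrary forward operator, not a glaciological model) shows that Tikhonov-regularized least squares and the Bayesian MAP estimate under a zero-mean Gaussian prior solve the same normal equations:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))            # hypothetical linear forward model
y = A @ rng.normal(size=5) + 0.1 * rng.normal(size=20)
lam = 0.5                               # regularization weight

# Cost-function view: minimize ||A x - y||^2 + lam * ||x||^2
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ y)

# Bayesian view: prior x ~ N(0, lam^{-1} I), unit-variance Gaussian noise;
# the posterior mean (MAP estimate) solves the identical normal equations
post_cov = np.linalg.inv(A.T @ A + lam * np.eye(5))
x_map = post_cov @ (A.T @ y)
```

The two estimates coincide, which is exactly the sense in which "how much regularization" is an implicit choice of prior variance.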
National Oceanic and Atmospheric Administration, Department of Commerce — Principal Ports are defined by port limits or US Army Corps of Engineers (USACE) projects; these exclude non-USACE projects not authorized for publication. The...
Syed S. Ghani
2017-12-01
The current work examines trends in Lautoka's temperature and relative humidity during the period 2003-2013, analyzed using recently updated data obtained from the Fiji Meteorological Services (FMS). Four elements are investigated: mean maximum temperature, mean minimum temperature, diurnal temperature range (DTR), and mean relative humidity. From 2003 to 2013, the annual mean temperature increased by between 0.02 and 0.08 °C. The warming is greater in the minimum temperature than in the maximum temperature, resulting in a decrease of the diurnal temperature range. The statistically significant increase was mostly seen during the summer months of December and January. Mean relative humidity has also increased from 3% to 8%. The bases of abnormal climate conditions are also studied; these were defined with temperature or humidity anomalies in their appropriate time sequences. They confirm the observed findings and show that the climate throughout Lautoka has gradually become damper and hotter during this period. While we are only at an initial phase of the probable trends in temperature change, ecological responses to recent climate change are already clearly noticeable. It is therefore proposed that it would be easier to identify climate alteration in a small island nation like Fiji.
Hallin, M.; Hörmann, S.; Piegorsch, W.; El Shaarawi, A.
2012-01-01
Principal Components are probably the best known and most widely used of all multivariate analysis techniques. The essential idea consists in performing a linear transformation of the observed k-dimensional variables in such a way that the new variables are vectors of k mutually orthogonal
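The orthogonal linear transformation described above can be sketched in a few lines via an eigendecomposition of the sample covariance (synthetic data; a textbook illustration, not any particular author's implementation):

```python
import numpy as np

def principal_components(X):
    """Linear transform of centered k-dimensional observations into k mutually
    orthogonal components ordered by decreasing variance (textbook PCA)."""
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1]        # largest variance first
    loadings = eigvecs[:, order]
    return Xc @ loadings, loadings

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
scores, W = principal_components(X)
```

The loadings matrix is orthonormal, so the new variables are mutually orthogonal directions of the original k-dimensional space.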
Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.
2013-12-01
Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features--such as tesserae and coronae--lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matern model, and perform maximum-likelihood based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae, tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of wavenumber). With the reduction, the clustering of geological region types in two-parameter space becomes promising. Finally, we perform inversions for the joint spectral variance of
Nix, Toby Lee
2012-01-01
The purpose of this study was to explore the perceptions and experiences of three Texas high school principals regarding their first-year of leadership involving Career and Technical Education (CTE) programs. A narrative non-fiction methodology was used to present the participants' stories and perceptions of their lived experiences. The three…
Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong
2016-01-01
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
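For a known template in white Gaussian noise, the ML delay estimate reduces to maximizing the cross-correlation with the template. The sketch below uses that classical special case on a synthetic ERP-like pulse as a simplified stand-in for the joint-ML schemes described above (signal shape, length, and noise level are all illustrative assumptions):

```python
import numpy as np

def estimate_delay(x, ref):
    """Delay estimate by maximizing circular cross-correlation with a
    reference template -- the ML estimator for a known signal in white
    Gaussian noise."""
    corr = [np.dot(np.roll(ref, k), x) for k in range(len(ref))]
    return int(np.argmax(corr))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 200, endpoint=False)
template = np.exp(-((t - 0.3) ** 2) / 0.002)     # synthetic ERP-like peak
trial = np.roll(template, 7) + 0.05 * rng.normal(size=200)
delay = estimate_delay(trial, template)
```

Compensating each trial by its estimated delay before averaging is what protects the ERP average from the smearing caused by trial-to-trial latency jitter.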
Kyungsoo Kim
2016-06-01
Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain's response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
Shennan, Ian; Bradley, Sarah L.; Edwards, Robin
2018-05-01
The new sea-level database for Britain and Ireland contains >2100 data points from 86 regions and records relative sea-level (RSL) changes over the last 20 ka and across elevations ranging from ∼+40 to -55 m. It reveals radically different patterns of RSL as we move from regions near the centre of the Celtic ice sheet at the last glacial maximum to regions near and beyond the ice limits. Validated sea-level index points and limiting data show good agreement with the broad patterns of RSL change predicted by current glacial isostatic adjustment (GIA) models. The index points show no consistent pattern of synchronous coastal advance and retreat across different regions, ∼100-500 km scale, indicating that within-estuary processes, rather than decimetre- and centennial-scale oscillations in sea level, produce major controls on the temporal pattern of horizontal shifts in coastal sedimentary environments. Comparisons between the database and GIA model predictions for multiple regions provide potentially powerful constraints on various characteristics of global GIA models, including the magnitude of MWP1A, the final deglaciation of the Laurentide ice sheet and the continued melting of Antarctica after 7 ka BP.
Nathanaili, Valbona
2016-01-01
This article aims to evaluate the relation between school performance and the Teacher's Influence Scale on certain issues from their colleagues and principals in the public educational system of Albania. For this purpose, a questionnaire was used. The sample consisted of 428 teachers, teaching at 20 public schools in the pre-university educational…
Young, I. Phillip; Vang, Maiyoua; Young, Karen Holsey
2008-01-01
Standards-based student achievement scores are used to assess the effectiveness of public education and to have important implications regarding school public relations and human resource practices. Often overlooked is that these scores may be moderated by the characteristics of students, the qualifications of principals, and the restraints…
Legal Problems of the Principal.
Stern, Ralph D.; And Others
The three talks included here treat aspects of the law--tort liability, student records, and the age of majority--as they relate to the principal. Specifically, the talk on torts deals with the consequences of principal negligence in the event of injuries to students. Assurance is given that a reasonable and prudent principal will have a minimum…
Radim Uhlář
2009-09-01
BACKGROUND: There are several factors (the initial ski jumper's body position and its changes at the transition to the flight phase, the magnitude and direction of the velocity vector of the jumper's center of mass, the magnitude of the aerodynamic drag and lift forces, etc.) which determine the trajectory of the jumper-ski system along with the total distance of the jump. OBJECTIVE: The objective of this paper is to present a method based on Pontryagin's maximum principle, which allows us to obtain a solution of the optimization problem for flight style control with three constrained control variables: the angle of attack (a), the body-ski angle (b), and the ski opening angle (V). METHODS: The flight distance was used as the optimality criterion. A borrowed regression function was taken as the source of information about the dependence of the drag (D) and lift (L) areas on the control variables, with tabulated regression coefficients. The trajectories of the reference and optimized jumps were compared with the K = 125 m jumping hill profile in Frenštát pod Radhoštěm (Czech Republic), and the corresponding lengths of the jumps, aerodynamic drag and lift forces, and magnitudes of the ski jumper system's center-of-mass velocity vector and its vertical and horizontal components were evaluated. Admissible control variables were taken at each time from a bounded set, to respect the realistic posture of the ski jumper system in flight. RESULTS: It was found that a ski jumper should, within the bounded set of admissible control variables, minimize the angles (a) and (b), whereas the angle (V) should be maximized. The length increment due to optimization is 17%. CONCLUSIONS: For future work it is necessary to determine the dependence of the aerodynamic forces acting on the ski jumper system in flight via regression analysis of experimental data, as well as the application of control variables related to the ski jumper's mental and motor abilities.
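The flight-phase dynamics underlying the optimization above (gravity plus control-dependent drag and lift) can be illustrated with a simple point-mass integration. The sketch below is not the Pontryagin solution; it only shows how two fixed choices of effective drag and lift area change the jump length, with all parameter values (mass, speed, landing slope, areas) being illustrative assumptions:

```python
import math

def jump_distance(cdA, clA, m=70.0, rho=1.2, v0=26.0, slope=math.radians(35.0)):
    """Euler integration of a flight phase: gravity plus aerodynamic drag
    (opposing velocity) and lift (normal to velocity) with fixed effective
    areas cdA, clA in m**2. Landing occurs on a straight slope."""
    g, dt = 9.81, 0.005
    x, y, vx, vy = 0.0, 0.0, v0, 0.0
    for _ in range(20000):                      # hard cap on flight time
        v = math.hypot(vx, vy)
        D = 0.5 * rho * cdA * v * v             # drag magnitude
        L = 0.5 * rho * clA * v * v             # lift magnitude
        ax = (-D * vx - L * vy) / (m * v)
        ay = (-D * vy + L * vx) / (m * v) - g
        vx, vy = vx + ax * dt, vy + ay * dt
        x, y = x + vx * dt, y + vy * dt
        if y <= -x * math.tan(slope):           # touchdown on the landing slope
            break
    return math.hypot(x, y)                     # jump length along the slope

short = jump_distance(cdA=0.9, clA=0.5)         # draggy posture
long_ = jump_distance(cdA=0.5, clA=0.6)         # lower drag, more lift
```

Reducing drag and raising lift lengthens the jump, which is the qualitative direction of the optimal control found in the paper (minimize angles a and b, maximize V).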
Verhoek-Miller, Nancy; Miller, Duane I; Shirachi, Miyoko; Hoda, Nicholas
2002-08-01
Two studies investigated teachers' and principals' power styles as related to college students' retrospective ratings of satisfaction and peers' abusive behavior. One study also investigated retrospective self-perception as related to students' sensitivity to the occurrence of physical and psychological abuse in the school environment. Among the findings were positive correlations between subjects' perceptions that their typical elementary school teacher used referent, legitimate, or expert power styles and subjects' reported satisfaction with their elementary school experience. Small but statistically significant correlations were found suggesting that principals' power style was weakly associated with ratings of psychological abuse in elementary school and physical abuse in middle school. Also, students who rated themselves as intelligent, sensitive, attractive, and depressive had higher ratings of perceived psychological and physical abuse at school. It was concluded that parameters of leaders' power styles and subjects' vigilance might be useful for understanding school climates. Experimentally designed studies are required.
Chang-Qing Duan
2008-11-01
Color is one of the key characteristics used to evaluate the sensory quality of red wine, and anthocyanins are the main contributors to color. Monomeric anthocyanins and CIELAB color values were investigated by HPLC-MS and spectrophotometry during fermentation of Cabernet Sauvignon red wine, and principal component regression (PCR), a statistical tool, was used to establish a linkage between the detected anthocyanins and wine color. The results showed that 14 monomeric anthocyanins could be identified in the wine samples; all of these anthocyanins were negatively correlated with the L*, b* and H*ab values, but positively correlated with the a* and C*ab values. On an equal concentration basis for each detected anthocyanin, cyanidin-3-O-glucoside (Cy3-glu) had the most influence on the CIELAB color values, while malvidin-3-O-glucoside (Mv3-glu) had the least. The color values of the various monomeric anthocyanins were influenced by their structures: the substituents on the B-ring, the acyl groups on the glucoside, and the molecular steric structure. This work develops a statistical method for evaluating the correlation between wine color and monomeric anthocyanins, and also provides a basis for elucidating the effect of intramolecular copigmentation on wine color.
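Principal component regression, the statistical tool named above, regresses the response on a few principal component scores of the predictors and then maps the coefficients back to the original variables. A minimal sketch on synthetic data (not the wine dataset) follows:

```python
import numpy as np

def pcr_fit(X, y, k):
    """Principal component regression: regress y on the first k principal
    component scores of X, then express the fit as coefficients on the
    original predictors."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:k].T                         # first k PC scores
    gamma, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    return Vt[:k].T @ gamma                        # back to original variables

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 6))
beta_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=100)
beta = pcr_fit(X, y, k=6)   # with all components retained, PCR equals OLS
```

Retaining k smaller than the number of predictors is what stabilizes the regression when the predictors (here, anthocyanin concentrations) are strongly collinear.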
Zhao, Yan-Yan; Liu, Li-Yan; Han, Yuan-Yuan; Li, Yue-Qiu; Wang, Yan; Shi, Min-Jian
2013-08-01
A simple, fast and sensitive analytical method for the simultaneous separation and detection of 18alpha-glycyrrhizinic acid, 18beta-glycyrrhizinic acid, related substance A and related substance B by RP-HPLC, together with a drug quality standard, was established. The structures of the principal component isomers and related substances of the raw material drug of ammonium glycyrrhizinate were confirmed. The European Pharmacopoeia (EP 7.0), the British Pharmacopoeia (2012), the National Drug Standards of China (WS 1-XG-2002), and related domestic and international literature were consulted to select the composition of the mobile phase. The experimental parameters, including salt concentration, pH, amount of organic solvent, column temperature and flow rate, were optimized. Finally, the assay was conducted on a Durashell-C18 column (250 mm x 4.6 mm, 5 microm) with 0.01 mol/L ammonium perchlorate (ammonia added to adjust the pH value to 8.2)-methanol (48:52) as the mobile phase at a flow rate of 0.8 mL/min, and the detection wavelength was set at 254 nm. The column temperature was 50 degrees C and the injection volume was 10 microL. MS, NMR, UV and RP-HPLC were used to confirm the structures of the principal component isomers and related substances of the raw material drug of ammonium glycyrrhizinate. Under the optimized separation conditions, the calibration curves of 18alpha-glycyrrhizinic acid, 18beta-glycyrrhizinic acid, related substance A and related substance B showed good linearity within the concentration range of 0.50-100 microg/mL (r = 0.9999). The detection limits for 18alpha-glycyrrhizinic acid, 18beta-glycyrrhizinic acid, related substance A and related substance B were 0.15, 0.10, 0.10 and 0.15 microg/mL, respectively. The method is sensitive and reproducible, and the results are accurate and reliable. It can be used for chiral resolution of 18alpha-glycyrrhizinic acid and 18beta-glycyrrhizinic acid, and detection content of principal component and
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Principal. 208.995 Section 208.995 Foreign...) Definitions § 208.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...
2010-04-01
... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Principal. 1006.995 Section 1006.995 Foreign... § 1006.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...
2010-04-01
... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Principal. 1508.995 Section 1508.995 Foreign...) Definitions § 1508.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...
Mohamed, A. A.; Gopalswamy, N; Yashiro, S.; Akiyama, S.; Makela, P.; Xie, H.; Jung, H.
2012-01-01
We study the interaction between coronal holes (CHs) and coronal mass ejections (CMEs) using the resultant force exerted by all the coronal holes present on the disk, which is defined as the coronal hole influence parameter (CHIP). The CHIP magnitude for each CH depends on the CH area, the distance between the CH centroid and the eruption region, and the average magnetic field within the CH at the photospheric level. The CHIP direction for each CH points from the CH centroid to the eruption region. We focus on Solar Cycle 23 CMEs originating from the disk center of the Sun (central meridian distance ≤ 15°) and resulting in magnetic clouds (MCs) and non-MCs in the solar wind. The CHIP is found to be the smallest during the rise phase for MCs and non-MCs. The maximum phase has the largest CHIP value (2.9 G) for non-MCs. The CHIP is the largest (5.8 G) for driverless (DL) shocks, which are shocks at 1 AU with no discernible MC or non-MC. These results suggest that the behavior of non-MCs is similar to that of the DL shocks and different from that of MCs. In other words, the CHs may deflect the CMEs away from the Sun-Earth line and force them to behave like limb CMEs with DL shocks. This finding supports the idea that all CMEs may be flux ropes if viewed from an appropriate vantage point.
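The CHIP construction above is a vector sum over coronal holes. The abstract specifies only that each hole's contribution depends on its area, mean field, and centroid distance, and points from the centroid toward the eruption site; the inverse-square form used below is therefore an assumption for illustration, as are all the numbers:

```python
import math

def chip(holes, eruption):
    """Resultant coronal-hole influence at an eruption site. Each hole is
    (centroid_x, centroid_y, area, mean_field); its contribution points from
    the centroid toward the eruption site. The magnitude law f = B * A / d**2
    is a hypothetical choice, not taken from the paper."""
    fx = fy = 0.0
    ex, ey = eruption
    for (cx, cy, area, b_mean) in holes:
        dx, dy = ex - cx, ey - cy
        d = math.hypot(dx, dy)
        f = b_mean * area / d ** 2
        fx += f * dx / d                 # unit vector toward the eruption
        fy += f * dy / d
    return math.hypot(fx, fy)            # magnitude of the resultant

holes = [(100.0, 0.0, 5000.0, 3.0), (-200.0, 50.0, 8000.0, 5.0)]
strength = chip(holes, eruption=(0.0, 0.0))
```

Two identical holes placed symmetrically about the eruption site cancel exactly, which is the vector-sum behavior that lets opposing holes null each other's deflection.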
Andrieux, A.; Vandanjon, P. O.; Lengelle, R.; Chabanon, C.
2010-12-01
Tyre-road friction estimation methods have been the objective of many research programmes throughout the world. Most of these methods aim at estimating friction components such as the tyre longitudinal slip rate κ and the friction coefficient μ in the contact patch area. In order to estimate the maximum available friction coefficient μmax, these methods generally use a probabilistic relationship between the grip obtained for low tyre excitations (such as constant-speed driving) and the grip obtained for high tyre excitations (such as an emergency braking manoeuvre). Confirmation or invalidation of this relationship from experimental results is the purpose of this paper. Experiments were carried out on a reference track including several test boards corresponding to a wide textural spectrum. The main advantage of these experiments lies in the use of a vehicle allowing us to accurately build a point-by-point relationship between κ and μ. This relationship has been determined for different tyres and pavement textures. Finally, the curves obtained are analysed to check the validity of the relationship between the current friction coefficient used by the car during normal driving conditions and μmax.
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
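The quantum relative entropy that this extension introduces is the matrix-valued analogue of the Kullback-Leibler divergence, S(ρ||σ) = Tr[ρ(log ρ − log σ)], and its invariance under a joint unitary transformation is what makes the continuation of off-diagonal elements basis-independent. A minimal numerical sketch (small hand-picked matrices, not an analytic-continuation workflow):

```python
import numpy as np

def quantum_relative_entropy(rho, sigma):
    """S(rho || sigma) = Tr[rho (log rho - log sigma)] for positive-definite
    Hermitian matrices, computed via eigendecomposition."""
    def logm(a):                       # matrix logarithm of a Hermitian matrix
        w, v = np.linalg.eigh(a)
        return (v * np.log(w)) @ v.conj().T
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

rho = np.diag([0.7, 0.3])              # toy density matrices (trace one)
sigma = np.diag([0.5, 0.5])
s = quantum_relative_entropy(rho, sigma)
```

S(ρ||σ) vanishes when the two matrices coincide and is otherwise positive (Klein's inequality), mirroring the role the scalar entropy plays in the conventional maximum entropy method.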
2010-04-01
... 21 Food and Drugs 9 2010-04-01 2010-04-01 false Principal. 1404.995 Section 1404.995 Food and...) Definitions § 1404.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...
2010-07-01
... 34 Education 1 2010-07-01 2010-07-01 false Principal. 85.995 Section 85.995 Education Office of...) Definitions § 85.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory responsibilities related to a...
Principals: Learn P.R. Survival Skills.
Reep, Beverly B.
1988-01-01
School building level public relations depends on the principal or vice principal. Strategies designed to enhance school public relations programs include linking school and community, working with the press, and keeping morale high inside the school. (MLF)
Elsyad, Moustafa Abdou; Mostafa, Aisha Zakaria
2018-01-01
This crossover study aimed to evaluate the effect of telescopic distal extension removable partial dentures on oral health related quality of life and maximum bite force. MATERIALS AND METHODS: Twenty patients with complete maxillary edentulism and partially edentulous mandibles with only the anterior teeth remaining were selected for this crossover study. All patients received complete maxillary dentures and a mandibular partial removable dental prosthesis (PRDP, control). After 3 months of adaptation, the PRDP was replaced with conventional telescopic partial dentures (TPD) or telescopic partial dentures with cantilevered extensions (TCPD) in a quasi-random method. Oral health related quality of life (OHRQoL) was measured using the OHIP-14 questionnaire, and maximum bite force (MBF) was measured using a bite force transducer. Measurements were performed 3 months after using each of the following prostheses: PRDP, TPD, and TCPD. TCPD showed the lowest OHIP-14 scores (i.e., the highest patient satisfaction with their OHRQoL), followed by TPD, and PRDP showed the highest OHIP-14 scores (i.e., the lowest patient satisfaction with OHRQoL). TCPD showed the highest MBF (70.7 ± 3.71), followed by TPD (57.4 ± 3.43), and the lowest MBF (40.2 ± 2.20) was noted with PRDP. Within the limitations of this study, mandibular telescopic distal extension removable partial dentures with cantilevered extensions were associated with improved oral health related quality of life and maximum bite force compared to telescopic or conventional PRDP. Telescopic distal extension removable prostheses are an esthetic restoration in partially edentulous patients with a free-end saddle. This article describes the addition of cantilevered extensions to this prosthesis. The results showed that telescopic distal extension removable prostheses with cantilevered extensions were associated with improved oral health related quality of life and maximum bite force compared to telescopic or conventional RPDs.
Ngeow, Chow-Choong [Graduate Institute of Astronomy, National Central University, Jhongli 32001, Taiwan (China); Kanbur, Shashi M.; Schrecengost, Zachariah [Department of Physics, SUNY Oswego, Oswego, NY 13126 (United States); Bhardwaj, Anupam; Singh, Harinder P. [Department of Physics and Astrophysics, University of Delhi, Delhi 110007 (India)
2017-01-10
Investigation of period-color (PC) and amplitude-color (AC) relations at maximum and minimum light can be used to probe the interaction of the hydrogen ionization front (HIF) with the photosphere and the radiation hydrodynamics of the outer envelopes of Cepheids and RR Lyraes. For example, theoretical calculations indicated that such interactions would occur at minimum light for RR Lyrae and result in a flatter PC relation. In the past, the PC and AC relations have been investigated by using either the (V − R)_MACHO or (V − I) colors. In this work, we extend previous work to other bands by analyzing the RR Lyraes in the Sloan Digital Sky Survey Stripe 82 Region. Multi-epoch data are available for RR Lyraes located within the footprint of the Stripe 82 Region in five (ugriz) bands. We present the PC and AC relations at maximum and minimum light in four colors: (u − g)_0, (g − r)_0, (r − i)_0, and (i − z)_0, after they are corrected for extinction. We found that the PC and AC relations for this sample of RR Lyraes show a complex nature in the form of flat, linear or quadratic relations. Furthermore, the PC relations at minimum light for fundamental mode RR Lyrae stars are separated according to the Oosterhoff type, especially in the (g − r)_0 and (r − i)_0 colors. If only considering the results from linear regressions, our results are quantitatively consistent with the theory of HIF-photosphere interaction for both fundamental and first overtone RR Lyraes.
Hautala R.
2017-12-01
Research objectives: The author of the present article examines the overall response of Russian medieval scribes to the ascension to power of Uzbek Khan, the Golden Horde's Muslim ruler who made a major effort to spread Islam in the Jochid Empire. Analyzing Russian sources, the author tries to answer the question of the extent to which we can trust the reliability of their information about the impact of Uzbek's religious affiliation on the anticipated change in his relations with the Russian principalities. Research materials: Russian sources are of paramount importance for the study of the Golden Horde's history. On the one hand, Russian chronicles contain a wealth of relevant factual material. The abundance of this material can be explained by the fact that medieval Rus' was subordinated to the Golden Horde, although its numerous and disjointed princes enjoyed considerable autonomy within the Jochid Empire. On the other hand, the accuracy of the Russian chronicles' information should not be overestimated, for several reasons. The preserved chronicle collections were often composed several centuries after the described events; their information therefore underwent the influence of significant ideological changes. In addition, the authors of Russian chronicles focused on describing only those events that were directly related to the Russian principalities and their rulers. The novelty of this study emerges from a comparison of the Russian chronicles' content with information found in little-known written sources. In particular, Latin sources compiled within the ulus of Jochi in relatively large numbers during exactly the period under study compensate to some extent for the complete absence of Jochid written sources. In this case, the content of the Latin sources will allow us to reconsider the established opinion about the total Islamization of the ulus of Jochi during Uzbek's reign. Research results: The use of
Borgbjerg, Jens; Bøgsted, Martin; Lindholt, Jes S
2018-01-01
Objectives: Controversy exists regarding optimal caliper placement in ultrasound assessment of maximum abdominal aortic diameter. This study aimed primarily to determine reproducibility of caliper placement in relation to the aortic wall with the three principal methods: leading to leading edge...
Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee
2015-01-01
In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
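The constrained convex program above (mixture weights that are nonnegative and sum to one) can be sketched with synthetic "intensity" curves and projected gradient descent on the probability simplex. This is a generic solver sketch under a Gaussian-noise likelihood, not the authors' implementation, and all data below are made up:

```python
import numpy as np

def fit_abundances(I_mix, basis, iters=5000, lr=0.01):
    """Estimate nonnegative mixture weights (summing to 1) so that
    basis @ w best matches a measured intensity, by projected gradient
    descent on the least-squares objective (ML under Gaussian noise)."""
    def project(v):                      # Euclidean projection onto the simplex
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
        return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

    w = np.full(basis.shape[1], 1.0 / basis.shape[1])
    for _ in range(iters):
        grad = basis.T @ (basis @ w - I_mix)
        w = project(w - lr * grad)
    return w

rng = np.random.default_rng(5)
basis = np.abs(rng.normal(size=(50, 3)))         # synthetic SAXS-like profiles
w_true = np.array([0.6, 0.3, 0.1])
I_mix = basis @ w_true + 0.001 * rng.normal(size=50)
w_hat = fit_abundances(I_mix, basis)
```

The simplex constraint is what makes the recovered coefficients interpretable as relative abundances of the conformations in the mixture.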
Beresford, B; Gibson, F; Bayliss, J; Mukherjee, S
2018-03-01
Growing evidence of the association between health professionals' well-being and patient and organisational outcomes points to the need for effective staff support. This paper reports a brief survey of the UK's children's cancer Principal Treatment Centres (PTCs) regarding staff support systems and practices. A short online questionnaire, administered in 2012-2013, collected information about the availability of staff support interventions which seek to prevent work-related stress among different members of the multi-disciplinary team (MDT). It was completed by a member of staff with, where required, assistance from colleagues. All PTCs (n = 19) participated. Debriefing following a patient death was the most frequently reported staff support practice. Support groups were infrequently mentioned. There was wide variability between PTCs, and between professional groups, regarding the number and type of interventions available. Doctors appear to be least likely to have access to support. A few Centres routinely addressed work-related stress in wider staff management strategies. Two Centres had developed a bespoke intervention. Very few Centres were reported to actively raise awareness of support available from their hospital's Occupational Health department. A minority of PTCs had expert input regarding staff support from clinical psychology/liaison psychiatry. © 2016 The Authors. European Journal of Cancer Care Published by John Wiley & Sons Ltd.
Zhao, Yanyan; Liu, Liyan; Han, Yuanyuan; Li, Yueqiu; Wang, Yan; Shi, Minjian
2013-09-01
An analytical method for the simultaneous determination of 18α-glycyrrhizic acid, 18β-glycyrrhizinic acid and related substances A and B for the drug quality standard by reversed-phase high performance liquid chromatography (RP-HPLC) was established. The assay was carried out on a Durashell-C18 column (250 mm × 4.6 mm, 5 μm) with 10 mmol/L ammonium perchlorate (the pH value was adjusted to 8.20 with ammonia)-methanol (48:52, v/v) as the mobile phase at a flow rate of 0.80 mL/min, and the detection wavelength was set at 254 nm. The column temperature was 50 °C and the injection volume was 10 μL. Under the separation conditions, the calibration curves of the analytes showed good linearity within the mass concentration range of 0.50-100 mg/L (r > 0.9999). The detection limits for 18α-glycyrrhizic acid, 18β-glycyrrhizinic acid and related substances A and B were 0.15, 0.10, 0.10 and 0.15 mg/L, respectively. The average recoveries were between 97.32% and 99.33% (n = 3) with relative standard deviations (RSDs) between 0.05% and 1.06%. The method is sensitive and reproducible, and the results are accurate and reliable. It can be used for the determination of the principal components and related substances of ammonium glycyrrhizinate for the quality control of the raw material drug of ammonium glycyrrhizinate.
Principal-Counselor Collaboration and School Climate
Rock, Wendy D.; Remley, Theodore P.; Range, Lillian M.
2017-01-01
Examining whether principal-counselor collaboration and school climate were related, researchers sent 4,193 surveys to high school counselors in the United States and received 419 responses. As principal-counselor collaboration increased, there were increases in counselors viewing the principal as supportive, the teachers as regarding one another…
Ferguson, J H
1942-03-20
By means of a novel adaptation of the Evelyn photoelectric colorimeter to the measurement of relative turbidities, the question of the flocculation maximum (F.M.) in acetate buffer solutions of varying pH and salt content has been studied on (a) an exceptionally stable prothrombin-free fibrinogen and its solutions after incipient thermal denaturation and incomplete tryptic proteolysis, (b) plasma, similarly treated, (c) prothrombin, thrombin, and (brain) thromboplastin solutions. All the fibrinogens show a remarkable uniformity of the precipitation pattern, viz. F.M. at pH 4.7 (±0.2) in salt-containing buffer solutions and at pH 5.3 (±0.2) in salt-poor buffer (N/100 acetate). The latter approximates the isoelectric point (5.4) obtained by cataphoresis (14). There is no evidence that denaturation or digestion can produce any "second maximum." The data support the view that fibrin formation (under the specific influence of thrombin) is intrinsically unrelated to denaturation and digestion phenomena, although all three can proceed simultaneously in crude materials. A criticism is offered, therefore, of Wöhlisch's blood clotting theory. Further applications of the photoelectric colorimeter to coagulation problems are suggested, including kinetic study of fibrin formation and the assay of fibrinogen, with a possible sensitivity of 7.5 mg. protein in 100 cc. solution.
Redesigning Principal Internships: Practicing Principals' Perspectives
Anast-May, Linda; Buckner, Barbara; Geer, Gregory
2011-01-01
Internship programs too often do not provide the types of experiences that effectively bridge the gap between theory and practice and prepare school leaders who are capable of leading and transforming schools. To help address this problem, the current study is directed at providing insight into practicing principals' views of the types of…
Mustufa Haider Abidi
2017-11-01
Shape memory alloys (SMAs) are advanced engineering materials which possess shape memory effects and super-elastic properties. Their high strength, high wear resistance, pseudo-plasticity, etc., make the machining of Ni-Ti based SMAs difficult using traditional techniques. Among all non-conventional processes, micro-electric discharge machining (micro-EDM) is considered one of the leading processes for micro-machining, owing to its high aspect ratio and capability to machine hard-to-cut materials with good surface finish. The selection of the most appropriate input parameter combination to provide the optimum values for various responses is very important in micro-EDM. This article demonstrates the methodology for optimizing multiple quality characteristics (overcut, taper angle and surface roughness) to enhance the quality of micro-holes in Ni-Ti based alloy, using the Grey-Taguchi method. A Taguchi-based grey relational analysis coupled with principal component analysis (Grey-PCA) methodology was implemented to investigate the effect of three important micro-EDM process parameters, namely capacitance, voltage and electrode material. The analysis of the individual responses established the importance of multi-response optimization. The main effects plots for the micro-EDM parameters and analysis of variance (ANOVA) indicate that each parameter does not produce the same effect on individual responses, and also that the percent contribution of each parameter to individual responses is highly varied. As a result, multi-response optimization was implemented using Grey-PCA. Further, this study revealed that the electrode material had the strongest effect on the multi-response parameter, followed by the voltage and capacitance. The main effects plot for the Grey-PCA shows that the micro-EDM parameters capacitance at level 2 (i.e., 475 pF), discharge voltage at level 1 (i.e., 80 V) and the electrode material Cu provided the best multi-response.
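The grey relational step described above can be sketched in a few lines. The response values below are invented for illustration, and equal weighting is used where the article derives weights via PCA:

```python
import numpy as np

# Hypothetical micro-EDM responses for four parameter settings (rows):
# overcut (um), taper angle (deg), surface roughness (um); all smaller-is-better.
X = np.array([[12.0, 1.8, 0.42],
              [ 9.5, 2.1, 0.38],
              [11.0, 1.5, 0.45],
              [ 8.0, 1.9, 0.35]])

# 1) Grey relational generation: normalise smaller-is-better responses to [0, 1].
norm = (X.max(axis=0) - X) / (X.max(axis=0) - X.min(axis=0))

# 2) Grey relational coefficients against the ideal (all-ones) sequence.
delta = np.abs(1.0 - norm)
zeta = 0.5                                    # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# 3) Grey relational grade; equal weights here (the article weights via PCA instead).
grade = grc.mean(axis=1)
print(grade.round(3), "best setting:", int(np.argmax(grade)))
```

The setting with the highest grade is the preferred multi-response compromise; replacing the equal weights with PCA-derived weights yields the Grey-PCA variant.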
Principal Ports and Facilities
California Natural Resource Agency — The Principal Port file contains USACE port codes, geographic locations (longitude, latitude), names, and commodity tonnage summaries (total tons, domestic, foreign,...
The Principal and the Law. Elementary Principal Series No. 7.
Doverspike, David E.; Cone, W. Henry
Developments over the past 25 years in school-related legal issues in elementary schools have significantly changed the principal's role. In 1975, a decision of the U.S. Supreme Court established three due-process guidelines for short-term suspension. The decision requires student notification of charges, explanation of evidence, and an informal…
Principal minors and rhombus tilings
Kenyon, Richard; Pemantle, Robin
2014-01-01
The algebraic relations between the principal minors of a generic n × n matrix are somewhat mysterious, see e.g. Lin and Sturmfels (2009 J. Algebra 322 4121-31). We show, however, that by adding in certain almost principal minors, the ideal of relations is generated by translations of a single relation, the so-called hexahedron relation, which is a composition of six cluster mutations. We give in particular a Laurent-polynomial parameterization of the space of n × n matrices, whose parameters consist of certain principal and almost principal minors. The parameters naturally live on vertices and faces of the tiles in a rhombus tiling of a convex 2n-gon. A matrix is associated to an equivalence class of tilings, all related to each other by Yang-Baxter-like transformations. By specializing the initial data we can similarly parameterize the space of Hermitian symmetric matrices over R, C or H (the quaternions). Moreover, by further specialization we can parameterize the space of positive definite matrices over these rings. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical devoted to 'Cluster algebras and mathematical physics'.
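For a concrete sense of the objects involved, here is a small sketch that enumerates all principal minors det(M[S, S]) of a matrix; the 3×3 example is hypothetical and only illustrates the definition, not the paper's algebraic relations:

```python
import numpy as np
from itertools import combinations

def principal_minors(M):
    """All principal minors det(M[S, S]), keyed by the row/column index set S."""
    n = M.shape[0]
    return {S: np.linalg.det(M[np.ix_(S, S)])
            for k in range(1, n + 1)
            for S in combinations(range(n), k)}

# A hypothetical symmetric positive definite 3x3 example.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
pm = principal_minors(M)
print(pm[(0,)], pm[(0, 1)], pm[(0, 1, 2)])   # a 1x1, a 2x2 and the full determinant
```

An n × n matrix has 2^n − 1 principal minors (here 7), which is why the relations among them for generic matrices are nontrivial.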
Principals' Perceptions of Politics
Tooms, Autumn K.; Kretovics, Mark A.; Smialek, Charles A.
2007-01-01
This study is an effort to examine principals' perceptions of workplace politics and its influence on their productivity and efficacy. A survey was used to explore the perceptions of current school administrators with regard to workplace politics. The instrument was disseminated to principals serving public schools in one Midwestern state in the…
Renewing the Principal Pipeline
Turnbull, Brenda J.
2015-01-01
The work principals do has always mattered, but as the demands of the job increase, it matters even more. Perhaps once they could maintain safety and order and call it a day, but no longer. Successful principals today must also lead instruction and nurture a productive learning community for students, teachers, and staff. They set the tone for the…
Principal bundles the classical case
Sontz, Stephen Bruce
2015-01-01
This introductory graduate level text provides a relatively quick path to a special topic in classical differential geometry: principal bundles. While the topic of principal bundles in differential geometry has become classic, even standard, material in the modern graduate mathematics curriculum, the unique approach taken in this text presents the material in a way that is intuitive for both students of mathematics and of physics. The goal of this book is to present important, modern geometric ideas in a form readily accessible to students and researchers in both the physics and mathematics communities, providing each with an understanding and appreciation of the language and ideas of the other.
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
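As background to the MP criterion discussed above: for a single character on a fixed binary tree, the minimum number of state changes is computed exactly by Fitch's classical algorithm. The sketch below shows that routine on a made-up five-taxon tree; it is standard background, not the paper's Steiner-tree approximation:

```python
def fitch(tree, leaf_states):
    """Fitch small parsimony: the minimum number of state changes needed to explain
    the leaf states of one character on a fixed binary tree (nested-tuple topology)."""
    changes = 0
    def post(node):                       # post-order pass computing candidate state sets
        nonlocal changes
        if isinstance(node, str):
            return {leaf_states[node]}
        left, right = node
        a, b = post(left), post(right)
        if a & b:
            return a & b                  # intersection: no extra substitution needed
        changes += 1                      # disjoint sets: one substitution, take the union
        return a | b
    post(tree)
    return changes

# A made-up five-taxon tree and character states.
tree = ((("h", "c"), "g"), ("m", "r"))
states = {"h": "A", "c": "A", "g": "G", "m": "G", "r": "A"}
print(fitch(tree, states))
```

The hard part of MP, which the paper approximates, is the search over tree topologies and internal labelings, not this per-tree score.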
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed.
Marija Bodroža-Solarov
2011-01-01
Quality parameters of several wheat grain lots (low vitreous and high vitreous grains, non-infested and infested with rice weevils, Sitophilus oryzae L.) treated with inert dusts (natural zeolite, two diatomaceous earths originating from Serbia and a commercial product Protect-It®) were investigated. Principal component analysis (PCA) was used to investigate the classification of treated grain lots and to assess how attributes of technological quality contribute to this classification. This research showed that vitreousness (0.95) and test weight (0.93) contributed most to the first principal component, whereas extensigraph area (-0.76) contributed to the second component. The first component accounted for around 55% of the total variability and the second for around 18%, which means that those two dimensions together account for around 70% of the total variability of the observed set of variables. Principal component analysis (PCA) of the data set was able to distinguish among the various treatments of wheat lots. It was revealed that inert dust treatments produce different effects depending on the degree of endosperm vitreousness.
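The loadings and explained-variance figures quoted above come from a standard PCA, which can be sketched via the SVD of the centred data matrix. The attribute matrix below is random stand-in data, not the wheat measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical lots-by-attributes matrix (12 grain lots, 5 quality attributes);
# the numbers are random stand-ins, not the data from the study.
X = rng.normal(size=(12, 5))
Xc = X - X.mean(axis=0)                      # centre each attribute

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)              # fraction of total variability per component
scores = Xc @ Vt.T                           # lot coordinates in PC space
loadings = Vt                                # attribute contributions to each component

print("variance explained:", explained.round(2))
```

The rows of `loadings` play the role of the attribute contributions (cf. the 0.95 and 0.93 above), and the cumulative sum of `explained` gives the "around 70% with two components" style of statement.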
Wood, Lesley; Webb, Paul
2008-05-01
Despite various HIV and AIDS training programmes offered for educators by the South African Department of Education, little has been achieved at the level of management in terms of creating a wider understanding of the social and cultural complexities of the condition and its impact on the quality of teaching and learning. Specifically, there is a lack of developmental programmes to help school principals provide leadership that can ensure that teachers and children who live in a context affected by the disease will still find themselves in a school environment of quality, care and compassion. With this in mind, we conducted a qualitative research enquiry among a sample of 12 school principals in the Eastern Cape Province in order to discover their perceptions about the impacts of HIV and AIDS on their schools and to learn how they have responded to the corresponding challenges. Our intention was to use the findings primarily to inform the development of an academic programme and short courses to empower school principals and leadership in this regard, but the findings may also be relevant as a guide for research on a larger scale.
Kaufmann, Tobias; Kübler, Andrea
2014-10-01
Objective. The speed of brain-computer interfaces (BCI), based on event-related potentials (ERP), is inherently limited by the commonly used one-stimulus paradigm. In this paper, we introduce a novel paradigm that can increase the spelling speed by a factor of 2, thereby extending the one-stimulus paradigm to a two-stimulus paradigm. Two different stimuli (a face and a symbol) are presented at the same time, superimposed on different characters and ERPs are classified using a multi-class classifier. Here, we present the proof-of-principle that is achieved with healthy participants. Approach. Eight participants were confronted with the novel two-stimulus paradigm and, for comparison, with two one-stimulus paradigms that used either one of the stimuli. Classification accuracies (percentage of correctly predicted letters) and elicited ERPs from the three paradigms were compared in a comprehensive offline analysis. Main results. The accuracies slightly decreased with the novel system compared to the established one-stimulus face paradigm. However, the use of two stimuli allowed for spelling at twice the maximum speed of the one-stimulus paradigms, and participants still achieved an average accuracy of 81.25%. This study introduced an alternative way of increasing the spelling speed in ERP-BCIs and illustrated that ERP-BCIs may not yet have reached their speed limit. Future research is needed in order to improve the reliability of the novel approach, as some participants displayed reduced accuracies. Furthermore, a comparison to the most recent BCI systems with individually adjusted, rapid stimulus timing is needed to draw conclusions about the practical relevance of the proposed paradigm. Significance. We introduced a novel two-stimulus paradigm that might be of high value for users who have reached the speed limit with the current one-stimulus ERP-BCI systems.
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over…
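For the Mean Energy Model mentioned above, the maximum entropy distribution under a mean-energy constraint is the Gibbs distribution p_i ∝ exp(−βE_i). A minimal sketch that finds β by bisection; the energy levels and target mean are invented for illustration:

```python
import numpy as np

def maxent_mean_energy(E, target, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum entropy distribution on a finite alphabet subject to the moment
    constraint sum_i p_i E_i = target; the maximizer is the Gibbs distribution
    p_i ∝ exp(-beta E_i), with beta found by bisection (mean energy is
    monotone decreasing in beta)."""
    def mean_energy(beta):
        w = np.exp(-beta * (E - E.min()))         # shift for numerical stability
        p = w / w.sum()
        return p @ E
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_energy(mid) > target:
            lo = mid                              # need a larger beta to lower the mean
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = np.exp(-beta * (E - E.min()))
    return w / w.sum(), beta

# Invented four-level "energy" function and target mean energy.
E = np.array([0.0, 1.0, 2.0, 3.0])
p, beta = maxent_mean_energy(E, target=1.0)
print(p.round(4), round(beta, 4))
```

Since the target mean (1.0) lies below the uniform-distribution mean (1.5), the solver returns a positive β; the game-theoretic view in the paper recovers the same distribution as the optimal coding strategy.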
Multiscale principal component analysis
Akinduko, A A; Gorban, A N
2014-01-01
Principal component analysis (PCA) is an important tool in exploring data. The conventional approach to PCA leads to a solution which favours the structures with large variances. This is sensitive to outliers and could obfuscate interesting underlying structures. One of the equivalent definitions of PCA is that it seeks the subspaces that maximize the sum of squared pairwise distances between data projections. This definition opens up more flexibility in the analysis of principal components which is useful in enhancing PCA. In this paper we introduce scales into PCA by maximizing only the sum of pairwise distances between projections for pairs of datapoints with distances within a chosen interval of values [l,u]. The resulting principal component decompositions in Multiscale PCA depend on point (l,u) on the plane and for each point we define projectors onto principal components. Cluster analysis of these projectors reveals the structures in the data at various scales. Each structure is described by the eigenvectors at the medoid point of the cluster which represent the structure. We also use the distortion of projections as a criterion for choosing an appropriate scale especially for data with outliers. This method was tested on both artificial distribution of data and real data. For data with multiscale structures, the method was able to reveal the different structures of the data and also to reduce the effect of outliers in the principal component analysis
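The pairwise-distance definition of PCA quoted above can be checked numerically: for centred data, the leading principal direction maximizes the sum of squared pairwise distances between the 1-D projections. The check below uses synthetic data and the unrestricted objective, i.e. without the [l,u] interval restriction that defines Multiscale PCA:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic data with one dominant direction of variance.
X = rng.normal(size=(50, 4)) @ np.diag([3.0, 1.0, 0.5, 0.2])
Xc = X - X.mean(axis=0)

def pairwise_obj(v):
    """Sum of squared pairwise distances between the 1-D projections onto v."""
    t = Xc @ (v / np.linalg.norm(v))
    return np.sum((t[:, None] - t[None, :]) ** 2)

# Leading principal direction = top right singular vector of the centred data.
v_pca = np.linalg.svd(Xc, full_matrices=False)[2][0]

# No random direction should beat the PCA direction on this objective.
rand_vals = [pairwise_obj(rng.normal(size=4)) for _ in range(200)]
print(pairwise_obj(v_pca) >= max(rand_vals))
```

The equivalence holds because, for centred projections, the sum of squared pairwise distances equals 2n times the projected variance; Multiscale PCA modifies exactly this objective by keeping only pairs whose distance falls in [l,u].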
Negligence--When Is the Principal Liable? A Legal Memorandum.
Stern, Ralph D., Ed.
Negligence, a tort liability, is defined, discussed, and reviewed in relation to several court decisions involving school principals. The history of liability suits against school principals suggests that a reasonable, prudent principal can avoid legal problems. Ten guidelines are presented to assist principals in avoiding charges of negligence.…
Principal noncommutative torus bundles
Echterhoff, Siegfried; Nest, Ryszard; Oyono-Oyono, Herve
2008-01-01
of bivariant K-theory (denoted RKK-theory) due to Kasparov. Using earlier results of Echterhoff and Williams, we shall give a complete classification of principal non-commutative torus bundles up to equivariant Morita equivalence. We then study these bundles as topological fibrations (forgetting the group...
Hollar, Charlie
2004-01-01
They may never grace the pages of The Wall Street Journal or Fortune magazine, but they might possibly be the most important CEOs in our country. They are elementary school principals. Each of them typically serves the learning needs of 350-400 clients (students) while overseeing a multimillion-dollar facility staffed by 20-25 teachers and 10-15…
Euler principal component analysis
Liwicki, Stephan; Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja
Principal Component Analysis (PCA) is perhaps the most prominent learning tool for dimensionality reduction in pattern recognition and computer vision. However, the ℓ 2-norm employed by standard PCA is not robust to outliers. In this paper, we propose a kernel PCA method for fast and robust PCA,
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, and costs less than recording and analysis of data in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.
Haid, Thomas H.; Doix, Aude-Clémence M.; Nigg, Benno M.; Federolf, Peter A.
2018-01-01
Optimal feedback control theory suggests that control of movement is focused on movement dimensions that are important for the task's success. The current study tested the hypotheses that age effects would emerge in the control of only specific movement components and that these components would be linked to the task relevance. Fifty healthy volunteers, 25 young and 25 older adults, performed an 80 s tandem stance while their postural movements were recorded using a standard motion capture system. The postural movements were decomposed by a principal component analysis into one-dimensional movement components, PMk, whose control was assessed through two variables, Nk and σk, which characterized the tightness and the regularity of the neuro-muscular control, respectively. The older volunteers showed less tight and more irregular control in PM2 (N2: −9.2%, p = 0.007; σ2: +14.3%, p = 0.017) but tighter control in PM8 and PM9 (N8: +4.7%, p = 0.020; N9: +2.5%, p = 0.043; σ9: −8.8%, p = 0.025). These results suggest that aging effects alter the postural control system not as a whole, but emerge in specific, task-relevant components. The findings of the current study thus support the hypothesis that the minimal intervention principle, as described in the context of optimal feedback control (OFC), may be relevant when assessing aging effects on postural control. PMID:29459826
Ochiai, Noriaki; Mizuno, Masayuki; Mimori, Norihiko; Miyake, Toshihiko; Dekeyser, Mark; Canlas, Liza Jara; Takeda, Makio
2007-01-01
Bifenazate is a novel carbazate acaricide discovered by Uniroyal Chemical (now Chemtura Corporation) for the control of phytophagous mites infesting agricultural and ornamental crops. Its acaricidal activity and that of its principal active metabolite, diazene, were characterized. Bifenazate and diazene had high toxicity and specificity both orally and topically to all life stages of Tetranychus urticae and Panonychus citri. Acute poisoning was observed with no temperature dependency. No cross-resistance was found to mites resistant to several other classes of acaricides, such as tebufenpyrad, etoxazole, fenbutatin oxide and dicofol. Bifenazate remained effective for a long time with only about a 10% loss of efficacy on T. urticae after 1 month of application in the field. All stages of development of the predatory mites, Phytoseiulus persimilis and Neoseiulus californicus, survived treatment by both bifenazate and diazene. When adult females of the two predatory mite species were treated with either bifenazate or diazene, they showed a normal level of fecundity and predatory activity in the laboratory, effectively suppressing spider mite population growth. Even when the predators were fed spider mite eggs that had been treated previously with bifenazate, they survived. These findings indicate that bifenazate is a very useful acaricide giving high efficacy, long-lasting activity and excellent selectivity for spider mites. It is, therefore, concluded that bifenazate is an ideal compound for controlling these pest mites.
Trattner, K. J.; Burch, J. L.; Ergun, R.; Eriksson, S.; Fuselier, S. A.; Giles, B. L.; Gomez, R. G.; Grimes, E. W.; Lewis, W. S.; Mauk, B.; Petrinec, S. M.; Russell, C. T.; Strangeway, R. J.; Trenchi, L.; Wilder, F. D.
2017-12-01
Several studies have validated the accuracy of the maximum magnetic shear model to predict the location of the reconnection site at the dayside magnetopause. These studies found agreement between model and observations for 74% to 88% of events examined. It should be noted that, of the anomalous events that failed the prediction of the model, 72% shared a very specific parameter range. These events occurred around equinox for an interplanetary magnetic field (IMF) clock angle of about 240°. This study investigates if this remarkable grouping of events is also present in data from the recently launched MMS. The MMS magnetopause encounter database from the first dayside phase of the mission includes about 4,500 full and partial magnetopause crossings and flux transfer events. We use the known reconnection line signature of switching accelerated ion beams in the magnetopause boundary layer to identify encounters with the reconnection region and identify 302 events during phase 1a when the spacecraft are at reconnection sites. These confirmed reconnection locations are compared with the predicted location from the maximum magnetic shear model and revealed an 80% agreement. The study also revealed the existence of anomalous cases as mentioned in an earlier study. The anomalies are concentrated for times around the equinoxes together with IMF clock angles around 140° and 240°. Another group of anomalies for the same clock angle ranges was found during December events.
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
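The definition can be made concrete: with the Pearson correlation as projection index, the maximum association is the first canonical correlation, computable as the largest singular value of the whitened cross-covariance matrix. A sketch on synthetic data with a shared latent factor (classical CCA, not the robust estimators the paper proposes):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
z = rng.normal(size=n)                         # shared latent factor
X = np.c_[z + 0.5 * rng.normal(size=n), rng.normal(size=n)]
Y = np.c_[rng.normal(size=n), z + 0.5 * rng.normal(size=n)]

# First canonical correlation = largest singular value of the whitened
# cross-covariance matrix inv(Lx) Sxy inv(Ly)^T, where Sxx = Lx Lx^T etc.
Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
Sxx, Syy, Sxy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n
Lx, Ly = np.linalg.cholesky(Sxx), np.linalg.cholesky(Syy)
K = np.linalg.inv(Lx) @ Sxy @ np.linalg.inv(Ly).T
rho = np.linalg.svd(K, compute_uv=False)[0]
print(round(rho, 2))
```

Because the Pearson correlation is sensitive to outliers, the paper's robust maximum association estimators replace this projection index with robust alternatives.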
Visible Leading: Principal Academy Connects and Empowers Principals
Hindman, Jennifer; Rozzelle, Jan; Ball, Rachel; Fahey, John
2015-01-01
The School-University Research Network (SURN) Principal Academy at the College of William & Mary in Williamsburg, Virginia, has a mission to build a leadership development program that increases principals' instructional knowledge and develops mentor principals to sustain the program. The academy is designed to connect and empower principals…
Kasse, C.
2002-01-01
Periglacial aeolian sand sheets and dunes of the last glacial cover extensive areas of northwest and central Europe. Four sedimentary facies have been described that could be related to fluvio-aeolian and cryogenic processes, moisture content of the depositional surface and surface morphology.
Principals' Salaries, 2007-2008
Cooke, Willa D.; Licciardi, Chris
2008-01-01
How do salaries of elementary and middle school principals compare with those of other administrators and classroom teachers? Are increases in salaries of principals keeping pace with increases in salaries of classroom teachers? And how have principals' salaries fared over the years when the cost of living is taken into account? There are reliable…
Principals Who Think Like Teachers
Fahey, Kevin
2013-01-01
Being a principal is a complex job, requiring quick, on-the-job learning. But many principals already have deep experience in a role at the very essence of the principalship. They know how to teach. In interviews with principals, Fahey and his colleagues learned that thinking like a teacher was key to their work. Part of thinking the way a teacher…
School Principals' Emotional Coping Process
Poirel, Emmanuel; Yvon, Frédéric
2014-01-01
The present study examines the emotional coping of school principals in Quebec. Emotional coping was measured by stimulated recall; six principals were filmed during a working day and presented a week later with their video showing stressful encounters. The results show that school principals experience anger because of reproaches from staff…
RE Rooted in Principal's Biography
ter Avest, Ina; Bakker, C.
2017-01-01
Critical incidents in the biography of principals appear to be steering in their innovative way of constructing InterReligious Education in their schools. In this contribution, the authors present the biographical narratives of 4 principals: 1 principal introducing interreligious education in a
The Future of Principal Evaluation
Clifford, Matthew; Ross, Steven
2012-01-01
The need to improve the quality of principal evaluation systems is long overdue. Although states and districts generally require principal evaluations, research and experience tell that many state and district evaluations do not reflect current standards and practices for principals, and that evaluation is not systematically administered. When…
Taking a Distributed Perspective to the School Principal's Workday
Spillane, James P.; Camburn, Eric M.; Pareja, Amber Stitziel
2007-01-01
Focusing on the school principal's day-to-day work, we examine who leads curriculum and instruction- and administration-related activities when the school principal is not leading but participating in the activity. We also explore the prevalence of coperformance of management and leadership activities in the school principal's workday. Looking…
Grison, B.; Bocchialini, K.; Menvielle, M.; Chambodut, A.; Cornilleau-Wehrlin, N.; Fontaine, D.; Marchaudon, A.; Pick, M.; Pitout, F.; Schmieder, B.; Regnier, S.; Zouganelis, Y.
2017-12-01
Taking the 32 sudden storm commencements (SSC) listed by the Observatori de l'Ebre / ISGI over the year 2002 (maximal solar activity) as a starting point, we performed a statistical analysis of the related solar sources, solar wind signatures, and terrestrial responses. For each event, we characterized and identified, as far as possible, (i) the sources on the Sun (coronal mass ejections, CME), with the help of a series of criteria detailed hereafter (velocities, drag coefficient, radio waves, polarity), as well as (ii) the structure and properties in the interplanetary medium, at L1, of the event associated with the SSC: magnetic clouds (MC), non-MC interplanetary coronal mass ejections (ICME), co-rotating/stream interaction regions (SIR/CIR), shocks only, and unclear events that we call "miscellaneous" events. The categorization of the events at L1 is based on published catalogues. For each potential CME/L1-event association we compare the velocity observed at L1 with the one observed at the Sun and the estimated ballistic velocity. Observations of radio emissions (Type II, Type IV, detected from the ground and/or by WIND) associated with the CMEs make the solar source more probable. We also compare the polarity of the magnetic clouds with the hemisphere of the solar source. The drag coefficient (estimated with the drag-based model) is calculated for each potential association and compared to the expected range of values. We identified a solar source for 26 SSC-related events. 12 of these 26 associations match all criteria. We finally discuss the difficulty of performing such associations.
Principal stratification in causal inference.
Frangakis, Constantine E; Rubin, Donald B
2002-03-01
Many scientific problems require that treatment comparisons be adjusted for posttreatment variables, but the estimands underlying standard methods are not causal effects. To address this deficiency, we propose a general framework for comparing treatments adjusting for posttreatment variables that yields principal effects based on principal stratification. Principal stratification with respect to a posttreatment variable is a cross-classification of subjects defined by the joint potential values of that posttreatment variable under each of the treatments being compared. Principal effects are causal effects within a principal stratum. The key property of principal strata is that they are not affected by treatment assignment and therefore can be used just as any pretreatment covariate, such as age category. As a result, the central property of our principal effects is that they are always causal effects and do not suffer from the complications of standard posttreatment-adjusted estimands. We discuss briefly that such principal causal effects are the link between three recent applications with adjustment for posttreatment variables: (i) treatment noncompliance, (ii) missing outcomes (dropout) following treatment noncompliance, and (iii) censoring by death. We then attack the problem of surrogate or biomarker endpoints, where we show, using principal causal effects, that all current definitions of surrogacy, even when perfectly true, do not generally have the desired interpretation as causal effects of treatment on outcome. We go on to formulate estimands based on principal stratification and principal causal effects and show their superiority.
Ferrari, Jérôme
2015-01-01
Fascinated by the figure of the German physicist Werner Heisenberg (1901-1976), founder of quantum mechanics, inventor of the famous "uncertainty principle" and winner of the 1932 Nobel Prize in Physics, a disenchanted young aspiring philosopher strives, at the dawn of the 21st century, to measure the incompleteness of his own existence against the work and destiny of this exceptional man of science, who for him embodies the meeting of scientific language and poetry; each, in its own way, by opening the path to the scandal of the unprecedented, opens our eyes to the world and reveals the mysterious beauty that the materialism at work in human history never ceases to confiscate.
Principal oscillation patterns
Storch, H. von; Buerger, G.; Storch, J.S. von
1993-01-01
The Principal Oscillation Pattern (POP) analysis is a technique which is used to simultaneously infer the characteristic patterns and time scales of a vector time series. The POPs may be seen as the normal modes of a linearized system whose system matrix is estimated from data. The concept of POP analysis is reviewed. Examples are used to illustrate the potential of the POP technique. The best defined POPs of tropospheric day-to-day variability coincide with the most unstable modes derived from linearized theory. POPs can be derived even from a space-time subset of data. POPs are successful in identifying two independent modes with similar time scales in the same data set. The POP method can also produce forecasts which may potentially be used as a reference for other forecast models. The conventional POP analysis technique has been generalized in various ways. In the cyclostationary POP analysis, the estimated system matrix is allowed to vary deterministically with an externally forced cycle. In the complex POP analysis not only the state of the system but also its "momentum" is modeled. Associated correlation patterns are a useful tool to describe the appearance of a signal previously identified by a POP analysis in other parameters. (orig.)
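The POP estimation step described in the abstract, fitting the system matrix of a linearized system from data and taking its eigenvectors as the patterns, can be sketched in a few lines. The AR(1) test process and all numbers below are illustrative assumptions, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 2-D AR(1) process x_{t+1} = A x_t + noise with a known
# damped-rotation system matrix (hypothetical example data).
theta, damp = 0.3, 0.95
A_true = damp * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
x = np.zeros((5000, 2))
for t in range(1, 5000):
    x[t] = A_true @ x[t - 1] + rng.normal(scale=0.1, size=2)

# POP analysis: estimate the system matrix from the lag-0 and lag-1
# covariance matrices, A = C1 @ inv(C0); its eigenvectors are the POPs.
x = x - x.mean(axis=0)
C0 = x[:-1].T @ x[:-1] / (len(x) - 1)
C1 = x[1:].T @ x[:-1] / (len(x) - 1)
A_hat = C1 @ np.linalg.inv(C0)

eigvals, pops = np.linalg.eig(A_hat)
# A complex eigenvalue lambda = |lambda| e^{i omega} gives each POP an
# e-folding time -1/log|lambda| and an oscillation period 2 pi / omega.
period = 2 * np.pi / np.abs(np.angle(eigvals[0]))
```

With enough data the estimated matrix recovers the generating dynamics, and the eigenvalue phase recovers the rotation period 2π/θ.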
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.
Probable maximum flood control
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.
Introduction to maximum entropy
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
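The core MaxEnt selection step, choosing the distribution of maximum entropy subject to data constraints, can be illustrated on Jaynes' classic loaded-die example: the constrained solution has exponential-family form, with a single multiplier solved from the moment constraint. The target mean below is an arbitrary assumption for illustration:

```python
import numpy as np
from scipy.optimize import brentq

# Maximum-entropy distribution over die faces 1..6 subject to a mean
# constraint: p_i proportional to exp(lam * i), with lam chosen so that
# the constraint E[x] = target_mean is satisfied.
faces = np.arange(1, 7)
target_mean = 4.5

def mean_residual(lam):
    w = np.exp(lam * faces)
    p = w / w.sum()
    return p @ faces - target_mean

lam = brentq(mean_residual, -10.0, 10.0)   # solve the moment constraint
w = np.exp(lam * faces)
p = w / w.sum()
entropy = -(p * np.log(p)).sum()
```

Since the target mean exceeds the uniform value of 3.5, the multiplier comes out positive and the distribution tilts toward the high faces, while its entropy remains the largest among all distributions meeting the constraint.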
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of Ramsay (1997) to functional maximum autocorrelation factors (MAF) (Switzer, 1985; Larsen, 2001). We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between… Results. MAF outperforms the functional PCA in concentrating the interesting spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially…
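The MAF criterion named in the abstract, linear combinations that maximize lag-one autocorrelation, reduces to a generalized symmetric eigenproblem between the covariance of the data and the covariance of its first differences. This is a minimal sketch on synthetic data; the signal construction is an assumption, not the paper's spectra or shapes:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

# Hypothetical data: a smooth (highly autocorrelated) signal hidden in one
# linear combination of the variables, white noise in the other.
t = np.linspace(0, 8 * np.pi, 2000)
smooth = np.sin(t)
noise = rng.normal(size=(2000, 2))
X = np.column_stack([smooth + 0.1 * noise[:, 0], noise[:, 1]])

# MAF: minimize Var(a' diff(x)) / Var(a' x), i.e. solve the generalized
# symmetric eigenproblem  S_d a = mu S a; autocorrelation rho = 1 - mu/2.
Xc = X - X.mean(axis=0)
S = np.cov(Xc, rowvar=False)
D = np.diff(Xc, axis=0)
Sd = np.cov(D, rowvar=False)
mu, vecs = eigh(Sd, S)              # ascending mu: first column = first MAF
maf1 = Xc @ vecs[:, 0]
rho1 = 1 - mu[0] / 2
```

The first MAF concentrates the smooth variation in one component, which is the "easier interpretation" property the abstract claims over ordinary PCA (PCA would rank components by variance, not smoothness).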
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
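A minimal sketch of the regularized MCC idea from the abstract: maximize Gaussian correntropy between predictions and labels with an L2 penalty, here by plain gradient ascent. The data, kernel width sigma, and step sizes are illustrative assumptions, and the paper's actual algorithm is an alternating (half-quadratic-style) optimization rather than this direct ascent:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy binary data with a fraction of flipped ("noisy") labels.
n = 400
X = rng.normal(size=(n, 2)) + np.where(rng.random(n) < 0.5, 1.5, -1.5)[:, None]
y = np.sign(X[:, 0] + X[:, 1])      # clean linear labels
flip = rng.random(n) < 0.1
y_noisy = y.copy()
y_noisy[flip] *= -1                 # 10% label noise

# Regularized MCC (sketch): maximize
#   (1/n) sum_i exp(-(w'x_i - y_i)^2 / (2 sigma^2)) - c ||w||^2.
# Samples with large residuals are exponentially down-weighted, which is
# what makes the criterion robust to the flipped labels.
sigma, c, lr = 1.0, 0.01, 0.05
w = np.zeros(2)
for _ in range(500):
    r = X @ w - y_noisy
    weight = np.exp(-r**2 / (2 * sigma**2))   # per-sample robustness weight
    grad = -(weight * r) @ X / (n * sigma**2) - 2 * c * w
    w += lr * grad

acc = np.mean(np.sign(X @ w) == y)  # accuracy against the clean labels
```

Despite training on corrupted labels, the learned direction stays close to the clean decision rule, because the flipped samples' large residuals shrink their gradient contribution.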
Portraits of Principal Practice: Time Allocation and School Principal Work
Sebastian, James; Camburn, Eric M.; Spillane, James P.
2018-01-01
Purpose: The purpose of this study was to examine how school principals in urban settings distributed their time working on critical school functions. We also examined who principals worked with and how their time allocation patterns varied by school contextual characteristics. Research Method/Approach: The study was conducted in an urban school…
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
School Principals' Sources of Knowledge
Perkins, Arland Early
2014-01-01
The purpose of this study was to determine what sources of professional knowledge are available to principals in 1 rural East Tennessee school district. Qualitative research methods were applied to gain an understanding of what sources of knowledge are used by school principals in 1 rural East Tennessee school district and the barriers they face…
Innovation Management Perceptions of Principals
Bakir, Asli Agiroglu
2016-01-01
This study aims to determine principals' perceptions of innovation management and to investigate whether these perceptions differ significantly according to various parameters. The study uses a descriptive research model, and the population consists of principals who participated in the "Acquiring Formation Course…
What Do Effective Principals Do?
Protheroe, Nancy
2011-01-01
Much has been written during the past decade about the changing role of the principal and the shift in emphasis from manager to instructional leader. Anyone in education, and especially principals themselves, could develop a mental list of responsibilities that fit within each of these realms. But research makes it clear that both those aspects of…
Time Management for New Principals
Ruder, Robert
2008-01-01
Becoming a principal is a milestone in an educator's professional life. The principalship is an opportunity to provide leadership that will afford students opportunities to thrive in a nurturing and supportive environment. Despite the continuously expanding demands of being a new principal, effective time management will enable an individual to be…
Bureaucratic Control and Principal Role.
Bezdek, Robert; And Others
The purposes of this study were to determine the manner in which the imposition of increased bureaucratic control over principals influenced their allocation of time to tasks and to investigate principals' perceptions of the changes in their roles brought about by this increased control. The specific bureaucratic control system whose effects were…
Assessment of School Principals' Reassignment Process
Sezgin-Nartgün, Senay; Ekinci, Serkan
2016-01-01
This study aimed to identify administrators' views related to the assessment of school principals' reassignment in educational organizations. The study utilized qualitative research design and the study group composed of 8 school administrators selected via simple sampling who were employed in the Bolu central district in 2014-2015 academic year.…
Geometry of Quantum Principal Bundles. Pt. 1
Durdevic, M.
1996-01-01
A theory of principal bundles possessing quantum structure groups and classical base manifolds is presented. Structural analysis of such quantum principal bundles is performed. A differential calculus is constructed, combining differential forms on the base manifold with an appropriate differential calculus on the structure quantum group. Relations between the calculus on the group and the calculus on the bundle are investigated. A concept of (pseudo)tensoriality is formulated. The formalism of connections is developed. In particular, operators of horizontal projection, covariant derivative and curvature are constructed and analyzed. Generalizations of the first Structure Equation and of the Bianchi identity are found. Illustrative examples are presented. (orig.)
relationship between principals' management approaches
Admin
Data were collected using a self-administered questionnaire from a sample of 211 teachers, 28 principals, and 22 chairpersons of parent-teacher associations. Data were … their role expectations in discipline management. Data from the 20 …
Principals, agents and research programmes
Elizabeth Shove
2003-01-01
Research programmes appear to represent one of the more powerful instruments through which research funders (principals) steer and shape what researchers (agents) do. The fact that agents navigate between different sources and styles of programme funding and that they use programmes to their own ends is readily accommodated within principal-agent theory with the help of concepts such as shirking and defection. Taking a different route, I use three examples of research programming (by the UK, ...
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
24 CFR 232.565 - Maximum loan amount.
2010-04-01
... URBAN DEVELOPMENT MORTGAGE AND LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES MORTGAGE INSURANCE FOR NURSING HOMES, INTERMEDIATE CARE FACILITIES, BOARD AND CARE HOMES, AND ASSISTED... Fire Safety Equipment Eligible Security Instruments § 232.565 Maximum loan amount. The principal amount...
Principal Curves on Riemannian Manifolds.
Hauberg, Soren
2016-09-01
Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both is geodesic and must pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves from Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent space of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.
José Piquer
2010-01-01
The Cenozoic geologic evolution of the central part of the Cordillera Principal at ~35°S is intimately related to the geodynamic evolution of deep crustal structures, which during different stages controlled the deposition of volcanosedimentary sequences and the ascent and emplacement of epizonal intrusions. Newly defined stratigraphy around these structures confirms the Cenozoic age of a group of pyroclastic and sedimentary rocks, which conformably underlie andesitic lavas of the Abanico Formation (assigned to the Late Eocene-Early to Middle Miocene). Intrusive rocks correspond to four main phases (from oldest to youngest: diorite, granodiorite, rhyo-dacitic and dacitic porphyry), which occur in a north-south trending belt. The granodiorite was dated at 7.8±0.4 Ma (K-Ar in biotite). Rhyo-dacitic porphyries, considered as a marginal lithodeme of the granodiorite, yielded 7.9±0.4 Ma (K-Ar in plagioclase phenocrysts). Two main structures of regional importance were observed: the El Fierro thrust and, towards the west, the Infiernillo-Los Cipreses Fault System. In the characterization of the latter, magnetic modeling of cross-sections was analyzed as a complement to the geologic information. The ascent of the different intrusive phases mentioned before is interpreted as being controlled by the Infiernillo-Los Cipreses Fault System. This structure, as well as the El Fierro thrust, acted as a basin-margin normal fault during the Late Eocene-Middle Miocene, controlling the deposition of the Abanico Formation. These faults were reactivated as reverse faults during an episode of major tectonic contraction and magmatic-induced high fluid pressure in the Late Miocene, focusing the ascent of the intrusive bodies.
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...
The Impact of Pay Satisfaction and School Achievement on High School Principals' Turnover Intentions
Tran, Henry
2017-01-01
In recent years, a principal supply shortage crisis has emerged in the USA. This problem has been exacerbated by an increase in principal departures, which has been found to be negatively related to school outcomes. While research exists on several determinants of principal turnover, any examination of the relationship between principals'…
Surface analysis the principal techniques
Vickerman, John C
2009-01-01
This completely updated and revised second edition of Surface Analysis: The Principal Techniques, deals with the characterisation and understanding of the outer layers of substrates, how they react, look and function which are all of interest to surface scientists. Within this comprehensive text, experts in each analysis area introduce the theory and practice of the principal techniques that have shown themselves to be effective in both basic research and in applied surface analysis. Examples of analysis are provided to facilitate the understanding of this topic and to show readers how they c
The Technology Principal: To Be or Not to Be?
Anthony, Anika Ball; Patravanich, Supawaree
2014-01-01
This case provides principal licensure candidates a strategic perspective on leading and managing educational technology initiatives. It presents issues related to vision setting, planning, implementation, organizational structure, and decision making. The case narrative is presented from the perspective of a principal, but it can also be used to…
Integrating Data Transformation in Principal Components Analysis
Maadooliat, Mehdi
2015-01-02
Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior to applying PCA. Such transformation is usually obtained from previous studies, prior knowledge, or trial-and-error. In this work, we develop a model-based method that integrates data transformation in PCA and finds an appropriate data transformation using the maximum profile likelihood. Extensions of the method to handle functional data and missing values are also developed. Several numerical algorithms are provided for efficient computation. The proposed method is illustrated using simulated and real-world data examples.
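A rough sketch of the pipeline the abstract describes: estimate a per-variable power transform by maximum likelihood before applying PCA. Here SciPy's Box-Cox transform (which picks lambda by maximum likelihood) stands in for the paper's integrated profile-likelihood procedure, and the synthetic skewed data are an assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Skewed data: exponentiate a one-factor Gaussian model, so a log-like
# transform (Box-Cox lambda near 0) should be recovered before PCA.
scores = rng.normal(size=(300, 1))
loadings = np.array([[1.0, -0.8, 0.6, 1.2]])
Z = scores @ loadings + 0.1 * rng.normal(size=(300, 4))
X = np.exp(Z)                        # strictly positive, right-skewed

# Per-variable Box-Cox transform; lmbda=None makes SciPy fit lambda by
# maximum likelihood for each column.
Xt = np.empty_like(X)
lambdas = []
for j in range(X.shape[1]):
    Xt[:, j], lam = stats.boxcox(X[:, j])
    lambdas.append(lam)

# PCA on the transformed, standardized data via SVD.
Xs = (Xt - Xt.mean(axis=0)) / Xt.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
explained = s**2 / (s**2).sum()
```

On data generated this way, the fitted lambdas sit near zero (recovering the log) and the first principal component captures most of the variance, which is the behavior an integrated transform-then-PCA method is meant to restore on skewed inputs.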
Shara, Michael M.; Doyle, Trisha; Zurek, David [Department of Astrophysics, American Museum of Natural History, Central Park West and 79th Street, New York, NY 10024-5192 (United States); Lauer, Tod R. [National Optical Astronomy Observatory, P.O. Box 26732, Tucson, AZ 85726 (United States); Baltz, Edward A. [KIPAC, SLAC, 2575 Sand Hill Road, M/S 29, Menlo Park, CA 94025 (United States); Kovetz, Attay [School of Physics and Astronomy, Faculty of Exact Sciences, Tel Aviv University, Tel Aviv (Israel); Madrid, Juan P. [CSIRO, Astronomy and Space Science, P.O. Box 76, Epping, NSW 1710 (Australia); Mikołajewska, Joanna [N. Copernicus Astronomical Center, Polish Academy of Sciences, Bartycka 18, PL 00-716 Warsaw (Poland); Neill, J. D. [California Institute of Technology, 1200 East California Boulevard, MC 278-17, Pasadena CA 91125 (United States); Prialnik, Dina [Department of Geosciences, Tel Aviv University, Ramat Aviv, Tel Aviv 69978 (Israel); Welch, D. L. [Department of Physics and Astronomy, McMaster University, Hamilton, L8S 4M1, Ontario (Canada); Yaron, Ofer [Department of Particle Physics and Astrophysics, Weizmann Institute of Science, 76100 Rehovot (Israel)
2017-04-20
The extensive grid of numerical simulations of nova eruptions from the work of Yaron et al. first predicted that some classical novae might significantly deviate from the Maximum Magnitude–Rate of Decline (MMRD) relation, which purports to characterize novae as standard candles. Kasliwal et al. have announced the observational detection of a new class of faint, fast classical novae in the Andromeda galaxy. These objects deviate strongly from the MMRD relationship, as predicted by Yaron et al. Recently, Shara et al. reported the first detections of faint, fast novae in M87. These previously overlooked objects are as common in the giant elliptical galaxy M87 as they are in the giant spiral M31; they comprise about 40% of all classical nova eruptions and greatly increase the observational scatter in the MMRD relation. We use the extensive grid of the nova simulations of Yaron et al. to identify the underlying causes of the existence of faint, fast novae. These are systems that have accreted, and can thus eject, only very low-mass envelopes, of the order of 10^−7–10^−8 M_⊙, on massive white dwarfs. Such binaries include, but are not limited to, the recurrent novae. These same models predict the existence of ultrafast novae that display decline times, t_2, as short as five hours. We outline a strategy for their future detection.
School Uniforms: Guidelines for Principals.
Essex, Nathan L.
2001-01-01
Principals desiring to develop a school-uniform policy should involve parents, teachers, community leaders, and student representatives; beware restrictions on religious and political expression; provide flexibility and assistance for low-income families; implement a pilot program; align the policy with school-safety issues; and consider legal…
The Principal and Tort Liability.
Stern, Ralph D.
The emphasis of this chapter is on the tort liability of principals, especially their commission of unintentional torts or torts resulting from negligent conduct. A tort is defined as a wrongful act, not including a breach of contract or trust, which results in injury to another's person, property, or reputation and for which the injured party is…
Teachers' Perspectives on Principal Mistreatment
Blase, Joseph; Blase, Jo
2006-01-01
Although there is some important scholarly work on the problem of workplace mistreatment/abuse, theoretical or empirical work on abusive school principals is nonexistent. Symbolic interactionism was the theoretical structure for the present study. This perspective on social research is founded on three primary assumptions: (1) individuals act…
COPD phenotype description using principal components analysis
Roy, Kay; Smith, Jacky; Kolsum, Umme
2009-01-01
BACKGROUND: Airway inflammation in COPD can be measured using biomarkers such as induced sputum and Fe(NO). This study set out to explore the heterogeneity of COPD using biomarkers of airway and systemic inflammation and pulmonary function by principal components analysis (PCA). SUBJECTS AND METHODS: In 127 COPD patients (mean FEV1 61%), pulmonary function, Fe(NO), plasma CRP and TNF-alpha, sputum differential cell counts and sputum IL8 (pg/ml) were measured. Principal components analysis as well as multivariate analysis was performed. RESULTS: PCA identified four main components (% variance…) … associations between the variables within components 1 and 2. CONCLUSION: COPD is a multi-dimensional disease. Unrelated components of disease were identified, including neutrophilic airway inflammation, which was associated with systemic inflammation, and sputum eosinophils, which were related to increased Fe…
Principal semantic components of language and the measurement of meaning.
Samsonovich, Alexei V; Samsonovic, Alexei V; Ascoli, Giorgio A
2010-06-11
Metric systems for semantics, or semantic cognitive maps, are allocations of words or other representations in a metric space based on their meaning. Existing methods for semantic mapping, such as Latent Semantic Analysis and Latent Dirichlet Allocation, are based on paradigms involving dissimilarity metrics. They typically do not take into account relations of antonymy and yield a large number of domain-specific semantic dimensions. Here, using a novel self-organization approach, we construct a low-dimensional, context-independent semantic map of natural language that represents simultaneously synonymy and antonymy. Emergent semantics of the map principal components are clearly identifiable: the first three correspond to the meanings of "good/bad" (valence), "calm/excited" (arousal), and "open/closed" (freedom), respectively. The semantic map is sufficiently robust to allow the automated extraction of synonyms and antonyms not originally in the dictionaries used to construct the map and to predict connotation from their coordinates. The map geometric characteristics include a limited number ( approximately 4) of statistically significant dimensions, a bimodal distribution of the first component, increasing kurtosis of subsequent (unimodal) components, and a U-shaped maximum-spread planar projection. Both the semantic content and the main geometric features of the map are consistent between dictionaries (Microsoft Word and Princeton's WordNet), among Western languages (English, French, German, and Spanish), and with previously established psychometric measures. By defining the semantics of its dimensions, the constructed map provides a foundational metric system for the quantitative analysis of word meaning. Language can be viewed as a cumulative product of human experiences. Therefore, the extracted principal semantic dimensions may be useful to characterize the general semantic dimensions of the content of mental states. This is a fundamental step toward a
Principal semantic components of language and the measurement of meaning.
Alexei V Samsonovich
Pantelia, Anna
2014-01-01
1 April 2014 - President of the Parliament of the Principality of Liechtenstein A. Frick and his delegation visiting the LHC tunnel at Point 1 with Technology Department Head J.M. Jiménez and signing the Guest book with CERN Director-General R. Heuer. Deputy Head of International Relations E. Tsesmelis present throughout.
Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo
Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.
2010-01-01
model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order
Developing Principal Instructional Leadership through Collaborative Networking
Cone, Mariah Bahar
2010-01-01
This study examines what occurs when principals of urban schools meet together to learn and improve their instructional leadership in collaborative principal networks designed to support, sustain, and provide ongoing principal capacity building. Principal leadership is considered second only to teaching in its ability to improve schools, yet few…
2010-07-01
... SUSPENSION (NONPROCUREMENT) Definitions § 19.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or supervisory... 31 Money and Finance: Treasury 1 2010-07-01 2010-07-01 false Principal. 19.995 Section 19.995...
2010-07-01
... SUSPENSION (NONPROCUREMENT) Definitions § 1471.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator, or other person within a participant with management or... 29 Labor 4 2010-07-01 2010-07-01 false Principal. 1471.995 Section 1471.995 Labor Regulations...
2010-01-01
... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false Principal. 180.995 Section 180.995 Grants and Agreements OFFICE OF MANAGEMENT AND BUDGET GOVERNMENTWIDE GUIDANCE FOR GRANTS AND AGREEMENTS... § 180.995 Principal. Principal means— (a) An officer, director, owner, partner, principal investigator...
Female Principals in Education: Breaking the Glass Ceiling in Spain
Enrique Javier Diez Gutierrez
Full Text Available Abstract Spanish schools are characterised by having a high proportion of female staff. However, statistics show that a proportionately higher number of men hold leadership positions. The aim of this study was to analyse the reasons why this is so, and to determine the motivations and barriers that women encounter in attaining and exercising these positions of greater responsibility and power. Questionnaires were administered to 2,022 female teachers, 430 female principals and 322 male principals. In addition, semi-structured interviews were held with 60 female principals, 14 focus group discussions were held with female principals and 16 autobiographical narratives were compiled with female principals and school inspectors. The reasons identified were related to structural aspects linked to the patriarchal worldview that is still dominant in our society and culture. Nevertheless, we also found motivations among women for attaining and exercising leadership roles.
Moriarty, Margaret E.
2012-01-01
This mixed-methods study was designed to determine how principals perceived the ethicality of sanctions for students engaged in sexting behavior relative to the race/ethnicity and gender of the student. Personality traits of the principals were surveyed to determine if Openness and/or Conscientiousness would predict principal response. Sexting is…
Fischer, M. J.
2014-02-01
There are many different methods for investigating the coupling between two climate fields, which are all based on the multivariate regression model. Each different method of solving the multivariate model has its own attractive characteristics, but often the suitability of a particular method for a particular problem is not clear. Continuum regression methods search the solution space between the conventional methods and thus can find regression model subspaces that mix the attractive characteristics of the end-member subspaces. Principal covariates regression is a continuum regression method that is easily applied to climate fields and makes use of two end-members: principal components regression and redundancy analysis. In this study, principal covariates regression is extended to additionally span a third end-member (partial least squares or maximum covariance analysis). The new method, regularized principal covariates regression, has several attractive features including the following: it easily applies to problems in which the response field has missing values or is temporally sparse, it explores a wide range of model spaces, and it seeks a model subspace that will, for a set number of components, have a predictive skill that is the same or better than conventional regression methods. The new method is illustrated by applying it to the problem of predicting the southern Australian winter rainfall anomaly field using the regional atmospheric pressure anomaly field. Regularized principal covariates regression identifies four major coupled patterns in these two fields. The two leading patterns, which explain over half the variance in the rainfall field, are related to the subtropical ridge and features of the zonally asymmetric circulation.
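The principal components regression end-member mentioned in this abstract can be sketched in a few lines of NumPy. This is a generic illustration of that end-member, not the authors' regularized method, and the synthetic predictor/response fields are invented for the example:

```python
import numpy as np

def pcr_fit(X, Y, k):
    """Principal components regression: project the centered predictor
    field onto its k leading principal components, then regress the
    response on the component scores."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:k].T                       # loadings of the k leading PCs
    scores = Xc @ V                    # PC time series
    B, *_ = np.linalg.lstsq(scores, Yc, rcond=None)
    return V @ B                       # coefficients in predictor space

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
X[:, 0] *= 5.0                         # give one mode most of the variance
Y = 2.0 * X[:, :1] + 0.01 * rng.normal(size=(200, 1))
coef = pcr_fit(X, Y, k=3)              # coef[0] recovers the slope of ~2
```

Truncating at k components is what regularizes the regression; the continuum methods discussed above interpolate between this and other end-member solutions.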
Gustavo Souza Valladares
2008-02-01
SBCS (Brazilian Soil Science Society); the analytical methods proposed by EMBRAPA-Solos were used to characterize the soils. The principal component analysis was used to cluster the profiles based on morphological, physical, chemical and environmental attributes and proved adequate to group the soils under study based on the profile attributes, and the grouping was well related to their taxonomy. The soil profiles were ranked by the ordinal multicriteria methods of Borda, Condorcet and Copeland based on the subsidence risk. Results indicated a correlation between the methods (with the exception of the Condorcet approach, unsuitable to rank the alternatives) and the minimum residue, which is the classical parameter for the evaluation of subsidence, indicating efficacy to rank/classify the soil profiles in relation to subsidence risk. The quantitative approaches used are promising as evaluation tools in soil science studies.
Density estimation by maximum quantum entropy
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
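For the classical-entropy case this abstract builds on, a maximum-entropy density under moment constraints can be found by minimizing the convex dual of the entropy functional. A minimal sketch on a grid with a uniform base measure (the paper's quantum-entropy smoothing and evidence step are not reproduced here):

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(xgrid, features, targets):
    """Classical maximum-entropy density p(x) ~ exp(sum_k lam_k f_k(x)),
    with multipliers lam chosen so each expectation <f_k> matches its
    target. Solved via the dual: minimize log Z(lam) - lam . targets."""
    F = np.stack([f(xgrid) for f in features])    # (K, N) feature matrix
    dx = xgrid[1] - xgrid[0]

    def dual(lam):
        logZ = np.log(np.exp(lam @ F).sum() * dx)
        return logZ - lam @ targets

    lam = minimize(dual, np.zeros(len(features)), method="BFGS").x
    p = np.exp(lam @ F)
    return p / (p.sum() * dx)

x = np.linspace(-6.0, 6.0, 1201)
# constraining mean 0 and variance 1 yields the standard normal
p = maxent_density(x, [lambda t: t, lambda t: t * t], np.array([0.0, 1.0]))
```

With first- and second-moment constraints the recovered density is Gaussian, the textbook check for any maximum-entropy solver.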
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Chang, Yujin; Leach, Nicole; Anderman, Eric M.
2015-01-01
The purpose of this study is to examine the relations between principals' perceived autonomy support from superintendents, affective commitment to their school districts, and job satisfaction. We also explore possible moderation effects of principals' career experiences on these relations. Data were collected from K-12 public school principals in…
School Principals as Marketing Managers: The Expanding Role of Marketing for School Development
Anast-May, Linda; Mitchell, Mark; Buckner, Barbara Chesler; Elsberry, Cindy
2012-01-01
This study examined the relative importance that school principals attach to aspects of their role as marketing managers for their schools and their relative satisfaction with their efforts to date. The study included 60 principals from two school districts. Findings suggest that principals are aware of the importance of marketing in today's…
Maximum gravitational redshift of white dwarfs
Shapiro, S.L.; Teukolsky, S.A.
1976-01-01
The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ2, ζ3, and ζ4), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.
Nakada, Masao; Okuno, Jun'ichi; Yokoyama, Yusuke
2016-02-01
Inference of globally averaged eustatic sea level (ESL) rise since the Last Glacial Maximum (LGM) highly depends on the interpretation of relative sea level (RSL) observations at Barbados and Bonaparte Gulf, Australia, which are sensitive to the viscosity structure of Earth's mantle. Here we examine the RSL changes at the LGM for Barbados and Bonaparte Gulf (RSL_L^Bar and RSL_L^Bon), the differential RSL for both sites (ΔRSL_L^Bar,Bon) and the rate of change of the degree-two harmonics of Earth's geopotential due to the glacial isostatic adjustment (GIA) process (GIA-induced J̇2) to infer the ESL component and viscosity structure of Earth's mantle. The differential RSL, ΔRSL_L^Bar,Bon, and the GIA-induced J̇2 are dominantly sensitive to the lower-mantle viscosity, and nearly insensitive to the upper-mantle rheological structure and to GIA ice models with an ESL component of about 120-130 m. The comparison between the predicted and observationally derived ΔRSL_L^Bar,Bon indicates a lower-mantle viscosity higher than ~2 × 10^22 Pa s, and the observationally derived GIA-induced J̇2 of -(6.0-6.5) × 10^-11 yr^-1 indicates two permissible solutions for the lower mantle, ~10^22 and (5-10) × 10^22 Pa s. That is, the effective lower-mantle viscosity inferred from these two observational constraints is (5-10) × 10^22 Pa s. The LGM RSL changes at both sites, RSL_L^Bar and RSL_L^Bon, are also sensitive to the ESL component and upper-mantle viscosity as well as the lower-mantle viscosity. The permissible upper-mantle viscosity increases with decreasing ESL component due to the sensitivity of the LGM sea level at Bonaparte Gulf (RSL_L^Bon) to the upper-mantle viscosity, and the inferred upper-mantle viscosity for adopted lithospheric thicknesses of 65 and 100 km is (1-3) × 10^20 Pa s for ESL ~130 m and (4-10) × 10^20 Pa s for ESL ~125 m. The former solution of (1-3) × 10^20
Principal chiral model on superspheres
Mitev, V.; Schomerus, V.; Quella, T.
2008-09-01
We investigate the spectrum of the principal chiral model (PCM) on odd-dimensional superspheres as a function of the curvature radius R. For volume-filling branes on S^(3|2), we compute the exact boundary spectrum as a function of R. The extension to higher-dimensional superspheres is discussed, but not carried out in detail. Our results provide very convincing evidence in favor of the strong-weak coupling duality between supersphere PCMs and OSP(2S+2|2S) Gross-Neveu models that was recently conjectured by Candu and Saleur. (orig.)
Interpretable functional principal component analysis.
Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo
2016-09-01
Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.
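The classical (non-penalized) FPCA baseline that the proposed method refines amounts to an eigendecomposition of the empirical covariance of densely sampled curves. A minimal sketch with invented synthetic curves; the interpretability penalty and projection-deflation algorithm of the paper are not reproduced:

```python
import numpy as np

def fpca(curves, n_components=2):
    """Classical functional PCA on densely sampled curves: center the
    sample, then take the leading eigenvectors of the empirical
    covariance matrix as discretized FPCs (returned as rows)."""
    centered = curves - curves.mean(axis=0)
    cov = np.cov(centered, rowvar=False)           # (T, T) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)         # ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvals[order], eigvecs[:, order].T

t = np.linspace(0.0, 1.0, 100)
rng = np.random.default_rng(1)
amp = rng.normal(size=(300, 2))
# two fixed modes of variation plus observation noise
curves = (3.0 * amp[:, :1] * np.sin(2 * np.pi * t)
          + amp[:, 1:] * np.cos(2 * np.pi * t)
          + 0.05 * rng.normal(size=(300, 100)))
eigvals, fpcs = fpca(curves)        # fpcs[0] recovers the sine mode, up to sign
```

In this baseline the FPCs are nonzero everywhere on the domain; the paper's contribution is to force them to zero outside the intervals where variation is significant.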
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3 × 10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were
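The first-order arithmetic behind a density-based salinity estimate can be sketched with a linearized haline slope. The 0.75 kg/m³ per g/kg slope below is an assumed round number for illustration, not the full seawater equation of state or the paper's ion-by-ion composition correction:

```python
# Assumed round-number haline density slope near surface conditions,
# at fixed temperature and pressure (illustrative, not TEOS-10).
HALINE_SLOPE = 0.75  # kg m^-3 per (g/kg)

def salinity_from_density(rho, rho_ref, s_ref):
    """Linearized salinity estimate from a high-precision density
    measurement, relative to a reference water of known density
    (kg/m^3) and salinity (g/kg)."""
    return s_ref + (rho - rho_ref) / HALINE_SLOPE

# a density precision of 2.3e-6 g/mL = 2.3e-3 kg/m^3 then maps to a
# salinity uncertainty of order 0.003 g/kg, the same order as quoted
salinity_uncertainty = 2.3e-3 / HALINE_SLOPE
```

The point of the sketch is the leverage: milligram-per-liter density precision is what makes millisalinity precision possible at all.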
Maximum stellar iron core mass
Giacobbe, F.W.
2003-03-01
... journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. Chicago Research Center/American Air Liquide ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large ... thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
A portable storage maximum thermometer
Fayart, Gerard.
1976-01-01
A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system. [fr]
Female Traditional Principals and Co-Principals: Experiences of Role Conflict and Job Satisfaction
Eckman, Ellen Wexler; Kelber, Sheryl Talcott
2010-01-01
This paper presents a secondary analysis of survey data focusing on role conflict and job satisfaction of 102 female principals. Data were collected from 51 female traditional principals and 51 female co-principals. By examining the traditional and co-principal leadership models as experienced by female principals, this paper addresses the impact…
Neutron spectra unfolding with maximum entropy and maximum likelihood
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
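The positivity property stressed in this abstract is shared by the standard ML-EM (Richardson-Lucy) iteration for Poisson unfolding, sketched below on an invented 3×3 toy response matrix. This is the generic EM solver, not the authors' combined entropy/likelihood algorithm:

```python
import numpy as np

def poisson_ml_unfold(R, counts, n_iter=2000):
    """ML-EM (Richardson-Lucy) iteration for counts ~ Poisson(R @ phi).
    Every update multiplies phi by a positive factor, so a positive
    initial guess stays positive over the whole energy range."""
    phi = np.full(R.shape[1], counts.sum() / R.sum())  # flat positive start
    for _ in range(n_iter):
        predicted = R @ phi
        phi = phi * (R.T @ (counts / predicted)) / R.sum(axis=0)
    return phi

R = np.array([[0.8, 0.2, 0.0],          # toy detector response (smearing)
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
true_phi = np.array([100.0, 50.0, 20.0])
counts = R @ true_phi                   # noiseless data for the sketch
phi = poisson_ml_unfold(R, counts)      # converges toward true_phi
```

With noisy data the iteration is usually stopped early or regularized, which is where an entropy prior of the kind described above enters.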
The radial distribution of cosmic rays in the heliosphere at solar maximum
McDonald, F. B.; Fujii, Z.; Heikkila, B.; Lal, N.
2003-08-01
To obtain a more detailed profile of the radial distribution of galactic (GCRs) and anomalous (ACRs) cosmic rays, a unique time in the 11-year solar activity cycle has been selected - that of solar maximum. At this time of minimum cosmic ray intensity a simple, straightforward normalization technique has been found that allows the cosmic ray data from IMP 8, Pioneer 10 (P-10) and Voyagers 1 and 2 (V1, V2) to be combined for the solar maxima of cycles 21, 22 and 23. This combined distribution reveals a functional form of the radial gradient that varies as G0/r with G0 being constant and relatively small in the inner heliosphere. After a transition region between ~10 and 20 AU, G0 increases to a much larger value that remains constant between ~25 and 82 AU. This implies that at solar maximum the changes that produce the 11-year modulation cycle are mainly occurring in the outer heliosphere between ~15 AU and the termination shock. These observations are not inconsistent with the concept that Global Merged Interaction Regions (GMIRs) are the principal agent of modulation between solar minimum and solar maximum. There does not appear to be a significant change in the amount of heliosheath modulation occurring between the 1997 solar minimum and the cycle 23 solar maximum.
On Bayesian Principal Component Analysis
Šmídl, Václav; Quinn, A.
2007-01-01
Roč. 51, č. 9 (2007), s. 4101-4123 ISSN 0167-9473 R&D Projects: GA MŠk(CZ) 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Principal component analysis ( PCA ) * Variational bayes (VB) * von-Mises–Fisher distribution Subject RIV: BC - Control Systems Theory Impact factor: 1.029, year: 2007 http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V8V-4MYD60N-6&_user=10&_coverDate=05%2F15%2F2007&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=b8ea629d48df926fe18f9e5724c9003a
On Maximum Entropy and Inference
Luigi Gresele
2017-11-01
Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden pump failure. Determining the maximum water hammer is one of the most important technical and economic considerations for engineers and designers of pumping stations and conveyance pipelines. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining significance of ...
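The textbook upper bound behind such analyses is the Joukowsky relation for instantaneous valve closure, ΔP = ρ·a·Δv. Full transient studies (e.g. with the Hammer software mentioned above) solve the method-of-characteristics equations instead, but the bound itself is one line; the numbers below are generic illustrative values:

```python
def joukowsky_surge(rho, wave_speed, delta_v):
    """Maximum water-hammer pressure rise (Pa) for instantaneous valve
    closure: delta_P = rho * a * delta_v. Slower closures and friction
    reduce the surge, so this is a conservative bound."""
    return rho * wave_speed * delta_v

# water (1000 kg/m^3), a typical steel-pipe wave speed of ~1200 m/s,
# and flow stopped from 2 m/s: a surge of 2.4e6 Pa (about 24 bar)
surge = joukowsky_surge(1000.0, 1200.0, 2.0)
```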
Yunfeng Shan
2008-01-01
Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
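Once each gene tree is reduced to a canonical topology string, the "maximum gene-support tree" is simply the modal topology. A minimal tallying sketch with hypothetical topology strings; building the trees themselves is what the four algorithms above (MP, ME, ML, NJ) do:

```python
from collections import Counter

def maximum_gene_support_tree(gene_tree_topologies):
    """Return the topology supported by the most single-gene trees,
    its support count, and the supporting fraction. Topologies must be
    canonical strings so that identical trees compare equal."""
    tally = Counter(gene_tree_topologies)
    topology, support = tally.most_common(1)[0]
    return topology, support, support / len(gene_tree_topologies)

# hypothetical topology strings for 10 single-gene trees of 3 species
trees = ["((A,B),C);"] * 6 + ["((A,C),B);"] * 3 + ["((B,C),A);"]
best, count, frac = maximum_gene_support_tree(trees)
```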
LCLS Maximum Credible Beam Power
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed
Principals as Assessment Leaders in Rural Schools
Renihan, Patrick; Noonan, Brian
2012-01-01
This article reports a study of rural school principals' assessment leadership roles and the impact of rural context on their work. The study involved three focus groups of principals serving small rural schools of varied size and grade configuration in three systems. Principals viewed assessment as a matter of teacher accountability and as a…
Principal Stability and the Rural Divide
Pendola, Andrew; Fuller, Edward J.
2018-01-01
This article examines the unique features of the rural school context and how these features are associated with the stability of principals in these schools. Given the small but growing literature on the characteristics of rural principals, this study presents an exploratory analysis of principal stability across schools located in different…
New Principal Coaching as a Safety Net
Celoria, Davide; Roberson, Ingrid
2015-01-01
This study examines new principal coaching as an induction process and explores the emotional dimensions of educational leadership. Twelve principal coaches and new principals--six of each--participated in this qualitative study that employed emergent coding (Creswell, 2008; Denzin, 2005; Glaser & Strauss, 1998; Spradley, 1979). The major…
12 CFR 561.39 - Principal office.
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Principal office. 561.39 Section 561.39 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY DEFINITIONS FOR REGULATIONS AFFECTING ALL SAVINGS ASSOCIATIONS § 561.39 Principal office. The term principal office means the home...
The Principal as Academician: The Renewed Voice.
McAvoy, Brenda, Ed.
This collection of essays was written by principals who participated in the 1986-87 Humanities Seminar sponsored by the Principals' Institute of Georgia State University. The focus was "The Evolution of Intellectual Leadership." The roles of the principal as philosopher, historian, ethnician, writer and team member are examined through…
Modelling Monthly Mental Sickness Cases Using Principal ...
The methodology was principal component analysis (PCA) using data obtained from the hospital to estimate regression coefficients and parameters. It was found that the principal component regression model that was derived was good predictive tool. The principal component regression model obtained was okay and this ...
Principals' Collaborative Roles as Leaders for Learning
Kitchen, Margaret; Gray, Susan; Jeurissen, Maree
2016-01-01
This article draws on data from three multicultural New Zealand primary schools to reconceptualize principals' roles as leaders for learning. In doing so, the writers build on Sinnema and Robinson's (2012) article on goal setting in principal evaluation. Sinnema and Robinson found that even principals hand-picked for their experience fell short on…
Perceptions of Beginning Public School Principals.
Lyons, James E.
1993-01-01
Summarizes a study to determine principal's perceptions of their competency in primary responsibility areas and their greatest challenges and frustrations. Beginning principals are challenged by delegating responsibilities and becoming familiar with the principal's role, the local school, and school operations. Their major frustrations are role…
Teacher Supervision Practices and Principals' Characteristics
April, Daniel; Bouchamma, Yamina
2015-01-01
A questionnaire was used to determine the individual and collective teacher supervision practices of school principals and vice-principals in Québec (n = 39) who participated in a research-action study on pedagogical supervision. These practices were then analyzed in terms of the principals' sociodemographic and socioprofessional characteristics…
Leadership Coaching for Principals: A National Study
Wise, Donald; Cavazos, Blanca
2017-01-01
Surveys were sent to a large representative sample of public school principals in the United States asking if they had received leadership coaching. Comparison of responses to actual numbers of principals indicates that the sample represents the first national study of principal leadership coaching. Results indicate that approximately 50% of all…
41 CFR 105-68.995 - Principal.
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false Principal. 105-68.995 Section 105-68.995 Public Contracts and Property Management Federal Property Management Regulations System...-GOVERNMENTWIDE DEBARMENT AND SUSPENSION (NONPROCUREMENT) Definitions § 105-68.995 Principal. Principal means— (a...
A principal-agent Model of corruption
Groenendijk, Nico
1997-01-01
One of the new avenues in the study of political corruption is that of neo-institutional economics, of which principal-agent theory is a part. In this article a principal-agent model of corruption is presented, in which there are two principals (one of whom is corrupting) and one agent (who is…
School Principals' Assumptions about Human Nature: Implications for Leadership in Turkey
Sabanci, Ali
2008-01-01
This article considers principals' assumptions about human nature in Turkey and the relationship between the assumptions held and the leadership style adopted in schools. The findings show that school principals hold Y-type assumptions and prefer a relationship-oriented style in their relations with assistant principals. However, both principals…
Maximum power point tracking: a cost saving necessity in solar energy systems
Enslin, J H.R. [Stellenbosch Univ. (South Africa). Dept. of Electrical and Electronic Engineering
1992-12-01
A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking (MPPT), can improve cost effectiveness, has a higher reliability, and can improve the quality of life in remote areas. A high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of between 15 and 25% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small-rating Remote Area Power Supply (RAPS) systems. The advantages at large temperature variations and for high power rated systems are much greater. Other advantages include optimal sizing and system monitoring and control. (author).
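The hill-climbing idea described in this abstract can be sketched in a few lines. The toy power curve and step sizes below are invented for illustration and do not reflect the authors' hardware or measured panel data; the sketch only shows the perturb-and-observe logic.

```python
# Sketch of hill-climbing (perturb-and-observe) MPPT. The PV power
# curve is a made-up illustrative model with a single maximum.

def pv_power(v):
    """Toy PV power curve, peaking near v = 17.5 V."""
    return max(0.0, 60.0 - 0.2 * (v - 17.5) ** 2)

def track_mpp(v_start=12.0, step=0.25, iterations=100):
    """Perturb the operating voltage; keep the direction while power rises."""
    v, direction = v_start, +1
    p = pv_power(v)
    for _ in range(iterations):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:          # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p
```

In steady state the operating point oscillates within one step of the true maximum power point, which is why real controllers trade step size against tracking speed.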
Principal component regression analysis with SPSS.
Liu, R X; Kuang, J; Gong, Q; Hou, X L
2003-06-01
The paper introduces the indices used to diagnose multicollinearity, the basic principle of principal component regression, and the method for determining the 'best' equation. A worked example describes how to perform principal component regression analysis with SPSS 10.0, covering the full calculation process and the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression can be used to overcome the disturbance caused by multicollinearity, and carrying out the analysis in SPSS makes it simpler, faster, and more accurate.
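The core of the procedure (replace collinear predictors by a principal component, then regress on the component score) can be illustrated without SPSS. This is a minimal pure-Python sketch on invented data, not the paper's worked example; with two standardized, positively correlated predictors the first component direction is simply (1, 1)/√2.

```python
# Minimal principal component regression (PCR) sketch: two nearly
# collinear predictors are replaced by their first principal component,
# then ordinary least squares is run on the component score.
import statistics

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [1.1, 1.9, 3.2, 3.9, 5.1]          # nearly collinear with x1
y  = [2.0, 4.1, 6.2, 7.9, 10.1]

def standardize(v):
    m, s = statistics.mean(v), statistics.pstdev(v)
    return [(t - m) / s for t in v]

z1, z2 = standardize(x1), standardize(x2)

# For two standardized variables with positive correlation, the first
# principal component direction is (1, 1)/sqrt(2).
pc1 = [(a + b) / 2 ** 0.5 for a, b in zip(z1, z2)]

# OLS of y on the component score; pc1 has mean 0, so the intercept
# is just the mean of y.
b1 = sum(p * t for p, t in zip(pc1, y)) / sum(p * p for p in pc1)
b0 = statistics.mean(y)
```

Because the single score replaces both predictors, the unstable coefficient estimates caused by multicollinearity disappear; that is the "disturbance" the abstract refers to.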
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based…
Teaching Principal Components Using Correlations.
Westfall, Peter H; Arias, Andrea L; Fulton, Lawrence V
2017-01-01
Introducing principal components (PCs) to students is difficult. First, the matrix algebra and mathematical maximization lemmas are daunting, especially for students in the social and behavioral sciences. Second, the standard motivation involving variance maximization subject to unit length constraint does not directly connect to the "variance explained" interpretation. Third, the unit length and uncorrelatedness constraints of the standard motivation do not allow re-scaling or oblique rotations, which are common in practice. Instead, we propose to motivate the subject in terms of optimizing (weighted) average proportions of variance explained in the original variables; this approach may be more intuitive, and hence easier to understand because it links directly to the familiar "R-squared" statistic. It also removes the need for unit length and uncorrelatedness constraints, provides a direct interpretation of "variance explained," and provides a direct answer to the question of whether to use covariance-based or correlation-based PCs. Furthermore, the presentation can be made without matrix algebra or optimization proofs. Modern tools from data science, including heat maps and text mining, provide further help in the interpretation and application of PCs; examples are given. Together, these techniques may be used to revise currently used methods for teaching and learning PCs in the behavioral sciences.
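The abstract's central claim, that "variance explained" by a PC is the average R-squared obtained when each original variable is regressed on that PC, can be checked numerically. The sketch below uses invented data and the two-variable case, where the first eigenvalue of the correlation matrix [[1, r], [r, 1]] is 1 + r.

```python
# Numerical check: for standardized variables, the proportion of variance
# explained by the first PC equals the average R-squared of the original
# variables regressed on that PC.
import statistics

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.2, 1.9, 3.4, 3.8, 5.2]

def standardize(v):
    m, s = statistics.mean(v), statistics.pstdev(v)
    return [(t - m) / s for t in v]

def corr(a, b):
    # Pearson correlation for already-standardized inputs
    return sum(p * q for p, q in zip(a, b)) / len(a)

zx, zy = standardize(x), standardize(y)
r = corr(zx, zy)

# First eigenvalue of [[1, r], [r, 1]] is 1 + r, so the first PC
# explains (1 + r) / 2 of the total (standardized) variance.
explained = (1 + r) / 2

# R-squared of each variable on PC1 = (zx + zy)/sqrt(2).
pc1 = [(a + b) / 2 ** 0.5 for a, b in zip(zx, zy)]
r2x = corr(zx, standardize(pc1)) ** 2
r2y = corr(zy, standardize(pc1)) ** 2
avg_r2 = (r2x + r2y) / 2
```

Both quantities come out equal, which is the "R-squared" link the authors propose as a gentler motivation than variance maximization under a unit-length constraint.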
Hua, Yang; Liu, Zhanqiang
2018-05-24
Residual stresses of a turned Inconel 718 surface along its axial and circumferential directions affect the fatigue performance of machined components. However, it has not been clear whether the axial and circumferential directions are the principal residual stress directions. The direction of the maximum principal residual stress is crucial for the machined component's service life. The present work focuses on determining the direction and magnitude of the principal residual stress and investigating its influence on the fatigue performance of turned Inconel 718. The turning experiments show that the principal residual stress magnitude is much higher than the surface residual stress. In addition, both the principal residual stress and the surface residual stress increase significantly as the feed rate increases. The fatigue tests show that when the direction of the maximum principal residual stress increased by 7.4%, the fatigue life decreased by 39.4%; when the maximum principal residual stress magnitude diminished by 17.9%, the fatigue life increased by 83.6%. The maximum principal residual stress thus has a preponderant influence on fatigue performance compared with the surface residual stress, and can be considered a prime indicator for evaluating the influence of residual stress on the fatigue performance of turned Inconel 718.
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
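The 90°-100°C figure quoted above can be roughly reproduced with a back-of-envelope radiative limit: if a dry, poorly conducting soil sheds heat mainly by longwave emission, the limiting surface temperature satisfies εσT⁴ ≈ absorbed shortwave flux. The emissivity value below is an assumption for illustration, not taken from the study.

```python
# Radiative upper bound on surface temperature from the simplified
# energy balance eps * sigma * T^4 = absorbed shortwave flux.

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
absorbed = 1000.0    # upper-bound absorbed shortwave flux, W m^-2 (from abstract)
eps = 0.95           # assumed soil emissivity (illustrative)

t_kelvin = (absorbed / (eps * SIGMA)) ** 0.25
t_celsius = t_kelvin - 273.15   # roughly 96 degrees C
```

Real maxima sit below this bound because the balance neglects sensible and latent heat fluxes and conduction into the soil, which is exactly why low thermal conductivity pushes observed temperatures toward it.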
Prelas, M. A.; Hora, H.; Miley, G. H.
2014-07-01
Evaluation of nuclear binding energies from theory close to available measurements of a very high number of superheavy elements (SHE) based on α-decay energies Qα, arrived at a closing shell with a significant neutron number 184. Within the option of several discussed magic numbers for protons of around 120, Bagge's numbers 126 and 184 fit well and are supported by the element generation measurements by low energy nuclear reactions (LENR) discovered in deuterium loaded host metals. These measurements were showing a Maruhn-Greiner maximum from fission of compound nuclei in an excited state with double magic numbers for mutual confirmation.
Nonlinear principal component analysis and its applications
Mori, Yuichi; Makino, Naomichi
2016-01-01
This book expounds the principle and related applications of nonlinear principal component analysis (PCA), which is a useful method for analyzing data with mixed measurement levels. In the part dealing with the principle, after a brief introduction of ordinary PCA, a PCA for categorical data (nominal and ordinal) is introduced as nonlinear PCA, in which an optimal scaling technique is used to quantify the categorical variables. The alternating least squares (ALS) is the main algorithm in the method. Multiple correspondence analysis (MCA), a special case of nonlinear PCA, is also introduced. All formulations in these methods are integrated in the same manner as matrix operations. Because any measurement levels data can be treated consistently as numerical data and ALS is a very powerful tool for estimations, the methods can be utilized in a variety of fields such as biometrics, econometrics, psychometrics, and sociology. In the applications part of the book, four applications are introduced: variable selection for mixed...
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn triggers an output signal on one of a plurality of driver output lines n. The particular output lines selected is dependent on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.
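A software analogue of the patented behaviour may make the idea concrete: samples are quantized to discrete levels (the role of the driver), and the highest level seen is latched, mirroring the one-way blown-microfuse record. The class and thresholds below are invented for illustration, not part of the invention.

```python
# Software sketch of a peak-memorizing system: quantize each sample to
# a level index, and latch the highest index ever observed.

class PeakHold:
    def __init__(self, levels):
        self.levels = sorted(levels)   # quantization thresholds ("driver" levels)
        self.peak = -1                 # index of highest level seen (-1 = none)

    def sample(self, value):
        # index of the highest threshold not exceeding the sample value
        idx = -1
        for i, t in enumerate(self.levels):
            if value >= t:
                idx = i
        if idx > self.peak:            # "blow the fuse" for a new maximum
            self.peak = idx
        return self.peak

ph = PeakHold([1.0, 2.0, 5.0, 10.0])
for v in [0.5, 3.2, 1.1, 7.8, 2.0]:
    ph.sample(v)
```

Like the microfuse memory, the latched value only ever moves upward; there is no reset path in this sketch.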
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
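The quantity the regularizer maximizes can be estimated from empirical counts. This is a hedged sketch of the mutual information computation only, using the identity I(Y; R) = H(Y) + H(R) - H(Y, R); the toy label/response pairs are invented, and the paper's actual entropy estimator and gradient optimization are not reproduced here.

```python
# Empirical mutual information between true labels Y and (discretized)
# classification responses R, via plug-in entropy estimates.
from collections import Counter
from math import log2

labels    = [0, 0, 0, 1, 1, 1, 0, 1]
responses = [0, 0, 1, 1, 1, 1, 0, 0]   # hypothetical classifier outputs

def entropy(xs):
    """Shannon entropy (bits) of the empirical distribution of xs."""
    n = len(xs)
    return -sum((c / n) * log2(c / n) for c in Counter(xs).values())

def mutual_information(ys, rs):
    # I(Y; R) = H(Y) + H(R) - H(Y, R)
    return entropy(ys) + entropy(rs) - entropy(list(zip(ys, rs)))

mi = mutual_information(labels, responses)
```

A perfect classifier would drive I(Y; R) up to H(Y); the regularizer pushes the learned responses in that direction while the loss term handles classification error.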
Scintillation counter, maximum gamma aspect
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
Maximum neutron flux at thermal nuclear reactors
Strugar, P.
1968-10-01
Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements that would result in higher neutron flux. A common disadvantage of all these solutions is that the best solution is chosen from among anticipated spatial distributions of fuel elements; the weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. Determining the maximum neutron flux thus means solving a variational problem that is beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of quite a complex reactor system are calculated in a relatively simple way. In addition to being innovative, this approach is interesting because of the optimization procedure itself.
Reduction of symplectic principal R-bundles
Lacirasella, Ignazio; Marrero, Juan Carlos; Padrón, Edith
2012-01-01
We describe a reduction process for symplectic principal R-bundles in the presence of a momentum map. These types of structures play an important role in the geometric formulation of non-autonomous Hamiltonian systems. We apply this procedure to the standard symplectic principal R-bundle associated with a fibration π:M→R. Moreover, we show a reduction process for non-autonomous Hamiltonian systems on symplectic principal R-bundles. We apply these reduction processes to several examples. (paper)
Maximum entropy and Bayesian methods
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...
Maximum-entropy clustering algorithm and its global convergence analysis
(no author listed)
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
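The "soft generalization of hard C-means" can be sketched as follows: memberships are maximum-entropy (Gibbs) weights exp(-βd²), and the hard assignment is recovered as β → ∞. This is an illustrative one-dimensional sketch under that standard formulation, not the paper's exact algorithm; the data, β, and initial centers are invented.

```python
# Maximum-entropy ("soft") clustering sketch in 1-D: E-step assigns
# softmax memberships over -beta * squared distance, M-step recomputes
# centers as membership-weighted means.
from math import exp

points = [0.0, 0.2, 0.4, 4.0, 4.2, 4.4]

def soft_kmeans(points, centers, beta=5.0, iterations=50):
    for _ in range(iterations):
        # E-step: maximum-entropy memberships (softmax over -beta * d^2)
        weights = []
        for x in points:
            w = [exp(-beta * (x - c) ** 2) for c in centers]
            s = sum(w)
            weights.append([wi / s for wi in w])
        # M-step: centers become membership-weighted means
        centers = [
            sum(weights[i][k] * points[i] for i in range(len(points)))
            / sum(weights[i][k] for i in range(len(points)))
            for k in range(len(centers))
        ]
    return centers

centers = soft_kmeans(points, centers=[1.0, 3.0])
```

The entropy of the membership weights is controlled by β, which is what makes the objective differentiable everywhere, unlike the hard C-means indicator assignment.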
The regulation of starch accumulation in Panicum maximum Jacq ...
... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...
Prelas, M.A. [University of Missouri, Columbia, MO (United States); Hora, H. [University of New South Wales, Sydney (Australia); Miley, G.H. [University of Illinois, Urbana-Champaign (United States)
2014-07-04
Evaluation of nuclear binding energies from theory close to available measurements of a very high number of superheavy elements (SHE) based on α-decay energies Qα, arrived at a closing shell with a significant neutron number 184. Within the option of several discussed magic numbers for protons of around 120, Bagge's numbers 126 and 184 fit well and are supported by the element generation measurements by low energy nuclear reactions (LENR) discovered in deuterium loaded host metals. These measurements were showing a Maruhn-Greiner maximum from fission of compound nuclei in an excited state with double magic numbers for mutual confirmation. - Highlights: • Use of Bagge procedure confirmed that Z=126 and N=184 are proper magic numbers. • Elements are generated by low energy nuclear reactions in deuterium loaded metal. • Postulated from measured distribution that a compound nucleus ³¹⁰X₁₂₆ was formed. • Formation of 164 deuterons in Bose-Einstein state clusters with 2 pm spacing.
Molina, L; Elosua, R; Marrugat, J; Pons, S
1999-10-15
The relation between maximum systolic blood pressure (BP) during exercise and left ventricular (LV) mass is controversial. Physical activity also induces LV mass increase. The objective was to assess the relation between BP response to exercise and LV mass in normotensive men, taking into account physical activity practice. A cross-sectional study was performed. Three hundred eighteen healthy normotensive men, aged between 20 and 60 years, participated in this study. The Minnesota questionnaire was used to assess physical activity practice. An echocardiogram and a maximum exercise test were performed. LV mass was calculated and indexed to body surface area. LV hypertrophy was defined as a ventricular mass index ≥134 g/m². BP was measured at the moment of maximum effort. Hypertensive response was considered when BP was ≥210 mm Hg. In the multiple linear regression model, maximum systolic BP was associated with LV mass index and the correlation coefficient was 0.27 (SE 0.07). Physical activity practice and age were also associated with LV mass. An association between hypertensive response to exercise and LV hypertrophy was observed (odds ratio 3.16). Thus, BP response to exercise is associated with LV mass, and men with systolic BP response ≥210 mm Hg present a 3-times higher risk of LV hypertrophy than those not reaching this limit. Physical activity practice is related to LV mass, but not to LV hypertrophy.
New pulser for principal PO power
Coudert, G.
1984-01-01
The pulser of the principal power of the PS is the unit that makes it possible to generate the reference function of the voltage of the principal magnet. This function depends on time and on the magnetic field of the magnet. It also generates various synchronization and reference pulses
Principals: Human Capital Managers at Every School
Kimball, Steven M.
2011-01-01
Being a principal is more than just being an instructional leader. Principals also must manage their schools' teaching talent in a strategic way so that it is linked to school instructional improvement strategies, to the competencies needed to enact the strategies, and to success in boosting student learning. Teacher acquisition and performance…
Constructing principals' professional identities through life stories ...
The Life History approach was used to collect data from six ... experience as the most significant leadership factors that influence principals' ... ranging from their entry into the teaching profession to their appointment as ..... teachers. I think I learnt from my principal to be strict but accommodating ..... Teachers College Press.
Integrating Technology: The Principals' Role and Effect
Machado, Lucas J.; Chung, Chia-Jung
2015-01-01
There are many factors that influence technology integration in the classroom such as teacher willingness, availability of hardware, and professional development of staff. Taking into account these elements, this paper describes research on technology integration with a focus on principals' attitudes. The role of the principal in classroom…
Building Leadership Capacity to Support Principal Succession
Escalante, Karen Elizabeth
2016-01-01
This study applies transformational leadership theory practices, specifically inspiring a shared vision, modeling the way and enabling others to act to examine the purposeful ways in which principals work to build the next generation of teacher leaders in response to the dearth of K-12 principals. The purpose of this study was to discover how one…
Deformation quantization of principal fibre bundles
Weiss, S.
2007-01-01
Deformation quantization is an algebraic but still geometrical way to define noncommutative spacetimes. In order to investigate corresponding gauge theories on such spaces, the geometrical formulation in terms of principal fibre bundles yields the appropriate framework. In this talk I will explain what should be understood by a deformation quantization of principal fibre bundles and how associated vector bundles arise in this context. (author)
Primary School Principals' Self-Monitoring Skills
Konan, Necdet
2015-01-01
The aim of the present study is to identify primary school principals' self-monitoring skills. The study adopted the general survey model and its population comprised primary school principals serving in the city of Diyarbakir, Turkey, while 292 of these constituted the sample. Self-Monitoring Scale was used as the data collection instrument. In…
Revising the Role of Principal Supervisor
Saltzman, Amy
2016-01-01
In Washington, D.C., and Tulsa, Okla., districts whose efforts are supported by the Wallace Foundation, principal supervisors concentrate on bolstering their principals' work to improve instruction, as opposed to focusing on the managerial or operational aspects of running a school. Supervisors oversee fewer schools, which enables them to provide…
An Examination of Principal Job Satisfaction
Pengilly, Michelle M.
2010-01-01
As education continues to succumb to deficits in budgets and increasingly high levels of student performance to meet the federal and state mandates, the quest to sustain and retain successful principals is imperative. The National Association of School Boards (1999) portrays effective principals as "linchpins" of school improvement and…
Do Principals Fire the Worst Teachers?
Jacob, Brian A.
2011-01-01
This article takes advantage of a unique policy change to examine how principals make decisions regarding teacher dismissal. In 2004, the Chicago Public Schools (CPS) and Chicago Teachers Union signed a new collective bargaining agreement that gave principals the flexibility to dismiss probationary teachers for any reason and without the…
Artful Dodges Principals Use to Beat Bureaucracy.
Ficklen, Ellen
1982-01-01
A study of Chicago (Illinois) principals revealed many ways principals practiced "creative insubordination"--avoiding following instructions but still getting things done. Among the dodges are deliberately missing deadlines, following orders literally, ignoring channels to procure teachers or materials, and using community members to…
Women principals' reflections of curriculum management challenges ...
This study reports the reflections of grade 6 rural primary principals in Mpumalanga province. A qualitative method of inquiry was used in this article, where data were collected using individual interviews with three principals and focus group discussions with the school management teams (SMTs) of three primary schools.
The Succession of a School Principal.
Fauske, Janice R.; Ogawa, Rodney T.
Applying theory from organizational and cultural perspectives to succession of principals, this study observes and records the language and culture of a small suburban elementary school. The study's procedures included analyses of shared organizational understandings as well as identification of the principal's influence on the school. Analyses of…
Should Principals Know More about Law?
Doctor, Tyrus L.
2013-01-01
Educational law is a critical piece of the education conundrum. Principals reference law books on a daily basis in order to address the wide range of complex problems in the school system. A principal's knowledge of law issues and legal decision-making are essential to provide effective feedback for a successful school.
How Not to Prepare School Principals
Davis, Stephen H.; Leon, Ronald J.
2011-01-01
Instead of focusing on how principals should be trained, a contrarian view is offered, grounded upon theoretical perspectives of experiential learning, and in particular, upon the theory of andragogy. A brief parable of the DoNoHarm School of Medicine is used as a descriptive analog for many principal preparation programs in America. The…
Social Media Strategies for School Principals
Cox, Dan; McLeod, Scott
2014-01-01
The purpose of this qualitative study was to describe, analyze, and interpret the experiences of school principals who use multiple social media tools with stakeholders as part of their comprehensive communications practices. Additionally, it examined why school principals have chosen to communicate with their stakeholders through social media.…
New Principals' Perspectives of Their Multifaceted Roles
Gentilucci, James L.; Denti, Lou; Guaglianone, Curtis L.
2013-01-01
This study utilizes Symbolic Interactionism to explore perspectives of neophyte principals. Findings explain how these perspectives are modified through complex interactions throughout the school year, and they also suggest preparation programs can help new principals most effectively by teaching "soft" skills such as active listening…
The Principal's Guide to Grant Success.
Bauer, David G.
This book provides principals of public and private elementary and middle schools with a step-by-step approach for developing a system that empowers faculty, staff, and the school community in attracting grant funds. Following the introduction, chapter 1 discusses the principal's role in supporting grantseeking. Chapter 2 describes how to…
System Based Code: Principal Concept
Yasuhide Asada; Masanori Tashimo; Masahiro Ueta
2002-01-01
This paper introduces the concept of the 'System Based Code', initially proposed by the authors with the intention of giving the nuclear industry a leap of progress in system reliability, performance improvement, and cost reduction. The concept of the System Based Code is to give a theoretical procedure to optimize the reliability of the system by administering every related engineering requirement throughout the life of the system, from design to decommissioning. (authors)
Global Social Issues in the Curriculum: Perspectives of School Principals
Simovska, Venka; Prøsch, Åsa Kremer
2016-01-01
In this article, we discuss principals' perspectives on the priority given to health and sustainability education in the curriculum, and on the practices supporting it, in schools in Denmark (for pupils aged 6-16). The study is situated within the discourses about critical health and sustainability education and treats the two…
Influence of Principals' Administrative Style on the Job Performance ...
The objective was to evaluate the administrative style of secondary school principals in relation to teachers' job performance in Cross River State, Nigeria. A total of four hundred (400) teachers and one thousand two hundred (1200) students were randomly sampled for the study, using an ex post facto research design with ...
Perspectives on Principal Instructional Leadership in Vietnam: A Preliminary Model
Hallinger, Philip; Walker, Allan; Nguyen, Dao Thi Hong; Truong, Thang; Nguyen, Thi Thinh
2017-01-01
Purpose: Worldwide interest in principal instructional leadership has led to global dissemination of related research findings despite their concentration in a limited set of western cultural contexts. An urgent challenge in educational leadership and management lies in expanding the range of national settings for investigations of instructional…
Hidden symmetries of the Principal Chiral Model unveiled
Devchand, C.; Schiff, J.
1996-12-01
By relating the two-dimensional U(N) Principal Chiral Model to a simple linear system we obtain a free-field parametrization of solutions. Obvious symmetry transformations on the free-field data give symmetries of the model. In this way all known 'hidden symmetries' and Baecklund transformations, as well as a host of new symmetries, arise. (author). 21 refs
Entrepreneurialism for Canadian Principals: Yesterday, Today, and Tomorrow
Scott, Shelleyann; Webber, Charles F.
2013-01-01
This article explores the various elements of Canadian educational entrepreneurialism as manifested yesterday, today, and tomorrow and in relation to the social and political influences of the time. This discussion is informed by the findings of the International Study of the Preparation of Principals (ISPP) and represents an expansion of the…
Primary Principals' Leadership Styles, School Organizational Health and Workplace Bullying
Cemaloglu, Necati
2011-01-01
Purpose: The purpose of this paper is to determine the relationships between leadership styles of primary school principals and organizational health and bullying. Design/methodology/approach: Two hypotheses were formulated in relation to the research. Three instruments were used--a multi-level questionnaire for measuring leadership, an…
Sustainable School Improvement: Suburban Elementary Principals' Capacity Building
Clark, Alison J.
2017-01-01
The increase of intense pressures to ensure long-term education reforms have created a challenge for school leaders as they direct and nurture the abilities of others. The purpose of this research was to understand and describe suburban elementary principals' practices and perceptions as change leaders related to capacity building through the…
31 CFR 515.404 - Transactions between principal and agent.
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Transactions between principal and agent. 515.404 Section 515.404 Money and Finance: Treasury Regulations Relating to Money and Finance... transaction were in no way affiliated or associated with each other. ...
31 CFR 500.404 - Transactions between principal and agent.
2010-07-01
... 31 Money and Finance: Treasury 3 2010-07-01 2010-07-01 false Transactions between principal and agent. 500.404 Section 500.404 Money and Finance: Treasury Regulations Relating to Money and Finance... transaction were in no way affiliated or associated with each other. ...
Maximum Parsimony on Phylogenetic Networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied to any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
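The Fitch algorithm that the authors extend to networks can be sketched for the tree case in a few lines. This is a minimal illustration of the classical small-parsimony recursion; the tree encoding and the leaf character states below are illustrative assumptions, not data from the paper.

```python
def fitch(tree, states):
    """Return (state_set, parsimony_score) for the subtree rooted at `tree`.

    `tree` is either a leaf name (str) or a (left, right) tuple.
    `states` maps leaf names to their observed character.
    """
    if isinstance(tree, str):                  # leaf: singleton state set, zero cost
        return {states[tree]}, 0
    l_set, l_cost = fitch(tree[0], states)
    r_set, r_cost = fitch(tree[1], states)
    common = l_set & r_set
    if common:                                 # intersection non-empty: no extra step
        return common, l_cost + r_cost
    return l_set | r_set, l_cost + r_cost + 1  # union: one substitution counted

# Example: character states at the four leaves of the tree ((A,B),(C,D))
root_set, score = fitch((("A", "B"), ("C", "D")),
                        {"A": "T", "B": "C", "C": "A", "D": "C"})
print(score)  # minimum number of substitutions: 2
```

Extending this to networks, as the paper describes, additionally requires resolving conflicting state assignments at reticulate vertices.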
Sparse Principal Component Analysis in Medical Shape Modeling
Sjöstrand, Karl; Stegmann, Mikkel Bille; Larsen, Rasmus
2006-01-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims… analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of sufficiently small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA…
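The "simple thresholding" baseline the abstract compares SPCA against can be sketched directly: compute ordinary PCA loadings, then zero out sufficiently small entries. The synthetic data and the cutoff value are assumptions for illustration, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))            # 50 samples, 5 variables (toy data)
X -= X.mean(axis=0)                     # centre each column

# PCA loadings = eigenvectors of the sample covariance matrix
cov = X.T @ X / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                    # loading vector of the first PC

# Simple thresholding: zero out small loadings, then re-normalise
threshold = 0.3                         # illustrative cutoff
sparse_pc1 = np.where(np.abs(pc1) >= threshold, pc1, 0.0)
sparse_pc1 /= np.linalg.norm(sparse_pc1)
```

Proper SPCA algorithms instead build sparsity into the optimization itself, which is the focus of the paper.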
Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo
Martinez, Josue G.
2010-06-01
The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.
Taylor Backor, Karen; Gordon, Stephen P.
2015-01-01
Although research has established links between the principal's instructional leadership and student achievement, there is considerable concern in the literature concerning the capacity of principal preparation programs to prepare instructional leaders. This study interviewed educational leadership faculty as well as expert principals and teacher…
Bon, Susan C.
2009-01-01
In this experimental study, a national random sample of high school principals (stratified by gender) were asked to evaluate hypothetical applicants whose resumes varied by religion (Jewish, Catholic, nondenominational) and gender (male, female) for employment as assistant principals. Results reveal that male principals rate all applicants higher…
Principal Self-Efficacy and Work Engagement: Assessing a Norwegian Principal Self-Efficacy Scale
Federici, Roger A.; Skaalvik, Einar M.
2011-01-01
One purpose of the present study was to develop and test the factor structure of a multidimensional and hierarchical Norwegian Principal Self-Efficacy Scale (NPSES). Another purpose of the study was to investigate the relationship between principal self-efficacy and work engagement. Principal self-efficacy was measured by the 22-item NPSES. Work…
Grissom, Jason A.; Loeb, Susanna; Mitani, Hajime
2015-01-01
Purpose: Time demands faced by school principals make principals' work increasingly difficult. Research outside education suggests that effective time management skills may help principals meet job demands, reduce job stress, and improve their performance. The purpose of this paper is to investigate these hypotheses. Design/methodology/approach:…
Supervision Duty of School Principals
Kürşat YILMAZ
2009-04-01
Full Text Available Supervision by school administrators is becoming more and more important. The change in the roles of school administrators has had a great effect on that increase. At present, school administrators are regarded not merely as technical directors but as instructional leaders, which has increased the importance of their expected supervision duties. In this respect, the aim of this study is to make a conceptual analysis of school administrators' supervision duties. To this end, a literature review of supervision and contemporary supervision approaches was conducted, and the official documents concerning supervision were examined. As a result, it can be said that school administrators' supervision duties have become very important, and these duties must certainly be carried out by school administrators.
Grigioni, Mauro; Daniele, Carla; D'Avenio, Giuseppe; Barbaro, Vincenzo
2002-05-01
Turbulent flow generated by prosthetic devices at the bloodstream level may cause mechanical stress on blood particles. Measurement of the Reynolds stress tensor and/or some of its components is a mandatory step to evaluate the mechanical load on blood components exerted by fluid stresses, as well as possible consequent blood damage (hemolysis or platelet activation). Because of the three-dimensional nature of turbulence, in general, a three-component anemometer should be used to measure all components of the Reynolds stress tensor, but this is difficult, especially in vivo. The present study aimed to derive the maximum Reynolds shear stress (RSS) in three commercially available prosthetic heart valves (PHVs) of wide diffusion, starting with monodimensional data provided in vivo by echo Doppler. Accurate measurement of PHV flow field was made using laser Doppler anemometry; this provided the principal turbulence quantities (mean velocity, root-mean-square value of velocity fluctuations, average value of cross-product of velocity fluctuations in orthogonal directions) needed to quantify the maximum turbulence-related shear stress. The recorded data enabled determination of the relationship, the Reynolds stresses ratio (RSR) between maximum RSS and Reynolds normal stress in the main flow direction. The RSR was found to be dependent upon the local structure of the flow field. The reported RSR profiles, which permit a simple calculation of maximum RSS, may prove valuable during the post-implantation phase, when an assessment of valve function is made echocardiographically. Hence, the risk of damage to blood constituents associated with bileaflet valve implantation may be accurately quantified in vivo.
Two-dimensional maximum entropy image restoration
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures
Maximum principles for boundary-degenerate second-order linear elliptic differential operators
Feehan, Paul M. N.
2012-01-01
We prove weak and strong maximum principles, including a Hopf lemma, for smooth subsolutions to equations defined by linear, second-order, partial differential operators whose principal symbols vanish along a portion of the domain boundary. The boundary regularity property of the smooth subsolutions along this boundary vanishing locus ensures that these maximum principles hold irrespective of the sign of the Fichera function. Boundary conditions need only be prescribed on the complement in th...
Maximum Entropy: Clearing up Mysteries
Marian Grendár
2001-04-01
Full Text Available Abstract: There are several mystifications and a couple of mysteries pertinent to MaxEnt. The mystifications, pitfalls and traps are set up mainly by an unfortunate formulation of Jaynes' die problem, the cause célèbre of MaxEnt. After discussing the mystifications a new formulation of the problem is proposed. Then we turn to the mysteries. An answer to the recurring question 'Just what are we accomplishing when we maximize entropy?' [8], based on the MaxProb rationale of MaxEnt [6], is recalled. A brief view on the other mystery: 'What is the relation between MaxEnt and the Bayesian method?' [9], in light of the MaxProb rationale of MaxEnt, suggests that there is not and cannot be a conflict between MaxEnt and Bayes' Theorem.
Comparative Analysis of Principals' Management Strategies in ...
It was recommended among others that principals of secondary schools should adopt all the management strategies in this study as this will improve school administration and consequently students‟ academic performance. Keywords: Management Strategies; Secondary Schools; Administrative Effectiveness ...
Spatial control of groundwater contamination, using principal ...
probe into the spatial controlling processes of groundwater contamination, using principal component analysis (PCA). ... topography, soil type, depth of water levels, and water usage. Thus, the ... of effective sites for infiltration of recharge water.
The Relationship between Principals' Managerial Approaches and ...
Nekky Umera
Egerton University, P. O. Box 16568, NAKURU KENYA bosirej@yahoo.com ... teacher and parental input while it was negatively correlated with the level of .... principal's attitude, gender qualifications, and leadership experience (Green,. 1999 ...
First-Year Principal Encounters Homophobia
Retelle, Ellen
2011-01-01
A 1st-year principal encounters homonegativity and an ethical dilemma when she attempts to terminate a teacher because of the teacher's inadequate and ineffective teaching. The teacher responds by threatening to "out" Ms. L. to the parents.
Integrating Data Transformation in Principal Components Analysis
Maadooliat, Mehdi; Huang, Jianhua Z.; Hu, Jianhua
2015-01-01
Principal component analysis (PCA) is a popular dimension reduction method to reduce the complexity and obtain the informative aspects of high-dimensional datasets. When the data distribution is skewed, data transformation is commonly used prior
Spatial control of groundwater contamination, using principal
Spatial control of groundwater contamination, using principal component analysis ... anthropogenic (agricultural activities and domestic wastewaters), and marine ... The PC scores reflect the change of groundwater quality of geogenic origin ...
Principal Hawaiian Islands Geoid Heights (GEOID96)
National Oceanic and Atmospheric Administration, Department of Commerce — This 2' geoid height grid for the Principal Hawaiian Islands is distributed as a GEOID96 model. The computation used 61,000 terrestrial and marine gravity data held...
The principal radionuclides in high level radioactive waste management
Mulyanto
1998-01-01
The selection of the principal radionuclides in high-level waste (HLW) management was developed in order to improve the disposal scenario of HLW. In this study, unified criteria for the selection of the principal radionuclides were proposed: (1) the value of the hazard index estimated by the annual limit of intake (ALI) for long-term tendency, (2) the relative dose factor related to the adsorbed migration rate transferred by groundwater, and (3) heat generation in the repository. From this study it can be concluded that the principal radionuclides in HLW management are the minor actinides (MA: Np, Am, Cm, etc.), Tc, I, Cs and Sr, based on the unified basic criteria introduced in this study. The remaining short-lived fission products (SLFPs), after the selected nuclides are removed, should be immobilized and solidified in a glass matrix. The potential risk due to the remaining SLFPs can become lower than that of uranium ore after about 300 years. (author)
The worst case complexity of maximum parsimony.
Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal
2014-11-01
One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
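The Levinson step mentioned above, which solves the Toeplitz system for the prediction-error filter from autocorrelation values, can be sketched as the classical Levinson-Durbin recursion. The autocorrelation sequence in the example is an illustrative assumption, not seismic data.

```python
def levinson(r, order):
    """Prediction-error filter a (with a[0] = 1) and final prediction error,
    from autocorrelations r[0..order], via the Levinson-Durbin recursion."""
    a = [1.0]
    err = r[0]
    for m in range(1, order + 1):
        # Reflection coefficient; |k| < 1 for a valid autocorrelation, which
        # is the stability property the abstract refers to.
        k = -sum(a[j] * r[m - j] for j in range(m)) / err
        a = [a[0]] + [a[j] + k * a[m - j] for j in range(1, m)] + [k]
        err *= 1.0 - k * k
    return a, err

# Example: AR(1)-like autocorrelation sequence r[k] = 0.5**k
coeffs, error = levinson([1.0, 0.5, 0.25], 2)
```

For this sequence the order-2 filter reduces to the order-1 filter (second coefficient zero), as expected for an AR(1) autocorrelation.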
Squires, R.R.; Young, R.L.
1984-01-01
Flood hazards for a 9-mile reach of Fortymile Wash and its principal southwestern tributaries - Busted Butte, Drill Hole, and Yucca Washes - were evaluated to aid in determining possible sites for the storage of high-level radioactive wastes on the Nevada Test Site. Data from 12 peak-flow gaging stations adjacent to the Test Site were used to develop regression relations that would permit an estimation of the magnitude of the 100- and 500-year flood peaks (Q100 and Q500), in cubic feet per second. The resulting equations are: Q100 = 482A^0.565 and Q500 = 2200A^0.571, where A is the tributary drainage area, in square miles. The estimate of the regional maximum flood was based on data from extreme floods elsewhere in Nevada and in surrounding states. Among seven cross sections on Fortymile Wash, the estimated maximum depths of the 100-year, 500-year, and regional maximum floods are 8, 11, and 29 feet, respectively. At these depths, flood water would remain within the deeply incised channel of the wash. Mean flow velocities would be as great as 9, 14, and 28 feet per second for the three respective flood magnitudes. The study shows that Busted Butte and Drill Hole Washes (9 and 11 cross sections, respectively) would have water depths of up to at least 4 feet and mean flow velocities of up to at least 8 feet per second during a 100-year flood. A 500-year flood would exceed stream-channel capacities at several places, with depths to 10 feet and mean flow velocities to 11 feet per second. The regional maximum flood would inundate sizeable areas in central parts of the two watersheds. At Yucca Wash (5 cross sections), the 100-year, 500-year, and regional maximum floods would remain within the stream channel. Maximum flood depths would be about 5, 9, and 23 feet and mean velocities about 9, 12, and 22 feet per second, respectively, for the three floods.
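The regression relations above are simple power laws in drainage area and can be transcribed directly (units as in the report: cubic feet per second for discharge, square miles for area; the example drainage area is hypothetical):

```python
# Flood-peak regression relations from the report:
#   Q100 = 482 * A**0.565   (100-year flood peak, ft^3/s)
#   Q500 = 2200 * A**0.571  (500-year flood peak, ft^3/s)
def q100(area_sq_mi):
    return 482.0 * area_sq_mi ** 0.565

def q500(area_sq_mi):
    return 2200.0 * area_sq_mi ** 0.571

# Hypothetical tributary with a 10-square-mile drainage area
peak_100 = q100(10.0)
peak_500 = q500(10.0)
```

By construction, an area of 1 square mile returns the bare coefficients (482 and 2200 ft^3/s).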
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are determined by finding the maximum of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (voltage at maximum power, current at maximum power, and maximum power) is plotted as a function of the time of day.
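The maximize-by-differentiation step can be sketched numerically. A single-diode panel model is assumed here (the I_L, I_0, and V_T values are illustrative, not from the article), and the root of dP/dV is located by bisection:

```python
import math

# Assumed single-diode model parameters (illustrative only):
I_L = 5.0            # photocurrent, A
I_0 = 1e-9           # diode saturation current, A
V_T = 0.026 * 36     # thermal voltage for a 36-cell panel, V

def current(v):
    return I_L - I_0 * (math.exp(v / V_T) - 1.0)

def dp_dv(v, h=1e-6):
    """Numerical derivative of P(V) = V * I(V)."""
    p = lambda x: x * current(x)
    return (p(v + h) - p(v - h)) / (2.0 * h)

# Bisection on dP/dV = 0: derivative is positive at V = 0, negative past V_oc
lo, hi = 0.0, 25.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dp_dv(mid) > 0.0:
        lo = mid
    else:
        hi = mid
v_mp = 0.5 * (lo + hi)          # voltage at maximum power
p_max = v_mp * current(v_mp)    # maximum power
```

In the project this calculation would be repeated for each time of day to trace the quantities that are plotted.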
Maximum a posteriori covariance estimation using a power inverse Wishart prior
Nielsen, Søren Feodor; Sporring, Jon
2012-01-01
The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximum...
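The paper's power inverse Wishart prior is beyond a short sketch, but the idea of prior-regularized covariance estimation when the dimension exceeds the sample size can be illustrated with the MAP estimate under a standard inverse-Wishart prior IW(Psi, nu), whose posterior mode is (Psi + S)/(nu + n + p + 1) with S the centred scatter matrix. The data, prior scale, and degrees of freedom below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p = 10, 12                    # fewer samples than dimensions
X = rng.normal(size=(n, p))
Xc = X - X.mean(axis=0)

S = Xc.T @ Xc                    # centred scatter matrix (rank < p here)
Psi = np.eye(p)                  # prior scale matrix (assumption)
nu = p + 2                       # prior degrees of freedom (assumption)

# Posterior mode under an inverse-Wishart prior IW(Psi, nu):
sigma_map = (Psi + S) / (nu + n + p + 1)
```

Unlike the singular sample covariance S/(n-1) in this regime, the MAP estimate is guaranteed positive definite.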
Parameters determining maximum wind velocity in a tropical cyclone
Choudhury, A.M.
1984-09-01
The spiral structure of a tropical cyclone was earlier explained by a tangential velocity distribution which varies inversely as the distance from the cyclone centre outside the circle of maximum wind speed. The case has been extended in the present paper by adding a radial velocity. It has been found that a suitable combination of radial and tangential velocities can account for the spiral structure of a cyclone. This enables parametrization of the cyclone. Finally a formula has been derived relating maximum velocity in a tropical cyclone with angular momentum, radius of maximum wind speed and the spiral angle. The shapes of the spirals have been computed for various spiral angles. (author)
Assessment of extreme value distributions for maximum temperature in the Mediterranean area
Beck, Alexander; Hertig, Elke; Jacobeit, Jucundus
2015-04-01
Extreme maximum temperatures highly affect the natural as well as the societal environment. Heat stress has great effects on flora, fauna and humans and culminates in heat-related morbidity and mortality. Agriculture and different industries are severely affected by extreme air temperatures. Even more under climate change conditions, it is necessary to detect potential hazards which arise from changes in the distributional parameters of extreme values; this is especially relevant for the Mediterranean region, which is characterized as a climate change hot spot. Therefore statistical approaches are developed to estimate these parameters with a focus on non-stationarities emerging in the relationship between regional climate variables and their large-scale predictors, such as sea level pressure, geopotential heights, atmospheric temperatures and relative humidity. Gridded maximum temperature data from the daily E-OBS dataset (Haylock et al., 2008) with a spatial resolution of 0.25° x 0.25° from January 1950 until December 2012 are the predictands for the present analyses. An s-mode principal component analysis (PCA) has been performed in order to reduce the data dimension and to retain different regions of similar maximum temperature variability. The grid box with the highest PC loading represents the corresponding principal component. A central part of the analyses is the model development for temperature extremes using extreme value statistics. A combined model is derived, consisting of a Generalized Pareto Distribution (GPD) model and a quantile regression (QR) model which determines the GPD location parameters. The QR model as well as the scale parameters of the GPD model are conditioned by various large-scale predictor variables. In order to account for potential non-stationarities in the predictor-temperature relationships, a special calibration and validation scheme is applied. Haylock, M. R., N. Hofstra, A. M. G. Klein Tank, E. J. Klok, P
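The GPD step described above follows the peaks-over-threshold pattern: excesses over a high threshold are fitted with a Generalized Pareto Distribution. A minimal sketch using method-of-moments estimators is shown below; the synthetic "daily maximum temperature" series and the threshold are assumptions for illustration only (the paper conditions the parameters on large-scale predictors instead).

```python
import random

random.seed(42)
# Synthetic stand-in for a daily maximum temperature series (degrees C)
temps = [28.0 + random.gauss(0.0, 3.0) for _ in range(5000)]

threshold = 34.0                                 # high threshold (illustrative)
excesses = [t - threshold for t in temps if t > threshold]

n = len(excesses)
mean = sum(excesses) / n
var = sum((x - mean) ** 2 for x in excesses) / (n - 1)

# Method-of-moments GPD estimators, from
#   mean = sigma / (1 - xi),  var = sigma^2 / ((1 - xi)^2 * (1 - 2*xi))
xi = 0.5 * (1.0 - mean ** 2 / var)               # shape parameter
sigma = 0.5 * mean * (1.0 + mean ** 2 / var)     # scale parameter
```

A light-tailed series like this one typically yields a negative shape estimate; heavy-tailed data would give xi > 0.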
Promoting principals' managerial involvement in instructional improvement.
Gillat, A
1994-01-01
Studies of school leadership suggest that visiting classrooms, emphasizing achievement and training, and supporting teachers are important indicators of the effectiveness of school principals. The utility of a behavior-analytic program to support the enhancement of these behaviors in 2 school principals and the impact of their involvement upon teachers' and students' performances in three classes were examined in two experiments, one at an elementary school and another at a secondary school. Treatment conditions consisted of helping the principal or teacher to schedule his or her time and to use goal setting, feedback, and praise. A withdrawal design (Experiment 1) and a multiple baseline across classrooms (Experiment 2) showed that the principal's and teacher's rates of praise, feedback, and goal setting increased during the intervention, and were associated with improvements in the academic performance of the students. In the future, school psychologists might analyze the impact of involving themselves in supporting the principal's involvement in improving students' and teachers' performances or in playing a similar leadership role themselves.
Dimensionality reduction of collective motion by principal manifolds
Gajamannage, Kelum; Butail, Sachit; Porfiri, Maurizio; Bollt, Erik M.
2015-01-01
While the existence of low-dimensional embedding manifolds has been shown in patterns of collective motion, the current battery of nonlinear dimensionality reduction methods is not amenable to the analysis of such manifolds. This is mainly due to the necessary spectral decomposition step, which limits control over the mapping from the original high-dimensional space to the embedding space. Here, we propose an alternative approach that demands a two-dimensional embedding which topologically summarizes the high-dimensional data. In this sense, our approach is closely related to the construction of one-dimensional principal curves that minimize orthogonal error to data points subject to smoothness constraints. Specifically, we construct a two-dimensional principal manifold directly in the high-dimensional space using cubic smoothing splines, and define the embedding coordinates in terms of geodesic distances. Thus, the mapping from the high-dimensional data to the manifold is defined in terms of local coordinates. Through representative examples, we show that compared to existing nonlinear dimensionality reduction methods, the principal manifold retains the original structure even in noisy and sparse datasets. The principal manifold finding algorithm is applied to configurations obtained from a dynamical system of multiple agents simulating a complex maneuver called predator mobbing, and the resulting two-dimensional embedding is compared with that of a well-established nonlinear dimensionality reduction method.
Evaluating the Effectiveness of Traditional and Alternative Principal Preparation Programs
Pannell, Summer; Peltier-Glaze, Bernnell M.; Haynes, Ingrid; Davis, Delilah; Skelton, Carrie
2015-01-01
This study sought to determine the effectiveness on increasing student achievement of principals trained in a traditional principal preparation program and those trained in an alternate route principal preparation program within the same Mississippi university. Sixty-six Mississippi principals and assistant principals participated in the study. Of…
Riccati transformations and principal solutions of discrete linear systems
Ahlbrandt, C.D.; Hooker, J.W.
1984-01-01
Consider a second-order linear matrix difference equation. Definitions of principal and anti-principal (or recessive and dominant) solutions of the equation are given, and the existence of principal and anti-principal solutions and the essential uniqueness of principal solutions are proven.
Principals, Trust, and Cultivating Vibrant Schools
Megan Tschannen-Moran
2015-03-01
Full Text Available Although principals are ultimately held accountable for student learning in their buildings, the most consistent research results have suggested that their impact on student achievement is largely indirect. Leithwood, Patten, and Jantzi proposed four paths through which this indirect influence would flow, and the purpose of this special issue is to examine these mediating variables in greater depth. Among mediating variables, we assert that trust is key. In this paper, we explore the evidence that points to the role that faculty trust in the principal plays in student learning, and how principals can cultivate trust by attending to the five facets of trust, as well as the correlates of trust that mediate student learning, including academic press, collective teacher efficacy, and teacher professionalism. We argue that trust plays a role in each of the four paths identified by Leithwood, Patten, and Jantzi. Finally, we explore possible new directions for future research.
Principal component regression for crop yield estimation
Suryanarayana, T M V
2016-01-01
This book highlights the estimation of crop yield in Central Gujarat, especially with regard to the development of Multiple Regression Models and Principal Component Regression (PCR) models using climatological parameters as independent variables and crop yield as the dependent variable. It subsequently compares the multiple linear regression (MLR) and PCR results, and discusses the significance of PCR for crop yield estimation. In this context, the book also covers Principal Component Analysis (PCA), a statistical procedure used to reduce a number of correlated variables into a smaller number of uncorrelated variables called principal components (PC). This book will be helpful to students and researchers starting their work on climate and agriculture, with a focus on estimation models. The chapters guide readers smoothly from understanding climate and weather and the impact of climate change, through downscaling techniques, and finally to the development of ...
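The PCR recipe the book describes can be sketched in a few steps: standardize the climatological predictors, project them onto the leading principal components, then regress yield on the resulting uncorrelated scores. The toy data below are an assumption, not the book's dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))     # e.g. rainfall, Tmax, Tmin, humidity (toy data)
y = X @ np.array([1.5, -0.8, 0.3, 0.0]) + rng.normal(scale=0.1, size=30)

Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize predictors
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)

k = 2                                        # retain two principal components
scores = Xs @ Vt[:k].T                       # PC scores are mutually uncorrelated

# Ordinary least squares of (centred) yield on the PC scores
coef, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
y_hat = scores @ coef + y.mean()
```

Because the scores are uncorrelated, the PCR coefficients do not suffer from the multicollinearity that can destabilize ordinary MLR on correlated climate variables.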
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance and temperature is examined. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important steps in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results of the samples, the total length of CC used in the design of an SFCL can be determined.
Correlation between maximum dry density and cohesion of ...
HOD
investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
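The maximization over genetic parameter values that this abstract describes can be illustrated on a toy phase-known example, where the LOD score depends only on the recombination fraction theta. This is a hedged sketch with hypothetical counts; the actual study maximizes pedigree likelihoods over penetrance and phenocopy parameters as well.

```python
import math

def lod(theta, r, n):
    """LOD score for recombination fraction theta, given r recombinants
    out of n phase-known meioses, relative to the null value theta = 0.5."""
    theta = max(theta, 1e-9)            # avoid log10(0)
    ll_theta = r * math.log10(theta) + (n - r) * math.log10(1.0 - theta)
    ll_null = n * math.log10(0.5)
    return ll_theta - ll_null

def max_lod(r, n):
    """Maximize the LOD score over a grid of theta values in (0, 0.5]."""
    grid = [i / 1000.0 for i in range(1, 501)]
    return max((lod(t, r, n), t) for t in grid)

best, theta_hat = max_lod(r=2, n=20)    # MLE is r/n = 0.1
```

As the abstract notes, reporting the maximum over a parameter grid inflates the statistic, so a higher critical value is needed than for a single fixed theta.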
Senior Communications Adviser (m/f) | CRDI - Centre de ...
Summary of duties: The Senior Communications Adviser is responsible for managing public relations for telecentre.org. He or she is also responsible for developing and managing the principal communication and knowledge-sharing strategies, as well as the major strategies ...
High School Principals Who Stay: Stability in a Time of Change
Luebke, Patricia A.
2013-01-01
This qualitative study explored the institutional factors, personal characteristics, and work-related relationships of high school principals that led to their longer than usual tenure in their positions. Data were gathered from interviews with ten high school principals who had served in their positions for a range of 8 to 23 years, much longer…
Osborne-Lampkin, La'Tara; Folsom, Jessica Sidler; Herrington, Carolyn D.
2015-01-01
This systematic review of the relationships between principal characteristics and student achievement was created for educators, administrators, policy-makers, and other individuals interested in a comprehensive catalogue of research on relations between principal characteristics and student achievement. It synthesizes what is known about…
School Principals' Evaluations of Their Instructional Leadership Behaviours: Realities vs. Ideals
Kalman, Mahmut; Arslan, Mustafa Cüneyt
2016-01-01
The purpose of the current study was to examine primary and middle school principals' evaluations of their own instructional leadership behaviours, and thereby pay closer attention to the ideal instructional leadership behaviours suggested in the related literature and the realities of principals' instructional leadership behaviours. Although…
A Study of How Secondary School Principals in Minnesota Perceive the Evaluation of Their Performance
Muenich, John Andrew
2014-01-01
The purpose for this study was to ascertain the perceptions principals of public secondary schools in Minnesota have in relation to the evaluation of their job performance. Responding principals reported that past evaluations have been fair and consistent but have questioned their value with regard to professional growth. When asked if student…
Arar, Khalid
2017-01-01
This paper examines emotional expression experienced by female principals in the Arab school system in Israel over their managerial careers--role-related emotions that they choose to express or repress before others. I employed narrative methodology, interviewing nine female principals from the Arab school system to investigate expression of…
School Principals' Job Satisfaction: The Effects of Work Intensification
Wang, Fei; Pollock, Katina; Hauseman, Cameron
2018-01-01
This study examines principals' job satisfaction in relation to their work intensification. Frederick Herzberg's two-factor theory was used to shed light on how motivating and maintenance factors affect principals' job satisfaction. Logistic multiple regressions were used in the analysis of survey data that were collected from 2,701 elementary and…
Northfield, Shawn
2014-01-01
As part of principal succession, new school leaders must take action to solidify their position as the school's legitimate lead authority while at the same time, develop and utilize interactive mechanisms designed to nurture staff relations and engender teacher support and confidence in their leadership. For beginning principals, this process…
High-Need Schools in Australia: The Leadership of Two Principals
Gurr, David; Drysdale, Lawrie; Clarke, Simon; Wildy, Helen
2014-01-01
In this article, we report on our initial work with the International School Leadership Development Network. In doing so, we present two cases of principals leading high-need schools, and conclude with some key observations in relation to what is distinctive about leading these schools. The first case features a principal leading a suburban school…
DeMatthews, David; Izquierdo, Elena
2018-01-01
Recent calls for social justice to be a key aspect of principal preparation have been made, but content related to the efficacy of dual language education has been a neglected area of educational leadership research, coursework, and principal preparation standards. We draw on scholarship focused on dual language education, social justice…
Cannata, Marisa; Engel, Mimi
2012-01-01
The academic success of any school depends on its teachers. However, relatively little research exists on the qualities principals value in teacher hiring, and we know almost nothing about charter school principals' preferences. This article addresses this gap in the literature using survey results for a matched sample of charter and traditional…
Bird, James J.; Wang, Chuang; Watson, Jim; Murray, Louise
2012-01-01
The focus of this study was to explore the relationships between the authentic leadership of building principals and the trust, engagement, and intention to return of their teaching staffs. School principals (n = 28) and their teaching staffs (n = 633) were surveyed. Teacher trust and engagement were found to be significantly related to principal…
Relationships among Principal Authentic Leadership and Teacher Trust and Engagement Levels
Bird, James J.; Wang, Chuang; Watson, Jim R.; Murray, Louise
2009-01-01
This study examined the relationships among the authentic leadership style of school principals and the trust and engagement levels of their teachers in a county school district in a Southeastern state. The authenticity of the school principal was found to be significantly positively related to teacher trust and teacher engagement levels. The…
Principals' Response to Change in Schools and Its Effect on School Climate
Busch, Steve; Johnson, Shirley; Robles-Piña, Rebecca; Slate, John R.
2009-01-01
In this study, the researchers examined principal behaviors related with change in school climate. That is, the manner in which principals managed change within their schools and the impact of these change behaviors on the school climate was investigated. Through use of the Leadership Profile (Johnson, 2003) and the Organizational Health Inventory…
Kelsen, Virginia E.
2011-01-01
School principals face an increasing number of professional demands, especially the challenge of improving student achievement. As such, the purpose of this dissertation is to study the effect of leadership coaching on a school principal's responsibilities related to carrying out these demands. Specifically, the researcher examined a subset of…
Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation
Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.
2015-11-01
We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
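The signal-to-noise eigenvector compression that this abstract finds most efficient can be sketched in a few lines. This is a hedged illustration: toy random covariances stand in for the WMAP signal and noise matrices, and the rank and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 50

# Toy covariances: a low-rank "signal" plus white "noise", standing in for
# the CMB signal and noise covariance matrices of the unmasked pixels.
A = rng.standard_normal((npix, 5))
S = A @ A.T                        # rank-5 signal covariance
N = np.eye(npix)                   # white noise covariance

# Signal-to-noise eigenbasis: eigenvectors of N^(-1/2) S N^(-1/2).
L = np.linalg.cholesky(N)
Ninv_sqrt = np.linalg.inv(L)
w, V = np.linalg.eigh(Ninv_sqrt @ S @ Ninv_sqrt.T)

keep = w > 1e-6                    # discard modes with negligible S/N
P = V[:, keep].T @ Ninv_sqrt       # compression operator

d = rng.multivariate_normal(np.zeros(npix), S + N)
x = P @ d                          # compressed data vector
```

Here 50 "pixels" compress to 5 modes; the paper performs the analogous reduction from 6836 WMAP pixels to 3102 modes, while also implicitly regularizing nearly degenerate directions.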
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
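The core PCA step of such a compression scheme can be sketched as follows. A synthetic beat matrix replaces the real pre-processing, beat extraction and entropy-coding stages described in the abstract, and the retained component count k is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "beat matrix": 100 extracted beats of 200 samples each; a real
# pipeline would first perform the paper's pre-processing and beat
# extraction, then entropy-code the retained coefficients.
t = np.linspace(0.0, 1.0, 200)
beats = np.sin(2 * np.pi * 3 * t) + 0.05 * rng.standard_normal((100, 200))

mean = beats.mean(axis=0)
X = beats - mean
U, s, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD

k = 5                                 # principal components retained
coeffs = U[:, :k] * s[:k]             # compressed representation per beat
recon = coeffs @ Vt[:k] + mean

# PRDN-style error: percentage root mean squared difference, normalized.
prdn = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - beats.mean())
```

The bit-rate-control and error-control criteria in the paper would adjust k (and the quantization of `coeffs` and `Vt`) until a target compression ratio or a PRDN/MAE bound is met.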
Leaf, Ann; Odhiambo, George
2017-01-01
Purpose: The purpose of this paper is to report on a study examining the perceptions of secondary principals, deputies and teachers, of deputy principal (DP) instructional leadership (IL), as well as deputies' professional learning (PL) needs. Framed within an interpretivist approach, the specific objectives of this study were: to explore the…
Haller, Alicia; Hunt, Erika
2016-01-01
Research has demonstrated that principals have a powerful impact on school improvement and student learning. Principals play a vital role in recruiting, developing, and retaining effective teachers; creating a school-wide culture of learning; and implementing a continuous improvement plan aimed at increasing student achievement. Leithwood, Louis,…
Principal Self-Efficacy, Teacher Perceptions of Principal Performance, and Teacher Job Satisfaction
Evans, Molly Lynn
2016-01-01
In public schools, the principal's role is of paramount importance in influencing teachers to excel and to keep their job satisfaction high. The self-efficacy of leaders is an important characteristic of leadership, but this issue has not been extensively explored in school principals. Using internet-based questionnaires, this study obtained…
McDaniel, Luther
2017-01-01
The purpose of this mixed methods study was to assess school principals' perspectives of the extent to which they apply the principles of andragogy to the professional development of assistant principals in their schools. This study was conducted in school districts that constitute a RESA area in a southeastern state. The schools in these…
Modelling maximum canopy conductance and transpiration in ...
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
What Principals Should Know About Food Allergies.
Munoz-Furlong, Anne
2002-01-01
Describes what principals should know about recent research findings on food allergies (peanuts, tree nuts, milk, eggs, soy, wheat) that can produce severe or life-threatening reactions in children. Asserts that every school should have trained staff and written procedures for reacting quickly to allergic reactions. (PKP)
A Principal's Guide to Children's Allergies.
Munoz-Furlong, Anne
1999-01-01
Discusses several common children's allergies, including allergic rhinitis, asthma, atopic dermatitis, food allergies, and anaphylactic shock. Principals should become familiar with various medications and should work with children's parents and physicians to determine how to manage their allergies at school. Allergen avoidance is the best…
An Exploration of Principal Instructional Technology Leadership
Townsend, LaTricia Walker
2013-01-01
Nationwide the demand for schools to incorporate technology into their educational programs is great. In response, North Carolina developed the IMPACT model in 2003 to provide a comprehensive model for technology integration in the state. The model is aligned to national educational technology standards for teachers, students, and principals.…
Principals' Leadership Styles and Student Achievement
Harnish, David Alan
2012-01-01
Many schools struggle to meet No Child Left Behind's stringent adequate yearly progress standards, although the benchmark has stimulated national creativity and reform. The purpose of this study was to explore teacher perceptions of principals' leadership styles, curriculum reform, and student achievement to ascertain possible factors to improve…
How To Select a Good Assistant Principal.
Holman, Linda J.
1997-01-01
Notes that a well-structured job profile and interview can provide insight into the key qualities of an effective assistant principal. These include organizational skills, basic accounting knowledge, interpersonal skills, dependability, strong work ethic, effective problem-solving skills, leadership skills, written communication skills,…
Principals' Transformational Leadership in School Improvement
Yang, Yingxiu
2013-01-01
Purpose: This paper aims to contribute experience and ideas on transformational leadership, not only for principals who want to improve their own leadership but also for schools at a critical period of improvement, by summarizing the formation process of such leadership, the problems encountered along the way, and the key factors that affect it.…
Imprecise Beliefs in a Principal Agent Model
Rigotti, L.
1998-01-01
This paper presents a principal-agent model where the agent has multiple, or imprecise, beliefs. We model this situation formally by assuming the agent's preferences are incomplete. One can interpret this multiplicity as an agent's limited knowledge of the surrounding environment. In this setting,
Bootstrap confidence intervals for principal response curves
Timmerman, Marieke E.; Ter Braak, Cajo J. F.
2008-01-01
The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the
Islamic Financing between Principles and Reality
Wolters, W.G.
2009-01-01
"The financial crisis would not have happened if the world had adopted the principles of Islamic banking and finance." That was one of the characteristic reactions from Islamic bankers in the final months of 2008. That was when the worldwide financial…
Dealing with Crises: One Principal's Experience.
Foley, Charles F.
1986-01-01
The principal of Concord High School (New Hampshire) recounts the 1985-86 school year's four crises--the visits of teacher-astronaut Christa McAuliffe and Secretary of Education William Bennett, the shooting of a former student, and the Challenger space shuttle explosion. The greatest challenge was resuming the normal schedule and fielding media…
Principal Pressure in the Middle of Accountability
Derrington, Mary Lynne; Larsen, Donald E.
2012-01-01
When a new superintendent is hired, Tom Thompson, middle school principal, is squeezed between complying with the demands of the district and cultivating a positive culture in his school. He wrestles with the stress of facing tough leadership choices that take a toll on his physical and mental health. Tom realizes that a career-ending move might…
The Relationship between Principals' Managerial Approaches and ...
Students' discipline is critical to the attainment of positive school outcomes. This paper presents and discusses findings of a study on the relationship between principals' management approaches and the level of student discipline in selected public secondary schools in Kenya. The premise of the study was that the level of ...
Primary School Principals' Experiences with Smartphone Apps
Çakir, Rahman; Aktay, Sayim
2016-01-01
Smartphones are not just pieces of hardware; they also incorporate software features such as communication systems. The aim of this study is to examine primary school principals' experiences with smartphone applications. To shed light on this subject, the research is qualitative in design. Criterion sampling has been intentionally…
Principal normal indicatrices of closed space curves
Røgen, Peter
1999-01-01
A theorem due to J. Weiner, which is also proven by B. Solomon, implies that a principal normal indicatrix of a closed space curve with nonvanishing curvature has integrated geodesic curvature zero and contains no subarc with integrated geodesic curvature pi. We prove that the inverse problem alw...
Summer Principals'/Directors' Orientation Training Module.
Mata, Robert L.; Garcia, Richard L.
Intended to provide current or potential project principals/directors with the basic knowledge, skills, abilities, and sensitivities needed to manage a summer migrant school project in the local educational setting, this module provides instruction in the project management areas of planning, preparation, control, and termination. The module…
Probabilistic Principal Component Analysis for Metabolomic Data.
Nyamundanda, Gift
2010-11-23
Abstract Background Data from metabolomic studies are typically complex and high-dimensional. Principal component analysis (PCA) is currently the most widely used statistical technique for analyzing metabolomic data. However, PCA is limited by the fact that it is not based on a statistical model. Results Here, probabilistic principal component analysis (PPCA) which addresses some of the limitations of PCA, is reviewed and extended. A novel extension of PPCA, called probabilistic principal component and covariates analysis (PPCCA), is introduced which provides a flexible approach to jointly model metabolomic data and additional covariate information. The use of a mixture of PPCA models for discovering the number of inherent groups in metabolomic data is demonstrated. The jackknife technique is employed to construct confidence intervals for estimated model parameters throughout. The optimal number of principal components is determined through the use of the Bayesian Information Criterion model selection tool, which is modified to address the high dimensionality of the data. Conclusions The methods presented are illustrated through an application to metabolomic data sets. Jointly modeling metabolomic data and covariates was successfully achieved and has the potential to provide deeper insight to the underlying data structure. Examination of confidence intervals for the model parameters, such as loadings, allows for principled and clear interpretation of the underlying data structure. A software package called MetabolAnalyze, freely available through the R statistical software, has been developed to facilitate implementation of the presented methods in the metabolomics field.
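The maximum likelihood PPCA fit that PPCA-based methods build on has a closed form (Tipping & Bishop). A minimal sketch on synthetic data, standing in for a (much higher-dimensional) metabolomic matrix; the dimensions and factor count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data: 300 samples in 10 dimensions from a 2-factor model.
n, p, q = 300, 10, 2
W_true = rng.standard_normal((p, q))
X = rng.standard_normal((n, q)) @ W_true.T + 0.1 * rng.standard_normal((n, p))

# Closed-form ML fit of PPCA (Tipping & Bishop): sigma^2 is the mean of
# the discarded sample-covariance eigenvalues, and the loadings W are
# built from the leading q eigenpairs.
S = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(S)
evals, evecs = evals[::-1], evecs[:, ::-1]   # sort descending
sigma2 = evals[q:].mean()
W = evecs[:, :q] * np.sqrt(evals[:q] - sigma2)
```

Because PPCA is a statistical model with parameters (W, sigma^2), it supports the extensions the abstract describes: covariate adjustment (PPCCA), mixtures for group discovery, jackknife confidence intervals for loadings, and BIC-based selection of q.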
Principals in Partnership with Math Coaches
Grant, Catherine Miles; Davenport, Linda Ruiz
2009-01-01
One of the most promising developments in math education is the fact that many districts are hiring math coaches--also called math resource teachers, math facilitators, math lead teachers, or math specialists--to assist elementary-level teachers with math instruction. What must not be lost, however, is that principals play an essential role in…
Experimental and principal component analysis of waste ...
The present study is aimed at determining through principal component analysis the most important variables affecting bacterial degradation in ponds. Data were collected from literature. In addition, samples were also collected from the waste stabilization ponds at the University of Nigeria, Nsukka and analyzed to ...
Principal Component Analysis as an Efficient Performance ...
This paper uses the principal component analysis (PCA) to examine the possibility of using few explanatory variables (X's) to explain the variation in Y. It applied PCA to assess the performance of students in Abia State Polytechnic, Aba, Nigeria. This was done by estimating the coefficients of eight explanatory variables in a ...
Principal component analysis of psoriasis lesions images
Maletti, Gabriela Mariel; Ersbøll, Bjarne Kjær
2003-01-01
A set of RGB images of psoriasis lesions is used. By visual examination of these images, there seems to be no common pattern that could be used to find and align the lesions within and between sessions. It is expected that the principal components of the original images could be useful during future...
The Principal as Professional Development Leader
Lindstrom, Phyllis H.; Speck, Marsha
2004-01-01
Individual teachers have the greatest effect on student performance. Principals, as professional development leaders, are in the best position to provide teachers with the professional development strategies they need to improve skills and raise student achievement. This book guides readers through a step-by-step process to formulate, implement,…
Burnout And Lifestyle Of Principals And Entrepreneurs
Jasna Lavrenčič
2014-12-01
Research Question (RQ): What kind of lifestyle do principals and entrepreneurs lead? Does the lifestyle of principals and entrepreneurs influence burnout? Purpose: To find out, based on the results of a questionnaire, what kind of lifestyle both researched groups lead, and whether lifestyle has an influence on the occurrence of burnout. Method: We collected data by questionnaire and analyzed them using SPSS with descriptive and inferential statistics. Results: Results showed that both groups lead a similar lifestyle and that lifestyle influences burnout among principals as well as entrepreneurs. Organization: School principals and entrepreneurs are the heads of individual organizations or companies whose goal is success. To be successful in their work, they must adapt their lifestyle, which can be healthy or unhealthy. An unhealthy lifestyle can lead to burnout. Society: With the results of the questionnaire we aim to answer the question of what lifestyle both groups lead and how it influences the occurrence of burnout. Originality: This is the first study of lifestyle and the occurrence of burnout in these two groups. Limitations/Future Research: Future work could extend to exercise physiology and the tracking of certain haematological parameters, such as cholesterol, blood sugar and the stress hormones adrenaline, noradrenaline and cortisol, allowing an even more in-depth study of the connection between lifestyle and burnout.
Principal Connection / Amazon and the Whole Teacher
Hoerr, Thomas R.
2015-01-01
A recent controversy over Amazon's culture has strong implications for the whole child approach, and it offers powerful lessons for principals. A significant difference between the culture of so many businesses today and the culture at good schools is that in good schools, the welfare of the employees is very important. Student success is the…
The Gender of Secondary School Principals.
Bonuso, Carl; Shakeshaft, Charol
1983-01-01
A study was conducted to understand why so few of the secondary school principals in New York State are women. Results suggest two possible causes: either sufficient women candidates do not apply for the positions, or sex discrimination still exists. (KH)
Maximum permissible concentration (MPC) values for spontaneously fissioning radionuclides
Ford, M.R.; Snyder, W.S.; Dillman, L.T.; Watson, S.B.
1976-01-01
The radiation hazards involved in handling certain of the transuranic nuclides that exhibit spontaneous fission as a mode of decay were reassessed using recent advances in dosimetry and metabolic modeling. Maximum permissible concentration (MPC) values in air and water for occupational exposure (168 hr/week) were calculated for 244Pu, 246Cm, 248Cm, 250Cf, 252Cf, 254Cf, 254mEs, 255Es, 254Fm, and 256Fm. The half-lives, branching ratios, and principal modes of decay of the parent-daughter members, down to a member that makes a negligible contribution to the dose, are given, and all daughters that make a significant contribution to the dose to body organs following inhalation or ingestion are included in the calculations. Dose commitments for body organs are also given
THESEUS: maximum likelihood superpositioning and analysis of macromolecular structures.
Theobald, Douglas L; Wuttke, Deborah S
2006-09-01
THESEUS is a command line program for performing maximum likelihood (ML) superpositions and analysis of macromolecular structures. While conventional superpositioning methods use ordinary least-squares (LS) as the optimization criterion, ML superpositions provide substantially improved accuracy by down-weighting variable structural regions and by correcting for correlations among atoms. ML superpositioning is robust and insensitive to the specific atoms included in the analysis, and thus it does not require subjective pruning of selected variable atomic coordinates. Output includes both likelihood-based and frequentist statistics for accurate evaluation of the adequacy of a superposition and for reliable analysis of structural similarities and differences. THESEUS performs principal components analysis for analyzing the complex correlations found among atoms within a structural ensemble. ANSI C source code and selected binaries for various computing platforms are available under the GNU open source license from http://monkshood.colorado.edu/theseus/ or http://www.theseus3d.org.
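For contrast with the maximum likelihood approach, the conventional least-squares superposition baseline can be sketched with the Kabsch algorithm on synthetic coordinates. This is only the LS criterion THESEUS improves on; the ML method additionally down-weights variable regions and models inter-atom correlations.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares superposition of P onto Q (both (n, 3) coordinate
    arrays) via the Kabsch algorithm; returns the transformed copy of P."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, s, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(U @ Vt))        # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt       # optimal rotation
    return Pc @ R + Q.mean(axis=0)

rng = np.random.default_rng(3)
Q = rng.standard_normal((20, 3))
theta = 0.8
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
P = Q @ Rz.T + np.array([1.0, -2.0, 0.5])     # rotated and shifted copy
aligned = kabsch(P, Q)
rmsd = np.sqrt(((aligned - Q) ** 2).mean())   # essentially zero here
```

Because LS weights every atom equally, flexible loops can dominate the fit; the likelihood-based criterion avoids the subjective pruning of variable atoms that LS superposition often requires.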
Principal parameters of classical multiply charged ion sources
Winter, H.; Wolf, B.H.
1974-01-01
A review is given of the operational principles of classical multiply charged ion sources (operating sources for intense beams of multiply charged ions using discharge plasmas; MCIS). The fractional rates of creation of multiply charged ions in MCIS plasmas cannot be deduced from the discharge parameters in a simple manner; they depend essentially on three principal parameters: the density and energy distribution of the ionizing electrons, and the confinement time of ions in the ionization space. Simple discharge models were used to find relations between principal parameters, and results of model calculations are compared to actually measured charge state density distributions of extracted ions. Details of processes which determine the energy distribution of ionizing electrons (heating effects), confinement times of ions (instabilities), and some technical aspects of classical MCIS (cathodes, surface processes, conditioning, lifetime) are discussed
MXLKID: a maximum likelihood parameter identifier
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
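The core idea — maximizing a likelihood function of noisy measurements with respect to unknown system parameters — can be sketched for a hypothetical one-parameter system. This is an illustration, not the MXLKID code, and the "noise" is a deterministic stand-in so the example is reproducible.

```python
import math

# Hypothetical one-parameter system: x(t) = exp(-a t). We identify the
# decay rate `a` from noisy samples by maximizing a Gaussian log-
# likelihood over a parameter grid.
a_true, sigma = 0.7, 0.05
ts = [0.1 * k for k in range(50)]
ys = [math.exp(-a_true * t) + sigma * math.sin(37.0 * t) for t in ts]

def log_likelihood(a):
    """Gaussian log-likelihood of the measurements, up to a constant."""
    return sum(-0.5 * ((y - math.exp(-a * t)) / sigma) ** 2
               for t, y in zip(ts, ys))

grid = [0.01 * k for k in range(1, 200)]
a_hat = max(grid, key=log_likelihood)    # maximum likelihood estimate
```

A production identifier like MXLKID would replace the grid search with a proper numerical maximization over many parameters of a nonlinear state-space model.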
Maximum power point tracker based on fuzzy logic
Daoud, A.; Midoun, A.
2006-01-01
Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is important for obtaining the maximum power from the limited solar panels. As the sun's illumination changes, due to variation of the angle of incidence of solar radiation and of the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in its electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic, and then controls the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary in their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method requires only linguistic control rules for the maximum power point; a mathematical model is not required, and therefore this control method is easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of the microchip's microcontroller unit control card and
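The perturbation-and-observation (hill climbing) baseline mentioned in the abstract can be sketched in a few lines. The panel model here is a toy concave power curve, not a real PV characteristic, and the step size and iteration count are illustrative.

```python
def panel_power(v):
    """Toy concave P(V) curve with a single maximum (not a real PV model)."""
    return max(0.0, v * (6.0 - 0.15 * v * v))

def perturb_and_observe(v0=1.0, step=0.05, iters=200):
    """Classic hill-climbing MPPT: keep stepping while power rises,
    reverse direction when it falls; settles oscillating near the peak."""
    v, p = v0, panel_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = panel_power(v_new)
        if p_new < p:              # power dropped: reverse perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()   # true MPP is at v = sqrt(40/3) ≈ 3.65
```

The fixed step explains the drawbacks the abstract cites — steady-state oscillation around the peak and slow tracking under fast insolation changes — which is what a fuzzy controller's rule-based, variable-step action is meant to improve.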
Spatial variations of growth within domes having different patterns of principal growth directions
Jerzy Nakielski
2014-01-01
Growth rate variations for two paraboloidal domes, A and B, identical when seen from the outside but differing in the internal pattern of principal growth directions, were modeled by means of the growth tensor and a natural coordinate system. In dome A, periclinal trajectories in the axial plane were given by confocal parabolas (as in a dome with a tunica); in dome B, by parabolas converging to the vertex (as in a dome without a tunica). Accordingly, two natural coordinate systems, namely paraboloidal for A and convergent parabolic for B, were used. In both cases, the rate of growth in area on the surfaces of the domes was assumed to be isotropic and identical at corresponding points. It appears that distributions of growth rates within domes A and B are similar in their peripheral and central parts and different only in their distal regions. In the latter, growth rates are relatively large; the maximum relative rate of growth in volume is around the geometric focus in dome A, and on the surface around the vertex in dome B.
Maximum neutron flux in thermal reactors
Strugar, P.V.
1968-12-01
A direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it suitable for application of the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
Maximum allowable load on wheeled mobile manipulators
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator along a given trajectory. The maximum allowable load that can be carried by a mobile manipulator along a given trajectory is limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important. To resolve the extra degrees of freedom introduced by base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, the application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
Principal components analysis in clinical studies.
Zhang, Zhongheng; Castelló, Adela
2017-09-01
In multivariate analysis, independent variables are usually correlated with each other, which can introduce multicollinearity in regression models. One approach to solving this problem is to apply principal components analysis (PCA) to these variables. This method uses an orthogonal transformation to represent sets of potentially correlated variables with principal components (PC) that are linearly uncorrelated. PCs are ordered so that the first PC has the largest possible variance, and only some components are selected to represent the correlated variables. As a result, the dimension of the variable space is reduced. This tutorial illustrates how to perform PCA in the R environment; the example is a simulated dataset in which two PCs are responsible for the majority of the variance in the data. Furthermore, the visualization of PCA is highlighted.
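The workflow the tutorial describes (orthogonal transformation, ordered components, dimension reduction) can be sketched via the singular value decomposition. The tutorial itself works in R; this is a rough Python equivalent on a simulated dataset where two latent factors drive three observed variables, analogous to the tutorial's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated data: 3 correlated variables driven by 2 latent factors plus small noise
n = 200
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
X = np.column_stack([f1 + 0.1 * rng.normal(size=n),
                     f1 + f2 + 0.1 * rng.normal(size=n),
                     f2 + 0.1 * rng.normal(size=n)])

Xc = X - X.mean(axis=0)                  # center each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance share of each PC
scores = Xc @ Vt.T                       # PC scores: linearly uncorrelated regressors
```

The first two entries of `explained` account for nearly all the variance, so the three correlated variables can be replaced by two uncorrelated score columns in a downstream regression.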
A Genealogical Interpretation of Principal Components Analysis
McVean, Gil
2009-01-01
Principal components analysis, PCA, is a statistical method commonly used in population genetics to identify structure in the distribution of genetic variation across geographical location and ethnic background. However, while the method is often used to inform about historical demographic processes, little is known about the relationship between fundamental demographic parameters and the projection of samples onto the primary axes. Here I show that for SNP data the projection of samples onto the principal components can be obtained directly from considering the average coalescent times between pairs of haploid genomes. The result provides a framework for interpreting PCA projections in terms of underlying processes, including migration, geographical isolation, and admixture. I also demonstrate a link between PCA and Wright's fst and show that SNP ascertainment has a largely simple and predictable effect on the projection of samples. Using examples from human genetics, I discuss the application of these results to empirical data and the implications for inference. PMID:19834557
PCA: Principal Component Analysis for spectra modeling
Hurley, Peter D.; Oliver, Seb; Farrah, Duncan; Wang, Lingyu; Efstathiou, Andreas
2012-07-01
The mid-infrared spectra of ultraluminous infrared galaxies (ULIRGs) contain a variety of spectral features that can be used as diagnostics to characterize the spectra. However, such diagnostics are biased by our prior prejudices on the origin of the features. Moreover, by using only part of the spectrum they do not utilize the full information content of the spectra. Blind statistical techniques such as principal component analysis (PCA) consider the whole spectrum, find correlated features and separate them out into distinct components. This code, written in IDL, classifies principal components of IRS spectra to define a new classification scheme using 5D Gaussian mixtures modelling. The five PCs and average spectra for the four classifications to classify objects are made available with the code.
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which depends on N.
Maximum entropy deconvolution of low count nuclear medicine images
McGrath, D.M.
1998-12-01
Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
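The maximum-likelihood expectation-maximisation technique mentioned above is, for Poisson image data, the Richardson-Lucy iteration. The following is a minimal 1-D sketch, not the thesis's actual algorithm (which adds a Bayesian prior and error handling for very low counts); the point-spread function and toy data are invented for illustration:

```python
import numpy as np

def mlem_deconvolve(observed, psf, iterations=50):
    """Maximum-likelihood EM (Richardson-Lucy) deconvolution for Poisson data.
    `psf` is a 1-D point-spread function, normalized to unit sum inside."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = np.where(blurred > 0, observed / blurred, 0.0)
        estimate *= np.convolve(ratio, psf_flip, mode="same")  # multiplicative update
    return estimate

# Toy example: two point sources, Gaussian blur, Poisson counting noise
rng = np.random.default_rng(1)
truth = np.zeros(64); truth[20] = 400; truth[40] = 250
psf = np.exp(-0.5 * (np.arange(-6, 7) / 2.0) ** 2)
blurred = np.convolve(truth, psf / psf.sum(), mode="same")
observed = rng.poisson(blurred).astype(float)
restored = mlem_deconvolve(observed, psf)
```

The multiplicative update keeps the estimate non-negative, and because the PSF sums to one the iteration approximately preserves total counts for interior sources, the quantification property the thesis engineers explicitly.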
Executive Compensation and Principal-Agent Theory.
Garen, John E
1994-01-01
The empirical literature on executive compensation generally fails to specify a model of executive pay on which to base hypotheses regarding its determinants. In contrast, this paper analyzes a simple principal-agent model to determine how well it explains variations in CEO incentive pay and salaries. Many findings are consistent with the basic intuition of principal-agent models that compensation is structured to trade off incentives with insurance. However, statistical significance for some...
Resonant Homoclinic Flips Bifurcation in Principal Eigendirections
Tiansi Zhang
2013-01-01
A codimension-4 homoclinic bifurcation with one orbit flip and one inclination flip at principal eigenvalue direction resonance is considered. By introducing a local active coordinate system in a small neighborhood of the homoclinic orbit, we obtain the Poincaré return map and the bifurcation equation. A detailed investigation produces the number and the existence of the 1-homoclinic orbit, the 1-periodic orbit, and double 1-periodic orbits. We also locate their bifurcation surfaces in certain regions.
Principal bundles on the projective line
Let X be a complete nonsingular curve over the algebraic closure k̄ of k and G a reductive group over k. Let E → X be a principal G-bundle on X. E is said to be semistable if, for every reduction of structure group E_P ⊂ E to a maximal parabolic subgroup P of G, we have degree E_P(p) ≤ 0, where p is the Lie algebra of P and E_P ...
Interplay between tilted and principal axis rotation
Datta, Pradip; Roy, Santosh; Chattopadhyay, S.
2014-01-01
At IUAC-INGA, our group has studied four neutron-rich nuclei of the mass-110 region, namely 109,110Ag and 108,110Cd. These nuclei provide a unique platform to study the interplay between tilted and principal axis rotation, since they are moderately deformed and, at the same time, shears structures are present at higher spins. The salient features of the high-spin behaviour of these nuclei, which are signatures of this interplay, will be discussed.
Multilevel sparse functional principal component analysis.
Di, Chongzhi; Crainiceanu, Ciprian M; Jank, Wolfgang S
2014-01-29
We consider analysis of sparsely sampled multilevel functional data, where the basic observational unit is a function and the data have a natural hierarchy of basic units. An example is when functions are recorded at multiple visits for each subject. Multilevel functional principal component analysis (MFPCA; Di et al. 2009) was proposed for such data when functions are densely recorded. Here we consider the case when functions are sparsely sampled and may contain only a few observations per function. We exploit the multilevel structure of covariance operators and achieve data reduction by principal component decompositions at both the between- and within-subject levels. We address inherent methodological differences in the sparse sampling context to: 1) estimate the covariance operators; 2) estimate the functional principal component scores; 3) predict the underlying curves. Through simulations, the proposed method is able to discover dominating modes of variation and reconstruct underlying curves well even in sparse settings. Our approach is illustrated by two applications, the Sleep Heart Health Study and eBay auctions.
A principal components model of soundscape perception.
Axelsson, Östen; Nilsson, Mats E; Berglund, Birgitta
2010-11-01
There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: Pleasantness, eventfulness, and familiarity, explaining 50, 18 and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker's N(10)), which shows that 'informational' properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.
The Interdependence of Principal School Leadership and Student Achievement
Soehner, David; Ryan, Thomas
2011-01-01
This review illuminated principal school leadership as a variable that impacted achievement. The principal as school leader and manager was explored because these roles were thought to impact student achievement both directly and indirectly. Specific principal leadership behaviors and principal effectiveness were explored as variables potentially…
Management Of Indiscipline Among Teachers By Principals Of ...
This study compared the management of indiscipline among teachers by public and private school principals in Akwa Ibom State. The sample comprised four hundred and fifty (450) principals/vice principals randomly selected from a population of one thousand, four hundred and twenty eight (1,428) principals. The null ...
Image coding based on maximum entropy partitioning for identifying ...
A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization ...
Maximum super angle optimization method for array antenna pattern synthesis
Wu, Ji; Roederer, A. G
1991-01-01
Different optimization criteria related to antenna pattern synthesis are discussed. Based on the maximum criterion and a vector space representation, a simple and efficient optimization method is presented for array and array-fed reflector power pattern synthesis. A sector pattern synthesized by a 2...
Maximum Interconnectedness and Availability for Directional Airborne Range Extension Networks
2016-08-29
IEEE Transactions on Wireless Communications. Thomas... Tactical military networks both on land and at sea often have restricted transmission... a standard definition in graph theoretic and networking literature that is related to, but different from, the metric we consider.
The constraint rule of the maximum entropy principle
Uffink, J.
1995-01-01
The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability
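For the most common constraint, a prescribed mean, the maximum entropy distribution takes the Gibbs form p_i ∝ exp(λ·x_i), with λ chosen so the constraint holds. A minimal sketch using the classic Brandeis dice example (faces 1-6 constrained to average 4.5); the bisection bounds are illustrative assumptions:

```python
import math

def maxent_distribution(values, target_mean, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution over `values` subject to a mean constraint.
    The solution has the Gibbs form p_i ∝ exp(lam * x_i); since the mean is
    monotone increasing in lam, it can be found by bisection."""
    def mean_for(lam):
        w = [math.exp(lam * x) for x in values]
        z = sum(w)
        return sum(x * wi for x, wi in zip(values, w)) / z
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in values]
    z = sum(w)
    return [wi / z for wi in w]

# Brandeis dice: faces 1..6 with constrained average 4.5
p = maxent_distribution([1, 2, 3, 4, 5, 6], 4.5)
```

Because the constrained mean (4.5) exceeds the uniform mean (3.5), λ comes out positive and the resulting probabilities increase monotonically with the face value, the standard result for this example.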
MAXIMUM-LIKELIHOOD-ESTIMATION OF THE ENTROPY OF AN ATTRACTOR
Schouten, J.C.; Takens, F.; van den Bleek, C.M.
In this paper, a maximum-likelihood estimate of the (Kolmogorov) entropy of an attractor is proposed that can be obtained directly from a time series. Also, the relative standard deviation of the entropy estimate is derived; it is dependent on the entropy and on the number of samples used in the
A maximum likelihood framework for protein design
Philippe Hervé
2006-06-01
Background: The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results: We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion: Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces
Anglo-American views of Gavrilo Princip
Markovich Slobodan G.
2015-01-01
The paper deals with Western (Anglo-American) views on the Sarajevo assassination/attentat and Gavrilo Princip. Articles on the assassination and Princip in two leading quality dailies (The Times and The New York Times) have been analysed in particular, as well as the views of leading historians and journalists who covered the subject, including R. G. D. Laffan, R. W. Seton-Watson, Winston Churchill, Sidney Fay, Bernadotte Schmitt, Rebecca West, A. J. P. Taylor, Vladimir Dedijer, Christopher Clark and Tim Butcher. In the West, the original general condemnation of the assassination and its main culprits was challenged when Rebecca West published her famous travelogue on Yugoslavia in 1941. Another Brit, the remarkable historian A. J. P. Taylor, had a much more positive view of the Sarajevo conspirators and blamed Germany and Austria-Hungary for the outbreak of the Great War. A turning point in Anglo-American perceptions was the publication of Vladimir Dedijer's monumental book The Road to Sarajevo (1966), which humanised the main conspirators, a process initiated by R. West. Dedijer's book was translated from English into all major Western languages and had an immediate impact on the understanding of the Sarajevo assassination. The rise of national antagonisms in Bosnia gradually alienated Princip from Bosnian Muslims and Croats, a process that began in the 1980s and was completed during the wars of the Yugoslav succession. Although all available sources clearly show that Princip, an ethnic Serb, gradually developed a broader Serbo-Croat and Yugoslav identity, he was ethnified and seen exclusively as a Serb by Bosnian Croats and Bosniaks and by Western journalists in the 1990s. In the past century imagining Princip in Serbia and the West involved a whole spectrum of views. In interwar Anglo-American perceptions he was a fanatic and lunatic. He became humanised by Rebecca West (1941), A. J. P. Taylor showed understanding for his act (1956), he was fully
Principal Investigator-in-a-Box
Young, Laurence R.
1999-01-01
Human performance in orbit is currently limited by several factors beyond the intrinsic awkwardness of motor control in weightlessness. Cognitive functioning can be affected by such factors as cumulative sleep loss, stress and the psychological effects of long-duration small-group isolation. When an astronaut operates a scientific experiment, the performance decrement associated with such factors can lead to lost or poor quality data and even the total loss of a scientific objective, at great cost to the sponsors and to the dismay of the Principal Investigator. In long-duration flights, as anticipated on the International Space Station and on any planetary exploration, the experimental model is further complicated by long delays between training and experiment, and the large number of experiments each crew member must perform. Although no documented studies have been published on the subject, astronauts report that an unusually large number of simple errors are made in space. Whether a result of the effects of microgravity, accumulated fatigue, stress or other factors, this pattern of increased error supports the need for a computerized decision-making aid for astronauts performing experiments. Artificial intelligence and expert systems might serve as powerful tools for assisting experiments in space. Those conducting space experiments typically need assistance exactly when the planned checklist does not apply. Expert systems, which use bits of human knowledge and human methods to respond appropriately to unusual situations, have a flexibility that is highly desirable in circumstances where an invariably predictable course of action/response does not exist. Frequently the human expert on the ground is unavailable, lacking the latest information, or not consulted by the astronaut conducting the experiment. In response to these issues, we have developed "Principal Investigator-in-a-Box," or [PI], to capture the reasoning process of the real expert, the Principal
Lamb, Lori D.
2014-01-01
The purpose of this qualitative study was to investigate the perceptions of effective principals' leadership competencies; determine if the perceptions of teachers, principals, and superintendents aligned with the proposed National Framework for Principal Evaluations initiative. This study examined the six domains of leadership outlined by the…
Do Qualification, Experience and Age Matter for Principals Leadership Styles?
Muhammad Javed Sawati; Saeed Anwar; Muhammad Iqbal Majoka
2013-01-01
The main focus of the present study was to find out the prevalent leadership styles of principals in government schools of Khyber Pakhtunkhwa and to find the relationship of leadership styles with the qualifications, age and experience of the principals. On the basis of the analyzed data, four major leadership styles of the principals were identified: Eclectic, Democratic, Autocratic, and Free-rein. However, a small proportion of the principals had no dominant leadership style. This study shows that princip...
On the structure of dynamic principal component analysis used in statistical process monitoring
Vanhatalo, Erik; Kulahci, Murat; Bergquist, Bjarne
2017-01-01
When principal component analysis (PCA) is used for statistical process monitoring it relies on the assumption that data are time independent. However, industrial data will often exhibit serial correlation. Dynamic PCA (DPCA) has been suggested as a remedy for high-dimensional and time... driven method to determine the maximum number of lags in DPCA with a foundation in multivariate time series analysis. The method is based on the behavior of the eigenvalues of the lagged autocorrelation and partial autocorrelation matrices. Given a specific lag structure we also propose a method... for determining the number of principal components to retain. The number of retained principal components is determined by visual inspection of the serial correlation in the squared prediction error statistic, Q (SPE), together with the cumulative explained variance of the model. The methods are illustrated using...
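The core mechanic of DPCA, augmenting the data matrix with lagged copies of itself before applying ordinary PCA, can be sketched as follows. The AR(1) simulation and the fixed choice of two lags are illustrative assumptions, not the paper's data-driven lag-selection procedure:

```python
import numpy as np

def lagged_matrix(X, lags):
    """Augment the series matrix X (n_samples x n_vars) with `lags` lagged
    copies, as in dynamic PCA: row t becomes [x_t, x_{t-1}, ..., x_{t-lags}]."""
    n = X.shape[0]
    cols = [X[lags - l : n - l] for l in range(lags + 1)]  # lag 0, 1, ..., lags
    return np.hstack(cols)

rng = np.random.default_rng(2)
# Serially correlated "industrial-style" data: a simulated AR(1) process
n, p = 500, 3
e = rng.normal(size=(n, p))
X = np.zeros((n, p))
for t in range(1, n):
    X[t] = 0.8 * X[t - 1] + e[t]

Z = lagged_matrix(X, lags=2)          # 2 lags of 3 variables -> 9 columns
Zc = Z - Z.mean(axis=0)
eigvals = np.linalg.svd(Zc, compute_uv=False) ** 2 / (Zc.shape[0] - 1)
```

Ordinary PCA on `Zc` then captures the cross-correlation *and* the autocorrelation structure; the spread of `eigvals` across the lagged block is what the paper's eigenvalue-based lag-selection method inspects.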
Glogovac Svetlana
2012-01-01
This study investigates the variability of tomato genotypes based on morphological and biochemical fruit traits. The experimental material is part of the tomato genetic collection of the Institute of Field and Vegetable Crops in Novi Sad, Serbia. Genotypes were analyzed for fruit mass, locule number, index of fruit shape, fruit colour, dry matter content, total sugars, total acidity, lycopene and vitamin C. Minimum, maximum and average values and the main indicators of variability (CV and σ) were calculated. Principal component analysis was performed to determine the structure of the variability sources. Four principal components, which together account for 93.75% of the total variability, were selected for analysis. The first principal component is defined by vitamin C, locule number and index of fruit shape. The second component is determined by dry matter content and total acidity; the third by lycopene, fruit mass and fruit colour. Total sugars contributed most to the fourth component.
Maximum entropy production rate in quantum thermodynamics
Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)
2010-06-01
In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible
Dan Wu
2009-06-01
The principal-subordinate hierarchical multi-objective programming model of initial water rights allocation was developed based on the principle of coordinated and sustainable development of different regions and water sectors within a basin. With the precondition of strictly controlling maximum emissions rights, initial water rights were allocated between the first and the second levels of the hierarchy in order to promote fair and coordinated development across different regions of the basin and coordinated and efficient water use across different water sectors, realize the maximum comprehensive benefits to the basin, promote the unity of quantity and quality of initial water rights allocation, and eliminate water conflicts across different regions and water sectors. According to interactive decision-making theory, a principal-subordinate hierarchical interactive iterative algorithm based on the satisfaction degree was developed and used to solve the initial water rights allocation model. A case study verified the validity of the model.
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Ozer, Niyazi
2013-01-01
The purpose of this study was to determine primary school principals' views on trust in students and parents and also to explore the relationships between principals' levels of professional burnout and their trust in students and parents. To this end, the Principal Trust Survey and Friedman Principal Burnout scales were administered to 119…
Quality, precision and accuracy of the Maximum No. 40 anemometer
Obermeir, J. [Otech Engineering, Davis, CA (United States)]; Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)]
1996-12-31
This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
Evolution of the Nova Vulpeculae no.1 1968 (LV Vul) spectrum after the maximum brightness
Andrijya, I.; Antipova, L.I.; Babaev, M.B. (AN Azerbajdzhanskoj SSR, Baku, Shemakhinskaya Astrofizicheskaya Observatoriya)
1986-01-01
The analysis of the spectral evolution of LV Vulpeculae 1968 after the maximum brightness was carried out. It is shown that the pre-maximum spectrum was replaced by the principal one in less than 24 h. The diffuse enhanced spectrum and the Orion one already existed when the Nova brightness had decreased by only 0.4 mag and 0.5 mag, respectively. The radial velocities of the Orion spectrum coincided with those of the diffuse enhanced one during the whole observational period. The Orion spectrum consists of lines of He I, N II, O II and possibly H I. The appearance of two additional components is probably due to splitting of the principal and diffuse enhanced spectra.
Maximum vehicle cabin temperatures under different meteorological conditions
Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John
2009-05-01
A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.
Fractal Dimension and Maximum Sunspot Number in Solar Cycle
R.-S. Kim
2006-09-01
Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data Analysis Center) and then estimated the cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. Using this relation, we predicted the maximum sunspot number in a solar cycle from the fractal dimension of the sunspot numbers during the increasing phase of solar activity. The success of the prediction is shown by the good correlation (r=0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
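The abstract above relies on Higuchi's method for estimating the fractal dimension of a time series. As an illustration only (not the authors' code; the choice `kmax=8` is an arbitrary assumption), the estimator can be sketched as follows: build subsampled curves at lags k, average their normalized lengths L(k), and take minus the slope of log L(k) versus log k.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension of a 1-D time series by Higuchi's method."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)          # subsampled series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            # total absolute increment, normalized for the samples actually used
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    # the fractal dimension is minus the slope of log L(k) against log k
    ks = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(ks), np.log(lk), 1)
    return -slope

# Sanity checks: a smooth sine wave should give FD near 1, white noise near 2.
rng = np.random.default_rng(0)
fd_sine = higuchi_fd(np.sin(np.linspace(0, 20 * np.pi, 2000)))
fd_noise = higuchi_fd(rng.standard_normal(2000))
```

A strongly irregular (noise-like) sunspot series would thus yield a dimension closer to 2, consistent with the inverse relationship the abstract reports.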
Radar fall detection using principal component analysis
Jokanovic, Branka; Amin, Moeness; Ahmad, Fauzia; Boashash, Boualem
2016-05-01
Falls are a major cause of fatal and nonfatal injuries in people aged 65 years and older. Radar has the potential to become one of the leading technologies for fall detection, thereby enabling the elderly to live independently. Existing techniques for fall detection using radar are based on manual feature extraction and require significant parameter tuning in order to provide successful detections. In this paper, we employ principal component analysis for fall detection, wherein eigen images of observed motions are employed for classification. Using real data, we demonstrate that the PCA based technique provides performance improvement over the conventional feature extraction methods.
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method [ma96] in a search for point sources in excess of a model for the background radiation (e.g. [hu97]). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used … algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find …
Shower maximum detector for SDC calorimetry
Ernwein, J.
1994-01-01
A prototype for the SDC end-cap electromagnetic (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and pions at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower max detector in the test beam are shown. (authors). 4 refs., 5 figs.
Topics in Bayesian statistics and maximum entropy
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Current aspects of the principle of protecting employees
Jovanović Predrag
2011-01-01
Full Text Available The principle of protecting employees is traditionally present in labor law. Particular categories of employees, also traditionally, enjoy special protection (young people, women, the disabled). However, the issue of protecting the moral integrity of employees has only recently been addressed. This makes the general principle of the protection of employees in labor relations highly topical, and it is from that perspective that this paper points to certain standards of protection of employees in light of international, European and domestic law.
Principal Component Analysis (PCA) and Its Application with SPSS
Hermita Bus Umar
2009-03-01
Full Text Available PCA (Principal Component Analysis) comprises statistical techniques applied to a single set of variables when the researcher is interested in discovering which variables in the set form coherent subsets that are relatively independent of one another. Variables that are correlated with one another but largely independent of other subsets of variables are combined into factors. The goal of PCA is to determine the extent to which each variable is explained by each dimension. Steps in PCA include selecting and measuring a set of variables, preparing the correlation matrix, extracting a set of factors from the correlation matrix, rotating the factors to increase interpretability, and interpreting the results.
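The steps listed above (standardize, prepare the correlation matrix, extract factors, interpret) can be sketched in Python rather than SPSS. This is an illustrative outline under assumed toy data, not the article's SPSS procedure, and it shows the unrotated solution; rotation (e.g. varimax) would be a further step.

```python
import numpy as np

def pca_from_correlation(data, n_components=2):
    """PCA via the correlation matrix: standardize, eigendecompose, project."""
    X = np.asarray(data, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each variable
    R = np.corrcoef(Z, rowvar=False)           # prepare the correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # extract factors (eigenvectors)
    order = np.argsort(eigvals)[::-1]          # largest explained variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # loadings: correlation of each original variable with each component
    loadings = eigvecs[:, :n_components] * np.sqrt(eigvals[:n_components])
    scores = Z @ eigvecs[:, :n_components]     # component scores per observation
    explained = eigvals / eigvals.sum()        # proportion of variance explained
    return scores, loadings, explained

# Toy data: two strongly correlated variables plus one independent variable,
# so the first component should capture the correlated pair.
rng = np.random.default_rng(1)
a = rng.standard_normal(300)
X = np.column_stack([a, a + 0.1 * rng.standard_normal(300), rng.standard_normal(300)])
scores, loadings, explained = pca_from_correlation(X)
```

On such data the first component absorbs most of the shared variance of the correlated pair, which is exactly the "coherent subset" idea the abstract describes.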
Efficient training of multilayer perceptrons using principal component analysis
Bunzmann, Christoph; Urbanczik, Robert; Biehl, Michael
2005-01-01
A training algorithm for multilayer perceptrons is discussed and studied in detail, which relates to the technique of principal component analysis. The latter is performed with respect to a correlation matrix computed from the example inputs and their target outputs. Typical properties of the training procedure are investigated by means of a statistical physics analysis in models of learning regression and classification tasks. We demonstrate that the procedure requires by far fewer examples for good generalization than traditional online training. For networks with a large number of hidden units we derive the training prescription which achieves, within our model, the optimal generalization behavior
Two correlated quasiparticle states in the principal series approximation
Dukelsky, J.; Dussel, G.G.; Sofia, H.M.
1983-01-01
The principal series approximation is extended to the description of two correlated quasiparticle states, enabling a treatment of these states that takes into account the coupling between the two-particle Green's function and the particle-hole one. This description is related to a random phase approximation treatment of collective states in open-shell nuclei that simultaneously includes the particle-particle and particle-hole versions of the nuclear residual Hamiltonian. Using separable interactions, it is found that the inclusion of the particle-particle part of the Hamiltonian greatly changes the properties of the 2+ states in the Sn isotopes.
Principal forensic physicians as educational supervisors.
Stark, Margaret M
2009-10-01
This research project was performed to assist the Faculty of Forensic and Legal Medicine (FFLM) with the development of a training programme for Principal Forensic Physicians (PFPs) to fulfil their role as educational supervisors. (Since this research was performed, the Metropolitan Police Service has dispensed with the services of the Principal Forensic Physicians, so currently, as of January 2009, there is no supervision of newly appointed FMEs or development training of doctors working in London, nor any audit or appraisal reviews.) PFPs working in London were surveyed by questionnaire to identify the extent of their knowledge with regard to their role in the development training of all forensic physicians (FPs) in their group, the induction of assistant FPs, and their perceptions of their own training needs with regard to their educational role. A focus group was held at the FFLM annual conference to discuss areas of interest that arose from the preliminary results of the questionnaire. There is a clear need for the FFLM to set up a training programme for educational supervisors in clinical forensic medicine, especially with regard to appraisal. 2009 Elsevier Ltd and Faculty of Forensic and Legal Medicine.
Wang, P.-Y.; Hou, S.-S.
2005-01-01
In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable-temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant-volume heat addition and constant-pressure heat rejection. This study is based purely on classical thermodynamic analysis methodology, and all results and conclusions follow from classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it accounts for the effect of engine size as related to investment cost. The results show that an engine design based on maximum power density, with constant effectiveness of the hot- and cold-side heat exchangers or constant inlet temperature ratio of the heat reservoirs, will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions.
Nonsymmetric entropy and maximum nonsymmetric entropy principle
Liu Chengshi
2009-01-01
Under the frame of a statistical model, the concept of nonsymmetric entropy which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. Maximum nonsymmetric entropy principle is proved. Some important distribution laws such as power law, can be derived from this principle naturally. Especially, nonsymmetric entropy is more convenient than other entropy such as Tsallis's entropy in deriving power laws.
Maximum speed of dewetting on a fiber
Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus
2011-01-01
A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed
Maximum potential preventive effect of hip protectors
van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.
2007-01-01
OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. … Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification. …
Correlation between maximum dry density and cohesion
HOD
represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
The maximum-entropy method in superspace
van Smaalen, S.; Palatinus, Lukáš; Schneider, M.
2003-01-01
Roč. 59 (2003), s. 459-469. ISSN 0108-7673. Grant - others: DFG(DE) XX. Institutional research plan: CEZ:AV0Z1010914. Keywords: maximum-entropy method * aperiodic crystals * electron density. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.558, year: 2003
Achieving maximum sustainable yield in mixed fisheries
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna
2017-01-01
Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example
5 CFR 534.203 - Maximum stipends.
2010-01-01
... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...
Minimal length, Friedmann equations and maximum density
Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)
2014-06-16
Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example, we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy density that is reachable in a finite time.
Surrogate Endpoint Evaluation: Principal Stratification Criteria and the Prentice Definition.
Gilbert, Peter B; Gabriel, Erin E; Huang, Ying; Chan, Ivan S F
2015-09-01
A common problem of interest within a randomized clinical trial is the evaluation of an inexpensive response endpoint as a valid surrogate endpoint for a clinical endpoint, where a chief purpose of a valid surrogate is to provide a way to make correct inferences on clinical treatment effects in future studies without needing to collect the clinical endpoint data. Within the principal stratification framework for addressing this problem based on data from a single randomized clinical efficacy trial, a variety of definitions and criteria for a good surrogate endpoint have been proposed, all based on or closely related to the "principal effects" or "causal effect predictiveness (CEP)" surface. We discuss CEP-based criteria for a useful surrogate endpoint, including (1) the meaning and relative importance of proposed criteria including average causal necessity (ACN), average causal sufficiency (ACS), and large clinical effect modification; (2) the relationship between these criteria and the Prentice definition of a valid surrogate endpoint; and (3) the relationship between these criteria and the consistency criterion (i.e., assurance against the "surrogate paradox"). This includes the result that ACN plus a strong version of ACS generally do not imply the Prentice definition nor the consistency criterion, but they do have these implications in special cases. Moreover, the converse does not hold except in a special case with a binary candidate surrogate. The results highlight that assumptions about the treatment effect on the clinical endpoint before the candidate surrogate is measured are influential for the ability to draw conclusions about the Prentice definition or consistency. In addition, we emphasize that in some scenarios that occur commonly in practice, the principal strata sub-populations for inference are identifiable from the observable data, in which cases the principal stratification framework has relatively high utility for the purpose of effect
[School principals--too ill for healthy schools?].
Weber, A; Weltle, D; Lederer, P
2004-03-01
School principals on the one hand play an important role in maintaining the performance and health of teachers, but on the other hand often feel over-burdened themselves and suffer from illnesses which not only impair their health-promoting function, but also limit their fitness for the occupation. The aim of our study was therefore to obtain, using objective parameters and larger numbers of cases, a differentiated insight into the type and extent of the morbidity spectrum and the health-related early retirement of school principals. In a prospective total assessment (the whole of Bavaria in the period 1997-1999), all reports about the premature unfitness for work of school directors were evaluated. The analysis included, for example, socio-demographic/occupational factors, diagnoses, assessment of performance and rehabilitation. The answers given in a standardised, anonymous questionnaire provided the database. Evaluation was carried out by means of descriptive statistics. The median age of the 408 school principals included in the evaluation (heads and vice-heads, 30% of whom were women) was 58 years (range: 41-64). The most frequent workplaces were primary schools (38%) and secondary schools (25%). 84% (n=342) of the headmasters were assessed to be unfit for work. The main reasons for early retirement were psychological/psychosomatic illnesses (F-ICD 10), which made up 45% of the cases. The relative frequency was higher in women than in men. Depressive disorders and exhaustion syndromes (burnout) dominated among the psychiatric diagnoses (proportion: 57%). The most frequent somatic diseases were cardiovascular diseases (I-ICD 10) in 19% of cases, followed by muscular/skeletal diseases (M-ICD 10) in 10% and malignant tumours (C-ICD 10) in 9% of cases. Cardiovascular diseases, in particular arterial hypertension and ischaemic heart disease, were, in addition, found in headmasters significantly more frequently than in teachers without a leadership
Al-Dahnaim, Layla; Said, Hana; Salama, Rasha; Bella, Hassan; Malo, Denise
2013-04-01
The school nurse plays a crucial role in the provision of comprehensive health services to students. This role encompasses both health and educational goals. The perception of the school nurse's role and its relation to health promotion is fundamental to the development of school nursing. This study aimed to determine the perceptions of school nurses and principals toward the role of school nurses in providing school health services in Qatar. A cross-sectional study was carried out among all school nurses (n=159) and principals (n=159) of governmental schools in Qatar. The participants were assessed for their perceptions of the role of the school nurse using a questionnaire of 19 Likert-type scaled items. The response rates were 100% for nurses and 94% for principals. The roles of the school nurse most commonly perceived by both nurses and principals were 'following up on chronically ill students', 'providing first aid', and 'referral of students with health problems', whereas most of the roles that were not perceived as school nurse roles were related to student academic achievement. School nurses and principals agreed on the clinical/medical aspects of nurses' role within schools, but disagreed on nurses' involvement in issues related to the school performance of students. The study recommends raising awareness among school principals of the school nursing role, especially in issues related to the school performance of students.
Reinvention and the Principal-Agent Model
J. Ramón Gil García
2003-01-01
Full Text Available There is an interesting debate in the public sector arising from the tensions between administrative performance and flexibility on the one hand, and accountability and control on the other. The purpose of this article is to discuss the usefulness of the principal-agent model for a better understanding of the tensions between performance and accountability, as well as to analyze the similarities and contradictions of this theoretical perspective in comparison with the government reinvention movement of the 1990s in the United States.
1991-01-01
The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT) [de]
2010-07-27
...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act [email protected] . SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
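The maximum-entropy route to a power law described here — maximize the Shannon entropy subject only to a fixed average of the logarithm of the observable — can be illustrated numerically. This is a sketch under assumed settings (a finite support 1..kmax and a bisection search for the exponent), not the RGF model itself: the maximum-entropy distribution under this constraint has the form p(k) ∝ k^(-α), and α plays the role of the Lagrange multiplier.

```python
import numpy as np

def maxent_power_law(mean_log_target, kmax=1000):
    """Find alpha so that p(k) ∝ k^-alpha on 1..kmax satisfies E[ln k] = mean_log_target."""
    ks = np.arange(1, kmax + 1)

    def mean_log(alpha):
        w = ks ** (-alpha)
        p = w / w.sum()
        return (p * np.log(ks)).sum()

    # E[ln k] decreases monotonically in alpha, so bisection converges.
    lo, hi = 0.01, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mean_log(mid) > mean_log_target:
            lo = mid   # alpha too small: distribution too heavy-tailed
        else:
            hi = mid
    alpha = 0.5 * (lo + hi)
    w = ks ** (-alpha)
    return alpha, w / w.sum()

# Impose E[ln k] = 2.0 and recover the power-law exponent that realizes it.
alpha, p = maxent_power_law(2.0)
mean_log_check = (p * np.log(np.arange(1, 1001))).sum()
```

The single constraint on E[ln k] is enough to pin down the exponent, which is the simplification the paper argues for relative to the RGF cost function.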
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios and taking into account irradiation effects on the structure of the gas envelope.
Maximum parsimony on subsets of taxa.
Fischer, Mareike; Thatte, Bhalchandra D
2009-09-21
In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
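The bottom-up pass of Fitch's algorithm, on which the reliability analysis above rests, can be sketched for a rooted binary tree (a minimal illustration; the nested-tuple tree encoding is hypothetical):

```python
def fitch_sets(tree, leaf_states):
    """Bottom-up pass of Fitch's maximum parsimony algorithm.

    tree: nested 2-tuples for internal nodes, leaf names at the tips.
    leaf_states: dict mapping leaf name -> observed character state.
    Returns (candidate state set at the root, number of changes required).
    """
    if not isinstance(tree, tuple):                  # leaf node
        return {leaf_states[tree]}, 0
    left_set, cl = fitch_sets(tree[0], leaf_states)
    right_set, cr = fitch_sets(tree[1], leaf_states)
    inter = left_set & right_set
    if inter:                                        # children agree: intersect
        return inter, cl + cr
    return left_set | right_set, cl + cr + 1         # disagree: union, +1 change

# Four taxa ((A,B),(C,D)) with states 0, 0, 1, 0
root_set, changes = fitch_sets((("A", "B"), ("C", "D")),
                               {"A": 0, "B": 0, "C": 1, "D": 0})
# root_set == {0}, changes == 1
```

Restricting the leaf set, as the paper studies, simply means running the same pass on a pruned tree, which can change both the root set and the accuracy of the inferred ancestral state.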
Maximum entropy analysis of liquid diffraction data
Root, J.H.; Egelstaff, P.A.; Nickel, B.G.
1986-01-01
A maximum entropy method for reducing truncation effects in the inverse Fourier transform of structure factor, S(q), to pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine, is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
Moolenaar, Nienke M.; Sleegers, Peter J. C.
2015-01-01
Purpose: While in everyday practice, school leaders are often involved in social relationships with a variety of stakeholders both within and outside their own schools, studies on school leaders' networks often focus either on networks within or outside schools. The purpose of this paper is to investigate the extent to which principals occupy…
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Automatic maximum entropy spectral reconstruction in NMR
Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.
2007-01-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Bayesian interpretation of Generalized empirical likelihood by maximum entropy
Rochet, Paul
2011-01-01
We study a parametric estimation problem related to moment condition models. As an alternative to the generalized empirical likelihood (GEL) and the generalized method of moments (GMM), a Bayesian approach to the problem can be adopted, extending the MEM procedure to parametric moment conditions. We show in particular that a large number of GEL estimators can be interpreted as a maximum entropy solution. Moreover, we provide a more general field of applications by proving the method to be rob...
Occurrence and Impact of Insects in Maximum Growth Plantations
Nowak, J.T.; Berisford, C.W.
2001-01-01
This study investigated the relationships between intensive management practices and insect infestation using maximum growth potential studies of loblolly pine conducted over five years with a hierarchy of cultural treatments, monitoring differences in growth and insect infestation levels related to the increasing management intensities. The study shows that tree fertilization can increase coneworm infestation and demonstrated that tip moth management can improve tree growth, at least initially.
Principals' Perceived Supervisory Behaviors Regarding Marginal Teachers in Two States
Range, Bret; Hewitt, Paul; Young, Suzie
2014-01-01
This descriptive study used an online survey to determine how principals in two states viewed the supervision of marginal teachers. Principals ranked their own evaluation of the teacher as the most important factor when identifying marginal teachers and relied on informal methods to diagnose marginal teaching. Female principals rated a majority of…
District Leadership for Effective Principal Evaluation and Support
Kimball, Steven M.; Arrigoni, Jessica; Clifford, Matthew; Yoder, Maureen; Milanowski, Anthony
2015-01-01
Research demonstrating principals' impact on student learning outcomes has fueled the shift from principals as facilities managers to an emphasis on instructional leadership (Hallinger & Heck, 1996; Leithwood, Louis, Anderson, & Wahlstrom, 2004; Marzano, Waters, & McNulty, 2005). Principals are under increasing pressure to carry out…
Principal Leadership for Technology-enhanced Learning in Science
Gerard, Libby F.; Bowyer, Jane B.; Linn, Marcia C.
2008-02-01
Reforms such as technology-enhanced instruction require principal leadership. Yet, many principals report that they need help to guide implementation of science and technology reforms. We identify strategies for helping principals provide this leadership. A two-phase design is employed. In the first phase we elicit principals' varied ideas about the Technology-enhanced Learning in Science (TELS) curriculum materials being implemented by teachers in their schools, and in the second phase we engage principals in a leadership workshop designed based on the ideas they generated. Analysis uses an emergent coding scheme to categorize principals' ideas, and a knowledge integration framework to capture the development of these ideas. The analysis suggests that principals frame their thinking about the implementation of TELS in terms of: principal leadership, curriculum, educational policy, teacher learning, student outcomes and financial resources. They seek to improve their own knowledge to support this reform. The principals organize their ideas around individual school goals and current political issues. Principals prefer professional development activities that engage them in reviewing curricula and student work with other principals. Based on the analysis, this study offers guidelines for creating learning opportunities that enhance principals' leadership abilities in technology and science reform.
Contemporary Challenges and Changes: Principals' Leadership Practices in Malaysia
Jones, Michelle; Adams, Donnie; Joo, Mabel Tan Hwee; Muniandy, Vasu; Perera, Corinne Jaqueline; Harris, Alma
2015-01-01
This article outlines the findings from a contemporary study of principals' leadership practices in Malaysia as part of the 7 System Leadership Study. Recent policy developments within Malaysia have increased principals' accountability and have underlined the importance of the role of the principals in transforming school performance and student…
Common Core Implementation Decisions Made by Principals in Elementary Schools
Norman, Alexis Cienfuegos
2016-01-01
The purpose of this study was to understand the decisions elementary principals have made during the Common Core State Standards reform. Specifically, (a) what decisions principals have made to support Common Core implementation, (b) what strategies elementary principals have employed to communicate with stakeholders about Common Core State…
Job Satisfaction of Elementary Principals in Large Urban Communities
Mitchell, Cathryn M.
2010-01-01
The purpose of this study was to determine job satisfaction levels of elementary principals in "major urban" districts in Texas and to identify strategies these principals used to cope with the demands of the position. Additionally, the project sought to find structures and supports needed to attract and retain principals in the…
Aerobic Physical Activity and the Leadership of Principals
Kiser, Kari
2016-01-01
The purpose of this study was to explore if there was a connection between regular aerobic physical activity and the stress and energy levels of principals as they reported it. To begin the research, the current aerobic physical activity level of principals was discovered. Additionally, the energy and stress levels of the principals who do engage…
Principal Preparation in Special Education: Building an Inclusive Culture
Hofreiter, Deborah
2017-01-01
The importance of principal preparation in special education has increased since the Education for All Handicapped Children Act was passed in 1975. There are significant financial reasons for preparing principals in the area of special education. Recent research also shows that all children learn better in an inclusive environment. Principals who…
How the Principalship Has Changed: Lessons from Principals' Life Stories.
Brubaker, Dale L.
1995-01-01
The life stories of (North Carolina) principals in a graduate education class reveal vast changes over the past 20 years. "Good ol' boy" superintendents and principals have been replaced by self-interested political "sharks" concerned more with image than substance. Fortunately, principals with resiliency, caring values, and…
Principal Turnover: Upheaval and Uncertainty in Charter Schools?
Ni, Yongmei; Sun, Min; Rorrer, Andrea
2015-01-01
Purpose: Informed by literature on labor market and school choice, this study aims to examine the dynamics of principal career movements in charter schools by comparing principal turnover rates and patterns between charter schools and traditional public schools. Research Methods/Approach: This study uses longitudinal data on Utah principals and…
A Review of the Literature on Principal Turnover
Snodgrass Rangel, Virginia
2018-01-01
Among the many challenges facing public schools are high levels of principal turnover. Given the important role that principals play and are expected to play in the improvement process, concerns about principal turnover have resulted in a growing body of research on its causes and consequences. The purpose of this review is to take stock of what…
Principal Holistic Judgments and High-Stakes Evaluations of Teachers
Briggs, Derek C.; Dadey, Nathan
2017-01-01
Results from a sample of 1,013 Georgia principals who rated 12,617 teachers are used to compare holistic and analytic principal judgments with indicators of student growth central to the state's teacher evaluation system. Holistic principal judgments were compared to mean student growth percentiles (MGPs) and analytic judgments from a formal…
Urban School Principals and Their Role as Multicultural Leaders
Gardiner, Mary E.; Enomoto, Ernestine K.
2006-01-01
This study focuses on the role of urban school principals as multicultural leaders. Using cross-case analysis, the authors describe what 6 practicing principals do in regard to multicultural leadership. The findings suggest that although multicultural preparation was lacking for these principals, some did engage in work that promoted diversity in…
School Restructuring and the Dilemmas of Principals' Work.
Wildy, Helen; Louden, William
2000-01-01
The complexity of principals' work may be characterized according to three dilemmas: accountability, autonomy, and efficiency. Narrative vignettes of 74 Australian principals revealed that principals were fair and inclusive. When faced with restructuring dilemmas, however, they favored strong over shared leadership, efficiency over collaboration,…
Importance of an Effective Principal-Counselor Relationship
Edwards, LaWanda; Grace, Ronald; King, Gwendolyn
2014-01-01
An effective relationship between the principal and school counselor is essential when improving student achievement. To have an effective relationship, there must be communication, trust and respect, leadership, and collaborative planning between the principal and school counselor (College Board, 2011). Principals and school counselors are both…
Honouring Roles: The Story of a Principal and a Student
Cranston, Jerome
2012-01-01
The importance of the teacher-student relationship in educational practice is well established, as is the idea of principal leadership in relationship to staff. Even though principal leadership is regarded as a factor in student success, the principal's effect is usually assumed to take place via the teaching staff. There is an absence of research…
Use of Sparse Principal Component Analysis (SPCA) for Fault Detection
Gajjar, Shriram; Kulahci, Murat; Palazoglu, Ahmet
2016-01-01
Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task since each principal component is a linear combination of the ...
An Efficient Algorithm for the Maximum Distance Problem
Gabrielle Assunta Grün
2001-12-01
Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central in many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events. They begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely we consider the problem of finding the maximum "distance" between two vertices in a chain; this problem arises in real world applications such as process control and crew scheduling. We describe an O(n) time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
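For the single-chain case, the O(n)-preprocessing / O(1)-query scheme the abstract describes can be sketched with a prefix-count array (a simplified sketch; the paper's metagraph additionally handles queries across chains):

```python
class ChainDistance:
    """O(n) preprocessing, O(1) queries for the number of strict '<' edges
    between two vertices on a single chain whose edges are '<' or '<='."""

    def __init__(self, edge_labels):
        # edge_labels[i] is the label of the edge between vertex i and i + 1
        self.prefix = [0]
        for lab in edge_labels:
            self.prefix.append(self.prefix[-1] + (1 if lab == "<" else 0))

    def max_distance(self, i, j):
        """Maximum number of '<' edges on the chain path from vertex i to j (i <= j)."""
        return self.prefix[j] - self.prefix[i]

# Chain of 5 vertices: 0 < 1 <= 2 < 3 < 4
cd = ChainDistance(["<", "<=", "<", "<"])
# cd.max_distance(0, 4) == 3
```

Each query is a single subtraction, so after the linear preprocessing pass any pair of on-chain vertices can be compared in constant time.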
Einstein, Albert
2013-01-01
Time magazine's "Man of the Century", Albert Einstein is the founder of modern physics and his theory of relativity is the most important scientific idea of the modern era. In this short book, Einstein explains, using the minimum of mathematical terms, the basic ideas and principles of the theory that has shaped the world we live in today. Unsurpassed by any subsequent books on relativity, this remains the most popular and useful exposition of Einstein's immense contribution to human knowledge. With a new foreword by Derek Raine.
Principal and secondary luminescence lifetime components in annealed natural quartz
Chithambo, M.L.; Ogundare, F.O.; Feathers, J.
2008-01-01
Time-resolved luminescence spectra from quartz can be separated into components with distinct principal and secondary lifetimes depending on certain combinations of annealing and measurement temperature. The influence of annealing on properties of the lifetimes related to irradiation dose and temperature of measurement has been investigated in sedimentary quartz annealed at various temperatures up to 900 deg. C. Time-resolved luminescence for use in the analysis was pulse stimulated from samples at 470 nm between 20 and 200 deg. C. Luminescence lifetimes decrease with measurement temperature due to an increasing thermal effect on the associated luminescence, with an activation energy of thermal quenching equal to 0.68±0.01 eV for the secondary lifetime but only qualitatively so for the principal lifetime component. Concerning the influence of annealing temperature, luminescence lifetimes measured at 20 deg. C are constant at about 33 μs for annealing temperatures up to 600 deg. C but decrease to about 29 μs when the annealing temperature is increased to 900 deg. C. In addition, it was found that lifetime components in samples annealed at 800 deg. C are independent of radiation dose in the range 85-1340 Gy investigated. The dependence of lifetimes on both the annealing temperature and magnitude of radiation dose is described as being due to the increasing importance of a particular recombination centre in the luminescence emission process as a result of dynamic hole transfer between non-radiative and radiative luminescence centres.
Functional Principal Components Analysis of Shanghai Stock Exchange 50 Index
Zhiliang Wang
2014-01-01
The main purpose of this paper is to explore the principal components of Shanghai stock exchange 50 index by means of functional principal component analysis (FPCA). Functional data analysis (FDA) deals with random variables (or processes) with realizations in the smooth functional space. One of the most popular FDA techniques is functional principal component analysis, which was introduced for the statistical analysis of a set of financial time series from an explorative point of view. FPCA is the functional analogue of the well-known dimension reduction technique in the multivariate statistical analysis, searching for linear transformations of the random vector with the maximal variance. In this paper, we studied the monthly return volatility of Shanghai stock exchange 50 index (SSE50). Using FPCA to reduce dimension to a finite level, we extracted the most significant components of the data and some relevant statistical features of such related datasets. The calculated results show that regarding the samples as random functions is rational. Compared with the ordinary principal component analysis, FPCA can solve the problem of different dimensions in the samples. And FPCA is a convenient approach to extract the main variance factors.
Sparse principal component analysis in medical shape modeling
Sjöstrand, Karl; Stegmann, Mikkel B.; Larsen, Rasmus
2006-03-01
Principal component analysis (PCA) is a widely used tool in medical image analysis for data reduction, model building, and data understanding and exploration. While PCA is a holistic approach where each new variable is a linear combination of all original variables, sparse PCA (SPCA) aims at producing easily interpreted models through sparse loadings, i.e. each new variable is a linear combination of a subset of the original variables. One of the aims of using SPCA is the possible separation of the results into isolated and easily identifiable effects. This article introduces SPCA for shape analysis in medicine. Results for three different data sets are given in relation to standard PCA and sparse PCA by simple thresholding of small loadings. Focus is on a recent algorithm for computing sparse principal components, but a review of other approaches is supplied as well. The SPCA algorithm has been implemented using Matlab and is available for download. The general behavior of the algorithm is investigated, and strengths and weaknesses are discussed. The original report on the SPCA algorithm argues that the ordering of modes is not an issue. We disagree on this point and propose several approaches to establish sensible orderings. A method that orders modes by decreasing variance and maximizes the sum of variances for all modes is presented and investigated in detail.
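The "simple thresholding" baseline that the article compares SPCA against can be sketched as: compute an ordinary leading principal component, then zero out small loadings (a toy illustration of the baseline only; real SPCA solves a penalized optimization, and the covariance matrix below is invented for the example):

```python
def leading_component(cov, iters=200):
    """Leading eigenvector of a symmetric covariance matrix via power iteration."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def threshold_loadings(v, eps=0.1):
    """Naive 'sparse' component: zero loadings below eps, then renormalize."""
    w = [x if abs(x) >= eps else 0.0 for x in v]
    norm = sum(x * x for x in w) ** 0.5
    return [x / norm for x in w]

# Two strongly coupled variables plus one weakly coupled, low-variance one
cov = [[2.00, 1.90, 0.05],
       [1.90, 2.00, 0.05],
       [0.05, 0.05, 0.10]]
pc1 = leading_component(cov)
sparse_pc1 = threshold_loadings(pc1)   # third loading is suppressed to exactly 0
```

The thresholded component isolates the two coupled variables, which is the kind of "isolated and easily identifiable effect" the article is after; SPCA achieves this within the optimization itself rather than as a post hoc cutoff.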
Condon, Christopher; Clifford, Matthew
2010-01-01
This brief reviews the publicly available principal assessments and points superintendents and policy makers toward strong instruments to measure principal performance. Specifically, the measures included in this review are expressly intended to evaluate principal performance and have varying degrees of publicly available evidence of psychometric…
Maximum entropy decomposition of quadrupole mass spectra
Toussaint, U. von; Dose, V.; Golan, A.
2004-01-01
We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum entropy method in momentum density reconstruction
Dobrzynski, L.; Holas, A.
1997-01-01
The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of electron momentum density may be reliably carried out with the aid of simple iterative algorithm suggested originally by Collins. A number of distributions has been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig
On the maximum drawdown during speculative bubbles
Rotundo, Giulia; Navarra, Mauro
2007-08-01
A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outlier with respect to the bulk of drawdown price movement distribution. This paper goes on deeper in the analysis providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movement of indices prices. The analysis of drawdown duration is also performed and it is the core of the risk measure estimated here.
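The drawdown and maximum drawdown statistics this analysis relies on can be computed in one pass over a price series (a minimal sketch; the price series is invented for the example):

```python
def max_drawdown(prices):
    """Largest peak-to-trough relative decline over a price series."""
    peak = prices[0]
    mdd = 0.0
    for p in prices:
        peak = max(peak, p)                 # running maximum (peak so far)
        mdd = max(mdd, (peak - p) / peak)   # current drawdown from that peak
    return mdd

# Peak 120 followed by trough 90 is a 25% drawdown
series = [100, 110, 120, 95, 90, 105, 115]
# max_drawdown(series) == 0.25
```

Bubble-selection procedures like the one above then ask whether such drawdowns are outliers relative to the bulk of the drawdown distribution.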
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Dynamical maximum entropy approach to flocking.
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes structure and measurement results of the system detecting present maximum temperature on the surface of an integrated circuit. The system consists of the set of proportional to absolute temperature sensors, temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit where designed with full-custom technique. The system is a part of temperature-controlled oscillator circuit - a power management system based on dynamic frequency scaling method. The oscillator cooperates with microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Multiperiod Maximum Loss is time unit invariant.
Kovacevic, Raimund M; Breuer, Thomas
2016-01-01
Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Improved Maximum Parsimony Models for Phylogenetic Networks.
Van Iersel, Leo; Jones, Mark; Scornavacca, Celine
2018-05-01
Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.
Ancestral sequence reconstruction with Maximum Parsimony
Herbst, Lina; Fischer, Mareike
2017-01-01
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...
Record of principal work activities/deliverables
1989-09-01
Over the five year period of performance, thirteen task assignments were issued by the DOE to ARINC Research. During the two year base period seven tasks were assigned. Two task assignments were issued for each of the three consecutive one year option periods. Associated with all task assignments were multiple subtasks, some of which required significant effort. These subtasks are appropriately cited in this report under their respective task assignments as principal work activities or deliverables. The technical and management support provided to the DOE under this contract focused on two general areas: (1) appraisal activities and (2) non-appraisal activities. Support to appraisals included planning, document review, developing lines-of-inquiry, interviewing, data collection, report writing, and follow-up. Such work was executed both on-site at the DOE facility under review and off-site. Non-appraisal support was varied and included such areas as document review, data base development, technical assessments, statistical analysis, policy analysis, reliability engineering, and workshop and conference planning and execution.
Principal Hydrologic Responses to Climatic and Geologic Variability in the Sierra Nevada, California
David H. Peterson
2008-02-01
Sierra Nevada snowpack is a critical water source for California’s growing population and agricultural industry. However, because mountain winters and springs are warming, on average, precipitation as snowfall relative to rain is decreasing, and snowmelt is earlier. The changes are stronger at mid-elevations than at higher elevations. The result is that the water supply provided by snowpack is diminishing. In this paper, we describe principal hydrologic responses to climatic and spatial geologic variations as gleaned from a series of observations including snowpack, stream-flow, and bedrock geology. Our analysis focused on peak (maximum) and base (minimum) daily discharge of the annual snowmelt-driven hydrographs from 18 Sierra Nevada watersheds and 24 stream gage locations using standard correlation methods. Insights into the importance of the relative magnitudes of peak flow and soil water storage led us to develop a hydrologic classification of mountain watersheds based on runoff versus base flow as a percentage of peak flow. Our findings suggest that watersheds with a stronger base flow response store more soil water than watersheds with a stronger peak-flow response. Further, the influence of antecedent wet or dry years is greater in watersheds with high base flow, measured as a percentage of peak flow. The strong correlation between (1) the magnitude of peak flow and (2) snow water equivalent can be used to predict peak flow weeks in advance. A weaker but similar correlation can be used to predict the magnitude of base flow months in advance. Most of the watersheds show a trend that peak flow is occurring earlier in the year.
Objective Bayesianism and the Maximum Entropy Principle
Jon Williamson
2013-09-01
Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be the probability function that, among all those calibrated to the evidence, has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
Efficient heuristics for maximum common substructure search.
Englert, Péter; Kovács, Péter
2015-05-26
Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
Ing, Marsha
2013-01-01
Informally observing classrooms is one way that principals can help improve teaching and learning. This study describes the variability of principals' classroom observations across schools and identifies the conditions under which observations relate to the instructional climate in some schools and not others. Data for this study come from…
2010-01-01
Excerpt from Regulation O, § 215.9 (Banks and Banking, Federal Reserve System), Disclosure of credit from member banks to executive officers and principal shareholders: criteria for determining an individual's principal shareholder status and the definition of a "related interest".
O'Malley, Michael P.; Long, Tanya A.; King, Jeffry
2015-01-01
Multiple and complex issues simultaneously present themselves for the principal's attention. Learning how to identify, prioritize, synthesize, and act in relation to these issues poses a particular challenge to early career principals. This case study engages aspiring and current school leaders in critical reflection upon leadership opportunities…
Einstein-Dirac theory in spin maximum I
Crumeyrolle, A.
1975-01-01
A unitary Einstein-Dirac theory, first in spin maximum 1, is constructed. An original feature of this article is that it is written without any tetrad techniques; only basic notions and existence conditions for spinor structures on pseudo-Riemannian fibre bundles are used. A gravitation-electromagnetic field coupling is pointed out, in the geometric setting of the tangent bundle over space-time. Generalized Maxwell equations for inductive media in the presence of a gravitational field are obtained. The enlarged Einstein-Schroedinger theory gives a particular case of this E.D. theory; E.S. theory is a truncated E.D. theory in spin maximum 1. A close relation between the torsion-vector and Schroedinger's potential exists, and nullity of the torsion-vector has a spinor meaning. Finally the Petiau-Duffin-Kemmer theory is incorporated in this geometric setting.
Jarzynski equality in the context of maximum path entropy
González, Diego; Davis, Sergio
2017-06-01
In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy - also known as Maximum Caliber principle -, this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy differences between two equilibrium thermodynamic states with the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality will be performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social systems, financial and ecological systems.
Rumor Identification with Maximum Entropy in MicroNet
Suisheng Yu
2017-01-01
The widely used applications of Microblog, WeChat, and other social networking platforms (that we call MicroNet) shorten the period of information dissemination and expand the range of information dissemination, which allows rumors to cause greater harm and have more influence. A hot topic in the information dissemination field is how to identify and block rumors. Based on the maximum entropy model, this paper constructs a recognition mechanism for rumor information in the micronetwork environment. First, based on information entropy theory, we obtained the characteristics of rumor information using the maximum entropy model. Next, we optimized the original classifier training set and the feature function to divide the information into rumors and nonrumors. Finally, the experimental simulation results show that the rumor identification results using this method are better than those of the original classifier and other related classification methods.
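For binary rumor detection, a maximum-entropy classifier is equivalent to logistic regression fit by maximum likelihood. The sketch below is a minimal illustration of that core training loop in plain NumPy; the features, labels, learning rate, and iteration count are all invented for demonstration and are not taken from the paper, whose feature functions and training-set optimizations are more elaborate.

```python
import numpy as np

# Synthetic data: 200 "posts", 3 numeric features each, labels generated
# from a known weight vector plus a little noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = (X @ w_true + 0.1 * rng.normal(size=200) > 0).astype(float)

# Maximum-likelihood fit of the maximum-entropy (logistic) model
# by simple gradient ascent on the average log-likelihood.
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # P(rumor | features)
    w += 0.1 * X.T @ (y - p) / len(y)    # log-likelihood gradient step

pred = 1.0 / (1.0 + np.exp(-(X @ w))) > 0.5
accuracy = (pred == y.astype(bool)).mean()
```

On this synthetic, nearly separable data the fitted weights recover the direction of `w_true`, so training accuracy is high; real rumor data would of course be far noisier.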
Miller, Catherine M.; Martin, Barbara N.
2015-01-01
This multi-case study sought to construct meaning using a cultural capital lens in relation to educational leadership preparation programs building the capacities of social justice leaders in demographically changing schools. Data revealed principals' perceptions about preparation, expectations and general beliefs and assumptions related to…
Hydraulic Limits on Maximum Plant Transpiration
Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.
2011-12-01
Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
Analogue of Pontryagin's maximum principle for multiple integrals minimization problems
Zelikin, Mikhail
2016-01-01
A theorem analogous to Pontryagin's maximum principle for multiple-integral minimization problems is proved. Unlike the usual maximum principle, the maximum is taken not over all matrices, but only over matrices of rank one. Examples are given.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
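As a rough illustration of the kind of geometry behind such a layer, the maximum length of a basin can be approximated as the longest straight-line distance between any two vertices of its shoreline polygon. This is a simplification of the actual GIS computation, with invented coordinates:

```python
import math

def max_length(polygon):
    """Longest straight-line distance between any two vertices of a
    polygon given as a list of (x, y) tuples - a simple stand-in for
    the 'maximum length' of a lake basin."""
    return max(
        math.dist(p, q)
        for i, p in enumerate(polygon)
        for q in polygon[i + 1:]
    )

# A 4 x 3 rectangle: the maximum length is its diagonal.
square = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
print(max_length(square))  # -> 5.0
```

A production tool would work on projected coordinates and use a convex-hull pass first, since only hull vertices can realize the diameter.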
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the price paths and the instantaneous commodity flow, i.e., the optimal configuration.
Modelling maximum likelihood estimation of availability
Waller, R.A.; Tietjen, G.L.; Rock, G.W.
1975-01-01
Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failures and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter lambda, and the time-to-repair model for Y is an exponential density with parameter theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t) = lambda/(lambda+theta) + theta/(lambda+theta) exp{-[(1/lambda)+(1/theta)]t} for t > 0. Also, the steady-state availability is A(infinity) = lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions of those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t).
The methodology is applied to two examples which approximate the operating history of two nuclear power plants.
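Under the exponential parameterization quoted in the abstract (lambda and theta as the mean time to failure and mean time to repair), the maximum likelihood estimates are simply the sample means of the observed cycle times, and plugging them into the formulas gives the estimated availability. A minimal sketch, with the function name and data invented for illustration:

```python
import math
import statistics

def availability_estimates(failure_times, repair_times, t):
    """MLE-based estimates of instantaneous availability A(t) and
    steady-state availability A(inf), using the exponential-model
    formulas quoted in the abstract."""
    lam = statistics.mean(failure_times)   # MLE of mean time to failure
    theta = statistics.mean(repair_times)  # MLE of mean time to repair
    a_inf = lam / (lam + theta)            # A(infinity)
    a_t = a_inf + (theta / (lam + theta)) * math.exp(-((1 / lam) + (1 / theta)) * t)
    return a_t, a_inf

# Invented data: 5 cycles, mean uptime 100 h, mean repair 10 h.
a0, a_inf = availability_estimates([100.0] * 5, [10.0] * 5, 0.0)
```

At t = 0 the two terms sum to 1 (the plant starts operational), and as t grows A(t) decays to the steady-state value lambda/(lambda+theta).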
STUDY ON MAXIMUM SPECIFIC SLUDGE ACTIVITY OF DIFFERENT ANAEROBIC GRANULAR SLUDGE BY BATCH TESTS
Anonymous
2001-01-01
The maximum specific sludge activity of granular sludge from large-scale UASB, IC and Biobed anaerobic reactors was investigated by batch tests. The limiting factors related to maximum specific sludge activity (diffusion, substrate type, substrate concentration and granule size) were studied. A general principle and procedure for the precise measurement of maximum specific sludge activity are suggested. The potential loading-rate capacity of the IC and Biobed anaerobic reactors was analyzed and compared using the batch test results.
Wavelet decomposition based principal component analysis for face recognition using MATLAB
Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish
2016-03-01
For the realization of face recognition systems, in static as well as real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his facial gestures; it resembles factor analysis in some sense, i.e. extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly poor discriminatory power and, in particular, the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains, where frequency and time are used interchangeably. The experimental results indicate that this face recognition method achieves a significant percentage improvement in recognition rate as well as better computational efficiency.
M. Cova
2015-07-01
The critical plane calculation for multiaxial damage assessment is often a demanding task, particularly for large FEM models of real components. In practical engineering work, however, it is sometimes possible to take advantage of the specific properties of the case under investigation. This paper deals with the problem of a mechanical component loaded by multiple, but “time-separated”, multiaxial external loads. The specific material damage depends on the maximum principal stress variation, with a significant mean-stress sensitivity as well. A specifically fitted procedure was developed for fast computation, at each node of a large FEM model, of the direction undergoing the maximum fatigue damage; the procedure is defined according to an effective stress based on the amplitude and mean value of the maximum principal stress. The procedure is presented in a general form, applicable to similar cases.
Maximum-likelihood estimation of recent shared ancestry (ERSA).
Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B
2011-05-01
Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.
Principal spectra describing magnetooptic permittivity tensor in cubic crystals
Hamrlová, Jana (Nanotechnology Centre and IT4Innovations Centre, VSB – Technical University of Ostrava, 17. listopadu 15, Ostrava, 708 33, Czech Republic); Legut, Dominik (IT4Innovations Centre, VSB – Technical University of Ostrava, 17. listopadu 15, Ostrava, 708 33, Czech Republic); Veis, Martin (Faculty of Mathematics and Physics, Charles University, Ke Karlovu 3, Prague, 121 16, Czech Republic); Pištora, Jaromír (Nanotechnology Centre, VSB – Technical University of Ostrava, 17. listopadu 15, Ostrava, 708 33, Czech Republic); Hamrle, Jaroslav, E-mail: jaroslav.hamrle@vsb.cz (IT4Innovations Centre and Department of Physics, VSB – Technical University of Ostrava, 17. listopadu 15, Ostrava, 708 33, Czech Republic; Faculty of Mathematics and Physics, Charles University, Ke Karlovu 3, Prague, 121 16, Czech Republic)
2016-12-15
We provide a unified phenomenological description of magnetooptic effects that are linear and quadratic in magnetization. The description is based on a few principal spectra describing the elements of the permittivity tensor up to second order in magnetization. Each permittivity tensor element, for any magnetization direction and any sample surface orientation, is simply determined by a weighted summation of the principal spectra, where the weights are given by the crystallographic and magnetization orientations. The number of principal spectra depends on the symmetry of the crystal; in cubic crystals possessing point symmetry only four principal spectra are needed. Here, the principal spectra are determined by ab initio calculations for bcc Fe, fcc Co and fcc Ni in the optical range as well as in the hard and soft x-ray energy range, i.e. at the 2p- and 3p-edges. We also express the principal spectra analytically using a modified Kubo formula.
Does Superintendents' Leadership Styles Influence Principals' Performance?
Davis, Theresa D.
2014-01-01
Educational leaders across the United States face changes affecting the educational system related to federal and state mandates. The stress of those changes may be related to superintendents' longevity. The superintendent position has a mobility rate that is quite high. Every superintendent is different and may have a different leadership style…
A maximum power point tracking for photovoltaic-SPE system using a maximum current controller
Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)
2003-02-01
Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system based on the maximum current searching methods has been designed and implemented. Based on the characteristics of voltage-current and theoretical analysis of SPE, it can be shown that the tracking of the maximum current output of DC-DC converter in SPE side will track the MPPT of photovoltaic panel simultaneously. This method uses a proportional integrator controller to control the duty factor of DC-DC converter with pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)
Computing Cumulative Interest and Principal Paid For a Calendar Year
John O. MASON
2011-01-01
This paper demonstrates how easy it is to use Microsoft Excel’s CUMPRINC and CUMIPMT functions to compute principal and interest paid for an entire year, even though the payments were made monthly. The CUMPRINC function computes the principal paid by a series of loan payments; the CUMIPMT function computes the interest paid. These two functions provide an alternative to preparing a monthly loan amortization schedule and adding up the amounts of monthly interest paid and principal paid for the year.
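The amortization arithmetic behind CUMPRINC and CUMIPMT can be sketched directly: each month's interest is the outstanding balance times the periodic rate, and the remainder of the fixed payment reduces principal. The sketch below mirrors that logic for end-of-period payments, ignoring Excel's sign conventions; the function and parameter names are illustrative, not Excel's API.

```python
def cum_interest_principal(principal, annual_rate, n_months, start, end):
    """Cumulative interest and principal paid between payment numbers
    `start` and `end` (inclusive, 1-based) of a fixed-rate loan -
    the quantities CUMIPMT and CUMPRINC return for that range."""
    r = annual_rate / 12.0
    # Fixed monthly payment from the standard annuity formula.
    pmt = principal * r / (1.0 - (1.0 + r) ** (-n_months))
    balance = principal
    cum_int = cum_prin = 0.0
    for k in range(1, end + 1):
        interest = balance * r          # interest portion this month
        prin = pmt - interest           # principal portion this month
        if k >= start:
            cum_int += interest
            cum_prin += prin
        balance -= prin
    return cum_int, cum_prin

# Interest and principal paid in the first calendar year of a
# 30-year loan of 100,000 at 6% nominal annual interest.
year1_int, year1_prin = cum_interest_principal(100000.0, 0.06, 360, 1, 12)
```

Summing the range 1..360 returns the full original principal, which is a handy sanity check on the loop.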
Cleber Souza Corrêa
2007-04-01
nocturnal convective events and with the generation of Mesoscale Convective Complexes (MCCs), giving high rainfall intensities, which may have important economic consequences. To describe the relations involved, the study uses a statistical method, principal component analysis. This method makes it possible to understand the complex interactions operating at the meteorological scales involved in synoptic processes, from macro- to meso-scale. Within this complexity, the part played by LLJs and Flows is to effect transport in the lower atmosphere, thereby coupling regional meteorology and water circulation at the continental scale.
Principal succession: The socialisation of a primary school principal in South Africa
Gertruida M. Steyn
2013-04-01
This study focussed on the socialisation of a new principal in a South African primary school with a strong Christian culture. He was appointed when the predecessor retired after more than two decades. The conceptual framework focuses on the three phases of socialisation: professional socialisation, organisational socialisation and occupational identity, which are used to interpret the study. A qualitative study, which occurred during two phases, investigated the phenomenon, principal succession, in the particular school. The data collection methods included a number of interviews with the principal, a focus group interview with staff members who experienced the previous principal’s leadership practice, and individual interviews with staff members. The following categories emerged from the data analysis: Recalling the previous principal: ‘One sees Mr X [the predecessor] everywhere’; Entry and orientation: ‘I found it intimidating initially’; and Immersion and reshaping: ‘Reins that previously were a bit slack, he is now pulling tight’.
Mapping ash properties using principal components analysis
Pereira, Paulo; Brevik, Eric; Cerda, Artemi; Ubeda, Xavier; Novara, Agata; Francos, Marcos; Rodrigo-Comino, Jesus; Bogunovic, Igor; Khaledian, Yones
2017-04-01
In post-fire environments ash has important benefits for soils, such as protection and a source of nutrients, crucial for vegetation recuperation (Jordan et al., 2016; Pereira et al., 2015a; 2016a,b). The thickness and distribution of ash are fundamental aspects for soil protection (Cerdà and Doerr, 2008; Pereira et al., 2015b), and the severity at which it was produced is important for the type and amount of elements released into the soil solution (Bodi et al., 2014). Ash is a very mobile material, and where it is eventually deposited matters. Until the first rainfalls it is very mobile; afterwards it binds to the soil surface and is harder to erode. Mapping ash properties in the immediate post-fire period is complex, since the ash is constantly moving (Pereira et al., 2015b). It is nevertheless an important task, since from the amount and type of ash produced we can identify the degree of soil protection and the nutrients that will be dissolved. The objective of this work is to map ash properties (CaCO3, pH, and selected extractable elements) using a principal component analysis (PCA) in the immediate period after the fire. Four days after the fire we established a grid in a 9x27 m area and took ash samples every 3 meters, for a total of 40 sampling points (Pereira et al., 2017). The PCA identified 5 different factors. Factor 1 had high positive loadings in electrical conductivity, calcium, and magnesium and negative loadings in aluminum and iron, while Factor 2 had high positive loadings in total phosphorus and silica. Factor 3 showed high positive loadings in sodium and potassium, Factor 4 high negative loadings in CaCO3 and pH, and Factor 5 high loadings in sodium and potassium. The experimental variograms of the extracted factors showed that the Gaussian model was the most precise for modelling Factor 1, the linear model for Factor 2, and the wave (hole-effect) model for Factors 3, 4 and 5. The maps produced confirm the patterns observed in the experimental variograms. Factor 1 and 2
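The factor-extraction step described above amounts to an eigendecomposition of the correlation matrix of the standardized ash properties, with loadings obtained by scaling the eigenvectors. A minimal sketch with synthetic data standing in for the 40 measured samples (the variable count and values are invented):

```python
import numpy as np

# Synthetic stand-in for the ash data set: 40 samples x 6 properties.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))

Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise each property
C = np.corrcoef(Z, rowvar=False)           # 6 x 6 correlation matrix

eigvals, eigvecs = np.linalg.eigh(C)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]          # sort factors by variance explained
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])  # factor loadings
scores = Z @ eigvecs[:, order]             # factor scores per sample
```

The columns of `loadings` play the role of Factors 1-5 above: a large positive or negative entry marks a property (e.g. electrical conductivity, calcium) that dominates that factor. The factor scores at the 40 grid points are what would then be kriged with the fitted variogram models.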
Incremental Tensor Principal Component Analysis for Handwritten Digit Recognition
Chang Liu
2014-01-01
To overcome the shortcomings of traditional dimensionality reduction algorithms, an incremental tensor principal component analysis (ITPCA) algorithm based on an updated-SVD technique is proposed in this paper. The paper proves the relationship between PCA, 2DPCA, MPCA, and the graph embedding framework theoretically, and derives the incremental learning procedures for adding a single sample and multiple samples in detail. Experiments on handwritten digit recognition demonstrate that ITPCA achieves better recognition performance than vector-based principal component analysis (PCA), incremental principal component analysis (IPCA), and multilinear principal component analysis (MPCA) algorithms. At the same time, ITPCA also has lower time and space complexity.
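The incremental-update idea behind ITPCA can be illustrated with scikit-learn's IncrementalPCA, which likewise refreshes a low-rank subspace as new samples arrive; note this is the vector-based variant, not the tensor algorithm of the paper, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(1)
# Hypothetical 8x8 "digit" images flattened to 64-dim vectors; ITPCA itself
# keeps the tensor structure, but the sample-by-sample update idea is the same.
batches = [rng.normal(size=(100, 64)) for _ in range(5)]

ipca = IncrementalPCA(n_components=10)
for batch in batches:          # add samples batch by batch (updated-SVD style)
    ipca.partial_fit(batch)

codes = ipca.transform(batches[0])
print(codes.shape)  # (100, 10): each sample reduced to 10 components
```

Because each `partial_fit` touches only one batch, memory use stays bounded regardless of how many samples are eventually incorporated, which is the source of the complexity advantage the abstract claims.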
Observations of Halley's Comet by the Solar Maximum Mission (SMM)
Niedner, M. B.
1986-01-01
Solar Maximum Mission coronagraph/polarimeter observations of large-scale phenomena in Halley's Comet are discussed, along with observations of the hydrogen coma with the UV spectrometer. The most interesting are the coronagraph/polarimeter observations of the disconnection event, in which the entire plasma tail uproots itself from the head of the comet, is convected away in the solar wind at speeds in the 50 to 100 km/sec range (relative to the head), and is replaced by a plasma tail constructed from folding ion-tail rays.
Gentile statistics with a large maximum occupation number
Dai Wusheng; Xie Mi
2004-01-01
In Gentile statistics the maximum occupation number can take on unrestricted integers: 1 < n < ∞. We show that for fugacity z > 1 the Bose-Einstein case is not recovered from Gentile statistics as n goes to N. Attention is also concentrated on the contribution of the ground state, which was ignored in the related literature. The thermodynamic behavior of a ν-dimensional Gentile ideal gas of particles with dispersion E = p^s/(2m), where ν and s are arbitrary, is analyzed in detail. Moreover, we provide an alternative derivation of the partition function for Gentile statistics.
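For context, the standard mean occupation number in Gentile statistics with maximum occupation n (a textbook result, not taken from this paper) interpolates between the familiar limits:

```latex
\langle \nu_k \rangle \;=\; \frac{1}{z^{-1} e^{\beta \varepsilon_k} - 1}
\;-\; \frac{n+1}{z^{-(n+1)} e^{(n+1)\beta \varepsilon_k} - 1}.
```

Setting n = 1 recovers the Fermi-Dirac distribution, while letting n → ∞ (for z^{-1} e^{βε_k} > 1) kills the second term and recovers Bose-Einstein; the subtlety discussed in the abstract is precisely what happens to this limit when z > 1.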
Global Harmonization of Maximum Residue Limits for Pesticides.
Ambrus, Árpád; Yang, Yong Zhen
2016-01-13
International trade plays an important role in national economics. The Codex Alimentarius Commission develops harmonized international food standards, guidelines, and codes of practice to protect the health of consumers and to ensure fair practices in the food trade. The Codex maximum residue limits (MRLs) elaborated by the Codex Committee on Pesticide Residues are based on the recommendations of the FAO/WHO Joint Meeting on Pesticide Residues (JMPR). The basic principles applied currently by the JMPR for the evaluation of experimental data and related information are described together with some of the areas in which further developments are needed.
Maximum mass of magnetic white dwarfs
Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez
2015-01-01
We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10^13 G. This fact establishes an upper bound for a magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we can apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound for B ∼ 10^13 G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist.
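For reference, the Tolman-Oppenheimer-Volkoff equations solved in the spherical step take the standard form (in units G = c = 1; this is the textbook statement, not the anisotropic cylindrical system of the authors' previous work):

```latex
\frac{dP}{dr} = -\,\frac{\left[\varepsilon(r)+P(r)\right]\left[m(r)+4\pi r^{3}P(r)\right]}{r\left[r-2m(r)\right]},
\qquad
\frac{dm}{dr} = 4\pi r^{2}\,\varepsilon(r).
```

In the magnetized case the single pressure P splits into parallel and perpendicular components, and it is the vanishing of physical solutions for the perpendicular component that sets the B ≳ 10^13 G bound quoted above.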
Trends in Estimated Mixing Depth Daily Maximums
Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.
2007-11-12
Mixing depth is an important quantity in the determination of air pollution concentrations. Fire weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are discussed.
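The potential-temperature criterion mentioned above can be sketched as follows; the function name, threshold value, and profile are illustrative assumptions, and the actual SRNL estimate also incorporates turbulence change with height.

```python
import numpy as np

def mixing_depth(z, theta, threshold=0.5):
    """Estimate the mixing depth (m) as the lowest level where potential
    temperature exceeds its surface value by `threshold` K, i.e. where the
    well-mixed (near-constant theta) layer ends and stable air begins."""
    excess = theta - theta[0]
    above = np.nonzero(excess > threshold)[0]
    return z[above[0]] if above.size else z[-1]

# Toy sounding: theta nearly constant up to ~1 km, then increasing (stable).
z = np.array([10., 100., 500., 1000., 1500., 2000.])       # heights in m
theta = np.array([300.0, 300.1, 300.2, 300.4, 301.5, 303.0])  # theta in K
print(mixing_depth(z, theta))  # 1500.0
```

Applied to model output from each RAMS version, a daily maximum of this quantity would yield the multi-year trend series the paper analyzes.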
Mammographic image restoration using maximum entropy deconvolution
Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R
2004-01-01
An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization.
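The general shape of PSF-based restoration can be sketched with a simple frequency-domain Wiener deconvolution; this is a stand-in for illustration only, not the Bayesian MEM algorithm of the paper, and the regularization constant k loosely plays the role of the noise prior.

```python
import numpy as np

def wiener_deconvolve(image, psf, k=1e-2):
    """Restore an image blurred by a measured PSF via Wiener filtering:
    F = conj(H) / (|H|^2 + k) * G in the frequency domain."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))

# Toy example: blur a point source with a Gaussian PSF, then restore it.
img = np.zeros((64, 64)); img[32, 32] = 1.0
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x**2 + y**2) / 2.0); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred, psf)
print(restored.shape)  # (64, 64)
```

As in the mammography application, the key input is the measured PSF; the restored point is much sharper than the blurred one, at the cost of noise amplification controlled by k.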
Maximum Margin Clustering of Hyperspectral Data
Niazmardi, S.; Safari, A.; Homayouni, S.
2013-09-01
In recent decades, large-margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of available training data. To tackle the problems associated with training data, researchers have put effort into extending the capability of large-margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC objective is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as a semi-definite program (SDP), which is computationally very expensive and can only handle small data sets. Moreover, most of these algorithms perform two-class classification, which cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. The algorithm is also extended to multi-class classification, and its performance is evaluated. The results show that the algorithm yields acceptable results for hyperspectral data clustering.
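The alternating-optimization idea can be sketched as iterating two steps: fix the labels and fit an SVM, then fix the SVM and relabel points by its decision function. This is a minimal two-class sketch on synthetic data, not the paper's algorithm; a practical MMC also enforces a class-balance constraint to avoid the trivial one-cluster solution.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
# Two hypothetical, well-separated spectral clusters standing in for pixels.
X = np.vstack([rng.normal(-2, 1, (100, 5)), rng.normal(2, 1, (100, 5))])

# Initialize labels (here with k-means), then alternate:
#   1) fix labels -> fit a max-margin hyperplane; 2) fix hyperplane -> relabel.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for _ in range(5):
    svm = LinearSVC(C=1.0, max_iter=5000).fit(X, labels)
    new = (svm.decision_function(X) > 0).astype(int)
    if np.array_equal(new, labels):   # converged: labels stopped changing
        break
    labels = new
print(len(np.unique(labels)))  # 2
```

Each iteration can only increase the margin of the current labeling, which is why the loop terminates, though only at a local optimum of the non-convex objective.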
Paving the road to maximum productivity.
Holland, C
1998-01-01
"Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.
Maximum power flux of auroral kilometric radiation
Benson, R.F.; Fainberg, J.
1991-01-01
The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 from previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 x 10^-13 W m^-2 Hz^-1 at 360 kHz normalized to a radial distance r of 25 R_E assuming the power falls off as r^-2. A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3.
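The r^-2 normalization used above is a one-line scaling; the sketch below (function name and example values are illustrative, not from the paper) shows how a flux observed at one geocentric distance is referred to the 25 R_E reference distance.

```python
def normalize_flux(s_obs, r_obs, r_ref=25.0):
    """Scale an observed power flux to a reference radial distance,
    assuming the power flux falls off as r**-2 (distances in Earth radii)."""
    return s_obs * (r_obs / r_ref) ** 2

# E.g. a hypothetical flux of 1e-13 W m^-2 Hz^-1 measured at 60 R_E,
# referred inward to 25 R_E:
print(normalize_flux(1.0e-13, 60.0))  # approximately 5.76e-13
```

Moving the reference point closer to the source than the observation point increases the normalized flux by the squared distance ratio, which is how distant ISEE 3 measurements become comparable to the in-situ source-region values from ISIS 1 and Viking.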