Maximum entropy reconstruction of spin densities involving non-uniform prior
International Nuclear Information System (INIS)
Schweizer, J.; Ressouche, E.; Papoular, R.J.; Zheludev, A.I.
1997-01-01
Diffraction experiments give microscopic information on structures in crystals. A method based on the concept of maximum entropy (MaxEnt) has proved a formidable improvement in the treatment of diffraction data. The method rests on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one with the highest prior (intrinsic) probability. If all points of the map are taken to be equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. The method has been used to reconstruct charge densities from X-ray data, maps of nuclear densities from unpolarized neutron data, and distributions of spin density. The density maps obtained by this method, as compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. In most cases, however, some knowledge about the distribution under investigation exists before the measurements are performed. It can range from simple information on the type of scattering electrons to an elaborate theoretical model. In these cases the uniform prior, which considers all pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the maximum entropy formalism through a model m(r), via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model observed in the final map is really contained in the data, as with the new definition it costs entropy. This paper presents illustrations of model testing
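The abstract names but does not reproduce the generalized entropy; in the form conventionally attributed to Skilling (a standard statement, not copied from the paper itself) it reads:

```latex
S[\rho] = -\int \left[\, \rho(\mathbf{r})\,\ln\frac{\rho(\mathbf{r})}{m(\mathbf{r})}
\;-\; \rho(\mathbf{r}) \;+\; m(\mathbf{r}) \,\right] \mathrm{d}^3 r
```

Setting the functional derivative to zero in the absence of data gives ln(ρ/m) = 0, i.e. ρ(r) = m(r), consistent with the statement above; a constant m(r) recovers the flat-prior Boltzmann entropy up to an additive constant.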
Effects of bruxism on the maximum bite force
Directory of Open Access Journals (Sweden)
Todić Jelena T.
2017-01-01
Background/Aim. Bruxism is a parafunctional activity of the masticatory system characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism was registered using a specific clinical questionnaire on bruxism and a physical examination. The subjects from both groups underwent measurement of maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying the maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism increases the maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
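The force computation in the pressure-film protocol above is a unit-consistent product; a minimal sketch in Python, with hypothetical readings (the values are illustrative, not from the study):

```python
def max_bite_force(pressure_mpa: float, contact_area_mm2: float) -> float:
    """Maximal bite force (N) as the product of maximal bite pressure
    and occlusal contact area, as in the pressure-film protocol.
    1 MPa = 1 N/mm^2, so MPa * mm^2 gives newtons directly."""
    return pressure_mpa * contact_area_mm2

# Hypothetical film readings: 40 MPa peak pressure over 20 mm^2 contact.
force_n = max_bite_force(40.0, 20.0)  # 800.0 N
```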
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
Abe, Sumiyoshi
2014-11-01
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders
2010-06-01
Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently, many promising approaches for determining an upper bound on the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. However, the conventional Bayesian CI becomes unacceptably large in real-world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the results on the test examples; the uniform prior density distribution employed provides no information at all, reflecting a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to investigate a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests on non-overlapping sets of examples. Experimental results show that ME-based priors improve the CIs when applied to four quite different simulated and two real-world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
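As a sketch of the statistics involved (a generic construction, not the paper's specific ME fit): with a Beta(a, b) prior over the unknown error rate and k errors on n holdout examples, the posterior is Beta(a + k, b + n - k), and a = b = 1 recovers the conventional uniform-prior CI. The informative hyperparameters below are hypothetical stand-ins for an empirically fitted non-uniform prior:

```python
import math

def beta_interval(errors, n, a=1.0, b=1.0, level=0.95, grid=20000):
    """Equal-tailed Bayesian credibility interval for an error rate.

    Beta(a, b) prior + binomial likelihood -> Beta(a + errors,
    b + n - errors) posterior, evaluated on a grid (no SciPy needed).
    a = b = 1 is the conventional uniform prior.
    """
    aa, bb = a + errors, b + n - errors
    # Log normalizing constant of the Beta posterior density.
    logc = math.lgamma(aa + bb) - math.lgamma(aa) - math.lgamma(bb)
    xs = [(i + 0.5) / grid for i in range(grid)]
    pdf = [math.exp(logc + (aa - 1) * math.log(x)
                    + (bb - 1) * math.log(1.0 - x)) for x in xs]
    total = sum(pdf)
    acc, cdf = 0.0, []
    for p in pdf:
        acc += p
        cdf.append(acc / total)
    alpha = (1.0 - level) / 2.0
    lo = next(x for x, c in zip(xs, cdf) if c >= alpha)
    hi = next(x for x, c in zip(xs, cdf) if c >= 1.0 - alpha)
    return lo, hi

# Uniform prior on a small test set (10 errors out of 50): a wide CI.
lo_u, hi_u = beta_interval(10, 50)
# Hypothetical informative prior concentrated near 20% error, standing
# in for a non-uniform density fitted with the maximum entropy principle.
lo_m, hi_m = beta_interval(10, 50, a=8.0, b=32.0)
```

On small test sets the informative prior tightens the interval noticeably; as n grows the two intervals converge, which matches the paper's focus on test sets below a few hundred examples.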
Ellis, Sam; Reader, Andrew J
2018-04-26
Many clinical contexts require the acquisition of multiple positron emission tomography (PET) scans of a single subject, for example to observe and quantify changes in functional behaviour in tumours after treatment in oncology. Typically, the datasets from each of these scans are reconstructed individually, without exploiting the similarities between them. We have recently shown that sharing information between longitudinal PET datasets by penalising voxel-wise differences during image reconstruction can improve reconstructed images by reducing background noise and increasing the contrast-to-noise ratio of high-activity lesions. Here we present two additional novel longitudinal difference-image priors and evaluate their performance using 2D simulation studies and a 3D real-dataset case study. We have previously proposed a simultaneous difference-image-based penalised maximum likelihood (PML) longitudinal image reconstruction method that encourages sparse difference images (DS-PML), and in this work we propose two further novel prior terms. The priors are designed to encourage longitudinal images with corresponding differences which have i) low entropy (DE-PML), and ii) high sparsity in their spatial gradients (DTV-PML). These two new priors and the originally proposed longitudinal prior were applied to 2D simulated treatment-response [18F]fluorodeoxyglucose (FDG) brain tumour datasets and compared to standard maximum likelihood expectation-maximisation (MLEM) reconstructions. These 2D simulation studies explored the effects of penalty strengths, tumour behaviour, and inter-scan coupling on reconstructed images. Finally, a real two-scan longitudinal data series acquired from a head and neck cancer patient was reconstructed with the proposed methods and the results compared to standard reconstruction methods. Using any of the three priors with an appropriate penalty strength produced images with noise levels equivalent to those seen when using standard
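For orientation, the MLEM baseline that the penalised methods are compared against has a simple multiplicative update. A minimal toy sketch (a generic MLEM, not the authors' reconstruction code):

```python
def mlem(A, y, n_iter=500):
    """Maximum likelihood expectation-maximisation for y ~ Poisson(Ax).

    A: system matrix as a list of rows (m bins x n pixels); y: measured
    counts. Standard multiplicative update:
        x <- x / (A^T 1) * A^T (y / (A x))
    """
    m, n = len(A), len(A[0])
    x = [1.0] * n  # positive initial image
    # Sensitivity image A^T 1 (per-pixel column sums).
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / sens[j] if sens[j] > 0 else 0.0
             for j in range(n)]
    return x

# Toy 2-pixel, 2-bin example with exact (noiseless) data.
A = [[0.8, 0.2], [0.2, 0.8]]
y = [9.0, 6.0]            # projections of the true image [10, 5]
x_hat = mlem(A, y)
```

The PML variants in the paper add a penalty on the difference image to this likelihood objective, so their update no longer has this simple closed multiplicative form.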
Directory of Open Access Journals (Sweden)
Michael J. Markham
2011-07-01
Some problems occurring in expert systems can be resolved by employing a causal (Bayesian) network, and methodologies exist for this purpose. These require data in a specific form and make assumptions about the independence relationships involved. Methodologies using Maximum Entropy (ME) are free from these conditions and have the potential to be used in a wider context, including systems consisting of given sets of linear and independence constraints, subject to consistency and convergence. ME can also be used to validate results from the causal network methodologies. Three ME methods for determining the prior probability distribution of causal network systems are considered. The first method is Sequential Maximum Entropy, in which the computation of a progression of local distributions leads to the overall distribution. This is followed by a development of the Method of Tribus. The development takes the form of an algorithm that includes the handling of explicit independence constraints. These fall into two groups: those relating the parents of vertices, and those deduced from triangulation of the remaining graph. The third method involves a variation in the part of that algorithm which handles independence constraints. Evidence is presented that this adaptation requires only the linear constraints and the parental independence constraints to emulate the second method in a substantial class of examples.
Application of orthodontic forces prior to autotransplantation - case reports.
Cho, J-H; Hwang, H-S; Chang, H-S; Hwang, Y-C
2013-02-01
This case report describes the successful autotransplantation of mandibular molars after application of orthodontic forces and discusses the advantages of this technique, that is, pre-application of an orthodontic force for autotransplantation. After clinical and radiographic examination, autotransplantation was planned with the patient's written informed consent. An orthodontic force was applied, and the surgical procedure was performed after tooth mobility had increased. Root canal treatment was performed within 2 weeks of autotransplantation. At the 1-year follow-up, the transplanted teeth revealed asymptomatic and healthy periodontal conditions. Autotransplantation is the surgical movement of a tooth from its original location to another site. The pre-application of orthodontic force technique was recently introduced for autogenous tooth transplantation. Pre-application of an orthodontic force may be a useful treatment option for autotransplantation. © 2012 International Endodontic Journal.
Directory of Open Access Journals (Sweden)
Hea-Jung Kim
2016-05-01
This paper proposes a two-stage maximum entropy prior to elicit uncertainty regarding a multivariate interval constraint of the location parameter of a scale mixture of normal model. Using Shannon’s entropy, this study demonstrates how the prior, obtained by using two stages of a prior hierarchy, appropriately accounts for the information regarding the stochastic constraint and suggests an objective measure of the degree of belief in the stochastic constraint. The study also verifies that the proposed prior plays the role of bridging the gap between the canonical maximum entropy prior of the parameter with no interval constraint and that with a certain multivariate interval constraint. It is shown that the two-stage maximum entropy prior belongs to the family of rectangle screened normal distributions that is conjugate for samples from a normal distribution. Some properties of the prior density, useful for developing a Bayesian inference of the parameter with the stochastic constraint, are provided. We also propose a hierarchical constrained scale mixture of normal model (HCSMN), which uses the prior density to estimate the constrained location parameter of a scale mixture of normal model and demonstrates the scope of its applicability.
Influence of maximum bite force on jaw movement during gummy jelly mastication.
Kuninori, T; Tomonari, H; Uehara, S; Kitashima, F; Yagi, T; Miyawaki, S
2014-05-01
It is known that maximum bite force has various influences on chewing function; however, few studies have clarified the relationship between maximum bite force and masticatory jaw movement. The aim of this study was to investigate the effect of maximum bite force on masticatory jaw movement in subjects with normal occlusion. Thirty young adults (22 men and 8 women; mean age, 22.6 years) with good occlusion were divided into two groups based on whether they had a relatively high or low maximum bite force according to the median. The maximum bite force was determined with the Dental Prescale System using pressure-sensitive sheets. Jaw movement during mastication of hard gummy jelly (5.5 g each) on the preferred chewing side was recorded using a six-degrees-of-freedom jaw movement recording system. The motion of the lower incisal point of the mandible was computed, and the mean values of 10 cycles (cycles 2-11) were calculated. A masticatory performance test was conducted using gummy jelly. Subjects with a lower maximum bite force showed increased maximum lateral amplitude, closing distance, width and closing angle; wider masticatory jaw movement; and significantly lower masticatory performance. However, no differences in the maximum vertical or maximum anteroposterior amplitudes were observed between the groups. Although other factors, such as individual morphology, may influence masticatory jaw movement, our results suggest that subjects with a lower maximum bite force show increased lateral jaw motion during mastication. © 2014 John Wiley & Sons Ltd.
Using Transformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions
Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.
2014-12-01
One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.
Maximum a posteriori covariance estimation using a power inverse wishart prior
DEFF Research Database (Denmark)
Nielsen, Søren Feodor; Sporring, Jon
2012-01-01
The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximum...
The prior-derived F constraints in the maximum-entropy method
Czech Academy of Sciences Publication Activity Database
Palatinus, Lukáš; van Smaalen, S.
2005-01-01
Roč. 61, - (2005), s. 363-372 ISSN 0108-7673 Institutional research plan: CEZ:AV0Z10100521 Keywords : charge density * maximum-entropy method * sodium nitrite Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.791, year: 2005
Psychophysical basis for maximum pushing and pulling forces: A review and recommendations.
Garg, Arun; Waters, Thomas; Kapellusch, Jay; Karwowski, Waldemar
2014-03-01
The objective of this paper was to perform a comprehensive review of psychophysically determined maximum acceptable pushing and pulling forces. Factors affecting pushing and pulling forces are identified and discussed. Recent studies show a significant decrease (compared to previous studies) in maximum acceptable forces for males but not for females when pushing and pulling on a treadmill. A comparison of pushing and pulling forces measured using a high inertia cart with those measured on a treadmill shows that the pushing and pulling forces using high inertia cart are higher for males but are about the same for females. It is concluded that the recommendations of Snook and Ciriello (1991) for pushing and pulling forces are still valid and provide reasonable recommendations for ergonomics practitioners. Regression equations as a function of handle height, frequency of exertion and pushing/pulling distance are provided to estimate maximum initial and sustained forces for pushing and pulling acceptable to 75% male and female workers. At present it is not clear whether pushing or pulling should be favored. Similarly, it is not clear what handle heights would be optimal for pushing and pulling. Epidemiological studies are needed to determine relationships between psychophysically determined maximum acceptable pushing and pulling forces and risk of musculoskeletal injuries, in particular to low back and shoulders.
Does combined strength training and local vibration improve isometric maximum force? A pilot study.
Goebel, Ruben; Haddad, Monoem; Kleinöder, Heinz; Yue, Zengyuan; Heinen, Thomas; Mester, Joachim
2017-01-01
The aim of the study was to determine whether a combination of strength training (ST) and local vibration (LV) improved the isometric maximum force of the arm flexor muscles. ST was applied to the left arm of the subjects; LV was applied to the right arm of the same subjects. The main aim was to examine the effect of LV during a dumbbell biceps curl (Scott curl) on the isometric maximum force of the opposite muscle in the same subjects. It was hypothesized that the intervention with LV produces a greater gain in isometric force of the arm flexors than ST. Twenty-seven collegiate students participated in the study. The training load was 70% of the individual 1 RM. Four sets of 12 repetitions were performed three times per week for four weeks. The right arm of all subjects represented the vibration-trained side (VS) and the left arm served as the traditionally trained side (TTS). A significant increase in isometric maximum force occurred in both arms. The VS, however, increased isometric maximum force by about 43%, in contrast to 22% for the TTS. The combined intervention of ST and LV improves the isometric maximum force of the arm flexor muscles. Level of evidence: III.
An investigation of rugby scrimmaging posture and individual maximum pushing force.
Wu, Wen-Lan; Chang, Jyh-Jong; Wu, Jia-Hroung; Guo, Lan-Yuen
2007-02-01
Although rugby is a popular contact sport and isokinetic muscle torque assessment has recently found widespread application in sports medicine, little research has examined the factors associated with the performance of game-specific skills directly by using an isokinetic-type rugby scrimmaging machine. This study was designed to (a) measure and observe the differences in the maximum individual forward pushing force produced by scrimmaging in different body postures (3 body heights x 2 foot positions) with a self-developed rugby scrimmaging machine and (b) observe the variations in hip, knee, and ankle angles in different body postures and explore the relationship between these angles and the individual maximum pushing force. Ten national rugby players were invited to participate in the examination. The experimental equipment included a self-developed rugby scrimmaging machine and a 3-dimensional motion analysis system. Our results showed that foot position (parallel and nonparallel) did not affect the maximum pushing force; however, the maximum pushing force was significantly lower in posture I (36% body height) than in posture II (38%) and posture III (40%). The maximum forward force in posture III (40% body height) was also slightly greater than in posture II (38% body height). In addition, hip, knee, and ankle angles under parallel foot positioning were closely negatively correlated with maximum pushing force in scrimmaging. In cross-feet postures, there was a positive correlation between individual forward force and the hip angle of the rear leg. From our results, we can conclude that standing in an appropriate starting position at the early stage of scrimmaging benefits forward force production.
Verification of maximum impact force for interim storage cask for the Fast Flux Test Facility
International Nuclear Information System (INIS)
Chen, W.W.; Chang, S.J.
1996-01-01
The objective of this paper is to perform an impact analysis of the Interim Storage Cask (ISC) of the Fast Flux Test Facility (FFTF) for a 4-ft end drop. The ISC is a concrete cask used to store spent nuclear fuel. The analysis verifies the impact force calculated by General Atomics (General Atomics, 1994) using the ILMOD computer code. ILMOD determines the maximum force developed by the concrete crushing that occurs when the drop energy has been absorbed. The maximum force, multiplied by the dynamic load factor (DLF), was used to determine the maximum g-level on the cask during a 4-ft end drop accident onto the heavily reinforced concrete surface of the FFTF Reactor Service Building. For the analysis, this surface was assumed to be unyielding and the cask was assumed to absorb all the drop energy. This conservative assumption simplified the modeling used to qualify the cask's structural integrity for this accident condition.
Relationship between oral status and maximum bite force in preschool children
Directory of Open Access Journals (Sweden)
Ching-Ming Su
2009-03-01
Conclusion: The results of this study indicate that associations of bite force with factors such as age, maximum mouth opening, and the number of teeth in contact were clearer than those with other variables such as body height, body weight, occlusal pattern, and tooth decay or fillings.
A preliminary study to find out maximum occlusal bite force in Indian individuals
DEFF Research Database (Denmark)
Jain, Veena; Mathur, Vijay Prakash; Pillai, Rajath
2014-01-01
PURPOSE: This preliminary hospital based study was designed to measure the mean maximum bite force (MMBF) in healthy Indian individuals. An attempt was made to correlate MMBF with body mass index (BMI) and some of the anthropometric features. METHODOLOGY: A total of 358 healthy subjects in the ag...
Bilateral differences in peak force, power, and maximum plie depth during multiple grande jetes
Wyon, M.; Harris, J.; Brown, D.D.; Clark, F.
2013-01-01
A lateral bias has been previously reported in dance training. The aim of this study was to investigate whether there are any bilateral differences in peak forces, power, and maximum knee flexion during a sequence of three grand jetes and how they relate to leg dominance. A randomised observational
Ngo, Chuong; Leonhardt, Steffen; Zhang, Tony; Lüken, Markus; Misgeld, Berno; Vollmer, Thomas; Tenbrock, Klaus; Lehmann, Sylvia
2017-01-01
Electrical impedance tomography (EIT) provides global and regional information about ventilation by means of relative changes in electrical impedance measured with electrodes placed around the thorax. In combination with lung function tests, e.g. spirometry and body plethysmography, regional information about lung ventilation can be achieved. Impedance changes strictly correlate with lung volume during tidal breathing and mechanical ventilation. Initial studies presumed a correlation also during forced expiration maneuvers. To quantify the validity of this correlation in extreme lung volume changes during forced breathing, a measurement system was set up and applied on seven lung-healthy volunteers. Simultaneous measurements of changes in lung volume using EIT imaging and pneumotachography were obtained with different breathing patterns. Data was divided into a synchronizing phase (spontaneous breathing) and a test phase (maximum effort breathing and forced maneuvers). The EIT impedance changes correlate strictly with spirometric data during slow breathing with increasing and maximum effort ([Formula: see text]) and during forced expiration maneuvers ([Formula: see text]). Strong correlations in spirometric volume parameters [Formula: see text] ([Formula: see text]), [Formula: see text]/FVC ([Formula: see text]), and flow parameters PEF, [Formula: see text], [Formula: see text], [Formula: see text] ([Formula: see text]) were observed. According to the linearity during forced expiration maneuvers, EIT can be used during pulmonary function testing in combination with spirometry for visualisation of regional lung ventilation.
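The agreement reported above rests on linear correlation between the EIT impedance waveform and the spirometric volume signal (the specific coefficients are lost to the "[Formula: see text]" placeholders). A minimal sketch of that analysis, with hypothetical synchronized samples:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, the statistic behind the
    reported agreement between impedance change and lung volume."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical synchronized samples over one breathing maneuver:
# global impedance change (arbitrary units) vs. integrated
# pneumotachography volume (litres).
z = [0.0, 0.8, 1.9, 3.1, 3.9, 4.1]
v = [0.0, 0.5, 1.2, 2.0, 2.6, 2.8]
r = pearson_r(z, v)  # close to 1 for a near-linear relationship
```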
Influence of Dynamic Neuromuscular Stabilization Approach on Maximum Kayak Paddling Force
Directory of Open Access Journals (Sweden)
Davidek Pavel
2018-03-01
The purpose of this study was to examine the effect of Dynamic Neuromuscular Stabilization (DNS) exercise on maximum paddling force (PF) and self-reported pain perception in the shoulder girdle area in flatwater kayakers. Twenty male flatwater kayakers from a local club (age = 21.9 ± 2.4 years, body height = 185.1 ± 7.9 cm, body mass = 83.9 ± 9.1 kg) were randomly assigned to the intervention or control group. During the 6-week study, subjects from both groups performed standard off-season training. Additionally, the intervention group engaged in a DNS-based core stabilization exercise program (quadruped exercise, side sitting exercise, sitting exercise and squat exercise) after each standard training session. Using a kayak ergometer, the maximum PF was measured four times during the six weeks. All subjects completed the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire before and after the 6-week interval to evaluate subjective pain perception in the shoulder girdle area. Initially, no significant differences in maximum PF or DASH scores were identified between the two groups. Repeated-measures analysis of variance indicated that the experimental group improved significantly compared to the control group on maximum PF (p = .004; Cohen's d = .85), but not on the DASH questionnaire score (p = .731) during the study. Integration of DNS with traditional flatwater kayak training may significantly increase maximum PF, but may not affect pain perception to the same extent.
Directory of Open Access Journals (Sweden)
Ariane Martins
2010-08-01
The relationship between strength and balance shows controversial results and has direct implications for exercise prescription practice. The objective was to investigate the relationship between the maximum dynamic force (MDF) of the lower limbs and static and dynamic balance. Sixty strength-training novices aged 18 to 24 years participated in the study. MDF was assessed by means of the one repetition maximum (1RM) in the leg press and knee extension exercises, and motor tests assessed static and dynamic balance. Correlation tests and multiple linear regression were applied. The force and balance variables were correlated in females (p = 0.038). Body mass and static balance were correlated in males (p = 0.045). The explanatory capacity of MDF and practice time was small: 13% for static balance in males, and 18% and 17%, respectively, for static and dynamic balance in females. In conclusion, the MDF of the lower limbs showed low predictive capacity for performance in static and dynamic balance, especially in males.
Lachniet, M. S.; Asmerom, Y.; Bernal, J. P.; Polyak, V.; Vazquez-Selem, L. V.
2012-12-01
The external forcings on global monsoon strength include summer orbital insolation and ocean circulation changes, both of which are key control knobs on Earth's climate. However, few records of the North American Monsoon (NAM) are available to test its sensitivity to variations in the precession-dominated insolation signal and Atlantic Meridional Overturning Circulation (AMOC) for the Last Glacial Maximum (LGM; 21 ± 3 cal ka BP) and deglacial periods. In particular, well-dated and high-resolution records from the southern sector of the NAM, referred to informally as the Mesoamerican monsoon to distinguish it from the more northerly 'core' NAM, are needed to better elucidate paleoclimate change in North America. Here, we present a 22 ka (ka = kilo years) rainfall history from absolutely-dated speleothems from tropical southwestern Mexico that documents a vigorous LGM summer monsoon, in contradiction to previous interpretations, and that the monsoon collapsed during the Heinrich stadial 1 and Younger Dryas cold events. We conclude that a strong Mesoamerican monsoon requires both a large ocean-to-land temperature contrast, driven as today by summer insolation, and a proximal latitudinal position of the Intertropical Convergence Zone, forced by active AMOC.
Buaraphan, Khajornsak
2018-01-01
According to constructivist theory, students' prior conceptions play an important role in their process of knowledge construction, and teachers must take those prior conceptions into account when designing learning activities. This interpretive study was conducted to explore grade 8 students' conceptions of force and motion. The research participants were 42 students (21 male, 21 female) from seven Educational Opportunity Expansion Schools in Nakhon Pathom province, located in the central region of Thailand. In each school, two low, two medium and two high achievers were selected. The Interview-About-Instances (IAI) technique was used to collect data. All interviews were audio recorded and subsequently transcribed verbatim. The students' conceptions were categorized as scientific conception (SC), partial scientific conception (PC) or alternative conception (AC). The frequency of each category was counted and converted to a percentage. The results revealed that the students held a variety of prior conceptions about force and motion, ranging from SC and PC to AC. Each student, including the high achievers, held mixed conceptions of force and motion. Interestingly, the two dominant ACs held by the students were: a) force-implies-motion or motion-implies-force, and b) force coming only from an active agent. Science teachers need to take these ACs into account when designing learning activities to cope with them. Implications for teaching and learning about force and motion are also discussed.
Gharehchahi, Jafar; Asadzadeh, Nafiseh; Mirmortazavi, Amirtaher; Shakeri, Mohammad Taghi
2013-10-01
The initial retention of implant-assisted removable partial dentures (IARPDs) is unknown. The purpose of this in vitro study was to compare the maximum dislodging forces of distal extension mandibular IARPDs with two different attachments and three clasp designs. A simulated class I partially edentulous mandible was prepared with two screw-type 3.75 × 12 mm implants in the first molar regions and two metal-ceramic crowns on the distal abutments. Fifteen bilateral distal extension frameworks were conventionally fabricated in three clasp designs (suprabulge, infrabulge, no clasp). Locator attachments were connected to the 15 denture bases with autopolymerized resin. Each specimen was subjected to four types of retention pulls (main, anterior, posterior, unilateral) five times with a universal testing machine. The Locator attachments were then replaced with O-ring attachments, and the same procedure was performed. The study groups were therefore: IARPD with Locator attachment and suprabulge clasp (group 1), IARPD with Locator attachment and infrabulge clasp (group 2), IARPD with Locator attachment and no clasp (group 3), IARPD with O-ring attachment and suprabulge clasp (group 4), IARPD with O-ring attachment and infrabulge clasp (group 5), and IARPD with O-ring attachment and no clasp (group 6). Data were analyzed using one-way ANOVA, two-way ANOVA, and Tukey tests. The highest mean value was 22.99 lb for prostheses with a Locator attachment and suprabulge clasp. The lowest retentive values were recorded for IARPDs with O-ring attachments. The results of this in vitro study suggest that the precise selection of attachments with or without clasp assemblies may affect the clinical success of mandibular IARPDs. © 2013 by the American College of Prosthodontists.
DEFF Research Database (Denmark)
Christensen, Peter Astrup; Jacobsen, Jacob Ole; Thorlund, Jonas B
2008-01-01
PURPOSE: The purpose of the present study was to examine the impact of 8 days of immobilization during a Special Support and Reconnaissance mission (SSR) on muscle mass, contraction dynamics, maximum jump height/power, and body composition. METHODS: Unilateral maximal voluntary contraction, rate of force development, and maximal jump height were tested to assess muscle strength/power along with whole-body impedance analysis before and after SSR. RESULTS: Body weight, fat-free mass, and total body water decreased (4-5%) after SSR, along with impairments in maximal jump height (-8%) and knee extensor maximal voluntary contraction (-10%). Furthermore, rate of force development was severely affected (-15-30%). CONCLUSIONS: Eight days of immobilization during a covert SSR mission by Special Forces soldiers led to substantial decrements in maximal muscle force and especially in rapid muscle force...
Directory of Open Access Journals (Sweden)
Wing-Kai Lam
Full Text Available The lunge is one of the most frequently executed movements in badminton, involving a uniquely large sagittal footstrike angle of more than 40 degrees at initial ground contact compared with other manoeuvres. This study examined whether the heel curvature design of a badminton shoe influences shoe-ground kinematics, ground reaction forces, and knee moments during the lunge. Eleven elite and fifteen intermediate players performed five left-forward maximum lunge trials in a Rounded Heel Shoe (RHS), a Flattened Heel Shoe (FHS), and a Standard Heel Shoe (SHS). Shoe-ground kinematics, ground reaction forces, and knee moments were measured using a synchronized force platform and motion analysis system. A 2 (Group) x 3 (Shoe) ANOVA with repeated measures was performed to determine the effects of shoe and playing level, as well as their interaction, on all variables. A Shoe effect indicated that players demonstrated a lower maximum vertical loading rate in the RHS than in the other two shoes (P < 0.05). A Group effect revealed that elite players exhibited a larger footstrike angle, faster approaching speed, lower peak horizontal force and horizontal loading rates, but higher vertical loading rates and larger peak knee flexion and extension moments (P < 0.05). Group x Shoe interactions for maximum and mean vertical loading rates (P < 0.05) indicated that elite players exhibited lower left maximum and mean vertical loading rates in the RHS compared with the FHS (P < 0.01), while the intermediate group showed no Shoe effect on vertical loading rates. These findings indicate that shoe heel curvature plays some role in altering ground reaction force impact during the badminton lunge. The differences in impact loads and knee moments between elite and intermediate players may be useful in optimizing footwear design and training strategies to minimize the potential risk of impact-related injuries in badminton.
Directory of Open Access Journals (Sweden)
Domingo Morales-Palma
2017-11-01
Full Text Available The maximum force criteria and their derivatives, the Swift and Hill criteria, have been used extensively in the past to study sheet formability. Many extensions or modifications of these criteria have been proposed to improve necking predictions under pure stretching conditions. This work analyses the maximum force principle under stretch-bending conditions and develops two different approaches to predict necking. The first is a generalisation of the classical maximum force criteria to stretch-bending processes. The second is an extension of previous work by the authors based on critical-distance concepts, suggesting that necking of the sheet is controlled by the damage of a critical material volume located at the inner side of the sheet. An analytical deformation model is proposed to characterise the stretch-bending process under plane-strain conditions. Different parameters are considered, such as the thickness reduction, the gradient of variables through the sheet thickness, the thickness stress and the anisotropy of the material. The proposed necking models have been successfully applied to predict failure in different materials, such as steel, brass and aluminium.
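The classical maximum force condition underlying the Swift and Hill criteria can be stated compactly. As a brief sketch for uniaxial tension (true stress σ, true strain ε, volume constancy):

```latex
% Tensile force and volume constancy:
F = \sigma A , \qquad \frac{dA}{A} = -\,d\varepsilon .
% Maximum force (onset of diffuse necking): dF = 0
dF = A\,d\sigma + \sigma\,dA = 0
\quad\Longrightarrow\quad
\frac{d\sigma}{d\varepsilon} = \sigma .
% For Hollomon hardening \sigma = K\varepsilon^{n}, necking begins at
\varepsilon^{*} = n .
```

The stretch-bending extensions studied in the paper generalise this force-maximum condition to account for the through-thickness strain gradient.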
Linsen, Sabine S; Oikonomou, Annina; Martini, Markus; Teschke, Marcus
2018-05-01
The purpose was to analyze mandibular kinematics and maximum voluntary bite force in patients following segmental resection of the mandible without and with reconstruction (autologous bone, or alloplastic total temporomandibular joint replacement (TMJ TJR)). Patients operated on between April 2002 and August 2014 were enrolled in the study. Condylar (CRoM) and incisal (InRoM) range of motion and deflection during opening, condylar retrusion, incisal lateral excursion, mandibular rotation angle during opening, and maximum voluntary bite force were determined on the non-affected side and compared between groups. The influence of co-factors (defect size, soft tissue deficit, neck dissection, radiotherapy, occlusal contact zones (OCZ), and time) was determined. Twelve non-reconstructed and 26 reconstructed patients (13 autologous, 13 TMJ TJR) were included in the study. InRoM during opening and bite force were significantly higher (P ≤ .024), and both condylar and incisal deflection during opening significantly lower (P ≤ .027), in reconstructed patients compared with non-reconstructed patients. Differences between the autologous and TMJ TJR groups were not statistically significant. The co-factors defect size, soft tissue deficit, and neck dissection had the greatest impact on kinematics, and the number of OCZs had the greatest impact on bite force. Reconstructed patients (both autologous and TMJ TJR) have better overall function than non-reconstructed patients. Reconstruction after segmental mandibular resection has positive effects on mandibular function. TMJ TJR seems to be a suitable technique for the reconstruction of mandibular defects involving the TMJ complex.
Lam, Wing-Kai; Ryue, Jaejin; Lee, Ki-Kwang; Park, Sang-Kyoon; Cheung, Jason Tak-Man; Ryu, Jiseon
2017-01-01
The lunge is one of the most frequently executed movements in badminton, involving a uniquely large sagittal footstrike angle of more than 40 degrees at initial ground contact compared with other manoeuvres. This study examined whether the heel curvature design of a badminton shoe influences shoe-ground kinematics, ground reaction forces, and knee moments during the lunge. Eleven elite and fifteen intermediate players performed five left-forward maximum lunge trials in a Rounded Heel Shoe (RHS), a Flattened Heel Shoe (FHS), and a Standard Heel Shoe (SHS). Shoe-ground kinematics, ground reaction forces, and knee moments were measured using a synchronized force platform and motion analysis system. A 2 (Group) x 3 (Shoe) ANOVA with repeated measures was performed to determine the effects of shoe and playing level, as well as their interaction, on all variables. A Shoe effect indicated that players demonstrated a lower maximum vertical loading rate in the RHS than in the other two shoes (P < 0.05). These findings indicate that shoe heel curvature plays some role in altering ground reaction force impact during the badminton lunge. The differences in impact loads and knee moments between elite and intermediate players may be useful in optimizing footwear design and training strategies to minimize the potential risk of impact-related injuries in badminton.
Shortwave forcing and feedbacks in Last Glacial Maximum and Mid-Holocene PMIP3 simulations.
Braconnot, Pascale; Kageyama, Masa
2015-11-13
Simulations of the climates of the Last Glacial Maximum (LGM), 21,000 years ago, and of the Mid-Holocene (MH), 6,000 years ago, allow an analysis of climate feedbacks in climate states that are radically different from today's. Analyses of cloud and surface albedo feedbacks show that the shortwave cloud feedback is a major driver of differences between model results. Similar behaviours appear when comparing the simulated LGM and MH changes, highlighting the fingerprint of model physics. Even though the different feedbacks show similarities between the different climate periods, the fact that their relative strengths differ from one climate to the other prevents a direct comparison of past and future climate sensitivity. The land-surface feedback also shows large disparities among models, even though they all produce positive sea-ice and snow feedbacks. Models have very different sensitivities when considering the vegetation feedback; this feedback has a regional pattern that differs significantly between models and depends on their level of complexity and on model biases. Analyses of the MH climate in two versions of the IPSL model provide further insight into the possibility of assessing the role of model biases and model physics in simulated climate changes, using past climates for which observations are available to evaluate the model results. © 2015 The Author(s).
Directory of Open Access Journals (Sweden)
ChunPing Ren
2017-01-01
Full Text Available We propose a novel mathematical algorithm to solve the inverse random dynamic force identification problem in practical engineering. In the proposed algorithm, an improved maximum entropy (IME) regularization technique transforms the identification problem into an unconstrained optimization problem, and a novel conjugate gradient (NCG) method is applied to minimize the objective function; the combination is abbreviated as the IME-NCG algorithm. The results of the IME-NCG algorithm are compared with those of the ME, ME-CG, ME-NCG, and IME-CG algorithms; the IME-NCG algorithm is found to be well suited to identifying the random dynamic force, owing to its smaller root-mean-square error (RMSE), shorter restoration time, and fewer iterative steps. In an engineering application example, the L-curve method, which outperforms the Generalized Cross Validation (GCV) method, is introduced to select the regularization parameter. The proposed algorithm can thus help to alleviate the ill-conditioning in dynamic force identification and to acquire an optimal solution of the inverse problem in practical engineering.
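A minimal sketch of the underlying idea, entropy-regularized force identification solved with a general-purpose conjugate gradient minimizer, is given below. The forward model, prior model `m`, and regularization weight are assumptions chosen for illustration; this is not the authors' IME-NCG algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Forward model (illustrative): measured response = G @ f, where G convolves
# the force history f with a decaying impulse response.
n = 40
t = np.arange(n)
h = np.exp(-t / 6.0)
G = np.array([[h[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])

f_true = 1.0 + 3.0 * np.exp(-0.5 * ((t - 20) / 4.0) ** 2)  # smooth positive force
d = G @ f_true + 0.01 * rng.standard_normal(n)              # noisy measurements

m = np.full(n, f_true.mean())   # flat prior model m (assumed)
lam = 1e-2                      # regularization weight (assumed)

def objective(u):
    # Parametrize f = exp(u) so positivity is built in (a common MaxEnt trick).
    f = np.exp(u)
    misfit = 0.5 * np.sum((G @ f - d) ** 2)
    neg_entropy = np.sum(f * np.log(f / m) - f + m)  # Skilling-style entropy
    return misfit + lam * neg_entropy

res = minimize(objective, x0=np.log(m), method="CG")  # conjugate gradient
f_hat = np.exp(res.x)

err = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
print(f"relative reconstruction error: {err:.3f}")
```

The entropy term pulls the solution toward the prior model `m` wherever the data are uninformative, which is what tames the ill-conditioning of the deconvolution.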
Elsyad, Moustafa Abdou; Khairallah, Ahmed Samir
2017-06-01
This crossover study aimed to evaluate and compare chewing efficiency and maximum bite force (MBF) with resilient telescopic and bar attachment systems for implant overdentures in patients with atrophied mandibles. Ten participants with severely resorbed mandibles and persistent denture problems received new maxillary and mandibular conventional dentures (control, CD). After 3 months of adaptation, two implants were inserted in the canine region of the mandible. In a quasi-random method, overdentures were connected to the implants with either bar (BOD) or resilient telescopic (TOD) attachment systems. Chewing efficiency in terms of the unmixed fraction (UF) was measured using chewing gum (after 5, 10, 20, 30 and 50 strokes), and MBF was measured using a bite force transducer. Measurements were performed 3 months after using each of the following prostheses: CD, BOD and TOD. Chewing efficiency and MBF increased significantly with BOD and TOD compared to CD. As the number of chewing cycles increased, the UF decreased. TOD recorded significantly higher chewing efficiency and MBF than BOD. Resilient telescopic attachments are associated with increased chewing efficiency and MBF compared with bar attachments when used to retain implant overdentures in patients with atrophied mandibles. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Sugiura, Yoshito; Hatanaka, Yasuhiko; Arai, Tomoaki; Sakurai, Hiroaki; Kanada, Yoshikiyo
2016-04-01
We aimed to investigate whether a linear regression formula based on the relationship between joint torque and angular velocity measured using a high-speed video camera and image measurement software is effective for estimating 1 repetition maximum (1RM) and isometric peak torque in knee extension. Subjects comprised 20 healthy men (mean ± SD; age, 27.4 ± 4.9 years; height, 170.3 ± 4.4 cm; and body weight, 66.1 ± 10.9 kg). The exercise load ranged from 40% to 150% 1RM. Peak angular velocity (PAV) and peak torque were used to estimate 1RM and isometric peak torque. To elucidate the relationship between force and velocity in knee extension, the relationship between the relative proportion of 1RM (% 1RM) and PAV was examined using simple regression analysis. The concordance rate between the estimated value and actual measurement of 1RM and isometric peak torque was examined using intraclass correlation coefficients (ICCs). Reliability of the regression line of PAV and % 1RM was 0.95. The concordance rate between the actual measurement and estimated value of 1RM resulted in an ICC(2,1) of 0.93 and that of isometric peak torque had an ICC(2,1) of 0.87 and 0.86 for 6 and 3 levels of load, respectively. Our method for estimating 1RM was effective for decreasing the measurement time and reducing patients' burden. Additionally, isometric peak torque can be estimated using 3 levels of load, as we obtained the same results as those reported previously. We plan to expand the range of subjects and examine the generalizability of our results.
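The load-velocity estimation idea in this abstract can be sketched with a simple linear fit. The numbers, variable names, and the proportional back-calculation below are illustrative assumptions, not the authors' measured regression:

```python
import numpy as np

# Synthetic load-velocity data: peak angular velocity (deg/s) declines roughly
# linearly as the relative load (%1RM) rises. Values are illustrative only.
pct_1rm = np.array([40.0, 60.0, 80.0, 100.0])   # % of true 1RM
pav = np.array([420.0, 330.0, 245.0, 160.0])    # measured peak angular velocities

# Fit %1RM as a linear function of velocity: %1RM = a * PAV + b
a, b = np.polyfit(pav, pct_1rm, 1)

# For a new submaximal lift at a known absolute load, the observed velocity
# predicts the relative load, and hence the 1RM, by simple proportion.
load_kg = 50.0          # absolute load lifted (hypothetical)
v_obs = 330.0           # observed peak angular velocity for that lift
pct = a * v_obs + b     # estimated % of 1RM for this lift
est_1rm = load_kg * 100.0 / pct
print(f"estimated 1RM: {est_1rm:.1f} kg")
```

The appeal of the approach, as the abstract notes, is that only a few submaximal efforts are needed, reducing measurement time and patient burden compared with a true maximal test.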
Elsyad, Moustafa Abdou; Mostafa, Aisha Zakaria
2018-01-01
This crossover study aimed to evaluate the effect of telescopic distal extension removable partial dentures on oral health related quality of life and maximum bite force. MATERIALS AND METHODS: Twenty patients with complete maxillary edentulism and partially edentulous mandibles with only the anterior teeth remaining were selected for this crossover study. All patients received complete maxillary dentures and a mandibular partial removable dental prosthesis (PRDP, control). After 3 months of adaptation, the PRDP was replaced with conventional telescopic partial dentures (TPD) or telescopic partial dentures with cantilevered extensions (TCPD) in a quasi-random method. Oral health related quality of life (OHRQoL) was measured using the OHIP-14 questionnaire, and maximum bite force (MBF) was measured using a bite force transducer. Measurements were performed 3 months after using each of the following prostheses: PRDP, TPD, and TCPD. TCPD showed the lowest OHIP-14 scores (i.e., the highest patient satisfaction with OHRQoL), followed by TPD, while PRDP showed the highest OHIP-14 scores (i.e., the lowest satisfaction). TCPD showed the highest MBF (70.7 ± 3.71), followed by TPD (57.4 ± 3.43), and the lowest MBF (40.2 ± 2.20) was noted with PRDP. Within the limitations of this study, mandibular telescopic distal extension removable partial dentures with cantilevered extensions were associated with improved oral health related quality of life and maximum bite force compared with telescopic or conventional PRDP. Telescopic distal extension removable prostheses are esthetic restorations for partially edentulous patients with free-end saddles. This article describes the addition of cantilevered extensions to this prosthesis. The results showed that telescopic distal extension removable prostheses with cantilevered extensions were associated with improved oral health related quality of life and maximum bite force compared to telescopic or conventional RPDs
Polinder, H.; Slootweg, J.G.; Hoeijmakers, M.J.; Compter, J.C.
2003-01-01
The use of linear permanent-magnet (PM) actuators increases in a wide variety of applications because of their high force density, robustness and accuracy. These linear PM motors are often heavily loaded during short intervals of high acceleration, so that magnetic saturation occurs. This paper
Kingston, David C; Acker, Stacey M
2018-01-23
In high knee flexion, contact between the posterior thigh and calf is expected to decrease forces on the tibiofemoral contact surfaces; thigh-calf contact therefore needs to be thoroughly characterized to model its effect. This study measured knee angles and intersegmental contact parameters in fifty-eight young healthy participants for six common high-flexion postures using motion tracking and a pressure sensor attached to the right thigh. Additionally, we introduced and assessed the reliability of a method for reducing noise in pressure sensor output. Five repetitions of two squatting, two kneeling, and two unilateral kneeling movements were completed. Posture-by-sex interactions occurred for thigh-calf and heel-gluteal center of force, and for thigh-calf contact area. The center of force in the thigh-calf region was farther from the knee joint center in females than in males during unilateral kneeling (82 and 67 mm, respectively), with an inverted relationship in the heel-gluteal region (331 and 345 mm, respectively), although caution is advised when generalizing these findings from a young, relatively fit sample to the population level. Contact area was larger in females than in males (means of 155.61 and 137.33 cm² across postures). A posture main effect was observed for contact force, and sex main effects were present for onset and max angle. Males had an earlier onset (121.0°) and lower max angle (147.4°), with onset and max angles ranging between movements by 8° and 3°, respectively. There was a substantial total force difference of 139 N between the largest and smallest activity means. The force parameters measured in this study suggest that knee joint contact models need to incorporate activity-specific parameters when estimating loading. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dynamometric analysis of the maximum force applied in aquatic human gait at 1.3m of immersion.
Roesler, Helio; Haupenthal, Alessandro; Schütz, Gustavo R; de Souza, Patrícia V
2006-12-01
This work analyzed the values of the vertical and anteroposterior components of the ground reaction force (GRF) during aquatic gait and the influence of walking speed and upper limb position on the GRF component values. Sixty subjects, with heights between 1.6 and 1.85 m and an average age of 23 years, were divided into three groups according to immersion level. The subjects walked over a platform with two force plates attached, located at a depth of 1.3 m. The subjects walked over the platform in four different situations, with variations in speed and upper limb position. Descriptive and inferential statistics were used for data analysis. For the vertical component, the force values varied between 20% and 40% of the subjects' body weight, depending on the data collection situation. For the anteroposterior component, the force values reached between 8% and 20% of the subjects' body weight, also depending on the data collection situation. INTERPRETATION (DISCUSSION): For a given immersion level, the forces can vary according to the demands imposed on the aquatic gait. It was concluded that both speed and upper limb position influence the values of the GRF components. An increase in gait speed increases the anteroposterior component (Fx), while an increase in the body mass out of the water mainly increases the vertical component (Fy). Knowing the magnitude of these changes is important for professionals who prescribe activities in the aquatic environment.
Directory of Open Access Journals (Sweden)
Bo You
2015-01-01
Full Text Available In order to predict the pressing quality of precision press-fit assembly, press-fit curves and the maximum press-mounting force of press-fit assemblies were investigated by finite element analysis (FEA). The analysis was based on a 3D SolidWorks model using the real dimensions of the microparts and a subsequent FEA model built in ANSYS Workbench. The press-fit process could thus be simulated on the basis of static structural analysis. To verify the FEA results, experiments were carried out using a press-mounting apparatus. The results show that the press-fit curves obtained by FEA agree closely with the curves obtained experimentally. In addition, the maximum press-mounting force calculated by FEA agrees with that obtained experimentally, with a maximum deviation of 4.6%, a value that can be tolerated. The comparison shows that the press-fit curve and maximum press-mounting force calculated by FEA can be used to predict the pressing quality of precision press-fit assembly.
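As a rough cross-check on such FEA results, the classical Lamé thick-cylinder interference-fit equations give a closed-form estimate of the maximum press-mounting force. All dimensions, materials, and the friction coefficient below are assumed for illustration, not taken from the study:

```python
import math

# Classical interference-fit estimate for a solid shaft pressed into a hub of
# the same material (simplified Lame result). All values are illustrative.
d = 4.0e-3        # nominal joint diameter (m)
D_hub = 8.0e-3    # hub outer diameter (m)
delta = 5.0e-6    # diametral interference (m)
L = 6.0e-3        # engagement length (m)
E = 200e9         # Young's modulus, steel, both parts (Pa)
mu = 0.15         # assumed friction coefficient

# Contact pressure for same-material solid shaft + hub:
# p = (E * delta / (2 * d)) * (1 - (d / D_hub)**2)
p = E * delta / d * (1.0 - (d / D_hub) ** 2) / 2.0

# Maximum axial press-mounting force = friction * pressure * contact area
F_max = mu * p * math.pi * d * L
print(f"contact pressure: {p / 1e6:.1f} MPa, press force: {F_max:.1f} N")
```

A closed-form estimate like this is useful for sanity-checking the order of magnitude of an FEA prediction before trusting the simulated press-fit curve.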
One factor that could impact the feasibility of commercial on-farm slaughter of broilers is the time delay from on-farm slaughter to scalding and defeathering in the commercial plant that could be 4 h or more. This experiment evaluated feather retention force (FRF) in broilers that were slaughtered ...
Directory of Open Access Journals (Sweden)
Dan N. Dumitriu
2015-09-01
Full Text Available A Danaher Thomson linear actuator with ball screw drive and a real-time control system are used here to induce vertical displacements under the driver/user seat of an in-house dynamic car simulator. In order to better support the car simulator and to dynamically protect the actuator's ball screw drive, a layer of coil springs supports the whole simulator chassis; more precisely, one coil spring is placed vertically under each corner of the rectangular chassis. The paper presents the selection of appropriate coil springs so as to ease, as much as possible, the ball screw drive's task of generating the linear motions corresponding to the vertical displacements and accelerations encountered by a driver during a real ride. For this application, coil springs with a lower spring constant are better suited to reducing the forces in the ball screw drive and thus increasing its life expectancy.
Säwén, Elin; Massad, Tariq; Landersjö, Clas; Damberg, Peter; Widmalm, Göran
2010-08-21
The conformational space available to the flexible molecule α-D-Manp-(1-->2)-α-D-Manp-OMe, a model for the α-(1-->2)-linked mannose disaccharide in N- or O-linked glycoproteins, is determined using experimental data and molecular simulation combined with a maximum entropy approach that leads to a converged population distribution utilizing different input information. A survey of the Protein Data Bank retrieving structures that contain the constituent disaccharide resulted in an ensemble of >200 structures. Subsequent filtering removed erroneous structures and gave the database (DB) ensemble, comprising three classes of mannose-containing compounds, viz., N- and O-linked structures, and ligands to proteins. A molecular dynamics (MD) simulation of the disaccharide revealed a two-state equilibrium with a major and a minor conformational state, i.e., the MD ensemble. These two conformational ensembles of the disaccharide were compared to spectroscopic data measured for the molecule in water solution. However, neither of the two populations was compatible with the experimental data from optical rotation, NMR (1)H,(1)H cross-relaxation rates, and homo- and heteronuclear (3)J couplings. The conformational distributions were subsequently used as background information to generate priors for a maximum entropy analysis. The resulting posteriors, i.e., the population distributions after the application of the maximum entropy analysis, still showed notable deviations that were not anticipated based on the prior information. Therefore, the homo- and heteronuclear Karplus relationships for the glycosidic torsion angles Φ and Ψ were reparameterized, with the influence of electronegative substituents on the coupling pathway deemed essential, resulting in four derived equations, two (3)J(COCC) and two (3)J(COCH), differing for the Φ and Ψ torsions, respectively. These Karplus relationships are denoted
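A generic Karplus relation of the form used in such reparameterizations can be sketched as follows; the coefficients below are illustrative placeholders, not the (3)J(COCC)/(3)J(COCH) equations derived in the study:

```python
import numpy as np

def karplus(phi_deg, A, B, C):
    """Generic Karplus relation: 3J(phi) = A*cos^2(phi) + B*cos(phi) + C (Hz)."""
    phi = np.radians(phi_deg)
    return A * np.cos(phi) ** 2 + B * np.cos(phi) + C

# Illustrative coefficients only; reparameterization amounts to refitting
# A, B, C against couplings measured for known torsion angles.
A, B, C = 7.49, -0.96, 0.15

for phi in range(0, 181, 30):
    print(f"phi = {phi:3d} deg  3J = {karplus(phi, A, B, C):5.2f} Hz")
```

The characteristic shape, maxima near 0° and 180° and a minimum near 90°, is what lets a measured (3)J coupling constrain the glycosidic torsion angles Φ and Ψ.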
Directory of Open Access Journals (Sweden)
Jonhan Ho
2013-01-01
Full Text Available Background: Advances in digital pathology are accelerating integration of this technology into anatomic pathology (AP). To optimize implementation and adoption of digital pathology systems within a large healthcare organization, initial assessment of both end user (pathologist) needs and organizational infrastructure are required. Contextual inquiry is a qualitative, user-centered tool for collecting, interpreting, and aggregating such detailed data about work practices that can be employed to help identify specific needs and requirements. Aim: Using contextual inquiry, the objective of this study was to identify the unique work practices and requirements in AP for the United States (US) Air Force Medical Service (AFMS) that had to be targeted in order to support their transition to digital pathology. Subjects and Methods: A pathology-centered observer team conducted 1.5 h interviews with a total of 24 AFMS pathologists and histology lab personnel at three large regional centers and one smaller peripheral AFMS pathology center using contextual inquiry guidelines. Findings were documented as notes and arranged into a hierarchical organization of common themes based on user-provided data, defined as an affinity diagram. These data were also organized into consolidated graphic models that characterized AFMS pathology work practices, structure, and requirements. Results: Over 1,200 recorded notes were grouped into an affinity diagram composed of 27 third-level, 10 second-level, and five main-level (workflow and workload distribution, quality, communication, military culture, and technology) categories. When combined with workflow and cultural models, the findings revealed that AFMS pathologists had needs that were unique to their military setting, when compared to civilian pathologists. These unique needs included having to serve a globally distributed patient population, transient staff, but a uniform information technology (IT) structure. Conclusions: The
Constrained noninformative priors
International Nuclear Information System (INIS)
Atwood, C.L.
1994-10-01
The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with a dispersion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given.
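On a discrete grid, the constrained maximum entropy distribution with a specified mean is an exponentially tilted distribution. A minimal sketch (the grid and target mean are arbitrary illustrative choices) solves for the Lagrange multiplier numerically:

```python
import numpy as np
from scipy.optimize import brentq

# Maximum-entropy distribution on a discrete grid subject to a mean constraint:
# p_i is proportional to exp(-lam * x_i); solve for lam so the mean matches.
x = np.linspace(0.0, 10.0, 201)   # support grid (assumed)
target_mean = 2.0                 # specified prior mean (assumed)

def mean_for(lam):
    w = np.exp(-lam * x)
    p = w / w.sum()
    return float(p @ x)

# mean_for is decreasing in lam: lam near 0 gives the uniform-grid mean (5.0),
# large lam concentrates mass at x = 0. Bracket the root and solve.
lam = brentq(lambda l: mean_for(l) - target_mean, 1e-6, 10.0)
p = np.exp(-lam * x)
p /= p.sum()
print(f"lambda = {lam:.4f}, achieved mean = {p @ x:.4f}")
```

At lam = 0 the solution degenerates to the flat (noninformative) prior, which matches the idea in the abstract: the mean constraint is the only information added beyond uniformity.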
2013-01-01
Background Zirconia materials are known for their optimal aesthetics, but they are brittle, and concerns remain about whether their mechanical properties are sufficient for withstanding the forces exerted in the oral cavity. Therefore, this study compared the maximum deformation and failure forces of titanium implants between titanium-alloy and zirconia abutments under oblique compressive forces in the presence of two levels of marginal bone loss. Methods Twenty implants were divided into Groups A and B, with simulated bone losses of 3.0 and 1.5 mm, respectively. Groups A and B were each further divided into two subgroups of five implants: (1) titanium implants connected to titanium-alloy abutments and (2) titanium implants connected to zirconia abutments. The maximum deformation and failure forces of each sample were determined using a universal testing machine. The data were analyzed using the nonparametric Mann–Whitney test. Results The mean maximum deformation and failure forces obtained for the subgroups were as follows: A1 (simulated bone loss of 3.0 mm, titanium-alloy abutment) = 540.6 N and 656.9 N, respectively; A2 (simulated bone loss of 3.0 mm, zirconia abutment) = 531.8 N and 852.7 N; B1 (simulated bone loss of 1.5 mm, titanium-alloy abutment) = 1070.9 N and 1260.2 N; and B2 (simulated bone loss of 1.5 mm, zirconia abutment) = 907.3 N and 1182.8 N. The maximum deformation force differed significantly between Groups B1 and B2 but not between Groups A1 and A2. The failure force did not differ between Groups A1 and A2 or between Groups B1 and B2. The maximum deformation and failure forces differed significantly between Groups A1 and B1 and between Groups A2 and B2. Conclusions Based on this experimental study, the maximum deformation and failure forces are lower for implants with a marginal bone loss of 3.0 mm than of 1.5 mm. Zirconia abutments can withstand the physiological occlusal forces applied in the anterior region.
Maximum Acceleration Recording Circuit
Bozeman, Richard J., Jr.
1995-01-01
Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the accelerometer output rises during the measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, costs less, and simplifies analysis of the data recorded in magnetic or electronic memory devices. The circuit is used, for example, to record accelerations to which commodities are subjected during transportation on trucks.
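The circuit's fuse-per-level behavior can be mimicked in software as a coarsely quantized peak-hold. This sketch is an illustrative analogue, not the published circuit:

```python
def record_max_level(samples, thresholds):
    """Return the index of the highest threshold exceeded by the peak sample,
    or -1 if no threshold was exceeded. This mirrors the fuse-per-level idea:
    one "blown fuse" per threshold the acceleration peak has crossed.
    thresholds must be sorted in ascending order."""
    peak = max(samples, default=0.0)
    level = -1
    for i, t in enumerate(thresholds):
        if peak >= t:
            level = i
    return level

# Example: acceleration samples (in g) against coarse 2/4/6/8 g levels.
thresholds = [2.0, 4.0, 6.0, 8.0]
samples = [0.3, 1.9, 5.2, 4.8, 0.7]
print(record_max_level(samples, thresholds))  # peak 5.2 g exceeds levels 0 and 1 -> 1
```

Like the hardware, the record is coarsely digitized: only the band containing the peak survives, not the exact peak value, which is the trade-off that keeps the recorder simple and nonvolatile.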
Charlier, G.W.P.
1994-01-01
In a binary choice panel data model with individual effects and two time periods, Manski proposed the maximum score estimator, based on a discontinuous objective function, and proved its consistency under weak distributional assumptions. However, the rate of convergence of this estimator is low (N^(1/3))
DEFF Research Database (Denmark)
Blazevich, Anthony J; Horne, Sara; Cannavan, Dale
2008-01-01
This study examined the effects of slow-speed resistance training involving concentric (CON, n = 10) versus eccentric (ECC, n = 11) single-joint muscle contractions on contractile rate of force development (RFD) and neuromuscular activity (EMG), and its maintenance through detraining. Isokinetic...
Directory of Open Access Journals (Sweden)
Daniel T. McMaster
2017-11-01
Full Text Available Purpose: The prevalence of compression garment (CG) use is increasing, with athletes striving to take advantage of the purported benefits to recovery and performance. Here, we investigated the effect of CG on muscle force and movement velocity performance in athletes. Methods: Ten well-trained male rugby athletes wore a wrestling-style CG suit applying 13-31 mmHg of compressive pressure during a training circuit in a repeated-measures crossover design. Force and velocity data were collected during a 5-s isometric mid-thigh pull (IMTP) and repeated countermovement jumps (CMJ), respectively; and the time to complete a 5-m horizontal loaded sled push was also measured. Results: IMTP peak force was enhanced in the CG condition by 139 ± 142 N (effect size [ES] = 0.36). Differences in CMJ peak velocity (ES = 0.08) and loaded sled-push sprint time between the conditions were trivial (ES = -0.01). A qualitative assessment of the effects of CG wear suggested that the likelihood of harm was unlikely in the CMJ and sled push, while a beneficial effect in the CMJ was possible, but not likely. Half of the athletes perceived a functional benefit in the IMTP and CMJ exercises. Conclusion: Consistent with other literature, there was no substantial effect of wearing a CG suit on CMJ and sprint performance. The improvement in peak force generation capability in an IMTP may be of benefit to rugby athletes involved in scrummaging or lineout lifting. The mechanism behind the improved force transmission is unclear, but may involve alterations in neuromuscular recruitment and proprioceptive feedback.
McMaster, Daniel T; Beaven, Christopher M; Mayo, Brad; Gill, Nicholas; Hébert-Losier, Kim
2017-01-01
Purpose: The prevalence of compression garment (CG) use is increasing with athletes striving to take advantage of the purported benefits to recovery and performance. Here, we investigated the effect of CG on muscle force and movement velocity performance in athletes. Methods: Ten well-trained male rugby athletes wore a wrestling-style CG suit applying 13-31 mmHg of compressive pressure during a training circuit in a repeated-measures crossover design. Force and velocity data were collected during a 5-s isometric mid-thigh pull (IMTP) and repeated countermovement jump (CMJ), respectively; and time to complete a 5-m horizontal loaded sled push was also measured. Results: IMTP peak force was enhanced in the CG condition by 139 ± 142 N (effect size [ES] = 0.36). Differences in CMJ peak velocity (ES = 0.08) and loaded sled-push sprint time between the conditions were trivial (ES = -0.01). A qualitative assessment of the effects of CG wear suggested that the likelihood of harm was unlikely in the CMJ and sled push, while a beneficial effect in the CMJ was possible, but not likely. Half of the athletes perceived a functional benefit in the IMTP and CMJ exercises. Conclusion: Consistent with other literature, there was no substantial effect of wearing a CG suit on CMJ and sprint performance. The improvement in peak force generation capability in an IMTP may be of benefit to rugby athletes involved in scrummaging or lineout lifting. The mechanism behind the improved force transmission is unclear, but may involve alterations in neuromuscular recruitment and proprioceptive feedback.
Xu, L; Fan, S; Cai, B; Fang, Z; Jiang, X
2017-05-01
This study aimed to investigate whether the fatigue induced by a sustained motor task in the jaw elevator muscles differed between healthy subjects and patients with temporomandibular disorder (TMD). Fifteen patients with TMD and thirteen age- and sex-matched healthy controls performed a fatigue test consisting of sustained clenching contractions at 30% maximal voluntary clenching intensity until test failure (the criterion for terminating the fatigue test was when the biting force decreased by 10% or more from the target force consecutively for >3 s). The pre- and post-maximal bite forces (MBFs) were measured. Surface electromyographic signals were recorded from the superficial masseter muscles and anterior temporal muscles bilaterally, and the median frequency at the beginning, middle and end of the fatigue test was calculated. The duration of the fatigue test was also quantified. Both pre- and post-MBFs were lower in patients with TMD than in controls, and the duration of the fatigue test in TMD patients was significantly shorter than that of the controls. These results suggest that the jaw elevator muscles of patients with TMD are more easily fatigued, but the electromyographic activation process during the fatigue test is similar between healthy subjects and patients with TMD. However, the mechanisms involved in this process remain unclear, and further research is warranted. © 2017 John Wiley & Sons Ltd.
Hamdi, M M; Mutungi, G
2010-02-01
It is generally believed that steroid hormones have both genomic and non-genomic (rapid) actions. Although the latter form an important component of the physiological response of these hormones, little is known about the cellular signalling pathway(s) mediating these effects and their physiological functions in adult mammalian skeletal muscle fibres. Therefore, the primary aim of this study was to investigate the non-genomic actions of dihydrotestosterone (DHT) and their physiological role in isolated intact mammalian skeletal muscle fibre bundles. Our results show that treating the fibre bundles with physiological concentrations of DHT increases both twitch and tetanic contractions in fast twitch fibres. However, it decreases them in slow twitch fibres. These changes in force are accompanied by an increase in the phosphorylation of MAPK/ERK1/2 in both fibre types and that of regulatory myosin light chains in fast twitch fibres. Both effects were insensitive to inhibitors of Src kinase, androgen receptor, insulin-like growth factor 1 receptor and platelet-derived growth factor receptor. However, they were abolished by the MAPK/ERK1/2 kinase inhibitor PD98059 and the epidermal growth factor (EGF) receptor inhibitor tyrphostin AG 1478. In contrast, testosterone had no effect on force and increased the phosphorylation of ERK1/2 in slow twitch fibres only. From these results we conclude that sex steroids have non-genomic actions in isolated intact mammalian skeletal muscle fibres. These are mediated through the EGF receptor and one of their main physiological functions is the enhancement of force production in fast twitch skeletal muscle fibres.
Maximum entropy prior uncertainty and correlation of statistical economic data
Dias, Rodriques J.F.
2016-01-01
Empirical estimates of source statistical economic data such as trade flows, greenhouse gas emissions or employment figures are always subject to uncertainty (stemming from measurement errors or confidentiality) but information concerning that uncertainty is often missing. This paper uses concepts
Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.
2009-01-01
We used 5704 14C, 10Be, and 3He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ~14.5 ka.
Directory of Open Access Journals (Sweden)
M.J.P. Coelho-Ferraz
2008-12-01
The activity of the masseter muscles and of the anterior portion of the temporal muscles on the right and left sides during maximum bite force was studied in healthy volunteers. The study included 17 adult volunteers of both sexes, with a mean age of 25 years and no sign of temporomandibular dysfunction, affiliated with the School of Dentistry of Piracicaba. Electromyographic data were recorded bilaterally from the masseter and from the anterior portion of the temporal and the suprahyoid muscles in the postural and isometric positions. Disposable circular Ag/AgCl passive paediatric surface electrodes (Meditrace® Kendall-LTP, model Chicopee MA01) were used, connected to a preamplifier with a gain of 20x forming a differential circuit. The electrical signals were acquired with an eight-channel EMG-800C system (EMG System of Brazil, Ltd.) at a sampling frequency of 2 kHz, with 16-bit resolution and a 20-500 Hz digital band-pass filter. A pressure transducer consisting of a rubber tube with a pressure sensor (MPX 5700, Motorola SPS, Austin, TX, USA) was used to record the maximum bite force. The statistical analysis included linear correlation, the paired t-test and analysis of variance.
Shen, Fuhui; Lian, Junhe; Münstermann, Sebastian
2018-05-01
Experimental and numerical investigations on the forming limit diagram (FLD) of a ferritic stainless steel were performed in this study. The FLD of this material was obtained by Nakajima tests. Both the Marciniak-Kuczynski (MK) model and the modified maximum force criterion (MMFC) were used for the theoretical prediction of the FLD. From the results of uniaxial tensile tests along different loading directions with respect to the rolling direction, strong anisotropic plastic behaviour was observed in the investigated steel. A recently proposed anisotropic evolving non-associated Hill48 (enHill48) plasticity model, which was developed from the conventional Hill48 model based on the non-associated flow rule with evolving anisotropic parameters, was adopted to describe the anisotropic hardening behaviour of the investigated material. In the previous study, the model was coupled with the MMFC for FLD prediction. In the current study, the enHill48 was further coupled with the MK model. By comparing the predicted forming limit curves with the experimental results, the influences of anisotropy in terms of flow rule and evolving features on the forming limit prediction were revealed and analysed. In addition, the forming limit predictive performances of the MK and the MMFC models in conjunction with the enHill48 plasticity model were compared and evaluated.
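For intuition on the maximum force criterion family (of which the MMFC is an anisotropic extension), under uniaxial tension with simple power-law hardening σ = Kε^n the criterion reduces to the classical Considère condition dσ/dε = σ, predicting instability at ε = n. This is a hedged sketch under that simplified hardening law, not the enHill48 model used in the paper:

```python
import numpy as np

def considere_limit_strain(K, n, eps=None):
    """Smallest strain where dsigma/deps <= sigma for power-law hardening
    sigma = K * eps**n (classical maximum force / Considere criterion)."""
    if eps is None:
        eps = np.linspace(1e-4, 1.0, 100000)
    sigma = K * eps**n
    dsigma = np.gradient(sigma, eps)          # numerical hardening rate
    unstable = eps[dsigma <= sigma]
    return unstable[0] if unstable.size else None

# For sigma = 500 * eps**0.2 the criterion predicts necking at eps ~ n = 0.2
print(round(considere_limit_strain(500.0, 0.2), 3))
```

Analytically, Knε^(n-1) = Kε^n gives ε = n, so the numerical result should sit at the hardening exponent regardless of K.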
Approximate maximum parsimony and ancestral maximum likelihood.
Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat
2010-01-01
We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
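The hardness discussed above concerns finding the best tree; for a fixed tree, the parsimony score of a single character is computable exactly with Fitch's algorithm. A small illustrative sketch (not the Steiner-tree approximation of the paper):

```python
def fitch(tree, leaf_states):
    """Fitch small-parsimony count of state changes for one character.
    tree: nested 2-tuples of leaf names; leaf_states: name -> state."""
    def walk(node):
        if isinstance(node, str):
            return {leaf_states[node]}, 0
        (ls, lc), (rs, rc) = walk(node[0]), walk(node[1])
        inter = ls & rs
        if inter:
            return inter, lc + rc          # child sets agree: no extra change
        return ls | rs, lc + rc + 1        # disagreement: one change charged
    return walk(tree)[1]

tree = (("a", "b"), ("c", "d"))
states = {"a": "A", "b": "A", "c": "G", "d": "G"}
print(fitch(tree, states))  # → 1 (a single A/G change on the root edge)
```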
International Nuclear Information System (INIS)
Anon.
1979-01-01
This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed
Neutron spectra unfolding with maximum entropy and maximum likelihood
International Nuclear Information System (INIS)
Itoh, Shikoh; Tsunoda, Toshiharu
1989-01-01
A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is removed by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
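The combination described above can be sketched as minimizing a Poisson negative log-likelihood plus a Skilling-entropy penalty. Everything numerical here (the 3x3 response matrix, the flat default model m = 100, the weight alpha) is a hypothetical toy setup, not data from the paper; positivity is enforced by optimizing in log space:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
R = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.2, 0.8]])        # hypothetical detector response matrix
true_phi = np.array([50.0, 120.0, 80.0])
counts = rng.poisson(R @ true_phi)     # simulated Poisson-distributed counts

def objective(log_phi, alpha=1.0, m=100.0):
    phi = np.exp(log_phi)              # positive solution by construction
    mu = R @ phi
    neg_loglik = np.sum(mu - counts * np.log(mu))        # Poisson part
    neg_entropy = np.sum(phi * np.log(phi / m) - phi + m)  # -S (Skilling form)
    return neg_loglik + alpha * neg_entropy

res = minimize(objective, np.log(np.full(3, 100.0)), method="BFGS")
phi_hat = np.exp(res.x)                # unfolded spectrum, strictly positive
```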
Generic maximum likely scale selection
DEFF Research Database (Denmark)
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...
Maximum Quantum Entropy Method
Sim, Jae-Hoon; Han, Myung Joon
2018-01-01
Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
International Nuclear Information System (INIS)
Biondi, L.
1998-01-01
The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.
DEFF Research Database (Denmark)
Engerer, Volkmar Paul; Roued-Cunliffe, Henriette; Albretsen, Jørgen
In this paper, we present a DH research infrastructure which relies heavily on a combination of domain knowledge with information technology. The general goal is to develop tools to aid scholars in their interpretations and understanding of temporal logic. This in turn is based on an extensive digitisation of Arthur Prior's Nachlass kept in the Bodleian Library, Oxford. The DH infrastructure in question is the Prior Virtual Lab (PVL). PVL was established in 2011 in order to provide researchers in the field of temporal logic easy access to the papers of Arthur Norman Prior (1914-1969), and officially...
Maximum likely scale estimation
DEFF Research Database (Denmark)
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Robust Maximum Association Estimators
A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)
2017-01-01
textabstractThe maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation
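With the Pearson correlation as projection index, the maximum association described above is the first canonical correlation, computable as the largest singular value of the whitened cross-covariance matrix. A minimal, non-robust sketch (the robust estimators of the paper replace the Pearson index; the data here are synthetic):

```python
import numpy as np

def first_canonical_correlation(X, Y):
    """Max over a, b of corr(X a, Y b): top singular value of the
    whitened cross-covariance matrix."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxx, Syy, Sxy = Xc.T @ Xc, Yc.T @ Yc, Xc.T @ Yc

    def inv_sqrt(S):                       # inverse matrix square root
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)[0]

rng = np.random.default_rng(1)
z = rng.normal(size=500)                   # shared latent variable
X = np.column_stack([z + 0.1 * rng.normal(size=500), rng.normal(size=500)])
Y = np.column_stack([z + 0.1 * rng.normal(size=500), rng.normal(size=500)])
rho = first_canonical_correlation(X, Y)    # close to 1: strong association
```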
DEFF Research Database (Denmark)
Blackburn, Patrick Rowan; Jørgensen, Klaus Frovin
2016-01-01
Prior's search led him through the work of Castañeda, and back to his own work on hybrid logic: the first made temporal reference philosophically respectable, the second made it technically feasible in a modal framework. With the aid of hybrid logic, Prior built a bridge from a two-dimensional UT calculus...
Prior Knowledge Assessment Guide
2014-12-01
assessment in a reasonable amount of time. Hands-on assessments can be extremely diverse in makeup and administration depending on the subject matter...
Maximum speed of dewetting on a fiber
Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus
2011-01-01
A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed
Energy Technology Data Exchange (ETDEWEB)
Zaric, Z [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)
1961-12-15
The quantity of heat generated in the sample was calculated in Review III. In the stationary regime this heat is transferred, through the air layer between the sample and the channel wall, to the heavy water or the graphite, and a certain maximum temperature t{sub 0} is reached in the sample. The objective of this review is the determination of this temperature.
International Nuclear Information System (INIS)
Enslin, J.H.R.
1990-01-01
A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and higher power rated systems. Other advantages include optimal sizing and system monitoring and control
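The hill-climbing algorithm mentioned above is commonly implemented as "perturb and observe": step the converter's duty cycle, and keep stepping in the direction that increased output power. A minimal sketch with a hypothetical single-peak power curve (real PV curves depend on irradiance and temperature):

```python
def perturb_and_observe(power_at, duty0=0.5, step=0.01, iters=200):
    """Hill-climbing MPPT: step the duty cycle in the direction that last
    increased output power; reverse direction on a decrease."""
    duty, direction = duty0, 1
    p_prev = power_at(duty)
    for _ in range(iters):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        p = power_at(duty)
        if p < p_prev:
            direction = -direction   # overshot the peak: reverse
        p_prev = p
    return duty

# Hypothetical power-vs-duty curve with a single maximum at duty = 0.7
curve = lambda d: -(d - 0.7) ** 2 + 1.0
duty = perturb_and_observe(curve)    # settles oscillating near 0.7
```

The steady-state oscillation of ±1 step around the maximum power point is the characteristic cost of this simple controller; smaller steps reduce the ripple but slow the tracking.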
International Nuclear Information System (INIS)
Ponman, T.J.
1984-01-01
For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
Sets of priors reflecting prior-data conflict and agreement
Walter, G.M.; Coolen, F.P.A.; Carvalho, J.P.; Lesot, M.-J.; Kaymak, U.; Vieira, S.; Bouchon-Meunier, B.; Yager, R.R.
2016-01-01
Bayesian inference enables combination of observations with prior knowledge in the reasoning process. The choice of a particular prior distribution to represent the available prior knowledge is, however, often debatable, especially when prior knowledge is limited or data are scarce, as then
Directory of Open Access Journals (Sweden)
F. Topsøe
2001-09-01
Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over...
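In the Mean Energy Model mentioned above, maximizing entropy under a mean-energy constraint yields a Gibbs distribution p_i ∝ exp(-βE_i), with the multiplier β chosen so the constraint holds. A minimal numerical sketch (the energy levels and target mean are hypothetical):

```python
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])    # hypothetical energy levels
target = 1.2                           # required mean energy

def mean_energy(beta):
    w = np.exp(-beta * E)              # unnormalized Gibbs weights
    p = w / w.sum()
    return p @ E

# Solve for the Lagrange multiplier beta matching the moment constraint
beta = brentq(lambda b: mean_energy(b) - target, -10.0, 10.0)
p = np.exp(-beta * E); p /= p.sum()    # MaxEnt distribution
```

Since the unconstrained (uniform) mean here is 1.5, a target of 1.2 forces β > 0, tilting probability toward the low-energy states.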
Dinosaur Metabolism and the Allometry of Maximum Growth Rate
Myhrvold, Nathan P.
2016-01-01
The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth...
Prior indigenous technological species
Wright, Jason T.
2018-01-01
One of the primary open questions of astrobiology is whether there is extant or extinct life elsewhere in the solar system. Implicit in much of this work is that we are looking for microbial or, at best, unintelligent life, even though technological artefacts might be much easier to find. Search for Extraterrestrial Intelligence (SETI) work on searches for alien artefacts in the solar system typically presumes that such artefacts would be of extrasolar origin, even though life is known to have existed in the solar system, on Earth, for eons. But if a prior technological, perhaps spacefaring, species ever arose in the solar system, it might have produced artefacts or other technosignatures that have survived to present day, meaning solar system artefact SETI provides a potential path to resolving astrobiology's question. Here, I discuss the origins and possible locations for technosignatures of such a prior indigenous technological species, which might have arisen on ancient Earth or another body, such as a pre-greenhouse Venus or a wet Mars. In the case of Venus, the arrival of its global greenhouse and potential resurfacing might have erased all evidence of its existence on the Venusian surface. In the case of Earth, erosion and, ultimately, plate tectonics may have erased most such evidence if the species lived Gyr ago. Remaining indigenous technosignatures might be expected to be extremely old, limiting the places they might still be found to beneath the surfaces of Mars and the Moon, or in the outer solar system.
Probable maximum flood control
International Nuclear Information System (INIS)
DeGabriele, C.E.; Wu, C.L.
1991-11-01
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1988-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
International Nuclear Information System (INIS)
Rust, D.M.
1984-01-01
The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references
Introduction to maximum entropy
International Nuclear Information System (INIS)
Sivia, D.S.
1989-01-01
The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab
Functional Maximum Autocorrelation Factors
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg
2005-01-01
Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions. Functional MAF analysis is a useful method for extracting low dimensional models of temporally or spatially...
Regularized maximum correntropy machine
Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin
2015-01-01
In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
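The robustness claim above rests on the shape of the correntropy objective: a Gaussian-kernel similarity saturates for large errors, so gross outliers contribute almost nothing, whereas a squared loss is dominated by them. A small illustrative sketch (the residual values are made up):

```python
import numpy as np

def correntropy(err, sigma=1.0):
    """Mean Gaussian-kernel similarity of residuals; higher is better,
    bounded in (0, 1]."""
    return np.mean(np.exp(-err**2 / (2 * sigma**2)))

clean = np.array([0.1, -0.2, 0.05, 0.0])
noisy = np.append(clean, 50.0)         # one gross outlier appended

# The squared loss explodes with the outlier ...
mse_clean, mse_noisy = np.mean(clean**2), np.mean(noisy**2)
# ... while correntropy barely moves: the outlier's kernel value is ~0.
c_clean, c_noisy = correntropy(clean), correntropy(noisy)
```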
Bayesian optimal experimental design for priors of compact support
Long, Quan
2016-01-01
to account for the bounded domain of the uniform prior pdf of the parameters. The underlying Gaussian distribution is obtained in the spirit of the Laplace method, more precisely, the mode is chosen as the maximum a posteriori (MAP) estimate
The Prior Can Often Only Be Understood in the Context of the Likelihood
Directory of Open Access Journals (Sweden)
Andrew Gelman
2017-10-01
A key sticking point of Bayesian analysis is the choice of prior distribution, and there is a vast literature on potential defaults including uniform priors, Jeffreys’ priors, reference priors, maximum entropy priors, and weakly informative priors. These methods, however, often manifest a key conceptual tension in prior modeling: a model encoding true prior information should be chosen without reference to the model of the measurement process, but almost all common prior modeling techniques are implicitly motivated by a reference likelihood. In this paper we resolve this apparent paradox by placing the choice of prior into the context of the entire Bayesian analysis, from inference to prediction to model evaluation.
International Nuclear Information System (INIS)
Ryan, J.
1981-01-01
By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments
Prior Elicitation, Assessment and Inference with a Dirichlet Prior
Directory of Open Access Journals (Sweden)
Michael Evans
2017-10-01
Methods are developed for eliciting a Dirichlet prior based upon stating bounds on the individual probabilities that hold with high prior probability. This approach to selecting a prior is applied to a contingency table problem where it is demonstrated how to assess the prior with respect to the bias it induces as well as how to check for prior-data conflict. It is shown that the assessment of a hypothesis via relative belief can easily take into account what it means for the falsity of the hypothesis to correspond to a difference of practical importance and provide evidence in favor of a hypothesis.
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.
Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan
2016-04-28
This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters to avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
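The "narrower main lobe, heavier tails" contrast can be visualized by comparing penalty (negative log-prior) shapes. The functional form below, log(|x| + ε), is one common "logarithmic Laplacian" choice and is an assumption here; the paper's exact definition may differ:

```python
import numpy as np

eps = 1e-2
laplacian = lambda x: np.abs(x)                         # standard L1 penalty
log_laplacian = lambda x: np.log(np.abs(x) + eps) - np.log(eps)  # assumed form

# Relative to their value at |x| = 1, the log penalty rises faster near
# zero (sharper main lobe of the prior) ...
small_ratio_log = log_laplacian(0.05) / log_laplacian(1.0)
small_ratio_l1 = laplacian(0.05) / laplacian(1.0)
# ... and grows far more slowly for large |x| (heavier tails).
large_ratio_log = log_laplacian(10.0) / log_laplacian(1.0)
large_ratio_l1 = laplacian(10.0) / laplacian(1.0)
```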
PET reconstruction via nonlocal means induced prior.
Hou, Qingfeng; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Ma, Jianhua
2015-01-01
The traditional Bayesian priors for maximum a posteriori (MAP) reconstruction methods usually incorporate local neighborhood interactions that penalize large deviations in parameter estimates for adjacent pixels; therefore, only local pixel differences are utilized. This limits their ability to penalize image roughness. To achieve high-quality PET image reconstruction, this study investigates a MAP reconstruction strategy incorporating a nonlocal means induced (NLMi) prior (NLMi-MAP), which exploits the global similarity information of the image. The present NLMi prior approximates the derivative of the Gibbs energy function by an NLM filtering process. Specifically, the NLMi prior is obtained by subtracting the current image estimation from its NLM filtered version and feeding the residual error back to the reconstruction filter to yield the new image estimation. We tested the present NLMi-MAP method with simulated and real PET datasets. Comparison studies with conventional filtered backprojection (FBP) and a few iterative reconstruction methods clearly demonstrate that the present NLMi-MAP method performs better in lowering noise, preserving image edges, and achieving a higher signal-to-noise ratio (SNR). Extensive experimental results show that the NLMi-MAP method outperforms the existing methods in terms of cross profile, noise reduction, SNR, root mean square error (RMSE) and correlation coefficient (CORR).
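The NLMi construction described above (the residual between an image and its NLM-filtered version approximating the prior's gradient) can be illustrated with a minimal 1-D sketch. The patch/search/h parameters, the toy image, and the step size are all hypothetical, and a real PET reconstruction would embed this inside an iterative MAP update rather than a single smoothing step.

```python
import math

def nlm_filter(img, patch=1, search=5, h=0.1):
    """Minimal 1-D non-local means: each pixel becomes a weighted
    average of pixels whose surrounding patches look similar."""
    n = len(img)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            d2 = 0.0
            for k in range(-patch, patch + 1):
                a = img[min(max(i + k, 0), n - 1)]
                b = img[min(max(j + k, 0), n - 1)]
                d2 += (a - b) ** 2
            w = math.exp(-d2 / (h * h))
            num += w * img[j]
            den += w
        out.append(num / den)
    return out

def nlmi_prior_gradient(img):
    """NLMi idea from the abstract: approximate the prior's gradient by
    the residual between the image and its NLM-filtered version."""
    f = nlm_filter(img)
    return [x - fx for x, fx in zip(img, f)]

# One illustrative penalized update step (step size is hypothetical)
img = [0.0, 0.0, 1.0, 1.02, 0.98, 1.0, 0.0, 0.0]
step = 0.5
smoothed = [x - step * g for x, g in zip(img, nlmi_prior_gradient(img))]
```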
Maximum stellar iron core mass
Indian Academy of Sciences (India)
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly ...
Accommodating Uncertainty in Prior Distributions
Energy Technology Data Exchange (ETDEWEB)
Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vander Wiel, Scott Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-01-19
A fundamental premise of Bayesian methodology is that a priori information is accurately summarized by a single, precisely defined prior distribution. In many cases, especially involving informative priors, this premise is false, and the (mis)application of Bayes methods produces posterior quantities whose apparent precisions are highly misleading. We examine the implications of uncertainty in prior distributions, and present graphical methods for dealing with them.
Credal Networks under Maximum Entropy
Lukasiewicz, Thomas
2013-01-01
We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...
Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.
van Erp, Sara; Mulder, Joris; Oberski, Daniel L
2017-11-27
Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners on conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials.
The Prior Internet Resources 2017
DEFF Research Database (Denmark)
Engerer, Volkmar Paul; Albretsen, Jørgen
2017-01-01
The Prior Internet Resources (PIR) are presented. Prior's unpublished scientific manuscripts and his vast letter correspondence with fellow researchers of his time (his Nachlass) are now being transcribed by Prior researchers worldwide, and form an integral part of PIR. It is demonstrated...
The Importance of Prior Knowledge.
Cleary, Linda Miller
1989-01-01
Recounts a college English teacher's experience of reading and rereading Noam Chomsky, building up a greater store of prior knowledge. Argues that Frank Smith provides a theory for the importance of prior knowledge and Chomsky's work provided a personal example with which to interpret and integrate that theory. (RS)
Hydraulic Limits on Maximum Plant Transpiration
Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.
2011-12-01
Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Unlike previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water
Recruiting for Prior Service Market
2008-06-01
• Identify perceptions, expectations and issues for re-enlistment • Develop potential marketing and advertising tactics and strategies targeted to the defined ... MAJ Eric Givens / MAJ Brian
A Maximum Resonant Set of Polyomino Graphs
Directory of Open Access Journals (Sweden)
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
International Nuclear Information System (INIS)
Sutton, C.
1989-01-01
Inside the atom, particles interact through two forces which are never felt in the everyday world. But they may hold the key to the Universe. These ideas on subatomic forces are discussed with respect to the strong force, the electromagnetic force and the electroweak force. (author)
Maximum Entropy in Drug Discovery
Directory of Open Access Journals (Sweden)
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has been used not only as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Maximum stellar iron core mass
Indian Academy of Sciences (India)
Journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. F W Giacobbe, Chicago Research Center/American Air Liquide. … iron core compression due to the weight of non-ferrous matter overlying the iron cores within large … thermal equilibrium velocities will tend to be non-relativistic.
Maximum entropy beam diagnostic tomography
International Nuclear Information System (INIS)
Mottershead, C.T.
1985-01-01
This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
A portable storage maximum thermometer
International Nuclear Information System (INIS)
Fayart, Gerard.
1976-01-01
A clinical thermometer that stores, in an analog memory, the voltage corresponding to the maximum temperature is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell test and calibration system [fr]
Testability evaluation using prior information of multiple sources
Directory of Open Access Journals (Sweden)
Wang Chao
2014-08-01
Testability plays an important role in improving the readiness and decreasing the life-cycle cost of equipment. Testability demonstration and evaluation is of significance in measuring such testability indexes as fault detection rate (FDR) and fault isolation rate (FIR), which is useful to the producer in mastering the testability level and improving the testability design, and helpful to the consumer in making purchase decisions. Aiming at the problems with a small sample of testability demonstration test data (TDTD), such as low evaluation confidence and inaccurate results, a testability evaluation method is proposed based on the prior information of multiple sources and Bayes theory. Firstly, the types of prior information are analyzed. The maximum entropy method is applied to the prior information with the mean and interval estimate forms on the testability index to obtain the parameters of the prior probability density function (PDF), and the empirical Bayesian method is used to get the parameters for the prior information with a success-fail form. Then, a parametrical data consistency check method is used to check the compatibility between all the sources of prior information and TDTD. For the prior information to pass the check, the prior credibility is calculated. A mixed prior distribution is formed based on the prior PDFs and the corresponding credibility. The Bayesian posterior distribution model is acquired with the mixed prior distribution and TDTD, based on which the point and interval estimates are calculated. Finally, examples of a flying control system are used to verify the proposed method. The results show that the proposed method is feasible and effective.
Testability evaluation using prior information of multiple sources
Institute of Scientific and Technical Information of China (English)
Wang Chao; Qiu Jing; Liu Guanjun; Zhang Yong
2014-01-01
Testability plays an important role in improving the readiness and decreasing the life-cycle cost of equipment. Testability demonstration and evaluation is of significance in measuring such testability indexes as fault detection rate (FDR) and fault isolation rate (FIR), which is useful to the producer in mastering the testability level and improving the testability design, and helpful to the consumer in making purchase decisions. Aiming at the problems with a small sample of testability demonstration test data (TDTD), such as low evaluation confidence and inaccurate results, a testability evaluation method is proposed based on the prior information of multiple sources and Bayes theory. Firstly, the types of prior information are analyzed. The maximum entropy method is applied to the prior information with the mean and interval estimate forms on the testability index to obtain the parameters of prior probability density function (PDF), and the empirical Bayesian method is used to get the parameters for the prior information with a success-fail form. Then, a parametrical data consistency check method is used to check the compatibility between all the sources of prior information and TDTD. For the prior information to pass the check, the prior credibility is calculated. A mixed prior distribution is formed based on the prior PDFs and the corresponding credibility. The Bayesian posterior distribution model is acquired with the mixed prior distribution and TDTD, based on which the point and interval estimates are calculated. Finally, examples of a flying control system are used to verify the proposed method. The results show that the proposed method is feasible and effective.
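The maximum entropy step mentioned above (turning a mean plus an interval estimate for a testability index into a prior PDF) can be sketched as follows: on a bounded interval, the maximum-entropy density with a fixed mean is a truncated exponential p(x) ∝ exp(−λx), and λ can be found by bisection. The FDR numbers below are illustrative, and the paper's actual parametrization may differ.

```python
import math

def trunc_exp_mean(lam, a, b, n=2000):
    """Mean of the max-entropy density p(x) ∝ exp(-lam*x) on [a, b],
    computed by simple trapezoidal integration."""
    h = (b - a) / n
    z = m = 0.0
    for i in range(n + 1):
        x = a + i * h
        w = 0.5 if i in (0, n) else 1.0
        p = math.exp(-lam * (x - a))   # shifted for numerical stability
        z += w * p
        m += w * x * p
    return m / z

def solve_lambda(target_mean, a, b):
    """Bisection on lam: the mean is strictly decreasing in lam."""
    lo, hi = -200.0 / (b - a), 200.0 / (b - a)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if trunc_exp_mean(mid, a, b) > target_mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative elicited prior: FDR lies in [0.80, 1.00] with mean 0.95
a, b, mean = 0.80, 1.00, 0.95
lam = solve_lambda(mean, a, b)
```

Because the target mean lies above the interval midpoint, the solved λ is negative, i.e. the prior density rises toward the upper end of the interval.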
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
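For the spin-model setting above, the smallest worked example is two spins with a fixed correlation: the maximum-entropy distribution is the exponential family p(s1, s2) ∝ exp(J·s1·s2), and in this two-spin case the coupling reproducing correlation c is J = atanh(c). A short check:

```python
import math

def maxent_two_spin(c):
    """Max-entropy distribution on (s1, s2) in {-1, +1}^2 with fixed
    correlation <s1*s2> = c: p ∝ exp(J*s1*s2) with J = atanh(c)."""
    J = math.atanh(c)
    states = [(s1, s2) for s1 in (-1, 1) for s2 in (-1, 1)]
    w = [math.exp(J * s1 * s2) for s1, s2 in states]
    Z = sum(w)                       # partition function
    p = [wi / Z for wi in w]
    return states, p, J

states, p, J = maxent_two_spin(0.4)
# Verify that the constrained moment is reproduced exactly
corr = sum(pi * s1 * s2 for (s1, s2), pi in zip(states, p))
```

With more spins and higher-order interactions the couplings can no longer be written in closed form, which is exactly where the inference machinery discussed in the abstract comes in.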
Combining Experiments and Simulations Using the Maximum Entropy Principle
DEFF Research Database (Denmark)
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
… in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results …
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
Maximum Water Hammer Sensitivity Analysis
Jalil Emadi; Abbas Solemani
2011-01-01
Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly, or in the case of sudden failure of pumps. Determining the maximum water hammer is one of the most important technical and economic tasks that engineers and designers of pumping stations and conveyance pipelines must address. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining significance of ...
Directory of Open Access Journals (Sweden)
Yunfeng Shan
2008-01-01
Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced, as well as plants that have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the “true tree” by all four algorithms. However, the most frequent gene tree, termed the “maximum gene-support tree” (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the “true tree” among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.
Vandenboom, Rene; Hannon, James D; Sieck, Gary C
2002-01-01
We tested the hypothesis that force-velocity history modulates thin filament activation, as assessed by the rate of force redevelopment after shortening (+dF/dtR). The influence of isotonic force on +dF/dtR was assessed by imposing uniform amplitude (2.55 to 2.15 μm sarcomere⁻¹) but different speed releases to intact frog muscle fibres during fused tetani. Each release consisted of a contiguous ramp- and step-change in length. Ramp speed was changed from release to release to vary fibre shortening speed from 1.00 (2.76 ± 0.11 μm half-sarcomere⁻¹ s⁻¹) to 0.30 of maximum unloaded shortening velocity (Vu), thereby modulating isotonic force from 0 to 0.34 Fo, respectively. The step zeroed force and allowed the fibre to shorten unloaded for a brief period of time prior to force redevelopment. Although peak force redevelopment after different releases was similar, +dF/dtR increased by 81 ± 6% (P < 0.05) as fibre shortening speed was reduced from 1.00 Vu. The +dF/dtR after different releases was strongly correlated with the preceding isotonic force (r = 0.99, P < 0.001). Results from additional experiments showed that the slopes of slack test plots produced by systematically increasing the step size that followed each ramp were similar. Thus, isotonic force did not influence Vu (mean: 2.84 ± 0.10 μm half-sarcomere⁻¹ s⁻¹, P < 0.05). We conclude that isotonic force modulates +dF/dtR independent of change in Vu, an outcome consistent with a cooperative influence of attached cross-bridges on thin filament activation that increases cross-bridge attachment rate without alteration to cross-bridge detachment rate. PMID:12205189
LCLS Maximum Credible Beam Power
International Nuclear Information System (INIS)
Clendenin, J.
2005-01-01
The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed
On the prior probabilities for two-stage Bayesian estimates
International Nuclear Information System (INIS)
Kohut, P.
1992-01-01
The method of Bayesian inference is reexamined for its applicability and for the required underlying assumptions in obtaining and using prior probability estimates. Two different approaches are suggested to determine the first-stage priors in the two-stage Bayesian analysis which avoid certain assumptions required for other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is indicated how various quantiles may also be generated with a least squares technique.
Quantum steganography using prior entanglement
International Nuclear Information System (INIS)
Mihara, Takashi
2015-01-01
Steganography is the hiding of secret information within innocent-looking information (e.g., text, audio, image, video, etc.). A quantum version of steganography is a method based on quantum physics. In this paper, we propose quantum steganography by combining quantum error-correcting codes with prior entanglement. In many steganographic techniques, embedding secret messages in error-correcting codes may cause damage to them if the embedded part is corrupted. However, our proposed steganography can separately create secret messages and the content of cover messages. The intrinsic form of the cover message does not have to be modified for embedding secret messages. - Highlights: • Our steganography combines quantum error-correcting codes with prior entanglement. • Our steganography can separately create secret messages and the content of cover messages. • Errors in cover messages do not affect the recovery of secret messages. • We embed a secret message in the Steane code as an example of our steganography
Quantum steganography using prior entanglement
Energy Technology Data Exchange (ETDEWEB)
Mihara, Takashi, E-mail: mihara@toyo.jp
2015-06-05
Steganography is the hiding of secret information within innocent-looking information (e.g., text, audio, image, video, etc.). A quantum version of steganography is a method based on quantum physics. In this paper, we propose quantum steganography by combining quantum error-correcting codes with prior entanglement. In many steganographic techniques, embedding secret messages in error-correcting codes may cause damage to them if the embedded part is corrupted. However, our proposed steganography can separately create secret messages and the content of cover messages. The intrinsic form of the cover message does not have to be modified for embedding secret messages. - Highlights: • Our steganography combines quantum error-correcting codes with prior entanglement. • Our steganography can separately create secret messages and the content of cover messages. • Errors in cover messages do not affect the recovery of secret messages. • We embed a secret message in the Steane code as an example of our steganography.
Prior information in structure estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav; Nedoma, Petr; Khailova, Natalia; Pavelková, Lenka
2003-01-01
Roč. 150, č. 6 (2003), s. 643-653 ISSN 1350-2379 R&D Projects: GA AV ČR IBS1075102; GA AV ČR IBS1075351; GA ČR GA102/03/0049 Institutional research plan: CEZ:AV0Z1075907 Keywords : prior knowledge * structure estimation * autoregressive models Subject RIV: BC - Control Systems Theory Impact factor: 0.745, year: 2003 http://library.utia.cas.cz/separaty/historie/karny-0411258.pdf
On a full Bayesian inference for force reconstruction problems
Aucejo, M.; De Smet, O.
2018-05-01
In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability in mathematically accounting for experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To answer this legitimate question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
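The full-Bayesian step described above (sampling the posterior rather than stopping at the MAP point) can be miniaturized to a scalar force amplitude with a linear forward model; a random-walk Metropolis-Hastings sampler then yields a posterior mean and a 95% credible interval. The forward model, priors, and tuning constants below are illustrative assumptions, not the authors' setup.

```python
import math
import random

random.seed(1)

# Hypothetical forward model: measured response y = g*f + noise, with
# known sensitivity g, noise std sigma, and a Gaussian prior on f
g, sigma, prior_mu, prior_sd = 2.0, 0.5, 0.0, 10.0
f_true = 3.0
y = [g * f_true + random.gauss(0, sigma) for _ in range(20)]

def log_post(f):
    """Unnormalized log-posterior: Gaussian likelihood + Gaussian prior."""
    ll = sum(-0.5 * ((yi - g * f) / sigma) ** 2 for yi in y)
    lp = -0.5 * ((f - prior_mu) / prior_sd) ** 2
    return ll + lp

# Random-walk Metropolis-Hastings
samples, f = [], 0.0
for i in range(20000):
    prop = f + random.gauss(0, 0.2)
    if math.log(random.random()) < log_post(prop) - log_post(f):
        f = prop
    if i >= 5000:              # discard burn-in
        samples.append(f)

samples.sort()
post_mean = sum(samples) / len(samples)
ci = (samples[int(0.025 * len(samples))], samples[int(0.975 * len(samples))])
```

The credible interval is read directly off the sorted chain, which is exactly the kind of posterior uncertainty summary the MAP estimate alone cannot provide.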
International Nuclear Information System (INIS)
Guikema, Seth D.
2007-01-01
Priors play an important role in the use of Bayesian methods in risk analysis, and using all available information to formulate an informative prior can lead to more accurate posterior inferences. This paper examines the practical implications of using five different methods for formulating an informative prior for a failure probability based on past data. These methods are the method of moments, maximum likelihood (ML) estimation, maximum entropy estimation, starting from a non-informative 'pre-prior', and fitting a prior based on confidence/credible interval matching. The priors resulting from the use of these different methods are compared qualitatively, and the posteriors are compared quantitatively based on a number of different scenarios of observed data used to update the priors. The results show that the amount of information assumed in the prior makes a critical difference in the accuracy of the posterior inferences. For situations in which the data used to formulate the informative prior is an accurate reflection of the data that is later observed, the ML approach yields the minimum variance posterior. However, the maximum entropy approach is more robust to differences between the data used to formulate the prior and the observed data because it maximizes the uncertainty in the prior subject to the constraints imposed by the past data
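One of the five elicitation routes compared above, the method of moments, is simple enough to sketch: match a Beta(α, β) prior's mean and variance to past failure-probability data, then update conjugately with newly observed failures. The data below are hypothetical.

```python
def beta_prior_moments(probs):
    """Method-of-moments fit of a Beta(alpha, beta) prior to past
    failure-probability observations."""
    n = len(probs)
    m = sum(probs) / n
    v = sum((p - m) ** 2 for p in probs) / (n - 1)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

# Hypothetical past failure-probability data
past = [0.02, 0.05, 0.03, 0.04, 0.06]
alpha, beta = beta_prior_moments(past)

# Conjugate Bayesian update with newly observed failures/trials
failures, trials = 2, 50
post_alpha, post_beta = alpha + failures, beta + trials - failures
post_mean = post_alpha / (post_alpha + post_beta)
```

The tighter the past data cluster, the larger α + β becomes and the more the prior dominates the posterior, which is one concrete way the "amount of information assumed in the prior" shows up in the inference.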
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging
Directory of Open Access Journals (Sweden)
Shuanghui Zhang
2016-04-01
This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to improve performance on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better-focused radar image. In the proposed method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
Extreme Maximum Land Surface Temperatures.
Garratt, J. R.
1992-09-01
There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
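The upper bound quoted above can be checked directly: if sensible, latent, and ground heat fluxes are neglected, the absorbed shortwave flux is balanced by blackbody emission alone. A sketch of that limiting case:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
Q = 1000.0               # assumed absorbed shortwave flux, W m^-2

# Radiative balance only: sigma * T^4 = Q, an upper bound on surface temperature
T_k = (Q / SIGMA) ** 0.25
print(f"{T_k - 273.15:.1f} degC")  # ≈ 91 degC
```

This lands inside the 90°-100°C range the abstract derives; the real energy balance with nonzero turbulent and ground fluxes gives somewhat lower values.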
Combining Experiments and Simulations Using the Maximum Entropy Principle
DEFF Research Database (Denmark)
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn, first in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight, and conclude with a discussion on remaining challenges.
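The core maximum entropy reweighting step can be sketched as follows: among all reweightings of a simulated ensemble, the minimum-relative-entropy one matching an experimental average has exponential weights w_i ∝ exp(λ·O_i), with λ tuned to hit the target. The observable values and target below are made-up numbers, not data from the papers discussed:

```python
import math

# Toy "simulated" observable values from an ensemble (hypothetical numbers)
obs = [0.8, 1.1, 1.4, 1.9, 2.3, 2.7]
target = 2.0  # hypothetical experimental average to be matched

def reweighted_mean(lam):
    # Maximum entropy weights: w_i ∝ exp(lam * O_i)
    w = [math.exp(lam * o) for o in obs]
    z = sum(w)
    return sum(wi * oi for wi, oi in zip(w, obs)) / z

def solve_lambda(lo=-50.0, hi=50.0):
    # Bisection: the reweighted mean increases monotonically in lambda
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if reweighted_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = solve_lambda()
print(round(reweighted_mean(lam), 6))  # ≈ 2.0
```

The same construction generalizes to several observables (one multiplier per constraint) and to restrained simulations rather than post-hoc reweighting.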
Occupational Outlook Quarterly, 2012
2012-01-01
The labor force is the number of people ages 16 or older who are either working or looking for work. It does not include active-duty military personnel or the institutionalized population, such as prison inmates. Determining the size of the labor force is a way of determining how big the economy can get. The size of the labor force depends on two…
Finding A Minimally Informative Dirichlet Prior Using Least Squares
International Nuclear Information System (INIS)
Kelly, Dana
2011-01-01
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
Finding a minimally informative Dirichlet prior distribution using least squares
International Nuclear Information System (INIS)
Kelly, Dana; Atwood, Corwin
2011-01-01
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
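However the Dirichlet prior is chosen, the responsiveness to sparse data that the abstract emphasizes comes from conjugacy: the posterior parameters are the prior parameters plus the observed counts. A sketch with hypothetical numbers (the prior means and concentration below are illustrative, not the paper's fitted values):

```python
def dirichlet_update(alpha, counts):
    # Conjugacy of the Dirichlet prior with the multinomial likelihood:
    # posterior parameters are prior parameters plus the observed counts
    return [a + n for a, n in zip(alpha, counts)]

def dirichlet_mean(alpha):
    s = sum(alpha)
    return [a / s for a in alpha]

# Hypothetical prior over three alpha-factors: marginal means (0.7, 0.2, 0.1)
# and a small total concentration of 2.0, so sparse data can move it
prior = [1.4, 0.4, 0.2]
posterior = dirichlet_update(prior, [5, 1, 0])  # sparse common-cause counts
print([round(m, 3) for m in dirichlet_mean(posterior)])  # [0.8, 0.175, 0.025]
```

A small total concentration (here 2.0) is what makes the prior "minimally informative": six observations already dominate the posterior means.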
System for memorizing maximum values
Bozeman, Richard J., Jr.
1992-08-01
The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device with n segments connects across the driver output lines. Each segment is associated with one driver output line and includes a microfuse that is blown when a signal appears on the associated driver output line.
Remarks on the maximum luminosity
Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon
2018-04-01
The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, LP=c5/G . Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 LP . We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
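The Planck luminosity quoted above is fixed by fundamental constants alone and is easy to check numerically:

```python
c = 2.99792458e8   # speed of light, m/s (exact by definition)
G = 6.67430e-11    # Newtonian constant of gravitation, m^3 kg^-1 s^-2
L_P = c**5 / G     # the Planck luminosity L_P = c^5 / G
print(f"L_P = {L_P:.2e} W")  # ≈ 3.6e52 W
```

The critical-collapse simulations mentioned in the abstract therefore reach luminosities of order 10⁵¹-10⁵² W.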
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Scintillation counter, maximum gamma aspect
International Nuclear Information System (INIS)
Thumim, A.D.
1975-01-01
A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)
Maximum mutual information regularized classification
Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin
2014-01-01
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
Buhmann, Stefan Yoshi
2012-01-01
In this book, a modern unified theory of dispersion forces on atoms and bodies is presented which covers a broad range of advanced aspects and scenarios. Macroscopic quantum electrodynamics is shown to provide a powerful framework for dispersion forces which allows for discussing general properties like their non-additivity and the relation between microscopic and macroscopic interactions. It is demonstrated how the general results can be used to obtain dispersion forces on atoms in the presence of bodies of various shapes and materials. Starting with a brief recapitulation of volume I, this volume II deals especially with bodies of irregular shapes, universal scaling laws, dynamical forces on excited atoms, enhanced forces in cavity quantum electrodynamics, non-equilibrium forces in thermal environments and quantum friction. The book gives both the specialist and those new to the field a thorough overview over recent results in the field. It provides a toolbox for studying dispersion forces in various contex...
Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation
Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting
2014-12-01
This paper presents a Bayesian image segmentation model based on a Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of Potts models, and the mean vectors and covariance matrices of Gaussian distributions in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. In order to determine these hyperparameters, we propose a new scheme for hyperparameter estimation based on conditional maximization of entropy in the Potts prior. The algorithm is given based on loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and also clarify how the first-order phase transitions in loopy belief propagation for Potts models influence our hyperparameter estimation procedures.
Generalized multiple kernel learning with data-dependent priors.
Mao, Qi; Tsang, Ivor W; Gao, Shenghua; Wang, Li
2015-06-01
Multiple kernel learning (MKL) and classifier ensemble are two mainstream methods for solving learning problems in which some sets of features/views are more informative than others, or the features/views within a given set are inconsistent. In this paper, we first present a novel probabilistic interpretation of MKL such that maximum entropy discrimination with a noninformative prior over multiple views is equivalent to the formulation of MKL. Instead of using the noninformative prior, we introduce a novel data-dependent prior based on an ensemble of kernel predictors, which enhances the prediction performance of MKL by leveraging the merits of the classifier ensemble. With the proposed probabilistic framework of MKL, we propose a hierarchical Bayesian model to learn the proposed data-dependent prior and classification model simultaneously. The resultant problem is convex and other information (e.g., instances with either missing views or missing labels) can be seamlessly incorporated into the data-dependent priors. Furthermore, a variety of existing MKL models can be recovered under the proposed MKL framework and can be readily extended to incorporate these priors. Extensive experiments demonstrate the benefits of our proposed framework in supervised and semisupervised settings, as well as in tasks with partial correspondence among multiple views.
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
Maximum entropy and Bayesian methods
International Nuclear Information System (INIS)
Smith, C.R.; Erickson, G.J.; Neudorfer, P.O.
1992-01-01
Bayesian probability theory and maximum entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers, allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come
Theory and application of maximum magnetic energy in toroidal plasmas
International Nuclear Information System (INIS)
Chu, T.K.
1992-02-01
The magnetic energy in an inductively driven steady-state toroidal plasma is a maximum for a given rate of dissipation of energy (Poynting flux). A purely resistive steady state of the piecewise force-free configuration, however, cannot exist, as the periodic removal of the excess poloidal flux and pressure, due to heating, ruptures the static equilibrium of the partitioning rational surfaces intermittently. The rupture necessitates a plasma with a negative q'/q (as in reverse field pinches and spheromaks) to have the same α in all its force-free regions and with a positive q'/q (as in tokamaks) to have centrally peaked α's
An Adaptively Accelerated Bayesian Deblurring Method with Entropy Prior
Directory of Open Access Journals (Sweden)
Yong-Hoon Kim
2008-05-01
Full Text Available The development of an efficient adaptively accelerated iterative deblurring algorithm based on Bayesian statistical concepts is reported. The entropy of an image has been used as a "prior" distribution and, instead of the additive form used in conventional acceleration methods, an exponential form of relaxation constant has been used for acceleration. The proposed method is thus called hereafter the adaptively accelerated maximum a posteriori with entropy prior (AAMAPE) method. Based on empirical observations in different experiments, the exponent is computed adaptively using first-order derivatives of the deblurred image from the previous two iterations. This exponent improves the speed of the AAMAPE method in early stages and ensures stability at later stages of iteration. In the AAMAPE method, we also consider the constraints of nonnegativity and flux conservation. The paper discusses the fundamental idea of Bayesian image deblurring with the use of entropy as a prior, and analyzes the superresolution and noise amplification characteristics of the proposed method. The experimental results show that the proposed AAMAPE method gives lower RMSE and higher SNR in 44% fewer iterations compared to the nonaccelerated maximum a posteriori with entropy prior (MAPE) method. Moreover, AAMAPE followed by wavelet Wiener filtering gives better results than the state-of-the-art methods.
Maximum entropy principle for transportation
International Nuclear Information System (INIS)
Bilich, F.; Da Silva, R.
2008-01-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
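The entropy-maximizing trip distribution described above is, in its standard doubly constrained form, solvable by iterative proportional fitting. A minimal sketch of that classical formulation (not the paper's dependence-coefficient variant; the origin/destination totals and deterrence factors are made-up numbers):

```python
def ipf_trip_distribution(origins, destinations, deterrence, iters=100):
    """Doubly constrained entropy-maximizing trip distribution:
    T_ij ∝ f_ij, balanced by iterative proportional fitting so that
    row sums match origin totals and column sums match destination totals."""
    n, m = len(origins), len(destinations)
    T = [[deterrence[i][j] for j in range(m)] for i in range(n)]
    for _ in range(iters):
        for i in range(n):                       # scale rows to origin totals
            s = sum(T[i])
            T[i] = [t * origins[i] / s for t in T[i]]
        for j in range(m):                       # scale columns to destination totals
            s = sum(T[i][j] for i in range(n))
            for i in range(n):
                T[i][j] *= destinations[j] / s
    return T

O = [100, 200]                 # hypothetical trips produced at each origin
D = [150, 150]                 # hypothetical trips attracted to each destination
f = [[1.0, 0.5], [0.5, 1.0]]   # hypothetical deterrence factors
T = ipf_trip_distribution(O, D, f)
print([round(sum(row)) for row in T])  # row sums ≈ [100, 200]
```

The balancing factors recovered by the iteration play the role of the Lagrange multipliers of the entropy maximization; the paper's contribution is to replace such constraints with estimated dependence coefficients.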
Contribution of Prior Semantic Knowledge to New Episodic Learning in Amnesia
Kan, Irene P.; Alexander, Michael P.; Verfaellie, Mieke
2009-01-01
We evaluated whether prior semantic knowledge would enhance episodic learning in amnesia. Subjects studied prices that are either congruent or incongruent with prior price knowledge for grocery and household items and then performed a forced-choice recognition test for the studied prices. Consistent with a previous report, healthy controls'…
Last Glacial Maximum Salinity Reconstruction
Homola, K.; Spivack, A. J.
2016-12-01
It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high-precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high-precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10⁻⁶ g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3×10⁻⁶ g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl⁻ and SO₄²⁻) and cations (Na⁺, Mg²⁺, Ca²⁺, and K⁺) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO₄²⁻/Cl⁻ and Mg²⁺/Na⁺, and 0.4% for Ca²⁺/Na⁺ and K⁺/Na⁺. Alkalinity, pH, and dissolved inorganic carbon of the porewater were determined to precisions better than 4% when ratioed to Cl⁻, and used to calculate HCO₃⁻ and CO₃²⁻. Apparent partial molar densities in seawater were
Maximum Parsimony on Phylogenetic networks
2012-01-01
Background Phylogenetic networks are generalizations of phylogenetic trees, that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network; and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as Sankoff and Fitch algorithms extend naturally for networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched with the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
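The Fitch algorithm mentioned above computes the optimum parsimony score on a tree in a single postorder pass; the paper's extension carries the same idea to networks, with extra handling at reticulate vertices. A minimal tree-only sketch for one character:

```python
def fitch(tree, leaf_states, root="root"):
    """Fitch's small-parsimony algorithm on a rooted binary tree.

    tree: dict mapping each internal node to (left_child, right_child);
    leaf_states: dict mapping each leaf name to its character state.
    Returns the minimum number of substitutions needed to explain the leaves."""
    cost = 0

    def post(node):
        nonlocal cost
        if node in leaf_states:
            return {leaf_states[node]}
        left, right = tree[node]
        a, b = post(left), post(right)
        if a & b:
            return a & b       # intersection step: no substitution needed
        cost += 1              # union step: one substitution somewhere below
        return a | b

    post(root)
    return cost

tree = {"root": ("x", "y"), "x": ("A", "B"), "y": ("C", "D")}
states = {"A": "T", "B": "G", "C": "G", "D": "G"}
print(fitch(tree, states))  # 1
```

Sankoff's algorithm generalizes this to arbitrary substitution cost matrices, which is the setting the paper's network heuristics support.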
1982-01-01
The different forces, together with a pictorial analogy of how the exchange of particles works. The table lists the relative strength of the couplings, the quanta associated with the force fields and the bodies or phenomena in which they have a dominant role.
Occupational Outlook Quarterly, 2010
2010-01-01
The labor force is the number of people aged 16 or older who are either working or looking for work. It does not include active-duty military personnel or institutionalized people, such as prison inmates. Quantifying this total supply of labor is a way of determining how big the economy can get. Labor force participation rates vary significantly…
Source Localization by Entropic Inference and Backward Renormalization Group Priors
Directory of Open Access Journals (Sweden)
Nestor Caticha
2015-04-01
Full Text Available A systematic method of transferring information from coarser to finer resolution based on renormalization group (RG) transformations is introduced. It permits building informative priors at finer scales from posteriors at coarser scales since, under some conditions, RG transformations in the space of hyperparameters can be inverted. These priors are updated using renormalized data into posteriors by maximum entropy. The resulting inference method, backward RG (BRG) priors, is tested by simulating a functional magnetic resonance imaging (fMRI) experiment. Its results are compared with a Bayesian approach working at the finest available resolution. Using BRG priors, sources can be partially identified even when signal-to-noise ratio levels are as low as ~ -25 dB, improving vastly on the single-step Bayesian approach. For low levels of noise the BRG prior is not an improvement over the single-scale Bayesian method. Analysis of the histograms of hyperparameters can show how to distinguish whether the method is failing, due to very high levels of noise, or whether the identification of the sources is, at least partially, possible.
``Force,'' ontology, and language
Brookes, David T.; Etkina, Eugenia
2009-06-01
We introduce a linguistic framework through which one can interpret systematically students’ understanding of and reasoning about force and motion. Some researchers have suggested that students have robust misconceptions or alternative frameworks grounded in everyday experience. Others have pointed out the inconsistency of students’ responses and presented a phenomenological explanation for what is observed, namely, knowledge in pieces. We wish to present a view that builds on and unifies aspects of this prior research. Our argument is that many students’ difficulties with force and motion are primarily due to a combination of linguistic and ontological difficulties. It is possible that students are primarily engaged in trying to define and categorize the meaning of the term “force” as spoken about by physicists. We found that this process of negotiation of meaning is remarkably similar to that engaged in by physicists in history. In this paper we will describe a study of the historical record that reveals an analogous process of meaning negotiation, spanning multiple centuries. Using methods from cognitive linguistics and systemic functional grammar, we will present an analysis of the force and motion literature, focusing on prior studies with interview data. We will then discuss the implications of our findings for physics instruction.
Maximum entropy deconvolution of low count nuclear medicine images
International Nuclear Information System (INIS)
McGrath, D.M.
1998-12-01
Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were
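The Bayesian maximum likelihood expectation maximisation comparator described above resembles the classical Richardson-Lucy ML-EM update for Poisson-distributed counts. A 1-D sketch under that assumption (the data and PSF below are made-up numbers, and this is the generic ML-EM iteration, not the thesis's exact implementation):

```python
def ml_em_deconvolve(data, psf, iterations=50):
    """1-D maximum-likelihood EM (Richardson-Lucy) deconvolution for
    Poisson counts. psf is assumed normalized and symmetric."""
    n = len(data)
    est = [sum(data) / n] * n           # flat, count-preserving start
    half = len(psf) // 2

    def convolve(x):
        out = []
        for i in range(n):
            s = 0.0
            for k, p in enumerate(psf):
                j = i + k - half
                if 0 <= j < n:
                    s += p * x[j]
            out.append(s)
        return out

    for _ in range(iterations):
        blur = convolve(est)
        ratio = [d / b if b > 0 else 0.0 for d, b in zip(data, blur)]
        corr = convolve(ratio)          # symmetric psf: correlation == convolution
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.25, 0.5, 0.25]
data = [0, 2, 8, 2, 0]                  # a blurred point source, in counts
est = ml_em_deconvolve(data, psf)
print([round(e, 2) for e in est])
```

Each iteration multiplies the estimate by the back-projected ratio of measured to predicted counts, which keeps the estimate nonnegative; the entropy-prior MAP variants discussed in the abstract add a prior term to this multiplicative update.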
Divergent Priors and well Behaved Bayes Factors
R.W. Strachan (Rodney); H.K. van Dijk (Herman)
2011-01-01
Divergent priors are improper when defined on unbounded supports. Bartlett's paradox has been taken to imply that using improper priors results in ill-defined Bayes factors, preventing model comparison by posterior probabilities. However, many improper priors have attractive properties
Two-dimensional maximum entropy image restoration
International Nuclear Information System (INIS)
Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.
1977-07-01
An optical check problem was constructed to test P log P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures
Iterated random walks with shape prior
DEFF Research Database (Denmark)
Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma
2016-01-01
We propose a new framework for image segmentation using random walks, where a distance shape prior is combined with a region term. The shape prior is weighted by a confidence map to reduce the influence of the prior in high-gradient areas, and the region term is computed with k-means to estimate the parametric probability density function. Random walks segmentation is then performed iteratively, aligning the prior with the current segmentation in every iteration. We tested the proposed approach on natural and medical images and compared it with the latest random walks and shape prior techniques. The experiments suggest that this method gives promising results for medical and natural images.
Modeling Climate Responses to Spectral Solar Forcing on Centennial and Decadal Time Scales
Wen, G.; Cahalan, R.; Rind, D.; Jonas, J.; Pilewskie, P.; Harder, J.
2012-01-01
We report a series of experiments to explore climate responses to two types of solar spectral forcing on decadal and centennial time scales - one based on prior reconstructions, and another implied by recent observations from the SORCE (Solar Radiation and Climate Experiment) SIM (Spectral Irradiance Monitor). We apply these forcings to the Goddard Institute for Space Studies (GISS) Global/Middle Atmosphere Model (GCMAM), which couples the atmosphere with the ocean and has a model top near the mesopause, allowing us to examine the full response to the two solar forcing scenarios. We show different climate responses to the two solar forcing scenarios on decadal time scales and also trends on centennial time scales. Differences between solar maximum and solar minimum conditions are highlighted, including impacts of the time-lagged response of the lower atmosphere and ocean. This contrasts with studies that assume separate equilibrium conditions at solar maximum and minimum. We discuss model feedback mechanisms involved in the solar-forced climate variations.
A maximum likelihood framework for protein design
Directory of Open Access Journals (Sweden)
Philippe Hervé
2006-06-01
Full Text Available Abstract Background The aim of protein design is to predict amino-acid sequences compatible with a given target structure. Traditionally envisioned as a purely thermodynamic question, this problem can also be understood in a wider context, where additional constraints are captured by learning the sequence patterns displayed by natural proteins of known conformation. In this latter perspective, however, we still need a theoretical formalization of the question, leading to general and efficient learning methods, and allowing for the selection of fast and accurate objective functions quantifying sequence/structure compatibility. Results We propose a formulation of the protein design problem in terms of model-based statistical inference. Our framework uses the maximum likelihood principle to optimize the unknown parameters of a statistical potential, which we call an inverse potential to contrast with classical potentials used for structure prediction. We propose an implementation based on Markov chain Monte Carlo, in which the likelihood is maximized by gradient descent and is numerically estimated by thermodynamic integration. The fit of the models is evaluated by cross-validation. We apply this to a simple pairwise contact potential, supplemented with a solvent-accessibility term, and show that the resulting models have a better predictive power than currently available pairwise potentials. Furthermore, the model comparison method presented here allows one to measure the relative contribution of each component of the potential, and to choose the optimal number of accessibility classes, which turns out to be much higher than classically considered. Conclusion Altogether, this reformulation makes it possible to test a wide diversity of models, using different forms of potentials, or accounting for other factors than just the constraint of thermodynamic stability. Ultimately, such model-based statistical analyses may help to understand the forces
Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks
Directory of Open Access Journals (Sweden)
Steven H. Waldrip
2017-02-01
Full Text Available We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables, when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints, therefore, has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations. The second MaxEnt method with soft probabilistic constraints is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation.
Kishima, Hideyuki; Mine, Takanao; Takahashi, Satoshi; Ashida, Kenki; Ishihara, Masaharu; Masuyama, Tohru
2018-02-01
Left atrium (LA) systolic dysfunction is observed in the early stages of atrial fibrillation (AF) prior to LA anatomical change. We investigated whether LA systolic dysfunction predicts recurrent AF after catheter ablation (CA) in patients with paroxysmal AF. We studied 106 patients who underwent CA for paroxysmal AF. LA systolic function was assessed with the LA emptying volume = Maximum LA volume (LAVmax) - Minimum LA volume (LAVmin), LA emptying fraction = [(LAVmax - LAVmin)/LAVmax] × 100, and LA ejection force calculated with Manning's method [LA ejection force = (0.5 × ρ × mitral valve area × A²)], where ρ is the blood density and A is the late-diastolic mitral inflow velocity. Recurrent AF was detected in 35/106 (33%) during 14.6 ± 9.1 months. Univariate analysis revealed reduced LA ejection force, decreased LA emptying fraction, larger LA diameter, and elevated brain natriuretic peptide as significant variables. On multivariate analysis, reduced LA ejection force and larger LA diameter were independently associated with recurrent AF. Moreover, patients with reduced LA ejection force and larger LA diameter had a higher risk of recurrent AF than those with preserved LA ejection force (log-rank P = 0.0004). Reduced LA ejection force and larger LA diameter were associated with poor outcome after CA for paroxysmal AF, and could be a new index to predict recurrent AF. © 2017 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Xia Lei
2010-12-01
Full Text Available It is difficult for general multi-objective optimization methods to obtain prior information, and how to utilize prior information has been a challenge. This paper analyzes the characteristics of Bayesian decision-making based on the maximum entropy principle and prior information, in particular how to effectively improve decision-making reliability when reference samples are deficient. The paper demonstrates the effectiveness of the proposed method in a real application: multi-frequency offset estimation in a distributed multiple-input multiple-output system. The simulation results demonstrate that Bayesian decision-making based on prior information has better global searching capability when sampling data are deficient.
Receiver function estimated by maximum entropy deconvolution
Institute of Scientific and Technical Information of China (English)
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to compute the iterative formula of the error-predicting filter, from which the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring the receiver function in the time domain.
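The Levinson recursion for the Toeplitz normal equations mentioned in this record can be sketched as follows. This is a generic textbook implementation, not the authors' code; the reflection coefficients it produces are exactly the quantities whose magnitude below 1 guarantees stability:

```python
def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for the prediction-error filter
    via the Levinson recursion.

    r     : autocorrelation sequence r[0], ..., r[order]
    Returns (a, reflection_coeffs, prediction_error_power), where
    a = [1, a1, ..., a_order] is the prediction-error filter."""
    a = [0.0] * (order + 1)
    a[0] = 1.0
    err = r[0]
    ks = []
    for m in range(1, order + 1):
        # innovation: correlation not yet explained by the current filter
        acc = r[m] + sum(a[j] * r[m - j] for j in range(1, m))
        k = -acc / err          # reflection coefficient, |k| < 1 for stability
        ks.append(k)
        new_a = a[:]
        for j in range(1, m):   # symmetric coefficient update
            new_a[j] = a[j] + k * a[m - j]
        new_a[m] = k
        a = new_a
        err *= (1.0 - k * k)    # prediction-error power shrinks each step
    return a, ks, err
```

For an AR(1) process with coefficient 0.5 the autocorrelation is r[m] = 0.5^m, and the recursion should recover the filter [1, -0.5, 0] with all reflection coefficients inside the unit circle.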
Park, Sangsoo; Spirduso, Waneen; Eakin, Tim; Abraham, Lawrence
2018-01-01
The authors investigated how varying the required low-level forces and the direction of force change affect accuracy and variability of force production in a cyclic isometric pinch force tracking task. Eighteen healthy right-handed adult volunteers performed the tracking task over 3 different force ranges. Root mean square error and coefficient of variation were higher at lower force levels and during minimum reversals compared with maximum reversals. Overall, the thumb showed greater root mean square error and coefficient of variation scores than did the index finger during maximum reversals, but not during minimum reversals. The observed impaired performance during minimum reversals might originate from history-dependent mechanisms of force production and highly coupled 2-digit performance.
Maximum Power from a Solar Panel
Directory of Open Access Journals (Sweden)
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, the voltage of maximum power, the current of maximum power, and the maximum power itself are each plotted as functions of the time of day.
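As a hedged illustration of the maximization step this record describes, the maximum power point can be located numerically for an assumed single-diode I-V model; the model and all parameter values below are illustrative assumptions, not data from the article:

```python
import math

def current(v, i_sc=5.0, i_0=1e-9, v_t=0.026 * 60):
    """Illustrative single-diode panel model: I(V) = Isc - I0*(exp(V/Vt) - 1).

    Isc = short-circuit current (A), I0 = diode saturation current (A),
    Vt = thermal voltage times an assumed 60 cells in series (V).
    All values are assumptions for the sketch."""
    return i_sc - i_0 * (math.exp(v / v_t) - 1.0)

def maximum_power_point(v_max=40.0, steps=100000):
    """Locate the voltage where dP/dV = 0 by scanning P(V) = V * I(V).

    A fine scan stands in for the analytic differentiation: at the
    maximum, the derivative Isc - I0*exp(V/Vt)*(1 + V/Vt) changes sign."""
    best_v, best_p = 0.0, 0.0
    for i in range(steps + 1):
        v = v_max * i / steps
        p = v * current(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p
```

With these assumed parameters the open-circuit voltage is about 35 V, and the maximum power point lands near 0.85 of it, as expected for a diode-limited panel.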
Bayesian hierarchical models for regional climate reconstructions of the last glacial maximum
Weitzel, Nils; Hense, Andreas; Ohlwein, Christian
2017-04-01
Spatio-temporal reconstructions of past climate are important for the understanding of the long term behavior of the climate system and the sensitivity to forcing changes. Unfortunately, they are subject to large uncertainties, have to deal with a complex proxy-climate structure, and a physically reasonable interpolation between the sparse proxy observations is difficult. Bayesian Hierarchical Models (BHMs) are a class of statistical models that is well suited for spatio-temporal reconstructions of past climate because they permit the inclusion of multiple sources of information (e.g. records from different proxy types, uncertain age information, output from climate simulations) and quantify uncertainties in a statistically rigorous way. BHMs in paleoclimatology typically consist of three stages which are modeled individually and are combined using Bayesian inference techniques. The data stage models the proxy-climate relation (often named transfer function), the process stage models the spatio-temporal distribution of the climate variables of interest, and the prior stage consists of prior distributions of the model parameters. For our BHMs, we translate well-known proxy-climate transfer functions for pollen to a Bayesian framework. In addition, we can include Gaussian distributed local climate information from preprocessed proxy records. The process stage combines physically reasonable spatial structures from prior distributions with proxy records which leads to a multivariate posterior probability distribution for the reconstructed climate variables. The prior distributions that constrain the possible spatial structure of the climate variables are calculated from climate simulation output. We present results from pseudoproxy tests as well as new regional reconstructions of temperatures for the last glacial maximum (LGM, ~21,000 years BP). These reconstructions combine proxy data syntheses with information from climate simulations for the LGM that were
PET image reconstruction using multi-parametric anato-functional priors
Mehranian, Abolfazl; Belzunce, Martin A.; Niccolini, Flavia; Politis, Marios; Prieto, Claudia; Turkheimer, Federico; Hammers, Alexander; Reader, Andrew J.
2017-08-01
In this study, we investigate the application of multi-parametric anato-functional (MR-PET) priors for the maximum a posteriori (MAP) reconstruction of brain PET data in order to address the limitations of the conventional anatomical priors in the presence of PET-MR mismatches. In addition to partial volume correction benefits, the suitability of these priors for reconstruction of low-count PET data is also introduced and demonstrated, comparing to standard maximum-likelihood (ML) reconstruction of high-count data. The conventional local Tikhonov and total variation (TV) priors and current state-of-the-art anatomical priors including the Kaipio, non-local Tikhonov prior with Bowsher and Gaussian similarity kernels are investigated and presented in a unified framework. The Gaussian kernels are calculated using both voxel- and patch-based feature vectors. To cope with PET and MR mismatches, the Bowsher and Gaussian priors are extended to multi-parametric priors. In addition, we propose a modified joint Burg entropy prior that by definition exploits all parametric information in the MAP reconstruction of PET data. The performance of the priors was extensively evaluated using 3D simulations and two clinical brain datasets of [18F]florbetaben and [18F]FDG radiotracers. For simulations, several anato-functional mismatches were intentionally introduced between the PET and MR images, and furthermore, for the FDG clinical dataset, two PET-unique active tumours were embedded in the PET data. Our simulation results showed that the joint Burg entropy prior far outperformed the conventional anatomical priors in terms of preserving PET unique lesions, while still reconstructing functional boundaries with corresponding MR boundaries. In addition, the multi-parametric extension of the Gaussian and Bowsher priors led to enhanced preservation of edge and PET unique features and also an improved bias-variance performance. In agreement with the simulation results, the clinical results
Penalized Maximum Likelihood Estimation for univariate normal mixture distributions
International Nuclear Information System (INIS)
Ridolfi, A.; Idier, J.
2001-01-01
Due to singularities of the likelihood function, the maximum likelihood approach for the estimation of the parameters of normal mixture models is an acknowledged ill-posed optimization problem. Ill-posedness is solved by penalizing the likelihood function. In the Bayesian framework, it amounts to incorporating an inverted gamma prior in the likelihood function. A penalized version of the EM algorithm is derived, which is still explicit and which intrinsically assures that the estimates are not singular. Numerical evidence of the latter property is put forward with a test
Maximum Likelihood and Bayes Estimation in Randomly Censored Geometric Distribution
Directory of Open Access Journals (Sweden)
Hare Krishna
2017-01-01
Full Text Available In this article, we study the geometric distribution under randomly censored data. Maximum likelihood estimators and confidence intervals based on Fisher information matrix are derived for the unknown parameters with randomly censored data. Bayes estimators are also developed using beta priors under generalized entropy and LINEX loss functions. Also, Bayesian credible and highest posterior density (HPD credible intervals are obtained for the parameters. Expected time on test and reliability characteristics are also analyzed in this article. To compare various estimates developed in the article, a Monte Carlo simulation study is carried out. Finally, for illustration purpose, a randomly censored real data set is discussed.
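For intuition, the conjugate beta-prior structure underlying the Bayes estimators in this record can be sketched for the simpler complete-data (uncensored) case; the randomly censored likelihood and the generalized entropy and LINEX loss functions of the article are omitted here, and the posterior mean shown corresponds to squared-error loss:

```python
import random

def geometric_sample(p, n, rng):
    """Draw n geometric variates: trials up to and including the first
    success, so the support is {1, 2, 3, ...}."""
    out = []
    for _ in range(n):
        trials = 1
        while rng.random() >= p:
            trials += 1
        out.append(trials)
    return out

def mle_geometric(data):
    """Maximum likelihood estimate of p for complete (uncensored) data."""
    return len(data) / sum(data)

def bayes_geometric(data, a=1.0, b=1.0):
    """Posterior mean of p under a conjugate Beta(a, b) prior.

    Likelihood p^n (1-p)^(T-n) with T = sum of the data gives the
    posterior Beta(a + n, b + T - n), whose mean is (a + n)/(a + b + T)."""
    n, t = len(data), sum(data)
    return (a + n) / (a + b + t)
```

With a large sample the Bayes estimate under a flat Beta(1, 1) prior is close to the MLE, as the posterior concentrates on the data.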
Extracting volatility signal using maximum a posteriori estimation
Neto, David
2016-11-01
This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and then log-returns marginal distributions with heavy tails. We consider two routes to choose the regularization and we compare our MAP estimate to realized volatility measure for three exchange rates.
Force-Time Entropy of Isometric Impulse.
Hsieh, Tsung-Yu; Newell, Karl M
2016-01-01
The relation between force and temporal variability in discrete impulse production has been viewed as independent (R. A. Schmidt, H. Zelaznik, B. Hawkins, J. S. Frank, & J. T. Quinn, 1979) or dependent on the rate of force (L. G. Carlton & K. M. Newell, 1993). Two experiments in an isometric single finger force task investigated the joint force-time entropy with (a) fixed time to peak force and different percentages of force level and (b) fixed percentage of force level and different times to peak force. The results showed that the peak force variability increased either with the increment of force level or through a shorter time to peak force that also reduced timing error variability. The peak force entropy and entropy of time to peak force increased on the respective dimension as the parameter conditions approached either maximum force or a minimum rate of force production. The findings show that force error and timing error are dependent but complementary when considered in the same framework with the joint force-time entropy at a minimum in the middle parameter range of discrete impulse.
Physical examination prior to initiating hormonal contraception: a systematic review.
Tepper, Naomi K; Curtis, Kathryn M; Steenland, Maria W; Marchbanks, Polly A
2013-05-01
Provision of contraception is often linked with physical examination, including clinical breast examination (CBE) and pelvic examination. This review was conducted to evaluate the evidence regarding outcomes among women with and without physical examination prior to initiating hormonal contraceptives. The PubMed database was searched from database inception through March 2012 for all peer-reviewed articles in any language concerning CBE and pelvic examination prior to initiating hormonal contraceptives. The quality of each study was assessed using the United States Preventive Services Task Force grading system. The search did not identify any evidence regarding outcomes among women screened versus not screened with CBE prior to initiation of hormonal contraceptives. The search identified two case-control studies of fair quality which compared women who did or did not undergo pelvic examination prior to initiating oral contraceptives (OCs) or depot medroxyprogesterone acetate (DMPA). No differences in risk factors for cervical neoplasia, incidence of sexually transmitted infections, incidence of abnormal Pap smears or incidence of abnormal wet mount findings were observed. Although women with breast cancer should not use hormonal contraceptives, there is little utility in screening prior to initiation, due to the low incidence of breast cancer and uncertain value of CBE among women of reproductive age. Two fair quality studies demonstrated no differences between women who did or did not undergo pelvic examination prior to initiating OCs or DMPA with respect to risk factors or clinical outcomes. In addition, pelvic examination is not likely to detect any conditions for which hormonal contraceptives would be unsafe. Published by Elsevier Inc.
On the maximum entropy distributions of inherently positive nuclear data
Energy Technology Data Exchange (ETDEWEB)
Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.
2017-05-11
The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
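The article's central claim can be checked numerically in one dimension: a normal distribution truncated to the positive axis is the maximum entropy distribution for its own mean and variance, so a lognormal matched to the same two moments must have strictly smaller differential entropy. A pure-Python sketch with numerical integration (the location and scale below are illustrative choices):

```python
import math

def phi(z):
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simpson(f, a, b, n=4000):
    """Composite Simpson quadrature (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

def truncated_normal_pdf(x, m, s):
    """Normal(m, s^2) truncated to [0, inf)."""
    return phi((x - m) / s) / (s * Phi(m / s))

m, s = 1.0, 1.0                     # illustrative location and scale
hi = m + 12.0 * s                   # effective upper integration limit
f = lambda x: truncated_normal_pdf(x, m, s)

mean = simpson(lambda x: x * f(x), 0.0, hi)
var = simpson(lambda x: (x - mean) ** 2 * f(x), 0.0, hi)

# Lognormal matched to the same mean and variance
sig2 = math.log(1.0 + var / mean ** 2)
mu = math.log(mean) - 0.5 * sig2

# Differential entropies: numerical for the truncated normal,
# closed form for the lognormal
h_trunc = simpson(lambda x: -f(x) * math.log(f(x)), 1e-12, hi)
h_logn = mu + 0.5 * math.log(2.0 * math.pi * math.e * sig2)
```

For these parameters the truncated normal's entropy (about 1.10 nats) exceeds the moment-matched lognormal's (about 0.94 nats), consistent with the article's argument.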
International Nuclear Information System (INIS)
Holinde, K.
1990-01-01
In this paper the present status of the meson theory of nuclear forces is reviewed. After some introductory remarks about the relevance of the meson-exchange concept in the era of QCD and the empirical features of the NN interaction, the exciting history of nuclear forces is briefly outlined. In the main part, the author gives the basic physical ideas and sketches the derivation of the one-boson-exchange model of the nuclear force in the Feynman approach. Secondly, various necessary extensions are described in a qualitative way, leading to the Bonn model of the NN interaction. Finally, the author points to some interesting open questions connected with the extended quark structure of the hadrons, which are topics of current research activity
Principle of maximum Fisher information from Hardy's axioms applied to statistical systems.
Frieden, B Roy; Gatenby, Robert A
2013-10-01
Consider a finite-sized, multidimensional system in parameter state a. The system is either at statistical equilibrium or general nonequilibrium, and may obey either classical or quantum physics. L. Hardy's mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N=max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N=max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I=I(max). This is important because many physical laws have been derived, assuming as a working hypothesis that I=I(max). These derivations include uses of the principle of extreme physical information (EPI). Examples of such derivations were of the De Broglie wave hypothesis, quantum wave equations, Maxwell's equations, new laws of biology (e.g., of Coulomb force-directed cell development and of in situ cancer growth), and new laws of economic fluctuation and investment. That the principle I=I(max) itself derives from suitably extended Hardy axioms thereby eliminates its need to be assumed in these derivations. Thus, uses of I=I(max) and EPI express physics at its most fundamental level, its axiomatic basis in math.
Penalised Complexity Priors for Stationary Autoregressive Processes
Sørbye, Sigrunn Holbek; Rue, Haavard
2017-01-01
The autoregressive (AR) process of order p (AR(p)) is a central model in time series analysis. A Bayesian approach requires the user to define a prior distribution for the coefficients of the AR(p) model. Although it is easy to write down some prior, it is not at all obvious how to understand and interpret the prior distribution, to ensure that it behaves according to the users' prior knowledge. In this article, we approach this problem using the recently developed ideas of penalised complexity (PC) priors. These priors have important properties like robustness and invariance to reparameterisations, as well as a clear interpretation. A PC prior is computed based on specific principles, where model component complexity is penalised in terms of deviation from simple base model formulations. In the AR(1) case, we discuss two natural base model choices, corresponding to either independence in time or no change in time. The latter case is illustrated in a survival model with possible time-dependent frailty. For higher-order processes, we propose a sequential approach, where the base model for AR(p) is the corresponding AR(p-1) model expressed using the partial autocorrelations. The properties of the new prior distribution are compared with the reference prior in a simulation study.
Wilson, P R
1996-07-01
The marginal adaptation of full coverage restorations is adversely affected by the introduction of luting agents of various minimum film thicknesses during the cementation process. The increase in the marginal opening may have long-term detrimental effects on the health of both pulpal and periodontal tissues. The purpose of this study was to determine the effects of varying seating forces (2.5, 12.5, 25 N), venting, and cement types on post-cementation marginal elevation in cast crowns. A standardized cement space of 40 microns was provided between a machined gold crown and a stainless steel die. An occlusal vent was placed that could be opened or closed. The post-cementation crown elevation was measured, following the use of two commercially available capsulated dental cements (Phosphacap, and Ketac-cem Applicap). The results indicate that only the combination of Ketac-Cem Applicap and crown venting produced post-cementation crown elevation of less than 20 microns when 12.5 N seating force was used. Higher forces (25 N) and venting were required for comparable seating when using Phosphacap (19 microns). The amount of force required to allow maximum seating of cast crowns appears to be cement specific, and is reduced by effective venting procedures.
Maximum permissible voltage of YBCO coated conductors
Energy Technology Data Exchange (ETDEWEB)
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous Ic degradation occurs under repetitive quenching in which tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) could reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Energy Technology Data Exchange (ETDEWEB)
Lee, Taek-Soo; Tsui, Benjamin M.W. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Radiology; Gullberg, Grant T. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)
2011-07-01
We propose and evaluate here a 4D maximum a posteriori rescaled-block iterative (MAP-RBI)-EM image reconstruction method with a motion prior to improve the accuracy of 4D gated myocardial perfusion (GMP) SPECT images. We hypothesized that a 4D motion prior which resembles the global motion of the true 4D motion of the heart will improve the accuracy of the reconstructed images with regional myocardial motion defects. The normal heart model in the 4D XCAT (eXtended CArdiac-Torso) phantom is used as the prior in the 4D MAP-RBI-EM algorithm, where a Gaussian-shaped distribution is used as the derivative of the potential function (DPF) that determines the smoothing strength and range of the prior in the algorithm. The mean and width of the DPF are set equal to the expected difference between the reconstructed image and the motion prior, and to the smoothing range, respectively. To evaluate the algorithm, we used simulated projection data from a typical clinical 99mTc Sestamibi GMP SPECT study using the 4D XCAT phantom. The noise-free projection data were generated using an analytical projector that included the effects of attenuation, collimator-detector response and scatter (ADS), and Poisson noise was added to generate noisy projection data. The projection datasets were reconstructed using the modified 4D MAP-RBI-EM with various iterations, prior weights, and sigma values, as well as with ADS correction. The results showed that the 4D reconstructed image estimates looked more like the motion prior, with sharper edges, as the weight of the prior increased. It also demonstrated that edge preservation of the myocardium in the GMP SPECT images could be controlled by a proper motion prior. The Gaussian-shaped DPF allowed stronger and weaker smoothing force for smaller and larger differences of neighboring voxel values, respectively, depending on its parameter values. We concluded that the 4D MAP-RBI-EM algorithm with the general motion prior can be used to provide 4D GMP SPECT images with improved
Directory of Open Access Journals (Sweden)
Edward W. J. Cadigan
2017-09-01
Full Text Available Transcranial magnetic stimulation (TMS) and motor point stimulation have been used to determine voluntary activation (VA). However, very few studies have directly compared the two stimulation techniques for assessing VA of the elbow flexors. The purpose of this study was to compare TMS and motor point stimulation for assessing VA in non-fatigued and fatigued elbow flexors. Participants performed a fatigue protocol that included twelve 15 s isometric elbow flexor contractions. Participants completed a set of isometric elbow flexion contractions at 100, 75, 50, and 25% of maximum voluntary contraction (MVC) prior to and following fatigue contractions 3, 6, 9, and 12, and at 5 and 10 min post-fatigue. Force and EMG of the biceps and triceps brachii were measured for each contraction. Force responses to TMS and motor point stimulation and EMG responses to TMS (motor evoked potentials, MEPs) and Erb's point stimulation (maximal M-waves, Mmax) were also recorded. VA was estimated using the equation: VA% = (1 − SITforce/PTforce) × 100. The resting twitch was measured directly for motor point stimulation and estimated for both motor point stimulation and TMS by extrapolation of the linear regression between the superimposed twitch force and voluntary force. MVC force, potentiated twitch force, and VA significantly (p < 0.05) decreased throughout the elbow flexor fatigue protocol and partially recovered 10 min post-fatigue. VA was significantly (p < 0.05) underestimated when using TMS compared to motor point stimulation in non-fatigued and fatigued elbow flexors. Motor point stimulation superimposed twitch forces were significantly (p < 0.05) higher than those of TMS at 50% MVC but similar at 75 and 100% MVC. The linear relationship between TMS superimposed twitch force and voluntary force significantly (p < 0.05) decreased with fatigue. There was no change in triceps/biceps electromyography, biceps/triceps MEP amplitudes, or biceps MEP amplitudes throughout the fatigue protocol at
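The VA equation and the extrapolated resting twitch can be illustrated with a small numerical sketch. The superimposed twitch forces below are hypothetical values chosen only to demonstrate the regression-extrapolation procedure, not data from the study.

```python
import numpy as np

# Hypothetical superimposed twitch (SIT) forces at graded contractions:
# the twitch shrinks as voluntary effort rises.
voluntary_force = np.array([25.0, 50.0, 75.0, 100.0])  # %MVC
sit = np.array([12.0, 8.0, 4.0, 1.0])                  # N (invented)

# Estimated resting twitch (ERT): extrapolate the SIT-vs-force linear
# regression back to 0% MVC, as described in the abstract.
slope, intercept = np.polyfit(voluntary_force, sit, 1)
ert = intercept  # predicted twitch at zero voluntary force

def voluntary_activation(sit_mvc, resting_twitch):
    """VA% = (1 - SITforce/PTforce) x 100, as in the abstract."""
    return (1.0 - sit_mvc / resting_twitch) * 100.0

va = voluntary_activation(sit[-1], ert)
print(round(ert, 2), round(va, 1))   # 15.5 93.5
```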
Bhalla, Amneet Pal Singh; Griffith, Boyce E.; Patankar, Neelesh A.
2013-01-01
A fundamental issue in locomotion is to understand how muscle forcing produces apparently complex deformation kinematics leading to movement of animals like undulatory swimmers. The question of whether complicated muscle forcing is required to create the observed deformation kinematics is central to the understanding of how animals control movement. In this work, a forced damped oscillation framework is applied to a chain-link model for undulatory swimming to understand how forcing leads to deformation and movement. A unified understanding of swimming, caused by muscle contractions (“active” swimming) or by forces imparted by the surrounding fluid (“passive” swimming), is obtained. We show that the forcing triggers the first few deformation modes of the body, which in turn cause the translational motion. We show that relatively simple forcing patterns can trigger seemingly complex deformation kinematics that lead to movement. For given muscle activation, the forcing frequency relative to the natural frequency of the damped oscillator is important for the emergent deformation characteristics of the body. The proposed approach also leads to a qualitative understanding of optimal deformation kinematics for fast swimming. These results, based on a chain-link model of swimming, are confirmed by fully resolved computational fluid dynamics (CFD) simulations. Prior results from the literature on the optimal value of stiffness for maximum speed are explained. PMID:23785272
Mbah, Nsehniitooh; Philips, Prejesh; Voor, Michael J; Martin, Robert C G
2017-12-01
The optimal use of esophageal stents for malignant and benign esophageal strictures continues to be plagued by variability in pain tolerance, migration rates, and reflux-related symptoms. The aim of this study was to evaluate the differences in radial force exhibited by a variety of esophageal stents with respect to the patient's esophageal stricture. Radial force testing was performed on eight stents manufactured by four different companies using a hydraulic press and a 5000 N force gauge. Radial force was measured using three different tests: transverse compression, circumferential compression, and a three-point bending test. Esophageal stricture composition and diameters were measured to assess maximum diameter, length, and proximal esophageal diameter among 15 patients prior to stenting. There was a statistically significant difference in mean radial force for transverse compression tests at the middle (range 4.25-0.66 newtons per millimeter, N/mm) and at the flange (range 3.32-0.48 N/mm). There were also statistical differences in mean radial force for the circumferential test (range 1.19 to 10.50 N/mm, p force, which provides further clarification of stent pain and intolerance in certain patients with either benign or malignant disease. Similarly, current stent diameters do not successfully exclude the proximal esophagus, which can lead to obstructive-type symptoms. Radial force, esophageal stricture composition, and proximal esophageal diameter must be known and understood for optimal stent tolerance.
Maximum heat flux in boiling in a large volume
International Nuclear Information System (INIS)
Bergmans, Dzh.
1976-01-01
Relationships are derived for the maximum heat flux q_max without relying on the assumptions of either a critical vapor velocity corresponding to zero growth rate or a planar interface. To this end, a Helmholtz instability analysis of the vapor column has been carried out. The results of this examination have been used to find the maximum heat flux for spherical, cylindrical, and flat plate heaters. The conventional hydrodynamic theory was found to be incapable of producing a satisfactory explanation of q_max for small heaters. The occurrence of q_max in this case can be explained by inadequate removal of vapor from the heater (by the force of gravity for cylindrical heaters and by surface tension for spherical ones). In the case of a flat plate heater, the q_max value can be explained with the help of the hydrodynamic theory
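The "conventional hydrodynamic theory" for the flat-plate case is commonly summarized by Zuber's correlation. A quick sketch, using approximate property values for saturated water at 1 atm, reproduces the familiar order of magnitude:

```python
# Zuber's hydrodynamic prediction for the flat-plate maximum (critical)
# heat flux. Property values below are approximate for saturated water
# at 1 atm; the 0.131 leading constant is Zuber's.
def zuber_qmax(rho_v, rho_l, h_fg, sigma, g=9.81):
    return 0.131 * rho_v * h_fg * (sigma * g * (rho_l - rho_v) / rho_v**2) ** 0.25

q = zuber_qmax(rho_v=0.60, rho_l=958.0, h_fg=2.257e6, sigma=0.0589)
print(f"q_max ~ {q/1e6:.2f} MW/m^2")   # about 1.1 MW/m^2
```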
Dinosaur Metabolism and the Allometry of Maximum Growth Rate.
Myhrvold, Nathan P
2016-01-01
The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data are reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth rates of extant groups are found to have a great deal of overlap, including between groups with endothermic and ectothermic metabolism. Dinosaur growth rates show similar overlap, matching the rates found for mammals, reptiles and fish. The allometric scaling of growth rate with mass is found to have curvature (on a log-log scale) for many groups, contradicting the prevailing view that growth rate allometry follows a simple power law. Reanalysis shows that no correlation between growth rate and basal metabolic rate (BMR) has been demonstrated. These findings lead to the conclusion that growth rate allometry studies to date cannot be used to determine dinosaur metabolism as has been previously argued.
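The curvature point can be illustrated with synthetic data: a simple power law is a straight line in log-log space, so comparing a straight-line fit with a quadratic fit in log space tests for curvature. The data below are simulated for illustration, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic growth-rate data with genuine log-log curvature built in.
mass = np.logspace(0, 6, 40)                       # body mass (arbitrary units)
log_m = np.log10(mass)
log_rate = 0.75 * log_m - 0.05 * log_m**2 + rng.normal(0, 0.05, log_m.size)

# Simple power law: a straight line in log-log space.
b1, b0 = np.polyfit(log_m, log_rate, 1)
resid_lin = log_rate - (b0 + b1 * log_m)

# A quadratic term in log-log space captures the curvature.
c2, c1, c0 = np.polyfit(log_m, log_rate, 2)
resid_quad = log_rate - (c0 + c1 * log_m + c2 * log_m**2)

print(f"linear SSE    = {np.sum(resid_lin**2):.3f}")
print(f"quadratic SSE = {np.sum(resid_quad**2):.3f}, curvature = {c2:.3f}")
```

A clearly negative fitted curvature term (here the true value is -0.05) is the kind of deviation from a simple power law that the abstract describes.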
Reproducing kernel Hilbert spaces of Gaussian priors
Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.
2008-01-01
We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described
Improving Open Access through Prior Learning Assessment
Yin, Shuangxu; Kawachi, Paul
2013-01-01
This paper explores and presents new data on how to improve open access in distance education through using prior learning assessments. Broadly there are three types of prior learning assessment (PLAR): Type-1 for prospective students to be allowed to register for a course; Type-2 for current students to avoid duplicating work-load to gain…
Quantitative Evidence Synthesis with Power Priors
Rietbergen, C.|info:eu-repo/dai/nl/322847796
2016-01-01
The aim of this thesis is to provide the applied researcher with a practical approach for quantitative evidence synthesis using the conditional power prior that allows for subjective input and thereby provides an alternative to deal with the difficulties associated with the joint power prior
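For the binomial case, the conditional power prior has a simple closed form: the historical likelihood is raised to a fixed power a0 (the discounting weight) before being combined with an initial Beta prior and the current-trial likelihood. A minimal sketch with invented trial counts:

```python
# Hypothetical historical trial: 30 successes / 50; current trial: 12 / 40.
s0, n0 = 30, 50
s, n = 12, 40

def power_prior_posterior(a0, s0, n0, s, n, a=1.0, b=1.0):
    """Posterior mean of a Beta posterior under a conditional power prior:
    the historical binomial likelihood is raised to a0 before being
    combined with an initial Beta(a, b) prior and the current data."""
    alpha = a + a0 * s0 + s
    beta = b + a0 * (n0 - s0) + (n - s)
    return alpha / (alpha + beta)

for a0 in (0.0, 0.5, 1.0):
    print(a0, round(power_prior_posterior(a0, s0, n0, s, n), 3))
```

With a0 = 0 the historical data are ignored; with a0 = 1 they are pooled at full weight; intermediate values interpolate, which is the discounting behaviour the power prior is designed to provide.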
Tamura, Koichiro; Peterson, Daniel; Peterson, Nicholas; Stecher, Glen; Nei, Masatoshi; Kumar, Sudhir
2011-01-01
Comparative analysis of molecular sequence data is essential for reconstructing the evolutionary histories of species and inferring the nature and extent of selective forces shaping the evolution of genes and species. Here, we announce the release of Molecular Evolutionary Genetics Analysis version 5 (MEGA5), user-friendly software for mining online databases, building sequence alignments and phylogenetic trees, and using methods of evolutionary bioinformatics in basic biology, biomedicine, and evolution. The newest addition in MEGA5 is a collection of maximum likelihood (ML) analyses for inferring evolutionary trees, selecting best-fit substitution models (nucleotide or amino acid), inferring ancestral states and sequences (along with probabilities), and estimating evolutionary rates site-by-site. In computer simulation analyses, ML tree inference algorithms in MEGA5 compared favorably with other software packages in terms of computational efficiency and the accuracy of the estimates of phylogenetic trees, substitution parameters, and rate variation among sites. The MEGA user interface has now been enhanced to be activity driven, making it easier to use for both beginners and experienced scientists. This version of MEGA is intended for the Windows platform, and it has been configured for effective use on Mac OS X and Linux desktops. It is available free of charge from http://www.megasoftware.net. PMID:21546353
Maximum allowable heat flux for a submerged horizontal tube bundle
International Nuclear Information System (INIS)
McEligot, D.M.
1995-01-01
For application to industrial heating of large pools by immersed heat exchangers, the so-called maximum allowable (or "critical") heat flux is studied for unconfined tube bundles aligned horizontally in a pool without forced flow. In general, we are considering boiling after the pool reaches its saturation temperature, rather than the sub-cooled pool boiling which should occur during the early stages of transient operation. A combination of literature review and simple approximate analysis has been used. To date our main conclusion is that estimates of q''_chf are highly uncertain for this configuration
Revealing the Maximum Strength in Nanotwinned Copper
DEFF Research Database (Denmark)
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced
Modelling maximum canopy conductance and transpiration in ...
African Journals Online (AJOL)
There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...
Bisetti, Fabrizio; Kim, Daesang; Knio, Omar; Long, Quan; Tempone, Raul
2016-01-01
to account for the bounded domain of the uniform prior pdf of the parameters. The underlying Gaussian distribution is obtained in the spirit of the Laplace method, more precisely, the mode is chosen as the maximum a posteriori (MAP) estimate
Zero forcing parameters and minimum rank problems
Barioli, F.; Barrett, W.; Fallat, S.M.; Hall, H.T.; Hogben, L.; Shader, B.L.; Driessche, van den P.; Holst, van der H.
2010-01-01
The zero forcing number Z(G), which is the minimum number of vertices in a zero forcing set of a graph G, is used to study the maximum nullity/minimum rank of the family of symmetric matrices described by G. It is shown that for a connected graph of order at least two, no vertex is in every zero
Terminology for pregnancy loss prior to viability
DEFF Research Database (Denmark)
Kolte, A M; Bernardi, L A; Christiansen, O B
2015-01-01
Pregnancy loss prior to viability is common and research in the field is extensive. Unfortunately, terminology in the literature is inconsistent. The lack of consensus regarding nomenclature and classification of pregnancy loss prior to viability makes it difficult to compare study results from different centres. In our opinion, terminology and definitions should be based on clinical findings, and when possible, transvaginal ultrasound. With this Early Pregnancy Consensus Statement, it is our goal to provide clear and consistent terminology for pregnancy loss prior to viability.
Maximum Entropy and Theory Construction: A Reply to Favretti
Directory of Open Access Journals (Sweden)
John Harte
2018-04-01
Full Text Available In the maximum entropy theory of ecology (METE), the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy model exists that assumes the same prior knowledge and makes predictions that differ from METE's. He shows that both cannot be correct and asserts that his is the correct one because it can be derived from a classic microstate-counting calculation. I clarify here exactly what the core entities and definitions are for METE, and discuss the relevance of two critical issues raised by Favretti: the existence of a counting procedure for microstates and the choices of definition of the core elements of a theory. I emphasize that a theorist controls how the core entities of his or her theory are defined, and that nature is the final arbiter of the validity of a theory.
MXLKID: a maximum likelihood parameter identifier
International Nuclear Information System (INIS)
Gavel, D.T.
1980-07-01
MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
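The core idea, computing a likelihood function from noisy measurements of a dynamic system and maximizing it over the unknown parameters, can be sketched for a one-parameter system. This is an illustration of the general technique, not MXLKID's algorithm; the system, noise level, and grid search are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dynamic system x[k+1] = a * x[k], observed with Gaussian noise.
a_true, sigma, x0, N = 0.8, 0.05, 1.0, 50
x = x0 * a_true ** np.arange(N)
y = x + rng.normal(0, sigma, N)

def neg_log_likelihood(a):
    """Gaussian negative log-likelihood of the data, up to a constant."""
    pred = x0 * a ** np.arange(N)
    r = y - pred
    return 0.5 * np.sum(r**2) / sigma**2

# Maximize the likelihood by scanning a grid of candidate parameters.
grid = np.linspace(0.5, 1.0, 2001)
a_hat = grid[np.argmin([neg_log_likelihood(a) for a in grid])]
print(f"a_hat = {a_hat:.3f}")   # close to a_true = 0.8
```

A production identifier would use a gradient-based optimizer rather than a grid, but the estimate is the same: the parameter value at which the likelihood function peaks.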
A Simulation of Pell Grant Awards and Costs Using Prior-Prior Year Financial Data
Kelchen, Robert; Jones, Gigi
2015-01-01
We examine the likely implications of switching from a prior year (PY) financial aid system, the current practice in which students file the Free Application for Federal Student Aid (FAFSA) using income data from the previous tax year, to prior-prior year (PPY), in which data from two years before enrollment is used. While PPY allows students to…
Prior Authorization of PMDs Demonstration - Status Update
U.S. Department of Health & Human Services — CMS implemented a Prior Authorization process for scooters and power wheelchairs for people with Fee-For-Service Medicare who reside in seven states with high...
Short Report Biochemical derangements prior to emergency ...
African Journals Online (AJOL)
MMJ VOL 29 (1): March 2017. Biochemical derangements prior to emergency laparotomy at QECH. Venepuncture was performed preoperatively for urgent cases, defined as those requiring.
Maximum neutron flux in thermal reactors
International Nuclear Information System (INIS)
Strugar, P.V.
1968-12-01
The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples
Maximum allowable load on wheeled mobile manipulators
International Nuclear Information System (INIS)
Habibnejad Korayem, M.; Ghariblu, H.
2003-01-01
This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important. To resolve the extra degrees of freedom introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and directly depends on the additional constraint functions applied to resolve the motion redundancy
Maximum phytoplankton concentrations in the sea
DEFF Research Database (Denmark)
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood-sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
Attentional and Contextual Priors in Sound Perception.
Wolmetz, Michael; Elhilali, Mounya
2016-01-01
Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis.
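The contextual-prior tracking described here can be caricatured as Bayesian updating: the listener accumulates stimulus-frequency counts into a prior and combines that prior with the likelihood from a noisy observation. The numbers below are invented; the point is only that a learned contextual prior can outweigh a weakly informative observation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two possible target frequencies; the context presents the "low" one
# 80% of the time (assumed statistics, for illustration).
stim_history = rng.choice([0, 1], size=200, p=[0.8, 0.2])

# The listener tracks a contextual prior from counts (add-one smoothing).
counts = np.bincount(stim_history, minlength=2) + 1
context_prior = counts / counts.sum()

# A noisy observation yields a likelihood for each frequency channel,
# here slightly favouring the rare frequency.
likelihood = np.array([0.4, 0.6])

# Bayes' rule: posterior proportional to prior times likelihood.
posterior = context_prior * likelihood
posterior /= posterior.sum()
print(np.round(posterior, 3))
```

Despite the likelihood favouring the rare frequency, the posterior favours the common one, the signature of contextual prior integration that the Bayesian Observer model captures.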
Varying prior information in Bayesian inversion
International Nuclear Information System (INIS)
Walker, Matthew; Curtis, Andrew
2014-01-01
Bayes' rule is used to combine likelihood and prior probability distributions. The former represents knowledge derived from new data, the latter represents pre-existing knowledge; the Bayesian combination is the so-called posterior distribution, representing the resultant new state of knowledge. While varying the likelihood due to differing data observations is common, there are also situations where the prior distribution must be changed or replaced repeatedly. For example, in mixture density neural network (MDN) inversion, using current methods the neural network employed for inversion needs to be retrained every time prior information changes. We develop a method of prior replacement to vary the prior without re-training the network. Thus the efficiency of MDN inversions can be increased, typically by orders of magnitude when applied to geophysical problems. We demonstrate this for the inversion of seismic attributes in a synthetic subsurface geological reservoir model. We also present results which suggest that prior replacement can be used to control the statistical properties (such as variance) of the final estimate of the posterior in more general (e.g., Monte Carlo based) inverse problem solutions. (paper)
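Prior replacement follows directly from Bayes' rule: dividing the old prior out of the posterior and multiplying the new one in reproduces the posterior the new prior would have given, without recomputing the likelihood (which, in the MDN setting, is what avoids re-training). A discrete-grid sketch with assumed Gaussian shapes:

```python
import numpy as np

theta = np.linspace(-3, 3, 601)
like = np.exp(-0.5 * ((theta - 1.0) / 0.5) ** 2)   # likelihood from data

def normalise(p):
    return p / p.sum()

prior1 = normalise(np.exp(-0.5 * theta**2))        # original N(0, 1) prior
post1 = normalise(like * prior1)                   # original posterior

# Prior replacement: divide out the old prior, multiply in the new one.
prior2 = normalise(np.exp(-0.5 * ((theta - 0.5) / 0.3) ** 2))
post2 = normalise(post1 * prior2 / prior1)

# Check against the direct computation with the new prior.
post2_direct = normalise(like * prior2)
print(np.max(np.abs(post2 - post2_direct)))        # ~0, up to floating point
```

The reweighting only requires that the old prior be nonzero wherever the new one is, a condition any practical prior-replacement scheme must respect.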
Energy Technology Data Exchange (ETDEWEB)
Papoular, R
1997-07-01
The Fourier transform is of central importance to crystallography since it allows the visualization in real space of three-dimensional scattering densities pertaining to physical systems from diffraction data (powder or single-crystal diffraction, using x-rays, neutrons, electrons or other probes). In turn, this visualization makes it possible to model and parametrize these systems, the crystal structures of which are eventually refined by least-squares techniques (e.g., the Rietveld method in the case of powder diffraction). The Maximum Entropy Method (sometimes called MEM or MaxEnt) is a general imaging technique related to solving ill-conditioned inverse problems. It is ideally suited to tackling underdetermined systems of linear equations (for which the number of variables is much larger than the number of equations). It is already being applied successfully in astronomy, radioastronomy and medical imaging. The advantages of using Maximum Entropy over conventional Fourier and 'difference Fourier' syntheses stem from the following facts: MaxEnt takes the experimental error bars into account; MaxEnt incorporates prior knowledge (e.g., the positivity of the scattering density in some instances); MaxEnt allows density reconstructions from incompletely phased data, as well as from overlapping Bragg reflections; MaxEnt substantially reduces the truncation errors to which conventional experimental Fourier reconstructions are usually prone. The principles of Maximum Entropy imaging as applied to crystallography are first presented. The method is then illustrated by a detailed example specific to neutron diffraction: the search for protons in solids. (author). 17 refs.
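With a non-uniform prior model m(r), as in the entropy definition discussed in this collection, the constrained maximum-entropy solution takes the exponential-family form rho proportional to m times exp of the Lagrange terms; with a single first-moment "datum" the multiplier can be found by bisection. A toy 1D sketch (the prior shape, grid, and datum are all invented for illustration):

```python
import numpy as np

# Pixel grid and a non-uniform prior model m(r), normalised to 1.
x = np.linspace(0, 1, 200)
m = np.exp(-((x - 0.3) / 0.1) ** 2)
m /= m.sum()

# One "datum": the measured first moment of the density (assumed value).
data_mean = 0.45

# Maximising S = -sum(rho * log(rho / m)) subject to sum(rho) = 1 and
# sum(rho * x) = data_mean gives rho proportional to m * exp(lam * x);
# solve for the Lagrange multiplier lam by bisection.
def moment(lam):
    rho = m * np.exp(lam * x)
    rho /= rho.sum()
    return (rho * x).sum()

lo, hi = -100.0, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if moment(mid) < data_mean:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
rho = m * np.exp(lam * x)
rho /= rho.sum()
print(round((rho * x).sum(), 4))   # matches the datum: 0.45
```

With no data constraint (lam = 0) the solution is rho = m itself, which is exactly the property of the non-uniform-prior entropy emphasized in the overview: any departure from the model must be demanded by the data.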
Basing the US Air Force Special Operations Forces.
1986-12-01
Headquarters Military Airlift Command (Hq MAC/XONP), Scott AFB, IL, July 8, 1986. 2. Daskin, Mark S. "A Maximum Expected Covering Location Model: Formulation..." AFIT/GOR/OS/86D-6. Basing the US Air Force Special Operations Forces. Thesis, Mark E
Erich Regener and the ionisation maximum of the atmosphere
Carlson, P.; Watson, A. A.
2014-12-01
In the 1930s the German physicist Erich Regener (1881-1955) did important work on the measurement of the rate of production of ionisation deep under water and in the atmosphere. Along with one of his students, Georg Pfotzer, he discovered the altitude at which the production of ionisation in the atmosphere reaches a maximum, often, but misleadingly, called the Pfotzer maximum. Regener was one of the first to estimate the energy density of cosmic rays, an estimate that was used by Baade and Zwicky to bolster their postulate that supernovae might be their source. Yet Regener's name is less recognised by present-day cosmic ray physicists than it should be, largely because in 1937 he was forced to take early retirement by the National Socialists as his wife had Jewish ancestors. In this paper we briefly review his work on cosmic rays and recommend an alternative naming of the ionisation maximum. The influence that Regener had on the field through his son, his son-in-law, his grandsons and his students, and through his links with Rutherford's group in Cambridge, is discussed in an appendix. Regener was nominated for the Nobel Prize in Physics by Schrödinger in 1938. He died in 1955 at the age of 73.
Priming in implicit memory tasks: prior study causes enhanced discriminability, not only bias.
Zeelenberg, René; Wagenmakers, Eric-Jan M; Raaijmakers, Jeroen G W
2002-03-01
R. Ratcliff and G. McKoon (1995, 1996, 1997; R. Ratcliff, D. Allbritton, & G. McKoon, 1997) have argued that repetition priming effects are solely due to bias. They showed that prior study of the target resulted in a benefit in a later implicit memory task. However, prior study of a stimulus similar to the target resulted in a cost. The present study, using a 2-alternative forced-choice procedure, investigated the effect of prior study in an unbiased condition: Both alternatives were studied prior to their presentation in an implicit memory task. Contrary to a pure bias interpretation of priming, consistent evidence was obtained in 3 implicit memory tasks (word fragment completion, auditory word identification, and picture identification) that performance was better when both alternatives were studied than when neither alternative was studied. These results show that prior study results in enhanced discriminability, not only bias.
Heuristics as Bayesian inference under extreme priors.
Parpart, Paula; Jones, Matt; Love, Bradley C
2018-05-01
Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and take-the-best, are formally equivalent to Bayesian inference under the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to entirely ignoring it. Rather than because of their simplicity, our analyses suggest heuristics perform well because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
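The limit described, heuristics as Bayesian inference under infinitely strong priors, can be sketched with a ridge-style estimator whose prior is centred on tallying's equal unit weights. The dataset and weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy dataset: 3 cues, all positively related to the criterion.
X = rng.normal(size=(30, 3))
y = X @ np.array([0.9, 0.5, 0.1]) + rng.normal(0, 0.5, 30)

w_tally = np.ones(3)   # tallying: equal unit weights for every cue

def shrunk_weights(lam):
    """Ridge-style estimator with a Gaussian prior centred on the
    tallying weights. lam = 0 recovers ordinary least squares;
    lam -> infinity recovers tallying exactly."""
    A = X.T @ X + lam * np.eye(3)
    b = X.T @ y + lam * w_tally
    return np.linalg.solve(A, b)

w_ols = shrunk_weights(0.0)
w_strong = shrunk_weights(1e8)
print(np.round(w_ols, 2), np.round(w_strong, 2))
```

Intermediate values of `lam` trace out the continuum of Bayesian models the abstract describes, with the heuristic at one end and ordinary regression at the other.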
Marciuc, Daly; Solschi, Viorel
2017-04-01
Understanding the Coriolis effect is essential for explaining the movement of air masses and ocean currents. The lesson we propose aims to familiarize students with the manifestation of the Coriolis effect. Students are guided to build, using the GeoGebra software, a simulation of the motion of a body relative to a rotating reference system. The mathematical expression of the Coriolis force is deduced for particular cases, and Foucault's pendulum is presented and explained. Students have the opportunity to deepen the subject by developing materials on topics such as global wind patterns, ocean currents, the Coriolis effect in long-range shooting, and finding latitude with a Foucault pendulum.
International Nuclear Information System (INIS)
Panek, Richard
2010-01-01
Astronomers have compiled evidence that what we always thought of as the actual universe (all the planets, stars, galaxies and matter in space) represents a mere 4% of what's out there. The rest is dark: 23% is called dark matter, 73% dark energy. Scientists have ideas about what dark matter is, but hardly any understanding of dark energy. This has led to a rethinking of traditional physics and cosmology. Assuming the existence of dark matter and that the law of gravitation is universal, two teams of astrophysicists, from Lawrence Berkeley National Laboratory and the Australian National University, analysed the universe's growth and, to their surprise, both concluded that the universe's expansion is not slowing but speeding up. If the dominant force of cosmic evolution isn't gravity, what is it?
External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising
Xu, Jun; Zhang, Lei; Zhang, David
2018-06-01
Most existing image denoising methods learn image priors either from external data or from the noisy image itself. However, priors learned from external data may not be adaptive to the image to be denoised, while priors learned from the given noisy image may not be accurate due to the interference of the corrupting noise. Meanwhile, the noise in real-world noisy images is very complex and hard to describe by simple distributions such as the Gaussian, making real noisy image denoising a very challenging problem. We propose to exploit the information in both external data and the given noisy image, and develop an external prior guided internal prior learning method for real noisy image denoising. We first learn external priors from an independent set of clean natural images. With the aid of the learned external priors, we then learn internal priors from the given noisy image to refine the prior model. The external and internal priors are formulated as a set of orthogonal dictionaries to efficiently reconstruct the desired image. Extensive experiments are performed on several real noisy image datasets. The proposed method demonstrates highly competitive denoising performance, outperforming state-of-the-art denoising methods including those designed for real noisy images.
How emotion context modulates unconscious goal activation during motor force exertion.
Blakemore, Rebekah L; Neveu, Rémi; Vuilleumier, Patrik
2017-02-01
Priming participants with emotional or action-related concepts influences goal formation and motor force output during effort exertion tasks, even without awareness of priming information. However, little is known about neural processes underpinning how emotional cues interact with action (or inaction) goals to motivate (or demotivate) motor behaviour. In a novel functional neuroimaging paradigm, visible emotional images followed by subliminal action or inaction word primes were presented before participants performed a maximal force exertion. In neutral emotional contexts, maximum force was lower following inaction than action primes. However, arousing emotional images had interactive motivational effects on the motor system: Unpleasant images prior to inaction primes increased force output (enhanced effort exertion) relative to control primes, and engaged a motivation-related network involving ventral striatum, extended amygdala, as well as right inferior frontal cortex. Conversely, pleasant images presented before action (versus control) primes decreased force and activated regions of the default-mode network, including inferior parietal lobule and medial prefrontal cortex. These findings show that emotional context can determine how unconscious goal representations influence motivational processes and are transformed into actual motor output, without direct rewarding contingencies. Furthermore, they provide insight into altered motor behaviour in psychopathological disorders with dysfunctional motivational processes. Copyright © 2016 Elsevier Inc. All rights reserved.
40 CFR 60.2953 - What information must I submit prior to initial startup?
2010-07-01
... initial startup? 60.2953 Section 60.2953 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... initial startup? You must submit the information specified in paragraphs (a) through (e) of this section prior to initial startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning...
40 CFR 60.2195 - What information must I submit prior to initial startup?
2010-07-01
... initial startup? 60.2195 Section 60.2195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY..., 2001 Recordkeeping and Reporting § 60.2195 What information must I submit prior to initial startup? You... startup. (a) The type(s) of waste to be burned. (b) The maximum design waste burning capacity. (c) The...
Optimization of Pretreatment and Enzymatic Saccharification of Cogon Grass Prior Ethanol Production
Jhalique Jane R. Fojas; Ernesto J. Del Rosario
2013-01-01
The dilute acid pretreatment and enzymatic saccharification of a lignocellulosic substrate, cogon grass (Imperata cylindrica (L.)), were optimized prior to ethanol fermentation using the simultaneous saccharification and fermentation (SSF) method. The optimum pretreatment conditions (temperature, sulfuric acid concentration, and reaction time) were evaluated by determining the maximum sugar yield at constant enzyme loading. Cogon grass, at 10% w/v substrate loading, has optimum pretr...
Maximum gravitational redshift of white dwarfs
International Nuclear Information System (INIS)
Shapiro, S.L.; Teukolsky, S.A.
1976-01-01
The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.
Offending prior to first psychiatric contact
DEFF Research Database (Denmark)
Stevens, H; Agerbo, E; Dean, K
2012-01-01
There is a well-established association between psychotic disorders and subsequent offending, but the extent to which those who develop psychosis might have a prior history of offending is less clear, and little is known about whether the association between illness and offending exists in non-psychotic disorders. The aim of this study was to determine whether the association between mental disorder and offending is present prior to illness onset in psychotic and non-psychotic disorders.
GENERAL ASPECTS REGARDING THE PRIOR DISCIPLINARY RESEARCH
Directory of Open Access Journals (Sweden)
ANDRA PURAN (DASCĂLU)
2012-05-01
Full Text Available Disciplinary research is the first phase of the disciplinary action. According to art. 251 paragraph 1 of the Labour Code, no disciplinary sanction may be ordered before the prior disciplinary research is performed. These regulations provide one exception: the sanction of written warning. The current regulations in question, kept from the old regulation, protect employees against abuses by employers, since sanctions affect the salary or the position held, or even the continuation of the individual employment contract. Thus, prior research into the act alleged to constitute misconduct, before a disciplinary sanction is applied, is an essential condition for the validity of the measure ordered. Through this study we try to highlight some general issues concerning the characteristics, procedures and effects of prior disciplinary research.
Bayesian Prior Probability Distributions for Internal Dosimetry
Energy Technology Data Exchange (ETDEWEB)
Miller, G.; Inkret, W.C.; Little, T.T.; Martz, H.F.; Schillaci, M.E
2001-07-01
The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetric code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data. (author)
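As an illustration of how a log-normal prior reshapes the interpretation of a single noisy bioassay-type measurement, here is a small grid-based posterior computation. All numbers are invented for the sketch; this is unrelated to the Los Alamos code mentioned above:

```python
import numpy as np

# Hypothetical setup: the true amount x has a log-normal prior,
# and the measurement m equals the true amount plus Gaussian noise.
x = np.linspace(1e-3, 20.0, 4000)       # grid over the true amount
dx = x[1] - x[0]
mu, sigma = np.log(2.0), 0.75           # log-normal prior parameters (assumed)
prior = np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))

m, noise_sd = 1.0, 1.5                  # one noisy measurement (assumed)
likelihood = np.exp(-(m - x) ** 2 / (2 * noise_sd ** 2))

post = prior * likelihood
post /= post.sum() * dx                 # normalize on the grid

post_mean = (x * post).sum() * dx       # pulled between prior mean and measurement
print(round(post_mean, 2))
```

The posterior mean lands between the prior mean and the raw measurement, which is the practical point of supplying an informative prior when single measurements are noisy.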
Maximum entropy analysis of EGRET data
DEFF Research Database (Denmark)
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96}, searching for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
The Maximum Resource Bin Packing Problem
DEFF Research Database (Denmark)
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. We analyse the algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
Shower maximum detector for SDC calorimetry
International Nuclear Information System (INIS)
Ernwein, J.
1994-01-01
A prototype for the SDC end-cap (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs
Topics in Bayesian statistics and maximum entropy
International Nuclear Information System (INIS)
Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.
1998-12-01
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)
Density estimation by maximum quantum entropy
International Nuclear Information System (INIS)
Silver, R.N.; Wallstrom, T.; Martz, H.F.
1993-01-01
A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets
Can natural selection encode Bayesian priors?
Ramírez, Juan Camilo; Marshall, James A R
2017-08-07
The evolutionary success of many organisms depends on their ability to make decisions based on estimates of the state of their environment (e.g., predation risk) from uncertain information. These decision problems have optimal solutions and individuals in nature are expected to evolve the behavioural mechanisms to make decisions as if using the optimal solutions. Bayesian inference is the optimal method to produce estimates from uncertain data, thus natural selection is expected to favour individuals with the behavioural mechanisms to make decisions as if they were computing Bayesian estimates in typically-experienced environments, although this does not necessarily imply that favoured decision-makers do perform Bayesian computations exactly. Each individual should evolve to behave as if updating a prior estimate of the unknown environment variable to a posterior estimate as it collects evidence. The prior estimate represents the decision-maker's default belief regarding the environment variable, i.e., the individual's default 'worldview' of the environment. This default belief has been hypothesised to be shaped by natural selection and represent the environment experienced by the individual's ancestors. We present an evolutionary model to explore how accurately Bayesian prior estimates can be encoded genetically and shaped by natural selection when decision-makers learn from uncertain information. The model simulates the evolution of a population of individuals that are required to estimate the probability of an event. Every individual has a prior estimate of this probability and collects noisy cues from the environment in order to update its prior belief to a Bayesian posterior estimate with the evidence gained. The prior is inherited and passed on to offspring. Fitness increases with the accuracy of the posterior estimates produced. Simulations show that prior estimates become accurate over evolutionary time. In addition to these 'Bayesian' individuals, we also
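The updating scheme described, an inherited prior estimate refined by noisy cues into a posterior, can be sketched with a conjugate beta-binomial model. This is an illustrative stand-in; the paper's actual simulation details may differ:

```python
# Conjugate beta-binomial sketch of an individual's estimate of an event
# probability: the inherited prior Beta(a, b) is updated with observed
# cues (k successes out of n) into the posterior Beta(a + k, b + n - k).
def posterior_mean(a, b, k, n):
    """Posterior mean of the event probability under Beta(a + k, b + n - k)."""
    return (a + k) / (a + b + n)

# An individual inheriting a prior belief of 0.5 (a = b = 2) that then
# observes the event 7 times in 10 trials:
prior_mean = 2 / (2 + 2)
post_mean = posterior_mean(2, 2, 7, 10)
print(prior_mean, round(post_mean, 3))  # 0.5 0.643
```

In the evolutionary framing, the hyperparameters a and b are what gets inherited and shaped by selection, while the update with (k, n) happens within an individual's lifetime.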
Pawlik, Ralph; Krause, David; Bremenour, Frank
2011-01-01
The Force Limit System (FLS) was developed to protect test specimens from inadvertent overload. The load limit value is fully adjustable by the operator and works independently of the test system control as a mechanical (non-electrical) device. When a test specimen is loaded via an electromechanical or hydraulic test system, a chance of an overload condition exists. An overload applied to a specimen could result in irreparable damage to the specimen and/or fixturing. The FLS restricts the maximum load that an actuator can apply to a test specimen. When testing limited-run test articles or using very expensive fixtures, the use of such a device is highly recommended. Test setups typically use electronic peak protection, which can be the source of overload due to malfunctioning components or the inability to react quickly enough to load spikes. The FLS works independently of the electronic overload protection.
Turbulence modification by periodically modulated scale-depending forcing
Kuczaj, Arkadiusz K.; Geurts, Bernardus J.; Lohse, Detlef; van de Water, W.
2006-01-01
The response of turbulent flow to time-modulated forcing is studied by direct numerical simulation of the Navier-Stokes equations. The forcing is modulated via periodic energy input variations at a frequency $\\omega$. Such forcing of the large-scales is shown to yield a response maximum at
Turbulence modification by periodically modulated scale-dependent forcing
Kuczaj, A.K.; Geurts, B.J.; Lohse, D.; Water, van de W.
2006-01-01
The response of turbulent flow to time-modulated forcing is studied by direct numerical simulation of the Navier-Stokes equations. The forcing is modulated via periodic energy input variations at a frequency ω. Such forcing of the large scales is shown to yield a response maximum at frequencies in
Turbulence modification by periodically modulated scale-dependent forcing
Kuczaj, A.K.; Geurts, B.J.; Lohse, D.; Water, van de W.
2008-01-01
The response of turbulent flow to time-modulated forcing is studied by direct numerical simulation of the Navier–Stokes equations. The forcing is modulated via periodic energy-input variations at a frequency ω. Harmonically modulated forcing of the large scales is shown to yield a response maximum
Turbulence modification by periodically modulated scale-dependent forcing
Kuczaj, Arkadiusz K.; Geurts, Bernardus J.; Lohse, Detlef; van de Water, W.
2008-01-01
The response of turbulent flow to time-modulated forcing is studied by direct numerical simulation of the Navier–Stokes equations. The forcing is modulated via periodic energy-input variations at a frequency ω. Harmonically modulated forcing of the large scales is shown to yield a response maximum
Negotiating Multicollinearity with Spike-and-Slab Priors.
Ročková, Veronika; George, Edward I
2014-08-01
In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout.
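For intuition, the filtering effect of a spike-and-slab prior on a single coefficient can be seen in the posterior inclusion probability computed from two Gaussian marginal likelihoods. This toy sketch is not the EMVS algorithm itself, and all numbers are illustrative:

```python
import math

def inclusion_prob(beta_hat, se, v_spike, v_slab, p_slab=0.5):
    """Posterior probability that beta_hat came from the slab, for the model
    beta_hat ~ N(beta, se^2) with beta ~ N(0, v), where v is either v_spike
    (concentrated near zero) or v_slab (diffuse). Marginally,
    beta_hat ~ N(0, v + se^2) under either component."""
    def norm_pdf(x, var):
        return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)
    m_slab = p_slab * norm_pdf(beta_hat, v_slab + se * se)
    m_spike = (1 - p_slab) * norm_pdf(beta_hat, v_spike + se * se)
    return m_slab / (m_slab + m_spike)

# A large estimated effect is confidently allocated to the slab,
# a tiny one to the spike:
print(round(inclusion_prob(2.0, 0.3, 0.001, 1.0), 3))   # 1.0
print(round(inclusion_prob(0.05, 0.3, 0.001, 1.0), 3))  # 0.226
```

This per-coefficient allocation is the "filtering" of the likelihood surface the abstract refers to, applied here in its simplest one-dimensional form.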
Nonsymmetric entropy and maximum nonsymmetric entropy principle
International Nuclear Information System (INIS)
Liu Chengshi
2009-01-01
Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, in deriving power laws.
Maximum potential preventive effect of hip protectors
van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.
2007-01-01
OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who
Maximum gain of Yagi-Uda arrays
DEFF Research Database (Denmark)
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification.
correlation between maximum dry density and cohesion
African Journals Online (AJOL)
HOD
The symbols denote, respectively, the maximum dry density, the plastic limit and the liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities, there are some other correlations that have been investigated by other researchers. The well-known.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle in general, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
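The quoted order-of-magnitude relation can be checked numerically with standard values. The choice of the non-reduced Planck mass convention and T_BBN ≈ 1 MeV are assumptions made here, so only the order of magnitude is meaningful:

```python
# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5),
# all energies in GeV. Input values are rough standard estimates.
T_BBN = 1e-3                   # ~1 MeV, onset of Big Bang nucleosynthesis
M_pl = 1.22e19                 # Planck mass (non-reduced convention assumed)
m_e, v_obs = 0.511e-3, 246.0   # electron mass and observed Higgs vev
y_e = 2 ** 0.5 * m_e / v_obs   # electron Yukawa coupling, ~2.9e-6

v_h = T_BBN ** 2 / (M_pl * y_e ** 5)
print(round(v_h))              # a few hundred GeV, consistent with O(300 GeV)
```

The strong y_e^5 dependence means the estimate is very sensitive to the Yukawa value, which is why only the rough scale should be read off.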
The maximum-entropy method in superspace
Czech Academy of Sciences Publication Activity Database
van Smaalen, S.; Palatinus, Lukáš; Schneider, M.
2003-01-01
Vol. 59 (2003), pp. 459-469. ISSN 0108-7673. Grant: DFG (DE). Institutional research plan: CEZ:AV0Z1010914. Keywords: maximum-entropy method; aperiodic crystals; electron density. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.558, year: 2003
Achieving maximum sustainable yield in mixed fisheries
Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna
2017-01-01
Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example
5 CFR 534.203 - Maximum stipends.
2010-01-01
... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...
Minimal length, Friedmann equations and maximum density
Energy Technology Data Exchange (ETDEWEB)
Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)
2014-06-16
Inspired by Jacobson's thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.
International Nuclear Information System (INIS)
Yang, W M; Chao, X X; Bian, X B; Liu, P; Feng, Y; Zhang, P X; Zhou, L
2003-01-01
The levitation forces between a single-domain YBCO bulk and several magnets of different sizes have been measured at 77 K to investigate the effect of magnet size on the levitation force. It is found that, with the other conditions fixed, the levitation force reaches its largest (peak) value when the size of the magnet approaches that of the superconductor. The absolute maximum attractive force (in the field-cooled state) increases with increasing magnet size, and saturates when the magnet size approaches that of the superconductor. The maximum attractive force in the field-cooled (FC) state is much higher than in the zero-field-cooled (ZFC) state. The results indicate that the effects of the magnetic field distribution on the levitation force have to be considered in the design and manufacture of superconducting devices.
Recognition of Prior Learning: The Participants' Perspective
Miguel, Marta C.; Ornelas, José H.; Maroco, João P.
2016-01-01
The current narrative on lifelong learning goes beyond formal education and training, including learning at work, in the family and in the community. Recognition of prior learning is a process of evaluation of those skills and knowledge acquired through life experience, allowing them to be formally recognized by the qualification systems. It is a…
Validity in assessment of prior learning
DEFF Research Database (Denmark)
Wahlgren, Bjarne; Aarkrog, Vibe
2015-01-01
The article discusses the need for specific criteria for the assessment of prior learning. The reliability and validity of the assessment procedures depend on whether the competences are well-defined, and whether the teachers are adequately trained for the assessment procedures. Keywords: assessment, prior learning, adult education, vocational training, lifelong learning, validity
Prior learning assessment and quality assurance practice ...
African Journals Online (AJOL)
The use of RPL (Recognition of Prior Learning) in higher education to assess RPL candidates for admission into programmes of study met with a lot of criticism from faculty academics. Lecturers viewed the possibility of admitting large numbers of under-qualified adult learners, as a threat to the institution's reputation, or an ...
Action priors for learning domain invariances
CSIR Research Space (South Africa)
Rosman, Benjamin S
2015-04-01
Full Text Available behavioural invariances in the domain, by identifying actions to be prioritised in local contexts, invariant to task details. This information has the effect of greatly increasing the speed of solving new problems. We formalise this notion as action priors...
New Riemannian Priors on the Univariate Normal Model
Directory of Open Access Journals (Sweden)
Salem Said
2014-07-01
Full Text Available The current paper introduces new prior distributions on the univariate normal model, with the aim of applying them to the classification of univariate normal populations. These new prior distributions are entirely based on the Riemannian geometry of the univariate normal model, so that they can be thought of as "Riemannian priors". Precisely, if {p_θ ; θ ∈ Θ} is any parametrization of the univariate normal model, the paper considers prior distributions G(θ̄, γ) with hyperparameters θ̄ ∈ Θ and γ > 0, whose density with respect to Riemannian volume is proportional to exp(−d²(θ, θ̄)/2γ²), where d²(θ, θ̄) is the square of Rao's Riemannian distance. The distributions G(θ̄, γ) are termed Gaussian distributions on the univariate normal model. The motivation for considering a distribution G(θ̄, γ) is that this distribution gives a geometric representation of a class or cluster of univariate normal populations. Indeed, G(θ̄, γ) has a unique mode θ̄ (precisely, θ̄ is the unique Riemannian center of mass of G(θ̄, γ), as shown in the paper), and its dispersion away from θ̄ is given by γ. Therefore, one thinks of members of the class represented by G(θ̄, γ) as being centered around θ̄ and lying within a typical distance determined by γ. The paper defines rigorously the Gaussian distributions G(θ̄, γ) and describes an algorithm for computing maximum likelihood estimates of their hyperparameters. Based on this algorithm and on the Laplace approximation, it describes how the distributions G(θ̄, γ) can be used as prior distributions for Bayesian classification of large univariate normal populations. In a concrete application to texture image classification, it is shown that this leads to an improvement in performance over the use of conjugate priors.
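For concreteness, the prior density above can be evaluated using the closed form of Rao's distance on the univariate normal model, which under the Fisher-Rao metric is a scaled hyperbolic half-plane. This is a sketch under that standard metric, not the authors' code:

```python
import math

def rao_distance(mu1, s1, mu2, s2):
    """Rao distance between N(mu1, s1^2) and N(mu2, s2^2) under the
    Fisher-Rao metric ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2."""
    num = (mu1 - mu2) ** 2 + 2 * (s1 - s2) ** 2
    return math.sqrt(2) * math.acosh(1 + num / (4 * s1 * s2))

def riemannian_prior(theta, theta_bar, gamma):
    """Unnormalized density of G(theta_bar, gamma): exp(-d^2 / (2 gamma^2))."""
    d = rao_distance(*theta, *theta_bar)
    return math.exp(-d * d / (2 * gamma * gamma))

center = (0.0, 1.0)                            # theta_bar = (mu, sigma)
print(riemannian_prior(center, center, 0.5))   # 1.0 at the mode theta_bar
print(riemannian_prior((1.0, 1.5), center, 0.5) < 1.0)  # True: decays away
```

The hyperparameter gamma controls how quickly the density decays away from the center theta_bar, matching the dispersion interpretation given in the abstract.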
Maximum margin semi-supervised learning with irrelevant data.
Yang, Haiqin; Huang, Kaizhu; King, Irwin; Lyu, Michael R
2015-10-01
Semi-supervised learning (SSL) is a learning paradigm that trains a model from both labeled and unlabeled data. Traditional SSL models usually assume that unlabeled data are relevant to the labeled data, i.e., that they follow the same distribution as the targeted labeled data. In this paper, we address a different, yet formidable scenario in semi-supervised classification, where the unlabeled data may contain data irrelevant to the labeled data. To tackle this problem, we develop a maximum margin model, named the tri-class support vector machine (3C-SVM), to utilize the available training data while seeking a hyperplane that separates the targeted data well. Our 3C-SVM exhibits several characteristics and advantages. First, it does not need any prior knowledge or explicit assumption on the data relatedness. On the contrary, it can relieve the effect of irrelevant unlabeled data based on the logistic principle and the maximum entropy principle. That is, 3C-SVM approaches an ideal classifier that relies heavily on labeled data, is confident on the relevant data lying far away from the decision hyperplane, and maximally ignores the irrelevant data, which are hardly distinguished. Second, theoretical analysis is provided to prove under what conditions the irrelevant data can help to seek the hyperplane. Third, 3C-SVM is a generalized model that unifies several popular maximum margin models, including standard SVMs, semi-supervised SVMs (S³VMs), and SVMs learned from the universum (U-SVMs), as its special cases. More importantly, we deploy a concave-convex procedure to solve the proposed 3C-SVM, transforming the original mixed integer programming to a semi-definite programming relaxation, and finally to a sequence of quadratic programming subproblems, which yields the same worst-case time complexity as that of S³VMs. Finally, we demonstrate the effectiveness and efficiency of our proposed 3C-SVM through systematic experimental comparisons.
Effects of force reflection on servomanipulator task performance
International Nuclear Information System (INIS)
Draper, J.V.; Moore, W.E.; Herndon, J.N.; Weil, B.S.
1986-01-01
This paper reports results of a testing program that assessed the impact of force reflection on servomanipulator task performance. The testing program compared three force-reflection levels: 4 to 1 (four units of force on the slave produce one unit of force at the master controller), 1 to 1, and infinity to 1 (no force reflection). Time required to complete tasks, rate of occurrence of errors, the maximum force applied to task components, and variability in forces during completion of representative remote handling tasks were used as dependent variables. Operators exhibited lower error rates, lower peak forces, and more consistent application of forces using force reflection than they did without it. These data support the hypothesis that force reflection provides useful information for servomanipulator operators
Fletcher, B. C.
1972-01-01
The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, the effects of the prior mean and variance are determined as a function of the amount of test data available.
International Nuclear Information System (INIS)
1991-01-01
The meaning of the term 'maximum concentration at work' with regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The evaluation criteria for maximum biologically tolerable concentrations of working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or genetic mutations. (VT)
2010-07-27
...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act... SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...
Zipf's law, power laws and maximum entropy
International Nuclear Information System (INIS)
Visser, Matt
2013-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
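The single-constraint construction described above can be sketched numerically: maximizing the Shannon entropy subject only to a fixed average of the logarithm yields a pure power law, with the exponent set by the constraint. A minimal sketch, assuming a finite support 1..N and an arbitrary target mean (both are illustrative choices, not values from the paper):

```python
import math

# MaxEnt over k = 1..N with the single constraint <ln k> = L gives
# p_k = k**(-lam) / Z, i.e. a power law (Zipf's law for lam near 1).
# We solve for the Lagrange multiplier lam by bisection.
N = 10_000

def mean_log(lam):
    """<ln k> under p_k proportional to k**(-lam)."""
    w = [k ** (-lam) for k in range(1, N + 1)]
    z = sum(w)
    return sum(wk * math.log(k) for wk, k in zip(w, range(1, N + 1))) / z

def solve_lam(target, lo=0.01, hi=10.0, tol=1e-10):
    # mean_log decreases monotonically in lam, so bisection applies.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean_log(mid) > target else (lo, mid)
    return 0.5 * (lo + hi)

lam = solve_lam(2.0)  # exponent of the resulting power law
```

The distribution that comes out is exactly k**(-lam)/Z, so a log-log plot of p_k against k is a straight line of slope -lam.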
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with a lognormal body and a Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum likelihood estimation for integrated diffusion processes
DEFF Research Database (Denmark)
Baltazar-Larios, Fernando; Sørensen, Michael
We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.
Maximum parsimony on subsets of taxa.
Fischer, Mareike; Thatte, Bhalchandra D
2009-09-21
In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
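Fitch's bottom-up pass, whose reconstruction accuracy the paper analyzes, is short enough to state directly: each internal node takes the intersection of its children's state sets when it is nonempty, otherwise the union at the cost of one change. A hedged sketch (the toy tree and character assignment below are invented for illustration, not taken from the paper):

```python
# Fitch's maximum-parsimony pass on a rooted binary tree.
# A node is either a leaf name (str) or a (left, right) tuple.
def fitch(node, states):
    """Return (possible ancestral state set, parsimony score) for node."""
    if not isinstance(node, tuple):
        return {states[node]}, 0          # leaf: observed state, no changes
    left_set, left_cost = fitch(node[0], states)
    right_set, right_cost = fitch(node[1], states)
    common = left_set & right_set
    if common:                             # children agree: intersect, no change
        return common, left_cost + right_cost
    return left_set | right_set, left_cost + right_cost + 1  # one change

tree = ((("A", "B"), "C"), ("D", "E"))    # toy 5-taxon tree
states = {"A": 0, "B": 0, "C": 1, "D": 1, "E": 1}
root_set, score = fitch(tree, states)      # {1}, 1 change
```

Dropping a taxon simply means running the same pass on the pruned tree, which is how the subset-of-taxa question in the abstract is posed.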
Hernandez, Rafael; Onar-Thomas, Arzu; Travascio, Francesco; Asfour, Shihab
2017-11-01
Laparoscopic training with visual force feedback can lead to immediate improvements in force moderation. However, the long-term retention of this kind of learning and its potential decay are yet unclear. A laparoscopic resection task and force sensing apparatus were designed to assess the benefits of visual force feedback training. Twenty-two male university students with no previous experience in laparoscopy underwent relevant FLS proficiency training. Participants were randomly assigned to either a control or treatment group. Both groups trained on the task for 2 weeks as follows: initial baseline, sixteen training trials, and post-test immediately after. The treatment group had visual force feedback during training, whereas the control group did not. Participants then performed four weekly test trials to assess long-term retention of training. Outcomes recorded were maximum pulling and pushing forces, completion time, and rated task difficulty. Extreme maximum pulling force values were tapered throughout both the training and retention periods. Average maximum pushing forces were significantly lowered towards the end of training and during retention period. No significant decay of applied force learning was found during the 4-week retention period. Completion time and rated task difficulty were higher during training, but results indicate that the difference eventually fades during the retention period. Significant differences in aptitude across participants were found. Visual force feedback training improves on certain aspects of force moderation in a laparoscopic resection task. Results suggest that with enough training there is no significant decay of learning within the first month of the retention period. It is essential to account for differences in aptitude between individuals in this type of longitudinal research. This study shows how an inexpensive force measuring system can be used with an FLS Trainer System after some retrofitting. Surgical
Maximum entropy analysis of liquid diffraction data
International Nuclear Information System (INIS)
Root, J.H.; Egelstaff, P.A.; Nickel, B.G.
1986-01-01
A maximum entropy method for reducing truncation effects in the inverse Fourier transform of the structure factor, S(q), to the pair correlation function, g(r), is described. The advantages and limitations of the method are explored with the PY hard sphere structure factor as model input data. An example using real data on liquid chlorine is then presented. It is seen that spurious structure is greatly reduced in comparison to traditional Fourier transform methods. (author)
Automatic maximum entropy spectral reconstruction in NMR
International Nuclear Information System (INIS)
Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.
2007-01-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system
Maximum neutron flux at thermal nuclear reactors
International Nuclear Information System (INIS)
Strugar, P.
1968-10-01
Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings by appropriate reactor lattice configurations. There are a number of papers, and practical examples of reactors with a central reflector, dealing with spatial distributions of fuel elements which would result in higher neutron flux. A common disadvantage of all these solutions is that the choice of the best solution starts from anticipated spatial distributions of fuel elements. The weakness of these approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux is thus a variational problem beyond the possibilities of classical variational calculus. This variational problem has been successfully solved by applying the maximum principle of Pontryagin. The optimum distribution of fuel concentration was obtained in explicit analytical form. Thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. Beyond the novelty of the results, this approach is interesting because of the optimization procedure itself.
Novel Friction Law for the Static Friction Force based on Local Precursor Slipping
Katano, Yu; Nakano, Ken; Otsuki, Michio; Matsukawa, Hiroshi
2014-01-01
The sliding of a solid object on a solid substrate requires a shear force that is larger than the maximum static friction force. It is commonly believed that the maximum static friction force is proportional to the loading force and does not depend on the apparent contact area. The ratio of the maximum static friction force to the loading force is called the static friction coefficient μM, which is considered to be a constant. Here, we conduct experiments demonstrating that the static fricti...
Selective effects of weight and inertia on maximum lifting.
Leontijevic, B; Pazin, N; Kukolj, M; Ugarkovic, D; Jaric, S
2013-03-01
A novel loading method (loading ranged from 20% to 80% of 1RM) was applied to explore the selective effects of externally added simulated weight (exerted by stretched rubber bands pulling downward), weight+inertia (external weights added), and inertia (covariation of the weights and the rubber bands pulling upward) on maximum bench press throws. 14 skilled participants revealed a load-associated decrease in peak velocity that was the least associated with an increase in weight (42%) and the most associated with weight+inertia (66%). However, the peak lifting force increased markedly with an increase in both weight (151%) and weight+inertia (160%), but not with inertia (13%). As a consequence, the peak power output increased most with weight (59%), weight+inertia revealed a maximum at intermediate loads (23%), while inertia was associated with a gradual decrease in the peak power output (42%). The obtained findings could be of importance for our understanding of the mechanical properties of the human muscular system when acting against different types of external resistance. Regarding the possible application in standard athletic training and rehabilitation procedures, the results speak in favor of applying extended elastic bands, which provide higher movement velocity and muscle power output than the usually applied weights. © Georg Thieme Verlag KG Stuttgart · New York.
Random template placement and prior information
International Nuclear Information System (INIS)
Roever, Christian
2010-01-01
In signal detection problems, one is usually faced with the task of searching a parameter space for peaks in the likelihood function which indicate the presence of a signal. Random searches have proven to be very efficient as well as easy to implement, compared e.g. to searches along regular grids in parameter space. Knowledge of the parameterised shape of the signal searched for adds structure to the parameter space, i.e., there are usually regions requiring to be densely searched while in other regions a coarser search is sufficient. On the other hand, prior information identifies the regions in which a search will actually be promising or may likely be in vain. Defining specific figures of merit allows one to combine both template metric and prior distribution and devise optimal sampling schemes over the parameter space. We show an example related to the gravitational wave signal from a binary inspiral event. Here the template metric and prior information are particularly contradictory, since signals from low-mass systems tolerate the least mismatch in parameter space while high-mass systems are far more likely, as they imply a greater signal-to-noise ratio (SNR) and hence are detectable to greater distances. The derived sampling strategy is implemented in a Markov chain Monte Carlo (MCMC) algorithm where it improves convergence.
Modeling Mediterranean Ocean climate of the Last Glacial Maximum
Directory of Open Access Journals (Sweden)
U. Mikolajewicz
2011-03-01
Full Text Available A regional ocean general circulation model of the Mediterranean is used to study the climate of the Last Glacial Maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions complicated. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of salinity in the Mediterranean in spite of reduced net evaporation.
Soylu, Abdullah Ruhi; Arpinar-Avsar, Pinar
2010-08-01
The effects of fatigue on maximum voluntary contraction (MVC) parameters were examined by using force and surface electromyography (sEMG) signals of the biceps brachii muscles (BBM) of 12 subjects. The purpose of the study was to find the sEMG time interval of the MVC recordings which is not affected by muscle fatigue. At least 10s of force and sEMG signals of the BBM were recorded simultaneously during MVC. The subjects reached the maximum force level within 2s by slightly increasing the force, and then contracted the BBM maximally. The time index of each sEMG and force signal was labeled with respect to the time index of the maximum force (i.e., after the time normalization, the 0s time index of each sEMG or force signal corresponds to the maximum force point). Then, the first 8s of the sEMG and force signals were divided into 0.5s intervals. Mean force, median frequency (MF) and integrated EMG (iEMG) values were calculated for each interval. Amplitude normalization was performed by dividing the force signals by their mean values over the 0s time interval (i.e., -0.25 to 0.25s). A similar amplitude normalization procedure was repeated for the iEMG and MF signals. Statistical analysis (Friedman test with Dunn's post hoc test) was performed on the time and amplitude normalized signals (MF, iEMG). Although the ANOVA results did not give statistically significant information about the onset of muscle fatigue, linear regression (mean force vs. time) showed a decreasing slope (Pearson-r = 0.9462), indicating that fatigue starts after the 0s time interval, as the muscles cannot attain their peak force levels. This implies that the most reliable interval for MVC calculation which is not affected by muscle fatigue is from the onset of the EMG activity to the peak force time. The mean, SD, and range of this interval (excluding the 2s gradual increase time) for the 12 subjects were 2353ms, 1258ms and 536-4186ms, respectively. Exceeding this interval introduces estimation errors in the maximum amplitude calculations.
Energy Technology Data Exchange (ETDEWEB)
Chao, Ong Zhi; Cheet, Lim Hong; Yee, Khoo Shin [Mechanical Engineering Department, Faculty of EngineeringUniversity of Malaya, Kuala Lumpur (Malaysia); Rahman, Abdul Ghaffar Abdul [Faculty of Mechanical Engineering, University Malaysia Pahang, Pekan (Malaysia); Ismail, Zubaidah [Civil Engineering Department, Faculty of Engineering, University of Malaya, Kuala Lumpur (Malaysia)
2016-08-15
A novel method called Impact-synchronous modal analysis (ISMA) was proposed previously which allows modal testing to be performed during operation. This technique focuses on signal processing of the upstream data to provide cleaner Frequency response function (FRF) estimation prior to modal extraction. Two important parameters, i.e., the windowing function and the impact force level, were identified and their effect on the effectiveness of this technique was experimentally investigated. When performing modal testing under running conditions, the cyclic load signals are dominant in the measured response for the entire time history. The exponential window is effective in minimizing leakage and attenuating signals of the non-synchronous running speed, its harmonics and noise to zero at the end of each time record window block. Besides, with the information of the calculated cyclic force, a suitable amount of impact force to be applied on the system can be decided prior to performing ISMA. The maximum allowable impact force can be determined from a nonlinearity test using the coherence function. By applying higher impact forces than the cyclic loads, along with an ideal decay rate in ISMA, significant harmonic reduction is achieved in the FRF estimation. Subsequently, the dynamic characteristics of the system are successfully extracted from a cleaner FRF, and the results obtained are comparable with Experimental modal analysis (EMA)
A cutting force model for micromilling applications
DEFF Research Database (Denmark)
Bissacco, Giuliano; Hansen, Hans Nørgaard; De Chiffre, Leonardo
2006-01-01
In micro milling the maximum uncut chip thickness is often smaller than the cutting edge radius. This paper introduces a new cutting force model for ball nose micro milling that is capable of taking into account the effect of the edge radius.
Barge Train Maximum Impact Forces Using Limit States for the Lashings Between Barges
National Research Council Canada - National Science Library
Arroyo, Jose R; Ebeling, Robert M
2005-01-01
... on: the mass including hydrodynamic added mass of the barge train, the approach velocity, the approach angle, the barge train moment of inertia, damage sustained by the barge structure, and friction...
The Sidereal Time Variations of the Lorentz Force and Maximum Attainable Speed of Electrons
Nowak, Gabriel; Wojtsekhowski, Bogdan; Roblin, Yves; Schmookler, Barak
2016-09-01
The Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab produces electrons that orbit through a known magnetic system. The electron beam's momentum can be determined from the radius of the beam's orbit. This project compares the beam orbit's radius while travelling in a transverse magnetic field with theoretical predictions from special relativity, which predict a constant beam orbit radius. Variations in the beam orbit's radius are found by comparing the beam's momentum entering and exiting a magnetic arc. Beam position monitors (BPMs) provide the information needed to calculate the beam momentum. Multiple BPMs are included in the analysis and fitted using the method of least squares to decrease statistical uncertainty. Preliminary results from data collected over a 24 hour period show that the relative momentum change was less than 10^-4. Further study will be conducted including larger time spans and stricter cuts applied to the BPM data. The data from this analysis will be used in a larger experiment attempting to verify special relativity. While the project is not traditionally nuclear physics, it involves the same technology (the CEBAF accelerator) and the same methods (ROOT) as a nuclear physics experiment. DOE SULI Program.
Prior exercise and antioxidant supplementation: effect on oxidative stress and muscle injury
Directory of Open Access Journals (Sweden)
Schilling Brian K
2007-10-01
Full Text Available Abstract. Background: Both acute bouts of prior exercise (preconditioning) and antioxidant nutrients have been used in an attempt to attenuate muscle injury or oxidative stress in response to resistance exercise. However, most studies have focused on untrained participants rather than on athletes. The purpose of this work was to determine the independent and combined effects of antioxidant supplementation (vitamin C + mixed tocopherols/tocotrienols) and prior eccentric exercise in attenuating markers of skeletal muscle injury and oxidative stress in resistance trained men. Methods: Thirty-six men were randomly assigned to: no prior exercise + placebo; no prior exercise + antioxidant; prior exercise + placebo; prior exercise + antioxidant. Markers of muscle/cell injury (muscle performance, muscle soreness, C-reactive protein, and creatine kinase activity), as well as oxidative stress (blood protein carbonyls and peroxides), were measured before and through 48 hours of exercise recovery. Results: No group by time interactions were noted for any variable (P > 0.05). Time main effects were noted for creatine kinase activity, muscle soreness, maximal isometric force and peak velocity. Conclusion: There appears to be no independent or combined effect of a prior bout of eccentric exercise or antioxidant supplementation as used here on markers of muscle injury in resistance trained men. Moreover, eccentric exercise as used in the present study results in minimal blood oxidative stress in resistance trained men. Hence, antioxidant supplementation for the purpose of minimizing blood oxidative stress in relation to eccentric exercise appears unnecessary in this population.
Maximum entropy decomposition of quadrupole mass spectra
International Nuclear Information System (INIS)
Toussaint, U. von; Dose, V.; Golan, A.
2004-01-01
We present an information-theoretic method called generalized maximum entropy (GME) for decomposing mass spectra of gas mixtures from noisy measurements. In this GME approach to the noisy, underdetermined inverse problem, the joint entropies of concentration, cracking, and noise probabilities are maximized subject to the measured data. This provides a robust estimation for the unknown cracking patterns and the concentrations of the contributing molecules. The method is applied to mass spectroscopic data of hydrocarbons, and the estimates are compared with those received from a Bayesian approach. We show that the GME method is efficient and is computationally fast
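The measurement model behind this decomposition is linear: each observed mass channel is a mixture of known per-gas cracking patterns weighted by the unknown concentrations. A minimal sketch of that forward model with a direct two-gas inversion (the patterns and concentrations below are invented; the paper's GME estimator is needed precisely when this system is underdetermined and noisy, where direct inversion fails):

```python
# Forward model: y[i] = sum_j A[i][j] * x[j], where column j of A is the
# cracking pattern of gas j and x holds the concentrations.
# Invented 2x2 example, solved exactly via Cramer's rule.
A = [[0.9, 0.2],   # channel 1 response to (gas 1, gas 2)
     [0.1, 0.8]]   # channel 2 response
x_true = [0.7, 0.3]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(2)]

# Recover the concentrations from the noiseless measurements.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x_hat = [(y[0] * A[1][1] - A[0][1] * y[1]) / det,
         (A[0][0] * y[1] - y[0] * A[1][0]) / det]
```

With noise and more gases than independent channels, the inverse is ill-posed, which is where maximizing the joint entropies of concentration, cracking, and noise probabilities subject to the data becomes the regularizing device.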
Maximum power operation of interacting molecular motors
DEFF Research Database (Denmark)
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum entropy method in momentum density reconstruction
International Nuclear Information System (INIS)
Dobrzynski, L.; Holas, A.
1997-01-01
The Maximum Entropy Method (MEM) is applied to the reconstruction of the 3-dimensional electron momentum density distributions observed through the set of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig
On the maximum drawdown during speculative bubbles
Rotundo, Giulia; Navarra, Mauro
2007-08-01
A taxonomy of large financial crashes proposed in the literature locates the burst of speculative bubbles due to endogenous causes in the framework of extreme stock market crashes, defined as falls of market prices that are outlier with respect to the bulk of drawdown price movement distribution. This paper goes on deeper in the analysis providing a further characterization of the rising part of such selected bubbles through the examination of drawdown and maximum drawdown movement of indices prices. The analysis of drawdown duration is also performed and it is the core of the risk measure estimated here.
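The drawdown statistics examined above have a simple operational definition: a drawdown is the decline from the running peak, and the maximum drawdown is its worst value over the window. A sketch of the computation (the price series is invented for illustration):

```python
# Maximum drawdown of a price series: the largest peak-to-trough decline,
# expressed here as a fraction of the running peak.
def max_drawdown(prices):
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)                  # running peak so far
        worst = max(worst, (peak - p) / peak)  # current drawdown vs. worst
    return worst

prices = [100, 120, 90, 110, 80, 130]
# worst decline: from the peak of 120 down to 80, i.e. 1/3
```

A single pass suffices because the worst drawdown at each point only depends on the running peak, which makes the statistic cheap to track over the rising phase of a bubble.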
Multi-Channel Maximum Likelihood Pitch Estimation
DEFF Research Database (Denmark)
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
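The shared-fundamental model can be illustrated with a crude grid search that sums harmonic energy across channels: each channel contributes its own amplitudes and phases, but all vote for the same fundamental. This is a toy approximation of a maximum likelihood pitch estimator, not the paper's exact method; the sampling rate, amplitudes, and candidate grid below are invented:

```python
import math

fs = 8000.0        # sampling rate (Hz), assumed
N = 800            # samples per channel (0.1 s)
f0_true = 210.0    # common fundamental across channels

# Two channels: same f0 and harmonic structure, different amplitude/phase.
chans = []
for amp, phase in [(1.0, 0.0), (0.5, 1.0)]:
    chans.append([amp * (math.sin(2 * math.pi * f0_true * n / fs + phase)
                         + 0.5 * math.sin(2 * math.pi * 2 * f0_true * n / fs))
                  for n in range(N)])

def harmonic_energy(x, f0, harmonics=2):
    """Energy of x projected onto sinusoids at f0 and its harmonics."""
    e = 0.0
    for h in range(1, harmonics + 1):
        c = sum(xn * math.cos(2 * math.pi * h * f0 * n / fs) for n, xn in enumerate(x))
        s = sum(xn * math.sin(2 * math.pi * h * f0 * n / fs) for n, xn in enumerate(x))
        e += c * c + s * s
    return e

# Joint estimate: sum the harmonic energy over channels at each candidate f0.
cands = [180 + k for k in range(61)]  # 180..240 Hz, 1 Hz grid
f0_hat = max(cands, key=lambda f: sum(harmonic_energy(x, f) for x in chans))
```

Because the channels are combined additively in the objective rather than forced to share amplitudes or phases, a channel with poor signal-to-noise ratio degrades the estimate gracefully instead of corrupting it, which is the point of the multi-channel model.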
Conductivity maximum in a charged colloidal suspension
Energy Technology Data Exchange (ETDEWEB)
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Dynamical maximum entropy approach to flocking.
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
Multiperiod Maximum Loss is time unit invariant.
Kovacevic, Raimund M; Breuer, Thomas
2016-01-01
Time unit invariance is introduced as an additional requirement for multiperiod risk measures: for a constant portfolio under an i.i.d. risk factor process, the multiperiod risk should equal the one period risk of the aggregated loss, for an appropriate choice of parameters and independent of the portfolio and its distribution. Multiperiod Maximum Loss over a sequence of Kullback-Leibler balls is time unit invariant. This is also the case for the entropic risk measure. On the other hand, multiperiod Value at Risk and multiperiod Expected Shortfall are not time unit invariant.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
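The hypothesis-selection step in this record can be illustrated by a generic MAP decision rule under additive white Gaussian noise. This is a simplified stand-in for exposition, not the patent's phase-perturbation estimator-correlator:

```python
import numpy as np

def map_decode(r, signals, priors, noise_var):
    """Return the index of the hypothesized signal with the highest
    posterior probability given received samples r, assuming additive
    white Gaussian noise with variance noise_var."""
    log_posts = [
        np.log(p) - np.sum((r - s) ** 2) / (2 * noise_var)
        for s, p in zip(signals, priors)
    ]
    return int(np.argmax(log_posts))

# Two hypothetical transmitted waveforms and a noisy copy of the first
signals = [np.array([1.0, 1.0, -1.0, -1.0]), np.array([1.0, -1.0, 1.0, -1.0])]
r = np.array([0.9, 0.8, -1.1, -0.7])
decided = map_decode(r, signals, [0.5, 0.5], noise_var=0.5)  # -> 0
```

The highest-valued statistic identifies the decided signal, mirroring the "MAP likelihood statistic for each hypothesized transmission" described above.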
Improved Maximum Parsimony Models for Phylogenetic Networks.
Van Iersel, Leo; Jones, Mark; Scornavacca, Celine
2018-05-01
Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.
Ancestral sequence reconstruction with Maximum Parsimony
Herbst, Lina; Fischer, Mareike
2017-01-01
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...
Sparse Multivariate Modeling: Priors and Applications
DEFF Research Database (Denmark)
Henao, Ricardo
This thesis presents a collection of statistical models that attempt to take advantage of every piece of prior knowledge available to provide the models with as much structure as possible. The main motivation for introducing these models is interpretability since in practice we want to be able...... a general yet self-contained description of every model in terms of generative assumptions, interpretability goals, probabilistic formulation and target applications. Case studies, benchmark results and practical details are also provided as appendices published elsewhere, containing reprints of peer...
Genome position specific priors for genomic prediction
DEFF Research Database (Denmark)
Brøndum, Rasmus Froberg; Su, Guosheng; Lund, Mogens Sandø
2012-01-01
causal mutation is different between the populations but affects the same gene. Proportions of a four-distribution mixture for SNP effects in segments of fixed size along the genome are derived from one population and set as location specific prior proportions of distributions of SNP effects...... for the target population. The model was tested using dairy cattle populations of different breeds: 540 Australian Jersey bulls, 2297 Australian Holstein bulls and 5214 Nordic Holstein bulls. The traits studied were protein-, fat- and milk yield. Genotypic data were Illumina 777K SNPs, real or imputed. Results......
Models for Validation of Prior Learning (VPL)
DEFF Research Database (Denmark)
Ehlers, Søren
The national policies for the education/training of adults are in the 21st century highly influenced by proposals which are formulated and promoted by the European Union (EU) as well as other transnational players, and this shift in policy making has consequences. One is that ideas which in the past...... would have been categorized as utopian can become realpolitik. Validation of Prior Learning (VPL) was in Europe mainly regarded as utopian, while universities in the United States of America (USA) were developing ways to award credit to students coming with experience from working life....
Objective Bayesianism and the Maximum Entropy Principle
Directory of Open Access Journals (Sweden)
Jon Williamson
2013-09-01
Full Text Available Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities; they should be calibrated to our evidence of physical probabilities; and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper, we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
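The maximum entropy principle invoked in this record can be illustrated numerically with Jaynes' classic dice example: among all distributions on the faces 1-6 calibrated to a mean of 4.5, find the one of maximum entropy. The solution has exponential-family form, and its Lagrange multiplier can be found by bisection. This is an illustrative sketch, not part of the paper:

```python
import numpy as np

def maxent_with_mean(values, target_mean, tol=1e-12):
    """Maximum-entropy distribution on `values` subject to a fixed mean.
    The maximizer is exponential-family, p_i proportional to exp(beta * x_i);
    beta is found by bisection on the implied mean."""
    x = np.asarray(values, dtype=float)

    def mean_for(beta):
        w = np.exp(beta * (x - x.max()))  # shift exponent for numerical stability
        p = w / w.sum()
        return p @ x, p

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        m, _ = mean_for(mid)
        if m < target_mean:
            lo = mid
        else:
            hi = mid
    return mean_for(0.5 * (lo + hi))[1]

# Die faces 1..6 constrained to mean 4.5: probabilities tilt toward high faces
p = maxent_with_mean(np.arange(1, 7), 4.5)
```

With no constraint beyond normalization the same machinery returns the uniform distribution (beta = 0), which is the equivocation norm in its simplest form.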
Efficient heuristics for maximum common substructure search.
Englert, Péter; Kovács, Péter
2015-05-26
Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
Simultaneous maximum a posteriori longitudinal PET image reconstruction
Ellis, Sam; Reader, Andrew J.
2017-09-01
Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
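The one-step-late MAP coupling of longitudinal reconstructions described in this record can be sketched on a toy system. Everything here (the system matrix, penalty strength, count level, and vector sizes) is made up for illustration; the paper's MAP-SLR method operates on full 2D/3D PET data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two longitudinal "images" (length-4 activity vectors) seen
# through the same hypothetical system matrix A, with scaled Poisson noise.
A = rng.uniform(0.1, 1.0, size=(8, 4))
x_true = np.array([1.0, 4.0, 2.0, 3.0])
x2_true = x_true * np.array([1.0, 1.3, 1.0, 1.0])  # small longitudinal change
y1 = rng.poisson(A @ x_true * 50) / 50.0
y2 = rng.poisson(A @ x2_true * 50) / 50.0

def osl_map_em(y1, y2, A, beta=0.5, iters=200):
    """One-step-late MAP-EM with a quadratic penalty (beta/2)*||x1 - x2||^2
    coupling the two reconstructions; the penalty gradient is evaluated at
    the previous iterate (Green's OSL scheme)."""
    x1 = np.ones(A.shape[1])
    x2 = np.ones(A.shape[1])
    s = A.sum(axis=0)                      # sensitivity image
    for _ in range(iters):
        g1 = A.T @ (y1 / (A @ x1))         # EM back-projected ratio, scan 1
        g2 = A.T @ (y2 / (A @ x2))
        x1_new = x1 * g1 / (s + beta * (x1 - x2))
        x2_new = x2 * g2 / (s + beta * (x2 - x1))
        x1, x2 = x1_new, x2_new
    return x1, x2

x1_hat, x2_hat = osl_map_em(y1, y2, A)
```

Penalising voxel-wise differences between the two scans shares statistical strength between them, which is the mechanism behind the reported noise reduction; with beta = 0 the update reduces to two independent ML-EM reconstructions.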
The relationship between oral tori and bite force.
Jeong, Chan-Woo; Kim, Kyung-Ho; Jang, Hyo-Won; Kim, Hye-Sun; Huh, Jong-Ki
2018-01-12
Objective: The relationship between bite force and torus palatinus or mandibularis remains to be explained. The major aim of this study was to determine the correlation between bite force and oral tori. Methods: The bite force of 345 patients was measured with a bite force recorder; impressions of the shape and size of the oral tori were taken on plaster models prior to orthodontic treatments. Subsequently, the relationship between oral tori and bite force was analyzed. Results: The size, shape, and incidence of torus palatinus were not significantly correlated with bite force. However, the size of torus mandibularis increased significantly in proportion to the bite force (p = 0.020). The occurrence of different types of oral tori was not correlated with the bite force. Discussion: The size of torus mandibularis provides information about bite force and can thus be used to clinically assess occlusal stress.
Directory of Open Access Journals (Sweden)
Lotfi Khribi
2017-12-01
Full Text Available In the Bayesian framework, the usual choice of prior in the prediction of homogeneous Poisson processes with random effects is the gamma one. Here, we propose the use of higher order maximum entropy priors. Their advantage is illustrated in a simulation study and the choice of the best order is established by two goodness-of-fit criteria: Kullback–Leibler divergence and a discrepancy measure. This procedure is illustrated on a warranty data set from the automobile industry.
Depth image enhancement using perceptual texture priors
Bang, Duhyeon; Shim, Hyunjung
2015-03-01
A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to limited power consumption, the depth camera suffers from severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress the depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D contents. In this paper, we propose a perceptual depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database composed of high quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature to classify textures. Upon the classification results, we match and substitute the noisy input normals with high quality normals from the database. As a result, our method provides a high quality depth image preserving surface details. We expect that our work is effective in enhancing the details of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.
Analogue of Pontryagin's maximum principle for multiple integrals minimization problems
Mikhail, Zelikin
2016-01-01
A theorem analogous to Pontryagin's maximum principle for multiple integrals is proved. Unlike the usual maximum principle, the maximum should be taken not over all matrices, but only over matrices of rank one. Examples are given.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
Stefanescu, Dan Mihai
2011-01-01
Part I introduces the basic "Principles and Methods of Force Measurement" according to a classification into a dozen force transducer types: resistive, inductive, capacitive, piezoelectric, electromagnetic, electrodynamic, magnetoelastic, galvanomagnetic (Hall-effect), vibrating wires, (micro)resonators, acoustic and gyroscopic. Two special chapters refer to force balance techniques and to combined methods in force measurement. Part II discusses the "(Strain Gauge) Force Transducers Components", evolving from the classical force transducer to the digital / intelligent one, with the inco
Bayesian image reconstruction in SPECT using higher order mechanical models as priors
International Nuclear Information System (INIS)
Lee, S.J.; Gindi, G.; Rangarajan, A.
1995-01-01
While the ML-EM (maximum-likelihood-expectation maximization) algorithm for image reconstruction in emission tomography is unstable due to the ill-posed nature of the problem, Bayesian reconstruction methods overcome this instability by introducing prior information, often in the form of a spatial smoothness regularizer. More elaborate forms of smoothness constraints may be used to extend the role of the prior beyond that of a stabilizer in order to capture actual spatial information about the object. Previously proposed forms of such prior distributions were based on the assumption of a piecewise constant source distribution. Here, the authors propose an extension to a piecewise linear model--the weak plate--which is more expressive than the piecewise constant model. The weak plate prior not only preserves edges but also allows for piecewise ramplike regions in the reconstruction. Indeed, for the application in SPECT, such ramplike regions are observed in ground-truth source distributions in the form of primate autoradiographs of rCBF radionuclides. To incorporate the weak plate prior in a MAP approach, the authors model the prior as a Gibbs distribution and use a GEM formulation for the optimization. They compare quantitative performance of the ML-EM algorithm, a GEM algorithm with a prior favoring piecewise constant regions, and a GEM algorithm with the weak plate prior. Pointwise and regional bias and variance of ensemble image reconstructions are used as indications of image quality. The results show that the weak plate and membrane priors exhibit improved bias and variance relative to ML-EM techniques.
Maximum Entropy Method in Moessbauer Spectroscopy - a Problem of Magnetic Texture
International Nuclear Information System (INIS)
Satula, D.; Szymanski, K.; Dobrzynski, L.
2011-01-01
A reconstruction of the three-dimensional distribution of the hyperfine magnetic field, isomer shift and texture parameter z from Moessbauer spectra by the maximum entropy method is presented. The method was tested on a simulated spectrum consisting of two Gaussian hyperfine field distributions with different values of the texture parameters. It is shown that a proper prior has to be chosen in order to arrive at physically meaningful results. (authors)
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L
2016-08-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
Maximum Profit Configurations of Commercial Engines
Directory of Open Access Journals (Sweden)
Yiran Chen
2011-06-01
Full Text Available An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems; while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
The worst case complexity of maximum parsimony.
Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal
2014-11-01
One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.
Bayesian optimal experimental design for priors of compact support
Long, Quan
2016-01-08
In this study, we optimize the experimental setup computationally by optimal experimental design (OED) in a Bayesian framework. We approximate the posterior probability density functions (pdf) using truncated Gaussian distributions in order to account for the bounded domain of the uniform prior pdf of the parameters. The underlying Gaussian distribution is obtained in the spirit of the Laplace method, more precisely, the mode is chosen as the maximum a posteriori (MAP) estimate, and the covariance is chosen as the negative inverse of the Hessian of the misfit function at the MAP estimate. The model related entities are obtained from a polynomial surrogate. The optimality, quantified by the information gain measures, can be estimated efficiently by a rejection sampling algorithm against the underlying Gaussian probability distribution, rather than against the true posterior. This approach offers a significant error reduction when the magnitude of the invariants of the posterior covariance are comparable to the size of the bounded domain of the prior. We demonstrate the accuracy and superior computational efficiency of our method for shock-tube experiments aiming to measure the model parameters of a key reaction which is part of the complex kinetic network describing the hydrocarbon oxidation. In the experiments, the initial temperature and fuel concentration are optimized with respect to the expected information gain in the estimation of the parameters of the target reaction rate. We show that the expected information gain surface can change its shape dramatically according to the level of noise introduced into the synthetic data. The information that can be extracted from the data saturates as a logarithmic function of the number of experiments, and few experiments are needed when they are conducted at the optimal experimental design conditions.
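The Laplace step described in this record (mode chosen as the MAP estimate, covariance as the negative inverse Hessian of the log-posterior at the mode) can be sketched in one dimension. The truncation to the bounded prior domain and the polynomial surrogate are omitted, and the Gamma log-kernel below is purely an illustrative target, not the paper's kinetic model:

```python
def laplace_approximation(dlogpost, d2logpost, theta0, iters=50):
    """Newton iterations to the posterior mode (MAP estimate), then a
    Gaussian with covariance = negative inverse Hessian of the
    log-posterior at that mode (the Laplace method)."""
    theta = theta0
    for _ in range(iters):
        theta -= dlogpost(theta) / d2logpost(theta)
    return theta, -1.0 / d2logpost(theta)

# Illustrative 1-D target: Gamma(a, b) log-kernel,
# log p(theta) = (a - 1) * log(theta) - b * theta + const, with a=4, b=2
a, b = 4.0, 2.0
dlp = lambda th: (a - 1) / th - b           # first derivative
d2lp = lambda th: -(a - 1) / th ** 2        # second derivative (Hessian)

mode, variance = laplace_approximation(dlp, d2lp, theta0=1.0)
# Analytically: mode = (a - 1) / b = 1.5 and Laplace variance = 0.75
```

In the paper this Gaussian (truncated to the prior's bounded domain) serves as the proposal against which rejection sampling estimates the expected information gain.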
Modelling maximum likelihood estimation of availability
International Nuclear Information System (INIS)
Waller, R.A.; Tietjen, G.L.; Rock, G.W.
1975-01-01
Suppose the performance of a nuclear powered electrical generating power plant is continuously monitored to record the sequence of failure and repairs during sustained operation. The purpose of this study is to assess one method of estimating the performance of the power plant when the measure of performance is availability. That is, we determine the probability that the plant is operational at time t. To study the availability of a power plant, we first assume statistical models for the variables, X and Y, which denote the time-to-failure and the time-to-repair variables, respectively. Once those statistical models are specified, the availability, A(t), can be expressed as a function of some or all of their parameters. Usually those parameters are unknown in practice and so A(t) is unknown. This paper discusses the maximum likelihood estimator of A(t) when the time-to-failure model for X is an exponential density with parameter, lambda, and the time-to-repair model for Y is an exponential density with parameter, theta. Under the assumption of exponential models for X and Y, it follows that the instantaneous availability at time t is A(t)=lambda/(lambda+theta)+theta/(lambda+theta)exp[-[(1/lambda)+(1/theta)]t] with t>0. Also, the steady-state availability is A(infinity)=lambda/(lambda+theta). We use the observations from n failure-repair cycles of the power plant, say X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n, to present the maximum likelihood estimators of A(t) and A(infinity). The exact sampling distributions for those estimators and some statistical properties are discussed before a simulation model is used to determine 95% simulation intervals for A(t). The methodology is applied to two examples which approximate the operating history of two nuclear power plants. (author)
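The estimators in this record follow directly from the formulas given: with exponential models parameterised by the mean time-to-failure lambda and mean time-to-repair theta, the MLEs are the sample means, and A(t) follows by plug-in. A minimal sketch with hypothetical failure/repair data:

```python
import numpy as np

def availability_mle(X, Y, t):
    """Plug-in MLE of instantaneous availability A(t) for exponential
    time-to-failure (mean lambda) and time-to-repair (mean theta) models:
    A(t) = lambda/(lambda+theta) + theta/(lambda+theta) * exp(-(1/lambda + 1/theta) * t)."""
    lam, theta = np.mean(X), np.mean(Y)       # MLEs are the sample means
    A_inf = lam / (lam + theta)               # steady-state availability
    A_t = A_inf + (theta / (lam + theta)) * np.exp(-(1 / lam + 1 / theta) * t)
    return A_t, A_inf

# Hypothetical failure and repair times (hours) from n = 4 cycles
X = [120.0, 95.0, 210.0, 160.0]
Y = [4.0, 6.0, 5.0, 5.0]
A_24, A_inf = availability_mle(X, Y, t=24.0)
```

Note that A(0) = 1 and A(t) decays monotonically to the steady-state value lambda/(lambda+theta), consistent with the closed form in the abstract.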
Dynamical effects prior to heavy ion fusion
International Nuclear Information System (INIS)
Mikhajlova, T.I.; Mikhajlov, I.N.; Molodtsova, I.V.; Di Toro, M.
2002-01-01
Dynamical effects in the initial phase of fusion reactions are studied following the evolution of two colliding 100 Mo ions. The role of elastic forces associated with the Fermi-surface deformation is shown by comparing the results obtained with and without taking the memory effects into account. The Bass barrier separating fused and scattered configurations and the lower bound for the extra push energy are estimated. Examples of cases are shown in which the excitation energy and deformation dependence of the friction parameter are fictitious and simulate the effects of collective motion related with the Fermi-surface deformations
Preliminary Evaluation of the Effectiveness of Air Force Advertising.
Vitola, Bart M.
The Airman Enlistment Questionnaire was administered to a sample of non-prior-service enlistees, 1,667 males and 300 females. Analysis of the responses shows that (1) educational opportunity is the strongest motivator for enlisting in the Air Force; (2) there is an indication that Air Force advertising should make different appeals to men and women; and…
DEFF Research Database (Denmark)
Lange, Katrine; Frydendall, Jan; Cordua, Knud Skou
2012-01-01
The frequency matching method defines a closed form expression for a complex prior that quantifies the higher order statistics of a proposed solution model to an inverse problem. While existing solution methods to inverse problems are capable of sampling the solution space while taking into account...... arbitrarily complex a priori information defined by sample algorithms, it is not possible to directly compute the maximum a posteriori model, as the prior probability of a solution model cannot be expressed. We demonstrate how the frequency matching method enables us to compute the maximum a posteriori...... solution model to an inverse problem by using a priori information based on multiple point statistics learned from training images. We demonstrate the applicability of the suggested method on a synthetic tomographic crosshole inverse problem....
Halo-independence with quantified maximum entropy at DAMA/LIBRA
Energy Technology Data Exchange (ETDEWEB)
Fowlie, Andrew, E-mail: andrew.j.fowlie@googlemail.com [ARC Centre of Excellence for Particle Physics at the Tera-scale, Monash University, Melbourne, Victoria 3800 (Australia)
2017-10-01
Using the DAMA/LIBRA anomaly as an example, we formalise the notion of halo-independence in the context of Bayesian statistics and quantified maximum entropy. We consider an infinite set of possible profiles, weighted by an entropic prior and constrained by a likelihood describing noisy measurements of modulated moments by DAMA/LIBRA. Assuming an isotropic dark matter (DM) profile in the galactic rest frame, we find the most plausible DM profiles and predictions for unmodulated signal rates at DAMA/LIBRA. The entropic prior contains an a priori unknown regularisation factor, β, that describes the strength of our conviction that the profile is approximately Maxwellian. By varying β, we smoothly interpolate between a halo-independent and a halo-dependent analysis, thus exploring the impact of prior information about the DM profile.
EFFECT OF CAFFEINE ON OXIDATIVE STRESS DURING MAXIMUM INCREMENTAL EXERCISE
Directory of Open Access Journals (Sweden)
Guillermo J. Olcina
2006-12-01
Full Text Available Caffeine (1,3,7-trimethylxanthine) is a habitual substance present in a wide variety of beverages and in chocolate-based foods, and it is also used as an adjuvant in some drugs. The antioxidant ability of caffeine has been reported, in contrast with the pro-oxidant effects derived from its action mechanism, such as the systemic release of catecholamines. The aim of this work was to evaluate the effect of caffeine on exercise oxidative stress, measuring plasma vitamins A, E, C and malondialdehyde (MDA) as markers of non-enzymatic antioxidant status and lipid peroxidation, respectively. Twenty young males participated in a double-blind (caffeine 5 mg·kg-1 body weight or placebo) cycling test until exhaustion. In the exercise test where caffeine was ingested prior to the test, exercise time to exhaustion, maximum heart rate, and oxygen uptake significantly increased, whereas the respiratory exchange ratio (RER) decreased. Vitamins A and E decreased with exercise, and vitamin C and MDA increased after both the caffeine and placebo tests but, regarding these particular variables, there were no significant differences between the two test conditions. The results obtained support the conclusion that this dose of caffeine enhances the ergospirometric response to cycling and has no effect on lipid peroxidation or on the antioxidant vitamins A, E and C.
Bottom boundary layer forced by finite amplitude long and short surface waves motions
Elsafty, H.; Lynett, P.
2018-04-01
A multiple-scale perturbation approach is implemented to solve the Navier-Stokes equations while including bottom boundary layer effects under a single wave and under two interacting waves. In this approach, fluid velocities and the pressure field are decomposed into two components: a potential component and a rotational component. In this study, both components exist throughout the entire water column and each is scaled with appropriate length and time scales. A one-way coupling between the two components is implemented. The potential component is assumed to be known analytically or numerically a priori, and the rotational component is forced by the potential component. Through order-of-magnitude analysis, it is found that the leading-order coupling between the two components occurs through the vertical convective acceleration. It is shown that this coupling plays an important role in the bottom boundary layer behavior. Its effect on the results is discussed for different wave-forcing conditions: purely harmonic forcing and impurely harmonic forcing. The approach is then applied to derive the governing equations for the bottom boundary layer developed under two interacting wave motions. Both motions - the shorter and the longer wave - are decomposed into two components, potential and rotational, as is done for the single wave. Test cases are presented wherein two different wave forcings are simulated: (1) two periodic oscillatory motions and (2) short waves interacting with a solitary wave. The analysis of the two periodic motions indicates that nonlinear effects in the rotational solution may be significant even though nonlinear effects are negligible in the potential forcing. The local differences in the rotational velocity due to the nonlinear vertical convection coupling term are found to be on the order of 30% of the maximum boundary layer velocity for the cases simulated in this paper. This difference is expected to increase with the increase in wave
Extended Linear Models with Gaussian Priors
DEFF Research Database (Denmark)
Quinonero, Joaquin
2002-01-01
In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model great flexibility. Support Vector Machines (SVMs) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers.
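As a brief illustrative sketch of the kind of model described above (variable names, feature map, and numbers are hypothetical, not from the report): with a Gaussian prior w ~ N(0, alpha^-1 I) on the weights and Gaussian noise of precision beta, the posterior over the weights of a linear model in feature space is Gaussian in closed form, and its mean coincides with a ridge-regression solution.

```python
import numpy as np

def phi(x, centers, width=0.15):
    """Radial-basis feature map: one Gaussian bump per centre (illustrative)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

def posterior(Phi, t, alpha=1.0, beta=25.0):
    """Posterior mean and covariance of the weights under a Gaussian prior."""
    A = alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi
    cov = np.linalg.inv(A)
    mean = beta * cov @ Phi.T @ t     # equals the ridge solution
    return mean, cov

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
t = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)   # noisy targets
centers = np.linspace(0, 1, 9)
Phi = phi(x, centers)
w_mean, w_cov = posterior(Phi, t)
pred = Phi @ w_mean                   # posterior-mean prediction
```

In the full Bayesian treatment of the report, hyperparameters such as alpha and beta would themselves be inferred (e.g. by EM), rather than fixed as here.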
Cambodian students’ prior knowledge of projectile motion
Piten, S.; Rakkapao, S.; Prasitpong, S.
2017-09-01
Students bring intuitive ideas about physics into classes, which can affect what they learn and how successful they are. To examine what Cambodian students think about projectile motion, we developed seven open-ended questions and administered them to grade 11 students before (N=124) and after (N=131) conventional classes. Results revealed several consistent misconceptions. For instance, many students believed that the velocity vector of a projectile points along the curved path at every position, and that the direction of the acceleration (or force) follows the direction of motion. Many thought that, as observed by a pilot in a plane moving at a constant horizontal speed, an object dropped from the plane would travel backward and land behind the point of its release; that a greater launch angle always produces a greater horizontal range; that the hand force imparted to a thrown ball makes it travel in a straight line to the target; and that the acceleration always points from the higher position to the lower position. These misconceptions will be used as primary resources to develop instructional instruments to promote Cambodian students' understanding of projectile motion in subsequent work.
Directory of Open Access Journals (Sweden)
Asrofi Shicas Nabawi
2017-11-01
The purpose of this study was: (1) to analyze the effect of creatine monohydrate supplementation on strength after physical exercise at maximum intensity; (2) to analyze the effect of creatine monohydrate supplementation on endurance after physical exercise at maximum intensity; and (3) to analyze the difference between the creatine and non-creatine groups in strength and endurance after exercise at maximum intensity. This was a quantitative study using quasi-experimental methods, with a pretest-posttest control group design; data were analyzed using paired-sample t-tests. Data were collected with a leg muscle strength test using a back-and-leg dynamometer, a 1-minute sit-up test, a 30-second push-up test, and a VO2max test on a Cosmed Quark CPET system, during both pretest and posttest. The data were then analyzed using SPSS 22.0. The results showed: (1) creatine administration affected strength after exercise at maximum intensity; (2) creatine administration affected endurance after exercise at maximum intensity; (3) the non-creatine group also showed changes in strength after exercise at maximum intensity; (4) the non-creatine group also showed changes in endurance after exercise at maximum intensity; and (5) there was a significant difference between the creatine and non-creatine groups, with the creatine group showing a larger increase (delta) in strength and endurance after exercise at maximum intensity. Based on this analysis, it can be concluded that strength and endurance increased in each of the groups after the training intervention.
Birth-death prior on phylogeny and speed dating
Directory of Open Access Journals (Sweden)
Sennblad Bengt
2008-03-01
Background: In recent years there has been a trend of leaving the strict molecular clock in order to infer the dating of speciations and other evolutionary events. Explicit modeling of substitution rates and divergence times makes the formulation of informative prior distributions for branch lengths possible. Models with birth-death priors on tree branching and auto-correlated or iid substitution rates among lineages have been proposed, enabling simultaneous inference of substitution rates and divergence times. This problem has, however, mainly been analysed in the Markov chain Monte Carlo (MCMC) framework, an approach requiring computation times of hours or days when applied to large phylogenies. Results: We demonstrate that a hill-climbing maximum a posteriori (MAP) adaptation of the MCMC scheme results in a considerable gain in computational efficiency. We also demonstrate that a novel dynamic programming (DP) algorithm for branch length factorization, useful both in the hill-climbing and in the MCMC setting, further reduces computation time. For the problem of inferring rate and time parameters on a fixed tree, we perform simulations, comparisons between hill-climbing and MCMC on a plant rbcL gene dataset, and dating analysis on an animal mtDNA dataset, showing that our methodology enables efficient, highly accurate analysis of very large trees. Datasets requiring a computation time of several days with MCMC can with our MAP algorithm be accurately analysed in less than a minute. From the results of our example analyses, we conclude that our methodology generally avoids getting trapped early in local optima. For the cases where this can nevertheless be a problem, for instance when we infer the tree topology in addition to the parameters, we show that the problem can be evaded by using a simulated-annealing-like (SAL) method in which we favour tree swaps early in the inference while biasing our focus towards rate and time parameter changes
A maximum power point tracking for photovoltaic-SPE system using a maximum current controller
Energy Technology Data Exchange (ETDEWEB)
Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)
2003-02-01
Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control for maximum power point tracking (MPPT) in the PV-SPE system, based on a maximum current search method, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side simultaneously tracks the maximum power point of the photovoltaic panel. The method uses a proportional-integral controller to control the duty factor of the DC-DC converter via pulse-width modulation (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the experimental results. (Author)
Leow, Li-Ann; de Rugy, Aymar; Marinovic, Welber; Riek, Stephan; Carroll, Timothy J
2016-10-01
When we move, perturbations to our body or the environment can elicit discrepancies between predicted and actual outcomes. We readily adapt movements to compensate for such discrepancies, and the retention of this learning is evident as savings, or faster readaptation to a previously encountered perturbation. The mechanistic processes contributing to savings, or even the necessary conditions for savings, are not fully understood. One theory suggests that savings requires increased sensitivity to previously experienced errors: when perturbations evoke a sequence of correlated errors, we increase our sensitivity to the errors experienced, which subsequently improves error correction (Herzfeld et al. 2014). An alternative theory suggests that a memory of actions is necessary for savings: when an action becomes associated with successful target acquisition through repetition, that action is more rapidly retrieved at subsequent learning (Huang et al. 2011). In the present study, to better understand the necessary conditions for savings, we tested how savings is affected by prior experience of similar errors and prior repetition of the action required to eliminate errors using a factorial design. Prior experience of errors induced by a visuomotor rotation in the savings block was either prevented at initial learning by gradually removing an oppositely signed perturbation or enforced by abruptly removing the perturbation. Prior repetition of the action required to eliminate errors in the savings block was either deprived or enforced by manipulating target location in preceding trials. The data suggest that prior experience of errors is both necessary and sufficient for savings, whereas prior repetition of a successful action is neither necessary nor sufficient for savings. Copyright © 2016 the American Physiological Society.
Ideas of Physical Forces and Differential Calculus in Ancient India
Girish, T. E.; Nair, C. Radhakrishnan
2010-01-01
We have studied the context and development of the ideas of physical forces and differential calculus in ancient India by examining relevant literature related to both astrology and astronomy since pre-Greek periods. The concept of Naisargika Bala (natural force) discussed in Hora texts from India is defined to be proportional to planetary size and inversely related to planetary distance. This idea, developed several centuries before Isaac Newton, resembles fundamental physical forces in natur...
Multi-digit maximum voluntary torque production on a circular object
Shim, Jae Kun; Huang, Junfeng; Hooke, Alexander W.; Latash, Mark L.; Zatsiorsky, Vladimir M.
2010-01-01
Individual digit-tip forces and moments during torque production on a mechanically fixed circular object were studied. During the experiments, subjects positioned each digit on a 6-dimensional force/moment sensor attached to a circular handle and produced a maximum voluntary torque on the handle. The torque direction and the orientation of the torque axis were varied. From this study, it is concluded that: (1) the maximum torque in the closing (clockwise) direction was larger than in the opening (counterclockwise) direction; (2) the thumb and little finger had the largest and the smallest shares, respectively, of both total normal force and total moment; (3) the sharing of total moment between individual digits was not affected by the orientation of the torque axis or by the torque direction, while the sharing of total normal force between individual digits varied with torque direction; (4) the normal force safety margins were largest in the thumb and smallest in the little finger. PMID:17454086
Future changes over the Himalayas: Maximum and minimum temperature
Dimri, A. P.; Kumar, D.; Choudhary, A.; Maharana, P.
2018-03-01
An assessment of the projections of minimum and maximum air temperature over the Indian Himalayan region (IHR) from the COordinated Regional Climate Downscaling EXperiment-South Asia (hereafter CORDEX-SA) regional climate model (RCM) experiments has been carried out under two different Representative Concentration Pathway (RCP) scenarios. The major aim of this study is to assess the probable future changes in the minimum and maximum temperature climatology and their long-term trends under different RCPs, along with elevation-dependent warming over the IHR. A number of statistical analyses, such as changes in mean climatology, long-term spatial trends and probability distribution functions, are carried out to detect signals of climate change. The study also tries to quantify the uncertainties associated with the different model experiments and their ensemble in space, in time and across seasons. The model experiments and their ensemble show a prominent cold bias over the Himalayas for the present climate. However, a statistically significant higher warming rate (0.23-0.52 °C/decade) for both minimum and maximum air temperature (Tmin and Tmax) is observed for all seasons under both RCPs. The rate of warming intensifies with the increase in radiative forcing across the greenhouse gas scenarios from RCP4.5 to RCP8.5. In addition, a wide range of spatial variability and disagreement in the magnitude of trends between different models describes the uncertainty associated with the model projections and scenarios. The projected rate of increase of Tmin may destabilize snow formation at higher altitudes in the northern and western parts of the Himalayan region, while the rising trend of Tmax over the southern flank may effectively melt more snow cover. Such a combined effect of the rising trends of Tmin and Tmax may pose a potential threat to the glacial deposits. The overall trend of the diurnal temperature range (DTR) shows an increase across the entire area with
Maximum mass of magnetic white dwarfs
International Nuclear Information System (INIS)
Paret, Daryel Manreza; Horvath, Jorge Ernesto; Martínez, Aurora Perez
2015-01-01
We revisit the problem of the maximum masses of magnetized white dwarfs (WDs). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the Tolman-Oppenheimer-Volkoff equations for parallel pressures and find that physical solutions vanish for the perpendicular pressure when B ≳ 10^13 G. This fact establishes an upper bound for the magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WDs with super-Chandrasekhar masses, because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we apply results for structure equations appropriate for a cylindrical metric with anisotropic pressures that were derived in our previous work. From the solutions of the structure equations in cylindrical symmetry we have confirmed the same bound of B ∼ 10^13 G, since beyond this value no physical solutions are possible. Our tentative conclusion is that massive WDs with masses well beyond the Chandrasekhar limit do not constitute stable solutions and should not exist. (paper)
TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS
Energy Technology Data Exchange (ETDEWEB)
Buckley, R.; DuPont, A.; Kurzeja, R.; Parker, M.
2007-11-12
Mixing depth is an important quantity in the determination of air pollution concentrations. Fire-weather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth based on potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated daily maximum mixing depth at the SRS over an extended period (4.75 years), derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows differences between the model versions, as well as trends on a multi-year time frame, to be seen. In addition, comparisons of predicted mixing depth for individual days on which special balloon soundings were released are also discussed.
Mammographic image restoration using maximum entropy deconvolution
International Nuclear Information System (INIS)
Jannetta, A; Jackson, J C; Kotre, C J; Birch, I P; Robson, K J; Padgett, R
2004-01-01
An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization
Maximum Margin Clustering of Hyperspectral Data
Niazmardi, S.; Safari, A.; Homayouni, S.
2013-09-01
In recent decades, large margin methods such as Support Vector Machines (SVMs) have been considered the state of the art among supervised learning methods for classification of hyperspectral data. However, the results of these algorithms depend mainly on the quality and quantity of available training data. To tackle the problems associated with training data, researchers have put effort into extending the capability of large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC). MMC is an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. Nevertheless, the optimization of the MMC objective is a non-convex problem. Most existing MMC methods rely on reformulating and relaxing the non-convex optimization problem as semi-definite programs (SDP), which are computationally very expensive and can only handle small data sets. Moreover, most of these algorithms are restricted to two-class classification, and so cannot be used for classification of remotely sensed data. In this paper, a new MMC algorithm is used that solves the original non-convex problem using an alternating optimization method. The algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm gives acceptable results for hyperspectral data clustering.
Paving the road to maximum productivity.
Holland, C
1998-01-01
"Job security" is an oxymoron in today's environment of downsizing, mergers, and acquisitions. Workers find themselves living by new rules in the workplace that they may not understand. How do we cope? It is the leader's charge to take advantage of this chaos and create conditions under which his or her people can understand the need for change and come together with a shared purpose to effect that change. The clinical laboratory at Arkansas Children's Hospital has taken advantage of this chaos to down-size and to redesign how the work gets done to pave the road to maximum productivity. After initial hourly cutbacks, the workers accepted the cold, hard fact that they would never get their old world back. They set goals to proactively shape their new world through reorganizing, flexing staff with workload, creating a rapid response laboratory, exploiting information technology, and outsourcing. Today the laboratory is a lean, productive machine that accepts change as a way of life. We have learned to adapt, trust, and support each other as we have journeyed together over the rough roads. We are looking forward to paving a new fork in the road to the future.
Maximum power flux of auroral kilometric radiation
International Nuclear Information System (INIS)
Benson, R.F.; Fainberg, J.
1991-01-01
The maximum auroral kilometric radiation (AKR) power flux observed by distant satellites has been increased by more than a factor of 10 over previously reported values. This increase has been achieved by a new data selection criterion and a new analysis of antenna spin-modulated signals received by the radio astronomy instrument on ISEE 3. The method relies on selecting AKR events containing signals in the highest-frequency channel (1980 kHz), followed by a careful analysis that effectively increased the instrumental dynamic range by more than 20 dB by making use of the spacecraft antenna gain diagram during a spacecraft rotation. This analysis has allowed the separation of real signals from those created in the receiver by overloading. Many signals having the appearance of AKR harmonic signals were shown to be of spurious origin. During one event, however, real second-harmonic AKR signals were detected even though the spacecraft was at a great distance (17 R_E) from Earth. During another event, when the spacecraft was at the orbital distance of the Moon and on the morning side of Earth, the power flux of fundamental AKR was greater than 3 × 10^-13 W m^-2 Hz^-1 at 360 kHz, normalized to a radial distance r of 25 R_E assuming the power falls off as r^-2. A comparison of these intense signal levels with the most intense source region values (obtained by ISIS 1 and Viking) suggests that multiple sources were observed by ISEE 3
Maximum likelihood window for time delay estimation
International Nuclear Information System (INIS)
Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup
2004-01-01
Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, the estimation of time delay has been one of the key issues in leak locating with the arrival time difference method. In this study, an optimal maximum likelihood window is considered to obtain a better estimate of the time delay. The method has been validated in experiments, providing much clearer and more precise peaks in the cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. In addition to the experiments, an intensive theoretical analysis in terms of signal processing is described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which applies a weighting to the significant frequencies.
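As context for the windowing idea, here is a minimal sketch of the baseline correlation-based delay estimator that the maximum likelihood window refines (synthetic signals; the sample rate, lag, and noise level are illustrative, not from the study): the delay is the lag at which the cross-correlation of the two sensor signals peaks.

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Delay (s) of y relative to x from the cross-correlation peak."""
    corr = np.correlate(y, x, mode="full")
    lag = np.argmax(corr) - (len(x) - 1)   # lag in samples
    return lag / fs

fs = 10_000.0                          # sample rate, Hz (illustrative)
rng = np.random.default_rng(1)
n = 4096
leak = rng.standard_normal(n)          # broadband stand-in for leak noise
true_lag = 37                          # samples of delay between sensors
x = leak + 0.2 * rng.standard_normal(n)
y = np.roll(leak, true_lag) + 0.2 * rng.standard_normal(n)
delay = estimate_delay(x, y, fs)
```

The leak position then follows from the delay and the elastic wave speed; the paper's contribution is a frequency-domain weighting that sharpens the correlation peak beyond this plain estimator.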
Ancestral Sequence Reconstruction with Maximum Parsimony.
Herbst, Lina; Fischer, Mareike
2017-12-01
One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
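As an illustrative aside (a standard textbook algorithm, not the manuscript's proofs), the small-parsimony pass that MP uses for ancestral state inference on a fixed bifurcating tree is Fitch's algorithm: at each internal node, take the intersection of the children's state sets if it is non-empty, otherwise take the union and count one change.

```python
def fitch(node):
    """Bottom-up Fitch pass. A leaf is a frozenset of states;
    an internal node is a (left, right) pair. Returns (state set
    at this node, minimum number of changes in the subtree)."""
    if isinstance(node, frozenset):
        return node, 0
    (left, lc), (right, rc) = fitch(node[0]), fitch(node[1])
    inter = left & right
    if inter:                      # children agree: no extra change
        return inter, lc + rc
    return left | right, lc + rc + 1   # disagreement costs one change

leaf = lambda s: frozenset(s)
# Toy 4-taxon tree ((a,a),(a,c)): MP unambiguously returns 'a' at the root.
tree = ((leaf("a"), leaf("a")), (leaf("a"), leaf("c")))
states, changes = fitch(tree)
```

On this toy tree the root set is {'a'} with one change, matching the intuition behind the Charleston-Steel question: how many leaves must carry state a before the root estimate is unambiguously a.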
Evaluation of maximum power point tracking in hydrokinetic energy conversion systems
Directory of Open Access Journals (Sweden)
Jahangir Khan
2015-11-01
Maximum power point tracking is a mature control issue for wind, solar and other systems. On the other hand, being a relatively new technology, detailed discussions of power tracking for hydrokinetic energy conversion systems are generally not available. Prior to developing sophisticated control schemes for use in hydrokinetic systems, existing know-how in wind or solar technologies can be explored. In this study, a comparative evaluation of three generic classes of maximum power point tracking scheme is carried out: (a) tip speed ratio control, (b) power signal feedback control, and (c) hill climbing search control. In addition, a novel concept for maximum power point tracking, namely extremum seeking control, is introduced. Detailed and validated system models are used in a simulation environment. Potential advantages and drawbacks of each of these schemes are summarised.
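Of the three classes above, hill climbing search is the simplest to sketch (the power curve and step size below are toy stand-ins, not the study's validated models): perturb the operating point, observe the power, and reverse direction whenever power drops.

```python
def power(speed):
    """Toy unimodal power curve with its maximum at speed = 7.0."""
    return -(speed - 7.0) ** 2 + 49.0

def hill_climb(speed=2.0, step=0.25, iters=100):
    """Perturb-and-observe: keep the perturbation direction while
    power rises, reverse it when power falls."""
    direction = 1.0
    p_prev = power(speed)
    for _ in range(iters):
        speed += direction * step
        p = power(speed)
        if p < p_prev:            # power dropped: reverse
            direction = -direction
        p_prev = p
    return speed

opt = hill_climb()   # settles into oscillation around the maximum at 7.0
```

The characteristic drawback this exposes, and one reason the study considers extremum seeking control, is the steady-state oscillation of the operating point around the true maximum, with amplitude set by the step size.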
A Bayes-Maximum Entropy method for multi-sensor data fusion
Energy Technology Data Exchange (ETDEWEB)
Beckerman, M.
1991-01-01
In this paper we introduce a Bayes-Maximum Entropy formalism for multi-sensor data fusion, and present an application of this methodology to the fusion of ultrasound and visual sensor data as acquired by a mobile robot. In our approach the principle of maximum entropy is applied to the construction of priors and likelihoods from the data. Distances between ultrasound and visual points of interest in a dual representation are used to define Gibbs likelihood distributions. Both one- and two-dimensional likelihoods are presented, and cast into a form which makes explicit their dependence upon the mean. The Bayesian posterior distributions are used to test a null hypothesis, and Maximum Entropy Maps used for navigation are updated using the resulting information from the dual representation. 14 refs., 9 figs.
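The maximum-entropy construction of likelihoods from distance data can be illustrated with a toy version (names and numbers are hypothetical, not the paper's exact construction): among all distributions over candidate matches with a prescribed mean distance, the entropy maximiser is a Gibbs distribution p_i ∝ exp(-λ d_i), and λ can be found by bisection because the Gibbs mean is monotone in λ.

```python
import numpy as np

def gibbs(d, lam):
    """Gibbs distribution over match distances d for parameter lam."""
    w = np.exp(-lam * d)
    return w / w.sum()

def fit_lambda(d, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Bisect for the lam whose Gibbs mean distance equals target_mean."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # mean distance decreases monotonically as lam grows
        if gibbs(d, mid) @ d > target_mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d = np.array([0.5, 1.0, 2.0, 4.0])   # toy ultrasound-visual distances
lam = fit_lambda(d, target_mean=1.0)
p = gibbs(d, lam)                    # MaxEnt likelihood over matches
```

This mirrors the paper's remark that the Gibbs likelihoods are cast into a form making their dependence on the mean explicit: the mean constraint is exactly what fixes the distribution.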
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
§ 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
20 CFR 226.52 - Total annuity subject to maximum.
2010-04-01
Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...
Half-width at half-maximum, full-width at half-maximum analysis
Indian Academy of Sciences (India)
addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.
Prolonged Instability Prior to a Regime Shift
Regime shifts are generally defined as the point of 'abrupt' change in the state of a system. However, a seemingly abrupt transition can be the product of a system reorganization that has been ongoing much longer than is evident in statistical analysis of a single component of the system. Using both univariate and multivariate statistical methods, we tested a long-term high-resolution paleoecological dataset with a known change in species assemblage for a regime shift. Analysis of this dataset with Fisher Information and multivariate time series modeling showed that there was a ~2000-year period of instability prior to the regime shift. This period of instability and the subsequent regime shift coincide with regional climate change, indicating that the system is undergoing extrinsic forcing. Paleoecological records offer a unique opportunity to test tools for the detection of thresholds and stable states, and thus to examine the long-term stability of ecosystems over periods of multiple millennia. This manuscript explores various methods of assessing the transition between alternative states in an ecological system described by a long-term high-resolution paleoecological dataset.
Noncircular Chainrings Do Not Influence Maximum Cycling Power.
Leong, Chee-Hoi; Elmer, Steven J; Martin, James C
2017-12-01
Noncircular chainrings could increase cycling power by prolonging the powerful leg extension/flexion phases and curtailing the low-power transition phases. We compared maximal cycling power-pedaling rate relationships, and joint-specific kinematics and powers, across 3 chainring eccentricities (CON = 1.0; LOW ecc = 1.13; HIGH ecc = 1.24). Part I: Thirteen cyclists performed maximal inertial-load cycling under the 3 chainring conditions; maximum cycling power and optimal pedaling rate were determined. Part II: Ten cyclists performed maximal isokinetic cycling (120 rpm) under the same 3 chainring conditions; pedal and joint-specific powers were determined using pedal forces and limb kinematics. Neither maximal cycling power nor optimal pedaling rate differed across chainring conditions (all p > .05). Peak ankle angular velocity for HIGH ecc was less than for CON (p < .05). The chainring-pedal system allowed cyclists to manipulate ankle angular velocity to maintain their preferred knee and hip actions, suggesting that maximizing extension/flexion and minimizing transition phases may be counterproductive for maximal power.
Putting Priors in Mixture Density Mercer Kernels
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms, such as different versions of EM, and numeric optimization methods, such as conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
Prior expectations facilitate metacognition for perceptual decision.
Sherman, M T; Seth, A K; Barrett, A B; Kanai, R
2015-09-01
The influential framework of 'predictive processing' suggests that prior probabilistic expectations influence, or even constitute, perceptual contents. This notion is evidenced by the facilitation of low-level perceptual processing by expectations. However, whether expectations can facilitate high-level components of perception remains unclear. We addressed this question by considering the influence of expectations on perceptual metacognition. To isolate the effects of expectation from those of attention we used a novel factorial design: expectation was manipulated by changing the probability that a Gabor target would be presented; attention was manipulated by instructing participants to perform or ignore a concurrent visual search task. We found that, independently of attention, metacognition improved when yes/no responses were congruent with expectations of target presence/absence. Results were modeled under a novel Bayesian signal detection theoretic framework which integrates bottom-up signal propagation with top-down influences, to provide a unified description of the mechanisms underlying perceptual decision and metacognition. Copyright © 2015 Elsevier Inc. All rights reserved.
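The basic signal detection mechanism behind the design can be made concrete. A minimal sketch of how a prior expectation shifts the ideal observer's decision criterion in standard equal-variance Gaussian SDT; the paper's model adds a metacognitive (confidence) layer on top of this, which is not shown here:

```python
from math import log

def optimal_criterion(d_prime, p_present):
    """Ideal-observer criterion c for equal-variance Gaussian signal
    detection: respond 'yes' when the internal sample exceeds c. A
    stronger prior expectation of the target (larger p_present)
    lowers c, i.e. makes the observer more liberal."""
    return log((1.0 - p_present) / p_present) / d_prime

# With sensitivity d' = 1.5: a neutral 50% prior leaves the criterion
# at 0, while expecting the target 75% of the time shifts it liberal.
c_neutral = optimal_criterion(1.5, 0.50)
c_expected = optimal_criterion(1.5, 0.75)
```

The congruence effect reported in the abstract concerns responses that agree with such expectation-shifted decisions; the Bayesian framework then propagates the same prior into the confidence judgment.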
Washing of waste prior to landfilling.
Cossu, Raffaello; Lai, Tiziana
2012-05-01
The main impact produced by landfills is represented by the release of leachate emissions. Waste washing treatment has been investigated to evaluate its efficiency in reducing the waste leaching fraction prior to landfilling. The results of laboratory-scale washing tests applied to several significant residues from integrated management of solid waste are presented in this study, specifically: non-recyclable plastics from source separation, mechanical-biological treated municipal solid waste and a special waste, automotive shredded residues. Results obtained demonstrate that washing treatment contributes towards combating the environmental impacts of raw wastes. Accordingly, a leachate production model was applied, leading to the consideration that the concentrations of chemical oxygen demand (COD) and total Kjeldahl nitrogen (TKN), parameters of fundamental importance in the characterization of landfill leachate, from a landfill containing washed wastes, are comparable to those that would only be reached between 90 and 220 years later in the presence of raw wastes. The findings obtained demonstrated that washing of waste may represent an effective means of reducing the leachable fraction resulting in a consequent decrease in landfill emissions. Further studies on pilot scale are needed to assess the potential for full-scale application of this treatment. Copyright © 2012 Elsevier Ltd. All rights reserved.
Pitch perception prior to cortical maturation
Lau, Bonnie K.
Pitch perception plays an important role in many complex auditory tasks including speech perception, music perception, and sound source segregation. Because of the protracted and extensive development of the human auditory cortex, pitch perception might be expected to mature, at least over the first few months of life. This dissertation investigates complex pitch perception in 3-month-olds, 7-month-olds and adults -- time points when the organization of the auditory pathway is distinctly different. Using an observer-based psychophysical procedure, a series of four studies were conducted to determine whether infants (1) discriminate the pitch of harmonic complex tones, (2) discriminate the pitch of unresolved harmonics, (3) discriminate the pitch of missing fundamental melodies, and (4) have comparable sensitivity to pitch and spectral changes as adult listeners. The stimuli used in these studies were harmonic complex tones, with energy missing at the fundamental frequency. Infants at both three and seven months of age discriminated the pitch of missing fundamental complexes composed of resolved and unresolved harmonics as well as missing fundamental melodies, demonstrating perception of complex pitch by three months of age. More surprisingly, infants in both age groups had lower pitch and spectral discrimination thresholds than adult listeners. Furthermore, no differences in performance on any of the tasks presented were observed between infants at three and seven months of age. These results suggest that subcortical processing is not only sufficient to support pitch perception prior to cortical maturation, but provides adult-like sensitivity to pitch by three months.
Febrile seizures prior to sudden cardiac death
DEFF Research Database (Denmark)
Stampe, Niels Kjær; Glinge, Charlotte; Jabbari, Reza
2018-01-01
Aims: Febrile seizure (FS) is a common disorder affecting 2-5% of children up to 5 years of age. The aim of this study was to determine whether FS in early childhood are over-represented in young adults dying from sudden cardiac death (SCD). Methods and results: We included all deaths (n = 4595) nationwide and, through review of all death certificates, identified 245 SCD in Danes aged 1-30 years in 2000-09. Through the use of nationwide registries, we identified all persons admitted with first FS among SCD cases (14/245; 5.7%) and in the corresponding living Danish population (71 027/2 369 785...). The most frequent cause of death among SCD cases with FS was sudden arrhythmic death syndrome (5/8; 62.5%). Conclusion: This study demonstrates a significant two-fold increase in the frequency of FS prior to death in young SCD cases compared with the two control groups, suggesting that FS could potentially contribute in a risk...
Interfacial force measurements using atomic force microscopy
Chu, L.
2018-01-01
Atomic Force Microscopy (AFM) can not only image the topography of surfaces at atomic resolution, but can also measure accurately the different interaction forces, like repulsive, adhesive and lateral existing between an AFM tip and the sample surface. Based on AFM, various extended techniques have
Tendon surveillance requirements - average tendon force
International Nuclear Information System (INIS)
Fulton, J.F.
1982-01-01
Proposed Rev. 3 to USNRC Reg. Guide 1.35 discusses the need for comparing, for individual tendons, the measured and predicted lift-off forces. Such a comparison is intended to detect any abnormal tendon force loss which might occur. Recognizing that there are uncertainties in the prediction of tendon losses, proposed Guide 1.35.1 has allowed specific tolerances on the fundamental losses. Thus, the lift-off force acceptance criteria for individual tendons appearing in Reg. Guide 1.35, Proposed Rev. 3, are stated relative to a lower-bound predicted tendon force, which is obtained using the 'plus' tolerances on the fundamental losses. There is an additional acceptance criterion for the lift-off forces which is not specifically addressed in these two Reg. Guides; however, it is included in a proposed Subsection IWX to ASME Code Section XI. This criterion is based on the overriding requirement that the magnitude of prestress in the containment structure be sufficient to meet the minimum prestress design requirements. This design requirement can be expressed as an average tendon force for each group of vertical, hoop, or dome tendons. For the purpose of comparing the actual tendon forces with the required average tendon force, the lift-off forces measured for a sample of tendons within each group can be averaged to construct the average force for the entire group. However, the individual lift-off forces must be 'corrected' (normalized) prior to obtaining the sample average. This paper derives the correction factor to be used for this purpose. (orig./RW)
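The averaging step described above can be sketched in a few lines; the sample values and correction factors below are hypothetical placeholders, since the paper's contribution is the derivation of the actual correction factor:

```python
def group_average_force(lift_off_forces, correction_factors):
    """Average tendon force for a group from a measured sample: each
    individual lift-off force is first 'corrected' (normalized) for
    tendon-specific predicted losses, then the corrected values are
    averaged and compared against the required group average."""
    corrected = [f * c for f, c in zip(lift_off_forces, correction_factors)]
    return sum(corrected) / len(corrected)

# Hypothetical sample of three hoop tendons (forces in kips) with
# made-up normalization factors
avg = group_average_force([1520.0, 1480.0, 1500.0], [1.00, 1.02, 0.98])
```

The point of the normalization is that raw lift-off forces measured at different times and conditions are not directly comparable, so averaging them without correction would bias the group estimate.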
In-shoe plantar tri-axial stress profiles during maximum-effort cutting maneuvers.
Cong, Yan; Lam, Wing Kai; Cheung, Jason Tak-Man; Zhang, Ming
2014-12-18
Soft tissue injuries, such as anterior cruciate ligament rupture, ankle sprain and foot skin problems, frequently occur during cutting maneuvers. These injuries are often regarded as associated with abnormal joint torque and interfacial friction caused by excessive external and in-shoe shear forces. This study simultaneously investigated the dynamic in-shoe localized plantar pressure and shear stress during lateral shuffling and 45° sidestep cutting maneuvers. Tri-axial force transducers were affixed at the first and second metatarsal heads, lateral forefoot, and heel regions in the midsole of a basketball shoe. Seventeen basketball players executed both cutting maneuvers with maximum efforts. Lateral shuffling cutting had a larger mediolateral braking force than 45° sidestep cutting. This large braking force was concentrated at the first metatarsal head, as indicated by its maximum medial shear stress (312.2 ± 157.0 kPa). During propulsion phase, peak shear stress occurred at the second metatarsal head (271.3 ± 124.3 kPa). Compared with lateral shuffling cutting, 45° sidestep cutting produced larger peak propulsion shear stress (463.0 ± 272.6 kPa) but smaller peak braking shear stress (184.8 ± 181.7 kPa), of which both were found at the first metatarsal head. During both cutting maneuvers, maximum medial and posterior shear stress occurred at the first metatarsal head, whereas maximum pressure occurred at the second metatarsal head. The first and second metatarsal heads sustained relatively high pressure and shear stress and were expected to be susceptible to plantar tissue discomfort or injury. Due to different stress distribution, distinct pressure and shear cushioning mechanisms in basketball footwear might be considered over different foot regions. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOUBLE SHEAR DESIGN TO REDUCED STAMPING FORCE
Directory of Open Access Journals (Sweden)
Rudi Kurniawan Arief
2017-12-01
Full Text Available Ideally processing of part using stamping machine using only 70-80 % of available force to keep machine in good shape for a long periods. But in some certain case the force may equal to or exceed the available maximum force so the company must sent the process to another outsource company. A case found in a metal stamping company where a final product consist of 3 parts to assembly with one part exceeded the force of available machine. This part can only process in a 1000 tons machine while this company only have 2 of this machine with full workload. Sending this parts outsource will induce delivery problems because other parts are processed, assembled and paint inhouse, this also need additional transportation cost and extra supervision to ensure the quality and delivery schedule. The only exit action of this problem is by reducing the force tonnage. This paper using punch inclining method to reduce the force. The incline punch will distributed the force along the inclined surface that reduce stamping force as well. Inclined surface of punch also cause another major problems that the product becoming curved after process. This problems solved with additional flattening process that add more process cost but better than to outsource the process. Chisel type of inclining punch tip was choosen to avoid worst deformation of product. This paper will give the scientific recomendation to the company.
Production of isometric forces during sustained acceleration.
Sand, D P; Girgenrath, M; Bock, O; Pongratz, H
2003-06-01
The operation of high-performance aircraft requires pilots to apply finely graded forces on controls. Since they are often exposed to high levels of acceleration in flight, we investigated to what extent this ability is degraded in such an environment. Twelve healthy non-pilot volunteers were seated in the gondola of a centrifuge and their performance was tested at normal gravity (1 G) and while exposed to sustained forces of 1.5 G and 3 G oriented from head to foot (+Gz). Using an isometric joystick, they attempted to produce force vectors with specific lengths and directions commanded in random order by a visual display. Acceleration had substantial effects on the magnitude of produced force. Compared with 1 G, maximum produced force was about 2 N higher at 1.5 G and about 10 N higher at 3 G. The size of this effect was constant across the different magnitudes, but varied with the direction of the prescribed force. Acceleration degrades control of force production. This finding may indicate that the motor system misinterprets the unusual gravitoinertial environment and/or that proprioceptive feedback is degraded due to increased muscle tone. The production of excessive isometric force could affect the safe operation of high-performance aircraft.
Maximum entropy production rate in quantum thermodynamics
Energy Technology Data Exchange (ETDEWEB)
Beretta, Gian Paolo, E-mail: beretta@ing.unibs.i [Universita di Brescia, via Branze 38, 25123 Brescia (Italy)
2010-06-01
In the framework of the recent quest for well-behaved nonlinear extensions of the traditional Schroedinger-von Neumann unitary dynamics that could provide fundamental explanations of recent experimental evidence of loss of quantum coherence at the microscopic level, a recent paper [Gheorghiu-Svirschevski 2001 Phys. Rev. A 63 054102] reproposes the nonlinear equation of motion proposed by the present author [see Beretta G P 1987 Found. Phys. 17 365 and references therein] for quantum (thermo)dynamics of a single isolated indivisible constituent system, such as a single particle, qubit, qudit, spin or atomic system, or a Bose-Einstein or Fermi-Dirac field. As already proved, such nonlinear dynamics entails a fundamental unifying microscopic proof and extension of Onsager's reciprocity and Callen's fluctuation-dissipation relations to all nonequilibrium states, close and far from thermodynamic equilibrium. In this paper we propose a brief but self-contained review of the main results already proved, including the explicit geometrical construction of the equation of motion from the steepest-entropy-ascent ansatz and its exact mathematical and conceptual equivalence with the maximal-entropy-generation variational-principle formulation presented in Gheorghiu-Svirschevski S 2001 Phys. Rev. A 63 022105. Moreover, we show how it can be extended to the case of a composite system to obtain the general form of the equation of motion, consistent with the demanding requirements of strong separability and of compatibility with general thermodynamics principles. The irreversible term in the equation of motion describes the spontaneous attraction of the state operator in the direction of steepest entropy ascent, thus implementing the maximum entropy production principle in quantum theory. The time rate at which the path of steepest entropy ascent is followed has so far been left unspecified. As a step towards the identification of such rate, here we propose a possible
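Schematically, the dynamics under discussion augments the unitary Schroedinger-von Neumann term with a dissipative part (the notation here is schematic; the explicit construction of the operator D is given in the references cited above):

```latex
\frac{d\rho}{dt} = -\frac{i}{\hbar}\,[H,\rho] \;+\; \frac{1}{\tau}\,D(\rho)
```

The term D(ρ) pulls the state operator along the direction of steepest entropy ascent, while the relaxation time τ sets the so-far-unspecified rate discussed at the end of the abstract.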
Determination of the maximum-depth to potential field sources by a maximum structural index method
Fedi, M.; Florio, G.
2013-01-01
A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent from the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using the semi-automated methods, such as Euler deconvolution or depth-from-extreme-points method (DEXP). The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.
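The proposed strategy reduces to a one-line computation once the field maximum and its gradient maximum are known. A sketch under the assumption of a homogeneous field measured directly above the source (f proportional to z to the power -N); the function name and synthetic numbers are illustrative, not the full DEXP workflow:

```python
def max_depth_estimate(f_max, grad_max, n_max):
    """Maximum depth from the Euler homogeneity relation: directly
    above a source whose field decays as f = k / z**N, one has
    df/dz = -N * f / z, hence z = N * f / |df/dz|. Evaluating with
    the highest admissible structural index N_max gives the maximum
    depth compatible with the data."""
    return n_max * f_max / abs(grad_max)

# Synthetic check: gravity of a point source (N = 2), f = 8 / z**2
# observed at height z = 2 gives f = 2 and df/dz = -2.
z_max = max_depth_estimate(2.0, -2.0, 2)
```

Because N_max depends only on the dimensionality of the problem and the nature of the field, the same evaluation applies unchanged to magnetic data or to derivatives of the field, which is the advantage over the gravity-only Bott-Smith rules.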
Takayasu, Kenta; Yoshida, Kenji; Kinoshita, Hidefumi; Yoshimoto, Syunsuke; Oshiro, Osamu; Matsuda, Tadashi
2017-07-19
Quantifying surgical skills assists novice surgeons when learning operative techniques. We measured the interaction force at a ligation point and clarified the features of the force pattern among surgeons with different skill levels during laparoscopic knot tying. Forty-four surgeons were divided into three groups based on experience: 13 novice (0-5 years), 16 intermediate (6-15 years), and 15 expert (16-30 years). To assess the tractive force direction and volume during knot tying, we used a sensor that measures six force-torque values (x-axis: Fx, y-axis: Fy, z-axis: Fz, and xy-axis: Fxy) attached to a slit Penrose drain. All participants completed one double knot and five single knot sequences. We recorded completion time, force volume (FV), maximum force (MF), time over 1.5 N, duration of non-zero force, and percentage time when vertical force exceeded horizontal force (PTz). There was a significant difference between groups for completion time (p = 0.007); FV (total: p = 0.002; Fx: p = 0.004, Fy: p = 0.007, Fxy: p = 0.004, Fz: p force (p = 0.029); and PTz (p force pattern at the ligation point during suturing by surgeons with three levels of experience using a force measurement system. We revealed that both force volume and force direction differed depending on surgeons' skill level during knot tying. Copyright © 2017. Published by Elsevier Inc.
International Nuclear Information System (INIS)
Ridgely, Charles T
2010-01-01
Many textbooks dealing with general relativity do not demonstrate the derivation of forces in enough detail. The analyses presented herein demonstrate straightforward methods for computing forces by way of general relativity. Covariant divergence of the stress-energy-momentum tensor is used to derive a general expression of the force experienced by an observer in general coordinates. The general force is then applied to the local co-moving coordinate system of a uniformly accelerating observer, leading to an expression of the inertial force experienced by the observer. Next, applying the general force in Schwarzschild coordinates is shown to lead to familiar expressions of the gravitational force. As a more complex demonstration, the general force is applied to an observer in Boyer-Lindquist coordinates near a rotating, Kerr black hole. It is then shown that when the angular momentum of the black hole goes to zero, the force on the observer reduces to the force on an observer held stationary in Schwarzschild coordinates. As a final consideration, the force on an observer moving in rotating coordinates is derived. Expressing the force in terms of Christoffel symbols in rotating coordinates leads to familiar expressions of the centrifugal and Coriolis forces on the observer. It is envisioned that the techniques presented herein will be most useful to graduate level students, as well as those undergraduate students having experience with general relativity and tensor analysis.
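For orientation, the Schwarzschild case mentioned above recovers the standard textbook expression for the proper acceleration of a static observer:

```latex
a^{\mu} = u^{\nu}\nabla_{\nu}u^{\mu}, \qquad
a = \frac{GM}{r^{2}\sqrt{1 - 2GM/(rc^{2})}}
```

which reduces to the Newtonian value GM/r^2 for r much greater than 2GM/c^2, consistent with the limit behavior the paper uses as a check on the more complex Boyer-Lindquist and rotating-coordinate cases.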
Weighted Maximum-Clique Transversal Sets of Graphs
Chuan-Min Lee
2011-01-01
A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split graphs...
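On small instances the problem is easy to state in code. A brute-force sketch (exponential, for toy graphs only; the paper is about efficient algorithms for structured classes such as split graphs, which this does not implement):

```python
from itertools import combinations

def maximum_cliques(adj):
    """All cliques of maximum size in a small graph, given as a dict
    mapping each vertex to its set of neighbors (brute force)."""
    nodes = list(adj)
    best = []
    for r in range(1, len(nodes) + 1):
        found = [set(c) for c in combinations(nodes, r)
                 if all(v in adj[u] for u, v in combinations(c, 2))]
        if found:
            best = found
    return best

def min_transversal(adj):
    """A smallest vertex set meeting every maximum clique, found by
    trying candidate sets in order of increasing size (brute force)."""
    cliques = maximum_cliques(adj)
    nodes = list(adj)
    for r in range(len(nodes) + 1):
        for cand in combinations(nodes, r):
            s = set(cand)
            if all(s & q for q in cliques):
                return s

# Triangle {1,2,3} with a pendant vertex 4: the unique maximum clique
# is {1,2,3}, so any single vertex of it is a minimum transversal.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
```

Note that only *maximum* cliques need to be hit, not maximal ones: vertex 4 lies in the maximal clique {3,4}, but that clique is not of maximum size, so it places no constraint on the transversal.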
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Directory of Open Access Journals (Sweden)
Daigle Bernie J
2012-05-01
Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs. MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2: an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods
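The rare-event flavor of the approach can be illustrated on the simplest of the models mentioned. A toy cross-entropy search for a Poisson (pure-birth) event rate consistent with an observed count; this illustrates the general idea of iteratively tilting a sampling distribution toward data-consistent simulations, not the MCEM2 algorithm itself, and all names and numbers are invented for the sketch:

```python
import random

def ce_rate_estimate(target_count, t_end=1.0, n_sims=2000, n_iter=8,
                     elite_frac=0.1, seed=1):
    """Each round: sample candidate rates, simulate a trajectory per
    rate, keep the 'elite' rates whose event counts land closest to
    the observed count, and refit the sampling distribution to them.
    The sampler thus concentrates on the (initially rare) region of
    parameter space that reproduces the data."""
    rng = random.Random(seed)
    mu, sigma = 5.0, 5.0                      # initial guess over the rate
    for _ in range(n_iter):
        rates = [max(1e-3, rng.gauss(mu, sigma)) for _ in range(n_sims)]
        scored = []
        for lam in rates:
            # event count in [0, t_end] for a Poisson process of rate lam
            n, t = 0, rng.expovariate(lam)
            while t < t_end:
                n += 1
                t += rng.expovariate(lam)
            scored.append((abs(n - target_count), lam))
        scored.sort(key=lambda s: s[0])
        elite = [lam for _, lam in scored[: max(1, int(elite_frac * n_sims))]]
        mu = sum(elite) / len(elite)
        sigma = max(0.1, (sum((x - mu) ** 2 for x in elite) / len(elite)) ** 0.5)
    return mu

# For 10 observed events in 1 time unit the exact MLE is 10; the
# cross-entropy search should land in that neighborhood.
rate_hat = ce_rate_estimate(10)
```

In MCEM2 the same elite-selection idea is used inside an expectation-maximization loop over full stochastic trajectories rather than over a single scalar rate.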
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Prior Mental Fatigue Impairs Marksmanship Decision Performance
Directory of Open Access Journals (Sweden)
James Head
2017-09-01
Full Text Available Purpose: Mental fatigue has been shown to impair subsequent physical performance in continuous and discontinuous exercise. However, its influence on subsequent fine-motor performance in an applied setting (e.g., marksmanship for trained soldiers) is relatively unknown. The purpose of this study was to investigate whether prior mental fatigue influences subsequent marksmanship performance as measured by shooting accuracy and judgment of soldiers in a live-fire scenario. Methods: Twenty trained infantry soldiers engaged targets after completing either a mental fatigue or control intervention in a repeated-measures design. Heart rate variability and the NASA-TLX were used to gauge physiological and subjective effects of the interventions. Target hit proportion, projectile group accuracy, and precision were used to measure marksmanship accuracy. Marksmanship accuracy was assessed by measuring bullet group accuracy (i.e., how close a group of shots is relative to center of mass) and bullet group precision (i.e., how close each individual shot is to the others). Additionally, marksmanship decision accuracy (correctly shooting vs. correctly withholding a shot when engaging targets) was used to examine marksmanship performance. Results: Soldiers rated the mentally fatiguing task (59.88 ± 23.7) as having greater mental workload relative to the control intervention [31.29 ± 12.3, t(19) = 1.72, p < 0.001]. Additionally, soldiers completing the mental fatigue intervention (96.04 ± 37.1) also had lower time-domain (standard deviation of normal-to-normal R-R intervals) heart rate variability relative to the control [134.39 ± 47.4, t(18) = 3.59, p < 0.001]. Projectile group accuracy and group precision failed to show differences between interventions [t(19) = 0.98, p = 0.34; t(19) = 0.18, p = 0.87, respectively]. Marksmanship decision errors significantly increased after soldiers completed the mental fatigue intervention (48% ± 22.4) relative to the control
Digital communication constraints in prior space missions
Yassine, Nathan K.
2004-01-01
Digital communication is crucial for space endeavors. It transmits scientific and command data between earth stations and the spacecraft crew. It facilitates communications between astronauts, and provides live coverage during all phases of the mission. Digital communications provide ground stations and spacecraft crew precise data on the spacecraft position throughout the entire mission. Lessons learned from prior space missions are valuable for the new lunar and Mars missions set by our president's speech. These data will save our agency time and money, and set the course for our currently developing technologies. Limitations on digital communications equipment pertaining to mass, volume, data rate, frequency, antenna type and size, modulation, format, and power in past space missions are of particular interest. This activity is in support of ongoing communication architectural studies pertaining to robotic and human lunar exploration. The design capabilities and functionalities will depend on the space and power allocated for digital communication equipment. My contribution will be gathering these data, writing a report, and presenting it to the Communications Technology Division staff. Antenna design is very carefully studied for each mission scenario. Currently, phased array antennas are being developed for the lunar mission. Phased array antennas use little power, and electronically steer a beam instead of using DC motors. There are 615 patches in the phased array antenna. These patches have to be modified to have high yield. 50 patches were created for testing. My part is to assist in the characterization of these patch antennas, and to determine whether or not certain modifications to quartz micro-strip patch radiators result in a significant yield to warrant proceeding with repairs to the prototype 19 GHz ferroelectric reflect-array antenna. This work requires learning how to calibrate an automatic network, and mounting and testing antennas in coaxial fixtures. The purpose of this
Tropospheric radiative forcing of CH4
International Nuclear Information System (INIS)
Grossman, A.S.; Grant, K.E.
1994-04-01
We have evaluated the tropospheric radiative forcing of CH4 in the 0-3000 cm-1 wavenumber range and compared this with prior published calculations. The atmospheric test cases involved perturbed methane scenarios in both a McClatchey mid-latitude, summer, clear-sky approximation model atmosphere, as well as a globally and seasonally averaged model atmosphere containing a representative cloud distribution. The scenarios involved pure CH4 radiative forcing and CH4 plus a mixture of H2O, CO2, O3, and N2O. The IR radiative forcing was calculated using a correlated k-distribution transmission model. The major purposes of this paper are, first, to use the correlated k-distribution model to calculate the tropospheric radiative forcing for CH4, as the only radiatively active gas and in a mixture with H2O, CO2, O3, and N2O, for a McClatchey mid-latitude summer, clear-sky model atmosphere, and to compare the results to those obtained in the studies mentioned above. Second, we calculate the tropospheric methane forcing in a globally and annually averaged atmosphere with and without a representative cloud distribution, in order to validate the conjecture given in IPCC (1990) that the inclusion of clouds in the forcing calculations results in forcing values which are approximately 20 percent less than those obtained using clear-sky approximations
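As a back-of-envelope cross-check on such line-by-line calculations, the widely quoted IPCC-style simplified expression for CH4 forcing can be evaluated directly (the CH4-N2O band overlap term is omitted here; the coefficient and example concentrations are standard reference values, not numbers from this paper):

```python
from math import sqrt

def ch4_forcing_simple(m_ppb, m0_ppb):
    """IPCC-style simplified expression for CH4 radiative forcing:
    Delta F ~ 0.036 * (sqrt(M) - sqrt(M0)) in W/m^2, with the methane
    concentration M in ppb. The square root reflects the strong band
    saturation of CH4 absorption."""
    return 0.036 * (sqrt(m_ppb) - sqrt(m0_ppb))

# Doubling CH4 from a ~1700 ppb background
delta_f = ch4_forcing_simple(3400.0, 1700.0)
```

A detailed correlated k-distribution calculation with clouds, as performed in the paper, is expected to come in noticeably below such clear-sky estimates.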
Novel friction law for the static friction force based on local precursor slipping.
Katano, Yu; Nakano, Ken; Otsuki, Michio; Matsukawa, Hiroshi
2014-09-10
The sliding of a solid object on a solid substrate requires a shear force that is larger than the maximum static friction force. It is commonly believed that the maximum static friction force is proportional to the loading force and does not depend on the apparent contact area. The ratio of the maximum static friction force to the loading force is called the static friction coefficient µM, which is considered to be a constant. Here, we conduct experiments demonstrating that the static friction force of a slider on a substrate follows a novel friction law under certain conditions. The magnitude of µM decreases as the loading force increases or as the apparent contact area decreases. This behavior is caused by the slip of local precursors before the onset of bulk sliding and is consistent with recent theory. The results of this study will develop novel methods for static friction control.
Estimating security betas using prior information based on firm fundamentals
Cosemans, M.; Frehen, R.; Schotman, P.C.; Bauer, R.
2010-01-01
This paper proposes a novel approach for estimating time-varying betas of individual stocks that incorporates prior information based on fundamentals. We shrink the rolling window estimate of beta towards a firm-specific prior that is motivated by asset pricing theory. The prior captures structural
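The shrinkage step can be sketched as the posterior mean of a normal-normal model; the function and its inputs are illustrative, not the paper's exact estimator:

```python
def shrunk_beta(beta_rolling, se_rolling, beta_prior, se_prior):
    """Precision-weighted combination of a rolling-window beta
    estimate and a fundamentals-based prior (posterior mean of a
    normal-normal model): the noisier the window estimate, the
    harder it is pulled toward the prior."""
    w = se_prior ** 2 / (se_prior ** 2 + se_rolling ** 2)
    return w * beta_rolling + (1.0 - w) * beta_prior

# Equal uncertainty: halfway between the estimate and the prior.
b_even = shrunk_beta(1.4, 0.5, 1.0, 0.5)
# Very noisy window estimate: nearly all weight on the prior.
b_noisy = shrunk_beta(1.4, 10.0, 1.0, 0.5)
```

The design choice is that the prior center comes from firm fundamentals motivated by asset pricing theory, so the shrinkage target is firm-specific rather than the cross-sectional mean used in classical adjustments.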
Directory of Open Access Journals (Sweden)
Arzura Idris
2012-06-01
Full Text Available This paper analyzes the phenomenon of “forced migration” in Malaysia. It examines the nature of forced migration, the challenges faced by Malaysia, the policy responses and their impact on the country and upon the forced migrants. It considers forced migration as an event hosting multifaceted issues related and relevant to forced migrants and suggests that Malaysia has been preoccupied with the issue of forced migration movements. This is largely seen in various responses invoked from Malaysia due to “south-south forced migration movements.” These responses are, however, inadequate in terms of commitment to the international refugee regime. While Malaysia did respond to economic and migration challenges, the paper asserts that such efforts are futile if she ignores issues critical to forced migrants.
Labor Force Participation Rate
City and County of Durham, North Carolina — This thematic map presents the labor force participation rate of working-age people in the United States in 2010. The 2010 Labor Force Participation Rate shows the...
International Nuclear Information System (INIS)
Sauer, P.U.
2014-01-01
In this paper, the role of three-nucleon forces in ab initio calculations of nuclear systems is investigated. The difference between genuine and induced many-nucleon forces is emphasized. Induced forces arise in the process of solving the nuclear many-body problem as technical intermediaries toward calculationally converged results. Genuine forces make up the Hamiltonian. They represent the chosen underlying dynamics. The hierarchy of contributions arising from genuine two-, three- and many-nucleon forces is discussed. Signals for the need of the inclusion of genuine three-nucleon forces are studied in nuclear systems, technically best under control, especially in three-nucleon and four-nucleon systems. Genuine three-nucleon forces are important for details in the description of some observables. Their contributions to observables are small on the scale set by two-nucleon forces. (author)
RSOI: Force Deployment Bottleneck
National Research Council Canada - National Science Library
D'Amato, Mark
1998-01-01
.... This runs counter to the popular belief that strategic lift is the limiting constraint. The study begins by highlighting the genesis of the military's current force projection strategy and the resulting importance of rapid force deployments...
Kim, Hea-Jung
2014-01-01
This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but such a restriction is uncertain. An objective measure of the uncertainty regarding the interval constraint, accounted for by using the HSGM, is proposed for the Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed, and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.
Linking actions and objects: Context-specific learning of novel weight priors.
Trewartha, Kevin M; Flanagan, J Randall
2017-06-01
Distinct explicit and implicit memory processes support the weight predictions used when lifting objects and making perceptual judgments about weight, respectively. The first time an object is encountered, its weight is predicted on the basis of learned associations, or priors, linking size and material to weight. A fundamental question is whether the brain maintains a single, global representation of priors, or multiple representations that can be updated in a context-specific way. A second key question is whether the updating of priors, or the ability to scale lifting forces when repeatedly lifting unusually weighted objects, requires focused attention. To investigate these questions we compared the adaptability of weight predictions used when lifting objects and judging their weights in different groups of participants who experienced size-weight inverted objects passively (with the objects placed on the hands) or actively (where participants lifted the objects) under full or divided attention. To assess weight judgments we measured the size-weight illusion after every 20 trials of experience with the inverted objects, both passively and actively. The attenuation of the illusion that arises when lifting inverted objects was found to be context-specific, such that the attenuation was larger when the mode of interaction with the inverted objects matched the method of assessment of the illusion. Dividing attention during interaction with the inverted objects had no effect on attenuation of the illusion, but did slow the rate at which lifting forces were scaled to the weight of the inverted objects. These findings suggest that the brain stores multiple representations of priors that are context-specific, and that focused attention is important for scaling lifting forces, but not for updating the weight predictions used when judging object weight. Copyright © 2017 Elsevier B.V. All rights reserved.
Barba-Montoya, Jose; Dos Reis, Mario; Yang, Ziheng
2017-09-01
Fossil calibrations are the primary source of information for resolving the distances between molecular sequences into estimates of absolute times and absolute rates in molecular clock dating analysis. The quality of calibrations is thus expected to have a major impact on divergence time estimates even when a huge amount of molecular data is available. In Bayesian molecular clock dating, fossil calibration information is incorporated in the analysis through the prior on divergence times (the time prior). Here, we evaluate three strategies for converting fossil calibrations (in the form of minimum- and maximum-age bounds) into the prior on times, which differ according to whether they borrow information from the maximum age of ancestral nodes and the minimum age of descendant nodes to form constraints for any given node on the phylogeny. We study a simple example that is analytically tractable, and analyze two real datasets (one of 10 primate species and another of 48 seed plant species) using three Bayesian dating programs: MCMCTree, MrBayes and BEAST2. We examine how different calibration strategies, the birth-death process, and automatic truncation (to enforce the constraint that ancestral nodes are older than descendant nodes) interact to determine the time prior. In general, truncation has a great impact on calibrations, so that the effective priors on the calibration node ages after truncation can be very different from the user-specified calibration densities. The different strategies for generating the effective prior also had a considerable impact, leading to very different marginal effective priors. Arbitrary parameters used to implement minimum-bound calibrations were found to have a strong impact upon the prior and posterior of the divergence times. Our results highlight the importance of inspecting the joint time prior used by the dating program before any Bayesian dating analysis. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
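The effect of automatic truncation on the time prior can be illustrated with a toy rejection-sampling sketch. The uniform calibration densities and age bounds below are assumptions for illustration only, not values from the datasets in the abstract:

```python
import random

random.seed(1)

def effective_prior_samples(n=100_000):
    """Draw (ancestor, descendant) ages from independent uniform
    calibration densities and reject draws that violate the constraint
    that an ancestral node is older than its descendant (truncation)."""
    kept = []
    while len(kept) < n:
        t_anc = random.uniform(50.0, 100.0)   # ancestor calibration, Ma
        t_desc = random.uniform(40.0, 80.0)   # descendant calibration, Ma
        if t_anc > t_desc:                    # automatic truncation
            kept.append((t_anc, t_desc))
    return kept

samples = effective_prior_samples()
mean_desc = sum(t for _, t in samples) / len(samples)
# The specified descendant density has mean 60 Ma, but the effective
# (truncated) prior is shifted younger -- mean_desc comes out near 57 Ma.
```

Even in this toy setting the marginal effective prior differs from the user-specified density, which is the abstract's point about inspecting the joint time prior before analysis.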
Sitters, G.; Kamsma, D.; Thalhammer, G.; Ritsch-Marte, M.; Peterman, E.J.G.; Wuite, G.J.L.
2015-01-01
Force spectroscopy has become an indispensable tool to unravel the structural and mechanochemical properties of biomolecules. Here we extend the force spectroscopy toolbox with an acoustic manipulation device that can exert forces from subpiconewtons to hundreds of piconewtons on thousands of
International Nuclear Information System (INIS)
Mulcahy, T.M.
1982-05-01
A force transducer for measuring lift and drag coefficients for a circular cylinder in turbulent water flow is presented. In addition to describing the actual design and construction of the strain-gauged force-ring-based transducer, requirements for obtaining valid fluid-force test data are discussed, and pertinent flow-test experience is related.
Ridgely, Charles T.
2010-01-01
Many textbooks dealing with general relativity do not demonstrate the derivation of forces in enough detail. The analyses presented herein demonstrate straightforward methods for computing forces by way of general relativity. Covariant divergence of the stress-energy-momentum tensor is used to derive a general expression of the force experienced…
Accurate modeling and maximum power point detection of ...
African Journals Online (AJOL)
Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.
Maximum power per VA control of vector controlled interior ...
Indian Academy of Sciences (India)
Thakur Sumeet Singh
2018-04-11
Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum-utilization of the drive-system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...
Electron density distribution in Si and Ge using multipole, maximum ...
Indian Academy of Sciences (India)
Si and Ge has been studied using multipole, maximum entropy method (MEM) and ... and electron density distribution using the currently available versatile ..... data should be subjected to maximum possible utility for the characterization of.
The implications of force reflection for teleoperation in space
International Nuclear Information System (INIS)
Draper, J.V.; Herndon, J.N.; Moore, W.E.
1987-01-01
This paper reviews previous research on teleoperator force feedback and reports results of a testing program which assessed the impact of force reflection on teleoperator task performance. Force reflection is a type of force feedback in which the forces acting on the remote portion of the teleoperator are displayed to the operator by back-driving the master controller. The testing program compared three force reflection levels: 4 to 1 (four units of force on the slave produce one unit of force at the master controller), 1 to 1, and infinity to 1 (no force reflection). Time required to complete tasks, rate of occurrence of errors, the maximum force applied to task components, and variability in forces applied to components during completion of representative remote handling tasks were used as dependent variables. Operators exhibited lower error rates, lower peak forces, and more consistent application of forces using force reflection than they did without it. These data support the hypothesis that force reflection provides useful information for teleoperator users. The earlier literature and the results of the experiment are discussed in terms of their implications for space-based teleoperator systems. The discussion describes the impact of force reflection on task completion performance and task strategies, as suggested by the literature. It is important to understand the trade-offs involved in using telerobotic systems with and without force reflection. Force-reflecting systems are typically more expensive (in mass, volume, and price per unit), but they reduce mean time to repair and may be safer to use, compared to systems without force reflection.
International Nuclear Information System (INIS)
Lober, R.W.; Reeder, J.D.; Porter, L.K.
1987-01-01
Studies were conducted to quantify and compare the efficiencies of various evaporative systems used in evaporating 15N samples prior to mass spectrometric analysis. Two new forced-air systems were designed and compared with a conventional forced-air system and with an open-air dry bath technique for effectiveness in preventing atmospheric contamination of evaporating samples. The forced-air evaporative systems significantly reduced the time needed to evaporate samples as compared to the open-air dry bath technique; samples were evaporated to dryness in 2.5 h with the forced-air systems as compared to 8 to 10 h on the open-air dry bath. The effectiveness of a given forced-air system in preventing atmospheric contamination of evaporating samples was significantly affected by the flow rate of the air stream flowing over the samples. The average atmospheric contaminant N found in samples evaporated on the open-air dry bath was 0.3 μg N, indicating very low concentrations of atmospheric NH3 during this study. However, in previous studies the authors have experienced significant contamination of 15N samples evaporated on an open-air dry bath because the level of contaminant N in the laboratory atmosphere varied and could not be adequately controlled. Average cross-contamination levels of 0.28, 0.20, and 1.01 μg of N were measured between samples evaporated on the open-air dry bath, the newly-designed forced-air system, and the conventional forced-air system, respectively. The cross-contamination level is significantly higher on the conventional forced-air system than on the other two systems, and could significantly alter the atom % 15N of highly enriched, low-[N] evaporating samples
Evaluation of stream crossing methods prior to gas pipeline construction
International Nuclear Information System (INIS)
Murphy, M.H.; Rogers, J.S.; Ricca, A.
1995-01-01
Stream surveys are conducted along proposed gas pipeline routes prior to construction to assess potential impacts to stream ecosystems and to recommend preferred crossing methods. Recently, there has been a high level of scrutiny from the Public Service Commission (PSC) to conduct these stream crossings with minimal effects to the aquatic community. PSC's main concern is the effect of sediment on aquatic biota. Smaller, low flowing or intermittent streams are generally crossed using a wet crossing technique. This technique involves digging a trench for the pipeline while the stream is flowing. Sediment control measures are used to reduce sediment loads downstream. Wider, faster flowing, or protected streams are typically crossed with a dry crossing technique. A dry crossing involves placing a barrier upstream of the crossing and diverting the water around the crossing location. The pipeline trench is then dug in the dry area. O'Brien and Gere and NYSEG have jointly designed a modified wet crossing for crossing streams that exceed maximum flows for a dry crossing, and are too wide for a typical wet crossing. This method diverts water around the crossing using a pumping system, instead of constructing a dam. The trench is similar to a wet crossing, with sediment control devices in place upstream and downstream. If streams are crossed during low flow periods, the pumping system will be able to remove the majority of water flow and volume from the crossing area, thereby reducing ecological impacts. Evaluation of the effects of this crossing type on the stream biota is currently proposed and may proceed when construction begins.
Walter, G.
2015-01-01
In the Bayesian approach to statistical inference, possibly subjective knowledge on model parameters can be expressed by so-called prior distributions. A prior distribution is updated, via Bayes’ Rule, to the so-called posterior distribution, which combines prior information and information from
Wetzels, Sandra A. J.; Kester, Liesbeth; van Merrienboer, Jeroen J. G.; Broers, Nick J.
2011-01-01
Background: Prior knowledge activation facilitates learning. Note taking during prior knowledge activation (i.e., note taking directed at retrieving information from memory) might facilitate the activation process by enabling learners to build an external representation of their prior knowledge. However, taking notes might be less effective in…
40 CFR 141.13 - Maximum contaminant levels for turbidity.
2010-07-01
... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...
Maximum Power Training and Plyometrics for Cross-Country Running.
Ebben, William P.
2001-01-01
Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…
13 CFR 107.840 - Maximum term of Financing.
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...
7 CFR 3565.210 - Maximum interest rate.
2010-01-01
... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...
Characterizing graphs of maximum matching width at most 2
DEFF Research Database (Denmark)
Jeong, Jisu; Ok, Seongmin; Suh, Geewon
2017-01-01
The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...
DEFF Research Database (Denmark)
Bialynicki-Birula, I; Cirone, M.A.; Dahl, Jens Peder
2002-01-01
We present Heisenberg's equation of motion for the radial variable of a free non-relativistic particle in D dimensions. The resulting radial force consists of three contributions: (i) the quantum fictitious force, which is either attractive or repulsive depending on the number of dimensions, (ii) a singular quantum force located at the origin, and (iii) the centrifugal force associated with non-vanishing angular momentum. Moreover, we use Heisenberg's uncertainty relation to introduce a lower bound for the kinetic energy of an ensemble of neutral particles. This bound is quadratic in the number of atoms and can be traced back to the repulsive quantum fictitious potential. All three forces arise for a free particle: "Force without force".
Muscle Force-Velocity Relationships Observed in Four Different Functional Tests.
Zivkovic, Milena Z; Djuric, Sasa; Cuk, Ivan; Suzovic, Dejan; Jaric, Slobodan
2017-02-01
The aims of the present study were to investigate the shape and strength of the force-velocity relationships observed in different functional movement tests and explore the parameters depicting force, velocity and power producing capacities of the tested muscles. Twelve subjects were tested on maximum performance in vertical jumps, cycling, bench press throws, and bench pulls performed against different loads. Thereafter, both the averaged and maximum force and velocity variables recorded from individual trials were used for force-velocity relationship modeling. The observed individual force-velocity relationships were exceptionally strong (median correlation coefficients ranged from r = 0.930 to r = 0.995) and approximately linear independently of the test and variable type. Most of the relationship parameters observed from the averaged and maximum force and velocity variable types were strongly related in all tests (r = 0.789-0.991), except for those in vertical jumps (r = 0.485-0.930). However, the generalizability of the force-velocity relationship parameters depicting maximum force, velocity and power of the tested muscles across different tests was inconsistent and on average moderate. We concluded that the linear force-velocity relationship model based on either maximum or averaged force-velocity data could provide the outcomes depicting force, velocity and power generating capacity of the tested muscles, although such outcomes can only be partially generalized across different muscles.
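The linear force-velocity model used in this abstract can be sketched directly. The parameterization F(v) = F0·(1 − v/V0) with Pmax = F0·V0/4 is the standard linear model; the data points below are invented for illustration and are not the study's measurements:

```python
def fit_force_velocity(velocities, forces):
    """Fit the linear force-velocity model F(v) = F0 * (1 - v / V0) by
    ordinary least squares on F = a + b * v, then recover F0 (force
    intercept), V0 (velocity intercept), and Pmax = F0 * V0 / 4, the
    apex of the corresponding parabolic power-velocity curve."""
    n = len(velocities)
    mv = sum(velocities) / n
    mf = sum(forces) / n
    slope = sum((v - mv) * (f - mf) for v, f in zip(velocities, forces))
    slope /= sum((v - mv) ** 2 for v in velocities)
    f0 = mf - slope * mv          # intercept a = F0
    v0 = -f0 / slope              # velocity at which F(V0) = 0
    p_max = f0 * v0 / 4.0
    return f0, v0, p_max

# Illustrative load-velocity data lying on F = 1000 * (1 - v / 2.5):
f0, v0, p_max = fit_force_velocity([0.5, 1.0, 1.5, 2.0], [800, 600, 400, 200])
# f0 = 1000 N, v0 = 2.5 m/s, p_max = 625 W
```

The three returned parameters are exactly the force, velocity, and power producing capacities the abstract says the relationship modeling yields.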
Stotts, Steven A; Koch, Robert A
2017-08-01
In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.
Targeted search for continuous gravitational waves: Bayesian versus maximum-likelihood statistics
International Nuclear Information System (INIS)
Prix, Reinhard; Krishnan, Badri
2009-01-01
We investigate the Bayesian framework for detection of continuous gravitational waves (GWs) in the context of targeted searches, where the phase evolution of the GW signal is assumed to be known, while the four amplitude parameters are unknown. We show that the orthodox maximum-likelihood statistic (known as F-statistic) can be rediscovered as a Bayes factor with an unphysical prior in amplitude parameter space. We introduce an alternative detection statistic ('B-statistic') using the Bayes factor with a more natural amplitude prior, namely an isotropic probability distribution for the orientation of GW sources. Monte Carlo simulations of targeted searches show that the resulting Bayesian B-statistic is more powerful in the Neyman-Pearson sense (i.e., has a higher expected detection probability at equal false-alarm probability) than the frequentist F-statistic.
Disability correlates in Canadian Armed Forces Regular Force Veterans.
Thompson, James M; Pranger, Tina; Sweet, Jill; VanTil, Linda; McColl, Mary Ann; Besemann, Markus; Shubaly, Colleen; Pedlar, David
2015-01-01
This study was undertaken to inform disability mitigation for military veterans by identifying personal, environmental, and health factors associated with activity limitations. A sample of 3154 Canadian Armed Forces Regular Force Veterans who were released during 1998-2007 participated in the 2010 Survey on Transition to Civilian Life. Associations between personal and environmental factors, health conditions and activity limitations were explored using ordinal logistic regression. The prevalence of activity reduction in life domains was higher than the Canadian general population (49% versus 21%), as was needing assistance with at least one activity of daily living (17% versus 5%). Prior to adjusting for health conditions, disability odds were elevated for increased age, females, non-degree post-secondary graduation, low income, junior non-commissioned members, deployment, low social support, low mastery, high life stress, and weak sense of community belonging. Reduced odds were found for private/recruit ranks. Disability odds were highest for chronic pain (10.9), any mental health condition (2.7), and musculoskeletal conditions (2.6), and there was a synergistic additive effect of physical and mental health co-occurrence. Disability, measured as activity limitation, was associated with a range of personal and environmental factors and health conditions, indicating multifactorial and multidisciplinary approaches to disability mitigation.
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would...
Directory of Open Access Journals (Sweden)
Vasile Cojocaru
2016-12-01
Full Text Available Several methods can be used in the FEM studies to apply the loads on a plain bearing. The paper presents a comparative analysis of maximum stress obtained for three loading scenarios: resultant force applied on the shaft – bearing assembly, variable pressure with sinusoidal distribution applied on the bearing surface, variable pressure with parabolic distribution applied on the bearing surface.
Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems
International Nuclear Information System (INIS)
Helin, T; Burger, M
2015-01-01
A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. Current inverse problem literature often approaches the problem by considering suitable point estimators for the task. Typically the choice is made between the maximum a posteriori (MAP) or the conditional mean (CM) estimate. The benefits of either choice are not well understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme regarding how to connect the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in the Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation enables the study of many properties of the infinite-dimensional MAP estimate that were earlier impossible to study. In a recent work by the authors (Burger and Lucka 2014 Maximum a posteriori estimates in linear inverse problems with log-concave priors are proper Bayes estimators, preprint) the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical prior. (paper)
A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator
Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai
2017-05-01
To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), taking into account the advantages and disadvantages of existing maximum power point tracking methods, and according to the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation, and constant voltage tracking is put forward in this paper. It first searches for the maximum power point with the P&O algorithm and quadratic interpolation, and then forces the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented on the electrical bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with that using only the P&O algorithm and the quadratic interpolation method, respectively. The tracking time is only 1.4 s, which is reduced by half compared with that of the P&O algorithm and of the quadratic interpolation method. The experimental results demonstrate that the maximum power tracked with the proposed hybrid method is approximately equal to the real value, that the method handles the voltage fluctuation of the AETEG better than the P&O algorithm alone, and that it resolves the issue that the working point can barely be adjusted with constant voltage tracking alone when the operating conditions change.
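The P&O stage of the hybrid method is a simple hill climb on the power-voltage curve, and can be sketched as follows. The toy power curve, step size, and iteration count are assumptions for illustration, not the AETEG model from the paper, and the quadratic-interpolation and constant-voltage stages are omitted:

```python
def perturb_and_observe(power, v, step, iters=60):
    """P&O hill climb on a power-voltage curve: perturb the operating
    voltage and keep the direction that increased the measured power,
    reversing whenever the power drops."""
    p = power(v)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power(v_new)
        if p_new < p:              # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

# Toy power-voltage curve with its maximum power point at 12 V / 72 W
# (an assumed stand-in for the AETEG output characteristic):
power_curve = lambda v: 72.0 - 0.5 * (v - 12.0) ** 2
v_mpp, p_mpp = perturb_and_observe(power_curve, v=8.0, step=0.2)
# The operating point ends up oscillating within one step of 12 V.
```

The steady-state oscillation around the maximum power point is exactly the drawback that motivates finishing with constant voltage tracking in the hybrid scheme.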
Characteristics of Plantar Loads in Maximum Forward Lunge Tasks in Badminton.
Hu, Xiaoyue; Li, Jing Xian; Hong, Youlian; Wang, Lin
2015-01-01
Badminton players often perform powerful, long-distance lunges during competitive matches. The objective of this study is to compare the plantar loads of three one-step maximum forward lunges in badminton. Fifteen right-handed male badminton players participated in the study. Each participant performed five successful maximum lunges in each of three directions. For each direction, the participant wore three different shoe brands. Plantar loading, including peak pressure, maximum force, and contact area, was measured using an insole pressure measurement system. Two-way ANOVA with repeated measures was employed to determine the effects of the different lunge directions and different shoes, as well as the interaction of these two variables, on the measurements. The maximum force (MF) on the lateral midfoot was lower when performing left-forward lunges than when performing front-forward lunges (p = 0.006, 95% CI = -2.88 to -0.04 %BW). The MF and peak pressures (PP) on the great toe region were lower for the front-forward lunge than for the right-forward lunge (MF, p = 0.047, 95% CI = -3.62 to -0.02 %BW; PP, p = 0.048, 95% CI = -37.63 to -0.16 kPa) and the left-forward lunge (MF, p = 0.015, 95% CI = -4.39 to -0.38 %BW; PP, p = 0.008, 95% CI = -47.76 to -5.91 kPa). These findings indicate that compared with the front-forward lunge, left and right maximum forward lunges induce greater plantar loads on the great toe region of the dominant leg of badminton players. The differences in the plantar loads of the different lunge directions may be potential risks for injuries to the lower extremities of badminton players.
International Nuclear Information System (INIS)
Fiebig, H. Rudolf
2002-01-01
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss the practical issues of the approach.
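The MEM ingredients named here, a chi-squared likelihood, Skilling's entropic prior relative to a model m, and a simulated-annealing (cooling) search, can be sketched on a toy problem. The kernel, noise level, and cooling schedule below are illustrative assumptions, not the paper's lattice setup:

```python
import numpy as np

def entropy(rho, m):
    # Skilling's entropy relative to a model m: S = sum(rho - m - rho*ln(rho/m))
    return np.sum(rho - m - rho * np.log(rho / m))

def objective(rho, m, data, K, sigma, alpha):
    # MaxEnt posterior functional to minimize: chi^2/2 - alpha*S
    chi2 = np.sum(((K @ rho - data) / sigma) ** 2)
    return 0.5 * chi2 - alpha * entropy(rho, m)

def anneal(m, data, K, sigma, alpha, steps=20000, seed=0):
    # Metropolis random walk with a linear cooling schedule
    rng = np.random.default_rng(seed)
    rho = m.copy()
    f = objective(rho, m, data, K, sigma, alpha)
    for i in range(steps):
        T = max(1e-3, 1.0 - i / steps)
        trial = rho.copy()
        j = rng.integers(len(rho))
        trial[j] *= np.exp(0.1 * rng.standard_normal())  # positivity-preserving move
        ft = objective(trial, m, data, K, sigma, alpha)
        if ft < f or rng.random() < np.exp((f - ft) / T):
            rho, f = trial, ft
    return rho, f

# toy "spectral" reconstruction: 8 pixels, identity kernel, noisy synthetic data
n = 8
true = np.full(n, 1.0); true[3] = 3.0
K, sigma = np.eye(n), 0.1
data = K @ true + sigma * np.random.default_rng(1).standard_normal(n)
m = np.full(n, 1.0)   # flat model as the prior
rho, f_final = anneal(m, data, K, sigma, alpha=1.0)
```

In the absence of data the functional is maximized at ρ = m; here the data pull one pixel well above the model, and only that departure costs entropy.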
The maximum entropy production and maximum Shannon information entropy in enzyme kinetics
Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš
2018-04-01
We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. The optimal enzyme rate constants computed in this way for a steady state also yield the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.
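The link asserted between MEPP-optimal rate constants and the most uniform distribution of enzyme states can be illustrated in miniature: maximizing Shannon entropy under a normalization constraint returns the uniform distribution. This toy check (the state count is arbitrary) is not the paper's full optimization over rate constants:

```python
import numpy as np
from scipy.optimize import minimize

def neg_shannon(p):
    """Negative Shannon entropy, to be minimized."""
    p = np.clip(p, 1e-12, None)   # guard the log at the bound p = 0
    return float(np.sum(p * np.log(p)))

n_states = 4  # hypothetical number of enzyme states
res = minimize(
    neg_shannon,
    x0=np.array([0.7, 0.1, 0.1, 0.1]),
    bounds=[(0.0, 1.0)] * n_states,
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
)
p_opt = res.x  # the maximizer is the uniform distribution over states
```

The full MEPP calculation maximizes the entropy production density over rate constants under mass and Gibbs free energy conservation; this sketch only verifies the entropy end of that correspondence.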
Tandberg-Hanssen, E.; Cheng, C. C.; Woodgate, B. E.; Brandt, J. C.; Chapman, R. D.; Athay, R. G.; Beckers, J. M.; Bruner, E. C.; Gurman, J. B.; Hyder, C. L.
1981-01-01
The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission spacecraft is described. It is pointed out that the instrument, which operates in the wavelength range 1150-3600 A, has a spatial resolution of 2-3 arcsec and a spectral resolution of 0.02 A FWHM in second order. A Gregorian telescope, with a focal length of 1.8 m, feeds a 1 m Ebert-Fastie spectrometer. A polarimeter comprising rotating MgF2 waveplates can be inserted behind the spectrometer entrance slit; it permits all four Stokes parameters to be determined. Among the observing modes are rasters, spectral scans, velocity measurements, and polarimetry. Examples of initial observations made since launch are presented.
Use of non-conjugate prior distributions in compound failure models. Final technical report
International Nuclear Information System (INIS)
Shultis, J.K.; Johnson, D.E.; Milliken, G.A.; Eckhoff, N.D.
1981-12-01
Several theoretical and computational techniques are presented for compound failure models in which the failure rate or failure probability for a class of components is considered to be a random variable. Both the failure-on-demand and failure-rate situations are considered. Ten different prior families are presented for describing the variation or uncertainty of the failure parameter. Methods considered for estimating values for the prior parameters from a given set of failure data are (1) matching data moments to those of the prior distribution, (2) matching data moments to those of the compound marginal distribution, and (3) the marginal maximum likelihood method. Numerical methods for computing the parameter estimators for all ten prior families are presented, as well as methods for obtaining estimates of the variances and covariance of the parameter estimators. It is shown that various confidence, probability, and tolerance intervals can be evaluated. Finally, to test the resulting failure models against the given failure data, generalized chi-square and Kolmogorov-Smirnov goodness-of-fit tests are proposed together with a test to eliminate outliers from the failure data. Computer codes based on the results presented here have been prepared and are presented in a companion report.
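Method (1), matching data moments to those of the prior distribution, can be sketched for one candidate prior family; the gamma choice and the sample rates below are illustrative assumptions (the report itself treats ten families):

```python
import numpy as np

def gamma_moment_match(rates):
    """Match the sample mean/variance of observed failure rates to a
    gamma(alpha, beta) prior with rate parameter beta:
    mean = alpha/beta, variance = alpha/beta**2."""
    r = np.asarray(rates, float)
    m, v = r.mean(), r.var(ddof=1)
    beta = m / v
    alpha = m * beta
    return alpha, beta

# hypothetical failure rates observed across a class of components
alpha, beta = gamma_moment_match([0.2, 0.4, 0.6, 0.8])
```

With these two moments fixed, the fitted prior can then be checked against the data by the goodness-of-fit tests the report proposes.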
Forward Deployed Naval Forces in the Republic of the Philippines
2016-06-10
French prior to World War II. The United States has also stationed naval forces in areas that were previously colonized, such as the Philippines after the... Forward Deployed Naval Forces is not a new concept or strategy. In fact, it was utilized by other nations such as the British and French prior to World... the west, to the Cook Islands in the east, and from Russia in the north, to New Zealand in the south. The region covers an area from Mongolia in the
Traction forces exerted by epithelial cell sheets
International Nuclear Information System (INIS)
Saez, A; Anon, E; Ghibaudo, M; Di Meglio, J-M; Hersen, P; Ladoux, B; Du Roure, O; Silberzan, P; Buguin, A
2010-01-01
Whereas the adhesion and migration of individual cells have been well described in terms of physical forces, the mechanics of multicellular assemblies is still poorly understood. Here, we study the behavior of epithelial cells cultured on microfabricated substrates designed to measure cell-to-substrate interactions. These substrates are covered by a dense array of flexible micropillars whose deflection enables us to measure traction forces. They are obtained by lithography and soft replica molding. The pillar deflection is measured by video microscopy and images are analyzed with home-made multiple particle tracking software. First, we have characterized the temporal and spatial distributions of traction forces of cellular assemblies of various sizes. The mechanical force balance within epithelial cell sheets shows that the forces exerted by neighboring cells strongly depend on their relative position in the monolayer: the largest deformations are always localized at the edge of the islands of cells in the active areas of cell protrusions. The average traction stress rapidly decreases from its maximum value at the edge but remains much larger than the inherent noise due to the force resolution of our pillar tracking software, indicating an important mechanical activity inside epithelial cell islands. Moreover, these traction forces vary linearly with the rigidity of the substrate over about two decades, suggesting that cells exert a given amount of deformation rather than a force. Finally, we engineer micropatterned substrates supporting pillars with anisotropic stiffness. On such substrates cellular growth is aligned with respect to the stiffest direction in correlation with the magnitude of the applied traction forces.
DEFF Research Database (Denmark)
Barendregt, Wolmet; Börjesson, Peter; Eriksson, Eva
2017-01-01
In this paper, we present the forced collaborative interaction game StringForce. StringForce is developed for a special education context to support training of collaboration skills, using readily available technologies and avoiding the creation of a "mobile bubble". In order to play StringForce, two or four physically collocated tablets are required. These tablets are connected to form one large shared game area. The game can only be played by collaborating. StringForce extends previous work, both technologically and regarding social-emotional training. We believe StringForce to be an interesting demo for the IDC community, as it intertwines several relevant research fields, such as mobile interaction and collaborative gaming in the special education context.
International Nuclear Information System (INIS)
Cirone, M.A.; Schleich, W.P.; Straub, F.; Rzazewski, K.; Wheeler, J.A.
2002-01-01
In a two-dimensional world, a free quantum particle of vanishing angular momentum experiences an attractive force. This force originates from a modification of the classical centrifugal force due to the wave nature of the particle. For positive energies the quantum anticentrifugal force manifests itself in a bunching of the nodes of the energy wave functions towards the origin. For negative energies this force is sufficient to create a bound state in a two-dimensional δ-function potential. In a counterintuitive way, the attractive force pushes the particle away from the location of the δ-function potential. As a consequence, the particle is localized in a band-shaped domain around the origin.
New results on the mid-latitude midnight temperature maximum
Mesquita, Rafael L. A.; Meriwether, John W.; Makela, Jonathan J.; Fisher, Daniel J.; Harding, Brian J.; Sanders, Samuel C.; Tesema, Fasil; Ridley, Aaron J.
2018-04-01
Fabry-Perot interferometer (FPI) measurements of thermospheric temperatures and winds show the detection and successful determination of the latitudinal distribution of the midnight temperature maximum (MTM) in the continental mid-eastern United States. These results were obtained through the operation of the five FPI observatories in the North American Thermosphere Ionosphere Observing Network (NATION) located at the Pisgah Astronomic Research Institute (PAR) (35.2° N, 82.8° W), Virginia Tech (VTI) (37.2° N, 80.4° W), Eastern Kentucky University (EKU) (37.8° N, 84.3° W), Urbana-Champaign (UAO) (40.2° N, 88.2° W), and Ann Arbor (ANN) (42.3° N, 83.8° W). A new approach for analyzing the MTM phenomenon is developed, which features the combination of a method of harmonic thermal background removal followed by a 2-D inversion algorithm to generate sequential 2-D temperature residual maps at 30 min intervals. The simultaneous study of the temperature data from these FPI stations represents a novel analysis of the MTM and its large-scale latitudinal and longitudinal structure. The major finding in examining these maps is the frequent detection of a secondary MTM peak occurring during the early evening hours, nearly 4.5 h prior to the timing of the primary MTM peak that generally appears after midnight. The analysis of these observations shows a strong night-to-night variability for this double-peaked MTM structure. A statistical study of the behavior of the MTM events was carried out to determine the extent of this variability with regard to the seasonal and latitudinal dependence. The results show the presence of the MTM peak(s) in 106 out of the 472 determinable nights (when the MTM presence, or lack thereof, can be determined with certainty in the data set) selected for analysis (22 %) out of the total of 846 nights available. The MTM feature is seen to appear slightly more often during the summer (27 %), followed by fall (22 %), winter (20 %), and spring
The Role of Prior Knowledge in International Franchise Partner Recruitment
Wang, Catherine; Altinay, Levent
2006-01-01
Purpose: To investigate the role of prior knowledge in the international franchise partner recruitment process and to evaluate how cultural distance influences the role of prior knowledge in this process. Design/methodology/approach: A single embedded case study of an international hotel firm was the focus of the enquiry. Interviews, observations and document analysis were used as the data collection techniques. Findings: Findings reveal that prior knowledge of the franchisor enab...
Spectrally Consistent Satellite Image Fusion with Improved Image Priors
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Aanæs, Henrik; Jensen, Thomas B.S.
2006-01-01
Here an improvement to our previous framework for satellite image fusion is presented: a framework purely based on the sensor physics and on prior assumptions on the fused image. The contributions of this paper are twofold. Firstly, a method for ensuring 100% spectral consistency is proposed, even when more sophisticated image priors are applied. Secondly, a better image prior is introduced, via data-dependent image smoothing.
Acquisition of multiple prior distributions in tactile temporal order judgment
Directory of Open Access Journals (Sweden)
Yasuhito Nagai
2012-08-01
Full Text Available The Bayesian estimation theory proposes that the brain acquires the prior distribution of a task and integrates it with sensory signals to minimize the effect of sensory noise. Psychophysical studies have demonstrated that our brain actually implements Bayesian estimation in a variety of sensory-motor tasks. However, these studies only imposed one prior distribution on participants within a task period. In this study, we investigated the conditions that enable the acquisition of multiple prior distributions in temporal order judgment (TOJ) of two tactile stimuli across the hands. In Experiment 1, stimulation intervals were randomly selected from one of two prior distributions (biased to right hand earlier and biased to left hand earlier) in association with color cues (green and red, respectively). Although the acquisition of the two priors was not enabled by the color cues alone, it was significant when participants shifted their gaze (above or below) in response to the color cues. However, the acquisition of multiple priors was not significant when participants moved their mouths (opened or closed). In Experiment 2, the spatial cues (above and below) were used to identify which eye position or retinal cue position was crucial for the eye-movement-dependent acquisition of multiple priors in Experiment 1. The acquisition of the two priors was significant when participants moved their gaze to the cues (i.e., the cue positions on the retina were constant across the priors), as well as when participants did not shift their gazes (i.e., the cue positions on the retina changed according to the priors). Thus, both eye and retinal cue positions were effective in acquiring multiple priors. Based on previous neurophysiological reports, we discuss possible neural correlates that contribute to the acquisition of multiple priors.
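The Bayesian integration invoked here can be sketched for the simplest conjugate case: a normal prior over stimulation intervals combined with a normal sensory likelihood via precision weighting. The millisecond values are hypothetical:

```python
def fuse(prior_mu, prior_sd, obs, obs_sd):
    """Precision-weighted fusion of a normal prior with a normal sensory likelihood."""
    w_p, w_l = 1.0 / prior_sd ** 2, 1.0 / obs_sd ** 2
    mu = (w_p * prior_mu + w_l * obs) / (w_p + w_l)   # posterior mean
    sd = (w_p + w_l) ** -0.5                          # posterior sd (precisions add)
    return mu, sd

# hypothetical prior "right hand 20 ms earlier" (sd 10 ms) fused with a 0 ms reading (sd 10 ms)
mu_post, sd_post = fuse(20.0, 10.0, 0.0, 10.0)
```

With equal precisions the posterior lands halfway between prior and observation, which is the bias-toward-the-prior signature the TOJ experiments look for.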
Relativistic Linear Restoring Force
Clark, D.; Franklin, J.; Mann, N.
2012-01-01
We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions: dp/dt or dp/dτ. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…
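The dp/dt formulation can be checked numerically; a semi-implicit Euler sketch with arbitrary unit parameters (not from the paper) shows the non-relativistic limit recovering the ordinary Hooke oscillator, while the relativistic run lags behind it:

```python
import numpy as np

def simulate(m=1.0, k=1.0, c=1.0, x0=1.0, dt=1e-3, t_max=2 * np.pi):
    """Integrate dp/dt = -k x with p = gamma m v, via semi-implicit Euler."""
    x, p = x0, 0.0
    for _ in range(int(round(t_max / dt))):
        p += -k * x * dt                                   # kick: Hooke's law on momentum
        v = p / (m * np.sqrt(1.0 + (p / (m * c)) ** 2))    # invert p = gamma m v
        x += v * dt                                        # drift
    return x

x_newtonian = simulate(c=1e6)     # c >> v: x returns to x0 after one period 2*pi
x_relativistic = simulate(c=1.0)  # v comparable to c: the period lengthens
```

Because the relativistic period exceeds 2π√(m/k), the relativistic run has not yet returned to x0 at t = 2π.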
Temperature of maximum density and excess thermodynamics of aqueous mixtures of methanol
Energy Technology Data Exchange (ETDEWEB)
González-Salgado, D.; Zemánková, K. [Departamento de Física Aplicada, Universidad de Vigo, Campus del Agua, Edificio Manuel Martínez-Risco, E-32004 Ourense (Spain); Noya, E. G.; Lomba, E. [Instituto de Química Física Rocasolano, CSIC, Calle Serrano 119, E-28006 Madrid (Spain)
2016-05-14
In this work, we present a study of representative excess thermodynamic properties of aqueous mixtures of methanol over the complete concentration range, based on extensive computer simulation calculations. In addition to testing various existing united atom model potentials, we have developed a new force-field which accurately reproduces the excess thermodynamics of this system. Moreover, we have paid particular attention to the behavior of the temperature of maximum density (TMD) in dilute methanol mixtures. The presence of a temperature of maximum density is one of the essential anomalies exhibited by water. This anomalous behavior is modified in a non-monotonous fashion by the presence of fully miscible solutes that partly disrupt the hydrogen bond network of water, such as methanol (and other short chain alcohols). In order to obtain a better insight into the phenomenology of the changes in the TMD of water induced by small amounts of methanol, we have performed a new series of experimental measurements and computer simulations using various force fields. We observe that none of the force-fields tested captures the non-monotonous concentration dependence of the TMD for highly diluted methanol solutions.
Training shortest-path tractography: Automatic learning of spatial priors
DEFF Research Database (Denmark)
Kasenburg, Niklas; Liptrot, Matthew George; Reislev, Nina Linde
2016-01-01
Tractography is the standard tool for automatic delineation of white matter tracts from diffusion weighted images. However, the output of tractography often requires post-processing to remove false positives and ensure a robust delineation of the studied tract, and this demands expert prior knowledge. Here we demonstrate how such prior knowledge, or indeed any prior spatial information, can be automatically incorporated into a shortest-path tractography approach to produce more robust results. We describe how such a prior can be automatically generated (learned) from a population, and we...
Crowdsourcing prior information to improve study design and data analysis.
Directory of Open Access Journals (Sweden)
Jeffrey S Chrabaszcz
Full Text Available Though Bayesian methods are being used more frequently, many still struggle with the best method for setting priors with novel measures or task environments. We propose a method for setting priors by eliciting continuous probability distributions from naive participants. This allows us to include any relevant information participants have for a given effect. Even when prior means are near-zero, this method provides a principled way to estimate dispersion and produce shrinkage, reducing the occurrence of overestimated effect sizes. We demonstrate this method with a number of published studies and compare the effect of different prior estimation and aggregation methods.
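One common way to aggregate elicited distributions, assuming here that each participant's elicited distribution is summarized as a normal, is a linear opinion pool (an equal-weight mixture); the abstract compares several aggregation methods, and this sketch shows just one:

```python
import numpy as np

def pool(means, sds, weights=None):
    """Linear opinion pool: mean and sd of a (weighted) mixture of normals."""
    means = np.asarray(means, float)
    sds = np.asarray(sds, float)
    w = np.full(len(means), 1.0 / len(means)) if weights is None else np.asarray(weights, float)
    mu = w @ means
    # mixture variance = E[sd^2 + mean^2] - (E[mean])^2
    var = w @ (sds ** 2 + means ** 2) - mu ** 2
    return mu, np.sqrt(var)

# hypothetical elicited summaries (mean, sd) from three participants
pooled_mu, pooled_sd = pool([-0.2, 0.1, 0.4], [0.5, 0.8, 0.6])
```

Disagreement between participants widens the pooled prior (the mixture variance includes the spread of the means), which is what produces the shrinkage the abstract describes.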
Prior knowledge in recalling arguments in bioethical dilemmas
Directory of Open Access Journals (Sweden)
Hiemke Katharina Schmidt
2015-09-01
Full Text Available Prior knowledge is known to facilitate learning new information. Normally in studies confirming this outcome the relationship between prior knowledge and the topic to be learned is obvious: the information to be acquired is part of the domain or topic to which the prior knowledge belongs. This raises the question as to whether prior knowledge of various domains facilitates recalling information. In this study 79 eleventh-grade students completed a questionnaire on their prior knowledge of seven different domains related to the bioethical dilemma of prenatal diagnostics. The students read a text containing arguments for and arguments against prenatal diagnostics. After one week and again 12 weeks later they were asked to write down all the arguments they remembered. Prior knowledge helped them recall the arguments one week (r = .350) and 12 weeks (r = .316) later. Prior knowledge of three of the seven domains significantly helped them recall the arguments one week later (correlations between r = .194 and r = .394). Partial correlations with interest as a control item revealed that interest did not explain the relationship between prior knowledge and recall. Prior knowledge of different domains jointly supports the recall of arguments related to bioethical topics.
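The partial-correlation control analysis mentioned above can be sketched two equivalent ways, by the closed-form formula and by correlating regression residuals; the simulated scores are hypothetical stand-ins for prior knowledge (x), recall (y), and interest (z):

```python
import numpy as np

def pearson(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def partial_corr(x, y, z):
    # r_xy.z = (r_xy - r_xz * r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2))
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

def partial_corr_residuals(x, y, z):
    # equivalent route: correlate the residuals after regressing z out of x and y
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return pearson(rx, ry)

rng = np.random.default_rng(0)
z = rng.standard_normal(79)                 # "interest" (control variable)
x = 0.5 * z + rng.standard_normal(79)       # "prior knowledge"
y = 0.4 * x + 0.3 * z + rng.standard_normal(79)  # "recall"
r_partial = partial_corr(x, y, z)
```

A partial correlation close to the raw correlation, as in the study, indicates the control variable does not explain the relationship.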
International Nuclear Information System (INIS)
Evans, M.S.; Stoughton, R.S.; Kazerooni, H.
1994-08-01
This paper presents a theoretical and experimental investigation of a new kind of force sensor which detects forces by measuring an induced pressure change in a material of large Poisson's ratio. In this investigation we develop mathematical expressions for the sensor's sensitivity and bandwidth, and show that its sensitivity can be much larger and its bandwidth is usually smaller than those of existing strain-gage-type sensors. This force sensor is well-suited for measuring large but slowly varying forces. It can be installed in a space smaller than that required by existing sensors.
International Nuclear Information System (INIS)
Santosh, Mogurampelly; Maiti, Prabal K
2009-01-01
When pulled along the axis, double-strand DNA undergoes a large conformational change and elongates by roughly twice its initial contour length at a pulling force of about 70 pN. The transition to this highly overstretched form of DNA is very cooperative. Applying a force perpendicular to the DNA axis (unzipping), double-strand DNA can also be separated into two single-stranded DNA, this being a fundamental process in DNA replication. We study the DNA overstretching and unzipping transition using fully atomistic molecular dynamics (MD) simulations and argue that the conformational changes of double-strand DNA associated with either of the above mentioned processes can be viewed as force induced DNA melting. As the force at one end of the DNA is increased the DNA starts melting abruptly/smoothly above a critical force depending on the pulling direction. The critical force f_m, at which the DNA melts completely, decreases as the temperature of the system is increased. The melting force in the case of unzipping is smaller compared to the melting force when the DNA is pulled along the helical axis. In the case of melting through unzipping, the double-strand separation has jumps which correspond to the different energy minima arising due to the sequence of different base pairs. The fraction of Watson-Crick base pair hydrogen bond breaking as a function of force does not show smooth and continuous behavior and consists of plateaus followed by sharp jumps.
Intermolecular and surface forces
Israelachvili, Jacob N
2011-01-01
This reference describes the role of various intermolecular and interparticle forces in determining the properties of simple systems such as gases, liquids and solids, with a special focus on more complex colloidal, polymeric and biological systems. The book provides a thorough foundation in theories and concepts of intermolecular forces, allowing researchers and students to recognize which forces are important in any particular system, as well as how to control these forces. This third edition is expanded into three sections and contains five new chapters over the previous edition.
RSOI: Force Deployment Bottleneck
National Research Council Canada - National Science Library
D'Amato, Mark
1998-01-01
This study uses The Theory Of Constraints (TOC) management methodology and recent military missions to show that RSOI operations are generally the limiting constraint to force deployment operations...
International Nuclear Information System (INIS)
Mamuris, Z.; Dumont, J.; Dutrillaux, B.; Aurias, A.
1989-01-01
A cytogenetic study of 14 patients with secondary acute nonlymphocytic leukemia (S-ANLL) with prior treatment for breast cancer is reported. The chromosomes recurrently involved in numerical or structural anomalies are chromosomes 7, 5, 17, and 11, in decreasing order of frequency. The distribution of the anomalies detected in this sample of patients is similar to that observed in published cases with prior breast or other solid tumors, though anomalies of chromosome 11 were not pointed out, but it significantly differs from that of the S-ANLL with prior hematologic malignancies. This difference is principally due to a higher involvement of chromosome 7 in patients with prior hematologic malignancies and of chromosomes 11 and 17 in patients with prior solid tumors. A genetic determinism involving abnormal recessive alleles located on chromosomes 5, 7, 11, and 17 uncovered by deletions of the normal homologs may be a cause of S-ANLL. The difference between patients with prior hematologic malignancies or solid tumors may be explained by different constitutional mutations of recessive genes in the two groups of patients.
Simulation of a force on force exercise
International Nuclear Information System (INIS)
Terhune, R.; Van Slyke, D.; Sheppard, T.; Brandrup, M.
1988-01-01
The Security Exercise Evaluation System (SEES) is under development for use in planning Force on Force exercises and as an aid in post-exercise evaluation. This study is part of the development cycle where the simulation results are compared to field data to provide guidance for further development of the model. SEES is an event-driven stochastic computer program simulating individual movement and combat within an urban terrain environment. The simulator models the physics of movement, line of sight, and weapon effects. It relies on the controllers to provide all knowledge of security tactics, which are entered by the controllers during the simulation using interactive color graphic workstations. They are able to develop, modify and implement plans promptly as the simulator maintains real time. This paper reports on how SEES will be used to develop an intrusion plan, test the security response tactics and develop observer logistics. A Force on Force field exercise will then be executed to follow the plan with observations recorded. An analysis is made by first comparing the plan and events of the simulation with the field exercise, modifying the simulation plan to match the actual field exercise, and then running the simulation to develop a distribution of possible outcomes
Equilibrium capillary forces with atomic force microscopy
Sprakel, J.H.B.; Besseling, N.A.M.; Leermakers, F.A.M.; Cohen Stuart, M.A.
2007-01-01
We present measurements of equilibrium forces resulting from capillary condensation. The results give access to the ultralow interfacial tensions between the capillary bridge and the coexisting bulk phase. We demonstrate this with solutions of associative polymers and an aqueous mixture of gelatin
Forced flow cooling of ISABELLE dipole magnets
International Nuclear Information System (INIS)
Bamberger, J.A.; Aggus, J.; Brown, D.P.; Kassner, D.A.; Sondericker, J.H.; Strobridge, T.R.
1976-01-01
The superconducting magnets for ISABELLE will use a forced flow supercritical helium cooling system. In order to evaluate this cooling scheme, two individual dipole magnets were first tested in conventional dewars using pool boiling helium. These magnets were then modified for forced flow cooling and retested with the identical magnet coils. The first evaluation test used a 1 m-long ISA model dipole magnet whose pool boiling performance had been established. The same magnet was then retested with forced flow cooling, energizing it at various operating temperatures until quench occurred. The magnet performance with forced flow cooling was consistent with data from the previous pool boiling tests. The next step in the program was a full-scale ISABELLE dipole ring magnet, 4.25 m long, whose performance was first evaluated with pool boiling. For the forced flow test the magnet was shrunk-fit into an unsplit laminated core encased in a stainless steel cylinder. The high pressure gas is cooled below 4 K by a helium bath which is pumped below atmospheric pressure with an ejector nozzle. The performance of the full-scale dipole magnet in the new configuration with forced flow cooling showed a 10 percent increase in the attainable maximum current as compared to the pool boiling data.
Acoustic radiation force control: Pulsating spherical carriers.
Rajabi, Majid; Mojahed, Alireza
2018-02-01
The interaction between harmonic plane progressive acoustic beams and a pulsating spherical radiator is studied. The acoustic radiation force function exerted on the spherical body is derived as a function of the incident wave pressure and the monopole vibration characteristics (i.e., amplitude and phase) of the body. Two distinct strategies are presented in order to alter the radiation force effects (i.e., pushing and pulling states) by changing its magnitude and direction. In the first strategy, an incident wave field with known amplitude and phase is considered. It is analytically shown that the zero-radiation-force state (i.e., radiation force function cancellation) is achievable for specific pulsation characteristics belonging to a frequency-dependent straight-line equation in the plane of real-imaginary components (i.e., Nyquist plane) of prescribed surface displacement. It is illustrated that these characteristic lines divide the mentioned displacement plane into two regions of positive (i.e., pushing) and negative (i.e., pulling) radiation forces. In the second strategy, the zero, negative and positive states of radiation force are obtained through adjusting the incident wave field characteristics (i.e., amplitude and phase) which insonify the radiator with prescribed pulsation characteristics. It is proved that the zero radiation force state occurs for incident wave pressure characteristics belonging to specific frequency-dependent circles in the Nyquist plane of incident wave pressure. These characteristic circles divide the Nyquist plane into two distinct regions corresponding to positive (out of the circles) and negative (in the circles) values of the radiation force function. It is analytically shown that the maximum amplitude of negative radiation force is exactly equal to the amplitude of the (positive) radiation force exerted upon the sphere in the passive state, by the same incident field. The developed concepts are much more deepened by considering the required
International Nuclear Information System (INIS)
Hutchinson, Thomas H.; Boegi, Christian; Winter, Matthew J.; Owens, J. Willie
2009-01-01
There is increasing recognition of the need to identify specific sublethal effects of chemicals, such as reproductive toxicity, and specific modes of actions of the chemicals, such as interference with the endocrine system. To achieve these aims requires criteria which provide a basis to interpret study findings so as to separate these specific toxicities and modes of action from not only acute lethality per se but also from severe inanition and malaise that non-specifically compromise reproductive capacity and the response of endocrine endpoints. Mammalian toxicologists have recognized that very high dose levels are sometimes required to elicit both specific adverse effects and present the potential of non-specific 'systemic toxicity'. Mammalian toxicologists have developed the concept of a maximum tolerated dose (MTD) beyond which a specific toxicity or action cannot be attributed to a test substance due to the compromised state of the organism. Ecotoxicologists are now confronted by a similar challenge and must develop an analogous concept of a MTD and the respective criteria. As examples of this conundrum, we note recent developments in efforts to validate protocols for fish reproductive toxicity and endocrine screens (e.g. some chemicals originally selected as 'negatives' elicited decreases in fecundity or changes in endpoints intended to be biomarkers for endocrine modes of action). Unless analogous criteria can be developed, the potentially confounding effects of systemic toxicity may then undermine the reliable assessment of specific reproductive effects or biomarkers such as vitellogenin or spiggin. The same issue confronts other areas of aquatic toxicology (e.g., genotoxicity) and the use of aquatic animals for preclinical assessments of drugs (e.g., use of zebrafish for drug safety assessment). We propose that there are benefits to adopting the concept of an MTD for toxicology and pharmacology studies using fish and other aquatic organisms and the
Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application
International Nuclear Information System (INIS)
Jiya, J. D.; Tahirou, G.
2002-01-01
This paper presents a microprocessor controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks to improve the obtained power by varying the duty cycle.
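The seek-note-adjust loop described here is essentially the perturb-and-observe algorithm. A minimal sketch against a hypothetical power-vs-duty-cycle curve (a real tracker would measure module current and voltage instead of calling pv_power):

```python
def pv_power(duty):
    # hypothetical module curve: power (arbitrary units) peaks at duty = 0.6
    return max(0.0, 1.0 - (duty - 0.6) ** 2)

def perturb_and_observe(duty=0.3, step=0.01, iters=200):
    """Classic P&O: nudge the duty cycle, keep the direction while power rises,
    reverse it when power falls; the tracker ends up oscillating about the MPP."""
    p_prev = pv_power(duty)
    direction = 1
    for _ in range(iters):
        duty = min(1.0, max(0.0, duty + direction * step))
        p = pv_power(duty)
        if p < p_prev:
            direction = -direction   # power fell: reverse the perturbation
        p_prev = p
    return duty

duty_at_mpp = perturb_and_observe()  # settles within one step of the peak
```

The steady-state oscillation of about one step around the maximum is the expected behavior of P&O trackers; smaller steps trade tracking speed for a tighter oscillation.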
Constant force extensional rheometry of polymer solutions
DEFF Research Database (Denmark)
Szabo, Peter; McKinley, Gareth H.; Clasen, Christian
2012-01-01
We revisit the rapid stretching of a liquid filament under the action of a constant imposed tensile force, a problem which was first considered by Matta and Tytus [J. Non-Newton. Fluid Mech. 35 (1990) 215–229]. A liquid bridge formed from a viscous Newtonian fluid or from a dilute polymer solution...... is first established between two cylindrical disks. The upper disk is held fixed and may be connected to a force transducer while the lower cylinder falls due to gravity. By varying the mass of the falling cylinder and measuring its resulting acceleration, the viscoelastic nature of the elongating fluid...... filament can be probed. In particular, we show that with this constant force pull (CFP) technique it is possible to readily impose very large material strains and strain rates so that the maximum extensibility of the polymer molecules may be quantified. This unique characteristic of the experiment...
Air Force construction automation/robotics
Nease, AL; Dusseault, Christopher
1994-01-01
The Air Force has several unique requirements that are being met through the development of construction robotic technology. The missions associated with these requirements place construction/repair equipment operators in potentially harmful situations. Additionally, force reductions require that human resources be leveraged to the maximum extent possible, and more stringent construction/repair requirements push for increased automation. To solve these problems, the U.S. Air Force is undertaking a research and development effort at Tyndall AFB, FL, to develop robotic teleoperation, telerobotics, robotic vehicle communications, automated damage assessment, vehicle navigation, mission/vehicle task control architecture, and the associated computing environment. The ultimate goal is the fielding of a robotic repair capability operating at the level of supervised autonomy. The authors of this paper discuss current and planned efforts in construction/repair, explosive ordnance disposal, hazardous waste cleanup, fire fighting, and space construction.
Apples and oranges: avoiding different priors in Bayesian DNA sequence analysis
Directory of Open Access Journals (Sweden)
Posch Stefan
2010-03-01
Full Text Available Abstract Background One of the challenges of bioinformatics remains the recognition of short signal sequences in genomic DNA such as donor or acceptor splice sites, splicing enhancers or silencers, translation initiation sites, transcription start sites, transcription factor binding sites, nucleosome binding sites, miRNA binding sites, or insulator binding sites. During the last decade, a wealth of algorithms for the recognition of such DNA sequences has been developed and compared with the goal of improving their performance and of deepening our understanding of the underlying cellular processes. Most of these algorithms are based on statistical models belonging to the family of Markov random fields such as position weight matrix models, weight array matrix models, Markov models of higher order, or moral Bayesian networks. While in many comparative studies different learning principles or different statistical models have been compared, the influence of choosing different prior distributions for the model parameters when using different learning principles has been overlooked, possibly leading to questionable conclusions. Results With the goal of allowing direct comparisons of different learning principles for models from the family of Markov random fields based on the same a-priori information, we derive a generalization of the commonly used product-Dirichlet prior. We find that the derived prior behaves like a Gaussian prior close to the maximum and like a Laplace prior in the far tails. In two case studies, we illustrate the utility of the derived prior for a direct comparison of different learning principles with different models for the recognition of binding sites of the transcription factor Sp1 and human donor splice sites. Conclusions We find that comparisons of different learning principles using the same a-priori information can lead to conclusions different from those of previous studies in which the effect resulting from different
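For a position weight matrix model, the product-Dirichlet prior that this work generalizes reduces to simple pseudocounts: the posterior-mean estimate adds the Dirichlet hyperparameter to each observed count. A minimal sketch (the motif sites and hyperparameter below are invented for illustration):

```python
import numpy as np

ALPHABET = "ACGT"

def pwm_posterior_mean(sequences, alpha=1.0):
    """Posterior-mean position weight matrix under a symmetric
    product-Dirichlet prior: alpha acts as a pseudocount per base."""
    length = len(sequences[0])
    counts = np.zeros((length, 4))
    for seq in sequences:
        for i, base in enumerate(seq):
            counts[i, ALPHABET.index(base)] += 1
    # Dirichlet(alpha, ..., alpha) prior -> add alpha to every count.
    return (counts + alpha) / (counts.sum(axis=1, keepdims=True) + 4 * alpha)

sites = ["ACGT", "ACGA", "ACGT", "TCGT"]  # hypothetical aligned binding sites
pwm = pwm_posterior_mean(sites, alpha=0.5)
print(pwm[0])  # position 1: A observed 3 of 4 times, smoothed toward uniform
```

The smoothing keeps every probability strictly positive, which is exactly why the choice of prior matters when different learning principles are compared on the same data.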
Construction and test of the PRIOR proton microscope; Aufbau und Test des Protonenmikroskops PRIOR
Energy Technology Data Exchange (ETDEWEB)
Lang, Philipp-Michael
2015-01-15
The study of High Energy Density Matter (HEDM) in the laboratory makes great demands on diagnostics, because these states can usually be created only for a short time, and usual diagnostic techniques with visible light or X-rays reach their limits because of the high density. The high-energy proton radiography technique developed in the 1990s at the Los Alamos National Laboratory is a very promising way to overcome those limits, so that one can measure the density of HEDM with high spatial and time resolution. For this purpose the proton microscope PRIOR (Proton Radiography for FAIR) was set up at GSI, which not only reproduces the image but also magnifies it by a factor of 4.2, and thereby penetrates matter with an areal density of up to 20 g/cm². Straight away, a spatial resolution of less than 30 μm and a time resolution on the nanosecond scale were achieved. This work describes the principle, design and construction of the proton microscope as well as first measurements and simulations of essential components such as the magnetic lenses, a collimator and a scintillator screen. For the latter, it was shown that plastic scintillators can be used as converters as an alternative to the slower but more radiation-resistant crystals, making a time resolution of 10 ns possible. Moreover, the characteristics of the system were investigated at its commissioning in April 2014, and the changes in the magnetic field due to radiation damage were studied. Besides that, an overview of future applications is given. First experiments with Warm Dense Matter created using a pulsed-power setup have already been performed. Furthermore, the promising concept of combining proton radiography with particle therapy has been investigated in the context of the PaNTERA project. An outlook on the possibilities of future experiments at the FAIR accelerator facility is given as well. Because of higher beam intensity and energy one can expect even
DEFF Research Database (Denmark)
Maffiuletti, Nicola A; Aagaard, Per; Blazevich, Anthony J
2016-01-01
The evaluation of rate of force development during rapid contractions has recently become quite popular for characterising explosive strength of athletes, elderly individuals and patients. The main aims of this narrative review are to describe the neuromuscular determinants of rate of force devel...
CERN AC
1998-01-01
The different forces, together with a pictorial analogy of how the exchange of particles works. The table lists the relative strength of the couplings, the quanta associated with the force fields and the bodies of phenomena in which they have a dominant role.
International Nuclear Information System (INIS)
Fischbach, E.; Sudarsky, D.; Szafer, A.; Talmadge, C.; Aronson, S.H.
1986-01-01
We review recent experimental and theoretical work dealing with the proposed fifth force. Further analysis of the original Eötvös experiments has uncovered no challenges to our original assertion that these data evidence a correlation characteristic of the presence of a new coupling to baryon number or hypercharge. Various models suggest that the proposed fifth force could be accommodated naturally into the existing theoretical framework
Adaptive nonparametric Bayesian inference using location-scale mixture priors
Jonge, de R.; Zanten, van J.H.
2010-01-01
We study location-scale mixture priors for nonparametric statistical problems, including multivariate regression, density estimation and classification. We show that a rate-adaptive procedure can be obtained if the prior is properly constructed. In particular, we show that adaptation is achieved if
Nudging toward Inquiry: Awakening and Building upon Prior Knowledge
Fontichiaro, Kristin, Comp.
2010-01-01
"Prior knowledge" (sometimes called schema or background knowledge) is information one already knows that helps him/her make sense of new information. New learning builds on existing prior knowledge. In traditional reporting-style research projects, students bypass this crucial step and plow right into answer-finding. It's no wonder that many…
Drunkorexia: Calorie Restriction Prior to Alcohol Consumption among College Freshman
Burke, Sloane C.; Cremeens, Jennifer; Vail-Smith, Karen; Woolsey, Conrad
2010-01-01
Using a sample of 692 freshmen at a southeastern university, this study examined caloric restriction among students prior to planned alcohol consumption. Participants were surveyed for self-reported alcohol consumption, binge drinking, and caloric intake habits prior to drinking episodes. Results indicated that 99 of 695 (14%) of first year…
Personality, depressive symptoms and prior trauma exposure of new ...
African Journals Online (AJOL)
Background. Police officers are predisposed to trauma exposure. The development of depression and post-traumatic stress disorder (PTSD) may be influenced by personality style, prior exposure to traumatic events and prior depression. Objectives. To describe the personality profiles of new Metropolitan Police Service ...
34 CFR 303.403 - Prior notice; native language.
2010-07-01
... 34 Education 2 2010-07-01 2010-07-01 false Prior notice; native language. 303.403 Section 303.403... TODDLERS WITH DISABILITIES Procedural Safeguards General § 303.403 Prior notice; native language. (a... file a complaint and the timelines under those procedures. (c) Native language. (1) The notice must be...
On the use of a pruning prior for neural networks
DEFF Research Database (Denmark)
Goutte, Cyril
1996-01-01
We address the problem of using a regularization prior that prunes unnecessary weights in a neural network architecture. This prior provides a convenient alternative to traditional weight-decay. Two examples are studied to support this method and illustrate its use. First we use the sunspots...
Bayesian Inference for Structured Spike and Slab Priors
DEFF Research Database (Denmark)
Andersen, Michael Riis; Winther, Ole; Hansen, Lars Kai
2014-01-01
Sparse signal recovery addresses the problem of solving underdetermined linear inverse problems subject to a sparsity constraint. We propose a novel prior formulation, the structured spike and slab prior, which allows a priori knowledge of the sparsity pattern to be incorporated by imposing a spatial...
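One way to realize a spatially structured spike-and-slab prior is to derive the support (the "spikes") from a thresholded smooth Gaussian field, so that nearby coefficients tend to be active or inactive together. The sketch below assumes that construction for illustration; the kernel width, threshold, and slab variance are not the paper's exact choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

# Smooth latent field: Gaussian with squared-exponential covariance,
# so neighbouring coefficients share their (in)activation tendency.
x = np.arange(n)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2) + 1e-8 * np.eye(n)
gamma = rng.multivariate_normal(np.zeros(n), K)

support = gamma > 0.5                                       # structured spike/slab indicator
weights = np.where(support, rng.normal(0.0, 1.0, n), 0.0)   # slab draws on the support

print(int(support.sum()), "active coefficients (spatially clustered)")
```

Drawing the indicators from a correlated field, rather than independently as in the classic spike-and-slab prior, is what encodes the "structured" sparsity pattern.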
5 CFR 6201.103 - Prior approval for outside employment.
2010-01-01
... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Prior approval for outside employment. 6201.103 Section 6201.103 Administrative Personnel EXPORT-IMPORT BANK OF THE UNITED STATES SUPPLEMENTAL STANDARDS OF ETHICAL CONDUCT FOR EMPLOYEES OF THE EXPORT-IMPORT BANK OF THE UNITED STATES § 6201.103 Prior...
Prior authorisation schemes: trade barriers in need of scientific justification
Meulen, van der B.M.J.
2010-01-01
Case C-333/08 Commission v. French Republic ‘processing aids’ [2010] ECR-0000. The French prior authorisation scheme for processing aids in food production infringes upon Article 34 TFEU. 1. A prior authorisation scheme not complying with the principle of proportionality infringes upon Article 34 TFEU.
Ponderomotive Forces in Cosmos
Lundin, R.; Guglielmi, A.
2006-12-01
This review is devoted to ponderomotive forces and their importance for the acceleration of charged particles by electromagnetic waves in space plasmas. Ponderomotive forces constitute time-averaged nonlinear forces acting on a medium in the presence of oscillating electromagnetic fields. Ponderomotive forces represent a useful analytical tool to describe plasma acceleration. Oscillating electromagnetic fields are also related with dissipative processes, such as heating of particles. Dissipative processes are, however, left outside these discussions. The focus will be entirely on the (conservative) ponderomotive forces acting in space plasmas. The review consists of seven sections. In Section 1, we explain the rationale for using the auxiliary ponderomotive forces instead of the fundamental Lorentz force for the study of particle motions in oscillating fields. In Section 2, we present the Abraham, Miller, Lundin-Hultqvist and Barlow ponderomotive forces, and the Bolotovsky-Serov ponderomotive drift. The hydrodynamic, quasi-hydrodynamic, and 'test-particle' approaches are used for the study of ponderomotive wave-particle interaction. The problems of self-consistency and regularization are discussed in Section 3. The model of static balance of forces (Section 4) exemplifies the interplay between thermal, gravitational and ponderomotive forces, but it also introduces a set of useful definitions, dimensionless parameters, etc. We analyze the Alfvén and ion cyclotron waves in the static limit with emphasis on the specific distinction between traveling and standing waves. Particular attention has been given to the impact of traveling Alfvén waves on the steady state anabatic wind that blows over the polar regions (Section 5). We demonstrate the existence of a wave-induced cold anabatic wind. We also show that, at a critical point, the ponderomotive acceleration of the wind is a factor of 3 greater than the thermal acceleration. Section 6 demonstrates various
The Other Quiet Professionals: Lessons for Future Cyber Forces from the Evolution of Special Forces
2014-01-01
tions Command), John Mense (INSCOM), Paul Schuh (JFCC-NW), Russell Fenton (U.S. Army Network Enterprise Technology Command), and CW5 Todd Boudreau and...Surdu, 2009, pp. 16–17). Doctrine Irregular warfare and SOF doctrine lagged operational activities after the Vietnam War and prior to the...forces are, at their operating core, small teams of highly skilled specialists, and both communities value skilled personnel above all else. Irregular
Variational segmentation problems using prior knowledge in imaging and vision
DEFF Research Database (Denmark)
Fundana, Ketut
This dissertation addresses variational formulation of segmentation problems using prior knowledge. Variational models are among the most successful approaches for solving many Computer Vision and Image Processing problems. The models aim at finding the solution to a given energy functional defined......, prior knowledge is needed to obtain the desired solution. The introduction of shape priors in particular, has proven to be an effective way to segment objects of interests. Firstly, we propose a prior-based variational segmentation model to segment objects of interest in image sequences, that can deal....... Many objects have high variability in shape and orientation. This often leads to unsatisfactory results, when using a segmentation model with single shape template. One way to solve this is by using more sophisticated shape models. We propose to incorporate shape priors from a shape sub...
Total Variability Modeling using Source-specific Priors
DEFF Research Database (Denmark)
Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou
2016-01-01
sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows, in the heterogeneous case, that using informative priors for computing the posterior...... can lead to favorable results. We focus on modeling the priors using a minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) datasets show that our proposed method beats four baselines: For i-vector extraction using an already...... trained matrix, for the short2-short3 task in SRE’08, five out of eight female and four out of eight male common conditions were improved. For the core-extended task in SRE’10, four out of nine female and six out of nine male common conditions were improved. When incorporating prior information...
Example-driven manifold priors for image deconvolution.
Ni, Jie; Turaga, Pavan; Patel, Vishal M; Chellappa, Rama
2011-11-01
Image restoration methods that exploit prior information about images to be estimated have been extensively studied, typically using the Bayesian framework. In this paper, we consider the role of prior knowledge of the object class in the form of a patch manifold to address the deconvolution problem. Specifically, we incorporate unlabeled image data of the object class, say natural images, in the form of a patch-manifold prior for the object class. The manifold prior is implicitly estimated from the given unlabeled data. We show how the patch-manifold prior effectively exploits the available sample class data for regularizing the deblurring problem. Furthermore, we derive a generalized cross-validation (GCV) function to automatically determine the regularization parameter at each iteration without explicitly knowing the noise variance. Extensive experiments show that this method performs better than many competitive image deconvolution methods.
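The generalized cross-validation idea used above to pick the regularization parameter can be illustrated with plain Tikhonov (Wiener-style) deconvolution in the Fourier domain, rather than the paper's manifold prior; the 1-D signal, blur kernel, and noise level below are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128

# Ground-truth signal and a Gaussian blur kernel (toy 1-D stand-in for an image).
t = np.arange(n)
x = np.sin(2 * np.pi * t / 32) + (t > 64)
h = np.exp(-0.5 * ((t - n // 2) / 3.0) ** 2)
h /= h.sum()
H = np.fft.fft(np.roll(h, -n // 2))
y = np.real(np.fft.ifft(H * np.fft.fft(x))) + 0.05 * rng.normal(size=n)
Y = np.fft.fft(y)

def gcv(lam):
    """GCV score for the Tikhonov filter, diagonal in the Fourier domain."""
    filt = np.abs(H) ** 2 / (np.abs(H) ** 2 + lam)   # diagonal of the hat matrix
    resid = np.abs((1 - filt) * Y) ** 2
    return resid.sum() / (n - filt.sum()) ** 2

lams = 10.0 ** np.linspace(-6, 1, 50)
best = lams[np.argmin([gcv(l) for l in lams])]
x_hat = np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + best)))
print("GCV-chosen lambda:", best)
```

As in the paper, the point is that GCV selects the regularization weight from the data alone, without requiring the noise variance to be known.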
49 CFR 195.406 - Maximum operating pressure.
2010-10-01
... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...
78 FR 49370 - Inflation Adjustment of Maximum Forfeiture Penalties
2013-08-14
... ``civil monetary penalties provided by law'' at least once every four years. DATES: Effective September 13... increases the maximum civil monetary forfeiture penalties available to the Commission under its rules... maximum civil penalties established in that section to account for inflation since the last adjustment to...
22 CFR 201.67 - Maximum freight charges.
2010-04-01
..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...
Maximum penetration level of distributed generation without violating voltage limits
Morren, J.; Haan, de S.W.H.
2009-01-01
Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a
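The trade-off described above can be illustrated with the common first-order approximation for the voltage rise at the connection point, ΔV ≈ (P·R + Q·X)/V. The feeder impedance and limits below are assumed values; the sketch also shows how reactive absorption by the DG unit itself raises the allowable injection:

```python
def max_dg_power(v_nom, dv_max, r, x, q_ratio=0.0):
    """Largest DG active-power injection P (W) before the approximate local
    voltage rise dV ~ (P*r + Q*x) / v_nom exceeds dv_max, where Q = q_ratio * P
    (q_ratio < 0 means the DG unit absorbs reactive power)."""
    return dv_max * v_nom / (r + q_ratio * x)

v, dv = 400.0, 0.05 * 400.0    # 400 V feeder, 5% rise allowed (assumed)
r_line, x_line = 0.2, 0.1      # ohm, assumed cable impedance

p_plain = max_dg_power(v, dv, r_line, x_line)        # no compensation
p_comp = max_dg_power(v, dv, r_line, x_line, -0.5)   # DG absorbs Q = 0.5 P
print(p_plain, p_comp)  # reactive absorption raises the allowable injection
```

With these numbers the plain limit is 40 kW, and absorbing reactive power pushes it above 53 kW, which is the mechanism the abstract refers to.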
Particle Swarm Optimization Based of the Maximum Photovoltaic ...
African Journals Online (AJOL)
Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power since the peak power point depends on the temperature and the irradiation level. A maximum peak power point tracking is then necessary for maximum efficiency. In this work, a Particle Swarm ...
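A minimal particle swarm maximizing a toy P-V characteristic gives the flavor of the approach; the curve (single peak near 16.5 V) and the PSO coefficients are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def pv_power(v):
    """Toy P-V characteristic of a hypothetical panel, peaking near 16.5 V."""
    return v * np.clip(3.0 * (1.0 - np.exp((v - 21.0) / 2.0)), 0.0, None)

n_particles, iters = 10, 60
pos = rng.uniform(0.0, 21.0, n_particles)   # candidate operating voltages
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_val = pv_power(pbest)
gbest = pbest[np.argmax(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 21.0)
    val = pv_power(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("voltage at maximum power:", round(float(gbest), 1))
```

In a real tracker the fitness evaluation would be a power measurement at each candidate operating point, and the swarm would be restarted when the irradiation level changes.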
Maximum-entropy clustering algorithm and its global convergence analysis
Institute of Scientific and Technical Information of China (English)
(no author listed)
2001-01-01
Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
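The "soft generalization of hard C-means" can be sketched with Gibbs-distribution memberships proportional to exp(−β·d²), which approach hard assignments as β grows; the data, β, and initialization below are illustrative, not the paper's formulation:

```python
import numpy as np

def maxent_cluster(X, centers, beta, iters=50):
    """Maximum-entropy (soft) clustering: memberships follow a Gibbs
    distribution over squared distances; large beta recovers hard C-means."""
    centers = centers.copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))  # shift for stability
        w /= w.sum(axis=1, keepdims=True)
        centers = (w[:, :, None] * X[:, None, :]).sum(0) / w.sum(0)[:, None]
    return centers, w

rng = np.random.default_rng(1)
# Two synthetic clusters around (0, 0) and (5, 5).
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(0.0, 0.3, (20, 2)) + 5.0])

centers, w = maxent_cluster(X, X[[0, -1]], beta=2.0)
print(np.sort(centers[:, 0]).round(2))  # one center near 0, one near 5
```

Annealing β from small to large values is the usual way this family of algorithms avoids poor local minima of the hard objective.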
Application of maximum entropy to neutron tunneling spectroscopy
International Nuclear Information System (INIS)
Mukhopadhyay, R.; Silver, R.N.
1990-01-01
We demonstrate the maximum entropy method for the deconvolution of high resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor of three improvement in resolution. 7 refs., 4 figs
The regulation of starch accumulation in Panicum maximum Jacq ...
African Journals Online (AJOL)
... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...
32 CFR 842.35 - Depreciation and maximum allowances.
2010-07-01
... 32 National Defense 6 2010-07-01 2010-07-01 false Depreciation and maximum allowances. 842.35... LITIGATION ADMINISTRATIVE CLAIMS Personnel Claims (31 U.S.C. 3701, 3721) § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide” to...
The maximum significant wave height in the Southern North Sea
Bouws, E.; Tolman, H.L.; Holthuijsen, L.H.; Eldeberky, Y.; Booij, N.; Ferier, P.
1995-01-01
The maximum possible wave conditions along the Dutch coast, which seem to be dominated by the limited water depth, have been estimated in the present study with numerical simulations. Discussions with meteorologists suggest that the maximum possible sustained wind speed in North Sea conditions is
PTree: pattern-based, stochastic search for maximum parsimony phylogenies
Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.
2013-01-01
Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...
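The maximum-parsimony criterion mentioned above is scored, for a fixed tree topology and one character, by Fitch's small-parsimony algorithm; a minimal sketch with an invented four-taxon tree:

```python
def fitch(tree, leaf_states):
    """Fitch small-parsimony score: minimum number of substitutions on a
    fixed rooted binary tree for one character. `tree` is a nested tuple
    whose leaves are taxon names."""
    changes = 0

    def state_sets(node):
        nonlocal changes
        if isinstance(node, str):                    # leaf
            return {leaf_states[node]}
        left, right = state_sets(node[0]), state_sets(node[1])
        inter = left & right
        if inter:
            return inter
        changes += 1                                 # union step costs one substitution
        return left | right

    state_sets(tree)
    return changes

tree = (("human", "chimp"), ("mouse", "rat"))
print(fitch(tree, {"human": "A", "chimp": "A", "mouse": "G", "rat": "G"}))  # 1
```

A stochastic search such as the one the abstract describes repeatedly rearranges the topology and keeps the tree whose total score, summed over all alignment columns, is lowest.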
5 CFR 838.711 - Maximum former spouse survivor annuity.
2010-01-01
... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the amount...
Winning the Retention Wars: The Air Force, Women Officers, and the Need for Transformation
National Research Council Canada - National Science Library
DiSilverio, Laura
2003-01-01
.... Although specific separation figures are not available, analysis of the percentage of men and women by commissioned years of service in the Air Force indicates that women separate prior to retirement...
Maximum physical capacity testing in cancer patients undergoing chemotherapy
DEFF Research Database (Denmark)
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determin...... early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy....... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing...
Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation
Directory of Open Access Journals (Sweden)
Petr Stehlík
2015-01-01
Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time: ux′ (or Δtux) = k(ux−1 − 2ux + ux+1) + f(ux), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit features similar to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of the maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
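A quick numerical check of the weak maximum principle for the discrete-time lattice equation with the bistable Nagumo nonlinearity f(u) = u(1 − u)(u − a): with a small enough time step, a solution starting in [0, 1] stays in [0, 1]. Lattice size, k, a, and the time step below are illustrative:

```python
import numpy as np

def nagumo_step(u, k, dt, a=0.3):
    """One explicit step of the lattice Nagumo equation
    u_x' = k (u_{x-1} - 2 u_x + u_{x+1}) + u_x (1 - u_x)(u_x - a),
    with the two end values held fixed."""
    f = u * (1 - u) * (u - a)
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
    u_new = u + dt * (k * lap + f)
    u_new[0], u_new[-1] = u[0], u[-1]   # fixed boundary values
    return u_new

u = np.zeros(50)
u[20:30] = 1.0                          # step between the two stable states 0 and 1
for _ in range(2000):
    u = nagumo_step(u, k=1.0, dt=0.01)

print(float(u.min()), float(u.max()))   # solution remains within [0, 1]
```

For large time steps the same explicit scheme can overshoot the interval, which is the discrete-time failure of the maximum principle that the paper quantifies.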
Bite force and occlusal stress production in hominin evolution.
Eng, Carolyn M; Lieberman, Daniel E; Zink, Katherine D; Peters, Michael A
2013-08-01
Maximum bite force affects craniofacial morphology and an organism's ability to break down foods with different material properties. Humans are generally believed to produce low bite forces and spend less time chewing compared with other apes because advances in mechanical and thermal food processing techniques alter food material properties in such a way as to reduce overall masticatory effort. However, when hominins began regularly consuming mechanically processed or cooked diets is not known. Here, we apply a model for estimating maximum bite forces and stresses at the second molar in modern human, nonhuman primate, and hominin skulls that incorporates skeletal data along with species-specific estimates of jaw muscle architecture. The model, which reliably estimates bite forces, shows a significant relationship between second molar bite force and second molar area across species but does not confirm our hypothesis of isometry. Specimens in the genus Homo fall below the regression line describing the relationship between bite force and molar area for nonhuman anthropoids and australopiths. These results suggest that Homo species generate maximum bite forces below those predicted based on scaling among australopiths and nonhuman primates. Because this decline occurred before evidence for cooking, we hypothesize that selection for lower bite force production was likely made possible by an increased reliance on nonthermal food processing. However, given substantial variability among in vivo bite force magnitudes measured in humans, environmental effects, especially variations in food mechanical properties, may also be a factor. The results also suggest that australopiths had ape-like bite force capabilities. Copyright © 2013 Wiley Periodicals, Inc.
Laryngeal Force Sensor: Quantifying Extralaryngeal Complications after Suspension Microlaryngoscopy.
Feng, Allen L; Song, Phillip C
2018-04-01
Objectives To develop a novel sensor capable of dynamically analyzing the force exerted during suspension microlaryngoscopy and to examine the relationship between force and postoperative tongue complications. Study Design Prospective observational study. Setting Academic tertiary care center. Methods The laryngeal force sensor is designed for use during microphonosurgery. Prospectively enrolled patients completed pre- and postoperative surveys to assess the development of tongue-related symptoms (dysgeusia, pain, paresthesia, and paresis) or dysphagia (10-item Eating Assessment Tool [EAT-10]). To prevent operator bias, surgeons were blinded to the force recordings during surgery. Results Fifty-six patients completed the study. Of these, 20 (36%) developed postoperative tongue symptoms, and 12 (21%) had abnormal EAT-10 scores. The mean maximum force across all procedures was 164.7 N (95% CI, 141.0-188.4; range, 48.5-402.6), while the mean suspension time was 34.3 minutes (95% CI, 27.4-41.2; range, 7.1-108.1). Multiple logistic regression showed maximum force (odds ratio, 1.15; 95% CI, 1.02-1.29; P = .019) and female sex (30.1%; 95% CI, 22.7%-37.5%; P < .001) to be predictors of postoperative tongue symptoms; abnormal EAT-10 scores were associated with maximum force (odds ratio, 1.03; 95% CI, 1.00-1.06; P = .045). Conclusions The laryngeal force sensor is capable of providing dynamic force measurements throughout suspension microlaryngoscopy. An increase in maximum force during surgery may be a significant predictor for the development of tongue-related symptoms and an abnormal EAT-10 score. Female patients may also be at greater risk for developing postoperative tongue symptoms.
Feasibility of novel four degrees of freedom capacitive force sensor for skin interface force
Directory of Open Access Journals (Sweden)
Murakami Chisato
2012-11-01
Full Text Available Abstract Background The objective of our study was to develop a novel capacitive force sensor that enables simultaneous measurement of yaw torque around the pressure axis, normal force and shear forces at a single point, for the purpose of elucidating pressure ulcer pathogenesis and establishing criteria for the selection of cushions and mattresses. Methods Two newly developed sensors, approximately 10 mm × 10 mm × 5 mm ("10") and 20 mm × 20 mm × 5 mm ("20"), were constructed from silicone gel and four upper and four lower electrodes. The upper and lower electrodes formed sixteen combinations that functioned as parallel-plate capacitors. The full-scale (FS) ranges of force/torque were defined as 0 to 1.5 N, −0.5 to 0.5 N and −1.5 to 1.5 N·mm ("10") and 0 to 8.7 N, −2.9 to 2.9 N and −16.8 to 16.8 N·mm ("20") in normal force, shear forces and yaw torque, respectively. The capacitances of the sixteen capacitors were measured by an LCR meter (AC 1 V, 100 kHz) when displacements corresponding to four-degrees-of-freedom (DOF) forces within the FS ranges were applied to the sensor. The measurement was repeated three times in each displacement condition ("10" only). Force/torque were calculated from the corrected capacitance and were evaluated by comparison to theoretical values and to the standard normal force measured by a universal tester. Results In measurements of capacitance, the coefficient of variation was 3.23% ("10"). The maximum FS errors of the estimated force/torque were less than or equal to 10.1% ("10") and 16.4% ("20"), respectively. The standard normal forces were approximately 1.5 N ("10") and 9.4 N ("20") when the pressure displacements were 3 mm ("10") and 2 mm ("20"), respectively. The estimated normal forces were approximately 1.5 N ("10") and 8.6 N ("20") in the same conditions. Conclusions In this study, we developed a new four-DOF force sensor for measurement of the force/torque that occur between the skin and a mattress. In measurement of capacitance, the repeatability was good and it was confirmed that the sensor had
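The parallel-plate principle behind such a sensor, C = ε0·εr·A/d, already explains the signs of the responses: compression shrinks the gap and raises capacitance, while shear shrinks the overlapping area and lowers it. A sketch with assumed electrode dimensions and silicone permittivity (not the paper's values):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, eps_r=3.0):
    """Parallel-plate capacitance; eps_r ~ 3 is an assumed silicone permittivity."""
    return EPS0 * eps_r * area_m2 / gap_m

# Hypothetical 4 mm x 4 mm electrode pair separated by a 1 mm silicone gap.
area, gap = 4e-3 * 4e-3, 1e-3
c0 = capacitance(area, gap)

# Normal force compresses the gel: the gap shrinks, capacitance rises.
c_pressed = capacitance(area, gap - 0.2e-3)

# Shear force slides the plates: the overlap area shrinks, capacitance falls.
c_sheared = capacitance(4e-3 * (4e-3 - 0.5e-3), gap)

print(c_pressed > c0 > c_sheared)  # True
```

Combining sixteen such capacitor readings, as the sensor does, lets normal force, both shear components, and yaw torque be separated from the pattern of increases and decreases.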
Wetzels, Sandra; Kester, Liesbeth; Van Merriënboer, Jeroen; Broers, Nick
2010-01-01
Wetzels, S. A. J., Kester, L., Van Merriënboer, J. J. G., & Broers, N. J. (2011). The influence of prior knowledge on the retrieval-directed function of note taking in prior knowledge activation. British Journal of Educational Psychology, 81(2), 274-291. doi: 10.1348/000709910X517425
Bayesian modeling of the assimilative capacity component of nutrient total maximum daily loads
Faulkner, B. R.
2008-08-01
Implementing stream restoration techniques and best management practices to reduce nonpoint source nutrients implies enhancement of the assimilative capacity for the stream system. In this paper, a Bayesian method for evaluating this component of a total maximum daily load (TMDL) load capacity is developed and applied. The joint distribution of nutrient retention metrics from a literature review of 495 measurements was used for Monte Carlo sampling with a process transfer function for nutrient attenuation. Using the resulting histograms of nutrient retention, reference prior distributions were developed for sites in which some of the metrics contributing to the transfer function were measured. Contributing metrics for the prior include stream discharge, cross-sectional area, fraction of storage volume to free stream volume, denitrification rate constant, storage zone mass transfer rate, dispersion coefficient, and others. Confidence of compliance (CC) that any given level of nutrient retention has been achieved is also determined using this approach. The shape of the CC curve is dependent on the metrics measured and serves in part as a measure of the information provided by the metrics to predict nutrient retention. It is also a direct measurement, with a margin of safety, of the fraction of export load that can be reduced through changing retention metrics. For an impaired stream in western Oklahoma, a combination of prior information and measurement of nutrient attenuation was used to illustrate the proposed approach. This method may be considered for TMDL implementation.
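The Monte Carlo construction of a confidence-of-compliance curve can be sketched as follows; the retention distribution is a made-up stand-in for the literature-derived joint distribution of metrics pushed through the transfer function:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical prior over nutrient retention (fraction of load attenuated).
retention = np.clip(rng.normal(loc=0.35, scale=0.15, size=10_000), 0.0, 1.0)

def confidence_of_compliance(level):
    """CC: probability that at least `level` retention is achieved."""
    return float((retention >= level).mean())

for level in (0.1, 0.3, 0.5):
    print(level, confidence_of_compliance(level))
```

The resulting curve is monotonically decreasing in the retention level, and, as in the paper, reading it at a chosen margin of safety gives the fraction of export load that can credibly be assigned to assimilative capacity.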
International Nuclear Information System (INIS)
He, Yi; Scheraga, Harold A.; Liwo, Adam
2015-01-01
Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, which is based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.
Energy Technology Data Exchange (ETDEWEB)
Bell, R.E.; Hartley, D.S.III; Packard, S.L.
1999-05-01
This report documents refined requirements for tools to aid the process of force design in Operations Other Than War (OOTWs). It recommends actions for the creation of one tool and work on other tools relating to mission planning. It also identifies the governmental agencies and commands with interests in each tool, from whom should come the user advisory groups overseeing the respective tool development activities. The understanding of OOTWs and their analytical support requirements has matured to the point where action can be taken in three areas: force design, collaborative analysis, and impact analysis. While the nature of the action and the length of time before complete results can be expected depend on the area, in each case the action should begin immediately. Force design for OOTWs is not a technically difficult process. Like force design for combat operations, it is a process of matching the capabilities of forces against the specified and implied tasks of the operation, considering the constraints of logistics, transport and force availabilities. However, there is a critical difference that restricts the usefulness of combat force design tools for OOTWs: the combat tools are built to infer non-combat capability requirements from combat capability requirements and cannot reverse the direction of the inference, as is required for OOTWs. Recently, OOTWs have played a larger role in force assessment, system effectiveness and tradeoff analysis, and concept and doctrine development and analysis. In the first Quadrennial Defense Review (QDR), each of the Services created its own OOTW force design tool. Unfortunately, the tools address different parts of the problem and do not coordinate the use of competing capabilities. These tools satisfied the immediate requirements of the QDR, but do not provide a long-term cost-effective solution.
Learning priors for Bayesian computations in the nervous system.
Directory of Open Access Journals (Sweden)
Max Berniker
Full Text Available Our nervous system continuously combines new information from our senses with information it has acquired throughout life. Numerous studies have found that human subjects manage this by integrating their observations with their previous experience (priors) in a way that is close to the statistical optimum. However, little is known about the way the nervous system acquires or learns priors. Here we present results from experiments where the underlying distribution of target locations in an estimation task was switched, manipulating the prior subjects should use. Our experimental design allowed us to measure a subject's evolving prior while they learned. We confirm that through extensive practice subjects learn the correct prior for the task. We found that subjects can rapidly learn the mean of a new prior while the variance is learned more slowly and with a variable learning rate. In addition, we found that a Bayesian inference model could predict the time course of the observed learning while offering an intuitive explanation for the findings. The evidence suggests the nervous system continuously updates its priors to enable efficient behavior.
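The finding that the mean of a new prior is learned quickly can be illustrated with a standard conjugate Gaussian update; the target distribution and noise level below are hypothetical stand-ins, not the experiment's values:

```python
import random

def update(mu, tau2, x, sigma2):
    # Combine the current belief N(mu, tau2) about the prior's mean with one
    # observed target location x ~ N(theta, sigma2) (conjugate Gaussian update).
    post_tau2 = 1.0 / (1.0 / tau2 + 1.0 / sigma2)
    post_mu = post_tau2 * (mu / tau2 + x / sigma2)
    return post_mu, post_tau2

random.seed(1)
mu, tau2 = 0.0, 100.0          # vague initial belief about the prior mean
true_mean, sigma2 = 3.0, 1.0   # hypothetical switched prior
for _ in range(200):
    x = random.gauss(true_mean, sigma2 ** 0.5)
    mu, tau2 = update(mu, tau2, x, sigma2)
```

After a couple of hundred trials the belief `mu` has converged near the new mean while its uncertainty `tau2` has collapsed; learning the prior's variance requires tracking a second hyperparameter and is correspondingly slower.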
Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations
Mantz, A.; Allen, S. W.
2011-01-01
Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.
Vekstein, G.
2017-10-01
This is a tutorial-style selective review explaining basic concepts of forced magnetic reconnection. It is based on a celebrated model of forced reconnection suggested by J. B. Taylor. The standard magnetohydrodynamic (MHD) theory of this process has been pioneered by Hahm & Kulsrud (Phys. Fluids, vol. 28, 1985, p. 2412). Here we also discuss several more recent developments related to this problem. These include energetics of forced reconnection, its Hall-mediated regime, and nonlinear effects with the associated onset of the secondary tearing (plasmoid) instability.
DEFF Research Database (Denmark)
Sun, Peng; Speicher, Nora K; Röttger, Richard
2014-01-01
of pairwise similarities. We first evaluated the power of Bi-Force to solve dedicated bicluster editing problems by comparing Bi-Force with two existing algorithms in the BiCluE software package. We then followed a biclustering evaluation protocol in a recent review paper from Eren et al. (2013) (A comparative analysis of biclustering algorithms for gene expression data. Brief. Bioinform., 14:279-292.) and compared Bi-Force against eight existing tools: FABIA, QUBIC, Cheng and Church, Plaid, BiMax, Spectral, xMOTIFs and ISA. To this end, a suite of synthetic datasets as well as nine large gene expression...
Bite force measurement based on fiber Bragg grating sensor
Padma, Srivani; Umesh, Sharath; Asokan, Sundarrajan; Srinivas, Talabattula
2017-10-01
The maximum level of voluntary bite force, which results from the combined action of muscle of mastication, joints, and teeth, i.e., craniomandibular structure, is considered as one of the major indicators for the functional state of the masticatory system. Measurement of voluntary bite force provides useful data for the jaw muscle function and activity along with assessment of prosthetics. This study proposes an in vivo methodology for the dynamic measurement of bite force employing a fiber Bragg grating (FBG) sensor known as bite force measurement device (BFMD). The BFMD developed is a noninvasive intraoral device, which transduces the bite force exerted at the occlusal surface into strain variations on a metal plate. These strain variations are acquired by the FBG sensor bonded over it. The BFMD developed facilitates adjustment of the distance between the biting platforms, which is essential to capture the maximum voluntary bite force at three different positions of teeth, namely incisor, premolar, and molar sites. The clinically relevant bite forces are measured at incisor, molar, and premolar positions and have been compared against each other. Furthermore, the bite forces measured with all subjects are segregated according to gender and also compared against each other.
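The transduction chain above (bite force, plate strain, Bragg wavelength shift) rests on the standard FBG strain response; a sketch with a typical effective photoelastic coefficient of about 0.22 for silica fiber, since the BFMD's actual calibration constants are not given in the abstract:

```python
def bragg_wavelength_shift(strain, lam0_nm=1550.0, photoelastic=0.22):
    # Standard FBG strain response: the Bragg wavelength shifts in proportion
    # to axial strain, scaled by (1 - effective photoelastic coefficient).
    return lam0_nm * (1.0 - photoelastic) * strain

def strain_from_shift(d_lam_nm, lam0_nm=1550.0, photoelastic=0.22):
    # Invert the relation to recover plate strain from the measured shift.
    return d_lam_nm / (lam0_nm * (1.0 - photoelastic))
```

A bite producing 1000 microstrain on the plate would shift a 1550 nm grating by roughly 1.2 nm; a separate plate calibration then maps strain back to force.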
Directory of Open Access Journals (Sweden)
Cassio Neri
2014-05-01
Full Text Available We study the problem of finding probability densities that match given European call option prices. To allow prior information about such a density to be taken into account, we generalise the algorithm presented in Neri and Schneider (Appl. Math. Finance, 2013) to find the maximum entropy density of an asset price to the relative entropy case. This is applied to study the impact of the choice of prior density in two market scenarios. In the first scenario, call option prices are prescribed at only a small number of strikes, and we see that the choice of prior, or indeed its omission, yields notably different densities. The second scenario is given by CBOE option price data for S&P500 index options at a large number of strikes. Prior information is now considered to be given by calibrated Heston, Schöbel–Zhu or Variance Gamma models. We find that the resulting digital option prices are essentially the same as those given by the (non-relative Buchen–Kelly density itself. In other words, in a sufficiently liquid market, the influence of the prior density seems to vanish almost completely. Finally, we study variance swaps and derive a simple formula relating the fair variance swap rate to entropy. Then we show, again, that the prior loses its influence on the fair variance swap rate as the number of strikes increases.
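The relative-entropy generalisation can be illustrated on a discrete grid: the density of minimum relative entropy to a prior m, subject to a mean constraint, is an exponential tilting of m. The grid, flat prior, and single moment constraint below are illustrative; the paper works with call-price constraints at given strikes.

```python
import math

def min_relative_entropy(xs, prior, target_mean, lo=-50.0, hi=50.0):
    # The density p_i proportional to m_i * exp(lam * x_i) minimises relative
    # entropy to `prior` among densities on `xs` with the given mean; the tilt
    # parameter lam is found by bisection (the constraint mean is monotone in lam).
    def mean_for(lam):
        w = [m * math.exp(lam * x) for m, x in zip(prior, xs)]
        z = sum(w)
        return sum(wi * x for wi, x in zip(w, xs)) / z
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [m * math.exp(lam * x) for m, x in zip(prior, xs)]
    z = sum(w)
    return [wi / z for wi in w]

xs = [i / 10.0 for i in range(11)]      # discrete asset-price grid
prior = [1.0 / len(xs)] * len(xs)       # flat prior -> plain MaxEnt solution
p = min_relative_entropy(xs, prior, target_mean=0.7)
```

With a flat prior this reduces to the plain maximum-entropy (Buchen–Kelly-type) solution; an informative prior shifts the result toward the model density.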
2013-02-12
... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
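For intuition, a minimal sketch of LOD maximisation in the simplest phase-known setting, where the only genetic parameter is the recombination fraction theta; the paper's models also involve penetrance and phenocopy rates, which are not represented here:

```python
import math

def lod(theta, recomb, nonrecomb):
    # LOD for phase-known meioses: log10 likelihood ratio against theta = 0.5.
    if theta <= 0.0:
        theta = 1e-12
    l_theta = recomb * math.log10(theta) + nonrecomb * math.log10(1.0 - theta)
    l_null = (recomb + nonrecomb) * math.log10(0.5)
    return l_theta - l_null

def max_lod(recomb, nonrecomb, grid=None):
    # Maximise over a grid of recombination fractions, mirroring the practice
    # of maximising over genetic parameter values when transmission is unknown.
    grid = grid or [i / 1000.0 for i in range(1, 500)]
    return max(lod(t, recomb, nonrecomb) for t in grid)

score = max_lod(recomb=2, nonrecomb=18)
```

The maximised statistic needs a higher critical value than a LOD evaluated at a single pre-specified theta, which is exactly the trade-off the abstract quantifies.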
Finite element analysis of cutting tools prior to fracture in hard turning operations
International Nuclear Information System (INIS)
Cakir, M. Cemal; I Sik, Yahya
2005-01-01
In this work, finite element analysis (FEA) of cutting tools prior to fracture is investigated. Fracture is the catastrophic end of the cutting edge and should be avoided in order to obtain a longer tool life. This paper presents finite element modelling of a cutting tool just before its fracture. The data used in the FEA are gathered from a tool breakage system that detects fracture from the variations of the cutting forces measured by a three-dimensional force dynamometer. The workpiece material used in the experiments is cold work tool steel, AISI O1 (60 HRC), and the cutting tool material is uncoated tungsten carbide (DNMG 150608). In order to investigate the cutting tool conditions in longitudinal external turning operations prior to fracture, static and dynamic finite element analyses are conducted. After the static finite element analysis, modal and harmonic response analyses are carried out and the dynamic behaviour of the cutting tool structure is investigated. All FE analyses were performed using the commercial finite element package ANSYS.
Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.
Farsani, Zahra Amini; Schmid, Volker J
2017-01-01
In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM. Kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and for generating a probability distribution from given information. The proposed method gives an alternative way to assess the input function from the existing data. It allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.
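The validation step above uses the Kullback-Leibler divergence; for discrete distributions it is a one-liner (with the usual convention 0 * log 0 = 0):

```python
import math

def kl_divergence(p, q):
    # Kullback-Leibler divergence D(p || q) between discrete distributions,
    # as used to validate the MEM-estimated arterial input function.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The divergence is zero exactly when the estimated and reference distributions agree, and positive otherwise, which makes it a natural fit measure for the recovered AIF.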
Disentangling the effects of alternation rate and maximum run length on judgments of randomness
Directory of Open Access Journals (Sweden)
Sabine G. Scholl
2011-08-01
Full Text Available Binary sequences are characterized by various features. Two of these characteristics---alternation rate and run length---have repeatedly been shown to influence judgments of randomness. The two characteristics, however, have usually been investigated separately, without controlling for the other feature. Because the two features are correlated but not identical, it seems critical to analyze their unique impact, as well as their interaction, so as to understand more clearly what influences judgments of randomness. To this end, two experiments on the perception of binary sequences orthogonally manipulated alternation rate and maximum run length (i.e., length of the longest run within the sequence. Results show that alternation rate consistently exerts a unique effect on judgments of randomness, but that the effect of alternation rate is contingent on the length of the longest run within the sequence. The effect of maximum run length was found to be small and less consistent. Together, these findings extend prior randomness research by integrating literature from the realms of perception, categorization, and prediction, as well as by showing the unique and joint effects of alternation rate and maximum run length on judgments of randomness.
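The two manipulated features are straightforward to compute for any binary sequence; a sketch:

```python
def alternation_rate(seq):
    # Fraction of adjacent symbol pairs that alternate (0->1 or 1->0).
    return sum(a != b for a, b in zip(seq, seq[1:])) / (len(seq) - 1)

def max_run_length(seq):
    # Length of the longest run of identical symbols in the sequence.
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best
```

Orthogonal manipulation, as in the experiments, means constructing stimulus sequences so that these two statistics vary independently rather than covarying as they do in random sequences.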
Reliable Dual Tensor Model Estimation in Single and Crossing Fibers Based on Jeffreys Prior
Yang, Jianfei; Poot, Dirk H. J.; Caan, Matthan W. A.; Su, Tanja; Majoie, Charles B. L. M.; van Vliet, Lucas J.; Vos, Frans M.
2016-01-01
Purpose: This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods: Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD). This data-acquisition prior is based on the Fisher information matrix and enables the assessment whether two tensors are mandatory to describe the data. The method is compared to Maximum Likelihood Estimation (MLE) of the dual tensor model and to FSL’s ball-and-stick approach. Results: Monte Carlo experiments demonstrated that JARD’s volume fractions correlated well with the ground truth for single and crossing fiber configurations. In single fiber configurations JARD automatically reduced the volume fraction of one compartment to (almost) zero. The variance in fractional anisotropy (FA) of the main tensor component was thereby reduced compared to MLE. JARD and MLE gave a comparable outcome in data simulating crossing fibers. On brain data, JARD yielded a smaller spread in FA along the corpus callosum compared to MLE. Tract-based spatial statistics demonstrated a higher sensitivity in detecting age-related white matter atrophy using JARD compared to both MLE and the ball-and-stick approach. Conclusions: The proposed framework offers accurate and precise estimation of diffusion properties in single and dual fiber regions. PMID:27760166
Institutionalizing Security Force Assistance
National Research Council Canada - National Science Library
Binetti, Michael R
2008-01-01
.... It looks at the manner in which security assistance guidance is developed and executed. An examination of national level policy and the guidance from senior military and civilian leaders highlights the important role of Security Force Assistance...
Federal Laboratory Consortium — MIT Lincoln Laboratory occupies 75 acres (20 acres of which are MIT property) on the eastern perimeter of Hanscom Air Force Base, which is at the nexus of Lexington,...
Packing force data correlations
International Nuclear Information System (INIS)
Heiman, S.M.
1994-01-01
One of the issues facing valve maintenance personnel today deals with an appropriate methodology for installing and setting valve packing that will minimize leak rates, yet ensure functionality of the valve under all anticipated operating conditions. Several variables can affect a valve packing's ability to seal, such as packing bolt torque, stem finish, and lubrication. Stem frictional force can be an excellent overall indicator of some of the underlying conditions that affect the sealing characteristics of the packing and the best parameter to use when adjusting the packing. This paper addresses stem friction forces, analytically derives the equations related to these forces, presents a methodology for measuring these forces on valve stems, and attempts to correlate the data directly to the underlying variables
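A common Coulomb-type first approximation relates stem friction to the radial packing load acting over the stem contact area; the paper's derived equations are more detailed, so the form below is only an illustrative assumption:

```python
import math

def stem_friction_force(mu, packing_pressure_pa, stem_diameter_m, packing_height_m):
    # Coulomb-type estimate: friction equals the coefficient of friction times
    # the radial load, i.e. packing pressure acting over the stem contact area.
    contact_area = math.pi * stem_diameter_m * packing_height_m
    return mu * packing_pressure_pa * contact_area
```

In this simplified picture, stem friction scales linearly with packing bolt torque (through the packing pressure) and with the friction coefficient set by stem finish and lubrication, which is why it summarises those underlying variables.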
Expeditionary Warfare- Force Protection
National Research Council Canada - National Science Library
Higgins, Eric
2004-01-01
In 2003, the Systems Engineering and Analysis students were tasked to develop a system of systems conceptual solution to provide force protection for the Sea Base conceptualized in the 2002 Expeditionary Warfare study...
Parameters determining maximum wind velocity in a tropical cyclone
International Nuclear Information System (INIS)
Choudhury, A.M.
1984-09-01
The spiral structure of a tropical cyclone was earlier explained by a tangential velocity distribution which varies inversely as the distance from the cyclone centre outside the circle of maximum wind speed. The case has been extended in the present paper by adding a radial velocity. It has been found that a suitable combination of radial and tangential velocities can account for the spiral structure of a cyclone. This enables parametrization of the cyclone. Finally a formula has been derived relating maximum velocity in a tropical cyclone with angular momentum, radius of maximum wind speed and the spiral angle. The shapes of the spirals have been computed for various spiral angles. (author)
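A sketch of the velocity field the abstract describes: solid-body rotation inside the radius of maximum wind, a 1/r tangential decay outside it, and a superimposed radial inflow at a fixed spiral angle. The paper relates these quantities to angular momentum; the specific profile below is an assumption for illustration.

```python
import math

def wind_speed(r, r_max, v_max, spiral_angle_deg):
    # Modified Rankine-type vortex: tangential speed rises linearly to v_max
    # at r_max, then falls off as 1/r; a radial component at a fixed spiral
    # angle is superimposed, giving the spiral streamline structure.
    if r < r_max:
        v_t = v_max * r / r_max      # solid-body rotation inside the eyewall
    else:
        v_t = v_max * r_max / r      # 1/r decay outside the radius of maximum wind
    v_r = v_t * math.tan(math.radians(spiral_angle_deg))
    return math.hypot(v_t, v_r)
```

Because both components scale together, the spiral angle (the angle between the streamline and a circle about the centre) is constant, which is what produces the logarithmic spiral shapes computed in the paper.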
Estimating Functions with Prior Knowledge, (EFPK) for diffusions
DEFF Research Database (Denmark)
Nolsøe, Kim; Kessler, Mathieu; Madsen, Henrik
2003-01-01
In this paper a method is formulated in an estimating function setting for parameter estimation, which allows the use of prior information. The main idea is to use prior knowledge of the parameters, either specified as moment restrictions or as a distribution, and use it in the construction of an estimating function. It may be useful when the full Bayesian analysis is difficult to carry out for computational reasons. This is almost always the case for diffusions, which is the focus of this paper, though the method applies in other settings.
29 CFR 452.40 - Prior office holding.
2010-07-01
... DISCLOSURE ACT OF 1959 Candidacy for Office; Reasonable Qualifications § 452.40 Prior office holding. A.... Wirtz v. Hotel, Motel and Club Employees Union, Local 6, 391 U.S. 492 at 504. The Court stated...
Form of prior for constrained thermodynamic processes with uncertainty
Aneja, Preety; Johal, Ramandeep S.
2015-05-01
We consider the quasi-static thermodynamic processes with constraints, but with additional uncertainty about the control parameters. Motivated by inductive reasoning, we assign prior distribution that provides a rational guess about likely values of the uncertain parameters. The priors are derived explicitly for both the entropy-conserving and the energy-conserving processes. The proposed form is useful when the constraint equation cannot be treated analytically. The inference is performed using spin-1/2 systems as models for heat reservoirs. Analytical results are derived in the high-temperature limit. An agreement beyond linear response is found between the estimates of thermal quantities and their optimal values obtained from extremum principles. We also seek an intuitive interpretation for the prior and the estimated value of temperature obtained therefrom. We find that the prior over temperature becomes uniform over the quantity kept conserved in the process.
Prior Expectations Bias Sensory Representations in Visual Cortex
Kok, P.; Brouwer, G.J.; Gerven, M.A.J. van; Lange, F.P. de
2013-01-01
Perception is strongly influenced by expectations. Accordingly, perception has sometimes been cast as a process of inference, whereby sensory inputs are combined with prior knowledge. However, despite a wealth of behavioral literature supporting an account of perception as probabilistic inference,
What good are actions? Accelerating learning using learned action priors
CSIR Research Space (South Africa)
Rosman, Benjamin S
2012-11-01
Full Text Available The computational complexity of learning in sequential decision problems grows exponentially with the number of actions available to the agent at each state. We present a method for accelerating this process by learning action priors that express...
Assessment of prior learning in vocational education and training
DEFF Research Database (Denmark)
Wahlgren, Bjarne; Aarkrog, Vibe
The article deals with the results of a study of the assessment of prior learning among adult workers who want to obtain formal qualifications as skilled workers. The study contributes to developing methods for assessing prior learning, including both the teachers’ ways of eliciting the students’ knowledge, skills and competences during the students’ performances and the methods that the teachers apply in order to assess the students’ prior learning in relation to the regulations of the current VET-program. In particular the study focuses on how to assess not only the students’ explicated knowledge and skills but also their competences, i.e. the way the students use their skills and knowledge to perform in practice. Based on a description of the assessment procedures the article discusses central issues in relation to the assessment of prior learning. The empirical data have been obtained in the VET...
Valid MR imaging predictors of prior knee arthroscopy
International Nuclear Information System (INIS)
Discepola, Federico; Le, Huy B.Q.; Park, John S.; Clopton, Paul; Knoll, Andrew N.; Austin, Matthew J.; Resnick, Donald L.
2012-01-01
To determine whether fibrosis of the medial patellar reticulum (MPR), lateral patellar reticulum (LPR), deep medial aspect of Hoffa's fat pad (MDH), or deep lateral aspect of Hoffa's fat pad (LDH) is a valid predictor of prior knee arthroscopy. Institutional review board approval and waiver of informed consent were obtained for this HIPAA-compliant study. Initially, fibrosis of the MPR, LPR, MDH, or LDH in MR imaging studies of 50 patients with prior knee arthroscopy and 100 patients without was recorded. Subsequently, two additional radiologists, blinded to clinical data, retrospectively and independently recorded the presence of fibrosis of the MPR in 50 patients with prior knee arthroscopy and 50 without. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy for detecting the presence of fibrosis in the MPR were calculated. κ statistics were used to analyze inter-observer agreement. Fibrosis of each of the regions examined during the first portion of the study showed a significant association with prior knee arthroscopy (p < 0.005 for each). A patient with fibrosis of the MPR, LDH, or LPR was 45.5, 9, or 3.7 times more likely, respectively, to have had a prior knee arthroscopy. Logistic regression analysis indicated that fibrosis of the MPR supplanted the diagnostic utility of identifying fibrosis of the LPR, LDH, or MDH, or combinations of these (p ≥ 0.09 for all combinations). In the second portion of the study, fibrosis of the MPR demonstrated a mean sensitivity of 82%, specificity of 72%, PPV of 75%, NPV of 81%, and accuracy of 77% for predicting prior knee arthroscopy. Analysis of MR images can be used to determine if a patient has had prior knee arthroscopy by identifying fibrosis of the MPR, LPR, MDH, or LDH. Fibrosis of the MPR was the strongest predictor of prior knee arthroscopy. (orig.)
Valid MR imaging predictors of prior knee arthroscopy
Energy Technology Data Exchange (ETDEWEB)
Discepola, Federico; Le, Huy B.Q. [McGill University Health Center, Jewish General Hospital, Division of Musculoskeletal Radiology, Montreal, Quebec (Canada); Park, John S. [Annapolis Radiology Associates, Division of Musculoskeletal Radiology, Annapolis, MD (United States); Clopton, Paul; Knoll, Andrew N.; Austin, Matthew J.; Resnick, Donald L. [University of California San Diego (UCSD), Division of Musculoskeletal Radiology, San Diego, CA (United States)
2012-01-15
To determine whether fibrosis of the medial patellar reticulum (MPR), lateral patellar reticulum (LPR), deep medial aspect of Hoffa's fat pad (MDH), or deep lateral aspect of Hoffa's fat pad (LDH) is a valid predictor of prior knee arthroscopy. Institutional review board approval and waiver of informed consent were obtained for this HIPAA-compliant study. Initially, fibrosis of the MPR, LPR, MDH, or LDH in MR imaging studies of 50 patients with prior knee arthroscopy and 100 patients without was recorded. Subsequently, two additional radiologists, blinded to clinical data, retrospectively and independently recorded the presence of fibrosis of the MPR in 50 patients with prior knee arthroscopy and 50 without. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy for detecting the presence of fibrosis in the MPR were calculated. κ statistics were used to analyze inter-observer agreement. Fibrosis of each of the regions examined during the first portion of the study showed a significant association with prior knee arthroscopy (p < 0.005 for each). A patient with fibrosis of the MPR, LDH, or LPR was 45.5, 9, or 3.7 times more likely, respectively, to have had a prior knee arthroscopy. Logistic regression analysis indicated that fibrosis of the MPR supplanted the diagnostic utility of identifying fibrosis of the LPR, LDH, or MDH, or combinations of these (p ≥ 0.09 for all combinations). In the second portion of the study, fibrosis of the MPR demonstrated a mean sensitivity of 82%, specificity of 72%, PPV of 75%, NPV of 81%, and accuracy of 77% for predicting prior knee arthroscopy. Analysis of MR images can be used to determine if a patient has had prior knee arthroscopy by identifying fibrosis of the MPR, LPR, MDH, or LDH. Fibrosis of the MPR was the strongest predictor of prior knee arthroscopy. (orig.)
International Nuclear Information System (INIS)
Shigeru Aoki
2005-01-01
The secondary system, such as pipings, tanks and other mechanical equipment, is installed in the primary system such as a building. Important secondary systems should be designed to maintain their function even when subjected to destructive earthquake excitations. The secondary system has many nonlinear characteristics. Impact and friction characteristics, which are observed in mechanical supports and joints, are common nonlinearities; in the form of impact dampers and friction dampers, they are also used to reduce seismic response. In this paper, analytical methods for the first excursion probability of a secondary system with impact and friction, subjected to earthquake excitation, are proposed. By using these methods, the effects of impact force, gap size and friction force on the first excursion probability are examined. When the tolerance level is normalized by the maximum response of the secondary system without impact or friction characteristics, the variation of the first excursion probability is very small for various values of the natural period. In order to examine the effectiveness of the proposed method, the obtained results are compared with those obtained by the simulation method. Some estimation methods for the maximum response of the secondary system with nonlinear characteristics have been developed. (author)
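The simulation method used for comparison can be sketched as a crude Monte Carlo estimate for a single-degree-of-freedom oscillator with Coulomb friction under white-noise excitation; the integration scheme and parameter values below are illustrative assumptions, not the paper's model:

```python
import math
import random

def first_excursion_prob(level, n_sim=200, n_steps=1000, dt=0.005,
                         omega=2.0 * math.pi, zeta=0.02, mu_f=0.3, seed=0):
    # Fraction of sample excitations for which the oscillator response
    # |x(t)| ever exceeds the tolerance `level` (the first excursion event).
    rng = random.Random(seed)
    count = 0
    for _ in range(n_sim):
        x, v = 0.0, 0.0
        exceeded = False
        for _ in range(n_steps):
            a = rng.gauss(0.0, 1.0) / math.sqrt(dt)   # white-noise excitation
            friction = -mu_f * math.copysign(1.0, v) if v != 0.0 else 0.0
            v += dt * (a - 2.0 * zeta * omega * v - omega ** 2 * x + friction)
            x += dt * v
            if abs(x) > level:
                exceeded = True
                break
        count += exceeded
    return count / n_sim
```

An impact (gap) restoring force at a given clearance would be added as one more term in the same state update, which is how the gap-size effect studied in the paper enters.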
Mechanical limits to maximum weapon size in a giant rhinoceros beetle.
McCullough, Erin L
2014-07-07
The horns of giant rhinoceros beetles are a classic example of the elaborate morphologies that can result from sexual selection. Theory predicts that sexual traits will evolve to be increasingly exaggerated until survival costs balance the reproductive benefits of further trait elaboration. In Trypoxylus dichotomus, long horns confer a competitive advantage to males, yet previous studies have found that they do not incur survival costs. It is therefore unlikely that horn size is limited by the theoretical cost-benefit equilibrium. However, males sometimes fight vigorously enough to break their horns, so mechanical limits may set an upper bound on horn size. Here, I tested this mechanical limit hypothesis by measuring safety factors across the full range of horn sizes. Safety factors were calculated as the ratio between the force required to break a horn and the maximum force exerted on a horn during a typical fight. I found that safety factors decrease with increasing horn length, indicating that the risk of breakage is indeed highest for the longest horns. Structural failure of oversized horns may therefore oppose the continued exaggeration of horn length driven by male-male competition and set a mechanical limit on the maximum size of rhinoceros beetle horns. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
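The safety-factor computation is a simple ratio; the horn measurements below are hypothetical numbers chosen only to illustrate the reported decreasing trend, not data from the study:

```python
def safety_factor(breaking_force, max_fight_force):
    # Safety factor as defined in the study: the force required to break the
    # horn divided by the maximum force exerted on it during a typical fight.
    return breaking_force / max_fight_force

# Hypothetical (illustrative) measurements: fighting loads grow faster with
# horn length than breaking strength, so the safety factor declines.
horns = [
    {"length_mm": 20, "breaking_n": 12.0, "fight_n": 2.0},
    {"length_mm": 35, "breaking_n": 15.0, "fight_n": 3.5},
    {"length_mm": 50, "breaking_n": 17.0, "fight_n": 5.5},
]
factors = [safety_factor(h["breaking_n"], h["fight_n"]) for h in horns]
```

A safety factor approaching 1 means a typical fight nearly breaks the horn, which is the proposed mechanical ceiling on further exaggeration.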
Fractional Gaussian noise: Prior specification and model comparison
Sørbye, Sigrunn Holbek; Rue, Haavard
2017-01-01
Fractional Gaussian noise (fGn) is a stationary stochastic process used to model antipersistent or persistent dependency structures in observed time series. Properties of the autocovariance function of fGn are characterised by the Hurst exponent (H), which, in Bayesian contexts, typically has been assigned a uniform prior on the unit interval. This paper argues why a uniform prior is unreasonable and introduces the use of a penalised complexity (PC) prior for H. The PC prior is computed to penalise divergence from the special case of white noise and is invariant to reparameterisations. An immediate advantage is that the exact same prior can be used for the autocorrelation coefficient ϕ of a first-order autoregressive process AR(1), as this model also reflects a flexible version of white noise. Within the general setting of latent Gaussian models, this allows us to compare an fGn model component with AR(1) using Bayes factors, avoiding the confounding effects of prior choices for the two hyperparameters H and ϕ. Among others, this is useful in climate regression models where inference for underlying linear or smooth trends depends heavily on the assumed noise model.
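The fGn process itself can be sketched directly from its autocovariance function; the following is a minimal exact sampler via Cholesky factorisation (a sketch only, and the PC prior computation from the paper is not reproduced here):

```python
import numpy as np

def fgn_autocovariance(n, H):
    """gamma(k), k = 0..n-1, for unit-variance fractional Gaussian noise."""
    k = np.arange(n, dtype=float)
    return 0.5*(np.abs(k - 1)**(2*H) - 2*np.abs(k)**(2*H) + (k + 1)**(2*H))

def sample_fgn(n, H, rng=None):
    """One length-n fGn path via Cholesky factorisation of the
    Toeplitz covariance matrix (exact, O(n^3); fine for moderate n)."""
    if rng is None:
        rng = np.random.default_rng()
    gamma = fgn_autocovariance(n, H)
    cov = np.array([[gamma[abs(i - j)] for j in range(n)] for i in range(n)])
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)
```

At H = 0.5 the autocovariance vanishes for all lags k > 0, recovering the white-noise base model that the PC prior penalises divergence from; H > 0.5 gives persistence, H < 0.5 antipersistence.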
SUSPENSION OF THE PRIOR DISCIPLINARY INVESTIGATION ACCORDING TO LABOR LAW
Directory of Open Access Journals (Sweden)
Nicolae, GRADINARU
2014-11-01
Full Text Available In order to conduct the prior disciplinary investigation, the employee shall be convoked in writing by the person authorized by the employer to carry out the investigation, with the subject, date, time and place of the meeting specified. For this purpose the employer shall appoint a committee charged with conducting the prior disciplinary investigation. The prior disciplinary investigation cannot be carried out without giving the accused person the opportunity to defend himself; it would be an abuse for the employer to violate these provisions. Since the employee is entitled to formulate and sustain a defence, proving innocence or a lesser degree of guilt than imputed, a reasonable term must elapse between the moment the charges are disclosed to the employee and the conduct of the prior disciplinary investigation, so that the employee can prepare a defence. The employee's failure to appear at the convocation without an objective reason entitles the employer to impose the sanction without conducting the prior disciplinary investigation. The objective reason that renders the employee under prior disciplinary investigation unable to appear must exist at the time of the investigation in question.
Identification of subsurface structures using electromagnetic data and shape priors
Energy Technology Data Exchange (ETDEWEB)
Tveit, Svenn, E-mail: svenn.tveit@uni.no [Uni CIPR, Uni Research, Bergen 5020 (Norway); Department of Mathematics, University of Bergen, Bergen 5020 (Norway); Bakr, Shaaban A., E-mail: shaaban.bakr1@gmail.com [Department of Mathematics, Faculty of Science, Assiut University, Assiut 71516 (Egypt); Uni CIPR, Uni Research, Bergen 5020 (Norway); Lien, Martha, E-mail: martha.lien@octio.com [Uni CIPR, Uni Research, Bergen 5020 (Norway); Octio AS, Bøhmergaten 44, Bergen 5057 (Norway); Mannseth, Trond, E-mail: trond.mannseth@uni.no [Uni CIPR, Uni Research, Bergen 5020 (Norway); Department of Mathematics, University of Bergen, Bergen 5020 (Norway)
2015-03-01
We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of a kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
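The kernel distinction can be sketched as follows. Since the abstract does not name the paper's exact kernel, the negative multiquadric stands in here as a standard example of a conditionally positive definite kernel; all function names are illustrative:

```python
import numpy as np

def gaussian_kernel(x, y, eps=1.0):
    """Strictly positive definite Gaussian (RBF) kernel."""
    return float(np.exp(-(eps*np.linalg.norm(x - y))**2))

def multiquadric_kernel(x, y, eps=1.0):
    """Negative multiquadric: conditionally positive definite of order 1,
    i.e. c'Kc > 0 whenever the (nonzero) weights c sum to zero and the
    points are distinct."""
    return float(-np.sqrt(1.0 + (eps*np.linalg.norm(x - y))**2))

def kernel_matrix(points, kernel):
    """Dense Gram matrix for a list of points under a given kernel."""
    n = len(points)
    return np.array([[kernel(points[i], points[j]) for j in range(n)]
                     for i in range(n)])
```

The Gaussian Gram matrix is positive definite outright; the conditionally positive definite kernel only guarantees positivity on the subspace of zero-sum weights, which is the weaker property exploited in shape-prior interpolation.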
Environmental Monitoring, Water Quality - Total Maximum Daily Load (TMDL)
NSGIC Education | GIS Inventory — The Clean Water Act Section 303(d) establishes the Total Maximum Daily Load (TMDL) program. The purpose of the TMDL program is to identify sources of pollution and...
Probabilistic maximum-value wind prediction for offshore environments
DEFF Research Database (Denmark)
Staid, Andrea; Pinson, Pierre; Guikema, Seth D.
2015-01-01
Statistical models are developed to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Centre for Medium-Range Weather Forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop the full probabilistic distribution of maximum wind speed. Knowledge of the maximum wind speed for an offshore location within a given period can inform decision-making regarding turbine operations, planned maintenance operations and power grid scheduling in order to improve safety and reliability…
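The two-stage scheme (mean model plus residual-based predictive distribution) can be sketched with synthetic data. Only two of the covariates named in the abstract are mimicked, all values are fabricated for illustration, and a plain Gaussian residual model replaces the paper's richer model classes:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)

# Synthetic stand-in covariates and targets; illustrative only,
# not ECMWF forecast data.
n = 500
gust = rng.uniform(5.0, 30.0, n)            # gust speed covariate
wind = 0.8*gust + rng.normal(0.0, 1.0, n)   # mean wind speed covariate
X = np.column_stack([np.ones(n), gust, wind])
y_max = gust + rng.normal(0.0, 1.5, n)      # observed 3 h maximum wind

# 1) Train a linear model for the mean of the 3 h maximum wind speed.
beta, *_ = np.linalg.lstsq(X, y_max, rcond=None)

# 2) Use the training residuals to form the predictive distribution.
sigma = (y_max - X @ beta).std(ddof=X.shape[1])

def predict_max_quantile(x_new, q):
    """Predictive q-quantile of the 3 h maximum wind speed."""
    return float(x_new @ beta + NormalDist().inv_cdf(q)*sigma)
```

Upper quantiles of this predictive distribution are the quantities of operational interest, e.g. for turbine cut-out decisions.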
Parametric optimization of thermoelectric elements footprint for maximum power generation
DEFF Research Database (Denmark)
Rezania, A.; Rosendahl, Lasse; Yin, Hao
2014-01-01
The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-performance, and variation of efficiency in the uni-couple over a wide range of the heat transfer coefficient on the cold junction. The three-dimensional (3D) governing equations of thermoelectricity and heat transfer are solved using the finite element method (FEM) for temperature-dependent properties of TE materials. The results, which are in good agreement with previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at An/Ap …
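Under a much simpler one-dimensional uni-couple model (matched load, fixed total footprint, temperature-independent properties: all assumptions of this sketch, not the paper's 3D FEM setup, and all material values illustrative), the footprint-ratio optimisation looks like this:

```python
import numpy as np

# Illustrative, temperature-independent material data (assumptions,
# not values from the paper): electrical resistivities [ohm m] and a
# combined Seebeck coefficient for the n-p couple [V/K].
rho_n, rho_p = 1.0e-5, 1.6e-5
alpha_pn = 400e-6
L, A_total, dT = 1.5e-3, 2.0e-6, 100.0   # leg length, total footprint, K

def matched_load_power(ratio):
    """Maximum power of one uni-couple at matched load, for A_n/A_p = ratio
    and a fixed total footprint A_n + A_p = A_total."""
    A_p = A_total/(1.0 + ratio)
    A_n = A_total - A_p
    R = L*(rho_n/A_n + rho_p/A_p)        # internal electrical resistance
    return (alpha_pn*dT)**2/(4.0*R)      # matched-load output power

ratios = np.linspace(0.2, 5.0, 1000)
best = ratios[np.argmax([matched_load_power(r) for r in ratios])]
# Analytically, minimising R at fixed total area gives
# A_n/A_p = sqrt(rho_n/rho_p), about 0.79 with these numbers.
```

The point of the sketch is only that the optimum footprint ratio differs from 1 whenever the two materials' resistivities differ, which is the kind of asymmetry the FEM study quantifies.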
Ethylene Production Maximum Achievable Control Technology (MACT) Compliance Manual
This July 2006 document is intended to help owners and operators of ethylene processes understand and comply with EPA's maximum achievable control technology standards promulgated on July 12, 2002, as amended on April 13, 2005 and April 20, 2006.
ORIGINAL ARTICLES Surgical practice in a maximum security prison
African Journals Online (AJOL)
Prison Clinic, Mangaung Maximum Security Prison, Bloemfontein. F Kleinhans, BA (Cur) … HIV positivity rate and the use of the rectum to store foreign objects. … fruit in sunlight. Other positive health-promoting factors may also play a role …
A technique for estimating maximum harvesting effort in a stochastic ...
Indian Academy of Sciences (India)
Unknown
Estimation of maximum harvesting effort has a great impact on the … fluctuating environment has been developed in a two-species competitive system, which shows that under realistic … The existence and local stability properties of the equi- …