Maximum entropy reconstruction of spin densities involving non-uniform prior
Schweizer, J.; Ressouche, E. [DRFMC/SPSMS/MDN CEA-Grenoble (France)]; Papoular, R.J. [CEA-Saclay, Gif sur Yvette (France). Lab. Leon Brillouin]; Tasset, F. [Inst. Laue Langevin, Grenoble (France)]; Zheludev, A.I. [Brookhaven National Lab., Upton, NY (United States). Physics Dept.]
1997-09-01
Diffraction experiments give microscopic information on structures in crystals. A method based on the concept of maximum entropy (MaxEnt) has proved a formidable improvement in the treatment of diffraction data. The method rests on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one with the highest prior (intrinsic) probability. If all points of the map are considered equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. The method has been used to reconstruct charge densities from X-ray data, nuclear densities from unpolarized neutron data, and spin-density distributions. The density maps obtained this way, compared with those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. In most cases, however, some knowledge about the distribution under investigation exists before the measurements are performed, ranging from simple knowledge of the type of scattering electrons to an elaborate theoretical model. In these cases the uniform prior, which considers all pixels equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the maximum entropy formalism through a model m(r), via a new definition of the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model observed in the final map is really contained in the data since, with the new definition, it costs entropy. This paper presents illustrations of model testing.
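The entropy with a non-uniform prior described above can be written down explicitly. The sketch below (a hypothetical 4-pixel map; the discretization and numbers are illustrative assumptions, not from the paper) evaluates Skilling's entropy functional S(ρ) = Σ_i [ρ_i - m_i - ρ_i ln(ρ_i/m_i)], which is maximal, at zero, exactly when the map equals the model:

```python
import numpy as np

def skilling_entropy(rho, m):
    """Skilling entropy of a positive density map rho against a model m.
    Maximal (= 0) when rho == m; any departure from the model costs entropy."""
    rho = np.asarray(rho, dtype=float)
    m = np.asarray(m, dtype=float)
    return float(np.sum(rho - m - rho * np.log(rho / m)))

# Flat prior: every pixel equally likely
m_flat = np.full(4, 2.0)
rho = np.array([1.0, 3.0, 2.0, 2.0])

print(skilling_entropy(m_flat, m_flat))   # 0.0 at the model itself
print(skilling_entropy(rho, m_flat) < 0)  # True: departures cost entropy
```

With a non-uniform model m(r), the same functional penalizes departures from the model rather than from a flat background, which is the encoding of prior knowledge discussed above.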
Effects of bruxism on the maximum bite force
Todić Jelena T.
2017-01-01
Background/Aim. Bruxism is a parafunctional activity of the masticatory system characterized by clenching or grinding of the teeth. The purpose of this study was to determine whether the presence of bruxism affects maximum bite force, with particular reference to the potential influence of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism was registered using a specific clinical questionnaire on bruxism and a physical examination. Subjects from both groups underwent measurement of maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying maximal bite pressure by occlusal contact area. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism than in those without bruxism (p < 0.01). Maximal bite force was significantly higher in males than in females in all segments of the research. Conclusion. The presence of bruxism increases the maximum bite force, as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.
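The force computation in the protocol above is a simple product of the two measured quantities. A minimal sketch (the numbers are illustrative, not study data; the only physics used is the unit identity 1 MPa × 1 mm² = 1 N):

```python
# Maximal bite force as the product of maximal bite pressure and
# occlusal contact area, as in pressure-sensitive film protocols.
# The example values are assumptions, not measurements from the study.

def maximal_bite_force(pressure_mpa: float, area_mm2: float) -> float:
    """Return force in newtons: 1 MPa acting on 1 mm^2 gives 1 N."""
    return pressure_mpa * area_mm2

print(maximal_bite_force(30.0, 25.0))  # 750.0 N
```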
Solar Forcing of Greenland Climate during the Last Glacial Maximum
Adolphi, Florian; Muscheler, Raimund; Svensson, Anders; Aldahan, Ala; Possnert, Göran; Beer, Juerg; Sjolte, Jesper; Björck, Svante
2014-05-01
The role of solar forcing in climate change is a matter of continuous debate. Challenges arise from the short period of direct observations of total solar irradiance (TSI), which indicate minor TSI variations of approximately 1 ‰ over an 11-year cycle, and from the limited understanding of possible feedback mechanisms. In contrast, there is evidence from paleoclimate records for a tight coupling of solar activity and regional climate (e.g., Bond et al. 2001, Martin-Puertas et al. 2012). One proposed mechanism to amplify the Sun's influence on climate involves the relatively large modulation of the solar UV output (Haigh et al. 2010). This alters the radiative balance in the stratosphere via ozone feedback processes and eventually propagates downwards, causing changes in the tropospheric circulation (Ineson et al. 2011). The regional response to this forcing may, however, also depend on orbital forcing of the mean state of the atmosphere (Dietrich et al. 2012). Prior to direct observations, cosmogenic radionuclides such as 10Be and 14C are the most reliable proxies of solar activity. Their atmospheric production rates depend on the flux of galactic cosmic rays into the atmosphere, which in turn is modulated by the strength of the Earth's and the solar magnetic fields. However, archives of 10Be and 14C are additionally affected by changes in their respective geochemical environments. Owing to their fundamentally different geochemistry, a combined analysis of 10Be and 14C records can help isolate production rate variations more reliably and thus lead to improved reconstructions of solar variability. Due to the absence of high-quality, high-resolution data, this approach has so far been limited to the Holocene. We will present the first solar activity reconstruction for the end of the last glacial (22.5-10 ka BP) based on the cosmogenic radionuclides 10Be and 14C. We will compare glacial solar activity variations to Holocene features through combined interpretation
The maximum force in a column under constant speed compression
Kuzkin, Vitaly A
2015-01-01
Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters in which the maximum load supported by the column exceeds the Euler static force is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Using these results, we answer the following question: "How slowly should the column be compressed in order to measure the static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotoni...
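The static load-bearing capacity referred to above is the Euler critical force. A short sketch computing it for an assumed pin-ended steel column (all numerical values are illustrative, not taken from the paper):

```python
import math

def euler_force(E, I, L, mu=1.0):
    """Euler critical (static) buckling load of a column:
    P_E = pi^2 * E * I / (mu * L)^2, with mu the effective-length factor
    (mu = 1 for a pin-ended column)."""
    return math.pi**2 * E * I / (mu * L)**2

# Illustrative steel column (assumed values):
E = 210e9   # Young's modulus, Pa
I = 8.0e-9  # second moment of area, m^4
L = 1.0     # length, m
print(euler_force(E, I, L))  # roughly 1.66e4 N
```

Doubling the length quarters the critical load, which is why slenderness ratio appears as a governing parameter in the abstract above.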
Siegler, Jason C; Marshall, Paul W M; Raftry, Sean; Brooks, Cristy; Dowswell, Ben; Romero, Rick; Green, Simon
2013-12-01
The purpose of this investigation was to assess the influence of sodium bicarbonate supplementation on maximal force production, rate of force development (RFD), and muscle recruitment during repeated bouts of high-intensity cycling. Ten male and female subjects (n = 10) completed two fixed-cadence, high-intensity cycling trials. Each trial consisted of a series of 30-s efforts at 120% peak power output (maximum graded test) that were interspersed with 30-s recovery periods until task failure. Prior to each trial, subjects consumed 0.3 g/kg sodium bicarbonate (ALK) or placebo (PLA). Maximal voluntary contractions were performed immediately after each 30-s effort. Maximal force (F max) was calculated as the greatest force recorded over a 25-ms period throughout the entire contraction duration, while maximal RFD (RFD max) was calculated as the greatest 10-ms average slope throughout that same contraction. F max declined similarly in both the ALK and PLA conditions, with baseline values (ALK: 1,226 ± 393 N; PLA: 1,222 ± 369 N) declining nearly 295 ± 54 N [95% confidence interval (CI) = 84-508 N; P force vs. maximum rate of force development during a whole body fatiguing task.
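The windowed definitions of F max and RFD max above can be implemented directly as sliding-window computations. A sketch on a synthetic force trace (the sampling rate, ramp shape, and amplitude are assumptions, not study data):

```python
import numpy as np

def f_max(force, fs, window_ms=25):
    """Greatest force averaged over a sliding window (here 25 ms)."""
    w = max(1, int(round(window_ms * fs / 1000)))
    kernel = np.ones(w) / w
    return float(np.convolve(force, kernel, mode="valid").max())

def rfd_max(force, fs, window_ms=10):
    """Greatest average slope (N/s) over a sliding window (here 10 ms)."""
    w = max(1, int(round(window_ms * fs / 1000)))
    slopes = (force[w:] - force[:-w]) * fs / w
    return float(slopes.max())

fs = 1000  # Hz (assumed sampling rate)
t = np.arange(0, 4, 1 / fs)
force = 1200 * (1 - np.exp(-t / 0.2))  # synthetic MVC ramp toward ~1200 N

print(round(f_max(force, fs)))      # plateau force of the synthetic ramp
print(rfd_max(force, fs) > 0)       # True: slope is positive on the rise
```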
Maximum a posteriori covariance estimation using a power inverse wishart prior
Nielsen, Søren Feodor; Sporring, Jon
2012-01-01
The estimation of the covariance matrix is an initial step in many multivariate statistical methods such as principal components analysis and factor analysis, but in many practical applications the dimensionality of the sample space is large compared to the number of samples, and the usual maximu...... class of prior distributions generalizing the inverse Wishart prior, discuss its properties, and demonstrate the estimator on simulated and real data....
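For the classical (non-power) inverse Wishart prior, the MAP covariance estimate has a closed form that illustrates the regularization idea in the n << p regime discussed above; the paper's generalized power prior is not reproduced here. A hedged sketch (the scale matrix, degrees of freedom, and data are illustrative assumptions):

```python
import numpy as np

def map_covariance(X, Psi, nu):
    """MAP covariance estimate under a standard inverse-Wishart(Psi, nu) prior:
    Sigma_MAP = (S + Psi) / (n + nu + p + 1), with S the centered scatter matrix.
    (The paper generalizes this prior; this is the classical closed form.)"""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    centered = X - X.mean(axis=0)
    S = centered.T @ centered
    return (S + Psi) / (n + nu + p + 1)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 10))  # n = 5 samples, p = 10 dimensions: n << p
Sigma = map_covariance(X, Psi=np.eye(10), nu=12)

print(Sigma.shape)
print(np.all(np.linalg.eigvalsh(Sigma) > 0))  # True: prior keeps it positive definite
```

The sample covariance alone would be singular here (rank at most 4); the prior's scale matrix shrinks the estimate toward an invertible target, which is exactly the failure mode of the "usual" estimator that the abstract alludes to.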
Yang Fengfan
2004-01-01
A new technique for turbo decoders is proposed, using a local subsidiary maximum likelihood decoding and a family of probability distributions for the extrinsic information. The optimal distribution of the extrinsic information is dynamically specified for each component decoder. The simulation results show that an iterative decoder with the new technique outperforms a decoder using the traditional Gaussian approach for the extrinsic information under the same conditions.
Takagi, Mari; Kojima, Takashi; Ichikawa, Kei; Tanaka, Yoshiki; Kato, Yukihito; Horai, Rie; Tamaoki, Akeno; Ichikawa, Kazuo
2017-01-01
The current study compared the postoperative mechanical properties of the anterior capsule between femtosecond laser capsulotomy (FLC) and continuous curvilinear capsulorhexis (CCC) of variable size and shape in porcine eyes. All CCCs were created using capsule forceps. Irregular or eccentric CCCs were also created to simulate real cataract surgery. For FLC, capsulotomies 5.3 mm in diameter were created using the LenSx® (Alcon) platform. Fresh porcine eyes were used in all experiments. The edges of the capsule openings were pulled at a constant speed using two L-shaped jigs. Stretch force and distance were recorded over time, and the maximum values were defined as those recorded when the capsule broke. There was no difference in maximum stretch force between CCC and FLC overall. There were no differences in circularity between FLC and same-sized CCC. However, same-sized CCC showed significantly higher maximum stretch forces than FLC. Teardrop-shaped CCC showed lower maximum stretch forces than same-sized CCC and FLC. Heart-shaped CCC showed lower maximum stretch forces than same-sized CCC. In conclusion, while capsule edge strength after CCC varied depending on size and irregularities, FLC had the advantage of stable maximum stretch forces.
Psychophysical basis for maximum pushing and pulling forces: A review and recommendations.
Garg, Arun; Waters, Thomas; Kapellusch, Jay; Karwowski, Waldemar
2014-03-01
The objective of this paper was to perform a comprehensive review of psychophysically determined maximum acceptable pushing and pulling forces. Factors affecting pushing and pulling forces are identified and discussed. Recent studies show a significant decrease (compared to previous studies) in maximum acceptable forces for males but not for females when pushing and pulling on a treadmill. A comparison of pushing and pulling forces measured using a high inertia cart with those measured on a treadmill shows that the pushing and pulling forces using high inertia cart are higher for males but are about the same for females. It is concluded that the recommendations of Snook and Ciriello (1991) for pushing and pulling forces are still valid and provide reasonable recommendations for ergonomics practitioners. Regression equations as a function of handle height, frequency of exertion and pushing/pulling distance are provided to estimate maximum initial and sustained forces for pushing and pulling acceptable to 75% male and female workers. At present it is not clear whether pushing or pulling should be favored. Similarly, it is not clear what handle heights would be optimal for pushing and pulling. Epidemiological studies are needed to determine relationships between psychophysically determined maximum acceptable pushing and pulling forces and risk of musculoskeletal injuries, in particular to low back and shoulders.
Kotera, Jan; Šroubek, Filip
2015-02-01
Single-image blind deconvolution aims to estimate the unknown blur from a single observed blurred image and recover the original sharp image. Such a task is severely ill-posed, and typical approaches involve heuristic or other steps without clear mathematical explanation to arrive at an acceptable solution. We show that a straightforward maximum a posteriori estimation, incorporating sparse priors and a mechanism to deal with boundary artifacts and combined with an efficient numerical method, can produce results which compete with or outperform much more complicated state-of-the-art methods. Our method is naturally extended to deal with overexposure in low-light photography, where the linear blurring model is violated.
An investigation of rugby scrimmaging posture and individual maximum pushing force.
Wu, Wen-Lan; Chang, Jyh-Jong; Wu, Jia-Hroung; Guo, Lan-Yuen
2007-02-01
Although rugby is a popular contact sport and isokinetic muscle torque assessment has recently found widespread application in sports medicine, little research has examined the factors associated with the performance of game-specific skills directly by using an isokinetic-type rugby scrimmaging machine. This study was designed to (a) measure and observe the differences in the maximum individual forward pushing force produced by scrimmaging in different body postures (3 body heights x 2 foot positions) with a self-developed rugby scrimmaging machine and (b) observe the variations in hip, knee, and ankle angles at different body postures and explore the relationship between these angles and the individual maximum pushing force. Ten national rugby players were invited to participate. The experimental equipment included a self-developed rugby scrimmaging machine and a 3-dimensional motion analysis system. Our results showed that foot position (parallel or nonparallel) does not affect the maximum pushing force; however, the maximum pushing force was significantly lower in posture I (36% body height) than in posture II (38%) and posture III (40%). The maximum forward force in posture III (40% body height) was also slightly greater than for the scrum in posture II (38% body height). In addition, hip, knee, and ankle angles under parallel foot positioning were closely negatively related to the maximum pushing force in scrimmaging. In the cross-feet postures, there was a positive correlation between individual forward force and the hip angle of the rear leg. From our results, we conclude that standing in an appropriate starting position at the early stage of scrimmaging benefits forward force production.
Relationship between oral status and maximum bite force in preschool children
Ching-Ming Su
2009-03-01
Conclusion: Combining the results of this study, it was concluded that the associations of bite force with age, maximum mouth opening, and the number of teeth in contact were clearer than with other variables such as body height, body weight, occlusal pattern, and tooth decay or fillings.
Maximum clenching force of patients with moderate loss of posterior tooth support: a pilot study.
Gibbs, Charles H; Anusavice, Kenneth J; Young, Henry M; Jones, Jack S; Esquivel-Upshaw, Josephine F
2002-11-01
Patients who have lost moderate posterior tooth support may also lose clenching force as a result of sensitivity to increased loading to the remaining teeth and possibly a loss of muscle strength, because clenching forces are limited to avoid stress to the remaining teeth. Few studies have correlated moderate posterior tooth loss with maximum clenching force. The purpose of this pilot study was to test the hypothesis that moderate loss of posterior tooth support will have a significant effect on maximum clenching force. The maximum clenching force of 44 adults, ages 28 to 76 (mean 46), with posterior tooth loss was compared with the maximum clenching force of a control group of 20 healthy full dentition adults, ages 18 to 55 (mean 30), by use of a bilateral strain-gauged transducer. The transducer consisted of 2 stainless steel plates separated by a steel sphere that balanced occlusal forces between right and left sides. Acrylic resin pads were fabricated for each patient to protect the cusps of the teeth. The overall accuracy was found to be within 2.3% of full scale over a range of 0 to 4000 N (0 to 900 lbs). The calibration reliability of the system was checked frequently by use of a dead weight of 222 N (50 lbs). Clenching forces were supported by first and second molars and second premolars when possible. The instrumentation, methods, and operator were the same for both groups. A 2-tailed Student t test (alpha=0.01) and a pooled estimate of the mean were used to determine possible statistical significance. To test for possible correlations between clenching force and lost tooth support and between clenching force and age, a linear regression correlation coefficient R was calculated. For the 44 subjects with posterior tooth loss, the mean clenching force was 462 N (104 lbs), with a range of 98 to 1031 N (22 to 232 lbs). This compares with a mean of 720 N (162 lbs) with a range of 244 to 1243 N (55 to 280 lbs) for the full-dentition subjects. A 2-tailed t test
Control system for maximum use of adhesive forces of a railway vehicle in a tractive mode
Spiryagin, Maksym; Lee, Kwan Soo; Yoo, Hong Hee
2008-04-01
The realization of maximum adhesive forces for a railway vehicle is a very difficult process, because it involves applying tractive efforts and depends on the friction characteristics in the contact zone between wheels and rails. Tractive efforts are realized by means of the tractive torques of the motors, and their maximum values can produce negative effects such as slip and skid. These situations usually happen when information about friction conditions is lacking. These negative processes have a major influence on the wear of the contacting bodies and the tractive units. Therefore, many existing vehicle control systems rely on predicting the friction coefficient between wheels and rails, because measuring the friction coefficient during vehicle movement is very difficult. One way to solve this task is to use noise spectrum analysis for friction coefficient detection. This noise phenomenon has not been clearly studied and analyzed. In this paper, we propose an adhesion control system for railway vehicles based on an observer, which allows one to determine the maximum tractive torque based on the optimal adhesive force between the wheels (wheel pair) of a railway vehicle and the rails (rail track), depending on the weight load from wheel to rail, the friction conditions in the contact zone, the lateral displacement of the wheel set, and wheel slip. As a result, it allows a railway vehicle to be driven in a tractive mode at the maximum adhesion force for the actual friction conditions.
Kalafut, Bennett; Visscher, Koen
2008-10-01
Optical tweezers experiments allow us to probe the role of force and mechanical work in a variety of biochemical processes. However, observable states do not usually correspond in a one-to-one fashion with the internal state of an enzyme or enzyme-substrate complex. Different kinetic pathways yield different distributions for the dwells in the observable states. Furthermore, the dwell-time distribution will be dependent upon force, and upon where in the biochemical pathway force acts. I will present a maximum-likelihood method for identifying rate constants and the locations of force-dependent transitions in transcription initiation by T7 RNA Polymerase. This method is generalizable to systems with more complicated kinetic pathways in which there are two observable states (e.g. bound and unbound) and an irreversible final transition.
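For a single-exponential dwell-time distribution, the maximum-likelihood rate constant is simply the reciprocal of the mean dwell; richer kinetic pathways with hidden internal states add more parameters but follow the same likelihood principle. A minimal sketch (synthetic dwell times with an assumed true rate, not experimental data):

```python
import numpy as np

def mle_rate(dwells):
    """Maximum-likelihood rate constant for exponentially distributed dwell
    times: the log-likelihood n*log(k) - k*sum(t) is maximized at k = 1/mean(t)."""
    dwells = np.asarray(dwells, dtype=float)
    return 1.0 / dwells.mean()

rng = np.random.default_rng(1)
true_rate = 4.0  # s^-1 (assumed)
dwells = rng.exponential(1 / true_rate, size=5000)
print(mle_rate(dwells))  # close to the assumed true rate of 4.0 s^-1
```

Force dependence would enter by letting the rate constant of the force-sensitive step vary with applied load and maximizing the likelihood jointly over the dwell data at each force.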
Ngo, Chuong; Leonhardt, Steffen; Zhang, Tony; Lüken, Markus; Misgeld, Berno; Vollmer, Thomas; Tenbrock, Klaus; Lehmann, Sylvia
2017-01-01
Electrical impedance tomography (EIT) provides global and regional information about ventilation by means of relative changes in electrical impedance measured with electrodes placed around the thorax. In combination with lung function tests, e.g. spirometry and body plethysmography, regional information about lung ventilation can be achieved. Impedance changes strictly correlate with lung volume during tidal breathing and mechanical ventilation. Initial studies presumed a correlation also during forced expiration maneuvers. To quantify the validity of this correlation in extreme lung volume changes during forced breathing, a measurement system was set up and applied on seven lung-healthy volunteers. Simultaneous measurements of changes in lung volume using EIT imaging and pneumotachography were obtained with different breathing patterns. Data was divided into a synchronizing phase (spontaneous breathing) and a test phase (maximum effort breathing and forced maneuvers). The EIT impedance changes correlate strictly with spirometric data during slow breathing with increasing and maximum effort ([Formula: see text]) and during forced expiration maneuvers ([Formula: see text]). Strong correlations in spirometric volume parameters [Formula: see text] ([Formula: see text]), [Formula: see text]/FVC ([Formula: see text]), and flow parameters PEF, [Formula: see text], [Formula: see text], [Formula: see text] ([Formula: see text]) were observed. According to the linearity during forced expiration maneuvers, EIT can be used during pulmonary function testing in combination with spirometry for visualisation of regional lung ventilation.
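The strength of the coupling between impedance and spirometric volume reported above is a correlation coefficient. A sketch computing a Pearson correlation on synthetic signals (the gain and noise level are assumptions, not measured values):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length signals."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))

t = np.linspace(0, 10, 1000)
volume = np.sin(0.5 * np.pi * t)  # synthetic tidal-volume signal
impedance = 3.0 * volume + 0.05 * np.random.default_rng(2).normal(size=t.size)

print(pearson_r(impedance, volume) > 0.99)  # True: near-linear coupling
```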
Wheel-slip Control Method for Seeking Maximum Value of Tangential Force between Wheel and Rail
Kondo, Keiichiro; Yasuoka, Ikuo; Yamazaki, Osamu; Toda, Shinichi; Nakazawa, Yosuke
A method for reducing motor torque in proportion to wheel slip is applied to an inverter-driven electric locomotive. The motor torque at wheel-slip speed is less than the torque at the maximum tangential force, i.e. the adhesion force. A novel anti-slip control method for seeking the maximum value of the tangential force between the wheel and rail is proposed in this paper. The characteristics of the proposed method are analyzed theoretically to design the torque reduction ratio and the rate of change of the pattern between the wheel-slip speed and motor current. In addition, experimental tests are carried out to verify that the use of the proposed method increases the traction force of an electric locomotive driven by induction motors and inverters. The experimental results obtained using the proposed control method are compared with those obtained using a conventional control method. The average operational current when using the proposed control method is 10% more than that when using the conventional control method.
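The torque-reduction rule described above, reducing motor torque in proportion to wheel-slip speed, can be sketched as a simple control law (the gain, threshold, and torque values are illustrative assumptions, not the paper's design):

```python
def torque_command(t_ref, slip_speed, k, slip_threshold=0.0):
    """Reduce the motor torque command in proportion to detected wheel-slip
    speed above a threshold, never commanding negative torque.
    All parameter values are illustrative assumptions."""
    excess = max(0.0, slip_speed - slip_threshold)
    return max(0.0, t_ref - k * excess)

print(torque_command(t_ref=5000.0, slip_speed=0.0, k=800.0))  # 5000.0: no slip, full torque
print(torque_command(t_ref=5000.0, slip_speed=2.5, k=800.0))  # 3000.0: torque backed off
```

Seeking the maximum tangential force then amounts to tuning the reduction ratio k and the re-application rate so the operating point stays near the peak of the adhesion curve rather than deep in the slip region.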
Ariane Martins
2010-08-01
The relationship between strength and balance shows controversial results and has direct implications for exercise prescription practice. The objective was to investigate the relationship between the maximum dynamic force (MDF) of the lower limbs and static and dynamic balance. Sixty individuals aged 18 to 24 years, beginners in strength training, participated in the study. The MDF was assessed by means of the one maximum repetition (1MR) test in the "leg press" and "knee extension" exercises, and motor tests were used to assess static and dynamic balance. Correlation tests and multiple linear regression were applied. The force and balance variables were correlated in females (p = 0.038). Body mass and static balance were correlated in males (p = 0.045). The explanatory capacity of MDF and practice time was small: 13% for static balance in males, and 18% and 17%, respectively, for static and dynamic balance in females. In conclusion, the MDF of the lower limbs showed low predictive capacity for performance in static and dynamic balance, especially for males.
An exploration of ozone changes and their radiative forcing prior to the chlorofluorocarbon era
D. T. Shindell
2002-01-01
Using historical observations and model simulations, we investigate ozone trends prior to the mid-1970s onset of halogen-induced ozone depletion. Though measurements are quite limited, an analysis based on multiple, independent data sets (direct and indirect) provides better constraints than any individual set of observations. We find that three data sets support an apparent long-term stratospheric ozone trend of -7.2 ± 2.3 DU during 1957-1975, which modeling attributes primarily to water vapor increases. The results suggest that 20th century stratospheric ozone depletion may have been roughly 50% more than is generally supposed. Similarly, three data sets support tropospheric ozone increases over polluted Northern Hemisphere continental regions of 8.2 ± 2.1 DU during this period, which are mutually consistent with the stratospheric trends. As with paleoclimate data, which is also based on indirect proxies and/or limited spatial coverage, these results must be interpreted with caution. However, they provide the most thorough estimates presently available of ozone changes prior to the coincident onset of satellite data and halogen dominated ozone changes. If these apparent trends were real, the radiative forcing by stratospheric ozone since the 1950s would then have been -0.15 ± 0.05 W/m2, and -0.2 W/m2 since the preindustrial. For tropospheric ozone, it would have been 0.38 ± 0.10 W/m2 since the late 1950s. Combined with even a very conservative estimate of tropospheric ozone forcing prior to that time, this would be larger than current estimates since 1850 which are derived from models that are even less well constrained. These calculations demonstrate the importance of gaining a better understanding of historical ozone changes.
Comparison of maximum force to failure of 4 thoracostomy tube connecting devices.
Psathas, Ιoannis; Papazoglou, Lysimachos G; Bikiaris, Dimitrios; Savvas, Ioannis; Kazakos, Georgios; Basdani, Eleni
2017-02-01
To compare the maximum force and displacement to failure of 4 different types of thoracostomy tube connecting devices. Experimental in vitro study. Four types of thoracostomy tube connecting devices (n = 10 each). Four different connecting device configurations (10 constructs each) were tested by maximum distraction to failure using a dynamometer: (1) CTTWW-a 3-way connector with a male luer slip attached to a thoracostomy tube by a Christmas tree adapter and secured to the tube with 21 gauge orthopedic wire; (2) CTTWRCW-a 3-way connector with a male luer lock with a rotating collar attached to a tube by a Christmas tree adapter and secured to the tube with 21 gauge orthopedic wire; (3) LVSBC-a Lopez valve attached to a tube with its short-barbed connector; and (4) LVLBC-a Lopez valve attached to a tube with its long-barbed connector. The maximum distraction force to failure was significantly greater for CTTWRCW (250.9 N; range 143.7-293.6) than CTTWW (132.9 N; range 84.2-224.1), LVLBC (90.8 N; range 74.0-123.4), and LVSBC (54.6 N; range 39.6-164.2). The median displacement to failure of CTTWRCW (150 mm; range 54-190) was significantly longer than that of CTTWW (34.5 mm; range 22-70), LVLBC (32.5 mm; range 24-57), and LVSBC (16 mm; range 11-69). The CTTWRCW group required greater force to create failure and had a longer displacement to failure, making it a more secure choice for connection to thoracostomy tubes. © 2016 The American College of Veterinary Surgeons.
A preliminary study to find out maximum occlusal bite force in Indian individuals
Jain, Veena; Mathur, Vijay Prakash; Pillai, Rajath;
2014-01-01
PURPOSE: This preliminary hospital based study was designed to measure the mean maximum bite force (MMBF) in healthy Indian individuals. An attempt was made to correlate MMBF with body mass index (BMI) and some of the anthropometric features. METHODOLOGY: A total of 358 healthy subjects in the age...... in subjects having concave facial profile when compared to convex (P = 0.045) and straight (P = 0.039) facial profile. BMI and arch form showed no significant relationship with MMBF. CONCLUSION: The MMBF is found to be affected by gender and some of the anthropometric features like facial form and palatal...
Cognitive task performance causes impaired maximum force production in human hand flexor muscles.
Bray, Steven R; Graham, Jeffrey D; Martin Ginis, Kathleen A; Hicks, Audrey L
2012-01-01
The purpose of this study was to investigate effects of demanding cognitive task performance on intermittent maximum voluntary muscle contraction (MVC) force production. Participants performed either a modified Stroop or control task for 22 min. After the first min and at 3-min intervals thereafter, participants rated fatigue, perceived mental exertion and performed a 4-s MVC handgrip squeeze. A mixed ANOVA showed a significant interaction, F(7, 259)=2.43, p=.02, with a significant linear reduction in MVC force production over time in the cognitively depleting condition (p=.01) and no change for controls. Ratings of perceived mental exertion, F(7, 252)=2.39, p<.05, mirrored the force production results with a greater linear increase over time in the cognitive depletion condition (p<.001) compared to controls. Findings support current views that performance of cognitively demanding tasks diminishes central nervous system resources that govern self-regulation of physical tasks requiring maximal voluntary effort. Copyright © 2011 Elsevier B.V. All rights reserved.
Tetsuo Touge
2012-01-01
Three trials of transcranial magnetic stimulation (TMS) during maximum voluntary muscle contraction (MVC) were repeated at 15-minute intervals for 1 hour to examine the effects on motor evoked potentials (MEPs) in the digital muscles and on pinching muscle force before and after 4 high-intensity TMSs (test 1 condition) or sham TMS (test 2 condition) with MVC. Under the placebo condition, real TMS with MVC was administered only before and 1 hour after the sham TMS with MVC. Magnetic stimulation at the foramen magnum level (FMS) with MVC was performed by the same protocol as that for the test 2 condition. As a result, MEP sizes in the digital muscles significantly increased after TMS with MVC under the test conditions compared with the placebo condition (P<0.05). Pinching muscle force was significantly larger 45 minutes and 1 hour after TMS with MVC under the test conditions than under the placebo condition (P<0.05). FMS significantly decreased MEP amplitudes 60 minutes after the sham TMS with MVC (P<0.005). The present results suggest that intermittently repeated TMS with MVC facilitates motor neuron excitability and muscle force. However, further studies are needed to confirm the effects of TMS with MVC and its mechanism.
Kim, K; Lee, S K; Kim, Y H
2010-10-01
The weakening of trunk muscles is known to be related to a reduction of the stabilization function provided by the muscles to the lumbar spine; therefore, strengthening the deep muscles might reduce the possibility of injury and pain in the lumbar spine. In this study, the effect of variation in the maximum forces of the trunk muscles on the joint forces and moments in the lumbar spine was investigated. A three-dimensional finite element model of the lumbar spine that included the trunk muscles was used. The variation in the maximum forces of specific muscle groups was modelled, and the joint compressive and shear forces, as well as the resultant joint moments, which were presumed to be related to spinal stabilization from a mechanical viewpoint, were analysed. Resultant joint moments increased owing to decreases in the maximum forces of the multifidus, interspinales, intertransversarii, rotatores, iliocostalis, longissimus, psoas, and quadratus lumborum. In addition, joint shear forces and resultant joint moments were reduced as the maximum forces of the deep muscles were increased. These results from finite element analysis indicate that variation in the maximum forces exerted by the trunk muscles can affect the joint forces and moments in the lumbar spine.
Touge, Tetsuo; Urai, Yoshiteru; Ikeda, Kazuyo; Kume, Kodai; Deguchi, Kazushi
2012-01-01
Three trials of transcranial magnetic stimulation (TMS) during the maximum voluntary muscle contraction (MVC) were repeated at 15-minute intervals for 1 hour to examine the effects on motor evoked potentials (MEPs) in the digital muscles and pinching muscle force before and after 4 high-intensity TMSs (test 1 condition) or sham TMS (test 2 condition) with MVC. Under the placebo condition, real TMS with MVC was administered only before and 1 hour after the sham TMS with MVC. Magnetic stimulation at the foramen magnum level (FMS) with MVC was performed by the same protocol as that for the test 2 condition. As a result, MEP sizes in the digital muscles significantly increased after TMS with MVC under the test conditions compared with the placebo condition (P < 0.05). Pinching muscle force was significantly larger 45 minutes and 1 hour after TMS with MVC under the test conditions than under the placebo condition (P < 0.05). FMS significantly decreased MEP amplitudes 60 minutes after the sham TMS with MVC (P < 0.005). The present results suggest that intermittently repeated TMS with MVC facilitates motor neuron excitability and muscle force. However, further studies are needed to confirm the effects of TMS with MVC and its mechanism.
Abu Alhaija, Elham S J; Al Zo'ubi, Ibraheem A; Al Rousan, Mohammed E; Hammad, Mohammad M
2010-02-01
This study was carried out to record maximum occlusal bite force (MBF) in Jordanian students with three different facial types (short, average, and long) and to determine the effect of gender, type of functional occlusion, and the presence of premature contacts and parafunctional habits on MBF. Sixty dental students (30 males and 30 females) were divided into three equal groups based on the maxillomandibular planes angle (Max/Mand) and the degree of anterior overlap: short-faced students with a deep anterior overbite and a low Max/Mand angle, average-faced students, and long-faced students (Max/Mand > or = 32 degrees). Their age ranged between 20 and 23 years. MBF was measured using a hydraulic occlusal force gauge. Occlusal factors, including the type of functional occlusion, the presence of premature contacts, and parafunctional habits, were recorded. Differences between groups were assessed using a t-test and analysis of variance. The average MBF in Jordanian adults was 573.42 +/- 140.18 N. Those with a short face had the highest MBF (679.60 +/- 117.46 N) while the long-face types had the lowest MBF (453.57 +/- 98.30 N; P < 0.001). The average MBF was 599.02 +/- 145.91 N in males and 546.97 +/- 131.18 N in females (P = 0.149). No gender differences were observed. The average MBF was higher in patients with premature contacts than in those without, while it did not differ in subjects with different types of functional occlusion or in the presence of parafunctional habits.
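The between-group comparison above rests on a one-way analysis of variance; a minimal sketch of the F statistic it is based on, using hypothetical MBF samples rather than the study's data:

```python
def one_way_anova_f(*groups):
    # One-way ANOVA F statistic: ratio of between-group to
    # within-group variance. groups are lists of observations.
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical MBF samples (N) for short, average and long facial types:
short_f = [690.0, 670.0, 680.0]
avg_f = [580.0, 570.0, 575.0]
long_f = [450.0, 460.0, 455.0]
assert one_way_anova_f(short_f, avg_f, long_f) > 100.0  # clearly separated groups
```

A large F relative to the F distribution's critical value corresponds to the reported P < 0.001 for the facial-type effect.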
Prathapa, Siriyara Jagannatha; Mondal, Swastik; van Smaalen, Sander
2013-04-01
Dynamic model densities according to Mondal et al. [(2012), Acta Cryst. A68, 568-581] are presented for independent atom models (IAM), IAMs after high-order refinements (IAM-HO), invariom (INV) models and multipole (MP) models of α-glycine, DL-serine, L-alanine and Ala-Tyr-Ala at T ≃ 20 K. Each dynamic model density is used as prior in the calculation of electron density according to the maximum entropy method (MEM). We show that at the bond-critical points (BCPs) of covalent C-C and C-N bonds the IAM-HO and INV priors produce reliable MEM density maps, including reliable values for the density and its Laplacian. The agreement between these MEM density maps and dynamic MP density maps is less good for polar C-O bonds, which is explained by the large spread of values of topological descriptors of C-O bonds in static MP densities. The density and Laplacian at BCPs of hydrogen bonds have similar values in MEM density maps obtained with all four kinds of prior densities. This feature is related to the smaller spatial variation of the densities in these regions, as expressed by small magnitudes of the Laplacians and the densities. It is concluded that the use of the IAM-HO prior instead of the IAM prior leads to improved MEM density maps. This observation shows interesting parallels to MP refinements, where the use of the IAM-HO as an initial model is the accepted procedure for solving MP parameters. A deconvolution of thermal motion and static density that is better than the deconvolution of the IAM appears to be necessary in order to arrive at the best MP models as well as at the best MEM densities.
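The MEM calculations above maximize an entropy defined relative to a prior model m (the Skilling form referenced in the header abstract); a minimal sketch with illustrative pixel values:

```python
from math import log

def skilling_entropy(rho, m):
    # S(rho; m) = sum_i [rho_i - m_i - rho_i * ln(rho_i / m_i)]
    # Each term is <= 0 and vanishes only when rho_i == m_i, so in the
    # absence of data constraints S is maximal at rho == m, and any
    # departure from the prior model "costs entropy".
    return sum(r - p - r * log(r / p) for r, p in zip(rho, m))

m = [0.2, 0.5, 0.3]                     # prior model density (e.g. IAM-HO)
assert abs(skilling_entropy(m, m)) < 1e-12   # maximum reached at rho = m
assert skilling_entropy([0.3, 0.4, 0.3], m) < 0.0   # departures cost entropy
```

In a full MEM reconstruction this S would be maximized subject to a chi-squared constraint on the observed structure factors; the constrained optimization itself is omitted here.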
Synchronous monitoring of muscle dynamics and muscle force for maximum isometric tetanus
Zakir Hossain, M.; Grill, Wolfgang
2010-03-01
Skeletal muscle is a classic example of a biological soft matter. At both the macroscopic and microscopic levels, skeletal muscle is exquisitely organized for force generation and movement. In addition to the dynamics of contracting and relaxing muscle, which can be monitored with ultrasound, variations in muscle force are also expected to be monitored. To observe such force and sideways-expansion variations synchronously for skeletal muscle, a novel detection scheme has been developed. As already introduced for the detection of sideways expansion variations of the muscle, ultrasonic transducers are mounted sideways at opposing positions on the monitored muscle. To detect variations of the muscle force, the angle of pull of the monitored muscle is restricted by the mechanical pull of the sonic force sensor. Under this condition, any variation in the time-of-flight (TOF) of the transmitted ultrasonic signals can only be introduced by a variation of the path length between the transducers. The observed variations of the TOF are compared to the signals obtained by ultrasound monitoring of the muscle dynamics. The general behavior of the muscle dynamics and the muscle force is almost identical. Since muscle force is also related to psychological boosting-up effects, the influence of boosting-up on muscle force and muscle dynamics can also be quantified from this study. Length-tension (force-length) and force-velocity relationships can also be derived quantitatively with such monitoring.
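The path-length inference from time-of-flight described above amounts to d = c·t; a minimal sketch, assuming a nominal speed of sound in muscle of 1580 m/s (an illustrative value, not one reported by the authors):

```python
def muscle_width_from_tof(tof_s, speed_of_sound_m_s=1580.0):
    # Path length between the opposing transducers inferred from the
    # time-of-flight of the transmitted ultrasonic signal: d = c * t.
    # 1580 m/s is a commonly assumed speed of sound in skeletal muscle.
    return speed_of_sound_m_s * tof_s

# A 1-microsecond increase in TOF corresponds to ~1.58 mm of extra path,
# which is the sideways-expansion signal the scheme tracks:
d0 = muscle_width_from_tof(30e-6)
d1 = muscle_width_from_tof(31e-6)
assert abs((d1 - d0) - 1.58e-3) < 1e-9
```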
Liebensteiner, Michael C; Platzer, Hans-Peter; Burtscher, Martin; Hanser, Friedrich; Raschner, Christian
2012-03-01
To investigate gender differences during eccentric leg-press exercise. Tears of the anterior cruciate ligament (ACL) are considered to be related to eccentric tasks, altered neuromuscular control (e.g., reduced co-contraction of the hamstrings), and increased knee abduction (valgus alignment). Based on these observations and the fact that ACL tears are more common in women, it was hypothesized that men and women differ significantly with regard to key parameters of force, knee stabilization, and muscle activity when exposed to maximum eccentric leg extension. Thirteen women and thirteen men were matched for age and physical activity. They performed maximum isokinetic eccentric leg-pressing against footplates of varied stability. The latter was done because earlier studies had shown that perturbational test conditions might be relevant with respect to ACL injuries. Key parameters of force, frontal plane knee stabilization, and recruitment of significant muscles crossing the knee were recorded. The 'force stabilization deficit' (the difference between maximum forces under normal and perturbed leg-pressing) did not differ significantly between genders. Likewise, parameters of muscle activity and frontal plane leg stabilization revealed no significant differences between men and women. This study is novel in that gender differences in parameters of force, muscle activity, and leg kinematics were investigated under functional conditions of eccentric leg-pressing. No gender differences were observed in the measured parameters. However, the conclusion should be viewed with caution because the findings concurred with, but also contrasted with, previous research in this field. Diagnostic study, Level III.
Sawers, Andrew; Bhattacharjee, Tapomayukh; McKay, J Lucas; Hackney, Madeleine E; Kemp, Charles C; Ting, Lena H
2017-01-31
Physical interactions between two people are ubiquitous in our daily lives, and an integral part of many forms of rehabilitation. However, few studies have investigated forces arising from physical interactions between humans during a cooperative motor task, particularly during overground movements. As such, the direction and magnitude of interaction forces between two human partners, how those forces are used to communicate movement goals, and whether they change with motor experience remain unknown. A better understanding of how cooperative physical interactions are achieved in healthy individuals of different skill levels is a first step toward understanding principles of physical interactions that could be applied to robotic devices for motor assistance and rehabilitation. Interaction forces between expert and novice partner dancers were recorded while performing a forward-backward partnered stepping task with assigned "leader" and "follower" roles. Their position was recorded using motion capture. The magnitude and direction of the interaction forces were analyzed and compared across groups (i.e. expert-expert, expert-novice, and novice-novice) and across movement phases (i.e. forward, backward, change of direction). All dyads were able to perform the partnered stepping task with some level of proficiency. Relatively small interaction forces (10-30 N) were observed across all dyads, but were significantly larger among expert-expert dyads. Interaction forces were also found to be significantly different across movement phases. However, interaction force magnitude did not change as whole-body synchronization between partners improved across trials. Relatively small interaction forces may communicate movement goals (i.e. "what to do and when to do it") between human partners during cooperative physical interactions. Moreover, these small interaction forces vary with prior motor experience, and may act primarily as guiding cues that convey information about
Xuan Guo
2016-01-01
Full Text Available The theoretical formula of the maximum internal forces for a circular tunnel lining structure under underground impact loads is deduced in this paper. The internal force calculation formulas under different equivalent forms of impact pseudostatic loads are obtained. Furthermore, by comparing the theoretical solution with the measured data of the top blasting model test of a circular tunnel, it is found that the proposed theoretical results accord well with the experimental values. The corresponding equivalent impact pseudostatic triangular load is the most realistic pattern of all the tested equivalent forms. The equivalent impact pseudostatic load model and the maximum solution of the internal force for the tunnel lining structure are thereby partially verified.
Lovell, Dale I; Cuneo, Ross; Gass, Greg C
2010-06-01
This study examined the effect of strength training (ST) and short-term detraining on maximum force and rate of force development (RFD) in previously sedentary, healthy older men. Twenty-four older men (70-80 years) were randomly assigned to a ST group (n = 12) and a C group (control, n = 12). Training consisted of three sets of six to ten repetitions on an incline squat at 70-90% of one repetition maximum, three times per week for 16 weeks, followed by 4 weeks of detraining. Regional muscle mass was assessed before and after training by dual-energy X-ray absorptiometry. Training increased RFD, maximum bilateral isometric force, force in 500 ms, upper leg muscle mass and strength above pre-training values (14, 25, 22, 7 and 90%, respectively). These results indicate that ST increases the maximum force and RFD of older men. However, older individuals may lose some neuromuscular performance after a period of short-term detraining, and resistance exercise should be performed on a regular basis to maintain training adaptations.
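The rate of force development reported above is the slope of the force-time curve; a minimal sketch estimating peak RFD from a hypothetical isometric force rise (not the study's recordings):

```python
from math import exp

def rate_of_force_development(force_n, time_s):
    # Peak RFD: the maximum slope of the force-time curve (N/s),
    # estimated here by first-order finite differences.
    slopes = [(f1 - f0) / (t1 - t0)
              for f0, f1, t0, t1 in zip(force_n, force_n[1:], time_s, time_s[1:])]
    return max(slopes)

# Hypothetical isometric force rise sampled at 100 Hz, plateauing near 800 N:
t = [i * 0.01 for i in range(50)]
f = [800.0 * (1.0 - exp(-ti / 0.1)) for ti in t]
assert 7000.0 < rate_of_force_development(f, t) < 8000.0
```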
Beenakker, EAC; van der Hoeven, JH; Fock, JM; Maurits, NM
2001-01-01
Since muscle force and functional ability are not related linearly, maximum force can be reduced while functional ability is still maintained. For diagnostic and therapeutic reasons, loss of muscle force should be detected as early and as accurately as possible. Because of growth factors, maximum muscle
Maximum forces sustained during various methods of exiting commercial tractors, trailers and trucks.
Fathallah, F A; Cotnam, J P
2000-02-01
Many commercial vehicles have steps and grab-rails to assist the driver in safely entering/exiting the vehicle. However, many drivers do not use these aids. The purpose of this study was to compare impact forces experienced during various exit methods from commercial equipment. The study investigated impact forces of ten male subjects while exiting two tractors, a step-van, a box-trailer, and a cube-van. The results showed that exiting from cab-level or trailer-level resulted in impact forces as high as 12 times the subject's body weight; whereas, fully utilizing the steps and grab-rails resulted in impact forces less than two times body weight. An approach that emphasizes optimal design of entry/exit aids coupled with driver training and education is expected to minimize exit-related injuries.
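The impact forces above are expressed as multiples of body weight; a trivial sketch of that normalization, with hypothetical numbers:

```python
def impact_in_body_weights(peak_force_n, body_mass_kg, g=9.81):
    # Peak exit impact force expressed as a multiple of body weight
    # (BW = m * g), the normalization used to compare exit methods.
    return peak_force_n / (body_mass_kg * g)

# A hypothetical 80 kg driver sustaining a 9.4 kN peak landing force
# experiences roughly 12 times body weight:
assert 11.0 < impact_in_body_weights(9400.0, 80.0) < 13.0
```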
Dargeviciute, Gintare; Masiulis, Nerijus; Kamandulis, Sigitas; Skurvydas, Albertas; Westerblad, Håkan
2013-10-15
We studied the relation between two common force modifications in skeletal muscle: the prolonged force depression induced by unaccustomed eccentric contractions, and the residual force depression (rFD) observed immediately after active shortening. We hypothesized that rFD originates from distortion within the sarcomeres and the extent of rFD: 1) correlates to the force and work performed during the shortening steps, which depend on sarcomeric integrity; and 2) is increased by sarcomeric disorganization induced by eccentric contractions. Nine healthy untrained men (mean age 26 yr) participated in the study. rFD was studied in electrically stimulated knee extensor muscles. rFD was defined as the reduction in isometric torque after active shortening compared with the torque in a purely isometric contraction. Eccentric contractions were performed as 50 repeated drop jumps with active deceleration to 90° knee angle, immediately followed by a maximal upward jump. rFD was assessed before and 5 min to 72 h after drop jumps. The series of drop jumps caused a prolonged force depression, which was about two times larger at 20-Hz than at 50-Hz stimulation. There was a significant correlation between increasing rFD and increasing mechanical work performed during active shortening both before and after drop jumps. In addition, a given rFD was obtained at a markedly lower mechanical work after drop jumps. In conclusion, the extent of rFD correlates to the mechanical work performed during active shortening. A series of eccentric contractions causes a prolonged reduction of isometric force. In addition, eccentric contractions exaggerate rFD, which further decreases muscle performance during dynamic contractions.
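The rFD definition above (isometric torque after active shortening versus a purely isometric contraction) reduces to a percentage reduction; a sketch with hypothetical torques:

```python
def residual_force_depression(torque_iso_nm, torque_after_shortening_nm):
    # rFD (%): reduction in isometric torque after active shortening,
    # relative to the torque of a purely isometric contraction at the
    # same final muscle length.
    return 100.0 * (torque_iso_nm - torque_after_shortening_nm) / torque_iso_nm

# Hypothetical knee extensor torques: 200 Nm purely isometric,
# 180 Nm after active shortening -> 10% rFD:
assert residual_force_depression(200.0, 180.0) == 10.0
```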
Saarinen, Juha J.; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Evans, Alistair R.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Sibly, Richard M.; Stephens, Patrick R.; Theodor, Jessica; Uhen, Mark D.; Smith, Felisa A.
2014-01-01
There is accumulating evidence that macroevolutionary patterns of mammal evolution during the Cenozoic follow similar trajectories on different continents. This would suggest that such patterns are strongly determined by global abiotic factors, such as climate, or by basic eco-evolutionary processes such as filling of niches by specialization. The similarity of pattern would be expected to extend to the history of individual clades. Here, we investigate the temporal distribution of maximum size observed within individual orders globally and on separate continents. While the maximum size of individual orders of large land mammals show differences and comprise several families, the times at which orders reach their maximum size over time show strong congruence, peaking in the Middle Eocene, the Oligocene and the Plio-Pleistocene. The Eocene peak occurs when global temperature and land mammal diversity are high and is best explained as a result of niche expansion rather than abiotic forcing. Since the Eocene, there is a significant correlation between maximum size frequency and global temperature proxy. The Oligocene peak is not statistically significant and may in part be due to sampling issues. The peak in the Plio-Pleistocene occurs when global temperature and land mammal diversity are low, it is statistically the most robust one and it is best explained by global cooling. We conclude that the macroevolutionary patterns observed are a result of the interplay between eco-evolutionary processes and abiotic forcing. PMID:24741007
Forced orthodontic eruption for augmentation of soft and hard tissue prior to implant placement
Rafael Scaf de Molon
2013-01-01
Full Text Available Forced orthodontic eruption (FOE) is a non-surgical treatment option that allows modification of the osseous and gingival topography. The aim of this article is to present a clinical case of FOE which resulted in an improvement of the amount of bone and soft tissue available for implant site development. The patient was referred for treatment of mobility and the unesthetic appearance of the maxillary incisors. Clinical and radiographic examination revealed inflamed gingival tissue, horizontal and vertical tooth mobility, and interproximal angular bone defects. A multidisciplinary treatment approach using FOE, tooth extraction, and immediate implant placement was chosen to achieve better esthetic results. The use of FOE in periodontally compromised teeth promoted the formation of new bone and soft tissue in a coronal direction, without additional surgical procedures, enabling an esthetic and functional implant-supported restoration.
Ovalle, E. M.; Bravo, M. A.; Villalobos, C. U.; Foppiano, A. J.
2013-10-01
Ionospheric variability observed prior to major earthquakes has been studied for decades. In particular, in many such studies the identification of ionospheric precursors of large earthquakes has been regarded as a specific goal. This paper analyses the observations of the maximum electron concentration (NmF2) over Concepción (36.8°S; 73.0°W) and of the total electron content (TEC) for an area covering the rupture zone corresponding to the very large Chile earthquake of 27 February 2010. The analyses used here are similar to those published before for many earthquakes in Taiwan, Japan and Russia. Possible NmF2 and TEC precursors are compared with other precursors proposed for the same earthquake using different TEC determinations and satellite observations of electron/ion concentration, energetic particle bursts and electromagnetic emissions. Some possible precursors derived from the various observations are consistent with each other. However, none can be unambiguously associated with the Chilean earthquake.
Topp, Robert; Ng, Alex; Cybulski, Alyson; Skelton, Katalin; Papanek, Paula
2014-07-01
The purpose of this study was to compare the vascular responses in the brachial artery and the perceived intensity of two different formulas of topical menthol gel prior to and following a bout of maximum voluntary muscular contractions (MVMCs). Eighteen adults completed the same protocol on different days using blinded topical menthol gels (Old Formula and New Formula). Heart rate, brachial artery blood flow (ml/min), vessel diameter and reported intensity of sensation were measured at baseline (T1), at 5 min after application of the gel to the upper arm (T2), and immediately following five MVMC hand grips (T3). The New Formula exhibited a significant decline in blood flow (-22.6%) between T1 and T2, which was not different from the nonsignificant decline under the Old Formula (-21.8%). Both formulas resulted in a significant increase in perceived intensity of sensation between T1 and T2. Blood flow increased significantly with the New Formula (488%) between T2 and T3 and nonsignificantly with the Old Formula (355%).
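The blood-flow responses above are reported as percentage changes from baseline; a trivial sketch with hypothetical flow values:

```python
def percent_change(baseline, value):
    # Percentage change relative to baseline, the form in which the
    # blood-flow responses (e.g. -22.6% from T1 to T2) are reported.
    return 100.0 * (value - baseline) / baseline

# Hypothetical brachial blood flow falling from 100 to 77.4 ml/min
# corresponds to a -22.6% change:
assert abs(percent_change(100.0, 77.4) + 22.6) < 1e-9
```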
Abdulla Almazrouee
2011-04-01
Full Text Available Coated carbide inserts are considered vital components in machining processes, and the surface integrity of inserts and their coatings is a decisive factor for tool life. Atomic Force Microscopy (AFM) has gained acceptance over a wide spectrum of research and science applications. When used in a proper systematic manner, AFM can be a valuable tool for the assessment of tool surface integrity. The aim of this paper is to assess the integrity of coated and uncoated carbide inserts using AFM analytical parameters. The surface morphology of as-received coated and uncoated carbide inserts is examined, analyzed, and characterized through the determination of the appropriate scanning settings, the suitable data-type imaging techniques and the most representative data analysis parameters, using the MultiMode AFM microscope in contact mode. The results indicate that it is preferable to start with a wider scan size in order to get a more accurate interpretation of the surface topography. The results credibly support the idea that AFM can be used efficiently in detecting flaws and defects of coated and uncoated carbide inserts using specific features such as the "Roughness" and "Section" parameters. A recommended strategy is provided for surface examination of cutting inserts using various AFM controlling parameters.
Grossi Márcio L
2007-04-01
Full Text Available Abstract Background Vertical facial pattern may be related to the direction of pull of the masticatory muscles, yet its effect on occlusal force and elastic deformation of the mandible is still unclear. This study tested whether the variation in vertical facial pattern is related to the variation in maximum occlusal force (MOF) and medial mandibular flexure (MMF) in 51 fully-dentate adults. Methods Data from cephalometric analysis according to the method of Ricketts were used to divide the subjects into three groups: Dolichofacial (n = 6), Mesofacial (n = 10) and Brachyfacial (n = 35). Bilateral MOF was measured using a cross-arch force transducer placed in the first molar region. For MMF, impressions of the mandibular occlusal surface were made in rest (R) and in maximum opening (O) positions. The impressions were scanned, and reference points were selected on the occlusal surface of the contralateral first molars. MMF was calculated by subtracting the intermolar distance in O from the intermolar distance in R. Data were analysed by ANCOVA (fixed factors: facial pattern, sex; covariate: body mass index (BMI); alpha = 0.05). Results No significant difference in MOF or MMF was found among the three facial patterns (P = 0.62 and P = 0.72, respectively). BMI was not a significant covariate for MOF or MMF (P > 0.05). Sex was a significant factor only for MOF (P = 0.007); males had higher MOF values than females. Conclusion These results suggest that MOF and MMF did not vary as a function of vertical facial pattern in this Brazilian sample.
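The MMF computation described in the Methods (intermolar distance in maximum opening subtracted from that in rest) can be sketched directly; the distances below are hypothetical:

```python
def medial_mandibular_flexure(intermolar_rest_mm, intermolar_open_mm):
    # MMF as defined in the study: intermolar distance in rest (R)
    # minus intermolar distance in maximum opening (O); positive
    # values indicate medial convergence of the mandible on opening.
    return intermolar_rest_mm - intermolar_open_mm

# Hypothetical first-molar reference-point distances (mm):
assert abs(medial_mandibular_flexure(46.20, 46.05) - 0.15) < 1e-9
```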
Otto-Bliesner, Bette L.; Brady, Esther C.
2010-01-01
Proxy records indicate that the locations and magnitudes of freshwater forcing to the Atlantic Ocean basin as iceberg discharges into the high-latitude North Atlantic, Laurentide meltwater input to the Gulf of Mexico, or meltwater diversion to the North Atlantic via the St. Lawrence River and other eastern outlets may have influenced the North Atlantic thermohaline circulation and global climate. We have performed Last Glacial Maximum (LGM) simulations with the NCAR Community Climate System Model (CCSM3) in which the magnitude of the freshwater forcing has been varied from 0.1 to 1 Sv and inserted either into the subpolar North Atlantic Ocean or the Gulf of Mexico. In these glacial freshening experiments, the less dense freshwater provides a lid on the ocean water below, suppressing ocean convection and interaction with the atmosphere above and reducing the Atlantic Meridional Overturning Circulation (AMOC). This is the case whether the freshwater is added directly to the area of convection south of Greenland or transported there by the subtropical and subpolar gyres when added to the Gulf of Mexico. The AMOC reduction is less for the smaller freshwater forcings, but is not linear with the size of the freshwater perturbation. The recovery of the AMOC from a "slow" state is ˜200 years for the 0.1 Sv experiment and ˜500 years for the 1 Sv experiment. For glacial climates, with large Northern Hemisphere ice sheets and reduced greenhouse gases, the cold subpolar North Atlantic is primed to respond rapidly and dramatically to freshwater that is either directly dumped into this region or after being advected from the Gulf of Mexico. Greenland temperatures cool by 6-8 °C in all the experiments, with little sensitivity to the magnitude, location or duration of the freshwater forcing, but exhibiting large seasonality. Sea ice is important for explaining the responses. The Northern Hemisphere high latitudes are slow to recover. Antarctica and the Southern Ocean show a
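The freshwater forcings above are given in Sverdrups; the unit conversion is simply 1 Sv = 10^6 m^3/s:

```python
def sverdrup_to_m3_per_s(sv):
    # Oceanographic volume-flux conversion: 1 Sverdrup (Sv) = 1e6 m^3/s.
    return sv * 1.0e6

# Even the weakest forcing in these experiments, 0.1 Sv, is a
# freshwater input of about 100,000 cubic metres per second:
assert abs(sverdrup_to_m3_per_s(0.1) - 1.0e5) < 1e-3
```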
Várnai, Csilla; Burkoff, Nikolas S; Wild, David L
2013-12-10
Maximum Likelihood (ML) optimization schemes are widely used for parameter inference. They maximize the likelihood of some experimentally observed data with respect to the model parameters iteratively, following the gradient of the logarithm of the likelihood. Here, we employ a ML inference scheme to infer a generalizable, physics-based coarse-grained protein model (which includes Gō-like biasing terms to stabilize secondary structure elements in room-temperature simulations), using native conformations of a training set of proteins as the observed data. Contrastive divergence, a novel statistical machine learning technique, is used to efficiently approximate the direction of the gradient ascent, which enables the use of a large training set of proteins. Unlike previous work, the generalizability of the protein model allows the folding of peptides and a protein (protein G) which are not part of the training set. We compare the same force field with different van der Waals (vdW) potential forms: a hard cutoff model, and a Lennard-Jones (LJ) potential with vdW parameters inferred or adopted from the CHARMM or AMBER force fields. Simulations of peptides and protein G show that the LJ model with inferred parameters outperforms the hard cutoff potential, which is consistent with previous observations. Simulations using the LJ potential with inferred vdW parameters also outperform the protein models with adopted vdW parameter values, demonstrating that model parameters generally cannot be used with force fields with different energy functions. The software is available at https://sites.google.com/site/crankite/.
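The 12-6 Lennard-Jones form compared above can be sketched as follows; the epsilon and sigma values below are illustrative placeholders, not the inferred vdW parameters of the paper:

```python
def lennard_jones(r, epsilon, sigma):
    # 12-6 Lennard-Jones vdW potential:
    # V(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6)
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The well minimum V = -epsilon lies at r = 2**(1/6) * sigma:
eps, sig = 0.5, 3.4          # illustrative values (kcal/mol, Angstrom)
r_min = 2.0 ** (1.0 / 6.0) * sig
assert abs(lennard_jones(r_min, eps, sig) + eps) < 1e-9
```

A hard cutoff model, by contrast, replaces this smooth well with a step, which is one reason the inferred LJ parameters can outperform it.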
Blaschek, M.; Renssen, H.
2012-01-01
The relatively warm early Holocene climate in the Nordic Seas, known as the Holocene Thermal Maximum (HTM), is often associated with an orbitally forced summer insolation maximum at 10 ka BP. The spatial and temporal response recorded in proxy data in the North Atlantic and the Nordic Seas reveal a
Darwent, David; Ferguson, Sally A; Sargent, Charli; Paech, Gemma M; Williams, Louise; Zhou, Xuan; Matthews, Raymond W; Dawson, Drew; Kennaway, David J; Roach, Greg D
2010-07-01
Shiftworkers are often required to sleep at inappropriate phases of their circadian timekeeping system, with implications for the dynamics of ultradian sleep stages. The independent effects of these changes on cognitive throughput performance are not well understood. This is because the effects of sleep on performance are usually confounded with circadian factors that cannot be controlled under normal day/night conditions. The aim of this study was to assess the contribution of prior wake, core body temperature, and sleep stages to cognitive throughput performance under conditions of forced desynchrony (FD). A total of 11 healthy young adult males resided in a sleep laboratory in which day/night zeitgebers were eliminated and ambient room temperature, lighting levels, and behavior were controlled. The protocol included 2 training days, a baseline day, and 7 x 28-h FD periods. Each FD period consisted of an 18.7-h wake period followed by a 9.3-h rest period. Sleep was assessed using standard polysomnography. Core body temperature and physical activity were assessed continuously in 1-min epochs. Cognitive throughput was measured by a 5-min serial addition and subtraction (SAS) task and a 90-s digit symbol substitution (DSS) task. These were administered in test sessions scheduled every 2.5 h across the wake periods of each FD period. On average, sleep periods had a mean (+/- standard deviation) duration of 8.5 (+/-1.2) h in which participants obtained 7.6 (+/-1.4) h of total sleep time. This included 4.2 (+/-1.2) h of stage 1 and stage 2 sleep (S1-S2 sleep), 1.6 (+/-0.6) h of slow-wave sleep (SWS), and 1.8 (+/-0.6) h of rapid eye movement (REM) sleep. A mixed-model analysis with five covariates indicated significant fixed effects on cognitive throughput for circadian phase, prior wake time, and amount of REM sleep. Significant effects for S1-S2 sleep and SWS were not found. The results demonstrate that variations in core body temperature, time awake, and amount of
Magyari, E. K.; Veres, D.; Wennrich, V.; Wagner, B.; Braun, M.; Jakab, G.; Karátson, D.; Pál, Z.; Ferenczy, Gy; St-Onge, G.; Rethemeyer, J.; Francois, J.-P.; von Reumont, F.; Schäbitz, F.
2014-12-01
The Carpathian Mountains were one of the main mountain reserves of the boreal and cool temperate flora during the Last Glacial Maximum (LGM) in East-Central Europe. Previous studies demonstrated Lateglacial vegetation dynamics in this area; however, our knowledge on the LGM vegetation composition is very limited due to the scarcity of suitable sedimentary archives. Here we present a new record of vegetation, fire and lacustrine sedimentation from the youngest volcanic crater of the Carpathians (Lake St Anne, Lacul Sfânta Ana, Szent-Anna-tó) to examine environmental change in this region during the LGM and the subsequent deglaciation. Our record indicates the persistence of boreal forest steppe vegetation (with Pinus, Betula, Salix, Populus and Picea) in the foreland and low mountain zone of the East Carpathians and Juniperus shrubland at higher elevation. We demonstrate attenuated response of the regional vegetation to maximum global cooling. Between ˜22,870 and 19,150 cal yr BP we find increased regional biomass burning that is antagonistic with the global trend. Increased regional fire activity suggests extreme continentality likely with relatively warm and dry summers. We also demonstrate xerophytic steppe expansion directly after the LGM, from ˜19,150 cal yr BP, and regional increase in boreal woodland cover with Pinus and Betula from 16,300 cal yr BP. Plant macrofossils indicate local (950 m a.s.l.) establishment of Betula nana and Betula pubescens at 15,150 cal yr BP, Pinus sylvestris at 14,700 cal yr BP and Larix decidua at 12,870 cal yr BP. Pollen data furthermore support population genetic inferences regarding the regional presence of some temperate deciduous trees during the LGM (Fagus sylvatica, Corylus avellana, Fraxinus excelsior). Our sedimentological data also demonstrate intensified aeolian dust accumulation between 26,000 and 20,000 cal yr BP.
Eniko M. MAGYARI
2014-11-01
The Carpathian Mountains were one of the main mountain reserves of the boreal and cool temperate flora during the Last Glacial Maximum (LGM) in East-Central Europe. Previous studies demonstrated late glacial vegetation dynamics in this area; however, our knowledge of the LGM vegetation composition is limited due to the scarcity of suitable sedimentary archives. Here we present a new record of vegetation, fire and lacustrine sedimentation from the youngest volcanic crater of the Carpathians (Lake St Anne, Lacul Sfânta Ana, Szent-Anna-tó) to examine environmental change in this region during the LGM and the subsequent deglaciation. Our record indicates the persistence of boreal forest steppe vegetation (Pinus sylvestris, Pinus mugo, Pinus cembra, Betula, Salix, Populus, Picea abies) in the foreland and low mountain zone of the East Carpathians and Juniperus shrubland at higher elevation. We demonstrate an attenuated response of the regional vegetation to maximum global cooling. Between ~22,870 and 19,150 cal yr BP we find increased regional biomass burning that is antagonistic with the global trend. Increased regional fire activity suggests extreme continentality, likely with relatively warm and dry summers. We also demonstrate xerophytic steppe expansion directly after the LGM, from ~19,150 cal yr BP, and a regional increase in boreal woodland cover with Pinus and Betula from 16,300 cal yr BP. Plant macrofossils indicate local (950 m a.s.l.) establishment of Betula nana and B. pubescens at 15,150 cal yr BP, Pinus sylvestris at 14,700 cal yr BP and Larix decidua at 12,870 cal yr BP. Pollen data furthermore hint at the regional presence of some temperate deciduous trees during the LGM (Fagus sylvatica, Carpinus betulus, Corylus avellana, Fraxinus excelsior, Ulmus). We also present a pollen-based quantitative climate reconstruction from this site and discuss its connection with other climate reconstructions and climate modeling results.
Sugiura, Yoshito; Hatanaka, Yasuhiko; Arai, Tomoaki; Sakurai, Hiroaki; Kanada, Yoshikiyo
2016-04-01
We aimed to investigate whether a linear regression formula based on the relationship between joint torque and angular velocity measured using a high-speed video camera and image measurement software is effective for estimating 1 repetition maximum (1RM) and isometric peak torque in knee extension. Subjects comprised 20 healthy men (mean ± SD; age, 27.4 ± 4.9 years; height, 170.3 ± 4.4 cm; and body weight, 66.1 ± 10.9 kg). The exercise load ranged from 40% to 150% 1RM. Peak angular velocity (PAV) and peak torque were used to estimate 1RM and isometric peak torque. To elucidate the relationship between force and velocity in knee extension, the relationship between the relative proportion of 1RM (% 1RM) and PAV was examined using simple regression analysis. The concordance rate between the estimated value and actual measurement of 1RM and isometric peak torque was examined using intraclass correlation coefficients (ICCs). Reliability of the regression line of PAV and % 1RM was 0.95. The concordance rate between the actual measurement and estimated value of 1RM resulted in an ICC(2,1) of 0.93 and that of isometric peak torque had an ICC(2,1) of 0.87 and 0.86 for 6 and 3 levels of load, respectively. Our method for estimating 1RM was effective for decreasing the measurement time and reducing patients' burden. Additionally, isometric peak torque can be estimated using 3 levels of load, as we obtained the same results as those reported previously. We plan to expand the range of subjects and examine the generalizability of our results.
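The load-velocity approach described above can be sketched numerically: fit a line to relative load (%1RM) versus peak angular velocity from a few trials, then invert it to convert one absolute lift into an estimated 1RM. All numbers below are synthetic, for illustration only; they are not the study's regression coefficients.

```python
import numpy as np

# Synthetic load-velocity data: peak angular velocity (deg/s) falls
# roughly linearly as relative load (%1RM) rises.
pct_1rm = np.array([40.0, 60.0, 80.0, 100.0])   # relative load, %1RM
pav = np.array([320.0, 240.0, 160.0, 80.0])     # peak angular velocity

# Simple linear regression of %1RM on PAV, as in the study's approach.
slope, intercept = np.polyfit(pav, pct_1rm, 1)

def estimate_1rm_kg(velocity_deg_s, absolute_load_kg):
    """Estimate 1RM: the lifted load corresponds to the predicted %1RM."""
    predicted_pct = slope * velocity_deg_s + intercept
    return absolute_load_kg / (predicted_pct / 100.0)

# A 30 kg lift performed at 240 deg/s maps to 60 %1RM -> 1RM = 50 kg.
one_rm = estimate_1rm_kg(240.0, 30.0)
```

With real data the fit would be noisy, and agreement with measured 1RM would be checked with ICCs as the abstract describes.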
Deary Ian J
2009-04-01
Abstract Background Brain size is associated with cognitive ability in adulthood (correlation ~ .3), but few studies have investigated the relationship in normal ageing, particularly beyond age 75 years. With age both brain size and fluid-type intelligence decline, and regional atrophy is often suggested as causing decline in specific cognitive abilities. However, an association between brain size and intelligence may be due to the persistence of this relationship from earlier life. Methods We recruited 107 community-dwelling volunteers (29% male) aged 75-81 years for cognitive testing and neuroimaging. We used principal components analysis to derive a 'general cognitive factor' (g) from tests of fluid-type ability. Using semi-automated analysis, we measured whole brain volume (WBV), intracranial area (ICA; an estimate of maximal brain volume), and volume of frontal and temporal lobes, amygdalo-hippocampal complex, and ventricles. Brain atrophy was estimated by correcting WBV for ICA. Results WBV correlated with general cognitive ability (g) (r = .21, P Conclusion The association between brain regions and specific cognitive abilities in community-dwelling people of older age is due to the life-long association between whole brain size and general cognitive ability, rather than atrophy of specific regions. Researchers and clinicians should therefore be cautious of interpreting global or regional brain atrophy on neuroimaging as contributing to cognitive status in older age without taking into account prior mental ability and brain size.
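The two analysis steps named above (derive a general factor g from several cognitive tests via principal components, then correlate it with brain volume) can be sketched on synthetic data. All scores and the brain-volume proxy below are simulated; nothing here reproduces the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores on four fluid-ability tests for 107 subjects, all
# loading on one shared latent factor plus test-specific noise.
n = 107
latent = rng.normal(size=n)
scores = np.column_stack([latent + rng.normal(scale=0.8, size=n)
                          for _ in range(4)])

# Standardize each test, then take the first principal component as 'g'.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
g = z @ vt[0]          # sign of a principal component is arbitrary

# Correlate g with a brain-volume proxy tracking the same latent trait.
wbv = latent + rng.normal(scale=2.0, size=n)
r = np.corrcoef(g, wbv)[0, 1]
```

In the study, WBV would additionally be corrected for intracranial area before testing atrophy effects.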
Karlsson, J S; Ostlund, N; Larsson, B; Gerdle, B
2003-10-01
Frequency analysis of myoelectric (ME) signals, using the mean power spectral frequency (MNF), has been widely used to characterize peripheral muscle fatigue during isometric contractions assuming constant force. However, during repetitive isokinetic contractions performed with maximum effort, output (force or torque) will decrease markedly during the initial 40-60 contractions, followed by a phase with little or no change. MNF shows a similar pattern. In situations where there exists a significant relationship between MNF and output, part of the decrease in MNF may per se be related to the decrease in force during dynamic contractions. This study estimated force effects on the MNF shifts during repetitive dynamic knee extensions. Twenty healthy volunteers participated in the study and both surface ME signals (from the right vastus lateralis, vastus medialis, and rectus femoris muscles) and the biomechanical signals (force, position, and velocity) of an isokinetic dynamometer were measured. Two tests were performed: (i) 100 repetitive maximum isokinetic contractions of the right knee extensors, and (ii) five gradually increasing static knee extensions before and after (i). The corresponding ME signal time-frequency representations were calculated using the continuous wavelet transform. Compensation of the MNF variables of the repetitive contractions was performed with respect to the individual MNF-force relation based on an average of five gradually increasing contractions. Whether or not compensation was necessary was based on the shape of the MNF-force relationship. A significant compensation of the MNF was found for the repetitive isokinetic contractions. In conclusion, when investigating maximum dynamic contractions, decreases in MNF can be due to mechanisms similar to those found during sustained static contractions (force-independent component of fatigue) and in some subjects due to a direct effect of the change in force (force-dependent component of fatigue).
Dogan, Arife; Bek, Bulent
2014-01-01
PURPOSE The occlusal splint has been used for many years as an effective treatment of sleep bruxism. Several methods have been used to evaluate the efficiency of occlusal splints. However, the effect of occlusal splints on occlusal force has not been clarified sufficiently. The purpose of this study was to evaluate the effect of occlusal splints on maximum occlusal force in patients with sleep bruxism and compare two types of splints: a Bruxogard-soft splint and a canine-protected hard stabilization splint. MATERIALS AND METHODS Twelve students with sleep bruxism participated in the present study. All participants used two different occlusal splints during sleep for 6 weeks. Maximum occlusal force was measured with two miniature strain-gage transducers before, 3 and 6 weeks after insertion of occlusal splints. Clinical examination of temporomandibular disorders was performed for all individuals according to the Craniomandibular Index (CMI) before and 6 weeks after the insertion of splints. The changes in mean occlusal force before, 3 and 6 weeks after insertion of both splints were analysed with a paired-sample t-test. The Wilcoxon test was used for the comparison of the CMI values before and 6 weeks after the insertion of splints. RESULTS Participants using stabilization splints showed no statistically significant changes in occlusal force before, 3, and 6 weeks after insertion of the splint (P>.05), whereas participants using the Bruxogard-soft splint had statistically significantly decreased occlusal force 6 weeks after insertion of the splint (P<.05). There was a statistically significant improvement in the CMI value of the participants in both splint groups (P<.05). CONCLUSION Participants who used the Bruxogard-soft splint showed decreases in occlusal force 6 weeks after insertion of the splint. The use of both splints led to a significant reduction in the clinical symptoms. PMID:24843394
Christensen, Peter Astrup; Jacobsen, Jacob Ole; Thorlund, Jonas B
2008-01-01
of force development, and maximal jump height were tested to assess muscle strength/power along with whole-body impedance analysis before and after SSR. RESULTS: Body weight, fat-free mass, and total body water decreased (4-5%) after SSR, along with impairments in maximal jump height (-8%) and knee...
Bo You
2015-01-01
In order to predict the pressing quality of precision press-fit assembly, press-fit curves and the maximum press-mounting force of press-fit assemblies were investigated by finite element analysis (FEA). The analysis was based on a 3D SolidWorks model using the real dimensions of the microparts and the subsequent FEA model that was built using ANSYS Workbench. The press-fit process could thus be simulated on the basis of static structural analysis. To verify the FEA results, experiments were carried out using a press-mounting apparatus. The results show that the press-fit curves obtained by FEA agree closely with the curves obtained using the experimental method. In addition, the maximum press-mounting force calculated by FEA agrees with that obtained by the experimental method, with the maximum deviation being 4.6%, a value that can be tolerated. The comparison shows that the press-fit curve and maximum press-mounting force calculated by FEA can be used for predicting the pressing quality during precision press-fit assembly.
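Alongside FEA, a quick analytical cross-check of the maximum press-mounting force is often made with the thick-walled-cylinder (Lamé) interference-fit solution. The sketch below uses that classical formula for a solid shaft pressed into a hub of the same material; every dimension and the friction coefficient are made-up illustrative values, not the paper's micropart geometry.

```python
import math

# Elastic press-fit of a solid steel shaft into a steel hub
# (Lame thick-walled-cylinder solution; illustrative values only).
E = 200e9         # Young's modulus, Pa
r = 2.0e-3        # nominal contact radius, m
R_o = 4.0e-3      # hub outer radius, m
delta_r = 3.0e-6  # radial interference, m
mu = 0.15         # friction coefficient (assumed)
L = 5.0e-3        # engagement length, m

# Contact pressure for a same-material solid shaft in a hub:
#   p = (E * delta_r / r) * (R_o^2 - r^2) / (2 * R_o^2)
p = (E * delta_r / r) * (R_o**2 - r**2) / (2.0 * R_o**2)

# Maximum axial press-mounting force ~ friction * pressure * contact area.
F_max = mu * p * 2.0 * math.pi * r * L
```

For these values the estimate is on the order of 1 kN; FEA refines this by capturing elastic-plastic behavior and real geometry, which the closed-form solution ignores.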
Vandervoort, K. G.; Joaquin, J. C.; Kwan, C.; Bray, J. D.; Torrico, R.; Abramzon, N.; Brelles-Marino, G.
2007-03-01
We present investigations of biofilm-forming bacteria before and after treatment with gas discharge plasmas. Gas discharge plasmas represent a way to inactivate bacteria under conditions where conventional disinfection methods are often ineffective. These conditions involve bacteria in biofilm communities, where cooperative interactions between cells make organisms less susceptible to standard killing methods. Rhizobium gallicum and Chromobacterium violaceum were imaged before and after plasma treatment using an atomic force microscope (AFM). In addition, cell wall elasticity was studied by measuring force-distance curves as the AFM tip was pressed into the cell surface. Results for cell surface morphology and micromechanical properties for plasma treatments lasting from 5 to 60 minutes were obtained and will be presented.
Säwén, Elin; Massad, Tariq; Landersjö, Clas; Damberg, Peter; Widmalm, Göran
2010-08-21
The conformational space available to the flexible molecule α-D-Manp-(1-->2)-α-D-Manp-OMe, a model for the α-(1-->2)-linked mannose disaccharide in N- or O-linked glycoproteins, is determined using experimental data and molecular simulation combined with a maximum entropy approach that leads to a converged population distribution utilizing different input information. A database survey of the Protein Data Bank, where structures having the constituent disaccharide were retrieved, resulted in an ensemble with >200 structures. Subsequent filtering removed erroneous structures and gave the database (DB) ensemble having three classes of mannose-containing compounds, viz., N- and O-linked structures, and ligands to proteins. A molecular dynamics (MD) simulation of the disaccharide revealed a two-state equilibrium with a major and a minor conformational state, i.e., the MD ensemble. These two different conformation ensembles of the disaccharide were compared to measured experimental spectroscopic data for the molecule in water solution. However, neither of the two populations was compatible with experimental data from optical rotation, NMR (1)H,(1)H cross-relaxation rates, or homo- and heteronuclear (3)J couplings. The conformational distributions were subsequently used as background information to generate priors that were used in a maximum entropy analysis. The resulting posteriors, i.e., the population distributions after the application of the maximum entropy analysis, still showed notable deviations that were not anticipated based on the prior information. Therefore, reparameterization of homo- and heteronuclear Karplus relationships for the glycosidic torsion angles Φ and Ψ was carried out, in which the importance of electronegative substituents on the coupling pathway was deemed essential, resulting in four derived equations, two (3)J(COCC) and two (3)J(COCH), being different for the Φ and Ψ torsions, respectively. These Karplus relationships are denoted
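The maximum entropy step used above (reweight a prior conformational ensemble so the ensemble-averaged observable matches experiment, while staying as close as possible to the prior) can be sketched for a toy case. In the standard formulation the posterior weights take the exponential-family form w_i ∝ m_i·exp(−λ·f_i), and λ is solved so the average matches. The populations and "observable" values below are invented for illustration.

```python
import math

# Prior (e.g., MD-derived) populations of three conformers and a
# per-conformer predicted observable f (e.g., a J-coupling in Hz).
prior = [0.7, 0.2, 0.1]
f = [2.0, 6.0, 9.0]
f_exp = 5.0  # "experimental" ensemble average to reproduce

def avg(lmbda):
    """Ensemble average of f under MaxEnt weights m_i * exp(-lambda*f_i)."""
    w = [m * math.exp(-lmbda * fi) for m, fi in zip(prior, f)]
    z = sum(w)
    return sum(wi * fi for wi, fi in zip(w, f)) / z

# avg(lambda) decreases monotonically in lambda, so bisection finds
# the multiplier that reproduces f_exp.
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if avg(mid) > f_exp:
        lo = mid
    else:
        hi = mid
lmbda = 0.5 * (lo + hi)

posterior = [m * math.exp(-lmbda * fi) for m, fi in zip(prior, f)]
z = sum(posterior)
posterior = [w / z for w in posterior]  # normalized MaxEnt populations
```

With several observables, one λ per observable is solved simultaneously, but the structure of the solution is the same.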
Dan N. Dumitriu
2015-09-01
A Danaher Thomson linear actuator with ball screw drive and a real-time control system are used here to induce vertical displacements under the driver/user seat of an in-house dynamic car simulator. In order to better support the car simulator and to dynamically protect the actuator's ball screw drive, a layer of coil springs is used to support the whole simulator chassis. More precisely, one coil spring is placed vertically under each corner of the rectangular chassis. The paper presents the choice of appropriate coil springs, so as to minimize as much as possible the ball screw drive's task of generating linear motions corresponding to the vertical displacements and accelerations encountered by a driver during a real ride. For this application, coil springs with a lower spring constant are more suited to reduce the forces in the ball screw drive and thus to increase the ball screw drive's life expectancy.
Menking, Kirsten M.
2015-05-01
Lacustrine sediments from the Estancia Basin of central New Mexico reveal decadal to millennial oscillations in the volume of Lake Estancia during Last Glacial Maximum (LGM) time. LGM sediments consist of authigenic carbonates, detrital clastics delivered to the lake in stream flow pulses, and evaporites that precipitated in mudflats exposed during lake lowstands and were subsequently blown into the lake. Variations in sediment mineralogy thus reflect changes in hydrologic balance and were quantified using Rietveld analysis of X-ray diffraction traces. Radiocarbon dates on ostracode valve calcite allowed the construction of mineralogical time series for the interval ~23,600 to ~18,300 cal yr BP, which were subjected to spectral analysis using REDFIT (Schulz and Mudelsee, 2002). Dominant periods of ~900, ~375, and ~265 yr are similar to cycles in Holocene 14C production reported for a variety of tree ring records, suggesting that the Lake Estancia sediments record variations in solar activity during LGM time. A prominent spectral peak with a period of ~88 yr appears to reflect the solar Gleissberg cycle and may help, along with the ~265 yr cycle, to explain an ongoing mystery about how Lake Estancia was able to undergo abrupt expansions without overflowing its drainage basin.
Jonhan Ho
2013-01-01
Background: Advances in digital pathology are accelerating integration of this technology into anatomic pathology (AP). To optimize implementation and adoption of digital pathology systems within a large healthcare organization, initial assessment of both end user (pathologist) needs and organizational infrastructure is required. Contextual inquiry is a qualitative, user-centered tool for collecting, interpreting, and aggregating such detailed data about work practices that can be employed to help identify specific needs and requirements. Aim: Using contextual inquiry, the objective of this study was to identify the unique work practices and requirements in AP for the United States (US) Air Force Medical Service (AFMS) that had to be targeted in order to support their transition to digital pathology. Subjects and Methods: A pathology-centered observer team conducted 1.5-h interviews with a total of 24 AFMS pathologists and histology lab personnel at three large regional centers and one smaller peripheral AFMS pathology center using contextual inquiry guidelines. Findings were documented as notes and arranged into a hierarchical organization of common themes based on user-provided data, defined as an affinity diagram. These data were also organized into consolidated graphic models that characterized AFMS pathology work practices, structure, and requirements. Results: Over 1,200 recorded notes were grouped into an affinity diagram composed of 27 third-level, 10 second-level, and five main-level (workflow and workload distribution, quality, communication, military culture, and technology) categories. When combined with workflow and cultural models, the findings revealed that AFMS pathologists had needs that were unique to their military setting when compared to civilian pathologists. These unique needs included having to serve a globally distributed patient population and transient staff, but with a uniform information technology (IT) structure. Conclusions: The
de Oliveira, Liliam Fernandes; Menegaldo, Luciano Luporini
2010-10-19
EMG-driven models can be used to estimate muscle force in biomechanical systems. Collected and processed EMG readings are used as the input of a dynamic system, which is integrated numerically. This approach requires the definition of a reasonably large set of parameters. Some of these vary widely among subjects, and slight inaccuracies in such parameters can lead to large model output errors. One of these parameters is the maximum voluntary contraction force (F(om)). This paper proposes an approach to find F(om) by estimating muscle physiological cross-sectional area (PCSA) using ultrasound (US), which is multiplied by a realistic value of maximum muscle specific tension. Ultrasound is used to measure muscle thickness, which allows for the determination of muscle volume through regression equations. Soleus, gastrocnemius medialis and gastrocnemius lateralis PCSAs are estimated using published volume proportions among leg muscles, which also requires measurements of muscle fiber length and pennation angle by US. F(om) obtained by this approach and from data widely cited in the literature was used to comparatively test a Hill-type EMG-driven model of the ankle joint. The model uses 3 EMGs (Soleus, gastrocnemius medialis and gastrocnemius lateralis) as inputs with joint torque as the output. The EMG signals were obtained in a series of experiments carried out with 8 adult male subjects, who performed an isometric contraction protocol consisting of 10s step contractions at 20% and 60% of the maximum voluntary contraction level. Isometric torque was simultaneously collected using a dynamometer. A statistically significant reduction in the root mean square error was observed when US-obtained F(om) was used, as compared to F(om) from the literature.
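The F(om) estimate described above reduces to two arithmetic steps: PCSA from ultrasound-derived muscle volume, fiber length and pennation angle, then force as PCSA times an assumed specific tension. The sketch below uses one common convention (volume divided by fiber length, projected by the cosine of the pennation angle); all numerical values are illustrative, not the paper's measurements.

```python
import math

# Estimate maximum voluntary contraction force F_om from ultrasound-derived
# muscle architecture (illustrative values for a soleus-like muscle).
muscle_volume_cm3 = 450.0      # from thickness-based regression (assumed)
fiber_length_cm = 4.0          # measured by ultrasound (assumed)
pennation_deg = 25.0           # measured by ultrasound (assumed)
specific_tension_n_cm2 = 30.0  # assumed maximum muscle specific tension

# Physiological cross-sectional area, projected along the tendon line.
pcsa_cm2 = (muscle_volume_cm3 / fiber_length_cm) * math.cos(
    math.radians(pennation_deg))

f_om = pcsa_cm2 * specific_tension_n_cm2  # in N, since 1 N/cm^2 * cm^2
```

Whether the cosine projection belongs in PCSA or in the force term varies between conventions; either way F_om then scales the active force curve of a Hill-type muscle model.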
Huang, S.-Y.; Wang, J.
2016-07-01
A coupled force-restore model of surface soil temperature and moisture (FRMEP) is formulated by incorporating the maximum entropy production model of surface heat fluxes and including the gravitational drainage term. The FRMEP model, driven by surface net radiation and precipitation, is independent of near-surface atmospheric variables, with reduced sensitivity to the uncertainties of model input and parameters compared to the classical force-restore models (FRM). The FRMEP model was evaluated using observations from two field experiments with contrasting soil moisture conditions. The modeling errors of the FRMEP-predicted surface temperature and soil moisture are lower than those of the classical FRMs forced by observed or bulk-formula-based surface heat fluxes (bias 1~2°C versus ~4°C, 0.02 m3 m-3 versus 0.05 m3 m-3). The diurnal variations of surface temperature, soil moisture, and surface heat fluxes are well captured by the FRMEP model, as measured by the high correlations between the model predictions and observations (r ≥ 0.84). Our analysis suggests that the drainage term cannot be neglected under wet soil conditions. A 1 year simulation indicates that the FRMEP model captures the seasonal variation of surface temperature and soil moisture with bias less than 2°C and 0.01 m3 m-3 and correlation coefficients of 0.93 and 0.9 with observations, respectively.
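The abstract does not give FRMEP's equations, but the classical force-restore idea it builds on can be sketched: the surface temperature is "forced" by the ground heat flux and "restored" toward a deep soil temperature at the diurnal frequency. The generic form and all coefficients below are illustrative assumptions, not the FRMEP formulation.

```python
import math

# Generic force-restore update for surface soil temperature:
#   dTs/dt = G / C_e  -  omega * (Ts - Td)
# G: ground heat flux forcing (W m^-2); C_e: effective heat capacity
# per unit area (J m^-2 K^-1); Td: deep (restore) temperature (K).
omega = 2.0 * math.pi / 86400.0   # diurnal angular frequency, s^-1
c_e = 2.0e5                       # assumed effective heat capacity
t_deep = 288.0                    # K, deep soil restore temperature
dt = 600.0                        # time step, s

def step(ts, t_seconds):
    # Idealized diurnal forcing: positive by day, negative by night.
    g = 150.0 * math.sin(omega * t_seconds)
    return ts + dt * (g / c_e - omega * (ts - t_deep))

ts = 288.0
temps = []
for i in range(3 * 144):          # three days at 10-min steps
    ts = step(ts, i * dt)
    temps.append(ts)
```

The restore term damps the diurnal wave toward the deep temperature; FRMEP replaces the flux forcing with maximum-entropy-production surface heat fluxes and adds an analogous moisture equation with gravitational drainage.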
Schmidt, Jesper Hvass; Brandt, Christian; Pedersen, Ellen Raben
2014-01-01
Objective: To create a user-operated pure-tone audiometry method based on the method of maximum likelihood (MML) and the two-alternative forced-choice (2AFC) paradigm, with high test-retest reliability, without the need of an external operator, and with minimal influence of subjects' fluctuating response criteria. User-operated audiometry was developed as an alternative to traditional audiometry for research purposes among musicians. Design: Test-retest reliability of the user-operated audiometry system was evaluated and the user-operated audiometry system was compared with traditional audiometry. Study sample: Test-retest reliability of user-operated 2AFC audiometry was tested with 38 naïve listeners. User-operated 2AFC audiometry was compared to traditional audiometry in 41 subjects. Results: The repeatability of user-operated 2AFC audiometry was comparable to traditional audiometry.
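The maximum-likelihood 2AFC idea used above can be sketched: in 2AFC the chance level is 50%, so the psychometric function rises from 0.5 toward 1, and the threshold is the value that maximizes the likelihood of the observed correct/incorrect responses. The psychometric shape, slope, and all trial data below are simulated assumptions, not the study's procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# 2AFC psychometric function: chance is 50%, rising above threshold.
def p_correct(level_db, threshold_db, slope=2.0):
    return 0.5 + 0.5 / (1.0 + np.exp(-(level_db - threshold_db) / slope))

# Simulate 300 trials of a listener with a 35 dB threshold.
true_threshold = 35.0
levels = rng.uniform(20.0, 50.0, size=300)   # presented tone levels (dB)
correct = rng.random(300) < p_correct(levels, true_threshold)

# Maximum likelihood over a threshold grid (slope held fixed).
grid = np.linspace(20.0, 50.0, 301)
loglik = [np.sum(np.log(np.where(correct,
                                 p_correct(levels, th),
                                 1.0 - p_correct(levels, th))))
          for th in grid]
mle_threshold = grid[int(np.argmax(loglik))]
```

An adaptive procedure would additionally choose each trial's level to be maximally informative, so far fewer than 300 trials are needed in practice.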
Graybill, George
2007-01-01
Forces are at work all around us. Discover what a force is, and different kinds of forces that work on contact and at a distance. We use simple language and vocabulary to make this invisible world easy for students to "see" and understand. Examine how forces "add up" to create the total force on an object, and reinforce concepts and extend learning with sample problems.
Rameckers, E.A.A.; Smits-Engelsman, B.C.M.; Duysens, J.E.J.
2005-01-01
In this study the hypothesis was tested that children with spastic hemiplegia rely more on externally guided visual feedback when trying to keep force constant with their affected hand (AH) as compared to their non-affected hand (NAH) and as compared to controls. An isometric force task in which a c
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data?, (2) Goodness-of-fit: How concordant is this distribution with the observed data?, and (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Strizhkov, V. S.
1975-01-01
Exposure of rats to g-forces of high magnitude results in changes in the ultrastructure of the intercellular channels of the adenohypophysis. Evidence indicates that the chromophobic cells in the walls of the channels and pseudofollicles exert a secretory activity.
Massad, Tariq; Jarvet, Jueri [Stockholm University, Department of Biochemistry and Biophysics (Sweden); Tanner, Risto [National Institute of Chemical Physics and Biophysics (Estonia); Tomson, Katrin; Smirnova, Julia; Palumaa, Peep [Tallinn Technical University, Inst. of Gene Technology (Estonia); Sugai, Mariko; Kohno, Toshiyuki [Mitsubishi Kagaku Institute of Life Sciences (MITILS) (Japan); Vanatalu, Kalju [Tallinn Technical University, Inst. of Gene Technology (Estonia); Damberg, Peter [Stockholm University, Department of Biochemistry and Biophysics (Sweden)], E-mail: peter.damberg@dbb.su.se
2007-06-15
In this paper, we present a new method for structure determination of flexible 'random-coil' peptides. A numerical method is described, where the experimentally measured ³J(HN-Hα) and ³J(Hα-N(i+1)) couplings, which depend on the φ and ψ dihedral angles, are analyzed jointly with the information from a coil-library through a maximum entropy approach. The coil-library is the distribution of dihedral angles found outside the elements of the secondary structure in the high-resolution protein structures. The method results in residue-specific joint φ,ψ-distribution functions, which are in agreement with the experimental J-couplings and minimally committal to the information in the coil-library. The 22-residue human peptide hormone motilin, uniformly ¹⁵N-labeled, was studied. The ³J(Hα-N(i+1)) couplings were measured from the E.COSY pattern in the sequential NOESY cross-peaks. By employing homodecoupling and an in-phase/anti-phase filter, sharp Hα-resonances (about 5 Hz) were obtained, enabling accurate determination of the coupling with minimal spectral overlap. Clear trends in the resulting φ,ψ-distribution functions along the sequence are observed, with a nascent helical structure in the central part of the peptide and more extended conformations of the receptor-binding N-terminus as the most prominent characteristics. From the φ,ψ-distribution functions, the contribution from each residue to the thermodynamic entropy, i.e., the segmental entropies, are calculated and compared to segmental entropies estimated from ¹⁵N-relaxation data. Remarkable agreement between the relaxation and J-couplings based methods is found. Residues belonging to the nascent helix and the C-terminus show segmental entropies of approximately -20 J K⁻¹ mol⁻¹ and -12 J K⁻¹ mol⁻¹, respectively, in both series. The agreement
Massad, Tariq; Jarvet, Jüri; Tanner, Risto; Tomson, Katrin; Smirnova, Julia; Palumaa, Peep; Sugai, Mariko; Kohno, Toshiyuki; Vanatalu, Kalju; Damberg, Peter
2007-06-01
In this paper, we present a new method for structure determination of flexible "random-coil" peptides. A numerical method is described, where the experimentally measured 3J(HN-Halpha) and 3J(Halpha-N(i+1)) couplings, which depend on the phi and psi dihedral angles, are analyzed jointly with the information from a coil-library through a maximum entropy approach. The coil-library is the distribution of dihedral angles found outside the elements of the secondary structure in the high-resolution protein structures. The method results in residue specific joint phi,psi-distribution functions, which are in agreement with the experimental J-couplings and minimally committal to the information in the coil-library. The 22-residue human peptide hormone motilin, uniformly 15N-labeled, was studied. The 3J(Halpha-N(i+1)) couplings were measured from the E.COSY pattern in the sequential NOESY cross-peaks. By employing homodecoupling and an in-phase/anti-phase filter, sharp Halpha-resonances (about 5 Hz) were obtained, enabling accurate determination of the coupling with minimal spectral overlap. Clear trends in the resulting phi,psi-distribution functions along the sequence are observed, with a nascent helical structure in the central part of the peptide and more extended conformations of the receptor binding N-terminus as the most prominent characteristics. From the phi,psi-distribution functions, the contribution from each residue to the thermodynamic entropy, i.e., the segmental entropies, are calculated and compared to segmental entropies estimated from 15N-relaxation data. Remarkable agreement between the relaxation and J-couplings based methods is found. Residues belonging to the nascent helix and the C-terminus show segmental entropies of approximately -20 J K(-1) mol(-1) and -12 J K(-1) mol(-1), respectively, in both series. The agreement between the two estimates of the segmental entropy, the agreement with the observed J-couplings, the agreement with the CD experiments
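The link between a measured 3J coupling and a dihedral angle, central to the analysis above, is a Karplus relation of the form 3J(θ) = A·cos²θ + B·cosθ + C. The sketch below uses the widely cited Vuister-Bax parameterization for 3J(HN-Hα) as an example; the paper derives its own distributions rather than applying a single Karplus fit, so this is only an illustration of the underlying relation.

```python
import math

# Karplus relation: 3J(theta) = A*cos^2(theta) + B*cos(theta) + C.
# Vuister-Bax coefficients for 3J(HN-Halpha), with theta = phi - 60 deg.
A, B, C = 6.51, -1.76, 1.60

def j_hn_ha(phi_deg):
    theta = math.radians(phi_deg - 60.0)
    ct = math.cos(theta)
    return A * ct * ct + B * ct + C

# Alpha-helical phi (~ -60 deg) gives a small coupling (~4.1 Hz);
# extended/beta phi (~ -120 deg) gives a large one (~9.9 Hz).
j_helix = j_hn_ha(-60.0)
j_beta = j_hn_ha(-120.0)
```

Because the relation is multivalued in θ, a single coupling does not pin down φ; that is why the paper combines couplings with a coil-library prior through maximum entropy.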
Drugan, R C; Hibl, P T; Kelly, K J; Dady, K F; Hale, M W; Lowry, C A
2013-12-01
Prior adverse experience alters behavioral responses to subsequent stressors. For example, exposure to a brief swim increases immobility in a subsequent swim test 24h later. In order to determine if qualitative differences (e.g. 19°C versus 25°C) in an initial stressor (15-min swim) impact behavioral, physiological, and associated neural responses in a 5-min, 25°C swim test 24h later, rats were surgically implanted with biotelemetry devices 1 week prior to experimentation then randomly assigned to one of six conditions (Day 1 (15 min)/Day 2 (5 min)): (1) home cage (HC)/HC, (2) HC/25°C swim, (3) 19°C swim/HC, (4) 19°C swim/25°C swim, (5) 25°C swim/HC, (6) 25°C swim/25°C swim. Core body temperature (Tb) was measured on Days 1 and 2 using biotelemetry; behavior was measured on Day 2. Rats were transcardially perfused with fixative 2h following the onset of the swim on Day 2 for analysis of c-Fos expression in midbrain serotonergic neurons. Cold water (19°C) swim on Day 1 reduced Tb, compared to both 25°C swim and HC groups on Day 1, and, relative to rats exposed to HC conditions on Day 1, reduced the hypothermic response to the 25°C swim on Day 2. The 19°C swim on Day 1, relative to HC exposure on Day 1, increased immobility during the 5-min swim on Day 2. Also, 19°C swim, relative to HC conditions, on Day 1 reduced swim (25°C)-induced increases in c-Fos expression in serotonergic neurons within the dorsal and interfascicular parts of the dorsal raphe nucleus. These results suggest that exposure to a 5-min 19°C cold water swim, but not exposure to a 5-min 25°C swim alters physiological, behavioral and serotonergic responses to a subsequent stressor.
Nowak, Karina; Sobota, Grzegorz; Bacik, Bogdan; Hajduk, Grzegorz; Kusz, Damian
2012-01-01
The aim of this study was to check whether there was a correlation between the value of the maximum developed torque of the quadriceps femoris muscle and the subjective evaluation of a patient's pain as measured by the VAS. Also evaluated were changes in the muscle torque value and the KSS scale over time. The patients' condition was assessed using the KSS scale (knee score: pain, range of motion, stability of joint and limb axis) before the surgery and in weeks 6 and 12, as well as 6 months after surgery. The condition was found to improve steadily in comparison with the condition before the surgery. This is confirmed by a statistically significant difference in the KSS scale values. The surgery substantially increases the quality of life and functional recovery.
Blazevich, Anthony J; Horne, Sara; Cannavan, Dale
2008-01-01
This study examined the effects of slow-speed resistance training involving concentric (CON, n = 10) versus eccentric (ECC, n = 11) single-joint muscle contractions on contractile rate of force development (RFD) and neuromuscular activity (EMG), and its maintenance through detraining. Isokinetic knee extension training was performed 3 x week(-1) for 10 weeks. Maximal isometric strength (+11.2%) and RFD (measured from 0-30/50/100/200 ms, respectively; +10.5%-20.5%) increased after 10 weeks (P ... training mode. Peak EMG amplitude and rate of EMG rise were not significantly altered with training or detraining. Subjects with below-median normalized RFD (RFD/MVC) at 0 weeks significantly increased RFD after 5- and 10-weeks training, which was associated with increased neuromuscular activity. Subjects who maintained their higher RFD after detraining ...
Shortening induced effects on force (re)development in pig urinary smooth muscle
E. van Asselt (Els); J.J.M. Pel (Johan); R. van Mastrigt (Ron)
2007-01-01
textabstractIntroduction: When muscle is allowed to shorten during an active contraction, the maximum force that redevelops after shortening is smaller than the isometric force at the same muscle length without prior shortening. We studied the course of force redevelopment after shortening in smooth
Younger, Jane L; Clucas, Gemma V; Kooyman, Gerald; Wienecke, Barbara; Rogers, Alex D; Trathan, Philip N; Hart, Tom; Miller, Karen J
2015-06-01
The relationship between population structure and demographic history is critical to understanding microevolution and for predicting the resilience of species to environmental change. Using mitochondrial DNA from extant colonies and radiocarbon-dated subfossils, we present the first microevolutionary analysis of emperor penguins (Aptenodytes forsteri) and show their population trends throughout the last glacial maximum (LGM, 19.5-16 kya) and during the subsequent period of warming and sea ice retreat. We found evidence for three mitochondrial clades within emperor penguins, suggesting that they were isolated within three glacial refugia during the LGM. One of these clades has remained largely isolated within the Ross Sea, while the two other clades have intermixed around the coast of Antarctica from Adélie Land to the Weddell Sea. The differentiation of the Ross Sea population has been preserved despite rapid population growth and opportunities for migration. Low effective population sizes during the LGM, followed by a rapid expansion around the beginning of the Holocene, suggest that an optimum set of sea ice conditions exist for emperor penguins, corresponding to available foraging area. © 2015 John Wiley & Sons Ltd.
Patrícia dos Santos Calderon
2006-12-01
The objective of this research was to evaluate the influence of gender and bruxism on the maximum bite force. The inter-examiner agreement for the physical examination for bruxism was also evaluated. One hundred and eighteen individuals of both genders, bruxists and non-bruxists, with an average age of 24 years, were selected for this purpose. For group establishment, every individual was submitted to a specific physical examination for bruxism (performed by three different examiners). Subjects were then divided into four groups according to gender and the presence of bruxism. The maximum bite force was measured using a gnathodynamometer at the first molar area, three times on each side, in two sessions separated by a 10-day interval. The highest value was recorded. The mean maximum bite force was statistically higher for males (587.2 N) when compared to females (424.9 N) (p < 0.05). The agreement between examiners on the physical examination for bruxism was considered optimal.
Bowen, John
1990-01-01
Reviews arguments for and against prior administrative review and censorship of student expression. Suggests that prior review strips any pretense of democracy from many American educational institutions. Argues that prior review is journalistically inappropriate, educationally unsound, and practically illogical. (KEH)
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
Abolishing the maximum tension principle
Mariusz P. Dąbrowski
2015-09-01
We find a series of example theories for which the relativistic limit of maximum tension F_max = c^4/4G represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied to both cosmological and black hole horizons, as well as some generalized uncertainty principle models.
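As a quick numerical check of the limit quoted in this record, the conjectured maximum force F_max = c^4/4G can be evaluated directly. This is a standalone sketch; the constants are standard CODATA values, not taken from the paper.

```python
# Evaluate the conjectured maximum tension (maximum force) in general
# relativity, F_max = c^4 / (4G), using standard CODATA constants.
C = 2.99792458e8    # speed of light, m/s (exact by definition)
G = 6.67430e-11     # gravitational constant, m^3 kg^-1 s^-2

F_max = C**4 / (4 * G)
print(f"F_max = {F_max:.3e} N")   # on the order of 3e43 N
```

The resulting value, roughly 3.0 x 10^43 N, is the scale against which the "abolishing" modifications discussed in the abstract are measured.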
M.J.P. Coelho-Ferraz
2008-12-01
The activity of the masseter muscles and of the anterior portion of the temporal muscles, on both the right and left sides, during maximum bite force was studied in healthy volunteers. The study included 17 adult volunteers of both genders, with an average age of 25 years, showing no sign of temporomandibular dysfunction, affiliated with the School of Dentistry of Piracicaba. Electromyographic data were recorded on both sides of the face from the masseter and from the anterior portion of the temporal and suprahyoid muscles in postural and isometric positions. Disposable circular pediatric Ag/AgCl passive surface electrodes (Meditrace® Kendall-LTP, model Chicopee MA01) were used, connected to a preamplifier with a gain of 20x forming a differential circuit. The electrical signals were recorded with an eight-channel EMG-800C unit (EMG System of Brazil, Ltd.) at a sampling frequency of 2 kHz with 16-bit resolution and a digital band-pass filter of 20 to 500 Hz. A pressure transducer consisting of a rubber tube with a pressure sensor (MPX 5700, Motorola SPS, Austin, TX, USA) was used to record the maximum bite force. Statistical analysis included linear correlation, the paired t-test and analysis of variance. A probability of p ... was considered statistically significant.
D'Agostini, Giulio
1999-01-01
The choice of priors may become an insoluble problem if priors and Bayes' rule are not seen and accepted in the framework of subjectivism. Therefore, the meaning and the role of subjectivity in science is considered and defended from the pragmatic point of view of an "experienced scientist". The case for the use of subjective priors is then supported and some recommendations for routine and frontier measurement applications are given. The issue of reference priors is also considered from the practical point of view and in the general context of "Bayesian dogmatism".
Shortening induced effects on force (re)development in pig urinary smooth muscle
van Asselt, Els; Pel, Johan; van Mastrigt, Ron
2007-01-01
textabstractIntroduction: When muscle is allowed to shorten during an active contraction, the maximum force that redevelops after shortening is smaller than the isometric force at the same muscle length without prior shortening. We studied the course of force redevelopment after shortening in smooth muscle to unravel the mechanism responsible for this deactivation. Method: In a first series of measurements the shortening velocity was varied resulting in different shortening amplitudes. In a s...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Cognitive Temporal Document Priors
Peetz, M.H.; de Rijke, M.
2013-01-01
Temporal information retrieval exploits temporal features of document collections and queries. Temporal document priors are used to adjust the score of a document based on its publication time. We consider a class of temporal document priors that is inspired by retention functions considered in cogn
Constructing priors in synesthesia.
van Leeuwen, Tessa M
2014-01-01
A new theoretical framework (PPSMC) applicable to synesthesia has been proposed, in which the discrepancy between the perceptual reality of (some) synesthetic concurrents and their subjective non-veridicality is explained. The PPSMC framework stresses the relevance of the phenomenology of synesthesia for synesthesia research, and beyond. When describing the emergence and persistence of synesthetic concurrents under PPSMC, it is proposed that precise, high-confidence priors are crucial in synesthesia. I discuss the construction of priors in synesthesia.
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...
Prior Knowledge Assessment Guide
2014-12-01
determine the students' extent of prior knowledge, not their reading speed or comprehension. Therefore, be sure to consider the reading and English ... backgrounds and skills to make training more effective, meaningful and efficient. Initial research in this tailored training program was on ... history of developing measures of mental ability or cognitive skills for personnel selection, beginning with World War I (Zeidner & Drucker, 1988).
Blackburn, Patrick Rowan; Jørgensen, Klaus Frovin
2015-01-01
’s search led him through the work of Castaneda, and back to his own work on hybrid logic: the first made temporal reference philosophically respectable, the second made it technically feasible in a modal framework. With the aid of hybrid logic, Prior built a bridge from a two-dimensional UT calculus...
邓自刚; 王家素; 郑珺; 刘伟; 林群煦; 马光同; 王为; 王素玉; 张娅
2009-01-01
The paper compares the maximum levitation force of bulk high temperature superconductors under zero-field-cooling (ZFC) and field-cooling (FC) conditions through levitation measurements of 15 bulks interacting with a permanent magnet guideway. The experimental results show that the maximum forces in the two cooling cases do not correspond to each other: a bulk with a large levitation force in the ZFC case will not always show a large one in the FC case, and vice versa. Therefore, the levitation force data measured in the FC case are recommended as the reference for practical FC applications.
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
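The unit conversion quoted above is easy to verify; a minimal sketch (the solar mass value is a standard figure, not taken from the paper):

```python
# Check that 2.69e30 kg matches the quoted 1.35 solar masses.
M_CORE = 2.69e30    # kg, maximum iron core mass from the abstract
M_SUN = 1.989e30    # kg, nominal solar mass

ratio = M_CORE / M_SUN
print(f"core mass = {ratio:.2f} solar masses")
```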
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et. al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Inference with the Median of a Prior
Ali Mohammad-Djafari
2006-06-01
We consider the problem of inference on one of the two parameters of a probability distribution when we have some prior information on a nuisance parameter. When a prior probability distribution on this nuisance parameter is given, the marginal distribution is the classical tool to account for it. If the prior distribution is not given, but we have partial knowledge such as a fixed number of moments, we can use the maximum entropy principle to assign a prior law and thus go back to the previous case. In this work, we consider the case where we only know the median of the prior and propose a new tool for this case. This new inference tool looks like a marginal distribution. It is obtained by first remarking that the marginal distribution can be considered as the mean value of the original distribution with respect to the prior probability law of the nuisance parameter, and then, by using the median in place of the mean.
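The contrast between the two tools can be sketched numerically. The toy problem below is our own construction, not the paper's example: the nuisance parameter is the noise scale sigma of a Gaussian likelihood, given a lognormal prior whose median is known in closed form. The marginal averages the likelihood over the prior; the median-based tool plugs the prior median in where the marginal would average.

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, mu, sigma):
    """Gaussian density, written out to keep the sketch numpy-only."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Toy setup: infer the mean mu of a Gaussian likelihood; the noise scale
# sigma is a nuisance parameter with a lognormal prior whose median is
# exp(0) = 1 in closed form.
x_obs = 1.2
mu_grid = np.linspace(-3.0, 5.0, 401)
sigmas = rng.lognormal(mean=0.0, sigma=0.5, size=20000)

# Classical tool: the marginal likelihood, i.e. the *mean* of the
# likelihood over the prior: m(x|mu) = E_sigma[ p(x|mu, sigma) ].
marginal = norm_pdf(x_obs, mu_grid[:, None], sigmas[None, :]).mean(axis=1)

# Tool sketched in the abstract: when only the *median* of the prior is
# known, substitute the median for the averaging step.
median_based = norm_pdf(x_obs, mu_grid, 1.0)

# For this symmetric toy problem both tools peak at mu = x_obs.
print(mu_grid[marginal.argmax()], mu_grid[median_based.argmax()])
```

In this symmetric example the two tools agree on the peak location; they differ in the shape (and hence the implied uncertainty) of the resulting curve.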
Entropic Priors and Bayesian Model Selection
Brewer, Brendon J
2009-01-01
We demonstrate that the principle of maximum relative entropy (ME), used judiciously, can ease the specification of priors in model selection problems. The resulting effect is that models that make sharp predictions are disfavoured, weakening the usual Bayesian "Occam's Razor". This is illustrated with a simple example involving what Jaynes called a "sure thing" hypothesis. Jaynes' resolution of the situation involved introducing a large number of alternative "sure thing" hypotheses that were possible before we observed the data. However, in more complex situations, it may not be possible to explicitly enumerate large numbers of alternatives. The entropic priors formalism produces the desired result without modifying the hypothesis space or requiring explicit enumeration of alternatives; all that is required is a good model for the prior predictive distribution for the data. This idea is illustrated with a simple rigged-lottery example, and we outline how this idea may help to resolve a recent debate amongst ...
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and....../or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior...... of the measurements is completely characterized by all moments up to second order....
F. Topsøe
2001-09-01
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over ...
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in the cutting tasks.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
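A minimal numpy sketch of the underlying idea (our own illustration, not the paper's algorithm): a linear predictor fitted under a regularized Maximum Correntropy Criterion via half-quadratic iterations. Each step reduces to a ridge regression whose sample weights are Gaussian-kernel functions of the residuals, so grossly mislabeled samples are automatically down-weighted.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression data with 10% grossly corrupted labels.
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)
y[:20] += 15.0                     # outlying labels

sigma, alpha = 2.0, 1e-3           # kernel width and L2 regularization
w = np.linalg.lstsq(X, y, rcond=None)[0]   # least-squares start (outlier-biased)

# Half-quadratic iterations for the regularized MCC objective:
# each step is a ridge regression weighted by k_i = exp(-r_i^2 / (2 sigma^2)),
# so samples with large residuals (the outliers) get near-zero weight.
for _ in range(30):
    r = y - X @ w
    k = np.exp(-r**2 / (2 * sigma**2))
    A = X.T @ (k[:, None] * X) + alpha * np.eye(d)
    w = np.linalg.solve(A, X.T @ (k * y))

print(np.round(w, 2))   # close to w_true despite the corrupted labels
```

The least-squares starting point is pulled toward the outliers, but the kernel weights shrink their influence toward zero within a few iterations.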
Control Strategies for Accurate Force Generation and Relaxation.
Ohtaka, Chiaki; Fujiwara, Motoko
2016-10-01
Characteristics and motor strategies for force generation and force relaxation were examined using graded tasks during isometric force control. Ten female college students (M age = 20.2 yr., SD = 1.1) were instructed to accurately control the force of isometric elbow flexion using their right arm to match a target force level as quickly as possible. They performed: (1) a generation task, wherein they increased their force from 0% maximum voluntary force to 20% (0%-20%), 40% (0%-40%), or 60% (0%-60%) of maximum voluntary force, and (2) a relaxation task, in which they decreased their force from 60% maximum voluntary force to 40% (60%-40%), 20% (60%-20%), or 0% (60%-0%). Parameters of the produced force were analyzed for accuracy (force level, error), quickness (reaction time, adjustment time, rate of force development), and strategy (force waveform, rate of force development). Errors in force relaxation were all greater, and reaction times shorter, than those in force generation. Adjustment time depended on the magnitude of force, and peak rates of force development and force relaxation differed. Controlled relaxation of force is more difficult at low magnitudes of force.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer and a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.
Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan
2016-04-28
This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. The maximum a posteriori (MAP) estimation and the maximum likelihood estimation (MLE) are utilized to estimate the model parameters to avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
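The "classic MaxEnt" point solution that the generalized approach builds on can be sketched with Jaynes' dice example: find the maximum entropy distribution over six faces subject to a mean-value constraint. The constraint value of 4.5 and the bisection solver are our own illustrative choices; the uncertain-constraint extension discussed in the abstract would place a density on this value and propagate it through the map computed below.

```python
import numpy as np

# Classic MaxEnt point solution for a die whose mean is constrained to 4.5.
# The solution is a Gibbs distribution p_i ∝ exp(lam * i), with the Lagrange
# multiplier lam chosen so the constraint is met.
faces = np.arange(1, 7)

def mean_for(lam):
    """Mean face value of the Gibbs distribution p_i ∝ exp(lam * i)."""
    w = np.exp(lam * faces)
    return (faces * w).sum() / w.sum()

# The constrained mean is monotone in lam, so bisection finds the multiplier.
lo, hi = -5.0, 5.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_for(mid) < 4.5 else (lo, mid)

lam = 0.5 * (lo + hi)
p = np.exp(lam * faces)
p /= p.sum()
print(np.round(p, 4))   # MaxEnt probabilities; mean face value is 4.5
```

Treating the constraint value 4.5 as uncertain amounts to evaluating this same map over a density of constraint values rather than at a single point.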
Accommodating Uncertainty in Prior Distributions
Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vander Wiel, Scott Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-01-19
A fundamental premise of Bayesian methodology is that a priori information is accurately summarized by a single, precisely defined prior distribution. In many cases, especially involving informative priors, this premise is false, and the (mis)application of Bayes methods produces posterior quantities whose apparent precisions are highly misleading. We examine the implications of uncertainty in prior distributions, and present graphical methods for dealing with them.
Image Segmentation Using Weak Shape Priors
Xu, Robert Sheng; Salama, Magdy
2010-01-01
The problem of image segmentation is known to become particularly challenging in the case of partial occlusion of the object(s) of interest, background clutter, and the presence of strong noise. To overcome this problem, the present paper introduces a novel approach to segmentation through the use of "weak" shape priors. Specifically, in the proposed method, a segmenting active contour is constrained to converge to a configuration at which its geometric parameters attain their empirical probability densities closely matching the corresponding model densities that are learned based on training samples. It is shown through numerical experiments that the proposed shape modeling can be regarded as "weak" in the sense that it minimally influences the segmentation, which is allowed to be dominated by data-related forces. On the other hand, the priors provide sufficient constraints to regularize the convergence of segmentation, while requiring substantially smaller training sets to yield less biased results as compare...
Menarche: Prior Knowledge and Experience.
Skandhan, K. P.; And Others
1988-01-01
Recorded menstruation information among 305 young women in India, assessing the differences between those who did and did not have knowledge of menstruation prior to menarche. Those with prior knowledge considered menarche to be a normal physiological function and had a higher rate of regularity, lower rate of dysmenorrhea, and earlier onset of…
The Importance of Prior Knowledge.
Cleary, Linda Miller
1989-01-01
Recounts a college English teacher's experience of reading and rereading Noam Chomsky, building up a greater store of prior knowledge. Argues that Frank Smith provides a theory for the importance of prior knowledge and Chomsky's work provided a personal example with which to interpret and integrate that theory. (RS)
Universal Prior Prediction for Communication
Lomnitz, Yuval
2011-01-01
We consider the problem of communicating over an unknown and arbitrarily varying channel, using feedback. This paper focuses on the problem of determining the input behavior, or more specifically, a prior which is used to randomly generate a codebook. We pose the problem of setting the prior as a universal sequential prediction problem using information theoretic abstractions of the communication channel. For the case where the channel is block-wise constant, we show it is possible to asymptotically approach the best rate that can be attained by any system using a fixed prior. For the case where the channel may change on each symbol, we combine a rateless coding scheme with a prior predictor and asymptotically approach the capacity of the average channel universally for every sequence of channels.
Recruiting for Prior Service Market
2008-06-01
Examines perceptions, expectations and issues for re-enlistment, and develops potential marketing and advertising tactics and strategies targeted to the defined prior service market. (Report, June 2008; briefing by MAJ Eric Givens / MAJ Brian …)
Masticatory performance, muscle activity, and occlusal force in preorthognathic surgery patients.
Tate, G S; Throckmorton, G S; Ellis, E; Sinn, D P
1994-05-01
Previous studies have indicated that patients scheduled for orthognathic surgery tend to have lower maximum bite forces and exert lower forces during mastication. The effect of these deficits on masticatory performance has not been previously assessed. Masticatory performance was analyzed in four groups: male and female orthognathic surgery patients prior to presurgical orthodontics (n = 12 and 23), and male and female controls (n = 27 and 31). Masticatory performance was analyzed by having the subjects chew 5-g pieces of carrot for 20 cycles and measuring the resulting median particle size with a standard sieve method. Masticatory performance showed the same trends as maximum bite force and masticatory forces: male controls had the best and patients the poorest masticatory performance. There was a weak correlation between masticatory performance and maximum bite force at the molar positions. Masticatory performance also correlated weakly with electromyographic signals during mastication of a constant bolus (gummy bears) for all muscles except the left posterior temporalis. Correlations were generally not present or were very weak between masticatory performance, estimated masticatory forces, and muscle efficiency, suggesting that muscle efficiency and forces generated during mastication are not the primary factors that determine masticatory performance. Other factors contributing to a person's ability to chew food might include occlusal relationships and mechanical advantage.
Objective prior distribution of climate sensitivity
Pueyo, S.
2012-04-01
The problems posed by the choice of prior distribution constitute one of the most fundamental obstacles to assigning probabilities to the possible values of climate sensitivity S. The prior is the probability distribution that we assume before introducing data. In the literature about climate sensitivity, the most frequently used prior is the uniform. On first inspection, this distribution would seem to represent absence of information, but, as is well known, this assumption leads to paradoxes. This observation has led to the widespread belief that priors are inherently subjective and should be decided by expert elicitation, even though this amounts to questioning the objective value of scientific results. In general, the climate science community is unaware of the "objective Bayesian" literature, which seeks objective criteria to determine non-informative prior distributions (or reference priors). In a recent paper (Pueyo 2011) I applied an objective Bayesian approach to climate sensitivity. I described three lines of evidence indicating that the distribution that really represents absence of information about S is log-uniform, i.e. it consists of a uniform distribution of log(S) instead of S:
• In the case of S, only the log-uniform distribution satisfies Jaynes' invariant groups criterion, i.e. this distribution does not change when modifying assumptions that are not explicitly included in the statement of the problem (I only included the definition of S).
• In terms of information theory, information about S can be identified with mutual information between changes in radiative forcing and in temperature. Absence of mutual information between these variables implies a log-uniform distribution of S.
• The frequency distribution of sets of parameters formally comparable to climate sensitivity is approximately log-uniform for a broad range of values.
A log-uniform distribution of S is intermediate between a uniform distribution of S and a uniform distribution…
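A small numerical sketch of what the log-uniform prior described above means in practice (the bounds 0.5 to 10 are illustrative assumptions, not values from the paper): sampling uniformly in log(S) yields a prior whose mass in an interval depends only on the interval's log-width, so equal-ratio bands such as [1, 2) and [2, 4) receive equal probability.

```python
import math
import random

random.seed(0)

# Sample S from a log-uniform prior on [0.5, 10] (illustrative bounds):
# draw uniformly in log(S), then exponentiate.
lo, hi = 0.5, 10.0
samples = [math.exp(random.uniform(math.log(lo), math.log(hi)))
           for _ in range(200_000)]

# Scale invariance: bands with equal log-width get equal prior mass,
# e.g. P(1 <= S < 2) should match P(2 <= S < 4).
p12 = sum(1 <= s < 2 for s in samples) / len(samples)
p24 = sum(2 <= s < 4 for s in samples) / len(samples)
```

Both empirical probabilities come out near ln(2)/ln(20) ≈ 0.231, whereas a uniform prior on S would give the [2, 4) band twice the mass of [1, 2).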
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Bayesian priors for transiting planets
Kipping, David M
2016-01-01
As astronomers push towards discovering ever-smaller transiting planets, it is increasingly common to deal with low signal-to-noise ratio (SNR) events, where the choice of priors plays an influential role in Bayesian inference. In the analysis of exoplanet data, the selection of priors is often treated as a nuisance, with observers typically defaulting to uninformative distributions. Such treatments miss a key strength of the Bayesian framework, especially in the low SNR regime, where even weak a priori information is valuable. When estimating the parameters of a low-SNR transit, two key pieces of information are known: (i) the planet has the correct geometric alignment to transit and (ii) the transit event exhibits sufficient signal-to-noise to have been detected. These represent two forms of observational bias. Accordingly, when fitting transits, the model parameter priors should not follow the intrinsic distributions of said terms, but rather those of both the intrinsic distributions and the observational ...
Arthur Prior and medieval logic
2012-01-01
Though Arthur Prior is now best known for his founding of modern temporal logic and hybrid logic, much of his early philosophical career was devoted to history of logic and historical logic. This interest laid the foundations for both of his ground-breaking innovations in the 1950s and 1960s. Because of the important rôle played by Prior’s research in ancient and medieval logic in his development of temporal and hybrid logic, any student of Prior, temporal logic, or hybrid logic should be familiar…
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.
Corticospinal excitability underlying digit force planning for grasping in humans.
Parikh, Pranav; Davare, Marco; McGurrin, Patrick; Santello, Marco
2014-06-15
Control of digit forces for grasping relies on sensorimotor memory gained from prior experience with the same or similar objects and on online sensory feedback. However, little is known about neural mechanisms underlying digit force planning. We addressed this question by quantifying the temporal evolution of corticospinal excitability (CSE) using single-pulse transcranial magnetic stimulation (TMS) during two reach-to-grasp tasks. These tasks differed in terms of the magnitude of force exerted on the same points on the object to isolate digit force planning from reach and grasp planning. We also addressed the role of intracortical circuitry within primary motor cortex (M1) by quantifying the balance between short intracortical inhibition and facilitation using paired-pulse TMS on the same tasks. Eighteen right-handed subjects were visually cued to plan digit placement at predetermined locations on the object and subsequently to exert either negligible force ("low-force" task, LF) or 10% of their maximum pinch force ("high-force" task, HF) on the object. We found that the HF task elicited significantly smaller CSE than the LF task, but only when the TMS pulse coincided with the signal to initiate the reach. This force planning-related CSE modulation was specific to the muscles involved in the performance of both tasks. Interestingly, digit force planning did not result in modulation of M1 intracortical inhibitory and facilitatory circuitry. Our findings suggest that planning of digit forces reflected by CSE modulation starts well before object contact and appears to be driven by inputs from frontoparietal areas other than M1.
Testability evaluation using prior information of multiple sources
Wang Chao; Qiu Jing; Liu Guanjun; Zhang Yong
2014-01-01
Testability plays an important role in improving the readiness and decreasing the life-cycle cost of equipment. Testability demonstration and evaluation is of significance in measuring such testability indexes as fault detection rate (FDR) and fault isolation rate (FIR), which is useful to the producer in mastering the testability level and improving the testability design, and helpful to the consumer in making purchase decisions. Aiming at the problems with a small sample of testability demonstration test data (TDTD) such as low evaluation confidence and inaccurate result, a testability evaluation method is proposed based on the prior information of multiple sources and Bayes theory. Firstly, the types of prior information are analyzed. The maximum entropy method is applied to the prior information with the mean and interval estimate forms on the testability index to obtain the parameters of prior probability density function (PDF), and the empirical Bayesian method is used to get the parameters for the prior information with a success-fail form. Then, a parametrical data consistency check method is used to check the compatibility between all the sources of prior information and TDTD. For the prior information to pass the check, the prior credibility is calculated. A mixed prior distribution is formed based on the prior PDFs and the corresponding credibility. The Bayesian posterior distribution model is acquired with the mixed prior distribution and TDTD, based on which the point and interval estimates are calculated. Finally, examples of a flying control system are used to verify the proposed method. The results show that the proposed method is feasible and effective.
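A hedged sketch of the mixed-prior Bayes update the abstract outlines, for a single testability index such as FDR: two hypothetical Beta priors stand in for the multiple information sources, their credibilities weight the mixture, and the posterior follows by Beta-binomial conjugacy. All numeric values are invented for illustration; the paper's own prior-elicitation and consistency-check steps are not reproduced here.

```python
import math

def beta_mean(a, b):
    return a / (a + b)

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

# Hypothetical prior sources for a fault detection rate, encoded as Beta
# distributions (shape parameters chosen for illustration only):
priors = [(18.0, 2.0), (8.0, 2.0)]   # (alpha, beta) per source
cred   = [0.6, 0.4]                  # prior credibilities, summing to 1

# Demonstration test data: s detected faults out of n injected faults.
s, n = 27, 30

# Bayes update of a Beta mixture with binomial data: each component updates
# conjugately, and mixture weights are rescaled by the component marginal
# likelihoods (the shared binomial coefficient cancels on normalisation).
post, weights = [], []
for (a, b), w in zip(priors, cred):
    post.append((a + s, b + n - s))
    weights.append(w * math.exp(log_beta(a + s, b + n - s) - log_beta(a, b)))
z = sum(weights)
weights = [w / z for w in weights]

# Posterior point estimate of the fault detection rate.
fdr_est = sum(w * beta_mean(a, b) for w, (a, b) in zip(weights, post))
```

With these toy numbers the posterior mean lands close to 0.89, between the two component posterior means, weighted toward the more credible and better-fitting source.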
Testability evaluation using prior information of multiple sources
Wang Chao
2014-08-01
Testability plays an important role in improving the readiness and decreasing the life-cycle cost of equipment. Testability demonstration and evaluation is of significance in measuring such testability indexes as fault detection rate (FDR) and fault isolation rate (FIR), which is useful to the producer in mastering the testability level and improving the testability design, and helpful to the consumer in making purchase decisions. Aiming at the problems with a small sample of testability demonstration test data (TDTD), such as low evaluation confidence and inaccurate results, a testability evaluation method is proposed based on the prior information of multiple sources and Bayes theory. Firstly, the types of prior information are analyzed. The maximum entropy method is applied to the prior information with the mean and interval estimate forms on the testability index to obtain the parameters of the prior probability density function (PDF), and the empirical Bayesian method is used to get the parameters for the prior information with a success-fail form. Then, a parametrical data consistency check method is used to check the compatibility between all the sources of prior information and TDTD. For the prior information to pass the check, the prior credibility is calculated. A mixed prior distribution is formed based on the prior PDFs and the corresponding credibility. The Bayesian posterior distribution model is acquired with the mixed prior distribution and TDTD, based on which the point and interval estimates are calculated. Finally, examples of a flying control system are used to verify the proposed method. The results show that the proposed method is feasible and effective.
Components of Visual Prior Entry
Schneider, Keith A.; Bavelier, Daphne
2003-01-01
The prior entry hypothesis contends that attention accelerates sensory processing, shortening the time to perception. Typical observations supporting the hypothesis may be explained equally well by response biases, changes in decision criteria, or sensory facilitation. In a series of experiments conducted to discriminate among the potential…
Arthur Prior and medieval logic
Uckelman, S.L.
2012-01-01
Though Arthur Prior is now best known for his founding of modern temporal logic and hybrid logic, much of his early philosophical career was devoted to history of logic and historical logic. This interest laid the foundations for both of his ground-breaking innovations in the 1950s and 1960s. Because of the important rôle played by Prior’s research in ancient and medieval logic in his development of temporal and hybrid logic, any student of Prior, temporal logic, or hybrid logic should be familiar…
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
Curvature Prior for MRF-based Segmentation and Shape Inpainting
Shekhovtsov, Alexander; Rother, Carsten
2011-01-01
Most image labeling problems such as segmentation and image reconstruction are fundamentally ill-posed and suffer from ambiguities and noise. Higher order image priors encode high level structural dependencies between pixels and are key to overcoming these problems. However, these priors in general lead to computationally intractable models. This paper addresses the problem of discovering compact representations of higher order priors which allow efficient inference. We propose a framework for solving this problem which uses a recently proposed representation of higher order functions where they are encoded as lower envelopes of linear functions. Maximum a posteriori inference on our learned models reduces to minimizing a pairwise function of discrete variables, which can be done approximately using standard methods. Although this is a primarily theoretical paper, we also demonstrate the practical effectiveness of our framework on the problem of learning a shape prior for image segmentation and reconstruction....
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Drug discovery applies multidisciplinary approaches, either experimentally, computationally or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
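A toy instance of the Jaynes maximum entropy principle referenced above (the support and target mean are invented for illustration): the least-biased distribution on a finite set consistent with a prescribed mean has the exponential (Gibbs) form, and its Lagrange multiplier can be found by simple bisection.

```python
import math

# Find the maximum entropy distribution on {0, 1, ..., 10} with mean 3.0.
# The solution has the Gibbs form p_k ∝ exp(-lam * k); we solve for the
# multiplier lam by bisection on the implied mean.
xs = list(range(11))
target_mean = 3.0

def mean_for(lam):
    w = [math.exp(-lam * x) for x in xs]
    z = sum(w)
    return sum(x * wx for x, wx in zip(xs, w)) / z

lo, hi = -5.0, 5.0          # mean_for is decreasing in lam on this bracket
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_for(mid) > target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

weights = [math.exp(-lam * x) for x in xs]
z = sum(weights)
p = [w / z for w in weights]    # normalised maxent distribution
```

Any distribution on the same support with the same mean but a different shape has strictly lower entropy, which is exactly the "least bias" property the abstract appeals to.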
Quantum steganography using prior entanglement
Mihara, Takashi, E-mail: mihara@toyo.jp
2015-06-05
Steganography is the hiding of secret information within innocent-looking information (e.g., text, audio, image, video, etc.). A quantum version of steganography is a method based on quantum physics. In this paper, we propose quantum steganography by combining quantum error-correcting codes with prior entanglement. In many steganographic techniques, embedding secret messages in error-correcting codes may cause damage to them if the embedded part is corrupted. However, our proposed steganography can separately create secret messages and the content of cover messages. The intrinsic form of the cover message does not have to be modified for embedding secret messages. - Highlights: • Our steganography combines quantum error-correcting codes with prior entanglement. • Our steganography can separately create secret messages and the content of cover messages. • Errors in cover messages do not affect the recovery of secret messages. • We embed a secret message in the Steane code as an example of our steganography.
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging
Shuanghui Zhang
2016-04-01
This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. The maximum a posteriori (MAP) estimation and the maximum likelihood estimation (MLE) are utilized to estimate the model parameters to avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and Hadamard product are used to reduce the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms the traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)
MAXIMUM DISCLOSURE WITH MINIMUM DELAY
J Van R. du Preez
2012-02-01
In his treatment of the subject 'Die SA Weermag moet ook sy ander wapens effektief aanwend' ('The SA Defence Force must also deploy its other weapons effectively') in the 7/1 issue of Militaria, Colonel W. Otto regards it as incumbent on the South African Defence Force to make effective use of propaganda (in my book the corruption of the channels of communication).
Mirror averaging with sparsity priors
Dalalyan, Arnak
2010-01-01
We consider the problem of aggregating the elements of a (possibly infinite) dictionary for building a decision procedure, that aims at minimizing a given criterion. Along with the dictionary, an independent identically distributed training sample is available, on which the performance of a given procedure can be tested. In a fairly general set-up, we establish an oracle inequality for the Mirror Averaging aggregate based on any prior distribution. This oracle inequality is applied in the context of sparse coding for different problems of statistics and machine learning such as regression, density estimation and binary classification.
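As a rough illustration of the aggregation idea above, here is a minimal sketch of exponentially weighted aggregation with time-averaged weights, a simplified relative of the Mirror Averaging aggregate (the dictionary of constant predictors, the temperature, and the squared loss are all invented for the example, and no claim is made that this matches the paper's exact procedure or oracle inequality).

```python
import math
import random

random.seed(1)

# Toy problem: three constant predictors (the "dictionary") for a noisy
# target centred at 2.0.
dictionary = [0.0, 2.0, 5.0]
beta = 0.5                      # temperature of the exponential weights
cum_loss = [0.0] * len(dictionary)
weight_history = []

for _ in range(500):
    y = 2.0 + random.gauss(0, 1)
    # exponential weights on cumulative losses, stabilised by subtracting
    # the minimum loss before exponentiating
    m = min(cum_loss)
    w = [math.exp(-(c - m) / beta) for c in cum_loss]
    z = sum(w)
    weight_history.append([wi / z for wi in w])
    for j, f in enumerate(dictionary):
        cum_loss[j] += (y - f) ** 2

# Mirror-averaging-style step: average the weight vectors over rounds,
# then mix the dictionary elements with the averaged weights.
avg_w = [sum(h[j] for h in weight_history) / len(weight_history)
         for j in range(len(dictionary))]
aggregate = sum(wj * fj for wj, fj in zip(avg_w, dictionary))
```

The averaged weights concentrate on the best dictionary element, so the aggregate ends up close to 2.0; averaging over rounds (rather than keeping only the final weights) is what gives this family of procedures its robustness.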
Recursive estimation of prior probabilities using the mixture approach
Kazakos, D.
1974-01-01
The problem of estimating the prior probabilities q_k of a mixture of known density functions f_k(X), based on a sequence of N statistically independent observations, is considered. It is shown that for very mild restrictions on f_k(X), the maximum likelihood estimate of Q is asymptotically efficient. A recursive algorithm for estimating Q is proposed, analyzed, and optimized. For the M = 2 case, it is possible for the recursive algorithm to achieve the same performance as the maximum likelihood one. For M > 2, slightly inferior performance is the price for having a recursive algorithm. However, the loss is computable and tolerable.
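A minimal sketch of the kind of recursive update the abstract describes, for the M = 2 case: the estimate is blended with the posterior probability that each new observation came from the first component, using a decreasing 1/n gain (a stochastic-approximation form of the maximum likelihood update). The two unit-variance Gaussian components and all constants are illustrative assumptions, not taken from the paper.

```python
import math
import random

random.seed(2)

# Two known component densities (Gaussians at 0 and 4, unit variance) and an
# unknown mixing probability q to be estimated from the data stream.
def f1(x): return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
def f2(x): return math.exp(-0.5 * (x - 4) ** 2) / math.sqrt(2 * math.pi)

true_q = 0.3
q_hat = 0.5  # initial estimate

for n in range(1, 20001):
    # draw one observation from the true mixture
    x = random.gauss(0, 1) if random.random() < true_q else random.gauss(4, 1)
    # posterior probability that x came from component 1, under q_hat
    r1 = q_hat * f1(x) / (q_hat * f1(x) + (1 - q_hat) * f2(x))
    # recursive step: convex combination keeps q_hat inside (0, 1)
    q_hat += (r1 - q_hat) / n
```

Because each step is a convex combination of the current estimate and a posterior probability, the iterate stays in (0, 1) without any projection, and with well-separated components it settles near the true mixing probability.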
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented too. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent a subdifferential of a maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity, Shephard's lemma, as well as duality theory in production and linear programming.
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p_1(Θ_1) over the first-view classifier parameter Θ_1 and p_2(Θ_2) over the second-view classifier parameter Θ_2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED, because compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between the two views. We give the detailed solving procedure, which can be divided into two steps. The first step is solving our optimization problem without considering the equal margin posteriors from two views, and then, in the second step, we consider the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of the AMVMED, and comparisons with MVMED are also reported.
Integrating prior knowledge and structure from motion
Guilbert, Nicolas; Aanæs, Henrik; Larsen, Rasmus
2001-01-01
A new approach for formulating prior knowledge in structure from motion is presented, where the structure is viewed as a 3D stochastic variable, whereby priors are more naturally expressed. It is demonstrated that this formulation is efficient for regularizing structure reconstruction via prior knowledge. Specifically, algorithms for imposing priors in the proposed formulation are presented.
Phase retrieval with prior information.
Irwan, R; Lane, R G
1998-09-01
An algorithm for phase retrieval with Bayesian statistics is discussed. It is shown how the statistics of Kolmogorov turbulence can be used to compute the likelihood for a particular phase screen. This likelihood is then added to that of the observed data to produce a functional that is maximized directly by use of conjugate gradient maximization. It is shown that although this can significantly improve the quality of the phase estimate, the issue is complicated by local maxima introduced by the possibility of phase wrapping. The causes of the local maxima are analyzed, and a method that increases the likelihood of convergence to the global maximum is presented.
Effective Image Restorations Using a Novel Spatial Adaptive Prior
Limin Luo
2010-01-01
Bayesian or maximum a posteriori (MAP) approaches can effectively overcome the ill-posed problems of image restoration or deconvolution through incorporating a priori image information. Many restoration methods, such as nonquadratic prior Bayesian restoration and total variation regularization, have been proposed with edge-preserving and noise-removing properties. However, these methods are often inefficient in restoring continuous variation regions and suppressing block artifacts. To handle this, this paper proposes a Bayesian restoration approach with a novel spatial adaptive (SA) prior. Through selectively and adaptively incorporating the nonlocal image information into the SA prior model, the proposed method effectively suppresses the negative disturbance from irrelevant neighbor pixels, and utilizes the positive regularization from the relevant ones. A two-step restoration algorithm for the proposed approach is also given. Comparative experimentation and analysis demonstrate that, bearing high-quality edge-preserving and noise-removing properties, the proposed restoration also has good deblocking properties.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus … on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy … in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in quantitative agreement with experimental data. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
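A minimal sketch of the maximum entropy reweighting idea in this setting: given ensemble members from a simulation whose average observable disagrees with experiment, the minimal (maximum entropy) perturbation of the uniform weights that restores agreement has exponential form, with a single multiplier fixed by the experimental value. The exponential form of the optimal weights is the standard maxent result; the synthetic "simulation", the target value, and the bisection bracket are all invented for illustration.

```python
import math
import random

random.seed(3)

# Toy "simulation": 1000 conformations with an observable whose simulated
# average (about 3.0) disagrees with a hypothetical experimental value.
obs = [random.gauss(3.0, 1.0) for _ in range(1000)]
target = 2.5

# Maximum entropy weights: w_i ∝ exp(-lam * obs_i). Solve for the multiplier
# lam by bisection on the reweighted mean, which decreases as lam grows.
def reweighted_mean(lam):
    w = [math.exp(-lam * o) for o in obs]
    z = sum(w)
    return sum(o * wi for o, wi in zip(obs, w)) / z

lo, hi = -20.0, 20.0
for _ in range(200):
    mid = (lo + hi) / 2
    if reweighted_mean(mid) > target:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

weights = [math.exp(-lam * o) for o in obs]
z = sum(weights)
weights = [wi / z for wi in weights]   # normalised ensemble weights
```

Among all weight assignments that reproduce the experimental average, these are the ones closest (in relative entropy) to the original uniform ensemble, which is exactly why the approach perturbs the simulation no more than the data demand.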
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link among 1/f noise, power laws, Self-Organized Criticality and Maximum Entropy Production, the power-law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Prior muscular exercise affects cycling pattern.
Bieuzen, F; Hausswirth, C; Couturier, A; Brisswalter, J
2008-05-01
The aim of this study was to investigate the effect of concentric or eccentric fatiguing exercise on cycling pattern. Eleven well-trained cyclists completed three sessions of cycling (control cycling test [CTRL], cycling following concentric [CC] or eccentric [ECC] knee contractions) at a mean power of 276.8 +/- 26.6 Watts. Concentric and eccentric knee contractions were performed at a load corresponding to 80% of one-repetition maximum with both legs. Before and after CTRL, CC or ECC knee contractions and after cycling, a maximal voluntary contraction (MVC) test was performed. Cardiorespiratory, mechanical and electromyographic (EMG) activity of the rectus femoris, vastus lateralis and biceps femoris muscles were recorded during cycling. A significant decrease in MVC values was observed after CC and ECC exercises and after the cycling. ECC exercise induced a significant decrease in EMG root mean square during MVC and a decrease in pedal rate during cycling. EMG values of the three muscles were significantly higher during cycling exercise following CC exercise when compared to CTRL. The main finding of this study was that a prior ECC exercise induces greater neuromuscular fatigue than a CC exercise, and changes the cycling pattern.
Occupational Outlook Quarterly, 2012
2012-01-01
The labor force is the number of people ages 16 or older who are either working or looking for work. It does not include active-duty military personnel or the institutionalized population, such as prison inmates. Determining the size of the labor force is a way of determining how big the economy can get. The size of the labor force depends on two…
Mishra, S; Rosen, C A; Murry, T
2000-03-01
A retrospective review was conducted of 40 singers presenting with acute voice problems prior to performance. The purpose of this study was to determine the reasons for seeking emergent voice treatment, the types of acute voice disorders, and the performance outcome. The patients were assessed by age, singing style, years of experience, chief complaint, laryngovideostroboscopic findings, and treatment regimens. The outcomes were classified as full, restricted, or no performance. The majority of patients were classical singers. Laryngovideostroboscopy frequently revealed a pattern of early glottic contact at the mid-portion of the membranous vocal fold in patients with acute laryngitis. Experienced singers uniformly sought treatment many days before their performance compared with inexperienced singers who presented closer in time to performance. Six patients initially withheld information, which had a bearing on their acute management. The results of this study suggest that there is a need to accurately diagnose and treat the singer's emergent problem and educate singers regarding early evaluation of medical problems. With modern evaluation techniques and multi-modality treatment, 85% of the singers proceeded to full performance without negative sequelae.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
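The regularizer above rests on the identity $I(X;Y) = H(X) + H(Y) - H(X,Y)$. A minimal plug-in estimate of this mutual information from paired samples (an illustration only, not the paper's entropy-estimation scheme) might look like:

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    """Plug-in estimate of I(response; label) in bits from paired samples,
    via I = H(response) + H(label) - H(response, label)."""
    def entropy(xs):
        n = len(xs)
        return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())
    joint = list(zip(responses, labels))
    return entropy(responses) + entropy(labels) - entropy(joint)

# A perfectly informative response on a balanced binary problem carries 1 bit
y_true = [0, 0, 1, 1]
print(mutual_information(y_true, y_true))  # 1.0
```

Maximizing this quantity over classifier parameters is what the paper's objective does, alongside the usual loss and complexity terms.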
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
Maximum-Entropy Inference with a Programmable Annealer
Chancellor, Nicholas; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A
2015-01-01
Optimisation problems in science and engineering typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this approach maximises the likelihood that the solution found is correct. An alternative approach is to make use of prior statistical information about the noise in conjunction with Bayes's theorem. The maximum entropy solution to the problem then takes the form of a Boltzmann distribution over the ground and excited states of the cost function. Here we use a programmable Josephson junction array for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that maximum entropy decoding at finite temperature can in certain cases give competitive and even slightly better bit-error rates than the maximum likelihood approach at zero temperature, confirming that useful information can be extracted from the excited states of the annealing...
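The distinction between the two decoders can be sketched on a toy Ising model small enough to enumerate exactly: finite-temperature (maximum-entropy) decoding reads off the sign of each thermal average ⟨s_i⟩ under the Boltzmann distribution, rather than a single ground state. The couplings and field below are illustrative only, not the paper's instances:

```python
import itertools, math

def maxent_decode(J, h, beta):
    """Decode each spin by the sign of its thermal average <s_i> under the
    Boltzmann distribution at inverse temperature beta (exhaustive enumeration,
    feasible only for small n; illustrates the idea, not annealing hardware)."""
    n = len(h)
    Z, mag = 0.0, [0.0] * n
    for s in itertools.product((-1, 1), repeat=n):
        # Ising energy E = -sum J_ij s_i s_j - sum h_i s_i
        E = -sum(J.get((i, j), 0.0) * s[i] * s[j]
                 for i in range(n) for j in range(i + 1, n))
        E -= sum(h[i] * s[i] for i in range(n))
        w = math.exp(-beta * E)
        Z += w
        for i in range(n):
            mag[i] += w * s[i]
    return [1 if m > 0 else -1 for m in mag]

# 3-spin ferromagnetic chain in a weak positive field: decodes to all +1
J = {(0, 1): 1.0, (1, 2): 1.0}
print(maxent_decode(J, [0.1, 0.1, 0.1], beta=1.0))  # [1, 1, 1]
```

As beta grows, the thermal averages concentrate on the ground state, recovering the zero-temperature maximum likelihood decoder as a limit.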
Buhmann, Stefan Yoshi
2012-01-01
In this book, a modern unified theory of dispersion forces on atoms and bodies is presented which covers a broad range of advanced aspects and scenarios. Macroscopic quantum electrodynamics is shown to provide a powerful framework for dispersion forces which allows for discussing general properties like their non-additivity and the relation between microscopic and macroscopic interactions. It is demonstrated how the general results can be used to obtain dispersion forces on atoms in the presence of bodies of various shapes and materials. Starting with a brief recapitulation of volume I, this volume II deals especially with bodies of irregular shapes, universal scaling laws, dynamical forces on excited atoms, enhanced forces in cavity quantum electrodynamics, non-equilibrium forces in thermal environments and quantum friction. The book gives both the specialist and those new to the field a thorough overview over recent results in the field. It provides a toolbox for studying dispersion forces in various contex...
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
34 CFR 642.32 - Prior experience.
2010-07-01
... 34 Education 3 2010-07-01 2010-07-01 false Prior experience. 642.32 Section 642.32 Education....32 Prior experience. (a)(1) The Secretary gives priority to each applicant that has conducted a... points to be awarded each eligible applicant, the Secretary considers the applicant's prior experience of...
Iterated random walks with shape prior
Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma;
2016-01-01
We propose a new framework for image segmentation using random walks where a distance shape prior is combined with a region term. The shape prior is weighted by a confidence map to reduce the influence of the prior in high gradient areas and the region term is computed with k-means to estimate th...
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
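For context against the running times quoted above, the simplest exact baseline for the bipartite case is Kuhn's $O(VE)$ augmenting-path algorithm; the sketch below is illustrative and is not the Markov-chain algorithm of the paper:

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm for maximum cardinality matching in a
    bipartite graph. adj[u] lists right-side vertices adjacent to left vertex u;
    returns the size of a maximum matching."""
    match_r = [-1] * n_right            # match_r[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be rematched elsewhere
            if match_r[v] == -1 or try_augment(match_r[v], seen):
                match_r[v] = u
                return True
        return False

    size = 0
    for u in range(len(adj)):
        if try_augment(u, set()):
            size += 1
    return size

# Left vertices {0,1,2}, right vertices {0,1,2}: a perfect matching exists
print(max_bipartite_matching([[0, 1], [0], [1, 2]], 3))  # 3
```

Each phase either finds an augmenting path or certifies the current vertex cannot be matched, which is why the matching found is maximum.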
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
The power prior: theory and applications.
Ibrahim, Joseph G; Chen, Ming-Hui; Gwon, Yeongjin; Chen, Fang
2015-12-10
The power prior has been widely used in many applications covering a large number of disciplines. The power prior is intended to be an informative prior constructed from historical data. It has been used in clinical trials, genetics, health care, psychology, environmental health, engineering, economics, and business. It has also been applied for a wide variety of models and settings, both in the experimental design and analysis contexts. In this review article, we give an A-to-Z exposition of the power prior and its applications to date. We review its theoretical properties, variations in its formulation, statistical contexts for which it has been used, applications, and its advantages over other informative priors. We review models for which it has been used, including generalized linear models, survival models, and random effects models. Statistical areas where the power prior has been used include model selection, experimental design, hierarchical modeling, and conjugate priors. Frequentist properties of power priors in posterior inference are established, and a simulation study is conducted to further examine the empirical performance of the posterior estimates with power priors. Real data analyses are given illustrating the power prior as well as the use of the power prior in the Bayesian design of clinical trials.
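In the conjugate Beta-binomial case the power prior has a closed form: the historical binomial likelihood is raised to the power $a_0 \in [0,1]$ before being combined with the initial Beta prior, so the discounting parameter simply scales the historical counts. A minimal sketch (the function and parameter names are illustrative, not from the article):

```python
def power_prior_beta(alpha0, beta0, y_hist, n_hist, a0, y_cur, n_cur):
    """Posterior Beta parameters under a power prior:
    pi(theta | D0, a0) ~ L(theta | D0)^a0 * Beta(alpha0, beta0),
    then updated with the current binomial data (y_cur successes in n_cur)."""
    a_post = alpha0 + a0 * y_hist + y_cur
    b_post = beta0 + a0 * (n_hist - y_hist) + (n_cur - y_cur)
    return a_post, b_post

# a0 = 0 discards the historical data; a0 = 1 pools it fully
print(power_prior_beta(1, 1, 30, 100, 0.0, 12, 40))  # (13.0, 29.0)
print(power_prior_beta(1, 1, 30, 100, 1.0, 12, 40))  # (43.0, 99.0)
```

Intermediate values of a0 interpolate between these extremes, which is the sense in which the power prior "borrows strength" from historical data in a controlled way.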
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational
PET image reconstruction using multi-parametric anato-functional priors
Mehranian, Abolfazl; Belzunce, Martin A.; Niccolini, Flavia; Politis, Marios; Prieto, Claudia; Turkheimer, Federico; Hammers, Alexander; Reader, Andrew J.
2017-08-01
In this study, we investigate the application of multi-parametric anato-functional (MR-PET) priors for the maximum a posteriori (MAP) reconstruction of brain PET data in order to address the limitations of the conventional anatomical priors in the presence of PET-MR mismatches. In addition to partial volume correction benefits, the suitability of these priors for reconstruction of low-count PET data is also introduced and demonstrated, comparing to standard maximum-likelihood (ML) reconstruction of high-count data. The conventional local Tikhonov and total variation (TV) priors and current state-of-the-art anatomical priors including the Kaipio, non-local Tikhonov prior with Bowsher and Gaussian similarity kernels are investigated and presented in a unified framework. The Gaussian kernels are calculated using both voxel- and patch-based feature vectors. To cope with PET and MR mismatches, the Bowsher and Gaussian priors are extended to multi-parametric priors. In addition, we propose a modified joint Burg entropy prior that by definition exploits all parametric information in the MAP reconstruction of PET data. The performance of the priors was extensively evaluated using 3D simulations and two clinical brain datasets of [18F]florbetaben and [18F]FDG radiotracers. For simulations, several anato-functional mismatches were intentionally introduced between the PET and MR images, and furthermore, for the FDG clinical dataset, two PET-unique active tumours were embedded in the PET data. Our simulation results showed that the joint Burg entropy prior far outperformed the conventional anatomical priors in terms of preserving PET unique lesions, while still reconstructing functional boundaries with corresponding MR boundaries. In addition, the multi-parametric extension of the Gaussian and Bowsher priors led to enhanced preservation of edge and PET unique features and also an improved bias-variance performance. In agreement with the simulation results, the clinical results
``Force,'' ontology, and language
Brookes, David T.; Etkina, Eugenia
2009-06-01
We introduce a linguistic framework through which one can interpret systematically students’ understanding of and reasoning about force and motion. Some researchers have suggested that students have robust misconceptions or alternative frameworks grounded in everyday experience. Others have pointed out the inconsistency of students’ responses and presented a phenomenological explanation for what is observed, namely, knowledge in pieces. We wish to present a view that builds on and unifies aspects of this prior research. Our argument is that many students’ difficulties with force and motion are primarily due to a combination of linguistic and ontological difficulties. It is possible that students are primarily engaged in trying to define and categorize the meaning of the term “force” as spoken about by physicists. We found that this process of negotiation of meaning is remarkably similar to that engaged in by physicists in history. In this paper we will describe a study of the historical record that reveals an analogous process of meaning negotiation, spanning multiple centuries. Using methods from cognitive linguistics and systemic functional grammar, we will present an analysis of the force and motion literature, focusing on prior studies with interview data. We will then discuss the implications of our findings for physics instruction.
Modeling Climate Responses to Spectral Solar Forcing on Centennial and Decadal Time Scales
Wen, G.; Cahalan, R.; Rind, D.; Jonas, J.; Pilewskie, P.; Harder, J.
2012-01-01
We report a series of experiments to explore climate responses to two types of solar spectral forcing on decadal and centennial time scales - one based on prior reconstructions, and another implied by recent observations from the SORCE (Solar Radiation and Climate Experiment) SIM (Spectral Irradiance Monitor). We apply these forcings to the Goddard Institute for Space Studies (GISS) Global/Middle Atmosphere Model (GCMAM), which couples the atmosphere with the ocean and has a model top near the mesopause, allowing us to examine the full response to the two solar forcing scenarios. We show different climate responses to the two solar forcing scenarios on decadal time scales and also trends on centennial time scales. Differences between solar maximum and solar minimum conditions are highlighted, including impacts of the time-lagged response of the lower atmosphere and ocean. This contrasts with studies that assume separate equilibrium conditions at solar maximum and minimum. We discuss model feedback mechanisms involved in the solar-forced climate variations.
Influence of Pareto optimality on the maximum entropy methods
Peddavarapu, Sreehari; Sunil, Gujjalapudi Venkata Sai; Raghuraman, S.
2017-07-01
Galerkin meshfree schemes are emerging as a viable substitute for the finite element method in solving partial differential equations for large-deformation as well as crack-propagation problems. The introduction of the Shannon-Jaynes entropy principle into scattered-data approximation has changed the way the approximation functions are defined, resulting in maximum entropy approximants. In addition, an objective functional that controls the degree of locality leads to local maximum entropy approximants. These are based on an information-theoretic Pareto optimality between entropy and degree of locality that defines the basis functions on the scattered nodes. The degree of locality in turn relies on the choice of the locality parameter and the prior (weight) function, and proper choices of both play a vital role in attaining the desired accuracy. The present work focuses on the choice of the locality parameter, which defines the degree of locality, and of the priors - Gaussian, cubic spline and quartic spline functions - and on their effect on the behavior of local maximum entropy approximants.
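A one-dimensional sketch of local maximum entropy approximants with a Gaussian prior (following the standard Arroyo-Ortiz construction; the specific nodes and locality parameter below are illustrative): the shape functions are $p_a(x) \propto \exp(-\beta (x-x_a)^2 + \lambda (x_a-x))$, with the scalar Lagrange multiplier $\lambda$ chosen by Newton iteration so that the linear-reproduction constraint $\sum_a p_a(x)(x_a - x) = 0$ holds.

```python
import math

def lme_shape_functions(nodes, x, beta, tol=1e-12):
    """1D local max-ent shape functions at evaluation point x.
    beta is the locality parameter (larger beta = more local support)."""
    lam = 0.0
    for _ in range(100):
        w = [math.exp(-beta * (x - a) ** 2 + lam * (a - x)) for a in nodes]
        Z = sum(w)
        p = [wi / Z for wi in w]
        r = sum(pi * (a - x) for pi, a in zip(p, nodes))   # constraint residual
        if abs(r) < tol:
            break
        # dr/dlam is the variance of (a - x) under p: always positive
        J = sum(pi * (a - x) ** 2 for pi, a in zip(p, nodes)) - r * r
        lam -= r / J                                       # Newton step
    return p

# Partition of unity and linear reproduction at x = 0.3
p = lme_shape_functions([0.0, 0.5, 1.0], 0.3, beta=4.0)
print(round(sum(p), 6), round(sum(pi * a for pi, a in zip(p, [0.0, 0.5, 1.0])), 6))  # 1.0 0.3
```

Sweeping beta interpolates between global (Shepard-like) and sharply local approximants, which is exactly the Pareto trade-off between entropy and locality described above.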
Without the strong force, there could be no life. The carbon in living matter is synthesised in stars via the strong force. Lighter atomic nuclei become bound together in a process called nuclear fusion. A minor change in this interaction would make life impossible. As its name suggests, the strong force is the most powerful of the 4 forces, yet its sphere of influence is limited to within the atomic nucleus. Indeed it is the strong force that holds together the quarks inside the positively charged protons. Without this glue, the quarks would fly apart repulsed by electromagnetism. In fact, it is impossible to separate 2 quarks : so much energy is needed, that a second pair of quarks is produced. Text for the interactive: Can you pull apart the quarks inside a proton?
Comparison Between Bayesian and Maximum Entropy Analyses of Flow Networks
Steven H. Waldrip
2017-02-01
We compare the application of Bayesian inference and the maximum entropy (MaxEnt) method for the analysis of flow networks, such as water, electrical and transport networks. The two methods have the advantage of allowing a probabilistic prediction of flow rates and other variables, when there is insufficient information to obtain a deterministic solution, and also allow the effects of uncertainty to be included. Both methods of inference update a prior to a posterior probability density function (pdf) by the inclusion of new information, in the form of data or constraints. The MaxEnt method maximises an entropy function subject to constraints, using the method of Lagrange multipliers, to give the posterior, while the Bayesian method finds its posterior by multiplying the prior with likelihood functions incorporating the measured data. In this study, we examine MaxEnt using soft constraints, either included in the prior or as probabilistic constraints, in addition to standard moment constraints. We show that when the prior is Gaussian, both Bayesian inference and the MaxEnt method with soft prior constraints give the same posterior means, but their covariances are different. In the Bayesian method, the interactions between variables are applied through the likelihood function, using second or higher-order cross-terms within the posterior pdf. In contrast, the MaxEnt method incorporates interactions between variables using Lagrange multipliers, avoiding second-order correlation terms in the posterior covariance. The MaxEnt method with soft prior constraints, therefore, has a numerical advantage over Bayesian inference, in that the covariance terms are avoided in its integrations. The second MaxEnt method with soft probabilistic constraints is shown to give posterior means of similar, but not identical, structure to the other two methods, due to its different formulation.
Functional uniform priors for nonlinear modeling.
Bornkamp, Björn
2012-09-01
This article considers the topic of finding prior distributions when a major component of the statistical model depends on a nonlinear function. Using results on how to construct uniform distributions in general metric spaces, we propose a prior distribution that is uniform in the space of functional shapes of the underlying nonlinear function and then back-transform to obtain a prior distribution for the original model parameters. The primary application considered in this article is nonlinear regression, but the idea might be of interest beyond this case. For nonlinear regression the so constructed priors have the advantage that they are parametrization invariant and do not violate the likelihood principle, as opposed to uniform distributions on the parameters or the Jeffrey's prior, respectively. The utility of the proposed priors is demonstrated in the context of design and analysis of nonlinear regression modeling in clinical dose-finding trials, through a real data example and simulation.
Bayesian variable selection with spherically symmetric priors
De Kock, M. B.; Eggers, H. C.
2014-01-01
We propose that Bayesian variable selection for linear parametrisations with Gaussian iid likelihoods be based on the spherical symmetry of the diagonalised parameter space. Our r-prior results in closed forms for the evidence for four examples, including the hyper-g prior and the Zellner-Siow prior, which are shown to be special cases. Scenarios of a single variable dispersion parameter and of fixed dispersion are studied, and asymptotic forms comparable to the traditional information criter...
Lee, Taek-Soo; Tsui, Benjamin M.W. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Radiology; Gullberg, Grant T. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)
2011-07-01
We propose and evaluate a 4D maximum a posteriori rescaled-block iterative (MAP-RBI)-EM image reconstruction method with a motion prior to improve the accuracy of 4D gated myocardial perfusion (GMP) SPECT images. We hypothesized that a 4D motion prior which resembles the global motion of the true 4D motion of the heart will improve the accuracy of the reconstructed images with regional myocardial motion defects. The normal heart model in the 4D XCAT (eXtended CArdiac-Torso) phantom is used as the prior in the 4D MAP-RBI-EM algorithm, where a Gaussian-shaped distribution is used as the derivative of the potential function (DPF) that determines the smoothing strength and range of the prior in the algorithm. The mean and width of the DPF are set equal to the expected difference between the reconstructed image and the motion prior, and to the smoothing range, respectively. To evaluate the algorithm, we used simulated projection data from a typical clinical 99mTc Sestamibi GMP SPECT study using the 4D XCAT phantom. The noise-free projection data were generated using an analytical projector that included the effects of attenuation, collimator-detector response and scatter (ADS), and Poisson noise was added to generate noisy projection data. The projection datasets were reconstructed using the modified 4D MAP-RBI-EM with various iterations, prior weights, and sigma values as well as with ADS correction. The results showed that the 4D reconstructed image estimates looked more like the motion prior, with sharper edges, as the weight of the prior increased. They also demonstrated that edge preservation of the myocardium in the GMP SPECT images could be controlled by a proper motion prior. The Gaussian-shaped DPF allowed stronger and weaker smoothing force for smaller and larger differences of neighboring voxel values, respectively, depending on its parameter values. We conclude that the 4D MAP-RBI-EM algorithm with the general motion prior can be used to provide 4D GMP SPECT images with improved
Penalised Complexity Priors for Stationary Autoregressive Processes
Sørbye, Sigrunn Holbek
2017-05-25
The autoregressive (AR) process of order p (AR(p)) is a central model in time series analysis. A Bayesian approach requires the user to define a prior distribution for the coefficients of the AR(p) model. Although it is easy to write down some prior, it is not at all obvious how to understand and interpret the prior distribution and to ensure that it behaves according to the user's prior knowledge. In this article, we approach this problem using the recently developed ideas of penalised complexity (PC) priors. These priors have important properties like robustness and invariance to reparameterisations, as well as a clear interpretation. A PC prior is computed based on specific principles, where model component complexity is penalised in terms of deviation from simple base model formulations. In the AR(1) case, we discuss two natural base model choices, corresponding to either independence in time or no change in time. The latter case is illustrated in a survival model with possible time-dependent frailty. For higher-order processes, we propose a sequential approach, where the base model for AR(p) is the corresponding AR(p-1) model expressed using the partial autocorrelations. The properties of the new prior distribution are compared with the reference prior in a simulation study.
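The PC-prior construction can be illustrated numerically. The sketch below assumes a two-observation AR(1) vector and the "independence in time" base model rho = 0, for which the Kullback-Leibler distance is analytic; the scaling parameter lam is a hypothetical user choice, and the paper's actual derivation is more general than this toy:

```python
import math

# PC-prior sketch for the AR(1) lag-one correlation rho, base model
# rho = 0.  For a two-observation Gaussian vector,
#   KLD( N(0, [[1,rho],[rho,1]]) || N(0, I) ) = -0.5 * log(1 - rho^2),
# the distance is d = sqrt(2 * KLD), and an exponential prior on d is
# pushed back to rho (mass split over rho < 0 and rho > 0).

def pc_prior_density(rho, lam=2.0):
    t = 1.0 - rho * rho
    d = math.sqrt(-math.log(t))
    if d == 0.0:
        return 0.5 * lam                 # limiting value as rho -> 0
    # |dd/drho| = |rho| / ((1 - rho^2) * d)
    return 0.5 * lam * math.exp(-lam * d) * abs(rho) / (t * d)

# The density integrates to ~1 over (-1, 1) (trapezoid on a fine grid;
# a little mass lies beyond |rho| = 0.999, hence "approximately").
n = 20000
h = 1.998 / n
grid = [-0.999 + i * h for i in range(n + 1)]
vals = [pc_prior_density(r) for r in grid]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
```

The exponential distribution on the distance scale is what gives the PC prior its constant-rate penalisation of deviations from the base model.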
Without the weak force, the sun wouldn't shine. The weak force causes beta decay, a form of radioactivity that triggers nuclear fusion in the heart of the sun. The weak force is unlike other forces: it is characterised by disintegration. In beta decay, a down quark transforms into an up quark and an electron is emitted. Some materials are more radioactive than others because the delicate balance between the strong force and the weak force varies depending on the number of particles in the atomic nucleus. We live in the midst of a natural radioactive background that varies from region to region. For example, in Cornwall where there is a lot of granite, levels of background radiation are much higher than in the Geneva region. Text for the interactive: Move the Geiger counter to find out which samples are radioactive - you may be surprised. It is the weak force that is responsible for the Beta radioactivity here. The electrons emitted do not cross the plastic cover. Why do you think there is some detected radioa...
Efficient maximum likelihood parameterization of continuous-time Markov processes
McGibbon, Robert T
2015-01-01
Continuous-time Markov processes over finite state spaces are widely used to model dynamical processes in many fields of natural and social science. Here, we introduce a maximum likelihood estimator for constructing such models from data observed at a finite time interval. This estimator is drastically more efficient than prior approaches, enables the calculation of deterministic confidence intervals in all model parameters, and can easily enforce important physical constraints on the models such as detailed balance. We demonstrate and discuss the advantages of these models over existing discrete-time Markov models for the analysis of molecular dynamics simulations.
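To make the estimation problem concrete, here is the two-state special case, where the matrix logarithm linking P(tau) to the rate matrix is available in closed form. This is a common textbook inversion, not the authors' estimator, which handles general finite state spaces and constraints such as detailed balance:

```python
import math

# Two-state continuous-time Markov process with rates a (0 -> 1) and
# b (1 -> 0), observed at sampling interval tau.

def transition_matrix(a, b, tau):
    lam = a + b
    f = 1.0 - math.exp(-lam * tau)
    return [[1.0 - a / lam * f, a / lam * f],
            [b / lam * f, 1.0 - b / lam * f]]

def estimate_rates(p, tau):
    """Invert the 2x2 map: recover (a, b) from an estimated P(tau)."""
    s = p[0][1] + p[1][0]              # equals 1 - exp(-(a+b)*tau)
    lam = -math.log(1.0 - s) / tau
    return lam * p[0][1] / s, lam * p[1][0] / s

p = transition_matrix(0.3, 0.7, tau=0.5)
a_hat, b_hat = estimate_rates(p, tau=0.5)
```

In practice P(tau) would be estimated from transition counts; the closed form above shows why a finite sampling interval still identifies the underlying rates in this small case.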
Extracting volatility signal using maximum a posteriori estimation
Neto, David
2016-11-01
This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations, and hence heavy-tailed marginal distributions of log-returns. We consider two routes to choosing the regularization and compare our MAP estimate to a realized volatility measure for three exchange rates.
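The role of the double exponential prior can be sketched as an L1 penalty on increments of the latent signal, which is what permits sharp jumps. The toy below smooths the absolute value and runs plain gradient descent on the resulting MAP objective; lam, eps and the step size are illustrative choices, and the paper's HMM-based estimate is more elaborate:

```python
import math, random

# MAP denoising sketch: a Laplace (double exponential) prior on
# increments of the latent signal gives the L1-type objective
#   sum (y_t - x_t)^2 + lam * sum |x_t - x_{t-1}|,
# here smoothed as sqrt(d^2 + eps) so plain gradient descent applies.

def objective(x, y, lam=1.0, eps=0.01):
    fit = sum((x[t] - y[t]) ** 2 for t in range(len(x)))
    pen = lam * sum(math.sqrt((x[t] - x[t - 1]) ** 2 + eps)
                    for t in range(1, len(x)))
    return fit + pen

def map_denoise(y, lam=1.0, eps=0.01, step=0.02, n_iter=300):
    x = list(y)
    for _ in range(n_iter):
        g = [2.0 * (x[t] - y[t]) for t in range(len(x))]
        for t in range(1, len(x)):
            d = x[t] - x[t - 1]
            gd = lam * d / math.sqrt(d * d + eps)
            g[t] += gd
            g[t - 1] -= gd
        x = [x[t] - step * g[t] for t in range(len(x))]
    return x

random.seed(0)
truth = [0.0] * 50 + [2.0] * 50            # latent signal with a jump
y = [v + random.gauss(0.0, 0.3) for v in truth]
x = map_denoise(y)
```

Because the penalty grows only linearly in the jump size, the single large jump survives denoising while the small noisy increments are smoothed away.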
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure the receiver function in the time domain.
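The Levinson recursion mentioned in the abstract can be sketched directly. Given an autocorrelation sequence, it solves the Toeplitz normal equations for the error-predicting filter; for a valid autocorrelation the reflection coefficients satisfy |k| < 1, which is the stability property the abstract notes:

```python
# Levinson-Durbin recursion for the prediction-error (error-predicting)
# filter.  Input: autocorrelation r[0..p]; output: filter coefficients
# a (with a[0] = 1) and the final prediction-error power.

def levinson(r, order):
    a = [1.0]
    err = r[0]
    for m in range(1, order + 1):
        acc = sum(a[j] * r[m - j] for j in range(m))
        k = -acc / err                    # reflection coefficient
        a = [a[j] + (k * a[m - j] if 0 < j < m else 0.0)
             for j in range(m)] + [k]
        err *= (1.0 - k * k)              # |k| < 1 keeps err positive
    return a, err

# AR(1) check with rho = 0.5: r[k] = rho**k.  An order-2 fit recovers
# the single coefficient, a zero second tap, and error 1 - rho**2.
r = [1.0, 0.5, 0.25]
a, err = levinson(r, 2)
```

The recursion's O(p^2) cost, versus O(p^3) for a general solver, is why it is the standard route through the Toeplitz equations.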
Relationship between Prior Knowledge and Reading Comprehension
Abdelaal, Noureldin Mohamed; Sase, Amal Saleh
2014-01-01
This study investigates the relationship between prior knowledge and reading comprehension in second language among postgraduate students in UPM. Participants in the study were 20 students who have the same level in English as a second language from several faculties. On the basis of a prior-knowledge questionnaire and test, students were…
Quantitative Evidence Synthesis with Power Priors
Rietbergen, C.
2016-01-01
The aim of this thesis is to provide the applied researcher with a practical approach for quantitative evidence synthesis using the conditional power prior, which allows for subjective input and thereby provides an alternative to deal with the difficulties associated with the joint power prior.
Signaling Without Common Prior : An Experiment
Drouvelis, M.; Müller, W.; Possajennikov, A.
2009-01-01
The common prior assumption is pervasive in game-theoretic models with incomplete information. This paper investigates experimentally the importance of inducing a common prior in a two-person signaling game. For a specific probability distribution of the sender’s type, the long-run behavior without
Improving Open Access through Prior Learning Assessment
Yin, Shuangxu; Kawachi, Paul
2013-01-01
This paper explores and presents new data on how to improve open access in distance education through using prior learning assessments. Broadly there are three types of prior learning assessment (PLAR): Type-1 for prospective students to be allowed to register for a course; Type-2 for current students to avoid duplicating work-load to gain…
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
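The differentiation step described above can be sketched with a single-diode panel model: set dP/dV = d(V·I(V))/dV to zero and solve numerically. The parameters Isc, I0 and Vt below are hypothetical illustrative values, not numbers from the article:

```python
import math

# Single-diode panel model: I(V) = Isc - I0*(exp(V/Vt) - 1).
# Maximum power is found where dP/dV = 0, located by bisection
# (dP/dV is strictly decreasing on [0, Voc]).

ISC, I0, VT = 5.0, 1e-9, 1.0     # hypothetical illustrative parameters

def current(v):
    return ISC - I0 * (math.exp(v / VT) - 1.0)

def dp_dv(v):                     # d(V*I)/dV = I(V) + V * I'(V)
    return current(v) + v * (-I0 / VT * math.exp(v / VT))

v_oc = VT * math.log(ISC / I0 + 1.0)     # open-circuit voltage
lo, hi = 0.0, v_oc
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if dp_dv(mid) > 0.0:
        lo = mid
    else:
        hi = mid
v_mp = 0.5 * (lo + hi)            # voltage of maximum power
p_mp = v_mp * current(v_mp)       # maximum power
```

Repeating this for irradiance-dependent Isc values would reproduce the article's maximum power as a function of time of day.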
Principle of maximum Fisher information from Hardy's axioms applied to statistical systems.
Frieden, B Roy; Gatenby, Robert A
2013-10-01
Consider a finite-sized, multidimensional system in parameter state a. The system is either at statistical equilibrium or general nonequilibrium, and may obey either classical or quantum physics. L. Hardy's mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N = max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N = max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I = I(max). This is important because many physical laws have been derived assuming, as a working hypothesis, that I = I(max). These derivations include uses of the principle of extreme physical information (EPI). Examples of such derivations are the De Broglie wave hypothesis, quantum wave equations, Maxwell's equations, new laws of biology (e.g., of Coulomb force-directed cell development and of in situ cancer growth), and new laws of economic fluctuation and investment. That the principle I = I(max) itself derives from suitably extended Hardy axioms thereby eliminates the need for it to be assumed in these derivations. Thus, uses of I = I(max) and EPI express physics at its most fundamental level, its axiomatic basis in mathematics.
Radiohumeral stability to forced translation
Jensen, Steen Lund; Olsen, Bo Sanderhoff; Seki, Atsuhito
2002-01-01
Radiohumeral stability to forced translation was experimentally analyzed in 8 osteocartilaginous joint preparations. The joints were dislocated in 8 centrifugal directions at 12 different combinations of joint flexion and rotation while a constant joint compression force of 23 N was applied....... Stability was measured as the maximum resistance to translation. On average, the specimens could resist a transverse force of 16.4 N (range, 13.0-19.1 N). Stability was greater in some directions than in others. Rotating the joint changed the direction at which stability was greatest, whereas joint flexion...
Bhalla, Amneet Pal Singh; Griffith, Boyce E; Patankar, Neelesh A
2013-01-01
A fundamental issue in locomotion is to understand how muscle forcing produces apparently complex deformation kinematics leading to movement of animals like undulatory swimmers. The question of whether complicated muscle forcing is required to create the observed deformation kinematics is central to the understanding of how animals control movement. In this work, a forced damped oscillation framework is applied to a chain-link model for undulatory swimming to understand how forcing leads to deformation and movement. A unified understanding of swimming, caused by muscle contractions ("active" swimming) or by forces imparted by the surrounding fluid ("passive" swimming), is obtained. We show that the forcing triggers the first few deformation modes of the body, which in turn cause the translational motion. We show that relatively simple forcing patterns can trigger seemingly complex deformation kinematics that lead to movement. For given muscle activation, the forcing frequency relative to the natural frequency of the damped oscillator is important for the emergent deformation characteristics of the body. The proposed approach also leads to a qualitative understanding of optimal deformation kinematics for fast swimming. These results, based on a chain-link model of swimming, are confirmed by fully resolved computational fluid dynamics (CFD) simulations. Prior results from the literature on the optimal value of stiffness for maximum speed are explained.
Spratford, Wayne; Campbell, Rhiannon
2017-02-14
Recurve archery is an Olympic sport that requires extreme precision, upper body strength and endurance. The purpose of this research was to quantify how postural stability variables both pre- and post-arrow release, draw force, flight time, arrow length and clicker reaction time, collectively, impacted on the performance or scoring outcomes in elite recurve archery athletes. Thirty-nine elite-level recurve archers (23 male and 16 female; mean age = 24.7 ± 7.3 years) from four different countries volunteered to participate in this study prior to competing at a World Cup event. An AMTI force platform (1000 Hz) was used to obtain centre of pressure (COP) measurements 1 s prior to arrow release and 0.5 s post-arrow release. High-speed footage (200 Hz) allowed for calculation of arrow flight time and score. Results identified clicker reaction time, draw force and maximum sway speed as the variables that best predicted shot performance. Specifically, reduced clicker reaction time, greater bow draw force and reduced postural sway speed post-arrow release were predictors of higher scoring shots. It is suggested that future research should focus on investigating shoulder muscle tremors at full draw in relation to clicker reaction time, and the effect of upper body strength interventions (specifically targeting the musculature around the shoulder girdle) on performance in recurve archers.
Colorization by classifying the prior knowledge
DU Weiwei
2011-01-01
When the one-dimensional luminance scalar of every pixel in a monochrome image is replaced by a multi-dimensional colour vector, the process is called colorization. Colorization is, however, under-constrained, so prior knowledge must be supplied for the monochrome image. Colorization by optimization is an effective algorithm for this problem, but it cannot handle some images well without repeated experiments to confirm the placement of scribbles. In this paper, a colorization algorithm is proposed which can automatically generate the prior knowledge. The idea is that, first, the prior knowledge is condensed into a set of points which are automatically extracted by a downsampling-and-upsampling method. These points of prior knowledge are then classified and assigned corresponding colors. Finally, the color image is obtained from the colored points of the prior knowledge. Experiments demonstrate that the proposal can not only effectively generate the prior knowledge but also colorize the monochrome image according to the requirements of the user.
Ponedel, Benjamin; Knobloch, Edgar
2016-11-01
We study spatial localization in the real subcritical Ginzburg-Landau equation u_t = m_0 u + m_1 cos(2πx/l) u + u_xx + d|u|^2 u - |u|^4 u with spatially periodic forcing. When d > 0 and m_1 = 0 this equation exhibits bistability between the trivial state u = 0 and a homogeneous nontrivial state u = u_0, with stationary localized structures which accumulate at the Maxwell point m_0 = -3d^2/16. When spatial forcing is included its wavelength is imprinted on u_0, creating conditions favorable to front pinning and hence spatial localization. We use numerical continuation to show that under appropriate conditions such forcing generates a sequence of localized states organized within a snakes-and-ladders structure centered on the Maxwell point, and refer to this phenomenon as forced snaking. We determine the stability properties of these states and show that longer lengthscale forcing leads to stationary trains consisting of a finite number of strongly localized, weakly interacting pulses exhibiting foliated snaking.
Terminology for pregnancy loss prior to viability
Kolte, A M; Bernardi, L A; Christiansen, O B
2015-01-01
Pregnancy loss prior to viability is common and research in the field is extensive. Unfortunately, terminology in the literature is inconsistent. The lack of consensus regarding nomenclature and classification of pregnancy loss prior to viability makes it difficult to compare study results from...... different centres. In our opinion, terminology and definitions should be based on clinical findings, and when possible, transvaginal ultrasound. With this Early Pregnancy Consensus Statement, it is our goal to provide clear and consistent terminology for pregnancy loss prior to viability....
Peck, V. L.; Hall, I. R.; Zahn, R.; Scourse, J. D.
2007-01-01
We present high-resolution benthic δ13C records from intermediate water depth core site MD01-2461 (1153 m water depth), from the Porcupine Seabight, NE Atlantic, spanning 43 to 8 kyr B.P. At an average proxy time step of 160 ± 56 years this core provides information on the linkage between records from the Portuguese Margin and high-latitude North Atlantic basin, allowing additional insights into North Atlantic thermohaline circulation (THC) variability during millennial-scale climatic events of the last glacial. Together, these records document both discrete and progressive reductions in Glacial North Atlantic Intermediate Water (GNAIW) formation preceding Heinrich (H) events 1, 2, and 4, recorded through the apparent interchange of glacial northern and southern-sourced intermediate water signatures along the European Margin. Close coupling of NW European ice sheet (NWEIS) instability and GNAIW formation is observed through transient advances of SCW along the European margin concurrent with pulses of ice-rafted debris and meltwater release into the NE Atlantic between 27 and 16 kyr B.P., when the NWEIS was at maximum extent and proximal to Last Glacial Maximum convection zones in the open North Atlantic. It is such NWEIS instability and meltwater forcing that may have triggered reduced North Atlantic THC prior to collapse of the Laurentide ice sheet at H1 and H2. Precursory reduction in GNAIW formation prior to H4 may also be inferred. However, limited NWEIS ice volume prior to H4 and convection occurring in the Norwegian-Greenland Sea require that if a meltwater trigger is invoked, as appears to be the case at H1 and H2, the source of meltwater prior to H4 is elsewhere, likely higher-latitude ice sheets. Clarification of the sequencing and likely mechanisms of precursory decrease of the North Atlantic THC support theories of H event initiation relating to ice shelf growth during cold periods associated with reduced North Atlantic THC and subsequent ablation
The inverse maximum dynamic flow problem
BAGHERIAN; Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
A Simulation of Pell Grant Awards and Costs Using Prior-Prior Year Financial Data
Kelchen, Robert; Jones, Gigi
2015-01-01
We examine the likely implications of switching from a prior year (PY) financial aid system, the current practice in which students file the Free Application for Federal Student Aid (FAFSA) using income data from the previous tax year, to prior-prior year (PPY), in which data from two years before enrollment is used. While PPY allows students to…
Buckingham, A D
1975-11-06
The nature of molecular interactions is examined. Intermolecular forces are divided into long-range and short-range components; the former operate at distances where the effects of electron exchange are negligible and decrease as an inverse power of the separation. The long-range interactions may be subdivided into electrostatic, induction and dispersion contributions, where the electrostatic component is the interaction of the permanent charge distributions and the others originate in the fluctuations in the distributions. Typical magnitudes of the various contributions are given. The forces between macroscopic bodies are briefly considered, as are the effects of a medium. Some of the manifestations of molecular interactions are discussed.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of a SFCL can be determined.
Kesic, Dragana; Thomas, Stuart D M
2014-01-01
Despite sustained large-scale educational campaigns, public attitudes towards mental illness have remained persistently negative. Associated with this, recent research from Victoria, Australia, reported that police commonly associated violent behaviour with mental illness. The present study examined 4267 cases of police use of force and considered what differentiated and characterised violent from non-violent behaviours reported by police in the context of a use of force incident. The specific focus was to examine the effects that historical variables, such as age, gender, prior violent offending and having a prior diagnosis of mental disorder, as well as incident-specific factors, such as exhibiting signs of mental disorder and substance intoxication, have on violent behaviour during the use of force incident. The proximal factors of apparent mental disorder and alcohol intoxication were significantly associated with violent behaviour towards police, whilst a history of prior violence and prior mental disorder diagnoses were not associated with violence. The results challenge traditional stereotyped views about the violence risk posed by people with prior contact with mental health services and those with prior violent offending histories. A service model that allows for psychiatric triage would be able to assist with streamlining police involvement and facilitating timely access to mental health services.
Prior Authorization of PMDs Demonstration - Status Update
U.S. Department of Health & Human Services — CMS implemented a Prior Authorization process for scooters and power wheelchairs for people with Fee-For-Service Medicare who reside in seven states with high...
Cortical control of anticipatory postural adjustments prior to stepping.
Varghese, J P; Merino, D M; Beyer, K B; McIlroy, W E
2016-01-28
Human bipedal balance control is achieved either reactively or predictively by a distributed network of neural areas within the central nervous system, with a potential role for the cerebral cortex. While the role of the cortex in reactive balance has been widely explored, only a few studies have addressed the cortical activations related to predictive balance control. The present study investigated the cortical activations related to the preparation and execution of the anticipatory postural adjustments (APAs) that precede a step. This study also examined whether the preparatory cortical activations related to a specific movement depend on the context of control (postural component vs. focal component). Ground reaction forces and electroencephalographic (EEG) data were recorded from 14 healthy adults while they performed lateral weight shift and lateral stepping with and without initially preloading their weight to the stance leg. EEG analysis revealed that there were distinct movement-related potentials (MRPs), with concurrent event-related desynchronization (ERD) of mu and beta rhythms, prior to the onset of the APA and also to the onset of foot-off during lateral stepping in the fronto-central cortical areas. Also, the MRPs and ERD prior to the onset of the APA and the onset of lateral weight shift were not significantly different, suggesting comparable cortical activations for the generation of postural and focal movements. The present study reveals the occurrence of cortical activation prior to the execution of an APA that precedes a step. Importantly, this cortical activity appears independent of the context of the movement.
Nowakowska, Marzena
2017-04-01
The development of the Bayesian logistic regression model classifying the road accident severity is discussed. The already exploited informative priors (method of moments, maximum likelihood estimation, and two-stage Bayesian updating), along with the original idea of a Boot prior proposal, are investigated when no expert opinion has been available. In addition, two possible approaches to updating the priors, in the form of unbalanced and balanced training data sets, are presented. The obtained logistic Bayesian models are assessed on the basis of a deviance information criterion (DIC), highest probability density (HPD) intervals, and coefficients of variation estimated for the model parameters. The verification of the model accuracy has been based on sensitivity, specificity and the harmonic mean of sensitivity and specificity, all calculated from a test data set. The models obtained from the balanced training data set have a better classification quality than the ones obtained from the unbalanced training data set. The two-stage Bayesian updating prior model and the Boot prior model, both identified with the use of the balanced training data set, outperform the non-informative, method of moments, and maximum likelihood estimation prior models. It is important to note that one should be careful when interpreting the parameters since different priors can lead to different models.
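The verification metrics named above are simple functions of the test-set confusion matrix; a minimal sketch (with made-up counts) is:

```python
# Sensitivity, specificity and their harmonic mean, as used above to
# verify the accident-severity classifiers on a test data set.
# The counts below are hypothetical, for illustration only.

def classification_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    harmonic = (2.0 * sensitivity * specificity
                / (sensitivity + specificity))
    return sensitivity, specificity, harmonic

sens, spec, hmean = classification_metrics(tp=40, fn=10, tn=30, fp=20)
```

The harmonic mean penalises imbalance between the two rates, which is why it is a stricter summary than their arithmetic mean when one class dominates.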
Transferring visual prior for online object tracking.
Wang, Qing; Chen, Feng; Yang, Jimei; Xu, Wenli; Yang, Ming-Hsuan
2012-07-01
Visual prior from generic real-world images can be learned and transferred for representing objects in a scene. Motivated by this, we propose an algorithm that transfers visual prior learned offline for online object tracking. From a collection of real-world images, we learn an overcomplete dictionary to represent visual prior. The prior knowledge of objects is generic, and the training image set does not necessarily contain any observation of the target object. During the tracking process, the learned visual prior is transferred to construct an object representation by sparse coding and multiscale max pooling. With this representation, a linear classifier is learned online to distinguish the target from the background and to account for the target and background appearance variations over time. Tracking is then carried out within a Bayesian inference framework, in which the learned classifier is used to construct the observation model and a particle filter is used to estimate the tracking result sequentially. Experiments on a variety of challenging sequences with comparisons to several state-of-the-art methods demonstrate that more robust object tracking can be achieved by transferring visual prior.
Varying prior information in Bayesian inversion
Walker, Matthew; Curtis, Andrew
2014-06-01
Bayes' rule is used to combine likelihood and prior probability distributions. The former represents knowledge derived from new data, the latter represents pre-existing knowledge; the Bayesian combination is the so-called posterior distribution, representing the resultant new state of knowledge. While varying the likelihood due to differing data observations is common, there are also situations where the prior distribution must be changed or replaced repeatedly. For example, in mixture density neural network (MDN) inversion, using current methods the neural network employed for inversion needs to be retrained every time prior information changes. We develop a method of prior replacement to vary the prior without re-training the network. Thus the efficiency of MDN inversions can be increased, typically by orders of magnitude when applied to geophysical problems. We demonstrate this for the inversion of seismic attributes in a synthetic subsurface geological reservoir model. We also present results which suggest that prior replacement can be used to control the statistical properties (such as variance) of the final estimate of the posterior in more general (e.g., Monte Carlo based) inverse problem solutions.
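The idea of prior replacement can be demonstrated on a discrete parameter grid: reweight an existing posterior by the ratio of the new prior to the old one instead of recomputing from scratch. The grid, likelihood and priors below are arbitrary illustrative choices, not the MDN setting of the paper:

```python
import math

# Prior replacement on a discrete grid: given posterior_old computed
# under prior_old, obtain the posterior under prior_new by reweighting
# with the prior ratio and renormalizing -- no recomputation (and, in
# the MDN analogy, no retraining) of the likelihood is needed.

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

grid = [i / 100.0 for i in range(1, 100)]           # parameter grid
likelihood = [math.exp(-50.0 * (t - 0.4) ** 2) for t in grid]
prior_old = [1.0 for t in grid]                     # flat prior
prior_new = [t for t in grid]                       # favors larger t

posterior_old = normalize([l * p for l, p in zip(likelihood, prior_old)])

# prior replacement: multiply by the new/old prior ratio, renormalize
posterior_rep = normalize([po * pn / pold for po, pn, pold
                           in zip(posterior_old, prior_new, prior_old)])

# reference: direct recomputation with the new prior
posterior_dir = normalize([l * p for l, p in zip(likelihood, prior_new)])
```

The reweighted and directly recomputed posteriors agree exactly (up to floating point), which is the identity that makes varying the prior cheap.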
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
…applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained… in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results…
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class associates with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for arbitrarily giving a statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension for the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
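In the Boltzmann-Gibbs-Shannon case, the stated separation of a divergence into cross entropy minus diagonal entropy is the familiar Kullback-Leibler identity, which can be checked numerically (generic sketch, arbitrary distributions):

```python
import numpy as np

rng = np.random.default_rng(1)

def normalize(p):
    return p / p.sum()

p = normalize(rng.random(6))
q = normalize(rng.random(6))

entropy = -np.sum(p * np.log(p))           # diagonal (Shannon) entropy of p
cross_entropy = -np.sum(p * np.log(q))     # cross entropy between p and q
kl = np.sum(p * np.log(p / q))             # KL divergence

# The divergence separates into cross entropy minus diagonal entropy,
# the Boltzmann-Gibbs-Shannon special case of the class discussed above.
print(np.isclose(kl, cross_entropy - entropy))  # True
```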
Interaction of Rate of Force Development and Duration of Rate in Isometric Force.
Siegel, Donald
A study attempted to determine whether force and duration parameters are programmed in an interactive or independent fashion prior to executing ballistic type isometric contractions of graded intensities. Four adult females each performed 360 trials of producing ballistic type forces representing 25, 40, 55, and 75 percent of their maximal…
Contribution of hand and foot force to take-off velocity for the kick-start in competitive swimming.
Takeda, Tsuyoshi; Sakai, Shin; Takagi, Hideki; Okuno, Keisuke; Tsubakimoto, Shozo
2017-03-01
This study examines the hand and foot reaction force recorded independently while performing the kick-start technique. Eleven male competitive swimmers performed three trials for the kick-start with maximum effort. Three force platforms (main block, backplate and handgrip) were used to measure reaction forces during starting motion. Force impulses from the hands, front foot and rearfoot were calculated via time integration. During the kick-start, the vertical impulse from the front foot was significantly higher than that from the rearfoot and the horizontal impulse from the rearfoot was significantly higher than that from the front foot. The force impulse from the front foot was dominant for generating vertical take-off velocity and the force impulse from the rearfoot was dominant for horizontal take-off velocity. The kick-start's shorter block time in comparison to prior measurements of the grab start was explained by the development of horizontal reaction force from the hands and the rearfoot at the beginning of the starting motion.
Tuning your priors to the world.
Feldman, Jacob
2013-01-01
The idea that perceptual and cognitive systems must incorporate knowledge about the structure of the environment has become a central dogma of cognitive theory. In a Bayesian context, this idea is often realized in terms of "tuning the prior"-widely assumed to mean adjusting prior probabilities so that they match the frequencies of events in the world. This kind of "ecological" tuning has often been held up as an ideal of inference, in fact defining an "ideal observer." But widespread as this viewpoint is, it directly contradicts Bayesian philosophy of probability, which views probabilities as degrees of belief rather than relative frequencies, and explicitly denies that they are objective characteristics of the world. Moreover, tuning the prior to observed environmental frequencies is subject to overfitting, meaning in this context overtuning to the environment, which leads (ironically) to poor performance in future encounters with the same environment. Whenever there is uncertainty about the environment-which there almost always is-an agent's prior should be biased away from ecological relative frequencies and toward simpler and more entropic priors.
Generative supervised classification using Dirichlet process priors.
Davy, Manuel; Tourneret, Jean-Yves
2010-10-01
Choosing the appropriate parameter prior distributions associated with a given Bayesian model is a challenging problem. Conjugate priors can be selected for simplicity. However, conjugate priors can be too restrictive to accurately model the available prior information. This paper studies a new generative supervised classifier which assumes that the parameter prior distributions conditioned on each class are mixtures of Dirichlet processes. The motivation for using mixtures of Dirichlet processes is their known ability to model accurately a large class of probability distributions. A Monte Carlo method allowing one to sample according to the resulting class-conditional posterior distributions is then studied. The parameters appearing in the class-conditional densities can then be estimated using these generated samples (following Bayesian learning). The proposed supervised classifier is applied to the classification of altimetric waveforms backscattered from different surfaces (oceans, ices, forests, and deserts). This classification is a first step before developing tools allowing for the extraction of useful geophysical information from altimetric waveforms backscattered from nonoceanic surfaces.
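A draw from a Dirichlet process can be generated by stick-breaking; the sketch below is a generic illustration (base measure and concentration chosen arbitrarily), not the authors' altimetric classifier:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, n_sticks = 2.0, 500   # concentration and truncation level (assumed)

# Stick-breaking construction of a draw G ~ DP(alpha, G0) with G0 = N(0, 1):
# break off Beta(1, alpha) fractions of the remaining stick.
betas = rng.beta(1.0, alpha, size=n_sticks)
remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
weights = betas * remaining              # mixture weights, sum close to 1
atoms = rng.normal(size=n_sticks)        # atom locations drawn from G0

print(weights.sum())  # truncation error shrinks as n_sticks grows
```

The random measure G places probability `weights[i]` on `atoms[i]`; mixing such draws over classes is what gives the model its flexibility.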
Force decomposition in robot force control
Murphy, Steve H.; Wen, John T.
1991-01-01
The unit inconsistency in force decomposition has motivated an investigation into the force control problem in multiple-arm manipulation. Based on physical considerations, it is argued that the force that should be controlled is the internal force at the specified frame in the payload. This force contains contributions due to both applied forces from the arms and the inertial force from the payload and the arms. A least-squares scheme free of unit inconsistency for finding this internal force is presented. The force control issue is analyzed, and an integral force feedback controller is proposed.
ZHANG De'er; Demaree Gaston
2004-01-01
In the context of historical climate records of China and early meteorological measurements of Beijing discovered recently in Europe, a study is undertaken on the 1743 hottest summer of north China over the last 700 years, covering Beijing, Tianjin, and the provinces of Hebei, Shanxi and Shandong, with the highest temperature reaching 44.4 °C in July 1743 in Beijing, in excess of the maximum climate record in the 20th century. Results show that the related weather/climate features of the 1743 heat wave, e.g., the flood/drought distribution and Meiyu activity, and the external forcings, such as solar activity and the equatorial Pacific SST condition, are the same as those of the 1942 and 1999 heat events. It is noted that the 1743 extreme summer heat event occurred in a relatively warm climate background prior to the Industrial Revolution, with a lower level of CO2 release.
GENERAL ASPECTS REGARDING THE PRIOR DISCIPLINARY RESEARCH
ANDRA PURAN (DASCĂLU)
2012-05-01
Disciplinary research is the first phase of the disciplinary action. According to art. 251 paragraph 1 of the Labour Code, no disciplinary sanction may be ordered before performing the prior disciplinary research. These regulations provide one exception: the sanction of written warning. The current regulations, kept from the old regulation, protect employees against abuses by employers, since sanctions affect the salary or the position held, or even the continuation of the individual employment contract. Thus, prior research of the act that constitutes misconduct, before a disciplinary sanction is applied, is an essential condition for the validity of the measure ordered. Through this study we try to highlight some general issues concerning the characteristics, procedure and effects of prior disciplinary research.
Structured sparse priors for image classification.
Srinivas, Umamahesh; Suo, Yuanming; Dao, Minh; Monga, Vishal; Tran, Trac D
2015-06-01
Model-based compressive sensing (CS) exploits the structure inherent in sparse signals for the design of better signal recovery algorithms. This information about structure is often captured in the form of a prior on the sparse coefficients, with the Laplacian being the most common such choice (leading to l1 -norm minimization). Recent work has exploited the discriminative capability of sparse representations for image classification by employing class-specific dictionaries in the CS framework. Our contribution is a logical extension of these ideas into structured sparsity for classification. We introduce the notion of discriminative class-specific priors in conjunction with class specific dictionaries, specifically the spike-and-slab prior widely applied in Bayesian sparse regression. Significantly, the proposed framework takes the burden off the demand for abundant training image samples necessary for the success of sparsity-based classification schemes. We demonstrate this practical benefit of our approach in important applications, such as face recognition and object categorization.
Commissioning of the PRIOR proton microscope
Varentsov, D; Bakhmutova, A; Barnes, C W; Bogdanov, A; Danly, C R; Efimov, S; Endres, M; Fertman, A; Golubev, A A; Hoffmann, D H H; Ionita, B; Kantsyrev, A; Krasik, Ya E; Lang, P M; Lomonosov, I; Mariam, F G; Markov, N; Merrill, F E; Mintsev, V B; Nikolaev, D; Panyushkin, V; Rodionova, M; Schanz, M; Schoenberg, K; Semennikov, A; Shestov, L; Skachkov, V S; Turtikov, V; Udrea, S; Vasylyev, O; Weyrich, K; Wilde, C; Zubareva, A
2015-01-01
Recently a new high energy proton microscopy facility PRIOR (Proton Microscope for FAIR) has been designed, constructed and successfully commissioned at GSI Helmholtzzentrum für Schwerionenforschung (Darmstadt, Germany). As a result of the experiments with 3.5-4.5 GeV proton beams delivered by the heavy ion synchrotron SIS-18 of GSI, 30 μm spatial and 10 ns temporal resolutions of the proton microscope have been demonstrated. A new pulsed-power setup for studying properties of matter under extremes has been developed for the dynamic commissioning of the PRIOR facility. This paper describes the PRIOR setup as well as the results of the first static and dynamic proton radiography experiments performed at GSI.
Generative Prior Knowledge for Discriminative Classification
DeJong, G; 10.1613/jair.1934
2011-01-01
We present a novel framework for integrating prior knowledge into discriminative classifiers. Our framework allows discriminative classifiers such as Support Vector Machines (SVMs) to utilize prior knowledge specified in the generative setting. The dual objective of fitting the data and respecting prior knowledge is formulated as a bilevel program, which is solved (approximately) via iterative application of second-order cone programming. To test our approach, we consider the problem of using WordNet (a semantic database of English language) to improve low-sample classification accuracy of newsgroup categorization. WordNet is viewed as an approximate, but readily available source of background knowledge, and our framework is capable of utilizing it in a flexible way.
Genome position specific priors for genomic prediction
Brøndum, Rasmus Froberg; Su, Guosheng; Lund, Mogens Sandø
2012-01-01
…population when using a prior derived from the Nordic Holstein population compared to using no prior information. These improvements were significant (Hotelling-Williams t-test) for protein and fat yield. Conclusion: For some traits the method might be advantageous compared to pooling… to estimate SNP effects, except in the case of fat yield. The small size of the Jersey validation set meant that these improvements in accuracy were not significant using a Hotelling-Williams t-test at the 5% level. An increase in accuracy of 1-2% for all traits was observed in the Australian Holstein…
Numbers and prior knowledge in sentence comprehension
Pedro Macizo
2013-01-01
We evaluated whether the comprehension of sentences that contained numerical information could benefit from presenting numbers in Arabic format and from using prior knowledge. Participants read sentences including numbers (Arabic digits or number words) while comprehension accuracy was evaluated. In addition, the sentences were biased or unbiased by people's prior knowledge about quantities. The results showed better comprehension for sentences that contained Arabic digits as compared to number words. Moreover, biased sentences were understood more accurately than unbiased sentences. These results indicate that information about magnitude in sentence context is comprehended better when quantities are presented in Arabic format and when they are associated with participants' world knowledge.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), however, the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
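With no CSI at the transmitter and equal power across the Nt antennas with uncorrelated signals, the instantaneous MISO rate is log2(1 + (SNR/Nt)·Σ|h_i|²). A Monte Carlo sketch of the ergodic rate under Rayleigh fading (my own illustration, parameter values arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n_tx, snr, trials = 4, 10.0, 200_000   # assumed antenna count and SNR

# i.i.d. Rayleigh fading: unit-variance complex Gaussian gains to one
# receive antenna, drawn independently per trial.
h = (rng.normal(size=(trials, n_tx))
     + 1j * rng.normal(size=(trials, n_tx))) / np.sqrt(2)

# Equal power split with uncorrelated signals: effective SNR = (P/Nt)*||h||^2.
rates = np.log2(1.0 + (snr / n_tx) * np.sum(np.abs(h) ** 2, axis=1))
ergodic_rate = rates.mean()
print(round(ergodic_rate, 2))
```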
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of density matrix of spin and radiation as well as to the determination of several parameters of interest in quantum optics.
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally incorporates the requirement of maximum entropy, the characteristics of the system and the connection conditions, which allows it to be applied to the statistical description of both closed and open systems. Examples are considered in which MENT has been used to describe equilibrium states, nonequilibrium states and states far from thermodynamic equilibrium.
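A standard MENT example: among distributions on the faces of a die with a prescribed mean, the entropy maximizer is exponential in the constrained quantity, p_k ∝ exp(−λk), with λ fixed by the data. A sketch solving for λ by bisection (the target mean is an assumed datum, not from the paper):

```python
import numpy as np

values = np.arange(1, 7)        # faces of a die (the system's states)
target_mean = 4.5               # measured constraint <k> = 4.5 (assumed)

def mean_for(lam):
    w = np.exp(-lam * values)   # maximum-entropy form p_k ∝ exp(-lam*k)
    p = w / w.sum()
    return p @ values

# Bisection on the Lagrange multiplier: mean_for is monotone decreasing
# in lam (its derivative is minus the variance of k).
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) > target_mean:
        lo = mid                # mean too high -> need larger lam
    else:
        hi = mid

lam = 0.5 * (lo + hi)
p = np.exp(-lam * values)
p /= p.sum()
print(np.round(p, 3))           # maximum-entropy distribution matching the mean
```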
19 CFR 114.23 - Maximum period.
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Maximum period. 114.23 Section 114.23 Customs... CARNETS Processing of Carnets § 114.23 Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depend only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures whose complexity depends on N.
Appearance of anodised aluminium: Effect of alloy composition and prior surface finish
Aggerbeck, Martin; Canulescu, Stela; Dirscherl, Kai
2014-01-01
…prior to anodising were analysed using scanning electron microscopy and atomic force microscopy. The optical appearance of the anodised surface with and without sealing was investigated using a photography setup, photospectrometry and the bidirectional reflectance distribution function. It was found… appearance was kept for alloys of high purity. Sealing made the specular reflection of the mechanically polished specimens more distinct.
President Nixon speaks at Hickam AFB prior to presenting Medal of Freedom
1970-01-01
President Richard M. Nixon speaks at Hickam Air Force Base prior to presenting the nation's highest civilian award to the Apollo 13 crew. Receiving the Presidential Medal of Freedom were Astronauts James A. Lovell Jr. (next to the Chief Executive), commander; John L. Swigert Jr. (left), command module pilot; and Fred W. Haise Jr., lunar module pilot.
Neuromuscular determinants of maximum walking speed in well-functioning older adults.
Clark, David J; Manini, Todd M; Fielding, Roger A; Patten, Carolynn
2013-03-01
Maximum walking speed may offer an advantage over usual walking speed for clinical assessment of age-related declines in mobility function that are due to neuromuscular impairment. The objective of this study was to determine the extent to which maximum walking speed is affected by neuromuscular function of the lower extremities in older adults. We recruited two groups of healthy, well-functioning older adults who differed primarily on maximum walking speed. We hypothesized that individuals with slower maximum walking speed would exhibit reduced lower extremity muscle size and impaired plantarflexion force production and neuromuscular activation during a rapid contraction of the triceps surae muscle group (soleus (SO) and gastrocnemius (MG)). All participants were required to have a usual 10-meter walking speed of >1.0 m/s. If the difference between usual and maximum 10 m walking speed was at least 0.6 m/s, the individual was assigned to the "Faster" group (n=12). Peak rate of force development (RFD) and rate of neuromuscular activation (rate of EMG rise) of the triceps surae muscle group were assessed during a rapid plantarflexion movement. Muscle cross-sectional area of the right triceps surae, quadriceps and hamstrings muscle groups was determined by magnetic resonance imaging. Across participants, the difference between usual and maximal walking speed was predominantly dictated by maximum walking speed (r=.85). We therefore report maximum walking speed (1.76 and 2.17 m/s in the Slower and Faster groups, respectively). Muscle cross-sectional area did not differ between groups for the triceps surae (p=.44), quadriceps (p=.76) and hamstrings (p=.98). MG rate of EMG rise was positively associated with RFD and maximum 10 m walking speed, but not with usual 10 m walking speed. These findings support the conclusion that maximum walking speed is limited by impaired neuromuscular force and activation of the triceps surae muscle group. Future research should further evaluate the utility of maximum walking speed in clinical assessment to detect and monitor age-related decline in mobility.
Negotiating Multicollinearity with Spike-and-Slab Priors.
Ročková, Veronika; George, Edward I
2014-08-01
In multiple regression under the normal linear model, the presence of multicollinearity is well known to lead to unreliable and unstable maximum likelihood estimates. This can be particularly troublesome for the problem of variable selection where it becomes more difficult to distinguish between subset models. Here we show how adding a spike-and-slab prior mitigates this difficulty by filtering the likelihood surface into a posterior distribution that allocates the relevant likelihood information to each of the subset model modes. For identification of promising high posterior models in this setting, we consider three EM algorithms, the fast closed form EMVS version of Rockova and George (2014) and two new versions designed for variants of the spike-and-slab formulation. For a multimodal posterior under multicollinearity, we compare the regions of convergence of these three algorithms. Deterministic annealing versions of the EMVS algorithm are seen to substantially mitigate this multimodality. A single simple running example is used for illustration throughout.
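A spike-and-slab prior mixes a near-zero "spike" with a diffuse "slab" through a Bernoulli inclusion indicator. The sampling sketch below illustrates the prior itself (hyperparameters arbitrary), not the EMVS algorithms discussed in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
p_incl, spike_sd, slab_sd, n = 0.2, 0.01, 2.0, 10_000  # assumed hyperparameters

# Per-coefficient inclusion indicators and the two-component Gaussian mixture.
gamma = rng.random(n) < p_incl
beta = np.where(gamma,
                rng.normal(0.0, slab_sd, n),    # slab: genuine effects
                rng.normal(0.0, spike_sd, n))   # spike: effectively zero

# Most draws are negligible; a minority (roughly p_incl) carry the signal,
# which is what filters the likelihood surface toward sparse subset models.
frac_large = np.mean(np.abs(beta) > 0.1)
print(round(frac_large, 2))
```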
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Sexual identification from skeletal parts has medico-legal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with an osteometric board. Mean values obtained were 451.81 and 417.48 mm for right male and female, and 453.35 and 420.44 mm for left male and female, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 mm were definitely male and less than 379.99 mm were definitely female; for left bones, femora with maximum length more than 484.49 mm were definitely male and less than 385.73 mm were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities
Kinney, Justin B
2014-01-01
Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.
Validity in assessment of prior learning
Wahlgren, Bjarne; Aarkrog, Vibe
2015-01-01
The article deals with the results of a study of school-based assessment of adults who have enrolled as students at a vocational college in order to qualify for occupations as skilled workers. Based on examples of methods for assessing the students’ prior learning in a programme for hairdressers...
Action priors for learning domain invariances
Rosman, Benjamin S
2015-04-01
…defined as distributions over the action space, conditioned on environment state, and show how these can be learnt from a set of value functions. We apply action priors in the setting of reinforcement learning, to bias action selection during exploration…
Recognition of Prior Learning: The Participants' Perspective
Miguel, Marta C.; Ornelas, José H.; Maroco, João P.
2016-01-01
The current narrative on lifelong learning goes beyond formal education and training, including learning at work, in the family and in the community. Recognition of prior learning is a process of evaluation of those skills and knowledge acquired through life experience, allowing them to be formally recognized by the qualification systems. It is a…
The prior statistics of object colors
Koenderink, J.J.
2010-01-01
The prior statistics of object colors is of much interest because extensive statistical investigations of reflectance spectra reveal highly non-uniform structure in color space common to several very different databases. This common structure is due to the visual system rather than to the statistics
Models for Validation of Prior Learning (VPL)
Ehlers, Søren
would have been categorized as utopian can become realpolitik. Validation of Prior Learning (VPL) was in Europe mainly regarded as utopian while universities in the United States of America (USA) were developing ways to obtain credits to those students which was coming with experiences from working life....
Offending prior to first psychiatric contact
Stevens, H; Agerbo, E; Dean, K
2012-01-01
There is a well-established association between psychotic disorders and subsequent offending but the extent to which those who develop psychosis might have a prior history of offending is less clear. Little is known about whether the association between illness and offending exists in non-psychot...
Erich Regener and the maximum in ionisation of the atmosphere
Carlson, P
2014-01-01
In the 1930s the German physicist Erich Regener (1881-1955) did important work on the measurement of the rate of production of ionisation deep under-water and in the atmosphere. He discovered, along with one of his students, Georg Pfotzer, the altitude at which the production of ionisation in the atmosphere reaches a maximum, often, but misleadingly, called the Pfotzer maximum. Regener was one of the first to estimate the energy density of cosmic rays, an estimate that was used by Baade and Zwicky to bolster their postulate that supernovae might be their source. Yet Regener's name is less recognised by present-day cosmic ray physicists than it should be largely because in 1937 he was forced to take early retirement by the National Socialists as his wife had Jewish ancestors. In this paper we briefly review his work on cosmic rays and recommend an alternative naming of the ionisation maximum. The influence that Regener had on the field through his son, his son-in-law, his grandsons and his students and through...
The Impact of Prior Heterosexual Experiences on Homosexuality in Women
Marissa A. Harrison
2008-04-01
An abundance of unwanted sexual opportunities perpetrated by insensitive, physically and sexually abusive men may be a factor in the expression of homosexuality in some women. In the present study, we examined self-reports of dating histories, sexual experiences, and physical and sexual abuse among lesbians and heterosexual women. Lesbians with prior heterosexual experience reported more severe and more frequent physical abuse by men. Lesbians also reported more instances of forced, unwanted sexual contact perpetrated by men, and this sexual abuse occurred at a significantly earlier age. These data show that adverse experiences with the opposite sex are more common in lesbians than heterosexual women, and therefore negative heterosexual experiences may be a factor in the expression of a same-sex sexual orientation in women. We propose an evolutionary psychological interpretation of this phenomenon based on the cardinally different mating strategies of women and men that have evolved for maximizing the likelihood of reproduction.
Pet Ownership and Evacuation Prior to Hurricane Irene
Nick Rohrbaugh
2012-09-01
Pet ownership has historically been one of the biggest risk factors for evacuation failure prior to natural disasters. The forced abandonment of pets during Hurricane Katrina in 2005 made national headlines and led to the passage of the Pet Evacuation and Transportation Standards Act (PETS, 2006), which mandated local authorities to plan for companion animal evacuation. Hurricane Irene hit the East Coast of the United States in 2011, providing an excellent opportunity to examine the impact of the PETS legislation on frequency and ease of evacuation among pet owners and non-pet owners. Ninety pet owners and 27 non-pet owners who lived in mandatory evacuation zones completed questionnaires assessing their experiences during the hurricane and symptoms of depression, PTSD, dissociative experiences, and acute stress. Pet ownership was not found to be a statistical risk factor for evacuation failure. However, many pet owners who failed to evacuate continue to cite pet-related reasons.
Optimal specific wavelength for maximum thrust production in undulatory propulsion.
Nangia, Nishant; Bale, Rahul; Chen, Nelson; Hanna, Yohanna; Patankar, Neelesh A
2017-01-01
What wavelengths do undulatory swimmers use during propulsion? In this work we find that a wide range of body/caudal fin (BCF) swimmers, from larval zebrafish and herring to fully-grown eels, use specific wavelength (ratio of wavelength to tail amplitude of undulation) values that fall within a relatively narrow range. The possible emergence of this constraint is interrogated using numerical simulations of fluid-structure interaction. Based on these, it was found that there is an optimal specific wavelength (OSW) that maximizes the swimming speed and thrust generated by an undulatory swimmer. The observed values of specific wavelength for BCF animals are relatively close to this OSW. The mechanisms underlying the maximum propulsive thrust for BCF swimmers are quantified and are found to be consistent with the mechanisms hypothesized in prior work. The adherence to an optimal value of specific wavelength in most natural hydrodynamic propulsors gives rise to empirical design criteria for man-made propulsors.
Intramuscular fiber conduction velocity, isometric force and explosive performance.
Methenitis, Spyridon; Terzis, Gerasimos; Zaras, Nikolaos; Stasinaki, Angeliki-Nikoletta; Karandreas, Nikolaos
2016-06-01
Conduction of electrical signals along the surface of muscle fibers is acknowledged as an essential neuromuscular component which is linked with muscle force production. However, it remains unclear whether muscle fiber conduction velocity (MFCV) is also linked with explosive performance. The aim of the present study was to investigate the relationship between vastus lateralis MFCV and countermovement jumping performance, the rate of force development and maximum isometric force. Fifteen moderately trained young females performed countermovement jumps as well as an isometric leg press test in order to determine the rate of force development and maximum isometric force. Vastus lateralis MFCV was measured with intramuscular microelectrodes at rest on a different occasion. Maximum MFCV was significantly correlated with maximum isometric force (r = 0.66) and with the rate of force development at 100 ms, 150 ms, 200 ms, and 250 ms (r = 0.85, r = 0.89, r = 0.91, r = 0.92, respectively), the correlations being stronger with the rate of force development than with maximum isometric leg press force. Lower, but still significant, correlations were found between mean MFCV and countermovement jump power (r = 0.65). Overall, MFCV was more closely associated with the rate of force development than with isometric force, perhaps because conduction velocity is higher in the larger and fastest muscle fibers, which are recognized to contribute to explosive actions.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve out to the outermost measured radial position. For that reason a general relation has been derived, giving the maximum rotation of a disc as a function of the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important tasks in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout occurs. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. For a quenching duration of 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on these sample results, the total length of CC needed in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. Standard inputs are triplets (rooted binary trees on three leaves) or quartets (unrooted binary trees on four leaves). We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D, each distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
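A small sketch (not the paper's algorithm) of the basic consistency notion these supertree methods build on: a rooted tree displays the triplet ab|c exactly when the lowest common ancestor of a and b lies strictly below the lowest common ancestor of {a, b, c}. The tree encoding and node names below are illustrative.

```python
# Check whether a rooted tree, given as a child -> parent map,
# displays a rooted triplet ab|c.

def ancestors(parent, v):
    """Path from v up to the root, inclusive."""
    path = [v]
    while v in parent:
        v = parent[v]
        path.append(v)
    return path

def lca(parent, u, v):
    """Lowest common ancestor: first node on v's root path also on u's."""
    up = set(ancestors(parent, u))
    for w in ancestors(parent, v):
        if w in up:
            return w
    raise ValueError("nodes are not in the same tree")

def consistent_with_triplet(parent, a, b, c):
    """True iff the tree displays the rooted triplet ab|c."""
    return lca(parent, a, b) != lca(parent, lca(parent, a, b), c)

# Tree ((a,b),(c,d)): root r, internal nodes x and y.
parent = {"a": "x", "b": "x", "c": "y", "d": "y", "x": "r", "y": "r"}
print(consistent_with_triplet(parent, "a", "b", "c"))  # True: ab|c displayed
print(consistent_with_triplet(parent, "a", "c", "b"))  # False: ac|b not displayed
```

A maximum consistent supertree maximizes the number of such displayed input triplets; the quoted running times come from dynamic programming over label subsets, which this snippet does not attempt.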
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
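The bound described above is simple enough to evaluate directly: take the maximum seismic moment as injected volume times the modulus of rigidity, then convert moment to moment magnitude with the standard Hanks-Kanamori relation. The rigidity value and injection volume below are illustrative assumptions, not figures from the paper.

```python
import math

def max_moment(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper-bound seismic moment (N*m): volume x modulus of rigidity."""
    return rigidity_pa * injected_volume_m3

def moment_magnitude(m0_nm):
    """Hanks-Kanamori moment magnitude, M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# e.g. a hypothetical 10^6 m^3 of injected wastewater:
m0 = max_moment(1.0e6)
print(round(moment_magnitude(m0), 1))  # 4.9
```

This reproduces the abstract's observation that large wastewater-disposal projects can plausibly bound earthquakes with magnitudes around 5.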
Maximum magnitude earthquakes induced by fluid injection
McGarr, A.
2014-02-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
Inferential permutation tests for maximum entropy models in ecology.
Shipley, Bill
2010-09-01
Maximum entropy (maxent) models assign probabilities to states that (1) agree with measured macroscopic constraints on attributes of the states and (2) are otherwise maximally uninformative and are thus as close as possible to a specified prior distribution. Such models have recently become popular in ecology, but classical inferential statistical tests require assumptions of independence during the allocation of entities to states that are rarely fulfilled in ecology. This paper describes a new permutation test for such maxent models that is appropriate for very general prior distributions and for cases in which many states have zero abundance and that can be used to test for conditional relevance of subsets of constraints. Simulations show that the test gives correct probability estimates under the null hypothesis. Power under the alternative hypothesis depends primarily on the number and strength of the constraints and on the number of states in the model; the number of empty states has only a small effect on power. The test is illustrated using two empirical data sets to test the community assembly model of B. Shipley, D. Vile, and E. Garnier and the species abundance distribution models of S. Pueyo, F. He, and T. Zillio.
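To make the permutation idea concrete, here is the generic scheme of such a test: compare an observed statistic against its distribution under random reallocation of entities. This is NOT Shipley's specific statistic for maxent models, just a minimal illustration with a made-up two-group comparison.

```python
import random

def perm_test(x, y, n_perm=2000, seed=0):
    """Permutation p-value for the absolute difference of group means."""
    rng = random.Random(seed)
    obs = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one to avoid p = 0

p = perm_test([5.1, 4.9, 5.3, 5.0], [1.2, 0.9, 1.1, 1.0])
print(p)  # small: the groups are clearly separated
```

The test in the paper replaces the mean difference with a fit statistic for the maxent model and permutes the allocation of entities to states, but the null-distribution logic is the same.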
Constructing Maximum Entropy Language Models for Movie Review Subjectivity Analysis
Bo Chen; Hui He; Jun Guo
2008-01-01
Document subjectivity analysis has become an important aspect of web text content mining. The problem is similar to traditional text categorization, so many related classification techniques can be adapted to it. However, there is one significant difference: more language or semantic information is required to better estimate the subjectivity of a document. Therefore, in this paper, we focus mainly on two aspects. One is how to extract useful and meaningful language features, and the other is how to construct appropriate language models efficiently for this special task. For the first issue, we employ a Global-Filtering and Local-Weighting strategy to select and evaluate language features in a series of n-grams with different orders and within various distance windows. For the second issue, we adopt Maximum Entropy (MaxEnt) modeling methods to construct our language model framework. Besides the classical MaxEnt models, we have also constructed two kinds of improved models with Gaussian and exponential priors, respectively. Detailed experiments given in this paper show that with well selected and weighted language features, MaxEnt models with exponential priors are significantly more suitable for the text subjectivity analysis task.
Combining experiments and simulations using the maximum entropy principle.
Wouter Boomsma
2014-02-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
Combining experiments and simulations using the maximum entropy principle.
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-02-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges.
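A minimal sketch of the reweighting version of this idea, under simplifying assumptions: given simulation frames with per-frame observable values f_i, the maximum entropy (minimally perturbed) weights have the exponential form w_i ∝ exp(λ f_i), with λ chosen so the weighted average matches the experimental target. The data and target below are made up.

```python
import math

def maxent_weights(f, target, lo=-50.0, hi=50.0, iters=200):
    """Weights w_i ∝ exp(lambda*f_i) whose weighted mean of f equals target."""
    def weighted_avg(lam):
        w = [math.exp(lam * fi) for fi in f]
        z = sum(w)
        return sum(wi * fi for wi, fi in zip(w, f)) / z
    # The weighted average is monotone increasing in lambda, so bisect.
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if weighted_avg(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2.0
    w = [math.exp(lam * fi) for fi in f]
    z = sum(w)
    return [wi / z for wi in w]

f = [1.0, 2.0, 3.0, 4.0]           # observable computed on each frame
w = maxent_weights(f, target=3.0)  # uniform mean is 2.5; pull it to 3.0
print(round(sum(wi * fi for wi, fi in zip(w, f)), 3))  # 3.0
```

In practice one reweights (or biases the potential of) an ensemble with many observables and experimental uncertainties; this single-observable, exact-constraint case only shows the exponential-family shape the maximum entropy solution always takes.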
Christensen, Thomas Budde
Public sector interventions that aim to support cluster development in industries most often focus upon economic policy goals such as enhanced employment and improved productivity, but rarely emphasise broader societal policy goals relating to e.g. sustainability or quality of life. The purpose of this paper is to explore how and to what extent public sector interventions that aim at forcing cluster development in industries can support sustainable development as defined in the Brundtland tradition and more recently elaborated in such concepts as eco... Countries such as Portugal and New Zealand have adopted the concept. The analysis is applied to the automotive sector in Wales. Specifically, the paper evaluates the "Accelerate" programme initiated by the Welsh Development Agency and elaborates on how and to what extent the Accelerate programme supports the development of a sustainable automotive industry cluster. The Accelerate programme was set up...
Marciuc, Daly; Solschi, Viorel
2017-04-01
Understanding the Coriolis effect is essential for explaining the movement of air masses and ocean currents. The lesson we propose aims to familiarize students with the manifestation of the Coriolis effect. Students are guided to build, using the GeoGebra software, a simulation of the motion of a body, related to a rotating reference system. The mathematical expression of the Coriolis force is deduced, for particular cases, and the Foucault's pendulum is presented and explained. Students have the opportunity to deepen the subject, by developing materials related to topics such as: • Global Wind Pattern • Ocean Currents • Coriolis Effect in Long Range Shooting • Finding the latitude with a Foucault Pendulum
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding can help to decrease the impact of wireless interference, and we propose a framework to study the MMF problem for multihop wireless networks with network coding. First, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over that of networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard, and a polynomial approximation algorithm is proposed.
The influence of material cues on early grasping force
Bergmann Tiest, W.M.; Kappers, A.M.L.
2014-01-01
The object of this study was to see whether differences in texture influence grip force in the very early phase of grasping an object. Subjects were asked to pick up objects with different textures either blindfolded or sighted, while grip force was measured. Maximum force was found to be adjusted t
Prior Information in Inverse Boundary Problems
Garde, Henrik
This thesis gives a threefold perspective on the inverse problem of inclusion detection in electrical impedance tomography: depth dependence, monotonicity-based reconstruction, and sparsity-based reconstruction. The depth dependence is given in terms of explicit bounds on the datum norm, which gives insight into how much noise can be allowed in the datum before an inclusion can no longer be detected. The monotonicity method is a direct reconstruction method that utilizes a monotonicity property of the forward problem in order to characterize the inclusions. Here we rigorously prove that the method can... of the method. Sparsity-based reconstruction is an iterative method that, through an optimization problem with a sparsity prior, approximates the inhomogeneities. Here we make use of prior information, which can cheaply be obtained from the monotonicity method, to improve both the contrast and resolution...
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
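The Wiener index mentioned in the closing problem is the sum of shortest-path distances over all unordered vertex pairs. A direct computation for small trees, via breadth-first search from every vertex (adjacency-list input; the example trees are illustrative):

```python
from collections import deque

def wiener_index(adj):
    """Sum of pairwise shortest-path distances in an unweighted graph."""
    total = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total // 2  # each unordered pair was counted twice

# Path a-b-c-d: distances 1+2+3+1+2+1 = 10
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
# Star with center c0 and 3 leaves: 3*1 + 3*2 = 9
star = {"c0": ["l1", "l2", "l3"], "l1": ["c0"], "l2": ["c0"], "l3": ["c0"]}
print(wiener_index(path), wiener_index(star))  # 10 9
```

The chemical-graph-theory problem the paper settles asks for the tree maximizing this quantity over all trees with a prescribed degree sequence; the snippet only evaluates the objective, it does not search over trees.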
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of the set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary-related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used, or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used... We analyze the approximation algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
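A toy illustration (not the paper's analysis) of the two algorithms named in the abstract: First-Fit places each item in the first bin with room, and the two variants differ only in the processing order. In the maximum resource setting, MORE bins is better, and on the made-up instance below increasing order opens more bins than decreasing order.

```python
def first_fit(items, capacity=1.0):
    """First-Fit: put each item in the first open bin it fits in."""
    bins = []
    for x in items:
        for b in bins:
            if sum(b) + x <= capacity + 1e-9:
                b.append(x)
                break
        else:
            bins.append([x])  # no bin fits: open a new one
    return bins

items = [0.6, 0.5, 0.4, 0.3, 0.2]
ffi = first_fit(sorted(items))                # First-Fit-Increasing
ffd = first_fit(sorted(items, reverse=True))  # First-Fit-Decreasing
print(len(ffi), len(ffd))  # 3 2
```

Small items processed first "use up" bins less efficiently, which is desirable when the objective is to maximize the number of bins; the paper's contribution is the worst-case approximation analysis of exactly this kind of behavior.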
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program, as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
Prior exercise and antioxidant supplementation: effect on oxidative stress and muscle injury
Schilling Brian K
2007-10-01
Background: Both acute bouts of prior exercise (preconditioning) and antioxidant nutrients have been used in an attempt to attenuate muscle injury or oxidative stress in response to resistance exercise. However, most studies have focused on untrained participants rather than on athletes. The purpose of this work was to determine the independent and combined effects of antioxidant supplementation (vitamin C + mixed tocopherols/tocotrienols) and prior eccentric exercise in attenuating markers of skeletal muscle injury and oxidative stress in resistance trained men. Methods: Thirty-six men were randomly assigned to: no prior exercise + placebo; no prior exercise + antioxidant; prior exercise + placebo; prior exercise + antioxidant. Markers of muscle/cell injury (muscle performance, muscle soreness, C-reactive protein, and creatine kinase activity), as well as oxidative stress (blood protein carbonyls and peroxides), were measured before and through 48 hours of exercise recovery. Results: No group-by-time interactions were noted for any variable (P > 0.05). Time main effects were noted for creatine kinase activity, muscle soreness, maximal isometric force and peak velocity. Conclusion: There appears to be no independent or combined effect of a prior bout of eccentric exercise or antioxidant supplementation as used here on markers of muscle injury in resistance trained men. Moreover, eccentric exercise as used in the present study results in minimal blood oxidative stress in resistance trained men. Hence, antioxidant supplementation for the purpose of minimizing blood oxidative stress in relation to eccentric exercise appears unnecessary in this population.
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic, and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper, buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling out of the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is then given, and the MPPT effectiveness of the various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
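A common way to track such a moving MPP (not the specific analysis of this paper) is the perturb-and-observe loop: nudge the converter duty ratio, keep the direction if output power rose, reverse it if power fell. The quadratic power curve below is a made-up stand-in for a real panel-plus-converter model, chosen only so the example is self-contained.

```python
def pv_power(duty):
    """Toy PV power curve (watts) with a single maximum at duty = 0.55."""
    return max(0.0, 100.0 - 400.0 * (duty - 0.55) ** 2)

def perturb_and_observe(duty=0.3, step=0.01, iters=200):
    """Hill-climb the duty ratio toward the maximum power point."""
    power = pv_power(duty)
    direction = 1
    for _ in range(iters):
        duty_new = min(max(duty + direction * step, 0.0), 1.0)
        power_new = pv_power(duty_new)
        if power_new < power:
            direction = -direction  # overshot the peak: reverse perturbation
        duty, power = duty_new, power_new
    return duty, power

d, p = perturb_and_observe()
print(round(d, 2), round(p, 1))  # oscillates within one step of the MPP
```

Real trackers must also cope with changing insolation (which moves the peak) and choose the step size as a trade-off between tracking speed and steady-state oscillation.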
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G, respectively. Polyhedral graphs are constructed that attain these bounds.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number. Gra
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper, Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
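The closing estimate is easy to sanity-check numerically. Using rough textbook values (all in GeV, my own choices, not numbers from the paper): T_BBN of order 1 MeV, the non-reduced Planck mass, and y_e = sqrt(2) m_e / v_h evaluated at the observed Higgs expectation value:

```python
import math

T_BBN = 1.0e-3   # GeV, rough onset temperature of Big Bang nucleosynthesis
M_pl = 1.22e19   # GeV, (non-reduced) Planck mass
m_e = 0.511e-3   # GeV, electron mass
v_obs = 246.0    # GeV, observed Higgs expectation value

y_e = math.sqrt(2) * m_e / v_obs      # electron Yukawa coupling, ~3e-6

v_h = T_BBN**2 / (M_pl * y_e**5)      # the abstract's order-of-magnitude formula
print(round(v_h))  # a few hundred GeV, consistent with O(300 GeV)
```

The result lands at a few hundred GeV, i.e. the right order of magnitude for the weak scale, which is all the "roughly speaking" relation claims.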
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of maximally 6 weeks, three video recordings were made of five subjects' maximum phonation time trials. A panel of five experts were responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
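The gain in reliability from averaging more trials or days follows the classical Spearman-Brown prophecy formula, R_k = k r / (1 + (k - 1) r). Plugging the single-measurement coefficients reported above into it approximately reproduces the multi-trial and multi-day values (my own check, not a computation from the paper):

```python
def spearman_brown(r, k):
    """Reliability of the average of k parallel measurements of reliability r."""
    return k * r / (1 + (k - 1) * r)

print(round(spearman_brown(0.939, 5), 3))  # 0.987: five trials on one day
print(round(spearman_brown(0.836, 2), 3))  # 0.911: two days
print(round(spearman_brown(0.836, 3), 3))  # ~0.94 vs the reported 0.935
```

The close agreement suggests the reported trial-to-trial and day-to-day coefficients behave like parallel measurements in the classical test-theory sense.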
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. ... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
[Registration of prehabilitation prior to surgery].
Tønnesen, Hanne; Duus, Benn R
2008-04-21
Four to eight week prehabilitation programs for smokers and harmful drinkers were included in the national guidelines in 2001. In October 2007 a guarantee for surgery within one month of waiting time came into effect in Denmark. The present Danish patient administration system already contains room for registration of prehabilitation prior to surgery. Using one specific code for prehabilitation at the surgical department and another for prehabilitation at other departments will enable correct registration. Thereby, it is possible to differentiate between ordinary waiting time before surgery and time for prehabilitation.
Hernandez, Rafael; Onar-Thomas, Arzu; Travascio, Francesco; Asfour, Shihab
2017-04-14
Laparoscopic training with visual force feedback can lead to immediate improvements in force moderation. However, the long-term retention of this kind of learning and its potential decay are yet unclear. A laparoscopic resection task and force sensing apparatus were designed to assess the benefits of visual force feedback training. Twenty-two male university students with no previous experience in laparoscopy underwent relevant FLS proficiency training. Participants were randomly assigned to either a control or treatment group. Both groups trained on the task for 2 weeks as follows: initial baseline, sixteen training trials, and post-test immediately after. The treatment group had visual force feedback during training, whereas the control group did not. Participants then performed four weekly test trials to assess long-term retention of training. Outcomes recorded were maximum pulling and pushing forces, completion time, and rated task difficulty. Extreme maximum pulling force values were tapered throughout both the training and retention periods. Average maximum pushing forces were significantly lowered towards the end of training and during retention period. No significant decay of applied force learning was found during the 4-week retention period. Completion time and rated task difficulty were higher during training, but results indicate that the difference eventually fades during the retention period. Significant differences in aptitude across participants were found. Visual force feedback training improves on certain aspects of force moderation in a laparoscopic resection task. Results suggest that with enough training there is no significant decay of learning within the first month of the retention period. It is essential to account for differences in aptitude between individuals in this type of longitudinal research. This study shows how an inexpensive force measuring system can be used with an FLS Trainer System after some retrofitting. Surgical
Crimi, Alessandro; Lillholm, Martin; Nielsen, Mads
2011-01-01
..., and may lead to unreliable results. In this paper, we discuss regularization by prior knowledge using maximum a posteriori (MAP) estimates. We compare ML to MAP using a number of priors and to Tikhonov regularization. We evaluate the covariance estimates on both synthetic and real data, and we analyze the estimates' influence on a missing-data reconstruction task, where high resolution vertebra and cartilage models are reconstructed from incomplete and lower dimensional representations. Our results demonstrate that our methods outperform the traditional ML method and Tikhonov regularization. ...
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
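The objective described above, Gaussian maximum likelihood with an added l_1-norm penalty, can be written down directly. The sketch below (my toy construction, not the authors' block coordinate descent or Nesterov-based algorithms) evaluates that penalized objective and shows it favoring a sparse precision matrix over a dense perturbation:

```python
import numpy as np

def penalized_loglik(theta, S, lam):
    """l1-penalized Gaussian log-likelihood in the precision matrix theta:
    log det(theta) - trace(S @ theta) - lam * ||theta||_1 (off-diagonal only)."""
    sign, logdet = np.linalg.slogdet(theta)
    assert sign > 0, "theta must be positive definite"
    l1_off = np.abs(theta).sum() - np.abs(np.diag(theta)).sum()
    return logdet - np.trace(S @ theta) - lam * l1_off

rng = np.random.default_rng(0)
# Sparse ground-truth precision (tridiagonal), its implied covariance,
# and an empirical covariance S from samples of that model.
theta_true = np.eye(5) + 0.4 * (np.eye(5, k=1) + np.eye(5, k=-1))
cov = np.linalg.inv(theta_true)
X = rng.multivariate_normal(np.zeros(5), cov, size=2000)
S = np.cov(X.T)

# Under the penalty, the sparse truth scores higher than a dense perturbation.
theta_dense = theta_true + 0.2 * np.ones((5, 5))
print(penalized_loglik(theta_true, S, lam=0.5) >
      penalized_loglik(theta_dense, S, lam=0.5))  # True
```

Solving for the maximizer of this objective at scale is exactly the hard part the paper's two algorithms address.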
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
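Of the processes named above, the Ornstein-Uhlenbeck process is the simplest to simulate exactly. The sketch below (illustrative parameter values of my choosing, not from the paper) uses the exact one-step transition and checks the stationary variance against its closed form:

```python
import numpy as np

# Exact discretization of the Ornstein-Uhlenbeck process
# dx = -theta * x dt + sigma dW: over a step dt,
# x' = x * exp(-theta*dt) + N(0, sigma^2/(2*theta) * (1 - exp(-2*theta*dt))).
theta, sigma, dt, n = 1.5, 0.8, 0.1, 50_000
rng = np.random.default_rng(42)

decay = np.exp(-theta * dt)
step_var = sigma**2 / (2 * theta) * (1 - decay**2)
x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(step_var), n - 1)
for i in range(1, n):
    x[i] = x[i - 1] * decay + noise[i - 1]

# The sample variance should be close to the stationary value
# sigma^2 / (2*theta) = 0.2133...
print(x.var())
```

The fluctuation-dissipation constraint mentioned in the abstract is visible here: the noise variance per step is tied to the damping rate `theta` so that the stationary distribution comes out right.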
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. ... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of ... on the item sizes, for some integer k.
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
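The single-constraint claim above can be checked numerically. The sketch below (my construction, not the paper's) finds the power-law exponent whose distribution matches a target mean of log k, then verifies that any perturbation preserving both normalization and that constraint strictly lowers the Shannon entropy:

```python
import numpy as np

N = 50
k = np.arange(1, N + 1, dtype=float)

def powerlaw(lam):
    w = k ** (-lam)
    return w / w.sum()

def entropy(q):
    return -(q * np.log(q)).sum()

# Bisection: the mean of log k under p ∝ k^(-lam) is decreasing in lam.
target = 1.2
lo, hi = 0.01, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if (powerlaw(mid) * np.log(k)).sum() > target:
        lo = mid
    else:
        hi = mid
p = powerlaw(0.5 * (lo + hi))

# Perturb p along a direction orthogonal to (1,...,1) and (log k), so the
# perturbed q satisfies the same two constraints; its entropy must be lower.
rng = np.random.default_rng(1)
A = np.vstack([np.ones(N), np.log(k)])
r = rng.normal(size=N)
v = r - A.T @ np.linalg.solve(A @ A.T, A @ r)   # project onto constraint null space
q = p + 0.3 * p.min() * v / np.abs(v).max()     # small enough to keep q > 0
print(entropy(p) > entropy(q))  # True: the power law is the max-entropy solution
```

This is the exponential-family argument in miniature: fixing the expectation of log k makes k^(-lam) the unique entropy maximizer.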
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: 1- surface temperature and pressure compatible with the existence of liquid water, and 2- no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot both be met: in the super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios and taking into account irradiation effects on the structure of the gas envelope.
An Integrated Modeling Framework for Probable Maximum Precipitation and Flood
Gangrade, S.; Rastogi, D.; Kao, S. C.; Ashfaq, M.; Naz, B. S.; Kabela, E.; Anantharaj, V. G.; Singh, N.; Preston, B. L.; Mei, R.
2015-12-01
With the increasing frequency and magnitude of extreme precipitation and flood events projected in the future climate, there is a strong need to enhance our modeling capabilities to assess the potential risks on critical energy-water infrastructures such as major dams and nuclear power plants. In this study, an integrated modeling framework is developed through high performance computing to investigate the climate change effects on probable maximum precipitation (PMP) and probable maximum flood (PMF). Multiple historical storms from 1981-2012 over the Alabama-Coosa-Tallapoosa River Basin near the Atlanta metropolitan area are simulated by the Weather Research and Forecasting (WRF) model using the Climate Forecast System Reanalysis (CFSR) forcings. After further WRF model tuning, these storms are used to simulate PMP through moisture maximization at initial and lateral boundaries. A high resolution hydrological model, Distributed Hydrology-Soil-Vegetation Model, implemented at 90m resolution and calibrated by the U.S. Geological Survey streamflow observations, is then used to simulate the corresponding PMF. In addition to the control simulation that is driven by CFSR, multiple storms from the Community Climate System Model version 4 under the Representative Concentrations Pathway 8.5 emission scenario are used to simulate PMP and PMF in the projected future climate conditions. The multiple PMF scenarios developed through this integrated modeling framework may be utilized to evaluate the vulnerability of existing energy-water infrastructures with various aspects associated with PMP and PMF.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),\ldots,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
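The defining formula is easy to compute directly from the adjacency spectrum. A minimal sketch (my example graph, not from the paper):

```python
import math
import numpy as np

def estrada_index(adj):
    """EE(G) = sum_i exp(lambda_i), over the eigenvalues of the adjacency matrix."""
    return float(np.exp(np.linalg.eigvalsh(adj)).sum())

# Triangle K3: adjacency eigenvalues are 2, -1, -1, so EE = e^2 + 2/e.
K3 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]], dtype=float)
print(abs(estrada_index(K3) - (math.exp(2) + 2 / math.e)) < 1e-9)  # True
```

Equivalently, EE(G) is the trace of the matrix exponential of the adjacency matrix, which is why it weights closed walks by inverse factorial length.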
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Tensors, BICEP2, prior dependence, and dust
Cortês, Marina; Parkinson, David
2014-01-01
We investigate the prior dependence on the inferred spectrum of primordial tensor perturbations, in light of recent results from BICEP2 and taking into account a possible dust contribution to polarized anisotropies. We highlight an optimized parameterization of the tensor power spectrum, and adoption of a logarithmic prior on its amplitude $A_T$, leading to results that transform more evenly under change of pivot scale. In the absence of foregrounds the tension between the results of BICEP2 and Planck drives the tensor spectral index $n_T$ to be blue-tilted in a joint analysis, which would be in contradiction to the standard inflation prediction ($n_T<0$). When foregrounds are accounted for, the BICEP2 results no longer require non-standard inflationary parameter regions. We present limits on primordial $A_T$ and $n_T$, adopting foreground scenarios put forward by Mortonson & Seljak and motivated by Planck 353 GHz observations, and assess what dust contribution leaves a detectable cosmological signal. ...
How prior expectations shape multisensory perception.
Gau, Remi; Noppeney, Uta
2016-01-01
The brain generates a representation of our environment by integrating signals from a common source, but segregating signals from different sources. This fMRI study investigated how the brain arbitrates between perceptual integration and segregation based on top-down congruency expectations and bottom-up stimulus-bound congruency cues. Participants were presented audiovisual movies of phonologically congruent, incongruent or McGurk syllables that can be integrated into an illusory percept (e.g. "ti" percept for visual «ki» with auditory /pi/). They reported the syllable they perceived. Critically, we manipulated participants' top-down congruency expectations by presenting McGurk stimuli embedded in blocks of congruent or incongruent syllables. Behaviorally, participants were more likely to fuse audiovisual signals into an illusory McGurk percept in congruent than incongruent contexts. At the neural level, the left inferior frontal sulcus (lIFS) showed increased activations for bottom-up incongruent relative to congruent inputs. Moreover, lIFS activations were increased for physically identical McGurk stimuli, when participants segregated the audiovisual signals and reported their auditory percept. Critically, this activation increase for perceptual segregation was amplified when participants expected audiovisually incongruent signals based on prior sensory experience. Collectively, our results demonstrate that the lIFS combines top-down prior (in)congruency expectations with bottom-up (in)congruency cues to arbitrate between multisensory integration and segregation.
Depth image enhancement using perceptual texture priors
Bang, Duhyeon; Shim, Hyunjung
2015-03-01
A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to limited power consumption, the depth camera suffers from severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress the depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perceptual depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database composed of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature for classifying textures. Based on the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect our work to be effective in enhancing the detail of depth images from 3D sensors and in providing a high-fidelity virtual reality experience.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
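In the superposition model mentioned above, a nucleus of mass A and energy E is treated as A independent nucleons of energy E/A, which shifts the mean depth of shower maximum down by the elongation rate times log10(A). The sketch below uses illustrative values I have assumed (not the paper's parameters):

```python
import math

def mean_xmax(xmax_proton: float, elongation_rate: float, mass_a: float) -> float:
    """Superposition model: a nucleus of mass A at energy E behaves like A
    nucleons at E/A, so <Xmax>_A = <Xmax>_p - D * log10(A), with D the
    elongation rate per decade of energy (all in g/cm^2)."""
    return xmax_proton - elongation_rate * math.log10(mass_a)

# Assumed illustrative values: <Xmax>_p = 750 g/cm^2, D = 60 g/cm^2 per decade.
xp, d = 750.0, 60.0
print(mean_xmax(xp, d, 1))             # proton: 750.0
print(round(mean_xmax(xp, d, 56), 1))  # iron: 645.1, shifted down by 60*log10(56)
```

This is why the first moment of the Xmax distribution carries composition information: heavier primaries produce systematically shallower showers at fixed energy.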
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains, which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (Y_X/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: X_max − X_0 = (0.59 ± 0.02)·Y_X/P·C.
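The reported relation is simple enough to apply directly. A minimal sketch, using the fitted coefficient 0.59 from the abstract but purely hypothetical input values for the initial biomass, yield, and MIC:

```python
def predict_max_biomass(x0, y_xp, mic_lactate, k=0.59):
    """Predict maximum biomass of a homofermentative Lactobacillus culture
    from the abstract's empirical relation:
        X_max - X_0 = k * Y_X/P * C
    where Y_X/P is the biomass yield per unit of lactate produced and C is
    the MIC of lactate; k = 0.59 +/- 0.02 is the reported coefficient."""
    return x0 + k * y_xp * mic_lactate

# hypothetical illustrative values (g/L, g/g, mmol/L), not from the paper
print(predict_max_biomass(x0=0.1, y_xp=0.05, mic_lactate=200.0))
```

The units of the result follow whatever units are chosen for the inputs; the paper itself works in concentration terms.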
The maximum rate of mammal evolution.
Evans, Alistair R; Jones, David; Boyer, Alison G; Brown, James H; Costa, Daniel P; Ernest, S K Morgan; Fitzgerald, Erich M G; Fortelius, Mikael; Gittleman, John L; Hamilton, Marcus J; Harding, Larisa E; Lintulaakso, Kari; Lyons, S Kathleen; Okie, Jordan G; Saarinen, Juha J; Sibly, Richard M; Smith, Felisa A; Stephens, Patrick R; Theodor, Jessica M; Uhen, Mark D
2012-03-13
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were roughly half as long (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Stability of Nonlinear Force-Free Magnetic Fields
胡友秋
2001-01-01
Based on the magnetohydrodynamic energy principle, it is proved that Gold-Hoyle's nonlinear force-free magnetic field is unstable. This disproves the sufficient criterion for stability of nonlinear force-free magnetic fields given by Krüger, that a nonlinear force-free field is stable if the maximum absolute value of the force-free factor is smaller than the lowest eigenvalue associated with the domain of interest.
Modeling Mediterranean ocean climate of the Last Glacial Maximum
U. Mikolajewicz
2010-10-01
A regional ocean general circulation model of the Mediterranean is used to study the climate of the last glacial maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions nontrivial. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of the salinity in the Mediterranean in spite of reduced net evaporation.
Modeling Mediterranean Ocean climate of the Last Glacial Maximum
U. Mikolajewicz
2011-03-01
A regional ocean general circulation model of the Mediterranean is used to study the climate of the Last Glacial Maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions complicated. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of salinity in the Mediterranean in spite of reduced net evaporation.
A cutting force model for micromilling applications
Bissacco, Giuliano; Hansen, Hans Nørgaard; De Chiffre, Leonardo
2006-01-01
In micro milling the maximum uncut chip thickness is often smaller than the cutting edge radius. This paper introduces a new cutting force model for ball nose micro milling that is capable of taking into account the effect of the edge radius.
Optimal Tuning of Amplitude Proportional Coulomb Friction Damper for Maximum Cable Damping
Weber, Felix; Høgsberg, Jan Becker; Krenk, Steen
2010-01-01
This paper investigates numerically the optimal tuning of Coulomb friction dampers on cables, where the optimality criterion is maximum additional damping in the first vibration mode. The expression for the optimal friction force level of Coulomb friction dampers follows from the linear viscous damper via harmonic averaging. It turns out that the friction force level has to be adjusted in proportion to cable amplitude at the damper position, which is realized by amplitude feedback in real time. The performance of this adaptive damper is assessed by simulated free decay curves from which the damping is estimated. It is found that the damping efficiency agrees well with the expected value at the theoretical optimum. However, maximum damping is larger and achieved at a force to amplitude ratio of 1.4 times the analytical value. Investigations show that the increased damping results from energy spillover...
Variance components in discrete force production tasks.
Varadhan, S K M; Zatsiorsky, Vladimir M; Latash, Mark L
2010-09-01
The study addresses the relationships between task parameters and two components of variance, "good" and "bad", during multi-finger accurate force production. The variance components are defined in the space of commands to the fingers (finger modes) and refer to variance that does ("bad") and does not ("good") affect total force. Based on an earlier study of cyclic force production, we hypothesized that speeding-up an accurate force production task would be accompanied by a drop in the regression coefficient linking the "bad" variance and force rate such that variance of the total force remains largely unaffected. We also explored changes in parameters of anticipatory synergy adjustments with speeding-up the task. The subjects produced accurate ramps of total force over different times and in different directions (force-up and force-down) while pressing with the four fingers of the right hand on individual force sensors. The two variance components were quantified, and their normalized difference was used as an index of a total force stabilizing synergy. "Good" variance scaled linearly with force magnitude and did not depend on force rate. "Bad" variance scaled linearly with force rate within each task, and the scaling coefficient did not change across tasks with different ramp times. As a result, a drop in force ramp time was associated with an increase in total force variance, unlike the results of the study of cyclic tasks. The synergy index dropped 100-200 ms prior to the first visible signs of force change. The timing and magnitude of these anticipatory synergy adjustments did not depend on the ramp time. Analysis of the data within an earlier model has shown adjustments in the variance of a timing parameter, although these adjustments were not as pronounced as in the earlier study of cyclic force production. Overall, we observed qualitative differences between the discrete and cyclic force production tasks: Speeding-up the cyclic tasks was associated with
Bayesian optimal experimental design for priors of compact support
Long, Quan
2016-01-08
In this study, we optimize the experimental setup computationally by optimal experimental design (OED) in a Bayesian framework. We approximate the posterior probability density functions (pdf) using truncated Gaussian distributions in order to account for the bounded domain of the uniform prior pdf of the parameters. The underlying Gaussian distribution is obtained in the spirit of the Laplace method, more precisely, the mode is chosen as the maximum a posteriori (MAP) estimate, and the covariance is chosen as the negative inverse of the Hessian of the misfit function at the MAP estimate. The model-related entities are obtained from a polynomial surrogate. The optimality, quantified by the information gain measures, can be estimated efficiently by a rejection sampling algorithm against the underlying Gaussian probability distribution, rather than against the true posterior. This approach offers a significant error reduction when the magnitudes of the invariants of the posterior covariance are comparable to the size of the bounded domain of the prior. We demonstrate the accuracy and superior computational efficiency of our method for shock-tube experiments aiming to measure the model parameters of a key reaction which is part of the complex kinetic network describing the hydrocarbon oxidation. In the experiments, the initial temperature and fuel concentration are optimized with respect to the expected information gain in the estimation of the parameters of the target reaction rate. We show that the expected information gain surface can change its shape dramatically according to the level of noise introduced into the synthetic data. The information that can be extracted from the data saturates as a logarithmic function of the number of experiments, and few experiments are needed when they are conducted at the optimal experimental design conditions.
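The Laplace-plus-truncation construction described here can be sketched on a one-dimensional toy problem. The bounds, data value, and noise level below are invented for illustration (the paper's actual setting is a chemical kinetics model with a polynomial surrogate):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical 1-D toy problem: uniform prior on [a, b], Gaussian likelihood.
a, b = 0.0, 1.0          # support of the uniform prior
data, sigma = 0.8, 0.3   # synthetic observation and noise level

def neg_log_post(theta):
    # misfit (negative log-likelihood); the prior is flat on [a, b]
    return 0.5 * ((data - theta) / sigma) ** 2

# Laplace method: mode = MAP estimate, covariance = inverse Hessian of misfit
res = minimize_scalar(neg_log_post, bounds=(a, b), method="bounded")
map_est = res.x
hess = 1.0 / sigma**2            # analytic Hessian of the quadratic misfit
lap_std = np.sqrt(1.0 / hess)

# Rejection sampling against the truncated Gaussian surrogate:
# draw from the untruncated Gaussian, keep samples inside the prior support.
rng = np.random.default_rng(0)
draws = rng.normal(map_est, lap_std, size=10000)
samples = draws[(draws >= a) & (draws <= b)]
print(samples.mean())
```

The surviving samples represent the truncated-Gaussian posterior surrogate; in the paper these samples feed the information-gain estimate rather than a simple mean.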
Clarke, Jennifer G.; Stein, L. A. R.; Martin, Rosemarie A.; Martin, Stephen A.; Parker, Donna; Lopes, Cheryl E.; McGovern, Arthur R.; Simon, Rachel; Roberts, Mary; Friedman, Peter; Bock, Beth
2015-01-01
Importance Millions of Americans are forced to quit smoking as they enter tobacco-free prisons and jails, but most return to smoking within days of release. Interventions are needed to sustain tobacco abstinence after release from incarceration. Objective To evaluate the extent to which the WISE intervention (Working Inside for Smoking Elimination), based on motivational interviewing (MI) and cognitive behavioral therapy (CBT), decreases relapse to smoking after release from a smoke-free prison. Design Participants were recruited approximately 8 weeks prior to their release from a smoke-free prison and randomized to 6 weekly sessions of either education videos (control) or the WISE intervention. Setting A tobacco-free prison in the United States. Participants A total of 262 inmates (35% female). Main Outcome Measure Continued smoking abstinence was defined as 7-day point-prevalence abstinence validated by urine cotinine measurement. Results At the 3-week follow-up, 25% of participants in the WISE intervention (31 of 122) and 7% of the control participants (9 of 125) continued to be tobacco abstinent (odds ratio [OR], 4.4; 95% CI, 2.0-9.7). In addition to the intervention, Hispanic ethnicity, a plan to remain abstinent, and being incarcerated for more than 6 months were all associated with increased likelihood of remaining abstinent. In the logistic regression analysis, participants randomized to the WISE intervention were 6.6 times more likely to remain tobacco abstinent at the 3-week follow-up than those randomized to the control condition (95% CI, 2.5-17.0). Nonsmokers at the 3-week follow-up had an additional follow-up 3 months after release, and overall 12% of the participants in the WISE intervention (14 of 122) and 2% of the control participants (3 of 125) were tobacco free at 3 months, as confirmed by urine cotinine measurement (OR, 5.3; 95% CI, 1.4-23.8). Conclusions and Relevance Forced tobacco abstinence alone during incarceration has little impact on
Forces acting in quasi 2d emulsions
Orellana, Carlos; Lowensohn, Janna; Weeks, Eric
We study the forces in a quasi two dimensional emulsion system. Our samples are oil-in-water emulsions confined between two closely spaced parallel plates, so that the oil droplets are deformed into pancake shapes. By means of microscopy, we measure the droplet positions and their deformation, which we can relate to the contact forces due to surface tension. We improve over prior work in our lab, achieving a better force resolution. We use this result to measure and calibrate the viscous forces acting in our system, which fully determine all the forces on the droplets. Our results can be applied to study static configurations of emulsions, as well as faster flows.
Controlling police (excessive) force: The American case
Zakir Gül
2013-09-01
This article addresses the issue of police abuse of power, particularly police use of excessive force. Since the misuse of force by police is considered a problem, some entity must discover a way to control and prevent the illegal use of coercive power. Unlike most of the previous studies on the use of excessive force, this study uses a path analysis. However, not all the findings are consistent with the prior studies and hypotheses. In general, findings indicate that training may be a useful tool in terms of decreasing the use of excessive force, thereby reducing civilians’ injuries and citizens’ complaints. The results show that ethics training in the academy is significantly related to the use of excessive force. Further, it was found that community-oriented policing training in the academy was associated with the citizens’ complaints. National (secondary) data, collected from law enforcement agencies in the United States, are used to explore the research questions.
Birth-death prior on phylogeny and speed dating
Sennblad Bengt
2008-03-01
Background In recent years there has been a trend of leaving the strict molecular clock in order to infer dating of speciations and other evolutionary events. Explicit modeling of substitution rates and divergence times makes formulation of informative prior distributions for branch lengths possible. Models with birth-death priors on tree branching and auto-correlated or iid substitution rates among lineages have been proposed, enabling simultaneous inference of substitution rates and divergence times. This problem has, however, mainly been analysed in the Markov chain Monte Carlo (MCMC) framework, an approach requiring computation times of hours or days when applied to large phylogenies. Results We demonstrate that a hill-climbing maximum a posteriori (MAP) adaptation of the MCMC scheme results in considerable gain in computational efficiency. We demonstrate also that a novel dynamic programming (DP) algorithm for branch length factorization, useful both in the hill-climbing and in the MCMC setting, further reduces computation time. For the problem of inferring rates and times parameters on a fixed tree, we perform simulations, comparisons between hill-climbing and MCMC on a plant rbcL gene dataset, and dating analysis on an animal mtDNA dataset, showing that our methodology enables efficient, highly accurate analysis of very large trees. Datasets requiring a computation time of several days with MCMC can with our MAP algorithm be accurately analysed in less than a minute. From the results of our example analyses, we conclude that our methodology generally avoids getting trapped early in local optima. For the cases where this nevertheless can be a problem, for instance when we in addition to the parameters also infer the tree topology, we show that the problem can be evaded by using a simulated-annealing-like (SAL) method in which we favour tree swaps early in the inference while biasing our focus towards rate and time parameter changes.
A Clustering Method Based on the Maximum Entropy Principle
Edwin Aldana-Bobadilla
2015-01-01
Clustering is an unsupervised process to determine which unlabeled objects in a set share interesting properties. The objects are grouped into k subsets (clusters) whose elements optimize a proximity measure. Methods based on information theory have proven to be feasible alternatives. They are based on the assumption that a cluster is one subset with the minimal possible degree of “disorder”. They attempt to minimize the entropy of each cluster. We propose a clustering method based on the maximum entropy principle. Such a method explores the space of all possible probability distributions of the data to find one that maximizes the entropy subject to extra conditions based on prior information about the clusters. The prior information is based on the assumption that the elements of a cluster are “similar” to each other in accordance with some statistical measure. As a consequence of such a principle, those distributions of high entropy that satisfy the conditions are favored over others. Searching the space to find the optimal distribution of objects in the clusters represents a hard combinatorial problem, which disallows the use of traditional optimization techniques. Genetic algorithms are a good alternative to solve this problem. We benchmark our method relative to the best theoretical performance, which is given by the Bayes classifier when data are normally distributed, and a multilayer perceptron network, which offers the best practical performance when data are not normal. In general, a supervised classification method will outperform a non-supervised one, since, in the first case, the elements of the classes are known a priori. In what follows, we show that our method’s effectiveness is comparable to a supervised one. This clearly exhibits the superiority of our method.
Effects of prior stimulus and prior perception on neural correlates of auditory stream segregation.
Snyder, Joel S; Holder, W Trent; Weintraub, David M; Carter, Olivia L; Alain, Claude
2009-11-01
We examined whether effects of prior experience are mediated by distinct brain processes from those processing current stimulus features. We recorded event-related potentials (ERPs) during an auditory stream segregation task that presented an adaptation sequence with a small, intermediate, or large frequency separation between low and high tones (Δf), followed by a test sequence with intermediate Δf. Perception of two streams during the test was facilitated by small prior Δf and by prior perception of two streams and was accompanied by more positive ERPs. The scalp topography of these perception-related changes in ERPs was different from that observed for ERP modulations due to increasing the current Δf. These results reveal complex interactions between stimulus-driven activity and temporal-context-based processes and suggest a complex set of brain areas involved in modulating perception based on current and previous experience.
Improving semantic scene understanding using prior information
Laddha, Ankit; Hebert, Martial
2016-05-01
Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state of the art scene understanding approaches.
Theoretical Priors On Modified Growth Parametrisations
Song, Yong-Seon; Caldera-Cabral, Gabriela; Koyama, Kazuya
2010-01-01
Next generation surveys will observe the large-scale structure of the Universe with unprecedented accuracy. This will enable us to test the relationships between matter over-densities, the curvature perturbation and the Newtonian potential. Any large-distance modification of gravity or exotic nature of dark energy modifies these relationships as compared to those predicted in the standard smooth dark energy model based on General Relativity. In linear theory of structure growth such modifications are often parameterised by two functions of space and time that enter the relation of the curvature perturbation to, first, the matter over-density, and second, the Newtonian potential. We investigate the predictions for these functions in Brans-Dicke theory, clustering dark energy models and interacting dark energy models. We find that each theory has a distinct path in the parameter space of modified growth. Understanding these theoretical priors on the parameterisations of modified growth is essential to...
Care for women with prior preterm birth
Iams, Jay D.; Berghella, Vincenzo
2013-01-01
Women who have delivered an infant between 16 and 36 weeks’ gestation have an increased risk of preterm birth in subsequent pregnancies. The risk increases with more than 1 preterm birth and is inversely proportional to the gestational age of the previous preterm birth. African American women have rates of recurrent preterm birth that are nearly twice that of women of other backgrounds. An approximate risk of recurrent preterm birth can be estimated by a comprehensive reproductive history, with emphasis on maternal race, the number and gestational age of prior births, and the sequence of events preceding the index preterm birth. Interventions including smoking cessation, eradication of asymptomatic bacteriuria, progestational agents, and cervical cerclage can reduce the risk of recurrent preterm birth when employed appropriately. PMID:20417491
Sparse Multivariate Modeling: Priors and Applications
Henao, Ricardo
This thesis presents a collection of statistical models that attempt to take advantage of every piece of prior knowledge available to provide the models with as much structure as possible. The main motivation for introducing these models is interpretability since in practice we want to be able to use them as hypothesis generating tools. All of our models start from a family of structures, for instance factor models, directed acyclic graphs, classifiers, etc. Then we let them be selectively sparse as a way to provide them with structural flexibility and interpretability. Finally, we complement ... modeling, a model for peptide-protein/protein-protein interactions called latent protein tree, a framework for sparse Gaussian process classification based on active set selection and a linear multi-category sparse classifier specially targeted to gene expression data. The thesis is organized to provide ...
Care for women with prior preterm birth.
Iams, Jay D; Berghella, Vincenzo
2010-08-01
Women who have delivered an infant between 16 and 36 weeks' gestation have an increased risk of preterm birth in subsequent pregnancies. The risk increases with more than 1 preterm birth and is inversely proportional to the gestational age of the previous preterm birth. African American women have rates of recurrent preterm birth that are nearly twice that of women of other backgrounds. An approximate risk of recurrent preterm birth can be estimated by a comprehensive reproductive history, with emphasis on maternal race, the number and gestational age of prior births, and the sequence of events preceding the index preterm birth. Interventions including smoking cessation, eradication of asymptomatic bacteriuria, progestational agents, and cervical cerclage can reduce the risk of recurrent preterm birth when employed appropriately. Copyright (c) 2010 Mosby, Inc. All rights reserved.
Early medical abortion without prior ultrasound.
Raymond, Elizabeth G; Bracken, Hillary
2015-09-01
To explore the potential for using last menstrual period (LMP) rather than ultrasound to establish gestational age (GA) eligibility for medical abortion. We used the results of a recently published systematic review to identify studies with data on the number of abortion patients with GA more than 63 or 70 days by ultrasound but less than those or other specific limits by LMP. We analyzed data from these studies to estimate the proportion of women with GAs greater than 63 or 70 days by ultrasound in various subgroups of women defined by LMP. We found three studies with relevant data. One enrolled 4257 medical abortion patients of whom 4% had GAs of >70 days by ultrasound. Of the 2681 who were certain that their LMPs began no more than 56 days prior, only 16 (0.6%) were >70 days by ultrasound. In a second much smaller study of surgical abortion patients, of whom 19% were >70 days by ultrasound, 90 women were certain that their LMPs started more than 56 days prior, and of those, 7 (7.8%) had GAs of >70 days by ultrasound. In the third study, which included surgical abortion patients with a mean GA of 61 days, at least 12% of the 138 patients with LMPs 70 days by ultrasound. The possibility that access to medical abortion can be enhanced for selected women by omitting the requirement for a screening ultrasound is promising and should be further investigated. Gestational dating using LMP rather than ultrasound may be reasonable for selected patients before medical abortion. Copyright © 2015 Elsevier Inc. All rights reserved.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
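As a rough illustration of the final decision rule only (not the patented estimator-correlator, which also estimates the random phase perturbations), a correlator that scores each hypothesized phase-coded signal against the received samples and picks the maximum could look like this; the signals and noise level are invented:

```python
import numpy as np

def map_decode(received, hypotheses):
    """Toy correlator sketch: pick the hypothesized phase-coded signal whose
    correlation with the received samples is largest. With equal priors and
    equal signal energies, the MAP decision reduces to maximum correlation.
    (This omits the patent's random-phase estimation step.)"""
    scores = [np.real(np.vdot(h, received)) for h in hypotheses]
    return int(np.argmax(scores))

# two hypothetical binary phase codes as unit-modulus complex sequences
h0 = np.exp(1j * np.pi * np.array([0, 0, 1, 1, 0]))
h1 = np.exp(1j * np.pi * np.array([0, 1, 0, 1, 1]))
rng = np.random.default_rng(2)
rx = h1 + 0.2 * (rng.standard_normal(5) + 1j * rng.standard_normal(5))
print(map_decode(rx, [h0, h1]))  # → 1
```

When phase perturbations are present, the patent's approach replaces the fixed hypothesis phases with MAP phase estimates before correlating.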
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). By using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Ship squat is a combined effect of a ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among those most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to see the differences between them and which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics....... This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated for thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
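The rank-size statement above is easy to check numerically. The sketch below illustrates only the stated law with a synthetic Pareto sample (seed and sample size are arbitrary choices), not the paper's Gibrat growth-balance mechanism:

```python
import numpy as np

# For firm sizes drawn from a Pareto distribution with unit tail exponent,
# the number of firms with size greater than S falls off as 1/S, so
# counts * S / N should be roughly constant (near 1).
rng = np.random.default_rng(42)
sizes = 1.0 / (1.0 - rng.random(200_000))   # Pareto(1) sample on [1, inf)

S = np.array([2.0, 4.0, 8.0, 16.0])
counts = np.array([(sizes > s).sum() for s in S])
ratio = counts * S / len(sizes)             # roughly constant near 1
```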
Active contour segmentation using level set function with enhanced image from prior intensity.
Kim, Sunhee; Kim, Youngjun; Lee, Deukhee; Park, Sehyung
2015-01-01
This paper presents a new active contour segmentation model using a level set function that can correctly capture both the strong and the weak boundaries of a target enclosed by bright and dark regions at the same time. We introduce an enhanced image obtained from prior information about the intensity of the target. The enhanced image emphasizes the regions where pixels have intensities close to the prior intensity. This enables a desirable segmentation of an image having a partially low contrast with the target surrounded by regions that are brighter or darker than the target. We define an edge indicator function on an original image, and local and regularization forces on an enhanced image. An edge indicator function and two forces are incorporated in order to identify the strong and weak boundaries, respectively. We established an evolution equation of contours in the level set formulation and experimented with several medical images to show the performance of the proposed method.
Beyond maximum entropy: Fractal Pixon-based image reconstruction
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
Shape Modelling Using Maximum Autocorrelation Factors
Larsen, Rasmus
2001-01-01
This paper addresses the problems of generating a low dimensional representation of the shape variation present in a training set after alignment using Procrustes analysis and projection into shape tangent space. We will extend the use of principal components analysis in the original formulation of Active Shape Models by Timothy Cootes and Christopher Taylor by building new information into the model. This new information consists of two types of prior knowledge. First, in many situations we will be given an ordering of the shapes of the training set. This situation occurs when the shapes of the training set are in reality a time series, e.g. snapshots of a beating heart during the cardiac cycle, or when the shapes are slices of a 3D structure, e.g. the spinal cord. Second, in almost all applications a natural order of the landmark points along the contour of the shape is introduced...
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T⁻¹(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10⁻³ to 5 × 10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
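The per-event likelihood idea described here can be sketched in miniature. The example below is a toy with just two processes and made-up Gaussian "energy" PDFs (the means, widths, and fraction are illustrative assumptions, nothing from the actual PEN analysis): each event contributes a mixture of per-process probabilities, and the mixture fraction is found by maximizing the total likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Generate a hypothetical mixed sample: 30% "process A" events (energy
# peaked at 70) and 70% "process B" events (broad peak at 30).
rng = np.random.default_rng(1)
true_frac = 0.3
n = 20_000
is_a = rng.random(n) < true_frac
energy = np.where(is_a, rng.normal(70.0, 3.0, n), rng.normal(30.0, 8.0, n))

def pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def neg_log_like(f):
    # each event contributes f * p_A(E) + (1 - f) * p_B(E) to the likelihood
    p = f * pdf(energy, 70.0, 3.0) + (1 - f) * pdf(energy, 30.0, 8.0)
    return -np.sum(np.log(p))

# Maximize the likelihood over the process-A fraction f.
res = minimize_scalar(neg_log_like, bounds=(1e-4, 1 - 1e-4), method="bounded")
f_hat = res.x
```

The real analysis assigns probabilities for five processes over several observables, but the structure of the likelihood is the same.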
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calculated from the lake polygon...
1982-09-01
defined.) Prior knowledge of the approximate frequency band allows for pretest selection of the bandpass filter passband, which in turn permits... [nomenclature fragments:] covariance of the parameter estimate corrected for nonwhite innovations; e, natural logarithm base, e = 2.71828; Fi, generalized force scaled by generalized...; process noise and measurement noise respectively; σc, standard deviation of damping parameter estimate; ω, frequency, rad/sec; ωn, undamped natural frequency
Stefanescu, Dan Mihai
2011-01-01
Part I introduces the basic "Principles and Methods of Force Measurement" according to a classification into a dozen force transducer types: resistive, inductive, capacitive, piezoelectric, electromagnetic, electrodynamic, magnetoelastic, galvanomagnetic (Hall-effect), vibrating wires, (micro)resonators, acoustic and gyroscopic. Two special chapters refer to force balance techniques and to combined methods in force measurement. Part II discusses the "(Strain Gauge) Force Transducers Components", evolving from the classical force transducer to the digital / intelligent one, with the inco...
Arzura Idris
2012-01-01
This paper analyzes the phenomenon of “forced migration” in Malaysia. It examines the nature of forced migration, the challenges faced by Malaysia, the policy responses and their impact on the country and upon the forced migrants. It considers forced migration as an event hosting multifaceted issues related and relevant to forced migrants and suggests that Malaysia has been preoccupied with the issue of forced migration movements. This is largely seen in various responses invoked from Malaysi...
Improved Maximum Entropy Method with an Extended Search Space
Rothkopf, Alexander
2012-01-01
We report on an improvement to the implementation of the Maximum Entropy Method (MEM). It amounts to departing from the search space obtained through a singular value decomposition (SVD) of the kernel. Based on the shape of the SVD basis functions we argue that the MEM spectrum for given N_τ data points D(τ) and prior information m(ω) does not in general lie in this N_τ-dimensional singular subspace. Systematically extending the search basis will eventually recover the full search space and the correct extremum. We illustrate this idea through a mock data analysis inspired by actual lattice spectra, to show where our improvement becomes essential for the success of the MEM. To remedy the shortcomings of Bryan's SVD prescription we propose to use the real Fourier basis, which consists of trigonometric functions. Not only does our approach lead to more stable numerical behavior, as the SVD is not required for the determination of the basis functions, but also the resolution of the MEM beco...
Prior expectations facilitate metacognition for perceptual decision.
Sherman, M T; Seth, A K; Barrett, A B; Kanai, R
2015-09-01
The influential framework of 'predictive processing' suggests that prior probabilistic expectations influence, or even constitute, perceptual contents. This notion is evidenced by the facilitation of low-level perceptual processing by expectations. However, whether expectations can facilitate high-level components of perception remains unclear. We addressed this question by considering the influence of expectations on perceptual metacognition. To isolate the effects of expectation from those of attention we used a novel factorial design: expectation was manipulated by changing the probability that a Gabor target would be presented; attention was manipulated by instructing participants to perform or ignore a concurrent visual search task. We found that, independently of attention, metacognition improved when yes/no responses were congruent with expectations of target presence/absence. Results were modeled under a novel Bayesian signal detection theoretic framework which integrates bottom-up signal propagation with top-down influences, to provide a unified description of the mechanisms underlying perceptual decision and metacognition. Copyright © 2015 Elsevier Inc. All rights reserved.
Seismicity prior to the 2016 Kumamoto earthquakes
Nanjo, K Z; Orihara, Y; Furuse, N; Togo, S; Nitta, H; Okada, T; Tanaka, R; Kamogawa, M; Nagao, T
2016-01-01
The 2016 Kumamoto earthquakes occurred under circumstances in which seismicity has remained high in all parts of Japan since the 2011 Tohoku-Oki earthquake. Identifying what happened before this incident is one starting point for promoting earthquake forecast research to prepare for subsequent large earthquakes in the near future in Japan. Here we report precursory seismic patterns prior to the Kumamoto earthquakes, measured by four different methods based on seismicity changes that can be used for earthquake forecasting: the b-value method, two kinds of seismic quiescence evaluation methods, and a method of detailed foreshock evaluation. The spatial extent of the precursory patterns differs from one method to the other and ranges from local scales (typically asperity size) to regional scales (e.g., 2° × 3° around the source zone). The earthquakes are preceded by periods of pronounced anomalies, which lasted from decadal scales (e.g., 20 years or longer) to yearly scales (e.g., 1-2 years). We demonstrate that combination of...
Cathodic ARC surface cleaning prior to brazing
Dave, V. R. (Vivek R.); Hollis, K. J. (Kendall J.); Castro, R. G. (Richard G.); Smith, F. M. (Frank M.); Javernick, D. A. (Daniel A.)
2002-01-01
Surface cleanliness is one of the critical process variables in vacuum furnace brazing operations. For a large number of metallic components, cleaning is usually accomplished by water-based alkali cleaning, but may also involve acid etching or solvent cleaning / rinsing. Nickel plating may also be necessary to ensure proper wetting. All of these cleaning or plating technologies have associated waste disposal issues, and this article explores an alternative cleaning process that generates minimal waste. Cathodic arc, or reverse polarity, is well known for welding of materials with tenacious oxide layers such as aluminum alloys. In this work the reverse polarity effect is used to clean austenitic stainless steel substrates prior to brazing with Ag-28%Cu. This cleaning process is compared to acid pickling and is shown to produce similar wetting behavior as measured by dynamic contact angle experiments. Additionally, dynamic contact angle measurements with water drops are conducted to show that cathodic arc cleaning can remove organic contaminants as well. The process does have its limitations, however, and alloys with high titanium and aluminum content such as nickel-based superalloys may still require plating to ensure adequate wetting.
Human papilloma virus infection prior to coitarche.
Doerfler, Daniela; Bernhaus, Astrid; Kottmel, Andrea; Sam, Christine; Koelle, Dieter; Joura, Elmar A
2009-05-01
The aim of our study was to determine the prevalence and the natural course of anogenital human papilloma virus (HPV) infections in girls prior to coitarche attending an outpatient gynecological unit. Specimens were taken from the anogenital region of 114 unselected 4-15-year-old girls who were referred consecutively for various gynecological problems. Four girls were excluded because of sexual abuse. Low-risk HPV-deoxyribonucleic acid (DNA) was detected in 4 girls (3.6%) and high-risk HPV DNA in 15 children (13.6%). Two girls testing positive for HPV DNA had clinically apparent warts. After 1 year, 2 children had persistent high-risk HPV DNA, and in 1 case we found a switch from high-risk to low-risk HPV DNA. Subclinical genital low- and high-risk HPV infections are common in girls without any history of sexual abuse or sexual activity. We found persistence of genital HPV infection in children, which could be a reservoir for HPV-associated diseases later in life.
Maximum entropy principle and texture formation
Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state (market equilibrium) is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial)
Jeremy Gibbons
2011-09-01
The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
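Both the cubic-time specification and the linear-time solution mentioned in the abstract are short enough to state directly. Here is a Python sketch (the linear version is Kadane's algorithm, rather than the paper's datatype-generic monadic development), with the empty segment contributing sum 0:

```python
def mss_spec(xs):
    # Cubic-time specification: maximum over all contiguous segments,
    # including the empty one (whose sum is 0).
    return max((sum(xs[i:j]) for i in range(len(xs) + 1)
                for j in range(i, len(xs) + 1)), default=0)

def mss_linear(xs):
    # Linear-time solution (Kadane's algorithm).
    best = here = 0
    for x in xs:
        here = max(0, here + x)   # best segment ending at this element
        best = max(best, here)
    return best

xs = [31, -41, 59, 26, -53, 58, 97, -93, -23, 84]
assert mss_spec(xs) == mss_linear(xs) == 187   # segment 59+26-53+58+97
```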
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ_0, μ_1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ_0, ℓ_1)}. The optimal control switches between the extreme values μ_0 and μ_1 across the curve X_t = g_*(S_t), where s ↦ g_*(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations) in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally-imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system, whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Thus, research on the synchronization phenomenon is key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network, which exhibits certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating behaviors of large economic systems.
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_μν and a curvature tensor P^λ_μνρ. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest from a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the training models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
Burgess, C P
1925-01-01
In this report it is shown that, in determining the instantaneous angle of pitch or yaw, the acceleration of the gust is as important as its maximum velocity. Hitherto it has been assumed that the conditions encountered in gusts could be approximately represented by considering the airship to be at an instantaneous angle of yaw or pitch (according to whether the gust is horizontal or vertical), the instantaneous angle being tan⁻¹(v/V), where v is the component of the velocity of the gust at right angles to the longitudinal axis of the ship and V is the speed of the ship. An expression is derived for this instantaneous angle in terms of the speed and certain aerodynamic characteristics of the airship, and of the maximum velocity and the acceleration of the gust; the application of the expression to the determination of the forces on the ship is illustrated by numerical examples.
Multi-digit maximum voluntary torque production on a circular object
SHIM, JAE KUN; HUANG, JUNFENG; HOOKE, ALEXANDER W.; LATASH, MARK L.; ZATSIORSKY, VLADIMIR M.
2010-01-01
Individual digit-tip forces and moments during torque production on a mechanically fixed circular object were studied. During the experiments, subjects positioned each digit on a 6-dimensional force/moment sensor attached to a circular handle and produced a maximum voluntary torque on the handle. The torque direction and the orientation of the torque axis were varied. From this study, it is concluded that: (1) the maximum torque in the closing (clockwise) direction was larger than in the opening (counterclockwise) direction; (2) the thumb and little finger had the largest and the smallest share, respectively, of both total normal force and total moment; (3) the sharing of total moment between individual digits was not affected by the orientation of the torque axis or by the torque direction, while the sharing of total normal force among the individual digits varied with torque direction; (4) the normal force safety margins were largest and smallest in the thumb and little finger, respectively. PMID:17454086
Scaling relations for galaxies prior to reionization
Chen, Pengfei; Norman, Michael L.; Xu, Hao [CASS, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093 (United States); Wise, John H. [Center for Relativistic Astrophysics, School of Physics, Georgia Institute of Technology, 837 State Street, Atlanta, GA 30332 (United States); O' Shea, Brian W., E-mail: pec008@ucsd.edu, E-mail: mlnorman@ucsd.edu, E-mail: hxu@ucsd.edu, E-mail: jwise@gatech.edu, E-mail: oshea@msu.edu [Lyman Briggs College and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States)
2014-11-10
The first galaxies in the universe are the building blocks of all observed galaxies. We present scaling relations for galaxies forming at redshifts z ≥ 15 when reionization is just beginning. We utilize the 'Rarepeak' cosmological radiation hydrodynamics simulation that captures the complete star formation history in over 3300 galaxies, starting with massive Population III stars that form in dark matter halos as small as ∼10{sup 6} M {sub ☉}. We present correlations between bulk halo quantities, such as virial, gas, and stellar masses and metallicities and their respective accretion rates, quantifying a variety of properties of the first galaxies up to halo masses of 10{sup 9} M {sub ☉}. Galaxy formation is not restricted to atomic cooling halos with virial temperatures greater than 10{sup 4} K, and we find a dichotomy in galaxy properties between halos above and below this critical mass scale. Halos below the atomic cooling limit have a stellar mass-halo mass relationship log M {sub *} ≅ 3.5 + 1.3 log (M {sub vir}/10{sup 7} M {sub ☉}). We find a non-monotonic relationship between metallicity and halo mass for the smallest galaxies. Their initial star formation events enrich the interstellar medium and subsequent star formation to a median of 10{sup –2} Z {sub ☉} and 10{sup –1.5} Z {sub ☉}, respectively, in halos of total mass 10{sup 7} M {sub ☉}, which is then diluted by metal-poor inflows well beyond Population III pre-enrichment levels of 10{sup –3.5} Z {sub ☉}. The scaling relations presented here can be employed in models of reionization, galaxy formation, and chemical evolution that consider galaxies forming prior to reionization.
Directional interactions between current and prior saccades
Stephanie Anne Holland Jones
2014-10-01
One way to explore how prior sensory and motor events affect eye movements is to ask someone to look to targets located about a central point, returning gaze to the central point after each eye movement. Concerned about the contribution of this return-to-centre movement, Anderson et al. (2008) used a sequential saccade paradigm in which participants made a continuous series of saccades to peripheral targets that appeared to the left or right of the currently fixated location in a random sequence (the next eye movement began from the last target location). Examining the effects of previous saccades (n-x) on current saccade latency (n), they found that saccadic reaction times (RT) were reduced when the direction of the current saccade matched that of a preceding saccade (e.g. two left saccades), even when the two saccades in question were separated by multiple saccades in any direction. We examined whether this pattern extends to conditions in which targets appear inside continuously marked locations that provide stable visual features (i.e. target 'placeholders') and when saccades are prompted by central arrows. Participants completed 3 conditions: peripheral targets (PT; continuous, sequential saccades to peripherally presented targets without placeholders); PT with placeholders; and centrally presented arrows (CA; left- or right-pointing arrows at the currently fixated location instructing participants to saccade to the left or right). We found reduced saccadic RT when the immediately preceding saccade (n-1) was in the same (vs. opposite) direction in the PT-without-placeholders and CA conditions. This effect varied when considering the effect of the previous 2-5 (n-x) saccades on current saccade latency (n). The effects of previous eye movements on current saccade latency may be determined by multiple, time-varying mechanisms related to sensory (i.e., retinotopic location), motor (i.e., saccade direction), and environmental (i.e., persistent visual objects) factors.
Jensen, Randall L; Ebben, William P
2007-08-01
Because the intensity of plyometric exercises usually is based on anecdotal recommendations rather than empirical evidence, this study sought to quantify a variety of these exercises based on the forces placed upon the knee. Six National Collegiate Athletic Association Division I athletes who routinely trained with plyometric exercises performed depth jumps from 46 and 61 cm, a pike jump, tuck jump, single-leg jump, countermovement jump, squat jump, and a squat jump holding dumbbells equal to 30% of 1 repetition maximum (RM). Ground reaction forces obtained via an AMTI force plate and video analysis of markers placed on the left hip, knee, lateral malleolus, and fifth metatarsal were used to estimate rate of eccentric force development (E-RFD), peak ground reaction forces (GRF), ground reaction forces relative to body weight (GRF/BW), knee joint reaction forces (K-JRF), and knee joint reaction forces relative to body weight (K-JRF/BW) for each plyometric exercise. One-way repeated measures analysis of variance indicated that E-RFD, K-JRF, and K-JRF/BW differed across the conditions (p < 0.05). Results indicate that there are quantitative differences between plyometric exercises in the rate of force development during landing and the forces placed on the knee, though peak GRF associated with landing may not differ.
Unilateral arm strength training improves contralateral peak force and rate of force development.
Adamson, Michael; Macquaide, Niall; Helgerud, Jan; Hoff, Jan; Kemi, Ole Johan
2008-07-01
Neural adaptation following maximal strength training improves the ability to rapidly develop force. Unilateral strength training also leads to contralateral strength improvement, due to cross-over effects. However, adaptations in the rate of force development and peak force in the contralateral untrained arm after one-arm training have not been determined. Therefore, we aimed to detect contralateral effects of unilateral maximal strength training on rate of force development and peak force. Ten adult females enrolled in a 2-month strength training program focusing on maximal mobilization of force against near-maximal load in one arm, by attempting to move the given load as fast as possible. The other arm remained untrained. The training program did not induce any observable hypertrophy in either arm, as measured by anthropometry. Nevertheless, rate of force development improved in the trained arm during contractions against both submaximal and maximal loads by 40-60%. The untrained arm also improved rate of force development by the same magnitude. Peak force improved only during a maximal isometric contraction, by 37% in the trained arm and 35% in the untrained arm. One repetition maximum improved by 79% in the trained arm and 9% in the untrained arm. Therefore, one-arm maximal strength training focusing on maximal mobilization of force increased rapid force development and one-repetition-maximum strength in the contralateral untrained arm. This suggests an increased central drive that also crosses over to the contralateral side.
Tail paradox, partial identifiability, and influential priors in Bayesian branch length inference.
Rannala, Bruce; Zhu, Tianqi; Yang, Ziheng
2012-01-01
Recent studies have observed that Bayesian analyses of sequence data sets using the program MrBayes sometimes generate extremely large branch lengths, with posterior credibility intervals for the tree length (sum of branch lengths) excluding the maximum likelihood estimates. Suggested explanations for this phenomenon include the existence of multiple local peaks in the posterior, lack of convergence of the chain in the tail of the posterior, mixing problems, and misspecified priors on branch lengths. Here, we analyze the behavior of Bayesian Markov chain Monte Carlo algorithms when the chain is in the tail of the posterior distribution and note that all these phenomena can occur. In Bayesian phylogenetics, the likelihood function approaches a constant instead of zero when the branch lengths increase to infinity. The flat tail of the likelihood can cause poor mixing and undue influence of the prior. We suggest that the main cause of the extreme branch length estimates produced in many Bayesian analyses is the poor choice of a default prior on branch lengths in current Bayesian phylogenetic programs. The default prior in MrBayes assigns independent and identical distributions to branch lengths, imposing strong (and unreasonable) assumptions about the tree length. The problem is exacerbated by the strong correlation between the branch lengths and parameters in models of variable rates among sites or among site partitions. To resolve the problem, we suggest two multivariate priors for the branch lengths (called compound Dirichlet priors) that are fairly diffuse and demonstrate their utility in the special case of branch length estimation on a star phylogeny. Our analysis highlights the need for careful thought in the specification of high-dimensional priors in Bayesian analyses.
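A compound Dirichlet branch-length prior of the kind suggested above can be sampled by drawing a tree length and partitioning it among branches. The gamma and Dirichlet hyperparameters below are illustrative placeholders, not the paper's recommended values.

```python
import numpy as np

def sample_compound_dirichlet(n_branches, a=1.0, b=0.1, alpha=1.0, seed=0):
    """Draw branch lengths from a compound Dirichlet prior: tree length
    T ~ Gamma(shape=a, scale=1/b), branch proportions ~ Dirichlet(alpha, ...),
    branch lengths = T * proportions. Unlike i.i.d. exponential priors on
    each branch, this places a single diffuse prior on the tree length."""
    rng = np.random.default_rng(seed)
    T = rng.gamma(a, 1.0 / b)                    # total tree length
    props = rng.dirichlet([alpha] * n_branches)  # how T is split among branches
    return T * props
```

The key design point, per the abstract, is that the prior constrains the sum of branch lengths directly rather than letting it grow with the number of branches.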
Cudlip, Alan C; Fischer, Steven L; Wells, Richard; Dickerson, Clark R
2013-06-01
This study examined the influence of frequency and direction of force application on psychophysically acceptable forces for simulated work tasks. Fifteen male participants exerted psychophysically acceptable forces on a force transducer at 1, 3, or 5 repetitions per minute by performing both a downward press and a pull toward the body. These exertions were shown previously to be strength and balance limited, respectively. Workers chose acceptable forces at a lower percentage of their maximum voluntary force capacity during downward (strength-limited) exertions than during pulling (balance-limited) exertions at all frequencies (4% to 11%, P = .035). Frequency modulated acceptable hand force only during downward exertions, where forces at five repetitions per minute were 13% less (P = .005) than those at one exertion per minute. This study provides insight into the relationship between biomechanically limiting factors and the selection of acceptable forces for unilateral manual tasks.
Holtermann, Andreas; Roeleveld, Karin; Vereijken, Beatrix; Ettema, Gertjan
2007-04-01
The force generated during a maximal voluntary contraction (MVC) is known to increase with resistance training. Although this increase cannot be attributed solely to changes in the muscle itself, many studies examining muscle activation at peak force have failed to detect neural adaptations with resistance training. However, the activation prior to peak force can affect maximal force generation. This study investigates the role of the rate of force development (RFD) in maximal force during resistance training. Fourteen subjects carried out 5 days of isometric resistance training with dorsiflexion of the ankle with the instruction to generate maximal force. In a second experiment, 18 subjects performed the same task with the verbal instruction to generate maximal force (instruction I) and to generate force as fast and forcefully as possible (instruction II). The main findings were that RFD increased twice as much as the 16% increase in maximal force with training, with a positive association between RFD and force within the last session of training and between training sessions. Instruction II generated a higher RFD than instruction I, with no difference in maximal force. These findings suggest that the positive association between RFD and maximal force is not causal but is mediated by a third factor. In the discussion, we argue that this third factor is either physiological changes affecting both aspects of an MVC, or different processes affecting RFD and maximal force separately, rather than a voluntary strategic change of both aspects of the MVC.
Superlens induced loss-insensitive optical force
Cui, Xiaohan; Chan, C T
2016-01-01
A slab with relative permittivity $\epsilon = -1 + i\delta$ and permeability $\mu = -1 + i\delta$ has a critical distance away from the slab at which a small particle will either be cloaked or imaged, depending on whether it is located inside or outside that critical distance. We find that the optical force acting on a small cylinder under plane wave illumination reaches a maximum value at this critical distance. Contrary to the usual observation that superlens systems should be highly loss-sensitive, this maximum optical force remains constant when the loss is varied within a certain range. For a fixed particle-slab distance, increasing the loss can even amplify the optical force acting on the small cylinder, contrary to the usual belief that loss compromises the response of a superlens.
Reducing complexity of inverse problems using geostatistical priors
Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou
a posterior sample, can be reduced significantly using informed priors based on geostatistical models. We discuss two approaches to include such geostatistically based prior information. One is based on a parametric description of the prior likelihood that applies to 2-point based statistical models...
He, Xin; Cheng, Lishui; Fessler, Jeffrey A; Frey, Eric C
2011-06-01
In simultaneous dual-isotope myocardial perfusion SPECT (MPS) imaging, data are simultaneously acquired to determine the distributions of two radioactive isotopes. The goal of this work was to develop penalized maximum likelihood (PML) algorithms for a novel cross-tracer prior that exploits the fact that the two images reconstructed from simultaneous dual-isotope MPS projection data are perfectly registered in space. We first formulated the simultaneous dual-isotope MPS reconstruction problem as a joint estimation problem. A cross-tracer prior that couples voxel values on both images was then proposed. We developed an iterative algorithm to reconstruct the MPS images that converges to the maximum a posteriori solution for this prior based on separable surrogate functions. To accelerate the convergence, we developed a fast algorithm for the cross-tracer prior based on the complete data OS-EM (COSEM) framework. The proposed algorithm was compared qualitatively and quantitatively to a single-tracer version of the prior that did not include the cross-tracer term. Quantitative evaluations included comparisons of mean and standard deviation images as well as assessment of image fidelity using the mean square error. We also evaluated the cross tracer prior using a three-class observer study with respect to the three-class MPS diagnostic task, i.e., classifying patients as having either no defect, reversible defect, or fixed defects. For this study, a comparison with conventional ordered subsets-expectation maximization (OS-EM) reconstruction with postfiltering was performed. The comparisons to the single-tracer prior demonstrated similar resolution for areas of the image with large intensity changes and reduced noise in uniform regions. The cross-tracer prior was also superior to the single-tracer version in terms of restoring image fidelity. Results of the three-class observer study showed that the proposed cross-tracer prior and the convergent algorithms improved the
Modelling the maximum voluntary joint torque/angular velocity relationship in human movement.
Yeadon, Maurice R; King, Mark A; Wilson, Cassie
2006-01-01
The force exerted by a muscle is a function of the activation level and the maximum (tetanic) muscle force. In "maximum" voluntary knee extensions muscle activation is lower for eccentric muscle velocities than for concentric velocities. The aim of this study was to model this "differential activation" in order to calculate the maximum voluntary knee extensor torque as a function of knee angular velocity. Torque data were collected on two subjects during maximal eccentric-concentric knee extensions using an isovelocity dynamometer with crank angular velocities ranging from 50 to 450 deg·s⁻¹. The theoretical tetanic torque/angular velocity relationship was modelled using a four-parameter function comprising two rectangular hyperbolas, while the activation/angular velocity relationship was modelled using a three-parameter function that rose from submaximal activation for eccentric velocities to full activation for high concentric velocities. The product of these two functions gave a seven-parameter function which was fitted to the joint torque/angular velocity data, giving unbiased root mean square differences of 1.9% and 3.3% of the maximum torques achieved. Differential activation accounts for the non-hyperbolic behaviour of the torque/angular velocity data for low concentric velocities. The maximum voluntary knee extensor torque that can be exerted may be modelled accurately as the product of functions defining the maximum torque and the maximum voluntary activation level. Failure to include differential activation considerations when modelling maximal movements will lead to errors in the estimation of joint torque in the eccentric phase and low velocity concentric phase.
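The product structure of the seven-parameter model can be sketched as below. All parameter values (torque plateaus, corner velocities, activation floor) are invented for illustration; they are not the fitted values from the paper.

```python
import numpy as np

def tetanic_torque(w, T0=250.0, Tmax=350.0, wmax=1000.0, wc=300.0):
    """Illustrative four-parameter tetanic torque/angular velocity curve:
    a Hill-type hyperbola for concentric velocities (w > 0) that falls to
    zero at wmax, joined at w = 0 (torque T0) to a second hyperbola that
    saturates at Tmax for fast eccentric (negative) velocities."""
    w = np.asarray(w, dtype=float)
    conc = T0 * (wmax - w) / (wmax + 4.0 * w)       # concentric branch
    ecc = Tmax - (Tmax - T0) * wc / (wc - w)        # eccentric branch
    return np.where(w >= 0.0, conc, ecc)

def activation(w, a_min=0.7, a_max=1.0, w0=0.0, k=100.0):
    """Illustrative three-parameter differential-activation curve: submaximal
    for eccentric velocities, rising to full activation for fast concentric
    velocities."""
    w = np.asarray(w, dtype=float)
    return a_min + (a_max - a_min) / (1.0 + np.exp(-(w - w0) / k))

def voluntary_torque(w):
    """Seven-parameter model: product of tetanic torque and activation."""
    return activation(w) * tetanic_torque(w)
```

Fitting `voluntary_torque` (with the parameters free) to measured torque/velocity data is then a standard nonlinear least-squares problem.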
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 Employees' Benefits, Pt. 1 (2010-04-01): CREDITABLE RAILROAD COMPENSATION, § 211.14 Maximum creditable compensation. ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 Transportation, Pt. 4 (2010-10-01): Allowable Stress, § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...
Maximum speeds and alpha angles of flowing avalanches
McClung, David; Gauer, Peter
2016-04-01
A flowing avalanche is one which initiates as a slab and, if consisting of dry snow, becomes enveloped in a turbulent snow dust cloud once the speed reaches about 10 m/s. A flowing avalanche has a dense core of flowing material which dominates the dynamics by serving as the driving force for downslope motion. The flow thickness is typically on the order of 1-10 m, which is on the order of about 1% of the length of the flowing mass. We have collected estimates of maximum frontal speed u_m (m/s) from 118 avalanche events. The analysis is given here with the aim of scaling the maximum speed with some measure of the terrain scale over which the avalanches ran. We have chosen two measures for scaling, from McClung (1990), McClung and Schaerer (2006) and Gauer (2012): √H_0 and √S_0 (total vertical drop; total path length traversed). Our data consist of 118 avalanches with H_0 (m) estimated and 106 with S_0 (m) estimated. Of these, we have 29 values with H_0 (m), S_0 (m) and u_m (m/s) estimated accurately, with the avalanche speeds measured all or nearly all along the path. The remainder of the data set includes approximate estimates of u_m (m/s) from timing the avalanche motion over a known section of the path where the approximate maximum speed is expected, and with either H_0 or S_0 or both estimated. Our analysis consists of fitting the values of u_m/√H_0 and u_m/√S_0 to probability density functions (pdf) to estimate the exceedance probability for the scaled ratios. In general, we found that the larger data sets were best fit by a beta pdf, while for the subset of 29 a shifted log-logistic (s l-l) pdf was best. These determinations resulted from fitting the values to 60 different pdfs under five goodness-of-fit criteria: three goodness-of-fit statistics, K-S (Kolmogorov-Smirnov), A-D (Anderson-Darling) and C-S (Chi-squared), plus probability plots (P-P) and quantile plots (Q-Q). For less than 10% probability of exceedance the results show that
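The exceedance analysis of scaled speed ratios can be illustrated with a method-of-moments beta fit and an empirical exceedance estimate. This is a sketch with synthetic ratios, not the authors' 60-pdf goodness-of-fit comparison.

```python
import numpy as np

def beta_mom(x):
    """Method-of-moments fit of a beta pdf to samples in (0, 1):
    recover (alpha, beta) from the sample mean m and variance v."""
    m, v = x.mean(), x.var()
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

def exceedance(x, q):
    """Empirical exceedance probability P(X > q) from the sampled ratios."""
    return float(np.mean(x > q))
```

Given real ratios u_m/√H_0 (rescaled into (0, 1)), the fitted beta pdf would supply the smooth exceedance curve that the empirical estimate approximates.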
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show, with the help of simple examples of well-known chemical and physical systems, that they are in error. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction, important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" is used to mean the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
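MALCOM's continuity maps are not spelled out in the abstract, but the likelihood-based anomaly scoring it performs on categorical sequences can be illustrated with a much simpler stand-in: a smoothed bigram model that flags sequences with unusually low log-likelihood. The names and the add-one smoothing choice here are assumptions, not MALCOM itself.

```python
import math
from collections import Counter

def train_bigram(sequences, vocab):
    """Train a bigram (first-order Markov) model with add-one smoothing on
    categorical sequences (e.g. codes for medical procedures) and return a
    scorer giving the log-likelihood of a new sequence under the model."""
    counts, totals = Counter(), Counter()
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
            totals[a] += 1
    V = len(vocab)

    def log_likelihood(seq):
        # Lower scores indicate more anomalous transition patterns.
        return sum(math.log((counts[a, b] + 1) / (totals[a] + V))
                   for a, b in zip(seq, seq[1:]))

    return log_likelihood
```

In an anomaly-detection setting, histories whose score falls far below those of the training population would be forwarded for review.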
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data both from space- and ground-based observatories often obviates the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed point equation is derived allowing an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program with an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory and choose the analysis of the Ne IX triplet around 13.5 Å.
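The fixed-point flux estimate mentioned above can be illustrated for a single line amplitude A in the model m_i = b_i + A·g_i under Poisson statistics. Setting the derivative of the Poisson log-likelihood to zero gives Σ n_i g_i/m_i = Σ g_i, which suggests the multiplicative iteration below. This is a sketch in the spirit of CORA; its actual fixed-point equation may differ in detail.

```python
import numpy as np

def poisson_line_flux(counts, profile, background, n_iter=200):
    """Maximum likelihood amplitude A for the model m_i = background_i +
    A * profile_i under Poisson noise, via the fixed-point iteration
    A <- A * sum(n_i g_i / m_i) / sum(g_i), which leaves A unchanged
    exactly when the likelihood gradient vanishes."""
    # Start from the background-subtracted total as a rough guess.
    A = max(counts.sum() - background.sum(), 1.0) / profile.sum()
    for _ in range(n_iter):
        m = background + A * profile
        A *= np.sum(counts * profile / m) / profile.sum()
    return A
```

With a known line profile g (e.g. the instrumental response) and a background estimate b, the iteration returns the ML line flux without any explicit optimizer.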
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a pair of run-length/AC coefficient level, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first one is given as an upper bound for the sum of squares of AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits is sufficient to buffer the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single-nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a recently introduced group-theoretic approach to modelling inversions. This MLE functions as a corrected distance: in particular, we show that, because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second part tackles the problem of accounting for the symmetries of circular arrangements. Whereas computations are generally made with respect to a fixed frame of reference, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered.
Digital communication constraints in prior space missions
Yassine, Nathan K.
2004-01-01
Digital communication is crucial for space endeavors. It transmits scientific and command data between earth stations and the spacecraft crew, facilitates communications between astronauts, and provides live coverage during all phases of the mission. Digital communications provide ground stations and the spacecraft crew precise data on the spacecraft's position throughout the entire mission. Lessons learned from prior space missions are valuable for the new lunar and Mars missions set out in the president's speech. These data will save the agency time and money and help set the course for currently developing technologies. Limitations on digital communications equipment pertaining to mass, volume, data rate, frequency, antenna type and size, modulation, format, and power in past space missions are of particular interest. This activity is in support of ongoing communication architecture studies pertaining to robotic and human lunar exploration. The design capabilities and functionalities will depend on the space and power allocated for digital communication equipment. My contribution will be gathering these data, writing a report, and presenting it to the Communications Technology Division staff. Antenna design is studied very carefully for each mission scenario. Currently, phased-array antennas are being developed for the lunar mission. Phased-array antennas use little power and steer a beam electronically rather than with DC motors. There are 615 patches in the phased-array antenna. These patches have to be modified to achieve high yield; 50 patches were created for testing. My part is to assist in the characterization of these patch antennas, and to determine whether certain modifications to quartz micro-strip patch radiators result in a yield significant enough to warrant proceeding with repairs to the prototype 19 GHz ferroelectric reflect-array antenna. This work requires learning how to calibrate an automatic network, and mounting and testing antennas in coaxial fixtures. The purpose of this
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Knudsen forces on microcantilevers
Passian, A.; Wig, A.; Meriaudeau, F.; Ferrell, T. L.; Thundat, T.
2002-11-01
When two surfaces at different temperatures are separated by a distance comparable to the mean free path of the molecules of the ambient medium, the surfaces experience a Knudsen force. This mechanical force can be important in microelectromechanical systems and in atomic force microscopy. The magnitude of the forces and the conditions under which they can be encountered are discussed theoretically, along with a potential application of the Knudsen force in designing a cantilever-based vacuum gauge.
Instrument for measuring human biting force
Kopola, Harri K.; Mantyla, Olavi; Makiniemi, Matti; Mahonen, Kalevi; Virtanen, Kauko
1995-02-01
Alongside EMG activity, biting force is the primary parameter used for assessing the biting problems of dentulous patients and patients with dentures. In a highly conductive oral cavity, dielectric measurement methods are preferred, for safety reasons. The maximum biting force for patients with removable dentures is not more than 100–300 N. We report here on an instrument developed for measuring human biting force which consists of three units: a mouthpiece, a signal processing and interface unit (SPI), and a PC. The mouthpiece comprises a sensor head of thickness 3.4 mm, width 20 mm and length 30 mm constructed of two stainless steel plates and with a fiber optic microbending sensor between them. This is connected to the SPI unit by a three-meter fiber optic cable, and the SPI unit to the PC by an RS connection. A computer program has been developed that includes measurement, display, zeroing, and calibration operations. The instrument measures biting force as a function of time and displays the time-dependent force profile and maximum force on a screen or plots it in hard copy. The dynamic measurement range of the mouthpiece is from 0 to 1000 N, and the resolution of the instrument is 10 N. The results of preliminary clinical measurements and repeatability tests are reported.
The Last Glacial Maximum experiment in PMIP4-CMIP6
Kageyama, Masa; Braconnot, Pascale; Abe-Ouchi, Ayako; Harrison, Sandy; Lambert, Fabrice; Peltier, W. Richard; Tarasov, Lev
2016-04-01
The Last Glacial Maximum (LGM), around 21,000 years ago, is a cold climate extreme. As such, it has been the focus of many modelling and climate-reconstruction studies, which have brought knowledge of the mechanisms explaining this climate, in terms of climate on the continents and of the ocean state, and in terms of relationships between climate changes over land, ice sheets and oceans. It is still a challenge for climate or Earth System models to represent the amplitude of climate changes for this period under the following forcings: ice sheets, which represent perturbations in land surface type, altitude and land/ocean distribution; atmospheric composition; and astronomical parameters. Feedbacks from vegetation and dust are also known to have played a role in setting up the LGM climate but have not been accounted for in previous PMIP experiments. In this poster, we will present the experimental set-up of the PMIP4 LGM experiment, which is presently being discussed and will be finalized for March 2016. For more information and discussion of the PMIP4-CMIP6 experimental design, please visit: https://wiki.lsce.ipsl.fr/pmip3/doku.php/pmip3:cmip6:design:index
Maximum likelihood molecular clock comb: analytic solutions.
Chor, Benny; Khetan, Amit; Snir, Sagi
2006-04-01
Maximum likelihood (ML) is increasingly used as an optimality criterion for selecting evolutionary trees, but finding the global optimum is a hard computational task. Because no general analytic solution is known, numeric techniques such as hill climbing or expectation maximization (EM) are used to find optimal parameters for a given tree. So far, analytic solutions had been derived only for the simplest model: three taxa, two-state characters, under a molecular clock. Four-taxa rooted trees have two topologies: the fork (two subtrees with two leaves each) and the comb (one subtree with three leaves, the other with a single leaf). In a previous work, we devised a closed-form analytic solution for the ML molecular clock fork. In this work, we extend the state of the art in analytic solutions for ML trees to the family of all four-taxa trees under the molecular clock assumption. The change from the fork topology to the comb incurs a major increase in the complexity of the underlying algebraic system and requires novel techniques and approaches. We combine the ultrametric properties of molecular clock trees with the Hadamard conjugation to derive a number of topology-dependent identities. Employing these identities, we substantially simplify the system of polynomial equations. We finally use tools from algebraic geometry (e.g., Gröbner bases, ideal saturation, resultants) and employ symbolic algebra software to obtain analytic solutions for the comb. We show that, in contrast to the fork, the comb has no closed-form solutions (expressed by radicals in the input data). In general, four-taxa trees can have multiple ML points. In contrast, we can now prove that under the molecular clock assumption the comb has a unique (local and global) ML point. (Such uniqueness was previously shown for the fork.)
In-shoe plantar tri-axial stress profiles during maximum-effort cutting maneuvers.
Cong, Yan; Lam, Wing Kai; Cheung, Jason Tak-Man; Zhang, Ming
2014-12-18
Soft tissue injuries, such as anterior cruciate ligament rupture, ankle sprain and foot skin problems, frequently occur during cutting maneuvers. These injuries are often regarded as associated with abnormal joint torque and interfacial friction caused by excessive external and in-shoe shear forces. This study simultaneously investigated the dynamic in-shoe localized plantar pressure and shear stress during lateral shuffling and 45° sidestep cutting maneuvers. Tri-axial force transducers were affixed at the first and second metatarsal heads, lateral forefoot, and heel regions in the midsole of a basketball shoe. Seventeen basketball players executed both cutting maneuvers with maximum efforts. Lateral shuffling cutting had a larger mediolateral braking force than 45° sidestep cutting. This large braking force was concentrated at the first metatarsal head, as indicated by its maximum medial shear stress (312.2 ± 157.0 kPa). During propulsion phase, peak shear stress occurred at the second metatarsal head (271.3 ± 124.3 kPa). Compared with lateral shuffling cutting, 45° sidestep cutting produced larger peak propulsion shear stress (463.0 ± 272.6 kPa) but smaller peak braking shear stress (184.8 ± 181.7 kPa), of which both were found at the first metatarsal head. During both cutting maneuvers, maximum medial and posterior shear stress occurred at the first metatarsal head, whereas maximum pressure occurred at the second metatarsal head. The first and second metatarsal heads sustained relatively high pressure and shear stress and were expected to be susceptible to plantar tissue discomfort or injury. Due to different stress distribution, distinct pressure and shear cushioning mechanisms in basketball footwear might be considered over different foot regions.
Wood saccharification by enzyme systems without prior delignification
Koshijima, T.; Yaku, F.; Muraki, E.; Tanaka, T.; Azuma, J.
1983-01-01
Around 80% of the polysaccharides contained in Akamatsu (Pinus densiflora) wood were hydrolyzed using Cellulosin AP, originating from Aspergillus niger, without prior delignification, when the wood meal had previously been finely divided for 24 h in a vibration-type ball mill. The substrate concentration used in the enzymatic hydrolysis was 1 g/100 ml. The hydrolysis rate increases to 86% when a 1:1 mixture of Cellulosin AP and Onozuka R-10 cellulases and wood milled for 2 h are used. When the hydrolysis is performed at a substrate concentration of 4 g/100 ml, however, 2 h of milling allows only a 42.5% hydrolysis rate, and more than 24 h of milling is needed to degrade 80% of the polysaccharides. The reaction rate of enzymatic hydrolysis increased with the number of roll millings; 22 passes of roll milling doubled the reaction rate compared with no milling. Plotting the reaction rate V of the enzymatic hydrolysis against substrate concentration (S) showed that V leveled off at (S) = 10 g/100 ml for the 22-times roll-milled wood, whereas the corresponding value was only 4 g/100 ml for the untreated wood. The enzymatic hydrolysis rate was compared for cellulose and wood, both milled to different extents. In the case of cellulose, the hydrolysis rate showed a maximum at 24 h of milling and decreased thereafter. This is not the case with wood meal, where lignin acting as a radical scavenger may prevent the radicals formed in cellulose molecules from retarding the enzymatic hydrolysis of cellulose. 9 references, 9 figures, 7 tables.
Smits-Engelsman, B.C.M.; Rameckers, E.A.A.; Duysens, J.E.J.
2005-01-01
Force control ability was investigated in 10 males and 10 females, between 5 and 15 years old with spastic hemiplegia (mild and moderate hand dysfunction), and an aged-matched control group (eight males, 12 females). An isometric force production task at five different levels of maximum voluntary co
Detecting Casimir Forces through a Tunneling Electromechanical Transducer
Onofrio, Roberto; Carugno, Giovanni
1995-01-01
We propose the use of a tunneling electromechanical transducer to dynamically detect Casimir forces between two conducting surfaces. The maximum distance at which Casimir forces should be detectable with our method is around 1 μm, while the lower limit is set by the ability to approach the surfaces. This technique should also permit the study of gravitational forces over the same range of distances, as well as of vacuum friction, provided that very-low-dissipation mechanical resonators are used.
Intramuscular fiber conduction velocity, isometric force and explosive performance
Methenitis Spyridon; Terzis Gerasimos; Zaras Nikolaos; Stasinaki Angeliki-Nikoletta; Karandreas Nikolaos
2016-01-01
Conduction of electrical signals along the surface of muscle fibers is acknowledged as an essential neuromuscular component which is linked with muscle force production. However, it remains unclear whether muscle fiber conduction velocity (MFCV) is also linked with explosive performance. The aim of the present study was to investigate the relationship between vastus lateralis MFCV and countermovement jumping performance, the rate of force development and maximum isometric force. Fifteen moder...
Domire, Zachary J; Challis, John H
2010-12-01
The maximum velocity of shortening of a muscle is an important parameter in musculoskeletal models. The most commonly used values are derived from animal studies; however, these values are well above those reported for human muscle. The purpose of this study was to examine the sensitivity of simulations of maximum vertical jumping performance to the parameters describing the force-velocity properties of muscle. Simulations performed with parameters derived from animal studies were similar to measured jump heights from previous experimental studies, whereas simulations performed with parameters derived from human muscle gave jump heights much lower than previously measured. If current measurements of maximum shortening velocity in human muscle are correct, a compensating error must exist. Of the possible compensating errors that could produce this discrepancy, reduced muscle fibre excursion was concluded to be the most likely candidate.
The Prediction of Maximum Amplitudes of Solar Cycles and the Maximum Amplitude of Solar Cycle 24
Anonymous
2002-01-01
We present a brief review of predictions of solar cycle maximum amplitude with a lead time of 2 years or more. It is pointed out that a precise prediction of the maximum amplitude with such a lead time is still an open question despite progress made since the 1960s. A method of prediction using statistical characteristics of solar cycles is developed: the solar cycles are divided into two groups, a high rising velocity (HRV) group and a low rising velocity (LRV) group, depending on the rising velocity in the ascending phase for a given duration of the ascending phase. The amplitude of Solar Cycle 24 can be predicted after the start of the cycle using the formula derived in this paper. Now, about 5 years before the start of the cycle, we can make a preliminary prediction of 83.2-119.4 for its maximum amplitude.
Maximum Relative Entropy Updating and the Value of Learning
Patryk Dziurosz-Serafinowicz
2015-03-01
We examine the possibility of justifying the principle of maximum relative entropy (MRE), considered as an updating rule, by looking at the value-of-learning theorem established in classical decision theory. This theorem captures an intuitive requirement for learning: learning should lead to new degrees of belief that are expected to be helpful and never harmful in making decisions. We call this requirement the value of learning. We consider the extent to which learning rules by MRE could satisfy this requirement and so could be a rational means for pursuing practical goals. First, by representing MRE updating as a conditioning model, we show that MRE satisfies the value of learning in cases where learning prompts a complete redistribution of one's degrees of belief over a partition of propositions. Second, we show that the value of learning may not be generally satisfied by MRE updates in cases of updating on a change in one's conditional degrees of belief. We explain that this is so because, contrary to what the value of learning requires, one's prior degrees of belief might not equal the expectation of one's posterior degrees of belief. This, in turn, points towards a more general moral: that the justification of MRE updating in terms of the value of learning may be sensitive to the context of a given learning experience. Moreover, this lends support to the idea that MRE is neither a universal nor a mechanical updating rule, but rather a rule whose application and justification may be context-sensitive.
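The MRE update discussed above is easy to state concretely: among all distributions q satisfying a learned constraint E_q[f] = c, the minimizer of the relative entropy D(q‖p) to the prior p is an exponential tilt of p. A minimal sketch, using the classic Jaynes die example (uniform prior, constrained mean); the prior, constraint function, and target mean are illustrative.

```python
import math

# Maximum-relative-entropy (MRE) update: the q minimizing D(q || p) subject
# to E_q[f] = c has the Gibbs form q_i ∝ p_i * exp(lam * f_i).  Since
# E_q[f] is increasing in lam, the multiplier is found by bisection.

def mre_update(p, f, c, lo=-50.0, hi=50.0, iters=200):
    def tilted_mean(lam):
        w = [pi * math.exp(lam * fi) for pi, fi in zip(p, f)]
        z = sum(w)
        return sum(wi * fi for wi, fi in zip(w, f)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if tilted_mean(mid) < c:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [pi * math.exp(lam * fi) for pi, fi in zip(p, f)]
    z = sum(w)
    return [wi / z for wi in w]

# Update a uniform prior on a die face to have mean 4.5 (Jaynes's example).
p = [1 / 6] * 6
f = [1, 2, 3, 4, 5, 6]
q = mre_update(p, f, 4.5)
print(round(sum(q), 6), round(sum(qi * fi for qi, fi in zip(q, f)), 3))
```

When the prior p is uniform this reduces to ordinary MaxEnt; a non-uniform p is exactly the "prior knowledge encoded through a model" discussed elsewhere in this collection.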
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Daigle Bernie J
2012-05-01
Background: A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs). MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results: We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2): an accelerated method for calculating MLEs that combines advances in rare-event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM) algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods.
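The flavor of likelihood maximization for a stochastic kinetic model can be shown on the analytically tractable pure-birth model mentioned in the abstract. This toy sketch (with invented data, and without the Monte Carlo EM machinery of MCEM2) writes down the exact likelihood of observed waiting times and its closed-form maximizer.

```python
import math

# Toy ML estimation for a linear pure-birth process with per-capita rate lam:
# at population n, the waiting time to the next birth is Exponential(lam * n),
# so for observed waiting times tau_i at populations n_i the log-likelihood is
#   L(lam) = sum_i [log(lam * n_i) - lam * n_i * tau_i],
# maximized in closed form by lam_hat = k / sum_i n_i * tau_i.
# The data below are made up for illustration.

def loglik(lam, pops, taus):
    return sum(math.log(lam * n) - lam * n * t for n, t in zip(pops, taus))

def mle_birth_rate(pops, taus):
    return len(taus) / sum(n * t for n, t in zip(pops, taus))

pops = [1, 2, 3, 4, 5]             # population just before each birth
taus = [0.9, 0.6, 0.3, 0.2, 0.2]   # observed inter-birth waiting times
lam_hat = mle_birth_rate(pops, taus)
print(round(lam_hat, 4))

# Sanity check: the closed form really is a maximizer of the likelihood.
assert loglik(lam_hat, pops, taus) >= loglik(lam_hat * 1.1, pops, taus)
assert loglik(lam_hat, pops, taus) >= loglik(lam_hat * 0.9, pops, taus)
```

The hard case the paper addresses is precisely when no such closed form exists and consistent trajectories must be simulated instead.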
Species abundance distributions, statistical mechanics and the priors of MaxEnt.
Bowler, M G
2014-03-01
The methods of Maximum Entropy have been deployed for some years to address the problem of species abundance distributions. In this approach, it is important to identify the correct weighting factors, or priors, to be applied before maximising the entropy function subject to constraints. The forms of such priors depend not only on the exact problem but can also depend on the way it is set up; priors are determined by the underlying dynamics of the complex system under consideration. The problem is one of statistical mechanics and it is the properties of the system that yield the correct MaxEnt priors, appropriate to the way the problem is framed. Here I calculate, in several different ways, the species abundance distribution resulting when individuals in a community are born and die independently. In the usual formulation the prior distribution for the number of species over the number of individuals is 1/n; the problem can be reformulated in terms of the distribution of individuals over species classes, with a uniform prior. Results are obtained using master equations for the dynamics and separately through the combinatoric methods of elementary statistical mechanics; the MaxEnt priors then emerge a posteriori. The first object is to establish the log series species abundance distribution as the outcome of per capita guild dynamics. The second is to clarify the true nature and origin of priors in the language of MaxEnt. Finally, I consider how it may come about that the distribution is similar to log series in the event that filled niches dominate species abundance. For the general ecologist, there are two messages. First, that species abundance distributions are determined largely by population sorting through fractional processes (resulting in the 1/n factor) and secondly that useful information is likely to be found only in departures from the log series. For the MaxEnt practitioner, the message is that the prior with respect to which the entropy is to be
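The log-series outcome discussed above is simple to compute explicitly: with the 1/n prior, the MaxEnt species abundance distribution is p_n ∝ x^n / n with 0 < x < 1, normalized by -log(1 - x). A short sketch with an illustrative value of x:

```python
import math

# Fisher's log series: with the 1/n prior, MaxEnt under a mean-abundance
# constraint gives p_n ∝ x**n / n for n = 1, 2, ...  The normalization uses
# the identity -log(1 - x) = sum_{n>=1} x**n / n.  x = 0.95 is illustrative.

def log_series(x, n_max):
    """Probability that a species has n individuals, n = 1..n_max."""
    norm = -math.log(1.0 - x)
    return [x**n / (n * norm) for n in range(1, n_max + 1)]

p = log_series(0.95, 2000)
print(round(sum(p), 4))        # close to 1 once the truncated tail is negligible

# Rare species dominate: singletons are the most probable abundance class,
# and the distribution decreases monotonically in n.
assert p[0] == max(p)
assert all(p[i] >= p[i + 1] for i in range(len(p) - 1))
```

This monotone-decreasing shape is why, as the abstract argues, the useful ecological signal lies in departures from the log series rather than in the log series itself.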
Comparison of no-prior and soft-prior regularization in biomedical microwave imaging
Amir H Golnabi
2011-01-01
Microwave imaging for medical applications is attractive because the range of dielectric properties of different soft tissues can be substantial. Breast cancer detection and monitoring of treatment response are areas where this technology could be important because of the contrast between normal and malignant tissue. Unfortunately, the technique is unable to achieve the high spatial resolution at depth in tissue which is available from other conventional modalities such as x-ray computed tomography (CT) or magnetic resonance imaging (MRI). We have incorporated a soft-prior regularization strategy within our microwave reconstruction algorithm and compared it with the images obtained with traditional no-prior (Levenberg-Marquardt) regularization. Initial simulation and phantom results show a significant improvement of the recovered electrical properties. Specifically, errors in the microwave property estimates were improved by as much as 95%. The effects of a false-inclusion region were also evaluated and the results show that a small residual property bias of 6% in permittivity and 15% in conductivity can occur that does not otherwise degrade the property recovery accuracy of inclusions that actually exist. The work sets the stage for integrating microwave imaging with MR for improved resolution and functional imaging of the breast in the future.
Fundamental limits of optical force and torque
Rahimzadegan, A.; Alaee, R.; Fernandez-Corbaton, I.; Rockstuhl, C.
2017-01-01
Optical force and torque provide unprecedented control over the spatial motion of small particles. A valid scientific question with many practical implications concerns the existence of fundamental upper bounds on the force and torque achievable under plane-wave illumination of a given intensity. Here, studying isotropic particles, we show that different light-matter interaction channels contribute to the exerted force and torque, and we analytically derive upper bounds for each of the contributions. Specific examples of particles that achieve those upper bounds are provided. We study how, and to what extent, different contributions can add up to yield the maximum optical force and torque. Our insights are important for applications ranging from molecular sorting, particle manipulation, and nanorobotics up to ambitious projects such as laser-propelled spaceships.
Mista, Christian Ariel; Graven-Nielsen, Thomas
2013-01-01
Experimentally induced muscle pain changes the distribution of muscle activity and affects muscle coordination. Force steadiness is impaired during muscle pain in the task-related force direction as well as in the tangential directions. In addition, pain leads to a mismatch between the sense of effort and motor output during contractions. However, little is known about the effects of pain on the force components when task-related or three-dimensional force-matching tasks are required. The aim of this study was to quantify changes in force variability during task-related and three-dimensional force tasks during acute muscle pain. Twelve right-handed healthy volunteers participated in the experiment. Three-dimensional force signals were acquired during isometric elbow flexion at 5%, 15%, and 30% of the maximum voluntary contraction (MVC). The force components were represented by a circle on a computer screen, and a moving square was used as the visual target. Subjects were asked to match the main direction of the contraction during the task-related (1D) or all force components during the three-dimensional (3D) force-matching tasks. Isotonic and hypertonic saline injections were randomly injected into the biceps brachii muscle. The coefficient of variation (CV) was used to analyze the variability in the task-related force direction, and the total excursion of the center of pressure (CoP) was used to quantify the variability in the tangential force directions.
Gao Chunwen; Xu Jingzhen; Richard Sinding-Larsen
2005-01-01
A Bayesian approach using Markov chain Monte Carlo algorithms has been developed to analyze Smith's discretized version of the discovery process model. It avoids the problems of the maximum likelihood method by effectively combining the information from the prior distribution with that from the discovery sequence through posterior probabilities. All statistical inferences about the parameters of the model and the total resources can be quantified by drawing samples directly from the joint posterior distribution. In addition, the statistical errors of the samples can easily be assessed and convergence can be monitored during sampling. Because the information contained in a discovery sequence is not enough to estimate all parameters, especially the number of fields, geologically justified prior information is crucial to the estimation. The Bayesian approach allows the analyst to specify subjective estimates of the required parameters, and the degree of uncertainty about those estimates, in a clearly identified fashion throughout the analysis. As an example, this approach is applied to the same North Sea data on which Smith demonstrated his maximum likelihood method. For this case, the Bayesian approach substantially improves on the overly pessimistic results and downward bias of the maximum likelihood procedure.
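The sampling strategy described above can be illustrated with a minimal random-walk Metropolis sampler. The model below is a deliberately simple stand-in (a Gamma prior on a Poisson rate, chosen because the posterior is known in closed form for checking), not Smith's discovery-process model.

```python
import math
import random

# Minimal random-walk Metropolis sampler: draw from a posterior combining a
# Gamma(2, 1) prior on a Poisson rate lam with the likelihood of observed
# counts.  The conjugate posterior is Gamma(2 + sum(counts), 1 + len(counts)),
# which lets us check the sampler.  Data and prior are illustrative.

random.seed(1)
counts = [3, 4, 2, 5, 3]

def log_post(lam):
    if lam <= 0.0:
        return -math.inf
    log_prior = math.log(lam) - lam                  # Gamma(2, 1), up to a constant
    log_like = sum(c * math.log(lam) - lam for c in counts)
    return log_prior + log_like

samples, lam = [], 1.0
for i in range(20000):
    prop = lam + random.gauss(0.0, 0.5)              # symmetric random-walk proposal
    if random.random() < math.exp(min(0.0, log_post(prop) - log_post(lam))):
        lam = prop
    if i >= 2000:                                    # discard burn-in
        samples.append(lam)

post_mean = sum(samples) / len(samples)
# Conjugacy check: the exact posterior is Gamma(19, 6), mean 19/6 ≈ 3.17.
print(round(post_mean, 2))
```

The same posterior samples directly yield the uncertainty statements the abstract mentions (credible intervals, convergence monitoring), which is the practical advantage over a point MLE.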
Pattern formation, logistics, and maximum path probability
Kirkaldy, J. S.
1985-05-01
The concept of pattern formation, which to current researchers is a synonym for self-organization, carries the connotation of deductive logic together with the process of spontaneous inference. Defining a pattern as an equivalence relation on a set of thermodynamic objects, we establish that a large class of irreversible pattern-forming systems, evolving along idealized quasisteady paths, approaches the stable steady state as a mapping upon the formal deductive imperatives of a propositional function calculus. In the preamble the classical reversible thermodynamics of composite systems is analyzed as an externally manipulated system of space partitioning and classification based on ideal enclosures and diaphragms. The diaphragms have discrete classification capabilities which are designated in relation to conserved quantities by descriptors such as impervious, diathermal, and adiabatic. Differentiability in the continuum thermodynamic calculus is invoked as equivalent to analyticity and consistency in the underlying class or sentential calculus. The seat of inference, however, rests with the thermodynamicist. In the transition to an irreversible pattern-forming system the defined nature of the composite reservoirs remains, but a given diaphragm is replaced by a pattern-forming system which by its nature is a spontaneously evolving volume partitioner and classifier of invariants. The seat of volition or inference for the classification system is thus transferred from the experimenter or theoretician to the diaphragm, and with it the full deductive facility. The equivalence relations or partitions associated with the emerging patterns may thus be associated with theorems of the natural pattern-forming calculus. The entropy function, together with its derivatives, is the vehicle which relates the logistics of reservoirs and diaphragms to the analog logistics of the continuum. Maximum path probability or second-order differentiability of the entropy in isolation are
Self-similar prior and wavelet bases for hidden incompressible turbulent motion
Héas, Patrick; Kadri-Harouna, Souleymane
2013-01-01
This work is concerned with the ill-posed inverse problem of estimating turbulent flows from the observation of an image sequence. From a Bayesian perspective, a divergence-free isotropic fractional Brownian motion (fBm) is chosen as the prior model for instantaneous turbulent velocity fields. This self-similar prior accurately characterizes the second-order statistics of velocity fields in incompressible isotropic turbulence. Nevertheless, the associated maximum a posteriori estimate involves a fractional Laplacian operator which is delicate to implement in practice. To deal with this issue, we propose to decompose the divergence-free fBm on well-chosen wavelet bases. As a first alternative, we propose to design wavelets that act as whitening filters. We show that these filters are fractional Laplacian wavelets composed with the Leray projector. As a second alternative, we use a divergence-free wavelet basis, which implicitly takes into account the incompressibility constraint arising from the physics. Although the latter decomposition ...
Partin, Judson Wiley
The West Pacific Warm Pool (WPWP) plays an important role in the global heat budget and global hydrologic cycle, so knowledge about its past variability would improve our understanding of global climate. Variations in WPWP precipitation are most notable during El Nino-Southern Oscillation events, when climate changes in the tropical Pacific impact rainfall not only in the WPWP, but around the globe. The stalagmite records presented in this dissertation provide centennial-to-millennial-scale constraints on WPWP precipitation during three distinct climatic periods: the Last Glacial Maximum (LGM), the last deglaciation, and the Holocene. In Chapter 2, the methodologies associated with the generation of U/Th-based absolute ages for the stalagmites are presented. In the final age models for the stalagmites, dates younger than 11,000 years have absolute errors of +/-400 years or less, and dates older than 11,000 years have a relative error of +/-2%. Stalagmite-specific 230Th/232Th ratios, calculated using isochrons, are used to correct for the presence of unsupported 230Th in a stalagmite at the time of formation. Hiatuses in the record are identified using a combination of optical properties, high 232Th concentrations, and extrapolation from adjacent U/Th dates. In Chapter 3, stalagmite oxygen isotopic composition (delta18O) records from N. Borneo are presented that reveal millennial-scale rainfall changes that occurred in response to changes in global climate boundary conditions, radiative forcing, and abrupt climate changes. The stalagmite delta18O records detect little change in inferred precipitation between the LGM and the present, although significant uncertainties are associated with the impact of the Sunda Shelf on rainfall delta18O during the LGM. A millennial-scale drying in N. Borneo, inferred from an increase in stalagmite delta18O, peaks at ~16.5 ka, coeval with the timing of Heinrich event 1, possibly related to a southward movement of the Intertropical Convergence Zone.
Modeling the effects of prior infection on vaccine efficacy
Smith, D.J.; Forrest, S.; Ackley, D.H. [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Perelson, A.S. [Los Alamos National Lab., NM (United States)
1997-11-01
We performed computer simulations to study the effects of prior infection on vaccine efficacy. We injected three antigens sequentially. The first antigen, designated the prior, represented a prior infection or vaccination. The second antigen, the vaccine, represented a single component of the trivalent influenza vaccine. The third antigen, the epidemic, represented challenge by an epidemic strain. For a fixed vaccine-to-epidemic cross-reactivity, we varied the prior strain's cross-reactivities to the vaccine and to the epidemic strains. We found that, for many cross-reactivities, vaccination, when it had been preceded by a prior infection, provided more protection than vaccination alone. However, at some cross-reactivities, the prior infection reduced protection by clearing the vaccine before it had the chance to produce protective memory. The cross-reactivities between the prior, vaccine and epidemic strains played a major role in determining vaccine efficacy. This work has applications to understanding vaccination against viruses such as influenza that are continually mutating.
Labor Force Participation Rate
City and County of Durham, North Carolina — This thematic map presents the labor force participation rate of working-age people in the United States in 2010. The 2010 Labor Force Participation Rate shows the...
Arzura Idris
2012-06-01
This paper analyzes the phenomenon of “forced migration” in Malaysia. It examines the nature of forced migration, the challenges faced by Malaysia, the policy responses and their impact on the country and upon the forced migrants. It considers forced migration as an event hosting multifaceted issues related and relevant to forced migrants and suggests that Malaysia has been preoccupied with the issue of forced migration movements. This is largely seen in various responses invoked from Malaysia due to “south-south forced migration movements.” These responses are, however, inadequate in terms of commitment to the international refugee regime. While Malaysia did respond to economic and migration challenges, the paper asserts that such efforts are futile if it ignores issues critical to forced migrants.
Non-negative matrix factorization with Gaussian process priors
Schmidt, Mikkel Nørgaard; Laurberg, Hans
2008-01-01
We present a general method for including prior knowledge in a nonnegative matrix factorization (NMF), based on Gaussian process priors. We assume that the nonnegative factors in the NMF are linked by a strictly increasing function to an underlying Gaussian process specified by its covariance function. This allows us to find NMF decompositions that agree with our prior knowledge of the distribution of the factors, such as sparseness, smoothness, and symmetries. The method is demonstrated with an example from chemical shift brain imaging.
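The link-function construction described in this abstract can be sketched numerically. The toy example below is our own illustration, not the authors' code: the squared-exponential covariance, the exp link, the single-factor setup, and the closed-form weight update are all assumptions made for the sketch. A non-negative factor h = exp(u) is tied to a latent Gaussian process u, and the MAP estimate of u trades data fit against the GP prior, yielding a smooth non-negative decomposition.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def se_cov(n, length=5.0):
    """Squared-exponential covariance over 1-D positions (our choice)."""
    idx = np.arange(n)
    return np.exp(-0.5 * (idx[:, None] - idx[None, :]) ** 2 / length**2) \
        + 1e-6 * np.eye(n)

# Toy data: one smooth non-negative source h observed in 20 noisy mixtures.
n, m = 40, 20
K = se_cov(n)
true_h = np.exp(np.linalg.cholesky(K) @ rng.standard_normal(n))
true_w = np.abs(rng.standard_normal((m, 1)))
X = true_w @ true_h[None, :] + 0.01 * rng.standard_normal((m, n))

Kinv = np.linalg.inv(K)

def neg_log_posterior(u):
    """Data misfit (weights solved in closed form) + GP prior on u,
    where the non-negative factor is h = exp(u) (strictly increasing link)."""
    h = np.exp(u)
    w = np.clip(X @ h / (h @ h), 0.0, None)[:, None]  # optimal weights given h
    return 0.5 * np.sum((X - w @ h[None, :]) ** 2) + 0.5 * u @ Kinv @ u

res = minimize(neg_log_posterior, np.zeros(n), method="L-BFGS-B")
h_hat = np.exp(res.x)
w_hat = np.clip(X @ h_hat / (h_hat @ h_hat), 0.0, None)[:, None]
rel_err = np.linalg.norm(X - w_hat @ h_hat[None, :]) / np.linalg.norm(X)
print(f"relative reconstruction error: {rel_err:.3f}")  # typically well below 0.2
```

The exp link guarantees non-negativity without explicit constraints; swapping in a different covariance function changes which factor shapes (sparse, smooth, symmetric) the prior favors.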
Spectrally Consistent Satellite Image Fusion with Improved Image Priors
Nielsen, Allan Aasbjerg; Aanæs, Henrik; Jensen, Thomas B.S.;
2006-01-01
Here an improvement to our previous framework for satellite image fusion is presented: a framework purely based on the sensor physics and on prior assumptions about the fused image. The contributions of this paper are twofold. Firstly, a method for ensuring 100% spectral consistency is proposed, even when more sophisticated image priors are applied. Secondly, a better image prior is introduced, via data-dependent image smoothing.
Acquisition of multiple prior distributions in tactile temporal order judgment
Yasuhito eNagai
2012-08-01
The Bayesian estimation theory proposes that the brain acquires the prior distribution of a task and integrates it with sensory signals to minimize the effect of sensory noise. Psychophysical studies have demonstrated that our brain actually implements Bayesian estimation in a variety of sensory-motor tasks. However, these studies only imposed one prior distribution on participants within a task period. In this study, we investigated the conditions that enable the acquisition of multiple prior distributions in temporal order judgment (TOJ) of two tactile stimuli across the hands. In Experiment 1, stimulation intervals were randomly selected from one of two prior distributions (biased to right hand earlier and biased to left hand earlier) in association with color cues (green and red, respectively). Although the acquisition of the two priors was not enabled by the color cues alone, it was significant when participants shifted their gaze (above or below) in response to the color cues. However, the acquisition of multiple priors was not significant when participants moved their mouths (opened or closed). In Experiment 2, the spatial cues (above and below) were used to identify which eye position or retinal cue position was crucial for the eye-movement-dependent acquisition of multiple priors in Experiment 1. The acquisition of the two priors was significant when participants moved their gaze to the cues (i.e., the cue positions on the retina were constant across the priors), as well as when participants did not shift their gazes (i.e., the cue positions on the retina changed according to the priors). Thus, both eye and retinal cue positions were effective in acquiring multiple priors. Based on previous neurophysiological reports, we discuss possible neural correlates that contribute to the acquisition of multiple priors.
Cardiorespiratory Fitness of Inmates of a Maximum Security Prison ...
USER
Maximum Security Prison; and also to determine the effects of age, gender, and period of incarceration on CRF. A total of 247 apparently healthy inmates of Maiduguri Maximum Security ... with different types of cardiovascular and metabolic.
Maximum likelihood polynomial regression for robust speech recognition
LU Yong; WU Zhenyang
2011-01-01
The linear hypothesis is the main disadvantage of maximum likelihood linear regression (MLLR). This paper applies the polynomial regression method to model adaptation and establishes a nonlinear model adaptation algorithm using maximum likelihood polynomial regression.
Jendrzejczyk, Joseph A.
1982-01-01
An electrical fluid force transducer for measuring the magnitude and direction of fluid forces caused by lateral fluid flow, includes a movable sleeve which is deflectable in response to the movement of fluid, and a rod fixed to the sleeve to translate forces applied to the sleeve to strain gauges attached to the rod, the strain gauges being connected in a bridge circuit arrangement enabling generation of a signal output indicative of the magnitude and direction of the force applied to the sleeve.
Hydrophobic Forces in Flotation
Pazhianur, Rajesh R
1999-01-01
An atomic force microscope (AFM) has been used to conduct force measurements to better understand the role of hydrophobic forces in flotation. The force measurements were conducted between a flat mineral substrate and a hydrophobic glass sphere in aqueous solutions. It is assumed that the hydrophobic glass sphere may simulate the behavior of air bubbles during flotation. The results may provide information relevant to the bubble-particle interactions occurring during flotation. The glass ...
Bayesian estimation of generalized exponential distribution under noninformative priors
Moala, Fernando Antonio; Achcar, Jorge Alberto; Tomazella, Vera Lúcia Damasceno
2012-10-01
The generalized exponential distribution, proposed by Gupta and Kundu (1999), is a good alternative to standard lifetime distributions such as the exponential, Weibull or gamma. Several authors have considered the problem of Bayesian estimation of the parameters of the generalized exponential distribution, assuming independent gamma priors and other informative priors. In this paper, we consider a Bayesian analysis of the generalized exponential distribution by assuming the conventional noninformative prior distributions, such as the Jeffreys and reference priors, to estimate the parameters. These priors are compared with independent gamma priors for both parameters. The comparison is carried out by examining the frequentist coverage probabilities of Bayesian credible intervals. We show that the maximal data information prior implies an improper posterior distribution for the parameters of a generalized exponential distribution. It is also shown that the choice of a parameter of interest is very important for the reference prior: the different choices lead to different reference priors in this case. Numerical inference is illustrated for the parameters by considering data sets of different sizes and using MCMC (Markov chain Monte Carlo) methods.
Prior knowledge in recalling arguments in bioethical dilemmas
Hiemke Katharina Schmidt
2015-09-01
Prior knowledge is known to facilitate learning new information. Normally in studies confirming this outcome the relationship between prior knowledge and the topic to be learned is obvious: the information to be acquired is part of the domain or topic to which the prior knowledge belongs. This raises the question as to whether prior knowledge of various domains facilitates recalling information. In this study 79 eleventh-grade students completed a questionnaire on their prior knowledge of seven different domains related to the bioethical dilemma of prenatal diagnostics. The students read a text containing arguments for and against prenatal diagnostics. After one week and again 12 weeks later they were asked to write down all the arguments they remembered. Prior knowledge helped them recall the arguments one week (r = .350) and 12 weeks (r = .316) later. Prior knowledge of three of the seven domains significantly helped them recall the arguments one week later (correlations between r = .194 and r = .394). Partial correlations with interest as a control item revealed that interest did not explain the relationship between prior knowledge and recall. Prior knowledge of different domains jointly supports the recall of arguments related to bioethical topics.
Debunking Coriolis Force Myths
Shakur, Asif
2014-01-01
Much has been written and debated about the Coriolis force. Unfortunately, this has done little to demystify the paradoxes surrounding this fictitious force invoked by an observer in a rotating frame of reference. It is the purpose of this article to make another valiant attempt to slay the dragon of the Coriolis force! This will be done without…
Ridgely, Charles T.
2010-01-01
Many textbooks dealing with general relativity do not demonstrate the derivation of forces in enough detail. The analyses presented herein demonstrate straightforward methods for computing forces by way of general relativity. Covariant divergence of the stress-energy-momentum tensor is used to derive a general expression of the force experienced…
M. Mihelich
2014-11-01
We derive rigorous results on the link between the principle of maximum entropy production and the principle of maximum Kolmogorov–Sinai entropy using a Markov model of passive scalar diffusion called the Zero Range Process. We show analytically that both the entropy production and the Kolmogorov–Sinai entropy, seen as functions of f, admit a unique maximum, denoted fmaxEP and fmaxKS. The behavior of these two maxima is explored as a function of the system disequilibrium and the system resolution N. The main result of this article is that fmaxEP and fmaxKS have the same Taylor expansion at first order in the deviation from equilibrium. We find that fmaxEP hardly depends on N whereas fmaxKS depends strongly on N. In particular, for a fixed difference of potential between the reservoirs, fmaxEP(N) tends towards a non-zero value, while fmaxKS(N) tends to 0 when N goes to infinity. For values of N typical of those adopted by Paltridge and climatologists (N ≈ 10 to 100), we show that fmaxEP and fmaxKS coincide even far from equilibrium. Finally, we show that one can find an optimal resolution N* such that fmaxEP and fmaxKS coincide, at least up to a second-order parameter proportional to the non-equilibrium fluxes imposed at the boundaries. We find that the optimal resolution N* depends on the non-equilibrium fluxes, so that deeper convection should be represented on finer grids. This result points to the inadequacy of using a single grid for representing convection in climate and weather models. Moreover, the application of this principle to passive scalar transport parametrization is therefore expected to provide both the value of the optimal flux, and of the optimal number of degrees of freedom (resolution) to describe the system.
Mamuris, Z.; Dumont, J.; Dutrillaux, B.; Aurias, A. (Institut Curie, Paris (France))
1989-10-01
A cytogenetic study of 14 patients with secondary acute nonlymphocytic leukemia (S-ANLL) following prior treatment for breast cancer is reported. The chromosomes recurrently involved in numerical or structural anomalies are chromosomes 7, 5, 17, and 11, in decreasing order of frequency. The distribution of the anomalies detected in this sample of patients is similar to that observed in published cases with prior breast or other solid tumors (although anomalies of chromosome 11 had not previously been pointed out), but it differs significantly from that of S-ANLL with prior hematologic malignancies. This difference is principally due to a higher involvement of chromosome 7 in patients with prior hematologic malignancies and of chromosomes 11 and 17 in patients with prior solid tumors. A genetic determinism involving abnormal recessive alleles located on chromosomes 5, 7, 11, and 17, uncovered by deletions of the normal homologs, may be a cause of S-ANLL. The difference between patients with prior hematologic malignancies or solid tumors may be explained by different constitutional mutations of recessive genes in the two groups of patients.
20 CFR 617.14 - Maximum amount of TRA.
2010-04-01
20 CFR, Employees' Benefits: ... FOR WORKERS UNDER THE TRADE ACT OF 1974, Trade Readjustment Allowances (TRA), § 617.14 Maximum amount of TRA. (a) General rule. Except as provided under paragraph (b) of this section, the maximum amount of...
40 CFR 94.107 - Determination of maximum test speed.
2010-07-01
... specified in 40 CFR 1065.510. These data points form the lug curve. It is not necessary to generate the... 40 CFR, Protection of Environment, § 94.107 Determination of maximum test speed. (a) Overview. This section specifies how to determine maximum test...
14 CFR 25.1505 - Maximum operating limit speed.
2010-01-01
14 CFR, Aeronautics and Space, Operating Limitations, § 25.1505 Maximum operating limit speed. The maximum operating limit speed (VMO/MMO, airspeed or Mach number, whichever is critical at a particular altitude) is a speed that may not...
Maximum Performance Tests in Children with Developmental Spastic Dysarthria.
Wit, J.; And Others
1993-01-01
Three Maximum Performance Tasks (Maximum Sound Prolongation, Fundamental Frequency Range, and Maximum Repetition Rate) were administered to 11 children (ages 6-11) with spastic dysarthria resulting from cerebral palsy and 11 controls. Despite intrasubject and intersubject variability in normal and pathological speakers, the tasks were found to be…
Maximum physical capacity testing in cancer patients undergoing chemotherapy
Knutsen, L.; Quist, M; Midtgaard, J
2006-01-01
BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine...
Constant force extensional rheometry of polymer solutions
Szabo, Peter; McKinley, Gareth H.; Clasen, Christian
2012-01-01
We revisit the rapid stretching of a liquid filament under the action of a constant imposed tensile force, a problem which was first considered by Matta and Tytus [J. Non-Newton. Fluid Mech. 35 (1990) 215–229]. A liquid bridge formed from a viscous Newtonian fluid or from a dilute polymer solution ... filament can be probed. In particular, we show that with this constant force pull (CFP) technique it is possible to readily impose very large material strains and strain rates so that the maximum extensibility of the polymer molecules may be quantified. This unique characteristic of the experiment ...
Huber, Daniel R; Eason, Thomas G; Hueter, Robert E; Motta, Philip J
2005-01-01
Three-dimensional static equilibrium analysis of the forces generated by the jaw musculature of the horn shark Heterodontus francisci was used to theoretically estimate the maximum force distributions...
On the Maximum Storage Capacity of the Hopfield Model
Folli, Viola; Leonetti, Marco; Ruocco, Giancarlo
2017-01-01
Recurrent neural networks (RNN) have traditionally been of great interest for their capacity to store memories. In past years, several works have been devoted to determine the maximum storage capacity of RNN, especially for the case of the Hopfield network, the most popular kind of RNN. Analyzing the thermodynamic limit of the statistical properties of the Hamiltonian corresponding to the Hopfield neural network, it has been shown in the literature that the retrieval errors diverge when the number of stored memory patterns (P) exceeds a fraction (≈ 14%) of the network size N. In this paper, we study the storage performance of a generalized Hopfield model, where the diagonal elements of the connection matrix are allowed to be different from zero. We investigate this model at finite N. We give an analytical expression for the number of retrieval errors and show that, by increasing the number of stored patterns over a certain threshold, the errors start to decrease and reach values below unity for P ≫ N. We demonstrate that the strongest trade-off between efficiency and effectiveness relies on the number of patterns (P) that are stored in the network by appropriately fixing the connection weights. When P ≫ N and the diagonal elements of the adjacency matrix are not forced to be zero, the optimal storage capacity is obtained with a number of stored memories much larger than previously reported. This theory paves the way to the design of RNN with high storage capacity and able to retrieve the desired pattern without distortions. PMID:28119595
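The classical zero-diagonal Hebbian construction that this paper generalizes can be sketched in a few lines: below the ≈ 0.14 capacity threshold the stored patterns are fixed points of the dynamics and are recalled essentially without error. This is a generic textbook sketch, not the authors' finite-N analysis; the network size, pattern count, seed, and synchronous update rule are our choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def hebbian_weights(patterns, zero_diagonal=True):
    """Hebbian connection matrix J = (1/N) X^T X; the paper's variant
    keeps the diagonal, the classical Hopfield model zeroes it."""
    n = patterns.shape[1]
    j = patterns.T @ patterns / n
    if zero_diagonal:
        np.fill_diagonal(j, 0.0)
    return j

def retrieve(j, x, max_steps=20):
    """Synchronous sign updates until a fixed point or the step limit."""
    for _ in range(max_steps):
        x_new = np.where(j @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

n_units, n_patterns = 100, 5          # P/N = 0.05, well below ~0.14
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))
j = hebbian_weights(patterns)
errs = sum(int(np.sum(retrieve(j, p) != p)) for p in patterns)
print("total flipped bits over all stored patterns:", errs)
```

Pushing n_patterns toward and past 14 with the same N makes the flipped-bit count climb sharply, which is the divergence the abstract refers to; setting zero_diagonal=False reproduces the variant studied in the paper.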
Maximum host survival at intermediate parasite infection intensities.
Martin Stjernman
BACKGROUND: Although parasitism has been acknowledged as an important selective force in the evolution of host life histories, studies of fitness effects of parasites in wild populations have yielded mixed results. One reason for this may be that most studies only test for a linear relationship between infection intensity and host fitness. If resistance to parasites is costly, however, fitness may be reduced both for hosts with low infection intensities (cost of resistance) and high infection intensities (cost of parasitism), such that individuals with intermediate infection intensities have the highest fitness. Under this scenario one would expect a non-linear relationship between infection intensity and fitness. METHODOLOGY/PRINCIPAL FINDINGS: Using data from blue tits (Cyanistes caeruleus) in southern Sweden, we investigated the relationship between the intensity of infection by its blood parasite (Haemoproteus majoris) and host survival to the following winter. Presence and intensity of parasite infections were determined by microscopy and confirmed using PCR of a 480 bp section of the cytochrome-b gene. While a linear model suggested no relationship between parasite intensity and survival (F = 0.01, p = 0.94), a non-linear model showed a significant negative quadratic effect (quadratic parasite intensity: F = 4.65, p = 0.032; linear parasite intensity: F = 4.47, p = 0.035). Visualization using the cubic spline technique showed maximum survival at intermediate parasite intensities. CONCLUSIONS/SIGNIFICANCE: Our results indicate that failing to recognize the potential for a non-linear relationship between parasite infection intensity and host fitness may lead to the potentially erroneous conclusion that the parasite is harmless to its host. Here we show that high parasite intensities indeed reduced survival, but this effect was masked by reduced survival for birds heavily suppressing their parasite intensities. Reduced survival among hosts with low ...
Fiebig, H R
2002-01-01
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss practical issues of the approach.
Maximum entropy as a consequence of Bayes' theorem in differentiable manifolds
Davis, Sergio
2015-01-01
Bayesian inference and the principle of maximum entropy (PME) are usually presented as separate but complementary branches of inference, the latter playing a central role in the foundations of Statistical Mechanics. In this work it is shown that the PME can be derived from Bayes' theorem and the divergence theorem for systems whose states can be mapped to points in a differentiable manifold. In this view, entropy must be interpreted as the invariant measure (non-informative prior) on the space of probability densities.
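In this view the quantity being maximized is the negative Kullback-Leibler divergence of the density rho from the invariant measure m, which is also the relative-entropy functional through which a non-uniform prior model enters MaxEnt reconstruction. In standard notation (ours, not quoted from the paper):

```latex
S[\rho \,|\, m] \;=\; -\int \rho(x)\,\ln\!\frac{\rho(x)}{m(x)}\,dx,
\qquad
\rho^{*}(x) \;\propto\; m(x)\,\exp\!\Big(-\textstyle\sum_{k}\lambda_{k} f_{k}(x)\Big),
```

where the multipliers lambda_k enforce the data constraints \(\int \rho(x) f_{k}(x)\,dx = F_{k}\); a flat measure m recovers the usual Boltzmann-Shannon entropy, and in the absence of constraints the maximum is attained at rho = m.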
Berry, Vincent; Nicolas, François
2006-01-01
Given a set of evolutionary trees on a same set of taxa, the maximum agreement subtree problem (MAST), respectively, maximum compatible tree problem (MCT), consists of finding a largest subset of taxa such that all input trees restricted to these taxa are isomorphic, respectively compatible. These problems have several applications in phylogenetics such as the computation of a consensus of phylogenies obtained from different data sets, the identification of species subjected to horizontal gene transfers and, more recently, the inference of supertrees, e.g., Trees Of Life. We provide two linear time algorithms to check the isomorphism, respectively, compatibility, of a set of trees or otherwise identify a conflict between the trees with respect to the relative location of a small subset of taxa. Then, we use these algorithms as subroutines to solve MAST and MCT on rooted or unrooted trees of unbounded degree. More precisely, we give exact fixed-parameter tractable algorithms, whose running time is uniformly polynomial when the number of taxa on which the trees disagree is bounded. This improves on a known result for MAST and proves fixed-parameter tractability for MCT.
Bialynicki-Birula, I. [Center for Theoretical Physics, Polish Academy of Sciences, Warsaw (Poland); Abt. fuer Quantenphysik, Univ. Ulm, Ulm (Germany); Cirone, M.A.; Straub, F.; Schleich, W.P. [Abt. fuer Quantenphysik, Univ. Ulm, Ulm (Germany); Dahl, J.P. [Abt. fuer Quantenphysik, Univ. Ulm, Ulm (Germany); Chemical Physics, Dept. of Chemistry, Technical Univ. of Denmark, Lyngby (Denmark); Seligman, T.H. [Centro de Ciencias Fisicas, Univ. of Mexico (UNAM), Cuernavaca (Mexico)
2002-07-01
We present Heisenberg's equation of motion for the radial variable of a free non-relativistic particle in D dimensions. The resulting radial force consists of three contributions: (i) the quantum fictitious force which is either attractive or repulsive depending on the number of dimensions, (ii) a singular quantum force located at the origin, and (iii) the centrifugal force associated with non-vanishing angular momentum. Moreover, we use Heisenberg's uncertainty relation to introduce a lower bound for the kinetic energy of an ensemble of neutral particles. This bound is quadratic in the number of atoms and can be traced back to the repulsive quantum fictitious potential. All three forces arise for a free particle: ''Force without force''. (orig.)
Hernández-Trujillo, Jesús; Cortés-Guzmán, Fernando; Fang, De-Chai; Bader, Richard F W
2007-01-01
Chemistry is determined by the electrostatic forces acting within a collection of nuclei and electrons. The attraction of the nuclei for the electrons is the only attractive force in a molecule and is the force responsible for the bonding between atoms. This is the attractive force acting on the electrons in the Ehrenfest force and on the nuclei in the Feynman force, one that is countered by the repulsion between the electrons in the former and by the repulsion between the nuclei in the latter. The virial theorem relates these forces to the energy changes resulting from interactions between atoms. All bonding, as signified by the presence of a bond path, has a common origin in terms of the mechanics determined by the Ehrenfest, Feynman and virial theorems. This paper is concerned in particular with the mechanics of interaction encountered in what are classically described as 'nonbonded interactions'--are atoms that 'touch' bonded or repelling one another?
Vasile Cojocaru
2016-12-01
Several methods can be used in FEM studies to apply the loads on a plain bearing. The paper presents a comparative analysis of the maximum stress obtained for three loading scenarios: resultant force applied on the shaft-bearing assembly, variable pressure with sinusoidal distribution applied on the bearing surface, and variable pressure with parabolic distribution applied on the bearing surface.
Based on the Force Deployment Model of Unascertained Expectation
Jianli Chen
2013-05-01
In this study, we use the unascertained mathematics method to give the unascertained number of anti-terrorism strategic force deployment countermeasures and unknown events. We define the situation sets of force deployment, the condition density, and the mathematical expectation density model, and give the unascertained parameters Cij which decide and direct the force deployment. We find the condition density matrix of force deployment, then obtain the conditional density of single-target force deployment, and use the maximum-density mathematical expectation to obtain the optimal mathematical model of multiple-target force deployment. We analyze the coefficients of the model and provide two computing methods for discussion. The model overcomes the limitations of past deterministic methods of studying force deployment and provides the decision maker a relatively substantial theoretical basis.
Acoustic myography, electromyography and bite force in the masseter muscle.
Tortopidis, D; Lyons, M F; Baxendale, R H
1998-12-01
Acoustic myography (AMG) offers some advantages over electromyography (EMG) in certain circumstances, but the use of AMG on the jaw-closing muscles has not been fully tested. The purpose of this study was to examine the relationship between AMG, EMG and force in the masseter muscles of nine healthy male subjects. The AMG was recorded using a piezoelectric crystal microphone and the EMG was recorded simultaneously with surface electrodes. Force was recorded between the anterior teeth with a strain-gauge transducer. Analysis showed that Pearson's correlation coefficient was 0.913 for force/AMG and 0.973 for force/EMG in all subjects, indicating a linear relationship between force, AMG and EMG at the four different force levels tested (25-75% of maximum). It is apparent that AMG may be used as an accurate monitor of masseter muscle force production, although some care is required in the technique.
A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator
Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai
2016-08-01
To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), and taking into account the advantages and disadvantages of existing maximum power point tracking methods and the output characteristics of TEMs, this paper puts forward a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation, and constant voltage tracking. The method first searches for the maximum power point with the P&O algorithm and quadratic interpolation, then forces the AETEG to work at that point with constant voltage tracking. A synchronous buck converter and controller were implemented in the electric bus of an AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in a MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with using only the P&O algorithm or only quadratic interpolation, respectively. The tracking time is only 1.4 s, roughly half that of the P&O algorithm or quadratic interpolation alone. The experimental results demonstrate that the maximum power tracked with the proposed hybrid method is approximately equal to the real value; the hybrid method also copes with the voltage fluctuation seen when only the P&O algorithm is used, and resolves the issue that the working point can barely be adjusted when operating conditions change if only constant voltage tracking is used.
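The P&O stage of such a hybrid tracker can be sketched as a simple hill climb on the converter duty cycle. This is a generic illustration, not the authors' implementation; `measure_pv` is a hypothetical callback returning the source's terminal voltage and current:

```python
def perturb_and_observe(measure_pv, duty, step=0.01, iters=50):
    """Generic P&O hill climb on a converter duty cycle.

    measure_pv(duty) -> (voltage, current) of the PV/TEG source.
    The tracker perturbs the duty cycle and keeps moving in the
    direction that increases output power, reversing otherwise.
    """
    v, i = measure_pv(duty)
    power = v * i
    direction = 1
    for _ in range(iters):
        duty += direction * step
        v, i = measure_pv(duty)
        new_power = v * i
        if new_power < power:          # wrong way: reverse the perturbation
            direction = -direction
        power = new_power
    return duty, power
```

On a unimodal power curve the duty cycle converges to the maximum power point and then oscillates around it by one step, which is exactly the fluctuation the abstract says the constant-voltage stage removes.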
High School Students' Publication Rights and Prior Restraint.
Huffman, John L.; Trauth, Denise M.
1981-01-01
Federal court decisions on high school students' publication rights in the Second, Fourth, Fifth, and Seventh Circuits reveal substantial disagreement about school officials' power of prior restraint over student publications. The courts' opinions range from approval of broad powers of prior restraint to denial of any power. (Author/RW)
25 CFR 13.13 - Technical assistance prior to petitioning.
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false Technical assistance prior to petitioning. 13.13 Section... OF JURISDICTION OVER CHILD CUSTODY PROCEEDINGS Reassumption § 13.13 Technical assistance prior to... matters, Bureau agency and Area Offices shall provide technical assistance and make available any...
Prior Abdominal Surgery Jeopardizes Quality of Resection in Colorectal Cancer
Stommel, M.W.J.; Wilt, J.H.W. de; Broek, R.P.G ten; Strik, C.; Rovers, M.M.; Goor, H. van
2016-01-01
BACKGROUND: Prior abdominal surgery increases complexity of abdominal operations. Effort to prevent injury during adhesiolysis might result in less extensive bowel resection in colorectal cancer surgery. The aim of this study was to evaluate the effect of prior abdominal surgery on the outcome of
Effects of Prior Knowledge on Memory: Implications for Education
Shing, Yee Lee; Brod, Garvin
2016-01-01
The encoding, consolidation, and retrieval of events and facts form the basis for acquiring new skills and knowledge. Prior knowledge can enhance those memory processes considerably and thus foster knowledge acquisition. But prior knowledge can also hinder knowledge acquisition, in particular when the to-be-learned information is inconsistent with…
Source-specific Informative Prior for i-Vector Extraction
Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou
2015-01-01
-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows that extracting i-vectors for a heterogeneous dataset, containing speech samples recorded from multiple sources, using informative priors instead is applicable, and leads to favorable results...
Designing conjoint choice experiments using managers' prior beliefs
Sandor, Z; Wedel, M
2001-01-01
The authors provide more efficient designs for conjoint choice experiments based on prior information elicited from managers about the parameters and their associated uncertainty. The authors use a Bayesian design procedure that assumes a prior distribution of likely parameter values and optimizes
18 CFR 415.51 - Prior non-conforming structures.
2010-04-01
... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Prior non-conforming... ADMINISTRATIVE MANUAL BASIN REGULATIONS-FLOOD PLAIN REGULATIONS Enforcement § 415.51 Prior non-conforming... this part): (a) A non-conforming structure in the floodway may not be expanded, except that it may...
EEG Sequence Imaging: A Markov Prior for the Variational Garrote
Hansen, Sofie Therese; Hansen, Lars Kai
2013-01-01
We propose the following generalization of the Variational Garrote for sequential EEG imaging: a Markov prior to promote sparse but temporally smooth source dynamics. We derive a set of modified Variational Garrote updates and analyze the role of the prior's hyperparameters. An experimental evalua...
Self-Assessment in University Assessment of Prior Learning Procedures
Brinke, D. Joosten-Ten; Sluijsmans, D. M. A.; Jochems, W. M. G.
2009-01-01
Competency-based university education, in which lifelong learning and flexible learning are key elements, demands a renewed vision on assessment. Within this vision, Assessment of Prior Learning (APL), in which learners have to show their prior learning in order for their goals to be recognised, becomes an important element. This article focuses…
7 CFR 1412.74 - Prior enrollment in DCP.
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Prior enrollment in DCP. 1412.74 Section 1412.74... (ACRE) Program § 1412.74 Prior enrollment in DCP. (a) If a farm was enrolled in a DCP contract according... request to have the DCP contract withdrawn for that crop year. To participate in an annual ACRE...
Portfolios for Prior Learning Assessment: Caught between Diversity and Standardization
Sweygers, Annelies; Soetewey, Kim; Meeus, Wil; Struyf, Elke; Pieters, Bert
2009-01-01
In recent years, procedures have been established in Flanders for "Prior Learning Assessment" (PLA) outside the formal learning circuit, of which the portfolio is a regular component. In order to maximize the possibilities of acknowledgement of prior learning assessment, the Flemish government is looking for a set of common criteria and…
Academic Credit for Prior Learning: 2016 Progress Report
Washington Student Achievement Council, 2017
2017-01-01
Students come to college with skills and knowledge acquired through work, military, and other experiences. Academic credit for prior learning is awarded when a student's prior learning is assessed and found to be the equivalent of specific college course outcomes, and when the award of credit is consistent with the policies of the institution.…
On the use of a pruning prior for neural networks
Goutte, Cyril
1996-01-01
We address the problem of using a regularization prior that prunes unnecessary weights in a neural network architecture. This prior provides a convenient alternative to traditional weight-decay. Two examples are studied to support this method and illustrate its use. First we use the sunspots...
Drunkorexia: Calorie Restriction Prior to Alcohol Consumption among College Freshman
Burke, Sloane C.; Cremeens, Jennifer; Vail-Smith, Karen; Woolsey, Conrad
2010-01-01
Using a sample of 692 freshmen at a southeastern university, this study examined caloric restriction among students prior to planned alcohol consumption. Participants were surveyed for self-reported alcohol consumption, binge drinking, and caloric intake habits prior to drinking episodes. Results indicated that 99 of 695 (14%) of first year…
28 CFR 2.9 - Study prior to sentencing.
2010-07-01
... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Study prior to sentencing. 2.9 Section 2... PRISONERS, YOUTH OFFENDERS, AND JUVENILE DELINQUENTS United States Code Prisoners and Parolees § 2.9 Study prior to sentencing. When an adult Federal offender has been committed to an institution by...
12 CFR 303.82 - Transactions requiring prior notice.
2010-01-01
... company requiring prior notice to the FDIC, if, immediately after the transaction, the acquiring person..., shall give the FDIC 60 days prior written notice, as specified in § 303.84, before acquiring control of an insured state nonmember bank or any parent company, unless the acquisition is exempt under §...
Gidmark, Nicholas J; Konow, Nicolai; Lopresti, Eric; Brainerd, Elizabeth L
2013-04-23
Bite force is critical to feeding success, especially in animals that crush strong, brittle foods. Maximum bite force is typically measured as one value per individual, but the force-length relationship of skeletal muscle suggests that each individual should possess a range of gape height-specific, and, therefore, prey size-specific, bite forces. We characterized the influence of prey size on pharyngeal jaw bite force in the snail-eating black carp (Mylopharyngodon piceus, family Cyprinidae), using feeding trials on artificial prey that varied independently in size and strength. We then measured jaw-closing muscle lengths in vivo for each prey size, and then determined the force-length relationship of the same muscle in situ using tetanic stimulations. Maximum bite force was surprisingly high: the largest individual produced nearly 700 N at optimal muscle length. Bite force decreased on large and small prey, which elicited long and short muscle lengths, respectively, demonstrating that the force-length relationship of skeletal muscle results in prey size-specific bite force.
Construction and test of the PRIOR proton microscope; Aufbau und Test des Protonenmikroskops PRIOR
Lang, Philipp-Michael
2015-01-15
The study of High Energy Density Matter (HEDM) in the laboratory makes great demands on the diagnostics, because these states can usually only be created for a short time and the usual diagnostic techniques with visible light or X-rays reach their limits because of the high density. The high energy proton radiography technique that was developed in the 1990s at the Los Alamos National Laboratory is a very promising possibility to overcome those limits, so that one can measure the density of HEDM with high spatial and time resolution. For this purpose the proton microscope PRIOR (Proton Radiography for FAIR) was set up at GSI, which not only reproduces the image but also magnifies it by a factor of 4.2, and thereby penetrates matter with a density up to 20 g/cm{sup 2}. Straight away, a spatial resolution of less than 30 μm and a time resolution on the nanosecond scale were achieved. This work describes details of the principle, design and construction of the proton microscope as well as first measurements and simulations of essential components like magnetic lenses, a collimator and a scintillator screen. For the latter it was possible to show that plastic scintillators can be used as converters as an alternative to the slower but more radiation-resistant crystals, so that it is possible to reach a time resolution of 10 ns. Moreover, the characteristics of the system were investigated at the commissioning in April 2014, and the changes in the magnetic field due to radiation damage were studied. Besides that, an overview of future applications is given. First experiments with Warm Dense Matter created using a pulsed power setup have already been performed. Furthermore, the promising concept of combining proton radiography with particle therapy has been investigated in the context of the PaNTERA project. An outlook on the possibilities of future experiments at the FAIR accelerator facility is given as well. Because of higher beam intensity and energy one can expect even
Reference priors of nuisance parameters in Bayesian sequential population analysis
Bousquet, Nicolas
2010-01-01
Prior distributions elicited for modelling the natural fluctuations or the uncertainty on parameters of Bayesian fishery population models can be chosen among a vast range of statistical laws. Since the statistical framework is defined by observational processes, observational parameters enter into the estimation and must be considered random, similarly to parameters or states of interest like population levels or real catches. The former are thus perceived as nuisance parameters whose values are intrinsically linked to the considered experiment, and which also require noninformative priors. In fishery research, Jeffreys' methodology has been presented by Millar (2002) as a practical way to elicit such priors. However, Jeffreys priors can have undesirable properties in multiparameter contexts. We therefore suggest using the elicitation method proposed by Berger and Bernardo to avoid the paradoxical results raised by Jeffreys priors. These benchmark priors are derived here in the framework of sequential population analysis.
Prior Subject Interest, Students' Evaluations, And Instructional Effectiveness.
Marsh, H W; Cooper, T L
1981-01-01
Students' Prior Subject Interest in a course showed similar correlations with student ratings of instructional effectiveness in two university settings (N = 1102 classes). Correlations between Prior Subject Interest and different dimensions of instructional effectiveness varied from approximately zero to .44. Though these correlations were not high, Prior Subject Interest predicted student ratings better than any of 15 other student/course/instructor characteristics considered (e.g., Expected Grade, Class Size, Workload/Difficulty, Teacher Rank). Instructor self-evaluations of their own teaching effectiveness (N = 329 classes) were also positively correlated with both their own and their students' perceptions of Prior Subject Interest; the dimensions that were most highly correlated -- particularly Learning/Value -- were the same as observed with student ratings. Since both student and instructor self evaluations were similarly related to Prior Subject Interest, it appears that this variable actually affects instructional effectiveness in a way that is accurately reflected in the student ratings.
Training shortest-path tractography: Automatic learning of spatial priors
Kasenburg, Niklas; Liptrot, Matthew George; Reislev, Nina Linde;
2016-01-01
knowledge. Here we demonstrate how such prior knowledge, or indeed any prior spatial information, can be automatically incorporated into a shortest-path tractography approach to produce more robust results. We describe how such a prior can be automatically generated (learned) from a population, and we demonstrate that our framework also retains support for conventional interactive constraints such as waypoint regions. We apply our approach to the open access, high quality Human Connectome Project data, as well as a dataset acquired on a typical clinical scanner. Our results show that the use of a learned prior substantially increases the overlap of tractography output with a reference atlas on both populations, and this is confirmed by visual inspection. Furthermore, we demonstrate how a prior learned on the high quality dataset significantly increases the overlap with the reference for the more typical...
Asymptotic admissibility of priors and elliptic differential equations
Hartigan, J A
2010-01-01
We evaluate priors by the second order asymptotic behavior of the corresponding estimators. Under certain regularity conditions, the risk differences between efficient estimators of parameters taking values in a domain D, an open connected subset of R^d, are asymptotically expressed as elliptic differential forms depending on the asymptotic covariance matrix V. Each efficient estimator has the same asymptotic risk as a 'local Bayes' estimate corresponding to a prior density p. The asymptotic decision theory of the estimators identifies the smooth prior densities as admissible or inadmissible, according to the existence of solutions to certain elliptic differential equations. The prior p is admissible if the quantity pV is sufficiently small near the boundary of D. We exhibit the unique admissible invariant prior for V=I, D=R^d\{0}. A detailed example is given for a normal mixture model.
Significance of maximum current for voltage boosting of microbial fuel cells in series
An, Junyeong; Lee, Yoo Seok; Kim, Taeyoung; Chang, In Seop
2016-08-01
Differences in internal resistances or operational conditions that affect the current between series-connected MFC units are known to cause voltage reversal. In this work, we proved that voltage reversal does not happen when MFCs produce an identical maximum current (i.e., limiting current), even though their internal resistances may differ. Here, two MFCs having an internal resistance difference of 206 Ω produced an almost identical maximum current of 0.4 mA in non-stacked mode. When the MFCs were connected in series, there was no voltage reversal; the voltage at the maximum current of 0.37 mA ranged from 1 mV to 3 mV. This result clearly indicates that differences of internal resistances or operational conditions are not an essential prerequisite for occurrences of voltage reversal in stacked MFCs, and that the maximum current of MFCs may be a direct indicator for predicting voltage reversal occurrences prior to the series connection of MFCs.
Direct measurement of Vorticella contraction force by micropipette deflection.
France, Danielle; Tejada, Jonathan; Matsudaira, Paul
2017-02-01
The ciliated protozoan Vorticella convallaria is noted for its exceptionally fast adenosine triphosphate-independent cellular contraction, but direct measurements of contractile force have proven difficult given the length scale, speed, and forces involved. We used high-speed video microscopy to image live Vorticella stalled in midcontraction by deflection of an attached micropipette. Stall forces correlate with both distance contracted and the resting stalk length. Estimated isometric forces range from 95 to 177 nanonewtons (nN), or 1.12 nN·μm(-1) of the stalk. Maximum velocity and work are also proportional to distance contracted. These parameters constrain proposed biochemical/physical models of the contractile mechanism.
Influence of Emotion on the Control of Low-Level Force Production
Naugle, Kelly M.; Coombes, Stephen A.; Cauraugh, James H.; Janelle, Christopher M.
2012-01-01
The accuracy and variability of a sustained low-level force contraction (2% of maximum voluntary contraction) was measured while participants viewed unpleasant, pleasant, and neutral images during a feedback occluded force control task. Exposure to pleasant and unpleasant images led to a relative increase in force production but did not alter the…
Deciphering The Fall And Rise Of The Dead Sea In Relation To Solar Forcing
Yousef, Shahinaz M.
2005-03-01
Solar forcing on closed seas and lakes is space-time dependent. The cipher of the Dead Sea level variation since 1200 BC is solved in the context of millennium and Wolf-Gleissberg solar cycle time scales. It is found that the pattern of Dead Sea level variation follows the pattern of the major millennium solar cycles. The 70 m rise of the Dead Sea around 1 AD is due to the forcing of the maximum of the major millennium solar cycle. Although the pattern of Dead Sea level variation is almost identical to the major solar cycle pattern between 1100 and 1980 AD, there is a dating problem in the Dead Sea time series around 1100-1300 AD, a discrepancy that should be corrected for the solar and Dead Sea series to fit. Detailed variations of the Dead Sea level over the past 200 years are explained in terms of the 80-120 year Wolf-Gleissberg solar magnetic cycles. Solar-induced climate changes occur at the turning points of those cycles; the end-start and maximum turning points coincide with changes in the solar rotation rate due to the presence of weak solar cycles. Such weak cycles occur in series of a few cycles between the end and start of the Wolf-Gleissberg cycles, and another one or two weak solar cycles occur following the maximum of those cycles. Weak cycles induce a drop in the energy budget emitted from the Sun and reaching the Earth, thus causing solar-induced climate change. An 8 meter sudden rise of the Dead Sea occurred prior to 1900 AD due to positive solar forcing of the second cycle of the weak-cycle series on the Dead Sea; the same second weak cycle induced negative solar forcing on Lake Chad. The first weak solar cycle forced Lake Victoria to rise abruptly in 1878. The maximum turning point of the solar Wolf-Gleissberg cycle induced negative forcing on both the Aral Sea and the Dead Sea, causing their shrinkage to an alarmingly reduced area ever since. On the other hand, a few years' delayed positive forcing caused Lake Chad and the Equatorial
Present and Last Glacial Maximum climates as states of maximum entropy production
Herbert, Corentin; Kageyama, Masa; Dubrulle, Berengere
2011-01-01
The Earth, like other planets with a relatively thick atmosphere, is not locally in radiative equilibrium and the transport of energy by the geophysical fluids (atmosphere and ocean) plays a fundamental role in determining its climate. Using simple energy-balance models, it was suggested a few decades ago that the meridional energy fluxes might follow a thermodynamic Maximum Entropy Production (MEP) principle. In the present study, we assess the MEP hypothesis in the framework of a minimal climate model based solely on a robust radiative scheme and the MEP principle, with no extra assumptions. Specifically, we show that by choosing an adequate radiative exchange formulation, the Net Exchange Formulation, a rigorous derivation of all the physical parameters can be performed. The MEP principle is also extended to surface energy fluxes, in addition to meridional energy fluxes. The climate model presented here is extremely fast, needs very little empirical data and does not rely on ad hoc parameterizations. We in...
29 CFR 4022.23 - Computation of maximum guaranteeable benefits.
2010-07-01
... means an annuity under which if the participant dies prior to the time when he has received pension... means an annuity under which if the participant dies prior to the time he has received pension payments... Section 4022.23 Labor Regulations Relating to Labor (Continued) PENSION BENEFIT GUARANTY...
Estimating Metabolic Fluxes Using a Maximum Network Flexibility Paradigm
Megchelenbrink, Wout; Rossell, Sergio; Huynen, Martijn A.
2015-01-01
Motivation: Genome-scale metabolic networks can be modeled in a constraint-based fashion. Reaction stoichiometry combined with flux capacity constraints determines the space of allowable reaction rates. This space is often large, and a central challenge in metabolic modeling is finding the biologically most relevant flux distributions. A widely used method is flux balance analysis (FBA), which optimizes a biologically relevant objective such as growth or ATP production. Although FBA has proven to be highly useful for predicting growth and byproduct secretion, it cannot predict the intracellular fluxes under all environmental conditions. Therefore, alternative strategies have been developed that select flux distributions in agreement with experimental “omics” data, or that incorporate experimental flux measurements. The latter, unfortunately, can only be applied to a limited set of reactions and is currently not feasible at the genome scale. On the other hand, it has been observed that micro-organisms favor a suboptimal growth rate, possibly in exchange for a more “flexible” metabolic network. Instead of dedicating the internal network state to an optimal growth rate in one condition, a suboptimal growth rate is used that allows for an easier switch to other nutrient sources: a small decrease in growth rate is exchanged for a relatively large gain in metabolic capability to adapt to changing environmental conditions. Results: Here, we propose Maximum Metabolic Flexibility (MMF), a computational method that utilizes this observation to find the most probable intracellular flux distributions. By mapping measured flux data from central metabolism to the genome-scale models of Escherichia coli and Saccharomyces cerevisiae we show that i) indeed, most of the measured fluxes agree with a high adaptability of the network, ii) this result can be used to further reduce the space of feasible solutions, and iii) this reduced space improves the quantitative predictions
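The constraint-based FBA setup described above reduces to a linear program: maximize an objective flux subject to steady state (S v = 0) and flux capacity bounds. A minimal sketch on a toy three-reaction chain, using SciPy's LP solver rather than any genome-scale model from the study:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions) for the chain
# uptake -> A, A -> B, B -> biomass.  Steady state requires S v = 0.
S = np.array([
    [1, -1,  0],   # metabolite A: produced by v1, consumed by v2
    [0,  1, -1],   # metabolite B: produced by v2, consumed by v3
])
bounds = [(0, 10), (0, 10), (0, 10)]   # flux capacity constraints
c = [0, 0, -1]                         # maximize biomass flux v3 (linprog minimizes)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
# Steady state forces v1 = v2 = v3, so the optimum saturates the uptake bound.
```

Real FBA tools (e.g. COBRApy) wrap the same linear program over networks with thousands of reactions; the network and bounds here are invented for illustration.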
Are buckling force measurements reliable in nocturnal penile tumescence studies?
Nofzinger, E A; Fasiczka, A L; Thase, M E; Reynolds, C F; Frank, E; Jennings, J R; Garamoni, G L; Matzzie, J V; Kupfer, D J
1993-02-01
The study of nocturnal penile tumescence (NPT) is frequently used to evaluate male erectile dysfunction. Buckling force, a measure of rigidity, is an important part of this evaluation, but its reliability is unknown. Accordingly, we studied the reliability of buckling force measurement and the stability of "maximum buckling force" between consecutive NPT series repeated in the same subject. For individual subjects, we correlated buckling forces for separate episodes of sleep-related tumescence that were of comparable fullness (0-100%) as rated by a technician's visual estimates. For healthy control subjects, test-retest correlations were > 0.8 both within-night and across study series separated by an average of 70 weeks. In depressed men, correlations within nights were > 0.9, but fell to 0.64 across study series separated by an average of 21 weeks. Despite the high reliability of buckling force measurement, we found little stability of "maximum buckling force" between NPT series for individual subjects. Considerable variability in the maximum degree of penile rigidity was seen over time despite a constant level of reported daytime erectile function. We conclude that although penile rigidity is one of the more important variables in the assessment of male erectile dysfunction and can be measured reliably, the instability of maximum rigidity during sleep-related erections suggests that it is, at best, an imprecise correlate of daytime erectile function.
Rath, W; Kuhn, W; Hilgers, R
1985-09-01
In a prospective, randomized study, 10 patients with primary sterility received an intracervical application of 0.1 mg Sulprostone-Tylose gel for cervical priming 12 hours prior to panoramic CO2-hysteroscopy and pelviscopy with chromopertubation. Ten patients who served as controls were not treated with the local prostaglandin. The force required to overcome the cervical canal was measured with a special tonometer for Hegar 3 before application of the gel in the group treated with Sulprostone and was 3-8 mm in both groups of patients immediately preoperatively. Cervical priming led to a significant reduction in the force required to dilate the cervix. After priming with Sulprostone, the cervical canal was freely passable for an average of 6.7 mm. In none of these patients was a force of 7 Newton exceeded for Hegar 8, whereas in the control group a mean force of 8.2 Newton was required to dilate the cervix for Hegar 6. Haemorrhage and epithelial lesions of the cervix caused by the dilatation can largely be avoided, and the risk of uterine damage reduced by local priming of the cervix. The intracervical application of prostaglandin gel is an easy, efficient and gentle method of dilatation for hysteroscopy, particularly in patients with a firmly closed and rigid cervix.
Maximum Power Point Tracking of DC To DC Boost Converter Using Sliding Mode Control
Anusuyadevi R
2013-07-01
A sliding mode controller is used to estimate the maximum power point and to force the PV system to operate at that point. In sliding mode control, the trajectories of the system are forced to reach a sliding manifold (surface), on which the system exhibits desirable features, in finite time, and to stay on the manifold for all future time. The load is composed of a battery bank. Tracking is achieved by controlling the duty cycle of a DC-DC converter using sliding mode control. This method has the advantage of guaranteeing the maximum output power possible for the array configuration while accounting for dynamically varying solar irradiance, thereby delivering more power to charge the battery. The proposed system with sliding mode control is tested on the MATLAB/SIMULINK platform, in which maximum power is tracked under constant and varying solar irradiance and delivered to the battery, which increases the charging current and reduces the charging time.
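The control idea can be sketched numerically (this is a minimal stand-in for the paper's MATLAB/SIMULINK model): a single-diode PV characteristic with illustrative, assumed parameters, and a signum reaching law that drives the operating point toward the sliding surface s = dP/dV = 0, which holds exactly at the maximum power point.

```python
import numpy as np

# Illustrative single-diode PV model; ISC, I0, VT are assumed values,
# not parameters from the paper.
ISC, I0, VT = 5.0, 1e-9, 1.5   # short-circuit current [A], sat. current [A], thermal voltage [V]

def pv_power(v):
    """Panel power P(V) = V * I(V) for the single-diode characteristic."""
    return v * (ISC - I0 * (np.exp(v / VT) - 1.0))

def mppt_sliding_mode(v0=10.0, gain=0.2, steps=500, dv=1e-3):
    """Drive the operating voltage toward the sliding surface s = dP/dV = 0."""
    v = v0
    for _ in range(steps):
        s = (pv_power(v + dv) - pv_power(v - dv)) / (2 * dv)  # numerical dP/dV
        v += gain * np.sign(s)   # signum reaching law; chatters around s = 0 once reached
    return v

v_mpp = mppt_sliding_mode()
```

In a real converter the control variable is the duty cycle rather than the panel voltage directly; the same reaching law applies with the voltage replaced by the duty cycle.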
Sengbusch, E; Pérez-Andújar, A; DeLuca, P M; Mackie, T R
2009-02-01
Several compact proton accelerator systems for use in proton therapy have recently been proposed. Of paramount importance to the development of such an accelerator system is the maximum kinetic energy of protons, immediately prior to entry into the patient, that must be reached by the treatment system. The commonly used value for the maximum kinetic energy required for a medical proton accelerator is 250 MeV, but it has not been demonstrated that this energy is indeed necessary to treat all or most patients eligible for proton therapy. This article quantifies the maximum kinetic energy of protons, immediately prior to entry into the patient, necessary to treat a given percentage of patients with rotational proton therapy, and examines the impact of this energy threshold on the cost and feasibility of a compact, gantry-mounted proton accelerator treatment system. One hundred randomized treatment plans from patients treated with IMRT were analyzed. The maximum radiological pathlength from the surface of the patient to the distal edge of the treatment volume was obtained for 180 degrees continuous arc proton therapy and for 180 degrees split arc proton therapy (two 90 degrees arcs) using CT# profiles from the Pinnacle (Philips Medical Systems, Madison, WI) treatment planning system. In each case, the maximum kinetic energy of protons, immediately prior to entry into the patient, that would be necessary to treat the patient was calculated using proton range tables for various media. In addition, Monte Carlo simulations were performed to quantify neutron production in a water phantom representing a patient as a function of the maximum proton kinetic energy achievable by a proton treatment system. Protons with a kinetic energy of 240 MeV, immediately prior to entry into the patient, were needed to treat 100% of patients in this study. However, it was shown that 90% of patients could be treated at 198 MeV, and 95% of patients could be treated at 207 MeV. Decreasing the
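The threshold computation described above (the design energy needed to cover a given fraction of patients) reduces to a percentile of the per-patient maximum required energies; a sketch, where the energy values are synthetic stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-patient maximum required proton energies [MeV];
# synthetic illustration, NOT the study's data.
required_energy = rng.normal(170.0, 25.0, size=100).clip(70.0, 240.0)

def energy_to_treat(fraction, energies):
    """Smallest design energy that covers the given fraction of patients."""
    return np.percentile(energies, 100.0 * fraction)

e90 = energy_to_treat(0.90, required_energy)   # energy needed to cover 90% of patients
e100 = required_energy.max()                   # energy needed to cover all patients
```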
Qin, Yujie; Hou, Xiaojing
2011-02-01
This paper studied the influence of maglev force relaxation on the forces (both levitation and guidance) of a bulk high-temperature superconductor (HTSC) subjected to different lateral displacements above a NdFeB guideway. Firstly, the relaxation of the maglev forces of the bulk HTSC above the permanent-magnet guideway (PMG) was studied experimentally; then the levitation and guidance forces were measured synchronously with the SCML-2 measurement system at different lateral displacements, and some time later (after relaxation) the forces were measured again in the same way. Comparing the two sets of measurements, it was found that the change in the levitation force was larger than in the case without relaxation, while the change in the guidance force was smaller. In addition, the rates of change of the levitation and guidance forces differed for different maximum lateral displacements. This work provides a scientific basis for the practical application of bulk HTSCs.
Cassio Neri
2014-05-01
We study the problem of finding probability densities that match given European call option prices. To allow prior information about such a density to be taken into account, we generalise the algorithm presented in Neri and Schneider (Appl. Math. Finance, 2013) for finding the maximum entropy density of an asset price to the relative entropy case. This is applied to study the impact of the choice of prior density in two market scenarios. In the first scenario, call option prices are prescribed at only a small number of strikes, and we see that the choice of prior, or indeed its omission, yields notably different densities. The second scenario is given by CBOE option price data for S&P500 index options at a large number of strikes. Prior information is now considered to be given by calibrated Heston, Schöbel–Zhu or Variance Gamma models. We find that the resulting digital option prices are essentially the same as those given by the (non-relative) Buchen–Kelly density itself. In other words, in a sufficiently liquid market, the influence of the prior density seems to vanish almost completely. Finally, we study variance swaps and derive a simple formula relating the fair variance swap rate to entropy. We then show, again, that the prior loses its influence on the fair variance swap rate as the number of strikes increases.
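A minimal sketch of the relative entropy construction (not the authors' algorithm): the minimum relative entropy density with prior m under call-price constraints has the Buchen–Kelly exponential form q(x) ∝ m(x) exp(Σ_j λ_j (x − K_j)⁺), and the multipliers λ are found by root-finding on the pricing constraints. The grid, prior, strikes and quotes below are all illustrative choices.

```python
import numpy as np
from scipy.optimize import root

x = np.linspace(1.0, 300.0, 3000)            # asset-price grid (illustrative)
dx = x[1] - x[0]

def integrate(f):
    return f.sum(axis=-1) * dx               # simple Riemann sum on the uniform grid

# Illustrative lognormal prior density with mean ~ 100
mu, sig = np.log(100.0) - 0.5 * 0.2**2, 0.2
m = np.exp(-(np.log(x) - mu)**2 / (2 * sig**2)) / (x * sig * np.sqrt(2 * np.pi))
m /= integrate(m)

K = np.array([90.0, 100.0, 110.0])           # strikes (illustrative)
payoff = np.maximum(x[None, :] - K[:, None], 0.0)

def density(lam):
    """Exponential tilt of the prior: q ~ m * exp(sum_j lam_j (x - K_j)^+)."""
    a = np.log(m) + lam @ payoff
    w = np.exp(a - a.max())                  # shift before exp for numerical safety
    return w / integrate(w)

# "Market" quotes: prior-implied prices with one quote perturbed slightly
prior_prices = integrate(payoff * m)
C = prior_prices * np.array([1.0, 1.02, 1.0])

def resid(lam):
    return integrate(payoff * density(lam)) - C

sol = root(resid, np.zeros(3))               # solve the pricing constraints for lambda
q = density(sol.x)
```

With λ = 0 the construction returns the prior itself, which is the abstract's point: when the quotes already agree with the prior-implied prices, the relative entropy density is exactly the prior.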
Bioucas-Dias, José M
2006-04-01
Image deconvolution is formulated in the wavelet domain under the Bayesian framework. The well-known sparsity of the wavelet coefficients of real-world images is modeled by heavy-tailed priors belonging to the Gaussian scale mixture (GSM) class; i.e., priors given by a linear (finite or infinite) combination of Gaussian densities. This class includes, among others, the generalized Gaussian, the Jeffreys, and the Gaussian mixture priors. Necessary and sufficient conditions are stated under which the prior induced by a thresholding/shrinking denoising rule is a GSM. This result is then used to show that the prior induced by the "nonnegative garrote" thresholding/shrinking rule, herein termed the garrote prior, is a GSM. To compute the maximum a posteriori estimate, we propose a new generalized expectation maximization (GEM) algorithm, where the missing variables are the scale factors of the GSM densities. The maximization step of the underlying expectation maximization algorithm is replaced with a linear stationary second-order iterative method. The result is a GEM algorithm of O(N log N) computational complexity. In a series of benchmark tests, the proposed approach outperforms or performs similarly to state-of-the-art methods, demanding comparable (in some cases, much less) computational complexity.
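The nonnegative garrote rule mentioned above has a simple closed form; a sketch, with the classical soft and hard thresholding rules included for comparison:

```python
import numpy as np

def nn_garrote(w, t):
    """Nonnegative garrote: w -> w - t^2 / w for |w| > t, else 0."""
    w = np.asarray(w, dtype=float)
    out = np.zeros_like(w)
    keep = np.abs(w) > t
    out[keep] = w[keep] - t**2 / w[keep]
    return out

def soft(w, t):
    """Soft thresholding: shrink toward zero by t, kill below t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def hard(w, t):
    """Hard thresholding: keep-or-kill at level t."""
    return np.where(np.abs(w) > t, w, 0.0)
```

For large |w| the garrote approaches the identity (like hard thresholding, so large coefficients are barely biased) while remaining continuous at |w| = t (like soft thresholding).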
Nonlinear Dynamic Force Spectroscopy
Björnham, Oscar
2016-01-01
Dynamic force spectroscopy (DFS) is an experimental technique that is commonly used to assess information of the strength, energy landscape, and lifetime of noncovalent bio-molecular interactions. DFS traditionally requires an applied force that increases linearly with time so that the bio-complex under investigation is exposed to a constant loading rate. However, tethers or polymers can modulate the applied force in a nonlinear regime. For example, bacterial adhesion pili and polymers with worm-like chain properties are examples of structures that show nonlinear force responses. In these situations, the theory for traditional DFS cannot be readily applied. In this work we expand the theory for DFS to also include nonlinear external forces while still maintaining compatibility with the linear DFS theory. To validate the theory we modeled a bio-complex expressed on a stiff, an elastic and a worm-like chain polymer, using Monte Carlo methods, and assessed the corresponding rupture force spectra. It was found th...
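For the traditional linear-loading case that this work generalises, rupture forces under the standard Bell–Evans model can be sampled in closed form, which gives a cheap stand-in for the Monte Carlo assessment described above. All parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative Bell-Evans parameters (assumed, not from the paper)
k0 = 0.5     # zero-force off-rate [1/s]
xb = 0.4     # distance to the transition state [nm]
kT = 4.11    # thermal energy at room temperature [pN nm]
r  = 1000.0  # constant loading rate [pN/s], i.e. F(t) = r t

def sample_rupture_forces(n, rng):
    """Inverse-transform sampling of the Bell-Evans rupture-force distribution.

    Survival under F(t) = r t is
        S(F) = exp(-(kT k0)/(xb r) * (exp(xb F / kT) - 1)),
    so setting S(F) = U with U uniform on (0, 1) and solving for F gives the sampler.
    """
    u = rng.random(n)
    return (kT / xb) * np.log(1.0 - (xb * r) / (kT * k0) * np.log(u))

rng = np.random.default_rng(0)
forces = sample_rupture_forces(200_000, rng)

# Analytic median of the same distribution, from S(F_med) = 1/2
f_med = (kT / xb) * np.log(1.0 + (xb * r) / (kT * k0) * np.log(2.0))
```

For nonlinear tethers such as worm-like chains, F(t) is no longer r t and this closed form breaks down, which is exactly the regime the paper's extended theory addresses.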
Reconstructing Volcanic Forcing of Climate: Past, Present and Future
Toohey, M.; Timmreck, C.; Sigl, M.
2015-12-01
Radiative forcing resulting from major volcanic eruptions has been a dominant driver of climate variability during Earth's history. Including volcanic forcing in climate model simulations is therefore essential to recreate past climate variability, and provides the opportunity to test the ability of models to respond accurately to external forcing. Ice cores provide estimates of the volcanic sulfate loadings from past eruptions, from which radiative forcing can be reconstructed, with associated uncertainties. Using prior reconstructions, climate models have reproduced the gross features of global mean temperature variability reconstructed from climate proxies, although some significant differences between model results and reconstructions remain. There is much less confidence in the accuracy of the dynamical responses to volcanic forcing produced by climate models, and thus the regional aspects of post-volcanic climate anomalies are much more uncertain, a result which mirrors uncertainties in the dynamical responses to future climate change. Improvements in models' responses to volcanic forcing may be possible through improving the accuracy of the forcing data. Recent advances on multiple fronts have motivated the development of a next-generation volcanic forcing time series for use in climate models, based on (1) improved dating and precision of ice core records, (2) better understanding of the atmospheric transport and microphysical evolution of volcanic aerosol, including its size distribution, and (3) improved representations of the spatiotemporal structure of volcanic radiative forcing. A new volcanic forcing data set, covering the past 2500 years, will be introduced and compared with prior reconstructions. Preliminary results of climate model simulations using the new forcing will also be shown, and current and future applications of the forcing set discussed.
Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations
Mantz, A.; Allen, S. W.
2011-01-01
Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.
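For concreteness, the fully parametric route can be sketched as follows: an isothermal beta-model gas density is assumed (an illustrative textbook choice, not a profile from the paper), and the hydrostatic mass follows from the logarithmic density and temperature gradients.

```python
import numpy as np

G, MP, MU = 6.674e-11, 1.673e-27, 0.6    # SI units; mean molecular weight 0.6 assumed
KEV = 1.602e-16                           # one keV in joules
MPC = 3.086e22                            # one Mpc in metres

# Illustrative isothermal beta-model: n(r) ~ (1 + (r/rc)^2)^(-3 beta / 2)
beta, rc, kT = 0.6, 0.2 * MPC, 5.0 * KEV

def gas_density(r):
    """Unnormalised beta-model gas density profile."""
    return (1.0 + (r / rc) ** 2) ** (-1.5 * beta)

def hydrostatic_mass(r):
    """M(<r) = -(kT r)/(G mu m_p) * (dln n/dln r + dln T/dln r); isothermal, so dln T/dln r = 0."""
    dlnn = -3.0 * beta * (r / rc) ** 2 / (1.0 + (r / rc) ** 2)
    return -(kT * r) / (G * MU * MP) * dlnn

m_1mpc = hydrostatic_mass(1.0 * MPC)      # of order a few 1e44 kg, i.e. ~1e14 solar masses
```

The paper's point is that the chosen functional forms for n(r) and T(r) silently constrain the slopes of the derived mass scaling relations; here the log-slope, and hence the mass profile shape, is fixed entirely by the assumed beta-model.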
Random geometric prior forest for multiclass object segmentation.
Liu, Xiao; Song, Mingli; Tao, Dacheng; Bu, Jiajun; Chen, Chun
2015-10-01
Recent advances in object detection have led to the development of segmentation-by-detection approaches that integrate top-down geometric priors for multiclass object segmentation. A key yet under-addressed issue in utilizing top-down cues for the problem of multiclass object segmentation by detection is efficiently generating robust and accurate geometric priors. In this paper, we propose a random geometric prior forest scheme to obtain object-adaptive geometric priors efficiently and robustly. In the scheme, a testing object first searches for training neighbors with similar geometries using the random geometric prior forest, and then the geometry of the testing object is reconstructed by linearly combining the geometries of its neighbors. Our scheme enjoys several favorable properties when compared with conventional methods. First, it is robust and very fast because its inference does not suffer from bad initializations, poor local minima or complex optimization. Second, the figure/ground geometries of training samples are utilized in a multitask manner. Third, our scheme is object-adaptive but does not require the labeling of parts or poselets, and thus, it is quite easy to implement. To demonstrate the effectiveness of the proposed scheme, we integrate the obtained top-down geometric priors with conventional bottom-up color cues within the graph cut framework. The proposed random geometric prior forest achieves the best segmentation results of all of the methods tested on VOC2010/2012 and is 90 times faster than the current state-of-the-art method.
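The neighbor-combination step can be sketched independently of the forest itself. Here a plain nearest-neighbor search stands in for the random geometric prior forest's neighbor retrieval, and the test object's geometry is a distance-weighted linear combination of its neighbors' figure/ground masks; all names and data are illustrative.

```python
import numpy as np

def reconstruct_geometry(test_feat, train_feats, train_masks, n_neighbors=3):
    """Reconstruct a geometric prior as a linear combination of neighbor masks.

    Plain Euclidean nearest neighbors stand in for the forest's neighbor search.
    """
    d = np.linalg.norm(train_feats - test_feat, axis=1)
    idx = np.argsort(d)[:n_neighbors]      # the most geometrically similar training objects
    w = 1.0 / (d[idx] + 1e-8)              # closer neighbors contribute more
    w /= w.sum()
    return np.tensordot(w, train_masks[idx], axes=1)
```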
Learning priors for Bayesian computations in the nervous system.
Max Berniker
Our nervous system continuously combines new information from our senses with information it has acquired throughout life. Numerous studies have found that human subjects manage this by integrating their observations with their previous experience (priors) in a way that is close to the statistical optimum. However, little is known about the way the nervous system acquires or learns priors. Here we present results from experiments where the underlying distribution of target locations in an estimation task was switched, manipulating the prior subjects should use. Our experimental design allowed us to measure a subject's evolving prior as they learned. We confirm that through extensive practice subjects learn the correct prior for the task. We found that subjects can rapidly learn the mean of a new prior, while the variance is learned more slowly and with a variable learning rate. In addition, we found that a Bayesian inference model could predict the time course of the observed learning while offering an intuitive explanation for the findings. The evidence suggests the nervous system continuously updates its priors to enable efficient behavior.
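A minimal sketch of the kind of updating involved, using a conjugate Gaussian model as an illustrative stand-in for the authors' inference model: after a switch in the target distribution, the belief about the prior's mean moves quickly toward the new mean, with the update gain shrinking as uncertainty falls.

```python
import numpy as np

rng = np.random.default_rng(2)
true_mu, true_sd = 4.0, 1.0      # new target distribution after the switch (illustrative)
obs_sd = 0.5                     # sensory noise (illustrative)
total_var = true_sd**2 + obs_sd**2

# Conjugate Gaussian update of the believed prior mean, with known variances
mu_hat, var_hat = 0.0, 10.0      # initial belief: wrong mean, high uncertainty
mus = []
for x in rng.normal(true_mu, np.sqrt(total_var), size=200):
    k = var_hat / (var_hat + total_var)   # Kalman-style gain: big while uncertain
    mu_hat += k * (x - mu_hat)            # mean updates fast early on
    var_hat *= (1.0 - k)                  # uncertainty shrinks with each observation
    mus.append(mu_hat)
```

The gain k is near 1 on the first trials and decays toward total_var / n, which mirrors the observed pattern of rapid mean learning followed by slower refinement.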
Novice and expert teachers' conceptions of learners' prior knowledge
Meyer, Helen
2004-11-01
This study presents comparative case studies of preservice and first-year teachers' and expert teachers' conceptions of the concept of prior knowledge. Kelly's (The Psychology of Personal Constructs, New York: W.W. Norton, 1955) theory of personal constructs, as discussed by Akerson, Flick, and Lederman (Journal of Research in Science Teaching, 2000, 37, 363-385) in relation to prior knowledge, underpins the study. Six teachers were selected to participate in the case studies based upon their level of experience teaching science and their willingness to take part. The comparative case studies of the novice and expert teachers provide insights into (a) how novice and expert teachers understand the concept of prior knowledge and (b) how they use this knowledge to make instructional decisions. Data collection consisted of interviews, classroom observations, and document analysis. Findings suggest that novice teachers hold conceptions of prior knowledge, and of its role in instruction, that are insufficient to effectively implement constructivist teaching practices, whereas expert teachers hold a complex conception of prior knowledge and make use of their students' prior knowledge in significant ways during instruction. A second finding was an apparent mismatch between the novice teachers' beliefs about their urban students' life experiences and prior knowledge and the wealth of knowledge the expert teachers found to draw upon.
Relativistic Linear Restoring Force
Clark, D.; Franklin, J.; Mann, N.
2012-01-01
We consider two different forms for a relativistic version of a linear restoring force. The pair comes from taking Hooke's law to be the force appearing on the right-hand side of the relativistic expressions: dp/dt or dp/dτ. Either formulation recovers Hooke's law in the non-relativistic limit. In addition to these two forces, we…
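The two candidate equations of motion can be written out explicitly; the following is a sketch consistent with the abstract's description, for one-dimensional motion with relativistic momentum p = γmv:

```latex
% Hooke's law inserted on the right-hand side of the two relativistic forms
\frac{dp}{dt} = -kx, \qquad \frac{dp}{d\tau} = -kx,
\qquad p = \gamma m v, \quad \gamma = \left(1 - v^2/c^2\right)^{-1/2}.
% Since dt = \gamma \, d\tau, the second form is equivalent to
\gamma \, \frac{dp}{dt} = -kx.
% In the non-relativistic limit v \ll c we have \gamma \to 1 and p \to mv,
% so both forms reduce to Hooke's law m\ddot{x} = -kx.
```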
A Note on k-Limited Maximum Base
Yang Ruishun; Yang Xiaowei
2006-01-01
The problem of the k-limited maximum base was specialized into two particular problems; that is, the subset D of the k-limited maximum base problem was taken to be an independent set and a circuit of the matroid, respectively. It was proved that in these circumstances the collections of k-limited bases satisfy the base axioms. A new matroid is thereby determined, and the problem of the k-limited maximum base is transformed into the problem of finding a maximum base of this new matroid. For these two special problems, two algorithms, which in essence are greedy algorithms on the original matroid, were presented. They were proved to be correct and more efficient, in terms of algorithmic complexity, than the algorithm presented by Ma Zhongfan.
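The greedy principle these algorithms rely on can be sketched with an abstract independence oracle. The example below uses a partition matroid encoding a "k-limited" constraint (at most k elements taken from the subset D, at most m from its complement); this is an illustrative stand-in for the paper's construction, not a reproduction of it.

```python
def greedy_max_base(elements, weight, independent):
    """Matroid greedy: scan by decreasing weight, keep what stays independent."""
    base = []
    for e in sorted(elements, key=weight, reverse=True):
        if independent(base + [e]):
            base.append(e)
    return base

def k_limited(D, k, m):
    """Partition-matroid oracle: at most k elements from D, at most m from its complement."""
    def independent(s):
        in_d = sum(1 for e in s if e in D)
        return in_d <= k and len(s) - in_d <= m
    return independent

weights = {"a": 5, "b": 4, "c": 3, "d": 2, "e": 1}
base = greedy_max_base(weights, weights.get, k_limited({"a", "b", "c"}, 1, 2))
```

Greedy is exact here precisely because the constrained collections form a matroid, which is the property the paper establishes for its two special cases.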